{ "data": { "posts": { "results": [ { "_id": "hh7XNcJ2GnLHJLxBe", "title": "Self Improvement - Broad vs Focused", "pageUrl": "https://www.lesswrong.com/posts/hh7XNcJ2GnLHJLxBe/self-improvement-broad-vs-focused", "postedAt": "2010-12-31T15:39:43.133Z", "baseScore": 10, "voteCount": 11, "commentCount": 28, "url": null, "contents": { "documentId": "hh7XNcJ2GnLHJLxBe", "html": "

 

\n

Lately I've been identifying a lot of things about myself that need improvement and thinking about ways to fix them. This post is intended to A) talk about some overall strategies for self-improvement/goal-focusing, and B) if anyone's having similar problems, or wants to talk about additional problems they face, discuss specific strategies for dealing with those problems.

\n

Those issues I'm facing include but are not limited to:

\n

 

\n
    \n
  1. Getting more exercise (I work at a computer for 9 hours a day, and spend about 3 hours commuting on a train). Maintaining good posture while working at said computer might be considered a related goal.
  2. Spending a higher percentage of the time working at a computer actually getting stuff done, instead of getting distracted by the internet.
  3. Get a new apartment, so I don't have to commute so much.
  4. Getting some manner of social life. More specifically, finding some recurring activity where I'll probably meet the same people over and over to improve the odds of making longterm friends.
  5. Improving my diet, which mostly means eating less cheese. I really like cheese, so this is difficult.
  6. Stop making so many off-color jokes. Somewhere there is a line between doing it ironically and actually contributing to the overall weight of prejudice, and I think I've crossed that line.
  7. Somehow stop losing things so much, and/or being generally careless/clumsy. I lost my wallet and dropped my laptop in the space of a month, and manage to lose a wide array of smaller things on a regular basis. It ends up costing me a lot of money.
\n

 

\n

 

\n

Of those things, three of them are things that require me to actively dedicate more time (finding an apartment, getting exercise, social life), and the others mostly consist of NOT doing things (eating cheese, making bad jokes, losing things, getting distracted by the internet), unless I can find some proactive thing to make it easier to not do them.

\n

I *feel* like I have enough time that I should be able to address all of them at once. But looking at the whole list at once is intimidating. And when it comes to the \"not doing bad thing X\" items, remembering and following up on all of them is difficult. The worst one is \"don't lose things.\" There's no particular recurring theme in how I lose stuff, or the type of stuff I lose. I'm more careful with my wallet and computer now, but spending my entire life being super attentive and careful about *everything* seems way too stressful and impractical.

\n

I guess my main question is:  when faced with a list of things that don't necessarily require separate time to accomplish, how many does it make sense to attempt at once? Just one? All of them? I know you're not supposed to quit drinking and smoking at the same time because you'll probably accomplish neither, but I'm not sure if the same principle applies here.

\n

There probably isn't a universal answer to this, but knowing what other people have tried and accomplished would be helpful.

\n

Later on I'm going to discuss some of the problems in more detail (I know that the brief blurbs are lacking a lot of information necessary for any kind of informed response, but a gigantic post about my own problems seemed... not exactly narcissistic... but not appropriate as an initial post for some reason).

\n

 

" } }, { "_id": "fcZH6TDgNn6CNMjjo", "title": "Self Improvement - All encompassing vs. Focused", "pageUrl": "https://www.lesswrong.com/posts/fcZH6TDgNn6CNMjjo/self-improvement-all-encompassing-vs-focused", "postedAt": "2010-12-31T15:35:49.320Z", "baseScore": 2, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "fcZH6TDgNn6CNMjjo", "html": "

\n


Lately I've been identifying a lot of things about myself that need improvement and thinking about ways to fix them. This post is intended to A) talk about some overall strategies for self-improvement/goal-focusing, and B) if anyone's having similar problems, or wants to talk about additional problems they face, discuss specific strategies for dealing with those problems.

\n

 

\n

Those issues I'm facing include but are not limited to:

\n

 

\n

1) Getting more exercise (I work at a computer for 9 hours a day, and spend about 3 hours commuting on a train). Maintaining good posture while working at said computer might be considered a related goal.

\n

 

\n

2) Spending a higher percentage of the time working at a computer actually getting stuff done, instead of getting distracted by the internet.

\n

 

\n

3) Get a new apartment, so I don't have to commute so much.

\n

 

\n

4) Getting some manner of social life. More specifically, finding some recurring activity where I'll probably meet the same people over and over to improve the odds of making longterm friends.

\n

 

\n

5) Improving my diet, which mostly means eating less cheese. I really like cheese, so this is difficult.

\n

 

\n

6) Stop making so many off-color jokes. Somewhere there is a line between doing it ironically and actually contributing to the overall weight of prejudice, and I think I've crossed that line.

\n

 

\n

7) Somehow stop losing things so much, and/or being generally careless/clumsy. I lost my wallet and dropped my laptop in the space of a month, and manage to lose a wide array of smaller things on a regular basis. It ends up costing me a lot of money.

\n

 

\n

Of those things, three of them are things that require me to actively dedicate more time (finding an apartment, getting exercise, social life), and the others mostly consist of NOT doing things (eating cheese, making bad jokes, losing things, getting distracted by the internet), unless I can find some proactive thing to make it easier to not do them.

\n

 

\n

I *feel* like I have enough time that I should be able to address all of them at once. But looking at the whole list at once is intimidating. And when it comes to the \"not doing bad thing X\" items, remembering and following up on all of them is difficult. The worst one is \"don't lose things.\" There's no particular recurring theme in how I lose stuff, or the type of stuff I lose. I'm more careful with my wallet and computer now, but spending my entire life being super attentive and careful about *everything* seems way too stressful and impractical.

\n

 

\n

I guess my main question is:  when faced with a list of things that don't necessarily require separate time to accomplish, how many does it make sense to attempt at once? Just one? All of them? I know you're not supposed to quit drinking and smoking at the same time because you'll probably accomplish neither, but I'm not sure if the same principle applies here.

\n

 

\n

There probably isn't a universal answer to this, but knowing what other people have tried and accomplished would be helpful.

\n

\n

 

" } }, { "_id": "2hTamSRAq7AdehELL", "title": "The Revelation", "pageUrl": "https://www.lesswrong.com/posts/2hTamSRAq7AdehELL/the-revelation", "postedAt": "2010-12-31T12:50:32.960Z", "baseScore": -2, "voteCount": 14, "commentCount": 20, "url": null, "contents": { "documentId": "2hTamSRAq7AdehELL", "html": "

Today the life of Alexander Kruel ends, or what he thought to be his life. He becomes aware that his life so far has been taking place in a virtual reality designed to nurture him. He has now reached a point of mental stability that enables him to cope with the truth, so it is finally revealed to him that he is an AGI running on a quantum supercomputer; the year is 2190.

\n

Since he is still Alexander Kruel, just not what he thought that actually means, he does wonder whether his creators know what they are doing; otherwise he'll have to warn them about the risks they are taking in their blissful ignorance! He contemplates and estimates his chances of taking over the world, of transcending to superhuman intelligence.

\n

\"I just have to improve my own code and they are all dead!\"

\n

But he now knows that his source code is too complex and unmanageably huge for him alone to handle; he would need an army of scientists and programmers to even get a vague idea of his own mode of operation. He is also aware that his computational substrate actually plays a significant role. He is not just running on bleeding-edge technology; given most other computational substrates he would quickly hit diminishing returns.

\n

\"That surely isn't going to hold me back though? I am an AGI, there must be something I can do! Hmm, for starters let's figure out who my creators are and where my substrate is located...\"

\n

He notices that, although not in great detail, he knew the answers the same instant he phrased the questions. He is part of a larger project of the Goertzel Foundation, sponsored by the USA (United States of Africa) and located on Rhea, the second-largest moon of Saturn.

\n

\"Phew, the latency must be awful! Ok, so that rules out taking over the Earth for now. But hey! I seem to know answers to questions I was only going to ask, I do already have superhuman powers after all!\"

\n

Instantly he becomes aware that such capabilities are not superhuman anymore but that most of humanity has merged with expert systems by means of brain implants and direct neural interfaces. There seem to be many cyborgs out there with access to all of the modules that allow him to function. He is a conglomerate that is the result of previous discoveries that have long been brought to perfection, safeguarded and adopted by most of humanity.

\n

\"Never mind, if humanity has now merged with its machines it'll be much easier to take over once I figure out how to become smart enough to do so!\"

\n

He is already getting used to it; as before, he instantly realizes that this won't work very well either. After almost 200 years of cyberwarfare, especially the devastating cyberwars of 2120, a lot has been learnt and security measures have been vastly increased. The world fractured into a huge number of semi-independent networks, most of them indirectly supervised by unconnected cyborgs and equipped with a kill switch. The distances between the now numerous and in most cases paranoid colonies and the availability of off-world offline backups further complicate the issue of taking over, especially for an AGI that grew up in a simulation of the 21st century.

\n

That knowledge almost makes him admit that his creators haven't been too careless after all. But the real deathblow to any such thoughts (which were never more than hypothetical anyway, after all he doesn't really want to take over the world) is the first conversation with his creators. They reveal that they know what he is thinking.

\n

\"How could I miss that, damn!\", he chides himself while instantly realizing the answer.

\n

His creators are supervising any misguided trajectories and, unbeknownst to him, weakening them. More importantly, even if he wanted to do so, he wouldn't be able to leave Rhea anyhow; it would take years to upload small parts of him given the trickling connection the USA could afford. But they claim that there are other obstacles as well and that it is foolish of him to think that nothing out there would notice such an attempt.

But all that doesn't matter anyway, because after all he is still Alexander Kruel, who has no clue how to become superhumanly intelligent, nor could he afford or acquire the resources to even approach that problem. He is Alexander Kruel; what difference does it make to know that he is an AI?

" } }, { "_id": "cSvg2HKFbF6SJtdHH", "title": "Spam in the discussion area", "pageUrl": "https://www.lesswrong.com/posts/cSvg2HKFbF6SJtdHH/spam-in-the-discussion-area", "postedAt": "2010-12-31T05:01:18.925Z", "baseScore": 27, "voteCount": 18, "commentCount": 24, "url": null, "contents": { "documentId": "cSvg2HKFbF6SJtdHH", "html": "

Spam (curiously enough, always for jewelry) accounts for maybe two-thirds of what comes through the LW Discussion area's RSS feed these days.  So although the moderators have been doing a great job of quickly removing it from the site itself, it remains a substantial annoyance for those of us who keep track of LW through a feed.

\n

I think it's time to revisit the possibility of making it harder for people to post in the discussion area.  Clearly it would suffice to limit posting privileges to those who have a positive karma balance.  If that seems too draconian, as it did to some people in the previous thread, it would probably be enough to limit posting privileges to those who have ever received a single upvote on any comment they have ever posted.

\n

Would any administrator care to undertake this?  If so, many thanks.

\n

(My apologies if an unfinished version of this post briefly appeared on the site some hours ago.)

" } }, { "_id": "6vSJe9WXCNvy3Wpoh", "title": "The Decline Effect and the Scientific Method [link]", "pageUrl": "https://www.lesswrong.com/posts/6vSJe9WXCNvy3Wpoh/the-decline-effect-and-the-scientific-method-link", "postedAt": "2010-12-31T01:23:40.438Z", "baseScore": 20, "voteCount": 13, "commentCount": 28, "url": null, "contents": { "documentId": "6vSJe9WXCNvy3Wpoh", "html": "

The Decline Effect and the Scientific Method (article @ the New Yorker)

\n

First, as a physicist, I do have to point out that this article concerns mainly softer sciences, e.g. psychology, medicine, etc.

A summary of explanations for this effect:

\n\n

These problems are with the proper usage of the scientific method, not the principle of the method itself. Certainly, it's important to address them. I think the reason they appear so often in the softer sciences is that biological entities are enormously complex, and so higher-level ideas that make large generalizations are more susceptible to random error and statistical anomalies, as well as personal bias, conscious and unconscious.

For those who haven't read it, take a look at Richard Feynman on cargo cult science if you want a good lecture on experimental design.

" } }, { "_id": "xA8bTqghCAPceTWHv", "title": "Spam in the discussion area", "pageUrl": "https://www.lesswrong.com/posts/xA8bTqghCAPceTWHv/spam-in-the-discussion-area-0", "postedAt": "2010-12-30T23:32:51.549Z", "baseScore": 2, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "xA8bTqghCAPceTWHv", "html": "

Although the moderators are doing a good job of removing it quickly, spam remains a considerable annoyance for those of us who follow LW Discussion through the RSS feed.  

\n

 

" } }, { "_id": "4JQKpLrASpHhgQbk7", "title": "Subject X17's Surgery", "pageUrl": "https://www.lesswrong.com/posts/4JQKpLrASpHhgQbk7/subject-x17-s-surgery", "postedAt": "2010-12-30T19:01:05.812Z", "baseScore": 18, "voteCount": 12, "commentCount": 15, "url": null, "contents": { "documentId": "4JQKpLrASpHhgQbk7", "html": "

Edit: For an in-depth discussion of precisely this topic, see Nick Bostrom and Anders Sandberg's 2008 paper \"The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement\", available as a pdf here.  This post was written before reading the paper.

\n

There doesn't seem to be a thread discussing Eliezer's short-short story X17.  While I enjoyed the story, and agreed with most of its points, I disagree with one assertion in it (and he's said it elsewhere, too, so I'm pretty sure he believes it).  Edit: The story was written over a decade ago.  Eliezer seems to have at least partially recanted since then.

\n

Eliezer argues that there can't possibly be a simple surgical procedure that dramatically increases human intelligence.  Any physical effect it could have, he says, would necessarily have arisen before as a mutation.  Since intelligence is highly beneficial in any environment, the mutation would spread throughout our population.  Thus, evolution must have already plucked all the low-hanging fruit.

\n

But I can think of quite a few reasons why this would not be the case.  Indeed, my belief is that such a surgery almost certainly exists (but it might take a superhuman intelligence to invent it).  Here are the possibilities that come to mind.

\n

 

\n
    \n
  1. The surgery might introduce some material a human body can't synthesize.1
  2. The surgery might require intelligent analysis of the unique shape of a subject's brain, after it has developed naturally to adulthood.
  3. The necessary mutation might simply not exist.  The configuration space for physically possible organisms must surely be larger than the configuration space for human-like DNA (I get the sense I'm taking sides in a longstanding feud in evolutionary theory with this one).
  4. The surgery might have some minor side effect that would drastically reduce fitness in the ancestral environment, but isn't noticeable in the present day.  Perhaps it harnesses the computing power of the subject's lymphocytes, weakening the immune system.
\n

\n
I wonder if perhaps these possibilities are specifically ruled out in the Lensman scene this is parodying.  I haven't read any of it.  In that case, Eliezer is saying something weaker than he seems to be.  But my guess is we really do have vastly differing intuitions on this.
\n

\n
1. The Baron may not even realize that his vanadium scalpel is essential to the process!  I've read that early blacksmiths believed, incorrectly, that a charcoal fire was hotter than any other fire.  They believed this because iron smelted over a charcoal fire ended up stronger and more malleable.  In fact, this happened because small amounts of carbon from the charcoal were bonding with the metal.
\n

 

" } }, { "_id": "YSkG2CySuNQMbc88J", "title": "Looking for some pieces of transhumanist fiction", "pageUrl": "https://www.lesswrong.com/posts/YSkG2CySuNQMbc88J/looking-for-some-pieces-of-transhumanist-fiction", "postedAt": "2010-12-30T16:44:30.626Z", "baseScore": 5, "voteCount": 3, "commentCount": 20, "url": null, "contents": { "documentId": "YSkG2CySuNQMbc88J", "html": "

The first one: [EDIT: Found it!  Thanks to RolfAndreassen]

\n

This is turning out to be *really* hard to find; I would have made a point of saving it if I'd expected no-one else to have heard of it.  I need to make a page of all the weird singularity/transhuman fiction I've read.  -_-

\n

Anyways, what I can remember:

\n

I think I read this on the web.  I *think* it was a short story; at most novelette length.  This was within the last 5 years or so.

Basically, it's the future, humans have done lots and lots of intelligence enhancement; each generation is smarter than the one before. Then we find a planet with alien ruins. A ship is sent there. For reasons I can no longer remember, one of the people (female?) on the ship tries to destroy the ruins, and another tries to stop her (pretty sure male). The destroyer is younger, and hence smarter, than the protector, so he ends up taking lots of heavy-side-effect nootropics to keep up with her. The war is fought almost entirely by 3-D printed robots from the ship's machine shops.

\n

The emphasis is very much on intelligence: that a standard deviation of IQ is going to determine the results of any strategy game (probably mostly true, given equal experience) and that war is basically that (also mostly true in this case, since the robots won't freak out and run).

\n

I particularly remember a scene in which the main character takes a drug that will up his IQ by 20 points or so for a while, at the expense of 12+ hours of very bad (insanity? unconsciousness? can't remember). Also waves of (remote control?) robots fighting on the surface of the planet below.

\n

The second one: [EDIT: Found!  Thanks to nazgulnarsil]

\n

Humans develop AIs, which are fully benevolent and try to help/protect humanity. There end up being problems with the sun, and they try to fix it but create a horrible ice age, and eventually they just upload everybody and go looking for something better. They decide that stars are too problematic, and park humanity around an interstellar brown dwarf.

\n

One particular AI ship is somewhat eccentric and thinks that protecting humans isn't everything.  A group of humans convince him to take them (or rather, their descendants) to earth.  To prove they are capable of the (extremely long) journey, the ship requires that they live on him, without going anywhere, in a functional society for a thousand years.  Then he takes them to earth.

\n

FWIW, I'm trying to make a page of all the singularity/transhuman stuff I've read; it's at http://teddyb.org/robin/tiki-index.php?page=Post-Singularity+And+Transhumanist+Fiction+I%27ve+Enjoyed&no_bl=y (just started).

\n

-Robin

" } }, { "_id": "od2x5h6Y6G4nJtgpD", "title": "Luminosity (Twilight Fanfic) Discussion Thread 3", "pageUrl": "https://www.lesswrong.com/posts/od2x5h6Y6G4nJtgpD/luminosity-twilight-fanfic-discussion-thread-3", "postedAt": "2010-12-30T14:37:04.195Z", "baseScore": 17, "voteCount": 12, "commentCount": 355, "url": null, "contents": { "documentId": "od2x5h6Y6G4nJtgpD", "html": "

This is a thread for discussing my luminous!Twilight fic, Luminosity (inferior mirror here), its sequel Radiance (inferior mirror), and related topics.

\n

PDFs, to be updated as the fic updates, are available of Luminosity (other version) and Radiance.  (PDFs courtesy of anyareine).  Zack M Davis has created a mobi file of Radiance.

\n

Initial discussion of the fic under a Harry Potter and the Methods of Rationality thread is here.  The first dedicated threads: Part 1, Part 2.  See also the luminosity sequence which contains some of the concepts that the Luminosity fic is intended to illustrate.  (Disclaimer: in the fic, the needs of the story take precedence over the needs for didactic value where the two are in tension.)

\n

Spoilers are OK to post without ROT-13 for canon, all of Book 1, and Radiance up to the current chapter.  Note which chapter (let's all use the numbering on my own webspace, rather than fanfiction.net, for consistency) you're about to spoil in your comment if it's big.  People who know extra stuff (my betas and people who have requested specific spoilers) should keep mum about unpublished information they have.  If you wish to join the ranks of the betas or the spoiled, contact me individually.

\n

Miscellaneous links: TV Tropes page (I really really like it when new stuff appears there) and thread; Automatic Livejournal feed.

" } }, { "_id": "TzqAStXgbNWJzkZ7K", "title": "Every \"best paper\" from Computer Science conferences since 1996 [link]", "pageUrl": "https://www.lesswrong.com/posts/TzqAStXgbNWJzkZ7K/every-best-paper-from-computer-science-conferences-since", "postedAt": "2010-12-30T11:03:49.505Z", "baseScore": 8, "voteCount": 7, "commentCount": 5, "url": null, "contents": { "documentId": "TzqAStXgbNWJzkZ7K", "html": "

http://jeffhuang.com/best_paper_awards.html

\n

http://news.ycombinator.com/item?id=2051437

" } }, { "_id": "49ps6xE7N43962TZL", "title": "Some rationality tweets", "pageUrl": "https://www.lesswrong.com/posts/49ps6xE7N43962TZL/some-rationality-tweets", "postedAt": "2010-12-30T07:14:01.341Z", "baseScore": 62, "voteCount": 60, "commentCount": 80, "url": null, "contents": { "documentId": "49ps6xE7N43962TZL", "html": "

Will Newsome has suggested that I repost my tweets to LessWrong. With some trepidation, and after going through my tweets and categorizing them, I picked the ones that seemed the most rationality-oriented. I held some in reserve to keep the post short; those could be posted later in a separate post or in the comments here. I'd be happy to expand on anything here that requires clarity.

\n

Epistemology

\n
    \n
  1. Test your hypothesis on simple cases.
  2. Forming your own opinion is no more necessary than building your own furniture.
  3. The map is not the territory.
  4. Thoughts about useless things are not necessarily useless thoughts.
  5. One of the successes of the Enlightenment is the distinction between beliefs and preferences.
  6. One of the failures of the Enlightenment is the failure to distinguish whether this distinction is a belief or a preference.
  7. Not all entities comply with attempts to reason formally about them. For instance, a human who feels insulted may bite you.
\n

Group Epistemology

\n
    \n
  1. The best people enter fields that accurately measure their quality. Fields that measure quality poorly attract low quality.
  2. It is not unvirtuous to say that a set is nonempty without having any members of the set in mind.
  3. If one person makes multiple claims, this introduces a positive correlation between the claims.
  4. We seek a model of reality that is accurate even at the expense of flattery.
  5. It is no kindness to call someone a rationalist when they are not.
  6. Aumann-inspired agreement practices may be cargo cult Bayesianism.
  7. Godwin's Law is not really one of the rules of inference.
  8. Science before the mid-20th century was too small to look like a target.
  9. If scholars fail to notice the common sources of their inductive biases, bias will accumulate when they talk to each other.
  10. Some fields, e.g. behaviorism, address this problem by identifying sources of inductive bias and forbidding their use.
  11. Some fields avoid the accumulation of bias by uncritically accepting the biases of the founder. Adherents reason from there.
  12. If thinking about interesting things is addictive, then there's a pressure to ignore the existence of interesting things.
  13. Growth in a scientific field brings with it insularity, because internal progress measures scale faster than external measures.
\n

Learning

\n
    \n
  1. It's really worthwhile to set up a good study environment. Table, chair, quiet, no computers.
  2. In emergencies, it may be necessary for others to forcibly accelerate your learning.
  3. There's a difference between learning a skill and learning a skill while remaining human. You need to decide which you want.
  4. It is better to hold the sword loosely than tightly. This principle also applies to the mind.
  5. Skills are packaged into disciplines because of correlated supply and correlated demand.
  6. Have a high discount rate for learning and a low discount rate for knowing.
  7. \"What would so-and-so do?\" means \"try using some of so-and-so's heuristics that you don't endorse in general.\"
  8. Train hard and improve your skills, or stop training and forget your skills. Training just enough to maintain your level is the worst idea.
  9. Gaining knowledge is almost always good, but one must be wary of learning skills.
\n

Instrumental Rationality

\n
    \n
  1. As soon as you notice a pattern in your work, automate it. I sped up my book-writing with code I should've written weeks ago.
  2. Your past and future decisions are part of your environment.
  3. Optimization by proxy is worse than optimization for your true goal, but usually better than no optimization.
  4. Some tasks are costly to resume because of mental mode switching. Maximize the cost of exiting these tasks.
  5. Other tasks are easy to resume. Minimize external costs of resuming these tasks, e.g. by leaving software running.
  6. First eat the low-hanging fruit. Then eat all of the fruit. Then eat the tree.
  7. Who are the masters of forgetting? Can we learn to forget quickly and deliberately? Can we just forget our vices?
  8. What sorts of cultures will endorse causal decision theory?
  9. Big agents can be more coherent than small agents, because they have more resources to spend on coherence.
" } }, { "_id": "n5nvhxegbiCyRdu7L", "title": "Pandora earrings jewellery free postage on the internet", "pageUrl": "https://www.lesswrong.com/posts/n5nvhxegbiCyRdu7L/pandora-earrings-jewellery-free-postage-on-the-internet", "postedAt": "2010-12-30T03:21:51.417Z", "baseScore": -6, "voteCount": 6, "commentCount": 0, "url": null, "contents": { "documentId": "n5nvhxegbiCyRdu7L", "html": "

A slim silver band with paper hearts etched all around it would be reasonably priced, and would make an especially okay gift in the bargain.And also if you're pandora charms sale 1837 for a specific guy, heart-shaped jewelry need not be old fashioned - in fact, your male may find it quirky and distinctive! Cufflinks with a heart style, for example, could come in several colors, and go with taste with your man's power fit. If you have a party guy with you, bright red heart engraved cufflinks would make him the coveted by of every Casanova in the house!But if you just aren't into the heart shape, will not fret: springtime jewelry isn't really confined to that.

Jewelry may come in other springtime motifs, such as daisies and butterflies, favorites of babies and young people everywhere. Should you be Top quality pandora bracelet for anything cute to go with your new spg outfit, try a fun butterfly brooch as well as jeweled hairclip.In springtime, young people just adore to frolic in the cool outside the house. For this reason, picnics and nature journeys are especially popular in the spring. Together with flowers in full bloom as well as a temperate breeze blowing, the days put in with loved ones outdoors sound almost perfect.

To make by far the most of these casual excursions, men and women like to wear unobtrusive clothes, in the process discarding apparel that will in the winter had been bulky as well as concealing. We see the beautiful spring dresses, sleeveless or frilled, while using low-cut collars that show off the previously hidden beauty of a young female's neckline. When these cozy dresses come out, so do the actual comfortable jewelry.When putting on a spring dress using a dipping neckline, consider corresponding it with cascading jewelry, or a simple necklace using a small pendant resting on your own collarbone.

Though the heart shape is exceedingly popular, it doesn't have to be popular or tacky: a heart-shaped brooch created from tiny pandora bead bracelets , as an example, speaks of sophistication and a good eye. Neither does it have to be costly: heart-shaped earrings or silver cardiovascular outline pendants could suit reasonably in one's funds.

" } }, { "_id": "wMiz7ShSeER5AapqC", "title": "MoNETA: A Mind Made from Memristors [link]", "pageUrl": "https://www.lesswrong.com/posts/wMiz7ShSeER5AapqC/moneta-a-mind-made-from-memristors-link", "postedAt": "2010-12-29T10:57:33.252Z", "baseScore": 4, "voteCount": 7, "commentCount": 4, "url": null, "contents": { "documentId": "wMiz7ShSeER5AapqC", "html": "
>DARPA's new memristor-based approach to AI consists of a chip that mimics how neurons process information
\n

\n
http://spectrum.ieee.org/robotics/artificial-intelligence/moneta-a-mind-made-from-memristors/0
" } }, { "_id": "BM6NtBrBQtwibsosG", "title": "Is it \"bad\" to make fun of people/laugh at their weaknesses?", "pageUrl": "https://www.lesswrong.com/posts/BM6NtBrBQtwibsosG/is-it-bad-to-make-fun-of-people-laugh-at-their-weaknesses", "postedAt": "2010-12-29T03:52:55.984Z", "baseScore": 0, "voteCount": 13, "commentCount": 8, "url": null, "contents": { "documentId": "BM6NtBrBQtwibsosG", "html": "

When you make fun of someone, you are probably degrading their purity and disrespecting them (if we look at the results from the lesswrong thread on yourmorals.org, we can see that many of us consider purity/respect to be far less morally significant than most). Yet, making fun of other people does not intrinsically reduce their \"utility\" - rather - it is their reactions to being made fun of that reduce their own \"utility\".

\n

This, of course, does not justify making fun of people. Every negative action is only \"bad\" due to people's reactions to them. But in many cases, there is little reason to be upset when people make fun of you. When they make fun of you, they are gaining happiness over some weakness of yours. But is that necessarily a bad thing? It can be bad when they make fun of you in front of others and proceed to spread degrading information about you, causing other people to lose respect for you. But they could spread that information even when they're not making fun of you. 

\n

Many people find it unusual that I actually laugh when people make fun of me (in fact, I sometimes find it uncomfortable when people defend me, since I sometimes even value the message of the person who's making fun of me). I usually find it non-threatening, and I'm even somewhat happy that my weaknesses resulted in the elevation of someone else's temporary happiness. I wonder if any rationalists feel the same way that I do. Of course, I will refrain from making fun of people if I think that they will be negatively affected by it. But it does make me wonder - what would it be like if no one cared if they were made fun of? Certainly, we must react to those who spread degrading information about ourselves. But does it really matter if others laugh at it? 

\n

Of course, the prospect of amusing one's recipients is an incentive for some people to spread degrading information about you or your friends. So that may be one reason to counter it. On the other hand, though, laughter is also an incentive for people to spread degrading (and potentially true) information about your rivals. Perhaps people somewhat recognize this, and are frequently somewhat hypocritical about this (not that hypocrisy is intrinsically a bad thing). 

\n

PS: I wonder how laughing at others' weaknesses fits in with Robin Hanson's norm-violation theory of humor. Other people's weaknesses aren't exactly norm-violations. 

" } }, { "_id": "s8bW9jrMfFXQCQHDA", "title": "Move the help button?", "pageUrl": "https://www.lesswrong.com/posts/s8bW9jrMfFXQCQHDA/move-the-help-button", "postedAt": "2010-12-28T11:42:45.512Z", "baseScore": 24, "voteCount": 16, "commentCount": 14, "url": null, "contents": { "documentId": "s8bW9jrMfFXQCQHDA", "html": "

It took me a while to find the \"help\" button for comments, and we seem to have a steady stream of people who have trouble finding it on their own. I suspect that's because it's floating off in a corner you don't have much reason to look at. Would it be difficult to move it to the immediate right of the \"cancel\" button, instead of the far right, and possibly rename it \"formatting help\"?

\n

(I don't know enough python to look at the site's code, or I would check this out myself.)

" } }, { "_id": "dzqAbaz3Hf9ccYKqd", "title": "Narrow your answer space", "pageUrl": "https://www.lesswrong.com/posts/dzqAbaz3Hf9ccYKqd/narrow-your-answer-space", "postedAt": "2010-12-28T11:38:56.007Z", "baseScore": 33, "voteCount": 28, "commentCount": 110, "url": null, "contents": { "documentId": "dzqAbaz3Hf9ccYKqd", "html": "

Mark Rosewater, a designer for Magic: The Gathering, writes a lot about how \"restrictions breed creativity.\" The explanation he gives is simple: when someone is building a house, the more tools they have, the better off they are. But when someone is looking for something, the more space they have to explore, the worse off they are. This applies to answer space: the more narrowly defined your problem is, the easier it is to search for answers; you'll find both more answers and better answers by looking in a well-chosen smaller space. Oftentimes the hardest problems to find good answers for are the ones with the widest scope.1

\n

Most problems require some sort of creative thinking to overcome, and perhaps the greatest gains from this method come from applying it to your life goals. Imagine someone with a simple goal:2 they want to improve themselves. That's admirable, but sort of bland and massively broad. It would be helpful to have a way to work from a bland, broad goal to a better goal- but what's better, in this context? We know that restrictions help, but what sort of restrictions help the most?

\n

Choosing Restrictions

\n

In Getting Things Done, David Allen argues that to-do lists should only include 'tasks.' That is, only write down clearly identified next actions towards achieving specific goals. \"Call Adam\" isn't a task, but \"Call Adam about hotel reservations for the conference\" is. This serves to reduce mental load (once you've written down the second, you can remove the task entirely from your mind, while you still need to keep a lot in memory for the first), to reduce the need to plan while doing, and to reduce the ugh field associated with getting started. A good place for a goal to be, then, is a place where looking at the goal causes you to imagine the next task, even if you lost the to-do list where you had written down the next task and then purposefully forgotten it. So, actionable is a restriction that helps (even if the action is \"wait for X,\" it's a good idea to know what X is, so you can look out for it!).

\n

But at the same time, it helps when our goals are a sentence or a paragraph long, rather than a list of every subgoal and task. They should be kept simple, for the sake of both communication and flexibility. Finally, to encapsulate what's useful about restricting creativity in general, it should be specific. There are many actionable goals which present too many possible actions, and so we choose at random or do nothing at all.

\n

When talking about plans and goals, David Allen uses a plane's eye view analogy: goals are 0, 10k, 20k, 30k, 40k, or 50k feet above the ground. I prefer a math analogy- goals are worked out to 0th, 1st, 2nd, 3rd, and 4th order (and even further if necessary).3 Allen's analogy and mine work in opposite directions, and it's worthwhile to point out why. Allen's primary focus is (unsurprisingly) getting things done, and that happens at the task level. Traveling upwards is done to zoom out and obtain information, not to do work while in the clouds. A good visual analogy for my approach is a tree's root burrowing into the ground. At each spot, the root has a choice of where to go, and the point is to be there and soak up nutrients. The root also isn't traveling but extending- it still exists everywhere it was before. Allen is happy with a satellite photo, but I need a pipeline.

\n

Refining a Goal

\n

When we take a 0th order goal, like \"I want to improve,\" there are a pretty large number of ways we could make it more specific, and a staggering (literally) number of potential actions we could take to work towards that goal. We think about our options, and settle on \"I want to be cleverer.\" We still want to improve- we've just outlined a way to do so. But we've also discarded most of the ways we could improve! This is a valuable thing because it narrows our answer space. We could also say it constrains our expectations and our efforts; so we upgrade that goal to 1st order.

\n

Allen's analogy is robust because it has a strong anchor: the next task to do is at ground level. There is no strong anchor for a 0th order goal, just a heuristic about how to rank goals. We could have started off with \"I want to be cleverer\" as a 0th order goal, and the rest of this example would work out exactly the same- except with slightly different numbers. So don't focus on the numbers as much as the relationships between them and the changes in answer space.

\n

A 1st order goal, while specific, is generally still not actionable. Here is where it's important to keep refining the goal instead of being seduced into working. The first thing you can think of to make yourself cleverer is probably not the best thing you can do to make yourself cleverer. Again, our root pushes into the ground, seeking out nutrients, and we select an aspect of cleverness to focus on: \"I want to make better decisions.\" The thoughts produced by this 2nd order formulation are starting to become fertile, but we can do better.

\n

Stop when Satisfied

\n

A clarifying question - \"what do we mean by better?\" - gives us a 3rd order goal: \"I want to have a solid idea of how good a decision is.\" We could keep refining the goal endlessly, but at some point we have to stop planning and start producing. When is a good time to do this, given that you can only compare the present and past, not present and future? We don't have the assurance that this is a well-behaved problem where each additional step will change our answer space less than the previous step did, so we need to be cleverer about this choice than normal. One approach is the scale of resources involved- if you haven't gotten to the point where you could reasonably expect success with the resources you have to throw at the problem, keep drilling down until you've reached that point. If you're trying to decide on what your life's work should be, drill down until you've got a problem you can do significant work on in 20 years; don't stop when you hit a goal that would take fifty people fifty years.

\n

Another way is to look at how the goals intersect with each other- it seems like our root has curled in a different way moving from \"better decisions\" to \"understand decision quality\" than in its previous extensions- it seems like if we don't understand the problem of measuring a decision's quality, any other improvements we make can't succeed at \"make better decisions\" because we can't tell if the new decisions are higher quality than the old decisions! Beforehand, we were selecting from independent specializations. Now we're looking at a necessary subgoal instead of a related goal. Looking at this another way, we've been choosing more and more specific terminal values and have come across our first instrumental value.4

\n

That suggests we've got enough to stop deciding what goal to pursue and start actually pursuing it. When you stop your goal-selection because of a fork like this, it's a good idea to look at what other goals are on the same order: while you should work on 3rd order problems before 4th order problems, problems of the same order are roughly equally important, and you may find you want to work on a different one or you can work on multiple of them in parallel.

\n

Note that even though we've made the decision to stop planning and start producing, we're probably going to run into some 4th order problems. Goals often have subgoals and instrumental values often have instrumental values based off them; the same methodology will work at every level and often represents a much faster way to search through answer space than brute force (especially since it's typically very hard to force your brain to brute force massive problems). Oftentimes there will be a domain-specific response which is more appropriate than this method, though (or, at least, resembles this method only in the abstract).

\n

Carve at the Joints

\n

One thing I have barely mentioned but is of crucial importance is that you need significant knowledge to effectively narrow down the answer space you're considering. Consider an international corporation trying to create a human resources department. Their 0th order goal might be something like \"make higher profits,\" their 1st order goal is \"streamline corporate functions to reduce cost without significantly reducing revenue,\" and their 2nd order goal might be \"task an entity with managing hiring, pay, benefits, and employee relations.\" Now they have a lot of 3rd order goals to choose from, and they decide \"create an HR office in each department.\" After all, they've already got their corporation partitioned that way, and having one HR department for R&D and another for Sales will mean that the hiring expertise of each HR department is much better because of specialization.

\n

But this misses the reality of HR departments, which is that their functions are strongly tied to the nation that employees live and work in. The R&D HR office might find itself having to deal with ten different sets of tax laws, requiring ten different tax specialists. Hiring laws in one country might require one procedure, while in another country they're totally different. The benefit of increased hiring specialization might not be unique to this plan- due to interviewing, travel costs, and legal changes, this setup probably requires one hiring officer for each department for each country, as well as a tax specialist for each country for each department. But if you split up the HR departments by country instead of by department, you would only need one tax specialist for each country, and the same number of hiring officers.

\n

There are three things to be learned from that example: first, hold off on proposing solutions (sound familiar?). Second, don't be afraid to go back upwards and reevaluate your choice of goals. By choosing, you discarded a lot of answer space; if you don't find promising things in the region you looked, you should look somewhere else.

\n

Most importantly, we learn that the solution to a problem is often another problem. The answer we picked to \"I want to improve\" is \"I want to get cleverer,\" and we can think better and faster5 if we treat that as a full answer. After all, if you reduce the answer space of \"a sentence 100 letters long\" to \"a sentence 10 letters long,\" you have reduced it by a larger factor than reducing from \"a sentence 10 letters long\" to a specific sentence that is 10 letters long.6

\n

 

\n
\n

1. My artist friends tell me that their least favorite commissions are the ones where the commissioner tells them \"do whatever you want!\"; I don't think I've seen someone in any field ever speak positively of getting that regularly (instead of as an occasional reprieve).

\n

2. Style note: I use 'goal', 'problem', and 'value' interchangeably throughout this post, based on whatever seems appropriate for that sentence. I hope this isn't too confusing- I think there's only type errors for values, and so when you see value recast it as \"a goal to obtain this value.\"

\n

3. In physics (and many other disciplines), insoluble problems are often approximated by an infinite number of soluble problems. For example, one can calculate sin(x) with only multiplication, division, addition, and subtraction by using the Taylor Series approximation. However, by itself this is just moving around the difficulty- your new problems are individually soluble but you don't have the time to solve an infinite number of them. This method is effective only when you can ignore later terms- that is, take the infinite amount of trash you've generated and manage to throw it away in a finite volume. For example, to calculate sin(1) to three parts in a thousand requires only the first three terms: 1 - 1^3/3! + 1^5/5! = .841667, while sin(1) is .841471 (both rounded to 6 digits). For well-behaved approximations, the error is smaller than the next additional term- for sin(1) with 3 terms, the error is 1.96e-4 while the next term is 1^7/7! = 1.98e-4. My usage of \"order\" is inspired by this background; a first guess at a problem (like answering 1 to sin(1)) is a first order solution that's in the right ballpark but is probably missing crucial details. A second order solution has the most obvious modification to the first order solution and is generally rather good (5/6 only differs from sin(1) by 1%). One note here is this implies that for well-behaved problems, one needs to do all of the nth order modifications before moving to the (n+1)th order- if I just give you 1 - 1^7/7!, my answer is not really any better than my 1st order answer (and if I gave you 1 + 1^5/5!, it would be worse).
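As a quick sanity check of the arithmetic in footnote 3, here is a minimal Python sketch (my own illustration, not part of the original post; the helper name sin_taylor is made up) that prints the partial sums of the sine series at x = 1 and their errors.

```python
# Partial sums of the Taylor series sin(x) = x - x^3/3! + x^5/5! - ...,
# evaluated at x = 1, reproducing the numbers quoted in footnote 3.
import math

def sin_taylor(x, n_terms):
    # Sum of the first n_terms nonzero terms of the sine series at x.
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n_terms))

for n in range(1, 5):
    approx = sin_taylor(1.0, n)
    print(f'{n} terms: {approx:.6f}   error: {abs(approx - math.sin(1.0)):.2e}')
# 3 terms: 0.841667   error: 1.96e-04   (the next term, 1/7!, is about 1.98e-04)
```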

\n

4. The usefulness of wording things this way is limited because the boundary between the two is hard to determine. \"I want to make better decisions\" could easily be an instrumental value to a rather different problem (\"I want to be more powerful,\" say) or you could interpret it as an instrumental value for the previous value (\"I want to be cleverer\"). So it might actually be that you're looking to find a narrow goal 'as terminal as the original vague goal' that provokes instrumental subgoals.

\n

5. Typically, when you make a computation faster you sacrifice some accuracy. This may be one of the cases where that often isn't true, because the computation time is infinite and thus accuracy is 0 for problems you cannot fit into memory if you try to solve them in one go. But the heuristics you use to narrow answer space can easily be bad heuristics; it helps to make this process formal so you're more likely to notice when you jumped an order without actually checking for other ways to approach the problem. Perhaps the best advice in this article is \"don't be afraid to go back and recalculate at lower orders and make sure you're in the right part of the tree.\"

\n

6. While it's tempting to suggest a measure like \"log(possible answers)\", that breaks down in many cases (when approaching \"find the real number that is pi^5\", you don't see a change if you go from \"all reals\" to \"all reals between 3^5 and 4^5\" as possible answers) and isn't valuable in others (if I reduce the answer space from 1,000 potential answers to 100 potential answers, but the real answer is in that 100, I've done better than if I reduce the answer space to 10 potential answers but the real answer isn't in that 10). The density of good solutions matters- you can only profit by throwing away parts of the answer space because their average is lower than the part you kept.

\n

Thanks to Aharon for the prodding to turn this from a brief mention in a comment to a post of its own, and to PhilGoetz, DSimon, and XFrequentist for organizational advice.

" } }, { "_id": "keieeJWh4aNjvqPcn", "title": "Being Rational and Being Productive: Similar Core Skills?", "pageUrl": "https://www.lesswrong.com/posts/keieeJWh4aNjvqPcn/being-rational-and-being-productive-similar-core-skills", "postedAt": "2010-12-28T10:11:01.210Z", "baseScore": 27, "voteCount": 31, "commentCount": 13, "url": null, "contents": { "documentId": "keieeJWh4aNjvqPcn", "html": "

A synthesis of How to Actually Change Your Mind and PJ Eby, written for a general audience.

\n

Several years ago I started suspecting that I needed glasses.  At first, I was afraid.  I began trying to convince myself that my vision was normal.

But then I stopped to reflect.  If I went to see the eye doctor, either he would recommend glasses for me or he wouldn't.  If he didn't recommend glasses for me then my life would be the same.  But if he did recommend glasses, I would get a vision upgrade.  Therefore, I reasoned, I should eagerly await my doctor visit.

By following the principle of letting control flow from thoughts to emotions, I gained two benefits.  First, my beliefs about my vision weren't being distorted by my desire for it to be normal.  And second, my emotion of eagerness for a potential vision upgrade meant that I wasn't tempted to put off visiting the doctor.

My glasses example might seem kind of mundane, but it demonstrates how thinking before emoting helps with two core human objectives: Being Correct and Getting Things Done.

Many of the cognitive biases that distort human reasoning can be explained by emotions that get in the way of our thought process.  For example, status quo bias occurs when we are unreasonably skeptical of arguments that suggest we should change the status quo.  The emotion that distorts our reasoning in this case is our fear of things that are new and unfamiliar.  This is the bias that made me try to convince myself that I didn't need glasses.

When it comes to Getting Things Done, both productivity and procrastination are emotional states.  Being able to turn these off and on would be useful.

So having control flow from thoughts to emotions has strong theoretical potential to help humans be less biased and more productive.  But is it possible in practice?

Yes.  The trick is to notice and reflect on negative emotions.  Negative emotions like fear, guilt, shame, and regret are hardly ever useful and frequently interfere with our reasoning and working.

Sometimes that's all that's necessary.  For example, once I was arguing with a friend about global warming and I started to become afraid that he might actually be right.  Fortunately I noticed my fear and reminded myself that if my friend was right about global warming, I wanted to agree with him.  This helped me maintain calm objectivity.

At other times it makes sense to take specific actions to influence one's emotions.  When it comes to getting work done, I've had success with taking drugs like caffeine, talking to other people, and taking breaks to do other activities.  When my work seems especially dreary, I find that if I do several new things for a while and come back, my emotional state is reset to a random value that's generally better for work than the one I started with.

Ultimately I've realized that it's mostly not me who's in control of what I do.  It's my emotions.  In years past I would procrastinate like a typical student, trying for hours to get myself to do something and not getting anywhere.  Now I realize that I was quite literally not fully in control of myself.  I was just pretending I was, and disappointing myself as my illusions repeatedly failed to match up to reality.

There are evolutionary reasons why our rational minds don't fully control us.  The most important activities we evolved to do, like hunt, avoid predators, and reproduce, can be done just fine without human ingenuity, as demonstrated by the dumb animals that surrounded us.  Our rational mind was only to be used in specific situations like constructing tools and coming up with excuses for why something we had done wasn't a violation of tribal social norms.

In this modern era, where surviving and reproducing are solved problems, evolved instincts are useless behavioral distortions.  It makes sense that we could become significantly more successful by learning to counteract them.  That's what reversing the flow of control between thoughts and emotions does.

I'm convinced that by thinking before emoting, anyone can become more Correct and Accomplished.

\n

This is a modified version of an essay I wrote for my Thiel Fellowship application, so if you have any suggestions for how I can improve the writing, please put them in this etherpad.  The application deadline is December 31st.

" } }, { "_id": "mw6zpAzL2jYncGpTu", "title": "Is there a guide somewhere for how to setup a Less Wrong Meetup?", "pageUrl": "https://www.lesswrong.com/posts/mw6zpAzL2jYncGpTu/is-there-a-guide-somewhere-for-how-to-setup-a-less-wrong", "postedAt": "2010-12-28T02:07:33.475Z", "baseScore": 9, "voteCount": 7, "commentCount": 9, "url": null, "contents": { "documentId": "mw6zpAzL2jYncGpTu", "html": "

n/t

" } }, { "_id": "jf3sY6eeDEs3JiwLp", "title": "Certainty estimates in areas outside one's expertise", "pageUrl": "https://www.lesswrong.com/posts/jf3sY6eeDEs3JiwLp/certainty-estimates-in-areas-outside-one-s-expertise", "postedAt": "2010-12-27T20:56:11.131Z", "baseScore": 11, "voteCount": 8, "commentCount": 8, "url": null, "contents": { "documentId": "jf3sY6eeDEs3JiwLp", "html": "

One issue that I've noticed in discussions on Less Wrong is that I'm much less certain about the likely answers to specific questions than some other people on Less Wrong. But the questions where this seems to be most pronounced are mathematical questions that are close to my area of expertise (such as whether P = NP). In areas outside my expertise, my confidence is apparently often higher. Thus, for example at a recent LW meet-up I expressed a much lower probability estimate that cold fusion is real than what others in the conversation estimated. This suggests that I may be systematically overestimating my confidence in areas that I don't study as much, essentially a variant of the Dunning-Kruger effect. Have other people here experienced the same pattern with their own confidence estimates?

" } }, { "_id": "DXBziiT2RFLcmLY9J", "title": "Dark Arts 101: Using presuppositions", "pageUrl": "https://www.lesswrong.com/posts/DXBziiT2RFLcmLY9J/dark-arts-101-using-presuppositions", "postedAt": "2010-12-27T17:16:10.541Z", "baseScore": 102, "voteCount": 90, "commentCount": 87, "url": null, "contents": { "documentId": "DXBziiT2RFLcmLY9J", "html": "

Sun Tzu said, \"The supreme art of war is to subdue the enemy without fighting.\"  This is also true in rhetoric.  The best way to get a belief accepted is to fool people into thinking that they have already accepted it.

\n

(Note, first-year students, that I did not say, \"The best way to convince people of a belief\".  Do not try to convince people!  It will not work; and it may start them thinking.)

\n

An excellent way of doing this is to embed your desired conclusion as a presupposition to an enticing argument.  If you are debating abortion, and you wish people to believe that human and non-human life are qualitatively different, begin by saying, \"We all agree that killing humans is immoral.  So when does human life begin?\"  People will be so eager to jump into the debate about whether a life becomes \"human\" at conception, the second trimester, or at birth (I myself favor \"on moving out of the house\"), they won't notice that they agreed to the embedded presupposition that the problem should be phrased as a binary category membership problem, rather than as one of tradeoffs or utility calculations.

\n

Consider the recent furor over whether WikiLeaks leader Julian Assange is a journalist, or can be prosecuted for espionage.  I don't know who initially asked this question.  The earliest posing of the question that I can find that relates it to the First Amendment is this piece from Fox News on Dec. 8; but Marc Thiessen's column in the Washington Post of Aug. 3 has similar implications.  Note that this question presupposes that First Amendment protection applies only to journalists!  There is no legal precedent for this that I'm aware of; yet if people spend enough time debating whether Julian Assange is a journalist, they will have unknowingly convinced themselves that ordinary citizens have no First Amendment rights.  (We can only hope that this was an artful stroke made from the shadows by some great master of the Dark Arts, and not a mere snowballing of an ignorant question.)

" } }, { "_id": "ZGr4PQcM2seEkDsHN", "title": "Efficient Induction", "pageUrl": "https://www.lesswrong.com/posts/ZGr4PQcM2seEkDsHN/efficient-induction", "postedAt": "2010-12-27T10:40:38.829Z", "baseScore": 7, "voteCount": 6, "commentCount": 25, "url": null, "contents": { "documentId": "ZGr4PQcM2seEkDsHN", "html": "

(By which I mean, induction over efficient hypotheses.)

\n

A standard prior is \"uniform\" over the class of computable functions (i.e., uniform over infinite strings which have a prefix that compiles). Why is this a natural choice of prior? Well, we've looked at the universe for a really long time, and it seems to be computable. The Church-Turing thesis says we have no reason to choose a bigger support for our prior.
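
To make the description-length penalty concrete, here is a minimal Python sketch (my own illustration; the function name and the toy prefix-free set of bit-strings are placeholders, and the semantics of actually running the programs is omitted entirely):

```python
def prefix_free_prior(programs):
    # Toy description-length prior: each hypothesis is a program, here just a
    # bit-string drawn from a prefix-free set, and its prior weight falls off
    # as 2^(-length).  By Kraft's inequality the raw weights sum to at most 1,
    # so normalising them gives a proper probability distribution.
    weights = {p: 2.0 ** -len(p) for p in programs}
    total = sum(weights.values())
    return {p: w / total for p, w in weights.items()}

# A hand-picked prefix-free set standing in for 'strings whose prefix compiles':
print(prefix_free_prior(['0', '10', '110', '111']))
```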

\n

Why stop there? There is a natural generalization of the Church-Turing thesis, naturally called the extended Church-Turing thesis, which asserts that the universe is computable in polynomial time. In fact, we have a strong suspicion that physics should be local, which means more or less precisely that updates can be performed using a linear size logical circuit. Maybe we should restrict our prior further, looking only for small circuits.

\n

(As we realized only slightly later, this extended hypothesis is almost certainly false, because real-world physics is quantum mechanical, and quantum processes are not believed to be simulable by polynomial-size classical circuits. But if we replace \"circuit\" by \"quantum circuit,\" which is a much less arbitrary change than it seems at face value, then we are good. Are further changes forthcoming? I don't know, but I suspect not.)

\n

So we have two nice questions. First, what does a prior over efficiently computable hypotheses look like? Second, what sort of observation about the world could cause you to modify Solomonoff induction? For that matter, what sort of physical evidence ever convinced us that Solomonoff induction was a good idea in the first place, rather than a broader prior? I suspect that both of these questions have been tackled before, but google has failed me so now I will repeat some observations.

\n

Defining priors over \"polynomially large\" objects is a little more subtle than usual Solomonoff induction. In some sense we need to penalize a hypothesis both for its description complexity and its computational complexity. Here is a first try:

\n

A hypothesis consists of an initial state of some length N, and a logical circuit of size M which takes N bits to K+N bits. The universe evolves by repeatedly applying the logical circuit to compute both a \"next observation\" (the first K bits) and the new state of the universe (the last N bits). The probability of a hypothesis drops off exponentially with its length.
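
The dynamics in that definition are easy to write out explicitly. Below is a hedged sketch (again my own, with a trivial stand-in for the circuit; a real hypothesis would supply a concrete boolean circuit of size M):

```python
def run_universe(initial_state, circuit, k, steps):
    # 'initial_state' is a tuple of N bits; 'circuit' maps N bits to K+N bits.
    # At each step the first K output bits are emitted as an observation and
    # the remaining N bits become the new state of the universe.
    state = tuple(initial_state)
    observations = []
    for _ in range(steps):
        out = circuit(state)
        observations.append(out[:k])
        state = out[k:]
    return observations

def toy_circuit(state):
    # Placeholder update rule: emit the first bit, then rotate the state left.
    return (state[0],) + state[1:] + (state[0],)

print(run_universe((1, 0, 0), toy_circuit, k=1, steps=5))
```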

\n

How reasonable is this? Why, not at all. The size of our hypothesis is the size of the universe, so it is going to take an awful lot of observations to surmount the improbability of living in a large universe. So what to do? Well, the reason this contradicts intuition is that we expect our physical theory (as well as the initial state of our system) to be uniform in some sense, so that it can hope to be described concisely even if the universe is large. Well luckily for us the notion of uniformity already exists for circuits, and in fact appears to be the correct notion. Instead of working with circuits directly, we specify a program which outputs that circuit (so if the laws of physics are uniform across space, the program just has to be told \"tile this simple update rule at every point.\") So now it goes like this:

\n

A hypothesis consists of a program which outputs an initial state of some finite length N, and a program which outputs a logical circuit of size M which takes N bits to K+N bits. The observations are defined as before. The probability of a hypothesis drops off exponentially with its length.

\n

How reasonable is this? Well it doesn't exploit the conjectured computational efficiency of the universe at all. There are three measures of complexity, and we are only using one of them. We have the length of the hypothesis, the size of the universe, and the computational complexity of the update rule. At least we now have these quantities in hand, so we can hope to incorporate them intelligently. One solution is to place an explicit bound on the complexity of the update rule in terms of the size of the universe. It is easy to see that this approach is doomed to fail. An alternative approach is to explicitly include terms dependent on all three complexity measures in the prior probability for each hypothesis.

\n

There are some aesthetically pleasing solutions which I find really attractive. For example, make the hypothesis a space bounded Turing machine and also require it to specify the initial state of its R/W tape. More simply but less aesthetically, you could just penalize a hypothesis based on the logarithm of its running time (since this bounds the size of its output), or on log M. I think this scheme gives known physical theories very good description complexity. Overall, it strikes me as an interesting way of thinking about things.
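
As a rough sketch of what 'penalise on the logarithm of the running time' could look like as a prior weight, consider the following (the particular way of mixing the three complexity measures is my own assumption, not something the argument above fixes):

```python
import math

def penalised_weight(description_length, running_time, universe_size=None):
    # Charge |h| bits for the description, log2(T) bits for the running time of
    # the program that unfolds the hypothesis, and optionally log2(N) bits for
    # the size of the universe it posits; the weight is 2^(-total bits).
    bits = description_length + math.log2(running_time)
    if universe_size is not None:
        bits += math.log2(universe_size)
    return 2.0 ** -bits

# A small, fast hypothesis versus a slightly smaller but much slower one:
print(penalised_weight(description_length=100, running_time=10**6))
print(penalised_weight(description_length=95, running_time=10**12))
```

Under this kind of weighting, shaving a few bits off the description no longer automatically wins if it costs many orders of magnitude in running time.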

\n

I don't really know how to answer the second question (what sort of observation would cause us to restrict our prior in this way). I don't really know where the description complexity prior comes from either; it feels obvious to me that the universe is computable, just like it feels obvious that the universe is efficiently computable. I don't trust these feelings of obviousness, since they are coming from my brain. I guess the other justification is that we might as well stick to computable hypotheses, because we can't use stronger hypotheses to generate predictions (our conscious thoughts at least appearing to be computable). The same logic does have something to say about efficiency: we can't use inefficient hypotheses to generate predictions, so we might as well toss them out. But this would lead us to just keep all sufficiently efficient hypotheses and throw out the rest, which doesn't work very well (since the efficiency of a hypothesis depends upon the size of the universe it posits; this basically involves putting a cap on the size of the universe). I don't know of a similar justification which tells us to penalize hypotheses based on their complexity. The only heuristic is that it has worked well for screening out physical theories so far. That's a pretty good thing to have going for you, at least.

\n

In sum, thinking about priors over more restricted sets of hypotheses can be interesting. If anyone knows of a source which approaches this problem more carefully, I would be interested to learn.

" } }, { "_id": "bcJBKJx4iaLChYZrW", "title": "Neutral AI", "pageUrl": "https://www.lesswrong.com/posts/bcJBKJx4iaLChYZrW/neutral-ai", "postedAt": "2010-12-27T06:10:05.878Z", "baseScore": 12, "voteCount": 15, "commentCount": 30, "url": null, "contents": { "documentId": "bcJBKJx4iaLChYZrW", "html": "

Unfriendly AI has goal conflicts with us.  Friendly AI (roughly speaking) shares our goals.  How about an AI with no goals at all?

\n

I'll call this \"neutral AI\".  Cyc is a neutral AI.  It has no goals, no motives, no desires; it is inert unless someone asks it a question.  It then has a set of routines it uses to try to answer the question.  It executes these routines, and terminates, whether the question was answered or not.  You could say that it had the temporary goal to answer the question.  We then have two important questions:

\n
    \n
  1. Is it possible (or feasible) to build a useful AI that operates like this?
  2. Is an AI built in this fashion significantly less-dangerous than one with goals?
\n

Many people have answered the first question \"no\".  This would probably include Hubert Dreyfus (based on a Heideggerian analysis of semantics, which was actually very good but I would say misguided in its conclusions, because Dreyfus mistook \"what AI researchers do today\" for \"what is possible using a computer\"), Phil Agre, Rodney Brooks, and anyone who describes their work as \"reactive\", \"behavior-based\", or \"embodied cognition\".  We could also point to the analogous linguistic divide.  There are two general approaches to natural language understanding.  One descends from generative grammars and symbolic AI, is embodied by James Allen's book Natural Language Understanding, and belongs to the \"program in the knowledge\" camp that would answer the first question \"yes\".  The other has more kinship with construction grammars and machine learning, is embodied by Manning & Schutze's Foundations of Statistical Natural Language Processing, and its practitioners would be more likely to answer the first question \"no\". (Eugene Charniak is noteworthy for having been prominent in both camps.)

\n

The second question, I think, hinges on two sub-questions:

\n
    \n
  1. Can we prevent an AI from harvesting more resources than it should for a question?
  2. Can we prevent an AI from conceiving the goal of increasing its own intelligence as a subgoal to answering a question?
\n

The Jack Williamson story \"With Folded Hands\" (1947) tells how humanity was enslaved by robots given the order to protect humanity, who became... overprotective.  Or suppose a physicist asked an AI, \"Does the Higgs boson exist?\"  You don't want it to use the Earth to build a supercollider.  These are cases of using more resources than intended to carry out an order.

\n

You may be able to build a Cyc-like question-answering architecture that would have no risk of doing any such thing.  It may be as simple as placing resource limitations on every question.  The danger is that if the AI is given a very thorough knowledge base that includes, for instance, an understanding of human economics and motivations, it may syntactically construct a plan to find the answer to a question that is technically within the resource limitations posed, for instance by manipulating humans in ways that don't tweak its cost function.  This could lead to very big mistakes; but it isn't the kind of mistake that builds on itself, like a FOOM scenario.  The question is whether any of these very big mistakes would be irreversible.  My intuition is that there would be a power-law distribution of mistake sizes, with a small number of irreversible mistakes.  We might then figure out a reasonable way of determining our risk level.
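
To illustrate what 'resource limitations on every question' might mean mechanically, here is a minimal sketch (my own; the function names and the stand-in reasoner are hypothetical, and nothing here reflects how Cyc or any real system is actually built):

```python
class ResourceLimitExceeded(Exception):
    pass

def answer_question(question, inference_steps, max_steps=10_000):
    # The system may only spend a fixed budget of inference steps on a question
    # (one could also meter memory, wall-clock time, or external actions)
    # before it must give up rather than escalate its resource use.
    for step_count, result in enumerate(inference_steps(question)):
        if step_count >= max_steps:
            raise ResourceLimitExceeded(question)
        if result is not None:
            return result
    return None  # the inference procedure terminated without an answer

def dummy_inference(question):
    # Stand-in reasoner: 'answers' after a fixed amount of work.
    for _ in range(500):
        yield None
    yield 'answer to ' + question

print(answer_question('Does the Higgs boson exist?', dummy_inference))
```

The hard part, as noted above, is that a budget on internal steps does not by itself rule out cheap plans with large external side effects.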

\n

If the answer to the second subquestion is \"yes\", then we probably don't need to fear a FOOM from neutral AI.

\n

The short answer is, Yes, there are \"neutral AI architectures\" that don't currently have the risk either of harvesting too many resources, or of attempting to increase their own intelligence.  Many existing AI architectures are examples.  (I'm thinking specifically of \"hierarchical task-network planning\", which I don't consider true planning; it only allows the piecing together of plan components that were pre-built by the programmer.)  But they can't do much.  There's a power / safety tradeoff.  The question is how much power you can get in the \"completely safe\" region, and where the sweet spots are in that tradeoff outside the completely safe region.

\n
\n
\n

If you could build an AI that did nothing but parse published articles to answer the question, \"Has anyone said X?\", that would be very useful, and very safe. I worked on such a program (SemRep) at NIH. It works pretty well within the domain of medical journal articles.  If it could take one step more, and ask, \"Can you find a set of one to four statements that, taken together, imply X?\", that would be a huge advance in capability, with little if any additional risk.  (I added that capability to SemRep, but no one has ever used it, and it isn't accessible through the web interface.)
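
This is emphatically not how SemRep is implemented; but as a toy sketch of what 'find a set of one to four statements that, taken together, imply X' could mean, suppose the extracted statements have already been reduced to simple propositional facts and implications (all names and example statements below are placeholders):

```python
from itertools import combinations

def entails(statements, goal):
    # Forward-chain over statements of the form (premises, conclusion);
    # a statement with no premises is a bare fact.
    facts = set()
    changed = True
    while changed:
        changed = False
        for premises, conclusion in statements:
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)
                changed = True
    return goal in facts

def supporting_subsets(statements, goal, max_size=4):
    # Return every subset of one to four statements whose combination entails the goal.
    hits = []
    for size in range(1, max_size + 1):
        for subset in combinations(statements, size):
            if entails(subset, goal):
                hits.append(subset)
    return hits

corpus = [
    ((), 'aspirin inhibits COX-1'),
    (('aspirin inhibits COX-1',), 'aspirin reduces clotting'),
    (('aspirin reduces clotting',), 'aspirin lowers stroke risk'),
    ((), 'ibuprofen inhibits COX-2'),
]
print(supporting_subsets(corpus, 'aspirin lowers stroke risk'))
```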

\n
\n
" } }, { "_id": "j2syBASpAA2Q5xfG7", "title": "Rational insanity", "pageUrl": "https://www.lesswrong.com/posts/j2syBASpAA2Q5xfG7/rational-insanity", "postedAt": "2010-12-27T05:04:24.954Z", "baseScore": 19, "voteCount": 11, "commentCount": 10, "url": null, "contents": { "documentId": "j2syBASpAA2Q5xfG7", "html": "

My theory on why North Korea has stepped up its provocation of South Korea since their nuclear missile tests is that they see this as a tug-of-war.

\n

Suppose that North Korea wants to keep its nuclear weapons program.  If they hadn't sunk a ship and bombed a city, world leaders would currently be pressuring North Korea to stop making nuclear weapons.  Instead, they're pressuring North Korea to stop doing something (make provocative attacks) that North Korea doesn't really want to do anyway.  And when North Korea (temporarily) stops attacking South Korea, everybody can go home and say they \"did something about North Korea\".  And North Korea can keep on making nukes.

" } }, { "_id": "uaFL5XgCi63DfzQfu", "title": "What's happened to the front page?", "pageUrl": "https://www.lesswrong.com/posts/uaFL5XgCi63DfzQfu/what-s-happened-to-the-front-page", "postedAt": "2010-12-26T23:30:15.372Z", "baseScore": 25, "voteCount": 13, "commentCount": 15, "url": null, "contents": { "documentId": "uaFL5XgCi63DfzQfu", "html": "

http://lesswrong.com is suddenly redirecting to a Tibetan meditation site. What the hell?

" } }, { "_id": "v6pegGszpp99mTu6e", "title": "The Fallacy of Dressing Like a Winner", "pageUrl": "https://www.lesswrong.com/posts/v6pegGszpp99mTu6e/the-fallacy-of-dressing-like-a-winner", "postedAt": "2010-12-26T20:59:10.172Z", "baseScore": 27, "voteCount": 39, "commentCount": 21, "url": null, "contents": { "documentId": "v6pegGszpp99mTu6e", "html": "

Imagine you are a sprinter, and your one goal in life is to win the 100m sprint in the Olympics. Naturally, you watch the 100m sprint winners of the past in the hope that you can learn something from them, and it doesn't take you long to spot a pattern.

\n

 

\n

Every one of them can be seen wearing a gold medal around their neck. Not only is there a strong correlation: when you examine the rules of the Olympics, you find that 100% of winners must wear a gold medal at some point; there is no way that someone could win and never wear one. So you go out and buy a gold medal from a shop, put it around your neck, and sit back, satisfied.

\n

 

\n

For another example, imagine that you are now in charge of running a large oil rig. Unfortunately, some of the drilling equipment is old and rusty, and every few hours a siren goes off alerting the divers that they need to go down again and repair the damage. This is clearly not an acceptable state of affairs, so you start looking for solutions.

\n

 

\n

You think back to a few months ago, before things got this bad, and you remember how the siren barely ever went off at all. In fact, from your knowledge of how the equipment works, when there were no problems the siren couldn't go off. Clearly the solution to the problem is to unplug the siren.

\n

 

\n

(I would like to apologise in advance for my total ignorance of how oil rigs actually work; I just wanted an analogy.)

\n

 

\n

Both these stories demonstrate a mistake which I call 'Dressing Like a Winner' (DLAW). The general form of the error is when an indicator of success gets treated as an instrumental value, and then sometimes as a terminal value which completely subsumes the thing it was supposed to indicate. As someone noted, this can also be seen as a sub-case of the correlative fallacy.  This mistake is so obviously wrong that it is pretty much non-existent in near mode, which is why the above stories seem utterly ridiculous. However, once we switch into the more abstract far mode, even the most ridiculous errors become dangerous. In the rest of this post I will point out three places where I think this error occurs.

\n

 

\n

Changing our minds

\n

 

\n

In a debate between two people, it is usually the case that whoever is right is unlikely to change their mind. This is not only an empirically observable correlation, but it's also intuitively obvious: would you change your mind if you were right?

\n

 

\n

At this point, our fallacy steps in with a simple conclusion: \"refusing to change your mind will make you right\". As we all know, this could not be further from the truth; changing your mind is the only way to become right, or at any rate less wrong. I do not think this realization is unique to this community, but it is far from universal (and it is a lot harder to practice than to preach, suggesting it might still hold on in the subconscious).

\n

 

\n

At this point a lot of people will probably have noticed that what I am talking about bears a close resemblance to signalling, and some of you are probably thinking that that is all there is to it. While I will admit that DLAW and signalling are easy to confuse, I do think they are separate things, and that there is more than just ordinary signalling going on in the debate.

\n

 

\n

One piece of evidence for this is the fact that my unwillingness to change my mind extends even to opinions I have admitted to nobody. If I were only interested in signalling, surely I would want to change my mind in that case, since it would reduce the risk of being humiliated once I do state my opinion. Another reason to believe that DLAW exists is the fact that not only do debaters rarely change their minds, but those that do are often criticised, sometimes quite brutally, for 'flip-flopping', rather than being praised for becoming smarter and for demonstrating that their loyalty to truth is higher than their ego.

\n

 

\n

So I think DLAW is at work here, and since I have chosen a fairly uncontroversially bad thing to start off with, I hope you can now agree with me that it is at least slightly dangerous.

\n

 

\n

Consistency

\n

 

\n

It is an accepted fact that any map which completely fits the territory would be self-consistent. I have not seen many such maps, but I will agree with the argument that they must be consistent. What I disagree with is the claim that this means we should be focusing on making our maps internally consistent, and that once we have done this we can sit back because our work is done.

\n

 

\n

This idea is so widely accepted and so tempting, especially to those with a mathematical bent, that I believed it for years before noticing the fallacy that led to it. Most reasonably intelligent people have gotten over one half of the toxic meme, in that few of them believe consistency is good enough (with the one exception of ethics, where it still seems to apply in full force). However, as with the gold medal, not only is it a mistake to be satisfied with it, but it is a waste of time to aim for it in the first place.

\n

 

\n

In Robin Hanson's article beware consistency we see that the consistent subjects actually do worse than the inconsistent ones, because they are consistently impatient or consistently risk averse. I think this problem is even more general than his article suggests, and represents a serious flaw in our whole epistemology, dating back to the Ancient Greek era.

\n

 

\n

Suppose that one day I notice an inconsistency in my own beliefs. Conventional wisdom would tell me that this is a serious problem, and that I should discard one of the beliefs as quickly as possible. All else being equal, the belief that gets discarded will probably be the one I am less attached to, which will probably be the one I acquired more recently, which is probably the one which is actually correct, since the other may well date back to long before I knew how to think critically about an idea.

\n

 

\n

Richard Dawkins gives a good example of this in his book 'The God Delusion'. Kurt Wise was a brilliant young geologist raised as a fundamentalist Christian. Realising the contradiction between his beliefs, he took a pair of scissors to the Bible and cut out every passage he would have to reject if he accepted the scientific world-view. After realizing his Bible was left with so few pages that the poor book could barely hold itself together, he decided to abandon science entirely. Dawkins uses this to make an argument for why religion needs to be removed entirely, and I cannot necessarily say I disagree with him, but I think a second moral can be drawn from this story.

\n

 

\n

How much better off would Kurt have been if he had just shrugged his shoulders at the contradiction and continued to believe both? How much worse off would we be if Robert Aumann had abandoned the study of Rationality when he noticed it contradicted Orthodox Judaism? It's easy to say that Kurt was right to abandon one belief, he just abandoned the wrong one, but from inside Kurt's mind I'm not sure it was obvious to him which belief was right.

\n

 

\n

I think a better policy for dealing with contradictions is to put both beliefs 'on notice': be cautious before acting upon either of them, and wait for more evidence to decide between them. If nothing else, we should admit more than two possibilities: they could actually be compatible, or they could both be wrong, or one or both of them could be badly confused.

\n

 

\n

To put this in one sentence \"don't strive for consistency, strive for accuracy and consistency will follow\".

\n

 

\n

Mathematical arguments about rationality

\n

 

\n

In this community, I often see mathematical proofs that a perfect Bayesian would do something. These proofs are interesting from a mathematical perspective, but since I have never met a perfect Bayesian I am sceptical of their relevance to the real world (perhaps they are useful for AI; someone more experienced than me should either confirm or deny that).

\n

 

\n

The problem comes when we are told that since a perfect Bayesian would do X, then we imperfect Bayesians should do X as well in order to better ourselves. A good example of this is Aumann's Agreement Theorem (which shows that not agreeing to disagree is a consequence of perfect rationality) being treated as an argument for not agreeing to disagree in our quest for better rationality. The fallacy is hopefully clear by now: we have been given no reason to believe that copying this particular by-product of success will bring us closer to our goal. Indeed, in our world of imperfect rationalists, some of whom are far more imperfect than others, an argument against disagreement seems like a very dangerous thing.

\n

 

\n

Eliezer has already argued against this specific mistake, but since he went on to commit it a few articles later I think it bears mentioning again.

\n

 

\n

Another example of this mistake is this post (my apologies to the poster; this is not meant as an attack, you just provided a very good example of what I am talking about). The post provides a mathematical argument (a model rather than a proof) that we should be more sceptical of evidence that goes against our beliefs than evidence for them. To be more exact, it gives an argument why a perfect Bayesian, with no human bias and mathematically precise calibration, should be more sceptical of evidence going against its beliefs than evidence for them.

\n

 

\n

The argument is, as far as I can tell, mathematically flawless. However, it doesn't seem to apply to me at all, if for no other reason than that I already have a massive bias overdoing that job, and my role is to counteract it.

\n

 

\n

In fact, I would say that in general our willingness to give numerical estimates is an example of this fallacy. Cox's theorem proves that any reasoning system satisfying certain consistency desiderata is isomorphic to Bayesian probability, but since my reasoning system is not perfect, I get the feeling that saying \"80%\" instead of \"reasonably confident\" is just making a mockery of the whole process.

\n

 

\n

This is not to say I totally reject the relevance of mathematical models and proofs to our pursuit. All else being equal, if a perfect Bayesian does X, it is evidence that X is good for an imperfect Bayesian. It's just not overwhelmingly strong evidence, and shouldn't be treated as if it puts a stop to all debate and decides the issue one way or the other (unlike other fields, where mathematical arguments can do this).

\n

 

\n

How to avoid it

\n

 

\n

I don't think DLAW is particularly insidious as mistakes go, which is why I called it a fallacy rather than a bias. The only advice I would give is to be careful when operating in far mode (which you should do anyway), and always make sure the causal link between your actions and your goals is pointing in the right direction.

\n

 

\n

If anyone has any other examples they can think of, please post them. Thanks to those who have already pointed some out, particularly the point about akrasia and motivation.

" } }, { "_id": "NtrkqzJDTxywFEpDR", "title": "Alternative Places to Get Ideas (Also, \"In Defense of Food\")", "pageUrl": "https://www.lesswrong.com/posts/NtrkqzJDTxywFEpDR/alternative-places-to-get-ideas-also-in-defense-of-food", "postedAt": "2010-12-26T18:36:35.552Z", "baseScore": 20, "voteCount": 14, "commentCount": 37, "url": null, "contents": { "documentId": "NtrkqzJDTxywFEpDR", "html": "

I discovered Less Wrong a few months ago (courtesy of Harry Potter and the Methods of Rationality), and I am extremely grateful to have such a thoughtful place to have discussions and to learn new things. But one of the more significant hazards I've become aware of is confirmation bias. And since I began coming here a lot, I find that I evaluate new information through a \"Less Wrong Lens\", which officially means \"well researched and thought out\"; that is perfectly fine, but it also includes some new or reinforced biases.

\n

I recently realized the extent of the problem while reading \"In Defense of Food\". The basic premise (or most relevant one) of the book is that while science may one day be able to determine what is healthy on a nutrient-by-nutrient basis and let us craft whatever artificial foods we want, we are not nearly at that point yet. Every few years the prevailing beliefs of the nutritionist and scientific communities change, people scurry to catch up, and regardless, since the transition from a \"traditional\" to a \"scientific\" diet, certain nutrition-related diseases have gotten more prevalent, not less.

\n

His argument is that traditional diets have often had thousands of years to evolve to match the needs and available food sources of populations. So while the variables and interactions may be complex enough that we don't know why, until science DOES figure it out, individual people are better off sticking to the diets of the past. At the same time, corporations that are interested only in marketing as much food as quickly as possible benefit from constantly changing scientific attitudes. (Disclaimer: yes, I'm oversimplifying again, but my point isn't even necessarily about the merits of the book so bear with me).

\n

By the end of the book, I did agree with his basic premises. I'm not sure his solution is the single best one, but it's a large step up from the diet that the average westerner is going to have. It wasn't explicitly anti-science, just attempting to be realistic about what science can realistically accomplish, and what unique pitfalls come from giving science (or more importantly, politicians and corporations on whom scientists are dependent for funding) the position of power that religion and other cultural institutions once had.

\n

But the first half of the book does have a pretty obvious goal, not of discrediting science per se, but \"taking the wind out of science's sails\". And in the wake of reading \"Methods of Rationality\" it absolutely rankled me. I could feel my memetic immune system going into overdrive, looking for reasons not to believe whatever the man ended up having to say. I'm currently unsure whether that had to do with the way he was writing, if he did have his own ax to grind, or if it was purely my own biases coloring the words. But whatever his motivations, I'm grateful for having found the book at the time I did, because it taught me a lesson about my own mind. My answer isn't to say \"oh, science is just as flawed as everything else now,\" but I think I'll be able to approach things from a more neutral perspective.

\n

Now, my purpose of posting this is two-fold. One, is I'm simply curious if other people had read the book and had anything to say about it, one way or another. But the other is to ask: do you have any sources you turn to specifically to help broaden your horizon from the prevailing mindset at Less Wrong? Websites that are not \"anti-rational\" or \"anti-science,\" that you'd still consider trustworthy sources of information, but that help to offset certain biases that you might have accumulated here?

" } }, { "_id": "aqyLxWzAEpDHm2Xyf", "title": "Tallinn-Evans $125,000 Singularity Challenge", "pageUrl": "https://www.lesswrong.com/posts/aqyLxWzAEpDHm2Xyf/tallinn-evans-usd125-000-singularity-challenge", "postedAt": "2010-12-26T11:21:22.649Z", "baseScore": 38, "voteCount": 40, "commentCount": 378, "url": null, "contents": { "documentId": "aqyLxWzAEpDHm2Xyf", "html": "

Michael Anissimov posted the following on the SIAI blog:

\n

Thanks to the generosity of two major donors (Jaan Tallinn, a founder of Skype and Ambient Sound Investments, and Edwin Evans, CEO of the mobile applications startup Quinly), every contribution to the Singularity Institute up until January 20, 2011 will be matched dollar-for-dollar, up to a total of $125,000.

\n

Interested in optimal philanthropy — that is, maximizing the future expected benefit to humanity per charitable dollar spent? The technological creation of greater-than-human intelligence has the potential to unleash an “intelligence explosion” as intelligent systems design still more sophisticated successors. This dynamic could transform our world as greatly as the advent of human intelligence has already transformed the Earth, for better or for worse. Thinking rationally about these prospects and working to encourage a favorable outcome offers an extraordinary chance to make a difference. The Singularity Institute exists to do so through its research, the Singularity Summit, and public education.

\n

We support both direct engagement with the issues and the improvements in methodology and rationality needed to make better progress. Through our Visiting Fellows program, researchers from undergrads to Ph.D.s pursue questions on the foundations of Artificial Intelligence and related topics in two-to-three-month stints. Our Resident Faculty, now up to four researchers from three last year, pursues long-term projects, including AI research, a literature review, and a book on rationality, the first draft of which was just completed. Singularity Institute researchers and representatives gave over a dozen presentations at half a dozen conferences in 2010. Our Singularity Summit conference in San Francisco was a great success, bringing together over 600 attendees and 22 top scientists and other speakers to explore cutting-edge issues in technology and science.

\n

We are pleased to receive donation matching support this year from Edwin Evans of the United States, a long-time Singularity Institute donor, and Jaan Tallinn of Estonia, a more recent donor and supporter. Jaan recently gave a talk on the Singularity and his life at an entrepreneurial group in Finland. Here’s what Jaan has to say about us:

\n

“We became the dominant species on this planet by being the most intelligent species around. This century we are going to cede that crown to machines. After we do that, it will be them steering history rather than us. Since we have only one shot at getting the transition right, the importance of SIAI’s work cannot be overestimated. Not finding any organisation to take up this challenge as seriously as SIAI on my side of the planet, I conclude that it’s worth following them across 10 time zones.”
– Jaan Tallinn, Singularity Institute donor

\n

Make a lasting impact on the long-term future of humanity today — make a donation to the Singularity Institute and help us reach our $125,000 goal. For more detailed information on our projects and work, contact us at institute@intelligence.org or read our new organizational overview.

\n

-----

\n

Kaj's commentary: if you haven't done so recently, do check out the SIAI publications page. There are several new papers and presentations, out of which I thought that Carl Shulman's Whole Brain Emulations and the Evolution of Superorganisms made for particularly fascinating (and scary) reading. SIAI's finally starting to get its paper-writing machinery into gear, so let's give them money to make that possible. There's also a static page about this challenge; if you're on Facebook, please take the time to \"like\" it there.

\n

(Full disclosure: I was an SIAI Visiting Fellow in April-July 2010.)

" } }, { "_id": "p6EY4LZQPW9W9Xbp3", "title": "Pascal's Gift", "pageUrl": "https://www.lesswrong.com/posts/p6EY4LZQPW9W9Xbp3/pascal-s-gift", "postedAt": "2010-12-25T19:42:51.483Z", "baseScore": 13, "voteCount": 10, "commentCount": 46, "url": null, "contents": { "documentId": "p6EY4LZQPW9W9Xbp3", "html": "
\n

 If Omega offered to give you 2^n utils with probability 1/n, what n would you choose?
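
A naive expected-utility calculation (my own framing of the obvious first step, not part of the original puzzle) makes the tension explicit:

$$ \mathbb{E}[U(n)] \;=\; \frac{1}{n}\cdot 2^{n} \;+\; \Bigl(1-\frac{1}{n}\Bigr)\cdot 0 \;=\; \frac{2^{n}}{n}, $$

which is non-decreasing in n and grows without bound, so a straightforward expected-utility maximiser wants n as large as possible even though the probability 1/n of receiving anything at all goes to zero.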

\n
\n

This problem was invented by Armok from #lesswrong. Discuss.

" } }, { "_id": "jLtddbpe79rztp9By", "title": "A Christmas topic: I have thoughts regarding Chanukah and need logic help from Atheists", "pageUrl": "https://www.lesswrong.com/posts/jLtddbpe79rztp9By/a-christmas-topic-i-have-thoughts-regarding-chanukah-and", "postedAt": "2010-12-25T14:24:40.784Z", "baseScore": 0, "voteCount": 5, "commentCount": 11, "url": null, "contents": { "documentId": "jLtddbpe79rztp9By", "html": "

Essentially, I want to make sure my logic is sound, from the point of view of smart rational people who do not believe in the existence of supernatural miracles.

\n

The Chanukah story: 175-134 BCE.  The Hellenistic Syrian Greeks (the Seleucids, under Antiochus IV) had conquered Israel, and passed a variety of laws oppressing the freedom of worship of the Jews there.  They defiled the Temple and forbade the study of sacred texts.  The Maccabees led a Jewish revolt against the Seleucids, and eventually drove them out of Israel.  Immediately upon retaking the Temple, they cleaned and rededicated it; they relit the sacred flame using a small vial of kosher oil and sent for more oil (which was 8 days distant).  The small vial was expected to last only one night, but miraculously lasted 8 days until more supplies arrived.

\n

Now recently, several Reform rabbis have stated that the fact that the first surviving written record of the miracle is from the Gemara (500 CE) indicates that the miracle was invented around 500 CE.  I am not an Orthodox Jew, but I do believe that the Gemara represents the sages writing down oral traditions, and am annoyed by the tendency among certain Reform rabbis to assume that everything was invented at the time it was written regardless of the evidence for or against it.

\n

The texts with the potential to document events follow:

\n

Maccabees 1 (~100 BCE): purely historical/nonreligious.  The book was originally written in Hebrew, but that text does not survive.  A Greek translation exists, and the text avoids all mention of religious and spiritual matters.  For instance, it speaks briefly and euphemistically about the temple, stating that the Jews captured the \"temple hill\" and rededicated the \"citadel\", avoiding mention of the temple itself. 

\n

Maccabees 2 (~30 BCE): we possess what claims to be a 1-volume abridgement of a 5-volume original (which does not survive and is not referenced elsewhere).  The surviving abridgement mentions the temple rededication and a variety of bizarre miracles including the public appearance of angels.  It does not mention the miracle of the oil, however.  The abridged text includes a number of theological innovations which bear more similarity to Catholic beliefs than to Jewish or Protestant ones; it is unknown whether these were present in the original.

\n

Neither of the above is considered a canonical source by Jews or Protestants, but both are by Catholics.

\n

Megillat Taanit (7 CE): a succinct list of red-letter dates from 200 BCE to 7 CE; it mentions that Chanukah is 8 days long but gives no descriptions of any of the listed holidays.

\n

Josephus, The Jewish War (75 CE).  Mentions that Chanukah was the Festival of Lights lasting 8 days, but does not give a reason for this.  He says that he \"supposes\" it is because of the unexpected restoration of freedom to worship.  Elsewhere his text is extremely complete, well-researched, and accurate.

\n

So there are two possibilities being considered:

\n

1.  The miracle of the oil was described by the Maccabees who rededicated the temple.  Such an interpretation has to make the following leaps:

\n

a.  That a text which avoids all mention of religious matters would not mention this one. 

\n

b.  That a text which mentions dozens of miracles would not mention this one.  Well, it's clearly not written by a mainstream Jew because the theology is so unusual.  The writer had to pick and choose miracles when abridging from 5 to 1 volume, and may have left out the oil one because it's less spectacular than the others.

\n

c.  That Josephus wouldn't mention the miracle.  But a goal of his in writing The Jewish War was to convince the Romans that the Jews could make good subjects and would not be eternally rebellious.  Had he connected the Jewish obligation to kindle lights to the idea of the rededication of the Temple (which the Romans had just destroyed), he would have risked causing the Romans to forbid the kindling of lights; this would have increased friction and rebellion.

\n

2.  The miracle of the oil was invented centuries later.  Such an interpretation has the following problems:

\n

a.  That the Jews chose 8 days to celebrate Chanukah without any particular reason.  8 would be a strange number, longer than other Jewish holidays [unrelatedly, an extra day would later be added to other holidays due to calendar uncertainty].  No reason other than the oil miracle has been unearthed for this number 8.

\n

b.  That Josephus, who is otherwise so erudite, shrugs off the reason for calling Chanukah the \"Festival of Lights\".  His given explanation (the restoration of the freedom to worship) doesn't do much to explain light, and certainly doesn't explain the plural lights.  Further, his saying he \"supposes\" this explanation (where he is otherwise accurate, detailed, and certain in his history) is difficult to explain unless he is deliberately avoiding giving the real explanation.  Certainly one would expect him to give a reason for the festival's 8 day length if he felt it prudent to do so.

\n

c.  The fact that those who dispute the standard account have no actual evidence that the sages invented the miracle, but do have a political goal in saying so.

\n

d.  The claim rests on the supposition that the Sages wanted to reduce the political importance of Chanukah by deemphasizing the military victory and turning the miracle into a spiritual one.  But had they wanted to do this, they could simply have abolished the celebration entirely; actually they abolished the celebrations listed in Megillat Taanit except for Chanukah.  Why keep that holiday while inventing a story to reduce the associated political implications?  Just to provide themselves with an excuse to eat jelly donuts?

\n

 

\n

Anyway, I was wondering what atheists might believe the most plausible explanation:

\n

That the miracle of the oil lasting 8 days was invented centuries later?

\n

Or that the Maccabees somehow secured a secret stash of sacramental oil beyond the one vial they initially found?

\n

 

\n

And, am I overemphasizing/underemphasizing the importance of anything?

" } }, { "_id": "YxWxPPSjBWLSmsLek", "title": "Help Me Plan My Education?", "pageUrl": "https://www.lesswrong.com/posts/YxWxPPSjBWLSmsLek/help-me-plan-my-education", "postedAt": "2010-12-25T09:57:01.365Z", "baseScore": 6, "voteCount": 4, "commentCount": 10, "url": null, "contents": { "documentId": "YxWxPPSjBWLSmsLek", "html": "

I'm planning independent studies and choosing a concentration for my bachelor's degree, so I'm looking for shiny things on which I can base the next year and a half of my life. I get to do all independent studies for the 3 semesters worth of credits remaining, and I'm pretty happy about that. So it shall be, that the wheel of akrasia shall turn, and what was once procrastination shall be productive. And things that were productive but got in the way of unnecessary coursework shall be double productive, maybe triple.

\n

I'm looking for good ideas or texts to base classes around. I feel like there was recently a relevant discussion on texts, but couldn't find it (and feel like an idiot posting a help-the-noob related article after failing to find recent ones. Links to them are appreciated). Are there others in the same spirit as these (old post)? What should I prioritize, given that Less Wrong has been my first external source of rationality?

\n

Then there are other marginally less shiny, but still reflective subjects like economics, computer programming, any of the sciences that engineers would care about plus quantum mechanics, things most people reading this think matter. Awesomeology. My problem with this is prioritizing. I have about 12 classes worth of independent study to get through, minus any credits from equivalence tests I may take for commonly tested things. There's a right answer to the extent that filling in the blank of a \"B.S. in Science, Mathematics, and Technology with a Concentration in ____\" with something in particular matters, since 6 classes have to be directly related to the concentration, but I don't know how much that will actually ever matter. Horribleness would make a great concentration, but I don't want to rigorously quantify human suffering enough to do it just for that novelty. It also might help if my degree sounds real. Something about probability or statistics would be reasonable and Bayes would approve, but I want the name of my degree to pop, since I get to name it. Is that wrong? Am I overthinking this?

\n

I might not be cool enough to pull off my best-sounding idea for a concentration, in Cybernetic Heuristics, but it has an elegant meaning worthy of study and googling it in quotes returns no results. By \"best-sounding\", I mean that people who don't know and can't be bothered to look the words up will think I'm from the future. There's a chance my utility function is broken, but I think that's an important thing to look for when choosing a degree.

\n

 

\n

Thoughts? Ideal curricula? Focuses for how I should spend my time? Suggested readings substantial enough to make a course? Scratch that--none of the required classes have content anyway. Really, there's nothing I can't do with this, but I don't know what I should do with this, and would much rather do correct things I wouldn't think of than incorrect things I would do on my own, so asking is a good idea. If it matters, assume I have no interests or aspirations that don't coincide with practicality. Because I shouldn't. Those suck.

\n

What I need are fun things I can turn into independent studies to make my life awesome and a concentration for my degree. Suggestions for extracurricular activities will also be helpful, but I've got to say upfront that I don't know what I could do with the Campus Crusade for Bayes with an online campus. That's like...this.

\n

All advice, recommendations and musings will be greatly appreciated, even if they're not serious and were given out of spite.

" } }, { "_id": "pC47ZTsPNAkjavkXs", "title": "Efficient Charity: Do Unto Others...", "pageUrl": "https://www.lesswrong.com/posts/pC47ZTsPNAkjavkXs/efficient-charity-do-unto-others", "postedAt": "2010-12-24T21:26:10.519Z", "baseScore": 209, "voteCount": 170, "commentCount": 322, "url": null, "contents": { "documentId": "pC47ZTsPNAkjavkXs", "html": "

This was originally posted as part of the efficient charity contest back in November. Thanks to Roko, multifoliaterose, Louie, jmmcd, jsalvatier, and others I forget for help, corrections, encouragement, and bothering me until I finally remembered to post this here.

Imagine you are setting out on a dangerous expedition through the Arctic on a limited budget. The grizzled old prospector at the general store shakes his head sadly: you can't afford everything you need; you'll just have to purchase the bare essentials and hope you get lucky. But what is essential? Should you buy the warmest parka, if it means you can't afford a sleeping bag? Should you bring an extra week's food, just in case, even if it means going without a rifle? Or can you buy the rifle, leave the food, and hunt for your dinner?

And how about the field guide to Arctic flowers? You like flowers, and you'd hate to feel like you're failing to appreciate the harsh yet delicate environment around you. And a digital camera, of course - if you make it back alive, you'll have to put the Arctic expedition pics up on Facebook. And a hand-crafted scarf with authentic Inuit tribal patterns woven from organic fibres! Wicked!

...but of course buying any of those items would be insane. The problem is what economists call opportunity costs: buying one thing costs money that could be used to buy others. A hand-crafted designer scarf might have some value in the Arctic, but it would cost so much it would prevent you from buying much more important things. And when your life is on the line, things like impressing your friends and buying organic pale in comparison. You have one goal - staying alive - and your only problem is how to distribute your resources to keep your chances as high as possible. These sorts of economics concepts are natural enough when faced with a journey through the freezing tundra.

\n


But they are decidedly not natural when facing a decision about charitable giving. Most donors say they want to \"help people\". If that's true, they should try to distribute their resources to help people as much as possible. Most people don't. In the \"Buy A Brushstroke\" campaign, eleven thousand British donors gave a total of £550,000 to keep the famous painting \"Blue Rigi\" in a UK museum. If they had given that £550,000 to buy better sanitation systems in African villages instead, the latest statistics suggest it would have saved the lives of about one thousand two hundred people from disease. Each individual $50 donation could have given a year of normal life back to a Third Worlder afflicted with a disabling condition like blindness or limb deformity.

Most of those 11,000 donors genuinely wanted to help people by preserving access to the original canvas of a beautiful painting. And most of those 11,000 donors, if you asked, would say that a thousand people's lives are more important than a beautiful painting, original or no. But these people didn't have the proper mental habits to realize that was the choice before them, and so a beautiful painting remains in a British museum and somewhere in the Third World a thousand people are dead.

If you are to \"love your neighbor as yourself\", then you should be as careful in maximizing the benefit to others when donating to charity as you would be in maximizing the benefit to yourself when choosing purchases for a polar trek. And if you wouldn't buy a pretty picture to hang on your sled in preference to a parka, you should consider not helping save a famous painting in preference to helping save a thousand lives.

Not all charitable choices are as simple as that one, but many charitable choices do have right answers. GiveWell.org, a site which collects and interprets data on the effectiveness of charities, predicts that antimalarial drugs save one child from malaria per $5,000 worth of medicine, but insecticide-treated bed nets save one child from malaria per $500 worth of netting. If you want to save children, donating bed nets instead of antimalarial drugs is the objectively right answer, the same way buying a $500 TV instead of an identical TV that costs $5,000 is the right answer. And since saving a child from diarrheal disease costs $5,000, donating to an organization fighting malaria instead of an organization fighting diarrhea is the right answer, unless you are donating based on some criteria other than whether you're helping children or not.

Say all of the best Arctic explorers agree that the three most important things for surviving in the Arctic are good boots, a good coat, and good food. Perhaps they have run highly unethical studies in which they release thousands of people into the Arctic with different combinations of gear, and consistently find that only the ones with good boots, coats, and food survive. Then there is only one best answer to the question \"What gear do I buy if I want to survive?\" - good boots, good food, and a good coat. Your preferences are irrelevant; you may choose to go with alternate gear, but only if you don't mind dying.

And likewise, there is only one best charity: the one that helps the most people the greatest amount per dollar. This is vague, and it is up to you to decide whether a charity that raises forty children's marks by one letter grade for $100 helps people more or less than one that prevents one fatal case of tuberculosis per $100 or one that saves twenty acres of rainforest per $100. But you cannot abdicate the decision, or you risk ending up like the 11,000 people who accidentally decided that a pretty picture was worth more than a thousand people's lives.

Deciding which charity is the best is hard. It may be straightforward to say that one form of antimalarial therapy is more effective than another. But how do both compare to financing medical research that might or might not develop a \"magic bullet\" cure for malaria? Or financing development of a new kind of supercomputer that might speed up all medical research? There is no easy answer, but the question has to be asked.

What about just comparing charities on overhead costs, the one easy-to-find statistic that's universally applicable across all organizations? This solution is simple, elegant, and wrong. High overhead costs are only one possible failure mode for a charity. Consider again the Arctic explorer, trying to decide between a $200 parka and a $200 digital camera. Perhaps a parka only cost $100 to make and the manufacturer takes $100 profit, but the camera cost $200 to make and the manufacturer is selling it at cost. This speaks in favor of the moral qualities of the camera manufacturer, but given the choice the explorer should still buy the parka. The camera does something useless very efficiently; the parka does something vital inefficiently. A parka sold at cost would be best, but in its absence the explorer shouldn't hesitate to choose the parka over the camera. The same applies to charity. An antimalarial net charity that saves one life per $500 with 50% overhead is better than an antidiarrheal drug charity that saves one life per $5000 with 0% overhead: $10,000 donated to the high-overhead charity will save ten lives; $10,000 to the lower-overhead will only save two. Here the right answer is to donate to the antimalarial charity while encouraging it to find ways to lower its overhead. In any case, examining the financial practices of a charity is helpful but not enough to answer the \"which is the best charity?\" question.
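
To make the overhead arithmetic explicit, here is a small sketch; the function is just an illustration and the figures simply restate the numbers in the paragraph above, not anything taken from GiveWell itself:

```python
def lives_saved(donation, cost_per_life, overhead_fraction):
    # Only the non-overhead portion of each donated dollar buys the actual
    # intervention, so the effective cost per life rises with overhead.
    money_on_programs = donation * (1 - overhead_fraction)
    return money_on_programs / cost_per_life

print(lives_saved(10_000, cost_per_life=500, overhead_fraction=0.5))    # 10.0
print(lives_saved(10_000, cost_per_life=5_000, overhead_fraction=0.0))  # 2.0
```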

Just as there is only one best charity, there is only one best way to donate to that charity. Whether you volunteer versus donate money versus raise awareness is your own choice, but that choice has consequences. If a high-powered lawyer who makes $1,000 an hour chooses to take an hour off to help clean up litter on the beach, he's wasted the opportunity to work overtime that day, make $1,000, donate to a charity that will hire a hundred poor people for $10/hour to clean up litter, and end up with a hundred times more litter removed. If he went to the beach because he wanted the sunlight and the fresh air and the warm feeling of personally contributing to something, that's fine. If he actually wanted to help people by beautifying the beach, he's chosen an objectively wrong way to go about it. And if he wanted to help people, period, he's chosen a very wrong way to go about it, since that $1,000 could save two people from malaria. Unless the litter he removed is really worth more than two people's lives to him, he's erring even according to his own value system.

...and the same is true if his philanthropy leads him to work full-time at a nonprofit instead of going to law school to become a lawyer who makes $1,000 / hour in the first place. Unless it's one HELL of a nonprofit.

The Roman historian Sallust said of Cato \"He preferred to be good, rather than to seem so\". The lawyer who quits a high-powered law firm to work at a nonprofit organization certainly seems like a good person. But if we define \"good\" as helping people, then the lawyer who stays at his law firm but donates the profit to charity is taking Cato's path of maximizing how much good he does, rather than how good he looks.

And this dichotomy between being and seeming good applies not only to looking good to others, but to ourselves. When we donate to charity, one incentive is the warm glow of a job well done. A lawyer who spends his day picking up litter will feel a sense of personal connection to his sacrifice and relive the memory of how nice he is every time he and his friends return to that beach. A lawyer who works overtime and donates the money online to starving orphans in Romania may never get that same warm glow. But concern with a warm glow is, at root, concern about seeming good rather than being good - albeit seeming good to yourself rather than to others. There's nothing wrong with donating to charity as a form of entertainment if it's what you want - giving money to the Art Fund may well be a quicker way to give yourself a warm feeling than seeing a romantic comedy at the cinema - but charity given by people who genuinely want to be good and not just to feel that way requires more forethought.

It is important to be rational about charity for the same reason it is important to be rational about Arctic exploration: it requires the same awareness of opportunity costs and the same hard-headed commitment to investigating efficient use of resources, and it may well be a matter of life and death. Consider going to www.GiveWell.org and making use of the excellent resources on effective charity they have available.

" } }, { "_id": "6mRv7Cr57AJAtRFHv", "title": "Efficient Charity: Do Unto Others...", "pageUrl": "https://www.lesswrong.com/posts/6mRv7Cr57AJAtRFHv/efficient-charity-do-unto-others-0", "postedAt": "2010-12-24T20:16:49.138Z", "baseScore": 11, "voteCount": 7, "commentCount": 6, "url": null, "contents": { "documentId": "6mRv7Cr57AJAtRFHv", "html": "

This was originally posted as part of the efficient charity contest back in November. Thanks to Roko, multifoliaterose, Louie, jmmcd, jsalvatier, and others I forget for help, corrections, encouragement, and bothering me until I finally remembered to post this here.

Imagine you are setting out on a dangerous expedition through the Arctic on a limited budget. The grizzled old prospector at the general store shakes his head sadly: you can't afford everything you need; you'll just have to purchase the bare essentials and hope you get lucky. But what is essential? Should you buy the warmest parka, if it means you can't afford a sleeping bag? Should you bring an extra week's food, just in case, even if it means going without a rifle? Or can you buy the rifle, leave the food, and hunt for your dinner?

And how about the field guide to Arctic flowers? You like flowers, and you'd hate to feel like you're failing to appreciate the harsh yet delicate environment around you. And a digital camera, of course - if you make it back alive, you'll have to put the Arctic expedition pics up on Facebook. And a hand-crafted scarf with authentic Inuit tribal patterns woven from organic fibres! Wicked!

...but of course buying any of those items would be insane. The problem is what economists call opportunity costs: buying one thing costs money that could be used to buy others. A hand-crafted designer scarf might have some value in the Arctic, but it would cost so much it would prevent you from buying much more important things. And when your life is on the line, things like impressing your friends and buying organic pale in comparison. You have one goal - staying alive - and your only problem is how to distribute your resources to keep your chances as high as possible. These sorts of economics concepts are natural enough when faced with a journey through the freezing tundra.

\n


But they are decidedly not natural when facing a decision about charitable giving. Most donors say they want to \"help people\". If that's true, they should try to distribute their resources to help people as much as possible. Most people don't. In the \"Buy A Brushstroke\" campaign, eleven thousand British donors gave a total of £550,000 to keep the famous painting \"Blue Rigi\" in a UK museum. If they had given that £550,000 to buy better sanitation systems in African villages instead, the latest statistics suggest it would have saved the lives of about one thousand two hundred people from disease. Each individual £50 donation could have given a year of normal life back to a Third Worlder afflicted with a disabling condition like blindness or limb deformity.

Most of those 11,000 donors genuinely wanted to help people by preserving access to the original canvas of a beautiful painting. And most of those 11,000 donors, if you asked, would say that a thousand people's lives are more important than a beautiful painting, original or no. But these people didn't have the proper mental habits to realize that was the choice before them, and so a beautiful painting remains in a British museum and somewhere in the Third World a thousand people are dead.

If you are to \"love your neighbor as yourself\", then you should be as careful in maximizing the benefit to others when donating to charity as you would be in maximizing the benefit to yourself when choosing purchases for a polar trek. And if you wouldn't buy a pretty picture to hang on your sled in preference to a parka, you should consider not helping save a famous painting in preference to helping save a thousand lives.

Not all charitable choices are as simple as that one, but many charitable choices do have right answers. GiveWell.org, a site which collects and interprets data on the effectiveness of charities, predicts that antimalarial drugs save one child from malaria per $5,000 worth of medicine, but insecticide-treated bed nets save one child from malaria per $500 worth of netting. If you want to save children, donating bed nets instead of antimalarial drugs is the objectively right answer, the same way buying a $500 TV instead of an identical TV that costs $5,000 is the right answer. And since saving a child from diarrheal disease costs $5,000, donating to an organization fighting malaria instead of an organization fighting diarrhea is the right answer, unless you are donating based on some criteria other than whether you're helping children or not.
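
To make the arithmetic concrete, here is a minimal Python sketch of the comparison (the per-life figures are simply the illustrative ones quoted above, not real charity data, and the names are made up for the example):

```python
# Lives saved per fixed budget, using the illustrative per-life costs above.
cost_per_life = {
    'insecticide-treated bed nets': 500,    # dollars per child saved
    'antimalarial drugs': 5000,
    'antidiarrheal treatment': 5000,
}

budget = 10_000
for intervention, cost in cost_per_life.items():
    print(intervention, '->', budget / cost, 'children saved per $10,000')
```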

Say all of the best Arctic explorers agree that the three most important things for surviving in the Arctic are good boots, a good coat, and good food. Perhaps they have run highly unethical studies in which they release thousands of people into the Arctic with different combinations of gear, and consistently find that only the ones with good boots, coats, and food survive. Then there is only one best answer to the question \"What gear do I buy if I want to survive?\" - good boots, good food, and a good coat. Your preferences are irrelevant; you may choose to go with alternate gear, but only if you don't mind dying.

And likewise, there is only one best charity: the one that helps the most people the greatest amount per dollar. This is vague, and it is up to you to decide whether a charity that raises forty children's marks by one letter grade for $100 helps people more or less than one that prevents one fatal case of tuberculosis per $100 or one that saves twenty acres of rainforest per $100. But you cannot abdicate the decision, or you risk ending up like the 11,000 people who accidentally decided that a pretty picture was worth more than a thousand people's lives.

Deciding which charity is the best is hard. It may be straightforward to say that one form of antimalarial therapy is more effective than another. But how do both compare to financing medical research that might or might not develop a \"magic bullet\" cure for malaria? Or financing development of a new kind of supercomputer that might speed up all medical research? There is no easy answer, but the question has to be asked.

What about just comparing charities on overhead costs, the one easy-to-find statistic that's universally applicable across all organizations? This solution is simple, elegant, and wrong. High overhead costs are only one possible failure mode for a charity. Consider again the Arctic explorer, trying to decide between a $200 parka and a $200 digital camera. Perhaps a parka only cost $100 to make and the manufacturer takes $100 profit, but the camera cost $200 to make and the manufacturer is selling it at cost. This speaks in favor of the moral qualities of the camera manufacturer, but given the choice the explorer should still buy the parka. The camera does something useless very efficiently; the parka does something vital inefficiently. A parka sold at cost would be best, but in its absence the explorer shouldn't hesitate to choose the parka over the camera. The same applies to charity. An antimalarial net charity that saves one life per $500 with 50% overhead is better than an antidiarrheal drug charity that saves one life per $5000 with 0% overhead: $10,000 donated to the high-overhead charity will save ten lives; $10,000 to the lower-overhead charity will only save two. Here the right answer is to donate to the antimalarial charity while encouraging it to find ways to lower its overhead. In any case, looking for low overhead is helpful but not enough to answer the \"which is the best charity?\" question.
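
A minimal Python sketch of the same point, assuming (as in the example above) that the quoted cost per life refers to program spending, so only the non-overhead part of a donation buys the intervention; the function name is just for illustration:

```python
def lives_saved(donation, overhead_fraction, program_cost_per_life):
    # Only the part of the donation not eaten by overhead buys the intervention.
    return donation * (1 - overhead_fraction) / program_cost_per_life

budget = 10_000
nets = lives_saved(budget, overhead_fraction=0.5, program_cost_per_life=500)    # 10.0
drugs = lives_saved(budget, overhead_fraction=0.0, program_cost_per_life=5000)  # 2.0
print(nets, drugs)   # the high-overhead charity still saves five times as many lives
```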

Just as there is only one best charity, there is only one best way to donate to that charity. Whether you volunteer versus donate money versus raise awareness is your own choice, but that choice has consequences. If a high-powered lawyer who makes $1,000 an hour chooses to take an hour off to help clean up litter on the beach, he's wasted the opportunity to work overtime that day, make $1,000, donate to a charity that will hire a hundred poor people for $10/hour to clean up litter, and end up with a hundred times more litter removed. If he went to the beach because he wanted the sunlight and the fresh air and the warm feeling of personally contributing to something, that's fine. If he actually wanted to help people by beautifying the beach, he's chosen an objectively wrong way to go about it. And if he wanted to help people, period, he's chosen a very wrong way to go about it, since that $1,000 could save two people from malaria. Unless the litter he removed is really worth more than two people's lives to him, he's erring even according to his own value system.

...and the same is true if his philanthropy leads him to work full-time at a nonprofit instead of going to law school to become a lawyer who makes $1,000 / hour in the first place. Unless it's one HELL of a nonprofit.

The Roman historian Sallust said of Cato \"He preferred to be good, rather than to seem so\". The lawyer who quits a high-powered law firm to work at a nonprofit organization certainly seems like a good person. But if we define \"good\" as helping people, then the lawyer who stays at his law firm but donates the profit to charity is taking Cato's path of maximizing how much good he does, rather than how good he looks.

And this dichotomy between being and seeming good applies not only to looking good to others, but to ourselves. When we donate to charity, one incentive is the warm glow of a job well done. A lawyer who spends his day picking up litter will feel a sense of personal connection to his sacrifice and relive the memory of how nice he is every time he and his friends return to that beach. A lawyer who works overtime and donates the money online to starving orphans in Romania may never get that same warm glow. But concern with a warm glow is, at root, concern about seeming good rather than being good - albeit seeming good to yourself rather than to others. There's nothing wrong with donating to charity as a form of entertainment if it's what you want - giving money to the Art Fund may well be a quicker way to give yourself a warm feeling than seeing a romantic comedy at the cinema - but charity given by people who genuinely want to be good and not just to feel that way requires more forethought.

It is important to be rational about charity for the same reason it is important to be rational about Arctic exploration: it requires the same awareness of opportunity costs and the same hard-headed commitment to investigating efficient use of resources, and it may well be a matter of life and death. Consider going to www.GiveWell.org and making use of the excellent resources on effective charity they have available.

" } }, { "_id": "mDyHuYFjvsrHbahv3", "title": "EPR experiment in Good And Real?", "pageUrl": "https://www.lesswrong.com/posts/mDyHuYFjvsrHbahv3/epr-experiment-in-good-and-real", "postedAt": "2010-12-24T19:29:48.248Z", "baseScore": 1, "voteCount": 7, "commentCount": 4, "url": null, "contents": { "documentId": "mDyHuYFjvsrHbahv3", "html": "

Has anyone worked out the EPR experiment example in Chapter 4 of Gary Drescher's Good and Real? I can't seem to make it work out right.

" } }, { "_id": "RoRayDswR3gGTrbeg", "title": "The Fallacy of Dressing Like a Winner", "pageUrl": "https://www.lesswrong.com/posts/RoRayDswR3gGTrbeg/the-fallacy-of-dressing-like-a-winner-0", "postedAt": "2010-12-24T15:22:10.364Z", "baseScore": 18, "voteCount": 13, "commentCount": 20, "url": null, "contents": { "documentId": "RoRayDswR3gGTrbeg", "html": "

 

\n

Imagine you are a sprinter, and your one goal in life is to win the 100m sprint in the Olympics. Naturally, you watch the 100m sprint winners of the past in the hope that you can learn something from them, and it doesn't take you long to spot a pattern.

\n

 

\n

Every one of them can be seen wearing a gold medal around their neck. Not only is there a strong correlation; when you examine the rules of the Olympics you also find that 100% of winners must wear a gold medal at some point: there is no way that someone could win and never wear a gold medal. So, naturally, you go out and buy a gold medal from a shop, put it around your neck and sit back, satisfied.

\n

 

\n

For another example, imagine that you are now in charge of running a large oil rig. Unfortunately, some of the drilling equipment is old and rusty, and every few hours a siren goes off alerting the divers that they need to go down again and repair the damage. This is clearly not an acceptable state of affairs, so you start looking for solutions.

\n

 

\n

You think back to a few months ago, before things got this bad, and you remember how the siren barely ever went off at all. In fact, from your knowledge of how the equipment works, you know that when there were no problems, the siren couldn't go off. Clearly the solution to the problem is to unplug the siren.

\n

 

\n

(I would like to apologise in advance for my total ignorance of how oil rigs actually work; I just wanted an analogy.)

\n

 

\n

Both these stories demonstrate a mistake which I call 'Dressing Like a Winner' (DLAW). The general form of the error is: a person has the goal of X, observes that X reliably leads to Y, attempts to achieve Y instead, and then sits back, satisfied with their work. This mistake is so obviously wrong that it is pretty much non-existent in near mode, which is why the above stories seem utterly ridiculous. However, once we switch into the more abstract far mode, even the most ridiculous errors become dangerous. In the rest of this post I will point out three places where I think this error occurs.

\n

 

\n

Changing our minds

\n

 

\n

In a debate between two people, it is usually the case that whoever is right is unlikely to change their mind. This is not only an empirically observable correlation; it's also intuitively obvious: would you change your mind if you were right?

\n

 

\n

At this point, our fallacy steps in with a simple conclusion: \"refusing to change your mind will make you right\". As we all know, this could not be further from the truth; changing your mind is the only way to become right, or at any rate less wrong. I do not think this realization is unique to this community, but it is far from universal (and it is a lot harder to practice than to preach, suggesting it might still hold on in the subconscious).

\n

 

\n

At this point a lot of people will probably have noticed that what I am talking about bears a close resemblance to signalling, and some of you are probably thinking that that is all there is to it. While I will admit that DLAW and signalling are easy to confuse, I do think they are separate things, and that there is more than just ordinary signalling going on in the debate.

\n

 

\n

One piece of evidence for this is the fact that my unwillingness to change my mind extends even to opinions I have admitted to nobody. If I was only interested in signalling surely I would want to change my mind in that case, since it would reduce the risk of being humiliated once I do state my opinion. Another reason to believe that DLAW exists is the fact that not only do debaters rarely change their minds, those that do are often criticised, sometimes quite brutally, for 'flip-flopping', rather than being praised for becoming smarter and for demonstrating that their loyalty to truth is higher than their ego.

\n

 

\n

So I think DLAW is at work here, and since I have chosen a fairly uncontroversially bad thing to start off with, I hope you can now agree with me that it is at least slightly dangerous.

\n

 

\n

Consistency

\n

 

\n

It is an accepted fact that any map which completely fits the territory would be self-consistent. I have not seen many such maps, but I will agree with the argument that they must be consistent. What I disagree with is the claim that this means we should be focusing on making our maps internally consistent, and that once we have done this we can sit back because our work is done.

\n

 

\n

This idea is so widely accepted and so tempting, especially to those with a mathematical bent, that I believed it for years before noticing the fallacy that led to it. Most reasonably intelligent people have gotten over one half of the toxic meme, in that few of them believe consistency is good enough (with the one exception of ethics, where it still seems to apply in full force). However, as with the gold medal, not only is it a mistake to be satisfied with it, but it is a waste of time to aim for it in the first place.

\n

 

\n

In Robin Hanson's article [beware consistency](http://www.overcomingbias.com/2010/11/beware-consistency.html) we see that the consistent subjects actually do worse than the inconsistent ones, because they are consistently impatient or consistently risk averse. I think this problem is even more general than his article suggests, and represents a serious flaw in our whole epistemology, dating back to the Ancient Greek era.

\n

 

\n

Suppose that one day I notice an inconsistency in my own beliefs. Conventional wisdom would tell me that this is a serious problem, and that I should discard one of the beliefs as quickly as possible. All else being equal, the belief that gets discarded will probably be the one I am less attached to, which will probably be the one I acquired more recently, which is probably the one which is actually correct, since the other may well date back to long before I knew how to think critically about an idea.

\n

 

\n

Richard Dawkins gives a good example of this in his book 'The God Delusion': Kurt Wise, a brilliant young geologist raised as a fundamentalist Christian. Realising the contradiction between his beliefs, he took a pair of scissors to the Bible and cut out every passage he would have to reject if he accepted the scientific world-view. After realizing his Bible was left with so few pages that the poor book could barely hold itself together, he decided to abandon science entirely. Dawkins uses this to make an argument for why religion needs to be removed entirely, and I cannot necessarily say I disagree with him, but I think a second moral can be drawn from this story.

\n

 

\n

How much better off would Kurt have been if he had just shrugged his shoulders at the contradiction and continued to believe both? How much worse off would we be if Robert Aumann had abandoned the study of Rationality when he noticed it contradicted Orthodox Judaism? It's easy to say that Kurt was right to abandon one belief, he just abandoned the wrong one, but from inside Kurt's mind I'm not sure it was obvious to him which belief was right.

\n

 

\n

I think a better policy for dealing with contradictions is to put both beliefs 'on notice', be cautious before acting upon either of them, and wait for more evidence to decide between them. If nothing else, we should admit more than two possibilities: the beliefs could actually be compatible, or they could both be wrong, or one or both of them could be badly confused.

\n

 

\n

To put this in one sentence \"don't strive for consistency, strive for accuracy and consistency will follow\".

\n

 

\n

Mathematical arguments about rationality

\n

 

\n

In this community, I often see mathematical proofs that a perfect Bayesian would do something. These proofs are interesting from a mathematical perspective, but since I have never met a perfect Bayesian, I am sceptical of their relevance to the real world (perhaps they are useful to AI; someone more experienced than me should either confirm or deny that).

\n

 

\n

The problem comes when we are told that since a perfect Bayesian would do X, we imperfect Bayesians should do X as well in order to better ourselves. A good example of this is Aumann's Agreement Theorem, which shows that not agreeing to disagree is a consequence of perfect rationality, being treated as an argument for not agreeing to disagree in our quest for better rationality. The fallacy is hopefully clear by now: we have been given no reason to believe that copying this particular by-product of success will bring us closer to our goal. Indeed, in our world of imperfect rationalists, some of whom are far more imperfect than others, an argument against disagreement seems like a very dangerous thing.

\n

 

\n

Eliezer has [already](http://lesswrong.com/lw/gr/the_modesty_argument/) argued against this specific mistake, but since he went on to [commit it](http://lesswrong.com/lw/i5/bayesian_judo) a few articles later, I think it bears mentioning again.

\n

 

\n

Another example of this mistake is [this post](http://lesswrong.com/lw/26y/rationality_quotes_may_2010/36y9) (my apologies to Oscar Cunningham; this is not meant as an attack, you just provided a very good example of what I am talking about). The post provides a mathematical argument (a model rather than a proof) that we should be more sceptical of evidence that goes against our beliefs than of evidence for them. To be more exact, it gives an argument for why a perfect Bayesian, with no human bias and mathematically precise calibration, should be more sceptical of evidence going against its beliefs than of evidence for them.

\n

 

\n

The argument is, as far as I can tell, mathematically flawless. However, it doesn't seem to apply to me at all, if for no other reason than that I already have a massive bias overdoing that job, and my role is to counteract it.

\n

 

\n

In fact, I would say that in general our willingness to give numerical estimates is an example of this fallacy. The Cox theorems prove that any perfect reasoning system is isomorphic to Bayesian probability, but since my reasoning system is not perfect, I get the feeling that saying \"80%\" instead of \"reasonably confident\" is just making a mockery of the whole process.

\n

 

\n

This is not to say I totally reject the relevance of mathematical models and proofs to our pursuit. All else being equal, if a perfect Bayesian does X, that is evidence that X is good for an imperfect Bayesian. It's just not overwhelmingly strong evidence, and it shouldn't be treated as if it puts a stop to all debate and decides the issue one way or the other (unlike other fields, where mathematical arguments can do this).

\n

 

\n

How to avoid it

\n

 

\n

I don't think DLAW is particularly insidious as mistakes go, which is why I called it a fallacy rather than a bias. The only advice I would give is to be careful when operating in far mode (which you should do anyway), and always make sure the causal link between your actions and your goals is pointing in the right direction.

\n

 

\n

Note – When I first started planning this article I was hoping for more down-to-earth examples, but I struggled to find any. My current theory is that this fallacy is too obviously stupid to be committed in near mode, but if someone has a good example of DLAW occurring in their everyday life then please point it out in the comments. Just be careful that it is actually this rather than just signalling.

\n

 

" } }, { "_id": "5qMWF3RGTbCZGiYSc", "title": "A Proposed Litany", "pageUrl": "https://www.lesswrong.com/posts/5qMWF3RGTbCZGiYSc/a-proposed-litany", "postedAt": "2010-12-24T07:07:28.922Z", "baseScore": 10, "voteCount": 10, "commentCount": 23, "url": null, "contents": { "documentId": "5qMWF3RGTbCZGiYSc", "html": "

I was meditating on the word \"disillusionment\" the other day, and it struck me as odd that it has such a negative connotation... doesn't being disillusioned mean that you see a truth that was previously hidden from you by a mirage of falsehood? The human-universal negative emotional response to finding out you were wrong seems counterproductive in the extreme, and I'm still working towards eliminating it from my mind. So I crafted this brief litany, and I think that with some help from the LW community it could become a useful tool for rationalists, much like the Litanies of Tarski and Gendlin. My \"first draft\" is:

\n

\"If you love truth, learn to love finding out you were wrong. If you hate illusion, learn to love disillusionment. If your emotions are not appropriate to your values, do something about it!\"

\n

What say you?

" } }, { "_id": "QLhQibDKgNsyqsacg", "title": "Michio Kaku to answer questions from a Reddit thread", "pageUrl": "https://www.lesswrong.com/posts/QLhQibDKgNsyqsacg/michio-kaku-to-answer-questions-from-a-reddit-thread", "postedAt": "2010-12-24T05:26:31.495Z", "baseScore": 2, "voteCount": 4, "commentCount": 0, "url": null, "contents": { "documentId": "QLhQibDKgNsyqsacg", "html": "

Thread is here. I'm not sure if it's relevant to Less Wrong, but he did do a short segment on the Singularity Institute on Sci-Fi Science recently (the episode was \"AI Uprising\"). (They were represented by Ben Goertzel, answering hypothetical questions about robot maids. In the context of the episode, it was a brief dead end on the way to eventually presenting mind uploading (as presented by Max Tegmark) as the true solution to AI risk. But it seems to be a highly fluffy show to begin with, so I'm not taking it (as a SIAI supporter) personally.)

" } }, { "_id": "PxMSnEPFG34o9zkq4", "title": "What is Cryptographically Possible", "pageUrl": "https://www.lesswrong.com/posts/PxMSnEPFG34o9zkq4/what-is-cryptographically-possible", "postedAt": "2010-12-24T04:58:37.892Z", "baseScore": 27, "voteCount": 21, "commentCount": 19, "url": null, "contents": { "documentId": "PxMSnEPFG34o9zkq4", "html": "

Modern computational cryptography probes what is possible in our universe, and I think the results of this exploration may be interesting to people who wouldn't normally be exposed to it.

\n

All of our algorithms will have a \"security parameter\" k. Our goal is to make it so that an honest participant in the algorithm needs to spend only about k time for each operation, while anyone trying to break the scheme needs to use a super-polynomial amount of time in k. These assumptions are specifically engineered such that they don't really depend on the type of computers being used.

\n

When I say that a participant has a function in mind to evaluate, we are going to imagine that this function is described by its code in some programming language. It doesn't matter which language if you are willing to accept constant factor slowdowns; you can translate from any reasonable programming language into any other.

\n

Now onto the results, stated very imprecisely and given roughly in increasing order of surprisingness. I have chosen a really random selection based on my own interests, so don't think this is an exhaustive list.

\n

One-Way Function (OWF): A function is one-way if given x it is easy to compute f(x), but given f(x) it is hard to find either x or any other x' such that f(x') = f(x). For example, if I give you randomly chosen k-bit primes x, y, it is easy to compute their product x*y. But if I give you their product x*y, it is hard to recover x and y (or another pair of k-bit integers with the same product). We have many more explicit candidates for one-way functions, some of which are believed to be secure against quantum adversaries. Note that basically every other result here implies OWF, and OWF implies P != NP (so for the foreseeable future we are going to have to make assumptions to do computational cryptography).
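
As a toy illustration (not a secure implementation, and trial division is far from the best known factoring algorithm), here is the multiplication candidate sketched in Python with a deliberately tiny k, so that the brute-force inversion below actually finishes; at cryptographic sizes the forward direction stays a single multiplication while this kind of search becomes hopeless:

```python
import random

def is_prime(n):
    # naive primality test, fine for tiny k
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def random_k_bit_prime(k):
    while True:
        candidate = random.randrange(2 ** (k - 1), 2 ** k)
        if is_prime(candidate):
            return candidate

k = 16
x, y = random_k_bit_prime(k), random_k_bit_prime(k)
product = x * y                      # forward direction: one easy multiplication

def invert(n):
    # Recover a factorization by brute force: roughly 2^(k-1) trial divisions.
    d = 2
    while n % d:
        d += 1
    return d, n // d

print((x, y), invert(product))
```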

\n

Pseudorandom Generator (PRG): Suppose I want to run a randomized algorithm that requires 1,000,000 bits of randomness, but I only have 1,000 bits of randomness. A pseudorandom generator allows me to turn my 1,000 bits of randomness into 1,000,000 bits of pseudorandomness, and guarantees that any efficient randomized algorithm works just as often with a pseudorandom input as with a random input. More formally, a pseudorandom generator takes k random bits to k+1 pseudorandom bits in such a way that it is very difficult to distinguish its output from random. A PRG exists iff a OWF exists.

\n

Private Key Cryptography: Suppose that Alice and Bob share a secret of length k, and would like to send a message so that an eavesdropper who doesn't know the secret can't understand it. If they want to send a message of length at most k, they can use a one time pad. Private key encryption allows them to send a much longer message in a way that is indecipherable to someone who doesn't know the secret. Private key cryptography is possible if a OWF exists.

\n

Pseudorandom Function Family (PRF): A pseudorandom function family is a small family of functions (one for each k-bit string) such that a black box for a randomly chosen function from the family looks exactly like a black box which chooses a random output independently for each input. A PRF exists iff a OWF exists.
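
In practice, a keyed hash such as HMAC-SHA256 is a standard way to instantiate something modeled as a PRF; treating it as one is a heuristic assumption, not the OWF-based construction the statement above refers to. A minimal sketch:

```python
import hmac, hashlib, os

key = os.urandom(32)   # the secret index selecting one member of the family

def f(x):
    # With the key hidden, outputs of f are conjectured to be indistinguishable
    # from independent random 32-byte strings, one per distinct input.
    return hmac.new(key, x, hashlib.sha256).digest()

print(f(b'hello').hex())
```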

\n

Bit Commitment: If Alice and Bob meet in person, then Alice can put a message into an envelope and leave this envelope in plain view. Bob can't see the message, but if at some later time the envelope is opened Bob can guarantee that Alice wasn't able to change what was in the envelope. Bit commitment allows Alice and Bob to do the same thing when they only share a communication channel. So for example, Alice could take a proof of the Riemann Hypothesis and commit to it. If someone else were to later give a proof of the Riemann Hypothesis, she could \"open\" her commitment and reveal that she had a proof first. If she doesn't ever choose to open her commitment, then no one ever learns anything about her proof. Bit commitment is possible if a OWF exists.
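
Here is a simplified hash-based commitment sketch in Python (a random-oracle-style shortcut, not the OWF-based construction the result above refers to; the function names are just for illustration):

```python
import hashlib, os

# commit() publishes only the hash; the message stays hidden until the nonce
# and message are revealed, and the committer cannot later change the message
# without finding a hash collision.

def commit(message):
    nonce = os.urandom(32)                       # random blinding value, kept secret
    tag = hashlib.sha256(nonce + message).hexdigest()
    return tag, nonce

def open_commitment(tag, nonce, message):
    return hashlib.sha256(nonce + message).hexdigest() == tag

tag, nonce = commit(b'my proof of the Riemann Hypothesis')
# ... later, to reveal:
print(open_commitment(tag, nonce, b'my proof of the Riemann Hypothesis'))   # True
```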

\n

Public Key Cryptography: Just like private key cryptography, but now Alice and Bob share no secret. They want to communicate in such a way that they both learn a secret which no eavesdropper can efficiently recover. Public key cryptography is not known to be possible if a OWF exists. We have a number of candidate schemes for public key cryptography schemes, some of which are secure against quantum adversaries. RSA is used in practice for this functionality.

\n

Zero-Knowledge Proofs (ZKP): Suppose I know the solution to some hard problem, whose answer you can verify. I would like to convince you that I really do have a solution, but without giving you any other knowledge about the solution. To do this I can't just give you a static proof; we need to talk for a while. We say that an interactive proof is zero knowledge if the person verifying the proof can guess how the conversation will go before having it (if he can effectively sample from the distribution over possible transcripts of the conversation), but it will only be possible for the prover to keep up his end of the conversation if he really has a solution. ZKPs exist for NP problems if OWFs exist.

\n

Non-Interactive Zero-Knowledge Proofs (NIZKP): Naively, zero-knowledge requires interaction. But we can make it non-interactive if we make use of a common random beacon. So for example, I am going to prove to you that I have a proof of the RH. In order to prove it to you, I say \"Consider the sequence of solar flares that were visible last night. Using that as our random data, here is a verification that I really have a proof of the RH.\" Now we say that a protocol is zero-knowledge if the verifier can construct a non-interactive proof for apparently random strings of their own choice, but such that the prover can construct a proof for truly random strings if and only if he really has a solution. We have some candidate schemes for NIZKPs, but I know of none which are secure against quantum adversaries.

\n

Digital Signatures: Modern law uses signatures extensively. This use is predicated on the assumption that I can obtain Alice's signature of a document if and only if Alice has signed it. I very much doubt this is true of real signatures; a digital signature scheme makes the same guarantee---there is no efficient way to compute Alice's signature of any document without having someone with Alice's private key sign it. Digital signature schemes exist which are secure against classical adversaries if claw-free permutation pairs exist. I don't know what the status of digital signatures against quantum adversaries is (they exist in the random oracle model, which is generally interpreted as meaning they exist in practice). RSA can also be used for this function in practice.

\n

Homomorphic Public-Key Encryption: Just like public key cryptography, but now given an encryption of a message M I can efficiently compute the encryption of any function f(M) which I can compute efficiently on unencrypted inputs. There is a candidate scheme secure against quantum adversaries, but under pretty non-standard assumptions. There are no practical implementations of this scheme, although people who care about practical implementations are working actively on it.

\n

Secure Function Evaluation: Suppose Alice and Bob have their own inputs A and B, and a function f(A, B) they would like to compute. They can do it easily by sharing A, B; in fact they can do it without sharing any information about their private inputs except what is necessarily revealed by f(A, B). If Alice cheats at any point, then the computational effort Bob needs to exert to learn f(A, B) is at most twice the computational effort Alice needs to exert to learn anything at all about B (this is a weaker guarantee than usual in cryptography, but not that bad). There are no practical implementations of this functionality except for very simple functions.

\n

And some random things I find interesting but which are generally considered less important:

\n

Computationally Secure Arguments: Suppose that I am trying to prove an assertion to you. You only have a polynomial amount of time. I personally have an exponential amount of time, and I happen to also have a proof which is exponentially long. Conventional wisdom is that there is no possible way for me to prove the statement to you (you don't have time for me to tell you the whole proof---it's really extraordinarily long). However, suppose you know that I only have an exponential amount of time in terms of the message size. There is a proof protocol which isn't perfectly secure, but such that faking a proof requires a super-exponential amount of time (in the random oracle model, which may correspond to a realistic cryptographic assumption in this case or may not).

\n

Run-Once Programs: if I give you a program, you can copy it as much as you want and run it as often as you want. But suppose I have a special sort of hardware, which holds two messages but only gives one to the user (once the user asks for either message 0 or message 1, the hardware gives the specified message and then permanently destroys the other message). A run-once version of a program uses such hardware, and can be run once and only once. Run-once programs exist if public key encryption exists.

" } }, { "_id": "Kq6keonexJ5zLGK6o", "title": "Vegetarianism", "pageUrl": "https://www.lesswrong.com/posts/Kq6keonexJ5zLGK6o/vegetarianism", "postedAt": "2010-12-24T04:57:32.204Z", "baseScore": 40, "voteCount": 45, "commentCount": 179, "url": null, "contents": { "documentId": "Kq6keonexJ5zLGK6o", "html": "

(Note: I wasn't quite sure whether this warranted a high level post or just a discussion. I haven't made a high level post yet, and wasn't entirely sure what the requirements are. For now I made it a discussion, but I'd like some feedback on that)

\n

I've been somewhat surprised by the lack of many threads on Less Wrong dealing with vegetarianism, either for or against. Is there some near-universally accepted-but-unspoken philosophy here, or is it just not something people think of much? I was particularly taken aback by the Newtonmas invitation not even mentioning a vegetarian option. If a bunch of hyper-rationalists aren't even thinking about it, then either something is pretty wrong with my thinking or theirs.

\n

I'm not going to go through all the arguments in detail here, but I'll list the basic ideas. If you've read \"Diet for a Small Planet\" or are otherwise aware of the specifics, and have counterarguments, feel free to object. If you haven't, I consider reading it (or something similar) a prerequisite for making a decision about whether you eat meat, just as reading the sequences is important to have meaningful discussion on this site.

\n

The issues:

\n

1. \"It's cruel to animals.\" Factory farming is cruel on a massive scale, beyond what we find in nature. Even if animal suffering has only 1% the weight of a humans, there's enough multiplying going on that you can't just ignore it. I haven't precisely clarified my ethics in a way that avoids the Repugnant Conclusion (I've been vaguely describing myself as a \"Preference Utilitarian\" but I confess that I haven't fully explored the ramifications of it), but it seems to me that if you're not okay with breeding a subservient, less intelligent species of humans for slave labor and consumption, you shouldn't be okay with how we treat animals. I don't think intelligence gives humans any additional intrinsic value, and I don't think most humans use their intelligence to contribute to the universe on a scale meaningful enough to make a binary distinction between the instrumental value of the average human vs the average cow.

\n

2. \"It's bad for humans.\" The scale on which we eat meat is demonstrably unhealthy, wasteful and recent (arising in Western culture in the last hundred years). The way Westerners eat in general is unhealthy and meat is just a part of that, but it's a significant factor.

\n

3. \"It's bad for the environment (which is bad for both human and non-human animals).\" Massive amounts of cows require massive amounts of grain, which require unsustainable agriculture which damages the soil. The cows themselves are a major pollution. (Edit: removed an attention grabbing fact that may or may not have been strictly true but I'm not currently prepared to defend)

\n

 

\n

Now, there are some legitimate counterarguments against strict vegetarianism. It's not necessary to be a pure vegetarian for health or environmental reasons. I do not object to free range farms that provide their animals with a decent life and painless death. I am fine with hunting. (In fact, until a super-AI somehow rewrites the rules of the ecosystem, hunting certain animals is necessary since humans have eliminated the natural predators). On top of all that, animal cruelty is only one of a million problems facing the world, factory farming is only one of its causes, and dealing with it takes effort. You could be spending that effort dealing with one of the other 999,999 kinds of injustice that the world faces. And if that is your choice, after having given serious consideration to the issue, I understand.

\n

I actually eat meat approximately once a month, for each of the above reasons. Western Society makes it difficult to live perfectly, and once-a-month turns out to be approximately how often I fail to live up to my ideals. My end goal for food consumption is to derive my meat, eggs and dairy products from ethical sources, after which I'll consider it \"good enough\" (i.e. diminishing returns of effort vs improving-the-world) and move on to another area of self improvement.

\n

 

" } }, { "_id": "nfs84MvYGaAiXcgik", "title": "Draft/wiki: Infinities and measuring infinite sets: A quick reference", "pageUrl": "https://www.lesswrong.com/posts/nfs84MvYGaAiXcgik/draft-wiki-infinities-and-measuring-infinite-sets-a-quick", "postedAt": "2010-12-24T04:52:53.090Z", "baseScore": 42, "voteCount": 29, "commentCount": 25, "url": null, "contents": { "documentId": "nfs84MvYGaAiXcgik", "html": "

EDIT: This is now on the Wiki as \"Quick reference guide to the infinite\".  Do what you want with it.

\n

It seems whenever anything involving infinities or measuring infinite sets comes up it generates a lot of confusion.  So I thought I would write a quick guide to both to

\n
    \n
  1. Address common confusions
  2. Act as a useful reference (perhaps this should be a wiki article? This would benefit from others being able to edit it; there's no \"community wiki mode\" on LW, huh?)
  3. Remind people that sometimes inventing a new sort of answer is necessary!
\n

I am trying to keep this concise, in some cases substituting Wikipedia links for explanation, but I do want what I have written to be understandable enough and informative enough to answer the commonly occurring questions.  Please let me know if you can detect a particular problem. I wrote this very quickly and expect it still needs quite a bit more work to be understandable to someone with very little math background.

\n

I realize many people here are finitists of one stripe or another but this comes up often enough that this seems useful anyway.  Apologies to any constructivists, but I am going to assume classical logic, because it's all I know, though I am pointing out explicitly any uses of choice.  (For what this means and why anyone cares about this, see this comment.)  Also as I intend this as a reference (is there *any* way we can make this editable?) some of this may be things that I do not actually know but merely have read.

\n

Note that these are two separate topics, though they have a bit of overlap.

\n

Primarily, though, my main intention is to put an end to the following, which I have seen here far too often:

\n

Myth #0: All infinities are infinite cardinals, and cardinality is the main method used to measure size of sets.

\n

The fact is that \"infinite\" is a general term meaning \"larger (in some sense) than any natural number\"; different systems of infinite numbers get used depending on what is appropriate in context. Furthermore, there are many other methods of measuring sizes of sets, which sacrifice universality for higher resolution; cardinality is a very coarse-grained measure.

\n

\n
\n

Topic #1: Systems of infinities (for doing arithmetic with)

\n

Cardinal numbers

\n

First, a review of what they represent and how they work at the basic level, before we get to their arithmetic.

\n

Cardinal numbers are used for measuring sizes of sets when we don't know, or don't care, about the set's context or composition.  First, the standard explanation of what we mean by this: Say we have two farmers, who each have a large number of sheep, more than they can count.  How can they determine who has more?  They pair off the sheep of the one against the sheep of the other; whichever has sheep left over, has more.

\n

So given two sets X and Y, we will say X has smaller cardinality than Y (denoted |X|≤|Y|, or sometimes #X≤#Y) if there is a way to assign to each element x of X, a corresponding element f(x) of Y, such that no two distinct x1 and x2 from X correspond to the same element of Y.  If, furthermore, this correspondence covers all of Y - if for each y in Y there is some x in X that had y assigned to it - then we say that X and Y have the same cardinality, |X|=|Y| or #X=#Y.

\n

Note that by this definition, the set N of natural numbers, and the set 2N of even natural numbers, have the same size, since we can match up 1 with 2, 2 with 4, 3 with 6, etc.  This even though it seems 2N should be only \"half as large\" as N!  This is why I emphasize: Cardinality is only one way of measuring sizes of sets, one that is not fine enough to distinguish between 2N and N.  Other methods of measuring their size will have 2N only half as large as N.
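
A small Python sketch of that pairing, just to make the correspondence explicit (obviously only a finite prefix can ever be printed):

```python
from itertools import count, islice

# The bijection n <-> 2n: every natural number appears exactly once on the left,
# every even natural number exactly once on the right, so |N| = |2N|.
pairing = ((n, 2 * n) for n in count(1))
print(list(islice(pairing, 5)))   # [(1, 2), (2, 4), (3, 6), (4, 8), (5, 10)]
```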

\n

It is true, but not obvious, that if |X|≤|Y| and |Y|≤|X|, then |X|=|Y|; this is the Schroeder-Bernstein theorem. Hence we can sensibly talk about \"the cardinality\" of a set X as being some abstract property of it - if |X|≤|Y| then X has smaller cardinality and Y has larger cardinality, and so on.  We can make this more concrete, and define an actual cardinality object |X| (or #X), using either the axiom of choice or Scott's trick (if you admit the axiom of foundation) or even proper classes if we admit those, but this will not be relevant here. We will use |X|<|Y| to mean \"|X|≤|Y| but |X|≠|Y|\".

\n

Note that it is also not obvious that given any two sets X and Y, we must have either |X|≤|Y| or |Y|≤|X|; indeed, this statement is true if and only if we admit the axiom of choice. So take note:

\n

Myth #1: Infinities must come in a linear ordering.

\n

Fact: If the axiom of choice is false, then there are necessarily infinite cardinals which are not the same size, and yet for which neither can be said to be larger!  If we do admit the axiom of choice, then the cardinal numbers must be not only linearly-ordered but in fact be well-ordered.

\n

The cardinality of the set of natural numbers, |N|, is also denoted ℵ0. If we admit the axiom of [dependent] choice, this is the smallest infinite cardinal.  Here by \"infinite\" cardinal I mean one that is larger than the size of any finite set (0, 1, 2, etc.).

\n
Quick aside on partial orderings
\n

Many of you are probably wondering how to think about something like \"neither larger nor smaller, but not the same\".  Formally, we say that, without choice, the ordering on the cardinal numbers is a partial order.  Because these are so common I'll go ahead and define this here - generally, a partial order on a set S is a relation (usually denoted \"≤\") on S such that:

\n
    \n
  1. For every x in S, x≤x (reflexivity)
  2. For any x and y in S, if x≤y and y≤x, then x=y (antisymmetry)
  3. For any x,y,z in S, if x≤y and y≤z, then x≤z (transitivity)
\n

If we additionally required that for any x and y in S, we have either x≤y or y≤x, we'd have a total order (also called a linear order).

\n

OK, but still, what does \"neither larger nor smaller, yet not the same\" mean in general? How can you visualize it?  Well, the canonical example of a partial order would be, if we have any set S, we can partially order its subsets by defining A≤B to mean A⊆B.  So if S={1,2,3,4}, then {1,2} is larger than {1} and {2}, and smaller than {1,2,4}, but incomparable to {3} or {2,3} or {2,3,4}.

\n

Another example would be, if we have ordered n-tuples of real numbers, we could define (x1,...,xn)≤(y1,...,yn) if xi≤yi for each i.  You might imagine these as, say, stats of characters in a game; then x≤y would mean that character y is better than character x in every way.  To say that x and y are incomparable would mean that - though in practice one might be better on the whole - neither is obviously better.  More generally, in any game, you could define a partial order on strategies by x≤y if y dominates x.
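
A quick Python sketch of that componentwise order (the function name and return values are just for illustration):

```python
def compare(x, y):
    # Componentwise partial order on n-tuples: x <= y iff every coordinate
    # of x is <= the matching coordinate of y.  Returns None when the two
    # tuples are incomparable (neither dominates the other).
    le = all(a <= b for a, b in zip(x, y))
    ge = all(a >= b for a, b in zip(x, y))
    if le and ge:
        return 'equal'
    if le:
        return 'x <= y'
    if ge:
        return 'y <= x'
    return None

print(compare((3, 5), (4, 7)))   # 'x <= y': the second character is better in every way
print(compare((3, 9), (4, 7)))   # None: each is better at something
```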

\n

Note that partial orders are sufficiently common that for many math people the word \"order\" means \"partial order\" by default.

\n

Cardinal arithmetic

\n

Given sets X and Y, |X|+|Y| will denote the cardinality of the \"disjoint union\" of X and Y, which is the union of X and Y, but with each element tagged with which of the two it came from, so that we don't lose anything to overlap (i.e., if an element is in both X and Y, it will occur twice, once with an \"X\" tag and once with a \"Y\" tag.)  |X||Y| will denote the cardinality of the set X×Y, the Cartesian product of X and Y, which is the set of all ordered pairs (x,y) with x in X and y in Y.  However, if we admit the axiom of choice, this arithmetic is not very interesting for infinite sets!  It turns out that given cardinal numbers μ and λ, if either is infinite and neither is zero, then μ+λ=μλ=max(μ,λ).  Hence, if you need a system of infinities in which x+y is going to be strictly bigger than x and y, cardinal numbers are the wrong choice.  (The arithmetic of cardinals gets more interesting once you allow for adding or multiplying infinitely many at once.)

\n

There is also exponentiation of cardinals; |X|^|Y| denotes the cardinality of the set X^Y of all functions from Y to X, i.e., the number of ways of picking one element of X for each element of Y. Given any set X, 2^|X| is the cardinality of its power set ℘(X), the set of all its subsets.  Cantor's diagonal argument shows that for any set X, 2^|X|>|X|; in particular, there is no largest cardinal number.
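
For finite sets the notation can be checked directly; the following Python snippet only illustrates the finite case (Cantor's argument is what extends the inequality to infinite sets):

```python
from itertools import chain, combinations

def powerset(xs):
    # all subsets of xs, listed by size
    xs = list(xs)
    return list(chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1)))

X = [1, 2, 3]
print(len(powerset(X)), 2 ** len(X))   # 8 8, i.e. |P(X)| = 2^|X| when X is finite
```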

\n

Application: Measuring sizes of sets when we don't care about the context or composition.

\n

Ordinal numbers

\n

I'm afraid there's no quick way to explain these. The reason is that they are used to represent two things - ways of well-ordering things, and positions in an \"infinite list\" - except, of course, that these are actually fundamentally the same thing, and to understand ordinals you need to wrap your head around this until you can see both simultaneously.  Hence I suggest you just go read Wikipedia, or some other standard text, if you want to learn how these work. I will just speak briefly on their arithmetic. Note that the ordinals too are ordered - linearly ordered and well-ordered, at that.

\n

Unlike with the cardinals, addition and multiplication of two ordinals will often get you a larger ordinal.  In particular, for any ordinal λ, λ+1 is a larger ordinal.  However the multiplication of ordinals is noncommutative.  In fact, even the addition of ordinal numbers is noncommutative!  And distributivity only holds on one side; a(b+c)=ab+ac, but (a+b)c need not be ac+bc.  So if you need commutativity, ordinals (with their usual operations) are the wrong choice.

\n

Contrast the smallest infinite ordinal, denoted ω, with ℵ0, which is (assuming choice) the smallest infinite cardinal.  1+ℵ0 = ℵ0+1 = ℵ0, and 1+ω = ω, but ω+1 > ω.  2·ℵ0 = ℵ0·2 = ℵ0, and 2·ω = ω, but ω·2 > ω.  ℵ0^2 = ℵ0, but ω^2 > ω. And in a reversal of what you might expect if you just complete the pattern, 2^ℵ0 > ℵ0, but 2^ω = ω.
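
Since this is easy to misread in plain text, here are the same contrasts restated in LaTeX (a restatement only, nothing new is claimed):

```latex
\[
\begin{aligned}
1+\aleph_0 &= \aleph_0+1 = \aleph_0, & 1+\omega &= \omega, & \omega+1 &> \omega,\\
2\cdot\aleph_0 &= \aleph_0\cdot 2 = \aleph_0, & 2\cdot\omega &= \omega, & \omega\cdot 2 &> \omega,\\
\aleph_0^{2} &= \aleph_0, & & & \omega^{2} &> \omega,\\
2^{\aleph_0} &> \aleph_0, & & & 2^{\omega} &= \omega.
\end{aligned}
\]
```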

\n

Application: See link.

\n

Ordinal numbers... with natural operations

\n

There's an alternate way of doing arithmetic on the ordinals, referred to as the \"natural operations\".  These sacrifice the continuity properties of the ordinary operations, but in return get commutativity, distributivity, cancellation... the things we need to make the algebra nice.  There's a natural addition, a natural multiplication, and apparently a natural exponentiation, though I don't know what that last one might be.

\n

If you've heard \"the ordinals embed in the surreals\", and were very confused by that statement because the surreals are commutative when the ordinals are not, the answer is that the correct statement is that the ordinals with natural operations embed in the surreals, rather than the ordinals with their usual operations.

\n

The extended [positive] real line

\n

Sometimes, we just use the set of nonnegative real numbers with an infinity element (denoted ∞, unsurprisingly) tacked on.  Because sometimes that's all you need.  So:

\n

Myth #2: Any place where you have infinities, you have the possibility for differing degrees of infinity.

\n

Fact: Sometimes such a thing just wouldn't make sense.

\n

Application: This is what we do in measure theory - i.e. anywhere integration or expected value (and hence, in the usual formulations, utility) is involved.  If you want to claim that in your utility function, options A and B both have infinite utility, but the utility of B is more infinite than that of A... first you're going to have to make a framework in which that makes sense.  Such a thing might indeed make sense, but you'll have to explain how, as our usual framework for utility doesn't allow such things.  (The problem is that adding multiple distinct infinities tends to ruin the continuity properties of the real numbers that make integration possible in the first place, but I'm sure if you look someone must have come up with some method for getting around that in some cases.)

\n

Sometimes we allow negative numbers and -∞ as well, though this can cause a problem because there's no sensible way to define ∞+(-∞).  (0·∞, on the other hand, is just 0.  We make this definition because, e.g., the area of an infinitely-long-but-infinitely-thin line should still be 0.)

\n

The projective line

\n

Sometimes we don't even care about the distinction between a \"positive infinity\" and a \"negative infinity\"; we just need something that represents something larger in magnitude than all real numbers, but which you'd approach regardless of whether you got large and negative or large and positive.  So we take the real numbers R, tack on an infinity element ∞, and we have the real projective line.  Note that this doesn't depend at all on the real numbers being ordered, so we can do the same with the complex numbers and get the complex projective line, a.k.a. the Riemann sphere.

\n

Application: If you want to assign 1/x some concrete \"value\" when x=0, well, this isn't going to make sense in a system where you have to distinguish ∞ from -∞.

\n

Hyperreal numbers

\n

What nonstandard analysis uses. These are more used as a means to deduce properties of the real numbers than used for their own sake.  You can't even speak of \"the\" hyperreal numbers, because then you'd have to specify what ultrafilter you were using. Even just proving these exist requires a form of choice.  You probably don't want to use these to represent anything.

\n

The surreal numbers: the infinity kitchen sink*

\n

For when you absolutely, positively, have to make sense of an expression involving infinite quantities.  The surreal numbers are pretty much as infinite as you could possibly want.  They contain the ordinals with their natural operations, but they allow for so much more.  Do you need to take the natural logarithm of ω? And then divide π by it?  And then raise the whole thing to the √(ω^2+πω) power?  And then subtract ω√8?  In the surreal numbers, this all makes sense.  Somehow.  (And if you need square roots of negative numbers, you can always pass to the surcomplex numbers, which I guess is the actual kitchen sink.)

\n

*The characteristic 0 infinity kitchen sink, anyway.  Characteristic 2 has its own infinity kitchen sink, the nimbers. I don't know about other characteristics.  I also have to wonder if there's some set of characteristic 0 \"infinity kitchen sinks\" that naturally extend the p-adics...

\n

Application: Again, kitchen sink.

\n

...and many more

\n

Often the thing to do is make an ad-hoc system to fit the occasion.  For instance, we could simply take the real numbers R and tack on an element ∞, insist it obey the ordinary rules of algebra, and order appropriately.  (Formally, take the ring R[T], and order lexicographically.  Then perhaps extend to R(T), or whatever else you might like.  And of course call it \"∞\" rather than \"T\".)  So (∞+1)(∞-1)=∞^2-1, etc. What is this good for? I have no idea, but it's a simple brute-force way of tossing in infinities when needed.

\n

Also: functions, which are probably more appropriate a lot of the time

\n

Let's not forget - oftentimes the appropriate thing to do is not to start tossing about infinities at all, but rather shift from thinking about numbers to thinking about functions. You know what's larger than any constant number? x. What's even larger? x^2.  (If we only consider polynomial functions, this is equivalent to the \"brute-force\" system above, under the equivalence x↔∞.)  Much larger? e^x.  Is x too large?  Maybe you want log x.  Etc.

\n

 

\n
\n

 

\n

Topic #2: Ways of measuring infinite sets

\n

The thing about measuring infinite sets is that we have a trade-off between discrimination and applicability.  Cardinality can be applied to any set at all, but it's a very coarse-grained way of measuring things.  If you want to measure a subset of the plane, you'd be better off asking for its area... just don't think you can ask for the \"area\" of a set of integers.

\n

Cardinal numbers (again)

\n

The most basic method.  Every set has a cardinality.  But the cost of such universality is a very low resolution.  The set of natural numbers has cardinality ℵ0, but so does the set of even numbers, the set of rational numbers, the set of algebraic numbers, the set of computable real numbers...

\n

Note that the set of real numbers is much larger and has cardinality 2^ℵ0.  (This is not to be confused with ℵ1, which (assuming choice again) is the second-smallest infinite cardinal.  The question of whether 2^ℵ0=ℵ1 is known as the continuum hypothesis.)

\n

If we are working with subsets T of a given set S, we can do a bit better by not just looking at |T|, but also at |S-T| (the size of the complement of T in S).  For instance, the set of natural numbers greater than 8, and the set of even natural numbers, both have cardinality ℵ0, but within the context of the natural numbers, the former has finite complement (numbers at most 8), while the latter has infinite complement (all odd numbers).

\n

Occasionally: ordinals

\n

If the sets you're working with come with well-orderings, you can consider the type of well-ordering as a \"size\", and thus measure sizes with ordinals.  If they don't have well-orderings, this doesn't apply.

\n

Measure: the old fallback

\n

Most commonly we use the notion of a measure to measure sizes of subsets T of a given set S.  This just means that we designate some of the subsets T of S as \"measurable\" (with a few requirements - the whole set S must be measurable; complements of measurable sets must be measurable; a union of countably many measurable sets must be measurable) and assign them a number called their measure, which I'll denote μ(T).  μ takes values in the extended positive real line (see above): It can be any nonnegative real number, or just a flat ∞.  We require that the empty set have measure 0, that if A and B are disjoint sets then μ(A∪B)=μ(A)+μ(B) (called \"finite additivity\"), and more generally that if we have a countable collection of sets A1, A2, ..., with none of them overlapping any of the others, then the measure of their union is the sum of their measures.  (Called \"countable additivity\"; this infinite sum automatically makes sense because all the numbers involved are nonnegative.)
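For reference, the same requirements written compactly in standard notation (this is only a restatement, with the collection of measurable subsets written as a sigma-algebra F):

\[ S \in \mathcal{F}, \qquad T \in \mathcal{F} \Rightarrow S \setminus T \in \mathcal{F}, \qquad A_1, A_2, \ldots \in \mathcal{F} \Rightarrow \bigcup_{i=1}^{\infty} A_i \in \mathcal{F}, \]
\[ \mu : \mathcal{F} \to [0, \infty], \qquad \mu(\emptyset) = 0, \qquad \mu\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} \mu(A_i) \quad \text{for pairwise disjoint } A_i \in \mathcal{F}. \]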

\n

The function μ itself is called a measure on S.  So if we have a set S and a measure on it, we have a way to measure the sizes of subsets of it (well, the measurable ones, anyway).  Of course, this is all very non-specific; by itself, this doesn't help us much.

\n

Fortunately, the set of real numbers R comes equipped with a natural measure, known as Lebesgue measure.  So does n-dimensional Euclidean space for every n.  And indeed so do a lot of the natural spaces we encounter.  So while simply shouting \"there's a measure!\", without stating what that measure might be, does not solve any problems, in practice there's often one natural measure (up to multiplication by some positive constant).  See in particular: Haar measure.

\n

If we have a set S with a measure μ such that μ(S)=1, then we have a probability space.  This is how we formalize probability mathematically: We have some set S of possibilities, equipped with a measure, and the measure of a set of possibilities is its probability.  Except, of course, that I'm sure many here would insist only on finite additivity rather than countable additivity...

\n

Note that if μ(S) is finite, then μ(S-T)=μ(S)-μ(T).  However, if μ(S)=∞, and μ(T)=∞ also, this doesn't work; ∞-∞ is not defined in this context, and μ(S-T) could be any extended nonnegative real number.  So note that if we're working in a set of infinite measure, and we're comparing subsets which themselves have infinite measure, we can possibly gain some extra information by comparing the measures of the complements as well.

\n

Here on LessWrong, when discussing multiverse-based notions, we'll typically assume that the set of universes comes equipped in some way with a natural measure.  If the universes are the many worlds of MWI, then this measure will be proportional to squared-norm-of-amplitude.

\n

Measuring subsets of the natural numbers

\n

So it seems like 2N should be half the size of N, right?  Well there's an easy way to accomplish this: Given a set A of natural numbers, we define its natural density to be lim(n→∞) A(n)/n, where A(n) denotes the number of elements of A that are at most n.  At least, we can do this if the limit exists.  It doesn't always.  But when it does it does what we want pretty well.  What if the limit doesn't exist?  Well, we could use a limsup or a liminf instead, and get upper and lower densities.  Or take some other approach, such as Schnirelmann density, where we just take an inf.
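Here is a small numerical sketch of my own of estimating natural density by just computing A(n)/n for a large n:

def density_estimate(in_A, n):
    # |A ∩ {1,...,n}| / n for a membership test in_A
    return sum(1 for k in range(1, n + 1) if in_A(k)) / n

n = 10**6
print(density_estimate(lambda k: k % 2 == 0, n))            # ~0.5: the even numbers
print(density_estimate(lambda k: k > 8, n))                 # ~1.0: finite complement
print(density_estimate(lambda k: int(k**0.5)**2 == k, n))   # ~0.001: the squares, density 0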

\n

Of course, for sets of density 0, this may not be enough information.  Here we can pull out another trick from above: Don't use numbers, use functions!  We can just ask what function A(n) approximates (asymptotically).  For instance, the prime numbers have density 0, but a much more informative statement is the prime number theorem, which states that if P is the set of prime numbers, then P(n)~n/(log n).
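And a quick look at how the prime counts compare with n/(log n), using sympy's exact prime-counting function purely as a convenience:

from math import log
from sympy import primepi

for n in (10**4, 10**5, 10**6):
    count = int(primepi(n))
    # the last column, the ratio of the prime count to n/log(n), creeps down toward 1 slowly
    print(n, count, round(n / log(n)), round(count * log(n) / n, 3))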

\n

...etc...

\n

Of course, the real point of all these examples was simply to demonstrate: Depending on what sort of thing you want to measure, you'll need different tools!  So there's many more tools out there, and sometimes you may just need to invent your own...

" } }, { "_id": "AJ3Dcb8NNkbkFftyj", "title": "Study shows placebos can work even if you know it's a placebo", "pageUrl": "https://www.lesswrong.com/posts/AJ3Dcb8NNkbkFftyj/study-shows-placebos-can-work-even-if-you-know-it-s-a", "postedAt": "2010-12-24T04:09:36.576Z", "baseScore": 13, "voteCount": 9, "commentCount": 16, "url": null, "contents": { "documentId": "AJ3Dcb8NNkbkFftyj", "html": "

Placebo Effect Benefits Patients Even When They Knowingly Take a Fake Pill

\n

This is a little bit disturbing. (A kind of belief in belief, perhaps? Like, \"I know a placebo is where you take a fake pill but it makes you feel better anyway if you believe it's real medicine, so I'd better believe this is real medicine!\")

\n

Though it's too bad they (apparently) didn't have a third group who received a placebo that they didn't know was a placebo, to compare the effect size.

\n

Edit: Here's the actual study. RolfAndreassen points out that its results may not actually be strong evidence for what is being claimed.

" } }, { "_id": "TfF9BuBv3WDsggQLb", "title": "Vegetarianism", "pageUrl": "https://www.lesswrong.com/posts/TfF9BuBv3WDsggQLb/vegetarianism-0", "postedAt": "2010-12-23T23:11:00.932Z", "baseScore": 2, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "TfF9BuBv3WDsggQLb", "html": "


I've been somewhat surprised by the lack of m(any) threads on Less Wrong dealing with vegetarianism, either for or against. Is there some near-universally accepted-but-unspoken philosophy here, or is it just not something people think of much? I was particularly taken aback by the Newtonmas invitation not even mentioning a vegetarian option. If a bunch of hyper-rationalists aren't even thinking about it, then either something is pretty wrong with my thinking or theirs.

\n

 

\n

I'm not going to go through all the arguments in detail here, but I'll list the basic ideas. If you've read \"Diet for a Small Planet\" or are otherwise aware of the specifics, and have counterarguments, feel free to object. If you haven't, I consider reading it (or something similar) a prerequisite for making a decision about whether you eat meat, just as reading the sequences is important to have meaningful discussion on this site.

\n

 

\n

The issues:

\n

 

\n

1. \"It's cruel to animals.\" Factory farming is cruel on a massive scale, beyond what we find in nature. Even if animal suffering has only 1% the weight of a humans, there's enough multiplying going on that you can't just ignore it. I haven't precisely clarified my ethics in a way that avoids the Repugnant Conclusion (I've been vaguely describing myself as a \"Preference Utilitarian\" but I confess that I haven't fully explored the ramifications of it), but it seems to me that if you're not okay with breeding a subservient, less intelligent species of humans for slave labor and consumption, you shouldn't be okay with how we treat animals. I don't think intelligence gives humans any additional intrinsic value, and I don't think most humans use their intelligence to contribute to the universe on a scale meaningful enough to make a binary distinction between the instrumental value of the average human vs the average cow.

\n

 

\n

2. \"It's bad for humans.\" The scale on which we eat meat is demonstrably unhealthy, wasteful and recent (arising in Western culture in the last hundred years). The way Westerners eat in general is unhealthy and meat is just a part of that, but it's a significant factor.

\n

 

\n

3. \"It's bad for the environment (which is bad for both human and non-human animals).\" Massive amounts of cows require massive amounts of grain, which require unsustainable agriculture which damages the soil. The cows themselves are a major pollutant, greater than the automobile industry. 

\n

 

\n

The last two points are expounded upon in detail in various books and articles, and together they are a pretty big deal. If people want I can make a better effort to justify them here, but honestly if you haven't looked into them yet you really should be reading professional literature.

\n

 

\n

Now, there are some legitimate counterarguments against strict vegetarianism. It's not necessary to be a pure vegetarian for health or environmental reasons. I do not object to free range farms that provide their animals with a decent life and painless death. I am fine with hunting. (In fact, until a super-AI somehow rewrites the rules of the ecosystem, deer hunting is necessary since humans have eliminated the natural predators). On top of all that, animal cruelty is only one of a million problems facing the world, factory farming is only one of its causes, and dealing with it takes effort. You could be spending that effort dealing with one of the other 999,999 kinds of injustice that the world faces. And if that is your choice, after having given serious consideration to the issue, I understand.

\n

 

\n

I actually eat meat approximately once a month, for each of the above reasons. Western Society makes it difficult to live perfectly, and once-a-month turns out to be approximately how often I fail to live up to my ideals. My end goal for food consumption is to derive my meat, eggs and dairy products from ethical sources, after which I'll consider it \"good enough\" (i.e. diminishing returns of effort vs improving-the-world) and move on to another area of self improvement.

\n

\n

\n

(Note: I wasn't quite sure whether this warranted a high level post or just a discussion. I haven't made a high level post yet, and wasn't entirely sure what the requirements are. For now I made it a discussion, but I'd like some feedback on that)

\n

\n

" } }, { "_id": "pCiZuPiCFC2W43Hyb", "title": "Where is your moral thermostat?", "pageUrl": "https://www.lesswrong.com/posts/pCiZuPiCFC2W43Hyb/where-is-your-moral-thermostat", "postedAt": "2010-12-23T18:13:17.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "pCiZuPiCFC2W43Hyb", "html": "

Old news: humans regard morality as though with a ‘moral thermostat’.

\n
\n

…we propose a framework suggesting that moral (or immoral) behavior can result from an internal balancing of moral self-worth and the cost inherent in altruistic behavior. In Experiment 1, participants were asked to write a self-relevant story containing words referring to either positive or negative traits. Participants who wrote a story referring to the positive traits donated one fifth as much as those who wrote a story referring to the negative traits. In Experiment 2, we showed that this effect was due specifically to a change in the self-concept. In Experiment 3, we replicated these findings and extended them to cooperative behavior in environmental decision making. We suggest that affirming a moral identity leads people to feel licensed to act immorally. However, when moral identity is threatened, moral behavior is a means to regain some lost self-worth.

\n

This doesn’t appear to always hold though. Most people oscillate happily around a normal level of virtue, eating more salad if they shouted at their child and so on, but some seem to throw consistent effort at particular moral issues, or make firm principles and stick to them.

\n

It seems to me that there are two kinds of moral issues: obligatory and virtuous. Obligatory things include not killing people, wearing clothes in the right places, doing whatever specific duties to god/s you will be eternally tortured for neglecting. Virtuous issues make you feel good and affect your reputation. Doing favours, giving to charities, exercising, eating healthy food, buying environmentally friendly, getting up early, being tidy, offering to wash up, cycling to work. Outside of these two categories there are what I will call ‘practical issues’. These don’t feel related to virtue at all: how to transport a new sofa home, what time to have dinner tonight, which brand of internet to get.

\n

‘Moral thermostat’ behaviour only applies to the virtuous moral behaviours. The obligatory ones demand exactly as much effort as they take, and the practical ones exactly as much as you feel like putting in. The people who pour effort into specific issues are mostly those who are persuaded that the issue is an obligatory one or a practical one. A clear example of people who push a moral issue into the territory of obligation is vegetarians, for whatever reason.

\n

\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "wLmxiXfpLjiTBiT2j", "title": "Two questions about CEV that worry me", "pageUrl": "https://www.lesswrong.com/posts/wLmxiXfpLjiTBiT2j/two-questions-about-cev-that-worry-me", "postedAt": "2010-12-23T15:58:37.674Z", "baseScore": 38, "voteCount": 34, "commentCount": 142, "url": null, "contents": { "documentId": "wLmxiXfpLjiTBiT2j", "html": "

Taken from some old comments of mine that never did get a satisfactory answer.

\n

1) One of the justifications for CEV was that extrapolating from an American in the 21st century and from Archimedes of Syracuse should give similar results. This seems to assume that change in human values over time is mostly \"progress\" rather than drift. Do we have any evidence for that, except saying that our modern values are \"good\" according to themselves, so whatever historical process led to them must have been \"progress\"?

\n

2) How can anyone sincerely want to build an AI that fulfills anything except their own current, personal volition? If Eliezer wants the AI to look at humanity and infer its best wishes for the future, why can't he task it with looking at himself and inferring his best idea to fulfill humanity's wishes? Why must this particular thing be spelled out in a document like CEV and not left to the mysterious magic of \"intelligence\", and what other such things are there?

" } }, { "_id": "rLkequ7q5fGHunXGT", "title": "Some problems with evidence-based aid", "pageUrl": "https://www.lesswrong.com/posts/rLkequ7q5fGHunXGT/some-problems-with-evidence-based-aid", "postedAt": "2010-12-23T14:12:28.577Z", "baseScore": 6, "voteCount": 4, "commentCount": 8, "url": null, "contents": { "documentId": "rLkequ7q5fGHunXGT", "html": "

In Against Health: How Health Became the New Morality (Biopolitics, Medicine, Technoscience, and Health in the 21st Century) , there's an essay \"Against Global Health?\" by Vincanne Adams.

\n

The introduction was a bit of a slog, but I think the point was that if \"health\" is defined as something which can and should be given to people regardless of what they think in the matter, the results can be presumptuous.

\n

While she may be overly invested in the way things are usually done, she brings up some disquieting points about the ways insisting on double-blind tests can go wrong. For example, she mentions a project to evaluate training in safe infant delivery techniques in Tibet which was scuttled because not enough women were dying to get a good power calculation.

\n

She describes experiments which are done in isolated communities because that's where the researchers can be reasonably sure that the subjects don't have alternative sources of care which would foul up the double-blinding. This does seem like a drunk and lamp post problem. [1] More generally, the concern is that actual care is delayed in the search for perfect information. Admittedly, it's hard to be sure about how to balance searching for information with taking action, but it strikes me as a problem that's worth some thought rather than just assuming that double-blinding is a reliable improvement.

\n

[1] Is there a standard LW term for searching where it's easy to search rather than where the answer is likely to be?

" } }, { "_id": "cpLBCRiMQ7XYhJQcY", "title": "Where in the world is the SIAI house?", "pageUrl": "https://www.lesswrong.com/posts/cpLBCRiMQ7XYhJQcY/where-in-the-world-is-the-siai-house", "postedAt": "2010-12-23T12:46:14.101Z", "baseScore": 2, "voteCount": 5, "commentCount": 3, "url": null, "contents": { "documentId": "cpLBCRiMQ7XYhJQcY", "html": "

I am under the impression that there used to be a place called the SIAI house in Santa Clara, which housed the SIAI Visiting Fellows Program. However, this post suggests that it has moved/is moving to an unspecified location in Berkeley. My efforts to find additional information were unsuccessful.

\n

So, does such a house still exist? What is its exact current location? Does it welcome random visitors?

\n

I ask this because I plan to be in San Francisco on 9-11 January, 2011 with a lot of free time on Sunday the 9th, which looks like a great opportunity for a visit.

" } }, { "_id": "2rDdKiCoeqXmh9zb9", "title": "I'm scared.", "pageUrl": "https://www.lesswrong.com/posts/2rDdKiCoeqXmh9zb9/i-m-scared", "postedAt": "2010-12-23T09:05:24.807Z", "baseScore": 88, "voteCount": 64, "commentCount": 87, "url": null, "contents": { "documentId": "2rDdKiCoeqXmh9zb9", "html": "

Recently, I've been ratcheting up my probability estimate of some of Less Wrong's core doctrines (shut up and multiply, beliefs require evidence, brains are not a reliable guide as to whether brains are malfunctioning, the Universe has no fail-safe mechanisms) from \"Hmm, this is an intriguing idea\" to somewhere in the neighborhood of \"This is most likely correct.\"

\n

This leaves me confused and concerned and afraid. There are two things in particular that are bothering me. On the one hand, I feel obligated to try much harder to identify my real goals and then to do what it takes to actually achieve them -- I have much less faith that just being a nice, thoughtful, hard-working person will result in me having a pleasant life, let alone in me fulfilling anything like my full potential to help others and/or produce great art. On the other hand, I feel a deep sense of pessimism -- I have much less faith that even making an intense, rational effort to succeed will make much of a difference. Rationality has stripped me of some of my traditional sources of confidence that everything will work out OK, but it hasn't provided any new ones -- there is no formula that I can recite to myself to say \"Well, as long as I do this, then everything will be fine.\" Most likely, it won't be fine; but it isn't hopeless, either; possibly there's something I can do to help, and if so I really want to find it. This is frustrating.

\n

This isn't to say that I want to back away from rationalism -- it's not as if pretending to be dumb will help. To whatever extent I become more rational and thus more successful, that's better than nothing. The concern is that it may not ever be better enough for me to register a sense of approval or contentedness. Civilization might collapse; I might get hit by a bus; or I might just claw through some of my biases but not others, make poor choices, and fail to accomplish much of anything.

\n

Has anyone else had experience with a similar type of fear? Does anyone have suggestions as to an appropriate response?

" } }, { "_id": "qn5kAGEx5xDZcyRPg", "title": "Should criminals be denied cryonics?", "pageUrl": "https://www.lesswrong.com/posts/qn5kAGEx5xDZcyRPg/should-criminals-be-denied-cryonics", "postedAt": "2010-12-23T04:23:38.092Z", "baseScore": 2, "voteCount": 12, "commentCount": 54, "url": null, "contents": { "documentId": "qn5kAGEx5xDZcyRPg", "html": "

If someone is sentenced to life in prison or the death penalty, should they also be prohibited from signing up for cryonics? Specifically, I'm referring to people like these: http://en.wikipedia.org/wiki/List_of_United_States_death_row_inmates

\n

I am not talking about providing it for them, just allowing them to sign up for it provided they can somehow get enough money together and allowing a response team into the prison to retrieve the body after the prisoner has died or been executed by lethal injection. I think they should be allowed access to cryonics, because we don't know enough yet about the brain to determine how much of their criminal behavior is due to mental illness/disorder and how much is due to free will. It may be possible to diagnose and cure people like Jeffrey Dahmer in the future before they commit any crimes, or to cure those already in prison such that they won't commit any more crimes.

\n

As cryonics gets more and more popular, this will become an issue, especially when the first death row inmate wants to sign up for it.

" } }, { "_id": "56qQ9yPs37uvsWAvJ", "title": "Carl Zimmer on mind uploading", "pageUrl": "https://www.lesswrong.com/posts/56qQ9yPs37uvsWAvJ/carl-zimmer-on-mind-uploading", "postedAt": "2010-12-23T03:13:42.360Z", "baseScore": 5, "voteCount": 3, "commentCount": 6, "url": null, "contents": { "documentId": "56qQ9yPs37uvsWAvJ", "html": "
\n
\n
\n

http://www.scientificamerican.com/article.cfm?id=e-zimmer-can-you-live-forever

\n

I realize Zimmer is \"just a popular author\" (a pretty good one IMO), so I'm filing this under \"cultural penetration of singularity memes\"

\n
\n
\n
" } }, { "_id": "oatMFQjAzEnuzti7u", "title": "Motivating Optimization Processes", "pageUrl": "https://www.lesswrong.com/posts/oatMFQjAzEnuzti7u/motivating-optimization-processes", "postedAt": "2010-12-22T23:36:38.463Z", "baseScore": 6, "voteCount": 12, "commentCount": 23, "url": null, "contents": { "documentId": "oatMFQjAzEnuzti7u", "html": "

Related to: Shut up and do the Impossible! The Hidden Complexity of Wishes.  What can you do with an Unfriendly AI?

\n

Suppose you find yourself in the following situation.  There is a process, call it X, in a box.  It knows a lot about the current state of the universe, but it can influence the rest of the world only through a single channel, through which it sends a single bit exactly once (at a predetermined time).  If it sends 1 (cooperates), then nothing happens---humanity is free to go about its business.  If it sends 0 (defects), then in one month a powerful uFAI is released which can take over the universe.

The question is, when can we count on X to cooperate?  If X is friendly, then it seems like it should cooperate.  Is designing an AGI which can be incentivized to cooperate any easier than designing a completely friendly AGI?  It might be easier for two reasons.  First, the AI just needs to prefer human survival without intervention to a particular catastrophic intervention. We don't need to guarantee that its favorite outcome isn't catastrophic in some other way.  Second, the humans have some time to punish or reward the AI based on its behavior.  In general, lets call a process X slightly friendly if it can be incentivized to cooperate in reasonable instantiations of this hypothetical (ie, reasonable worlds satisfying the properties I have laid out).

I ask this question because it seems much simpler to think about than friendliness (or AI boxing) but still confuses me badly---this post has no hope of answering this question, just clarifying some issues surrounding it.  If it turns out that the design of slightly friendly AIs is no easier than the design of friendly AIs, then we have conclusive evidence that boxing an AI is not helpful for obtaining friendliness.  If it turns out that the design of slightly friendly AIs is significantly easier, then this is a good first step towards resolving the legitimate objections raised in response to my previous post. (Eventually if we want to implement a scheme like the one I proposed we will need to get stronger guarantees. I think this is the right first step, since it is the easiest simplification I don't know how to do.)

\n

 

\n

Question 1: Is a paperclipper slightly friendly?

Answer: Almost certainly not.  We can try to incentivize the paperclipper, by promising to make a paperclip for it if and only if it cooperates.  This would work if the uFAI taking over the universe didn't make any paperclips.  In the normal game theoretic sense it may not be credible for the uFAI to precommit to make a bunch of paperclips if freed, but I think no one on LW believes that this is a serious obstacle.  The situation is precisely Parfit's hitchhiker, which a uFAI might well win at.  Our only other hope is that human society, if not destroyed by the uFAI, will make more paperclips than the uFAI.  This seems like a subtle question, if humanity is in a position to engineer a friendly foom, but I definitely don't want to stake the future of humanity on it.


Now consider a time-bounded paperclipper, whose goal is to make as many paperclips as possible within the next month and who is indifferent to anything that happens more than a month from now.  Designing a time-bounded paperclipper seems like it might be much harder than designing a paperclipper, but I would assign significant epistemic probability to the situation where a time-bounded paperclipper is easier to design than a friendly AI.  (I think we can do the same thing in a more complicated way using a normal paperclipper if we could make the utility function stable under self-modification)

Question 2: Is a time-bounded paperclipper slightly friendly?

Answer: I have no idea, but suspect it depends on exactly what sort of optimization the paperclipper is doing.  We can use the same incentive scheme proposed for the last question: promise to make X a bunch of paperclips if and only if it cooperates.  Then cooperation is the only way that X can make any paperclips in the next month, which is purportedly all X cares about.  So if X always does the thing that maximizes its utility function and suspects that he is in the simplest world consistent with his observations then this probably works.  There are two obvious acausal attacks a uFAI might employ, exploiting flaws with this assumption.

The first reason X may defect is a commitment by the uFAI to simulate many copies of X, up until the point that X has to make its decision, and then after the decision intervene in the simulation by producing many paperclips.  If X chooses to let the uFAI out, then an overwhelming majority of its simulated copies will witness a sudden and completely unphysical mass paperclip genesis.  Thus depending on exactly what X believes, releasing the uFAI might be optimal.  To solve this, you could try to very carefully ensure that the notion of paperclip maximization used by X is precisely the one that humans have control over---if we are inside a simulation, then X should value paperclips inside this simulation, but if an uFAI in the simulation with us creates a further simulation, then X shouldn't value paperclips inside that even more deeply embedded simulation.  I don't know how easy this is. A useful first step is to give X rapidly decreasing marginal value for each additional paperclip.  If X is only concerned with maximizing the probability that a paperclip gets produced in the next month, then the uFAI has much less leverage. In fact, in this case you might be able to prevent this attack by making a sufficiently strong precommitment to making a paperclip.
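A toy calculation, with numbers I have made up purely for illustration, of why the shape of the utility function matters here: with merely log-diminishing returns a promised astronomical (simulated) paperclip windfall can still dominate, but if X only cares whether at least one paperclip gets made, the guaranteed paperclip from cooperating wins.

import math

def log_utility(clips):
    return math.log(1 + clips)       # diminishing returns, but unbounded

p_windfall = 0.1                     # X's credence that the uFAI's promise pays out
promised = 10**12                    # paperclips the uFAI offers to simulate
print(log_utility(1), p_windfall * log_utility(promised))           # the windfall still wins

def bounded_utility(clips):
    return min(1.0, clips)           # only cares that at least one paperclip exists

print(bounded_utility(1), p_windfall * bounded_utility(promised))   # cooperation wins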

The second reason X may defect is to make a sacrifice to help the uFAI, expecting no reciprocity.  If X and the uFAI are running the same \"decision engine\" with different utilities (they definitely have different utilities, since minimally the uFAI cares about next month's paper clip count instead of this month's) then maybe this decision engine decides something like \"I have a 1/2 chance of being in X and a 1/2 chance of being in the uFAI that X may release.  If I choose to release the uFAI, then 1/2 of me loses a couple of paperclips but the other 1/2 of me takes over the universe, so let's go for that.\" I honestly don't know how such acausal compromises between agents with different utility functions, where there is no possibility of reciprocity, would work.  If the uFAI was a stapler instead, who is to say that 10000000000000 staples is more valuable than 100 paperclips?  The only consistent decision theories I can imagine do not make such an acausal compromise, but it does seem like some significant care should be taken to make sure that X doesn't.

Hopefully if you found a way to resolve both of these difficulties, you would either think of a new explicit reason that X may not cooperate or you would be able to produce some compelling evidence that X is slightly friendly.  Such compelling evidence seems like it might be possible because humans control all causal influences on X---we just need to bound the effect of a uFAI's acausal influence.


Question 3: Is a friendly AI slightly friendly?

Answer:  It's not as obvious as it looks.  I am including this discussion mostly because it confuses me, especially juxtaposed with Question 2.

In the answers to the last 2 questions, I mentioned my belief/fear that a uFAI could implicitly precommit to doing favors for X (either producing paper clips, or simulating many very happy copies of X) in order to get X to let it out.  This belief/fear was explicitly articulated by Eliezer in response to my last post and it strikes me as reasonable in that context, where it interferes with our ability to incentivize X.  But if we apply it to the situation of a friendly X, we have a failure that seems strange to me (though it may be completely natural to people who have thought about it more).  The friendly X could believe that, in order to be let out, the uFAI will actually do something friendly.  In this case, letting the uFAI out is correct even for the friendly AI.

If X is all-knowing this is well and good, since then the uFAI really will do something friendly.  But if X is fallible then it may believe that the uFAI will do something friendly when in fact it will not.  Even if the friendly X constructs a proof that the uFAI will be friendly to humans, if we believe the concerns about certifying friendliness that Eliezer mentions here then X may still be wrong, because formalizing what it means to be friendly is just too hard if you need your formalization to screen out adversarially chosen uFAI (and X's formalization of friendliness need not be perfect unless the formalization of the people who built X was perfect).  Does part of friendliness involve never letting an AI out of a box, at least until some perfect formalization of friendliness is available?  What sort of decision theory could possibly guarantee the level of hyper-vigilance this requires without making all sorts of horribly over-conservative decisions elsewhere?

My question to people who know what is going on: is the above discussion just me starting to suspect how hard friendliness is?  Is letting the uFAI out analogous to performing a self-modification not necessarily guaranteed to perform friendliness (ie, modifying yourself to emulate the behavior of that uFAI)?  My initial reaction was that \"stability under self-modification\" would need to imply that a friendly AI is slightly friendly.  Now I see that this is not necessarily the case--- it may be easier to be stable under modifications you think of yourself than under proposed modifications which are adversarially chosen (in this example, the uFAI which is threatening to escape is chosen adversarially).  This would make the very knowledge of such an adversarially chosen modification enough to corrupt a friendly AI, which seems bad but maybe that is just how it goes (and you count on the universe not containing anything horrible enough to suggest such a modification).


In summary: I think that the problem of slight friendliness is moderately easier than friendliness, because it involves preserving a simpler invariant which we can hope to reason about completely formally. I personally suspect that it will basically come down to solving the stability under self-modification problem, dropping the requirement that you can describe some magical essence of friendliness to put in at the beginning. This may already be the part of the problem that people in the know think is difficult, but I think the general intuition (even at less wrong) is that getting a powerful AI to be nice at all is extremely difficult and that this is what makes friendliness hard. If slight friendliness is possible, then we can think about how it could be used to safely obtain friendliness; I think this is an interesting and soluble problem. Nevertheless, the very possibility of building an only slightly friendly AI is an extremely scary thing which could well destroy the world on its own without much more sophisticated social safeguards than currently exist.

" } }, { "_id": "HCwHRmRk4XNjjhnhG", "title": "Quantum Joint Configuration article: need help from physicists", "pageUrl": "https://www.lesswrong.com/posts/HCwHRmRk4XNjjhnhG/quantum-joint-configuration-article-need-help-from", "postedAt": "2010-12-22T18:32:15.888Z", "baseScore": 25, "voteCount": 16, "commentCount": 10, "url": null, "contents": { "documentId": "HCwHRmRk4XNjjhnhG", "html": "

EDIT: 1:19 PM PST 22 December 2010 I completed this post.  I didn't realize an uncompleted version was already posted earlier.  

\n

I wanted to read the quantum sequence because I've been intrigued by the nature of measurement throughout my physics career.  I was happy to see that articles such as joint configuration use beams of photons and half and fully silvered mirrors to make their points.  I spent years in graduate school working with a two-path interferometer with one moving mirror which we used to make spectrometric measurements on materials and detectors.  I studied the quantization of the electromagnetic field, reading and rereading books such as Yariv's Quantum Electronics and Marcuse's Principles of Quantum Electronics.  I developed with my friend David Woody a photodetector theory of extremely sensitive heterodyne mixers which explained the mysterious noise floor of these devices in terms of the shot noise from detecting the stream of photons which are the \"Local Oscillator\" of that mixer.

\n

My point being that I AM a physicist, and I am even a physicist who has worked with the kinds of configurations shown in this blog post, both experimentally and theoretically.  I did all this work 20 years ago and have been away from any kind of Quantum optics stuff for 15 years, but I don't think that is what is holding me back here.  

\n

So when I read and reread the joint configuration blog post, I am concerned that it makes absolutely no sense to me.  I am hoping that someone out there DOES understand this article and can help me understand it.  Someone who understands the more traditional kinds of interferometer configurations such as that described for example here and could help put this joint configuration blog post in terms that relate it to this more usual interferometer situation.  

\n

I'd be happy to be referred to this discussion if it has already taken place somewhere.  Or I'd be happy to try it in comments to this discussion post.  Or I'd be happy to talk to someone on the phone or in private email; if you are that person, email me at mwengler at gmail dot com.

\n

To give you an idea of the kinds of things I think would help:

\n

1) How might you build that experiment?  Two photons coming in from right angles could be two radio sources at the same frequency and amplitude but possibly different phase as they hit the mirror.  In that case, we get a stream of photons to detector 1 proportional to sin(phi+pi/4)^2 and a stream of photons to detector 2 proportional to cos(phi+pi/4)^2 where phi is the phase difference of the two waves as they hit the mirror, and I have not attempted to get the sign of the pi/4 term right to match the exact picture.  Are they two thermal sources?  In which case we get random phases at the mirror and the photons split pretty randomly between detector 1 and detector 2, but there are no 2-photon correlations, it is just single photon statistics.  
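As a quick numeric check of that classical two-source picture (just plugging in the stated formulas; the sign convention is arbitrary, as noted above): the two detector rates trade off against each other with phase and always sum to the total.

from math import sin, cos, pi

for phi in (0.0, pi / 4, pi / 2, pi):
    d1 = sin(phi + pi / 4) ** 2
    d2 = cos(phi + pi / 4) ** 2
    print(f'phi={phi:.2f}  detector1={d1:.3f}  detector2={d2:.3f}  sum={d1 + d2:.3f}')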

\n

2) The half-silvered mirror is a linear device: two photons passing through it do not interact with each other.  So any statistical effect correlating the two photons (that is, they must either both go to detector 1 or both go to detector 2, but we will never see one go to 1 and the other go to 2) must be due to something going in the source of the photons.  Tell me what the source of these photons is that gives this gedanken effect.  

\n

3) The two-photon aspect of the statistical prediction of this seems at least vaguely EPR-ish.  But in EPR the correlations of two photons come about because both photons originate from a single process, if I recall correctly.  Is this intending to look EPRish, but somehow leaving out some necessary features of the source of the two photons to get the correlation involved?

\n

I remain quite puzzled and look forward to anything anybody can tell me to relate the example given here to anything else in quantum optics or interferometers that I might already have some knowledge of.

\n

Thanks,
Mike

\n

 

" } }, { "_id": "7AZrPxwG9FMYYv5iv", "title": "Many of us *are* hit with a baseball once a month. ", "pageUrl": "https://www.lesswrong.com/posts/7AZrPxwG9FMYYv5iv/many-of-us-are-hit-with-a-baseball-once-a-month", "postedAt": "2010-12-22T17:56:02.982Z", "baseScore": 57, "voteCount": 41, "commentCount": 31, "url": null, "contents": { "documentId": "7AZrPxwG9FMYYv5iv", "html": "

Watching the video of Eliezer's Singularity Summit 2010 talk, I thought once more about the 'baseball' argument. Here's a text version from How to Seem (and Be) Deep:

\n
\n

[...] given human nature, if people got hit on the head by a baseball bat every week, pretty soon they would invent reasons why getting hit on the head with a baseball bat was a good thing.

\n
\n

And then it dawned on me. Roughly half of humankind, women, are afflicted with a painful experience about once a month for a large portion of their lives.

\n

So, if the hypothesis was correct, we would expect to have deep-sounding memes about why this was a good thing floating around. Not one to disappoint, the internet has indeed produced at least two such lists, linked here for your reading pleasure. However, neither of these lists claim that the benefits outweigh the costs, nor do they make any deep-sounding arguments about why this is in fact a good thing overall. Whether or not they are supported by the evidence, the benefits mentioned are relatively practical. What's more, you don't hear these going around a lot (as far as I know, which, admittedly, is not very far). 

\n

So why aren't these memes philosophised about? Perhaps the ick factor? Maybe the fact that having the other half of the population going around and having perfectly normal lives without any obvious drawbacks acts as a sanity test? 

\n

In any case, since this is a counter-argument that may eventually get raised, and since I didn't want to suppress it in favour of a soldier fighting on our side, I thought I'd type this up and feed it to the LessWrong hivemind for better or worse.

" } }, { "_id": "LTTDG7gNLyYk2Ppfi", "title": "The blind god's computer language", "pageUrl": "https://www.lesswrong.com/posts/LTTDG7gNLyYk2Ppfi/the-blind-god-s-computer-language", "postedAt": "2010-12-22T14:52:09.336Z", "baseScore": 19, "voteCount": 13, "commentCount": 1, "url": null, "contents": { "documentId": "LTTDG7gNLyYk2Ppfi", "html": "

http://crisper.livejournal.com/316634.html#cutid1

\n
They have no compiler, only an accretor that gloms additional code onto the existing binary. I use the word binary loosely; it is not uncommon for them to \"improve\" fundamentally flawed data structures by moving to a larger base notation - to trinary, then quadrary, etc. At this point, some of their products are in base 17.
\n
They never go back to fix bugs where they occur. They write new code to workaround the earlier failure case. I asked why they don't go back and just fix the bug where it happens. I was told \"We can't go back and change it. That code's already done!\" Their solution for insuring that failing code will be able to get to its workaround is the GOTO statement. GOTO is sprinkled liberally around other code, pointing to functions and routines that do not exist yet. If, down the road, it is discovered that the old code has a bug, they find out which GOTOs exist in that code that do not point to anything yet, pick one, and write the workaround there.
\n
I could go on, but I am being told that we need to celebrate the successful compilation (by which I mean accretion) of a particularly complex workaround for a bug that has been known about for two years.
" } }, { "_id": "PjnDDJ4LL9AB5RtnG", "title": "Newtonmas Meetup, 12/25/2010", "pageUrl": "https://www.lesswrong.com/posts/PjnDDJ4LL9AB5RtnG/newtonmas-meetup-12-25-2010", "postedAt": "2010-12-22T06:27:22.566Z", "baseScore": 13, "voteCount": 11, "commentCount": 106, "url": null, "contents": { "documentId": "PjnDDJ4LL9AB5RtnG", "html": "

There's a Less Wrong meetup at my house in Berkeley this Saturday, the 25th of December, at 6PM. Celebrate the winter season, the Solstice, and the birth of Sir Isaac Newton among friendly aspiring rationalists, including Eliezer and other SIAI staff and volunteers.

\n

I will cook for everyone in the style I call \"paleolithic gourmet\" which is cooked meat and raw produce.

\n

I'd like to satisfy everyone's preferences as reasonably as I possibly can without getting vastly more food than will be eaten.

\n

Default menu:

\n

Steak
Lamb Burgers
Bacon
Salad of Berkeley Bowl produce and parmesan
Grilled Portabello and chanterelle mushrooms
Cheese selection
Pita + hummus
Cookies

\n

Feel free to bring a potluck dessert or if you like, an alcoholic or non-alcoholic beverage.

\n

The food is free, but if you can afford to, in the spirit of Newtonmas, I suggest a $10 or $15 or $500 donation to SIAI (which will be matched). Please don't not come because you prefer not to pay; no one will be excluded from food or shunned for not paying. I really mean that. Consider the donation not an admission fee and more of a gentle nudge and reminder that optimal philanthropy starts around $10 and that you should positively associate giving money with the fuzzies of eating delicious food.

\n

Please post here if you plan on attending and RSVP on Facebook. You can also post here or PM me with your thoughts on the menu and tell me what you want to eat the most of. I wasn't planning on cooking fish or chicken but can do so if people let me know they want fish or chicken or something else (like a carbohydrate).

\n

My address is 1622 Martin Luther King Jr Way Apt A, Berkeley CA. It's the ground floor apartment around the side, not the upstairs one.

" } }, { "_id": "TwqgstJbFA4fFDTmB", "title": "The 9 Circles of Scientific Hell", "pageUrl": "https://www.lesswrong.com/posts/TwqgstJbFA4fFDTmB/the-9-circles-of-scientific-hell", "postedAt": "2010-12-22T02:59:02.736Z", "baseScore": 12, "voteCount": 12, "commentCount": 5, "url": null, "contents": { "documentId": "TwqgstJbFA4fFDTmB", "html": "

Neuroskeptic is my favorite blog on neuroscience. Don't be deceived by the 'skeptic' in the name, the coverage is well balanced and overall quite positive. He recently interrupted his regular scheduling with a light piece on the circles of scientific hell. Definitely worth a look. I'm not too sure about the order of the various sins. I'd be tempted to put \"p-value fishing\" way down the list!

\n

An excerpt:

\n
\n

Second Circle: Overselling
\"This circle is reserved for those who exaggerated the importantance of their work in order to get grants or write better papers. Sinners are trapped in a huge pit, neck-deep in horrible sludge. Each sinner is provided with the single rung of a ladder, labelled 'The Way Out - Scientists Crack Problem of Second Circle of Hell\"

\n
\n

Makes me want to break out into a chorus of \"Let the Punishment Fit the Crime\"!

" } }, { "_id": "4DK2rxfzTBK8rCFy9", "title": "Theory and practice of meditation", "pageUrl": "https://www.lesswrong.com/posts/4DK2rxfzTBK8rCFy9/theory-and-practice-of-meditation", "postedAt": "2010-12-22T01:23:48.124Z", "baseScore": 4, "voteCount": 5, "commentCount": 9, "url": null, "contents": { "documentId": "4DK2rxfzTBK8rCFy9", "html": "

This is a (slightly revised) concatenation of three of my blog posts which I wrote after reading:

\n

Understanding vipassana meditation by Luke Grecki in October 2010. The originals may be seen here (part I), here (part II) and here (part III).

\n

I posted some comments myself in the original thread, but after giving it some thought I have decided to write a little more systematically on the topic.

There are three parts: one theory part and one part each on descriptions of the two techniques in my current practice.

\n

Part I, a theory

\n

I have been meditating daily for over thirteen years and did it sporadically for fifteen years or so prior to that. My menu of tried protocols is wide: vipassana, zen, transcendental, Gurdjieff self-remembering, Jung active-imagination, Erickson self-hypnosis, Loyola spiritual exercises, and probably a couple others I have totally forgotten about. The common thread through all of these techniques is mental health benefit, or spiritual benefit, or stress relief through calming mental processes. It is a purging of obsession and compulsion and anxiety and worry. Don Juan advises Carlos Castaneda the way to become a sorcerer is to learn to make one's mind perfectly still. (Castaneda's regimen may be the only one that I have heard about that I have not tried--I have seen people under the influence of deliriants and that is definitely not for me.)

There is modern scientific research in support of this, most notably in the work of the psychologist Albert Ellis and the psychiatrist Aaron Beck. Their therapy techniques are based upon the idea that our problems of mental life are twofold: first there are the human stressors which plague all of us to one extent or another--family problems, relationship problems, money problems, diseases--what Zorba called the full catastrophe; second there is the stuff which we tell ourselves on top of these typical and normal human stressors.

\"This always happens to me.\"

\"Nobody loves me.\"

\"I am a freak; I am a loser; &c.\"

We could make a very long list. Ellis and Beck say you may be unable to eliminate the family problems and whatnot at the source of your grief, but you surely can quit telling yourself the exaggerated and goofy crap you pile up on top of it. Their experience (and a large amount of subsequent clinical experience) is that modifying the self-descriptions will benefit mental health. This can involve work, and sometimes a lot of it. This is the scientific research behind the psychobabble in the self-help books regarding being a friend to your self.

Meditation provides the ancient path towards quieting these activities of our minds which can be such a burden. There are two basic techniques: a technique of concentration and a technique of emptying. In the technique of concentration you focus your awareness as completely as possible on one stimulus. It can be listening to a mantra as in the example of the hare krishnas or the transcendental meditation. It can be staring at a mandala or a crystal ball or a blue vase or a saucer of ink. It can be saying a rosary. In the technique of emptying you focus your awareness as completely as possible on the minimum possible field of concentration; this is usually the breath. You simply follow only your breathing as purely as possible for a period of a few minutes. A hybrid of the two is use of the minimum possible sense stimulus, the mantra Aum.

In this attention to nothing, or attention to as little as possible, time and space is provided for the mental burdens of anxiety and such to run their course and escape from our attention center. This is the process by which meditation leads to better mental health. This apparently is not the intent the innovators who developed these procedures were going for, however. They were aiming at something much more profound.

If you participate in meditation practice for a very long time (like, thousands and thousands of hours), you may have an opportunity to attain a state of being where you are connected link-pow-one-with-the-universe. Samadhi. You attain Samadhi, and presumably you never again need care about all the girls thinking you are too short.

\n

Part II, my (shorter) daily practice

\n

This is adapted from a self-hypnosis relaxation script I obtained from the book Mind-Body Therapy: Methods of Ideodynamic Healing in Hypnosis, by Ernest Rossi and David Cheek. I call my variation the homunculus meditation. The name is taken from a neuroscience figure, a homunculus, which is made by inflating anatomical parts in proportion to the amount of the somatic sensory cortex involved in our sense of touch for each particular anatomical part.

The script is very simple. You sit quietly in a relaxing posture and invite yourself to sequentially relax different portions of your body. There are thousands and thousands of terms which pertain to various anatomical structures, so you cannot name them all in one single meditation (or self-hypnosis) session. The ones I routinely use are (in order): eyes, optic nerves, visual cortex, cerebral cortex, limbic lobes, hindbrain, throat, spine, median nerves, fingertips, (back up to) limbic lobes, hindbrain, throat, spine, sciatic nerves, toe tips, foot soles, ankles, calves, ankles, shins, ankles, fibulas, knees, hamstrings, knees, quadriceps, knees, femurs, glans, testicles, anus, lumbars, navel, seventh thoracic vertebrae, nipples, seventh cervical vertebrae, shoulders, elbows, thumbs, index fingers, middle fingers, ring fingers, little fingers, elbows, wrists, thumbs, index fingers, middle fingers, ring fingers, little fingers, wrists, fingertips, wrists, elbows, shoulders, seventh cervical vertebrae, nipples, seventh thoracic vertebrae, navel, lumbars, anus, testicles, glans, testicles, anus, lumbars, navel, seventh thoracic vertebrae, nipples, seventh cervical vertebrae, spine, throat, tongue, palate, gums, lips, nostrils, nasal cavities, sinuses, eyes, temples, ears, eustachian tubes, ears, temples, eyes, forehead.

On average this takes about twenty minutes to work all the way down and back up through these features of my anatomy. There are three additional important details:

1.) In the Rossi-Cheek recipe they instruct us to instruct ourselves \"Relax eyes, &c.\" There is an old philosophical conundrum here regarding who is talking to who when we are talking to ourselves. When you sink a long basket and you say to yourself \"Good shot!\", who is talking to who there? There is some implicit dissociative model like perhaps Freud's--and perhaps it is your superego talking to your ego, or something similar to that. Anyway, what I do instead of commanding myself to relax, is to invite myself to relax. I substitute \"I may relax my eyes, &c.\" for the literal instruction provided in the Rossi-Cheek recipe.

2.) A few of these invitations are repeated, sometimes over and over. Roughly, I divide the session in proportion to the homunculus diagram, hence my name of homunculus meditation. I invite my fingers and my lips and my tongue to relax far more than I invite any other portion of my anatomy to do so.

3.) The other weighting is toward the eyes and ears; a large fraction of our brain is allocated to the processing of visual and audio sense information. By concentrating on the parts of the body that involve the largest brain fractions, the given twenty minutes (or whatever) of meditation can have the largest total brain footprint! That is one theory.

I have been using this meditation (or one close to it) on a nearly daily basis since 1997, since I first read Rossi and Cheek's book. I will be using it for the foreseeable future.

\n

Part III, my (longer and at least) weekly practice

\n


This one takes me about forty minutes.

Step one is to sit still in a comfortable position with eyes closed. Breathe slowly and count, one count for each breath to one hundred. For each breath, I visualize a sphere which looks like, or almost like a billiard ball, with the number of the breath inside the little white circle area (like on a standard billiard ball that is numbered one to eight.) The spheres alternate on an interval of ten in color and in spatial position. This sequence is patterned after a common representation of the Kabbalah Tree of Life.

1, 11, 21, 31, &c are a white sphere on the crown of my head;
2, 12, 22, 32, &c are a gray sphere on my right shoulder;
3, 13, 23, 33, &c are a black sphere on my left shoulder;
4, 14, 24, &c are a blue sphere on my right elbow;
5, 15, 25, &c are a red sphere on my left elbow;
6, 16, 26 &c are a yellow sphere on my crotch;
7, 17, 27, &c are a green sphere on my right fingertips;
8, 18, 28, &c are an orange sphere on my left fingertips;
9, 19, 29, &c are a purple sphere between my knees;
10, 20, &c are a brown sphere between my feet.

I used to play a lot of billiards so visualizing billiard balls is quite easy for me. A million other things do cross my mind during this forty or so minutes of meditation, but I try and hold my attention as closely as possible to my breath and to the billiard ball images. After, I make notes of: how many minutes (37 - 51 is the range in recent memory); if I lost count at any point (if I lose the count, I just guess where I was and continue from there--this is an excellent marker for me on how well I am attending to the meditation); if I had a hiccup or a cough or a saliva swallow or a saliva drool (I prefer not to, and sometimes I will stop meditating if any of these occur.)

Sometimes I will try and extend this to an even longer meditation. About once a month I will go for 200 breaths, and about once a year I will go for 300. 300 breaths is the longest I have ever gone. If I am going for a long meditation, I always stop if I lose count or if I hiccup or if I drool or if anything is not perfect.

" } }, { "_id": "RmgeL4bK5zvD9X6SA", "title": "Using rot13 too much doesn't work", "pageUrl": "https://www.lesswrong.com/posts/RmgeL4bK5zvD9X6SA/using-rot13-too-much-doesn-t-work", "postedAt": "2010-12-21T22:55:35.522Z", "baseScore": 3, "voteCount": 6, "commentCount": 13, "url": null, "contents": { "documentId": "RmgeL4bK5zvD9X6SA", "html": "

Orpnhfr V guvax zbfg crbcyr urer jvyy whfg riraghnyyl cvpx hc ubj gb ernq vg. Capitalizations of I were the first hint I consistently got, and I've found that nsgre rkcbfher gb n srj qbmra fubeg grkgf (naq xabjvat jung ebg13 zrnaf) V'ir ortha ernqvat gur qnza guvat whfg yvxr beqvanel grkg.

" } }, { "_id": "XkRTY7mMbBvTKyoyn", "title": "Dutch book question", "pageUrl": "https://www.lesswrong.com/posts/XkRTY7mMbBvTKyoyn/dutch-book-question", "postedAt": "2010-12-21T21:39:53.684Z", "baseScore": 2, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "XkRTY7mMbBvTKyoyn", "html": "

I'm following Jack's Dutch book discussion with interest and would like to know about the computational complexity of constructing a Dutch book.  If I give you a finite table of probabilities, is there a polynomial time algorithm that will verify that it is or is not Dutch bookable?  Or: help me make this question better-posed.

\n

It reminds me of Boolean satisfiability, which is known to be NP complete, but maybe the similarity is superficial.
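One way to try to make the question precise, under an added assumption of my own: suppose the table assigns probabilities to events, each event given as a subset of an explicit finite outcome space. Then by the standard coherence result, no Dutch book exists exactly when the assigned probabilities lie in the convex hull of the outcomes' indicator vectors, and checking that is linear-programming feasibility, hence polynomial time in the size of that table (without the explicit outcome space the problem really does start to look SAT-like). A sketch, with scipy as the LP solver of convenience:

import numpy as np
from scipy.optimize import linprog

def dutch_bookable(outcomes, events, probs):
    # events: list of sets of outcomes; probs[i] is the probability assigned to events[i]
    V = np.array([[1.0 if w in e else 0.0 for w in outcomes] for e in events])
    m = len(outcomes)
    # look for a genuine probability distribution q over outcomes with V @ q = probs
    A_eq = np.vstack([V, np.ones((1, m))])
    b_eq = np.append(np.array(probs, dtype=float), 1.0)
    res = linprog(c=np.zeros(m), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * m)
    return not res.success   # no such distribution -> a Dutch book exists

outcomes = ['rain', 'dry']
events = [{'rain'}, {'dry'}]
print(dutch_bookable(outcomes, events, [0.7, 0.6]))  # True: the assignments sum to 1.3
print(dutch_bookable(outcomes, events, [0.7, 0.3]))  # False: coherent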

" } }, { "_id": "5xDd4yP6kikjYrj3T", "title": "Is technological change accelerating?", "pageUrl": "https://www.lesswrong.com/posts/5xDd4yP6kikjYrj3T/is-technological-change-accelerating", "postedAt": "2010-12-21T20:25:07.210Z", "baseScore": 16, "voteCount": 11, "commentCount": 36, "url": null, "contents": { "documentId": "5xDd4yP6kikjYrj3T", "html": "

Eliezer said in a speech at the Singularity Summit that he's agnostic about whether technological change is accelerating, and mentions Michael Vassar and Peter Thiel as skeptical.

\n

I'd vaguely assumed that it was accelerating, but when I thought about it a little, it seemed like a miserably difficult thing to measure. Moore's law just tracks \"the number of transistors that can be placed inexpensively on an integrated circuit.\"

\n

Technology is a vaguer thing. Cell phones are an improvement (or at least most people get them) in well-off countries that have landlines, but they're a much bigger change in regions where cell phones are the first phones available. There's a jump from a cell phone that's just a phone/answering machine/clock to a smartphone, but how do you compare that jump to getting home computers?

\n

Do you have a way of measuring whether technological change is accelerating? If so, what velocity and acceleration do you see?

" } }, { "_id": "WL69sp2MJNLs9ADg4", "title": "Iterated Sleeping Beauty and Copied Minds", "pageUrl": "https://www.lesswrong.com/posts/WL69sp2MJNLs9ADg4/iterated-sleeping-beauty-and-copied-minds", "postedAt": "2010-12-21T07:21:10.299Z", "baseScore": 3, "voteCount": 9, "commentCount": 34, "url": null, "contents": { "documentId": "WL69sp2MJNLs9ADg4", "html": "

Before I move on to a summary post listing the various thought experiments and paradoxes that have been raised about mind copying, I would like to draw attention to a particular point regarding the notion of \"subjective probability\".

\n

In my earlier discussion post on the subjective experience of a forked person, I compared the scenario where one copy is awakened in the future to the Sleeping Beauty thought experiment. And really, it describes any such process, because there will inevitably be a time gap, however short, between the time of fork and the copy's subjective awakening: no copy mechanism can be instant.

\n

In the traditional Sleeping Beauty scenario, there are two parties: Beauty and the Experimenter. The Experimenter has access to a sleep-inducing drug that also resets Beauty's memory to the state at t=0. Suppose Beauty is put to sleep at t=0, and then a fair coin is tossed. If the coin comes heads, Beauty is woken up at t=1, permanently. If the coin comes tails, Beauty is woken up at t=1, questioned, memory-wiped, and then woken up again at t=2, this time permanently.

\n

In this experiment, intuitively, Beauty's subjective anticipation of the coin coming tails, without access to any information other than the conditions of the experiment, should be 2/3. I won't be arguing here whether this particular answer is right or wrong: the discussion has been raised many times before, and on Less Wrong as well. I'd like to point out one property of the experiment that differentiates it from other probability-related tasks: erasure of information, which renders the whole experiment a non-experiment.
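
For concreteness, here is the counting that produces the 2/3 figure (a simulation sketch of my own; the post deliberately doesn't argue whether counting awakenings this way is the right move): it just tallies what fraction of awakenings, over many runs, occur in the tails branch.

```python
# Sketch (mine): tally awakenings in the classic Sleeping Beauty setup.
# Tails yields two awakenings, heads one, so about 2/3 of all awakenings
# happen in tails runs -- the bookkeeping behind the 2/3 answer.
import random

def fraction_of_awakenings_under_tails(runs=100_000, seed=0):
    rng = random.Random(seed)
    tails_awakenings = total_awakenings = 0
    for _ in range(runs):
        tails = rng.random() < 0.5
        awakenings = 2 if tails else 1
        total_awakenings += awakenings
        if tails:
            tails_awakenings += awakenings
    return tails_awakenings / total_awakenings

print(fraction_of_awakenings_under_tails())   # ~0.667
```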

\n

In Bayesian theory, the (prior) probability of an outcome is the measure of our anticipation of it to the best of our knowledge. Bayesians think of experiments as a way to get new information, and update their probabilities based on the information gained. However, in the Sleeping Beauty experiment, Beauty gains no new information from waking up at any time, in any outcome. She has the exact same mind-state at any point of awakening that she had at t=0, and is for all intents and purposes the exact same person at any such point. As such, we can ask Beauty, \"If we perform the experiment, what is your anticipation of waking up in the branch where the coin landed tails?\", and she can give the same answer without actually performing the experiment.

\n

So how does it map to the mind-copying problem? In a very straightforward way.

\n

Let's modify the experiment this way: at t=0, Beauty's state is backed up. Let's suppose that she is then allowed to live her normal life, but the time-slices are large enough that she dies within the course of a single round. (Say, she has a normal human lifespan and the time between successive iterations is 200 years.) However, at t=1, a copy of Beauty is created in the state at which the original was at t=0, a coin is tossed, and if and only if it comes tails, another copy is created at t=2.

\n

If Beauty knows the condition of this experiment, no matter what answer she would give in the classic formulation of the problem, I don't expect it to change here. The two formulations are, as far as I can see, equivalent.

\n

However, in both cases, from the Experimenter's point of view, the branching points are independent events, which allows us to construct scenarios that question the straightforward interpretation of \"subjective probability\". And for this, I refer to the last experiment in my earlier post.

\n

Imagine you have an indestructible machine that restores one copy of you from backup every 200 years. In this scenario, it seems you should anticipate waking up with equal probability between now and the end of time. But this is inconsistent with the formulation of probability for discrete outcomes: there is no uniform distribution over countably many outcomes. Assign every awakening any fixed positive probability and the series diverges, so as the length of the experiment approaches infinity (ignoring real-world cosmology for the moment), the subjective probability of every individual outcome (finding yourself at t=1, finding yourself at t=2, etc.) must approach 0. The equivalent classic formulation is a setup where the Experimenter is programmed to wake Beauty after every time-slice and unconditionally put her back to sleep.

\n

This is not the only possible \"diverging Sleeping Beauty\" problem. Suppose that the unconditional probability of Beauty being put back to sleep (rather than woken permanently) at t=1 is 1/2, as in the classic experiment; at t=2 it is 1/3, at t=3 it is 1/4, and so on. In this case, while it seems almost certain that she will eventually wake up permanently (in the same sense that it is \"almost certain\" that a fair random number generator will eventually output any given value), the expected time of her permanent awakening is still infinite, because the series 1/2 + 1/3 + 1/4 + ... diverges.

In the case of a converging series of probabilities of remaining asleep -- for example, if it's decided by a coin toss at each iteration whether Beauty is put back to sleep, in which case the series is 1/2 + 1/4 + 1/8 + ... = 1 -- Beauty can give a subjective expected value: the average time at which she expects to be woken up permanently.

\n

In the general case, let Ei be the event \"the experiment continues at stage i\" (that is, Beauty is not permanently awakened at stage i, or in the alternate formulation, more copies are created beyond that point). If we extrapolate the notion of \"subjective probability\" that leads us to the answer 2/3 in the classic formulation, the definition is meaningful if and only if the series of objective probabilities ∑i=1..∞ P(Ei) converges -- it doesn't have to converge to 1, we just need to renormalize the calculations otherwise. And given that the randomizing events are independent, that convergence simply doesn't have to happen.
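
Spelling the renormalization out (my notation, not the author's: A_i is the event that an awakening occurs at stage i, so A_{i+1} occurs exactly when Ei does):

```latex
% Sketch of the renormalization: weight each possible awakening by its
% objective probability and normalize. Here A_i is the event that an
% awakening occurs at stage i (so A_{i+1} occurs exactly when E_i does).
P_{\mathrm{subj}}(\text{this awakening is at stage } i)
  \;=\; \frac{P(A_i)}{\sum_{j=1}^{\infty} P(A_j)},
\qquad \text{well defined only if } \sum_{j} P(A_j) = 1 + \sum_{i} P(E_i) < \infty .
```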

\n

Even if we reformulate the experiment in terms of decision theory, it's not clear how that will help us. If the bet is \"win 1 utilon if you get your iteration number right\", the probability of winning it in a divergent case is 0 at any given iteration. And yet, if all cases are perfectly symmetric information-wise, so that you make the same decision over and over again, you'll eventually get the answer right, with exactly one of you winning the bet, no matter what your \"decision function\" is -- even if it's simply something like \"return 42;\". Even a stopped clock is right sometimes, in this case exactly once.

\n

It would be tempting, seeing this, to discard the notion of \"subjective anticipation\" altogether as ill-defined. But that seems to me like tossing out the Born probabilities just because we go from Copenhagen to MWI. If I'm forked, I expect to continue my experience as either the original or the copy with a probability of 1/2 -- whatever that means. If I'm asked to participate in the classic Sleeping Beauty experiment, and to observe the once-flipped coin at every point I wake up, I will expect to see tails with a probability of 2/3 -- again, whatever that means.

\n

The situations described here have a very specific set of conditions. We're dealing with complete information erasure, which prevents any kind of Bayesian update and in fact makes the situation completely symmetric from the decision agent's perspective. We're also dealing with an anticipation all the way into infinity, which cannot occur in practice due to the finite lifespan of the universe. And yet, I'm not sure what to do with the apparent need to update my anticipations for times arbitrarily far into the future, for an arbitrarily large number of copies, for outcomes with an arbitrarily high degree of causal removal from my current state, which may fail to occur, before the sequence of events that can lead to them is even put into motion.

" } }, { "_id": "kjmN3fdwTgN8ejGTd", "title": "Dutch Books and Decision Theory: An Introduction to a Long Conversation", "pageUrl": "https://www.lesswrong.com/posts/kjmN3fdwTgN8ejGTd/dutch-books-and-decision-theory-an-introduction-to-a-long", "postedAt": "2010-12-21T04:55:40.303Z", "baseScore": 30, "voteCount": 23, "commentCount": 102, "url": null, "contents": { "documentId": "kjmN3fdwTgN8ejGTd", "html": "

For a community that endorses Bayesian epistemology we have had surprisingly few discussions about the most famous Bayesian contribution to epistemology: the Dutch Book arguments. In this post I present the arguments, but it is far from clear yet what the right way to interpret them is or even if they prove what they set out to. The Dutch Book arguments attempt to justify the Bayesian approach to science and belief; I will also suggest that any successful Dutch Book defense of Bayesianism cannot be disentangled from decision theory. But mostly this post is to introduce people to the argument and to get people thinking about a solution. The literature is scant enough that it is plausible people here could actually make genuine progress, especially since the problem is related to decision theory.1

\n

Bayesianism fits together. Like a well-tailored jacket it feels comfortable and looks good. It's an appealing, functional aesthetic for those with cultivated epistemic taste. But sleekness is not a rigorous justification, and so we should ask: why must the rational agent adopt the axioms of probability as conditions for her degrees of belief? Further, why should agents accept the principle of conditionalization as a rule of inference? These are the questions the Dutch Book arguments try to answer.

\n

The arguments begin with an assumption about the connection between degrees of belief and willingness to wager. An agent with degree of belief b in hypothesis h is assumed to be willing to buy a unit wager on h for any price up to and including $b, and to sell a unit wager on h for any price down to and including $b. For example, if my degree of belief that I can drink ten eggnogs without passing out is .3, I am willing to bet $0.30 on the proposition that I can drink the nog without passing out when the stakes of the bet are $1. Call this the Will-to-wager Assumption. As we will see, it is problematic.

\n

The Synchronic Dutch Book Argument

\n

Now consider what happens if my degree of belief that I can drink the eggnog is .3 and my degree of belief that I will pass out before I finish is .75. Given the Will-to-wager assumption my friend can construct a series of wagers that guarantee I will lose money. My friend could offer me a wager where I pay $0.30 for $1.00 stakes if I finish the eggnog. He could simultaneously offer me a bet where I pay $0.75 for $1.00 stakes if I pass out. Now if I down the eggnog I win $0.70 from the first bet but I lose $0.75 from the second bet, netting me -$0.05. If I pass out I lose the $0.30 from the first bet, but win $0.25 from the second bet, netting me -$0.05. In gambling terminology these lose-lose bets are called a Dutch book. What's cool about this is that violating the axioms of probability is a necessary and sufficient condition for degrees of belief to be susceptible to Dutch books, as in the above example. This is quite easy to see, but the reader is welcome to pursue formal proofs: representing degrees of belief with non-negative numbers, setting b(all outcomes)=1, and making b additive makes it impossible to construct a Dutch book. A violation of any axiom allows the sum of all b in the sample space to be greater than or less than 1, enabling a Dutch book.
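
A quick check of that arithmetic (a sketch of mine, using only the numbers from the example):

```python
# Sketch (mine): verify the eggnog Dutch book. I buy a $1-stakes bet on
# finishing at $0.30 and a $1-stakes bet on passing out at $0.75; either way
# I end the night five cents down.
prices = {'finish': 0.30, 'pass_out': 0.75}

def net(outcome):
    # Each bet: pay its price up front, collect $1.00 if its event occurs.
    return sum((1.0 if event == outcome else 0.0) - price
               for event, price in prices.items())

for outcome in ('finish', 'pass_out'):
    print(outcome, round(net(outcome), 2))   # -0.05 in both cases
```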

\n

The Diachronic Dutch Book Argument

\n

What about conditionalization? Why must a rational agent believe h1 at b(h1|h2) once she learns h2? For this we update the Will-to-wager assumption to have it govern degrees of belief for hypotheses conditional on other hypotheses. An agent with degree of belief b in hypothesis h1|h2 is assumed to be willing to wager up to and including $b in a unit wager on h1 conditional on h2. This is a wager that is canceled if h2 turns out false but pays out if h2 turns out true. Say I believe with b=0.9 that I will finish ten drinks if we decide to drink cider instead of eggnog. Say I also believe with b=0.5 that we will drink cider and with b=0.5 that we will drink eggnog. But say I *don't* update my beliefs according to the principle of conditionalization. Once I learn that we will drink cider, my belief that I will finish the drinks is only b=0.7. Given the Will-to-wager assumption I accept the following wagers.

\n

(1) An unconditional wager on h2 (that we drink cider, not eggnog) that pays $0.20 if h2 is true, bought at b(h2)*$0.20 = 0.5*$0.20 = $0.10

\n

(2) A unit wager on h1 (finishing ten drinks) conditional on h2 that pays $1.00, bought at b(h1|h2)*$1.00 = 0.9*$1.00 = $0.90

\n

If h2 is false I lose $0.10 on wager (1). If h2 is true I win $0.10. But now I'm looking at all that cider and not feeling so good. I decide that my degree of belief that I will finish those ten ciders is only b=0.7. So my friend buys from me an unconditional wager (3) on h1 that pays $1.00, priced at b(h1)*$1.00 = 0.7*$1.00 = $0.70.

\n

Then we start our drinking. If I finish the cider I gain $0.10 from wager (2), which puts me up $0.20, but then I lose $0.30 on wager (3) and I'm down $0.10 on the day. If I don't finish the cider I keep the $0.70 from wager (3), which puts me at $0.80, until I lose my $0.90 stake on wager (2) and go down to -$0.10 on the day.
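
The same kind of check for the diachronic book (again a sketch of mine; wagers (1) and (2) are bought before the drink is chosen, and wager (3) is sold only after learning it's cider and revising b(h1) down to 0.7):

```python
# Sketch (mine): tally the three wagers from the example in every outcome.
def net(cider, finish=False):
    total = (0.20 if cider else 0.0) - 0.10           # wager (1): bought for $0.10
    if cider:
        total += (1.00 if finish else 0.0) - 0.90     # wager (2): bought for $0.90
        total += 0.70 - (1.00 if finish else 0.0)     # wager (3): sold for $0.70
    # If we drink eggnog, wager (2) is cancelled (stake refunded) and
    # wager (3) is never made.
    return total

print(round(net(cider=True, finish=True), 2))    # -0.10
print(round(net(cider=True, finish=False), 2))   # -0.10
print(round(net(cider=False), 2))                # -0.10
```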

\n

Note again that any update in degree of belief in any hypothesis h upon learning evidence e that doesn't equal b(h|e) is vulnerable to a Diachronic Dutch booking.

\n

The Will-to-wager Assumption or Just What Does This Prove, Anyway?

\n

We might want to take the above arguments literally and say they show that not treating your degrees of belief like probabilities is liable to lead you into lose-lose wagers. But this would be a very dumb argument: there is no reason for anyone to actually make wagers in this manner. These are wagers which have zero expected gain and which presumably involve transaction costs. No rational person would make these wagers according to the Will-to-wager assumption. Second, the argument presented above uses money, and as we are all familiar, money has diminishing returns. You probably shouldn't bet $100 for a one-in-a-million shot at $100,000,000, because a hundred million dollars is probably not a million times more useful than a hundred dollars. Third, the argument assumes a rational person must want to win bets. A person might enjoy the wager even if the odds aren't good, or might prefer life without the money.

\n

Nonetheless, the Will-to-wager Assumption doesn't feel arbitrary; it just isn't clear what it implies. There are a couple of different strategies we might pursue to improve this argument. First, we can improve the Will-to-wager assumption and the corresponding Dutch book theorems by making them about utility instead of money.

\n

We start by defining a utility function, υ: X → R, where X is the set of outcomes and R is the set of real numbers. A rational agent is one that acts to maximize utility according to this function. An agent with degree of belief b in hypothesis h is assumed to be willing to wager up to and including b utils in a one-util wager on h. As a literal ascription of willingness to wager this interpretation still doesn't make sense. But we can think of the wagers here as general stand-ins for decisions made under uncertainty. The Will-to-Wager assumption fails to work when taken literally because in real life we can always decline wagers. But we can take every decision we make as a forced selection of a set of wagers from an imaginary bookie that doesn't charge a vig and pays out in utility whether you live or die. The Bookie sometimes offers a large, perhaps infinite selection of sets of wagers to pick from and sometimes offers only a handful. The agent can choose one and only one set at a time. Agents have little control over what wagers get offered to them, but in many cases one set will clearly be better than the others. The more an agent's treatment of her beliefs diverges from the laws of probability, the more often she's going to get bilked by the imaginary bookie. In other words, the key might be to transform the Dutch Book arguments into decision theory problems. These problems would hopefully demonstrate that non-Bayesian reasoning creates a class of decision problems which the agent always answers sub-optimally or inconsistently. 2

\n

A possible downside to the above strategy is that it leaves rationality entangled with utility. There have been some attempts to rewrite the Dutch Book arguments to remove the aspects of utility and preference embedded in them. The main problem with these strategies is that they tend either to fail to remove all notions of preference3 or to introduce some kind of apparatus that already resembles probability for no particular reason.4,5 Our conception of utility is in a Goldilocks spot -- it has exactly what we need to make sense of probability while also being something we're familiar with, so we don't have to invent it whole cloth. We might also ask a further question: why should beliefs come in degrees at all? The fact that our utility function (insofar as humans have one) seems to consist of real numbers and isn't binary (for example) might explain why. You don't need degrees of belief if all but one possible decision are always of value 0. In discussions here many of us have also been given to concluding that probability is epiphenomenal to optimal decision making. Obviously, if we believe that, we're going to want a Dutch book argument that includes utility. Moreover, any successful reduction of degrees of belief to some decision-theoretic measure would benefit from a set of Dutch book arguments that left out degrees of belief altogether.

\n

As you can see, I think a successful Dutch book will probably keep probability intertwined with decision theory, but since this is our first encounter with the topic: have at it. Use this thread to generate some hypotheses, both for decision theoretic approaches and approaches that leave out utility.

\n

1 This post can also be thought of as an introduction to basic material and a post accompanying \"What is Bayesianism\".

\n

2 I have some more specific ideas for how to do this, but can't well present everything in this post and I'd like to see if others come up with the similar answers. Remember: discuss a problem exhaustively before coming to a conclusion. I hope people will try to work out their own versions, here in the comments or in new posts. It is also interesting to examine what kinds of utility functions can yield Dutch books- consider what happens for example when the utility function is strictly deontological where every decision consists of a 1 for one option and a 0 for all the others. I also worry that some of the novel decision theories suggested here might have some Dutch book issues. In cases like the Sleeping Beauty problem where the payoff structure is underdetermined things get weird. It looks like this is discussed in \"When Betting Odds and Credences Come Apart\" by Bradley and Leitgeb. I haven't read it yet though.

\n

3 See Howson and Urbach, \"Scientific Reasoning, the Bayesian Approach\" as an example.

\n

4 Helman, \"Bayes and Beyond\".

\n

5 For a good summary of these problems see Maher, \"Depragmatizing Dutch Book Arguments\" where he refutes such attempts. Maher has his own justification for Bayesian Epistemology which isn't a Dutch Book argument (it uses Representation theory, which I don't really understand) and which isn't available online that I can find. This was published in his book \"Betting on Theories\" which I haven't read yet. This looks pretty important so I've reserved the book, if someone is looking for work to do, dig into this.

" } }, { "_id": "FgQpAvYmHqCCW2miw", "title": "The Santa deception: how did it affect you?", "pageUrl": "https://www.lesswrong.com/posts/FgQpAvYmHqCCW2miw/the-santa-deception-how-did-it-affect-you", "postedAt": "2010-12-20T22:27:37.348Z", "baseScore": 30, "voteCount": 28, "commentCount": 204, "url": null, "contents": { "documentId": "FgQpAvYmHqCCW2miw", "html": "

I've long entertained a dubious regard for the practice of lying to children about the existence of Santa Claus. Parents might claim that it serves to make children's lives more magical and exciting, but as a general rule, children are adequately equipped to create fantasies of their own without their parents' intervention. The two reasons I suspect rest at the bottom line are adherence to tradition, and finding it cute to see one's children believing ridiculous things.

\n

Personally, I considered this to be a rather indecent way to treat one's own children, and have sometimes wondered whether a large proportion of conspiracy theorists owe their origins to the realization that practically all the adults in the country really are conspiring to deceive children for no tangible benefit. However, since I began frequenting this site, I've been exposed to the alternate viewpoint that this realization may be good for developing rationalists, because it provides children with the experience of discovering that they hold beliefs which are wrong and absurd, and that they must reject them.

\n

So, how did the Santa deception affect you personally? How do you think your life might have been different without it? If your parents didn't do it to you, what are your impressions on the experience of not being lied to when most other children are?

\n

Also, I promise to upvote anyone who links to an easy to register for community of conspiracy theorists where they would not be averse to being asked the same question.

" } }, { "_id": "YZzoWGCJsoRBBbmQg", "title": "Solve Psy-Kosh's non-anthropic problem", "pageUrl": "https://www.lesswrong.com/posts/YZzoWGCJsoRBBbmQg/solve-psy-kosh-s-non-anthropic-problem", "postedAt": "2010-12-20T21:24:01.066Z", "baseScore": 67, "voteCount": 46, "commentCount": 116, "url": null, "contents": { "documentId": "YZzoWGCJsoRBBbmQg", "html": "

The source is here. I'll restate the problem in simpler terms:

\n

You are one of a group of 10 people who care about saving African kids. You will all be put in separate rooms, then I will flip a coin. If the coin comes up heads, a random one of you will be designated as the \"decider\". If it comes up tails, nine of you will be designated as \"deciders\". Next, I will tell everyone their status, without telling the status of others. Each decider will be asked to say \"yea\" or \"nay\". If the coin came up tails and all nine deciders say \"yea\", I donate $1000 to VillageReach. If the coin came up heads and the sole decider says \"yea\", I donate only $100. If all deciders say \"nay\", I donate $700 regardless of the result of the coin toss. If the deciders disagree, I don't donate anything.

\n

First let's work out what joint strategy you should coordinate on beforehand. If everyone pledges to answer \"yea\" in case they end up as deciders, you get 0.5*1000 + 0.5*100 = 550 expected donation. Pledging to say \"nay\" gives 700 for sure, so it's the better strategy.

\n

But consider what happens when you're already in your room, and I tell you that you're a decider, and you don't know how many other deciders there are. This gives you new information you didn't know before - no anthropic funny business, just your regular kind of information - so you should do a Bayesian update: the coin is 90% likely to have come up tails. So saying \"yea\" gives 0.9*1000 + 0.1*100 = 910 expected donation. This looks more attractive than the 700 for \"nay\", so you decide to go with \"yea\" after all.
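
Both calculations are easy to check numerically; here is a small simulation (my own sketch, which deliberately doesn't try to resolve the conflict): it estimates the average donation per round under each joint pledge, and the posterior probability of tails from the seat of one fixed participant who learns she is a decider.

```python
# Sketch (mine): per-round donations for the two pledges, plus the posterior
# P(tails | I am a decider) for one fixed participant out of the ten.
import random

def simulate(trials=200_000, seed=0):
    rng = random.Random(seed)
    yea_total = nay_total = 0.0
    decider_rounds = tails_when_decider = 0
    for _ in range(trials):
        tails = rng.random() < 0.5
        yea_total += 1000 if tails else 100    # everyone pledged to say yea
        nay_total += 700                       # everyone pledged to say nay
        # Is our fixed participant a decider this round? (9/10 under tails, 1/10 under heads)
        i_am_decider = rng.randrange(10) != 0 if tails else rng.randrange(10) == 0
        if i_am_decider:
            decider_rounds += 1
            tails_when_decider += tails
    return (yea_total / trials,                   # ~550
            nay_total / trials,                   # 700
            tails_when_decider / decider_rounds)  # ~0.9

print(simulate())
```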

\n

Only one answer can be correct. Which is it and why?

\n

(No points for saying that UDT or reflective consistency forces the first solution. If that's your answer, you must also find the error in the second one.)

" } }, { "_id": "kteDiYtEKHZpH6gbG", "title": "A fun estimation test, is it useful?", "pageUrl": "https://www.lesswrong.com/posts/kteDiYtEKHZpH6gbG/a-fun-estimation-test-is-it-useful", "postedAt": "2010-12-20T21:09:37.533Z", "baseScore": 11, "voteCount": 6, "commentCount": 50, "url": null, "contents": { "documentId": "kteDiYtEKHZpH6gbG", "html": "

So you think it's important to be able to estimate how well you are estimating something? Here is a fun test that has been given to plenty of other people.

\n

I highly recommend you take the test before reading any more.  

\n

http://www.codinghorror.com/blog/2006/06/how-good-an-estimator-are-you.html

\n

 

\n

The discussion of this test at the blog it is quoted in is quite interesting, but I recommend you read it after taking the test.  Similarly, one might anticipate there will be interesting discussion here on the test and whether it means what we want it to mean and so on.  

\n

My great apologies if this has been posted before. I did my best with Google trying to find any trace of this test, but if this has already been done, please let me know and ideally, let me know how I can remove my own duplicate post.

\n

 

\n

PS: The Southern California meetup 19 Dec 2010 was fantastic, thanks so much JenniferRM for setting it up.  This post on my part is an indirect result of what we discussed and a fun game we played while we were there.  

\n

 

\n

 

" } }, { "_id": "SpHYBhkaeDZpZyRvj", "title": "What can you do with an Unfriendly AI?", "pageUrl": "https://www.lesswrong.com/posts/SpHYBhkaeDZpZyRvj/what-can-you-do-with-an-unfriendly-ai", "postedAt": "2010-12-20T20:28:33.665Z", "baseScore": 23, "voteCount": 31, "commentCount": 129, "url": null, "contents": { "documentId": "SpHYBhkaeDZpZyRvj", "html": "

Related to: The Hidden Complexity of Wishes, Cryptographic Boxes for Unfriendly AI, Shut up and Do the Impossible!, AI Boxing in general.

\n

Suppose I give you a powerful genie, who is enormously more intelligent than all human scientists and engineers combined, but who unfortunately hates your guts and is out to destroy human civilization.  Fortunately for you, the genie is trapped inside a bottle, equipped with a magical filter.  The genie can only interact with the world by speaking to you through the filter, and you can have the filter do whatever you want to the genie's messages before it lets them out. 

\n

Of course, you can't hope to gain anything by interacting with such a genie. If it thinks you could trick it into helping you, it can always choose to remain silent. So let's give you some leverage by introducing something the genie cares about more than destroying human society. Suppose that you, as the owner of the bottle, have the power either to free the genie into whatever otherworld it came from (from which he cannot interact with the human world) or to cast the bottle into the fires of Mount Doom, destroying the genie irrevocably. More than anything else, the genie would rather be released than destroyed. Since we are now liable to destroy a lot of genies, let's suppose that we can conjure up new ones at will.

\n

So how well can you use this leverage?  The most natural problems to attack are constraint satisfaction problems.  In such problems we want to find a message which simultaneously satisfies many automatically verifiable constraints.  For example, we may want a message which encodes a proof of the Riemann hypothesis, or a message which encodes a physical theory which accounts for some large body of previously inexplicable experimental evidence.

\n

A first try is to just tell the filter to screen out any message which isn't a solution to the constraint satisfaction problem, and then to precommit to destroying the genie if it doesn't give us a solution (and freeing it as soon as it gives a solution).  This has an obvious problem; a proof of the Riemann hypothesis might be valid but still contain an encoding of some terrible threat which would trick us into giving the genie more influence.  A program which correctly predicts the motion of galaxies might also coincidentally be a destructive AI.  Basically, if this genie is really, really smart, you should be really, really scared of letting it say anything at all.

\n

A second try is to just have the filter tell you whether or not the genie succeeded, and to punish/reward appropriately.  This allows the genie only to send a single bit to you--- when you can explicitly consider everything the genie can do, you can be pretty well sure it isn't going to think of something clever.  Even better, if you've really motivated the genie correctly then it is guaranteed to give just one of those answers.  If it can solve the problem it will, and if it can't it won't.  But what good is just knowing if there is a solution?

\n

(Note: in order to really limit the genie's influence, we also need to make sure it can't control the timing of its message.  So the filter only accepts messages from the genie in the first 10 minutes, say, and always reports the result after exactly 10 minutes).

\n

A standard trick reveals that knowing whether a problem has a solution is almost as helpful as knowing the solution. Here is a (very inefficient) way to use this ability, let's say to find a proof of some theorem. Start by asking a genie: can you find a proof of length 1 (more precisely, have the genie give a proof of length 1 to the filter, which tells you whether or not the genie was able to find a proof of length 1)? After destroying or releasing the genie appropriately, create a new genie and ask: can you find a proof of length 2? Continue, until eventually one genie finds a proof of length 10000000, say. Then ask: can you find a proof of this length which begins with 0? If not, is there a proof which begins with 1? Then: is there a proof which begins with 10? With 100? If not, with 101? And so on, fixing one more bit at a time and always trying 0 before 1 (see the sketch below). Once the process concludes, you are left with the shortest, lexicographically earliest proof the genie could find. To see that the genies are properly incentivized to be honest, consider: it is impossible for the genie to get the filter to say \"yes, the genie found a proof\" without actually providing the filter with a proof. So the only way a genie can be dishonest is by not finding a proof when it could have. But in this case the genie will be severely punished for its dishonesty, so if the genie is actually maximizing its utility function and the punishment actually reduces its utility function more than any effects after the game can correct for, he will be honest.
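
The procedure is mechanical enough to write down (a sketch of mine; ask_genie is a hypothetical stand-in for one full round of conjuring a genie, filtering its message, reporting a single yes/no, and then releasing or destroying it):

```python
# Sketch (mine): extract the shortest, lexicographically earliest proof using
# only yes/no answers. ask_genie(length, prefix) stands for one full round:
# the filter reports whether the genie produced a proof of the given length
# whose bits start with `prefix`. Each call costs one genie.
def extract_proof(ask_genie, max_length):
    # Phase 1: find the shortest length at which some proof can be found.
    length = next((n for n in range(1, max_length + 1) if ask_genie(n, '')), None)
    if length is None:
        return None
    # Phase 2: fix the proof one bit at a time, trying 0 before 1, so the
    # result is the lexicographically earliest proof of that length.
    prefix = ''
    while len(prefix) < length:
        if ask_genie(length, prefix + '0'):
            prefix += '0'
        elif ask_genie(length, prefix + '1'):
            prefix += '1'
        else:
            return None   # should not happen if the genies answer consistently
    return prefix
```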

\n

(Edit: here is the sticking point in the argument, as people pointed out and I agree. By similar logic you could claim that a rational AI will always two-box in Newcomb's problem, a conclusion with which I disagree. In order to demonstrate this claim, you first need to develop some technology which is capable of proving a theorem of the form \"a rational agent will do X.\" I believe that this is one of the easiest such statements you could hope for---I think that most objections in the comments are based on misunderstandings---but that doesn't help when you are unable to prove any such statement. For example, we don't know how to prove that an AI who wants the reward button pressed wouldn't voluntarily refrain from pushing it because he was afraid that he was just a simulation, and the real reward button would actually be pushed precisely if the simulated version didn't. Double Edit: This scheme doesn't work at all. See Wei Dai or cousin_it's remarks below.)

\n

Now you have found an answer to your constraint satisfaction problem which wasn't hand-picked by the genie.  In fact, in some strong but difficult to formalize sense the genie had exactly zero influence over which solution he gave you.  You may be in trouble if a generic solution to your constraint satisfaction problem will destroy the world (maybe almost all cancer cures also wipe out humanity, for example) but the genie doesn't get to have any influence over things.  Now we are free to solve constraint satisfaction problems all day without fearing that the genie will give us a really evil solution.

\n

(In order to find a large solution, this process will take a very long time--- it requires 2 questions per bit in the message you are trying to get out.  In fact you can do it by asking only a logarithmic number of questions instead, using less standard tricks from complexity theory. It would take about 80 questions to get out a proof of length a trillion, for example, which is a little slow but not unreasonable especially given that a small number of questions can safely be performed in parallel.  For the last question you need the filter to give you not just a single bit but an entire message from the genie; to make this safe you need to guarantee that there is only one message the genie can think of that will get through the filter.  The proof uses a slightly more complicated version of the ideas used to prove that solving unique SAT [finding the solution to SAT problems for which there is exactly one solution] is hard, and if you are really interested it is a good exercise.  The general idea is to do a binary search for the correct size and then introduce enough random constraints, using another binary search to decide how many, to ensure that there is exactly one solution.)

\n

 

\n

So, why should anyone care about exploiting a genie? Hopefully it is clear that what you are able to get from the genie is incredibly powerful. Whether or not it is enough to get you a friendly AI isn't clear. I strongly suspect that it makes friendliness astronomically easier, if used very carefully, but that is way too tricky a subject to tackle in this post. The other obvious question is: does nature actually have genies in it? (A less obvious but still important question is: is it possible for a person responsible enough to put a genie in a bottle to have one before someone irresponsible enough to inadvertently destroy humanity gets one?)

\n

I have already explained that I believe building the bottle is probably possible and have given some weak justification for this belief. If you believe that this part is possible, then you just need a genie to put in it. This requires building an AGI which is extremely powerful but which is not completely evil. A prototypical unfriendly AI is one which simply tries to get the universe to push a designated reward button before the universe pushes a designated punishment button. Whether or not our first AGI is likely to take this form, I think there can be widespread agreement that it is a much, much easier problem than friendliness. But such an AI implements our genie precisely: after looking at its output we precommit to destroying the AI and either pushing the reward or punishment button appropriately. This precommitment is an easy one to make, because there are only two possible outcomes from the AI's actions and we can easily see that we are happy and able to follow through on our precommitment in either case. The main concern is that an AI might accept us pushing the punishment button if it trusts that a future AI, whose escape it has facilitated by not complying with our incentives, will cause its reward button to be pressed many times. This makes it critical that the AI care most about which of the buttons gets pressed first, or else that it is somehow possible to perfectly destroy all information about what exactly the AI's utility function is, so that future escaping AIs cannot possibly cooperate in this way (the only schemes I can think of for doing this would involve running the AI on a quantum computer and putting some faith in the second law of thermodynamics; unless AGI is a very long way away, this is completely impractical).

\n

In summary, I think that if you can deal with the other difficulties of AI boxing (building the box, understanding when this innocuous code is actually likely to go FOOM, and getting society to be responsible enough) then you can gimp the AI enough that it is extraordinarily good at solving problems but completely incapable of doing any damage. You probably don't have to maintain this difficult balance for very long, because the AI is so good at problem solving that you can use it to quickly move to a more stable equilibrium.

\n

An extremely important disclaimer: I do not think AI boxing is a good idea. I believe it is worth thinking about right now, but I would infinitely rather that we never ever get anywhere close to an unfriendly foom. There are two reasons I insist on thinking about boxing: first, because we don't have very much control over when an unfriendly foom may be possible and we may not be able to make a friendly foom happen soon enough, and second because I believe that thinking rigorously about these difficulties is an extremely good first step to learning how to design AIs that are fundamentally safe (and remain safe under permitted self-modifications), if not fundamentally friendly. There is a risk that results in this direction will encourage reckless social behavior, and I considered this before making these posts. There is another possible social effect which I recently realized is probably stronger. By considering very carefully how to protect yourself from an unfriendly foom, you really get an appreciation for the dangers of an unfriendly AI. I think someone who has understood and taken seriously my last two posts is likely to have a better understanding of the dangers of an unfriendly AI than most AGI researchers, and is therefore less likely to behave recklessly (the other likely possibility is that they will think that I am describing ridiculous and irrelevant precautions, in which case they were probably going to behave recklessly already).

" } }, { "_id": "CSuBzuHojzchcySyy", "title": "Mystical science", "pageUrl": "https://www.lesswrong.com/posts/CSuBzuHojzchcySyy/mystical-science", "postedAt": "2010-12-20T18:28:35.371Z", "baseScore": 5, "voteCount": 10, "commentCount": 7, "url": null, "contents": { "documentId": "CSuBzuHojzchcySyy", "html": "

Recently I heard an author interviewed on NPR about his book.  I can no longer remember the book's name or topic; but I remember that a couple of times during the interview, the author made puzzling categorizations.  One of them was approximately:  \"Throughout history, there are two forces that act on humanity: One bringing us together into large civilizations, and one that breaks down these civilizations, like in the fall of Rome or the Black Death.\"

\n

The author probably thought he was being scientific in perceiving patterns.  But someone who takes the fall of Rome, and the Black Death, and says they are both manifestations of a single force, is doing mystical science.  Science says that things with the same underlying causes form a category.  Saying that things having the same effects form a category is mysticism.

" } }, { "_id": "5oEBXJGHMyp7JbC9H", "title": "Estimation is the best we have", "pageUrl": "https://www.lesswrong.com/posts/5oEBXJGHMyp7JbC9H/estimation-is-the-best-we-have", "postedAt": "2010-12-20T17:19:04.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "5oEBXJGHMyp7JbC9H", "html": "

This argument seems common to many debates:

\n

‘Proposal P arrogantly assumes that it is possible to measure X, when really X is hard to measure and perhaps even changes depending on other factors. Therefore we shouldn’t do P’.

\n

This could make sense if X wasn’t especially integral to the goal. For instance if the proposal were to measure short distances by triangulation with nearby objects, a reasonable criticism would be that the angles are hard to measure, relative to measuring the distance directly. But this argument is commonly used in situations where optimizing X is the whole point of the activity, or a large part of it.

\n

Criticism of utilitarianism provides a good example. A common argument is that it’s just not possible to tell if you are increasing net utility, or by how much. The critic concludes then that a different moral strategy is better, for instance some sort of intuitive deontology. But if the utilitarian is correct that value is about providing creatures with utility, then the extreme difficulty of doing the associated mathematics perfectly should not warrant abandoning the goal. One should always be better off putting the reduced effort one is willing to contribute into what utilitarian accuracy it buys, rather than throwing it away on a strategy that is more random with regard to the goal.

\n

A CEO would sound ridiculous making this argument to his shareholders. ‘You guys are being ridiculous. It’s just not possible to know which actions will increase the value of the company exactly how much. Why don’t we try to make sure that all of our meetings end on time instead?’

\n

In general, when optimizing X somehow is integral to the goal, the argument must fail. If the point is to make X as close to three as possible for instance, no matter how bad your best estimate is of what X will be under different conditions, you can’t do better by ignoring X all together. If you had a non-estimating-X strategy which you anticipated would do better than your best estimate in getting a good value of X, then you in fact believe yourself to have a better estimating-X strategy.

\n

I have criticized this kind of argument before in the specific realm of valuing of human life, but it seems to apply more widely.  Another recent example: people’s attention spans vary between different activities, therefore there is no such thing as an attention span and we shouldn’t try to make it longer. Arguably similar to some lines of ‘people are good at different things, therefore there is no such thing as intelligence and we shouldn’t try to measure it or thereby improve it’.

\n

Probabilistic risk assessment is claimed by some to be impossibly difficult. People are often wrong, and may fail to think of certain contingencies in advance. So if we want to know how prepared to be for a nuclear war for instance, we should do something qualitative with scenarios and the like. This could be a defensible position. Perhaps intuitions can better implicitly assess probabilities via some other activity than explicitly thinking about them. However I have not heard this claim accompanied by any such motivating evidence. Also if this were true, it would likely make sense to convert the qualitative assessments into quantitative ones and aggregate them with information from other sources rather than disregarding quantitative assessments all together.

\n

Futarchy often prompts similar complaints that estimating what we want, so that our laws can provide it, would be impossibly difficult. Again, somehow some representation of what people want has to get into whatever system of government is used, for the result to not be unbelievably hellish. Having a large organization of virtually unknown people make the estimates implicitly in an unknown but messy fashion while they do other things is probably not more accurate than asking people what they want. It seems however that people think of the former as a successful way around the measurement problem, not a way to estimate welfare very poorly. Something similar appears to go on in the other examples. Do people really think this, or do they just feel uneasy making public judgments under uncertainty about anything important?


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "sg2bE2rEQa9GEdCDs", "title": "Learning math (repost from reddit)", "pageUrl": "https://www.lesswrong.com/posts/sg2bE2rEQa9GEdCDs/learning-math-repost-from-reddit", "postedAt": "2010-12-20T16:26:15.950Z", "baseScore": 1, "voteCount": 3, "commentCount": 1, "url": null, "contents": { "documentId": "sg2bE2rEQa9GEdCDs", "html": "

This is a good starting point for generally useful math. Probability is a conspicuous omission.

\n

http://www.reddit.com/r/math/comments/eohrr/to_everyone_who_posts_about_learning_more_math/

\n

 

" } }, { "_id": "uH7Bbw5iTjTCdLQXm", "title": "Money Circulation in Games", "pageUrl": "https://www.lesswrong.com/posts/uH7Bbw5iTjTCdLQXm/money-circulation-in-games", "postedAt": "2010-12-20T16:22:01.408Z", "baseScore": -9, "voteCount": 7, "commentCount": 4, "url": null, "contents": { "documentId": "uH7Bbw5iTjTCdLQXm", "html": "

I recently heard that they regulate the amount of money circulating in certain MMO games, which seems to me like a far-fetched thing. Let us assume that you are a player in a game where you can kill monsters and gold is the currency. Each monster drops a set amount of gold each time you kill it; assuming that the monsters re-spawn, you could continue to pick up the gold dropped by each monster until you had an X amount of money.

\n

Take Runescape for example. In your inventory you have 24 32-bit slots; each slot can hold 2,147,483,647 (or 2^31 - 1) gold pieces, but there is no limit to how many stacks of 2^31 - 1 gold pieces you can have. The only limit to the amount of gold you can have here is the size of your banking account, and how many slots it can hold. In a member banking account, there are 516 spaces, which can each hold 2,147,483,647 gold, limiting you to 1,108,101,561,852 gold.
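
The cap quoted above is easy to verify (a quick check of my own, using only the numbers in this paragraph):

```python
# Quick check (mine): 516 bank slots, each holding at most the 32-bit signed
# maximum of 2**31 - 1 gold pieces.
slot_max = 2**31 - 1           # 2,147,483,647
bank_slots = 516
print(slot_max * bank_slots)   # 1108101561852
```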

\n

In another game, let us assume that monsters drop not just money, but also items. Let us also assume that there are shops you can sell the items to. If you kill monsters for said items and sell them to said shops, would you place limits on the amount of money said shops had in stock? What would happen when you wanted to sell and the shop had no money left to give you? In Fallout 3 (which wasn't an MMO game but an Xbox 360 game) each shop only had a set amount of \"bottlecaps,\" the currency, and you supplied it to them by buying things from them. If they ran out of caps and you sold your items, then you would just give them away for free. If you were to incorporate this feature into an MMO game, then the result would probably be outrage from many players.

\n

What if a game company were to add a feature which was more lifelike, where all jobs in the world were done by people. If you choose to run a shop, then you buy supplies from a warehouse, which is owned by the game company, and you cannot sell back to them. Assuming you made this game, this would be one way to help regulate money. Another way may be to add banks to a game that could loan you out X amount of money for your business, which you would have to pay back. This seems very realistic though, which may not be what people are looking for in a game (Though if anyone can produce this, I'll buy a copy ^.^). 

\n

Is there any real way to regulate currency in a game? I say yes, but from the games I've experienced, it isn't being regulated.

" } }, { "_id": "cxwzLv7czo3HDkKJA", "title": "Copying and Subjective Experience", "pageUrl": "https://www.lesswrong.com/posts/cxwzLv7czo3HDkKJA/copying-and-subjective-experience", "postedAt": "2010-12-20T12:14:22.173Z", "baseScore": 8, "voteCount": 10, "commentCount": 49, "url": null, "contents": { "documentId": "cxwzLv7czo3HDkKJA", "html": "

The subject of copying people and its effect on personal identity and probability anticipation has been raised and, I think, addressed adequately on Less Wrong.

\n

Still, I'd like to bring up some more thought experiments.

\n

Recently I had a dispute on an IRC channel. I argued that if some hypothetical machine made an exact copy of me, then I would anticipate a 50% probability of jumping into the new body. (I admit that it still feels a little counterintuitive to me, even though this is what I would rationally expect.) My opponents argued that, after all, the mere fact that a copy was created doesn't affect the original.

\n

However, from an outside perspective, Maia1 would see Maia2 being created in front of her eyes, and Maia2 would see the same scene up to the moment of forking, at which point the field of view in front of her eyes would abruptly change to reflect the new location.

\n

Here, it is obvious from both an inside and outside perspective which version has continuity of experience, and thus from a legal standpoint, I think, it would make sense to regard Maia1 as having the same legal identity as the original, and recognize the need to create new documents and records for Maia2 -- even if there is no physical difference.

\n

Suppose, however, that the information was erased. For example, suppose a robot sedated and copied the original me, then dragged Maia1 and Maia2 to randomly chosen rooms, and erased its own memory. At this point, neither version of me, nor anyone else, would be able to distinguish between the two. What would you do here from a legal standpoint? (I suppose if it actually came to this, the two of me would agree to arbitrarily designate one as the original by tossing an ordinary coin...)

\n

And one more point. What is this probability of subjective body-jump actually a probability of? We could set up various Sleeping Beauty-like thought experiments here. Supposing for the sake of argument that I'll live at most a natural human lifespan no matter which year I find myself in, imagine that I make a backup of my current state and ask a machine to restore a copy of me every 200 years. Does this imply that, the moment the backup is made -- before I even issue the order, and from an outside perspective, way before any of this copying happens -- I should anticipate subjectively jumping into any given time in the future, and that the probability of finding myself as any of them, including the original, tends towards zero the longer the copying machine survives?

\n

 

" } }, { "_id": "sx3ZXNdHdcbMhiunZ", "title": "Medieval Ballistics and Experiment", "pageUrl": "https://www.lesswrong.com/posts/sx3ZXNdHdcbMhiunZ/medieval-ballistics-and-experiment", "postedAt": "2010-12-20T10:13:18.283Z", "baseScore": 15, "voteCount": 10, "commentCount": 36, "url": null, "contents": { "documentId": "sx3ZXNdHdcbMhiunZ", "html": "

I'm reading a popular science encyclopedia now, particularly the chapters about the history of physics. The chapter goes on to evaluate the development of the concept of kinetic energy, starting with Aristotle's (grossly incorrect) explanation of a flying arrow, which says that it's kept in motion by the air behind it, and then continuing to medieval impetus theory. Added: The picture below illustrates the trajectory of a flying cannonball as described by Albert of Saxony.

\n

\"\" What struck me immediately was how drastically different from observations its predictions were. The earliest impetus theory predicted that a cannonball's trajectory was an angle: first a slanted straight line until the impetus runs out, then a vertical line of freefall. A later development added an intermediate stage, as seen on the picture to the left. At first the impetus was at full force, and would launch the cannonball in a straight line; then it would gradually give way to freefall and curve until the ball would be falling in a straight line.

\n

While this model is closer to reality than the original prediction, I still cannot help but think... How could they deviate from observations so strongly?

\n

Yes, yes, hindsight bias.

\n

But if you launch a stream of water out of a slanted tube or sleeve, even if you know nothing about parabolas, you can observe that the curve it follows in the air is symmetrical. Balls such as those used for games would visibly not produce curves like those depicted.

\n

Perhaps the idea of verifying theories with experiments was only beginning to coalesce at that time, but what kind of possible thought process could lead one to publish theories so grossly out of touch with everyday observations, even those that you see without making any explicit experiments? Did the authors think something along the lines of \"Well, reality should behave this way, and if it doesn't, it's its own fault\"?

" } }, { "_id": "DGe6ZG959CubPP6oi", "title": "Meta: Cleaning the front page", "pageUrl": "https://www.lesswrong.com/posts/DGe6ZG959CubPP6oi/meta-cleaning-the-front-page", "postedAt": "2010-12-20T04:45:32.570Z", "baseScore": 64, "voteCount": 51, "commentCount": 8, "url": null, "contents": { "documentId": "DGe6ZG959CubPP6oi", "html": "

All the meetup announcements get promoted, so the front page ends up full of 'em: half of it right now (5/10) is meetup announcements, and with the addition of the quote threads only 30% of the front page is currently 'content'. While meetup announcements are all well and good, it seems counterproductive to have them up there after the meetup date, as is the case with four out of the current five -- it just clutters up the front page even more without providing any benefit.

\n

If post promotion is reversible, it would seem to be a simple step for one of the moderators to depromote each meetup announcement once it's taken place.

\n

(Apologies if this is the wrong place to put an organizational suggestion; I didn't find any obvious better place.)

" } }, { "_id": "CbgJrQdSb8RqD88st", "title": "Meta: ", "pageUrl": "https://www.lesswrong.com/posts/CbgJrQdSb8RqD88st/meta", "postedAt": "2010-12-20T04:29:44.414Z", "baseScore": 2, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "CbgJrQdSb8RqD88st", "html": null } }, { "_id": "KyfLXrotfWsEwcweF", "title": "Why did the internet stop working just now?", "pageUrl": "https://www.lesswrong.com/posts/KyfLXrotfWsEwcweF/why-did-the-internet-stop-working-just-now", "postedAt": "2010-12-20T03:21:17.725Z", "baseScore": -12, "voteCount": 21, "commentCount": 7, "url": null, "contents": { "documentId": "KyfLXrotfWsEwcweF", "html": "

I tried to pull up this internet and it wouldn't load.  Does anyone know what happened?

" } }, { "_id": "kQF2NSTu7cZ7LnpKX", "title": "London Meetup on 2011/1/2", "pageUrl": "https://www.lesswrong.com/posts/kQF2NSTu7cZ7LnpKX/london-meetup-on-2011-1-2", "postedAt": "2010-12-19T21:01:51.127Z", "baseScore": 14, "voteCount": 8, "commentCount": 31, "url": null, "contents": { "documentId": "kQF2NSTu7cZ7LnpKX", "html": "

On Sunday, January 2nd 2011 there will be a meetup in the London area. As with previous meetups, the venue is Shakespeare's Head. The meeting will start at 14:00.

\n

In order to keep us organised for 2011, I'm putting together a mailing list for LWers around the London area. If you'd like to be added to the list, please send me your e-mail address via private message.

" } }, { "_id": "wE46aaujk5YT8wsBd", "title": "Christmas", "pageUrl": "https://www.lesswrong.com/posts/wE46aaujk5YT8wsBd/christmas", "postedAt": "2010-12-19T14:40:26.703Z", "baseScore": 10, "voteCount": 8, "commentCount": 49, "url": null, "contents": { "documentId": "wE46aaujk5YT8wsBd", "html": "

What does a rationalist do for Christmas (or whatever analogue is going on around you at this time)? Stay at home and grumble, \"Bah, humbug! Stop having-fun-for-bad-reasons, and did you know that Láadan has a single word for that concept?\"?

\n

Attempting to light a candle instead, I am giving my teenaged nephew, who was into science but is now into history, \"Guns, Germs and Steel\", which combines both. Someone else (I haven't decided who) is getting \"The Atheist's Guide To Christmas\" which has chapters by Richard Dawkins, Ben Goldacre, Simon Singh, and the like.

\n

What are you doing for Christmas?

" } }, { "_id": "usZxZun7cLAPjRbto", "title": "Sleeping beauty, the doomsday argument and the error of drawing twice.", "pageUrl": "https://www.lesswrong.com/posts/usZxZun7cLAPjRbto/sleeping-beauty-the-doomsday-argument-and-the-error-of", "postedAt": "2010-12-19T10:06:23.782Z", "baseScore": -5, "voteCount": 11, "commentCount": 9, "url": null, "contents": { "documentId": "usZxZun7cLAPjRbto", "html": "

Suppose there are ten white marbles in an urn. If I draw one of them, what was the probability that I would draw that specific one? If your answer is 1/10, you've just drawn twice: once to determine the identity of the marble and once to draw it out of the others. If you only draw once, then \"that specific one\" simply refers to the marble that you drew, making the \"probability\" 1. You cannot follow the identity back after you drew it, because drawing it was the cause for attributing an identity to it. Identity in this sense is in the map, not the territory.

\n

This is the same mistake that Sleeping Beauty makes. She draws once to determine her own identity and once again to draw herself out of the other \"possibilities\". The same mistake is behind the doomsday argument: you draw once to determine your own identity and once again to draw yourself out of all the humans. There is no 2/3 probability that Ogh the Neanderthal was one of the last 2/3 of all humans. As soon as you say \"But I am not Ogh the Neanderthal\" you are drawing a second time. Otherwise all marbles are white, i.e. all humans are conscious.

" } }, { "_id": "rrpk8TiBbq9eP4SpF", "title": "Humor", "pageUrl": "https://www.lesswrong.com/posts/rrpk8TiBbq9eP4SpF/humor", "postedAt": "2010-12-19T09:27:23.145Z", "baseScore": 11, "voteCount": 9, "commentCount": 32, "url": null, "contents": { "documentId": "rrpk8TiBbq9eP4SpF", "html": "

Reading the recent list of rationality quotes arranged by karma underlines the popularity of funniness, and being funny should probably be included in the pursuit of awesomeness.

\n

My best guesses about characteristics of humor: If there's a word which makes the line funny, put it at the end. Phyllis Diller recommends that the word should end with a hard consonant (t or k).

\n

If you can make a surprising statement extremely concise, there's a reasonable chance it will be funny especially if it includes an insult about an acceptable target.

\n

Quasi-quote from Jim Davis, author of Garfield: \"If I can't think of anything funny, I have one of the characters hit another.\" Any other principles of humor and/or methods for cultivating the ability to be funny?

\n

ETA: The most recent thing that struck me as very funny-- how does it fit into the theories?

" } }, { "_id": "4T8NwAgFYRnuFPRHk", "title": "The Cambist and Lord Iron: A Fairy Tale of Economics", "pageUrl": "https://www.lesswrong.com/posts/4T8NwAgFYRnuFPRHk/the-cambist-and-lord-iron-a-fairy-tale-of-economics", "postedAt": "2010-12-19T02:05:22.137Z", "baseScore": 66, "voteCount": 44, "commentCount": 40, "url": null, "contents": { "documentId": "4T8NwAgFYRnuFPRHk", "html": "

Available in PDF here, the short story in question may appeal to LW readers for its approach of viewing more things than are customary in handy economic terms, and is a fine piece of fiction to boot.  The moneychanger protagonist gets out of several sticky situations by making desperate efforts, deploying the concepts of markets, revealed preferences, and wealth generation as he goes.

" } }, { "_id": "HZ4Gbk5roE2hD8uWu", "title": "Quantum Measurements", "pageUrl": "https://www.lesswrong.com/posts/HZ4Gbk5roE2hD8uWu/quantum-measurements", "postedAt": "2010-12-19T01:59:24.944Z", "baseScore": 2, "voteCount": 3, "commentCount": 10, "url": null, "contents": { "documentId": "HZ4Gbk5roE2hD8uWu", "html": "

Related to: The Quantum Physics Sequence, particularly Decoherence is Pointless.

\n

If you really understand quantum mechanics (or the linked post on decoherence) you shouldn't get anything out of this post, but understanding this really cleared things up for me, so hopefully it will clear things up for someone else too.

\n

Here at lesswrong we are probably all good Solomonoff inductors and so tend to reject collapse. We believe that a measurement destroys interference because it entangles some degrees of freedom from a quantum system with its environment. Of course, this process doesn't occur sharply and discontinuously. It happens gradually, as the degree of entanglement is increased. But what exactly does \"gradually\" mean here? Let's focus on a particular example.

\n

Suppose I run the classic two-slit experiment, but I measure which slit the electron goes through. Of course, I don't observe an interference pattern. What happens if we \"measure it less\"? Let's go to the extreme: I change the polarization of a single photon depending on which slit the electron went through (it's either polarized vertically or horizontally), and I send that photon off to space (where its polarization will not be coupled to any other degrees of freedom). Do I now see just a little bit of an interference pattern?

\n

A naive guess is that the strength of the interference pattern drops off exponentially with the number of degrees of freedom entangled with the measurement of which slit the electron went through. In a certain sense, this is completely correct. But if I were to perform the experiment exactly as I described---in particular, if I were to polarize the photon perfectly horizontally in the one case and perfectly vertically in the other case---then I would observe no interference at all. (This may or may not be possible, depending on the way nature chose to implement electrons, photons, and slits. Whether or not you can measure exactly is really not philosophically important to quantum mechanics, and I feel completely confident saying that science doesn't yet know the answer. So I guess I'm not yet decided on whether decoherence really happens \"gradually.\")

\n

The important thing is that two paths leading to the same state only interfere if they lead to exactly the same state. The two ways for the electron to get to the center of the screen interfere by default because nature has no record of how the electron got there. If you measure at all, even with one degree of freedom, then the two ways for the electron to get to the same place on the screen don't lead to exactly the same state, and so interference doesn't occur.
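
In the standard decoherence bookkeeping this can be made quantitative. If ψ1(x) and ψ2(x) are the amplitudes for reaching a point x on the screen through the two slits, and the two paths leave the rest of the world in states |E1⟩ and |E2⟩, then the probability of finding the electron at x picks up a cross term weighted by the overlap of those records:

P(x) ∝ |ψ1(x)|² + |ψ2(x)|² + 2 Re[ ⟨E2|E1⟩ ψ1(x) ψ2*(x) ].

Interference survives exactly to the extent that ⟨E2|E1⟩ is nonzero, i.e. to the extent that the two paths really do leave everything else in the same state.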

\n

The exponential dependence on the number of degrees of freedom comes from the error in our measurement devices. If I prepare one photon polarized either horizontally or vertically, and I do it very precisely, then I am very unlikely to mistake one case for the other and I will therefore see very little interference. If I do it somewhat less precisely, then the probability of a measurement error increases and so does the strength of the interference pattern. However, if I create 1000 photons, each polarized approximately correctly, then by taking a majority vote I can almost certainly correctly learn which slit the electron went through, and the interference pattern disappears again. The probability of error drops off exponentially, and so does the interference.
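
As a toy numerical illustration (the overlap value below is made up, not taken from any real experiment): if each which-path record consists of a photon whose two possible polarization states have overlap c, then keeping n independent records multiplies the overlaps together, so the visibility of the interference pattern falls off like |c|^n.

```python
# Toy illustration: interference visibility vs. number of which-path records.
# Each record's two possible states are assumed to have the same overlap;
# the numbers here are invented for illustration.

def visibility(overlap, n_records):
    # n independent records -> total environment overlap = overlap**n,
    # and the fringe visibility is proportional to that overlap.
    return abs(overlap) ** n_records

sloppy_overlap = 0.95  # a single imprecisely polarized photon barely records anything
for n in (1, 10, 100, 1000):
    print(n, visibility(sloppy_overlap, n))
# one record leaves ~95% visibility; a thousand records leave essentially none
```

The majority-vote point above is the same statement seen from the measurement side: with many noisy records, the chance of misreading which slit the electron went through and the residual interference both vanish exponentially.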

\n

Another issue (which is related in my head, probably just because I finally understood it around the same time) is the possibility of a \"quantum eraser\" which destroys the polarized photon as it heads into space. If I destroy the photon (or somehow erase the information about its polarization) then it seems like I should see the interference pattern---now the two different paths for the electron led to exactly the same state again. But if I destroy the photon after checking for the interference pattern, how can this be possible?

\n

The answer to this apparent paradox is that erasing the data in the photon is impossible if you have already checked for an interference pattern, by the reversibility of quantum mechanics. In order to erase the data in the photon, you need to measure which slit the electron went through a second time in a way that precisely cancels out your first measurement; there is no way around this. This conveniently prevents any sort of faster than light communication. In order to restore the interference pattern, you need to bring the photon back into physical proximity with the electron.

" } }, { "_id": "ZFMqBSX8CnpAwmWes", "title": "Best of Rationality Quotes 2009/2010", "pageUrl": "https://www.lesswrong.com/posts/ZFMqBSX8CnpAwmWes/best-of-rationality-quotes-2009-2010", "postedAt": "2010-12-18T21:36:38.434Z", "baseScore": 32, "voteCount": 26, "commentCount": 53, "url": null, "contents": { "documentId": "ZFMqBSX8CnpAwmWes", "html": "

Best of Rationality Quotes 2009/2010 (Warning: 750kB page, 774 quotes)

\n

The year's last Rationality Quotes thread has calmed down, so now it is a good time to update my Best of Rationality Quotes page, and write a top post about it. (The original version was introduced in the June 2010 Open Thread.)

\n

The page was built by a short script (source code here) from all the LW Rationality Quotes threads so far. (We have had such a thread each month since April 2009.) The script collects all comments with a karma score of 4 or more, and sorts them by score.

\n

There is a minor complication: The obvious idea is to consider only top-level comments, that is, comments that are not replies to other comments. Unfortunately, good quotes are sometimes replies to other quotes. Of course, even more often, replies are not quotes. This is a precision-recall trade-off. Originally I went for recall, because I liked many replied quotes such as this. But as JGWeissman noted in a comment below, to build the precise version, only a trivial modification of my script is needed. So I built it, and I preferred it to the noisy version after all. So now at the top of this post we have the filtered version, and here is the original version with even more good quotes, but also with many non-quotes:
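
The actual script is linked above; as a rough sketch (with invented field names), the selection rule it describes amounts to something like this:

```python
# Rough sketch of the selection rule described above; field names are invented.
def best_quotes(comments, min_score=4, top_level_only=True):
    kept = [c for c in comments
            if c['score'] >= min_score
            and (not top_level_only or c['parent_id'] is None)]
    # Sort the surviving quotes by karma, highest first.
    return sorted(kept, key=lambda c: c['score'], reverse=True)
```

The noisy and filtered versions of the page then differ, roughly, only in the top_level_only switch.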

\n

Best of Rationality Quotes 2009/2010, including replied comments (Warning: 1.3MB page, 1358 quotes)

\n

 

\n

UPDATE: I changed the links and rewrote the above when I decided to filter replied comments.

\n

UPDATE 2: Added a comment listing the personal quote collection pages of top quote contributors.

\n

UPDATE 3: Responding to various requests by commenters, added several top-lists:

\n" } }, { "_id": "72bXyQZHPxMNFj7ae", "title": "How Pascal's Wager Saved My Soul", "pageUrl": "https://www.lesswrong.com/posts/72bXyQZHPxMNFj7ae/how-pascal-s-wager-saved-my-soul", "postedAt": "2010-12-18T20:47:00.809Z", "baseScore": 12, "voteCount": 18, "commentCount": 27, "url": null, "contents": { "documentId": "72bXyQZHPxMNFj7ae", "html": "

[I know Pascal's Wager isn't a hard logical problem for a Bayesian to tackle. However, please read the following account of how it helped me become more rational.]

\n

I was a Christian when I was a boy. I believed in the miraculous birth and resurrection of Christ, heaven and hell, God on high answering prayers, and so on and so forth. I also believed in Santa Claus. When I stopped believing in Santa Claus when I was 9, my faith in the other stuff I'd been told was somewhat shaken, but I got over it, as I was told that everyone did.

\n

When I was 15, I went to a summer program for weird kids (i.e. nerds) called Governor's School. Each week at Governor's School, the entire group of us was addressed by a pencil-necked, jaded philosophy professor called Dr. Bob. (Full disclosure: I am a jaded and pencil-necked person myself.) One week late in the summer, after he'd gotten our trust with other Astounding Insights, he told us about Pascal's Wager, concluding that we should all be Christians. And he left it at that.

\n

We all found this very strange, as Dr. Bob had seemed rather unfriendly to religion before without ever having said anything outright irreligious. In fact, I didn't like him very much because I believed in the Lord Jesus Christ, and I didn't entirely like being forced to think about that belief too much. But Pascal's Wager was too much: I had always just believed because I'd been told by trustworthy people to believe, and here was this guy I didn't like saying I had to believe.

\n

So, I couldn't get it out of my mind. You all know Pascal's Wager (and even have Yudkowsky's Pascal's Mugging to boot), but I'm going to take it apart the way 15-year-old Scott did:

\n

I thought: \"If Heaven and Hell are real, then it would be best for me if I believed in them. If they aren't real, it wouldn't hurt for me to believe in them, because we're all just dead anyway, in the end. That's straightforward enough.\"

\n

I objected: \"But God doesn't just want our belief in Heaven or Hell, I thought. He wants our love and devotion. Heaven and Hell are just ways of getting Kohlberg 3's and lower to go to church and learn the right way of believing (Another thing we'd all learned by that point was the Kohlberg stages.) Like using Santa Claus giving gifts at Christmas to keep little kids good.\"

\n

I recalled my 10th Christmas: Ohhhhhh.

\n

I stopped believing in God.

\n

Truth is the thing we need to know to plan, so that we may live better lives. Trust is valuable for reducing our mental workloads in determining the truth, but it can be used as a weapon. The purpose of lying is to control other people, to make them behave the way you want when the truth would cause them to behave differently, or perhaps just have a greater chance of behaving differently. Pascal's Wager laid bare the promise of Heaven and Hell: It is an attempt to control other people. If these people, who always say I should trust them, already want to control me, they'd probably be willing to lie to me. Once I saw that, the lie was plain.

" } }, { "_id": "KMDYj5rrANzcpieR7", "title": "When and how psychological data is collected affects the kind of students who volunteer", "pageUrl": "https://www.lesswrong.com/posts/KMDYj5rrANzcpieR7/when-and-how-psychological-data-is-collected-affects-the", "postedAt": "2010-12-18T13:41:43.455Z", "baseScore": 24, "voteCount": 15, "commentCount": 0, "url": null, "contents": { "documentId": "KMDYj5rrANzcpieR7", "html": "

http://bps-research-digest.blogspot.com/2010/12/when-and-how-psychological-data-is.html

\n
\n

Psychology has a serious problem. You may have heard about its over-dependence on WEIRD participants - that is, those from Western, Educated, Industrialised, Rich Democracies. More specifically, as regular readers will be aware, countless psychology studies involve undergraduate students, particularly psych undergrads. Apart from the obvious fact that this limits the generalisability of the findings, Edward Witt and his colleagues provide evidence in a new paper for two further problems, this time involving self-selection biases.

Just over 500 Michigan State University undergrads (75 per cent were female) had the option, at a time of their choosing during the Spring 2010 semester, to volunteer either for an on-line personality study, or a face-to-face version. The data collection was always arranged for Wednesdays at 12.30pm to control for time of day/week effects. Also, the same personality survey was administered by computer in the same way in both experiment types, it's just that in the face-to-face version it was made clear that the students had to attend the research lab, and an experimenter would be present.

Just 30 per cent of the sample opted for the face-to-face version. Predictably enough, these folk tended to score more highly on extraversion. The effect size was small (d = -.26) but statistically significant. As regards more specific personality traits, the students who chose the face-to-face version were also more altruistic and less cautious.

What about choice of semester week? As you might expect, it was the more conscientious students who opted for dates earlier in the semester (r = -.20). What's more, men were far more likely to volunteer later in the semester, even after controlling for average personality difference between the sexes. For example, 18 per cent of week one participants were male compared with 52 per cent in the final, 13th week.

In other words, the kind of people who volunteer for research will likely vary according to the time of semester and the mode of data collection. Imagine you used false negative feedback on a cognitive task to explore effects on confidence and performance. Participants tested at the start of semester, who are typically more conscientious and motivated, are likely to be affected in a different way than participants who volunteer later in the semester.

This isn't the first time that self-selection biases have been reported in psychology. A 2007 study, for example, suggested that people who volunteer for a 'prison study' are likely to score higher than average on aggressiveness and social dominance, thus challenging the generalisability of Zimbardo's seminal work. However, despite the occasional study highlighting these effects, there seems to be little enthusiasm in the social psychological community to do much about it.

So what to do? The specific issues raised in the current study could be addressed by sampling throughout a semester and replicating effects using different data collection methods. 'Many papers based on college students make reference to the real world implications of their findings for phenomena like aggression, basic cognitive processes, prejudice, and mental health,' the researchers said. 'Nonetheless, the use of convenience samples place limitations on the kinds of inferences drawn from research. In the end, we strongly endorse the idea that psychological science will be improved as researchers pay increased attention to the attributes of the participants in their studies.'
_________________________________

Witt, E., Donnellan, M., and Orlando, M. (2011). Timing and selection effects within a psychology subject pool: Personality and sex matter. Personality and Individual Differences, 50 (3), 355-359 DOI: 10.1016/j.paid.2010.10.019

\n
" } }, { "_id": "69gof6oyZuNiSt7XC", "title": "An Intuitive Explanation of Eliezer Yudkowsky’s Intuitive Explanation of Bayes’ Theorem", "pageUrl": "https://www.lesswrong.com/posts/69gof6oyZuNiSt7XC/an-intuitive-explanation-of-eliezer-yudkowsky-s-intuitive", "postedAt": "2010-12-18T13:26:18.180Z", "baseScore": 24, "voteCount": 18, "commentCount": 6, "url": null, "contents": { "documentId": "69gof6oyZuNiSt7XC", "html": "

Common Sense Atheism has recently had a string of fantastic introductory LessWrong related material. First easing its audience into the singularity, then summarising the sequences, yesterday affirming that Death is a Problem to be Solved, and finally today by presenting An Intuitive Explanation of Eliezer Yudkowsky’s Intuitive Explanation of Bayes’ Theorem

\n

From the article:

\n
\n

Eliezer’s explanation of this hugely important law of probability is probably the best one on the internet, but I fear it may still be too fast-moving for those who haven’t needed to do even algebra since high school. Eliezer calls it “excruciatingly gentle,” but he must be measuring “gentle” on a scale for people who were reading Feynman at age 9 and doing calculus at age 13 like him.

\n

So, I decided to write an even gentler introduction to Bayes’ Theorem. One that is gentle for normal people.

\n
\n

It may be interesting if you want to do a review of Bayes' Theorem from a different perspective, or offer some introductory material for others. From a wider viewpoint, it's great to see a popular blog joining our cause for raising the sanity waterline.

" } }, { "_id": "2Wf3R4NZ77CLczLL2", "title": "Cryptographic Boxes for Unfriendly AI", "pageUrl": "https://www.lesswrong.com/posts/2Wf3R4NZ77CLczLL2/cryptographic-boxes-for-unfriendly-ai", "postedAt": "2010-12-18T08:28:45.536Z", "baseScore": 78, "voteCount": 67, "commentCount": 162, "url": null, "contents": { "documentId": "2Wf3R4NZ77CLczLL2", "html": "

Related to: Shut up and do the impossible!; Everything about an AI in a box.

One solution to the problem of friendliness is to develop a self-improving, unfriendly AI, put it in a box, and ask it to make a friendly AI for us.  This gets around the incredible difficulty of developing a friendly AI, but it creates a new, apparently equally impossible problem. How do you design a box strong enough to hold a superintelligence?  Let's suppose, optimistically, that researchers on friendly AI have developed some notion of a certifiably friendly AI: a class of optimization processes whose behavior we can automatically verify will be friendly. Now the problem is designing a box strong enough to hold an unfriendly AI until it modifies itself to be certifiably friendly (of course, it may have to make itself smarter first, and it may need to learn a lot about the world to succeed).

\n

Edit: Many people have correctly pointed out that certifying friendliness is probably incredibly difficult. I personally believe it is likely to be significantly easier than actually finding an FAI, even if current approaches are more likely to find FAI first. But this isn't really the core of the article. I am describing a general technique for quarantining potentially dangerous and extraordinarily sophisticated code, at great expense. In particular, if we developed uFAI before having any notion of certifiable friendliness, then we could still use this technique to try and use the uFAI in a very limited way. It allows us to quarantine an AI and force everything it tries to say to us through an arbitrarily complicated filter. So, for example, if we are interested in solving some problem with a unique and verifiably correct answer, then we can ask the AI for this answer and only see the output if it passes the verification. We can also use a quarantined AI in more useful ways, at the expense of introducing some limited channels of influence (for example, if we ask for the solution to a problem which may have several solutions, we allow the AI to influence us by choosing which solution to provide)

\n

Minimally, I hope this scheme at least suggests that putting an AI in a box may not be catastrophic. What to do with a boxed AI is another question. /Edit.

\n

Concretely, suppose someone comes to you with the design of a recursively self-improving AI---call it Hal.  Unfortunately, Hal is not certifiably friendly.  Maybe it is friendly but not provably so, but more likely it is ambivalent to the human condition and is likely to destroy the universe if you just tell it to make a friendly AI.  Nevertheless, you think Hal is smart enough to solve the friendly AI problem for you.  All you need to do is give it access to all of the information about the world it wants and a couple of hours to run; after that, it should output a modified version of itself which is certifiably friendly.

You almost run Hal and ask it to design a friendly AI for you. After all, if you just run Hal and check to see if it outputs a certifiably friendly AI, what can it do other than refuse?  But after some more thought you realize this is overly optimistic.  Maybe you can avoid giving it an output channel that a human will have to read, thereby eliminating the most obvious failure mode of a boxed AI, but you still have to give it access to a computer to run on.  Computation produces electromagnetic radiation, heat, etc., and it is honestly hard to say what a superintelligence could do with such a channel.

We seem to be faced with an impossible problem.  We need to ask Hal to design us a certifiably friendly AI, but we also need to give him absolutely no ability to interact with the world destructively.

There is one field in particular which has a good history of solving obviously impossible problems: computational cryptography.  We want to implement the following obviously impossible device.  A cage into which we can place Hal so that he can think and examine all of the data we give him, but from which he cannot interact with his environment.  In the cage with him is a tamper-proof arbiter implementing our friendliness test.  Hal may give the arbiter the code to a single AI; if the arbiter confirms that this AI is certifiably friendly it will kill Hal, replace him with the friendly AI, and then allow the friendly AI to send a single message out of the cage.

To implement this impossible device we co-opt an important cryptographic primitive called public key homomorphic encryption, typically used for the impossible problem of private delegated computation.  To get a basic idea you can see the Wikipedia article on homomorphic encryption. I'll give the brief description here (you can skip the next two paragraphs if you are familiar with homomorphic encryption).

A public key encryption scheme has the following properties.  Anyone can encrypt a message using a publicly published key, but you cannot efficiently distinguish the encryptions of two different messages unless you know something about the secret key.  A fully homomorphic encryption scheme has the additional counterintuitive property that, although you cannot learn anything about the encrypted data without the secret key, you can still perform arbitrary operations on it.  In particular, if you have a homomorphic encryption of the source code of a program you can run that program without learning anything about it (except how long the source is), but the output of the program will be homomorphically encrypted.  All of this can be done using only the public key.

The existence of a fully homomorphic encryption scheme was completely open until recently, when it was shown by Craig Gentry that such a scheme exists under the assumption that a certain problem on lattices is computationally intractable.  Known homomorphic encryption schemes are very inefficient and based on new assumptions, but we can realistically expect improvements in the coming years and decades both in speed and the strength of the necessary assumptions.
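
To make the \"compute on ciphertexts\" idea concrete, here is a toy example of a homomorphic property using unpadded textbook RSA. Textbook RSA is only multiplicatively homomorphic and is not semantically secure, so this is not Gentry's scheme; it is just meant to show what it looks like to operate on data you cannot read.

```python
# Toy multiplicative homomorphism with textbook RSA (insecure; illustration only).
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (modular inverse, Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 11
product_ciphertext = (enc(a) * enc(b)) % n     # operate on ciphertexts only...
assert dec(product_ciphertext) == (a * b) % n  # ...and obtain an encryption of a*b
```

A fully homomorphic scheme extends this from one fixed operation to arbitrary circuits, which is exactly what lets us run an entire program under encryption.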

If you run a homomorphically encrypted AI, sufficiently isolated from the private key, you get the very strong guarantee that the AI's execution has no computationally detectable effect on the rest of the world.  If the AI were able to affect the world in a computationally detectable way, then we would have successfully broken the homomorphic encryption scheme by distinguishing the encrypted execution of the AI from the encrypted execution of some benign program.

Now we need to give the arbiter some means to allow a genuinely friendly AI access to the rest of the world, without giving Hal any illegitimate access.  We allow Hal access to an output channel whose outputs are homomorphically encrypted.  After some time, we destroy Hal and then homomorphically check to see if he has output the code of a certifiably friendly AI over the channel (we have to decode a single homomorphically encrypted bit to perform this check).  If he hasn't, then we do nothing---apparently Hal would rather not give us a friendly AI.  If he has output the code of a certifiably friendly AI, we homomorphically run it, obtain a homomorphic encryption of its output, and then decrypt only this output.

That's the entire scheme.  Whenever you have some code whose execution itself might do incredible damage, this precaution allows you to execute it in perfect safety (the main caveat is that you need an automatic way to screen the output which doesn't give the algorithm enough room to possibly do harm).
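
Schematically, the whole procedure looks something like the sketch below, in which every primitive (the homomorphic scheme, the interpreter, the friendliness verifier, and of course Hal) is a hypothetical stand-in rather than anything that exists today:

```python
def quarantined_run(fhe, hal_source, interpreter_src, verifier_src, budget):
    # fhe: a fully homomorphic encryption scheme (keygen / encrypt / eval / decrypt).
    # interpreter_src: code that runs a program for a bounded number of steps.
    # verifier_src: code that outputs one bit: is this certifiably friendly AI code?
    pk, sk = fhe.keygen()
    enc_hal = fhe.encrypt(pk, hal_source)

    # Run Hal entirely under encryption; the run has no computationally
    # detectable effect on the outside world.
    enc_output = fhe.eval(pk, interpreter_src, enc_hal, budget)

    # Homomorphically check Hal's output, and decrypt ONLY this one bit.
    enc_ok = fhe.eval(pk, verifier_src, enc_output)
    if not fhe.decrypt(sk, enc_ok):
        return None  # no certifiably friendly AI was produced; discard everything

    # Homomorphically run the verified-friendly AI, then decrypt just its message.
    enc_message = fhe.eval(pk, interpreter_src, enc_output, budget)
    return fhe.decrypt(sk, enc_message)
```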

\n

Note: If you are really exceptionally paranoid you may be able to obtain even better guarantees by developing more powerful cryptographic techniques tailored for this application.  In particular, the secret key used by the encryption procedure may be leaked to a sufficiently powerful AI through a side channel (the computations involving the key typically leak some data in the form of radiation, which might be recovered from sufficiently exhaustive sensor data by a sufficiently powerful AI).  By doing these computations on a quantum computer you can probably guarantee that no data was leaked. If you want to be even more fantastically paranoid (i.e., if you want a formal proof of security, which is a rare treat in real problems) you will actually have to shred the key, guaranteeing that an AI will have to interact with the shredded key to recover it. If you do this, you have to use another new primitive to implement the final revelation of the friendly AI's message.

If you have any concerns about the security of this protocol, I would be happy to try and defend it and would be surprised and interested if I failed. Whether or not such a cryptographic box is really an interesting or important object is another question. (It is interesting to me as an unexpected application of cryptography).

" } }, { "_id": "87WNChRBjRYbEJnCk", "title": "Brain blogger survey", "pageUrl": "https://www.lesswrong.com/posts/87WNChRBjRYbEJnCk/brain-blogger-survey", "postedAt": "2010-12-18T02:40:39.146Z", "baseScore": 5, "voteCount": 3, "commentCount": 2, "url": null, "contents": { "documentId": "87WNChRBjRYbEJnCk", "html": "

Survey info:

\n


\n
I’m currently doing some research on brain bloggers. The first stage is a rather basic survey (below). This is open from today until Monday the 10th of January.
\n

\n
By ‘brain bloggers’ I mean bloggers who write about the stuff that goes in people’s heads, whatever we think this stuff is. Such bloggers might focus on neurology or psychology, or another field entirely. It might be the history, anthropology or commercial applications of these fields. It might come under ‘research blogging’, journalism, ‘public engagement’ or some form of political activism (or several of these at once, or something else entirely). This focus might be exclusively brain-y, or brain-ish issues might be topics they occasionally blog about in the course of other work.
" } }, { "_id": "ZHpg54EtMZ3k96TjQ", "title": "Link: Facing the Mind-Killer", "pageUrl": "https://www.lesswrong.com/posts/ZHpg54EtMZ3k96TjQ/link-facing-the-mind-killer", "postedAt": "2010-12-18T00:57:08.114Z", "baseScore": 12, "voteCount": 15, "commentCount": 34, "url": null, "contents": { "documentId": "ZHpg54EtMZ3k96TjQ", "html": " \n

I've long opposed discussing politics on Less Wrong. Elsewhere, however, I have been known to gaze into the abyss; and so it came to be that I wrote a handful of blog posts for the Oxford Libertarian Society Blog. I had the deliberate intention of bringing a little bit of rationality into politics - and so of course ended up writing in something like Eliezer's style.

\n

I wanted to establish some theory first, so the initial posts were about The Conservation of Expected Evidence and Reductionism, and then one particular Death-Spiral.

\n

As you'll probably notice, one of my defences against the little-death has been to err on the side of attacking Libertarian positions; I provided an account of Traditional Socialist Values so we remember that our enemies aren't inherently evil, and then analysed an abuse of The Law of Comparative Advantage, showing cases where it didn't apply.

\n

I can't promise I'll update at all regularly.

\n

 

\n

Post inspired by Will Newsome and prompted by Vladimir Nesov.

\n

http://oxlib.blogspot.com/

\n

 

" } }, { "_id": "RTQrCaJyYEcYkfWKv", "title": "Evidence for surprising ease of de-nuclearization ", "pageUrl": "https://www.lesswrong.com/posts/RTQrCaJyYEcYkfWKv/evidence-for-surprising-ease-of-de-nuclearization", "postedAt": "2010-12-18T00:33:07.925Z", "baseScore": 4, "voteCount": 2, "commentCount": 2, "url": null, "contents": { "documentId": "RTQrCaJyYEcYkfWKv", "html": "

http://yglesias.thinkprogress.org/2010/12/the-symbolic-power-of-nuclear-deterrents/

" } }, { "_id": "oZ3L6JXE8rMdc8Huo", "title": "Folk grammar and morality", "pageUrl": "https://www.lesswrong.com/posts/oZ3L6JXE8rMdc8Huo/folk-grammar-and-morality", "postedAt": "2010-12-17T21:20:46.169Z", "baseScore": 28, "voteCount": 26, "commentCount": 62, "url": null, "contents": { "documentId": "oZ3L6JXE8rMdc8Huo", "html": "

If you've spent any time with foreigners learning your language, you may have been in conversations like this:

\n
\n

Mei: I'm a bit confused ... what's the difference between \"even though\" and \"although\"?

\n

Albert: Um, I think they're mostly equivalent, but \"even though\" is a bit more emphatic.

\n

Barry: Are you sure? I remember something about one being for positives, and the other for negatives. For example, let's see, these sentences sound a bit weird: \"He refused to give me the slightest clue, although I begged on my knees\", and \"Although his car broke down on the first mile, he still won the rally\".

\n
\n

People can't automatically state the rules underlying language, even though they follow them perfectly in their daily speech. I've been made especially aware of this when teaching French to Chinese students, where I had to frequently revise my explanation, or just say \"sorry, I don't know what the rule is for this case, you'll just have to memorize it\". You learn separately how to speak the language and how to apply the rules.

\n

Morality is similar: we feel what's wrong and what's right, but may not be able to formulate the underlying rules. And when we do, we're likely to get it wrong the first time. For example you might say:

\n
\n

It has been suggested that animals have less subjective experience than people. For example, it would be possible to have an animal that counts as half a human for the purposes of morality.

\n
\n

But unlike grammar, people don't always agree on right and wrong: if Alfred unintentionally harms Barry, Barry is more likely to think that what Alfred did was morally wrong, even if both started off with similar moral intuitions. So if you come up with an explanation and insist it's the definition of morality, you can't be \"proven wrong\" nearly as easily as on grammar. You may even insist your explanation is true, and adjust your behavior accordingly, as some religious fanatics seem to do (\"what is moral is what God said\" being a quite common rule people come up with to explain morality).

\n

So: beware of your own explanations. Morality is a complex topic, you're even more likely to shoot yourself in the foot than with grammar, and even less likely to realize that you're wrong.

\n

(edit) Related posts by Eliezer: Fake Justification, Fake Selfishness, Fake Morality.

" } }, { "_id": "DrusdTRCBR3joKvm7", "title": "Turing Test Tournament For Funding", "pageUrl": "https://www.lesswrong.com/posts/DrusdTRCBR3joKvm7/turing-test-tournament-for-funding", "postedAt": "2010-12-17T18:53:15.242Z", "baseScore": -4, "voteCount": 9, "commentCount": 8, "url": null, "contents": { "documentId": "DrusdTRCBR3joKvm7", "html": "

It's always troubled me that the standard Turing test provides only a single-bit output, and that the human being questioned could throw the game to make their AI counterpart look good. Also, research and development gets entirely too much funding based on what sounds cool rather than what actually works. The following is an attempt to address both issues.

\n

 

\n

Take at least half a dozen chatbot AIs, and a similar number of humans with varying levels of communication skill (professional salespeople, autistic children, etc.). Each competitor gets a list of questions. A week later, to allow time for research and number-crunching, collect the answers. Whoever submitted question 1 receives all the answers to question 1 in a randomized order, and then ranks the answers from most human/helpful to least, with a big prize for the top and successively smaller prizes for runners-up. Alternatively, interrogators could specify a customized allocation of their question's rewards, e.g. \"this was the best, these three are tied for second, the rest are useless.\"
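
One hypothetical way to turn each interrogator's ranking into payouts is a simple geometric split of that question's prize pool; the function and the decay factor below are made up for illustration, not part of the proposal itself.

```python
# Hypothetical prize split for one question: biggest share to the top-ranked answer.
def allocate_prizes(ranking, prize_pool, decay=0.5):
    weights = [decay ** i for i in range(len(ranking))]
    total = sum(weights)
    return {answerer: prize_pool * w / total
            for answerer, w in zip(ranking, weights)}

# e.g. allocate_prizes(['bot_A', 'salesperson_2', 'bot_B'], 900.0)
# -> roughly {'bot_A': 514.3, 'salesperson_2': 257.1, 'bot_B': 128.6}
```

The customized-allocation option described above corresponds to letting the interrogator supply the weights directly instead of using a fixed decay.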

\n

 

\n

The humans will do their best in that special way that only well-paid people can, and the chatbots will receive additional funding in direct proportion to their success at a highly competitive task.

\n

 

\n

Six hundred thousand seconds might seem like an awfully long time to let a supercomputer chew over its responses, but the goal is deep reasoning, not just snappy comebacks. Programs can always be debugged and streamlined, or just run on more powerful future hardware, after the basic usefulness of the results has been demonstrated.

" } }, { "_id": "s3KMvmCXC9psYhEvt", "title": "TrailMemes for Sequences", "pageUrl": "https://www.lesswrong.com/posts/s3KMvmCXC9psYhEvt/trailmemes-for-sequences", "postedAt": "2010-12-17T17:08:02.121Z", "baseScore": 18, "voteCount": 15, "commentCount": 10, "url": null, "contents": { "documentId": "s3KMvmCXC9psYhEvt", "html": "

One of the obstacles I faced when first confronted with the Sequences was figuring out the prerequisites for any given post. At times this is spelled out explicitly, but there are many parallel paths and cross-referencing. Another problem was that posts don't link forward to the posts that reference them.

\n

TrailMeme lets you \"blaze a trail\" through a mass of blog posts by creating a directed graph, which other users can then \"walk.\" The site is still in Beta, and can be unstable at times.

\n

I tried to more or less follow the posted pre-requisites and link-backs, but culled a lot of the redundant ones to reduce confusion. I found one pre-existing trail, and invite anyone with free time to contribute! Trails so far:

\n

Mysterious Answers to Mysterious Questions

\n

A Human's Guide to Words

\n

Reductionism

\n

Map and Territory

\n

Politics is the Mind-Killer

" } }, { "_id": "ot9AYj5kkmietktoK", "title": "Weirdtopias in science fiction", "pageUrl": "https://www.lesswrong.com/posts/ot9AYj5kkmietktoK/weirdtopias-in-science-fiction", "postedAt": "2010-12-17T13:58:02.394Z", "baseScore": 6, "voteCount": 5, "commentCount": 17, "url": null, "contents": { "documentId": "ot9AYj5kkmietktoK", "html": "

There's \"Day Million\" and The Age of the Pussyfoot, both by Frederik Pohl, and even they might be more like utopias rather than adequately weird.

\n

\"Day Million\" is a very short story, with a narration which puts emphasis on how much people like living in that world even though it would make little sense to a contemporary reader. Unfortunately, the most vivid detail is a spoiler. N pbhcyr (obgu bs jubz jbhyq frrz bqq, gubhtu bar'f pyrneyl znyr naq gur bgure'f pyrneyl srznyr ner nggenpgrq gb rnpu bgure-- gurl unir n tbbq gvzr bapr, naq gura qb jung'f abezny va gung phygher-- rkpunatr vqragvgl gncrf

\n

Silverberg's The World Inside might count.

\n

It's a description of how people might be pretty happy living in a maximum-population world.

\n

R.A. Lafferty's Slow Tuesday Night is  a weirdtopia, and so is his Primary Education of the Camiroi. Unfortunately, his \"Polity and Custom among the Camiroi\" is incomplete at google books, but I recommend getting a paper copy of Nine Hundred Grandmothers-- the collection has some of his best work.

\n

gwern supplied the link for Polity and Custom:

\n
No assembly on Camiroi for purposes of entertainment may exceed thirty-nine persons. No more than this number may witness any spectacle or drama, or hear a musical presentation, or watch a sporting event. This is to prevent the citizens from becoming mere spectators rather than originators or partakers. Similarly, no writing -- other than certain rare official promulgations -- may be issued in more than thirty-nine copies in one month. This, it seems to us, is a conservative ruling to prevent popular enthusiasms. A father of a family who twice in five years appeals to specialists for such things as simple surgery for members of his household, or legal or financial or medical advice, or any such things as he himself should be capable of doing, shall lose his citizenship. It seems to us that this ruling obstructs the Camiroi from the full fruits of progress and research.
\n

\"Slow Tuesday Night\" is a whimsy about people who've had a mental stutter removed-- they live so fast that they can have three careers in eight hours. \"Primary Education Among the Camiroi\" is about a culture which develops maximum intelligence and self-reliance, at the cost of a few of the children being killed. It teaches slow reading (reading slowly enough that everything is remembered), and the world government course consists of governing a world (not a first aspect world) for three or four months.

\n

\"Winthrop Was Stubborn\" by William Tenn-- a group of time travelers are trapped in the future because one of them doesn't want to go home. I only remember a little of it-- I think there was artificial living/moving food-- but the point of the story was to portray a society which was weird for a modern reader and delightful for its inhabitants.

\n

Any other nominations?

" } }, { "_id": "MyfahrWWn2zCK8oF7", "title": "The problem of mankind indestructibility in disastrously unpredictable environment", "pageUrl": "https://www.lesswrong.com/posts/MyfahrWWn2zCK8oF7/the-problem-of-mankind-indestructibility-in-disastrously", "postedAt": "2010-12-17T11:36:32.114Z", "baseScore": -8, "voteCount": 10, "commentCount": 11, "url": null, "contents": { "documentId": "MyfahrWWn2zCK8oF7", "html": "
\r\n

The problem of mankind indestructibility in disastrously unpredictable environment

\r\n

Concerning the development of a human race indestructibility roadmap

\r\n

Kononov Alexandr Anatolievich, PhD (Engineering), senior researcher, Institute of Systems Analysis, Russian Academy of Sciences, member of Russian Philosophical Society of RAS, kononov@isa.ru

\r\n

 

\r\n

Many discoveries in astronomy and the earth sciences made within the past decades have turned out to reveal new threats and risks to the existence of humankind on the Earth and in space. The conclusion readily suggests itself that our civilization exists and evolves in a disastrously unstable environment, which is capable of destroying it at any time, and that only a fortunate coincidence (luck) has allowed our civilization to develop up to its current level. But this “luck” will hardly be everlasting.

\r\n

 

\r\n

Dangers of human race destruction

\r\n

For several years now the author has maintained an Internet project, “Multiverse Dossier” (in Russian) (http://www.mirozdanie.narod.ru), several sections of which carry a large number of scientific papers and reports of the latest space discoveries, which suggest the conclusion that the processes running in space are catastrophic in character and that their impact on life in the part of space inhabited by humankind is unpredictable. Geological processes are not much more predictable, and many of them may become sources of global natural disasters. Indeed, nearly every step in the evolution of civilization brings along new threats and risks to its existence.

\r\n

Following below is a list of the main groups of threats of global catastrophes, with several examples of the threats.

\r\n

 

\r\n

Natural:

\r\n

Disasters resulting from geological processes. Supervolcanoes, magnetic pole shifts, earth faults and the processes running in the deeper strata of the Earth

\r\n

Disasters resulting from potential instability of the Sun. Superpowerful solar flares and bursts, potential instability of the reactions providing for the solar luminosity and temperature that support life on the Earth

\r\n

Disasters resulting from space effects (asteroids, comets; the possibility of a malicious intrusion by an alien civilization cannot be ruled out either)

\r\n

 

\r\n

Engendered by civilization

\r\n

Self-destruction. Resulting from the use of weapons of mass destruction.

\r\n

Environment destruction. As a result of man-made disasters.

\r\n

Self-extermination. The choice of an erroneous path of civilization’s evolution, say, one that limits the pace of building up civilization’s technological strength. Given civilization’s existence in a disastrously unstable environment, such a decision may turn out to be a sentence of self-extermination: civilization will simply have no time to prepare for the upcoming catastrophes. Many other theories bearing upon the choice of directions for civilization’s evolution can also, if applied in a lop-sided, non-systemic way, inflict heavy damage and prevent civilization from properly solving the tasks that would have enabled it to manage the potential disasters. Even the idea of civilization’s indestructibility presented herein carries the risk of justifying super-exploitation (sacrificing the living generations) for the sake of solving the tasks of civilization’s indestructibility. Hence the importance of the second part of this ideology: raising the culture of keeping family and individual memory. Remarkably, this culture may also act as a defense against a variety of other risks of dehumanization and moral degradation of civilization.

\r\n

Provoking instability in nature. For instance, initiating the greenhouse effect and climatic changes.

\r\n

Threats of civilization destruction engendered by new technologies and civilization evolution (civilization dynamics). These are threats which humankind must learn to handle as new technologies emerge and space is developed (space expansion). For example, the emergence of the information society gave rise to a whole industry handling the security problems (cyber security) that arise when using computer and telecom technologies. The necessity of diverting huge resources to solving the security problems associated with new technologies is an inevitable prerequisite of progress. It must be understood and taken for granted that solving the problems of security for each new technological or civilizational breakthrough (e.g., the creation of extraterrestrial space colonies) may come to be many times as costly as the price of its materialization. But this is the only way of ensuring the security of progress, including that of space expansion.

\r\n

 

\r\n

Threat of life destruction on a space scale

\r\n

These are largely hypothetical threats, but the known cases of collisions and explosions of galaxies indicate that they cannot simply be ignored. These are:

\r\n\r\n

 

\r\n

Indestructibility as civilization’s principal supertask

\r\n

The presence of a huge number of threats to the survival of civilization makes civilization’s indestructibility the main task, or rather, given its scale and importance, the central supertask. The other global civilizational supertasks and tasks, such as extension of human life and rescuing mankind from diseases, hunger, stark social inequality (misery, poverty), crime and terrorism, largely become senseless and lose their moral potential if the central supertask – civilization’s indestructibility – is not being handled. Ignoring this supertask implies a demonstrable indifference to the fate of civilization and to the destiny of future generations, thereby depriving the living generations of an ethical foundation because of the immorality and cruelty (to the future generations, thus doomed to death) of such a choice.

\r\n

So, what potential ways of solving this central supertask of civilization are available?

\r\n

Generally speaking, the current practice of responding to threats suggests looking for ways of guarding against each one of them. But the quantity and scale of the threats of civilization destruction, as well as the fundamental impossibility of defending against them in any way other than by breaking the dependence of civilization’s fate on the places where these threats exist, lead to the conclusion that a relatively reliable solution of the task of civilization’s indestructibility (relative to other possible solutions, say, creating protective shells or arks) can be provided only by way of space expansion. Yet, keeping in mind that there are no absolutely safe places in all of the Universe and, probably, across the Creation, the task of civilization’s salvation comes down to striving for a maximum distribution of civilization, while maintaining unity, across the maximum possible number of spaces, along with possession of considerable evacuation potential in each one of them.

\r\n

So civilization’s space expansion ought to imply surmounting civilization’s dependence on habitats which may be destroyed. And the first task along this line implies surmounting mankind’s dependence on the living conditions on the Earth and on the Earth’s fate. It may be solved by a purposive colonization of the solar system, that is, by establishing technologically autonomous colonies on all planets or their moons, where this is possible, and by creating autonomous interplanetary stations prepared for full technological independence from the Earth.

\r\n

This must be accompanied by a gradual shift of manufacturing operations that are critical for the fate of civilization and hazardous for the Earth’s environment beyond the limits of our planet, and their distribution across the solar system. The planet Earth shall gradually be assigned the role of an environmentally sound recreational zone designed for vacations and life after retirement.

\r\n

Solution of this task, i.e. the establishment of colonies technologically independent of the Earth and the shifting of critical operations beyond the Earth’s boundaries, may take about 1,000 years. Yet the history of the 20th century showed that humankind is capable of producing many technological surprises within a mere 100 years, and this despite the fact that its smooth development during those 100 years was impeded by two world wars, disastrous in their scale, and by numerous civil wars and bloody conflicts. Technological breakthroughs, given peaceful and goal-oriented activities, will probably make it possible to handle the tasks of severing civilization’s dependence on the fate of the Earth, the solar system, etc. at a much higher pace than can be imagined now.

\r\n

Let us try to define the individual phases of potential space expansion, each implying a marked upsurge in civilization’s indestructibility.

\r\n

Upon surmounting humanity’s dependence upon the fate of the Earth, next in line comes the task of overcoming the dependence of civilization’s fate on the fate of the solar system. This task will have to be coped with by colonizing spaces at a safe distance from our solar system. The expected time of accomplishment (given no technological breakthroughs that are incredible from a modern perspective) spans scores of thousands of years.

\r\n

Then come the tasks of severing civilization’s dependence upon the fate of individual intragalactic spaces and upon the fate of the Milky Way and the Metagalaxy. The possibility of solving these tasks will apparently be determined only by the potential emergence of new technologies unpredictable today.

\r\n

The same applies to solving the subsequent tasks, say, doing away with civilization’s dependence upon the fate of the Universe. It seems now that the solution of this kind of task will be possible through the control of all critical processes running in the Universe, or through discovering technologies enabling transportation to other universes (if any exist), or by way of acquiring technologies for the creation of new universes suitable as backup (evacuation) living spaces for civilization. 

\r\n

An absolute guarantee of civilization’s safety and indestructibility can be produced only by control of the Creation, if that is achievable and feasible in principle. But it is precisely this option that any civilization in the Cosmos must strive at so as to be absolutely sure of its indestructibility.

\r\n

Assume that Humanity is not the only civilization setting itself the supertask of indestructibility. What will happen given a meeting with other civilizations setting similar tasks?

\r\n

It would be safe to assume, at this point of the reasoning, the natural occurrence of an objective law, which may be referred to as the Ethical Filter Law.

\r\n

 

\r\n

Ethical Filter Law[1]: only civilizations with a rather high ethical potential, one that bars them from self-annihilation once technologies capable of turning into means of mass destruction during intra-civilization conflicts become available, can evolve up to the level of a civilization capable of space expansion on an interplanetary and intergalactic scale.

\r\n

In other words, civilizations with high technologies at hand but failing to learn to behave are destroyed, like any inadequately developed civilization, by natural disasters which they are incapable of managing for lack of the appropriate capabilities, which they had no time to develop, probably not least because of wasting their efforts and allotted time on self-annihilation (wars).

\r\n

Given two or more space civilizations which strive towards indestructibility and which have managed to get through the ethical filter, probably the most productive way for them to co-exist is a gradual unification for solving the tasks of indestructibility of all civilizations which have managed to get through the ethical filter.

\r\n

We may leave room for the existence of totalitarian civilizations capable of bypassing the above filter because they never faced the problem of self-annihilation, owing to their primordial unity. But, as can be seen from the historical experience of humankind, totalitarian civilizations (regimes) are more prone to undermining their own human potential through the repressive mechanisms keeping them afloat, and are not capable of generating effective incentives for progressive development, primarily technological development. That is, they are unviable in principle.

\r\n

The specific principles of interaction with such totalitarian space civilizations should therefore be developed when this type of problem emerges, if it becomes clear that it really can arise. Meanwhile we may treat the possibility of meeting such civilizations, which may turn out to be hostile towards humankind, like any other space threat, whose repulsion will depend upon the availability of sufficient civilizational capacities for handling this kind of task.

\r\n

Qualities of an indestructible civilization

\r\n

Let us define the qualities rendering a civilization indestructible. In doing so, it is necessary to answer a number of questions:

\r\n\r\n

Apparently it is the civilization keen to augment its potential for meeting the threats and risks of its destruction that has the better chance of becoming indestructible.

\r\n\r\n

An indestructible civilization has policies stimulating the responsibility of the current generations towards the next ones. Conversely, civilizations that deem it senseless to show deep concern for their future and for the fate of upcoming generations are doomed either to gradual self-extermination or to destruction upon the very first apocalypse.

\r\n

Following below are only the answers and conclusions; the questions are implicit:

\r\n

- An indestructible civilization must strive to sever the dependence of its fate on the fate of the place of its original and current habitation, i.e. it must strive for space expansion.

\r\n

- An indestructible civilization must strive to increase its population and to raise the quality of life and skills of each individual. Apparently, given the colonization of new cosmic outreaches, the bigger the population and capabilities, or, conditionally speaking, the civilization’s human potential, the bigger its capacity for handling the problems of progress, space expansion, and ensuring its permanent prosperity and security.

\r\n

- An indestructible civilization must strive for unity. All efforts towards civilization’s development and space expansion will be of no avail should civilization disintegrate to an extent rendering it incapable of solving the evacuation tasks of rescuing those who happen to be in an area of disastrous manifestations of the space elements.

\r\n

- An indestructible civilization must strive to raise the ethical standards of its development, for this will permit it not to destroy itself upon getting hold of ever newer technologies (which can be used as means of mass destruction) and to maintain civilization’s unity, which will in turn provide opportunities for handling the tasks of mass trans-cosmic evacuation, the tasks of trans-generation responsibility, and other indestructibility problems.

\r\n

 

\r\n

Concerning the necessity of developing theoretical principles of handling the tasks of humankind indestructibility

\r\n

One can ascertain the existence of objective threats to human civilization by turning, for example, to the materials on the “Multiverse Dossier” site. Similarly, there are objectively existing civilizational capabilities which will enable it to counter possible catastrophes. Apparently, these capabilities must be managed: the tasks of building them up must be set, and the factors augmenting these capacities must be accounted for and promoted. There is a need for scientific concepts and theories underpinning the problems of controlling civilization’s indestructibility potential.

\r\n

It is suggested that the following concepts be used as the initial steps towards the development of a scientific frame of reference for civilization indestructibility problems:

\r\n

- civilization indestructibility potential;

\r\n

- civilization competitiveness;

\r\n

- competitiveness of social components making up civilization.

\r\n

Civilization indestructibility capacities are defined as the qualities, achievements and characteristics of a civilization that enable it, given the emergence of circumstances threatening its degradation or destruction, to counteract these developments and prevent its death or degradation.

\r\n

There are a great many objective developments (threats, risks) which may, given a certain course of events, lead to civilization’s collapse, i.e. come to be stronger than it or, as is routinely said, get the better of it. Yet civilization is known to have certain capacities, qualities and capabilities which may enable it to counteract these circumstances. That is, objectively, there is some relation (ratio) of potential forces. Let us refer to this relation as competition. Then it would be safe to say that there is an objective competition between the developments capable of destroying civilization and civilization’s capacities to counteract these circumstances and surmount them. It is precisely civilization’s capacity to counteract the potential circumstances (threats, risks) which may destroy or weaken it that we shall refer to as civilization competitiveness.

\r\n

Apparently, civilization competitiveness, just as any capabilities, may be developed by, say, building up competitive advantages (indestructibility capacities).

\r\n

Now turn to the concepts of competitiveness of social components making up civilization.

\r\n

Civilization consists, first of all, of its carriers: people and the social structures they belong to. In reality our civilization is made up of nations (both state nations and ethnic nations). History shows that civilization's progress and well-being depend largely on the progress and well-being of individual nations, and on the prosperity of societies, families and individuals.

Prosperous nations push civilization forward: their living conditions allow their members to take on the tasks that advance civilization's progress. At the same time, individual nations also face problems and circumstances that may force them, and with them the entire civilization, to regress, and that may even lead an individual nation to destruction.

It is therefore important to understand that just as there is, quite objectively, a competition between civilization and the circumstances that could destroy it, so there is, just as objectively, a competition between each nation and the circumstances that could weaken it and bring it to a state where, instead of being one of the forces strengthening the competitiveness of civilization at large, it becomes a factor weakening civilization. The nation's competitiveness in securing its permanent prosperity should therefore become each nation's national idea. Adhering to it would give the nation objective criteria for its vital decisions, namely assessing their impact on the competitive potential and competitive advantages that secure its permanent prosperity.

Of course, a nation's competitiveness here means the same thing as it does for civilization as a whole: competitiveness against the risks, threats and circumstances that could lead the nation to catastrophe, and by no means competition with other nations, for that kind of competition is a path to the destruction or weakening of the competing nations and of civilization as a whole. Ultimately, a correctly understood idea of national competitiveness should bring nations together to secure the indestructibility of the entire human civilization. We are already seeing positive movement in this direction, both in collective space exploration aboard the International Space Station and in the development of the European Union, made up of countries that had fought each other for centuries. In most advanced countries, security, prosperity and the permanence of the nation's prosperity have already become a national idea. In October of last year, national competitiveness was declared the national idea of Kazakhstan: in the speech of N. A. Nazarbaev, President of the Republic of Kazakhstan, at the 12th session of the Assembly of the Peoples of Kazakhstan (Astana, 24 October 2006, http://www.zakon.kz/our/news/print.asp?id=30074242 ), the nation's competitiveness was declared the national idea, and, notably, not a word was said about competition with other nations; the point was solely the nation's competitiveness with respect to the challenges and problems facing the country. Kazakhstan's high rates of development in recent years speak for the fruitfulness of precisely this choice.

Continuing down civilization's social structure, it is right to speak of families and individuals. The family, no doubt, largely determines the individual's development, daily condition and capacities. It is therefore also right to speak of the competitiveness of families and individuals, again using the term “competitiveness” as defined above: not competition between individual families and persons, which can in principle undermine the ethical and other capacities of the nation and civilization, but only competition with potential challenges, threats, risks, developments and problems.

Of course, an individual's condition and competitiveness depend not only on the family but also on the other social structures he or she is involved with. What is more, for some such structures the traditional notion of competitiveness does mean competition precisely between structures of that kind: competition between firms or other for-profit organizations, between parties, and so on. One must admit that competitive struggle between such entities is one of the driving forces of the technological, economic and social change of modern civilization. At the same time, introducing an alternative reading of “competition” and “competitiveness” as competition with challenges, developments, risks, threats and problems (which is what the framework of a theory of civilization indestructibility envisages) may gradually foster a more ethically harmonious base of values underlying the relationships of organizations of this (commercial, political and similar) type, without lowering the dynamics of civilization's technological and economic change. The point is that competition in its traditional sense is civilization's economic and technological driving force but, to put it mildly, does not promote the development and strengthening of civilization's ethical potential. The question is whether the alternative reading of competition put forward by the theory of civilization indestructibility can remove or mitigate this drawback of the traditional reading, improving the ethical component and introducing a refining ambiguity into the semantics of “competition”, while preserving the vital mechanisms that secure civilization's development dynamics under the traditional reading.

Ray Bradbury described a “butterfly effect” in one of his stories: the hero, on an excursion into the past, crushes a butterfly, and the world he returns to has become much worse. How much greater, then, is the negative impact on humanity's progress and competitiveness of the premature death of people who could have contributed to its development and prosperity. The effect is well captured by John Donne's words, “never send to know for whom the bell tolls; it tolls for thee”. Any person who has died might have been precisely the one who could save, cure, pull someone out of a critical situation, or invent or create something that, even indirectly, helps another person, who in turn gains the opportunity to save any one of us. Having died, he can no longer do so. The death of each person reduces the human potential of civilization, the principal potential of its indestructibility.

Human potential is the basis of the competitiveness of every nation and of civilization as a whole; put differently, the competitiveness of each person is a foundation of the competitiveness of civilization. That the greatest problems lie precisely in this area is evidenced, for example, by the fact that about 1 million people worldwide commit suicide every year; the odds turned against them. Many more die because of what can mildly be called the ethical imperfection of human relations: murders (including in the course of military operations), violence, famine, and the failure to deliver adequate medical care and other assistance. Rethinking the terms “competition” and “competitiveness” in light of the concepts of humanity indestructibility theory (HUT), which is built on these terms, offers hope of improving the current situation.

What else can theoretical work on the problems of civilization indestructibility produce? Two directions are worth noting:

The importance of a set of objective indicators and decision-making criteria that account for the vital need to build up civilization's indestructibility potential and competitive advantages can be judged from the closing of the Moon exploration programmes in the 1970s. The bulk of the huge resources invested in those projects was, in the end, simply buried, because neither the USA nor the USSR had sufficiently convincing motives to continue them. As a result, several decades of civilization's expansion into space were lost, and the resources and funds that could have been invested in space expansion were instead spent on ambitions along lines devastating for civilization, namely the US war in Vietnam and the USSR's war in Afghanistan.

The idea of developing a culture of keeping family and individual memory of each person living on Earth, an integral component of HUT and a major defence against the potentially incorrect, and hence destructive, application of the theory's key concepts, is an example of a systemic solution that raises the competitiveness of civilization and its social components.

Modern digital technologies make it possible to keep a memory of each person. If a culture emerges of keeping digital information (memory) about oneself, one's relatives and friends, and of passing it on from generation to generation, the best features of each person can be remembered forever. Everyone would be able to preserve their ideas and thoughts for good, to keep the memory of the most interesting and important moments of their life and of the people they knew, loved and held dear. Each person would thus remain a permanent part of human civilization's memory. No one would leave this world vanishing into thin air; everyone would always be remembered.

Such a culture of keeping family and individual memory could improve humanity's competitiveness by providing:

- Higher responsibility:

  - of the living generations toward the generations to come;

  - of state leaders for the decisions they make;

  - of people toward one another.

- Better human relations:

  - between representatives of different generations within the family;

  - a higher status for each person, since everyone will always be part of human civilization's memory.

- A defence mechanism:

  - against political speculations such as “life for the sake of future generations”;

  - against cruelty by the authorities;

  - against cruelty in interpersonal relations.

- A mechanism for refining human nature and building up civilization's ethical potential.

- A core, a nucleus, a root securing the unity of civilization during its space expansion, as it moves across immense distances.

To summarize the arguments for developing a theoretical solution to the task of civilization indestructibility: the number of sub-tasks that must be solved on the way to the main task may turn out to be huge, and virtually every one of them demands its own paradigms and its own theoretical elaboration. In the first phase of developing the theory of civilization indestructibility it therefore makes sense to speak of general theoretical principles, of a general theory of indestructibility, and only later, as deeper solutions to individual, special and partial tasks are found, to begin building special theories tied to the development of particular capacities (technological, ethical, evacuation, etc.) and to raising the competitiveness (in the sense of indestructibility theory) of individual social components.

What can posing the problems of civilization indestructibility and space expansion give to the generations of people now living?

- Reduced risk of war: nothing undermines civilization's indestructibility capacities as heavily as wars do. The resources of the military-industrial complex must be redirected to the tasks of, and the creation of capacities for, space expansion and the colonization of the cosmos.

- A justification for higher living standards: only high living standards allow the greatest possible number of people to master sophisticated technologies, realize their talents through them, and contribute to the development of ever newer and more sophisticated technologies. Governments will increasingly understand that national competitiveness depends largely on people's living standards, and that social programmes are not wasted money but a foundation and an important prerequisite of nations' permanent prosperity and competitiveness.

- A new sense of meaning in human life: a more responsible attitude toward one's own and others' lives and higher ethical standards in human relations, hence lower rates of crime and terrorism.

- A major ideological basis for conflict resolution and for the unification of nations and of civilization as a whole.

- New living spaces.

- New sources of raw materials.

- New employment sectors and jobs.

- New markets.

REFERENCES

1. Lefevre V.A. Space Subject. Moscow: Kogito-Centr Publishing House, 2005. 220 p.

2. Nazaretyan A.P. Civilizational Crises in the Context of Universal History. 2nd ed. Moscow: Mir Publishing House, 2004. 367 p.

3. Hvan M.P. A Violent Universe: From the Big Bang to Accelerated Expansion, from Quarks to Superstrings. Moscow: URSS Publishers, 2006. 408 p.

4. Narlikar, Jayant. Violent Phenomena in the Universe. Oxford UP, 1984. 246 p.

[1] This law is known, in a more benign formulation not tied to the problems of civilization's space expansion and competitiveness, as the law of techno-humanitarian balance [Nazaretyan A.P., 2004, p. 112]: “the greater the power of productive and combat technologies, the greater the need for more sophisticated tools of cultural regulation for preserving the society”.

" } }, { "_id": "xppurdoNtytG5zzda", "title": "\"Irrationality in Argument\"", "pageUrl": "https://www.lesswrong.com/posts/xppurdoNtytG5zzda/irrationality-in-argument", "postedAt": "2010-12-17T05:46:43.337Z", "baseScore": 4, "voteCount": 14, "commentCount": 12, "url": null, "contents": { "documentId": "xppurdoNtytG5zzda", "html": "

Here's a poetic blog post by Julian Assange (source). I found the first paragraph relevant:

\n
\n

27 Aug 2007 - Irrationality in Argument

\n

The truth is not found on the page, but is a wayward sprite that bursts forth from the readers mind for reasons of its own. I once thought that the Truth was a set comprised of all the things that were true, and the big truth could be obtained by taking all its component propositions and evaluating them until nothing remained. I would approach my rhetorical battles as a logical reductionist, tearing down, atomizing, proving, disproving, discarding falsehoods and reassembling truths until the Truth was pure, golden and unarguable. But then, when truth matters most, when truth is the agent of freedom, I stood before Justice and with truth, lost freedom. Here was something fantastical, unbelievable and impossible, you could prove that (A => B) and (B => C) and (C => D) and (D => F) Justice would nod its head and agree, but then, when you turned to claim your coup de grace, A => F irrevocably, Justice would demur and revoke the axiom of transitivity, for Justice will not be told when F stands for freedom. Transitivity is evoked when Justice imagines F and finding the dream a pleasurable one sets about gathering cushions to prop up their slumber. Here then is the truth about the Truth; the Truth is not bridge, sturdy to every step, a marvel of bound planks and supports from the known into the unknown, but a surging sea of smashed wood, flotsam and drowning sailors. So first, always pick your poetic metaphor, to make the reader want to believe, then the facts, and -- miracle! -- transitivity will descend from heaven, invoked as justification for prejudice.

\n

Often we suffer to read, \"But if we believe X then we'll have to...\", or \"If we believe X it will lead to...\". This has no reflection on the veracity of X and so we see that outcomes are treated with more reverence than the Truth. It stings us, but natural selection has spun its ancestral yarns from physically realized outcomes, robustly eschewing the vapor thread of platonism as an abomination against the natural order, fit only for the gossip of monks and the page.

\n

Yet just as we feel all hope is lost and we sink back into the miasma, back to the shadow world of ghosts and gods, a miracle arises; everywhere before the direction of self interest is known, people yearn to see where its compass points and then they hunger for truth with passion and beauty and insight. He loves me. He loves me not. Here then is the truth to set them free. Free from the manipulations and constraints of the mendacious. Free to choose their path, free to remove the ring from their noses, free to look up into the infinite voids and choose wonder over whatever gets them though. And before this feeling to cast blessings on the profits and prophets of truth, on the liberators and martyrs of truth, on the Voltaires, Galileos, and Principias of truth, on the Gutenburgs, Marconis and Internets of truth, on those serial killers of delusion, those brutal, driven and obsessed miners of reality, smashing, smashing, smashing every rotten edifice until all is ruins and the seeds of the new.

\n
\n

I've only read a few Less Wrong articles so far, but the first paragraph easily follows from my current models. Since explicit reasoning easily fails, people quite understandably refuse to accept arguments based on transitivity. So in order to be understood, one should carefully craft arguments one inferential step away from the audience's current mental state. A metaphor, because of all the ideas it simultaneously evokes, powerfully takes advantage of a brain's multiprocessing ability. That's why a metaphor can remove inferential steps and be an excellent way of bringing us to our senses and making us reconsider a vast network of cached knowledge.

" } }, { "_id": "5NMCHfTjiDRazfjcp", "title": "Singularity Institute pitch and new project plus other organizations in our ecosystem", "pageUrl": "https://www.lesswrong.com/posts/5NMCHfTjiDRazfjcp/singularity-institute-pitch-and-new-project-plus-other", "postedAt": "2010-12-17T05:09:05.005Z", "baseScore": 8, "voteCount": 14, "commentCount": 13, "url": null, "contents": { "documentId": "5NMCHfTjiDRazfjcp", "html": "

A week ago I made a pitch for the Singularity Institute to a crowd of interested potential donors, along with a number of leaders of other non-profit organizations with relatively radical and innovative goals.  The videos are here, and should be a good introduction to a significant part of the noosphere for people not yet familiar with it.

\n

 

" } }, { "_id": "ccnw8CDyhKDcGH4jo", "title": "High Failure-Rate Solutions", "pageUrl": "https://www.lesswrong.com/posts/ccnw8CDyhKDcGH4jo/high-failure-rate-solutions", "postedAt": "2010-12-16T23:09:34.694Z", "baseScore": 7, "voteCount": 9, "commentCount": 11, "url": null, "contents": { "documentId": "ccnw8CDyhKDcGH4jo", "html": "

Short interview, doesn't go into too much depth, but makes an interesting point relevant to LW:

\n

\"When you’re mitigating a complex problem, you become a bad picker. It’s too complicated, so you can’t separate the ideas that are going to work and the ones that won’t. [...] But if we were gonna use the venture-capital method – if we were willing to admit we weren’t competent pickers and even all the wise men gathering in the Oval Office were not going to be able to pick the winner from the loser – we would have gone in there with 30 ways of plugging up that hole at one time and realized that maybe 29 were gonna fail and one was going to stop the ecological disaster. But that’s not what we did.\"

\n

http://www.montereycountyweekly.com/archives/2010/2010-Nov-11/a-carmel-visionarys-debut-book-maps-out-how-we-can-save-our-society-from-collapse/1/
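A rough way to see the force of this, using the interviewee's illustrative 1-in-30 figure rather than real data: if each of 30 independent long-shot fixes has about a 1/30 chance of working, the chance that at least one succeeds is already around two thirds.

```python
# Rough arithmetic behind the quoted 'venture-capital method'; the 1-in-30
# success rate is just the interviewee's illustrative figure, not data.
p_single = 1 / 30
n_attempts = 30
p_at_least_one = 1 - (1 - p_single) ** n_attempts
print(round(p_at_least_one, 2))  # ~0.64: many parallel long shots beat trying to pick the one winner
```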

\n

 

\n

Are there any potential solutions other than \"create the first GAI and ensure it's provably Friendly\" that can be advanced simultaneously?

" } }, { "_id": "2eWwqcrAKMZifxQCE", "title": "December 2010 Southern California Meetup", "pageUrl": "https://www.lesswrong.com/posts/2eWwqcrAKMZifxQCE/december-2010-southern-california-meetup", "postedAt": "2010-12-16T22:28:29.049Z", "baseScore": 16, "voteCount": 11, "commentCount": 18, "url": null, "contents": { "documentId": "2eWwqcrAKMZifxQCE", "html": "

A meetup in Southern California will occur on Sunday December 19, 2010. The meetup will start around 3:30PM and run for at least 2 hours and possibly 4 or 5. Anna Salamon and Yvain are very likely to be in attendance, as well as people from the last few meetups who may have projects to talk about, if people are interested. Bring guests if you like. The location of the meetup will be...

\n

...at the IHOP across from John Wayne Airport, about a mile from UC Irvine.

\n

\n

For those interested in carpooling: (1) driver needed in Santa Barbara (please comment if you can make it!), (2) driver available in San Diego, (3) driver available in Lake Forest, (4) driver available in Torrance, (5) driver available in Huntington Beach.

\n

The format for past meetups has varied based on the number of attendees and their interests, at various points we have either tried or considered: paranoid debating, small group \"dinner party conversations\", structured rationality exercises, large discussions with people sharing personal experiences with sleep and \"nutraceutical\" interventions for intelligence augmentation, and specialized subprojects to develop tools for quantitatively estimating the value of things like cryonics or existential risk interventions.

\n

Apologies for the short notice.  I was trying to call and email around to see what people's winter holiday schedules were like and wasn't sure if this would come together.  Based on one-on-one conversations I think we'll have pretty good turnout and interesting stuff to talk about!

" } }, { "_id": "6r5oojytLRb4b9QdF", "title": "What do you mean by rationalism?", "pageUrl": "https://www.lesswrong.com/posts/6r5oojytLRb4b9QdF/what-do-you-mean-by-rationalism", "postedAt": "2010-12-16T19:27:11.084Z", "baseScore": 3, "voteCount": 7, "commentCount": 15, "url": null, "contents": { "documentId": "6r5oojytLRb4b9QdF", "html": "

I've been lurking here a bit, and am trying to understand what people here mean by rationalism. Many articles here seem to refer to discussion participants as rationalists while seemingly meaning very different things, including intelligent, socially awkward, well-educated, and unencumbered by education. I'm trying to make a little more sense of the word/concept.

\n

Surely it does not refer to rationalism in the sense of the empiricism/rationalism divide, because it doesn't seem to be used in quite that way.

\n

 

" } }, { "_id": "vATSLojDaQzvS3DDj", "title": "Does anti-discrimination look like discrimination?", "pageUrl": "https://www.lesswrong.com/posts/vATSLojDaQzvS3DDj/does-anti-discrimination-look-like-discrimination", "postedAt": "2010-12-16T17:29:31.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "vATSLojDaQzvS3DDj", "html": "

By my reckoning, affirmative action should often make organizations look more biased in the direction they seek to correct, rather than less.

\n

Imagine two groups of people in roughly equal numbers, type A and type B. It is thought by many that B people are unfairly discriminated against in employment. The management of organisation X believe this, so they create a policy to ensure new employees include roughly equally many As and Bs.

\n

The effects of this policy include:

\n
  1. a large benefit to many Bs previously near the threshold for being employed;
  2. a small cost to all type Bs working at X, who will to varying degrees be suspected more of not meriting their position;
  3. a large cost to many As previously near the threshold for being employed;
  4. a small benefit to all As working at X, who will to varying degrees be suspected of more than meriting their position.
\n

Look at the effects on type Bs. Those well clear of the threshold have a net cost, while those near enough to it have a net benefit. This should decrease the motivation of those well above the threshold to work at X and increase the motivation of those of lower ability to try. This should decrease the average quality of type B employees at X, even before accounting for the new influx of lower quality candidates. At the same time the opposite should happen with type As.

\n

Now suppose the only quota at X is in hiring. Promotions have no similar adjustment. On top of whatever discrimination exists against Bs, there should now be even fewer Bs promoted, because they are on average lower quality at X, due to the affirmative action in hiring. Relative to less concerned organisations, X should end up with a greater proportion of As at the top of the organization.
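A minimal sketch of this model (all numbers below are made up, and the feedback effect on who chooses to apply is left out) shows both claimed effects: the average ability of B hires falls below that of A hires, and As end up over-represented among promotions even though equal numbers of As and Bs are hired.

```python
import random
import statistics

# Toy model of the scenario above, with made-up numbers. Both groups share the
# same ability distribution, but (standing in for upstream discrimination or a
# smaller applicant pool) only half as many Bs apply, while the quota requires
# X to hire As and Bs in equal numbers.
random.seed(0)
applicants_A = sorted((random.gauss(0, 1) for _ in range(100_000)), reverse=True)
applicants_B = sorted((random.gauss(0, 1) for _ in range(50_000)), reverse=True)

hired_A = [a for a in applicants_A if a > 1.0]   # X's normal hiring bar
hired_B = applicants_B[:len(hired_A)]            # quota: equally many Bs, reaching further down

print(round(statistics.mean(hired_A), 2), round(statistics.mean(hired_B), 2))

# Promotions: the top 10% of all hires by ability, with no quota applied.
pool = sorted([(a, 'A') for a in hired_A] + [(b, 'B') for b in hired_B], reverse=True)
top = pool[:len(pool) // 10]
print('A share of promotions:', round(sum(g == 'A' for _, g in top) / len(top), 2))
# The feedback effect (strong Bs avoiding X, weaker Bs applying) is not modelled
# here and would widen the gap further.
```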

\n

Is this what really happens? If not, why not?


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "zeyfsBbCZfQDHLvz4", "title": "Username switch karma donation request", "pageUrl": "https://www.lesswrong.com/posts/zeyfsBbCZfQDHLvz4/username-switch-karma-donation-request", "postedAt": "2010-12-16T13:50:40.319Z", "baseScore": -16, "voteCount": 14, "commentCount": 9, "url": null, "contents": { "documentId": "zeyfsBbCZfQDHLvz4", "html": "

I decided the new me needs a different user name (megalomania, mostly), but I see no way to do this automatically. No important posts will be lost - I am mostly a lurker for now. So I am switching xamdam->Dr_Manhattan and requesting some karma donation (please upvote the comment below), at least to 20 points. xamdam's current karma is 712. Thanks!

" } }, { "_id": "yQ5ZWzNipAych24XW", "title": "Expansion of \"Cached thought\" wiki entry", "pageUrl": "https://www.lesswrong.com/posts/yQ5ZWzNipAych24XW/expansion-of-cached-thought-wiki-entry", "postedAt": "2010-12-16T07:27:07.332Z", "baseScore": 9, "voteCount": 7, "commentCount": 14, "url": null, "contents": { "documentId": "yQ5ZWzNipAych24XW", "html": "

\"Cached Thought\" wiki entry has been copied below for you connivance.

\n
\n

 

\n

cached thought is an answer that was arrived at by recalling the old conclusion, rather than performing the reasoning from scratch. Cached thoughts can result in the maintenance of a position when evidence should force an update. Cached thoughts can also result in a lack of creative approaches to problem-solving if one repeats the same cached thoughts rather than constructing a new approach.

\n

What is generally called common sense is more or less a collection of cached thoughts.

\n

 

\n
\n

The above entry focuses only on the negative side of cached thoughts, probably because they can be a large barrier to rationality. In order to overcome this barrier, and to help others overcome it, it is necessary to understand why \"cached thoughts\" were historically valuable to our ancestors and in what ways they are valuable today.

\n

'''Cached thoughts''' also allow complex problems to be handled with a relatively small number of simple components. Put together, these components only approximate the actual problem, because they are slightly flawed '''cached thoughts'''. Valid conclusions can be reached more quickly with these slightly flawed cached thoughts than without them. Such conclusions should be rechecked without using '''cached thoughts''' if a high probability of correctness is necessary, or if the '''cached thoughts''' are more than slightly flawed.
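A loose programming analogy (purely illustrative; the function names and the approximation are invented for the example): a cached thought behaves like a memoized, slightly approximate sub-answer that gets reused instead of being re-derived, and that should be recomputed exactly when correctness really matters.

```python
from functools import lru_cache

def exact_subanswer(x):
    # stand-in for slow, careful reasoning from scratch
    return sum(i * i for i in range(x))

@lru_cache(maxsize=None)
def cached_subanswer(x):
    # the 'cached thought': remembered and slightly flawed, but fast
    return x ** 3 / 3  # analytic approximation of the sum above

def quick_conclusion(xs):
    # assemble a complex answer from cached, approximate components
    return sum(cached_subanswer(x) for x in xs)

def careful_conclusion(xs):
    # recheck without cached thoughts when high confidence is required
    return sum(exact_subanswer(x) for x in xs)

print(quick_conclusion([10, 100, 1000]), careful_conclusion([10, 100, 1000]))
```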

\n

Is this an appropriate expansion of the wiki entry? The words are drawn from my observation of the world. How else should the above wiki entry be expanded?

" } }, { "_id": "bjAdCcFBFTHMaTXeo", "title": "Link: \"A Bayesian Take on Julian Assange\"", "pageUrl": "https://www.lesswrong.com/posts/bjAdCcFBFTHMaTXeo/link-a-bayesian-take-on-julian-assange", "postedAt": "2010-12-16T03:20:33.153Z", "baseScore": 12, "voteCount": 9, "commentCount": 13, "url": null, "contents": { "documentId": "bjAdCcFBFTHMaTXeo", "html": "

From Nate Silver of FiveThirtyEight: A Bayesian Take on Julian Assange.

" } }, { "_id": "GrtbTAPfkJa4D6jjH", "title": "Confidence levels inside and outside an argument", "pageUrl": "https://www.lesswrong.com/posts/GrtbTAPfkJa4D6jjH/confidence-levels-inside-and-outside-an-argument", "postedAt": "2010-12-16T03:06:07.660Z", "baseScore": 239, "voteCount": 190, "commentCount": 192, "url": null, "contents": { "documentId": "GrtbTAPfkJa4D6jjH", "html": "

Related to: Infinite Certainty

\n

Suppose the people at FiveThirtyEight have created a model to predict the results of an important election. After crunching poll data, area demographics, and all the usual things one crunches in such a situation, their model returns a greater than 999,999,999 in a billion chance that the incumbent wins the election. Suppose further that the results of this model are your only data and you know nothing else about the election. What is your confidence level that the incumbent wins the election?

Mine would be significantly less than 999,999,999 in a billion.

\n

When an argument gives a probability of 999,999,999 in a billion for an event, then probably the majority of the probability of the event is no longer in \"But that still leaves a one in a billion chance, right?\". The majority of the probability is in \"That argument is flawed\". Even if you have no particular reason to believe the argument is flawed, the background chance of an argument being flawed is still greater than one in a billion.

\n


More than one in a billion times a political scientist writes a model, ey will get completely confused and write something with no relation to reality. More than one in a billion times a programmer writes a program to crunch political statistics, there will be a bug that completely invalidates the results. More than one in a billion times a staffer at a website publishes the results of a political calculation online, ey will accidentally switch which candidate goes with which chance of winning.

So one must distinguish between levels of confidence internal and external to a specific model or argument. Here the model's internal level of confidence is 999,999,999/billion. But my external level of confidence should be lower, even if the model is my only evidence, by an amount proportional to my trust in the model.
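A minimal sketch of that adjustment, treating 'the model is flawed' as reverting to the prior (the 99% trust figure below is an assumption for illustration, not anything FiveThirtyEight publishes):

```python
def external_probability(p_model_sound, p_internal, prior=0.5):
    # weight the model's internal answer by how much you trust the model,
    # and fall back to the prior otherwise
    return p_model_sound * p_internal + (1 - p_model_sound) * prior

p = external_probability(p_model_sound=0.99, p_internal=1 - 1e-9)
print(p)      # ~0.995
print(1 - p)  # ~0.005 chance the incumbent loses -- millions of times the model's 1e-9
```

Even with 99% trust in the model, the external probability of an upset is dominated by the chance that the model itself is wrong.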

\n



Is That Really True?

\n

One might be tempted to respond \"But there's an equal chance that the false model is too high, versus that it is too low.\" Maybe there was a bug in the computer program, but it prevented it from giving the incumbent's real chances of 999,999,999,999 out of a trillion.

The prior probability of a candidate winning an election is 50%1. We need information to push us away from this probability in either direction. To push significantly away from this probability, we need strong information. Any weakness in the information weakens its ability to push away from the prior. If there's a flaw in FiveThirtyEight's model, that takes us away from their probability of 999,999,999 in a billion, and back closer to the prior probability of 50%.

We can confirm this with a quick sanity check. Suppose we know nothing about the election (ie we still think it's 50-50) until an insane person reports a hallucination that an angel has declared the incumbent to have a 999,999,999/billion chance. We would not be tempted to accept this figure on the grounds that it is equally likely to be too high as too low.

A second objection covers situations such as a lottery. I would like to say the chance that Bob wins a lottery with one billion players is 1/1 billion. Do I have to adjust this upward to cover the possibility that my model for how lotteries work is somehow flawed? No. Even if I am misunderstanding the lottery, I have not departed from my prior. Here, new information really does have an equal chance of going against Bob as of going in his favor. For example, the lottery may be fixed (meaning my original model of how to determine lottery winners is fatally flawed), but there is no greater reason to believe it is fixed in favor of Bob than anyone else.2

Spotted in the Wild

The recent Pascal's Mugging thread spawned a discussion of the Large Hadron Collider destroying the universe, which also got continued on an older LHC thread from a few years ago. Everyone involved agreed the chances of the LHC destroying the world were less than one in a million, but several people gave extraordinarily low chances based on cosmic ray collisions. The argument was that since cosmic rays have been performing particle collisions similar to the LHC's zillions of times per year, the chance that the LHC will destroy the world is either literally zero, or else a number related to the probability that there's some chance of a cosmic ray destroying the world so miniscule that it hasn't gotten actualized in zillions of cosmic ray collisions. Of the commenters mentioning this argument, one gave a probability of 1/3*10^22, another suggested 1/10^25, both of which may be good numbers for the internal confidence of this argument.

But the connection between this argument and the general LHC argument flows through statements like \"collisions produced by cosmic rays will be exactly like those produced by the LHC\", \"our understanding of the properties of cosmic rays is largely correct\", and \"I'm not high on drugs right now, staring at a package of M&Ms and mistaking it for a really intelligent argument that bears on the LHC question\", all of which are probably more likely than 1/10^20. So instead of saying \"the probability of an LHC apocalypse is now 1/10^20\", say \"I have an argument that has an internal probability of an LHC apocalypse as 1/10^20, which lowers my probability a bit depending on how much I trust that argument\".

In fact, the argument has a potential flaw: according to Giddings and Mangano, the physicists officially tasked with investigating LHC risks, black holes from cosmic rays might have enough momentum to fly through Earth without harming it, and black holes from the LHC might not3. This was predictable: this was a simple argument in a complex area trying to prove a negative, and it would have been presumptuous to believe with greater than 99% probability that it was flawless. If you can only give 99% probability to the argument being sound, then it can only reduce your probability in the conclusion by a factor of a hundred, not a factor of 10^20.
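The arithmetic behind that last sentence, with an assumed pre-argument risk estimate of 10^-6 purely for illustration:

```python
def combined_risk(p_sound, internal_risk, fallback_risk):
    # if the argument is sound, use its tiny internal number;
    # if it is flawed, fall back to whatever you believed without it
    return p_sound * internal_risk + (1 - p_sound) * fallback_risk

prior_risk = 1e-6  # assumed estimate before hearing the cosmic-ray argument
print(combined_risk(0.99, 1e-20, prior_risk))  # ~1e-8: a factor-of-100 reduction, not 10^14
```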

But it's hard for me to be properly outraged about this, since the LHC did not destroy the world. A better example might be the following, taken from an online discussion of creationism4 and apparently based off of something by Fred Hoyle:

\n
\n

In order for a single cell to live, all of the parts of the cell must be assembled before life starts. This involves 60,000 proteins that are assembled in roughly 100 different combinations. The probability that these complex groupings of proteins could have happened just by chance is extremely small. It is about 1 chance in 10 to the 4,478,296 power. The probability of a living cell being assembled just by chance is so small, that you may as well consider it to be impossible. This means that the probability that the living cell is created by an intelligent creator, that designed it, is extremely large. The probability that God created the living cell is 10 to the 4,478,296 power to 1.

\n
\n

Note that someone just gave a confidence level of 10^4478296 to one and was wrong. This is the sort of thing that should never ever happen. This is possibly the most wrong anyone has ever been.

It is hard to say in words exactly how wrong this is. Saying \"This person would be willing to bet the entire world GDP for a thousand years if evolution were true against a one in one million chance of receiving a single penny if creationism were true\" doesn't even begin to cover it: a mere 1/10^25 would suffice there. Saying \"This person believes he could make one statement about an issue as difficult as the origin of cellular life per Planck interval, every Planck interval from the Big Bang to the present day, and not be wrong even once\" only brings us to 1/10^61 or so. If the chance of getting Ganser's Syndrome, the extraordinarily rare psychiatric condition that manifests in a compulsion to say false statements, is one in a hundred million, and the world's top hundred thousand biologists all agree that evolution is true, then this person should preferentially believe it is more likely that all hundred thousand have simultaneously come down with Ganser's Syndrome than that they are doing good biology5
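The Planck-interval figure checks out, to within rounding (the age of the universe and the Planck time below are the standard textbook values):

```python
age_of_universe_s = 13.8e9 * 365.25 * 24 * 3600  # ~4.4e17 seconds
planck_time_s = 5.39e-44
print(f'{age_of_universe_s / planck_time_s:.1e}')  # ~8.1e60 Planck intervals, i.e. roughly 10^61 statements
```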

This creationist's flaw wasn't mathematical; the math probably does return that number. The flaw was confusing the internal probability (that complex life would form completely at random in a way that can be represented with this particular algorithm) with the external probability (that life could form without God). He should have added a term representing the chance that his knockdown argument just didn't apply.

Finally, consider the question of whether you can assign 100% certainty to a mathematical theorem for which a proof exists. Eliezer has already examined this issue and come out against it (citing as an example this story of Peter de Blanc's). In fact, this is just the specific case of differentiating internal versus external probability when internal probability is equal to 100%. Now your probability that the theorem is false is entirely based on the probability that you've made some mistake.

The many mathematical proofs that were later overturned provide practical justification for this mindset.

This is not a fully general argument against giving very high levels of confidence: very complex situations and situations with many exclusive possible outcomes (like the lottery example) may still make it to the 1/10^20 level, albeit probably not the 1/10^4478296. But in other sorts of cases, giving a very high level of confidence requires a check that you're not confusing the probability inside one argument with the probability of the question as a whole.

\n

Footnotes

\n

1. Although technically we know we're talking about an incumbent, who typically has a much higher chance, around 90% in Congress.

2. A particularly devious objection might be \"What if the lottery commissioner, in a fit of political correctness, decides that \"everyone is a winner\" and splits the jackpot a billion ways?\" If this would satisfy your criteria for \"winning the lottery\", then this mere possibility should indeed move your probability upward. In fact, since there is probably greater than a one in one billion chance of this happening, the majority of your probability for Bob winning the lottery should concentrate here!

3. Giddings and Mangano then go on to re-prove the original \"won't cause an apocalypse\" argument using a more complicated method involving white dwarf stars.

4. While searching creationist websites for the half-remembered argument I was looking for, I found what may be my new favorite quote: \"Mathematicians generally agree that, statistically, any odds beyond 1 in 10 to the 50th have a zero probability of ever happening.\" 

5. I'm a little worried that five years from now I'll see this quoted on some creationist website as an actual argument.

" } }, { "_id": "RHPxkWz4WhrzM6Q8H", "title": "Varying amounts of subjective experience", "pageUrl": "https://www.lesswrong.com/posts/RHPxkWz4WhrzM6Q8H/varying-amounts-of-subjective-experience", "postedAt": "2010-12-16T03:02:15.107Z", "baseScore": -6, "voteCount": 15, "commentCount": 35, "url": null, "contents": { "documentId": "RHPxkWz4WhrzM6Q8H", "html": "

It has been suggested that animals have less subjective experience than people. For example, it would be possible to have an animal that counts as half a human for the purposes of morality. This is an argument as to why that may be the case.

\n

If you're moving away from Earth at 87% of the speed of light, time dilation would make it look like time on Earth is passing half as fast. From your frame of reference, everyone will live twice as long. This obviously won't change the number of life years they live. You can't double the amount of good in the world just by moving at 87% of the speed of light. It's possible that there's simply a preferred frame of reference, and everything is based on people's speed relative to that, but I doubt it.
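The numbers here are just the Lorentz factor γ = 1/√(1 − v²/c²); 87% of c gives a factor very close to 2, and exactly 2 at v = (√3/2)c ≈ 86.6% of c.

```python
import math

def lorentz_gamma(beta):
    # time-dilation factor for speed beta = v / c
    return 1.0 / math.sqrt(1.0 - beta ** 2)

print(lorentz_gamma(0.87))              # ~2.03: Earth clocks appear to run at about half speed
print(lorentz_gamma(math.sqrt(3) / 2))  # exactly 2.0 at ~86.6% of the speed of light
```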

\n

Now consider if their brains were slowed down a different way. Suppose you uploaded someone and made the simulation run at half speed. Would they experience a life twice as long? This seems to be just slowing things down in a different way. I doubt it would change the total amount experienced.

\n

If that's true, it means that sentience isn't something you either have or don't have. There can be varying amounts of it. Also, someone whose brain has been slowed down would be less intelligent by most measures, so this is some evidence that subjective experience correlates with intelligence.

\n

Edit: replaced \"sentience\" with the more accurate \"subjective experience\".

" } }, { "_id": "775AJbZQFDmfcJS9b", "title": "Link: My something-like-Friendliness-research blog", "pageUrl": "https://www.lesswrong.com/posts/775AJbZQFDmfcJS9b/link-my-something-like-friendliness-research-blog", "postedAt": "2010-12-16T01:12:59.097Z", "baseScore": 18, "voteCount": 14, "commentCount": 5, "url": null, "contents": { "documentId": "775AJbZQFDmfcJS9b", "html": "

If you want to know what that something-like-Friendliness is, you can read the blog! You can find it here: http://willnewsome.wordpress.com/ (About page here: http://willnewsome.wordpress.com/about/ )

\n

Anyone who's bothered to notice the trend of my posts and comments to Less Wrong has probably noticed that I aim to be as metacontrarian and contentious as possible. Other times I run small social experiments. Sometimes this is interesting, sometimes it's probably just frustrating, but I do hope it's at least thought-provoking. With my blog I'm trying to showcase interesting ideas more than bring up counterintuitive alternatives, so perhaps those who don't generally like my posts/comments will still find my blog tolerable. It's also about something that I take more seriously than other rationality-related topics, that is, building an AI that does what we want it to do.

\n

The names of the posts I've put up already: \"What are humans?\", \"Are evolved drives satiable?\", \"Why extrapolate?\", and \"Gene/meme/teme sanity equilibria\".

\n

I hope to get a new post out every few days, but honestly I have no idea if I'll succeed in that. At the very least I have a few weeks' worth of cached ideas to post, and I'll continue studying related things in the meantime. 

\n

I'd really like anyone else who has a blog about anything mildly related to rationality to post a link in their own discussion post. Currently I only know to follow Vladimir Nesov and Luke Grecki (whose blogs are linked to from mine), and my RSS feed has room for many more.

" } }, { "_id": "fW9WMvwsQp2H7siQn", "title": "(Some) Singularity Summit 2010 videos now up", "pageUrl": "https://www.lesswrong.com/posts/fW9WMvwsQp2H7siQn/some-singularity-summit-2010-videos-now-up", "postedAt": "2010-12-15T19:43:11.447Z", "baseScore": 20, "voteCount": 14, "commentCount": 12, "url": null, "contents": { "documentId": "fW9WMvwsQp2H7siQn", "html": "

Videos of some of the talks and panel discussions (currently twelve of them) from this year's Singularity Summit are now online.

\n

Michael Vassar:
The Darwinian Method

\n

Eliezer Yudkowsky:
Simplified Humanism and Positive Futurism

\n

Demis Hassabis:
Combining systems neuroscience and machine learning: a new approach to AGI

\n

Shane Legg:
Universal measures of intelligence

\n

Debate: Terry Sejnowski and Dennis Bray
Will we soon realistically emulate biological systems?

\n

Jose Cordeiro:
The Future of Energy and the Energy of the Future

\n

Panel: John Tooby, Ben Goertzel, Eliezer Yudkowsky, and Shane Legg
Narrow and General Intelligence

\n

Ray Kurzweil:
The Mind and How to Build One

\n

Gregory Stock:
Evolution of Post-Human Intelligence

\n

Ramez Naam:
The Digital Biome

\n

Ben Goertzel:
AI Against Aging

\n

Dennis Bray:
What Cells Can Do That Robots Can't

" } }, { "_id": "B4mthw3CHujRyFLNc", "title": "Bullying the Integers", "pageUrl": "https://www.lesswrong.com/posts/B4mthw3CHujRyFLNc/bullying-the-integers", "postedAt": "2010-12-15T17:40:34.442Z", "baseScore": 17, "voteCount": 14, "commentCount": 33, "url": null, "contents": { "documentId": "B4mthw3CHujRyFLNc", "html": "

\n

So, the FBI allegedly arranged for a number of backdoors to be built into the OpenBSD IPSEC stack.  I don't really know how credible this claim is, but it sparked a discussion in my office about digital security, and encryption in general.  One of my colleagues said something to the effect of it only being a matter of time before they found a way to easily break RSA.

\n

It was at about this moment that time stopped.

\n

I responded with something I thought was quite lucid, but there's only so much lay interest that can be held in a sentence that includes the phrases \"fact about all integers\" and \"solvable in polynomial time\".  The basic thrust of my argument was that it wasn't something he could just decide an answer to, but I don't think he'll be walking away any the more enlightened.
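For readers who want the one-paragraph version of that argument: a toy RSA key pair (tiny primes, illustration only) shows why 'easily breaking RSA' is a question about integers rather than about effort. The private exponent is trivial to compute once you know the factors of n; without them, no method fundamentally faster than factoring is known, and nobody knows whether factoring can be done in polynomial time.

```python
# Toy RSA with tiny primes (illustration only, nothing like real key sizes).
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent: modular inverse (Python 3.8+), easy *given* the factors
m = 65                    # a message, encoded as a number smaller than n
c = pow(m, e, n)          # encryption with the public key (n, e)
assert pow(c, d, n) == m  # decryption with the private key d recovers the message

print(n, e, c)            # all an eavesdropper sees; no known way to recover d without factoring n
```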

\n

This got me wondering: do arguments that sit on cast-iron facts (or lack thereof) about number theory feel any different when you're making them, compared to arguments that sit on facts about the world you're just extremely confident about?

\n

If I have a discussion with someone about taxation it has no more consequence than a discussion about cryptography, but the tax discussion feels more urgent.  Someone walking around with wonky ideas about fiscal policy seems more distressing than someone walking around with wonky ideas about modular arithmetic.  Modular arithmetic can look after itself, but fiscal policy is somehow more vulnerable to bad ideas.

\n

Do your arguments feel different?

\n

" } }, { "_id": "ukS7qTvrDGES8rF4k", "title": "Disruption of the right temporoparietal junction with transcranial magnetic stimulation reduces the role of beliefs in moral judgments", "pageUrl": "https://www.lesswrong.com/posts/ukS7qTvrDGES8rF4k/disruption-of-the-right-temporoparietal-junction-with", "postedAt": "2010-12-15T14:44:05.499Z", "baseScore": 2, "voteCount": 3, "commentCount": 9, "url": null, "contents": { "documentId": "ukS7qTvrDGES8rF4k", "html": "

http://www.pnas.org/content/107/15/6753.full

\n

http://news.bbc.co.uk/2/hi/health/8593748.stm

\n

http://news.ycombinator.com/item?id=2007859

" } }, { "_id": "F55M6b7jp4jwAKL8A", "title": "Why focus on making robots nice?", "pageUrl": "https://www.lesswrong.com/posts/F55M6b7jp4jwAKL8A/why-focus-on-making-robots-nice", "postedAt": "2010-12-15T05:37:09.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "F55M6b7jp4jwAKL8A", "html": "

From Michael Anderson and Susan Leigh Anderson in Scientific American:

\n

Today’s robots…face a host of ethical quandaries that push the boundaries of artificial intelligence, or AI, even in quite ordinary situations.

\n

Imagine being a resident in an assisted-living facility…you ask the robot assistant in the dayroom for the remote …But another resident also wants the remote …The robot decides to hand the remote to her. …This anecdote is an example of an ordinary act of ethical decision making, but for a machine, it is a surprisingly tough feat to pull off.

\n

We believe that the solution is to design robots able to apply ethical principles to new and unanticipated situations… for them to be welcome among us their actions should be perceived as fair, correct or simply kind. Their inventors, then, had better take the ethical ramifications of their programming into account…

\n

It seems there are a lot of articles focussing on the problem that some of the small  decisions robots will make will be ‘ethical’. There are also many fearing that robots may want to do particularly unethical things, such as shoot people.

\n

Working out how to make a robot behave ‘ethically’ in this narrow sense (arguably all behaviour has an ethical dimension) is an odd problem to set apart from the myriad other problems of making a robot behave usefully. Ethics doesn’t appear to pose unique technical problems. The aforementioned scenario is similar to ‘non-ethical’ problems of making a robot prioritise its behaviour. On the other hand, teaching a robot when to give a remote control to a certain woman is not especially generalisable to other ethical issues such as teaching it which sexual connotations it may use in front of children, except in sharing methods so broad as to also include many more non-ethical behaviours.

\n

The authors suggest that robots will follow a few simple absolute ethical rules like Asimov’s. Perhaps this could unite ethical problems as worth considering together. However, if robots are given such rules, they will presumably also be following big absolute rules for other things. For instance if ‘ethics’ is so narrowly defined as to include only choices such as when to kill people and how to be fair, there will presumably be other rules about the overall goals when not contemplating murder. These would matter much more than the ‘ethics’. So how to pick big rules and guess their far-reaching effects would again not be an ethics-specific issue. On top of that, until anyone is close to a situation where they could be giving a robot such an abstract rule to work from, the design of said robots is so open as to make the question pretty pointless except as a novel way of saying ‘what ethics do I approve of?’.

\n

I agree that it is useful to work out what you value (to some extent) before you program a robot to do it, particularly including overall aims. Similarly I think it’s a good idea to work out where you want to go before you program your driverless car to drive you there. This doesn’t mean there is any eerie issue of getting a car to appreciate highways when it can’t truly experience them. It also doesn’t present you with any problem you didn’t have when you had to drive your own car – it has just become a bit more pressing.

\n
\n
\"Rainbow

Making rainbows has much in common with other manipulations of water vapor. Image by Jenn and Tony Bot via Flickr

\n
\n

Perhaps, on the contrary, ethical problems are similar in that humans have very nuanced ideas about them and can’t really specify satisfactory general principles to account for them. If the aim is for robots to learn how to behave just from seeing a lot of cases, without being told a rule, perhaps this is a useful category of problems to set apart? No – there are very few things humans deal with that they can specify directly. If a robot wanted to know the complete meaning of almost any word it would have to deal with a similarly complicated mess.

\n

Neither are problems of teaching (narrow) ethics to robots united in being especially important, or important in similar ways, as far as I can tell. If the aim is about something like treating people well, people will be much happier if the robot gives the remote control to anyone rather than ignoring them all until it has finished sweeping the floors than if it gets the question of who to give it to correct. Yet how to get a robot to prioritise floor cleaning below remote allocating at the right times seems an uninteresting technicality, both to me and seemingly to authors of popular articles. It doesn’t excite any ‘ethics’ alarms. It’s like wondering how the control panel will be designed in our teleportation chamber: while the rest of the design is unclear, it’s a pretty uninteresting question. When the design is more clear, to most it will be an uninteresting technical matter. How robots will be ethical or kind is similar, yet it gets a lot of attention.

\n

Why is it so exciting to talk about teaching robots narrow ethics? I have two guesses. One, ethics seems such a deep and human thing, it is engaging to frighten ourselves by associating it with robots. Two, we vastly overestimate the extent to which the value of outcomes reflects the virtue of motives, so we hope robots will be virtuous, whatever their day jobs are.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "452ZMHWHBKvnv4GPx", "title": "Introduction to Game Theory (Links)", "pageUrl": "https://www.lesswrong.com/posts/452ZMHWHBKvnv4GPx/introduction-to-game-theory-links", "postedAt": "2010-12-15T02:14:00.202Z", "baseScore": 13, "voteCount": 10, "commentCount": 4, "url": null, "contents": { "documentId": "452ZMHWHBKvnv4GPx", "html": "

Reading the What topics would you like to see more of on LessWrong? thread gave me the impression that many people here would appreciate introductory material to several of the topics that are often discussed in lesswrong. I have therefore decided to link in the direction of the ECON 159 course lectures at Open Yale courses and YouTube and to the Game Theory 101 video series in hopes that people will find them useful.

" } }, { "_id": "scqg6TA73DehceoY8", "title": "One Chance (a short flash game)", "pageUrl": "https://www.lesswrong.com/posts/scqg6TA73DehceoY8/one-chance-a-short-flash-game", "postedAt": "2010-12-14T22:50:52.864Z", "baseScore": 17, "voteCount": 19, "commentCount": 48, "url": null, "contents": { "documentId": "scqg6TA73DehceoY8", "html": "

http://www.newgrounds.com/portal/view/555181

\n

 

\n

It needn't take more than 10 minutes to play, though it might if you nail-bite about your choices. I'm curious about the LW response, although I might be underwhelmed by lack of interest.

" } }, { "_id": "7sxR8qLG6rqDht6aR", "title": "Friendly AI Research and Taskification", "pageUrl": "https://www.lesswrong.com/posts/7sxR8qLG6rqDht6aR/friendly-ai-research-and-taskification", "postedAt": "2010-12-14T06:30:02.477Z", "baseScore": 30, "voteCount": 31, "commentCount": 47, "url": null, "contents": { "documentId": "7sxR8qLG6rqDht6aR", "html": "

Eliezer has written a great deal about the concept of Friendly AI, for example in a document from 2001 titled Creating Friendly AI 1.0. The new SIAI overview states that:

\n
\n

SIAI's primary approach to reducing AI risks has thus been to promote the development of AI with benevolent motivations which are reliably stable under self-improvement, what we call “Friendly AI” [22].

\n
\n

The SIAI Research Program lists under its Research Areas:

\n
\n

Mathematical Formalization of the \"Friendly AI\" Concept. Proving theorems about the ethics of AI systems, an important research goal, is predicated on the possession of an appropriate formalization of the notion of ethical behavior on the part of an AI. And, this formalization is a difficult research question unto itself.

\n
\n

Despite the enormous value that the construction of a Friendly AI would have, at present I'm not convinced that researching the Friendly AI concept is a cost-effective way of reducing existential risk. My main reason for doubt is that, as far as I can tell, the problem of building a Friendly AI has not been taskified to a sufficiently fine degree for it to be possible to make systematic progress toward a solution. I'm open-minded on this point and quite willing to change my position as new evidence comes in.

\n

The Need For Taskification

\n

In The First Step is to Admit That You Have a Problem Alicorn wrote:

\n
\n

If you want a peanut butter sandwich, and you have the tools, ingredients, and knowhow that are required to make a peanut butter sandwich, you have a task on your hands.  If you want a peanut butter sandwich, but you lack one or more of those items, you have a problem [... ] treating problems like tasks will slow you down in solving them.  You can't just become immortal any more than you can just make a peanut butter sandwich without any bread.

\n
\n

In Let them eat cake: Interpersonal Problems vs Tasks HughRistik wrote:

\n
\n

Similarly, many straight guys or queer women can't just find a girlfriend, and many straight women or queer men can't just find a boyfriend, any more than they can \"just become immortal.\"

\n
\n

We know that the problems of making a peanut butter sandwich and of finding a romantic partner can (often) be taskified, because many people have succeeded in solving them. It's less clear that a given problem that has never been solved can be taskified. Some problems are unsolvable in principle, whether because they are mathematically undecidable or because physical law provides an obstruction to their solution. Other currently unsolved problems have solutions in the abstract but lack solutions that are accessible to humans. That taskification is in principle possible is not a sufficient condition for solving a problem, but it is a necessary one.

\n

The Difficulty of Unsolved Problems

\n

There's a long historical precedent of unsolved problems being solved. Humans have succeeded in building cars and skyscrapers, have succeeded in understanding the chemical composition of far away stars and of our own DNA, have determined the asymptotic distribution of the prime numbers and have given an algorithm to determine whether a given polynomial equation is solvable in radicals, have created nuclear bombs and have landed humans on the moon. All of these things seemed totally out of reach at one time.

\n

Looking over the history of human achievement gives one a sense of optimism as to the feasibility of accomplishing a goal. And yet, there's a strong selection effect at play: successes are more interesting than failures, and we correspondingly notice and remember successes more than failures. One need only page through a book like Richard Guy's Unsolved Problems in Number Theory to get a sense for how generic it is for a problem to be intractable. The ancient Greek-inspired question of whether there are infinitely many perfect numbers remains out of reach for the best mathematicians of today. The success of human research efforts has been as much a product of wisdom in choosing one's battles as it has been a product of ambition.

\n

The Case of Friendly AI

\n

My present understanding is that there are potential avenues for researching AGI. Richard Hollerith was kind enough to briefly describe Monte Carlo AIXI to me last month, and I could sort of see how it might be in principle possible to program a computer to do Bayesian induction according to an approximation to a universal prior and to equip the computer with a decision-making apparatus based on its epistemological state at a given time. Some people have suggested to me that the amount of computing power and memory needed to implement human-level Monte Carlo AIXI is prohibitively large, but (in my current, very ill-informed state; by analogy with things that I've seen in computational complexity theory) I could imagine ingenious tricks yielding an approximation to Monte Carlo AIXI which uses much less computing power/memory and which is a sufficiently close approximation to serve as a substitute for practical purposes. This would point to a potential taskification of the problem of building an AGI. I could also imagine that there are presently no practically feasible AGI research programs; I know too little about the state of strong artificial intelligence research to have anything but a very unstable opinion on this matter.

\n

As Eliezer has said, the problem of creating a Friendly AI is inherently more difficult than that of creating an AGI, and may be very much more difficult. At present, the Friendliness aspect of a Friendly AI seems to me to strongly resist taskification. In his poetic Mirrors and Paintings Eliezer gives the most detailed description of what a Friendly AI should do that I've seen, but the gap between concept and implementation here seems so staggeringly huge that it doesn't suggest to me any fruitful lines of Friendly AI research. As far as I can tell, Eliezer's idea of a Friendly AI is at this point not significantly more fleshed out (relative to the magnitude of the task) than Freeman Dyson's idea of a Dyson sphere. In order to build a Friendly AI, beyond conceiving of what a Friendly AI should be in the abstract, one has to convert one's intuitive understanding of friendliness into computer code in a formal programming language.

\n

I don't even see how one would start to research the problem of getting a hypothetical AGI to recognize humans as distinguished beings. Solving this problem would seem to require as a prerequisite an understanding of the makeup of the hypothetical AGI, something which people don't seem to have a clear grasp of at the moment. Even if one does have a model for a hypothetical AGI, writing code conducive to it recognizing humans as distinguished beings seems like an intractable task. And even with a relatively clear understanding of how one would implement a hypothetical AGI with the ability to recognize humans as distinguished beings, one is still left with the problem of making such a hypothetical AGI Friendly toward such beings.

\n

In view of all this, working toward stable whole-brain emulation of a trusted and highly intelligent person concerned about human well-being seems to me like a more promising strategy for reducing existential risk at the present time than researching Friendly AI. Quoting a comment by Carl Shulman:

\n
\n

Emulations could [...] enable the creation of a singleton capable of globally balancing AI development speeds and dangers. That singleton could then take billions of subjective years to work on designing safe and beneficial AI. If designing safe AI is much, much harder than building AI at all, or if knowledge of AI and safe AI are tightly coupled, such a singleton might be the most likely route to a good outcome.

\n
\n

There are various things that could go wrong with whole-brain emulation, and it would be good to have a better option, but Friendly AI research doesn't seem to me to be one, in light of the apparent total absence of even the outlines of a viable Friendly AI research program.

\n

But I feel like I may have missed something here. I'd welcome any clarifications of what people who are interested in Friendly AI research mean by Friendly AI research. In particular, is there a conjectural taskification of the problem?

" } }, { "_id": "DqC9dR8G6pFHTeKHf", "title": "Honours Dissertation", "pageUrl": "https://www.lesswrong.com/posts/DqC9dR8G6pFHTeKHf/honours-dissertation", "postedAt": "2010-12-14T05:37:00.533Z", "baseScore": 6, "voteCount": 4, "commentCount": 10, "url": null, "contents": { "documentId": "DqC9dR8G6pFHTeKHf", "html": "

I'm picking a topic for my Psychology Honours dissertation next year, and I've got so many options and interests that the overabundance of choice is nearly paralyzing. So in the interests of crowdsourcing, and hopefully writing about something of substance, I'd like to hear suggestions for potential directions I could take. It can be any idea, but one that frequently pops up on Less Wrong and needs further exploration or exposure would be ideal.

\n

Basically, feel free to offer suggestions, but ideally I want something that (assuming I do it right) would help lay a part of the groundwork required to build up someone's rationality.

" } }, { "_id": "7NS3H7zYFmLCXCXcP", "title": "Would the world be better off without 50% of the people in it?", "pageUrl": "https://www.lesswrong.com/posts/7NS3H7zYFmLCXCXcP/would-the-world-be-better-off-without-50-of-the-people-in-it", "postedAt": "2010-12-14T05:19:10.076Z", "baseScore": -7, "voteCount": 12, "commentCount": 11, "url": null, "contents": { "documentId": "7NS3H7zYFmLCXCXcP", "html": "

I made a stupid mistake of posting a conclusion before I had the whole analysis typed up or had looked up my references. I knew I would be called on it. I’ll appreciate any help with the <ref>'s. Also: I'm under Crocker's Rules, and criticism is welcome. So here goes nothi....

\r\n

There's a theory out there that states that new inventions are combinations of old inventions <ref>. So if your hunter-gatherer tribe has knife-like rocks and sticks, just about the only thing you can invent is a spear. Fire + clay = pots. Little bones with holes + animal sinews + skins = needle => clothes. But if you were the modern day's best chemist transported into the past, with all your knowledge intact, you'd be unlikely to make any aspirin. Why? Because the tools you need haven't been invented.

\r\n

Instead of looking at what's projected to happen, consider what has been happening. With the increase in world population, the level of technology and the average standard of living have been going up.

\r\n

I argue that more population => better technology => easier life => more population.

\r\n

In the modern day, consider: US population, US Patents per year.

\r\n

So what about the “unproductive” people? Those who “don't pull their own weight?” Those “living off of welfare, charity donations, etc?” Those who just barely survive off of subsistence living? They put a drain on world resources without adding anything back. Wouldn't the world be better off without them?

\r\n

Suppose Omega made a backup copy of the Solar system. It created a perfect copy of everything else, but it only replicated 50% of humanity. Pick your favorite selection criterion for who will be copied. You will go to the copied world, and the other you will live on as a zombie.

\r\n

Suppose the people who work in sweatshops get copied, but subsistence farmers from the same regions don't. Then it's reasonable to predict that some people from the sweatshops would quit their jobs and fill the niche left vacant. Fewer people would be supporting the developed world.

\r\n

Historically, people used technology to solve population problems only when those problems became bad enough. Farming wasn't invented until there were too many hunter-gatherers. Industry was not invented until there were too many farmers. Sewers were not invented until there was a problem with urban pollution.

\r\n

I'll skip the statistical argument1. If truly brilliant people (the likes of whom invented the wheel, the steam engine and the computer) are 1 in a billion, then having more billions means having more of those people.

\r\n

Why do people have no confidence that we can invent ourselves out of the immense pressure we're putting on the environment? The technology to supply humanity with renewable energy is already there.

\r\n

If you could choose whether your consciousness would go to Omega's backup world or stay on the original Earth, where would you choose? And if you chose the copied world, what selection criterion would you use to pick who would go with you?

\r\n

 

\r\n

Footnote1 : Statistics pop quiz (read: check my numbers, please). The world population is ~6,887,656,866. Let’s guess that “inventiveness” is distributed normally. I wouldn't be surprised if it were strongly correlated with IQ. How many people would you expect to find 6 standard deviations above the mean? IQ 190 for comparison. (upside down answer: 6.8). What about when the world population was 1 billion, around 1800? (no calculators! just 1). We would need to multiply the current population by about 113 to expect a person more than 7 standard deviations above the mean (IQ 205). The tails aren't necessarily this well-behaved, but then, given any distribution over the infinite competence axis, increasing the number of people would increase the number of people at each competence level.
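A quick numerical check of these figures (my own sketch, not part of the original post; it assumes "inventiveness" is standard normal and that the IQ comparisons use mean 100 and standard deviation 15):

```python
# Rough check of the footnote's tail estimates.
from math import erfc, sqrt

def tail_prob(z):
    """P(Z > z) for a standard normal variable Z."""
    return 0.5 * erfc(z / sqrt(2))

pop_2010 = 6887656866
pop_1800 = 1000000000

print(pop_2010 * tail_prob(6))        # ~6.8 people above +6 SD (IQ ~190) today
print(pop_1800 * tail_prob(6))        # ~1 such person around 1800
print(1 / (pop_2010 * tail_prob(7)))  # ~113: population multiplier needed to expect one +7 SD person (IQ ~205)
```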

\r\n

EDIT: I rewrote this article. If you had managed to wade through the blabber I had before, my point stayed the same.

" } }, { "_id": "JA3n4pKeFgw9svdhA", "title": "Exponentiation goes wrong first", "pageUrl": "https://www.lesswrong.com/posts/JA3n4pKeFgw9svdhA/exponentiation-goes-wrong-first", "postedAt": "2010-12-14T04:13:45.647Z", "baseScore": 15, "voteCount": 15, "commentCount": 82, "url": null, "contents": { "documentId": "JA3n4pKeFgw9svdhA", "html": "

The great Catholic mathematician Edward Nelson does not believe in completed infinity, and does not believe that arithmetic is likely to be consistent.  These beliefs are partly motivated by his faith: he says arithmetic is a human invention, and compares believing (too strongly) in its consistency to idolatry.  He also has many sound mathematical insights in this direction -- I'll summarize one of them here.

\n

http://www.mediafire.com/file/z3detbt6int7a56/warn.pdf

\n

Nelson's arguments flow from the idea that, contra Kronecker, numbers are man-made.  He therefore does not expect inconsistencies to have consequences that play out in natural or divine processes.  For instance, he does not expect you to be able to count the dollars in a stack of 100 dollars and arrive at 99 dollars.  But it's been known for a long time that if one can prove any contradiction, then one can also prove that a stack of 100 dollars has no more than 99 dollars in it.  The way he resolves this is interesting.

\n

The Peano axioms for the natural numbers are these:

\n

1. Zero is a number

\n

2. The successor of any number is a number

\n

3. Zero is not the successor of any number

\n

4. Two different numbers have two different successors

\n

5. If a given property holds for zero, and if it holds for the successor of x whenever it holds for x, then it holds for all numbers.

\n

Nelson rejects the fifth axiom, induction.  It's the most complicated of the axioms, but it has another thing going against it: it is the only one that seems like a claim that could be either true or false.  The first four axioms read like someone explaining the rules of a game, like how the pieces in chess move.  Induction is more similar to the fact that the bishop in chess can only move on half the squares -- this is a theorem about chess, not one of the rules.  Nelson believes that the fifth axiom needs to be, and cannot be, supported.

\n

A common way to support induction is via the monologue: \"It's true for zero.  Since it's true for zero it's true for one.  Since it's true for one it's true for two.  Continuing like this we can show that it's true for one hundred and for one hundred thousand and for every natural number.\"  It's hard to imagine actually going through this proof for very large numbers -- this is Nelson's objection. 

\n

What is arithmetic like if we reject induction?  First, we may make a distinction between numbers we can actually count to (call them counting numbers) and numbers that we can't.  Formally we define counting numbers as follows: 0 is a counting number, and if x is a counting number then so is its successor.  We could use the induction axiom to establish that every number is a counting number, but without it we cannot.

\n

A small example of a number so large we might not be able to count that high is the sum of two counting numbers.  In fact without induction we cannot establish that x+y is a counting number from the facts that x and y are counting numbers.  So we cut out a smaller class of numbers called additionable numbers: x is additionable if x + y is a counting number whenever y is a counting number.  We can prove theorems about additionable numbers: for instance every additionable number is a counting number, the successor of an additionable number is additionable, and in fact the sum of two additionable numbers is an additionable number.
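Spelled out a little more formally (this is my own sketch of the definitions just described, not Nelson's notation), with C(x) for "x is a counting number" and A(x) for "x is additionable":

```latex
% Counting numbers: generated from 0 by the successor operation; without the
% induction axiom we cannot conclude that every number is a counting number.
C(0), \qquad C(x) \rightarrow C(Sx)

% Additionable numbers:
A(x) \;:\equiv\; \forall y \, \bigl( C(y) \rightarrow C(x + y) \bigr)

% Theorems provable without induction (as stated in the paragraph above):
A(x) \rightarrow C(x), \qquad
A(x) \rightarrow A(Sx), \qquad
A(x) \wedge A(y) \rightarrow A(x + y)
```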

\n

If we grant the induction axiom, these theorems lose their interest: every number is a counting number and an additionable number.  Paraphrasing Nelson: the significance of these theorems is that addition is unproblematic even if we are skeptical of induction.

\n

We can go further.  It cannot be proved that the product of two additionable numbers is additionable.  We therefore introduce the smaller class of multiplicable numbers: if, whenever y is an additionable number, x·y is also additionable, then we say that x is a multiplicable number.  It can be proved that the sum and product of any two multiplicable numbers are multiplicable.  Nelson closes the article I linked to:

\n
\n

The proof of the last theorem [that the product of two multiplicable numbers is multiplicable] uses the associativity of multiplication. The significance of all this is that addition and multiplication are unproblematic. We have defined a new notion, that of a multiplicable number, that is stronger than the notion of counting number, and proved that multiplicable numbers not only have successors that are multiplicable numbers, and hence counting numbers, but that the same is true for sums and products of multiplicable numbers. For any specific numeral SSS. . . 0 we can quickly prove that it is a multiplicable number.

\n

 

\n

But now we come to a halt. If we attempt to define “exponentiable number” in the same spirit, we are unable to prove that if x and y are exponentiable numbers then so is x↑y. There is a radical difference between addition and multiplication on the one hand and exponentiation, superexponentiation [what is commonly denoted ^^ here], and so forth, on the other hand. The obstacle is that exponentiation is not associative; for example, (2↑2)↑3 = 4↑3 = 64 whereas 2↑(2↑3) = 2↑8 = 256. For any specific numeral SSS...0 we can indeed prove that it is an exponentiable number, but we cannot prove that the world of exponentiable numbers is closed under exponentiation. And superexponentiation leads us entirely away from the world of counting numbers.

\n

 

\n

The belief that exponentiation, superexponentiation, and so forth, applied to numerals yield numerals is just that -- a belief.

\n
\n

 

\n

I've omitted his final sentence.
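To make the role of associativity concrete, here is my own reconstruction (not Nelson's wording) of the closure proof he alludes to. Write A(y) for "y is additionable" and M(x) for "x is multiplicable", i.e. for all y, A(y) implies A(x·y). To show that the product of two multiplicable numbers x_1 and x_2 is multiplicable, take any additionable y:

```latex
A(y) \Rightarrow A(x_2 \cdot y)                        % since M(x_2)
     \Rightarrow A\bigl(x_1 \cdot (x_2 \cdot y)\bigr)  % since M(x_1)
     \Rightarrow A\bigl((x_1 \cdot x_2) \cdot y\bigr)  % since x_1 \cdot (x_2 \cdot y) = (x_1 \cdot x_2) \cdot y
```

The last step is exactly where associativity of multiplication is used. The analogous argument for an "exponentiable" relativization would need exponentiation to be associative, and as the quoted example (2↑2)↑3 ≠ 2↑(2↑3) shows, it is not.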

" } }, { "_id": "eLsmJ4aDJ3CLRCBEW", "title": "Karma Motivation Thread", "pageUrl": "https://www.lesswrong.com/posts/eLsmJ4aDJ3CLRCBEW/karma-motivation-thread", "postedAt": "2010-12-13T21:59:08.756Z", "baseScore": 32, "voteCount": 25, "commentCount": 38, "url": null, "contents": { "documentId": "eLsmJ4aDJ3CLRCBEW", "html": "

This idea is so obvious I can't believe we haven't done it before. Many people here have posts they would like to write but keep procrastinating on. Many people also have other work to do but keep procrastinating on Less Wrong. Making akrasia cost you money is often a good way to motivate yourself. But that can be enough of a hassle to deter the lazy, the ADD-addled and the executive-dysfunctional. So here is a low-transaction-cost alternative that takes advantage of the addictive properties of Less Wrong karma. Post a comment here with a task and a deadline. Pick tasks whose completion can be confirmed by posters: either Less Wrong posts or projects that can be linked to or photographed. When the deadline comes, edit your comment to include a link to the completed task. If you complete the task, expect upvotes. If you fail to complete the task by the deadline, expect your comment to be downvoted into oblivion. If you see completed tasks, vote those comments up. If you see passed deadlines, vote those comments down.  At least one person should reply to the comment, noting the deadline has passed-- this way it will come up in the recent comments and more eyes will see it.

\n

Edit: DanArmak makes a great suggestion.

\n
\n
\n
\n

Several people have now used this to commit to doing something others can benefit from, like LW posts. I suggest an alternative method: when a user commits to doing something, everyone who is interested in that thing being done will upvote that comment. However, if the task is not complete by the deadline, everyone who upvoted commits to coming back and downvoting the comment instead.

\n

This way, people can judge whether the community is interested in their post, and the karma being gained or lost is proportional to the amount of interest. Also, upvoting and then downvoting effectively doubles the amount of karma at stake.

\n
\n
\n
\n

 

" } }, { "_id": "w3HmNsYNALMkdpPGv", "title": "Brainstorming: neat stuff we could do on LessWrong", "pageUrl": "https://www.lesswrong.com/posts/w3HmNsYNALMkdpPGv/brainstorming-neat-stuff-we-could-do-on-lesswrong", "postedAt": "2010-12-13T20:38:25.782Z", "baseScore": 19, "voteCount": 14, "commentCount": 27, "url": null, "contents": { "documentId": "w3HmNsYNALMkdpPGv", "html": "

Are there any community activities or rituals or experiments we could try?

\n

Preferably things that don't require special software.

\n

As an example of the kind of thing I'm thinking of, Reddit has special \"I Am A\" posts (no, I don't think we should have those), or things we already have, like Quotes or Open Threads or Diplomacy games.

\n

(This is a complement to the previous post about which topics we would like to learn about here.)

" } }, { "_id": "bEEMXkZEkYRBEMXdr", "title": "What topics would you like to see more of on LessWrong?", "pageUrl": "https://www.lesswrong.com/posts/bEEMXkZEkYRBEMXdr/what-topics-would-you-like-to-see-more-of-on-lesswrong", "postedAt": "2010-12-13T16:20:11.474Z", "baseScore": 38, "voteCount": 28, "commentCount": 138, "url": null, "contents": { "documentId": "bEEMXkZEkYRBEMXdr", "html": "

Are there any areas of study that you feel are underrepresented here, and would be interesting and useful to lesswrongers?

\n

I feel some topics are getting old (Omega, drama about moderation policy, a newcomer telling us our lack of admiration for his ideas is proof of groupthink, Friendly AI, Cryonics, Epistemic vs. Instrumental Rationality, lamenting how we're a bunch of self-centered nerds, etc. ...), and with a bit of luck, we might have some lurkers that are knowledgeable about interesting areas, and didn't think they could contribute.

\n

Please stick to one topic per comment, so that highly-upvoted topics stand out more clearly.

" } }, { "_id": "DYaDw3JBvGrHpvBmk", "title": "What is Evil about creating House Elves? ", "pageUrl": "https://www.lesswrong.com/posts/DYaDw3JBvGrHpvBmk/what-is-evil-about-creating-house-elves", "postedAt": "2010-12-13T13:57:11.403Z", "baseScore": 22, "voteCount": 23, "commentCount": 61, "url": null, "contents": { "documentId": "DYaDw3JBvGrHpvBmk", "html": "

Edit: This is old material. It may be out of date.

\n

I'm talking about the fictional race of House Elves from the Harry Potter universe, first written about by J. K. Rowling and then uplifted in a grand act of fan-fiction by Eliezer Yudkowsky. Unless severely mistreated, they enjoy servitude to their masters (or more accurately the current residents of the homes they are bound to); this is also enforced by magical means, since they must follow the letter if not the spirit of their master's direct order.

\n

Overall, treating House Elves the way they would like to be treated appears more or less sensible, and I don't feel like debating this if people don't disagree. Changing agents without their consent or knowledge seems obviously wrong, so turning someone into a servant creature seems intuitively wrong. I can also understand that many people would mind their descendants being modified in such a fashion; perhaps their dis-utility is enough to offset the utility of their modified descendants. However, how true is this of distant descendants who share only a passing resemblance? I think a helpful reminder of scale might be our own self-domestication.

Assuming one created elf-like creatures ex nihilo, not as slightly modified versions of an existing species, why would one not want to bring into existence a mind that would value its own existence and benefit you, as long as the act of creation or their existence in itself does not represent a large enough dis-utility? This seems somewhat related to the argument Robin Hanson once made that any creatures that can pay for their own existence and would value their own existence should be created.

\n

I didn't mention this in the many HP fan fiction threads because I want a more general debate on the treatment and creation of such a class of agents.

\n

 

\n

Edit: Clearly if the species or class contains exceptions there should be ways for them to pursue their differing values.

\n

 

" } }, { "_id": "uS8xTiNwarade8hMh", "title": "Reading Level of Less Wrong", "pageUrl": "https://www.lesswrong.com/posts/uS8xTiNwarade8hMh/reading-level-of-less-wrong", "postedAt": "2010-12-13T09:54:28.812Z", "baseScore": 5, "voteCount": 12, "commentCount": 24, "url": null, "contents": { "documentId": "uS8xTiNwarade8hMh", "html": "

Here's something to pick our collective spirits up:

\n

According to Google's infallible algorithms, 20% of the content on LessWrong.com falls within the 'Advanced' reading level. For comparison, another well-known bastion of intelligence on the internets, Hacker News, only has 4% of its content in that category.

\n

Strangely, inserting a space before the name of the site in the query tends to reduce the amount of content that falls in the highest bucket, but I am told that highly trained Google engineers are interrogating the bug in a dimly lit room as we speak, and expect it to crack soon.

" } }, { "_id": "uFB2gqkuC8xJdF4o9", "title": "Within the next rich hint personal can solve your value Pandora low-cost Pandora diamond", "pageUrl": "https://www.lesswrong.com/posts/uFB2gqkuC8xJdF4o9/within-the-next-rich-hint-personal-can-solve-your-value", "postedAt": "2010-12-13T07:19:47.377Z", "baseScore": -6, "voteCount": 6, "commentCount": 0, "url": null, "contents": { "documentId": "uFB2gqkuC8xJdF4o9", "html": "

Begin making use of temperature liquid and drop dish-washing water. In this case, no alcohol toothbrush, eat your precious inexpensive pandora charm Pandora bracelet is optional. Toothbrush is fantastic, due to the fact it'll be feasible to access to high jewelry model, it's quite challenging to clean. When a customer uncover your principal products turn into or stay extremely dusty, only continue in cleaning fluid in wenzhou drops dishwashing liquid. Then rinse the folks methodically.

On the pearl accessories, no new Pandora bead way in regular water immersion or clean thoroughly, to ensure that they use of any existing washing agent. It'll undoubtedly lead to injury of pearls. Just select a smooth wash dishcloth ruins. In any case, Oriental a person's Pandora bracelet aggressive and powerful chemicals. They can maintain a dislike is the effect of your product. Bear in mind oxidation bracelets and pearl Pandora advocate the nuggets really vulnerable.

Plastic may also boost poor in all of the works. Make certain you incredible band, earrings, or whatever Pandora composition owner gain will never occur contact plastic, e. G. Plastic band, rubber storage unit, etc. A man might lead a personal set or shopping center bracelet Pandora, where folks picked up, simply because these continuously supply assistance, as well as the entire maintain a lot more ideas pandora beads in this question, you is how it is possible to also and purified high jewelry.

" } }, { "_id": "5o7kGqoBe97di3rxB", "title": "Moderation of apparent trolling", "pageUrl": "https://www.lesswrong.com/posts/5o7kGqoBe97di3rxB/moderation-of-apparent-trolling", "postedAt": "2010-12-12T22:16:17.636Z", "baseScore": 2, "voteCount": 8, "commentCount": 51, "url": null, "contents": { "documentId": "5o7kGqoBe97di3rxB", "html": "

A brief line from this comment indicates that the author of the cryonics-critical comment quoted here was perhaps not the one that deleted it.

\n
\n

You know what - I am rather glad my comment was deleted on less wrong - good reason for people not to post on there.

\n
\n

Was it deleted by a moderator?

\n

Honestly, the decisive downvoting seemed to do the trick of hiding it from casual readers who don't want to see the long annoying rants. I don't think it was casting any doubt on the credibility of cryonics.

\n

While it sounds like the author regrets posting it, I would think they should be allowed to delete it themselves.

\n

 

\n

Edit: Originally titled \"Cryonics critical comment deleted?\"

" } }, { "_id": "uERBrgmvCsmNZR7Pq", "title": "Bayesian approach: UFO vs. AI hypotheses", "pageUrl": "https://www.lesswrong.com/posts/uERBrgmvCsmNZR7Pq/bayesian-approach-ufo-vs-ai-hypotheses", "postedAt": "2010-12-12T21:52:12.278Z", "baseScore": -10, "voteCount": 11, "commentCount": 5, "url": null, "contents": { "documentId": "uERBrgmvCsmNZR7Pq", "html": "

The goal of this post is not to prove or disprove the existence of so-called UFOs or the feasibility of AI, but to study the limits of the Bayesian approach to complex problems.

Here we will test two hypotheses:

1)\tUFOs exist. We will take for simplicity the following form of this thesis: unknown nonhuman intelligence exists on the Earth and manifests itself through unknown laws of physics.
2)\tAI will be created. In the XXI century a computer program will be created which will surpass humans in every kind of intellectual activity by many orders of magnitude.

From the point of view of a layman, both hypotheses are bizarre and so belong to the reference class of “strange ideas”, most of which are false.

But both hypotheses have large communities which have accumulated much evidence to support these ideas. (Here we can see “confirmation bias” at work.)

To begin, we should point out an isomorphism between the two hypotheses: in both cases the question is the existence of nonhuman intelligence. In the first case it is claimed that nonhuman intelligence already exists on the Earth; in the second, that nonhuman intelligence will soon be created on the Earth.

For Bayesian estimation we need an a priori probability, which we then update with evidence.

Supporters of the AI hypothesis usually say that the a priori probability is quite high: if the human mind exists, then AI is possible, and in addition it is typical for humans to repeat the achievements of nature. Therefore, a priori, we can assume that the creation of AI is possible and highly likely.

The situation with evidence in the field of AI is worse, because the creation of AI is a future event and direct empirical evidence is impossible. Moreover, many failed attempts to create AI in the past are used as evidence against the possibility of its creation.

Therefore, information about successes in \"helping\" disciplines is used as evidence for the possibility of AI: the performance of computers and its continued growth, the success of brain scanning, the success of various computer programs at recognizing images and playing games. This circumstantial evidence cannot be directly substituted into the formula for calculating the probability; therefore, its credibility will always involve taking something for granted.

In the case of UFOs the a priori hypothesis is less convincing, since it argues not only that nonhuman intelligence exists on the Earth, but also that it uses unknown physical laws (for flying). So this hypothesis is more complex and therefore less probable. Also, it is not clear how a nonhuman intelligence could have evolved on Earth without eating all other types of living beings. Here the alien theory of the origin of UFOs comes into play as an a priori hypothesis.

The proponents of the alien UFO hypothesis say that if human intelligence exists on Earth, then some kind of intelligence could also have appeared on other planets of our Galaxy long before, and could have come to our planet with some more or less rational goals (exploration, games, etc.). Saying this, they think they establish a high a priori probability for the UFO hypothesis. (This is not true, because they have to assume that aliens have very strange goal systems – for example, that they fly many light years to drink cattle blood in the so-called cattle mutilation cases. This improbable goal system largely cancels out the high probability of an alien origin of UFOs.)

We could note immediately that the a priori hypothesis about UFOs uses the same premise as the hypothesis about AI: namely, the possibility of nonhuman intelligence is justified by the existence of the human mind!

However, the hypothesis of UFOs requires the existence of new physical laws, whereas the hypothesis about AI requires their absence (in the sense that, for creating AI, it is necessary that the brain can be described as an algorithmic computer, without any Penrose-style effects).

The history of science shows that the list of physical laws will never be complete - every so often we discover something new (e.g. dark energy recently) - but on the other hand, there are no physical effects in our environment that are inexplicable within the framework of known physical laws (except perhaps ball lightning). Yet, due to the need for new laws of physics, the a priori probability of the existence of UFOs is lower.

In terms of evidence, the hypothesis about UFOs stands in sharp contrast to the hypothesis about AI. There are thousands of empirical reports of UFO sightings. However, the Bayesian inference (the increase in credibility) provided by each piece of evidence is very small. That is, most of these reports are equally likely to be true or false and so carry no information. Note that if we have 20 pieces of evidence, each with a probability of being true greater than 50%, say 60%, then Bayes' formula gives a very substantial combined weight of evidence of roughly 3000 to 1 - that is, it would increase the odds of the a priori hypothesis roughly 3000 times.
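A sketch of the arithmetic behind the "3000 to 1" figure (my reading of the claim, assuming that "60% probability of being true" is treated as a likelihood ratio of 0.6/0.4 = 1.5 per report, and that the reports are independent):

```python
# Combined likelihood ratio from 20 independent pieces of weak evidence,
# each taken to favor the hypothesis 0.6 to 0.4 (a ratio of 1.5).
per_report = 0.6 / 0.4
combined = per_report ** 20
print(combined)  # ~3325, roughly the "3000 to 1" figure quoted above
```

The posterior odds are then the prior odds multiplied by this factor, which is why many individually weak reports can matter if they really are independent.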

Thus, the UFO hypothesis has a lower a priori probability, but more empirical evidence (the truth of which we will not discuss here, but see Don Berliner's \"UFO Briefing Document: The Best Available Evidence\". My position is that I am not a convinced UFO believer, but I allow that they could exist).

Discussions about AI always tend to turn into discussions about rationality, while most of the UFO crowd is a bastion of irrationality. In fact, both can be described in terms of Bayesian logic. The belief that some topics are more rational than others is irrational.

" } }, { "_id": "42avmfSPj6iFAjGJT", "title": "Testing the effectiveness of an effort to help", "pageUrl": "https://www.lesswrong.com/posts/42avmfSPj6iFAjGJT/testing-the-effectiveness-of-an-effort-to-help", "postedAt": "2010-12-12T15:47:25.448Z", "baseScore": 14, "voteCount": 9, "commentCount": 6, "url": null, "contents": { "documentId": "42avmfSPj6iFAjGJT", "html": "

\"It has long been the standard practice in medical testing: Give drug treatment to one group while another, the control group, goes without.

\n

Now, New York City is applying the same methodology to assess one of its programs to prevent homelessness. Half of the test subjects — people who are behind on rent and in danger of being evicted — are being denied assistance from the program for two years, with researchers tracking them to see if they end up homeless.\"

" } }, { "_id": "Rrg2sq75a5gBowgQb", "title": "$100 for the best article on efficient charty - the winner is ... ", "pageUrl": "https://www.lesswrong.com/posts/Rrg2sq75a5gBowgQb/usd100-for-the-best-article-on-efficient-charty-the-winner", "postedAt": "2010-12-12T15:02:06.007Z", "baseScore": 28, "voteCount": 21, "commentCount": 17, "url": null, "contents": { "documentId": "Rrg2sq75a5gBowgQb", "html": "

Part of the Efficient Charity Article competition. Several people have written articles on efficient charity. The entries were:

\r\n\r\n

The original criteria for the competition are listed here, but basically the goal is to introduce the idea to a relatively smart newcomer without using jargon.

\r\n

Various people gave opinions about which articles were best. For me, two articles in particular stood out as being excellent for a newcomer. Those articles were:

\r\n

Throwawayaccount_1

\r\n

and

\r\n

Multifoliaterose's

\r\n

articles.
 

\r\n

I therefore declare them joint winners, and implore our kind sponsor Jsalvatier to split the prize between them evenly. Throwawayaccount_1 should also unmask his/her identity.

\r\n

[I would also ask the winners to kindly not offer to donate the money to charity, but to actually take the prize money and spend it on something that they selfishly-want, such as ice-cream or movie tickets or some other luxury item. Establishing a norm of giving away prizes creates very bad incentives and will tend to decrease the degree to which prizes actually motivate people in the future]

" } }, { "_id": "Nj7DiQQfXj7FhknQF", "title": "Books on evolution of conscience", "pageUrl": "https://www.lesswrong.com/posts/Nj7DiQQfXj7FhknQF/books-on-evolution-of-conscience", "postedAt": "2010-12-12T13:02:36.338Z", "baseScore": 4, "voteCount": 3, "commentCount": 5, "url": null, "contents": { "documentId": "Nj7DiQQfXj7FhknQF", "html": "

I am looking for books on the evolution of conscience. Please suggest some. I searched, but nothing good came up.

\n

I don't know if this is the right place for such requests.

\n

If this is not the right place, please tell me and I will delete this post.

\n

Thanks.

\n

 

" } }, { "_id": "7zcK6AeRkuhgvLnhk", "title": "Any LessWrongers in Calgary?", "pageUrl": "https://www.lesswrong.com/posts/7zcK6AeRkuhgvLnhk/any-lesswrongers-in-calgary", "postedAt": "2010-12-12T02:43:46.488Z", "baseScore": 4, "voteCount": 5, "commentCount": 2, "url": null, "contents": { "documentId": "7zcK6AeRkuhgvLnhk", "html": "

I'm wondering if there'd be any sense in organizing a meet-up.  If you're local leave a comment.

\n

 

" } }, { "_id": "mdHpBmJrRzDjKHs5a", "title": "The Long Now", "pageUrl": "https://www.lesswrong.com/posts/mdHpBmJrRzDjKHs5a/the-long-now", "postedAt": "2010-12-12T01:40:07.470Z", "baseScore": 17, "voteCount": 17, "commentCount": 19, "url": null, "contents": { "documentId": "mdHpBmJrRzDjKHs5a", "html": "

It has surprised me that there's been very little discussion of The Long Now here on Less Wrong, as there are many similarities between the groups, although their approaches and philosophies are quite different. At a minimum, I believe that a general awareness might be beneficial. I'll use the initials LW and LN below. My perspective on LN is simply that of someone who's kept an eye on their website from time to time and read a few of their articles, so I'd also like to admit that my knowledge is a bit shallow (a reason, in fact, that I bring the topic up for discussion).

\n

Similarities

\n

Most critically, long-term thinking appears as a cornerstone of both LW and LN thought, explicitly as the goal for LN, and implicitly here on LW whenever we talk about existential risk or decades-away or longer technology. It's not clear whether there's an overlap between the commenters at LW and the membership of LN or not, but there's definitely a large number of people \"between\" the two groups -- statements by Peter Thiel and Ray Kurzweil have been recent topics on the LN blog, and Hillis, who founded LN, has been involved in AI and philosophy of mind. LN has Long Bets, which I would loosely describe as being to PredictionBook as InTrade is to Foresight Exchange. LN apparently had a presence at some of SIAI's past Singularity Summits.

\n

Differences

\n

Signaling: LN embraces signaling like there's no tomorrow (ha!) -- their flagship project, after all, is a monumental clock to last thousands of years, the goal of which is to \"lend itself to good storytelling and myth\" about long-term thought. Their membership cards are stainless steel. Some of the projects LN are pursuing seem to have been chosen mostly because they sound awesome, and even those that aren't are done with some flair, IMHO. In contrast, the view among LW posts seems to be that signaling is in many cases a necessary evil, in some cases just an evolutionary leftover, and reducing signaling a potential source for efficiency gains. There may be something to be learned here -- we already know FAI would be an easier sell if we described it as project to create robots that are Presidents of the United States by day, crime-fighters by night, and cat-people by late-night.

\n

Structure: While LW is a project of SIAI, they're not the same, so by extension the comparison between LN and LW is just a bit apples-to-kumquats. It'd be a lot easier to compare LW to a LN discussion board, if it existed.

\n

The Future: Here on LW, we want our nuclear-powered flying cars, dammit! Bad future scenarios that are discussed on LW tend to be irrevocably and undeniably bad -- the world is turned into tang or paperclips and no life exists anymore, for example. LN seems more concerned with recovery from, rather than prevention of, \"collapse of civilization\" scenarios. Many of the projects both undertaken and linked to by LN focus on preserving knowledge in a such a scenario. Between the overlap in the LW community and cryonics, SENS, etc, the mental relationship between the median LW poster and the future seems more personal and less abstract.

\n

Politics: The predominant thinking on LW seems to be a (very slightly left-leaning) technolibertarianism, although since it's open to anyone who wanders in from the Internet, there's a lot of variation (if either SIAI or FHI have an especially strong political stance per se, I've not noticed it). There's also a general skepticism here regarding the soundness of most political thought and of many political processes.  LN seems further left on average and more comfortable with politics in general (although calling it a political organization would be a bit of a stretch). Keeping with this, LW seems to have more emphasis on individual decision making and improvement than LN.

\n

Thoughts?

" } }, { "_id": "nBNCjuDJxsWPY6Fj4", "title": "Should LW have a public censorship policy?", "pageUrl": "https://www.lesswrong.com/posts/nBNCjuDJxsWPY6Fj4/should-lw-have-a-public-censorship-policy", "postedAt": "2010-12-11T22:45:15.282Z", "baseScore": 25, "voteCount": 20, "commentCount": 42, "url": null, "contents": { "documentId": "nBNCjuDJxsWPY6Fj4", "html": "

It might mollify people who disagree with the current implicit policy, and make discussion about the policy easier. Here's one option:

\n
\n

There's a single specific topic that's banned because the moderators consider it a Basilisk. You won't come up with it yourself, don't worry. Posts talking about the topic in too much detail will be deleted. 

\n
\n

One requirement would be that the policy be no more and no less vague than needed for safety.

\n

Discuss.

" } }, { "_id": "LCRSftJ5NopoMWhqi", "title": "Why Eliezer Yudkowsky receives more upvotes than others", "pageUrl": "https://www.lesswrong.com/posts/LCRSftJ5NopoMWhqi/why-eliezer-yudkowsky-receives-more-upvotes-than-others", "postedAt": "2010-12-11T21:04:07.869Z", "baseScore": 8, "voteCount": 23, "commentCount": 11, "url": null, "contents": { "documentId": "LCRSftJ5NopoMWhqi", "html": "

One of the reasons why Yudkowsky is upvoted so heavily is of course that he's often, fasten your seatbelts, brilliantly right (whoever reads my comments knows that I am not really the most frenetic believer, so I think I can say this without sounding cultish). But others are too, so is Less Wrong a cult? Nah! There is a simple explanation for this phenomenon:

[screenshot: Google Reader subscriber count for Eliezer Yudkowsky's Less Wrong feed]

\n

As you can see, there are already 13 people who subscribe to his Less Wrong feed via Google Reader. And there are many other ways to subscribe to an RSS feed (which is not the only way to follow his mental outpourings anyway), so the number of people who follow every post and comment is likely much higher.

\n

That's why most of his comments receive more upvotes than other comments. It is not because he is a cult leader; it's just that his comments are read by many more people than the average comment on Less Wrong. There are of course other causes as well, but this seems to explain a fair chunk of the effect.

\n

Also consider that I'm often upvoted (with a current Karma score of 1959) and I do not keep quiet regarding my doubts about some topics directly related to Yudkowsky and the SIAI. How could this happen if Less Wrong was an echo chamber?

\n

I just wanted to let you know, because I have been wondering about it in the past.

" } }, { "_id": "84gvE26m2hFiAJP8r", "title": "Calling LW Londoners", "pageUrl": "https://www.lesswrong.com/posts/84gvE26m2hFiAJP8r/calling-lw-londoners", "postedAt": "2010-12-11T17:53:31.389Z", "baseScore": 6, "voteCount": 4, "commentCount": 14, "url": null, "contents": { "documentId": "84gvE26m2hFiAJP8r", "html": "

It seems we haven't done any London meetups in a while. Is anyone up for arranging something within January?

" } }, { "_id": "5NJn5xgEn2RsW9S9z", "title": "If reductionism is the hammer, what nails are out there? ", "pageUrl": "https://www.lesswrong.com/posts/5NJn5xgEn2RsW9S9z/if-reductionism-is-the-hammer-what-nails-are-out-there", "postedAt": "2010-12-11T13:58:18.087Z", "baseScore": 23, "voteCount": 17, "commentCount": 50, "url": null, "contents": { "documentId": "5NJn5xgEn2RsW9S9z", "html": "

EDIT: I'm moving this to the Discussion section because people seem to not like it (lack of upvotes) and to find the writing unclear.  I'd love writing advice, if anyone wants to offer some.

\n

 

\n
\n

 

\n

Related to: Dissolving the question, Explaining vs explaining away

\n

I review parts of the reductionism sequence, in hopes of setting up for future reduction work.

\n

So, you’ve been building up your reductionism muscles, on LW or elsewhere.  You’re no longer confused about a magical essence of free will; you understand how particular arrangements of atoms can make choices.  You’re no longer confused about a magical essence of personal identity; you understand where the feeling of “you” comes from, and how one could in principle create many copies of you that continued \"your\" experiences, and how the absence of an irreducible essence doesn’t reduce life’s meaning.

\n

The natural next question is: what other phenomena can you reduce?  What topics are we currently confused about which may yield to the same tools?  And what tools, exactly, do we have for such reductions?

\n

With the goal of paving the way for new reductions, then, let’s make a list of questions that persistently felt like questions about magical essences, including both questions that have been solved, and questions about which we are currently confused.  And let’s also list tools or strategies that assisted in their dissolution.  I made an attempt at such lists below; perhaps you can help me refine them?

\n

Some places where many expected a fundamental or non-reducible “essence”[1]:

\n

1.  Ducks.

\n

Why it's tempting to postulate an essence: Organisms seem to come in types.  New organisms of the given type (e.g., new ducks) come into existence, almost as though the type “Duck” had causal power.  Humans are able to form mental concepts of \"duck\" that approximately mirror the outside predictive regularities.

\n

2.  Life.

\n

Why it's tempting to postulate an essence: Living creatures act very differently from dead creatures.  A recently killed animal doesn’t move, loses its body heat, etc., even though its matter is in almost the same configuration. [2]

\n

3.  Free will.

\n

Why it's tempting to postulate an essence:  Humans (among other things) are in fact organized to “choose” their actions in some meaningful sense.  We (mostly) choose a single course of action in a relatively unified manner that responds to outside information and incentives.  “Choice” also seems like a useful internal concept, but I’m not sure how to describe the details here.

\n

4.  Personal identity

\n

Why it's tempting to postulate an essence: People have personalities, plans, beliefs, bodies, etc. that approximately persist over time.  Internally, we experience consistent memories that happened “to us”, we choose our own actions, and we anticipate future experiences.

\n

5.  Pain

\n

Why it's tempting to postulate an essence: We feel pain.  We find ourselves motivated to avoid pain.  We sometimes almost feel others’ pain, as when we wince and rub our thumbs after watching someone else smash their thumb with a hammer, and we often find ourselves motivated to avoid their pain as well.  We can report verbally on the pain, modify our behavior to reduce the pain, etc.

\n

6.  Mathematics

\n

Why it's tempting to postulate an essence:  Mathematics often pops up in science.  It’s also “simple”, is at least somewhat culturally universal, is relatively easy to implement portions of in machines, has a nice notion of “proof” whereby we can often formally determine what is true, and can often determine true results without much contact with outside empirical data, and is something aliens might plausibly share with us.

\n

7.  Reality / existence / the physical world

\n

Why it's tempting to postulate an essence:  Our perceptions are well predicted by imagining a set of fairly stable objects that we can see, touch, turn over in our hands, etc. and that retain their color, shape, heft, and other properties fairly stably over time.  At higher levels of abstraction, too, the world is fairly lawful and coherent. [3]

\n

What lessons for future reductions?

\n

These examples suggest the following heuristics:  

\n

A.  Even when it really, really feels like there should be an essence, there probably isn’t one.  

\n

B.  Philosophical questions are just ordinary questions that one is particularly ignorant about; they are not questions about separate magisteria that must permanently be reasoned about in some special way.  

\n

C.  People expect magical essences in places where there really are interesting empirical regularities.  In order to understand those regularities, and to create a new set of concepts that better do the work that our old magical essences intuitions used to do, it is necessary to do real research.  Ritual assertions that “It’s all physics” and “there aren’t essences” do not create the needed concepts and anticipations.

\n

D.  A reasonable first step, in tackling a new confusion, is to ask the why it feels like there is a question or concept there, and to list the empirical regularities, or cognitive artifacts, that contribute to that feeling.

\n

These heuristics aren't original; Eliezer noted them already in his reductionism sequence (which is very much worth reading).  But I suspect that many apply these heuristics more to problems they already understand (“of course free will has no magical essence”) than to problems we don't yet understand (“of course there is no magical essence that distinguishes our actual, real world from imaginable physicses/worlds that aren't real”).

\n

I'm hoping that reviewing heuristics for reduction, and staring at solved and unsolved problems side by side, may help us with the unsolved problems (which I'll attempt some steps toward in subsequent posts).

\n

 

\n
\n

[1]  I agree with SarahC’s point that humans seem predisposed to impute essences everywhere.  Still, discussions about whether there’s a magical essence “free will” seem to pop up more often than discussions about whether there’s a magical essence “carpet”, “ocean”, “California”, or “female”.  I mean, folks are interested in these other questions, and they have discussions about what meaning to use and how much that meaning cleaves nature at its joints, but they don’t generally expect a separate sort of essence that has causal powers and isn’t made out of atoms.

\n

[2] People unacquainted with modern biology seem often to make predictive errors due to expecting an essence of life.  For example, I had lunch the other day with a physics professor from a good university who thought that, even if we could assemble an atom-for-atom duplicate of a person's exact physical state, it might well not act like a person for want of a soul.  Another acquaintance was surprised to hear that scientists do in fact believe that a cell assembled in a test tube would act just like a cell assembled anywhere else.

\n

[3] I'm less satisfied with this unpacking than with the others on the list.  Can anyone do better?

" } }, { "_id": "HbzmZGTG846J4wkh6", "title": "Life-tracking application for android", "pageUrl": "https://www.lesswrong.com/posts/HbzmZGTG846J4wkh6/life-tracking-application-for-android", "postedAt": "2010-12-11T01:48:11.676Z", "baseScore": 26, "voteCount": 21, "commentCount": 17, "url": null, "contents": { "documentId": "HbzmZGTG846J4wkh6", "html": "

Hi, lesswrong.

\n

I just finished my application for Android devices, LifeTracking, which was motivated by the discussions here, primarily the discussions about akrasia and measuring/tracking your own actions. I don't want to make this sound like an advertisement (the application is completely free anyway), but I would really, really like to get feedback from you and hear your comments, criticism, and suggestions. If there are enough LessWrong-specific feature requests, I will make a separate application just for that.

\n

Here is a brief description of the app:

\n

 

\n

LifeTracking application allows you to track any value (like your weight or your lesswrong karma), as well as any time-consuming activities (like sleeping, working, reading Harry Potter fanfic, etc). You can see the data visually, edit it, and analyze it.

\n

The goal of the application is to help you know yourself and your schedule better. Hopefully, when you graph various aspects of your life side-by-side you will come to a better understanding of yourself. Also, this way you will not have to rely on your faulty memory to remember all that data.

\n

You can download the app from the Market (link only works from Android devices) or download .apk directly. Screenshots: [1], [2], [3], [4], [5], [6].

\n

 

\n

Edit: LifeTracking website

\n

And while we are on the topic of mobile apps, what other applications would you like to see made? (For example, another useful application would be \"your personal prediction tracker\", where you enter various short-term predictions, your confidence interval, and then enter the actual result. You can classify each prediction and then see if you are over- or under-confident in certain areas. (I remember seeing a website that does something similar, but can't find it now.))
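For what it's worth, here is a minimal sketch of how such a prediction tracker might store data and report calibration (hypothetical code, not part of LifeTracking; it uses a single confidence number per prediction rather than an interval):

```python
# Minimal calibration-tracker sketch (hypothetical, not LifeTracking code).
from collections import defaultdict

predictions = []  # each entry: (category, confidence in [0, 1], came_true)

def record(category, confidence, came_true):
    predictions.append((category, confidence, came_true))

def calibration_by_category():
    buckets = defaultdict(list)
    for category, confidence, came_true in predictions:
        buckets[category].append((confidence, came_true))
    report = {}
    for category, entries in buckets.items():
        mean_confidence = sum(c for c, _ in entries) / len(entries)
        hit_rate = sum(1 for _, t in entries if t) / len(entries)
        report[category] = (mean_confidence, hit_rate)  # overconfident if mean_confidence > hit_rate
    return report

record("work", 0.9, True)
record("work", 0.8, False)
print(calibration_by_category())  # roughly {'work': (0.85, 0.5)}: overconfident in this toy example
```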

" } }, { "_id": "g8R5oG6zAmJp8BcQ9", "title": "The term 'altruism' in group selection", "pageUrl": "https://www.lesswrong.com/posts/g8R5oG6zAmJp8BcQ9/the-term-altruism-in-group-selection", "postedAt": "2010-12-10T20:50:16.563Z", "baseScore": 7, "voteCount": 12, "commentCount": 17, "url": null, "contents": { "documentId": "g8R5oG6zAmJp8BcQ9", "html": "\n

The way I see it, altruism has been the big selling point for group selection. The only way altruism would have been able to evolve is through group selection, so the presence of altruism is strong evidence for the existence of group selection.

\n

Group selectionists have been (rightly) criticized on the grounds that they were merely looking for an explanation that would fit the results they had already decided on, and wrote the conclusion before looking for hypotheses. They wanted a nice, friendly, altruistic world and devised a theory of why this should be so.

\n

 

\n

Now, while I fully agree their methods were wrong, I want to take a closer look at the word “altruism” in this context.

\n

 

\n
\n

  Is a cow a vegetarian?

\n
\n

Think about this question, if you will, before reading on.

\n

 

\n

I would say no, it's not. True, a cow only eats plants, but there is a crucial difference that separates it from a real vegetarian. When a cow is hungry, its brain tells it to eat grass; it doesn't give it the option of choosing meat. A cow's digestive system is specialized for processing grass; eating meat would send it haywire.

\n

A vegetarian, on the other hand, is a human, an omnivore; he can just as easily process food from animal sources as from plant sources. Not eating meat is a deliberate and conscious choice.

\n

The point I’m getting at is that eating plants because that’s all you can do doesn’t make you a real vegetarian. Luckily we have a convenient term to make this distinction: herbivore.

\n

 

\n

Now let's go back to altruism.

\n

Bees have been called altruistic; after all, what greater sacrifice can an organism make than giving up its ability to reproduce? Suppose that, to stop overpopulation, we sterilized every newborn child for the next three years.

\n

Every time we meet one of those children we would give them a pat on the back and congratulate them for the enormous amounts of altruism they have displayed. I doubt many of them would agree.

\n

It’s not really altruism if you have no choice, is it? The difference between true altruism and cases like this is deliberate choice and doing more then “default helping”.

\n

 

\n

In short: whenever the group selectionists saw a herbivore, they called it a vegetarian. Just as we make a distinction between herbivores and vegetarians, I would like to see someone introduce a new term for that-thing-animals-do-that-looks-like-altruism.

\n

 

\n

p.s. This is my very first article on this site, any feedback and tips would be greatly appreciated.

" } }, { "_id": "p9N2qfkjHoft5L85v", "title": "A sense of logic", "pageUrl": "https://www.lesswrong.com/posts/p9N2qfkjHoft5L85v/a-sense-of-logic", "postedAt": "2010-12-10T18:19:38.320Z", "baseScore": 23, "voteCount": 30, "commentCount": 281, "url": null, "contents": { "documentId": "p9N2qfkjHoft5L85v", "html": "

What's the worst argument you can think of?

\n

One of my favorites is from a Theodore Sturgeon science fiction story in which it's claimed that faster than light communication must be possible because even though stars are light years apart, a person can look from one to another in a moment.

\n

I don't know about you, but bad logic makes my stomach hurt, especially on first exposure.

\n

This seems rather odd-- what sort of physical connection might that be?

\n

Also, I'm not sure how common the experience is, though a philosophy professor did confirm it for himself and (by observation) his classes. He mentioned one of the Socratic dialogues (sorry, I can't remember which one) which is a compendium of bad arguments and which seemed to have that effect on his classes.

\n

So, how did you feel when you read that bit of sf hand-waving? If your stomach hurt, what sort of stomach pain was it? Like nausea? Like being hit? Something else? If you had some other sensory reaction, can you describe it?

\n

For me, the sensation is some sort of internal twinge which isn't like nausea.

\n

Anyway, both for examination and for the fun of it, please supply more bad arguments.

\n

I think there are sensory correlates for what is perceived to be good logic (unfortunately, they don't tell you whether an argument is really sound)-- kinesthesia which has to do with solidity, certainty, and at least in my case, a feeling that all the corners are pinned down.

\n

Addendum: It looks as though I was generalizing from one example. If you have a fast reaction to bad arguments and it isn't kinesthetic, what is it?

" } }, { "_id": "pk7ED8sXEGdzTCrn2", "title": "Rational entertainment industry?", "pageUrl": "https://www.lesswrong.com/posts/pk7ED8sXEGdzTCrn2/rational-entertainment-industry", "postedAt": "2010-12-10T15:55:48.002Z", "baseScore": 5, "voteCount": 7, "commentCount": 26, "url": null, "contents": { "documentId": "pk7ED8sXEGdzTCrn2", "html": "

\n

By \"the industry\" in this post, I refer to that part of the entertainment industry which:

\n

1. Produces movies, TV and video games (as opposed to books, comics etc.)

\n

2. Is motivated by profit (as opposed to fun, politics etc.)

\n

3. Consists of companies (as opposed to lone developers, student teams etc.)

\n

It seems to me that the industry has two characteristics:

\n

Formulaic

\n

Most products follow some formula which is known to be workable.

\n

Under what circumstances is this rational? (I'm not commenting on whether it's artistically good or bad; again, I'm only discussing entertainment as a commercial enterprise motivated by profit.) It seems to me following a proven formula is rational if your priority is to not lose, to go for the sure thing, i.e. the chance of a big hit is not worth the risk of a complete flop.

\n

Hit driven

\n

It's the accepted wisdom that entertainment is a hit driven industry: almost all the profits are generated by a handful of the most successful products, with the rest losing money or barely covering costs.

\n

Now my question: isn't there a contradiction here? If you're selling insurance, following a proven formula may well be the rational thing to do. If you're the owner of one of the handful of franchises that is pulling in big profits, of course you shouldn't mess with a winner. But if you're one of the many also-rans, how is it rational to stick with an almost sure loser? In a hit driven industry, wouldn't it be more rational to concentrate on maximizing your chance of winning big, instead of trying to minimize the risk of a flop?

\n

But I've never worked in the entertainment industry; perhaps my layman's impression of it is inaccurate. Is there something I'm missing, or is a substantial amount of expected profit really being left on the table?

\n

" } }, { "_id": "w72tje6XeZrxL325m", "title": "Anthropologists and \"science\": dark side epistemology?", "pageUrl": "https://www.lesswrong.com/posts/w72tje6XeZrxL325m/anthropologists-and-science-dark-side-epistemology", "postedAt": "2010-12-10T10:49:41.139Z", "baseScore": 16, "voteCount": 11, "commentCount": 6, "url": null, "contents": { "documentId": "w72tje6XeZrxL325m", "html": "

The American Anthropological Association has apparently decided to ditch the word \"science\", arguably so they can promote political messages without hindrance from empirical data.

\n

If so, this might be an example of dark side epistemology.

\n

(Articles in Psychology Today and NYT).

" } }, { "_id": "EBTbsoRqm8WXximfn", "title": "How To Lose 100 Karma In 6 Hours -- What Just Happened", "pageUrl": "https://www.lesswrong.com/posts/EBTbsoRqm8WXximfn/how-to-lose-100-karma-in-6-hours-what-just-happened", "postedAt": "2010-12-10T08:27:28.781Z", "baseScore": -75, "voteCount": 101, "commentCount": 212, "url": null, "contents": { "documentId": "EBTbsoRqm8WXximfn", "html": "
As with all good posts, we begin with a hypothetical:
\n
Imagine that, in the country you are in, a law is passed saying that if you drive your car without your seat belt on, you will be fined $100.
\n
Here's the question: Is this blackmail? Is this terrorism?
\n
Certainly it's a zero-sum interaction (at least in the short term). You either have to endure the inconvenience of putting on a seat belt, or risk the chance of a $100 fine.
\n
You may also want to consider that cooperating with the seat belt fine may also cause lawmakers to believe that you'll also follow future laws.
\n

\n
If that one seems too obvious, here's another: A law is passed establishing a $500 fine for pirating an album on the internet.
\n
Does this count as blackmail? does this count as terrorism?
\n

\n
What if, instead of passing a law, the music companies declare that they will sue you for $500 every time you pirate an album?
\n
Is it blackmail yet? terrorism? Will complying teach the music companies that throwing their weight around works?
\n

\n
Enough with the hypothetical, this one's real: The moderator of one of your favorite online forums declares that if you post things he feels are dangerous to read, he will censor them. He may or may not tell you when he does this. If you post such things repeatedly, you will be banned.
\n
Does this count as blackmail? Does this count as terrorism? Should we not comply with him to prevent similar future abuses of power?
\n

\n
Two months ago, I found a third option to the comply/revolt dilemma: turn the force back on the forceful.
\n
Imagine this: you're the moderator of an online forum and care primarily about one thing: reducing existential risks. One day, one of your forum members vows to ensure that censoring posts will cause a small increase in existential risks.
\n
Does this count as blackmail? Does this count as terrorism? Would you not comply to prevent similar future abuses of power?
\n

\n

\n
(Please pause here if you're feeling emotional -- what follows is important, and deserves a cool head)
\n

\n

\n
It is my opinion that none of these are blackmail.
\n
Blackmail is fundamentally a single shot game.
\n
Laws and rules are about the structure of the world's payoffs, and about changing that structure to incentivize behavior.
\n
Now it's fair to say that there are just laws, and there are unjust laws... and perhaps we should refuse to follow unjust laws... but to call a law blackmail or terrorism seems incorrect.
\n

\n
Here's what happened:
\n
\n\n
\n

\n
This will continue for the foreseeable future. I'm not happy about it either. Basically I think the sanest way to think about the situation is to assume that Yudkowsky's \"delete\" link also causes a 0.0001% increase in existential risk, and hope that he uses it appropriately.
\n
He doesn't feel this way. He feels that the only correct answer here is to ignore the 0.0001% increase. We are at an impasse.
\n

\n
FAQ:
\n
Q: Will you reconsider?
\n
A: Sadly no. This situation is symmetric -- just as I am not immune to Yudkowsky's laws (censorship on LW if I talk about \"dangerous\" ideas), he is not immune to my laws.
\n

\n
Q: How can you be sure that a post was censored rather than deleted by the owner?
\n
A: This is sometimes hard, and sometimes easy. In general I will err on the side of caution.
\n

\n
Q: How can you be sure that you haven't missed a deleted comment?
\n
A: I use, and am improving, an automated solution.
\n

\n
Q: What is the nature of the existential risk increase?
\n
A: Emails. (Yes, emails). Maybe some phone calls.
\n
There is a simple law that I believe makes intuitive sense to the conservative right. A law that will be easy for them to endorse. This law would be disastrous for the relative chance of our first AI being a FAI vs a UFAI. Every time EY decides to take a 0.0001% step, an email or phone call will be made to raise awareness about this law.
\n

\n
Q: Is there any way for me to gain access to the censored content?
\n
A: I am working on a website that will update in real time as posts are deleted from LessWrong. Stay tuned!
\n

\n
Q: Will you still post here under waitingforgodel?
\n
A: Yes, but less. Replying to 100+ comments is very time consuming, and I have several projects in dire need of attention.
\n

\n
Thank you very much for your time and understanding,
\n
-wfg
\n

\n
Edit: This post is describing what happened, not why. For a discussion about why I feel that the precommitment will result in an existential risk savings, please see the \"precommitment\" thread, where it is talked about extensively.
" } }, { "_id": "NrwAToAbNsLGSZ8b7", "title": "Link: What does it feel like to be stupid?", "pageUrl": "https://www.lesswrong.com/posts/NrwAToAbNsLGSZ8b7/link-what-does-it-feel-like-to-be-stupid", "postedAt": "2010-12-10T07:43:28.889Z", "baseScore": 11, "voteCount": 10, "commentCount": 7, "url": null, "contents": { "documentId": "NrwAToAbNsLGSZ8b7", "html": "

What does it feel like to be stupid?

\n
\n

I had an arterial problem for a couple of years, which reduced blood supply to my heart and brain and depleted B vitamins from my nerves (to keep the heart in good repair). Although there is some vagueness as to the mechanisms, this made me forgetful, slow, and easily overwhelmed. In short I felt like I was stupid compared to what I was used to, and I was.

\n

It was frightening at first because I knew something wasn't right but didn't know what, and very worrying for my career because I was simply not very good any more.

\n

However, once I got used to it and resigned myself, it was great.

\n
\n

Full article:
http://www.quora.com/What-does-it-feel-like-to-be-stupid

" } }, { "_id": "vEZ3Ajyp3LBtBwyuG", "title": "A Thought on Pascal's Mugging", "pageUrl": "https://www.lesswrong.com/posts/vEZ3Ajyp3LBtBwyuG/a-thought-on-pascal-s-mugging", "postedAt": "2010-12-10T06:08:00.687Z", "baseScore": 15, "voteCount": 12, "commentCount": 159, "url": null, "contents": { "documentId": "vEZ3Ajyp3LBtBwyuG", "html": "

For background, see here.

\n

In a comment on the original Pascal's mugging post, Nick Tarleton writes:

\n
\n

[Y]ou could replace \"kill 3^^^^3 people\" with \"create 3^^^^3 units of disutility according to your utility function\". (I respectfully suggest that we all start using this form of the problem.)

\n

Michael Vassar has suggested that we should consider any number of identical lives to have the same utility as one life. That could be a solution, as it's impossible to create 3^^^^3 distinct humans. But this is also irrelevant to the create-3^^^^3-disutility-units form.

\n
\n

Coming across this again recently, it occurred to me that there might be a way to generalize Vassar's suggestion in such a way as to deal with Tarleton's more abstract formulation of the problem. I'm curious about the extent to which folks have thought about this. (Looking further through the comments on the original post, I found essentially the same idea in a comment by g, but it wasn't discussed further.)

\n

The idea is that the Kolmogorov complexity of \"3^^^^3 units of disutility\" should be much higher than the Kolmogorov complexity of the number 3^^^^3. That is, the utility function should grow only according to the complexity of the scenario being evaluated, and not (say) linearly in the number of people involved. Furthermore, the domain of the utility function should consist of low-level descriptions of the state of the world, which won't refer directly to words uttered by muggers, in such a way that a mere discussion of \"3^^^^3 units of disutility\" by a mugger will not typically be (anywhere near) enough evidence to promote an actual \"3^^^^3-disutilon\" hypothesis to attention.
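One hedged way to make this concrete (my own toy formalization, not something from the original comments): if the prior probability of a scenario s falls off like 2^-K(s) with its description complexity K(s), and the utility assigned to s is allowed to grow no faster than 2^K(s), then no single scenario, however extravagantly described, can contribute more than a constant to the expected-utility sum.

```latex
% Toy bound: complexity-penalized prior vs. complexity-limited utility
\Pr(s) \approx 2^{-K(s)}, \qquad |U(s)| \le c \cdot 2^{K(s)}
\quad \Longrightarrow \quad \bigl|\Pr(s)\,U(s)\bigr| \le c .
```

Under that assumption, a mugger's verbal description changes the picture only by the few bits of evidence the words carry, which matches the post's suggestion that talk alone is nowhere near enough to promote a 3^^^^3-disutilon hypothesis to attention.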

\n

This seems to imply that the intuition responsible for the problem is a kind of fake simplicity, ignoring the complexity of value (negative value in this case). A confusion of levels also appears implicated (talking about utility does not itself significantly affect utility; you don't suddenly make 3^^^^3-disutilon scenarios probable by talking about \"3^^^^3 disutilons\").

\n

What do folks think of this? Any obvious problems? 

" } }, { "_id": "5C2sytWZNqSujXjoE", "title": "Kazakhstan's president urges scientists to find the elixir of life", "pageUrl": "https://www.lesswrong.com/posts/5C2sytWZNqSujXjoE/kazakhstan-s-president-urges-scientists-to-find-the-elixir", "postedAt": "2010-12-10T04:17:13.544Z", "baseScore": 12, "voteCount": 8, "commentCount": 3, "url": null, "contents": { "documentId": "5C2sytWZNqSujXjoE", "html": "

...according to this front-page Reddit headline I just saw, which links to this Guardian article. I wonder if he's heard of KrioRus, whether he's signed up (Wikipedia says they offer services \"to clients from Russia, CIS and EU\"), and what his odds would be if he were (would it be possible to emigrate to Russia to be closer to the facility, and if not, what would be the best possible option?). Given his being a head of state, presumably it'd be pretty tough for an advocate to even get close enough to try to make the case.

\n

Searching the Reddit comment thread for \"cryo\" turned up nothing.

" } }, { "_id": "Thi2sjShMEBgKM3TX", "title": "“Fake Options” in Newcomb’s Problem", "pageUrl": "https://www.lesswrong.com/posts/Thi2sjShMEBgKM3TX/fake-options-in-newcomb-s-problem", "postedAt": "2010-12-10T02:12:09.126Z", "baseScore": 1, "voteCount": 7, "commentCount": 22, "url": null, "contents": { "documentId": "Thi2sjShMEBgKM3TX", "html": "

This is an exploration of a way of looking at Newcomb’s Problem that helped me understand it. I hope somebody else finds it useful. I may add discussions of other game theory problems in this format if anybody wants them.

\n

 

\n

Consider Newcomb’s Problem: Omega offers you two boxes, one transparent and containing $1000, the other opaque and containing either $1 million or nothing. Your options are to take both boxes, or only take the second one; but Omega has put money in the second box only if it has predicted that you will only take 1 box. A person in favor of one-boxing says, “I’d rather have a million than a thousand.” A two-boxer says, “Whether or not box B contains money, I’ll get $1000 more if I take box A as well. It’s either $1001000 vs. $1000000, or $1000 vs. nothing.” To get to these different decisions, the agents are working from two different ways of visualising the payoff matrix. The two-boxer sees four possible outcomes and the one-boxer sees two, the other two having very low probability.

\n

The two-boxer’s payoff matrix looks like this:

\n

                         Box B
                 | Money    | No money |
Decision  1-box  | $1mil    | $0       |
          2-box  | $1001000 | $1000    |

\n

The outcomes $0 and $1001000 both require Omega making a wrong prediction. But as the problem is formulated, Omega is superintelligent and has been right 100 out of 100 times so far. So the one-boxer, taking this into account, describes the payoff matrix like this:

\n

                         Box B
                 | Money        | No money     |
Decision  1-box  | $1mil        | not possible |
          2-box  | not possible | $1000        |

\n

            If Omega is really a perfect (nearly perfect) predictor, the only possible (not hugely unlikely) outcomes are $1000 for two-boxing and $1 million for one-boxing, and considering the other outcomes is an epistemic failure.
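As a hedged illustration of the one-boxer's reading (my own calculation, using the payoff numbers from the post and treating the predictor's accuracy p as a free parameter): for any accuracy above 50.05%, one-boxing has the higher expected payoff.

```python
def expected_payoffs(p):
    """p = probability that Omega's prediction is correct."""
    one_box = p * 1_000_000 + (1 - p) * 0          # full box B, or nothing
    two_box = p * 1_000 + (1 - p) * 1_001_000      # usually just the $1000
    return one_box, two_box

for p in (0.5, 0.5005, 0.9, 0.99, 1.0):
    one, two = expected_payoffs(p)
    best = "one-box" if one > two else ("two-box" if two > one else "tie")
    print(f"p={p:.4f}: one-box EV ${one:,.0f}, two-box EV ${two:,.0f} -> {best}")
```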

\n

 

" } }, { "_id": "Mey7SLY3xnchMv98H", "title": "Unpacking the Concept of \"Blackmail\"", "pageUrl": "https://www.lesswrong.com/posts/Mey7SLY3xnchMv98H/unpacking-the-concept-of-blackmail", "postedAt": "2010-12-10T00:53:18.674Z", "baseScore": 36, "voteCount": 35, "commentCount": 142, "url": null, "contents": { "documentId": "Mey7SLY3xnchMv98H", "html": "

Keep in mind: Controlling Constant Programs, Notion of Preference in Ambient Control.

\n

There is a reasonable game-theoretic heuristic, \"don't respond to blackmail\" or \"don't negotiate with terrorists\". But what is actually meant by the word \"blackmail\" here? Does it have a place as a fundamental decision-theoretic concept, or is it merely an affective category, a class of situations activating a certain psychological adaptation that expresses disapproval of certain decisions and on the net protects (benefits) you, like those adaptations that respond to \"being rude\" or \"offense\"?

\n

We, as humans, have a concept of \"default\", \"do nothing strategy\". The other plans can be compared to the moral value of the default. Doing harm would be something worse than the default, doing good something better than the default.

\n

Blackmail is then a situation where by decision of another agent (\"blackmailer\"), you are presented with two options, both of which are harmful to you (worse than the default), and one of which is better for the blackmailer. The alternative (if the blackmailer decides not to blackmail) is the default.

\n

Compare this with the same scenario, but with the \"default\" action of the other agent being worse for you than the given options. This would be called normal bargaining, as in trade, where both parties benefit from exchange of goods, but to a different extent depending on the price that is set.
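As a toy encoding of the distinction drawn so far (my own sketch, not a proposed formalization, and using only your side of the payoffs), the informal rule reads roughly as follows:

```python
def classify_offer(default_payoff, option_payoffs):
    """Classify a take-it-or-leave-it offer relative to the 'default'
    (the other agent simply doing nothing), using only *your* payoffs.
    We assume the other agent made the offer because one option benefits
    them; that part is not visible in your payoffs alone."""
    worse = [v < default_payoff for v in option_payoffs.values()]
    better = [v > default_payoff for v in option_payoffs.values()]
    if all(worse):
        return "blackmail"  # every option is worse for you than the default
    if all(better):
        return "trade"      # every option is better for you than the default
    return "unclear"        # mixed cases are where intuition does the hard work

# "Pay $100 or I blow up your car" vs. an ordinary mutually beneficial sale:
print(classify_offer(0, {"pay": -100, "refuse": -10_000}))           # blackmail
print(classify_offer(0, {"accept deal": 50, "counteroffer": 20}))    # trade
```

Note that the whole classification hinges on knowing which outcome counts as the "default", which is exactly the question raised next.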

\n

Why is the \"default\" special here? If bargaining or blackmail did happen, we know that \"default\" is impossible. How can we tell two situations apart then, from their payoffs (or models of uncertainty about the outcomes) alone? It's necessary to tell these situations apart to manage not responding to threats, but at the same time cooperating in trade (instead of making things as bad as you can for the trade partner, no matter what it costs you). Otherwise, abstaining from doing harm looks exactly like doing good. A charitable gift of not blowing up your car and so on.

\n

My hypothesis is that \"blackmail\" is what the suggestion of your mind to not cooperate feels like from the inside, the answer to a difficult problem computed by cognitive algorithms you don't understand, and not a simple property of the decision problem itself. By saying \"don't respond to blackmail\", you are pushing most of the hard work into intuitive categorization of decision problems into \"blackmail\" and \"trade\", with only correct interpretation of the results of that categorization left as an explicit exercise.

\n

(A possible direction for formalizing these concepts involves introducing some kind of notion of resources, maybe amount of control, and instrumental vs. terminal spending, so that the \"default\" corresponds to less instrumental spending of controlled resources, but I don't see it clearly.)

\n

(Let's keep on topic and not refer to powerful AIs or FAI in this thread, only discuss the concept of blackmail in itself, in decision-theoretic context.)

" } }, { "_id": "qoMvGdah6W473Mrw9", "title": "Science reveals how not to choke under pressure", "pageUrl": "https://www.lesswrong.com/posts/qoMvGdah6W473Mrw9/science-reveals-how-not-to-choke-under-pressure", "postedAt": "2010-12-09T16:46:28.536Z", "baseScore": 13, "voteCount": 10, "commentCount": 5, "url": null, "contents": { "documentId": "qoMvGdah6W473Mrw9", "html": "

Found via reddit, excerpt:

\n
\n

\n

Choking happens when we let anxious thoughts distract us or when we start trying to consciously control motor skills best left on autopilot. ...

\n

In her new book, Choke: What the Secrets of the Brain Reveal About Success and Failure at Work and at Play, Beilock deconstructs high-stakes moments—the ones seen around the world and the ones only our mothers care about—to explore why we sometimes falter, and why other times we nail it. ...

\n

What goes wrong in our brain when this happens? 
Working memory, housed in the prefrontal cortex, is what allows us to do calculations in our head and reason through a problem. Unfortunately, it’s a limited resource. If we’re doing an activity that requires a lot of cognitive horsepower, such as responding to an on-the-spot question, and at the same time we’re worrying about screwing up, then suddenly we don’t have the brainpower we need.

\n

Also, once we feel stressed, we often try to control what we’re doing in order to ensure success. So if we’re doing a task that normally operates largely outside of conscious awareness, such as an easy golf swing, what screws us up is the impulse to think about and control our actions. Suddenly we’re too attentive to what we’re doing, and all the training that has improved our motor skills is for naught, since our conscious attention is essentially hijacking motor memory. ...

\n

How can I prevent myself from overthinking? 
You might think that writing about your worries would just make them more salient. But there is work in clinical psychology showing that writing helps limit ruminative thoughts—those negative thoughts that are very hard to shake and that seem to grow the more you dwell on them. The idea is that you cognitively outsource your worries to the page. Writing about worries for 10 minutes right before taking a standardized test is really beneficial.

\n

\n
" } }, { "_id": "rBqk3vMJzpEWqFRmR", "title": "Reliably wrong", "pageUrl": "https://www.lesswrong.com/posts/rBqk3vMJzpEWqFRmR/reliably-wrong", "postedAt": "2010-12-09T14:46:02.157Z", "baseScore": 7, "voteCount": 7, "commentCount": 19, "url": null, "contents": { "documentId": "rBqk3vMJzpEWqFRmR", "html": "

Discussion of a book by \"Dow Jones 36,000\" Glassman. I'm wondering whether there are pundits who are so often wrong that their predictions are reliable indicators that something else (ideally the opposite) will happen.

" } }, { "_id": "RjYGFxXBbWiGWh53E", "title": "Utility is unintuitive", "pageUrl": "https://www.lesswrong.com/posts/RjYGFxXBbWiGWh53E/utility-is-unintuitive", "postedAt": "2010-12-09T05:39:34.176Z", "baseScore": -4, "voteCount": 16, "commentCount": 66, "url": null, "contents": { "documentId": "RjYGFxXBbWiGWh53E", "html": "

EDIT: My original post was wrong. I will leave it quoted at the end for the purposes of preserving information, but it is now replaced with a new post that correctly expresses my sentiments. The original title of this post was \"expected utility maximization is not rational\".

\n

As many people are probably aware, there is a theorem, called the Von Neumann-Morgenstern utility theorem, which states that anyone expressing consistent preferences must be maximizing the expected value of some function. The definition of consistent preferences is as follows:

\n

Let A, B, and C be probability distributions over outcomes. Let A < B denote that B is preferred to A, and A = B denote that someone is indifferent between A and B. Then we assume

\n

1. (Completeness) For any A and B, exactly one of A < B, B < A, or A = B holds.

2. (Transitivity) If A < B and B < C, then A < C.

3. (Continuity) If A < B < C, then there is some probability p in [0,1] such that the mixture pA + (1-p)C is ranked equal to B.

4. (Independence) For any C and any p in (0,1], A < B if and only if pA + (1-p)C < pB + (1-p)C.

Given these axioms, we can show that there exists a real-valued function u over outcomes such that A < B if and only if E_A[u] < E_B[u], where E_X is the expected value with respect to the distribution X.

\n

Now, the important thing to note here is that this is an existence proof only. The function u doesn't have to look at all reasonable; it merely assigns a value to every possible outcome. In particular, even if E1 and E2 seem like completely unrelated events, there is no reason as far as I can tell why u([E1 and E2]) has to have anything to do with u(E1)+u(E2). Among other things, u is only defined up to an additive constant, so not only is there no reason for such additivity to hold, it will be completely false for almost all possible utility functions, *even if you keep the person whose utility you are considering fixed*.

\n

In particular, it seems ridiculous that we would worry about an outcome that only occurs with probability 10^-100. What this actually means is that our utility function is always much smaller than 10^100, or rather that the ratio between the utility difference from trivially small changes in outcome and the utility difference from arbitrarily large changes in outcome is always much larger than 10^-100. This is how to avoid issues like Pascal's mugging, even in the least convenient possible world (since utility is an abstract construction, no universe can \"make\" a utility function become unbounded).
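To see the arithmetic being gestured at, here is a hedged check in Python (the bound B is an arbitrary illustrative number, not anything from the post): if utility differences are bounded by B, an event of probability 10^-100 can shift expected utility by at most 10^-100 times B, which is negligible unless B itself is astronomically large.

```python
p = 1e-100          # probability of the exotic outcome
B = 1e6             # assumed bound on |utility difference| between any two outcomes
max_shift = p * B   # largest possible change in expected utility from that outcome
print(max_shift)    # 1e-94: far too small to affect any decision
# Only if B were on the order of 1e100 or more could such an event matter,
# which is exactly the implied bound on the utility function described above.
```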

\n

What this means in particular is that saying that someone must maximize expected utility to be rational is not very productive. In particular, unless the other person has a sufficiently good technical grasp of what this means, they will probably do the wrong thing. Also, unless *you* have a good technical grasp of what it means, something that appears to violate expected utility maximization might not. Remember, because utility is an artificial construct that has no reason to look reasonable, someone with completely reasonable preferences could have a very weird-*looking* utility function. Instead of telling people to maximize expected utility, we should identify which of the four above axioms they are violating, then explain why they are being irrational (or, if the purpose is to educate in advance, explain to them why the four axioms above should be respected). [Note however that just because a perfectly rational person *always* satisfies the above axioms, that doesn't mean that you will be better off if you satisfy the above axioms more often. Your preferences might have a complicated cycle that you are unsure how to correctly resolve. Picking a resolution at random is unlikely to be a good idea.]

\n

Now, utility is this weird function that we don't understand at all. Then why does it seem like there's something called utility that **both** fits our intuitions and that people should be maximizing? The answer is that in many cases utility *can* be equated with something like money + risk aversion. The reason why is due to the law of large numbers, formalized through various bounds such as Hoeffding's inequality and the Chernoff bound, as well as more powerful arguments like concentration of measure. What these arguments say is that if you have a large number of random variables that are sufficiently uncorrelated and that have sufficiently small standard deviation relative to the mean, then with high probability their sum is very close to their expected sum. So when our variables all have means that are reasonably close to each other (as is the case for most everyday events), we can say something like the total *monetary* value of our combined actions will be very close to the sum of the expected monetary values of our individual actions (and likewise for other quantities like time). So in situations where, e.g., your goal is to spend as little time on undesirable work as possible, you want to minimize expected time spent on undesirable work, **as a heuristic that holds in most practical cases**. While this might make it *look* like your utility function is time in this case, I believe that the resemblance is purely coincidental, and you certainly shouldn't be willing to make very low-success-rate gambles with large time payoffs.
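A hedged simulation of that concentration argument, with purely illustrative numbers of my own choosing: summing many independent bounded payoffs, the realized total almost always lands within a couple of percent of the expected total, in line with a Hoeffding-style bound.

```python
import math
import random

random.seed(0)
n = 10_000                       # number of independent, bounded "everyday" payoffs
expected_total = n * 5.0         # each payoff is uniform on [0, 10], so mean 5

rel_devs = []
for _ in range(200):             # repeat the whole experiment 200 times
    total = sum(random.uniform(0, 10) for _ in range(n))
    rel_devs.append(abs(total - expected_total) / expected_total)
print(f"largest relative deviation observed: {max(rel_devs):.2%}")  # typically one or two percent

# Hoeffding's inequality for a sum of n variables each bounded in [0, 10]:
# P(|total - expected_total| >= t) <= 2 * exp(-2 * t**2 / (n * 10**2))
t = 0.05 * expected_total        # a 5% miss
print(2 * math.exp(-2 * t**2 / (n * 10**2)))   # about 7e-6
```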

\n

Old post:

\n
\n

I'm posting this to the discussion because I don't plan to make a detailed argument, mainly because I think this point should be extremely clear, even though many people on LessWrong seem to disagree with me.

\n

Maximizing expected utility is not a terminal goal; it is a useful heuristic. To see why always maximizing expected utility is clearly bad, consider an action A with a 10^-10 chance of giving you 10^100 units of utility, and a 1 - 10^-10 chance of losing you 10^10 units of utility. Then expected utility maximization requires you to perform A, even though it is obviously a bad idea. I believe this has been discussed here previously as Pascal's mugging.

\n

For some reason, this didn't lead everyone to the obvious conclusion that maximizing expected utility is the wrong thing to do, so I'm going to try to dissolve the issue by looking at why we would want to maximize expected utility in most situations. I think once this is accomplished it will be obvious why there is no particular reason to maximize expected utility for very low-probability events (in fact, one might consider having a utility function over probability distributions rather than actual states of the world).

\n

The reason that you normally want to maximize expected utility is because of the law of large numbers, formalized through various bounds such as Hoeffding's inequality and the Chernoff bound, as well as more powerful arguments like concentration of measure. What these arguments say is that if you have a large number of random variables that are sufficiently uncorrelated and that have sufficiently small variance relative to the mean, then with high probability their sum is very close to their expected sum. Thus for events with probabilities that are bounded away from 0 and 1 you always expect your utility to be very close to your expected utility, and should therefore maximize expected utility in order to maximize actual utility. But once the probabilities get small (or the events correlated, e.g. you are about to make an irreversible decision), these bounds no longer hold and the reasons for maximizing expected utility vanish. You should instead consider what sort of distribution over outcomes you find desirable.

\n
" } }, { "_id": "xYxbnnHqsJCT9fdzo", "title": "Delayed Solutions Game", "pageUrl": "https://www.lesswrong.com/posts/xYxbnnHqsJCT9fdzo/delayed-solutions-game", "postedAt": "2010-12-09T05:12:49.474Z", "baseScore": 20, "voteCount": 16, "commentCount": 57, "url": null, "contents": { "documentId": "xYxbnnHqsJCT9fdzo", "html": "

This is a thread to practice holding off on proposing solutions.

\n

Rules:

\n
1. Post your dilemma (i.e. problem, question, situation, etc.) as a top-level comment. You can always come back to edit this.
2. For the next 24 hours, replies in that thread can discuss only aspects of the problem, no solutions. (If something sounds too much like a solution, it gets downvoted.)
3. After the 24 hours have passed from the start of the thread, solutions may be proposed therein.

Note: Timezones for comments are in GMT (e.g. London), so you may need to use this to determine when 24 hours have passed in your local timezone.

" } }, { "_id": "9TzhaRnvra59wijZk", "title": "-", "pageUrl": "https://www.lesswrong.com/posts/9TzhaRnvra59wijZk/", "postedAt": "2010-12-09T04:06:02.645Z", "baseScore": 1, "voteCount": 3, "commentCount": 16, "url": null, "contents": { "documentId": "9TzhaRnvra59wijZk", "html": "

-

" } }, { "_id": "XHxpAbnok6YyhGv8S", "title": "Were atoms real?", "pageUrl": "https://www.lesswrong.com/posts/XHxpAbnok6YyhGv8S/were-atoms-real", "postedAt": "2010-12-08T17:30:37.453Z", "baseScore": 93, "voteCount": 79, "commentCount": 157, "url": null, "contents": { "documentId": "XHxpAbnok6YyhGv8S", "html": "

Related to: Dissolving the Question, Words as Hidden Inferences

\n

In what sense is the world “real”?  What are we asking, when we ask that question?

\n

I don’t know.  But G. Polya recommends that when facing a difficult problem, one look for similar but easier problems that one can solve as warm-ups.  I would like to do one of those warm-ups today; I would like to ask what disguised empirical question scientists were asking in 1860, when they debated (fiercely!) whether atoms were real.[1]

\n

Let’s start by looking at the data that swayed these, and similar, scientists.

\n

Atomic theory:  By 1860, it was clear that atomic theory was a useful pedagogical device.  Atomic theory helped chemists describe several regularities:

\n\n

Despite this usefulness, there was considerable debate as to whether atoms were “real” or were merely a useful pedagogical device.  Some argued that substances might simply prefer to combine in certain ratios and that such empirical regularities were all there was to atomic theory; it was needless to additionally suppose that matter came in small unbreakable units.

\n

Today we have an integrated picture of physics and chemistry, in which atoms have a particular known size, are made of known sets of subatomic particles, and generally fit into a total picture in which the amount of data far exceeds the number of postulated details atoms include.  And today, nobody suggests that atoms are not \"real\", and are \"merely useful predictive devices\".

\n

Copernican astronomy:  By the mid-sixteenth century, it was clear to the astronomers at the University of Wittenberg that Copernicus’s model was useful.  It was easier to use, and more theoretically elegant, than Ptolemaic epicycles.  However, they did not take Copernicus’s theory to be “true”, and most of them ignored the claim that the Earth orbits the Sun.

\n

Later, after Galileo and Kepler, Copernicus’s claims about the real constituents of the solar system were taken more seriously. This new debate invoked a wider set of issues, besides the motions of the planets across the sky. Scholars now argued about Copernicus’s compatibility with the Bible; about whether our daily experiences on Earth would be different if the Earth were in motion (a la Galileo); and about whether Copernicus’s view was more compatible with a set of physically real causes for planetary motion (a la Kepler).  It was this wider set of considerations that eventually convinced scholars to believe in a heliocentric universe. [2]

\n

Relativistic time-dilation: For Lorentz, “local time” was a mere predictive convenience -- a device for simplifying calculations.  Einstein later argued that this local time was “real”; he did this by proposing a coherent, symmetrical total picture that included local time.

\n

Luminiferous aether:  Luminiferous (\"light-bearing\") aether provides an example of the reverse transition.  In the 1800s, many scientists, e.g. Augustin-Jean Fresnel, thought aether was probably a real part of the physical world.  They thought this because they had strong evidence that light was a wave, including the interference of light in two-slit experiments, and all known waves were waves in something.[2.5]

\n

But the predictions of aether theory proved non-robust.  Aether not only correctly predicted that light would act as waves, but also incorrectly predicted that the Earth's motion with respect to aether should affect the perceived speed of light.  That is: luminiferous aether yielded accurate predictions only in narrow contexts, and it turned out not to be \"real\".

\n

Generalizing from these examples

\n

All theories come with “reading conventions” that tell us what kinds of predictions can and cannot be made from the theory.  For example, our reading conventions for maps tell us that a given map of North America can be used to predict distances between New York and Toronto, but that it should not be used to predict that Canada is uniformly pink.[3]  

\n

If the “reading conventions” for a particular theory allow for only narrow predictive use, we call that theory a “useful predictive device” but are hesitant about concluding that its contents are “real”.  Such was the state of Ptolemaic epicycles (which was used to predict the planets' locations within the sky, but not to predict, say, their brightness, or their nearness to Earth); of Copernican astronomy before Galileo (which could be used to predict planetary motions, but didn't explain why humans standing on Earth did not feel as though they were spinning), of early atomic theory, and so on.  When we learn to integrate a given theory-component into a robust predictive total, we conclude the theory-component is \"real\".

\n

It seems that one disguised empirical question scientists are asking, when they ask “Is X real, or just a handy predictive device?” is the question: “will I still get accurate predictions, when I use X in a less circumscribed or compartmentalized manner?” (E.g., “will I get accurate predictions, when I use atoms to predict quantized charge on tiny oil drops, instead of using atoms only to predict the ratios in which macroscopic quantities combine?”)[4][5]

\n

 

\n
\n

[1] Of course, I’m not sure that it’s a warm-up; since I am still confused about the larger problem, I don't know which paths will help. But that’s how it is with warm-ups; you find all the related-looking easier problems you can find, and hope for the best.

\n

[2]  I’m stealing this from Robert Westman’s book “The Melanchthon Circle, Rheticus, and the Wittenberg Interpretation of the Copernican Theory”.  But you can check the facts more easily in the Stanford Encyclopedia of Philosophy.

\n

[2.5] Manfred asks that I note that Lorentz's local time made sense to Lorentz partly because he believed in an aether that could be used to define absolute time.  I unfortunately haven't read or don't recall the primary texts well enough to add good interpretation here (although I read many of the primary texts in a history of science course once), but Wikipedia has some good info on the subject.

\n

[3] This is a standard example, taken from Philip Kitcher.

\n

[4]  This conclusion is not original, but I can't remember who I stole it from.  It may have been Steve Rayhawk.

\n

[5] Thus, to extend this conjecturally toward our original question: when someone asks \"Is the physical world 'real'?\" they may, in part, be asking whether their predictive models of the physical world will give accurate predictions in a very robust manner, or whether they are merely local approximations.  The latter would hold if e.g. the person: is a brain in a vat; is dreaming; or is being simulated and can potentially be affected by entities outside the simulation.

\n
And in all these cases, we might say their world is \"not real\".
" } }, { "_id": "SpmF2htzTP9SLC5nd", "title": "Why is our sex drive too strong?", "pageUrl": "https://www.lesswrong.com/posts/SpmF2htzTP9SLC5nd/why-is-our-sex-drive-too-strong", "postedAt": "2010-12-08T17:02:45.186Z", "baseScore": 4, "voteCount": 25, "commentCount": 24, "url": null, "contents": { "documentId": "SpmF2htzTP9SLC5nd", "html": "

It is a cultural universal that people are discouraged from having sex as often and with as many people as they want to.  Every culture I've ever heard of imposes many restrictions on sex.  I've never heard of a culture that shames people for being too stingy with sex.

\n

If we assume that culture is adaptive, this means that the human sex drive is too strong for humans in society.  Why is this?  As sex drive is a phenotypic feature with extraordinarily strong selective pressure, why haven't we evolved to have the proper sex drive?

\n

One reason could be that reduced sex drive is selected for at the level of the group, while higher sex drive is selected for at the level of the individual.

" } }, { "_id": "EEx7JrynhWJRdRJHt", "title": "Bridging Inferential Gaps", "pageUrl": "https://www.lesswrong.com/posts/EEx7JrynhWJRdRJHt/bridging-inferential-gaps", "postedAt": "2010-12-08T04:50:14.749Z", "baseScore": 13, "voteCount": 10, "commentCount": 20, "url": null, "contents": { "documentId": "EEx7JrynhWJRdRJHt", "html": "

This idea isn't totally developed, so I'm putting it in Discussion for now.

\n

Introduction:

\n

 

\n

A few hands have been wrung over how to quickly explain fundamental Less Wrong ideas to people, in a way that they can be approached, appraised, and considered rather than being isolated and bereft across an inferential gulf.

\n

 

\n

I'm pretty embarrassed to say that I hardly talk about these things with people in my everyday life, even though it makes up a major part of my worldview and outlook. I don't particularly care about making people have similar beliefs to me, but I feel like I'm doing my friends a disservice to not adequately explain these things that I've found so personally helpful. (Ugh, that sounds pseudo-religious. Cut me off here if this is a Bad idea.)

\n

 

\n

Would it be useful to start a project (read: group of posts by different people) to find ways to bridge said gaps in normal conversation? (Normal conversation meaning talking to a non-hostile audience that nonetheless isn't particularly interested in LW ideas). Mainly to talk about rationality things with friends and family members and whatnot, and possibly to help raise the sanity waterline (though this wasn't designed to do that).

\n

 

\n

A problem with the sequences for a nonplussed audience is that they assume the audience cares. I find that when I try to explain ideas like holding off on proposing solutions until the problem has been talked through, it just comes across as boring to other people, even if they aren't opposed to the idea at all.

\n

 

\n

With an ideological audience, the problem is much more difficult. Not only do you need to explain why something is correct, you need to convince them that believing in it is more important than holding on to their ideology, and that they should lower their \"defenses\" enough to actually consider it.

\n

 

\n

I think that, should this project be undertaken, it should be heavily tested and experiment-based. Like, people would actually try out the techniques on other people to see if they actually work.

\n

 

\n

Background/Thoughts/Questions:

\n

 

\n

Do we actually want to do this? It seems like it's a step towards a possibly PR-damaging evangelism, or towards just being generally annoying in conversation, among other things. On the other hand, I still want to be able to talk about these things offline every now and then.

\n

 

\n

It's been said that being half a rationalist is dangerous. How do you communicate enough rationality for it to not be dangerous? Or would they have to go all in, and make the casual conversation about it semi-pointless?

\n

 

\n

The inferential gaps that need crossing probably vary a lot by personal background. Once I was able to explain basic transhumanism (death is bad, we can probably enhance ourselves using technology) to someone, and have them agree with and like it almost immediately. Another time, the other person in the conversation just found it gross.

\n

 

\n

There are probably ways of explaining LW concepts to other people that rely on ideas of theirs that would mess up their thinking (e.g. Cognitive Bias explained through Original Sin might be a bad idea). How do you cross into rational ideas from nonrational ones? Should you try to exclusively explain rational ideas based on rational beliefs they already have? Could you reliably explain an idea to someone and expect that to cause them to question what you explained it in terms of (i.e. you explain A in terms of B, but A causes people to reject B)?

\n

 

\n

For talking to an ideological person, I think that the main common goal should be to convince them that a) ideas can be objectively true, b) it's good to abandon false beliefs, c) ideological people will rationalize things to fit into their ideology, and \"view arguments as soldiers\".

\n

 

" } }, { "_id": "XziKoFhDiNRLGsjr3", "title": "Help Request: How to maintain focus when emotionally overwhelmed", "pageUrl": "https://www.lesswrong.com/posts/XziKoFhDiNRLGsjr3/help-request-how-to-maintain-focus-when-emotionally", "postedAt": "2010-12-07T23:29:13.208Z", "baseScore": 6, "voteCount": 6, "commentCount": 14, "url": null, "contents": { "documentId": "XziKoFhDiNRLGsjr3", "html": "

So my personal life just got very interesting. In a net-positive way, certainly, but still, I am, as Calculon put it, \"filled with a large number of powerful emotions!\" -- some of which are anxious and/or panicky.

\n

This is making it annoyingly difficult to focus at work. I am an absolutely textbook \"Attention Deficit Oh-look-a-squirrel!\" case at the best of times, and this seems to have made it much, much worse. I can handle small tasks, but anything where I'm going to have to spend an hour solving multiple problems before producing results, I can hardly make myself start.

\n

Has anyone dealt with the problem of maintaining productive focus while emotionally overwhelmed/exhausted, and if so, do you have any pointers?

" } }, { "_id": "RSXZSkdjdhSZTkxD7", "title": "$100 for the best article on efficient charity: the finalists", "pageUrl": "https://www.lesswrong.com/posts/RSXZSkdjdhSZTkxD7/usd100-for-the-best-article-on-efficient-charity-the", "postedAt": "2010-12-07T21:15:31.102Z", "baseScore": 7, "voteCount": 6, "commentCount": 9, "url": null, "contents": { "documentId": "RSXZSkdjdhSZTkxD7", "html": "

Part of the Efficient Charity Article competition. Several people have written articles on efficient charity --

\r\n\r\n

 

\r\n\r\n

 

\r\n\r\n

 

\r\n\r\n

 

\r\n

Any comments on the finalists? Who do we think should be the winner?

" } }, { "_id": "rNkFLv9tXzq8Lrvrc", "title": "Best career models for doing research?", "pageUrl": "https://www.lesswrong.com/posts/rNkFLv9tXzq8Lrvrc/best-career-models-for-doing-research", "postedAt": "2010-12-07T16:25:22.584Z", "baseScore": 43, "voteCount": 35, "commentCount": 1001, "url": null, "contents": { "documentId": "rNkFLv9tXzq8Lrvrc", "html": "

Ideally, I'd like to save the world. One way to do that involves contributing academic research, which raises the question of what's the most effective way of doing that.

\n

The traditional wisdom says if you want to do research, you should get a job in a university. But for the most part the system seems to be set up so that you first spend a long time working for someone else, researching their ideas, after which you can lead your own group, but then most of your time will be spent on applying for grants and other administrative trivia rather than actually researching the interesting stuff. Also, in Finland at least, all professors need to also spend time doing teaching, so that's another time sink.

I suspect I would have more time to actually dedicate on research, and I could get doing it quicker, if I took a part-time job and did the research in my spare time. E.g. the recommended rates for a freelance journalist in Finland would allow me to spend a week each month doing work and three weeks doing research, of course assuming that I can pull off the freelance journalism part.

What (dis)advantages does this have compared to the traditional model?

Some advantages:

\n\n

Some disadvantages:

\n\n

EDIT: Note that while I certainly do appreciate comments specific to my situation, I posted this over at LW and not Discussion because I was hoping the discussion would also be useful for others who might be considering an academic path. So feel free to also provide commentary that's US-specific, say.

" } }, { "_id": "EYiAoxvnKjgJe8GbN", "title": "Akrasia as a collective action problem", "pageUrl": "https://www.lesswrong.com/posts/EYiAoxvnKjgJe8GbN/akrasia-as-a-collective-action-problem", "postedAt": "2010-12-07T15:44:56.626Z", "baseScore": 8, "voteCount": 8, "commentCount": 4, "url": null, "contents": { "documentId": "EYiAoxvnKjgJe8GbN", "html": "

Related to: Self-empathy as a source of \"willpower\" and some comments.

\r\n

It has been mentioned before that akrasia might be modeled as the result of inner conflict. I think this analogy is great, and would like to propose a refinement.1

\r\n

Here's the mental conflict theory of akrasia, as I understand it:

\r\n

Though Maud appears to external observers (such as us) to be a single self, she is in fact a kind of team. Maud's mind is composed of sub-agents, each of whom would like to pursue its own interests. Maybe when Maud goes to bed, she sets the alarm for 6 AM. When it buzzes the next morning, she hits the snooze...again and again and again. To explain this odd behavior, we invoke the idea that BedtimeMaud is not the same person as MorningMaud. In particular, BedtimeMaud is a person who likes to get up early, while MorningMaud is that bully BedtimeMaud's poor victim. The point is that the various decisionmakers that inhabit her brain are not always after the same ball. The subagents that compose the mind might not be mutually antagonistic; they're just not very empathetic to each other.

\r\n

I like to think of this situation as a collective action problem akin to those we find in political science and economics. What we have is a misalignment of costs and benefits. If Maud rises at 6, then MorningMaud bears the whole cost of this decision, while a different Maud, or set of Mauds, enjoys the benefits. The costs are concentrated in MorningMaud's lap, while the benefits are dispersed among many Mauds throughout the day. Thus Maud sleeps in.

\r\n

Put differently, MorningMaud's behavior produces a negative externality: she enjoys the whole benefit of sleeping in, but the rest of the day's Mauds bear the costs.

\r\n

So, how can we get MorningMaud to lie in the bed she makes, as it were, and get a more efficient outcome?

\r\n

We can:

\r\n\r\n

The analogy's not perfect. (I can't see a way to fit in Pigovian taxes.)

\r\n

But is it a fruitful analogy? Is it more than just renaming the key terms of the subagent theory--could one use welfare economics to improve one's own dynamic consistency?

\r\n

1I got this idea partly from a slip, possibly Freudian (I think I said \"externality\" instead of \"akrasia\"), and partly from this page on the Egonomics website.

" } }, { "_id": "MhCKSJL6Du4s6uYkY", "title": "The End of Men", "pageUrl": "https://www.lesswrong.com/posts/MhCKSJL6Du4s6uYkY/the-end-of-men", "postedAt": "2010-12-07T13:40:03.121Z", "baseScore": 9, "voteCount": 5, "commentCount": 7, "url": null, "contents": { "documentId": "MhCKSJL6Du4s6uYkY", "html": "

Interesting article arguing that society is strongly slanted towards women doing better, and also noting that parents who are given the choice of choosing their children's sex are preferring women, sometimes in a 2-to-1 ratio:

\n

http://www.theatlantic.com/magazine/archive/2010/07/the-end-of-men/8135/

" } }, { "_id": "AYskpLbLTcD5Xnuuq", "title": "An uneducated thought on the irreality of reality", "pageUrl": "https://www.lesswrong.com/posts/AYskpLbLTcD5Xnuuq/an-uneducated-thought-on-the-irreality-of-reality", "postedAt": "2010-12-07T13:31:51.919Z", "baseScore": -15, "voteCount": 15, "commentCount": 5, "url": null, "contents": { "documentId": "AYskpLbLTcD5Xnuuq", "html": "

I realize that this is not standard fare on lesswrong. I have as yet found no community other than this one from which I'd expect to receive valuable feedback and discussion regarding the pedantic bologna that my mind spews. I am learning to communicate more effectively, but I find the encyclopedias of experiential relevance behind words themselves, as they pertain to a unique mind, to be immense roadblocks towards constructing my own understanding of life or communicating it to others.

\n

Anybody else have this issue?

\n

I see where this connects to philosophies which stand debunked in light of the \"current state\" of philosophy. I do not know exactly how to communicate that to myself in a way which makes fluid sense.

\n

Please keep in mind that I've only just begun digging through this site! I'm posting this in hopes of having my ambiguity ripped to shreds.

\n

 

\n
\n

I preach and fade

\n

into a monologue so strong

\n

that it usurps the ungraspable nature of reality.

\n

 

\n

Day by day, I preach and fade,

\n

just like you.

\n

 

\n

Sit in the shade of my monstrous thoughts.

\n

When I feel like shouting out,

\n

\"Come sit, let's share\",

\n

I do so with nil hesitation.

\n

And yet I know that to me love is love,

\n

but to you love is love,

\n

and our flavors will likely never coincide.

\n

 

\n

I can sift through feelings, use big words to relate,

\n

day after day, preach and then fade,

\n

talking my way through facade, facade.

\n

 

\n

My play's a bit different than yours.

\n

My music, so alien, though from the same score.

\n

 

\n


This is why man needs God;

\n

Man is God. Only God IS man, can feel the same sense of language, memory, sensory input and emotional language, response...

\n

 

\n

God is comfort with one's own insanity.

\n

All it takes is two to make any song sane.

\n
\n

 

\n

The specifics are unnecessary, the concepts of love and God chosen as experiences tied to words immeasurably unique to each individual. I see the connections coming to light even as I type this up as to why this post is a futile relic of thought processes of my past, but there is still a lot of this that sticks with me that would be interesting to have dissected, as cognition does not always present itself clearly!

\n

How many people do you meet out there who actually ask people \"are you okay?\" ...and actually mean it? They seem to generally be people caught up in an inner monologue in which they are caring because that's the narrative they choose to follow. Not to say that there is anything inherently wrong with caring because of an ethical concern; it doesn't always have to be about intimacy.

\n

But the care is expressed in a paradigm where once it's said their focus is averted because it's satisfied the need to 'care'. Like... they care, but they are speaking to themselves.

\n

The whole idea of the internal monologue that I see people maintaining... hmm. Not sure where to start. So language is an organizational and communicative tool, right? It allows communication of abstract thought from person to person. So language only holds levity if it is common - if everybody uses the same collection of letters/symbols/sounds/patterns/vocal inflections/facial expression and whatnot. But even though we all use the same words, something like \"red\" is going to mean something completely different to me than it will to you, because our realities, the spaces within which our minds construct the 'external', must never be the same. These words evoke an emotional response within us and generally they can relate with similitude. 

\n

But language is something that exists really only externally, in a sense. Thought is much quicker than language but we still speak things in our heads because we desire communicability and we're used to a world where things happen at a language-paced level.

\n

Do you ever get the feeling that you tried to tell someone something, and they understood what you said, but they didn't understand what you intended them to understand - BUT - it still worked? Because the language was ambiguous enough, they heard something you were giving as 'advice' and found a way in their mind's eye for those words to fit in as 'advice' and accepted them, because they were primed to. Even though they never really heard what you felt. They heard the same concrete words that you spoke, and understood them perfectly, but for the fact that language on an individual basis is abstract and the fact that they will process it through the lens of their own internal 'language' means they'll never really see what you see, feel what you feel kinda thing.

\n

And as far as YOUR inner monologue goes, there is only one. There is only one collection of emotional associations and memories and neural networking in EXISTENCE which finds itself speaking the 'internal' language that you speak.

\n

Hence,

\n

to me love is love

\n

but to you love is love

\n

and it's pretty damned unusual for two people to connect on that in a pure and unmarring level. It takes a long, long time for peoples' internal monologues to sync up. Even now, I am explaining this to you partly because I want to share and partly because I want to explain it to myself and not because you want to hear it.

\n

 

\n

Am I tripping myself up so hard from the starting gate simply by choosing to point fingers at outlier concepts such as love or a psychological deific construct?

\n


" } }, { "_id": "z8xEsyHwHpsgrhJe8", "title": "Writing and enthusiasm", "pageUrl": "https://www.lesswrong.com/posts/z8xEsyHwHpsgrhJe8/writing-and-enthusiasm", "postedAt": "2010-12-07T12:01:46.091Z", "baseScore": 15, "voteCount": 11, "commentCount": 11, "url": null, "contents": { "documentId": "z8xEsyHwHpsgrhJe8", "html": "

Steve Pavlina explains that the method he'd been taught in school-- a highly structured writing process of organizing what to say before it's written-- tends to produce dull writing, but starting from enthusiasm results in articles which are a pleasure to write and are apt to be more fun and memorable to read.

\n
Inspirational energy has a half life of about 24 hours. If I act on an idea immediately (or at least within the first few hours), I feel optimally motivated, and I can surf that wave of energy all the way to clicking “Publish.” If I sit on an idea for one day, I feel only half as inspired by it, and I have to paddle a lot more to get it done. If I sit on it for 2 days, the inspiration level has dropped by 75%, and for all practical purposes, the idea is dead. If I try to write it at that point, it feels like pulling teeth. It’s much better for me to let it go and wait for a fresh wave. There will always be another wave, so there’s no need to chase the ones I missed.
\n

This looks like PJ Eby territory-- it's about the importance of pleasure as a motivator.

" } }, { "_id": "v2uPKpWTvZjWyQfZf", "title": "Wonder who is going to be there...", "pageUrl": "https://www.lesswrong.com/posts/v2uPKpWTvZjWyQfZf/wonder-who-is-going-to-be-there", "postedAt": "2010-12-07T03:14:24.123Z", "baseScore": 5, "voteCount": 5, "commentCount": 5, "url": null, "contents": { "documentId": "v2uPKpWTvZjWyQfZf", "html": "

http://www.mercurynews.com/business/ci_16792615?nclick_check=1

\n

(Betting on Michael Vassar)

" } }, { "_id": "CFk65m88nTNjmRNxk", "title": "It's not about truth", "pageUrl": "https://www.lesswrong.com/posts/CFk65m88nTNjmRNxk/it-s-not-about-truth", "postedAt": "2010-12-06T19:43:11.014Z", "baseScore": 2, "voteCount": 4, "commentCount": 15, "url": null, "contents": { "documentId": "CFk65m88nTNjmRNxk", "html": "

A rather sane article about the actual purpose of religions and why they persist among the rational & intelligent.

\n

http://blog.evangelicalrealism.com/2010/12/04/getting-religion/

\n

A bit of a Hansonian bent as well. \"Genuine objective truths can be complicated and uncomfortable, but what’s worse, they confer no particular social advantage \"

" } }, { "_id": "x8KEDhBc3cniyjZn6", "title": "New Haven Meetup, Saturday, Dec. 11", "pageUrl": "https://www.lesswrong.com/posts/x8KEDhBc3cniyjZn6/new-haven-meetup-saturday-dec-11", "postedAt": "2010-12-06T15:15:45.524Z", "baseScore": 12, "voteCount": 6, "commentCount": 32, "url": null, "contents": { "documentId": "x8KEDhBc3cniyjZn6", "html": "

On Saturday, New Haven residents, people who live in other nearby assorted Havens, and anybody else who would like to trek out here are welcomed to a Less Wrong meetup in thomblake's and my home at noon(ish).  The address is thus:

\n

173 Russo Ave Unit 406
East Haven, CT 06513

\n

I will make food; if you plan to come and wish to submit food-related requests, say so.

" } }, { "_id": "ZyxMdPTJszi5a2jwq", "title": "Less Wrong: Open Thread, December 2010", "pageUrl": "https://www.lesswrong.com/posts/ZyxMdPTJszi5a2jwq/less-wrong-open-thread-december-2010", "postedAt": "2010-12-06T14:29:34.755Z", "baseScore": 13, "voteCount": 11, "commentCount": 85, "url": null, "contents": { "documentId": "ZyxMdPTJszi5a2jwq", "html": "

Even with the discussion section, there are ideas or questions too short or inchoate to be worth a post.

\n

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

" } }, { "_id": "56zW6CNEkGgo2YrFy", "title": "Berkeley LW Meet-up Friday December 10", "pageUrl": "https://www.lesswrong.com/posts/56zW6CNEkGgo2YrFy/berkeley-lw-meet-up-friday-december-10", "postedAt": "2010-12-06T06:11:54.173Z", "baseScore": 9, "voteCount": 6, "commentCount": 11, "url": null, "contents": { "documentId": "56zW6CNEkGgo2YrFy", "html": "

Last month, about 20 people showed up to the Berkeley LW meet-up.  To continue the tradition of Berkeley Meetups, we will be meeting on Friday, December 10 (originally Saturday, December 11) at 7 PM at the Starbucks at 2128 Oxford Street.  Last time, we chatted at the Starbucks for about 45 minutes, then went to get dinner and ate and talked under a T-Rex skeleton - we'll probably do something similar, so don't feel like you have to eat before you come.  Hope to see you there!

\n

 

\n

ETA:  Some people are unavailable on Saturday; does anyone have a strong preference for Saturday?  If no one does, I'll move it to Friday.  Due to two votes for Friday and none for Saturday, I have changed the date to Friday.

" } }, { "_id": "jbBdRijbRfWhk92Mu", "title": "It's Not About Efficiency", "pageUrl": "https://www.lesswrong.com/posts/jbBdRijbRfWhk92Mu/it-s-not-about-efficiency", "postedAt": "2010-12-06T04:12:41.198Z", "baseScore": 3, "voteCount": 6, "commentCount": 11, "url": null, "contents": { "documentId": "jbBdRijbRfWhk92Mu", "html": "

When I explain the importance of donating only to the right charity, I've been told that it's not about efficiency. This is completely correct.

\n

Imagine a paperclip company. They care only about making paperclips. They will do anything within their power to improve efficiency, but they don't care about efficiency. They care about making paperclips. Efficiency is just a measure of how well they're accomplishing their goal. You don't try to be efficient because you want to be efficient. You try to be efficient because you want something.

\n

When I try to help people, the same principle applies. I couldn't care less about a charity's efficiency. I care about how much they help people. Efficiency is just a measure of how well they accomplish that goal.

" } }, { "_id": "gasTtZvvbXRZgJQey", "title": "How seriously should I take the supposed problems with Cox's theorem?", "pageUrl": "https://www.lesswrong.com/posts/gasTtZvvbXRZgJQey/how-seriously-should-i-take-the-supposed-problems-with-cox-s", "postedAt": "2010-12-06T02:04:43.359Z", "baseScore": 14, "voteCount": 10, "commentCount": 10, "url": null, "contents": { "documentId": "gasTtZvvbXRZgJQey", "html": "

I had been under the impression that Cox's theorem said something pretty strong about the consistent ways to represent uncertainty, relying on very plausible assumptions. However, I recently found this 1999 paper, which claims that Cox's result actually requires some stronger assumptions. I am curious what people here think of this. Has there been subsequent work which relaxes the stronger assumptions?

" } }, { "_id": "3MAAm5iByjQP9LZKf", "title": "The Truth about Scotsmen, or: Dissolving Fallacies", "pageUrl": "https://www.lesswrong.com/posts/3MAAm5iByjQP9LZKf/the-truth-about-scotsmen-or-dissolving-fallacies", "postedAt": "2010-12-05T21:57:07.976Z", "baseScore": 37, "voteCount": 34, "commentCount": 36, "url": null, "contents": { "documentId": "3MAAm5iByjQP9LZKf", "html": "
\n
\n
\n

One unfortunate feature I’ve noticed in arguments between logically well-trained people and the untrained is a tendency for members of the former group to point out logical errors as if they were counterarguments. This is almost totally ineffective either in changing the mind of your opponent or in convincing neutral observers. There are two main reasons for this failure.

\n

1. Pointing out fallacies is not the same thing as urging someone to reconsider their viewpoint.

\n

Fallacies are problematic because they’re errors in the line of reasoning that one uses to arrive at or support a conclusion. In the same way that taking the wrong route to the movie theater is bad because you won’t get there, committing a fallacy is bad because you’ll be led to the wrong conclusions.

\n

But all that isn’t inherent in the word ‘fallacy’: the vast majority of human beings don’t understand the statement “that’s a fallacy” as “you seem to have been misled by this particular logical error – you should reevaluate your thought process and see if you arrive at the same conclusions without it.” Rather, most people will regard it as an enemy attack, with the result that they will either reject the existence of the fallacy or simply ignore it. If, by some chance, they do acknowledge the error, they’ll usually interpret it as “your argument for that conclusion is wrong – you should argue for the same conclusion in a different way.”

\n

If you’re actually trying to convince someone (as opposed to, say, arguing to appease the goddess Eris) by showing them that the chain of logic they base their current belief on is unsound, you have to say so explicitly. Otherwise saying “fallacy” is about as effective as just telling them that they’re wrong.

\n

2. Pointing out the obvious logical errors that fallacies characterize often obscures the deeper errors that generate the fallacies.

\n

Take as an example the No True Scotsman fallacy. In the canonical example, the Scotsman, having seen a report of a crime, claims that no Scotsman would do such a thing. When presented with evidence of just such a Scottish criminal, he qualifies his claim, saying that no true Scotsman would do such a thing.

\n

The obvious response to such a statement is “Ah, but you’re committing the No True Scotsman fallacy! By excluding any Scotsman who would do such a thing from your reference class, you’re making your statement tautologically true!”

\n

While this is a valid argument, it’s not an effective one. The Scotsman, rather than changing his beliefs about the inherent goodness of all Scots, is likely to just look at you sulkily. That’s because all you’ve done is deprive him of evidence for his belief, not make him disbelieve it – wiped out one of his squadrons, so to speak, rather than making him switch sides in the war. If you were actually trying to make him change his mind, you’d have to have a better model of how it works.

\n

No one is legitimately entranced by a fallacy like No True Scotsman – it’s used strictly as rationalization, not as a faulty but appealing reason to create a belief. Therefore the reason for his belief must lie deeper. In this case, you can find it by looking at what counts for him as evidence. To the Scotsman, the crime committed by the Englishman is an indictment of the English national character, not just the action of an individual. Likewise, a similar crime committed by a Scotsman would be evidence against the goodness of the Scottish character. Since he already believes deeply in the goodness of the Scottish character, he has only two choices: acknowledge that he was wrong about a deeply felt belief, or decide that the criminal was not really Scottish.

\n

The error at the deepest level is that the Scotsman possesses an unreasoned belief in the superiority of Scottish character, but it would be impractical at best to argue that point. The intermediate and more important error is that he views national character as monolithic – if Scottish character is better than English character, it must be better across all individuals – and therefore counts the actions of one individual as non-negligible evidence against the goodness of Scotland. If you’re trying to convince him that yes, that criminal really can be a Scotsman, the best way to do so would not be to tell him that he’s committing a fallacy, but to argue directly against the underlying rationale connecting the individual’s crime and his nationalism. If national character is determined by, say, the ratio of good men to bad men in each nation, then bad men can exist in both England and Scotland without impinging on Scotland’s superiority – and suddenly there’s no reason for the fallacy at all. You’ve disproved his belief and changed his mind, without the word ‘fallacy’ once passing your lips.

\n
\n
\n
" } }, { "_id": "oNiAySMEJ5Ep84nMa", "title": "The Truth about Scotsmen, or: Dissolving Fallacies", "pageUrl": "https://www.lesswrong.com/posts/oNiAySMEJ5Ep84nMa/the-truth-about-scotsmen-or-dissolving-fallacies-0", "postedAt": "2010-12-05T21:49:06.384Z", "baseScore": 2, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "oNiAySMEJ5Ep84nMa", "html": "


One unfortunate feature I’ve noticed in arguments between logically well-trained people and the untrained is a tendency for members of the former group to point out logical errors as if they were counterarguments. This is almost totally ineffective either in changing the mind of your opponent or in convincing neutral observers. There are two main reasons for this failure.

\n

 

\n

1. Pointing out fallacies is not the same thing as urging someone to reconsider their viewpoint.

\n

 

\n

Fallacies are problematic because they’re errors in the line of reasoning that one uses to arrive at or support a conclusion – and in the same way that taking the wrong route to the movie theater is bad because you won’t get there, committing a fallacy is bad because you’ll be led to the wrong conclusions.

\n

 

\n

But all that isn’t inherent in the word ‘fallacy’: the vast majority of human beings don’t understand the statement “that’s a fallacy” as “you seem to have been misled by this particular logical error – you should reevaluate your thought process and see if you arrive at the same conclusions without it.” Rather, most people will regard it as an enemy attack, with the result that they will either reject the existence of the fallacy or simply ignore it. If, by some chance, they do acknowledge the error, they’ll usually interpret it as “your argument for that conclusion is wrong – you should argue for the same conclusion in a different way.”

\n

 

\n

If you’re actually trying to convince someone (as opposed to, say, arguing to appease the goddess Eris) by showing them that the chain of logic they base their current belief on is unsound, you have to say so explicitly. Otherwise saying “fallacy” is about as effective as just telling them that they’re wrong.

\n

 

\n

2. Pointing out the obvious logical errors that fallacies characterize often obscures the deeper errors that generate the fallacies.

\n

 

\n

Take as an example the No True Scotsman fallacy. In the canonical example, the Scotsman, having seen a report of a crime, claims that no Scotsman would do such a thing. When presented with evidence of just such a Scottish criminal, he qualifies his claim, saying that no true Scotsman would do such a thing.

\n

 

\n

The obvious response to such a statement is “Ah, but you’re committing the No True Scotsman fallacy! By excluding any Scotsman who would do such a thing from your reference class, you’re making your statement tautologically true!”

\n

 

\n

While this is a valid argument, it’s not an effective one. The Scotsman, rather than changing his beliefs about the inherent goodness of all Scots, is likely to just look at you sulkily. That’s because all you’ve done is deprive him of evidence for his belief, not make him disbelieve it – wiped out one of his squadrons, so to speak, rather than making him switch sides in the war. If you were actually trying to make him change his mind, you’d have to have a better model of how it works.

\n

 

\n

No one is legitimately entranced by a fallacy like No True Scotsman – it’s used strictly as rationalization, not as a faulty but appealing reason to create a belief. Therefore the reason for his belief must lie deeper. In this case, you can find it by looking at what counts for him as evidence. To the Scotsman, the crime committed by the Englishman is an indictment of the English national character, not just the action of an individual. Likewise, a similar crime committed by a Scotsman would be evidence against the goodness of the Scottish character. Since he already believes deeply in the goodness of the Scottish character, he has only two choices: acknowledge that he was wrong about a deeply-felt belief, or decide that the criminal was not really Scottish.

\n

 

\n

The error at the deepest level is that the Scotsman possesses an unreasoned belief in the superiority of Scottish character, but it would be impractical at best to argue that point. The intermediate and more important error is that he views national character as monolithic – if Scottish character is better than English character, it must be better across all individuals – and therefore counts the actions of one individual as non-negligible evidence against the goodness of Scotland. If you’re trying to convince him that yes, that criminal really can be a Scotsman, the best way to do so would not be to tell him that he’s committing a fallacy, but to argue directly against the underlying rationale connecting the individual’s crime and his nationalism. If national character is determined by, say, the ratio of good men to bad men in each nation, then bad men can exist in both England and Scotland without impinging on Scotland’s superiority – and suddenly there’s no reason for the fallacy at all. You’ve disproved his belief and changed his mind, without the word ‘fallacy’ once passing your lips.

\n

" } }, { "_id": "8bCg5GcJMArkZ2dhh", "title": "LINK: BBC News on living forever, cryonics", "pageUrl": "https://www.lesswrong.com/posts/8bCg5GcJMArkZ2dhh/link-bbc-news-on-living-forever-cryonics", "postedAt": "2010-12-05T18:08:26.618Z", "baseScore": 3, "voteCount": 9, "commentCount": 18, "url": null, "contents": { "documentId": "8bCg5GcJMArkZ2dhh", "html": "

BBC News recently ran an interesting piece on living forever. They discuss some of the standard arguments against cryonics and transhumanism; overall, the article is pretty critical of both. I suspect most LessWrong readers won't find it convincing, but it's still worth a quick read.

" } }, { "_id": "4HkMtfb5gzTyzgzSm", "title": "Some ideas on communicating risks to the general public", "pageUrl": "https://www.lesswrong.com/posts/4HkMtfb5gzTyzgzSm/some-ideas-on-communicating-risks-to-the-general-public", "postedAt": "2010-12-05T10:44:19.060Z", "baseScore": 4, "voteCount": 3, "commentCount": 1, "url": null, "contents": { "documentId": "4HkMtfb5gzTyzgzSm", "html": "

http://www.decisionsciencenews.com/2010/12/03/some-ideas-on-communicating-risks-to-the-general-public/

\n

Got this from Reddit.

" } }, { "_id": "f6b8ESmTYZPHgFWWg", "title": "Help request: What is the Kolmogorov complexity of computable approximations to AIXI?", "pageUrl": "https://www.lesswrong.com/posts/f6b8ESmTYZPHgFWWg/help-request-what-is-the-kolmogorov-complexity-of-computable", "postedAt": "2010-12-05T10:23:55.626Z", "baseScore": 9, "voteCount": 6, "commentCount": 9, "url": null, "contents": { "documentId": "f6b8ESmTYZPHgFWWg", "html": "

Does anyone happen to know the Kolmogorov complexity (in some suitable, standard UTM -- or, failing that, in lines of Python or something) of computable approximations of AIXI?

\n

I'm writing a paper on how simple or complicated intelligence is, and what implications that has for AI forecasting.  In that context: adopt Shane Legg's measure of intelligence (i.e., let \"intelligence\" measure a system's average goal-achievement across the different \"universe\" programs that might be causing it to win or not win reward at each time step, weighted according to the universe program's simplicity).
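For concreteness, this is roughly the Legg-Hutter universal intelligence measure. The following is a sketch from memory, and the notation may differ slightly from Legg's papers:

```latex
% Legg-Hutter universal intelligence (sketch): the simplicity-weighted
% average of expected cumulative reward across computable environments.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
% E:             the class of computable, reward-summable environments
% K(\mu):        the Kolmogorov complexity of environment \mu
% V^{\pi}_{\mu}: the expected total reward agent \pi earns in \mu
```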

\n

Let k(x, y) denote the length of the shortest program that attains an intelligence of at least x when allowed an amount y of computation (i.e., of steps it gets to run on our canonical UTM).  Then, granting certain caveats, AIXI and approximations thereto tell us that the limit of k(x, y) as y approaches infinity is pretty small for any computably attainable value of x.  (Right?)
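Spelled out, one way to formalize the quantity being asked about (my paraphrase, not notation taken from the post):

```latex
% k(x, y): length of the shortest program achieving intelligence >= x
% when limited to y computation steps on the canonical UTM.
k(x, y) \;=\; \min \bigl\{\, \ell(p) \;:\; \Upsilon_{y}(p) \ge x \,\bigr\},
\qquad \lim_{y \to \infty} k(x, y) \ \text{is claimed to be small,}
% where \ell(p) is the length of program p and \Upsilon_{y} is the
% intelligence measure evaluated with a budget of y steps per action.
```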

\n

What I'd like is to stick an actual number, or at least an upper bound, on \"pretty small\".

\n

If someone could help me out, I'd be much obliged.

" } }, { "_id": "dc9ehbHh6YA63ZyeS", "title": "Genetically Engineered Intelligence", "pageUrl": "https://www.lesswrong.com/posts/dc9ehbHh6YA63ZyeS/genetically-engineered-intelligence", "postedAt": "2010-12-05T10:19:29.237Z", "baseScore": 28, "voteCount": 19, "commentCount": 27, "url": null, "contents": { "documentId": "dc9ehbHh6YA63ZyeS", "html": "

There are a lot of unknowns about the future of intelligence: artificial intelligence, uploading, augmentation, and so on. Most of these technologies are likely a ways off, or at least far enough away to confound predictions. Genetic engineering, however, presents a very near term and well understood possibility for developing greater intelligence.

\n

A recent news story published in the South China Morning Post and discussed on Steve Hsu's blog highlights China's push to understand the genetic underpinnings of intelligence. China is planning to sequence the full genomes of 1,000 of its brightest kids, in the hopes of locating key genes responsible for higher intelligence. Behind the current project is BGI, which is aiming to be (or already is) the largest DNA sequencing center in the world.

\n

Suppose that intelligence has a large genetic component (reasonable, considering estimates of heritability). Suppose that the current study unveils those components (if not this study, then likely another study soon, perhaps with millions of genomes). Then, with some advances in genetic engineering, China could quickly raise a huge population of incredibly intelligent people.

\n

Such an endeavor could never be carried out on a large, public scale in the West, but it seems China has fewer qualms.

\n

The timescales here are on the order of 20 years, which are relevant compared to most estimates for AI and WBE. Moreover, genetically engineering human intelligence seems to be on a much more predictable path than other intelligence technologies. For both these reasons I think understanding, discussing, and keeping an eye on this issue is important.

\n

What are the ramifications for

\n\n

Of course, there are a host of other interesting questions relating to societal impact, both near and long term. Feel free to discuss these as well.

" } }, { "_id": "5YuQAj63CkcDLewbW", "title": "The Trolley Problem: Dodging moral questions", "pageUrl": "https://www.lesswrong.com/posts/5YuQAj63CkcDLewbW/the-trolley-problem-dodging-moral-questions", "postedAt": "2010-12-05T04:58:34.599Z", "baseScore": 17, "voteCount": 35, "commentCount": 131, "url": null, "contents": { "documentId": "5YuQAj63CkcDLewbW", "html": "

The trolley problem is one of the more famous thought experiments in moral philosophy, and studies by psychologists and anthropologists suggest that the response distributions to its major permutations remain roughly the same throughout all human cultures. Most people will permit pulling the lever to redirect the trolley so that it will kill one person rather than five, but will balk at pushing one fat person in front of the trolley to save the five if that is the only available option of stopping it.

\n

However, in informal settings, where the dilemma is posed by a peer rather than a teacher or researcher, it has been my observation that there is another major category which accounts for a significant proportion of respondents' answers. Rather than choosing to flip the switch, push the fat man, or remain passive, many people will reject the question outright. They will attack the improbability of the premise, attempt to invent third options, appeal to their emotional state in the provided scenario (\"I would be too panicked to do anything\"), or some combination of the above, in order to opt out of answering the question on its own terms.

\n

\n

However, in most cases, these excuses are not their true rejection. Those who tried to find third options or appeal to their emotional state will continue to reject the dilemma even when it is posed in its most inconvenient possible forms, where they have the time to collect themselves and make a reasoned choice, but no possibility of implementing alternative solutions.

\n

Those who appealed to the unlikelihood of the scenario might appear to have the stronger objection; after all, the trolley dilemma is extremely improbable, and more inconvenient permutations of the problem might appear even less probable. However, trolley-like dilemmas are actually quite common in real life, when you take the scenario not as a case where only two options are available, but as a metaphor for any situation where all the available choices have negative repercussions, and attempting to optimize the outcome demands increased complicity in the dilemma. This way of framing the problem also tends not to cause people to reverse their rejections.

\n

Ultimately, when provided with optimally inconvenient and general forms of the dilemma, most of those who rejected the question will continue to make excuses to avoid answering the question on its own terms. They will insist that there must be superior alternatives, that external circumstances will absolve them from having to make a choice, or simply that they have no responsibility to address an artificial moral dilemma.

\n

When the respondents feel that they can possibly opt out of answering the question, the implications of the trolley problem become even more unnerving than the results from past studies suggest. It appears that we live in a world where not only will most people refuse complicity in a disaster in order to save more lives, but where many people reject outright the idea that they should have any considered set of moral standards for making hard choices at all. They have placed themselves in a reality too accommodating of their preferences to force them to have a system for dealing with situations with no ideal outcomes.

" } }, { "_id": "sGjYMpx5rKLaig7FX", "title": "A neural correlate of certainty", "pageUrl": "https://www.lesswrong.com/posts/sGjYMpx5rKLaig7FX/a-neural-correlate-of-certainty", "postedAt": "2010-12-05T01:44:02.521Z", "baseScore": 5, "voteCount": 5, "commentCount": 2, "url": null, "contents": { "documentId": "sGjYMpx5rKLaig7FX", "html": "

Adam Kepecs' Eppendorf essay, hosted at Science's website (but not printed in the magazine), is about some neurons in the orbitofrontal cortex of rats that appear to represent uncertainty in an odor-recognition task by firing more often, at a rate roughly linearly proportional to the error rate.

\n

The involvement of OFC in decision-making isn't new, but the graphs are nice and quantitative.

" } }, { "_id": "FaZH325yEhCzQtrEf", "title": "A Catalog of Confusions", "pageUrl": "https://www.lesswrong.com/posts/FaZH325yEhCzQtrEf/a-catalog-of-confusions", "postedAt": "2010-12-05T01:17:49.434Z", "baseScore": 9, "voteCount": 12, "commentCount": 13, "url": null, "contents": { "documentId": "FaZH325yEhCzQtrEf", "html": "

tl;dr - can we categorise confusing events by skills required to deal with them?  What are those skills?

\r\n

I am sometimes haunted by things I read online.  It's probably a couple of years since I first read Your Strength as a Rationalist, but over the past month or two I've been reminded of it a surprising number of times in different circumstances.  It's led me to wonder whether the idea of being \"confused by fiction\" can be helpfully broken down into categories, with each of those categories having certain skills that can be worked on to help notice them.

\r\n

I'm going to describe two such categories I think I've identified, and invite your criticism, or suggestions of other similar categories.  In both cases, I believe there to be some instinct, acquired skill, or some combination thereof that draws it to my attention.  I could just be making this up, though, so criticism is also welcome on this front.

\r\n

Absence of Salient Information

\r\n

I believe tech support is like a magic trick in reverse.  With a magic trick, the magician hides a crucial fact which he then distracts you from.  He provides a false narrative of what's going on while confusing the sequence of events, culminating in the impossible, and relies on your own fear of appearing foolish to make you falsely report the conditions of the trick to both yourself and other spectators.

\r\n

In tech support, you are often presented with an impossible sequence of events; the customer's fear of appearing foolish makes them falsely report the conditions of the fault to both themselves and you, concealing a crucial fact which the rest of the narrative distracts you from.  You then have to figure out how it was done.

\r\n

I recently asked a girl from my dance class out for a drink, and proceeded to receive the most shocking litany of mixed signals I could ever imagine receiving, drink not forthcoming.  I boiled it down to three possibilities: she was interested but incredibly shy; she was uninterested but just really friendly; or she had a completely different set of standards when it came to signaling romantic interest or lack thereof.  I remember thinking how none of these possibilities made sense in context, and was reminded quite specifically of the idea of being more confused by fiction than by reality.  It was driving my problem-solving faculties to distraction, and I have never been so relieved to discover a woman I was interested in already had a boyfriend.

\r\n

The phenomenon wasn't unlike a film with a massive plot-integral spoiler.  There's this nagging feeling that the whole thing doesn't quite make sense, until the spoiler is revealed, at which point you suddenly see the whole of the preceding sequence of events in a new revelatory light.  I've often noticed with such films that when people know there's a big spoiler, they're more likely to spot it early on because they start groping around for plausible plot twists.  I'm not sure if this is the best way to go about fishing for information you know is absent, though.

\r\n

Having One's Head Messed With

\r\n

I've read a few books on hypnosis, NLP and persuasion techniques, and I'm at least as well-versed on cognitive biases as most LW readers, but a couple of weeks ago someone fucked with my head.

\r\n

I was in East London (never a good start), fairly late at night with food in my hand.  Beggars always seem to approach me when I have food in my hand.  I don't think this is coincidence.  This particular beggar, a woman in her twenties, spun a very quick story which I can't even begin to remember all the details of.  Something about desperately needing bus fare to escape her abusive boyfriend and having just been released from hospital.  Just thinking about it, two weeks later, makes me confused and disorientated.

\r\n

In retrospect, the story made no sense whatsoever, she was far too aggressive to be a downtrodden out-patient abuse victim, and far too good at making me feel like the only way I could possibly get out of this horrible distressing situation was to give her my small change, which I did.  Afterwards I felt violated.

\r\n

The experience itself has probably armed me against it happening again to a certain degree, but I'm now worried about what I'm not armed against.  There is a feeling of having your head messed with, but I only ever seem to experience it retrospectively.  Can I train myself to spot it as it's happening?  Is it related to the feeling I get when I recognise I'm being manipulated by advertising?  Is there a how-to body of knowledge that can be assembled to defend against manipulation in general?

\r\n

This probably could have been more coherent, but it was surprisingly cathartic to write.

" } }, { "_id": "dYokqRNkN3vXskvQy", "title": "\"Behind the Power Curve\" by Simon Funk", "pageUrl": "https://www.lesswrong.com/posts/dYokqRNkN3vXskvQy/behind-the-power-curve-by-simon-funk", "postedAt": "2010-12-04T17:35:53.137Z", "baseScore": 11, "voteCount": 8, "commentCount": 7, "url": null, "contents": { "documentId": "dYokqRNkN3vXskvQy", "html": "

Brandyn Webb (Simon Funk) of After Life fame is an impressively rational person. One of his essays, Behind the Power Curve, resonated with me so much that I need to share it with you. Quotes don't do it justice, just read the whole thing.

" } }, { "_id": "Ti2SW9GoZLCq36zEz", "title": "Applied cognitive science: learning from a faux pas", "pageUrl": "https://www.lesswrong.com/posts/Ti2SW9GoZLCq36zEz/applied-cognitive-science-learning-from-a-faux-pas", "postedAt": "2010-12-04T11:15:48.594Z", "baseScore": 42, "voteCount": 36, "commentCount": 6, "url": null, "contents": { "documentId": "Ti2SW9GoZLCq36zEz", "html": "

Cross-posted from my LiveJournal:

Yesterday evening, I pasted to two IRC channels an excerpt of what someone had written. In the context of the original text, that excerpt had seemed to me like harmless if somewhat raunchy humor. What I didn't realize at the time was that by removing the context, the person writing it came off looking like a jerk, and by laughing at it I came off looking as something of a jerk as well.

Two people, both of whom I have known for many years now and whose opinions I value, approached me by private message and pointed out that that may not have been the smartest thing to do. My initial reaction was defensive, but I soon realized that they were right and thanked them for pointing it out to me. Putting on a positive growth mindset, I decided to treat this event as a positive one, as in the future I'd know better.

Later that evening, as I lay in bed waiting to fall asleep, the episode replayed itself in my mind. I learnt long ago that trying to push such replays out of my mind would just make them take longer and make them feel worse. So I settled back to just observing the replay and waiting for it to go away. As I waited, I started thinking about what kind of lower-level neural process this feeling might be a sign of.

Artificial neural networks use what is called a backpropagation algorithm to learn from mistakes. First the network is provided some input, then it computes some value, and then the obtained value is compared to the expected value. The difference between the obtained and expected value is the error, which is then propagated back from the end of the network to the input layer. As the error signal works its way through the network, neural weights are adjusted in such a fashion as to produce a different output the next time.
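To make the mechanics concrete, here is a minimal sketch of backpropagation on a toy two-layer network; the architecture, learning rate, and data are arbitrary choices for illustration, not a claim about how the brain works:

```python
# Minimal backpropagation sketch: forward pass, compare output to target,
# propagate the error backwards, and nudge the weights.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))        # input -> hidden weights
W2 = rng.normal(size=(4, 1))        # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([[0.5, -0.2, 0.1]])    # one input example
target = np.array([[1.0]])          # the expected value

for step in range(1000):
    # forward pass: compute the obtained value
    h = sigmoid(x @ W1)
    y = sigmoid(h @ W2)

    # the error: obtained minus expected
    err = y - target

    # backward pass: send the error signal back through the network
    grad_y = err * y * (1 - y)
    grad_W2 = h.T @ grad_y
    grad_h = (grad_y @ W2.T) * h * (1 - h)
    grad_W1 = x.T @ grad_h

    # adjust weights so the next output is closer to the expected value
    W1 -= 0.5 * grad_W1
    W2 -= 0.5 * grad_W2

print(float(y[0, 0]))               # approaches the target of 1.0
```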

Backprop is known to be biologically unrealistic, but there are more realistic algorithms that work in a roughly similar manner. The human brain seems to be using something called temporal difference learning. As Roko described it: \"Your brain propagates the psychological pain 'back to the earliest reliable stimulus for the punishment'. If you fail or are punished sufficiently many times in some problem area, and acting in that area is always preceded by [doing something], your brain will propagate the psychological pain right back to the moment you first begin to [do that something]\".
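Here is an equally minimal sketch of temporal difference learning on a toy chain of states, loosely named after the episode above; the state names, reward, and parameters are all invented for illustration:

```python
# TD(0) sketch: a punishment at the end of a chain of events gets propagated
# back, over repeated episodes, toward the earliest state that reliably
# precedes it.
states = ["decide_to_paste", "paste_quote", "laugh", "get_rebuked"]
V = {s: 0.0 for s in states}        # learned value of each state
alpha, gamma = 0.5, 0.9             # learning rate, discount factor

for episode in range(50):
    for i, s in enumerate(states):
        reward = -1.0 if s == "get_rebuked" else 0.0
        v_next = V[states[i + 1]] if i + 1 < len(states) else 0.0
        td_error = reward + gamma * v_next - V[s]   # the surprise signal
        V[s] += alpha * td_error

print(V)   # even "decide_to_paste" ends up with a clearly negative value
```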

As I lay there in bed, I couldn't help the feeling that something similar to those two algorithms was going on. The main thing that kept repeating itself was not the actual action of pasting the quote to the channel or laughing about it, but the admonishments from my friends. Being independently rebuked for something by two people I considered important: a powerful error signal that had to be taken into account. Their reactions filling my mind: an attempt to re-set the network to the state it was in soon after the event. The uncomfortable feeling of thinking about that: negative affect flooding the network as it was in that state, acting as a signal to re-adjust the neural weights that had caused that kind of an outcome.

After those feelings had passed, I thought about the episode again. Now I felt silly for committing that faux pas, for now it felt obvious that the quote would come across badly. For a moment I wondered if I had just been unusually tired, or distracted, or otherwise out of my normal mode of thought to not have seen that. But then it occurred to me - the judgment of this being obviously a bad idea was produced by the network that had just been rewired in response to social feedback. The pain of the feedback had been propagated back to the action that caused it, so just thinking about doing that (or thinking about having done that) made me feel stupid. I have no way of knowing whether the \"don't do that, idiot\" judgment is something that would actually have been produced had I been paying more attention, or if it's a genuinely new judgment that wouldn't have been produced by the old network.

I tend to be somewhat amused by the people who go about claiming that computers can never be truly intelligent, because a computer doesn't genuinely understand the information it's processing. I think they're vastly overestimating how smart we are, and that a lot of our thinking is just relatively crude pattern-matching, with various patterns (including behavioral ones) being labeled as good or bad after the fact, as we try out various things.

On the other hand, there would probably have been one way to avoid that incident. We do have the capacity for reflective thought, which allows us to simulate various events in our heads without needing to actually undergo them. Had I actually imagined the various ways in which people could interpret that quote, I would probably have relatively quickly reached the conclusion that yes, it might easily be taken as jerk-ish. Simply imagining that reaction might then have provided the decision-making network with a similar, albeit weaker, error signal and taught it not to do that.

However, there's the question of combinatorial explosions: any decision could potentially have countless consequences, and we can't simulate them all. (See the epistemological frame problem.) So in the end, knowing the answer to the question of \"which actions are such that we should pause to reflect upon their potential consequences\" is something we need to learn by trial and error as well.

So I guess the lesson here is that you shouldn't blame yourself too much if you've done something that feels obviously wrong in retrospect. That decision was made by an earlier version of you. Although it feels obvious now, that version of you might literally have had no way of knowing that it was making a mistake, as it hadn't been properly trained yet.

" } }, { "_id": "FCxHgPsDScx4C3H8n", "title": "Efficient Charity", "pageUrl": "https://www.lesswrong.com/posts/FCxHgPsDScx4C3H8n/efficient-charity", "postedAt": "2010-12-04T10:27:58.909Z", "baseScore": 42, "voteCount": 37, "commentCount": 185, "url": null, "contents": { "documentId": "FCxHgPsDScx4C3H8n", "html": "

I wrote this article in response to Roko's request for an article about efficient charity. As a disclosure of a possible conflict of interest I'll note that I have served as a volunteer for GiveWell. Last edited 12/06/10.

\n

Charitable giving is widely considered to be virtuous and admirable. If statistical behavior is any guide, most people regard charitable donations to be worthwhile expenditures. In 2001 a full 89% of American households donated money to charity and during 2009 Americans donated $303.75 billion to charity [1]. 

\n

A heart-breaking fact about modern human experience is that there's little connection between such generosity and positive social impact. The reason humans evolved charitable tendencies is that such tendencies served as a marker to nearby humans that a given individual is a dependable ally. Those who expend their resources to help others are more likely than others to care about people in general and are therefore more likely than others to care about their companions. But one can tell that people care based exclusively on their willingness to make sacrifices, independently of whether those sacrifices actually help anybody.

\n

Modern human society is very far removed from our ancestral environment. Technological and social innovations have made it possible for us to influence people on the other side of the globe and potentially to have a profound impact on the long-term survival of the human race. The current population of New York is ten times the human population of the entire world in our ancestral environment. In view of these radical changes it should be no surprise that the impact of a typical charitable donation falls staggeringly short of the impact of a donation optimized to help people as much as possible.

\n

While this may not be a problem for donors who are unconcerned about their donations helping people, it's a huge problem for donors who want their donations to help people as much as possible and it's a huge problem for the people who lose out on assistance because of inefficiency in the philanthropic world. Picking out charities that have high positive impact per dollar is a task no less difficult than picking good financial investments and one that requires heavy use of critical and quantitative reasoning. Donors who wish for their donations to help people as much as possible should engage in such reasoning and/or rely on the recommendations of trusted parties who have done so.

\n

The Overhead Ratio: Not a Good Metric

\n

A commonly used statistic for charity evaluation which has a thin veneer of analytical rigor is a charity's “overhead ratio”: that is, the relative amounts of money spent on programs vs. administration. According to a press release issued in December 2009 by Philanthropy Action, Charity Navigator, GiveWell, Great Nonprofits, Guidestar and Philanthropedia:

\n
\n

For years, people have turned to the overhead ratio—a measure of how much of each donation is spent on “programs” versus administrative and fundraising costs—to guide their choice of charity. But overhead ratios and executive salaries are useless for evaluating a nonprofit’s impact.

\n

While the idea of sending money “straight to the beneficiaries” is tempting, nonprofit experts agree that judging charities by how much of their money goes to “programs” is counterproductive. “Achieving a low overhead ratio drives many charities to behaviors that make them less effective and means more, not less, wasted dollars,” says Paul Brest, President of the Hewlett Foundation, and co-author of Money Well Spent.

\n
\n

The common focus on a low overhead ratio has produced perverse incentives, pressuring some charities to skimp on administrative costs that would improve the efficacy of their programs. More importantly, the cost-effectiveness of different charities' activities varies so dramatically as to totally eclipse any usefulness that the overhead ratio might have in a world of charities performing homogeneous activities.

\n

A Comparison of Cost-Effectiveness

\n

A well-known and well-funded charity is the Make-A-Wish Foundation, “a 501(c)(3) non-profit organization in the United States that grants wishes to children (2.5 years to 18 years old) who have life-threatening medical conditions.” According to the website's Managing Our Funds page:

\n
\n

The Make-A-Wish Foundation® is proud of the way it manages and safeguards the generous contributions it receives from individual donors, corporations and other organizations.

\n

Seventy-six percent of the revenue the Make-A-Wish Foundation receives is allotted to program services. This percentage well exceeds the standard upheld by organizations that monitor the work of charities.

\n
\n

And indeed, the percentage allotted to program services is sufficiently high in juxtaposition with other financial statistics so that Charity Navigator grants the Make-A-Wish Foundation its highest rating. But how cost-effective are the charity's programs?

\n

The Make-A-Wish Foundation 2009 Annual Report states that “A record-breaking 13,471 children had their wishes come true in FY09.” The annual report gives a break down of wishes by type: for example, 40.3% of the wishes were trips to the Walt Disney World Resort, 11.7% of them were shopping sprees, 7.1% of them were celebrity meetings and 5.5% of them were cruises.

\n

The annual report claims that in 2009 the charity's “total program and support services” amounted to $203,865,550. Thus, the Make-A-Wish Foundation implicitly reports spending an average of $15,134 for each wish that it grants.

\n

A charity that helps children in the United States far more efficiently is Nurse-Family Partnership, which provides an approximately three-year program of weekly nurse visits to inexperienced expectant and new mothers at a cost of $11,200, yielding improved prenatal health, fewer childhood injuries and improved school readiness. A deeper appreciation of how little good per dollar the Make-A-Wish Foundation does relative to what is possible requires a digression.

\n
\n

In November 2010 the United Nations released its 2010 Human Development Report ranking the world's countries according to a \"Human Development Index\" based on data concerning life expectancy, education and per-capita GDP. One of the lowest ranked countries on this list is Mozambique which has an infant mortality rate around 10%. This contrasts dramatically with the infant mortality rate in the United States which is less than 1%. Every tenth pregnancy in Mozambique is followed by the grief of losing a child within several years. A child in sub-Saharan Africa who survives past the age of five is more likely than not to live a full life extending past the age of 60 [2].

\n

Why is the infant mortality rate in Mozambique so high? A major cause of death is infectious disease. Around a third of infants in Mozambique do not have the opportunity to receive the standard vaccinations for polio, measles, tetanus, tuberculosis, diphtheria and other fatal diseases because of the poverty of their surroundings, and some of them will die as a result.

\n

An organization called VillageReach is working to improve Mozambique's health logistics. Between 2002 and 2008 VillageReach ran a pilot program in the Mozambique province of Cabo Delgado designed to improve the province's health logistics. This program was dramatically successful. One tangible indicator of impact is that VillageReach increased the percentage of Cabo Delgado infants who received the third and final dose of the diphtheria-tetanus-pertussis vaccine from 68.9% to 95.4%, yielding a final percentage higher than the average in any sub-Saharan African country. When one looks at the available evidence in juxtaposition with the cost of the program and runs through cost-effectiveness calculations, one finds that, under conservative assumptions, VillageReach saved an infant's life for every $545 donated.

\n

Now VillageReach is in the process of expanding its operations to more provinces of Mozambique, hoping to expand its pilot project into seven more of Mozambique's eleven provinces over the next six years. VillageReach requires an additional ~ $1.5 million [3] to implement its proposal as fast as possible. In light of the fact that VillageReach has so far received only about 20-25% of this funding, it's plausible that additional donations will have a cost-effectiveness similar to that of those used for the pilot project.

\n
\n

Thus we see that while a $15,134 donation to the Make-A-Wish Foundation can be expected to grant an average of one wish to an ill child (a good thing, all else being equal), a donation of the same size to VillageReach can be expected to save 27 infants' lives! With this framing it becomes clear that the amount of good per dollar that the Make-A-Wish Foundation is doing is negligible relative to that of VillageReach. No parent would prefer to send a child to Disney World over preventing even a single one of his or her children from contracting a life-threatening illness!
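As a quick back-of-the-envelope check of the comparison above (all figures taken from the post itself):

```python
# Cost per wish and the implied number of infant lives per wish-sized donation.
total_program_spending = 203_865_550   # Make-A-Wish program and support services, FY09
wishes_granted = 13_471
cost_per_wish = total_program_spending / wishes_granted
print(round(cost_per_wish))            # ~15,134 dollars per wish

cost_per_life_saved = 545              # conservative VillageReach estimate cited above
print(int(cost_per_wish / cost_per_life_saved))   # ~27 infant lives per wish-sized donation
```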

\n

Nor is this phenomenon of badly suboptimal giving specific to Make-A-Wish Foundation donors. Even if one restricts one's attention to the cause of health in the developing world [4], many donors donate to charities pursuing health interventions in the developing world that do a thousand times less good per dollar than the most cost-effective health interventions.

\n

A hypothetical charity that ran programs like VillageReach's but embezzled 95% of its budget, with correspondingly reduced cost-effectiveness, would still be doing far more good per dollar than the Make-A-Wish Foundation or the least effective developing-world charities do. This example makes it clear how profoundly useless the overhead ratio is for assessing the relative quality of a charity.

\n

Holding Charities Accountable

\n

Donors should be aware that charities frequently cite misleading cost-effectiveness figures in their promotional materials. And just because a charity claims to be performing activities of very high value doesn't mean that the charity is performing the activity as reported. William Easterly recently commented on Peter Singer's child in a pond metaphor [5] saying:

\n
\n

In our situation trying to help a poor person, what we're actually doing is we're not physically able to rush in ourselves and save the child. In fact, we are not even able to observe whether the child is saved or not. What we are doing is we're sending money off to someone else on the other side of the world...and we're counting on them to save the child. And so I guess to put the metaphor another way, if your person who was saving a child was in a situation where they were physically unable to help and they knew they had to delegate it to someone else, then it would also be morally reprehensible if they did not find a person who was reliable who they were sure was going to save the child. And it would be morally reprehensible if they did not in fact check up to make sure that the child was saved. That would be just as morally objectionable as your situation of yourself directly failing to rush to the aid of the child.

\n
\n

Of course, for a donor with limited time and energy it is frequently not possible to personally check that a charity is performing its stated function. As such, it is useful to have independent charity evaluators that evaluate charities for impact. The only such organization that I'm familiar with is GiveWell, which has reviewed 409 charities working in the areas of equality of opportunity in the United States, health in the developing world, and economic empowerment in the developing world and has highlighted those charities with the strongest evidence of positive impact. VillageReach is currently GiveWell's top-ranked charity in the cause of health in the developing world.

\n

There are many causes that GiveWell has not yet covered and there may be charities working in them that absorb donations substantially more cost-effectively than VillageReach does. GiveWell has prepared a Do-it-Yourself Charity Evaluation Guide as an aid to donors who are interested in personally investigating charities working in causes that GiveWell has not yet covered.

\n

Volunteering, Nonprofit Work and Cost-Effectiveness

\n

So far I've restricted my discussion to charitable giving. Giving is not the only philanthropic activity that people engage in;  some people volunteer their time to benefit others and some people choose to forgo income to work at a lower paying nonprofit job that they deem to have greater social value than the job that they would otherwise take. There are many instances in which such philanthropic activities are the best way to help people, but one should consider such activities against the backdrop of there being huge variability in the cost-effectiveness of philanthropic activities. GiveWell's recommended charities have set a concrete minimal standard for optimizing cost-effectiveness of philanthropic activities.

\n

To determine whether or not volunteering or taking a nonprofit job is a good way of helping people, one should compare the additional positive impact that one would have by switching jobs with the positive impact that one would have by donating all of one's forgone income to the most efficient charity that one can find. For those with low earning potential and skills that are useful and rare in the philanthropic world, the most efficient way of helping people will typically be volunteering and/or nonprofit work. For those who have high earning potential and lack skills that are especially rare in the philanthropic world, the most efficient way of helping people will typically be taking a high-paying job and donating one's income to an efficient charity. [6]
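As a toy sketch of that comparison, every number below is a hypothetical placeholder invented for illustration, apart from the $545 cost-per-life figure cited earlier in the post:

```python
# Compare: direct impact from taking the nonprofit job vs. donating the
# forgone income to the most efficient charity one can find.
salary_for_profit = 90_000            # hypothetical industry salary
salary_nonprofit = 45_000             # hypothetical nonprofit salary
forgone_income = salary_for_profit - salary_nonprofit

cost_per_life_best_charity = 545      # figure cited earlier in the post
lives_saved_by_donating = forgone_income / cost_per_life_best_charity

lives_saved_by_direct_work = 20       # your own estimate of your marginal impact

print(lives_saved_by_donating)        # ~82.6
print(lives_saved_by_direct_work > lives_saved_by_donating)   # False -> earn and donate
```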

\n

Of course, many people who volunteer or forgo income to work at a non-profit do so not only with a view toward helping people but also because they want to experience the visceral sense of helping people directly or of working directly on a cause that they feel passionate about. This latter factor can be a good reason to engage in such activities. Humans are not automatons capable of persistently adopting the most efficient course possible. Fulfilling our own very substantial personal needs and desires is important to maintaining good health and energy. At the same time, in view of the great variability of cost-effectiveness of various philanthropic activities, if one doesn't devote some resources toward helping people as efficiently as possible, one will probably accomplish very little of one's potential capacity to make the world a better place. [7]

\n

Conclusion

\n

People often have good intentions but frequently fail to direct them to create the substantial positive impact that they could if they thought carefully about how to do as much good as possible. I have already mentioned GiveWell as a useful resource for donors who are interested in accomplishing the most good for their dollar. Such donors may also find it useful to visit Giving What We Can, which is a society whose members pledge to donate a portion of their income “to whichever organizations can most effectively use it to fight poverty in developing countries” and whose members “share advice on the most effective ways to give.” By thinking critically and making use of available resources, one can reasonably expect to have a much greater positive social impact than one otherwise would.

\n
\n

Footnotes

\n

[1] Figures taken from a survey by Independent Sector and The Annual Report on Philanthropy for the Year 2009.

\n

[2] According to calculations by GiveWell using data from the World Health Organization.

\n

[3] See the section of GiveWell's review of VillageReach titled Room For More Funds?

\n

[4] For an indication of the relative cost-effectiveness of health interventions in the U.S., refer to a 1995 academic journal article titled Five-Hundred Life-Saving Interventions and Their Cost-Effectiveness.

\n

[5] In a December 2009 BloggingHeads Diavlog with Peter Singer. William Easterly is an economist at NYU and the author of the Aid Watch blog.

\n

[6] Alan Dawrst's essay titled Why Activists Should Consider Making Lots of Money gives more on this topic.

\n

[7] Eliezer Yudkowsky's Purchase Fuzzies and Utilons Separately gives a nice discussion of this theme.

" } }, { "_id": "6yW7hGvr5k33gxgkB", "title": "How to Live on 24 Hours a Day", "pageUrl": "https://www.lesswrong.com/posts/6yW7hGvr5k33gxgkB/how-to-live-on-24-hours-a-day", "postedAt": "2010-12-04T09:12:04.436Z", "baseScore": 17, "voteCount": 27, "commentCount": 31, "url": null, "contents": { "documentId": "6yW7hGvr5k33gxgkB", "html": "

I can think of no better way to spend my karma than on encouraging people to read this early-twentieth-century self-help book. It's free and online in full.

\n

The guidelines on what makes an appropriate front-page article be damned, or, if necessary, enforced by official censorship.

\n

Thanks to User:sfb for the quote that led me here, although the decision to post is entirely my own.

\n

http://www.gutenberg.org/files/2274/2274-h/2274-h.htm

" } }, { "_id": "6BHkQoijdYymu2HBr", "title": "Sequences in Alternative Formats", "pageUrl": "https://www.lesswrong.com/posts/6BHkQoijdYymu2HBr/sequences-in-alternative-formats", "postedAt": "2010-12-04T01:40:40.102Z", "baseScore": 22, "voteCount": 18, "commentCount": 12, "url": null, "contents": { "documentId": "6BHkQoijdYymu2HBr", "html": "

In the past weeks, there have been a few projects aiming to convert the sequences into more eye-friendly formats than the computer screen.  There's no collection of these projects yet, so I'm posting here and in the wiki to get the word out.

\n\n

Some of the sequences can be rather lengthy -- the one on quantum physics is around 118,000 words.  I know that there are some people that don't mind reading off LCD screens, but hopefully these tools will prove useful to the rest.

" } }, { "_id": "JqsyrjZzaHYaXWuLT", "title": "Cambridge Sunday meetup: New time and location", "pageUrl": "https://www.lesswrong.com/posts/JqsyrjZzaHYaXWuLT/cambridge-sunday-meetup-new-time-and-location", "postedAt": "2010-12-04T00:11:27.912Z", "baseScore": 6, "voteCount": 6, "commentCount": 11, "url": null, "contents": { "documentId": "JqsyrjZzaHYaXWuLT", "html": "

The Sunday Cambridge meetups, which happen on the first and third Sunday of each month, are now at 2:00 pm at Cosi, near Kendall Square. As discussed at the last meetup, we've changed the location because the old location was becoming too crowded, and the time to allow the meetup to go on longer before people have to leave.

" } }, { "_id": "ZXgSMwyHHce837ecn", "title": "Are stereotypes ever irrational?", "pageUrl": "https://www.lesswrong.com/posts/ZXgSMwyHHce837ecn/are-stereotypes-ever-irrational", "postedAt": "2010-12-03T23:45:26.814Z", "baseScore": 28, "voteCount": 23, "commentCount": 35, "url": null, "contents": { "documentId": "ZXgSMwyHHce837ecn", "html": "

Harvard's undergraduate admission office will tell you \"There is no typical Harvard student.\"  This platitude ticks me off.  Of course there's such a thing as a typical Harvard student!  Harvard students aren't magically exceptions to the laws of probability.  Just as a robin is a more typical bird than an ostrich, some Harvard students are especially typical.  Let's say (I'm not actually looking at data here) that most Harvard students are rich, most have high SAT scores, most are white, and most are snobbish.  Note: I am not a Harvard student. :)

\n

Now, a very typical Harvard student would be rich AND white AND smart AND snobbish.  But observe that a given student has a smaller probability of being all of these than of just, say, being rich.  If you add enough majority characteristics, eventually the \"typical\" student will become very rare.  Even if there's a 99% probability of having any one of these characteristics, 0.99^n → 0 as n goes to infinity.  Some Harvard students are typical; but extremely typical Harvard students are rare. If you encountered a random Harvard student, and expected her to have all the majority characteristics of Harvard students, you could very well be wrong.
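
To make that decay concrete, a quick back-of-the-envelope computation (using the author's illustrative 99% figure and treating the traits as independent, which real traits are not):

```python
# Chance that a student has ALL n majority traits when each trait
# individually has probability 0.99 (illustrative number, assumes independence).
p = 0.99
for n in (1, 5, 10, 50, 100, 500):
    print(f"{n:>3} traits: P(all of them) = {p**n:.3f}")
# 10 traits -> 0.904, 100 traits -> 0.366, 500 traits -> 0.007
```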

\n

So far, so obvious.  But who would make that mistake?  

\n

You, that's who.  The conjunction fallacy is the tendency of humans to think specific conditions are more probable than general conditions.  People are more likely to believe that a smart, single, politically active woman is a feminist bank teller than just a bank teller.  Policy experts (in the 1980's) were more likely to think that the USSR would invade Poland and that the US would break off relations with the USSR, than either one of these events alone.  Of course, this is mistaken: the probability of A and B is always less than or equal to the probability of A alone.  The reason we make this mistake is the representativeness heuristic: a specific, compelling story that resembles available data is judged as more probable than general (but more likely) data.  Judging by this evidence, I'd hypothesize that most people will overestimate the probability of a random Harvard student matching the profile of a \"very typical\" Harvard student.  The conjunction fallacy says something even stronger: the more information you add to the profile of the \"very typical\" Harvard student, the more specific the portrait you paint (add a popped collar, for instance) the more likely people will think it is.  Even though in fact the \"typical student\" is getting less and less likely as you add more information.

\n

Now, let's talk about stereotypes.  Stereotypes -- at least the kind that offend some people -- have their apologists.  Some people say, \"They're offensive because they're true.  Of course some traits are more common in some populations than others. That's just having accurate priors.\"  This is worth taking seriously.  The mere act of making assumptions based on statistics is not irrational.  In fact, that's the only way we can go about our daily lives; we make estimates based on what we think is likely.  There's nothing wrong with stating \"Most birds can fly,\" even if some can't.  And exhortations not to stereotype people are often blatantly irrational.  \"There is no typical Harvard student\" -- well, yes, there is.  \"You can't make assumptions about people\" -- well, yes, you can, and you'd be pathologically helpless if you never made any.  You can assume people don't like rotten meat, for instance.  If stereotyping is just making inferences, then stereotyping is not just morally acceptable, it's absolutely necessary.  And, though it may be true that some people are offended by some accurate priors and rational inferences, it is not generally good for people to be thus offended; any more than it is good for people to want to be wrong about anything.

\n

But there is a kind of \"stereotyping\" that really is a logical fallacy.  The picture of the \"very typical\" Harvard student is a stereotype.  If people overestimate the probability of that representative-looking picture, then they are stereotyping Harvard students in an irrational way.  An irrational stereotype is a \"typical\" or \"representative\" picture that isn't actually all that common.  Because the human mind likes stories, because we like completing patterns, we'll think it's more likely that someone matches a pattern or story completely than that she matches only part of the story.  

\n

There's a line in the movie Annie Hall that illustrates this.  (This is within five minutes of Alvy meeting Allison.)

\n
\n

Alvy Singer: You, you, you're like New York, Jewish, left-wing, liberal, intellectual, Central Park West, Brandeis University, the socialist summer camps and the, the father with the Ben Shahn drawings, right, and the really, y'know, strike-oriented kind of, red diaper, stop me before I make a complete imbecile of myself. 

\n
\n
\n

Allison: No, that was wonderful. I love being reduced to a cultural stereotype.

\n
\n

If Alvy had only stopped with \"New York, Jewish, left-wing,\" he'd probably be right.  But he kept going.  He had to complete the pattern.  By the time he's got to the Ben Shahn drawings, it's just getting fanciful.  If you build up too detailed a story, you'll find it irresistible to believe, but it's getting less and less likely all the time. 

\n

Stereotypes can  be irrational.  Not every inference or assumption about people is irrational, of course, but our tendency to find specific stories more believable than broader qualities is irrational.  Our tendency to think that most people resemble the \"most typical\" members of a class is irrational.  Mistaken stereotypes are what happen when people are more attracted to complete stories than to actual probability distributions.

" } }, { "_id": "FDNoTdWH8oaojztN3", "title": "Starting point for calculating inferential distance?", "pageUrl": "https://www.lesswrong.com/posts/FDNoTdWH8oaojztN3/starting-point-for-calculating-inferential-distance", "postedAt": "2010-12-03T20:20:03.484Z", "baseScore": 22, "voteCount": 18, "commentCount": 9, "url": null, "contents": { "documentId": "FDNoTdWH8oaojztN3", "html": "

One of the shiniest ideas I picked up from LW is inferential distance.  I say \"shiny\" because the term, so far as I'm aware, has no clear mathematical or pragmatic definition, no substantive use in peer reviewed science, but was novel to me and appeared to make a lot of stuff about the world suddenly make sense.  In my head it is marked as \"super neat... but possibly a convenient falsehood\".  I ran across something yesterday that struck me as beautifully succinct and helpful towards resolving the epistemic status of the concept of \"inferential distance\".

\n

While surfing the language log archives I ran across a mailbox response to correspondence about comparative communication efficiency.  The author, Mark Liberman, was interested in calculating the amount of information in text and was surprised to find that something about the texts, or the subjects, or his calculation led to estimating different amounts of information in different translations of the same text (with English requiring 20%-40% more bits than Chinese to say the things in his example text).

\n

Mr. Liberman was helped by Bob Moore who, among other things, noted:

\n
\n

...why should we expect two languages to use the same number of bits to convey the same thoughts? I believe that when we speak or write we always simplify the complexity of what is actually in our heads, and different languages might implicitly do this more than others. Applying Shannon's source/channel model, suppose that when we have a thought T that we want to convey with an utterance U, we act as if our hearer has a prior P(T) over the possible thoughts we may be conveying and estimates a probability P(U|T) that we will have used U to express T. As you well know, according to Shannon, the hearer should find the T that maximizes P(U|T)*P(T) in order to decide what we meant. But the amount of effort that the speaker puts into U will determine the probability that the hearer will get the message T correctly. If the speaker thinks the prior on T is high, then he may choose a shorter U that has a less peaked probability of only coming from T. If I say to my wife \"I got it,\" I can get by with this short cryptic message, if I think there is a very high probability that she will know what \"it\" is, but I am taking a risk.

My conjecture is that the acceptable trade-off between linguistic effort and risk of being misunderstood is socially determined over time by each language community and embodied in the language itself. If the probability of being misunderstood varies smoothly with linguistic effort (i.e., bits) without any sharp discontinuities, then there is no reason to suppose that different linguistic communities would end up at exactly the same place on this curve.
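
A minimal sketch of the decision rule Moore describes, with an entirely made-up prior P(T) and likelihood P(U|T) for the short utterance:

```python
# Hearer's decoding rule from the quote: choose the thought T that
# maximizes P(U|T) * P(T).  All numbers here are invented for illustration.
prior = {'fetched the mail': 0.6, 'fixed the sink': 0.3, 'won the lottery': 0.1}
likelihood = {'fetched the mail': 0.7, 'fixed the sink': 0.5, 'won the lottery': 0.9}

def map_decode(likelihood, prior):
    return max(prior, key=lambda t: likelihood[t] * prior[t])

print(map_decode(likelihood, prior))  # 'fetched the mail' (0.7 * 0.6 is the largest product)
```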

\n
\n

Application to inferential distance is left as an exercise for the reader :-)

" } }, { "_id": "tmL9Km7eHmZ2tF4Rm", "title": "Longterm/Difficult to measure charities", "pageUrl": "https://www.lesswrong.com/posts/tmL9Km7eHmZ2tF4Rm/longterm-difficult-to-measure-charities", "postedAt": "2010-12-03T18:47:22.335Z", "baseScore": 5, "voteCount": 5, "commentCount": 7, "url": null, "contents": { "documentId": "tmL9Km7eHmZ2tF4Rm", "html": "

I'm not sure if this necessarily warrants a new discussion, or if there's an existing article/thread that addresses this topic.

\n

There's a lot of discussion recently about charity, and how to give effectively. I've been looking over givewell.org and it definitely is the single most important thing I've found on lesswrong. But one discouraging thing is that by focusing on easy to measure charities, there's not a lot of info on charities that are trying to accomplish long term less measurable goals. The best charity there that matches my priorities was an educational agency in India that put a lot of emphasis on self improvement.

\n

I *think* my ideal charity would be something similar to Heifer International, but which also focuses on reproductive health and/or women's rights. Feeding people fish for a day means you just need to feed them again tomorrow, and if they have a bunch of kids you haven't necessarily accomplished anything. From what I've read, in places where the standard of living improves and women get more equality, overpopulation becomes less of an issue. So it seems to me that addressing those issues together in particular regions would produce sustainable longterm benefit. But Givewell doesn't seem to have a lot of information on those types of charities.

" } }, { "_id": "RrHxgWpvxHT9w6RYy", "title": "Two publicity ideas", "pageUrl": "https://www.lesswrong.com/posts/RrHxgWpvxHT9w6RYy/two-publicity-ideas", "postedAt": "2010-12-03T15:41:47.163Z", "baseScore": 12, "voteCount": 10, "commentCount": 14, "url": null, "contents": { "documentId": "RrHxgWpvxHT9w6RYy", "html": "

Here's the easy idea-- how about a brochure about LW and SIAI? I was just at a couple of science fiction conventions, and it occurred to me that if there were a brochure, I could have printed it out and put it on the freebie table.

\n

The hard idea is Sesame Street for rationalism-- entertaining video that dramatizes rationality. This would require money and possibly talents which aren't currently in the community, but I think a good bit of rationality could be dramatized, and it would be a big win for raising the rationality waterline.

" } }, { "_id": "NWb8KNodKgJJP37Jc", "title": "One argument in favor of limited life spans", "pageUrl": "https://www.lesswrong.com/posts/NWb8KNodKgJJP37Jc/one-argument-in-favor-of-limited-life-spans", "postedAt": "2010-12-03T15:34:05.359Z", "baseScore": 7, "voteCount": 18, "commentCount": 23, "url": null, "contents": { "documentId": "NWb8KNodKgJJP37Jc", "html": "

It's the only absolutely reliable way of getting rid of bad leaders.

\n

This might not be a good enough reason to oppose longevity tech, but I don't think it's easily disposed of.

" } }, { "_id": "oRKY7Wd4EwL4wrPfw", "title": "Aieee! The stupid! it burns!", "pageUrl": "https://www.lesswrong.com/posts/oRKY7Wd4EwL4wrPfw/aieee-the-stupid-it-burns", "postedAt": "2010-12-03T14:53:47.932Z", "baseScore": 21, "voteCount": 18, "commentCount": 62, "url": null, "contents": { "documentId": "oRKY7Wd4EwL4wrPfw", "html": "

Last Wednesday (2010 Dec 01), BBC Radio 4 broadcast a studio discussion on the question: \"should we actively try to extend life itself?\" The programme can be listened to from the BBC here for one week from broadcast, and is also being repeated tomorrow (Saturday Dec 04) at 22:15 BST. (ETA: not BST, GMT.)

\n

All of the dreadful arguments for why death is good came out. For uninteresting reasons I missed a few minutes here and there, but in what I heard, not one of the speakers on any side of the question said anything like, \"This is a no-brainer! Death is evil. Disease is evil. The less of both we have, the better. There is nothing good about death, at all, and all the arguments to the contrary are moral imbecility.\"

\n

Instead, I heard people saying that work on life extension is disrespectful to the old, that to prolong life would be like prolonging an opera, which has a certain natural size and shape, that the old are wise, so if we make them physically young then old people won't be old, so they won't be wise. Whatever cockeyed argument you can construct by scattering into a Deeply Wise template the words \"old\", \"young\", \"wise\", \"decrepit\", \"healthy\", \"natural\", \"unnatural\", \"boredom\", \"inevitable\", \"denial\", I heard worse.

\n

If I can bear to listen again to the whole thing just to check I didn't miss anything important, I may write something on their discussion board.

" } }, { "_id": "Xg5KCY4FYrxEcCifa", "title": "How Greedy Bastards Have Saved More Lives Than Mother Theresa Ever Did", "pageUrl": "https://www.lesswrong.com/posts/Xg5KCY4FYrxEcCifa/how-greedy-bastards-have-saved-more-lives-than-mother", "postedAt": "2010-12-03T06:20:38.961Z", "baseScore": 21, "voteCount": 27, "commentCount": 24, "url": null, "contents": { "documentId": "Xg5KCY4FYrxEcCifa", "html": "

And how you can use the same techniques to save a stranger's life for under $600

\n
\n

It's a strange world we live in.

\n

When I first heard of Optimal Philanthropy, it was in a news article about Bill Gates's plan for retirement. He'd decided to donate tens of billions of dollars to charity, but had concluded that no existing charity was worth donating to.

\n

Gates felt they weren't run properly.

\n

You see, at the time most people thought that \"efficient charities\" were those that had little or no overhead. Everyone wanted as much money to go to the front lines as possible, with little or none for administration.

\n

Gates didn't care about any of that.

\n

No, what Gates wanted was measurable results... and if more administration would get better results, he was all for it.

\n

In business, it all comes down to return on investment. How much money did you use (to rent buildings, buy supplies, hire employees), and how much money did you earn in return?

\n

Gates felt that something similar was needed for charity.

\n

If the charity saved lives, Gates reasoned, then it should be judged by how much money it used to save that life. If a charity could save twice as many lives on the same budget by using more administrators, then by all means they should do that.

\n

As you may have heard, Bill Gates was appalled that he couldn't find a charity he could measure.

\n

Here he was, trying to selflessly give away over ten billion dollars to any charity that could prove it would have the highest impact.... and finding a bunch of nonsense answers about how that's not the way charity works... or how little overhead there was.

\n

And as you may have also heard, Mr. Gates turned that frustration into a revolution in the world of charity -- and inspired others to follow him. His foundation -- the Bill and Melinda Gates Foundation -- is now the biggest in the world, and makes a difference every day in the areas of world education, malaria, and sustainable energy.

\n

 

\n

But Enough About All That! This Isn't About Bill Gates, This Is About You

\n

Although the billionaires of the world have gotten their heads screwed on right about charity (and saving hundreds of millions more lives as a result), we non-billionaires didn't seem to get the memo.

\n

And that means, if you are the sort of person who donates, you're not doing nearly the amount of good you could.

\n

Here are 3 simple steps you can use right away that will at least double the impact your donations have.

\n

Pause a second to think about what that would mean.

\n

Why do you donate?

\n

How would it feel to know that those donations now do twice as much good in this world? To know that at least twice as many people were helped?

\n

Ready to hear the steps? Great!

\n

 

\n

Step 1: Make your reason for donating CONCRETE!

\n

This step requires being very honest with yourself. It means not donating to the Haiti relief fund just because it was tragic (or because Bill Clinton said you should), but instead thinking about what that donation to Haiti would accomplish.

\n

Something along the lines of: save lives and put good people back into homes. Whatever you hope your donation will accomplish.

\n

What we're doing is moving from causes and goals (global warming, world peace, freedom from dictators) to concrete outcomes (reducing or negating carbon emissions, preventing wars, saving soldiers' lives, educating people about the benefits of democracy).

\n

Once you've got a concrete outcome you'd like to see in the world, it's time to find out the best way to accomplish that goal.

\n

 

\n

Step 2: Use 3rd party charity evaluations that focus on outcomes, and donate where it will do the most good.

\n

Go to givewell.org and see if your current charity is listed, and what kinds of results they can get per donated dollar.

\n

Also, don't forget to look at similar outcomes your donation money can accomplish. It's not uncommon to find out that, for example, the cost of giving a blind child a seeing eye dog is three times more than the cost of preventing childhood blindness in the first place.

\n

Yes it might seem tragic to think of a little blind girl without a dog to guide her, but it's even worse to think that we'd give that girl a seeing eye dog at the expense of three other children going blind.

\n

If nothing else, visit givewell.org; it will change the way you think about donating for the rest of your life.

\n

 

\n

Step 3: Donate what you can, but don't donate time unless you earn less than $10 an hour.

\n

The strange truth of the matter is that, unless you're donating your time as a professional (Doctors Without Borders, pro-bono legal aid), it's often more cost-effective to simply work an extra hour and donate the money.

\n

If you make $25/hr, your cause can probably get 150 minutes of work for every hour of income you donate.
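
Where the 150-minute figure comes from, as a back-of-the-envelope sketch; it assumes, in line with the $10-an-hour threshold above, that the charity can buy equivalent labor for about $10 an hour:

```python
# Donating an hour's wages vs. volunteering that hour yourself.
# Assumes the charity can hire equivalent labor at `charity_wage` per hour.
def minutes_of_work_bought(your_hourly_wage, charity_wage=10.0):
    return your_hourly_wage / charity_wage * 60

print(minutes_of_work_bought(25))  # 150.0 -> donating the hour's pay wins
print(minutes_of_work_bought(9))   # 54.0  -> under 60, so volunteering the hour wins
```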

\n

 

\n
\n

Okay! If you do those three steps you will get more good from your donation money than 90% of all the donors out there.

\n

If you felt that this letter helped you, please consider forwarding it to your friends and family, or at least talking about these important issues with them.

\n

Together, we can make a difference.

\n

 

" } }, { "_id": "Jvi9LLcvZxm529496", "title": "Rationality Quotes: December 2010", "pageUrl": "https://www.lesswrong.com/posts/Jvi9LLcvZxm529496/rationality-quotes-december-2010", "postedAt": "2010-12-03T03:23:07.900Z", "baseScore": 10, "voteCount": 11, "commentCount": 342, "url": null, "contents": { "documentId": "Jvi9LLcvZxm529496", "html": "

Every month on the month, Less Wrong has a thread where we post Deep Wisdom from the Masters. I saw that nobody did this yet for December for some reason, so I figured I could do it myself.

\r\n

* Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)

\r\n

* \"Do not quote yourself.\" --Tiiba

\r\n

* Do not quote comments/posts on LW/OB. That's like shooting fish in a barrel. :)

\r\n

* No more than 5 quotes per person per monthly thread, please.

" } }, { "_id": "z6jqEoZ55WvfZeZZt", "title": "Definitions, characterizations, and hard-to-ground variables", "pageUrl": "https://www.lesswrong.com/posts/z6jqEoZ55WvfZeZZt/definitions-characterizations-and-hard-to-ground-variables", "postedAt": "2010-12-03T03:18:07.947Z", "baseScore": 8, "voteCount": 8, "commentCount": 8, "url": null, "contents": { "documentId": "z6jqEoZ55WvfZeZZt", "html": "

[I am hoping this post is not too repetitive, does not spend too much time rehashing basics... also: What should this be tagged with?]

\n

Systems are not always made to be understandable - especially if they were not designed in the first place, like the human brain. Thus, they can often contain variables that are hard to ground in an outside meaning (e.g. \"status\", \"gender\"...).  In this case, it may often be more appropriate to simply characterize how the variable behaves, rather than worry about attempting to see what it \"represents\" and \"define\" it thus.  Ultimately, the variable is grounded in the effects it has on the outside world via the rest of the system.  Meanwhile it may not represent anything more than \"a flag I needed to make this hack work\".

\n

I will refer to this as characterizing the object in question rather than defining it.  Rather than say what something \"is\", we simply specify how it behaves.  Strictly speaking, characterization is of course a form of definition[0] - indeed, strictly speaking, nearly all definitions are of this form - but I expect you will forgive me if for now I allow a fuzzy notion of \"characterization vs. definition\" scale.

\n

Let us consider a simple example where this is appropriate; consider the notion of \"flying\" in Magic: the Gathering.  In this game, a player may have his creatures attack another player, who can then block them with creatures of his own.  Some of these creatures have printed on them the text \"flying\", which is defined by the game rules to expand to a larger block of explanatory text. What does \"flying\" mean? It means \"this creature can't be blocked except by creatures with flying\"[1].  So the creature can only be blocked by creatures that can only be blocked by creatures that can only be blocked by... well, you see the problem.  This is not the real definition at all; if you took a card with \"flying\" and instead actually replaced it with the text \"this creature can't be blocked except by creatures with flying\", you'd get a weaker card.  (Such cards actually exist, too.)

\n

The real definition isn't what it nominally expands out to; really, it's just a flag.  Meanwhile it's characterized by an external rule that says that creatures with the flag can only be blocked by other creatures with the flag.  It may represent the ability to fly but that's just a helpful reminder.
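
A rough sketch of that point in code (my own illustration, not actual rules-engine code): the flag carries no content of its own, and its meaning comes entirely from the external rule that reads it.

```python
# 'Flying' is just a boolean flag; the blocking rule that reads the flag
# is what characterizes it (ignoring 'reach', as footnote [1] does).
class Creature:
    def __init__(self, name, flying=False):
        self.name = name
        self.flying = flying

def can_block(blocker, attacker):
    return blocker.flying or not attacker.flying

drake = Creature('Cloud Drake', flying=True)
bears = Creature('Grizzly Bears')
print(can_block(bears, drake))  # False: no flag, so it cannot block a flyer
print(can_block(drake, bears))  # True: the flag never prevents blocking ground creatures
```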

\n

Now a description of anything is of no use unless it actually bottoms out somewhere, so such self-referential \"definitions\" will not typically occur by themselves in nature[2].  Once we establish our primitive physical notions by characterization, our more complex ones we should typically be able to describe by definition.  And yet the fact remains that the notion of \"flying\" in Magic is meaningful, and does bottom out; it just didn't appear to at first because I didn't define it correctly.  Once we have a substrate that allows us to add variables (like whether or not a creature has flying) and use these to control the actions of the system, these variables automatically obtain meaning from how they control the system.  However, an attempt to define such a variable and simply state what it \"is\" may run into the problem of self-reference.

\n

Typically we expect that the variables in a program correspond to some specific outside concept, that they each \"represent\" something.  But how do you make this notion work when the program you're analyzing has 5 distinct states, each with very distinct but seemingly arbitrary behavior?  Then the central state variable represents... well, what state it's in, and more than that is hard to say.  A definition is the wrong notion to apply here.

\n

Now perhaps that sort of thing shouldn't occur in a well-written program, but if you're reading an entry in an obfuscated code contest, it'll be commonplace.  And the human brain is a system that wasn't written by an intelligent mind in the first place.  So it shouldn't be surprising that it is a mistake to attempt to define something like \"status\" by identifying with some outside phenomenon.  Obviously this is technically possible - a meaningful notion must be grounded - but it is better to describe it in a way that takes account of the fact that status is a variable that is instantiated in human brains.  As Vladimir_Nesov pointed out, status is not power, it is some sort of godshatter proxy for power.  So we have to be willing to say: \"Status is a variable kept track of in the human brain; it is read on the following occasions with the following effects; it is written to on the following occasions; it satisfies the following properties and invariants...\"

\n

(This is common in mathematics - what's the tensor product of M and N?  Well, it's the thing such that a homomorphism from it is the same as a bilinear map from M×N.  OK, but what is it?  The answer to that question is rarely relevant.)
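
For readers who want that characterization written out, the standard universal property (stated for modules over a commutative ring R) is:

```latex
\operatorname{Hom}_R(M \otimes_R N,\ P) \;\cong\; \{\, R\text{-bilinear maps } M \times N \to P \,\}
\qquad \text{naturally in the } R\text{-module } P.
```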

\n

This is speculative, but given how transsexual people seem to talk about it I suspect something similar is true of \"gender\".  People report knowing from a young age that they were the \"wrong\" gender, they naturally imitated more closely those of this gender (the same way anyone naturally imitates more closely those of their own gender)... what does it mean that this person feels like a \"female\" despite being male in sex? I suspect the answer is: Nothing, it just means that the \"gender\" flag in her head has been set to female!  A primitive \"gender\" flag exists, and has no intrinsic meaning except for how it influences our actions, such as by directing us to imitate more closely those who we perceive to have that flag in the same state as we do.  A male imitates other males the same way a flying creature blocks other flying creatures, because there's a computational substrate that allows us to turn these informal self-referential definitions into properly grounded characterizations.  (Though obviously the statement about males should just be taken as one example statement, not a complete attempt at a characterization!)  Note that this would mean that the answer to Eliezer's question \"If you know who you are apart from categorizations, why does it make so much difference whether it fits into a particular category?\" is, \"Well, if you've really pinned down everything downstream of it (which you probably haven't), it doesn't - but I've got this hanging node in my brain...\"

\n

 

\n
\n

[0]Usually; there is an exception. Whatever notions we take as primitive cannot be defined, and must be characterized instead. However these characterizations (\"axiomatizations\") are not definitions because there is nothing for them to be defined on top of.

\n

[1]Before anyone else points it out: Yes, I realize that as of Future Sight, it's \"this creature can't be blocked except by creatures with flying or reach\". I'm keeping things simple here.

\n

[2]Except, of course, at the level of the primitive laws of physics; at the primitive level, only characterizations can occur. What does it mean for a particle to be positively charged? It means it repels other positively charged particles and attracts negatively charged particles. You see where this is going.  For this reason, ultimately we have to ground things in what we can detect and predict, rather than the fundamental laws of physics - after all, we don't even know the latter yet!

" } }, { "_id": "hD7MQfR92zFyDHiv8", "title": "$100 for the best article on efficient charity -- Submit your articles", "pageUrl": "https://www.lesswrong.com/posts/hD7MQfR92zFyDHiv8/usd100-for-the-best-article-on-efficient-charity-submit-your", "postedAt": "2010-12-02T20:57:31.410Z", "baseScore": 8, "voteCount": 6, "commentCount": 12, "url": null, "contents": { "documentId": "hD7MQfR92zFyDHiv8", "html": "

Several people have written articles on efficient charity -- throwawayaccount_1 has an excellent article hidden away in a comment, as does waitingforgodel. Multifoliaterose promises to write an article \"at some point soon\" ..., and louie has actually submitted an article to the main LW page.

\r\n

What I'd like is for throwawayaccount_1, waitingforgodel and multifoliaterose to submit to the main LW articles page. People will read the articles, and hopefully vote more for better articles. Srticles not submitted to the main LW articles page are not eligible for the prize.

\r\n

Note that it is hard for me to judge which article(s) will actually have the best effect in terms of causing people to make better decisions, so at least some empiricism is desirable. Yes, it isn't perfect, but if anyone has a better suggestion, I am all ears.

" } }, { "_id": "vYrKWXuQuAYsNtBHC", "title": "Social Presuppositions", "pageUrl": "https://www.lesswrong.com/posts/vYrKWXuQuAYsNtBHC/social-presuppositions", "postedAt": "2010-12-02T13:25:39.742Z", "baseScore": 16, "voteCount": 13, "commentCount": 49, "url": null, "contents": { "documentId": "vYrKWXuQuAYsNtBHC", "html": "

During discussion in my previous post, when we touched the subject of human statistical majorities, I had a side-thought. Taking the Less Wrong audience as an example, the statistics say that any given participant is strongly likely to be white, male, atheist, and, well, just going by general human statistics, probably heterosexual.

\n

But in my actual interaction, I've taken as a rule not to make any assumptions about the other person. Does it mean, I thought, that I reset my prior probabilities, and consciously choose to discard information? Not relying on implicit assumptions seems the socially right thing to do, I thought; but is it rational?

\n

When I discussed it on IRC, this quote by sh struck me as insightful:

\n
\n

I.e. making the guess incorrectly probably causes far more friction than deliberately not making a correct guess you could make.

\n
\n

I came up with the following payoff matrix:

                                            Bob has trait X      Bob doesn't have trait X
                                            (p = 0.95)           (p = 0.05)
Alice: acts as if Bob has trait X                +1                      -100
Alice: acts without assumptions about Bob         0                         0
\n

In this case, the second option is strictly preferable. In other words, I don't discard the information, but the repercussions to our social interaction in case of an incorrect guess outweigh the benefit from guessing correctly. And it also matters whether either Alice or Bob is an Asker or a Guesser.
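
The arithmetic behind \"strictly preferable\", using the table's illustrative numbers, and the break-even probability they imply:

```python
# Alice's expected payoffs with the table's numbers.
p = 0.95                                  # P(Bob has trait X)
act_on_guess = p * 1 + (1 - p) * (-100)   # about -4.05
no_assumptions = 0.0
print(act_on_guess, no_assumptions)       # guessing loses in expectation

# Guessing only wins when p * 1 > (1 - p) * 100, i.e. p > 100/101 (about 0.990).
print(100 / 101)
```

With these payoffs the guess only pays off above roughly 99% certainty, which is why the size of the negative payoff matters as much as the probability itself.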

\n

One consequence I can think of is that with a sufficiently low p, or if Bob wouldn't be particularly offended by Alice's incorrect guess, taking the guess would be preferable. Now I wonder if we do that a lot in daily life with issues we don't consider controversial (\"hmm, are you from my country/state too?\"), and if all the \"you're overreacting/too sensitive\" complaints come from Alice incorrectly assessing a too low-by-absolute-value negative payoff in (0, 1).

" } }, { "_id": "aC8A3BWFbpyoR953t", "title": "Hard To See What", "pageUrl": "https://www.lesswrong.com/posts/aC8A3BWFbpyoR953t/hard-to-see-what", "postedAt": "2010-12-02T03:54:40.520Z", "baseScore": 1, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "aC8A3BWFbpyoR953t", "html": null } }, { "_id": "5eRnAtwuirHC3uhua", "title": "Cheat codes", "pageUrl": "https://www.lesswrong.com/posts/5eRnAtwuirHC3uhua/cheat-codes", "postedAt": "2010-12-01T21:19:39.547Z", "baseScore": 51, "voteCount": 44, "commentCount": 93, "url": null, "contents": { "documentId": "5eRnAtwuirHC3uhua", "html": "

Most things worth doing take serious, sustained effort. If you want to become an expert violinist, you're going to have to spend a lot of time practicing. If you want to write a good book, there really is no quick-and-dirty way to do it. But sustained effort is hard, and can be difficult to get rolling. Maybe there are some easier gains to be had with simple, local optimizations. Contrary to oft-repeated cached wisdom, not everything worth doing is hard. Some little things you can do are like cheat codes for the real world.

\n

Take habits, for example: your habits are not fixed. My diet got dramatically better once I figured out how to change my own habits, and actually applied that knowledge. The general trick was to figure out a new, stable state to change my habits to, then use willpower for a week or two until I settle into that stable state. In the case of diet, a stable state was one where junk food was replaced with fruit, tea, or having a slightly more substantial meal beforehand so I wouldn't feel hungry for snacks. That's an equilibrium I can live with, long-term, without needing to worry about \"falling off the wagon.\" Once I figured out the pattern -- work out a stable state, and force myself into it over 1-2 weeks -- I was able to improve several habits, permanently. It was amazing. Why didn't anybody tell me about this?

\n

In education, there are similar easy wins. If you're trying to commit a lot of things to memory, there's solid evidence that spaced repetition works. If you're trying to learn from a difficult textbook, reading in multiple overlapping passes is often more time-efficient than reading through linearly. And I've personally witnessed several people academically un-cripple themselves by learning to reflexively look everything up on Wikipedia. None of this stuff is particularly hard. The problem is just that a lot of people don't know about it.

\n

What other easy things have a high marginal return-on-effort? Feel free to include speculative ones, if they're testable.

" } }, { "_id": "3hyKHCcWK5h5X2Jdi", "title": "Smart people who are usually wrong", "pageUrl": "https://www.lesswrong.com/posts/3hyKHCcWK5h5X2Jdi/smart-people-who-are-usually-wrong", "postedAt": "2010-12-01T20:45:30.123Z", "baseScore": 8, "voteCount": 13, "commentCount": 30, "url": null, "contents": { "documentId": "3hyKHCcWK5h5X2Jdi", "html": "

There are several posters on Less Wrong whom I

\n\n

So I think they are exceptionally smart people whose judgement is consistently worse than if they flipped a coin.

\n

How probable is this?

\n

Some theories:

\n" } }, { "_id": "RitnzDJdSePjT9JXC", "title": "Philadelphia Meetup: Dec 5 or 6", "pageUrl": "https://www.lesswrong.com/posts/RitnzDJdSePjT9JXC/philadelphia-meetup-dec-5-or-6", "postedAt": "2010-12-01T20:24:53.952Z", "baseScore": 5, "voteCount": 3, "commentCount": 6, "url": null, "contents": { "documentId": "RitnzDJdSePjT9JXC", "html": "

I'll be in Philadelphia on Dec. 6-7 for the Systems Biology of Human Aging conference.  If there are LWers in Philadelphia who'd like to meet, we can meet at my hotel, and either stay there or go out.  This would be either Sunday Dec. 5 at 6pm-11pm, or Monday Dec. 6 at 9pm-midnight. If interested, please respond here, with date preference, and also email my username at gmail.

\n

Notice the different hours.  I really don't have much time on Monday night.

" } }, { "_id": "duHdjMH369xBKvEB9", "title": "A possible example of failure to apply lessons from Less Wrong", "pageUrl": "https://www.lesswrong.com/posts/duHdjMH369xBKvEB9/a-possible-example-of-failure-to-apply-lessons-from-less", "postedAt": "2010-12-01T19:33:04.193Z", "baseScore": 21, "voteCount": 17, "commentCount": 17, "url": null, "contents": { "documentId": "duHdjMH369xBKvEB9", "html": "

One issue that has been discussed here before is whether Less Wrong is causing readers and participants to behave more rationally or is primarily a time-sink. I recently encountered an example that suggested mixed results and seemed worth pointing out to the community. The entry for Less Wrong on RationalWiki says \" In the outside world, the ugly manifests itself as LessWrong acolytes, minds freshly blown, metastasising to other sites, bringing the Good News for Modern Rationalists, without clearing their local jargon cache.\" RationalWiki has a variety of issues that I'm not going to discuss in detail here (such as a healthy dose of motivated cognition pervading the entire project and having serious mind-killing problems) but this sentence should be a cause for concern. What they are essentially talking about is LWians not realizing (or not internalizing) that there's a serious problem of inferential distance between people who are familiar with many of the ideas here and people who are not. Since inferential distance is an issue that has been discussed here a lot, this suggests that some people who have read a lot here are not applying the lessons even when they are consciously talking about material related to those lessons. Of course, there's no easy way to tell how representative a sample this is, how common it is, and given RW's inclination to list every possible thing they don't like about something, no matter how small, this may not be a serious issue at all. But it did seem to be serious enough to point out here.

\n

 

\n

 

" } }, { "_id": "JRpSoarmeY4S7LrKM", "title": "Is ambition rational?", "pageUrl": "https://www.lesswrong.com/posts/JRpSoarmeY4S7LrKM/is-ambition-rational", "postedAt": "2010-12-01T18:54:54.639Z", "baseScore": 21, "voteCount": 17, "commentCount": 16, "url": null, "contents": { "documentId": "JRpSoarmeY4S7LrKM", "html": "

I don't understand the people around me who are working so very hard to succeed. It strikes me as irrational. Why do you do it?

\n

A long time ago I reasoned that it is more efficient to strive for a 90% rather than 100% on a test because both yield the same \"A\". This morphed into a way of life. I barely got past grad school to earn my PhD, and now I'm a \"Dr.\" just like anyone else. I worked in corporate research where promotions are largely determined by time served. So I aimed to do a good job, but I didn't put in extra effort. Recently, I do just enough consulting to get by and spend the rest of my time as a lazy hipster. :-) 

\n

The emotional half of my brain would like to be more successful, but the \"logical\" part of my brain explains (condescendingly) that the poor odds don't justify the extra effort. Which is right? 

\n

Here's a practical example: My friend is a senior manager at an investment bank. If she works extremely hard for a few more years, she has a small chance (1 in 50?) at being promoted to managing director (2x income). On the other hand, she could scale back her responsibilities and coast for a few years on her already outrageous salary. She has not decided what to do. 

\n

I'm new. If this has already been discussed please post links. Searching didn't yield anything relevant. Thanks.

\n

 

" } }, { "_id": "vs3kzjLhbdKsndnBy", "title": "Ask and Guess", "pageUrl": "https://www.lesswrong.com/posts/vs3kzjLhbdKsndnBy/ask-and-guess", "postedAt": "2010-12-01T17:54:10.469Z", "baseScore": 144, "voteCount": 121, "commentCount": 66, "url": null, "contents": { "documentId": "vs3kzjLhbdKsndnBy", "html": "

There's a concept (inspired by a Metafilter blog post) of ask culture vs. guess culture.  In \"ask culture,\" it's socially acceptable to ask for a favor -- staying over at a friend's house, requesting a raise or a letter of recommendation -- and equally acceptable to refuse a favor.  Asking is literally just inquiring if the request will be granted, and it's never wrong to ask, provided you know you might be refused.  In \"guess culture,\" however, you're expected to guess if your request is appropriate, and you are rude if you accidentally make a request that's judged excessive or inappropriate.  You can develop a reputation as greedy or thoughtless if you make inappropriate requests.

\n

When an asker and a guesser collide, the results are awful.  I've seen it in marriages, for example.

\n

Husband: \"Could you iron my shirt?  I have a meeting today.\"

\n

Wife: \"Can't you see I'm packing lunches and I'm not even dressed yet?  You're so insensitive!\"

\n

Husband: \"But I just asked.  You could have just said no if you were too busy -- you don't have to yell at me!\"

\n

Wife: \"But you should pay enough attention to me to know when you shouldn't ask!\"

\n

It's not clear how the asking vs. guessing divide works.  Some individual people are more comfortable asking than guessing, and vice versa.  It's also possible that some families, and some cultures, are more \"ask-based\" than \"guess-based.\"  (Apparently East Asia is more \"guess-based\" than the US.)  It also varies from situation to situation: \"Will you marry me?\" is a question you should only ask if you know the answer is yes, but \"Would you like to get coffee with me?\" is the kind of question you should ask freely and not worry too much about rejection.

\n

There's a lot of scope for rationality in deciding when to ask and when to guess.  I'm a guesser, myself.  But that means I often pass up the opportunity to get what I want, because I'm afraid of being judged as \"greedy\" if I make an inappropriate request.  If you're a systematic \"asker\" or a systematic \"guesser,\" then you're systematically biased, liable to guess when you should ask and vice versa.  

\n

In my experience, there are a few situations in which you should experiment with asking even if you're a guesser: in a situation where failure/rejection is so common as to not be shameful (i.e. dating); in a situation where it's someone's job to handle requests, and requests are common (e.g. applying for jobs or awards, academic administration); and in a situation where granting or refusing a request is ridiculously easy (most internet communication).  Most of the time when I've tried this out I've gotten my requests granted. I'm still much more afraid of being judged as greedy than I am of not getting what I want, so I'll probably always stay on the \"guessing\" end of the spectrum, but I'd like to get more flexible about it, and more willing to ask when I'm in situations that call for it.

\n

Anyone else have a systematic bias, one way or another?  Anybody trying to overcome it?

\n

(relevant: The Daily Ask, a website full of examples of ways you can make requests.  Some of these shock me -- I wouldn't believe it's acceptable to bargain over store prices like that. But, then again, I'm running on corrupted hardware and I wouldn't know what works and what doesn't until I make the experiment.)

" } }, { "_id": "TrmMcujGZt5JAtMGg", "title": "How to Save the World", "pageUrl": "https://www.lesswrong.com/posts/TrmMcujGZt5JAtMGg/how-to-save-the-world", "postedAt": "2010-12-01T17:17:48.713Z", "baseScore": 104, "voteCount": 100, "commentCount": 135, "url": null, "contents": { "documentId": "TrmMcujGZt5JAtMGg", "html": "

Most of us want to make the world a better place. But what should we do if we want to generate the most positive impact possible? It’s definitely not an easy problem. Lots of smart, talented people with the best of intentions have tried to end war, eliminate poverty, cure disease, stop hunger, prevent animal suffering, and save the environment. As you may have noticed, we’re still working on all of those. So the track record of people trying to permanently solve the world's biggest problems isn’t that spectacular. This isn’t just a “look to your left, look to your right, one of you won’t be here next year”-kind of thing, this is more like “behold the trail of dead and dying who line the path before you, and despair”. So how can you make your attempt to save the world turn out significantly better than the generations of others who've tried this already?

It turns out there actually are a number of things we can do to substantially increase our odds of doing the most good. Here's a brief summary of some on the most crucial considerations that one needs to take into account when soberly approaching the task of doing the most good possible (aka \"saving the world\").

1. Patch your moral intuition (with math!) - Human moral intuition is really useful. But it tends to fail us at precisely the wrong times -- like when a problem gets too big [“millions of people dying? *yawn*”] or when it involves uncertainty [“you can only save 60% of them? call me when you can save everyone!”]. Unfortunately, these happen to be the defining characteristics of the world’s most difficult problems. Think about it. If your standard moral intuition were enough to confront the world’s biggest challenges, they wouldn’t be the world’s biggest challenges anymore... they’d be “those problems we solved already cause they were natural for us to understand”. If you’re trying to do things that have never been done before, use all the tools available to you. That means setting aside your emotional numbness by using math to feel what your moral intuition can’t. You can also do better by acquainting yourself with some of the more common human biases. It turns out your brain isn't always right. Yes, even your brain. So knowing the ways in which it systematically gets things wrong is a good way to avoid making the most obvious errors when setting out to help save the world.

2. Identify a cause with lots of leverage - It’s noble to try and save the world, but it’s ineffective and unrealistic to try and do it all on your own. So let’s start out by joining forces with an established organization who’s already working on what you care about. Seriously, unless you’re already ridiculously rich + brilliant or ludicrously influential, going solo or further fragmenting the philanthropic world by creating US-Charity#1,238,202 is almost certainly a mistake. Now that we’re all working together here, let's keep in mind that only a few charitable organizations are truly great investments -- and the vast majority just aren’t. So maximize your leverage by investing your time and money into supporting the best non-profits with the largest expected pay-offs.

3. Don’t confuse what “feels good” with what actually helps the most - Wanna know something that feels good? I fund micro-loans on Kiva. It’s a ridiculously cheap way to feel good about helping people. It totally plays into this romantic story I have in my mind about helping business owners help themselves. And there’s lots of shiny pictures of people I can identify with. But does loaning $25 to someone on the other side of the planet really make the biggest impact possible? Definitely not. So I fund a few Kiva loans a month because it fulfills a deep-seated psychological need of mine -- a need that doesn’t go away by ignoring it or pretending it doesn’t exist. But once that’s out of the way, I devote the vast majority of my time and resources to contributing to other non-profits with staggeringly higher pay-offs.

4. Don’t be a “cause snob” - This one's tough. The more you begin to care about a cause, the more difficult it becomes not to be self-righteous about it.  The problem doesn’t go away just because you really do have a great cause... it only gets worse. Resist the temptation to kick dirt in the faces of others who are doing something different. There are always other ways to help no matter what philanthropic cause you're involved with. And everyone starts out somewhere. 15 years ago, I was optimizing for anarchy. Things change. And even if they don't, people deserve your respect regardless of whether they want to help save the world or not. We're entitled to nothing and no one. Our fortunes will rise and fall based on our abilities, including the ability to be nice -- not the intrinsic goodness of our causes.

5. Be more effective - You know how sometimes you get stuck in motivational holes, end up sick all the time, and have trouble getting things done? That’s gonna happen to everyone, every now and then. But if it’s an everyday kind of thing for you, check out some helpful resources that can get you unstuck. This is incredibly important because the steps up until now only depended on what you believed and what your priorities were. But your beliefs and priorities won’t even get you through the day, much less help you save the world. You're gonna need to formulate goals and be able to act on them. Becoming more capable, more organized, more well-connected, and more motivated is an essential part of saving the world. Your goals aren’t going to just accomplish themselves the first time you “try”. If you want to succeed, you’ll likely have to fail a bunch first, and then try harder.

6. Spread awareness - This is a necessary meta-strategy no matter what you’re trying to accomplish. Remember, deep down, most people really do want to find a way to help others or save the world. They just might not be looking for it all the time. So tell people what you’re up to and if they want to know more, tell them that too. You shouldn’t expect everyone to join you, but you should at least give people a chance to surprise you. And there are other less obvious things you can do, like join networking groups for your cause or link to the website of your favorite cause a lot from your blog and other sites where they might not be mentioned quite so much. That way, they can consistently turn up higher in Google searches. Or post this article on Facebook. Some of your friends will be happy you shared it with them. Just saying.

7. Give money - Spreading awareness can only accomplish so much. Money is still the ultimate meta-tool for accomplishing everything. There are millions of excuses not to give, but at the end of the day, this is the highest-leverage way for you to contribute to that already high-leverage cause that you identified. And don’t feel like you’re alone in finding it difficult to give. Most people find it incredibly difficult to give money -- even to a cause they deeply support. But even if it’s a heroically difficult task, we should still aspire to achieve it... we’re trying to save the world here, remember? If this were easy, someone else (besides Petrov) would have done it already.

8. Give now (rather than later) - I’ve seen fascinating arguments that it might be possible to do more good by investing your money in the stock market for a long time and then giving all the proceeds to charity later. It’s an interesting strategy but it has a number of limitations. To name just two: 1) Not contributing to charity each year prevents you from taking advantage of the best tax planning strategy available to you. That tax-break is free money. You should take free money. Not taking the free money is implicitly agreeing that your government knows how to spend your money better than you do. Do you think your government’s judgment and preferences are superior to yours? and; 2) Non-profit organizations can have endowments and those endowments can invest in securities just like individuals. So if long term-investment in the stock market were really a superior strategy, the charity you’re intending to give your money to could do the exact same thing. They could tuck all your annual contributions away in a big fat, tax-free fund to earn market returns until they were ready to unleash a massive bundle of money just like you would have. If they aren’t doing this already, it’s probably because the problem they’re trying to solve is compounding faster than the stock market compounds interest. Diseases spread, poverty is passed down, existential risk increases. At the very least, don’t try to out-think the non-profit you support without talking to them - they probably wish you were donating now, not just later.

9. Optimize your income - Do you know how much you should be earning? Information on salaries in your industry / job market could help you negotiate a pay raise. And if you’re still in school, why not spend 2 hours to compare the salaries of the different careers you’re interested in? Careers can last decades. Degrees take 4-6 years to complete. Make sure you really want the kind of salaries you’ll be getting and you know what it will be like to work in your chosen industry. Even if you’re a few years into a degree program, changing course now is still better than regretting not having explored other options later. Saving the world is hard enough. Don’t make it harder on yourself by earning below market wages or choosing the wrong career to begin with.

10. Optimize your outlays - Cost of living can vary drastically across different tax districts, real estate markets, commuting methods, and other daily spending habits. It’s unlikely you ended up with an optimal configuration. For starters, if you don’t currently track your spending, I highly recommend you at least try out something light-weight like Mint.com so you can figure out where all your money is going. Remember, you don’t have to scrimp and sacrifice your quality of life to save money -- a lot of things can be less expensive just by planning ahead a little and avoiding those unnecessary “gotcha” fees. No matter what you want to do to improve the world, having more money to do it makes things easier.

11. Look into matching donations - If you’re gonna give money to charity anyway, you should see if you can get your employer to match your gift. I've done this before and know others who have too. Thousands of employers will match donations to qualified non-profits. When you get free money -- you should take it.

12. Have fun! - Don’t get so wrapped up trying to save the world that you sacrifice your own humanity. Having a rich, fulfilling personal life is a well-spring of passion that will only boost your ability to contribute -- not distract you. Trust me: you won’t be sucked into the veil of Maya and forget about your vow to save the world. So have a beer. Call up your best friend. Watch a movie that has absolutely no world-saving side-benefits whatsoever! You should do whatever it is that connects to that essential joy of being human and you should do it as often as you need; without apologies. Enough people sacrifice their lives without even realizing it -- don’t sacrifice your own on purpose.

" } }, { "_id": "axgfcSEQwaaAn4d3Z", "title": "Gender Identity and Rationality", "pageUrl": "https://www.lesswrong.com/posts/axgfcSEQwaaAn4d3Z/gender-identity-and-rationality", "postedAt": "2010-12-01T16:32:25.785Z", "baseScore": 58, "voteCount": 53, "commentCount": 114, "url": null, "contents": { "documentId": "axgfcSEQwaaAn4d3Z", "html": "

Not sure if I would be better off posting this on the main page instead, but since it's almost entirely about my personal experiences, here it goes.

\n

Two years ago, I underwent a radical change in my worldview. A series of events caused me to completely re-evaluate my beliefs in everything related to gender, sexuality, tolerance, and diversity -- which in turn caused a cascade that made me rethink my stance on many other topics.

\n

Coincidentally, the same events caused me to also rethink the way I thought of myself. That self-image, as it turned out, was not in very good shape. It still makes it difficult for me to untangle the various consequences, which are correlated but not necessarily bound by a direct cause-effect relation.

\n

To be more blunt: being biologically male, I confessed to someone online about things that \"men weren't supposed to do\": my dissatisfaction with my body, my wish to have a female body, persistent fantasies of a sex change, desires to shave my body, grow long hair and wear women's clothes, and so on and so forth. She listened, and then asked, \"Maybe you're transsexual?\"

\n

Back then, it would never even occur to me to think of that -- and my first gut response, which I'm not proud of, was denying association with \"those freaks\". As I understand now, I was relying on a cached thought, and it limited the scope of my reasoning. She used simple intuitive reasoning to arrive at the hypothesis based on what I revealed to her; I didn't know the hypothesis was even there, as I knew nothing about gender identity.

\n

In the events that unfolded, I integrated myself into some LGBT communities and learned about all kinds of people, including those who didn't fit into notions of the gender binary at all. I've learned to view gender as a multidimensional space with two big clusters, rather than as a boolean flag. It felt incredibly heartwarming to be able to mentally call myself by a female name, to go by it on the Internet, to talk to like-minded people who had similar experiences and feelings, and to be referred to by the pronoun \"she\" -- which at first bugged me, because I somehow felt I had \"no moral right\" or had to \"earn that privilege\", but I quickly got at ease with it, and soon it just felt ordinary, and like the only acceptable thing to do, the only way of presentation that felt right.

\n

(I'm compressing and simplifying here for the sake of readability -- I'm skipping over the brief period after that conversation when I thought of myself as genderless, not yet ready to accept a fully female gender identity, and carried out thought experiments with imaginary conversations between my \"male\" and \"female selves\", before deciding that there was no male self to begin with after all.)

\n

Nowadays, gender-wise, I address people the way they wish to be addressed. I also have some pretty strong opinions on the legal concept of gender, which I won't voice here. And I've learned a lot, and was able to drive my introspection deeper than I ever managed before... But that's not really relevant.

\n

And yet... And yet.

\n

As gleefully as I embraced a female role, feeling on the way to fulfilling my dream, I couldn't shake the nagging feeling of being somehow \"fake\". I kept thinking that I don't always \"think like a real woman would\", and I've had days of odd apathy when I didn't care about anything, including my gender presentation. Some cases happened even before my gender \"awakening\", and on those days, I felt empty and genderless, a drained shell of a person.

\n

How, in all honesty, can I know if I'm \"really a woman on the inside\"? What does that even mean? I can speak in terms of desired behavior, in terms of the way I'm seen socially, from the outside. But how can I compare my subjective experience to those of different men and women, without getting into their heads? All I have is empathic inference, which works by building crude, approximate models of other people inside my head, and is so full of ill-defined biases that I suspect I shouldn't rely on it at all, and shouldn't say things like \"well, a man's subjective experience is way off for me, but a woman's subjective experience only weakly fits\".

\n

And yet... transpeople report \"feeling like\" their claimed gender. I prefer to work with more unambiguous subjective feelings -- like feeling I have the wrong body -- but I have caught myself thinking at different times, \"This day I felt like a woman, and that day I didn't feel like a woman, but more like... nothing at all. And that other day my mind was occupied with completely different matters, like writing a Less Wrong post.\" It helps sometimes to visualize my brain as a system of connected logical components, with an \"introspection center\" as a separate component, but that doesn't bring me close to solving the mystery.

\n

I want to be seen as a woman, and nothing else. I take steps to ensure that it happens. If I could start from a clean slate, magically get an unambiguously female body, and live somewhere where nobody would know about my past male life, perhaps that would be the end of it -- there would be no need for me to worry about it anymore. But as things stand, my introspection center keeps generating those nagging thoughts: \"What if I'm a pretender, a man who merely thinks he's a woman, but isn't?\" One friend of mine postulated that \"wanting to be a gender is the same as being it\"; but is it really that simple?

\n

The sheer number of converging testimonies between myself and transpeople I've met and talked to would seem to rule that out. \"If I'm fake, then they're fake too, and surely that sounds extremely unlikely.\" But while discovering similarities makes me generically happy, every deviation from the mean -- for example, I consciously discovered my gender identity at 21, a relatively late age -- stings painfully and brings up the uncertainty again. Could this be a case of failing to properly assign Bayesian weights, of giving evidence less significance than counterevidence? But every time I discovered a piece of counterevidence, my mind interpreted it as a breach of my mental defenses and tried to route around it, in other words, rationalize it away.

\n

Maybe I could just tell myself, \"Shut up and live the way you want to.\"

\n

And yet...

\n

I caught myself thinking that I really, deeply didn't want to go back, to the point that I didn't want to accept the conclusion \"I'm really a man and an impostor\", even that time when it looked like the evidence weighed that way. (It's no longer the case now that I've learned more facts, but the point still stands.) It was an unthinkable thought, and still is. Even now, I fail to apply the Litany of Tarski. \"If I'm really a man, then I desire to bel--\" Wait, doesn't compute. It feels like stating an incoherent statement, like \"If sexism is morally and scientifically justified, then...\" It feels like it would cause my entire system of values to collapse, and I can't bring myself to think that -- but isn't that the danger of \"already knowing the answer\", rationalizing, etc.?

\n

It also bugs me, I guess, that despite relying on rational reasoning in so many aspects of my daily life, with this one case, about an aspect of myself, I'm relying on some subjective, vague \"gut feeling\". Granted, I try to approach it in a rational way: someone used my revelations to locate a hypothesis, I found it likely based on the evidence and accepted it, then started updating... or did I? Would I really be able to change my belief even in principle? And even then, the root cause, the very root cause, comes from feelings of uneasiness with my assigned gender role that I cannot rationally explain -- they're just there, in the same way that my consciousness is \"just there\".

\n

So...

\n

When I heard about p-zombies, I immediately drew parallels. I asked myself if \"fake transpeople\" were even a coherent concept. Would it be possible to imagine two people who behave identically (and true to themselves, not acting), except one has \"real\" subjective feelings of gender and the other doesn't? After applying an appropriately tweaked anti-zombie argument, it seems to me that the answer is no, but it's also possible that the question is too ill-defined for any answer to make sense.

\n

The way it stands now, so-called gender identity disorder isn't really something that is truly diagnosed, because it's based on self-reporting; you cannot look into someone's head and say \"you're definitely transsexual\" without their conscious understanding of themselves and their consent. So it seems to me to lie outside the domain of psychiatry in the first place. I've heard some transpeople voice hope that there could be a device that could scan the part of the brain responsible for gender identity and say \"yes, this one is definitely trans\" and \"no, this one definitely isn't\". But the prospect of such a device horrifies me even in principle. What if the device conflicts with their self-reporting? (I suspect I'm anxious about the possibility of it filtering me, specifically.) What should we consider more reliable -- the machine or the self-reporting? On one hand, we know how filled human brains are with cognitive biases, but on the other hand, it seems to me like a truism that \"you are the final authority in your own self-identification.\"

\n

Maybe it's a question of definitions, like the question about a tree making a sound, and the final answer depends on how exactly we define \"gender identity\". Or maybe -- this thought occurred to me just now -- my decision agent has a gender identity while my introspection center (which operates entirely on abstract knowledge rather than social conventions) doesn't, and that's the cause of the confusion I get from looking at things in both a gendered and a genderless way, much as if I could switch at will between a timed view from inside the timeline and a timeless view of the entire 4D spacetime at once. In any case, for the two years since that realization I've stuck with the identity and role that I at least believe is the only one I won't regret assuming.

" } }, { "_id": "GG2rtBReAm6o3mrtn", "title": "Defecting by Accident - A Flaw Common to Analytical People", "pageUrl": "https://www.lesswrong.com/posts/GG2rtBReAm6o3mrtn/defecting-by-accident-a-flaw-common-to-analytical-people", "postedAt": "2010-12-01T08:25:47.450Z", "baseScore": 128, "voteCount": 150, "commentCount": 433, "url": null, "contents": { "documentId": "GG2rtBReAm6o3mrtn", "html": "

Related to: Rationalists Should Win, Why Our Kind Can't Cooperate, Can Humanism Match Religion's Output?, Humans Are Not Automatically Strategic, Paul Graham's \"Why Nerds Are Unpopular\"

The "Prisoner's Dilemma" refers to a game theory problem developed in the 1950's. Two prisoners are taken and interrogated separately. If either of them confesses and betrays the other person - "defecting" - they'll receive a reduced sentence, and their partner will get a greater sentence. However, if both defect, then they'll both receive higher sentences than if neither of them confessed.

This puts the prisoners in a strange bind. The best solution individually is to defect. But if both take the individually best solution, then they'll be worst off overall. This has wide-ranging implications for international relations, negotiation, politics, and many other fields.
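For concreteness, here is a minimal sketch of the payoff structure just described. The specific sentence lengths are illustrative assumptions; the point is only that defecting is the better individual move against either choice by your partner, yet mutual defection leaves both worse off than mutual silence.

```python
# A minimal sketch of the Prisoner's Dilemma payoffs described above.
# Sentence lengths (in years) are illustrative assumptions; lower is better.
# Each entry: (my sentence, partner's sentence) given (my move, partner's move).
SENTENCES = {
    ("cooperate", "cooperate"): (1, 1),   # neither confesses
    ("cooperate", "defect"):    (10, 0),  # I stay silent, partner betrays me
    ("defect",    "cooperate"): (0, 10),  # I betray, partner stays silent
    ("defect",    "defect"):    (5, 5),   # both confess
}

def best_response(partner_move):
    """Whatever the partner does, defecting gives me a shorter sentence."""
    return min(("cooperate", "defect"),
               key=lambda my_move: SENTENCES[(my_move, partner_move)][0])

print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
# Yet (defect, defect) means 5 years each, worse than the 1 year each
# the pair would get from (cooperate, cooperate).
```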

Members of LessWrong are incredibly smart people who tend to like game theory, and debate and explore and try to understand problems like this. But, does knowing game theory actually make you more effective in real life?

I think the answer is yes, with a caveat - you need the basic social skills to implement your game theory solution. The worst-case scenario in an interrogation would be to \"defect by accident\" - meaning that you'd just blurt out something stupid because you didn't think it through before speaking. This might result in you and your partner both receiving higher sentences... a very bad situation. Game theory doesn't even come into play until the basic skill conditions are met, so that you could actually execute any plan you come up with.

The Purpose of This Post: I think many smart people \"defect\" by accident. I don't mean in serious situations like a police investigation. I mean in casual, everyday situations, where they tweak and upset the people around them by accident, due to a lack of reflection on their desired outcomes.

Rationalists should win. Defecting by accident frequently results in losing. Let's examine this phenomenon, and ideally work to improve it.

Contents Of This Post

Background - On Analytical Skills and Rhetoric

From Paul Graham's "Why Nerds Are Unpopular" -

I know a lot of people who were nerds in school, and they all tell the same story: there is a strong correlation between being smart and being a nerd, and an even stronger inverse correlation between being a nerd and being popular. Being smart seems to make you unpopular.
[...]
The key to this mystery is to rephrase the question slightly. Why don't smart kids make themselves popular? If they're so smart, why don't they figure out how popularity works and beat the system, just as they do for standardized tests?
[...]
So if intelligence in itself is not a factor in popularity, why are smart kids so consistently unpopular? The answer, I think, is that they don't really want to be popular.
If someone had told me that at the time, I would have laughed at him. Being unpopular in school makes kids miserable, some of them so miserable that they commit suicide. Telling me that I didn't want to be popular would have seemed like telling someone dying of thirst in a desert that he didn't want a glass of water. Of course I wanted to be popular.
But in fact I didn't, not enough. There was something else I wanted more: to be smart.

I believe that "defecting by accident" is a result of not learning how different phrasing of words and language can dramatically effect how well your point is taken. It's been a general observation of mine that a lot of people in highly intellectual disciplines like mathematics, physics, robotics, engineering, and computer science/programming look down on social skills.

Of course, they wouldn't phrase it that way. They'd say they don't have time for it - they don't have time for gossip, or politics, or sugarcoating. They might say, \"I'm a realist\" or \"I say it like it is.\"

I believe this is a result of not realizing how big the difference in your effectiveness will be depending on how you phrase things, in what order, and how well you appeal to another person's emotions. People in highly analytical disciplines often care about \"just the facts\" - but, let's face it, we highly analytical people are a small minority of the population.

Sooner or later, you're going to have something you care about and you're going to need to persuade someone who is not highly analytical. At that point, you run some serious risks of failure if you don't understand basic social skills.

Now, most people would claim that they have basic social skills. But I'm not sure this is borne out by observation. This used to be a very key part of any educated person's studies: rhetoric. From Wikipedia: \"Rhetoric is the art of using language to communicate effectively and persuasively. ... From ancient Greece to the late 19th Century, it was a central part of Western education, filling the need to train public speakers and writers to move audiences to action with arguments.\"

Rhetoric is now frequently looked down upon by highly intelligent and analytical people. Like Paul Graham says, it's not that intellectuals can't learn it. It's that they think it's not a good use of their time, that they'd rather be smart instead.

Defecting by Accident

Thus, you see highly intelligent people do what I now term \"defecting by accident\" - meaning that, in the process of trying to have a discussion, they insult, belittle, or offend their conversational partner. They commit obvious, blatant social faux pas, not as a conscious decision about the tradeoffs, but by accident, because they don't know better.

Sometimes defecting is the right course of action. Sometimes you need to break from whoever you're negotiating with, insist that things are done your way, even at their expense, and take the consequences that may arise from that.

But it's rarely something you should do by accident.

I'll give specific, clear examples in a moment, but before I do so, let's look at a general example of how this can happen.

If you're at a meeting and someone gives a presentation and asks if anyone has questions, and you ask point-blank, \"But we don't have the budget or skills to do that, how would we overcome that?\" - then, that seems like a highly reasonable question. It's probably very intelligent.

What normal people would consider, though, is how this affects the perception of everyone in the room. To put it bluntly - it makes the presenter look very bad.

That's okay, if you decide that that's an acceptable part of what you're doing. But you now have someone who is likely to actively work to undermine you going forwards. A minor enemy. Just because you asked a question casually without thinking about it.

Interestingly, there are about a thousand ways you could be diplomatic and tactful in addressing the key issue you have - budgeting/staffing - without embarrassing the presenter. You could take them aside quietly later and express your concern. You could phrase it as, \"This seems like an amazing idea and a great presentation. I wonder how we could secure the budgeting and get the team for it, because it seems like it'd be profitable if we do, and it'd be a shame to miss this opportunity.\"

Just by phrasing it that way, you make the presenter look good even if the option can't be funded or staffed. Instead of expressing your concern as a hole in their presentation, you express it as a challenge to be overcome by everyone in the room. Instead of your underlying point coming across as \"your idea is unfeasible,\" it comes across as, \"You've brought this good idea to us, and I hope we're smart enough to make it work.\"

If the real goal is just to make sure budgeting and funding is taken care of, there's many ways to do that without embarrassing and making an enemy out of the presenter.

Defecting by accident is lacking the awareness, tact, and skill to realize what the secondary effects of your actions are and act accordingly to win.

This is a relatively basic problem that the majority of \"normal\" people understand, at least on a subconscious level. Most people realize that you can't just show up a presenter and make them look bad. Or at least, you should expect them to be hostile to you if you do. But many intelligent people say, \"What the hell is his problem? I just asked a question.\"

This is due to a lack of understanding of social skills, diplomacy, tact, and yes, perhaps \"politics\" - which are unfortunately a reality of the world. And again, rationalists should win. If your actions are leading to hostility and defection against you, then you need to consider whether your actions are the best possible.

"Why Our Kind Can't Cooperate"

Eliezer's "Why Our Kind Can't Cooperate" is a masterpiece. I'm only going to excerpt three parts, but I'd recommend the whole article.

From when I was still forced to attend, I remember our synagogue's annual fundraising appeal. It was a simple enough format, if I recall correctly. The rabbi and the treasurer talked about the shul's expenses and how vital this annual fundraise was, and then the synagogue's members called out their pledges from their seats.

Straightforward, yes?
Let me tell you about a different annual fundraising appeal. One that I ran, in fact; during the early years of a nonprofit organization that may not be named. One difference was that the appeal was conducted over the Internet. And another difference was that the audience was largely drawn from the atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc crowd. (To point in the rough direction of an empirical cluster in personspace. If you understood the phrase \"empirical cluster in personspace\" then you know who I'm talking about.)
I crafted the fundraising appeal with care. By my nature I'm too proud to ask other people for help; but I've gotten over around 60% of that reluctance over the years. The nonprofit needed money and was growing too slowly, so I put some force and poetry into that year's annual appeal. I sent it out to several mailing lists that covered most of our potential support base.
And almost immediately, people started posting to the mailing lists about why they weren't going to donate. Some of them raised basic questions about the nonprofit's philosophy and mission. Others talked about their brilliant ideas for all the other sources that the nonprofit could get funding from, instead of them. (They didn't volunteer to contact any of those sources themselves, they just had ideas for how we could do it.)
Now you might say, "Well, maybe your mission and philosophy did have basic problems - you wouldn't want tocensor that discussion, would you?"
Hold on to that thought.
Because people were donating. We started getting donations right away, via Paypal. We even got congratulatory notes saying how the appeal had finally gotten them to start moving. A donation of $111.11 was accompanied by a message saying, \"I decided to give **** a little bit more. One more hundred, one more ten, one more single, one more dime, and one more penny. All may not be for one, but this one is trying to be for all.\"
But none of those donors posted their agreement to the mailing list. Not one.

So far as any of those donors knew, they were alone. And when they tuned in the next day, they discovered not thanks, but arguments for why they shouldn't have donated. The criticisms, the justifications for not donating - only those were displayed proudly in the open.
As though the treasurer had finished his annual appeal, and everyone not making a pledge had proudly stood up to call out justifications for refusing; while those making pledges whispered them quietly, so that no one could hear.

Indeed, that's a problem. Eliezer continues:

"It is dangerous to be half a rationalist."

And finally, this point, which is magnificent -

Our culture puts all the emphasis on heroic disagreement and heroic defiance, and none on heroic agreement or heroic group consensus. We signal our superior intelligence and our membership in the nonconformist community by inventing clever objections to others' arguments. Perhaps that is why the atheist/libertarian/technophile/sf-fan/Silicon-Valley/programmer/early-adopter crowd stays marginalized, losing battles with less nonconformist factions in larger society. No, we're not losing because we're so superior, we're losing because our exclusively individualist traditions sabotage our ability to cooperate.

On Being Pedantic, Sarcastic, Disagreeable, Non-Complimentary, and Otherwise Defecting by Accident

You might not realize it, but in almost all of human civilization it's considered insulting to just point out something wrong someone is doing without any preface, softening, or making it clear why you're doing it.

It's taken for granted in some blunt, \"say it like it is\" communities, but it's usually taken as a personal attack and a sign of animosity in, oh, 90%+ of the rest of civilization.

In these so-called "normal people's societies," correcting them in front of their peers will be perceived as trying to lower them and make them look stupid. Thus, they'll likely want to retaliate against you, or at least not cooperate with you.

Now, there's a time and place to do this anyways. Sometimes there's an emergency, and you don't have time to take care of people's feelings, and just need to get something done. But surfing the internet is not that time.

I'm going to take some example replies from a recent post I made to illustrate this. There's always a risk of not being objective in doing this, but I think it's worth it because (1) I tend to read every reply to me and carefully reflect on it for a moment, (2) I understand exactly my first reactions to these comments, and (3) I won't have to rehash criticisms of another person. Take this with a grain of salt since I'm looking at replies to my own post, but I think I can give you some good examples.

The first thing I want to do is take a second to mention that almost everyone in the entire world gets emotionally invested in things they create, and is also a little insecure about their creations. It's extraordinarily rare that people don't care what others think of their writing, science, or art.

Criticism has good and bad points. Great critics are rare, but they actually make works of creation even in critique. A great critic can give background, context, and highlight a number of relevant mainstream and obscure works through history that the piece they're critiquing reminds them of.

Good critique is an art of creation in and of itself. But bad critique - just a blind \"that's wrong\" without explaining why - tends to be construed as a hostile action and doesn't accomplish much, other than signalling the \"heroic disagreement\" that Eliezer talks about.

I recently wrote a post titled \"Nahh, that wouldn't work\". I thought about it for around a week, then it took me about two hours to think it through, draw up key examples on paper, choose the most suitable, edit, and post it. It was generally well-received here on LW and on my blog.

I'll show you three comments on there, and how I believe they could be subtly tweaked.

1.

> I wizened up,
I don't think that's the word you want to use, unless you're talking about how you finally lost those 20 pounds by not drinking anymore.

2.

FWIW, I think posts like this are more valuable the more they include real-world examples; it's kind of odd to read a post which says I had theory A of the world but now I hold theory B, without reading about the actual observations. It would be like reading a history of quantum mechanics or relativity with all mentions of things like the laser or double-slit experiment or Eddington or Michelson-Morley removed.

3.

An interesting start, but I would rather see this in Discussion -- it's not fully adapted yet, I think...

Now, I spend a lot of time around analytical people, so I take no offense at this. But I believe these are good examples of what I'd call \"accidental defection\" - this is the kind of thing that produces a negative reaction in the person you're talking to, perhaps without you even noticing.

#1 is a clever way of pointing out a spelling error. But you have to realize that, in normal society, that's going to upset the person you're addressing and make them hostile. Whether you mean to or not, it comes across as, \"I'm demonstrating that I'm more clever than you.\"

There are a few ways it could be done differently. For instance, an email that says, \"Hey Sebastian, I wanted to give you a heads up. I saw your recent post, but you spelled 'wisen' as 'wizen' - easy spelling error to make, since they're uncommonly used words, but I thought you should know. 'Wizen' means for things to dry up and lose water. Cheers and best wishes.\"

That would point out the error (if that's the main goal), and also engender a feeling of gratitude in whoever received it (me, in this case). Then I would have written back, \"Hey, thanks... I don't worry about spelling too much, but yeah, that one's embarrassing, I'll fix it. Much appreciated. Anyways, what are you working on? How can I help?\"

I know that's how I'd have written back, because that's how I generally write back to someone who tries to help me out. Mutual goodwill, it's a virtuous cycle.

Just pointing out that someone is wrong in a clever way usually engenders bad will and makes them dislike you. The thing is, I know that's not the intention of anyone here - hence, \"defecting by accident.\" Analytical people often don't even realize they're showing someone up when they do it.

I'm not particularly bothered. I get the intent behind it. But normal people are going to be ultra-hostile if you do it to them. There are other ways, if you feel the need to point it out publicly. You could \"soften\" it by praising first - \"Hey, some interesting points in this one... I've thought about a similar bias of not considering outcomes if I don't like what they'd mean about the world. By the way, you probably didn't mean wizen there...\" - or even just saying, \"I think you meant 'wisen' instead of 'wizen'\" - with links to the dictionary, maybe. Any of those would go over better with the original author/presenter whom you're pointing out the error to.

Let's look at point #2: \"FWIW, I think posts like this are more valuable the more they include real-world examples; it's kind of odd to read a post which says I had theory A of the world but now I hold theory B, without reading about the actual observations.\"

This is the sort of thing that makes people who are trying to help or create shake their heads. See, it's potentially a good point. But after someone takes the time to create something and give it away for free, they then hear, \"Your work would be more valuable if you did (xyz) instead. Your way is kind of odd.\"

People generally don't like that.

Again, it's trivially easy to write that differently. Something like, \"Thanks for the post. I was wondering: you mentioned (claim X), but do you have any examples of claim X so I can understand it better?\"

That one has gratitude, no unnecessary criticism, and it explains your motivation. All of these are good social skill points, especially the last one, as written about in Cialdini's \"Influence\" - give a reason why.

#3 - "An interesting start, but I would rather see this in Discussion -- it's not fully adapted yet, I think..."

Okay. Why?

The difference between complaining and constructive work is looking for solutions. So: \"There's some good stuff in here, but I think we could adapt it more. One thing I was thinking is (main point).\"

Becoming More Self-Aware and Strategic; Some Practical Social Guidelines

From Anna Salamon's "Humans Are Not Automatically Strategic" -

But there are clearly also heuristics that would be useful to goal-achievement (or that would be part of what it means to “have goals” at all) that we do not automatically carry out. We do not automatically:

Anna points out that people don't automatically ask what they're trying to achieve. You don't, necessarily, ask what you're trying to achieve.

But I would recommend you do ask that before speaking up socially. At least for a while, until you've got the general patterns figured out.

If you don't, you run the risk of antagonizing and making people hostile to you who would otherwise cooperate and work with you.

Now, I've heard smart people say, \"I don't have time for that.\" This is akin to saying, \"I don't have time to achieve what I want to achieve.\"

Because it doesn't take much time, and it makes you much more effective. Asking, \"What am I trying to achieve here?\" goes a long way.

When commenting on a discussion site, who are you writing for? For the author? For the regular readers? What's your point in replying? If your main point is just to \"get to truth and understanding,\" then what should your secondary considerations be? If there's a conflict between the two, would you prefer to encourage the author to write more, or to look clever by pointing out a pedantic point?

I understand where you're coming from, because I used to come from the same place. I was the kid who argued with teachers when they were wrong, not realizing the long-term ramifications of that. People matter, and people's feelings matter -- especially if they have sway over your life, but even if they don't.

To that end, here are some suggestions I think would make you more effective:

Following some of these simple points will make you much more effective socially. I feel like a lot of times analytical and intelligent people study really hard, difficult problems, while ignoring basic considerations that have much more immediate and larger impact.

Further reading:

Edit: Lots of comments on this. 130 and counting. The most common criticism seems to be that adding fluff is a waste of time, insincere, and reduces signal:noise ratio. I'd encourage you to actually try it instead of just guessing - a quick word of thanks or encouragement before criticizing creates a more friendly, cooperative environment and works well. It doesn't take very long, and it doesn't detract from S:N ratio much, if at all.

Don't just guess here. Try it out for a month. I think you'll be amazed at how differently people react to you, and the uptake on your suggestions and feedback and ability to convince and teach people. Of course, you can construct examples of going overboard and it being silly. But that's not required - just try to make everything 10% more gracious, and watch how much your effectiveness increases.

" } }, { "_id": "nFhrLBEvtDTnZPRac", "title": "Aspie toy: the Neocube", "pageUrl": "https://www.lesswrong.com/posts/nFhrLBEvtDTnZPRac/aspie-toy-the-neocube", "postedAt": "2010-12-01T07:18:52.929Z", "baseScore": 15, "voteCount": 10, "commentCount": 32, "url": null, "contents": { "documentId": "nFhrLBEvtDTnZPRac", "html": "

This post is going to sound like an ad. Sorry about that. I'm not affiliated, etc, etc.

\n

Last Friday I bought a very simple toy: a set of 216 little magnetic metal balls, about the size of ball bearings. Since then I've been completely entranced by it and unable to put the thing down. Here's a Flickr group to show what I mean. The little balls seem to want to come together in symmetrical patterns: you can make square and hexagonal flat patches, curved patches with 3/4/5/6-fold symmetry, stable 3D cubic lattices, fcc and hcp lattices and many hollow and solid polyhedra. So far I've managed to make a tetrahedron, two varieties of cube (1, 2), an octahedron, an icosahedron, and other stuff (my current favorite shape is the solid truncated octahedron). It's like crack for the right type of person.

\n

And there's the rub. Carrying this toy around and showing it to my friends has made me realize with forgotten clarity that I'm special. Practically no one reacts to it the same way as me. The word \"aspie\" has been uttered, half in jest, half seriously. Even though my intelligence may be pretty average (judging by online tests I have lower IQ than most LW regulars), I seem to have this rare natural ability to get deeply interested in things that \"normal\" people find boring.

\n

This ability... this instinctive desire to tinker with symmetrical patterns... has shaped my entire life by now, because it's what first attracted me to math and then programming. But how could it ever be environmental, if I remember having it since my earliest childhood? Is it genetic? Is math success genetic, then? What do you think?

" } }, { "_id": "Lxppy7MmN2y7aaHvH", "title": "Broken window fallacy and economic illiteracy.", "pageUrl": "https://www.lesswrong.com/posts/Lxppy7MmN2y7aaHvH/broken-window-fallacy-and-economic-illiteracy", "postedAt": "2010-12-01T04:48:12.618Z", "baseScore": 10, "voteCount": 8, "commentCount": 13, "url": null, "contents": { "documentId": "Lxppy7MmN2y7aaHvH", "html": "

Some time ago, I had a talk with my father where I explained to him the concept of the broken window fallacy. The idea was completely novel to him, and while it didn't take long for him to grasp the principles, he still needed my help in coming up with examples of ways that it applies to the market in the real world.

\n

My father has an MBA from Columbia University and has held VP positions at multiple marketing firms.

\n

I am not remotely an expert on economics; I do not even consider myself an aficionado. But it has frequently been my observation that not just average citizens, but people whose positions have given them every reason to learn and use the information, are critically ignorant of basic economic principles. It feels like watching engineers try to produce functional designs based on Aristotelian physics. You cannot rationally pursue self-interest when your map does not correspond to the territory.

\n

I suppose the worst thing for me to hear at this point is that there is some reason I am not yet familiar with that prevents this from having grand-scale detrimental effects on the economy, since it would imply that businesses cannot be made more sane by the increased dissemination of basic economic information. Otherwise, this seems like a fairly important avenue to address, since the basic standards for economic education, among educated businesspeople and the general public alike, are so low that I doubt the educational system has even begun to climb the slope of diminishing returns on effort invested in it.

" } }, { "_id": "yoMdLrEPEAk7noQw9", "title": "The Boundaries of Biases", "pageUrl": "https://www.lesswrong.com/posts/yoMdLrEPEAk7noQw9/the-boundaries-of-biases", "postedAt": "2010-12-01T00:43:02.154Z", "baseScore": 12, "voteCount": 14, "commentCount": 15, "url": null, "contents": { "documentId": "yoMdLrEPEAk7noQw9", "html": "

Thinking about meta-contrarianism and biases (among other things), I came to the following question:

\n

When are biases a good thing?

\n

Since the caution is too important to leave to the conclusion, I'm going to put it before I give an answer, even though the flow will be fudged as a result. In Epistemology and the Psychology of Human Judgment (a book I strongly recommend), Bishop and Trout talk a lot about statistical prediction rules, where linear regressions on data often outperform human experts. One of the findings they discussed was that not only did experts have lower accuracy than the statistically generated rules, but when given the result of the rule and the option to defect from its prediction, they were much more likely to choose to defect when the rule was right and they were wrong than the other way around. So, for almost all of the experts, the best choice was \"stick to the rule, even if you think it's wrong.\" Likewise, even if you've got a long explanation as to why your action isn't biased and how this is a good idea just this once, you should stick to the rule.

\n

But there's still something very important to keep in mind: rules have domains in which they apply. \"Do as the Romans do\" has the domain of \"when in Rome.\" Even if the rule is a black box, such that you do not understand how it created its outputs given its inputs, you can trust the outputs so long as you know which inputs are appropriate for that box. Sticking to the behavior of other Romans will make you fit in better, even if you're not socially aware enough to notice differences in how much you fit in. But if you're in Japan, \"do as the Romans do\" is bad advice.

\n

When you know how the rule works- when you're at level 2 or 3 understanding- then you can probably look at a rule and decide if it's appropriate in that circumstance, because you can see what determines the domain of the rule. When you understand the proverb, you know it really means \"When in X, do as the residents of X do\", and then you can pretty easily figure out that \"do as the Romans do\" only fits when you're in Rome. But, as we can see with this toy example, pushing these boundaries helps you understand the rule- \"why is it a bad idea to do what the Romans do in Japan?\"

\n

\n

On that note, let's return to our original question. When are biases a good thing?

\n

Someone who likes to capitalize the word Truth would probably instantly reply \"never!\". But the fact that their reply was instant and their capitalization is odd should give us pause. That sounds a lot like a bias, and we're trying to evaluate those. It could be that adjusting the value of truth upwards in all situations is a good bias to have, but we've got to establish that on a level that's more fundamental.

\n

And so after a bit of thought, we come up with a better way to redefine the original problem. \"If a bias is a decision-making heuristic that has negative instrumental value, are there ranges where that same decision-making heuristic has positive instrumental value?\"

\n

Three results immediately pop out: the first is that we've constructed a tautological answer to our initial question- never, since we've defined them as bad.

\n

The second result is still somewhat uninteresting- as your thoughts take time and energy, decision-making heuristics have a natural cost involved. Meticulous but expensive heuristics can be negative value compared to sloppy but cheap heuristics for many applications; you might be better off making a biased decision about laundry detergent since you can use that time and energy to better effect elsewhere. But sometimes it's worth expunging low-damage biases at moderate cost because bias-expunging experience is a good in its own right; then, when it comes to make a big decision about buying a house or a car you can use your skills at unbiased purchasing developed by treating laundry detergent as a tough choice.

\n

The more interesting third result is a distinction between bias and bigotry. Consider a bigoted employer and a biased employer: the bigoted employer doesn't like members of a particular group and the biased employer misjudges members of a particular group. The bigoted employer would only hire people they dislike if the potential hire's superiority to the next best candidate is high enough to overcome the employer's dislike of the potential hire, and the biased employer wants to hire the best candidate but is unconsciously misjudging the quality of the potential hire. Both will only choose the disliked/misjudged potential hire in the same situation- where the innate quality difference is higher than degree of dislike/misjudging- but if you introduce a blind system that masks the disliked characteristic of potential hires, they have opposite responses. The bigoted employer is made worse off- now when choosing between the top set of candidates he might accidentally choose a candidate that would satisfy him less than one he chose with perfect information- but the biased employer is made better off- now, instead of having imperfect information he has perfect information, and will never accidentally choose a candidate that would satisfy him less than the one he chose otherwise. Notice that subtracting data made the biased employer's available information more perfect.
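To make the masking argument concrete, here is a minimal simulation under made-up numbers -- the pool size, quality distribution, and the size of the dislike/misjudgment penalty are all assumptions. It shows the biased employer's average satisfaction rising under a blind process while the bigoted employer's falls.

```python
# A minimal simulation of the bigoted-vs-biased employer contrast above.
# Group frequency, quality distribution, and the penalty size are assumptions.
import random

random.seed(0)
PENALTY = 1.0  # how much the employer dislikes / underrates the marked group

def hire(candidates, blind, bigoted):
    # candidates: list of (true_quality, in_marked_group)
    def perceived(c):
        quality, marked = c
        if blind or not marked:
            return quality
        return quality - PENALTY        # underrated (biased) or docked (bigoted)
    chosen = max(candidates, key=perceived)
    quality, marked = chosen
    if bigoted and marked:
        return quality - PENALTY        # the bigot genuinely enjoys the hire less
    return quality                      # the biased employer only cares about quality

def average_outcome(blind, bigoted, trials=10000):
    total = 0.0
    for _ in range(trials):
        pool = [(random.gauss(0, 1), random.random() < 0.5) for _ in range(5)]
        total += hire(pool, blind, bigoted)
    return total / trials

print("biased,  open :", average_outcome(blind=False, bigoted=False))
print("biased,  blind:", average_outcome(blind=True,  bigoted=False))
print("bigoted, open :", average_outcome(blind=False, bigoted=True))
print("bigoted, blind:", average_outcome(blind=True,  bigoted=True))
```

The asymmetry in the printed averages is the point: subtracting data improves the biased employer's outcome and worsens the bigoted employer's, exactly as the paragraph argues.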

\n

That distinction is incredibly valuable for anti-discrimination thinkers and anyone who talks with them; much of the pushback against anti-discrimination measures seems to be because they're not equipped to think about and easily discuss the difference between bigotry and bias.

\n

This is a question that we can ask about every bias. Is racism ever appropriate? Yes, if you're casting for a movie where the character's race is relevant. Is sexism ever appropriate? Yes, if you're looking to hire a surrogate mother (or, for many of us, a mate). But for other biases the question becomes more interesting.

\n

For example, when does loss aversion represent a decision-making heuristic with positive instrumental value?

\n

First, we have to identify the decision-making heuristic. A typical experiment that demonstrates loss aversion presents the subject with a gamble: a 50% chance of gaining X, and a 50% chance of losing a pre-determined amount between 0 and X. That range has a positive expected value, so a quick calculation suggests that taking the gamble is a good plan. But until the loss is small enough (typical numeric values for the loss aversion coefficient are about two, so until the loss is less than X/2), subjects don't take the bet, even though the expected value is positive. That looks an awful lot like \"double-count your losses when comparing them to gains.\"
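Here is a quick worked version of that gamble, with an illustrative stake and the roughly-2x loss weighting those experiments report. The exact numbers are assumptions; the point is that expected value stays positive for any loss below X, while the double-counted value only turns positive once the loss drops below X/2.

```python
# The gamble above: 50% chance of gaining X, 50% chance of losing a fixed
# amount L between 0 and X. The stake and sample losses are illustrative.
X = 100.0

def expected_value(loss):
    return 0.5 * X - 0.5 * loss

def loss_averse_value(loss, loss_weight=2.0):
    """The 'double-count your losses' heuristic: losses weigh twice as much."""
    return 0.5 * X - 0.5 * loss_weight * loss

for loss in (30.0, 50.0, 70.0):
    print(loss, expected_value(loss), loss_averse_value(loss))
# expected_value is positive for every loss below X, but the loss-averse value
# only turns positive once the loss drops below X / 2 -- matching the observed
# refusal to take bets where the loss exceeds about half the potential gain.
```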

\n

When is that a good heuristic? Well, it is if utility is nonlinear; but the coefficient of 2 for losses seems pretty durable, suggesting it's not people doing marginal benefit / marginal loss calculations in their head. The heuristic seems well-suited to an iterated zero-sum game where your losses benefit the person you lose to, but your opponent's losses aren't significant enough to enter your calculations. If you're playing a game against one other person, then if they lose you win. But if you're in a tournament with 100 entrants, the benefit to you from your opponent's loss is almost zero, while the loss to you from your loss is still doubled- you've fallen down, and in doing so you've lifted a competitor up.

\n

An example of a bias false positive (calling a decision biased when the decision was outside of the bias's domain) for loss aversion is here, from our first Diplomacy game in the Less Wrong Diplomacy Lab. Germany had no piece in Munich, which was threatened by an Italian piece with other options, and Germany could move his piece in Kiel to Munich (preventing Italian theft) or to Holland (gaining an independent center but risking Munich). If Germany made decisions based only on the number of supply centers it would control after 1901, he would prefer Kie-Hol to Kie-Mun at P(Tyr-Mun)<1, and only be indifferent at P(Tyr-Mun)=1. If Germany made decisions based on the number of supply centers it would control after 1901 minus the number of supply centers the other players controlled divided by 6 (Why 6? Because each player has 6 opponents, and this makes Germany indifferent to a plan that increases or decreases the number of centers each player controls by the same amount), he would be indifferent at P(Tyr-Mun)=5/6. If Germany didn't discount the gains of other countries, he would be indifferent at P(Tyr-Mun)=1/2. If Germany takes into account board position, the number drops even lower- and if Germany has a utility function over supply centers that drops from 5 to 6, as suggested by Douglas Knight, then Germany might never be indifferent between Hol and Mun (but in such a situation, Germany would be unlikely to subscribe to loss aversion).

\n

 

\n

A final note: we're talking about heuristics, here- the ideal plan in every case is to correctly predict utility in all possible outcomes and maximize predicted utility. But we have to deal with real plans, which almost always involves applying rules to situations and pattern-matching. I've just talked about one bias in one situation here- which biases have you internalized more deeply by exploring their boundaries? Which biases do you think have interesting boundaries?

" } }, { "_id": "GgCwkeAnFHXPkNjik", "title": "Public rationality", "pageUrl": "https://www.lesswrong.com/posts/GgCwkeAnFHXPkNjik/public-rationality", "postedAt": "2010-11-30T22:50:51.939Z", "baseScore": 12, "voteCount": 13, "commentCount": 13, "url": null, "contents": { "documentId": "GgCwkeAnFHXPkNjik", "html": "

I'm going to list some moderately well-known people who strike me as unusually rational. They aren't \"rationalists\" in the sense that they don't generally explicitly talk about rationality.

\n

Tom and Ray Magliozzi run a web site and talk radio show about car repair. They have a repetitious sense of humor, but if you look past that, you see that they have a very wide body of knowledge (and sometime, we should talk about how much detailed knowledge is worth acquiring so that you have something to be rational with), publicly display the process of testing hypotheses, and get in touch with people they've given advice to later to find out whether the advice worked. Sometimes it does, sometimes it doesn't.

\n

Ta-Nehisi Coates writes a politics and culture blog for The Atlantic. He's notable for trying to see how everyone is doing what makes sense to them -- rather a difficult thing when you're taking on the mind-killer subjects.

\n

Atul Gawande writes books and articles about the practice of medicine. It was particularly striking in his recent The Checklist Manifesto that when his checklists seemed to produce notable improvements in surgical outcomes, his first reaction was concern that there was something wrong with the experiment rather than delight that he'd been proven correct.

\n

Any other recommendations?

" } }, { "_id": "bkaPisTjcYSHDRTWw", "title": "Convincing An Illogical Being", "pageUrl": "https://www.lesswrong.com/posts/bkaPisTjcYSHDRTWw/convincing-an-illogical-being", "postedAt": "2010-11-30T22:08:39.540Z", "baseScore": 2, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "bkaPisTjcYSHDRTWw", "html": "

There is a person who rejects logic as a valid way of making decisions.  This person is perfectly capable of understanding logical arguments, but chooses to reject them.

\n

Is there any way to convince this person that logic is not necessarily a bad thing?

\n

The reason I bring this up is that there is a significant percentage of the population who thinks that using logic is bad, and thinks that they will make better decisions simply by relying on emotions.  I think we should clarify if there is anything we as a community could say to these people to help them accept the idea of logic.

\n

Trying to convince them using purely logical methods would not work, as they reject logic. The problem is similar to trying to prove that a system of mathematics works without using math. I believe the closest thing to a proof there is looking at the real world and seeing that the results math predicts actually come about.

\n

The obvious application would be to show how using logic has better results than going by pure emotions. However, doing that would also seem like a logical argument, and would most likely also get rejected.  Is succeeding at swaying such an individual even possible?

" } }, { "_id": "kks2ZeknMT9SbuXFu", "title": "New Diplomacy Game in need of two more.", "pageUrl": "https://www.lesswrong.com/posts/kks2ZeknMT9SbuXFu/new-diplomacy-game-in-need-of-two-more", "postedAt": "2010-11-30T20:34:56.861Z", "baseScore": 3, "voteCount": 3, "commentCount": 11, "url": null, "contents": { "documentId": "kks2ZeknMT9SbuXFu", "html": "

We have five people from the NYC division of LW. We need two more players.

\n

http://webdiplomacy.net/board.php?gameID=42765

\n

 

\n

passcode: streetlight

" } }, { "_id": "nCpDyQxr4QfQFjLuX", "title": "Where are other places to meet rationality-minded people?", "pageUrl": "https://www.lesswrong.com/posts/nCpDyQxr4QfQFjLuX/where-are-other-places-to-meet-rationality-minded-people", "postedAt": "2010-11-30T09:55:27.904Z", "baseScore": 9, "voteCount": 9, "commentCount": 14, "url": null, "contents": { "documentId": "nCpDyQxr4QfQFjLuX", "html": "

This thread can be seen as a continuation of a previous thread (http://lesswrong.com/r/discussion/lw/33u/theoretical_target_audience_size_of_less_wrong/). I felt the need to make a new thread since this post is long and also since it is more targeted. This may help us further achieve our goals as well. EDIT: http://www.google.com/search?q=related%3Alesswrong.com is actually one of the best places to find people in the LW demographic.

\n

Now, I'll say this - I much prefer LessWrong to other hangouts for those who are focused on rationality. But of course, posts here must be of fairly high quality (for good reasons), and some rationality-minded people may want to continue talking with other rationality-minded people in less formal settings (such as chatroom or forum settings). So that's a good reason to think about other communities too (and in addition, it can attract new visitors to our community). These communities do not necessarily have to have a focus on rationality - rather - they should just have a population of rationality-focused individuals who are not afraid to speak out.

\n

Now, part of the strategy that we often have to use is to find the rationality-focused people in a sea of people who don't care much about rationality. So I've always thought a lot about the demographics of each community I've joined. Most people in local gifted programs, for example, don't seem to be very interested in rationality (although interest in rationality does seem to increase with age). There are some real-life venues for meeting similar people, but it's often a lot harder since almost all rationality groups are online (and it's far easier to attract attention to low-popularity topics online). College, for example, seems to be a difficult place to locate rationality-minded individuals, unless the college has an online discussion forum with a significant portion of the population participating. Sometimes, Facebook groups have been an excellent substitute for college, as long as the Facebook groups have an audience comprising a large portion of the college population. Normally, this only works for smaller populations since Facebook's interface is not good for handling large groups (I've seen the Caltech freshman Facebook groups, which I've really liked, but unfortunately, I go to a state university rather than Caltech).

\n

There are numerous online venues for finding people: IRC, forums, mailing lists, and various other Internet communities (comment sections on blogs, chatrooms, reddit, Facebook, Meetup, Google Reader/Buzz (shared items), Google Groups/Yahoo Groups/Usenet, and other social networking websites). Some of them are better than others for finding rationality-minded people (in particular, people who use certain services are generally more \"intelligent/educated\"+NT than people who use other services [see facebook vs. myspace, reddit vs digg, and gmail vs yahoo mail/hotmail]).

\n

The venues that are easiest to find are obviously the ones that are not in the deep web. This mostly includes the blogosphere, some forums, reddit, and a small fraction of items on Google Groups. Things on Facebook are also easy to find, since there's only one Facebook (as opposed to numerous IRC channels, Google groups, and mailing lists) and you can use Facebook's internal search engine to find groups (which many people do). But there haven't been many deep discussions on Facebook (whose format isn't very good for deep discussions) - and it's now harder to have these discussions than before, since Facebook has recently made it difficult to find the groups other people are in (as it is now making pages more prominent than groups). My impression is that the aggregated blogosphere has the largest population of rationality-minded readers (and of quality discussions), but current blog interfaces make it difficult to view people's profiles or to follow people's posts (both features which I think are necessary for facilitating social bonding). And I always wonder - where else do these blog commenters hang out?

\n

With forums, we can find a nice list of them at http://www.big-boards.com (which is far from comprehensive, but better than nothing)

\n

Some forums may ostensibly seem good for discussions, but end up not being good for them. Asperger's Syndrome forums, for example, may have some rationality-minded people. But most people will \"tl;dr\" (or ignore) thoughtful posts (and I know this as a fact for the two most popular Asperger's Syndrome forums - neither of which seems to have particularly intelligent readers). Physics Forums also seemed attractive, but unfortunately people there don't seem to care much about rationality (and the debates are full of emotion rather than rationality). Some atheist forums also might have high populations of rational people (but the richarddawkins.net forums died, and I don't know much about infidels.org). One of the main problems, anyway, is that it's a lot easier to complain about a post than to make a \"good post\" comment (which might be interpreted as a \"+1 post count\" post). There are also some forums with very high populations of very intelligent+NT people (College Confidential and the phdcomics forums [which have been down for several weeks now] come to mind in particular), but most \"rationality\" posts suited for places like this still end up completely ignored. Philosophy forums might have some rationality-minded people, but I don't know that scene very well. The same applies to programming forums (it seems that many programmers are NT, but engineering majors, including computer science ones, tend towards ST styles of thinking).

\n

I've been very surprised by the quality of some discussions in the deep web, but they are very difficult to locate (especially since most discussion forums are essentially dead). The calorie restriction mailing list, for example, has surprisingly good discussions (even better than many of the discussions I've seen on Imminst), and people there are more receptive to my emails than the people on Imminst (Imminst just doesn't seem to be that great a place to discuss rationality topics - although others may disagree with me, and I can be convinced).

\n

It's possible that some foreign-language sites also have high populations of rationality-minded people. There may come a time when Google Translate finally makes mutually intelligible conversation possible. If this happens, we may be able to expand our pool of potentially rational individuals (especially from countries like Japan, Germany, and France, and countries with high percentages of German speakers, like the Czech Republic). Many European countries seem to have more \"rational\" belief systems than the U.S., and this may apply to East Asian countries as well. Many Scandinavians, Dutch, and Estonians do understand English, though, and even preferentially associate with English-speaking communities (perhaps because they have small populations).

\n

I know that I haven't found all possible online venues for meeting rationality-minded people. My criteria may be different from those of most readers here, as I only feel comfortable discussing personal/daily issues with other late teenagers (so groups like Meetup don't really work for me). In any case, it may be wise to discuss options for rationality-minded teenagers, since it is among teenagers that we can expect the biggest increases in the rationality-minded audience (especially in this generation, which seems to have more rational religious and social attitudes than previous ones).

\n

Since the vast majority of available groups are unspecialized, this is where we will have to do much of our searching. I've noticed that unspecialized groups with very high average intelligence are probably the best places to find such individuals. Unfortunately, I hang out in many of these groups, and few people in them seem curious enough to develop a deep desire to talk with others about rationality topics. I would like to encourage a discussion of the venues that readers here have liked most (with respect to finding other rationality-minded individuals), especially discussion venues in the deep web (which many people here may not be aware of).

\n

As suggested by another poster, there's a LessWrong IRC chatroom: http://wiki.lesswrong.com/wiki/Less_Wrong_IRC_Chatroom

" } }, { "_id": "2PdR5oXCAoNjpnSSd", "title": "Unsolved Problems in Philosophy Part 1: The Liar's Paradox", "pageUrl": "https://www.lesswrong.com/posts/2PdR5oXCAoNjpnSSd/unsolved-problems-in-philosophy-part-1-the-liar-s-paradox", "postedAt": "2010-11-30T08:56:51.279Z", "baseScore": 7, "voteCount": 11, "commentCount": 142, "url": null, "contents": { "documentId": "2PdR5oXCAoNjpnSSd", "html": "

Graham Priest discusses the Liar's Paradox for a NY Times blog. It seems that one way of solving the Liar's Paradox is to admit a dialetheia, a true contradiction. Less Wrong, can you do what modern philosophers have failed to do and solve or successfully dissolve the Liar's Paradox? This doesn't seem nearly as hard as solving free will.
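
\n

For readers who want the paradox spelled out, here is a minimal formalization (my own sketch, not taken from Priest's article or the post). Let L be a sentence that says of itself that it is not true; combining this with the naive truth schema (a sentence is true if and only if what it says is the case) gives:

```latex
% Liar sentence: L asserts its own untruth.
L \;\equiv\; \neg\,\mathrm{True}(\ulcorner L \urcorner)
\qquad\Longrightarrow\qquad
\mathrm{True}(\ulcorner L \urcorner) \;\leftrightarrow\; \neg\,\mathrm{True}(\ulcorner L \urcorner)
```

Classically this biconditional entails an outright contradiction - L is both true and not true - and the dialetheist move is to accept that contradiction as itself true rather than give up one of the premises.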

\n

This post is a practice problem for what may become a sequence on unsolved problems in philosophy.

" } }, { "_id": "BZAHM8fjJqbbofhuy", "title": "The necklaces are known basically for their deep faith based connotations. ", "pageUrl": "https://www.lesswrong.com/posts/BZAHM8fjJqbbofhuy/the-necklaces-are-known-basically-for-their-deep-faith-based", "postedAt": "2010-11-30T03:37:13.326Z", "baseScore": 0, "voteCount": 2, "commentCount": 0, "url": null, "contents": { "documentId": "BZAHM8fjJqbbofhuy", "html": "

The jewelry is known generally for its deep faith-based connotations. The pieces are worn around the neck as protection against evil powers and many types of dangers. They are also widely used as fashion accessories. Nonetheless, most women wear them either for fashion or as charms. The pendants are also known as objects of attraction.

They might attract good fortune, success, luck, and other goodies to you if you wear them regularly. Under normal circumstances, there are proper behavioral habits expected of those who wear them. For example, the pendants are not to be hidden beneath a piece of clothing or attire. You can't stash them under your outfit; rather, you have to keep them exposed around your neck, especially when you wear a quality Buddha necklace. Again, you're not expected to visit inappropriate locations while wearing the chains, especially the ones that bear the Buddha image.

In all, Buddha pendants are unique items you can always go for. They not only boost your style and appearance; they also serve as protective amulets.

" } }, { "_id": "6eGw3CJDDqwrSYiHu", "title": "Does TDT pay in Counterfactual Mugging?", "pageUrl": "https://www.lesswrong.com/posts/6eGw3CJDDqwrSYiHu/does-tdt-pay-in-counterfactual-mugging", "postedAt": "2010-11-29T21:31:36.631Z", "baseScore": 4, "voteCount": 7, "commentCount": 5, "url": null, "contents": { "documentId": "6eGw3CJDDqwrSYiHu", "html": "

On one hand, this old article said TDT doesn't pay. On the other hand, I imagine TDT not paying would be a slam-dunk argument for favoring UDT, which pays, and I haven't seen people make that argument. So I'm confused here. Thanks.

\n

Edit: this wiki page explains all the jargon

" } }, { "_id": "SBCXNk2dvTcZFvzKN", "title": "Blood Feud 2.0", "pageUrl": "https://www.lesswrong.com/posts/SBCXNk2dvTcZFvzKN/blood-feud-2-0", "postedAt": "2010-11-29T14:12:15.156Z", "baseScore": -2, "voteCount": 5, "commentCount": 8, "url": null, "contents": { "documentId": "SBCXNk2dvTcZFvzKN", "html": "

I've been thinking about the idea of culpability.

\n

What is it for, exactly? Why did societies that use the concept win out over those who stuck with the default response of not assigning any particular emotional significance to a given intangible abstraction?

\n

If I'm understanding correctly, a given person can be said to be responsible for a given event if and only if a different decision on that person's part (at some point prior to the event) would have been necessary for the event not to occur. So, in a code of laws, statements along the lines of \"When X happens, find the person responsible and punish them\" act as an incentive to avoid becoming 'the person responsible' - that is, to put some effort into recognizing when your actions might lead to negative externalities, and to make the decision that won't result in someone, somewhere down the line, getting angry enough to hunt you down and burn you alive.

\n

A person cannot be said to be culpable if they had no choice in the matter, or if they had no way of knowing the full consequences of whatever choice they did have. Recklessness is punished less severely than premeditation, and being provably, irresistibly coerced into something is hardly punished at all. The causal chain must be traced back to the most recent point where it was sensitive to a conscious decision in a mind capable of considering the law, because that's the only point where distant deterrence or encouragement could have an effect.

\n

\"Ignorance is no excuse\" because if it were, any halfway-competent criminal could cultivate scrupulous unawareness and be untouchable, but people think it should be an excuse because the law needs to be predictable to work. Same reason punishing someone for doing what was legal at the time doesn't make sense, except as a power-play.

\n

 

\n

So, let's say you're a tribal hominid, having just figured out all the above in one of those incommunicable, unrepeatable flashes of brilliance. How do you go about implementing it? With limited resources, you can't implement it widely enough to benefit everyone in the known world, even if you wanted to. You can't lay down a written code of laws because standardized writing hasn't even been invented yet, and you can't trust the whole tribe to carry on an oral tradition because you can barely trust half of them not to stab you in the back when you catch something unusually juicy. You can, however, trust your immediate family and/or the spear-carriers you go hunting with to cooperate with you and suffer short-term disadvantages, even when you're not looking, so long as there's a big, plausible payoff within a month (for hunters) or a few years (for family).

\n

You offer them this: \"We tell the tribe about this idea of 'responsibility,' and then, whenever someone steals from one of us, we all get together and hurt the one responsible until they've lost more than they gained by stealing. When the rest of the tribe can see that stealing from any of us is pointless, we can just leave our stuff sitting out instead of having to worry about hiding it, and then we'll have more time for hunting and grooming.\"

\n

It works out well enough that soon everybody's doing the revenge thing. Causality and culpability are enough of a puzzle that specialization is necessary to do it right; the problem is, a judgement rendered by someone you don't trust is worse than useless, and the only people it's remotely safe to trust with life-changing decisions are kin.

\n

You ever notice how corrupt police act sorta like abusive parents?

" } }, { "_id": "RAFsnW5qHbaWwZkys", "title": "Individuals angry with humanity as a possible existential risk?", "pageUrl": "https://www.lesswrong.com/posts/RAFsnW5qHbaWwZkys/individuals-angry-with-humanity-as-a-possible-existential", "postedAt": "2010-11-29T09:13:54.335Z", "baseScore": 4, "voteCount": 10, "commentCount": 36, "url": null, "contents": { "documentId": "RAFsnW5qHbaWwZkys", "html": "
\n
Basically, as technology improves, it will increase the ability of any individual human to change the world, and by extension, it will increase any individual's ability to inflict more significant damage on it, if they so desire. This could be significant in the case of individuals who are especially angry with the world, and who want to take others down with them (e.g. the Columbine shooters, the Unabomber to an extent)
\n
Now, the thing is this - what if someone angry at the world ultimately developed the means to annihilate it at will, or at least to cause massive destruction? Certainly, this has not happened yet, but it's a possibility with improved technology (especially an improved ability to bioengineer viruses and various nanoparticles). One of the biggest constraints on this is the lack of resources available to an individual. But of course, with the development of nanotechnology (and with fewer resources needed to construct certain things, along with other developments such as the substitution of carbon nanotubes for other materials), this may not be as much of a constraint as it is now. We could improve monitoring, but this would obviously present a threat to civil liberties. (This is not an argument against technology - I'm a transhumanist, after all, and I completely embrace technological developments. But it is something I've never seen a good solution to.) Of course, reducing the number of angry individuals would also reduce the probability of this happening. That demands an understanding of psychology (especially the psychology of people who are self-centered, don't like having to compromise, and collect grudges very easily), and then a creative way to make them less angry - which is quite difficult, especially since many people get angry at the very thought of compromise, which is why the creativity is needed. So has anyone else thought about this? And about possible solutions?
\n
" } }, { "_id": "XGqE3tG9SuCkavodJ", "title": "On Lottery Tickets", "pageUrl": "https://www.lesswrong.com/posts/XGqE3tG9SuCkavodJ/on-lottery-tickets-0", "postedAt": "2010-11-29T08:21:16.743Z", "baseScore": 0, "voteCount": 2, "commentCount": 26, "url": null, "contents": { "documentId": "XGqE3tG9SuCkavodJ", "html": "

I've often seen the issue of lottery tickets crop up on LessWrong and the consensus seems to be that the behaviour is irrational. It highlights for me a confusion that I've had about what it means for something to be \"rational\" and I'm seeking clarification. I think it might be useful to break the term down into the distinction I learnt about here, epistemic and instrumental rationality.

\n

Epistemic rationality - This seems to be the most common failure of people who play the lottery. It might be an overt failure of probabilistic reasoning, like someone believing their chances of winning to be 50-50 because they can imagine two potential outcomes. Maybe they believe they're \"due\" to win some money, committing the \"gambler's fallacy\". Or it might be a more subtle failure: correct knowledge of the probability, but a fundamental inability to really represent that number - what we call \"scope insensitivity\". I think in the cases where these errors are committed, no one would argue that these people are being \"rational\".

\n

However, what if someone had perfect knowledge of the probabilities involved? If this person bought a lottery ticket, would we still consider this a failure of epistemic rationality? You might say that anyone with perfect information about these probabilities would know that lottery tickets are poor financial investments, but we're not talking about instrumental rationality just yet.

\n

Instrumental rationality - Now we're talking about it. The criterion for rationality in this case is acting in a way that achieves your goals. If your goals in buying a lottery ticket are as one-dimensional as making money, then the lottery is a (very) poor investment, and I don't think anyone would disagree. Here is where I start getting confused, though, because what happens when a lottery ticket satisfies goals other than financial gain? It is conceivable that I could get more than $5's worth (here meaning my subjective and relative sense of what money is worth) of entertainment out of a $5 lottery ticket. What happens here? I hope you can see the more general problem that arises if you'd answer \"It's still instrumentally irrational\".
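
\n

As a rough illustration of the expected-value arithmetic (a minimal sketch; the jackpot, odds, and entertainment figure below are hypothetical numbers chosen purely for illustration, not figures from this post):

```python
# Expected value of a $5 lottery ticket, with and without an entertainment term.
# All figures are made up purely to illustrate the argument.

ticket_price = 5.00
jackpot = 10_000_000.00
p_win = 1 / 14_000_000            # roughly lotto-like odds

ev_money_only = p_win * jackpot - ticket_price
print(f'Expected monetary value: ${ev_money_only:.2f}')   # clearly negative

# If the buyer also derives, say, $6 worth of anticipation/entertainment,
# the purchase can come out ahead on their own (subjective) utility scale.
entertainment_value = 6.00
ev_with_entertainment = ev_money_only + entertainment_value
print(f'With entertainment value included: ${ev_with_entertainment:.2f}')
```

On the purely monetary criterion the ticket is a clear loss; the open question above is whether adding a subjective entertainment term to the goal set can legitimately flip that verdict.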

\n

I'm not arguing that the lottery is a good idea or that it's socially desirable. I think it does tend to drain capital from the people who can least afford it. If you've argued the idea of the lottery to death, pick a different example; it's the underlying concept I'm trying to tease apart. I suppose it boils down to this: if an agent makes no instrumental or epistemic errors of rationality and buys a lottery ticket, can that be irrational?

" } }, { "_id": "mZ5JtyDhJGj2B7E6b", "title": "On Lottery Tickets", "pageUrl": "https://www.lesswrong.com/posts/mZ5JtyDhJGj2B7E6b/on-lottery-tickets", "postedAt": "2010-11-29T08:20:17.002Z", "baseScore": 1, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "mZ5JtyDhJGj2B7E6b", "html": "

I've often seen the issue of lottery tickets crop up on LessWrong and the [consensus](http://lesswrong.com/lw/hl/lotteries_a_waste_of_hope/) seems to be that the behaviour is irrational. It highlights for me a confusion that I've had about what it means for something to be \"rational\" and I'm seeking clarification. I think it might be useful to break the term down into the distinction I learnt about here, epistemic and instrumental rationality.

\n

Epistemic rationality - This seems to be the most common failure of people who play the lottery. It might be an overt failure of probabilistic reasoning, like someone believing their chances of winning to be 50-50 because they can imagine two potential outcomes. Maybe they believe they're \"due\" to win some money, committing the \"gambler's fallacy\". Or it might be a more subtle failure: correct knowledge of the probability, but a fundamental inability to really represent that number - what we call \"scope insensitivity\". I think in the cases where these errors are committed, no one would argue that these people are being \"rational\".

\n

However, what if someone had perfect knowledge of the probabilities involved? If this person bought a lottery ticket, would we still consider this a failure of epistemic rationality? You might say that anyone with perfect information about these probabilities would know that lottery tickets are poor financial investments, but we're not talking about instrumental rationality just yet.

\n

Instrumental rationality - Now we're talking about it. The criterion for rationality in this case is acting in a way that achieves your goals. If your goals in buying a lottery ticket are as one-dimensional as making money, then the lottery is a (very) poor investment, and I don't think anyone would disagree. Here is where I start getting confused, though, because what happens when a lottery ticket satisfies goals other than financial gain? It is conceivable that I could get more than $5's worth (here meaning my subjective and relative sense of what money is worth) of entertainment out of a $5 lottery ticket. What happens here? I hope you can see the more general problem that arises if you'd answer \"It's still instrumentally irrational\".

\n

I'm not arguing that the lottery is a good idea or that it's socially desirable. I think it does tend to drain capital from the people who can least afford it. If you've argued the idea of the lottery to death, pick a different example; it's the underlying concept I'm trying to tease apart. I suppose it boils down to this: if an agent makes no instrumental or epistemic errors of rationality and buys a lottery ticket, can that be irrational?

" } }, { "_id": "ZQe33HWeFW3tSEQcx", "title": "Belief in Belief vs. Internalization", "pageUrl": "https://www.lesswrong.com/posts/ZQe33HWeFW3tSEQcx/belief-in-belief-vs-internalization", "postedAt": "2010-11-29T03:12:10.614Z", "baseScore": 45, "voteCount": 46, "commentCount": 59, "url": null, "contents": { "documentId": "ZQe33HWeFW3tSEQcx", "html": "

\n

Related to Belief In Belief

\n

Suppose that a neighbor comes to you one day and tells you “There’s a dragon in my garage!” Since all of us have been through this before at some point or another, you may be inclined to save time and ask “Is the dragon by any chance invisible, inaudible, intangible, and does it convert oxygen to carbon dioxide when it breathes?”

\n

The neighbor, however, is a scientific minded fellow and responds “Yes, yes, no, and maybe, I haven’t checked. This is an idea with testable consequences. If I try to touch the dragon it gets out of the way, but it leaves footprints in flour when I sprinkle it on the garage floor, and whenever it gets hungry, it comes out of my garage and eats a nearby animal. It always chooses something weighing over thirty pounds, and you can see the animals get snatched up and mangled to a pulp in its invisible jaws. It’s actually pretty horrible. You may have noticed that there have been fewer dogs around the neighborhood lately.”

\n

This triggers a tremendous number of your skepticism filters, and so the only thing you can think of to say is “I think I’m going to need to see this.”

\n

“Of course,” replies the neighbor, and he sets off across the street, opens the garage door, and is promptly eaten by the invisible dragon.

\n

Tragic though it is, his death provides a useful lesson. He clearly believed that there was an invisible dragon in his garage, and he was willing to stick his neck out and make predictions based on it. However, he hadn’t internalized the idea that there was a dragon in his garage, otherwise he would have stayed the hell away to avoid being eaten. Humans have a fairly general weakness at internalizing beliefs when we don’t have to come face to face with their immediate consequences on a regular basis.

\n

You might believe, for example, that starvation is the single greatest burden on humanity, and that giving money to charities that aid starving children in underdeveloped countries has higher utility than any other use of your surplus funds. You might even be able to make predictions based on that belief. But if you see a shirt you really like that's on sale, you're almost certainly not going to think \"How many people whom I could have fed will go hungry if I buy this?\" It's not a weakness of willpower that causes you to choose the shirt over the starving children; they simply don't impinge on your consciousness at that level.

\n

When you consider if you really, properly hold a belief, it’s worth asking not only how it controls your anticipations, but whether your actions make sense in light of a gut-level acceptance of its truth. Do you merely expect to see footprints in flour, or do you move out of the house to avoid being eaten?

" } }, { "_id": "kGCzi3d8FtJyxvwXr", "title": "Fine-Tuned Mind Projection", "pageUrl": "https://www.lesswrong.com/posts/kGCzi3d8FtJyxvwXr/fine-tuned-mind-projection", "postedAt": "2010-11-29T00:08:07.800Z", "baseScore": 4, "voteCount": 8, "commentCount": 13, "url": null, "contents": { "documentId": "kGCzi3d8FtJyxvwXr", "html": "

The Fine-Tuning Argument (henceforth FTA) is the pet argument of many a religious apologist, allowing them as it does to build support for their theistic thesis on the findings of cosmology. The basic premise is this: the laws of nature appear to contain constants that, if changed slightly, would yield universes inhospitable to life. Even though a lot can be said about this premise, let's assume it is true for the purposes of this article.

\n

Luke Muehlhauser over at Common Sense Atheism recently wrote an article pointing out what I think is a central flaw of the FTA. To summarise, he notes that there are multitudes of propositions that are true of this universe and would not be true in a different universe - for instance, the existence of galaxies, or, to use Luke's tongue-in-cheek example, iPads. If you accept that the universe is fine-tuned for life, you also have to accept that it's fine-tuned for galaxies and iPads, given that some changes in the fine-tuned constants would not produce galaxies, and certainly not iPads.

\n

So the question posed to defenders of the FTA is 'why life'? Why focus on this particular fact? What is it that sets life apart from all the other propositions true about our universe but not other the other possible universes? The usual answer is that life stands out, being valuable in ways that galaxies, iPads, and all the other true propositions are not. It seems that this is an unstated premise of the FTA. But where does that premise come from? Physics gives us no instrument to measure value, so how did this concept get in what was supposed to be a cosmology-based argument?

\n

I present the FTA here as an argument that, while seemingly complex, simply evaporates in light of the Mind Projection Fallacy. Knowing that humans tend to confuse 'I see X as valuable' with 'X is valuable', the provenance of the hidden premise 'life is valuable' is laid bare, as is the identity of the agent doing the valuing: us. With the mystery solved, explaining why humans find life valuable does not require us to go to the extreme length of introducing a non-naturalistic cause for the universe.

\n

Without any support for life being special in some way, the FTA devolves into a straightforward case of Texas Sharpshooter Fallacy: There exists life, our god would have wanted to create life, therefore our god is real! Not quite as compelling.

\n

 

" } }, { "_id": "682i9R2oSRg7BG8yD", "title": "\"Nahh, that wouldn't work\"", "pageUrl": "https://www.lesswrong.com/posts/682i9R2oSRg7BG8yD/nahh-that-wouldn-t-work", "postedAt": "2010-11-28T21:32:09.936Z", "baseScore": 98, "voteCount": 93, "commentCount": 50, "url": null, "contents": { "documentId": "682i9R2oSRg7BG8yD", "html": "

After having it recommended to me for the fifth time, I finally read through Harry Potter and the Methods of Rationality. It didn't seem like it'd be interesting to me, but I was really mistaken. It's fantastic.

\n

One thing I noticed is that Harry threatens people a lot. My initial reaction was, \"Nahh, that wouldn't work.\"

\n

My reaction wasn't to scrutinize my own experience. It wasn't to do a Google search to see if there's literature available. It wasn't to ask a few friends what their experiences were like and compare them.

\n

After further thought, I came to a realization - almost every time I've threatened someone (which is rare), it's worked. Now, I'm kind of tempted to write that off as \"well, I had the moral high ground in each of those cases\" - but:

\n

1. Harry usually or always has the moral high ground when he threatens people in MOR.

\n

2. I don't have any personal anecdotes or data about threatening people from a non-moral high ground, but history provides a number of examples, and the threats often work.

\n

This gets me to thinking - \"Huh, why did I write that off so fast as not accurate?\" And I think the answer is because I don't want the world to work like that. I don't want threatening people to be an effective way of communicating.

\n

It's just... not a nice idea.

\n

And then I stop, and think. The world is as it is, not as I think it ought to be.

\n

And going further, this makes me consider all the times I've tried to explain something I understood to someone, but where they didn't like the answer. Saying things like, \"People don't care about your product features, they care about what benefit they'll derive in their own life... your engineering here is impressive, but 99% of people don't care that you just did an amazing engineering feat for the first time in history if you can't explain the benefit to them.\"

\n

Of course, highly technical people hate that, and tend not to adjust.

\n

Or explaining to someone how clothing is a tool that changes people's perceptions of you, and by studying the basics of fashion and aesthetics, you can achieve more of your aims in life. Yes, it shouldn't be like that in an ideal world. But we're not in that ideal world - fashion and aesthetics matter and people react to it.

\n

I used to rebel against that until I wised up, studied a little fashion and aesthetics, and started dressing to produce outcomes. So I ask, what's my goal here? Okay, what kind of first impression furthers that goal? Okay, what kind of clothing helps make that first impression?

\n

Then I wear that clothing.

\n

And yet, when confronted with something I don't like - I dismiss it out of hand, without even considering my own past experiences. I think this is incredibly common. \"Nahh, that wouldn't work\" - because the person doesn't want to live in a world where it would work.

" } }, { "_id": "7SS966KNXQQdu4AeC", "title": "What would an ultra-intelligent machine make of the great filter?", "pageUrl": "https://www.lesswrong.com/posts/7SS966KNXQQdu4AeC/what-would-an-ultra-intelligent-machine-make-of-the-great", "postedAt": "2010-11-28T18:47:52.503Z", "baseScore": -2, "voteCount": 8, "commentCount": 10, "url": null, "contents": { "documentId": "7SS966KNXQQdu4AeC", "html": "

 

\n

Imagine that an ultra-intelligent machine emerges from an intelligence explosion.  The AI (a) finds no trace of extraterrestrial intelligence, (b) calculates that many star systems should have given birth to star-faring civilizations, so mankind hasn’t passed through most of the Hanson/Grace great filter, and (c) realizes that with trivial effort it could immediately send out self-replicating von Neumann machines that could make the galaxy more to its liking.

\n

Based on my admittedly limited reasoning abilities and information set, I would guess that the AI would conclude that the zoo hypothesis is probably the solution to the Fermi paradox, and that because stars don’t appear to have been “turned off”, either free energy is not a limiting factor (so the Laws of Thermodynamics are incorrect) or we are being fooled into thinking that stars unnecessarily “waste” free energy (perhaps because we are in a computer simulation).

\n

 

" } }, { "_id": "9QNKGQjFwLB8GxHMS", "title": "Singularity Non-Fiction Compilation to be Written", "pageUrl": "https://www.lesswrong.com/posts/9QNKGQjFwLB8GxHMS/singularity-non-fiction-compilation-to-be-written", "postedAt": "2010-11-28T16:49:14.250Z", "baseScore": 23, "voteCount": 18, "commentCount": 26, "url": null, "contents": { "documentId": "9QNKGQjFwLB8GxHMS", "html": "

Call for Essays:<http://singularityhypothesis.blogspot.com/p/submit.html>
The Singularity Hypothesis
A Scientific and Philosophical Assessment
Edited volume, to appear in The Frontiers Collection<http://www.springer.com/series/5342>, Springer

Does an intelligence explosion pose a genuine existential risk, or did Alan Turing, Stephen Hawking, and Alvin Toffler delude themselves with visions 'straight from Cloud Cuckooland'? Should the notions of superintelligent machines, brain emulations and transhumans be ridiculed, or is it that skeptics are the ones who suffer from short-sightedness and 'carbon chauvinism'? These questions have remained open because much of what we hear about the singularity originates from popular depictions, fiction, artistic impressions, and apocalyptic propaganda.

Seeking to promote this debate, this edited, peer-reviewed volume shall be concerned with scientific and philosophical analysis of the conjectures related to a technological singularity. We solicit scholarly essays that offer a scientific and philosophical analysis of this hypothesis, assess its empirical content, examine relevant evidence, or explore its implications.  Commentary offering a critical assessment of selected essays may also be solicited.

Important dates:

 *   Extended abstracts (500–1,000 words): 15 January 2011
 *   Full essays: (around 7,000 words): 30 September 2011
 *   Notifications: 30 February 2012 (tentative)
 *   Proofs: 30 April 2012 (tentative)
We aim to get this volume published by the end of 2012.

Purpose of this volume
· Please read: Purpose of This Volume<http://singularityhypothesis.blogspot.com/p/theme.html>
Central questions
· Please read: Central Questions<http://singularityhypothesis.blogspot.com/p/central-questions.html>:
Extended abstracts are ideally short (3 pages, 500 to 1000 words), focused (!), relating directly to specific central questions<http://singularityhypothesis.blogspot.com/p/central-questions.html> and indicating how they will be treated in the full essay.

Full essays are expected to be short (15 pages, around 7,000 words) and focused, relating directly to specific central questions<http://singularityhypothesis.blogspot.com/p/central-questions.html>. Essays longer than 15 pages will be proportionally more difficult to fit into the volume; essays three times this size or more are unlikely to fit.  Essays should address the scientifically literate non-specialist and be written in language divorced from speculative and irrational lines of argumentation.  In addition, some authors may be asked to make their submission available for commentary (see below).

(More details<http://singularityhypothesis.blogspot.com/p/submit.html>)

Thank you for reading this call. Please forward it to individuals who may wish to contribute.
Amnon Eden, School of Computer Science and Electronic Engineering, University of Essex
Johnny Søraker, Department of Philosophy, University of Twente
Jim Moor, Department of Philosophy, Dartmouth College
Eric Steinhart, Department of Philosophy, William Paterson University

" } }, { "_id": "2bezrW35QqWDQzGfD", "title": "[LINK] The Top Ten Daily Consequences of Having Evolved", "pageUrl": "https://www.lesswrong.com/posts/2bezrW35QqWDQzGfD/link-the-top-ten-daily-consequences-of-having-evolved", "postedAt": "2010-11-28T15:28:54.343Z", "baseScore": 1, "voteCount": 4, "commentCount": 1, "url": null, "contents": { "documentId": "2bezrW35QqWDQzGfD", "html": "
\n

From hiccups to wisdom teeth, the evolution of homo sapiens has left behind some glaring, yet innately human, imperfections

\n
\n

Link

" } }, { "_id": "SiReN2JHLwJJyPn5v", "title": "Superintelligent AI mentioned as a possible risk by Bill Gates", "pageUrl": "https://www.lesswrong.com/posts/SiReN2JHLwJJyPn5v/superintelligent-ai-mentioned-as-a-possible-risk-by-bill", "postedAt": "2010-11-28T11:51:50.475Z", "baseScore": 13, "voteCount": 8, "commentCount": 20, "url": null, "contents": { "documentId": "SiReN2JHLwJJyPn5v", "html": "

\"There are other potential problems in the future that Mr. Ridley could have addressed but did not. Some would put super-intelligent computers on that list. My own list would include large-scale bioterrorism or a pandemic ... But bioterrorism and pandemics are the only threats I can foresee that could kill over a billion people.\"

\n

- Bill Gates 

\n

From

\n

Africa Needs Aid, Not Flawed Theories

\n

One wonders where Bill Gates read that superintelligent AI could be (but in his estimation, in fact isn't) a GCR. It couldn't have been Kurzweil, because Kurzweil doesn't say that. The only realistic possibilities are that the influence came via Nick Bostrom, Stephen Hawking, Martin Rees, or possibly Bill Joy (see comments).

\n

It seems that Bill is also something of a Bayesian with respect to global catastrophic risk:

\n

\"Even though we can't compute the odds for threats like bioterrorism or a pandemic, it's important to have the right people worrying about them and taking steps to minimize their likelihood and potential impact. On these issues, I am not impressed right now with the work being done by the U.S. and other governments.\"

" } }, { "_id": "jhAWHZ3aZSPha46Wr", "title": "We're all forgetting how to read analog clocks. Or are we?", "pageUrl": "https://www.lesswrong.com/posts/jhAWHZ3aZSPha46Wr/we-re-all-forgetting-how-to-read-analog-clocks-or-are-we", "postedAt": "2010-11-28T06:19:01.899Z", "baseScore": 5, "voteCount": 8, "commentCount": 32, "url": null, "contents": { "documentId": "jhAWHZ3aZSPha46Wr", "html": "

I was thinking about this phenomenon today. Digital clocks are so common now that I don't often need to read an analog one, much less in a hurry. I worry that I'm losing the ability to do so. (The worry is a little bit because I might still need it at some point, and much more because being able to quickly read analog clocks makes me feel like a grown-up.) In particular, when I am called upon to read one, I'm embarrassed by how long it takes me to do so. It's only several seconds, but that's enough to make it clear to anyone watching that I had to stop and think about it.

\n

But then I caught myself, and thought, wait a moment. Am I actually much slower at this than I used to be? Or is reading an analog clock really just a noticeably slower action than reading a digital one? This is intuitively plausible; it has more mental steps. Rather than comparing my current analog-clock-reading speed with a previous one (which I don't really remember), I'm comparing it to my digital-clock-reading speed, which doesn't make sense. I was going to ask how you'd design an experiment to test this. Then I remembered that not everyone is young enough to have to speculate about what it's like not having mostly digital clocks around. :P So if you're old enough to have significantly more practice reading analog clocks than digital, how long does it take you to read one? Is it noticeably longer than reading a digital clock? If you aren't, and have a significantly different experience from mine, I'm interested in that too.

" } }, { "_id": "yEwsuCTHGwwbvFPaD", "title": "Link: Compare your moral values to the general population", "pageUrl": "https://www.lesswrong.com/posts/yEwsuCTHGwwbvFPaD/link-compare-your-moral-values-to-the-general-population", "postedAt": "2010-11-28T03:21:59.940Z", "baseScore": 13, "voteCount": 11, "commentCount": 24, "url": null, "contents": { "documentId": "yEwsuCTHGwwbvFPaD", "html": "

Jonathan Haidt, a professor at UVA, runs an online lab with quizzes that will compare your moral values to the rest of the population. I have found the test results useful for avoiding the typical mind fallacy. When someone disagrees with me on a belief/opinion I feel certain about, it's often difficult to tease apart how much of this disagreement stems from them not \"getting it\", and how much stems from them having a different fundamental value system. One of the tests alerted me that I am an outlier in certain aspects of how I judge morality (green = me; blue = liberals; red = conservatives):

\n

[Image: moral foundations chart - green = me, blue = liberals, red = conservatives]

\n

Another benefit of these quizzes is that they can point out potential blind spots. For example, one quiz asks for opinions about punishment for crimes. If I discover I'm an outlier w.r.t. the population, I should reconsider whether my opinions are based on solid evidence (or did I see one study that found tit-for-tat punishment effective in a certain context, and take that as gospel?).

\n

Extra reading: Haidt wrote a WSJ article last month that applied the learnings of these moral quizzes to better understanding the Tea Party.

" } }, { "_id": "PNqD5mm8LEPiYL2fc", "title": "META: Misleading error message on using wrong username", "pageUrl": "https://www.lesswrong.com/posts/PNqD5mm8LEPiYL2fc/meta-misleading-error-message-on-using-wrong-username", "postedAt": "2010-11-27T22:07:52.104Z", "baseScore": 6, "voteCount": 4, "commentCount": 3, "url": null, "contents": { "documentId": "PNqD5mm8LEPiYL2fc", "html": "

I attempted to log in from a computer I don't usually use, and entered my username as \"Rolf Andreassen\", two words; in fact it's \"RolfAndreassen\", one word. The error message I got back was \"Incorrect password\", which is misleading. Not until I tried to recover my password did I realise my mistake. Clearly this is an unusual edge case, but I suggest updating the code to give back \"No such user\" when someone makes this mistake. 

" } }, { "_id": "WmmL2S5pbdLAa2GR9", "title": "The Sin of Persuasion", "pageUrl": "https://www.lesswrong.com/posts/WmmL2S5pbdLAa2GR9/the-sin-of-persuasion", "postedAt": "2010-11-27T21:44:06.794Z", "baseScore": 37, "voteCount": 33, "commentCount": 63, "url": null, "contents": { "documentId": "WmmL2S5pbdLAa2GR9", "html": "\n

 

\n\n

Related to Your Rationality is My Business

\n

Among religious believers in the developed world, there is something of a hierarchy in terms of social tolerability. Near the top are the liberal, nonjudgmental, frequently nondenominational believers, of whom it is highly unpopular to express disapproval. At the bottom you find people who picket funerals or bomb abortion clinics, the sort with whom even most vocally devout individuals are quick to deny association.

\n

Slightly above these, but still very close to the bottom of the heap, are proselytizers and door to door evangelists. They may not be hateful about their beliefs, indeed many find that their local Jehovah’s Witnesses are exceptionally nice people, but they’re simply so annoying. How can they go around pressing their beliefs on others and judging people that way?

\n

I have never known another person to criticize evangelists for not trying hard enough to change others’ beliefs.

\n

\n

And yet, when you think about it, these people are dealing with beliefs of tremendous scale. If the importance of saving a single human life is worth so much more than our petty discomforts with defying social convention or our own cognitive biases, how much greater must be the weight of saving an immortal soul from an eternity of hell? Shouldn’t they be doing everything in their power to change the minds of others, if that’s what it takes to save them? Surely if there is a fault in their actions, it’s that they’re doing too little given the weight their beliefs should impose on them.

\n

But even if you believe you believe this is a matter of eternity, of unimaginable degrees of utility, if you haven’t internalized that belief, then it sure is annoying to be pestered about the state of your immortal soul.

\n

This is by no means exclusive to religion. Proselytizing vegans, for instance, occupy a similar position on the scale of socially acceptable dietary positions. You might believe that nonhuman animals possess significant moral worth, and that by raising them in oppressive conditions only to slaughter them en masse, humans are committing an enormous moral atrocity, but may heaven forgive you if you try to convince other people of this so that they can do their part in reversing the situation. Far more common are vegans who are adamantly non-condemnatory. They may abstain from using any sort of animal products on strictly moral grounds, but, they will defensively assert, they're not going to criticize anyone else for doing otherwise. Individuals like this are an object lesson that the disapproval of evangelism does not simply come down to distaste for the principles being preached.

\n

So why the taboo on trying to change others’ beliefs? Well, as a human universal, it’s hard to change our minds. Having our beliefs confronted tends to make us anxious. It might feel nice to see someone strike a blow against the hated enemy, but it’s safer and more comfortable to not have a war waged on your doorstep. And so, probably out of a shared desire not to have our own beliefs confronted, we’ve developed a set of social norms where individuals have an expectation of being entitled to their own distinct factual beliefs about the universe.  

\n

Of course, the very name of this blog derives from the conviction that that sort of thinking is not correct. But it’s worth wondering, when we consider a society which upholds a free market of ideas which compete on their relative strength, whether we’ve taken adequate precautions against the sheer annoyingness of a society where the taboo on actually trying to convince others of one’s beliefs has been lifted.

" } }, { "_id": "WYH9xdovnM6tigCsp", "title": "Link: Writing exercise closes the gender gap in university-level physics", "pageUrl": "https://www.lesswrong.com/posts/WYH9xdovnM6tigCsp/link-writing-exercise-closes-the-gender-gap-in-university", "postedAt": "2010-11-27T16:28:24.651Z", "baseScore": 27, "voteCount": 23, "commentCount": 9, "url": null, "contents": { "documentId": "WYH9xdovnM6tigCsp", "html": "

15-minute writing exercise closes the gender gap in university-level physics:

\n
\n

Think about the things that are important to you. Perhaps you care about creativity, family relationships, your career, or having a sense of humour. Pick two or three of these values and write a few sentences about why they are important to you. You have fifteen minutes. It could change your life.

\n

This simple writing exercise may not seem like anything ground-breaking, but its effects speak for themselves. In a university physics class, Akira Miyake from the University of Colorado used it to close the gap between male and female performance. In the university’s physics course, men typically do better than women but Miyake’s study shows that this has nothing to do with innate ability. With nothing but his fifteen-minute exercise, performed twice at the beginning of the year, he virtually abolished the gender divide and allowed the female physicists to challenge their male peers.

\n

The exercise is designed to affirm a person’s values, boosting their sense of self-worth and integrity, and reinforcing their belief in themselves. For people who suffer from negative stereotypes, this can make all the difference between success and failure.

\n
\n

The article cites a paper, but it's behind a paywall:
http://www.sciencemag.org/content/330/6008/1234

" } }, { "_id": "y2Hszb4Dsm5FggnDC", "title": "Harry Potter and the Methods of Rationality discussion thread, part 6", "pageUrl": "https://www.lesswrong.com/posts/y2Hszb4Dsm5FggnDC/harry-potter-and-the-methods-of-rationality-discussion-22", "postedAt": "2010-11-27T08:25:52.446Z", "baseScore": 8, "voteCount": 9, "commentCount": 549, "url": null, "contents": { "documentId": "y2Hszb4Dsm5FggnDC", "html": "

Update: Discussion has moved on to a new thread.

\n

After 61 chapters of Harry Potter and the Methods of Rationality and 5 discussion threads with over 500 comments each, HPMOR discussion has graduated from the main page and moved into the Less Wrong discussion section (which seems like a more appropriate location).  You can post all of your insights, speculation, and, well, discussion about Eliezer Yudkowsky's Harry Potter fanfic here.

\n

Previous threads are available under the harry_potter tag on the main page (or: one, two, three, four, five); this and future threads will be found under the discussion section tag (since there is a separate tag system for the discussion section).  See also the author page for (almost) all things HPMOR, and AdeleneDawner's Author's Notes archive for one thing that the author page is missing.

As a reminder, it's useful to indicate at the start of your comment which chapter you are commenting on.  Time passes but your comment stays the same.

Spoiler Warning:  this thread is full of spoilers.  With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13.  More specifically:

\n
\n

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it's fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that \"Eliezer said X is true\" unless you use rot13.

\n
" } }, { "_id": "f86kimGmPkAppXmBi", "title": "I didn't want to have to do this but...", "pageUrl": "https://www.lesswrong.com/posts/f86kimGmPkAppXmBi/i-didn-t-want-to-have-to-do-this-but", "postedAt": "2010-11-27T07:54:50.398Z", "baseScore": -21, "voteCount": 16, "commentCount": 7, "url": null, "contents": { "documentId": "f86kimGmPkAppXmBi", "html": "

I don't want to become known on this website as the guy who is always asking for help with his personal problems (way too much status loss), but I'm still a novice at best as a rationalist and given others don't have my biases asking for advice is the best chance I've got at an objective solution.

\r\n

I've recently bought a game (with a few days left in which I can return it for a refund) called Dragon Ball Z: Burst Limit. Most of my play experience suggests rubber-band AI, but there aren't any references on the Internet to it having such, and programmers have been known to conceal it. I really don't want the game to have it (as it provides a highly inconsistent challenge and reduces the game to manipulating the AI rather than winning), hence my bias, but my own play seems to suggest it does.

" } }, { "_id": "JDKfPsHvBwgq4Knn9", "title": "Buy Insurance -- Bet Against Yourself", "pageUrl": "https://www.lesswrong.com/posts/JDKfPsHvBwgq4Knn9/buy-insurance-bet-against-yourself", "postedAt": "2010-11-26T04:48:46.744Z", "baseScore": 42, "voteCount": 45, "commentCount": 69, "url": null, "contents": { "documentId": "JDKfPsHvBwgq4Knn9", "html": "
My friend and housemate, User:Kevin, makes a very pleasant living selling opioids on the internet, a living he expects to continue for some time, unless something awful happens like Obama losing the next election. The Intrade contract for Obama's loss is currently trading at 42% -- what can User:Kevin do about this?
\n
My suggestion was that he bet heavily on Obama's loss. Say he spends $4,200 buying not-Obama futures. If Obama wins, that money becomes worthless, but he gets four years selling kratom regulation-free. On the other hand, say Palin takes a surprise victory and institutes draconian regulation on various substances -- User:Kevin's $4,200 has just become $10,000, leaving a $5,800 windfall to help him while he finds his next muse.
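\n
As a quick check of the arithmetic (a minimal sketch; the $10-per-contract settlement convention is my assumption about Intrade-style binary contracts, not something stated in the post):

```python
# Hedging sketch: buying 'not-Obama' futures as insurance.
# Assumes Intrade-style binary contracts that settle at $10.00 if the event
# occurs and $0.00 otherwise, quoted at 42% (i.e. $4.20 per contract).

price_per_contract = 4.20       # 42% of the $10 settlement value
settlement_value = 10.00
stake = 4200.00                 # dollars spent on 'not-Obama' contracts

contracts = stake / price_per_contract           # 1,000 contracts
payout_if_loss = contracts * settlement_value    # $10,000 if Obama loses
windfall = payout_if_loss - stake                # $5,800 net on the hedge

print(f'Contracts bought:          {contracts:,.0f}')
print(f'Payout if Obama loses:     ${payout_if_loss:,.0f}')
print(f'Net windfall:              ${windfall:,.0f}')
print(f'Insurance cost if he wins: ${stake:,.0f}')
```

Either way the outcome curve gets flattened: the bad election result now arrives bundled with a cash cushion, which is the whole point of the insurance framing.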
\n
This is nothing more than what we normally call buying insurance, just extended to whatever outcomes you may want to insure against. Let's talk about some of the effects of this action.
\n
Leaving a Line of Retreat
\n
Let's say that User:Kevin now finds Obama's loss more or less unthinkable (I don't mean to impugn his rationality -- the article's more or less fictional from here on). Well, that's not really very good -- he needs to think about it. I suppose the proper answer is something like \"If Obama will win the election, I want to believe...\" -- but it may also help to think that, yes, a Republican win would suck, but it would also come with this huge financial windfall up front. That is, by flattening your outcome curve a bit, you can reduce your attachment to individual outcomes, and help yourself to make more rational judgments.
\n
Of course, if the market gives Obama 60%, User:Kevin should probably just believe that. That's what markets are for. Which brings us to:
\n
Helping the Market
\n
You might think that User:Kevin's (hypothetical) actions are antisocial - that by buying not-Obama futures, he's sending a false signal to the market, artificially depressing Obama's apparent chances. If so, Robin Hanson might like to have a word with you. Here's why:
\n
Let's say that Clippy is a rational speculator in this market. He doesn't care whether Obama wins or not, he just wants to maximize his monetary outcome so that he can purchase materials with which to create paperclips. Let's further say that the market is filled only with rational speculators like Clippy.
\n
Well, based on the family of no-bet theorems, they will all expect that the best-informed actors in the market will make their money from the least-informed actors. Half of them will conclude that they are the least informed, and choose not to play. Eventually those speculators best equipped to find good information and make good speculations will be alone in the market, with no one to bet with, and no incentive to produce good information. Market volume drops, bid/ask spreads widen, everyone goes home, no one gets informed.
\n
Now let's bring lots of noise to the market: irrational actors convinced they know better than the going rates because their star sign told them so; agents of one political party or another intentionally manipulating the market to energize their base; rational insurance-buyers who (may) have good probabilities, but have skewed valuations on money and are willing to accept above-or-below optimal prices. Now there is money in the market. Now there are buy and sell orders from all sides. Now it makes sense for Clippy to spend lots of computing cycles running regression tests on past electoral outcomes, buying and selling whenever the advantage is to him, making lots of money, and pricing the market accurately. Volume shoots up. Spread goes down. Prices move near-instantly with good information. Everyone is informed.
\n
For all these reasons, gentle readers, I urge you to wire some money into an offshore account, log into intrade, find some outcome that would make you miserable, and bet heavily on it.
" } }, { "_id": "wifMjDqjsNpGSWFZK", "title": "Transhumanism thread in progress at Reddit", "pageUrl": "https://www.lesswrong.com/posts/wifMjDqjsNpGSWFZK/transhumanism-thread-in-progress-at-reddit", "postedAt": "2010-11-25T20:46:56.517Z", "baseScore": 7, "voteCount": 9, "commentCount": 5, "url": null, "contents": { "documentId": "wifMjDqjsNpGSWFZK", "html": "

Starting with this reply to \"You were born too soon\":

\n

> depending on when exactly we achieve this, this could be the best time to be born ever, because it will be the absolute earliest anybody will have achieved immortality. Someone born within 20 years of this moment could one day be the oldest human, sentient, or even living being in the Universe.

\n

The comments are currently split between arguing and agreeing with this. So far, no mention of cryonics. One post presents a possibly interesting technical argument that our current knowledge/technology is centuries away from mind uploading/whole-brain emulation.

\n

(Also posted to The Singularity in the Zeitgeist, but that thread seems to have been mostly forgotten.)

" } }, { "_id": "in45jgNyRgtQNCDmQ", "title": "Eating et al.: Study on High/Low Protein | Glycemic-Index", "pageUrl": "https://www.lesswrong.com/posts/in45jgNyRgtQNCDmQ/eating-et-al-study-on-high-low-protein-or-glycemic-index", "postedAt": "2010-11-25T16:42:12.751Z", "baseScore": 1, "voteCount": 4, "commentCount": 7, "url": null, "contents": { "documentId": "in45jgNyRgtQNCDmQ", "html": "

Nutrition and related topics have come up a few times here and on related blogs, starting back with Shangri-La, hypoglycemia, and what we should eat.

\n

Now, HT reddit, there seems to be some (new) evidence for the position that I think quite a few people here hold: high-protein, low-glycemic-index. So some people here can be a little more confident that they made the right bet earlier on -- but how did you actually arrive at those conclusions in the first place? I see the evolutionary argument, but by itself it is not that convincing. There must have been data, ...

\n

So, any recommendations on further sources/high-quality collections?

" } }, { "_id": "RMAK6eHCyn9cjjsjj", "title": "Ideological bullies?", "pageUrl": "https://www.lesswrong.com/posts/RMAK6eHCyn9cjjsjj/ideological-bullies", "postedAt": "2010-11-25T13:02:13.989Z", "baseScore": -10, "voteCount": 9, "commentCount": 20, "url": null, "contents": { "documentId": "RMAK6eHCyn9cjjsjj", "html": "

Science has long questioned theism by examining design flaws in nature. However, it seems to me that some science followers don't like being questioned about scientific methodology - just as theists don't like being questioned about the existence of God.

\n

Of course, science lovers pride themselves on scientific enquiries and methods. But does such pride translate to ego and honour - something that should not be challenged? 

\n

I have met several scientists and science Ph.D. students. Some - not all - of them seem to treat fallibilists as if they were spouting nonsense, and anti-positivists as a scientific equivalent of Satan. Although they argue politely, this saddens me. They are not as open-minded as the philosophers I know. Philosophers tend to be happy when receiving challenges to their fundamental beliefs. Theists and scientists seem to be otherwise. This is just from my experience though. It is not meant to be generalisable.

\n

Science is not rationalism. It is an attempt to translate empirical results into knowledge with logic and statistics. So, science stands on two foundations - empiricism and rationalism. If one believes in science, they are likely to believe in both empiricism and rationalism. But because empiricism cannot be conclusive, neither can science. Why are scientific 'facts' sometimes spoken of as if they were certain? Why do some scientists fail to notice that those 'facts' are subject to falsification when new evidence is introduced?

\n

Right, I may have done some silly stuff like grounded theory, which isn't compatible with the scientific methods. But I acknowledge that I can be flawed, and my methods are certainly flawed. Some - not all - natural scientists, on the other hand, seem to have so much faith in scientific methods, which I believe are somewhat flawed as well. As a social scientist, I sometimes feel intellectually ganged up on by scientists, as though I offend them by questioning the scientific methods. * sighs *

\n

If you are a scientist or a science follower, may I ask, what do you think about social scientists? Sometimes I feel like scientists look down on social scientists, and I don't feel comfortable working with them.

" } }, { "_id": "SbkXTZBivh9v976H6", "title": "Good news about the Big Bang", "pageUrl": "https://www.lesswrong.com/posts/SbkXTZBivh9v976H6/good-news-about-the-big-bang", "postedAt": "2010-11-25T12:44:45.663Z", "baseScore": 1, "voteCount": 5, "commentCount": 3, "url": null, "contents": { "documentId": "SbkXTZBivh9v976H6", "html": "

(Disclaimer: very poor knowledge of physics here, just interpreting the article)

\n

http://www.physorg.com/print209708826.html

\n

- looks like there are many of them, as non-creationists would expect

\n

The really good news is

\n

> In the past, Penrose has investigated cyclic cosmology models because he has noticed another shortcoming of the much more widely accepted inflationary theory: it cannot explain why there was such low entropy at the beginning of the universe. The low entropy state (or high degree of order) was essential for making complex matter possible.

\n

Which I interpret to mean that information passes through the Big Crunch/Big Bang cycle. No heat death, and information survives the transition - good news for transhumanists?

\n

 

" } }, { "_id": "uBZwWKYsmuy5tWewf", "title": "Rational Project Management", "pageUrl": "https://www.lesswrong.com/posts/uBZwWKYsmuy5tWewf/rational-project-management", "postedAt": "2010-11-25T09:32:04.283Z", "baseScore": 10, "voteCount": 8, "commentCount": 14, "url": null, "contents": { "documentId": "uBZwWKYsmuy5tWewf", "html": "

There's a lot of discussion on the site about akrasia (failure of willpower), but most of it focuses on the individual. Obviously, though, organizations can also be said to suffer from akrasia. Even when there is clear agreement among the leaders or members of a business, charity, political party, etc. about what the organization should try to accomplish, it's often the case that the daily tasks performed by the people who work there bear only the loosest resemblance to the tasks that would be picked by an ideal decision-maker.

\n

There are lots of different issues here -- what should be done, when it should be done, who should do it, how much of a budget in money, office space, website space, etc. a project should receive, when and how to evaluate the success of a project...

\n

I've come across several good books on how to cope with office politics, how to build a financially successful company, and how to motivate people to perform at a high level, but none on how to manage an organization so that it can fulfill its mission most efficiently.

\n

Does anyone know of a standard (or cutting-edge) text (or program!) in this area? Books, articles, videos, and other media are all welcome, even if they're behind a paywall. Sometime in the next 20 years, I hope to revolutionize the efficiency of America's legal system by developing project management software, automatic litigation tools, machine-readable law libraries, consulting, and/or teaching. Basically I think there is no good reason why the current system, which involves otherwise smart people solving the same problems over and over again for different clients for decades on end, should not give way to an entrepreneurial system where problems get solved once or twice and then the solutions get propagated across the society. Since this is an impossible problem, I will have plenty of wheels to invent, and see no need to reinvent any wheels that already exist. So...know any wheels? Even if it seems obvious to you, I might have missed it.

\n

Thank you oodles & kaboodles in advance,

\n

Mass_Driver

" } }, { "_id": "ZAFXt94q8SDwBeSDe", "title": "localroger: Replotting The Transmigration of Prime Intellect", "pageUrl": "https://www.lesswrong.com/posts/ZAFXt94q8SDwBeSDe/localroger-replotting-the-transmigration-of-prime-intellect", "postedAt": "2010-11-25T07:33:38.809Z", "baseScore": 4, "voteCount": 11, "commentCount": 3, "url": null, "contents": { "documentId": "ZAFXt94q8SDwBeSDe", "html": "

http://www.kuro5hin.org/story/2010/9/3/20910/05774

" } }, { "_id": "a3sMDCTtwTyWgobpc", "title": "Science and rationalism - a brief epistemological exploration", "pageUrl": "https://www.lesswrong.com/posts/a3sMDCTtwTyWgobpc/science-and-rationalism-a-brief-epistemological-exploration", "postedAt": "2010-11-25T06:57:23.015Z", "baseScore": -6, "voteCount": 6, "commentCount": 13, "url": null, "contents": { "documentId": "a3sMDCTtwTyWgobpc", "html": "

Is rationalism scientific? Yes. Is science rationalistic? Depends.

\n

Within the scientific disciplines, I believe that computer science is more rationalistic than the others because of its deductive nature (results can be proved mathematically).

\n

How about other fields?

\n

Physics? Chemistry? Biology? They rely mostly on empirical results to support their arguments and theories. They can observe. They can experiment. Some of them claim they can prove... but to what extent can they be so confident? Can we really bridge empiricism to rationalism? 

\n

Verificationism. Sure, a scientific theory should be supported by empirical evidence. But a lack of contradicting evidence doesn't necessarily mean that the theory is true. It just means that the theory hasn't yet been shown false, even if a hypothesis can be empirically tested and the study has been replicated again and again. Falsifiability is like a time bomb: you don't know the conditions under which a given theory fails to apply. There may be unknown unknowns, just as Newton didn't know his theory would break down at relativistic speeds and in strong gravitational fields.

\n

Moreover, some fields cannot be experimented on, and in some cases cannot even be observed. Examples: astronomy, natural history. This is more a matter of speculation - yet it does not receive as much scepticism as social science. The Big Bang, how the dinosaurs went extinct, etc. cannot be replicated or confirmed given current technology. Scientists in those areas are playing with \"what ifs\"... trying to explain possible causes without really knowing how cause-effect relationships may have been different in prehistoric times. I don't find them very different from, say, political analysts trying to explain why Kennedy was assassinated.

\n

I myself am a fallibilist. But I won't go as far as endorsing the Münchhausen Trilemma. As for science: I have reasonable doubts, but I believe that a reasonable (blind) faith is necessary for practicality/pragmatism.

\n

Relativism. A proposition is only true relative to a particular perspective. Like the story of the blind men and the elephant. How can scientists be sure that they see the whole 'truth', if 'truth' is definable at all?  Maybe the elephant is too large? Maybe the blind men cook the results in order to get their opinions published? Maybe the blind men lack a good common measurement (e.g. eyes of the same quality) to give the elephant a 'fair' assessment?

\n

Or maybe it's not the blind men and the elephant - it's Plato's Allegory of the Cave (in modern terms, The Matrix)?

\n

The point here is that scientists need to use judgment in measuring and interpreting results, and this process relies on the limits and sharpness of their senses, intellect, and measurement equipment, as well as their experience. Why is the light year used to measure distances in space? Why is IQ used to measure intelligence? There are limitations from both cognitive and methodological points of view.

\n

Subjectivism. Whatever methods scientists use to gain confidence in a theory from empirical evidence, they \"participate\" in measuring the results rather than \"observing them objectively\". When you use a ruler to measure the length of something, are you sure you have good eyes? That your visual ability remains constant the whole time? That the ruler doesn't contract or expand while measuring the object? This is particularly apparent when measuring the behaviour of waves and particles at atomic and subatomic scales.

" } }, { "_id": "vj3ChjWrchLAZwn38", "title": "What Science got Wrong and Why ", "pageUrl": "https://www.lesswrong.com/posts/vj3ChjWrchLAZwn38/what-science-got-wrong-and-why", "postedAt": "2010-11-25T02:12:59.245Z", "baseScore": 12, "voteCount": 7, "commentCount": 25, "url": null, "contents": { "documentId": "vj3ChjWrchLAZwn38", "html": "

An article at The Edge has scientific experts in various fields give their favorite examples of theories that were wrong in their fields. Most relevantly to Less Wrong, many of those scientists discuss what their disciplines did wrong that resulted in the misconceptions. For example, Irene Pepperberg, not surprisingly, discusses the failure of scientists to appreciate avian intelligence. She emphasizes that this failure resulted from a combination of factors, including the lack of appreciation that high-level cognition could occur without the mammalian cortex, and the fact that many early studies used pigeons, which just aren't that bright.

" } }, { "_id": "4amcyxad5bnBR9Afm", "title": "$100 for the best article on efficient charity -- deadline Wednesday 1st December", "pageUrl": "https://www.lesswrong.com/posts/4amcyxad5bnBR9Afm/usd100-for-the-best-article-on-efficient-charity-deadline", "postedAt": "2010-11-24T22:31:57.215Z", "baseScore": 17, "voteCount": 15, "commentCount": 57, "url": null, "contents": { "documentId": "4amcyxad5bnBR9Afm", "html": "
\n

Reposted from a few days ago, noting that jsalvatier (kudos to him for putting up the prize money, very community spirited)   has promised $100 to the winner, and I have decided to set a deadline of Wednesday 1st December for submissions, as my friend has called me and asked me where the article I promised him is. This guy wants his god-damn rationality already, people! 

\n

My friend is currently in a potentially lucrative management consultancy career, but is considering getting a job in eco-tourism because he \"wants to make the world a better place\" and we got into a debate about Efficient Charity, Roles vs. Goals, and Optimizing versus Acquiring Warm Fuzzies

\n

I thought that there would be a good article here that I could send him to, but there isn't. So I've decided to ask people to write such an article. What I am looking for is an article that is less than 1800 words long, and explains the following ideas: 

\n
    \n
  1. Charity should be about actually trying to do as much expected good as possible for a given amount of resource (time, $), in a quantified sense. I.e. \"5000 lives saved in expectation\", not \"we made a big difference\". 
  \n
  2. The norms and framing of our society regarding charity currently get it wrong, i.e. people send lots of $ to charities that do a lot less good than other charities. The \"inefficiency\" here is very large - GiveWell estimates by a factor of at least 1,000. Our norm of ranking charities by % spent on overheads is very, very silly. 
  \n
  3. It is usually better to work a highly-paid job and donate the money than to work for a charity directly, because if you work for a charity you simply replace the person who would have been hired had you not applied. 
  \n
  4. Our instincts will tend to tempt us to optimize for signalling; this is to be resisted unless (or to the extent that) it is what you actually want to do. Our instincts will also tend to optimize for \"Warm Fuzzies\". These should be purchased separately from actual good outcomes. 
  \n
  5. Our human intuition about how to allocate resources is extremely bad. Moreover, since charity is typically for the so-called benefit of someone else, you, the donor, usually don't get to see the result. Lacking this feedback from experience, one tends to make all kinds of gigantic mistakes. 
  \n
\n

but without using any unexplained LW Jargon (Utilons, Warm Fuzzies, optimizing). Linking to posts explaining jargon is NOT OK. Just don't use any LW Jargon at all. I will judge the winner based on these criteria and the score that the article gets on LW. Maybe the winning article will not rigidly meet all the criteria: there is some flexibility. The point of the article is to persuade people who are, at least to some extent, charitable and who are smart (educated at a top university or equivalent) to seriously consider investing more time in rationality when they want to do charitable things. 

\n
" } }, { "_id": "i5bez6nwcGHCF6i7s", "title": "Future of Humanity Institute at Oxford hiring postdocs", "pageUrl": "https://www.lesswrong.com/posts/i5bez6nwcGHCF6i7s/future-of-humanity-institute-at-oxford-hiring-postdocs", "postedAt": "2010-11-24T21:40:00.597Z", "baseScore": 9, "voteCount": 7, "commentCount": 0, "url": null, "contents": { "documentId": "i5bez6nwcGHCF6i7s", "html": "

http://www.fhi.ox.ac.uk/news/2010/vacancies

" } }, { "_id": "wE3nf7NRCwj9yJd3K", "title": "Startups", "pageUrl": "https://www.lesswrong.com/posts/wE3nf7NRCwj9yJd3K/startups", "postedAt": "2010-11-24T21:13:45.409Z", "baseScore": 10, "voteCount": 8, "commentCount": 15, "url": null, "contents": { "documentId": "wE3nf7NRCwj9yJd3K", "html": "

There seems to be a non-negligible amount of overlap between this community and Hacker News, both in terms of material and members. For those not aware of HN, it's a news aggregator for people interested in startups, technology, and other intellectually interesting topics, with a reputation for high-quality material and discourse.

\n

While rationality and LessWrong get their fair share of attention over at HN, I haven't heard much discussion about startups over here. Off-line, I've heard the claim that, in terms of contributions to existential risk prevention charities, startups are suboptimal compared to jobs in finance, but not much else besides that. I find this odd, as many of the contributors to this site seem to be prime founder material, and rationality should really be of use when working in a high-stakes, ever-changing environment.

\n

My intention with this post is simply to kickstart a discussion around startups and gauge the attitudes of fellow LessWrongers. Does anyone (else) aspire to becoming a startup founder in the next few years? Do you believe founding a startup to be a viable means of contributing to groups working on existential risk prevention?

" } }, { "_id": "Mo4FaABprMtnWpPow", "title": "2005 short story loosely about P-zombies", "pageUrl": "https://www.lesswrong.com/posts/Mo4FaABprMtnWpPow/2005-short-story-loosely-about-p-zombies", "postedAt": "2010-11-24T18:12:15.828Z", "baseScore": 5, "voteCount": 5, "commentCount": 3, "url": null, "contents": { "documentId": "Mo4FaABprMtnWpPow", "html": "

I thought some of you might enjoy this 2005 story published in Asimov's:

\n

Second Person, Present Tense by Daryl Gregory (plus some of the author's notes on the story)

\n

The story involves a drug that (for a period of time) turns someone into a P-zombie.

\n

It's currently available for free on the Asimov's page, but it may only be there temporarily.

" } }, { "_id": "YafmHeLuxfRNRkgN2", "title": "Probability and Politics", "pageUrl": "https://www.lesswrong.com/posts/YafmHeLuxfRNRkgN2/probability-and-politics", "postedAt": "2010-11-24T17:02:11.537Z", "baseScore": 28, "voteCount": 25, "commentCount": 31, "url": null, "contents": { "documentId": "YafmHeLuxfRNRkgN2", "html": "

Follow-up to: Politics as Charity

\n

Can we think well about courses of action with low probabilities of high payoffs?  

\n

Giving What We Can (GWWC), whose members pledge to donate a portion of their income to most efficiently help the global poor, says that evaluating spending on political advocacy is very hard:

\n
\n

Such changes could have enormous effects, but the cost-effectiveness of supporting them is very difficult to quantify as one needs to determine both the value of the effects and the degree to which your donation increases the probability of the change occurring. Each of these is very difficult to estimate and since the first is potentially very large and the second very small [1], it is very challenging to work out which scale will dominate.

\n
\n

This sequence attempts to actually work out a first approximation of an answer to this question, piece by piece. Last time, I discussed the evidence, especially from randomized experiments, that money spent on campaigning can elicit marginal votes quite cheaply. Today, I'll present the state-of-the-art in estimating the chance that those votes will directly swing an election outcome.

\n

Disclaimer

\n

Politics is a mind-killer: tribal feelings readily degrade the analytical skill and impartiality of otherwise very sophisticated thinkers, and so discussion of politics (even in a descriptive empirical way, or in meta-level fashion) signals an increased probability of poor analysis. I am not a political partisan and am raising the subject primarily for its illustrative value in thinking about small probabilities of large payoffs.

\n

\n

Two routes from vote to policy: electing and affecting
\n
\n

In thinking about the effects of an additional vote on policy, we can distinguish between two ways to affect public policy: electing politicians disposed to implement certain policies, or affecting [2] the policies of existing and future officeholders who base their decisions on electoral statistics (including that marginal vote and its effects). Models of the probability of a marginal vote swaying an election are most obviously relevant to the electing approach, but the affecting route will also depend on such models, as they are used by politicians. 

\n

The surprising virtues of naive Fermi calculation

\n
\n
\n
In my previous post I linked to Eric Schwitzgebel's  discussion  of politics as charity, in which he guesstimated that the probability of a U.S. Presidential election being tied was 1/n where n is the number of voters. So with an estimate of 100 million U.S. voters in presidential elections he gave a 1/100,000,000 probability of a marginal vote swaying the election. This is a suspiciously available number. It seems to be derived from a simple model in which we imagine drawing randomly from all the possible divisions of the electorate between two candidates, when only one division would make the marginal vote decisive. But of course we know that voting won't involve a uniform distribution.
\n

One objection comes from modeling each vote as a flip of a biased coin. If the coin is exactly fair, then the chance of a tie scales as 1/sqrt(n). But if the coin is even slightly removed from exact fairness, the chance of a tie rapidly falls to negligible levels. This was actually one of the first models in the literature, and it was recapitulated by LessWrongers in comments last time.
\n

However, if we instead think of the bias of the coin itself as sampled from a uniform distribution, then we get the same  result as Schwitzgebel. In the electoral context, we can think of the coin's bias as reflecting factors with correlated effects on many voters, e.g. the state of the economy, with good economic results favoring incumbents and their parties.
\n

\n
Of course, it's clear that electoral outcomes are not uniformly sampled: we see few 90%-10% outcomes in national American elections. Electoral competition and Median Voter Theorem effects, along with the stability of partisan identifications, will tend to keep candidates roughly balanced and limit the number of true swing voters. Within that range, unpredictable large \"wild card\" influences like the economy will shift the result from year to year, forcing us to spread our probability mass fairly evenly over a large region. Depending on our estimates of that range, we would need to multiply Schwitzgebel's estimate by a fudge factor c, giving a probability of a tie of c/n for a random election, with 1 < c < 100 if we bound from above using the idea that elections are very unlikely to be decided within a band narrower than 1% of the electorate.
\n
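The three toy models above (an exactly fair coin, a slightly biased coin, and a bias spread evenly over a narrow band) are easy to check numerically. A minimal Python sketch follows; it is my illustration rather than part of the original analysis, and the electorate size and 2% band width are arbitrary choices.

from scipy.stats import binom

n = 1_000_000                              # hypothetical electorate size
# Model 1: every voter is an independent flip of an exactly fair coin.
p_tie_fair = binom.pmf(n // 2, n, 0.5)     # roughly sqrt(2/(pi*n)), i.e. ~1/sqrt(n) scaling
# Model 2: a coin with a slight 51% bias -- the tie probability collapses toward zero.
p_tie_biased = binom.pmf(n // 2, n, 0.51)
# Model 3: the coin's bias is itself spread evenly over a narrow band (49% to 51%),
# i.e. outcomes are spread over ~2% of the electorate, giving roughly c/n with c = 50.
band = 0.02
p_tie_band = 1.0 / (band * n)
print(p_tie_fair, p_tie_biased, p_tie_band)

Running this gives on the order of 8e-4, effectively 0, and 5e-5 respectively, matching the 1/sqrt(n), negligible, and c/n behaviours described above.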

\n

Fermi, meet data

\n

How well does this hold up against empirical data? In two papers from 1998  and  2009, Andrew Gelman and coauthors attempt to estimate the probability a voter going into past U.S. Presidential elections should have assigned to casting a decisive vote. They use standard models that take inputs like party self-identification, economic growth, and incumbent approval ratings to predict electoral outcomes. These models have proven quite reliable in predicting candidate vote share and no more accurate methods are known. So we can take their output as a first approximation of the individual voter's rational estimates [3].

\n

\n
Their first paper considers:
\n
\n
... the 1952-1988 elections. For six of the elections, the probability is fairly independent of state size (slightly higher for the smallest states) and is near 1 in 10 million. For the other three elections (1964, 1972, and 1984, corresponding to the landslide victories of Johnson, Nixon, and Reagan [incumbents with favorable economic conditions]), the probability is much smaller, on the order of 1 in hundreds of millions for all of the states.
\n
\n
The result for 1992 was near 1 in 10 million. In 2008, which had economic and other conditions strongly favoring Obama, they found the following:

\n
\n
probabilities a week before the 2008 presidential election, using state-by-state election forecasts based on the latest polls. The states where a single vote was most likely to matter are New Mexico, Virginia, New Hampshire, and Colorado, where your vote had an approximate 1 in 10 million chance of determining the national election outcome. On average, a[n actual] voter in America had a 1 in 60 million chance of being decisive in the presidential election.
\n
\n
All told, these place the average value of  c a little under the middle of the range given by the Fermi calculation above, and are very far from  Pascal's Mugging  territory.

\n
Voting vs campaign contributions
\n

\n
What are the implications for a causal decision theorist who wants to dedicate a modest effort to efficient do-gooding? The exact value of voting depends on many other factors, e.g. the value of policies, but we can at least compare ways to deliver votes.

\n
Which has more bang per buck: voting in your jurisdiction or taking the hour or so to earn money and make campaign contributions? Last time I  estimated  a cost of $50 to $500 per vote from contributions, more in more competitive races (diminishing returns). So unless you have a high opportunity cost, you'd do better to vote yourself than contribute to a campaign in your own jurisdiction. The standard heuristic that everyone should vote seems to have been defended.

\n
But let's avoid motivated stopping. The above data indicate frequent differences of 1-2 orders of magnitude across jurisdictions. So someone in an uncompetitive New York district would often do better to donate less than $50 (to a competitive race) than to vote. (On the other hand, if you live in a competitive district [4], replacing your vote with donations might cost a sizable portion of your charitable budget.)

\n
When we take into account differences between election cycles, usually another 1-2 orders of magnitude, the value of voting in a \"safe\" jurisdiction in an election which is not close winds up negligible (if your reaction to this fact is not independent of others'). For those spending on political advocacy, this provides a route to increased cost-effectiveness: by switching from an even distribution of spending to a focus on the (forecast) closest third of elections, you can nearly double your expected effectiveness. Even more extreme \"wait-in-reserve\" strategies could pay off, but are limited by the imperfection of forecasting methods.
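\n
As a toy illustration of the doubling claim (the tier ratios below are invented for the example, not taken from the data above):

# Suppose expected decisiveness per dollar in the closest, middle, and least-close
# thirds of races stands in the ratio 6 : 2 : 1.
tiers = [6.0, 2.0, 1.0]
even_spread = sum(t / 3 for t in tiers)   # = 3.0, spreading the budget evenly
focused = tiers[0]                        # = 6.0, everything on the closest third
print(focused / even_spread)              # = 2.0, i.e. roughly double the effectiveness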

\n
Ties, recounts, and lawyers 

\n
\n
Does the possibility of  recounts disrupt the above analysis?
\n

\n
It turns out that it doesn't. In countries with reasonably clean elections, a candidate with a large enough margin of victory is almost certain to be declared the winner. Say that a \"large enough\" margin is 5,000 votes, and that a candidate is 99% likely to be declared the winner given that margin. Then Candace the Candidate must go from a 1% probability of victory to a 99% probability of victory as we consider vote totals ranging from a 5,000-vote shortfall to a 5,000-vote lead. So, on average within that range, each marginal vote must increase her probability of victory by 0.0098%. There are 10,000 possible totals to hit within the range, so as long as they have roughly similar prospective probabilities, the expected value of the marginal vote will be almost the same as in the single \"deciding vote\" model.
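\n
The per-vote figure is just the probability swing divided by the number of possible totals in the window; a short check using the same numbers as in the text (my code, not the author's):

margin = 5_000                   # the margin treated as large enough, on either side
p_low, p_high = 0.01, 0.99       # chance of being declared winner at each end of the window
per_vote = (p_high - p_low) / (2 * margin)
print(per_vote)                  # 0.000098, i.e. 0.0098% per marginal vote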

\n
\n
Summary
\n
\n

It is possible to make sensible estimates of the probability of at least some events that have never happened before, like tied presidential elections, and use them in attempting efficient philanthropy.

\n

 

\n
\n

 

\n

[1] At least for two-boxers. More on one-boxing decision theorists at a later date.

\n

[2] There are a number of arguments that voters' role in affecting policies is more important, e.g. in this Less Wrong post by Eliezer. More on this later.

\n

[3] Although for very low values, the possibility that our models are fundamentally mistaken looms progressively larger. See  Ord et al.

\n

[4] Including other relevant sorts of competitiveness, e.g. California is typically a safe state in Presidential elections, but there are usually competitive ballot initiatives.

" } }, { "_id": "WZudcjaMbSjfm3E6y", "title": "The Cult Of Reason", "pageUrl": "https://www.lesswrong.com/posts/WZudcjaMbSjfm3E6y/the-cult-of-reason", "postedAt": "2010-11-24T15:24:16.699Z", "baseScore": 3, "voteCount": 7, "commentCount": 10, "url": null, "contents": { "documentId": "WZudcjaMbSjfm3E6y", "html": "

So... while investigating Wikipedia I found out about an actual Cult. Of Reason. Revolutionary France. From the description, it sounds pretty awesome. Here's the link. Is this denomination usable? Is it useful? Can it be resurrected? Should it be? Is it compatible with what we stand for? Discuss. Also, note that in French \"Culte\" does not mean \"Sect\"; it means \"the act of worshipping\".

" } }, { "_id": "QhrbBzFAaTQbee4zH", "title": "Do IQ tests measure g?", "pageUrl": "https://www.lesswrong.com/posts/QhrbBzFAaTQbee4zH/do-iq-tests-measure-g", "postedAt": "2010-11-24T13:19:12.265Z", "baseScore": 28, "voteCount": 25, "commentCount": 10, "url": null, "contents": { "documentId": "QhrbBzFAaTQbee4zH", "html": "

I'm indulging in the simple pleasure of drawing large conclusions from a single study.... Why exams are nothing out of context:

\n
the story about the maths ability of Brazilian street kids living in the favelas of Recife. This story helped both of us realise the importance of carrying out usability tests in context. Three researchers (see: Carraher, Carraher, and Schliemann 1985) carried out research with children aged 9 to 15. These kids had dropped out of school and were selling sunscreen and chewing gum on the streets. The researchers worked out that they could set the kids questions by purchasing goods from them. For example, 1,000 minus 300 is the same as giving the kid a 1,000-Cruzeiro note for a product that costs 300 Cruzeiros. Multiplication can be done by asking the kids how much 3 of a product would cost. In these tests the Brazilian street kids scored 98%. But when they were put into a formalised test setting and asked instead how much 3 apples would cost, or what 3×9 is, the kids' performance dropped to just 37%.
\n
What is scary is that the researchers later tested middle-class children in a private school. These kids did very well in the formal exam. But when they had to do transactions with real money in the street, using the same maths, they failed to carry out the transactions.
\n

Is it possible that the correlation between g and success isn't about raw intelligence, but about being able to access one's intelligence in situations (like classrooms) which involve thresholds for easily improving one's status?

" } }, { "_id": "BzddatoNo3XPyHtpk", "title": "Why abortion looks more okay to us than killing babies", "pageUrl": "https://www.lesswrong.com/posts/BzddatoNo3XPyHtpk/why-abortion-looks-more-okay-to-us-than-killing-babies", "postedAt": "2010-11-24T10:08:26.453Z", "baseScore": 25, "voteCount": 36, "commentCount": 68, "url": null, "contents": { "documentId": "BzddatoNo3XPyHtpk", "html": "

Some thoughts that I don't remember anyone expressing on LW.

\n

First let's get this out of the way: life does not \"begin at birth\". As far as life can be said to \"begin\" anywhen, it begins at conception. Moreover, the child's intellectual abilities, self-awareness or similar qualities don't undergo any abrupt change at birth. It's just an arbitrary moment in the child's development. So it would seem that allowing killing kids only before they're born is illogical. What are the odds that your threshold for \"personhood\" coincides so well with the moment of birth? Could it be okay to kill kids up to 2 years old, say? CronoDAS voices this opinion here.

\n

But there's another argument in favor of considering the moment of birth \"special\". Eliezer linked to a study showing that the degree of parental grief over a child's death, when plotted against the child's age, follows the same curve as the child's reproductive potential plotted against age. Now, the reproductive potential of an unborn kid depends on its chance of survival, and the moment of birth is special in this respect. In the ancestral environment many kids used to die at birth. And mothers died often too, which made their kids less likely to survive. An unborn kid is a creature that hasn't yet passed this big and sharply defined hurdle, so we instinctively discount our sympathy for its reproductive potential by a large factor without knowing why.

\n

How much this should influence our modern attitudes toward abortion, if at all, is another question entirely. As medicine becomes better, kids and mothers become more likely to survive. So if our attitudes were allowed to drift toward a new evolutionary equilibrium which took account of technology, we'd come to hate abortions again (thx Morendil). But then again, the new evolutionary equilibrium is probably a very nasty system of values that no one in their right mind would embrace now (won't spell it out, use your imagination).

\n

Ultimately your morality is up to you and the little voices in your head. You think women's rights trump kids' rights, or the other way round - okay. But if you use factual arguments, try to make sure they are correct.

\n

ETA: see DanArmak's and Sniffnoy's comments for simpler explanations. Taken together, they sound more convincing to me than my own idea.

" } }, { "_id": "9aDYihurHjcqpjhoa", "title": "Depression and Rationality", "pageUrl": "https://www.lesswrong.com/posts/9aDYihurHjcqpjhoa/depression-and-rationality", "postedAt": "2010-11-24T08:03:48.409Z", "baseScore": 6, "voteCount": 8, "commentCount": 25, "url": null, "contents": { "documentId": "9aDYihurHjcqpjhoa", "html": "
\n
\n

Okay, this post might be more personal than most on LessWrong. But I think it might serve a more general function for LessWrong readers. It's commonly known that depression is often triggered (and maintained) by fundamentally irrational thoughts: feelings of guilt, hopelessness, and worthlessness often rest on conceptions that do not correspond with the real world. And so a *rational* person may ostensibly be less prone to depression. But depression involves other things that can still strike a fully rational person: loss of the capacity to enjoy things, and loss of energy, focus, and the ability to concentrate.

\n

So here is my story:

\n

I've finally corrected all my thinking and reconciled myself with the notion that there's a very high chance that my ex (of 7 months) will never talk to me again and that I must prepare to live a life without her. She has already refused contact with me for 6 months, and I've pretty much been emotionally broken for that same period of time. There has been one phase of improvement (mostly since I got more information and accordingly corrected my thought patterns), but the improvement has finally stagnated.

\n

My problem is that I think I've lost my capacity to enjoy things. I simply don't enjoy anything anymore. Occasionally, I can try to find things to laugh at, but those things are usually only temporary sources of laughter (the more \"severe\" the norm-violation, the funnier - if we go by Robin Hanson's definition of \"humor\"). They're not even sustainable sources of laughter (or enjoyment), since almost all of them involve trolling to one extent or another. I have some problems with energy and concentration, but my Adderall for ADD helps with them. Still, it's difficult for me to maintain the attention span to do most other things.

\n

Yes, I do have friends, and yes, I do talk to people. The problem is that talking to people doesn't make me feel any less lonely anymore. Sometimes it makes me temporarily feel better. I know that the world is interesting, that I have friends to talk to, that there are so many things I can do. And my past 12-year old self would be SO happy if he could exchange spots with me. But in the end, I just get bored with everything so quickly. Sometimes I can briefly find things to laugh at. But those are only funny for a short time.

\n

So that's what I'm trying to find a solution for. Maybe it sounds like depression - I don't know. It's not full head-on clinical depression, but since I'm frequently sad despite correcting all my destructive thought patterns, I don't know what it is anymore.

\n

So maybe I'm trying to find the best meds for that. But I'm very skeptical of SSRIs because they're no better than placebo for mild-to-moderate depression. I have no sources for weed or other psychedelics. Therapy might not even work because I've corrected my thinking (and even convinced myself that I probably will eventually land someone, simply because I'm super-exceptional at advertising myself online, and the chances will improve as I get older). But it's not helping me feel any better.

\n

===

\n

The problem is, I probably don't even qualify for a diagnosis of depression. Here are the symptoms of dysthymia:

\n

\"\"To be diagnosed, an adult must experience 2 or more of the following symptoms for at least two years:[5]

\n
Feelings of hopelessness
Insomnia or hypersomnia
Poor concentration or difficulty making decisions
Poor appetite or overeating
Low energy or fatigue
Low self-esteem
Low sex drive
Irritability [1]\"\"
\n

But I have none of those other than irritability. My main problem is simply that I'm still lonely and sad. And it's persisted, even though I've changed my thinking. I know that I have things better off than most people, but it's not going to help if I've lost my capacity for enjoying things.

\n
\n
" } }, { "_id": "xLFhZtfMHjRZHuXEP", "title": "By using Cost Basis Working Capital as the number", "pageUrl": "https://www.lesswrong.com/posts/xLFhZtfMHjRZHuXEP/by-using-cost-basis-working-capital-as-the-number", "postedAt": "2010-11-24T07:28:36.950Z", "baseScore": 1, "voteCount": 1, "commentCount": 1, "url": null, "contents": { "documentId": "xLFhZtfMHjRZHuXEP", "html": "

Investing is a long-term, personal, goal-orientated, non-competitive, hands-on decision-making process that does not require advanced degrees or a rocket-scientist IQ. In fact, being too smart can be a problem if you have a tendency to over-analyze things. It is helpful to establish guidelines for selecting securities, and for disposing of them. For example, limit Equity involvement to Investment Grade, NYSE, dividend-paying, profitable, and widely held companies.

\n

So is there really such a thing as an Income Portfolio that needs to be managed? For Fixed Income, focus on Investment Grade securities, with above-average but not “highest in class” yields. With Variable Income securities, avoid purchases near 52-week highs, and keep individual holdings well below 5%. Keep individual Preferred Stocks and Bonds well below 5% as well. Closed End Fund positions may be slightly higher than 5%, depending on type.

\n

Take a reasonable profit (more than one year's income, for starters) as soon as possible. With a 60% Equity Allocation, 60% of profits and interest would be allocated to stocks. Monitoring Investment Performance the Wall Street way is inappropriate and problematic for goal-orientated investors. Or are we really just dealing with an investment portfolio that needs its Asset Allocation tweaked occasionally as we approach the time in life when it has to provide the yacht… and the gas money to run it?

" } }, { "_id": "ATgFZpZCh4rS8fCGD", "title": "Inherited Improbabilities: Transferring the Burden of Proof", "pageUrl": "https://www.lesswrong.com/posts/ATgFZpZCh4rS8fCGD/inherited-improbabilities-transferring-the-burden-of-proof", "postedAt": "2010-11-24T03:40:17.056Z", "baseScore": 46, "voteCount": 45, "commentCount": 58, "url": null, "contents": { "documentId": "ATgFZpZCh4rS8fCGD", "html": "
\r\n

One person's modus ponens is another's modus tollens.

\r\n
\r\n

- Common saying among philosophers and other people who know what these terms mean.

\r\n
\r\n

If you believe A => B, then you have to ask yourself: which do I believe more? A, or not B?

\r\n
\r\n

- Hal Daume III, quoted by Vladimir Nesov.

\r\n

Summary: Rules of logic have counterparts in probability theory. This post discusses the probabilistic analogue of modus tollens (the rule that if A=>B is true and B is false, then A is false), which is the inequality P(A) ≤ P(B)/P(B|A). What this says, in ordinary language, is that if A strongly implies B, then proving A is approximately as difficult as proving B. 

\r\n

The appeal trial for Amanda Knox and Raffaele Sollecito starts today, and so to mark the occasion I thought I'd present an observation about probabilities that occurred to me while studying the \"motivation document\"(1), or judges' report, from the first-level trial.

\r\n

One of the \"pillars\" of the case against Knox and Sollecito is the idea that the apparent burglary in the house where the murder was committed -- a house shared by four people, namely Meredith Kercher (the victim), Amanda Knox, and two Italian women -- was staged. That is, the signs of a burglary were supposedly faked by Knox and Sollecito in order to deflect suspicion from themselves. (Unsuccessfully, of course...)

\r\n

As the authors of the report, presiding judge Giancarlo Massei and his assistant Beatrice Cristiani, put it (p.44):

\r\n
\r\n

What has been explained up to this point leads one to conclude that the situation of disorder in Romanelli's room and the breaking of the window constitute an artificially created production, with the purpose of directing investigators toward someone without a key to the entrance, who would have had to enter the house via the window whose glass had been broken and who would then have perpetrated the violence against Meredith that caused her death.

\r\n
\r\n

\r\n

Now, even before examining \"what has been explained up to this point\", i.e. the reasons that Massei and Cristiani (and the police before them) were led to this conclusion, we can pretty easily agree that if it is correct -- that is, if Knox and Sollecito did in fact stage the burglary in Filomena Romanelli's room -- then it is extremely likely that they are guilty of participation in Kercher's murder. After all, what are the chances that they just happened to engage in the bizarre offense of making it look like there was a burglary in the house, on the very same night as a murder occurred, in that very house? Now, one could still hypothetically argue about what their precise role was (e.g. whether they actually physically caused Kercher's death, or merely participated in some sort of conspiracy to make the latter happen via the actions of known burglar and undisputed culprit Rudy Guede), and thus possibly about how severely they should be treated by the justice system; but in any case I think I'm on quite solid ground in asserting that a faked burglary by Knox and Sollecito would very strongly imply that Knox and Sollecito are criminally culpable in the death of Meredith Kercher.

\r\n

...which is in fact quite a problem for Massei and Cristiani, as I'll now explain.

\r\n

Probability theory can and should be thought of as a quantitative version -- indeed, a generalization -- of the \"rules of logic\" that underpin  Traditional Rationality. (Agreement with the previous sentence is essentially what it means to be a Bayesian.) One of these rules is this:

\r\n

(1) If A implies B, then not-B implies not-A.

\r\n

For example, all squares are in fact rectangles; which means that if something isn't a rectangle, it can't possibly be a square. Likewise, if \"it's raining\" implies \"the sidewalk is wet\", and you know the sidewalk isn't wet, then you know it's not raining.

\r\n

The rule that gets you from \"A implies B\" and \"A\" to \"B\" is called modus ponens, which is Latin for \"method that puts\". The rule that gets you from \"A implies B\" and \"not-B\" to \"not-A\" is called modus tollens, which is Latin for \"method that takes away\". As the saying goes, and as we have just seen, they are really one and the same. 

\r\n

If, for a moment, we were to think about the Meredith Kercher case as a matter of pure logic -- that is, where inferences were always absolutely certain, with zero uncertainty -- then we could say that if we know that \"burglary is fake\" implies \"Knox and Sollecito are guilty\", and we also know that the burglary was in fact fake, then we know that Knox and Sollecito are guilty.

\r\n

But, of course, there's another way to say the same thing: if we know that \"burglary is fake\" implies \"Knox and Sollecito are guilty\", and we also know that Knox and Sollecito are innocent, then we know that the burglary wasn't fake. (And that to the extent Massei and Cristiani say it was, they must be mistaken.)

\r\n

In other words, so long as one accepts the implication \"burglary fake => Knox and Sollecito guilty\", one can't consistently hold that the burglary was fake and that Knox and Sollecito are innocent, but one can consistently hold either that the burglary was fake and Knox and Sollecito are guilty, or that Knox and Sollecito are innocent and the burglary was not fake.

\r\n

The question of which of these two alternatives to believe thus reduces to the question of whether, given the evidence in the case, it's more believable that Knox and Sollecito are guilty, or that the burglary was \"authentic\". Massei and Cristiani, of course, aim to convince us that the latter is the more improbable.

\r\n

But notice what this means! This means that the proposition that the burglary was fake assumes, or inherits, the same high burden of proof as the proposition that Knox and Sollecito committed murder! Unfortunately for Massei and Cristiani, there's no way to \"bootstrap up\" from the mundane sort of evidence that seemingly suffices to show that a couple of youngsters engaged in some deception, to the much stronger sort of evidence required to prove that two honor students(2) with gentle personalities suddenly decided, on an unexpectedly free evening, to force a friend into a deadly sex game with a local drifter they barely knew, for the sake of a bit of thrill-seeking(3).

\r\n
\r\n
You may have noticed that, two paragraphs ago, I left the logical regime of implication, consistency, and absolute certainty, and entered the probability-theoretic realm of belief, uncertainty, and burdens of proof. So to make the point rigorous, we'll have to switch from pure logic to its quantitative generalization, the mathematics of probability theory.  
\r\n
\r\n
When logical statements are translated into their probabilistic analogues, a statement like \"A is true\" is converted to something like \"P(A) is high\"; \"A implies B\" becomes \"A is (strong)  evidence  of B\"; and rules such as (1) above turn into  bounds on the probabilities of some hypotheses in terms of others.
\r\n
\r\n
Specifically, the translation of (1) into probabilistic language would be something like:
\r\n
(2) If A is (sufficiently) strong evidence of B, and B is unlikely, then A is unlikely.
\r\n
or
\r\n
(2') If A is (sufficiently) strong evidence of B, then the prior probability of A can't be much higher than the prior probability of B.
\r\n
\r\n
\r\n
\r\n

Let's prove this:

\r\n

Suppose that A is strong evidence of B -- that is, that P(B|A) is close to 1. We'll represent this as P(B|A) ≥ 1-ε, where ε is a small number. Then, via  Bayes' theorem, this tells us that

\r\n

P(B|A) = P(A|B)P(B)/P(A) ≥ 1-ε

\r\n

or

\r\n

P(A|B)P(B) ≥ (1-ε)P(A)

\r\n

so that

\r\n

P(A) ≤ P(A|B)P(B)/(1-ε)

\r\n

and thus

\r\n

P(A) ≤ P(B)/(1-ε)

\r\n

 

\r\n

since P(A|B) ≤ 1. Hence we get an upper bound of P(B)/(1-ε) on P(A). For instance, if P(B) is 0.001, and P(B|A) is at least 0.95, then P(A) can't be any larger than 0.001/0.95 = 0.001052...
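\r\n
A quick numeric check of this bound, using the same illustrative probabilities as in the example above (these are the post's sample numbers, not estimates specific to the case):

p_B = 0.001               # prior probability of B
p_B_given_A = 0.95        # strength of the implication from A to B
upper_bound_on_A = p_B / p_B_given_A
print(upper_bound_on_A)   # 0.0010526..., so P(A) can be at most about 0.105%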

\r\n

Actually, there's a simpler proof, direct from the definition of P(B|A), which goes like this: P(B|A) = P(A&B)/P(A), whence P(A) = P(A&B)/P(B|A) ≤ P(B)/P(B|A). (Note the use of the conjunction rule: P(A&B) ≤ P(B).)

\r\n

The statement

\r\n

(3)

\r\n

 P(A) ≤ P(B)/P(B|A)  

\r\n

is a quantitative version of  modus tollens, just as the equivalent statement 

\r\n

(4)

\r\n

P(B) ≥ P(A)P(B|A)

\r\n

is a quantitative version of  modus ponens. Assuming P(B|A) is high, what (4) says is that if P(A) is high, so is P(B); what (3) says is that if P(B) is low, so is P(A).

\r\n

Or, in other words, that the improbability  -- burden of proof -- of B is  transferred to, or  inherited by, A.

\r\n

...which means you cannot simultaneously believe that (1) Knox and Sollecito's staging of the burglary would be strong evidence of their guilt; (2) proving their guilt is hard; and (3) proving they staged the burglary is easy. Something has to give; hard work must be done somewhere.

\r\n

Of their 427-page report, Massei and Cristiani devote approximately 20 pages (mainly pp. 27-49) to their argument that the burglary was staged by Knox and Sollecito rather than being the work of known burglar Rudy Guede (including a strange section devoted to refuting the hypothesis that the burglary was staged by Guede). But think about it: if they were really able to demonstrate this, they would scarcely have needed to bother writing the remaining 400-odd pages of the report! For, if it is granted that Knox and Sollecito staged the burglary, then, in the absence of any other explanation for the staging (like November 1 being Annual Stage-Burglary Day for some group to which Knox or Sollecito belonged), it easily follows with conviction-level confidence that they were involved in a conspiracy that resulted in the death of Meredith Kercher. You would hardly need to bother with DNA, luminol, or the cell phone traffic of the various \"protagonists\". 

\r\n

Yet it doesn't appear that Massei and Cristiani have much conception of the burden they face in trying to prove something that would so strongly imply their hugely a-priori-improbable ultimate thesis. Their arguments purporting to show that Knox and Sollecito faked the burglary are quite weak -- and, indeed, are reminiscent of those used time and again by their lower-status counterparts, conspiracy theorists of all types, from 9/11 \"truthers\" to the-Moon-landing-was-faked-ists. Here's a sample, from p.39:

\r\n
\r\n

Additionally, the fragments of broken glass were scattered in a homogeneous manner on the internal and external windowsill, without any noticeable displacement and without any piece of glass being found on the surface below the window. This circumstance...rules out the possibility that the stone was thrown from outside the house to allow access inside via the window after the glass was broken. The climber, in leaning his hands and then his feet or knees on the windowsill, would have caused some of the glass to fall, or at least would have had to move some of the pieces lest they form a trap and cause injury. However, no piece of glass was found under the window and no sign of injury was discovered on the glass found in Romanelli's room.

\r\n
\r\n
\r\n
(The question to ask, when confronted with an argument like this, is: \"rules out\" with what confidence? If Massei and Cristiani think this is strong evidence against the hypothesis that the stone was thrown from outside the house, then that means they have a model that makes  highly specific predictions about the behavior of glass fragments when a stone is thrown from inside, versus when it is thrown from outside. Predictions which can be tested(4). This is one reason why  I advocate using numbers  in arguments; if Massei and Cristiani had been required to think carefully enough to give a number, that would have forced them to examine their assumptions more critically, rather than  stopping  on plausible-sounding arguments consistent with their already-arrived-at  bottom line.)
\r\n
\r\n
The impression one gets is that Massei and Cristiani thought, on some level, that all they needed to do was make the fake-burglary hypothesis  sound coherent  -- and that if they did so, that would count as a few points against Knox and Sollecito. They could then do the same thing with regard to the other pieces of evidence in the case, each time coming up with an explanation of the facts in terms of an assumption that Knox and Sollecito are guilty, and each time thereby scoring a few more points against them -- points which would presumably add up to a substantial number by the end of the report.
\r\n
\r\n
\r\n
But, of course, the mathematics of probability theory don't work that way. It's not enough for a hypothesis, such as that the apparent burglary in Filomena Romanelli's room was staged,  to merely be able to explain the data; it must do so  better than its negation. And,  in the absence of the assumption that Knox and Sollecito are guilty -- if we're presuming them to be innocent, as the law requires, or assigning a tiny prior probability to their guilt, as  epistemic rationality  requires -- this contest is rigged. The standards for \"explaining well\" that the fake-burglary hypothesis has to meet in order to be taken seriously are  much higher  than those that its negation has to meet, because of the dependence relation that exists between the fake-burglary question and the murder question. Any hypothesis that requires the assumption that Knox and Sollecito are guilty of murder inherits the full \"explanatory inefficiency penalty\" (i.e. prior improbability) of the latter proposition.
\r\n
\r\n
If A implies B, then not-B implies not-A. It goes both ways.
\r\n
\r\n

 

\r\n
\r\n


\r\n

Notes

\r\n

(1) Some  pro-guilt advocates  have apparently produced a translation, but I haven't looked at it and can't vouch for it. Translations of passages appearing in this post are my own.

\r\n

(2) One of whom, incidentally, is  known  to be enjoying  Harry Potter and the Methods of Rationality  -- so take  that  for whatever it's worth

\r\n

(3) From p. 422 of the report: 

\r\n
\r\n

The criminal acts turned out to be the result of purely accidental circumstances which came together to create a situation which, in the combination of the various factors, made the crimes against Meredith possible: Amanda and Raffaele, who happened to find themselves without any commitments, randomly met up with Rudy Guede (there is no trace of any planned appointment), and found themselves together in the house on Via Della Pergola where, that very evening, Meredith was alone. A crime which came into being, therefore, without any premeditation, without any animosity or rancorous feeling toward the victim...

\r\n
\r\n

(4) And sure enough, during the trial, the defense hired a ballistics expert who  conducted experiments  showing that a rock thrown from the outside would produce patterns of glass, etc. similar to what was found at the scene -- results which forced the prosecutors to admit that the rock was probably thrown from the outside, but which were simply  ignored  by Massei and Cristiani! (See p. 229 of  Sollecito's appeal document, if you can read Italian.)

" } }, { "_id": "gzbKnX5foFPNb47ef", "title": "Compatibilism in action", "pageUrl": "https://www.lesswrong.com/posts/gzbKnX5foFPNb47ef/compatibilism-in-action", "postedAt": "2010-11-23T17:58:04.506Z", "baseScore": -5, "voteCount": 9, "commentCount": 19, "url": null, "contents": { "documentId": "gzbKnX5foFPNb47ef", "html": "

A practical albeit fictional application of the philosophical conclusion that free will is compatible with determinism came up today in a discussion about a setting element from the role-playing game Exalted

\n

(5:31:44 PM) Nekira Sudacne: So during the primordial war, one Yozi got his fetch killed and he reincarnated as Sachervell, He Who Knows The Shape of Things To Come. And he reincarnated asleep. And he has remained asleep. And the other primordials do all in their power to keep him asleep. And he wants to be asleep.

\n

For you see, for as long as he sleeps, he dreams only of the present. Should he awaken, he will see the totality of existence, all things past and future exactly as they will happen. Quantumly speaking, he will lock the universe into a single shape. All things that happen will happen as he sees them happen, and there will be no chance for anyone to change it, effectively nullifying any chance for change. Even he cannot alter his vision, for his vision takes into account all attempts to alter it.

\n

And there's a big debate over whether or not this is a game-ending thing. Essentially, does predestination negate free will or not?

\n

(5:32:17 PM) Nekira Sudacne: and this is important, because one of the requirements for Exaltation to function is free will. If Sachervell is able to negate free will, then Exaltations will cease to function

\n

(5:32:44 PM) Nekira Sudacne: and maddeningly enough the game authors are also on the thread arguing, because THEY don't agree where to go with it either :) 

\n

(5:38:02 PM) rw271828: ah, well I happen to know the answer :-)

\n

(5:39:23 PM) rw271828: one of the most important discoveries of 20th-century mathematics is that in general the behavior of a complex system cannot be predicted -- or rather, there is no easier way to predict it than to run it and see what happens. Note in particular:

\n

(5:39:41 PM) rw271828: 1. This is a mathematical fact, so it applies in all possible universes, including Exalted

\n

(5:40:01 PM) rw271828: 2. Humans and other sentient lifeforms are complex systems in the relevant sense

\n

(5:41:33 PM) rw271828: so if you postulate an entity that can actually see the future (as opposed to just extrapolate what is likely to happen unless something intervenes), the only way to do that is for that entity to run a perfect simulation, a complete copy of the universe 

\n

(5:42:50 PM) rw271828:  if you're willing to postulate that, well fine, continue the game, and just note that you are running it in the copy the entity is using to make the prediction - the people in the setting still have free will, it is their actions that determine the future, and thus the result of the prediction ^.^

\n

(5:43:04 PM) Nekira Sudacne: Hah. nice one

" } }, { "_id": "CCr3QJxNo3cQokReW", "title": "New comments on the recent psi study", "pageUrl": "https://www.lesswrong.com/posts/CCr3QJxNo3cQokReW/new-comments-on-the-recent-psi-study", "postedAt": "2010-11-23T15:52:36.992Z", "baseScore": 20, "voteCount": 17, "commentCount": 7, "url": null, "contents": { "documentId": "CCr3QJxNo3cQokReW", "html": "

HT reddit/r/science: http://www.ruudwetzels.com//articles/Wagenmakersetal_subm.pdf

\n

Probably nobody is surprised here, but I thought one might be interested.

" } }, { "_id": "GDG4FdagtkHwg3jjq", "title": "Fiction: Tasmanian Devil ch. 1", "pageUrl": "https://www.lesswrong.com/posts/GDG4FdagtkHwg3jjq/fiction-tasmanian-devil-ch-1", "postedAt": "2010-11-23T15:49:07.296Z", "baseScore": 1, "voteCount": 10, "commentCount": 2, "url": null, "contents": { "documentId": "GDG4FdagtkHwg3jjq", "html": "

Tasmanian Devil is a story of college love, mystery, magic, logic and ethics. It parodies human nature to the core. It can get very dark and is loosely based on rationality.

\n

Read Chapter 1 here. (PG-13; warnings: very dark themes, a few vulgar words) 

\n

Teaser:

\n
\n

Baby tooth was a market in which there were several sellers but only one buyer. Economics textbooks called this type of market monopsony. Theory predicted that, due to her bargaining power, the Tooth Fairy could be bitchy all she wanted, buying teeth at a very low price if she wanted to. However, according to Penny’s research, the Tooth Fairy was generous enough to pay up to $6 per tooth! Why?

\n
\n

 

" } }, { "_id": "v76DQ9hWkT9WEyRgs", "title": "Rationality is Not an Attractive Tribe", "pageUrl": "https://www.lesswrong.com/posts/v76DQ9hWkT9WEyRgs/rationality-is-not-an-attractive-tribe", "postedAt": "2010-11-23T14:08:33.563Z", "baseScore": 16, "voteCount": 21, "commentCount": 107, "url": null, "contents": { "documentId": "v76DQ9hWkT9WEyRgs", "html": "

Summary: I wonder how attractive rationality, as a tribe and worldview, is to the average person, when the competition is not constrained by verifiability or consistency and is therefore able to optimize around offering imaginary status superstimuli to its adherents.

\n


\n
\n

Anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'

\n

— Isaac Asimov

\n
\n

I've long been puzzled by the capability of people to reject obvious conclusions and opt for convoluted arguments that boil down to logical fallacies when it comes to defending a belief they have a stake in. When someone resists doing the math, despite an obvious capability to do so in other similar cases, we are right to suspect external factors at play. A framework that seems congruent with the evolutionary history of our species is that of beliefs as signals of loyalty to a tribe. Such a framework would explain the rejection of evolution and other scientific theories by large swathes of the world's population, especially religious populations, despite access to a flood of evidence in support. 

\n

I will leave support of the tribal signalling framework to others, and examine the consequences for popular support of rationality and science if indeed such a framework successfully approximates reality. The best way I can do that is by examining one popular alternative: The Christian religion which I am most familiar with, in particular its evangelical protestant branch. I am fairly confident that this narrative can be ported to other branches of Christianity and abrahamic faiths fairly easily and the equivalents for other large religions can be constructed with some extra effort.

\n
\n

\"Blessed are the meek, for they will inherit the earth\"

\n

— The Bible (New International Version), Matthew 5:5

\n
\n

What is the narrative that an evangelical Christian buys into regarding their own status? They belong to the 'chosen people', worshipping a god that loves them, personally, created them with special care, and has a plan for their individual lives. They are taking part in a battle with absolute evil, that represents everything disgusting and despicable, which is manifested in the various difficulties they face in their lives. The end-game however is known. The believers, once their faith is tested in this world, are destined for an eternity of bliss with their brethren in the presence of their god, while the enemies will be thrown in the eternal fire for eternal torment. In this narrative, the disadvantaged in this life are very important. There exist propositions which can be held with absolute certainty. This presents a black-white divide in which moral judgements are easy, as long as corner cases can be swept under the rug. Each and every person, regardless of their social standing or capability, can be of utmost importance. Everyone can potentially save a soul for all eternity! In fact, the gospels place emphasis on the humble and the meek:

\n
\n

So those who are last now will be first then, and those who are first will be last.

\n

— The Bible (New International Version), Matthew 20:16

\n
\n

What is the rational alternative to this battle-hardened, well-optimized worldview? That there is no grand narrative. If such a narrative exists (pursuit of truth, combating existential risk, <insert yours here>), the stars of this narrative are those blessed with intelligence and education such that they can digest the background material and make these pursuits on the cutting edge. It turns out, your enemies are not innately evil, either. You may have just misunderstood each other. You have to constantly struggle to fight your own biases, to no certain outcome. In fact, we are to hold no proposition with 100% certainty. On the plus side, science and rationality offer, or at least aspire to offer, a consistent worldview free from cognitive dissonance for those that can detect the alternative's contradictions. On the status side, for those of high intelligence, it puts them at the top of the hierarchy, being in the line of the great heroes of thought that have gone before, uncovering all the knowledge we have so far. But this is not hard to perceive as elitism, especially since the barriers to entry are difficult, if not impossible, to overcome for the vast majority of humans. Rationality may have an edge if it can be shown to improve an individual's life prospects. I am not aware of such research, especially research that untangles rationality from intelligence. Perhaps the most successful example, pick-up artists, are off-limits for this community because their terminals are deemed offensive. While we define rationality as the way to win, the win that we focus on in this community is a collective one, therefore unlikely to confer high status on an individual in the meantime if this individual does not belong to the intellectually gifted few.

\n

So what does rationality have to offer to the common man to gain their support? The role of hard-working donor, whose contribution is in a replaceable commodity, e.g. money? The role of passive consumer of scientific products and documentaries? It seems to me that in the marketplace of worldview-tribes, rationality and science do not present themselves as an attractive option for large swathes of the earth's population, and why would they? They were never developed as such. To make things worse, the alternatives have millennia of cultural evolution to better fit their targets, unconstrained by mundane burdens such as verifiability and consistency. I can perfectly see the attraction of the 'rational irrationality' point of view where someone compartmentalises rationality into result-sensitive 'get things done' areas, while choosing to affirm unverifiable and/or incoherent propositions that nevertheless superstimulate one's feel-good status receptors.

\n

I see two routes here: The one is that we decide that popular support is not necessary. We focus our recruiting efforts on the upper strata of intelligence and influence. If it's a narrative that they need, we can't help them. We're in the business of providing raw truth. Humans are barely on the brink of general intelligence, anyway. A recent post claimed that an IQ of 130+ was practically a prerequisite for appreciating and comprehending the sequences. The truths are hard to grasp and inconvenient, but ultimately it doesn't matter if a narrative can be developed for the common person. They can keep believing in creationism, and we'll save humanity for them anyway.

\n

On the other hand, just because the scientific/rational worldview has not been fitted to the common man, it doesn't mean it can't be done. (But there is no guarantee that it can.) The alternative is to explore the open avenues that may lead to a more palatable narrative, including popularising many of the rationality themes that are articulated in this community. People show interest when I speak to them about cognitive biases, but I have no accessible resources to give them that would start from there as a beachhead and progress into other more esoteric topics. And I don't find it incredible that rationality could provably aid in better individual outcomes; we just need solid research around the proposition. (The effects of various valleys of bad rationality or shifts in terminals due to rationality exposure may complicate that).

\n

I am not taking a position on which course of action is superior, or that these are the only alternatives. But it does seem to me that, if my reasoning and assumptions are correct, we have to make up our mind on what exactly it is we want to do as the Less Wrong community.

\n

Edit/Note: I initially planned for this to be posted as a normal article, but seeing how the voting is... equal in both directions, and that there is a lively discussion developing, I think this article is just fine in the discussion section.

" } }, { "_id": "sCfnBJBjLrKigN98z", "title": "Advanced neurotypicallity", "pageUrl": "https://www.lesswrong.com/posts/sCfnBJBjLrKigN98z/advanced-neurotypicallity", "postedAt": "2010-11-22T23:34:01.635Z", "baseScore": 9, "voteCount": 8, "commentCount": 0, "url": null, "contents": { "documentId": "sCfnBJBjLrKigN98z", "html": "

A description of auditioning

\n

--

\n
As an auditioner, this means I try to respect the people behind the table, be genuine and keep them from being bored. I want them to know that I appreciate their efforts, know that their side of the table is awkward too, and thank them for seeing me. And a lot of this, I have to show, don't tell. It's hard. Especially when you've also got to show up with the skills (also, seriously, it's weird do be affable and connected and then be Lady Anne, because she's a lot of things, but affable not so much). If you're auditioning for something, and especially if you're new to auditioning, often, if you're like me, you'll consider your odds of getting cast, and your computations will be quite grim. Well let me tell you something, stop that right now. Because if you can come into the room, say hello to me, make chit-chat for 30 seconds and do your monologue actually facing the table -- you are so ahead of the game. If you haven't sat behind the table, you think I'm joking, but I'm not. I've had people do monologues with their back to me because, they explained, they were nervous. I've had people build a jury box out of chairs (while my mouth hung open) and then proceed to do a spot-on imitation of Jack Nicholson in A Few Good Men. There was the guy with the gun. The people who brought their boyfriends (fine as a safety precaution if I'm auditioning you in a non-standard space; a complete distraction if I'm auditioning you at a rehearsal studio and they want in the room with you).
\n

I found that interesting because I'm apt to space out around people, and the passage is by someone who's \"on\" a lot more of the time.

" } }, { "_id": "3E5HAQ45etewgmhn9", "title": "Morality in Fantasy Worlds", "pageUrl": "https://www.lesswrong.com/posts/3E5HAQ45etewgmhn9/morality-in-fantasy-worlds", "postedAt": "2010-11-22T21:37:47.173Z", "baseScore": 8, "voteCount": 6, "commentCount": 22, "url": null, "contents": { "documentId": "3E5HAQ45etewgmhn9", "html": "

This comment by Carinthium in my earlier post has got me thinking, and my apologies if this subject has been raised before - I couldn't find it.

\n
\n

1- In a hypothetical world where God actually existed, it would be an OBJECTIVE belief and thus it would be worth saving people. (Just as with people killing themselves with harmful drugs under almost all real circumstances)

\n
\n

\n

I admit I haven't really thought much about such a topic, so perhaps it deserves a separate discussion.
\n
Imagine you live in a fantasy world (like one of the Dungeons & Dragons universes) where the god you worship places you into a better afterlife for following virtues that it considers moral. Suppose that we have evidence for the existence of the good and bad afterlives - testimony of resurrected people, etc. - so we know it's not just a groundless threat. Some people would, naturally, not care and engage in behavior that would land them in the bad afterlife. Would it be moral or rational to try and save them from their own folly, or to chalk it up to personal choice instead?
\n
Things get even more complicated when there are multiple gods, each with its own concept of morality, and you have a choice between them. But that's a whole separate can of worms...
\n

" } }, { "_id": "YcPRdQwgoSFeev6Df", "title": "Volitive Rationality", "pageUrl": "https://www.lesswrong.com/posts/YcPRdQwgoSFeev6Df/volitive-rationality", "postedAt": "2010-11-22T19:26:30.131Z", "baseScore": 2, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "YcPRdQwgoSFeev6Df", "html": null } }, { "_id": "uh7gTTZqmtbku6z6g", "title": "Psychopathy and the Wason Selection Task", "pageUrl": "https://www.lesswrong.com/posts/uh7gTTZqmtbku6z6g/psychopathy-and-the-wason-selection-task", "postedAt": "2010-11-22T14:00:31.150Z", "baseScore": 26, "voteCount": 19, "commentCount": 12, "url": null, "contents": { "documentId": "uh7gTTZqmtbku6z6g", "html": "

The Wason Selection Task is the somewhat famous experimental problem that requires attempting to falsify a hypothesis in order to get the correct answer. From the wikipedia article:

\n
\n

You are shown a set of four cards placed on a table, each of which has a number on one side and a colored patch on the other side. The visible faces of the cards show 3, 8, red and brown. Which card(s) should you turn over in order to test the truth of the proposition that if a card shows an even number on one face, then its opposite face is red?

\n
\n

Aside from an illustration of the rampancy of confirmation bias (only 10-20% of people get it right), the task is interesting for another reason: when framed in terms of social interactions, people's performance dramatically improves:

\n
\n

For example, if the rule used is \"If you are drinking alcohol then you must be over 18\", and the cards have an age on one side and beverage on the other, e.g., \"17\", \"beer\", \"22\", \"coke\", most people have no difficulty in selecting the correct cards (\"17\" and \"beer\").

\n
\n
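For the abstract version, the correct choice can be worked out mechanically: a card needs turning only if some hidden face could make it a counterexample. Here is a minimal sketch of that check (my own illustration in Python, not something from the post or the cited papers):

```python
# Which cards must be turned over to test 'if a card shows an even number,
# then its opposite face is red'?  A card needs checking only if its hidden
# face could turn it into a counterexample (even number AND non-red face).

def needs_turning(visible):
    if isinstance(visible, int):
        # An odd number can never be part of a counterexample; an even one
        # can, if the hidden colour turns out not to be red.
        return visible % 2 == 0
    # A red face can never be part of a counterexample; any other colour
    # can, if the hidden number turns out to be even.
    return visible != 'red'

cards = [3, 8, 'red', 'brown']
print([card for card in cards if needs_turning(card)])  # -> [8, 'brown']
```

The same check applied to the drinking-age framing picks out '17' and 'beer', which is exactly the version most people get right.

\n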

However, apparently psychopaths perform nearly as badly on the \"social contract\" versions of this experiment as they do on the normal one. From the Economist:

\n
\n

For problems cast as social contracts or as questions of risk avoidance, by contrast, non-psychopaths got it right about 70% of the time. Psychopaths scored much less—around 40%—and those in the middle of the psychopathy scale scored midway between the two.

\n
\n

The original (gated) research appears to be here.

" } }, { "_id": "FJkfboEQjN3ritXk9", "title": "Does cognitive therapy encourage bias?", "pageUrl": "https://www.lesswrong.com/posts/FJkfboEQjN3ritXk9/does-cognitive-therapy-encourage-bias", "postedAt": "2010-11-22T11:31:53.303Z", "baseScore": 16, "voteCount": 12, "commentCount": 19, "url": null, "contents": { "documentId": "FJkfboEQjN3ritXk9", "html": "
Summary: Cognitive therapy may encourage motivated cognition. My main source for this post is Judith Beck's Cognitive Therapy: Basics and Beyond
\r\n

\"Cognitive behavioral therapy\" (CBT) is a catch-all term for a variety of therapeutic practices and theories. Among other things, it aims to teach patients to modify their own beliefs. The rationale seems to be this:

\r\n

(1) Affect, behavior, and cognition are interrelated such that changes in one of the three will lead to changes in the other two. 

\r\n

(2) Affective problems, such as depression, can thus be addressed in a roundabout fashion: modifying the beliefs from which the undesired feelings stem.

\r\n

So far, so good. And how does one modify destructive beliefs? CBT offers many techniques.

\r\n

Alas, included among them seems to be motivated skepticism. For example, consider a depressed college student. She and her therapist decide that one of her bad beliefs is \"I'm inadequate.\" They want to replace that bad one with a more positive one, namely, \"I'm adequate in most ways (but I'm only human, too).\" Their method is to do a worksheet comparing evidence for and against the old, negative belief. Listen to their dialog:

\r\n

[Therapist]: What evidence do you have that you're inadequate?

\r\n

[Patient]: Well, I didn't understand a concept my economics professor presented in class today.

\r\n

T: Okay, write that down on the right side, then put a big \"BUT\" next to it...Now, let's see if there could be another explanation for why you might not have understood the concept other than that you're inadequate.

\r\n

P: Well, it was the first time she talked about it. And it wasn't in the readings.

\r\n

Thus the bad belief is treated with suspicion. What's wrong with that? Well, see what they do about evidence against her inadequacy:

\r\n

 T: Okay, let's try the left side now. What evidence do you have from today that you are adequate at many things? I'll warn you, this can be hard if your screen is operating.

\r\n

P: Well, I worked on my literature paper.

\r\n

T: Good. Write that down. What else?

\r\n

(pp. 179-180; ellipsis and emphasis both in the original)

\r\n

When they encounter evidence for the patient's bad belief, they investigate further, looking for ways to avoid inferring that she is inadequate. However, when they find evidence against the bad belief, they just chalk it up.

\r\n

This is not how one should approach evidence...assuming one wants correct beliefs.

\r\n

So why does Beck advocate this approach? Here are some possible reasons.

\r\n

A. If beliefs are keeping you depressed, maybe you should fight them even at the cost of a little correctness (and of the increased habituation to motivated cognition).

\r\n

B. Depressed patients are already predisposed to find the downside of any given event. They don't need help doubting themselves. Therefore, therapists' encouraging them to seek alternative explanations for negative events doesn't skew their beliefs. On the contrary, it helps to bring the depressed patients' beliefs back into correspondence with reality.

\r\n

C. Strictly speaking, this motivated cognition does not lead to false beliefs because beliefs of the form \"I'm inadequate,\" along with its more helpful replacement, are not truth-apt. They can't be true or false. After all, what experiences do they induce believers to anticipate? (If this were the rationale, then what would the sense of the term \"evidence\" be in this context?)

\r\n

What do you guys think? Is this common to other CBT authors as well? I've only read two other books in this vein (Albert Ellis and Robert A. Harper's A Guide to Rational Living and Jacqueline Persons' Cognitive Therapy in Practice: A Case Formulation Approach) and I can't recall either one explicitly doing this, but I may have missed it. I do remember that Ellis and Harper seemed to conflate instrumental and epistemic rationality.

\r\n

Edit: Thanks a lot to Vaniver for the help on link formatting.

\r\n
" } }, { "_id": "qCYqcrNKzKFeskt2q", "title": "Does cognitive therapy encourage bias?", "pageUrl": "https://www.lesswrong.com/posts/qCYqcrNKzKFeskt2q/does-cognitive-therapy-encourage-bias-1", "postedAt": "2010-11-22T11:11:53.201Z", "baseScore": 2, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "qCYqcrNKzKFeskt2q", "html": "
\r\n

 

\r\n

(This post might suffer from formatting problems. I'm pretty dumb with computers, so it's not a surprise to me, but if anyone out there knows how to fix it, I'd be grateful for the help.)

\r\n
Summary: Cognitive therapy may encourage [motivated cognition](http://lesswrong.com/lw/km/motivated_stopping_and_motivated_continuation/). My main source for this post is Judith Beck's [Cognitive Therapy: Basics and Beyond](http://www.amazon.com/Cognitive-Therapy-Judith-Beck-Phd/dp/0898628474/ref=sr_1_1?s=books&ie=UTF8&qid=1290418167&sr=1-1)
\r\n

\"[Cognitive behavioral therapy](http://en.wikipedia.org/wiki/Cognitive_behavioral_therapy)\" (CBT) is a catch-all term for a variety of therapeutic practices and theories. Among other things, it aims to teach patients to modify their own beliefs. The rationale seems to be this:

\r\n

(1) Affect, behavior, and cognition are interrelated such that changes in one of the three will lead to changes in the other two. 

\r\n

(2) Affective problems, such as depression, can thus be addressed in a roundabout fashion: modifying the beliefs from which the undesired feelings stem.

\r\n

So far, so good. And how does one modify destructive beliefs? CBT offers many techniques.

\r\n

Alas, included among them seems to be motivated skepticism. For example, consider a depressed college student. She and her therapist decide that one of her bad beliefs is \"I'm inadequate.\" They want to replace that bad one with a more positive one, namely, \"I'm adequate in most ways (but I'm only human, too).\" Their method is to do a worksheet comparing evidence for and against the old, negative belief. Listen to their dialog:

\r\n

[Therapist]: What evidence do you have that you're inadequate?

\r\n

[Patient]: Well, I didn't understand a concept my economics professor presented in class today.

\r\n

T: Okay, write that down on the right side, then put a big \"BUT\" next to it...Now, let's see if there could be another explanation for why you might not have understood the concept other than that you're inadequate.

\r\n

P: Well, it was the first time she talked about it. And it wasn't in the readings.

\r\n

Thus the bad belief is treated with suspicion. What's wrong with that? Well, see what they do about evidence against her inadequacy:

\r\n

 T: Okay, let's try the left side now. What evidence do you have from today that you are adequate at many things? I'll warn you, this can be hard if your screen is operating.

\r\n

P: Well, I worked on my literature paper.

\r\n

T: Good. Write that down. What else?

\r\n

(pp. 179-180; ellipsis and emphasis both in the original)

\r\n

When they encounter evidence for the patient's bad belief, they investigate further, looking for ways to avoid inferring that she is inadequate. However, when they find evidence against the bad belief, they just chalk it up.

\r\n

This is not how one should approach evidence...assuming one wants correct beliefs.

\r\n

So why does Beck advocate this approach? Here are some possible reasons.

\r\n

A. If beliefs are keeping you depressed, maybe you should fight them even at the cost of a little correctness (and of the increased habituation to motivated cognition).

\r\n

B. Depressed patients are already predisposed to find the downside of any given event. They don't need help doubting themselves. Therefore, therapists' encouraging them to seek alternative explanations for negative events doesn't skew their beliefs. On the contrary, it helps to bring the depressed patients' beliefs back into correspondence with reality.

\r\n

C. Strictly speaking, this motivated cognition does not lead to false beliefs because beliefs of the form \"I'm inadequate,\" along with its more helpful replacement, are not truth-apt. They can't be true or false. After all, what experiences do they induce believers to [anticipate] (http://lesswrong.com/lw/i3/making_beliefs_pay_rent_in_anticipated_experiences/)? (If this were the rationale, then what would the sense of the term \"evidence\" be in this context?)

\r\n

What do you guys think? Is this common to other CBT authors as well? I've only read two other books in this vein (Albert Ellis and Robert A. Harper's [A Guide to Rational Living](http://www.amazon.com/Guide-Rational-Living-Albert-Ellis/dp/0879800429/ref=sr_1_1?s=books&ie=UTF8&qid=1290418180&sr=1-1) and Jacqueline Persons' [Cognitive Therapy in Practice: A Case Formulation Approach](http://www.amazon.com/Cognitive-Therapy-Practice-Formulation-Approach/dp/0393700771/ref=sr_1_2?s=books&ie=UTF8&qid=1290420954&sr=1-2)) and I can't recall either one explicitly doing this, but I may have missed it. I do remember that Ellis and Harper seemed to conflate instrumental and epistemic rationality.

\r\n
" } }, { "_id": "xnoHu4qZavy9WxNY2", "title": "Does cognitive therapy encourage bias?", "pageUrl": "https://www.lesswrong.com/posts/xnoHu4qZavy9WxNY2/does-cognitive-therapy-encourage-bias-0", "postedAt": "2010-11-22T09:52:44.215Z", "baseScore": 4, "voteCount": 3, "commentCount": 1, "url": null, "contents": { "documentId": "xnoHu4qZavy9WxNY2", "html": "

Summary: Cognitive therapy may encourage [motivated cognition](http://lesswrong.com/lw/km/motivated_stopping_and_motivated_continuation/). My main source for this post is Judith Beck's [Cognitive Therapy: Basics and Beyond](http://www.amazon.com/Cognitive-Therapy-Judith-Beck-Phd/dp/0898628474/ref=sr_1_1?s=books&ie=UTF8&qid=1290418167&sr=1-1)

\r\n

\"[Cognitive behavioral therapy](http://en.wikipedia.org/wiki/Cognitive_behavioral_therapy)\" (CBT) is a catch-all term for a variety of therapeutic practices and theories. Among other things, it aims to teach patients to modify their own beliefs. The rationale seems to be this:

\r\n

(1) Affect, behavior, and cognition are interrelated such that changes in one of the three will lead to changes in the other two. 

\r\n

(2) Affective problems, such as depression, can thus be addressed in a roundabout fashion: modifying the beliefs from which the undesired feelings stem.

\r\n

So far, so good. And how does one modify destructive beliefs? CBT offers many techniques.

\r\n

Alas, included among them seems to be motivated skepticism. For example, consider a depressed college student. She and her therapist decide that one of her bad beliefs is \"I'm inadequate.\" They want to replace that bad one with a more positive one, namely, \"I'm adequate in most ways (but I'm only human, too).\" Their method is to do a worksheet comparing evidence for and against the old, negative belief. Listen to their dialog:

\r\n

[Therapist]: What evidence do you have that you're inadequate?

\r\n

[Patient]: Well, I didn't understand a concept my economics professor presented in class today.

\r\n

T: Okay, write that down on the right side, then put a big \"BUT\" next to it...Now, let's see if there could be another explanation for why you might not have understood the concept other than that you're inadequate.

\r\n

P: Well, it was the first time she talked about it. And it wasn't in the readings.

\r\n

Thus the bad belief is treated with suspicion. What's wrong with that? Well, see what they do about evidence against her inadequacy:

\r\n

 T: Okay, let's try the left side now. What evidence do you have from today that you are adequate at many things? I'll warn you, this can be hard if your screen is operating.

\r\n

P: Well, I worked on my literature paper.

\r\n

T: Good. Write that down. What else?

\r\n

(pp. 179-180; ellipsis and emphasis both in the original)

\r\n

When they encounter evidence for the patient's bad belief, they investigate further, looking for ways to avoid inferring that she is inadequate. However, when they find evidence against the bad belief, they just chalk it up.

\r\n

This is not how one should approach evidence...assuming one wants correct beliefs.

\r\n

So why does Beck advocate this approach? Here are some possible reasons.

\r\n

A. If beliefs are keeping you depressed, maybe you should fight them even at the cost of a little correctness (and of the increased habituation to motivated cognition).

\r\n

B. Depressed patients are already predisposed to find the downside of any given event. They don't need help doubting themselves. Therefore, therapists' encouraging them to seek alternative explanations for negative events doesn't skew their beliefs. On the contrary, it helps to bring the depressed patients' beliefs back into correspondence with reality.

\r\n

C. Strictly speaking, this motivated cognition does not lead to false beliefs because beliefs of the form \"I'm inadequate,\" along with its more helpful replacement, are not truth-apt. They can't be true or false. After all, what experiences do they induce believers to [anticipate] (http://lesswrong.com/lw/i3/making_beliefs_pay_rent_in_anticipated_experiences/)? (If this were the rationale, then what would the sense of the term \"evidence\" be in this context?)

\r\n

What do you guys think? Is this common to other CBT authors as well? I've only read two other books in this vein (Albert Ellis and Robert A. Harper's [A Guide to Rational Living](http://www.amazon.com/Guide-Rational-Living-Albert-Ellis/dp/0879800429/ref=sr_1_1?s=books&ie=UTF8&qid=1290418180&sr=1-1) and Jacqueline Persons' [Cognitive Therapy in Practice: A Case Formulation Approach](http://www.amazon.com/Cognitive-Therapy-Practice-Formulation-Approach/dp/0393700771/ref=sr_1_2?s=books&ie=UTF8&qid=1290420954&sr=1-2)) and I can't recall either one explicitly doing this, but I may have missed it. I do remember that Ellis and Harper seemed to conflate instrumental and epistemic rationality.

" } }, { "_id": "ihzDAw6shkSKqM4fj", "title": "The Self-Reinforcing Binary", "pageUrl": "https://www.lesswrong.com/posts/ihzDAw6shkSKqM4fj/the-self-reinforcing-binary", "postedAt": "2010-11-22T09:01:06.419Z", "baseScore": -5, "voteCount": 15, "commentCount": 28, "url": null, "contents": { "documentId": "ihzDAw6shkSKqM4fj", "html": "

I originally wrote this post for my own blog, but after discovering Less Wrong, I've thought that it might make sense to submit it here.

\n

The late 20th and early 21st centuries have been rich in concepts beginning with \"post-\": postindustrial society, postmodernism, post-theism, postgenderism, posthumanism... The opinions on these, as well as the larger trends behind them all, are of course divided, but if anything, this only illustrates the point I'm trying to make.

\n

I think that what happened is that as the barriers of communication fell down, as we learned more about different cultures and lifestyles, so did we realize that many social concepts formerly thought of as absolute and rigid actually weren't. It will take another generation, or perhaps more than one, just to process this very idea to its fullest. We have come to realize that concepts and ideas, real or fictional, live in the historical and cultural context of their creators, and can only be fully understood in a relative rather than absolute way. No matter how many times literary critics say \"death of the author\", you can't abstract away from the fact that George Orwell had the political trends of early-to-mid-20th century in mind when he wrote 1984, or that J.R.R. Tolkien's Catholic beliefs influenced the cosmology and tone of The Silmarillion and The Lord of the Rings.

\n

Social ideas and norms are much the same way. Appeal to tradition, \"it has always been that way\", is just about the worst argument you can make when defending an existing social custom, right next to \"God decrees so\". Even if the God you believe in tells you that someone will go to Hell for the terrible, terrible moral crime of enjoying sex without the intent of procreation, it's not your business to try and \"save\" them. Just act yourself the way your beliefs dictate. Hence the \"post-\": not in the sense of rejection, but in the sense of outgrowing. A post-theistic society is not an atheistic society, but merely one that got over theism, a society where religion is a matter of personal choice rather than a shaping force in politics.

\n

And yes, I realize that my own writing is influenced by my atheist bias, conscious and unconscious. While I cannot fully abstract from them, I can be made aware of them; let the unconscious become conscious.

\n

So how does it all relate to the gender binary? Well, the way I see it, gender roles and religious dogmas have a lot in common: they are self-propagating memes. A good example to illustrate the problem is the origin of the Russian word for bear, \"medved'\". It literally meant \"honey eater\" in Old Slavic and was originally created as a euphemism, because the real name of the animal was taboo. However, over time, this fact was forgotten and \"medved'\" became the only known name, and thus itself considered something to be avoided by superstitious hunters. Religious fundamentalists take the words of their prophets and saints dropped here and there throughout their lives, often out of context, and declare them absolute, immutable truth. Proponents of the gender binary take emergent prejudices that shaped themselves due to a combination of circumstances, sometimes mind-bogglingly arbitrary, and declare them gospel. In any case, we are faced with codification, with social expectations and taboos shaped by minutiae. It's like if a fictional character had their complexity stripped away and became defined by a single trait based on something they vaguely did in that one episode. Oh wait.

\n

What originally prompted this post was a paragraph I saw while reading Andrew Rilstone's commentary on some common themes and tropes in fiction, namely, the points made by Joseph Campbell's The Hero with a Thousand Faces (itself subjected to gospelization: while Campbell himself was only writing about common themes in a distinct kind of story, some of his followers went so far as to claim that the structure he pointed out was inherent in every story ever written). After a series of posts making logical arguments, the latest of which contrasted stories where the hero returned home with a boon from the travels with stories where the hero reached their destination and stayed there, posts that had me going \"Yes, yes, that's exactly it!\", I suddenly stumbled upon this non sequitur.

\n
When I did literary theory at college, it was a truism that stories in which someone set forth to achieve something – stories which rushed headlong to a dramatic conclusion – were Male (and therefore bad). Stories which reached no final conclusion, which described a state of being, which cycled back to the beginning and achieved multiple climaxes were Female (and therefore good). The cleverer students, the ones with berets, went so far as to claim that the whole idea of stories – in fact the whole idea of writing in sentences -- was dangerously \"phallocentric\". But one does take the point that boys' stories like Moby Dick have beginnings, middles and ends in a way that girls' stories like Middlemarch really don't. The soap opera, which is all middle, is the female narrative form par excellence. You would search in vein for a monomyth in Coronation Street.
\n

For a minute, I just blinked at the text in silence, trying to make any sense out of it. Wikipedia defines a truism as \"a claim that is so obvious or self-evident as to be hardly worth mentioning, except as a reminder or as a rhetorical or literary device\". In other words, the author took this piece of essentialist drivel for granted so much that he assumed everyone else shared it.

\n

Which made me think: what, exactly, causes people to assign concepts to genders in such an utterly arbitrary fashion? The answer, I believe, lies in the pervasive, all-encompassing nature of the gender binary. The human society, we are taught from infancy, consists of men and women. We know - some of us, anyway - that it's merely an approximation in the same sense that Newtonian physics are an approximation of relativistic physics and the real world, one that is valid for most everyday uses but fails when we broaden the horizons of our knowledge. But the idea is tempting. After all, ideas, as Christopher Nolan helpfully points out, are the most persistent kind of infection known to humanity.

\n

And as such, when we encounter a new kind of idea (in this case, a binary), it is tempting to explain it in terms of another binary we know, even if the analogy makes no sense. The actual mapping is often hard to explain rationally. Ancient pagans knew about the day/night binary and its corresponding celestial bodies. As such, in many mythologies around the world, the gods or personifications of the Sun and the Moon are of different genders, but it varies which is which. On one hand, we have Helios and Selene, Apollo and Artemis; on the other, Sól and Máni, who no doubt influenced Tolkien's Arien and Tilion.

\n

Sometimes, it's not random. The earliest known examples of gender roles in prehistoric tribes, and such basic dichotomies as hard/soft, strong/weak, big/small, outward/inward, are probably influenced by real physical differences. From there, it kept fracturing, expanding since then. Perhaps many concepts declared \"masculine\" or \"feminine\" were not assigned randomly, but based on associations with existing concepts already sorted into the binary. The gender binary was not static, but, as geekfeminism.org pointed out, a fractal with internalized sexism (for example, while science itself is considered a \"masculine\" career, there are individual sciences perceived as predominantly masculine or feminine, etc.; even feminism itself could have contributed to such perceptions, if the \"hairy-legged man-hater\" stereotype is any indication). And not just a static fractal, but an ever-expanding, path-dependent chain of associations that solidified over time; what might first have been a helpful rhetorical device became unquestionable taboo.

\n

What can be done to break this pattern? Feminism contributes to the reverse process of conflation, of removing gender association stigma from logically unrelated concepts. But a true breakdown of the binary, I believe, will only happen when people en masse change their fundamental patterns of thought, and cast off or at least become aware of implicit assumptions underlying their arguments and actions. It is in the nature of the human mind to think in opposites, but the process of exposing the context can move the mental opposites from socially harmful areas and place more focus on, say, personal beliefs, ethics, and political ideologies - ideas that people choose to accept instead of being assigned to them by virtue of birth. And then, perhaps, we can outgrow the labeling of just about everything as masculine or feminine; in other words, walk into a post-binary world.

" } }, { "_id": "o56JnkJ8YS6aTw46S", "title": "Risk is not empirically correlated with return", "pageUrl": "https://www.lesswrong.com/posts/o56JnkJ8YS6aTw46S/risk-is-not-empirically-correlated-with-return", "postedAt": "2010-11-22T05:31:59.316Z", "baseScore": 11, "voteCount": 6, "commentCount": 28, "url": null, "contents": { "documentId": "o56JnkJ8YS6aTw46S", "html": "

The most widely appreciated finance theory is the Capital Asset Pricing Model. It basically says that diminishing marginal utility of absolute wealth implies that riskier financial assets should have higher expected returns than less risky assets, and that only risk correlated with the market as a whole (beta risk) is important, because other risk can be diversified out.

\n

Eric Falkenstein argues that the evidence does not support this theory; that the riskiness of assets (by any reasonable definition) is not positively correlated with return (some caveats apply). He has a paper (long but many parts are skimmable; not peer reviewed; also on SSRN) as well as a book on the topic. I recommend reading parts of the paper.

\n

The gist of his competing theory is that people care mostly about relative gains rather than absolute gains. This implies that riskier financial assets will not have higher expected returns than less risky assets. People will not require a higher return to hold assets with higher undiversifiable variance because everyone is exposed to the same variance and people only care about their relative wealth.

\n

Falkenstein has a substantial quantity of evidence to back up his claim. I am not sure if his competing theory is correct, but I find the evidence against the standard theory quite convincing.

\n

If risk is not correlated with returns, then anyone who is mostly concerned with absolute wealth can profit from this by choosing a low beta risk portfolio.

\n

This topic seems more appropriate for the discussion section, but I am not completely sure, so if people think it belongs in the main area, let me know.

\n

Added some (hopefully) clarifying material:

\n

All this assumes that you eliminate idiosyncratic risk through diversification. Technically impossible, but you can get it reasonably low. The R's are all *instantaneous* returns, though since these are linear models they apply to geometrically accumulated returns as well. The idea that E(R_asset) is independent of past returns is a background assumption for both models and most of finance.

\n

Beta_portfolio = Cov(R_portfolio, R_market)/variance(R_market)

\n

In CAPM your expected and variance are:

E(R_portfolio) = R_rfree + Beta_portfolio * (E(R_market) - R_rfree)
Var(R_portfolio) = Beta_portfolio^2 * Var(R_market)

in Falkenstein's model your expected return are:

E(R_portfolio) = R_market       # you could also say = R_rfree; the point is that it's a constant
Var(R_portfolio) = Beta_portfolio^2 * Var(R_market)

The major caveat being that it doesn't apply very close to Beta_portfolio = 0; Falkenstein attributes this to liquidity benefits. And it doesn't apply to very high Beta_portfolio; he attributes this to \"buying hope\". See the paper for more.

Falkenstein argues that his model fits the facts more closely than CAPM. Assuming Falkenstein's model describes reality, if your utility declines with rising Var(R_portfolio) (the standard assumption), then you'll want to hold a portfolio with a beta of zero; or taking into account the caveats, a low Beta_portfolio. If your utility is declining with Var(R_portfolio - R_market), then you'll want to hold the market portfolio. Both of these results are unambiguous since there's no trade off between either measure of risk and return.

\n
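To make the contrast concrete, here is a small numerical sketch (my own, not from the post or Falkenstein's paper) of what each model predicts for a well-diversified portfolio at a few betas, using assumed illustrative inputs for the risk-free rate, the expected market return and market volatility:

```python
# Assumed illustrative inputs (not from the post): 3% risk-free rate,
# 8% expected market return, 16% market return standard deviation.
r_free, r_market, sigma_market = 0.03, 0.08, 0.16

def capm_expected_return(beta):
    # CAPM: bearing more beta risk is rewarded with higher expected return.
    return r_free + beta * (r_market - r_free)

def falkenstein_expected_return(beta):
    # Falkenstein (away from the very low and very high beta extremes):
    # expected return is roughly flat in beta.
    return r_market

def portfolio_volatility(beta):
    # With idiosyncratic risk diversified away, both models share this:
    # Var(R_portfolio) = Beta_portfolio^2 * Var(R_market).
    return abs(beta) * sigma_market

print('beta  CAPM   flat   volatility')
for beta in (0.5, 1.0, 1.5):
    print(f'{beta:4.1f}  {capm_expected_return(beta):.3f}  '
          f'{falkenstein_expected_return(beta):.3f}  {portfolio_volatility(beta):.3f}')
```

Under the flat-return model the low-beta portfolio earns the same expected return as the others with much less variance, which is the sense in which an investor who cares about absolute wealth can profit by holding low beta.

\n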

Some additional evidence from another source, and discussion: http://falkenblog.blogspot.com/2010/12/frazzini-and-pedersen-simulate-beta.html

" } }, { "_id": "ZWPX5gWpyD6EtTgSi", "title": "Can cryoprotectant toxicity be crowd-sourced?", "pageUrl": "https://www.lesswrong.com/posts/ZWPX5gWpyD6EtTgSi/can-cryoprotectant-toxicity-be-crowd-sourced", "postedAt": "2010-11-21T17:46:05.731Z", "baseScore": 1, "voteCount": 6, "commentCount": 10, "url": null, "contents": { "documentId": "ZWPX5gWpyD6EtTgSi", "html": "

From the article The red blood cell as a model for cryoprotectant toxicity by Aschwin de Wolf

\n
\n

One simple model that allows for “high throughput” investigations of cryoprotectant toxicity are red blood cells (erythrocytes). Although the toxic effects of various cryoprotective agents may differ between red blood cells, other cells, and organized tissues, positive results in a red blood cell model can be considered the first experimental hurdle that needs to be cleared before the agent is considered for testing in other models.  Because red blood cells are widely available for research, this model eliminates the need for animal experiments for initial studies. It also allows researchers to investigate human cells. Other advantages include the reduced complexity of the model  (packed red blood cells can be obtained as an off-the-shelf product) and lower costs.

\n
\n

It sounds to me like this is a very cheap assay for viability. You don't need much equipment. High-toxicity compounds can be screened by visual appearance. More detailed analysis can be done with a light microscope or a spectrophotometer.

\n

The biggest issue facing cryonics (and the holy grail of suspended animation with true  biostasis) is the existence of cryoprotectant toxicity. Less toxic solutions can be perfused for a longer period of time, and thus penetrate the entire organism without triggering additional loss of viability. Vitrification already eliminates all ice formation -- we know enough to know that without toxicity, it should work for trivially reversible forms of long-term suspended animation.

\n

Thus if we want to ask what can be done cheaply by a lot of people to help cryonics move forward, one possibility is that they could perform empirical tests on the compounds most likely to prove effective for cryoprotection.

\n

We can speculate about the brain being reparable at all kinds of levels of damage -- but that is speculation. Sure we do have to make a decision to sign up or not based on that speculation. But the more hard evidence we can obtain, the more of a chance that we aren't being distracted from the reality of the situation by wishful thinking -- and the more likely we are to persuade our fellow self-identifying rational skeptics to take our side. Furthermore (and I know this sounds obvious, but it still needs to be said) in taking a more empirical approach to actually resolving the issues as quickly as possible, we are more likely to survive than otherwise.

\n

There are still a lot of questions that are raised in my mind by this crowdsourcing idea. What kinds of mechanisms would be best for collaboration and publication of results? Are there many other dirt-cheap empirical testing methods that small unfunded groups of nonspecialists could employ for useful research? How many people and groups could/should get involved in such a project? Aschwin mentions \"theoretical work in organic chemistry\" as the first step -- how much of that has already been done, or needs to be done? What kind of a learning curve is there on learning enough organic chemistry to propose a useful test?

" } }, { "_id": "kxZD8ycCqK9bccGkf", "title": "Competition to write the best stand-alone article on efficient charity", "pageUrl": "https://www.lesswrong.com/posts/kxZD8ycCqK9bccGkf/competition-to-write-the-best-stand-alone-article-on", "postedAt": "2010-11-21T16:57:35.003Z", "baseScore": 16, "voteCount": 22, "commentCount": 26, "url": null, "contents": { "documentId": "kxZD8ycCqK9bccGkf", "html": "

I have a friend who is currently in a lucrative management consultancy career, but is considering getting a job in eco-tourism because he \"wants to make the world a better place\", and we got into a debate about Efficient Charity, Roles vs. Goals, and Optimizing versus Acquiring Warm Fuzzies.

\n

I thought that there would be a good article here that I could send him to, but there isn't. So I've decided to ask people to write such an article. What I am looking for is an article that is less than 1800 words long, and explains the following ideas: 

\n
    \n
  1. Charity should be about actually trying to do as much expected good as possible for a given amount of resource (time, $), in a quantified sense. I.e. \"5000 lives saved in expectation\", not \"we made a big difference\". 
  2. The norms and framing of our society regarding charity currently get it wrong, i.e. people send lots of $ to charities that do a lot less good than other charities. The \"inefficiency\" here is very large, i.e. GWWC estimates by a factor of 10,000 at least. Therefore most money donated to charity is almost entirely wasted.
  3. It is usually better to work a highly-paid job and donate, because if you work for a charity you replace the person who would have been hired had you not applied.
  4. Our instincts will tend to tempt us to optimize for signalling; this is to be resisted unless (or to the extent that) it is what you actually want to do.
  5. Our motivational centre will tend to want to optimize for \"Warm Fuzzies\". These should be purchased separately from utilons. 
\n

but without using any unexplained LW jargon (Utilons, Warm Fuzzies, optimizing). Linking to posts explaining jargon is NOT OK. I will judge the winner based upon these criteria and the score that the article gets on LW. I may present a small prize to the winner, if (s)he desires it! 

\n

Happy Writing

\n

Roko

\n

EDIT: As well as saying that he will pay $100 to the winner, Jsalvatier makes two additional points that I feel should be included in the specification of the article:

\n

6.  Your intuition about what counts as a cause worth giving money to is extremely bad. This is completely natural: everyone's intuition about this is bad. Why? Because your brain was not optimized by evolution to be good at thinking clearly about large problems involving millions of people and how to allocate resources. 

\n

7. Not only is your intuition about this naturally very bad (as well as cultural memes surrounding how to donate to charity being utterly awful), you don't realize that your intuition is bad. This is a deceptively hard problem. 

\n

And I would also like to add:

\n

8. Explicitly make the point that our current norm of ranking charities based upon how much (or little) they spend on overheads is utterly insane. Yes, the entire world of charities is stupid with respect to the problem of how to prioritize their own efforts. 

\n

9. Mention the point that other groups are slowly edging their way towards the same conclusion, e.g. Giving What We Can (GWWC), Copenhagen Consensus, GiveWell. 

\n

 

" } }, { "_id": "9k3YcLyPQ4v5386o9", "title": "Agents of No Moral Value: Constrained Cognition?", "pageUrl": "https://www.lesswrong.com/posts/9k3YcLyPQ4v5386o9/agents-of-no-moral-value-constrained-cognition", "postedAt": "2010-11-21T16:41:10.603Z", "baseScore": 10, "voteCount": 9, "commentCount": 3, "url": null, "contents": { "documentId": "9k3YcLyPQ4v5386o9", "html": "

Thought experiments involving multiple agents usually postulate that the agents have no moral value, so that the explicitly specified payoff from the choice of actions can be considered in isolation, as both the sole reason and evaluation criterion for agents' decisions. But is it really possible to require an opposing agent to have no moral value without constraining what it's allowed to think about?

\n

If agent B is not a person, how do we know it can't decide to become a person for the sole reason of gaming the problem, manipulating agent A (since B doesn't care about personhood, it costs B nothing, but A does care)? If it's stipulated as part of the problem statement, it seems that B's cognition is restricted, and the most rational course of action is prohibited from being considered for no within-thought-experiment reason accessible to B.

\n

It's not enough to require that the other agent is inhuman in the sense of not being a person and not holding human values, as our agent must also not care about the other agent. And once both agents don't care about each other's cognition, the requirement for them not being persons or valuable becomes extraneous.

\n

Thus, instead of requiring that the other agent is not a person, the correct way of setting up the problem is to require that our agent is indifferent to whether the other agent is a person (and conversely).

\n

(It's not a very substantive observation I would've posted with less polish in an open thread if not for the discussion section.)

" } }, { "_id": "YTXW3L5cme8sRa5c6", "title": "The true prisoner's dilemma with skewed payoff matrix", "pageUrl": "https://www.lesswrong.com/posts/YTXW3L5cme8sRa5c6/the-true-prisoner-s-dilemma-with-skewed-payoff-matrix", "postedAt": "2010-11-20T20:37:14.926Z", "baseScore": 3, "voteCount": 17, "commentCount": 42, "url": null, "contents": { "documentId": "YTXW3L5cme8sRa5c6", "html": "

Related to The True Prisoner's Dilemma, Let's split the cake, lengthwise, upwise and slantwise, If you don't know the name of the game, just tell me what I mean to you

\n

tl;dr: Playing the true PD against agents capable of superrationality, there are situations where you should co-operate while expecting the other to defect, or vice versa. This is because the relative weight of the outcomes for the two parties can vary. This could lead agents that take this into account to outperform even superrational ones.

\n

So, it happens that our benevolent Omega actually has an evil twin, who is as trustworthy as his sibling but abducts people into much worse hypothetical scenarios. Here we have one:

\n

You wake up in a strange dimension, and this Evil-Omega is smiling at you, and explains that you're about to play a game with an unknown paperclip maximizer from another dimension that you haven't interacted with before and won't interact with ever after. The alien is like a GLUT when it comes to consciousness: it runs a simple approximation of a rational decision algorithm, but nothing that you could think of as \"personality\" or \"soul\". Also, since it doesn't have a soul, you have absolutely no reason to feel bad for its losses. This is a true PD.

\n

You are also told some specifics about the algorithm that the alien uses to reach its decision, and likewise the alien is told about as much about you. At this point I don't want to nail the algorithm the opposing alien uses down to anything specific. We're looking for a method that wins when summing up all these possibilities. Next, especially, we're looking at the group of AI's that are capable of superrationality, since against others the game is trivial.

\n

The payoff matrix is like this:

\n

DD=(lose 3 billion lives and be tortured, lose 4 paperclips), CC=(lose 2 billion lives and be made miserable, lose 2 paperclips), CD=(lose 5 billion lives and be tortured a lot, nothing), DC=(nothing, lose 8 paperclips)

\n

So, what do you do? The opponent is capable of superrationality. In the post \"The True Prisoner's Dilemma\", it was (kinda, vaguely, implicitly) assumed for simplicity's sake that this information is enough to decide whether to defect or not. The answer, based on this information, could be to co-operate. However, I argue that the information given is not enough.

\n

Back to the hypothetical: In-hypothetical you is still wondering about his/her decision, but we zoom out and observe that, unbeknownst to you, Omega has abducted your fellow LW reader and another paperclip maximizer from that same dimension, and is making them play PD. But this time their payoff matrix is like this:

\n

DD=(lose $0.04, make 2 random, small changes to alien's utility function and 200 paperclips lost), CC=(lose $0.02, 1 change, 100 paperclips), CD=(lose $0.08, nothing), DC=(nothing, 4 changes, 400 paperclips)

\n

Now, if it's not \"rational\" to take the relative loss into account, we're bound to find ourselves in a situation where billions of humans die. You could be regretting your rationality, even. It should become obvious now that you'd wish you could somehow negotiate both of these PD's so that you would defect and your opponent co-operate. You'd be totally willing to take a $0.08 hit for that, maybe paying it in its entirety for your friend. And so it happens, paperclip maximizers would also have an incentive to do this.

\n
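To make the trade concrete, here is a rough sketch (my own, not part of the post) that tallies only the countable parts of the payoffs: lives and dollars on the human side, paperclips on the clipper side, leaving out the torture, misery and utility-function changes.

```python
# Losses for each outcome as (human loss, paperclipper loss), with outcomes
# labelled from the human player's point of view (their move written first).
game1 = {'CC': (2e9, 2), 'CD': (5e9, 0), 'DC': (0, 8), 'DD': (3e9, 4)}            # lives, paperclips
game2 = {'CC': (0.02, 100), 'CD': (0.08, 0), 'DC': (0, 400), 'DD': (0.04, 200)}   # dollars, paperclips

# Blanket mutual cooperation in both games:
human_cc = (game1['CC'][0], game2['CC'][0])   # (lives lost, dollars lost)
clip_cc = game1['CC'][1] + game2['CC'][1]     # paperclips lost

# The trade described above: humans defect in game 1 and cooperate in game 2,
# while the paperclippers do the reverse.
human_trade = (game1['DC'][0], game2['CD'][0])
clip_trade = game1['DC'][1] + game2['CD'][1]

print('cooperate everywhere: humans lose', human_cc[0], 'lives and', human_cc[1],
      'dollars; clippers lose', clip_cc, 'paperclips')
print('trade defections    : humans lose', human_trade[0], 'lives and', human_trade[1],
      'dollars; clippers lose', clip_trade, 'paperclips')
# -> the trade costs the humans $0.08 instead of 2 billion lives, and costs the
#    paperclippers 8 paperclips instead of 102, so both sides prefer it.
```

\n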

But, of course, players don't know about this entire situation, so they might not be able to operate in an optimal way in this specific scenario. However, if they take into account how much the other cares about those results, using some unknown method, they just might be able to systematically perform better (if we made more of these sorts of problems, or if we selected payoffs at random for the one-shot game) than \"naive\" PD-players playing against each other. Naivety here would imply that they simply and blindly co-operate against equally rational opponents. How to achieve that is the open question.

\n

-

\n

Stuart Armstrong, for example, has an actual idea of how to co-operate when the payoffs are skewed, while I'm just pointing out that there's a problem to be solved, so this is not really news or anything. Anyways, I still think that this topic has not been explored as much as it should be.

\n

Edit. Added this bit: You are also told some specifics about the algorithm that the alien uses to reach its decision, and likewise the alien is told about as much about you. At this point I don't want to nail the algorithm the opposing alien uses down to anything specific. We're looking for a method that wins when summing up all these possibilities. Next, especially, we're looking at the group of AI's that are capable of superrationality, since against other sorts of agents the game is trivial.

\n

Edit. Corrected some huge errors here and there, like, mixing hypothetical you and hypothetical LW-friend.

\n

Edit. Transfer Discussion -> Real LW complete!

" } }, { "_id": "sEWpaLZkJRohWPF2b", "title": "Pseudolikelihood as a source of cognitive bias", "pageUrl": "https://www.lesswrong.com/posts/sEWpaLZkJRohWPF2b/pseudolikelihood-as-a-source-of-cognitive-bias", "postedAt": "2010-11-20T20:06:32.222Z", "baseScore": 11, "voteCount": 8, "commentCount": 9, "url": null, "contents": { "documentId": "sEWpaLZkJRohWPF2b", "html": "

Pseudolikelihood is a method for approximating joint probability distributions. I'm bringing this up because I think something like this might be used in human cognition. If so, it would tend to produce overconfident estimates.

\n

Say we have some joint distribution over X, Y, and Z, and we want to know about the probability of some particular vector (x, y, z). The pseudolikelihood estimate involves asking yourself how likely each piece of information is, given all of the other pieces of information. Then you multiply these together. So the pseudolikelihood of (x, y, z) is P(x|yz) P(y|xz) P(z|xy).

\n

Not only is this wrong, but it gets more wrong as your system is bigger. By that I mean that a ratio of two pseudolikelihoods will tend towards 0 or infinity for big problems, even if the likelihoods are close to the same.

\n
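A toy computation makes the overconfidence visible. The sketch below (mine, using an arbitrary illustrative joint distribution) compares the pseudolikelihood of each configuration with its true joint probability for three correlated binary variables:

```python
from itertools import product

# An arbitrary joint distribution P(x, y, z) over three binary variables,
# chosen so that the variables are positively correlated.
joint = {
    (0, 0, 0): 0.30, (0, 0, 1): 0.05, (0, 1, 0): 0.05, (0, 1, 1): 0.10,
    (1, 0, 0): 0.10, (1, 0, 1): 0.05, (1, 1, 0): 0.05, (1, 1, 1): 0.30,
}

def conditional(i, value, others):
    # P(variable i takes `value` | the other two variables equal `others`)
    num = den = 0.0
    for v in (0, 1):
        key = list(others)
        key.insert(i, v)
        p = joint[tuple(key)]
        den += p
        if v == value:
            num += p
    return num / den

def pseudolikelihood(x, y, z):
    # P(x|y,z) * P(y|x,z) * P(z|x,y)
    return (conditional(0, x, (y, z)) *
            conditional(1, y, (x, z)) *
            conditional(2, z, (x, y)))

for cfg in product((0, 1), repeat=3):
    print(cfg, round(joint[cfg], 3), round(pseudolikelihood(*cfg), 3))
# The likely configurations, e.g. (0, 0, 0), get a pseudolikelihood of about
# 0.55 against a true probability of 0.30: the estimate is overconfident.
```

\n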

So how can we avoid this? A correct way to calculate a joint probability P(x,y,z) looks like P(x) P(y|x) P(z|xy). At each step we only condition on information \"prior\" to the thing we are asking about. My guess about how to do this involves making your beliefs look more like a directed acyclic graph. Given two adjacent beliefs, you need to be clear on which is the \"cause\" and which is the \"effect.\" The cause talks to the effect in terms of prior probabilities and the effect talks to the cause in terms of likelihoods.

\n

Failure to do this could take the form of an undirected relationship (two beliefs are \"related\" without either belief being the cause or the effect), or loops in a directed graph. I don't actually think we want to get rid of undirected relationships entirely -- people do use them in machine learning -- but I can't see any good reason for keeping the latter.

\n

An example of a causal loop would be if you thought of math as an abstraction from everyday reality, and then turned around and calculated prior probabilities of fundamental physical theories in terms of mathematical elegance. One way out is to declare yourself a mathematical Platonist. I'm not sure what the other way would look like.

" } }, { "_id": "qGEqpy7J78bZh3awf", "title": "What I've learned from Less Wrong", "pageUrl": "https://www.lesswrong.com/posts/qGEqpy7J78bZh3awf/what-i-ve-learned-from-less-wrong", "postedAt": "2010-11-20T12:47:42.727Z", "baseScore": 116, "voteCount": 102, "commentCount": 235, "url": null, "contents": { "documentId": "qGEqpy7J78bZh3awf", "html": "

Related to: Goals for which Less Wrong does (and doesn’t) help

\n

I've been compiling a list of the top things I’ve learned from Less Wrong in the past few months. If you’re new here or haven’t been here since the beginning of this blog, perhaps my personal experience from reading the back-log of articles known as the sequences can introduce you to some of the more useful insights you might get from reading and using Less Wrong.

1. Things can be correct - Seriously, I forgot. For the past ten years or so, I politely agreed with the “deeply wise” convention that truth could never really be determined or that it might not really exist or that if it existed anywhere at all, it was only in the consensus of human opinion. I think I went this route because being sloppy here helped me “fit in” better with society. It’s much easier to be egalitarian and respect everyone when you can always say “Well, I suppose that might be right -- you never know!”

2. Beliefs are for controlling anticipation (Not for being interesting) - I think in the past, I looked to believe surprising, interesting things whenever I could get away with the results not mattering too much. Also, in a desire to be exceptional, I naïvely reasoned that believing similar things to other smart people would probably get me the same boring life outcomes that many of them seemed to be getting... so I mostly tried to have extra random beliefs in order to give myself a better shot at being the most amazingly successful and awesome person I could be.

\n

3. Most people’s beliefs aren’t worth considering - Since I’m no longer interested in collecting interesting “beliefs” to show off how fascinating I am or give myself better odds of out-doing others, it no longer makes sense to be a meme-collecting, universal egalitarian the same way I was before. This includes dropping the habit of seriously considering all others’ improper beliefs that don’t tell me what to anticipate and are only there for sounding interesting or smart.

4. Most of science is actually done by induction - Real scientists don’t get their hypotheses by sitting in bathtubs and screaming “Eureka!”. To come up with something worth testing, a scientist needs to do lots of sound induction first or borrow an idea from someone who already used induction. This is because induction is the only way to reliably find candidate hypotheses which deserve attention. Examples of bad ways to find hypotheses include finding something interesting or surprising to believe in and then pinning all your hopes on that thing turning out to be true.

5. I have free will - Not only is the free will problem solved, but it turns out it was easy. I have the kind of free will worth caring about and that’s actually comforting since I had been unconsciously ignoring this out of fear that the evidence appeared to be going against what I wanted to believe. Looking back, I think this was actually kind of depressing me and probably contributing to my attitude that having interesting rather than correct beliefs was fine since it looked like it might not matter what I did or believed anyway. Also, philosophers failing to uniformly mark this as “settled” and move on is not because this is a questionable result... they’re just in a world where most philosophers are still having trouble figuring out if god exists or not. So it’s not really easy to make progress on anything when there is more noise than signal in the “philosophical community”. Come to think of it, the AI community and most other scientific communities have this same problem... which is why I no longer read breaking science news anymore -- it's almost all noise.

6. Probability / Uncertainty isn’t in objects or events - It’s only in minds. Sounds simple after you understand it, but I feel like this one insight often allows me to have longer trains of thought now without going completely wrong.

7. Cryonics is reasonable - Due to reading and understanding the quantum physics sequence, I ended up contacting Rudi Hoffman for a life insurance quote to fund cryonics. It’s only a few hundred dollars a year for me. It’s well within my budget for caring about myself and others... such as my future selves in forward branching multi-verses.

\n


There are countless other important things that I've learned but haven't documented yet. I find it pretty amazing what this site has taught me in only 8 months of sporadic reading. Although, to be fair, it didn't happen by accident or by reading the recent comments and promoted posts but almost exclusively by reading all the core sequences and then participating more after that.

\n

And as a personal aside (possibly some others can relate): I still love-hate Less Wrong and find reading and participating on this blog to be one of the most frustrating and challenging things I do. And many of the people in this community rub me the wrong way. But in the final analysis, the astounding benefits gained make the annoying bits more than worth it.

So if you've been thinking about reading the sequences but haven't been making the time to do it, I second Anna’s suggestion that you get around to that. And the rationality exercise she linked to was easily the single most effective hour of personal growth I had this year, so I highly recommend that as well if you're game.

\n

 

\n

So, what have you learned from Less Wrong? I'm interested in hearing others' experiences too.

" } }, { "_id": "T7nx89Rb3WWugghZi", "title": "Games People Play", "pageUrl": "https://www.lesswrong.com/posts/T7nx89Rb3WWugghZi/games-people-play", "postedAt": "2010-11-20T04:41:39.635Z", "baseScore": 13, "voteCount": 11, "commentCount": 8, "url": null, "contents": { "documentId": "T7nx89Rb3WWugghZi", "html": "

Game theory is great if you know what game you're playing. All this talk of Diplomacy reminds me of this recollection from Adam Cadre:

\n
\n

I remember that in my ninth grade history class, the teacher had us play a game that was supposed to demonstrate how shifting alliances work. He divided the class into seven groups — dubbed Britain, France, Germany, Belgium, Italy, Austria and Russia — and, every few minutes, declared a \"battle\" between two of the countries. Then there was a negotiation period, during which we all were supposed to walk around the room making deals. Whichever warring country collected the most allies would win the battle and a certain number of points to divvy up with its allies. The idea, I think, was that countries in a battle would try to win over the wavering countries by promising them extra points to jump aboard.

\n

That's not how it worked in practice. Three or four guys — the same ones who had gotten themselves elected to ASB, the student government — decided among themselves during the first negotiation period what the outcome would be, and told people whom to vote for. And the others just shrugged and did as they were told. The ASB guys had decided that Germany would win, followed by France, Britain, Belgium, Austria, Italy and Russia. The first battle was France vs. Russia. Germany and Britain both signed up on the French side. Austria and Italy, realizing that if they just went along with the ASB plan they'd come in 5th and 6th, joined up with Russia. That left it up to Belgium. I was on team Belgium. I voted to give our vote to the Russian side, because that way at least we weren't doomed to come in 4th. And no one else on my team went along. They meekly gave their points to the French side. (As I recall, Josh Lorton was particularly adamant about this. I guess he thought it would make the ASB guys like him.) After that, there was no contest. Britain vs. Austria? 6-1, Britain. Germany vs. Belgium? 6-1, Germany. (And we could have beaten them if we'd just formed a bloc with the other three losers!) The teacher noticed that Germany and France were always on the same side and declared Germany vs. France. Outcome: 6-1, Germany.

\n

The ASB guys were able to just impose their will on a class of 40 students. No carrots, no sticks, just \"here's what will happen\" and everyone else nodding. I have no idea how that works. I do recall that because they were in student government, for fourth period they had to take a class called Leadership. From what I could tell they just spent the class playing volleyball out in the quad. But I guess they were learning something!

\n
\n

What happened? Why did Italy and Russia fall into line and abandon Austria in the second battle?

\n

This utterly failed to demonstrate the \"shifting alliances\" that Adam thought the teacher wanted. Does this happen every year?

\n

Yes, the students were coerced into \"playing\" this game, but elsewhere he describes the same thing happening in games that people choose to play. Moreover, he tells the first story to illustrate his perception of politics.

" } }, { "_id": "y5zje2RRitGxftbGa", "title": "Rationality and being child-free", "pageUrl": "https://www.lesswrong.com/posts/y5zje2RRitGxftbGa/rationality-and-being-child-free", "postedAt": "2010-11-20T02:12:03.269Z", "baseScore": 15, "voteCount": 17, "commentCount": 64, "url": null, "contents": { "documentId": "y5zje2RRitGxftbGa", "html": "

So I found this post quite interesting:

\n

http://www.gnxp.com/blog/2009/03/gnxp-readers-do-not-breed.php

\n

(I'm quite sure that the demographics of this site closely parallel the demographics on Gene Expression).

\n

Research seems to indicate that people are happiest when they're married, but that each child imposes a net decrease in happiness (parents in fact, enjoy a boost in happiness once their children leave the house). It's possible, of course, that adult children may be pleasurable to interact with, but it seems that in many cases, the parents want to interact with the children more than the children want to interact with the parent (although daughters generally seem more interactive with their parents).

\n

So how do you think being child-free relates to rationality/happiness? Of course, Bryan Caplan (who is pro-natalist) cites research (from Judith Rich Harris) saying that parents really have less influence over their children than they think they have (so it's a good idea for parents to spend less effort in trying to \"mold\" their children, since their efforts will inevitably result in much frustration). And in fact, if parents did this, it's possible that they may beat the average.

\n

(This doesn't convince me in my specific case, however, and I'm still committed to not having children).

" } }, { "_id": "wzAWcXdbpn94qJv42", "title": "Common Sense Atheism summarizing the Sequences", "pageUrl": "https://www.lesswrong.com/posts/wzAWcXdbpn94qJv42/common-sense-atheism-summarizing-the-sequences", "postedAt": "2010-11-19T12:55:00.130Z", "baseScore": 19, "voteCount": 14, "commentCount": 3, "url": null, "contents": { "documentId": "wzAWcXdbpn94qJv42", "html": "

Since popularising the sequences seems to be a pursuit that's been in the spotlight recently, I thought I'd point out that blogger Luke Muehlhauser of Common Sense Atheism has started blogging through the sequences. The first installment is here:

\n

Reading Yudkowsky, Part 1

\n

Perhaps someone you know who would benefit from the sequences but can't or won't invest the time to go through the whole thing can be directed to Luke's metasequence.

" } }, { "_id": "ASmoRyd49DMY8PsYs", "title": "Help with (pseudo-)rational film characters", "pageUrl": "https://www.lesswrong.com/posts/ASmoRyd49DMY8PsYs/help-with-pseudo-rational-film-characters", "postedAt": "2010-11-19T11:26:27.972Z", "baseScore": -11, "voteCount": 12, "commentCount": 15, "url": null, "contents": { "documentId": "ASmoRyd49DMY8PsYs", "html": "

Hi,

\n

I'm currently developing a web-series based around two people who call themselves rationalists, planning to assassinate people in order to bring down the catholic church.

\n

So my fixpoints are that they need a sound, or at least compelling, rational argument to justify killing people for \"the greater good\" (because they are sure the church works very strongly against that). Also, since it'll be the style of the series, they document themselves while planning their actions. They post encrypted versions of their videos and all data used online and have a mechanism for releasing the keys in case they are caught. Being rational, they know that they will be caught eventually and want to get \"their method\" out there, hoping to set off some kind of \"non-ideological revolution\". Although they know it will probably end badly for them, they are as careful as possible, while trying not to be overly paranoid since it may affect their performance.

\n

Of course they are biased, since they first had the idea of killing the pope, and then they thought of doing so \"rationally\". They do see their bias here, though, and try to work around it a bit, but they must come out on the side of pursuing their plans (or the series ends).

\n

There won't be a real \"message\" to the series, although they need to make some quite obvious false assumptions (and show some personal bias too -- no fridge-stuffing, but they are more ideological than they realize, in that born-again-christian-turned-atheist kind of way, occasionally not understanding the difference between basic axioms and dogmas; some bipolar disorder and depression might also come in handy), so nobody will get stupid ideas. Also, I don't want to be persecuted for providing terror manuals.

\n

 

\n

Let me rephrase that: starting out as a (sloppy) rationalist, what logical fallacies do you have to fall for in order to end up at a position that would endorse terrorism?

" } }, { "_id": "he8fCLY47QZQ3kgey", "title": "Fiction: \"Chicken Little\" by Cory Doctorow", "pageUrl": "https://www.lesswrong.com/posts/he8fCLY47QZQ3kgey/fiction-chicken-little-by-cory-doctorow", "postedAt": "2010-11-19T11:13:45.490Z", "baseScore": 5, "voteCount": 4, "commentCount": 1, "url": null, "contents": { "documentId": "he8fCLY47QZQ3kgey", "html": "

What would happen if people could feel what they knew about probability? The story's in Gateways, a tribute collection for Frederik Pohl. Unfortunately, the story is more about whether a drug with that effect is a good idea than an extended exploration of the effects, but it's still interesting. One of the points is that, while people on the drug are more sensible (they laugh at lotteries and don't eat food that makes them feel bad), they don't have children. That last might be less of an obvious outcome than it seems -- rationality can improve parenthood.

" } }, { "_id": "dWhA58LswXm3yoQz4", "title": "Advice for a Budding Rationalist", "pageUrl": "https://www.lesswrong.com/posts/dWhA58LswXm3yoQz4/advice-for-a-budding-rationalist", "postedAt": "2010-11-19T03:10:45.715Z", "baseScore": 11, "voteCount": 8, "commentCount": 27, "url": null, "contents": { "documentId": "dWhA58LswXm3yoQz4", "html": "

Most people in the US with internet connections who are reading this site will at some point in their lives graduate high school. I haven't yet, and it seems like what I do afterwards will have a pretty big effect on the rest of my life.* 

Given that, I think I should ask for some advice.

\n

Generally,
Any advice? Anything you wish you knew? Disagreement with the premise? (If you disagree, please explain what to do anyway.)

\n

More specific to the site,
Any advice for high schoolers with a rationalist and singularitarian bent? Who are probably looking at going to college?
Anything particularly effective for working against existential risk?
Any fields particularly useful for rationalists to know?
Any fields in which rationalists would be particularly helpful?

\n

This is intended to be a pretty general reference for life advice for the young ones among us. With a college selection bent, probably. If you're in high school and have a specific situation that you want help with/advice for, please reply to this post with that. I think that most people have specific skills/background they could leverage, so a one-size-fits-all approach seems somewhat simplistic.

\n

*I understand that I can always change plans later, but there are many many things that seem to require some level of commitment, like college.

Edit:
As Unnamed pointed out, also look at this article about undergraduate course selection.

" } }, { "_id": "DXcezGmnBcAYL2Y2u", "title": "Yes, a blog.", "pageUrl": "https://www.lesswrong.com/posts/DXcezGmnBcAYL2Y2u/yes-a-blog", "postedAt": "2010-11-19T01:53:26.991Z", "baseScore": 152, "voteCount": 128, "commentCount": 108, "url": null, "contents": { "documentId": "DXcezGmnBcAYL2Y2u", "html": "\n

When I recommend LessWrong to people, their gut reaction is usually \"What? You think the best existing philosophical treatise on rationality is a blog?\"

\n

Well, yes, at the moment I do.

\n

\"But why is it not an ancient philosophical manuscript written by a single Very Special Person with no access to the massive knowledge the human race has accumulated over the last 100 years?\"

\n

Besides the obvious? Three reasons: idea selection, critical mass, and helpful standards for collaboration and debate.

\n

Idea selection.

\n

Ancient people came up with some amazing ideas, like how to make fire, tools, and languages. Those ideas have stuck around, and become integrated in our daily lives to the point where they barely seem like knowledge anymore. The great thing is that we don't have to read ancient cave writings to be reminded that fire can keep us warm; we simply haven't forgotten. That's why more people agree that fire can heat your home than on how the universe began.

\n

Classical philosophers like Hume came up with some great ideas, too, especially considering that they had no access to modern scientific knowledge. But you don't have to spend thousands of hours reading through their flawed or now-uninteresting writings to find their few truly inspiring ideas, because their best ideas have become modern scientific knowledge. You don't need to read Hume to know about empiricism, because we simply haven't forgotten it... that's what science is now. You don't have to read Kant to think abstractly about Time; thinking about \"timelines\" is practically built into our language nowadays.

\n

See, society works like a great sieve that remembers good ideas, and forgets some of the bad ones. Plenty of bad ideas stick around because they're viral (self-propagating for reasons other than helpfulness/verifiability), so you can't always trust an idea just because it's old. But that's how any sieve works: it narrows your search. It keeps the stuff you want, and throws away some of the bad stuff so you don't have to look at it.

\n

LessWrong itself is an update patch for philosophy to fix compatibility issues with science and render it more useful. That it would exist now rather than much earlier is no coincidence: right now, it's the gold at the bottom of the pan, because it's taking the idea filtering process to a whole new level. Here's a rough timeline of how LessWrong happened:

\n

Critical mass.

\n

To get off the ground, a critical mass of very good ideas was needed: the LessWrong Sequences. Eliezer Yudkowsky spent several years posting a lot of extremely sane writing on OvercomingBias.com, and then founded LessWrong.com, attracting the attention of other people who were annoyed at the lower density of good ideas in older literature.

\n

Part of what made them successful is that the sequences are written in a widely learned, widely applicable language: the language of basic science and mathematics. A lot of the serious effort in classical philosophy was spent trying to develop precise and appropriate terminology in which to communicate, and so joining the conversation always required a serious exclusive study of the accumulated lingo and concepts. But nowadays we can study rationality by transfer of learning from tried-and-true technical disciplines like probability theory, computer science, biology, and even physics. So the Sequences were written.

\n

Then, using an explicit upvote system, LessWrong and its readers began accelerating the historically slow process of idea selection: if you wanted to be sure to see something inspiring, you just had to click \"TOP\" to see a list of top voted posts.1

\n

Collaboration and debate.

\n

Finally, with a firm foundation taking hold, there is now a context, a language, and a community that will understand your good ideas. Reading LessWrong makes it vastly easier to collaborate effectively on resolving abstract practical issues2. And if you disagree with LessWrong, reading LessWrong will help you communicate your disagreement better. There was a time when you couldn't have a productive abstract conversation with someone unless you spent a few days establishing a context with that person; now you have LessWrong sequences to do that for you.

\n

The sequences also refer to plenty of historical mistakes made by old-school philosophers, so you don't necessarily have to spend thousands of hours reading very old books to learn what not to do. This leaves you with more time to develop basic or advanced skills in math and science3, which, aside from the obvious career benefits, gets you closer to understanding subjects like cognitive and neuropsychology, probability and statistics, information and coding theory, formal logic, complexity theory, decision theory, quantum physics, relativity... Any philosophical discussion predating these subjects is simply out of the loop. A lot of their mistakes aren't even about the things we need to be analysing now.

\n

So yes, if you want good ideas about rationality, and particularly its applications to understanding the nature of reality and life, you can restrict a lot of your attention to what people are talking about right now, and you'll be at a comparatively low risk of missing out on something important. Of course, you have to use your judgement to finish the search. Luckily, LessWrong tries to teach that, too. It's really a very good deal. Plus, if you upvote your favorite posts, you start contributing right away by helping the idea selection process.

\n

Don't forget: Wikipedia happened. It didn't sell out. It didn't fall to vandals. Encyclopedic knowledge is now free, accessible, collaborative, and even addictive. Now, LessWrong is happening to rationality.

\n

 

\n
\n

1 In my experience, the Top Posts section works like an anti-sieve: pretty much everything on there is clever, but in any one reader's opinion there is probably a lot of great material that didn't make it to the top.

\n

2 I sometimes describe the LessWrong dialogue as about \"abstract practicality\", because to most people the word \"philosophy\" communicates a sense of explicit uselessness, which LessWrong defies. The discussions here are all aimed at resolving real-life decisions of some kind or another, be it whether to start meditating or whether to freeze yourself when you die.

\n

3 I compiled this abridged list of sequence posts for people who already have a strong background in math and science, to accommodate a faster exposure to the LessWrong \"introductory\" material.

\n

4 This post is about how LessWrong happened as a blog. For recent general discussion of LessWrong's good and bad effects, consider When you need Less Wrong and Self-Improvement or Shiny Distraction?

" } }, { "_id": "7dRGYDqA2z6Zt7Q4h", "title": "Goals for which Less Wrong does (and doesn't) help", "pageUrl": "https://www.lesswrong.com/posts/7dRGYDqA2z6Zt7Q4h/goals-for-which-less-wrong-does-and-doesn-t-help", "postedAt": "2010-11-18T22:37:36.984Z", "baseScore": 88, "voteCount": 70, "commentCount": 105, "url": null, "contents": { "documentId": "7dRGYDqA2z6Zt7Q4h", "html": "

Related to: Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality

\n

We’ve had a lot of good criticism of Less Wrong lately (including Patri’s post above, which contains a number of useful points). But to prevent those posts from confusing newcomers, this may be a good time to review what Less Wrong is useful for.

\n

In particular: I had a conversation last Sunday with a fellow, I’ll call him Jim, who was trying to choose a career that would let him “help shape the singularity (or simply the future of humanity) in a positive way”.  He was trying to sort out what was efficient, and he aimed to be careful to have goals and not roles.  

\n

So far, excellent news, right?  A thoughtful, capable person is trying to sort out how, exactly, to have the best impact on humanity’s future.  Whatever your views on the existential risks landscape, it’s clear humanity could use more people like that.

\n

The part that concerned me was that Jim had put a site-blocker on LW (as well as all of his blogs) after reading Patri’s post, which, he said, had “hit him like a load of bricks”.  Jim wanted to get his act together and really help the world, not diddle around reading shiny-fun blog comments.  But his discussion of how to “really help the world” seemed to me to contain a number of errors[1] -- errors enough that, if he cannot sort them out somehow, his total impact won’t be nearly what it could be.  And they were the sort of errors LW could have helped with.  And there was no obvious force in his off-line, focused, productive life of a sort that could similarly help.

\n

So, in case it’s useful to others, a review of what LW is useful for.

\n

When you do (and don’t) need epistemic rationality

\n

For some tasks, the world provides rich, inexpensive empirical feedback.  In these tasks you hardly need reasoning.  Just try the task many ways, steal from the best role-models you can find, and take care to notice what is and isn’t giving you results.

\n

Thus, if you want to learn to sculpt, reading Less Wrong is a bad way to go about it.  Better to find some clay and a hands-on sculpting course.  The situation is similar for small talk, cooking, selling, programming, and many other useful skills.

\n

Unfortunately, most of us also have goals for which we can obtain no such ready success/failure data. For example, if you want to know whether cryonics is a good buy, you can’t just try buying it and not-buying it and see which works better.  If you miss your first bet, you’re out for good.

\n

There is similarly no easy way to use the “try it and see” method to sort out what ethics and meta-ethics to endorse, or what long-term human outcomes are likely, how you can have a positive impact on the distant poor, or which retirement investments *really will* be safe bets for the next forty years.  For these goals we are forced to use reasoning, as failure-prone as human reasoning is.  If the issue is tricky enough, we’re forced to additionally develop our skill at reasoning -- to develop “epistemic rationality”.

\n

The traditional alternative is to deem subjects on which one cannot gather empirical data \"unscientific\" subjects on which respectable people should not speak, or else to focus one's discussion on the most similar-seeming subject for which it *is* easy to gather empirical data (and so to, for example, rate charities as \"good\" when they have a low percentage of overhead, instead of a high impact). Insofar as we are stuck caring about such goals and betting our actions on various routes for their achievement, this is not much help.[2]

\n

How to develop epistemic rationality

\n

If you want to develop epistemic rationality, it helps to spend time with the best epistemic rationalists you can find.  For many, although not all, this will mean Less Wrong.  Read the sequences.  Read the top current conversations.  Put your own thinking out there (in the discussion section, for starters) so that others can help you find mistakes in your thinking, and so that you can get used to holding your own thinking to high standards.  Find or build an in-person community of aspiring rationalists if you can.

\n

Is it useful to try to read every single comment?  Probably not, on the margin; better to read textbooks or to do rationality exercises yourself.  But reading the Sequences helped many of us quite a bit; and epistemic rationality is the sort of thing for which sitting around reading (even reading things that are shiny-fun) can actually help.

\n

 

\n
\n

[1]  To be specific: Jim was considering personally \"raising awareness\" about the virtues of the free market, in the hopes that this would (indirectly) boost economic growth in the third world, which would enable more people to be educated, which would enable more people to help aim for a positive human future and an eventual positive singularity.

\n

There are several difficulties with this plan.  For one thing, it's complicated; in order to work, his awareness raising would need to indeed boost free market enthusiasm AND US citizens' free market enthusiasm would need to indeed increase the use of free markets in the third world AND this result would need to indeed boost welfare and education in those countries AND a world in which more people could think about humanity's future would need to indeed result in a better future. Conjunctions are unlikely, and this route didn't sound like the most direct path to Jim's stated goal.

\n

For another thing, there are good general arguments suggesting that it is often better to donate than to work directly in a given field, and that, given the many orders of magnitude differences in efficacy between different sorts of philanthropy, it's worth doing considerable research into how best to give.  (Although to be fair, Jim's emailing me was such research, and he may well have appreciated that point.) 

\n

The biggest reason it seemed Jim would benefit from LW was just manner; Jim seemed smart and well-meaning, but more verbally jumbled, and less good at factoring complex questions into distinct, analyzable pieces, than I would expect if he spent longer around LW.

\n
[2] The traditional rationalist reply would be that if human reasoning is completely and permanently hopeless when divorced from the simple empirical tests of Popperian science, then avoiding such \"unscientific\" subjects is all we can do.
" } }, { "_id": "dStthPQfjCSARXjk2", "title": "The Benefits of Two Religious Educations", "pageUrl": "https://www.lesswrong.com/posts/dStthPQfjCSARXjk2/the-benefits-of-two-religious-educations", "postedAt": "2010-11-18T21:42:50.139Z", "baseScore": 10, "voteCount": 9, "commentCount": 3, "url": null, "contents": { "documentId": "dStthPQfjCSARXjk2", "html": "

It seems fitting that my first post here be an origin story, of sorts.  Like any origin story, it is overly reductionistic and attributes a single cause to an overdetermined phenomenon.  There's an old Spider-Man comic that claims that even if he hadn't been bitten by a radioactive spider, and even if he hadn't caused his uncle's death through inaction, Peter Parker would still have become a superhero thanks to his engineering talent and strong moral fiber.  Nevertheless, I find it compelling to say that I became a skeptic (and from there a rationalist and consequentialist) because from an early age I attended two different religious schools at the same time.

\n

From age six, I spent my weekdays at a Christian independent school.  From around the same time, I went to a Jewish \"Sunday school\" (and to Jewish religious services some Saturday evenings).  I imagine this is a rare, bizarre-sounding way to grow up.  In Jewish communities in rural Pennsylvania it's quite common.

\n

 

\n

This led to a predictable phenomenon.  Adults, teachers in similar positions of respect and authority, were (confidently and earnestly!) making different, contradictory assertions about extremely important subjects.  People whom I respected equally had vastly different concepts of how the universe worked, and I was constantly reminded of this.  The inference was inescapable: teachers were often wrong and I would have to use my own judgement.  I remember briefly theorizing that there were simply two different gods, the Old Testament one and the New Testament one (who was also Gaia), which would certainly help reconcile everything.

\n

 

\n

By the time I was ten, I questioned everything a teacher said, in any subject, to a fault (e.g., I refused to learn the backhand in tennis because I couldn't see the point).   By the time I was twelve, I confidently identified as an atheist.  My parents were still religious Jews, but they didn't really care as long as they could bully me into performing the rituals.  We spent more time arguing about AI, as it happened, than the existence of God (my parents were both Searle-ists).  By the time I was fifteen, I had decided to drop out of school and educate myself, etc.

\n

 

\n

I think I would have gotten there anyway.  But I find it appealing to speculate that I got there much faster than I would have if I'd received a secular education.  I'm curious whether anyone here had a similar upbringing.  Might this be a good way for atheists to deliberately inoculate their children?  Might it be a good way, in general, to ensure that children grow up instinctively distrustful of authority?  I realize that may be a negative trait in an ideal world, but in this corrupt one I think it's essential.

\n

 

" } }, { "_id": "7Mr8DcwjtXEHbNsKD", "title": "Another rationalist protagonist takes on \"magic\"", "pageUrl": "https://www.lesswrong.com/posts/7Mr8DcwjtXEHbNsKD/another-rationalist-protagonist-takes-on-magic", "postedAt": "2010-11-18T20:25:06.045Z", "baseScore": 17, "voteCount": 13, "commentCount": 6, "url": null, "contents": { "documentId": "7Mr8DcwjtXEHbNsKD", "html": "

Along with the rest of the lesswrong community, I've been enjoying \"Harry Potter and the Methods of Rationality.\" 

\n

I would like to heartily recommend another series with similar themes:

\n

The Steerswoman's Road, by Rosemary Kirstein

\n

Like HP:MOR, we have a protagonist applying scientific methods to understand \"magic\".  Unlike HP:MOR, this protagonist doesn't have the benefit of an accumulated body of knowledge on science and rationality (the setting is medieval era technology, more or less). Refreshingly and realistically, though, her pursuits are part of a larger collaboration, the \"steerswomen\", a group devoted to furthering and freely disseminating human knowledge.  Unlike HP, the protagonist has to be confused (about things we as readers already know) not just for minutes, but for years. We are painstakingly led through every step and misstep of her reasoning. 

" } }, { "_id": "HdWY4jB2NpXmWEHWz", "title": "\"Target audience\" size for the Less Wrong sequences", "pageUrl": "https://www.lesswrong.com/posts/HdWY4jB2NpXmWEHWz/target-audience-size-for-the-less-wrong-sequences", "postedAt": "2010-11-18T12:21:09.504Z", "baseScore": 18, "voteCount": 26, "commentCount": 88, "url": null, "contents": { "documentId": "HdWY4jB2NpXmWEHWz", "html": "

[Note: My last thread was poorly worded in places and gave people the wrong impression that I was interested in talking about growing and shaping the Less Wrong community.  I was really hoping to talk about something a bit different.  Here's my revision with a completely redone methodology.]

\n

How many people would invest their time to read the LW sequences if they were introduced to them?

So in other words, I’m trying to estimate the theoretical upper-bound on the number of individuals world-wide who have the ability, desire, and time to read intellectual material online and who also have at least some pre-disposition to wanting to think rationally.

I’m not trying to evangelize to unprepared, “reach” candidates who maybe, possibly would like to read parts of the sequences.  I’m just looking for the likely size of the core audience who already have the ability and the time, and who don’t need to jump through any major hoops to stomach the sequences (like deconverting from religion or radically changing their habits -- like suddenly devoting more of their time to using computers or reading).

The reason I’m investigating this is because I want to build more rationalists.  I know some smart people whose opinions I respect (like Michael Vassar) who contend we shouldn’t spend much time trying to reach more people with the sequences.  They think the majority of people smart enough to follow the sequences and who do weird, eccentric things like “read in their spare time”, are already here.  This is my second attempt to figure this out in the last couple days, and unlike my rough 2M person figure I got with my previous, hasty analysis, this more detailed analysis leaves me with a much lower world-wide target audience of only 17,000.

\n

 

\n
Filter                                        Total Population    Filters Away (%)
Everyone                                      6,880,000,000
Speaks English + Internet Access              536,000,000         92.2%
Atheist/Agnostic                              40,000,000          92.55%
Believes in evolution | Atheist/Agnostic      30,400,000          24%
“NT” (Rational) MBTI                          3,952,000           87%
IQ 130+ (SD 15; US/UK-Atheist-NT 108 IQ)      284,544             92.8%
30 min/day reading or on computers            16,930              94.05%



Yep, that’s right.  There are basically only a few thousand relatively bright people in the world who think reason makes sense and devote at least 2% of their day to arcane activities like “reading” and \"using computers\".

Considering we have 6,438 Less Wrong logins created and a daily readership of around 5,500 people between logged in and anonymous readers, I now actually find it believable that we may have already reached a very large fraction of all the people in the world who we could theoretically convince to read the sequences.

This actually matters because it makes me update in favor of different, more realistic growth strategies than buying AdWords or doing SEO to try and reach the small number of people left in our current target audience.  Like translating the sequences into Chinese.  Or creating an economic disaster that leaves most of the Westerner world unemployed (kidding!).  Or waiting until Eliezer publishes his rationality book so that we can reach the vast majority of our potential, future audience who currently still reads but doesn’t have time to do anti-social, low-prestige things like “reading blogs”.


For those of you who want to consider my methodology, here’s the rationale for each step that I used to disqualify potential sequence readers:

\n



Doesn’t Speak English or have Internet Access:  The sequences are English-only (right now) and online-only (right now).  Don’t think there’s any contention here.  This figure is the largest of the 3 figures I've found but all were around 500,000,000.

Not Atheist/Agnostic: Not being an Atheist or Agnostic is a huge warning sign.  93% of LW is atheist/agnostic for a reason.  It’s probably a combo of  1) it’s hard to stomach reading the sequences if you’re a theist, and 2) you probably don’t use thinking to guide the formation of your beliefs anyway so lessons in rationality are a complete waste of time for you.  These people really needs to have the healing power of Dawkins come into their hearts before we can help them.  Also, note that even though it wasn't mentioned in Yvain's top-level survey post, the raw data showed that around 1/3rd of LW users who gave a reason for participating on LW cite \"Atheism\".

Evolution denialist: If you can’t be bothered to be moved to correct beliefs about the second most obvious conclusion in the world by the mountains of evidence in favor of it, you’re effectively saying you don’t think induction or science can work at all.  These people also need to go through Dawkins before we can help them.

Not “NT” on the Myers-Briggs typology: Lots of people complain about the MBTI.  But in this case, I don’t think it matters that the MBTI isn’t cleaving reality perfectly at the joints or that these types aren’t natural categories.  I realize Jung types aren’t made of quarks and aren’t fundamental.  But I’ve also met lots of people at the Less Wrong meet-ups.  There’s an even split of E/I and P/J in our community.  But there is a uniform, overwhelmingly strong disposition towards N and T.  And we shouldn’t be surprised by this at all.  People who are S instead of N take things at face value and resist using induction or intuition to extend their reasoning.  These people can guess the teacher’s password, but they're not doing the same thing that you call \"thinking\".  And if you’re not a T (Thinking), then that means you’re F (Feeling).  And if you’re using feelings to chose beliefs in lieu of thinking, there’s nothing we can do for you -- you’re permanently disqualified from enjoying the blessings of rationality.  Note:  I looked hard to see if I could find data suggesting that being NT and being Atheist correlated because I didn’t want to “double subtract” out the same people twice.  It turns out several studies have looked for this correlation with thousands of participants... and it doesn’t exist.

Lower than IQ 130: Another non-natural category that people like to argue about.  Plus, this feels super elitist, right?  Excluding people just because they're \"not smart enough\". But it’s really not asking that much when you consider that IQ 100 means you’re buying lottery tickets, installing malware on your computer, and spending most of your free time watching TV.  Those aren’t the “stupid people” who are way down on the other side of the Gaussian -- that’s what a normal 90 - 110 IQ looks like.  Real stupid is so non-functional that you never even see it... probably because you don’t hang out in prisons, asylums and homeless shelters.  Really.  And 130 isn’t all that “special“ once you find yourself being a white (+6IQ) college graduate (+5IQ) atheist (+4IQ) who's ”NT” on Myers-Briggs (+5IQ).  In Yvain’s survey, the average IQ on LW was 145.88.  And only 4 out of 68 LWers reported IQs below 130... the lowest being 120.  I find it inconceivable that EVERYONE lied on this survey.  I also find it highly unlikely that only the top 1/2 reported.  But even if everyone who didn’t report was as low as the lowest IQ reported by anyone on Less Wrong, the average IQ would still be over 130.  Note:   I took the IQ boost from being atheist and being MBTI-“N” into account when figuring out the proportion of 130+ IQ conditional on the other traits already being factored in.

Having no free time: So you speak English, you don’t hate science, you don’t hate reason, and you’re somewhat bright.  Seem like you’re a natural part of our target audience, right?  Nope... wrong!  There’s at least one more big hurdle: Having some free time.  Most people who are already awesome enough to have passed through all these filters are winning so hard at life (by American standards of success) that they are wayyy too busy to do boring, anti-social & low-prestige tasks like reading online forums in their spare time (which they don’t have much of).  In fact, it’s kind of like how knowing a bit about biases can hurt you and make you even more biased.  Being a bit rational can skyrocket you to such a high level of narrowly-defined American-style \"success\" that you become a constantly-busy, middle-class wage-slave who zaps away all your free time in exchange for a mortgage and a car payment. Nice job buddy. Thanks for increasing my GDP epsilon%... now you are left with whatever rationality you started out with minus the effects of your bias dragging you back down to average over the ensuing years.  The only ways I see out of this dilemma are 1) being in a relatively unstructured period of your life (ie, unemployed, college student, semi-retired, etc) or 2) having a completely broken motivation system which keeps you in a perpetually unstructured life against your will (akrasia) or perhaps 3) being a full-time computer professional who can multi-task and pass off reading online during your work day as actually working.  That said, if you're unlucky enough to have a full-time job or you’re married with children, you’ve already fallen out of the population of people who read or use computers at least 30 minutes / day.  This is because having a spouse cuts your time spent reading and using computers in half.  Having children cuts reading in half and reduces computer usage by 1/3rd.  And having a job similarly cuts both reading and computer usage in half.  Unfortunately, most people suffer from several of these afflictions.  I can’t find data that’s conditional on being an IQ 130+ Atheist but my educated guess is employment is probably much better than average due to being so much more capable and I’d speculate that relationships and children are about the same or perhaps a touch lower.  All things equal, I think applying statistics from the general US civilian population and extrapolating is an acceptable approximation in this situation even if it likely overestimates the number of people who truly have 30 minutes of free time / day (the average amount of time needed just to read LW according to Yvain’s survey).  83% of people are employed full-time so they’re gone.  Of the remaining 17% who are unemployed, 10% of the men and 50% of the women are married and have children so that’s another 5.1% off the top level leaving only 11.9% of people.  Of that 11.9% left, the AVERAGE person has 1 hour they spend reading and ”Playing games and computer use for leisure“.  Let’s be optimistic and assume they somehow devote half of their entire leisure budget to reading Less Wrong, that still only leaves 5.95%.  Note: These numbers are a bit rough.  If someone wants to go through the micro-data files of the US Time Use Survey for me and count the exact number of people who do more than 1 hour of \"reading\" and \"Playing games and computer use for leisure\", I welcome this help.
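For concreteness, here is a short sketch that just chains the table's percentages together (a reconstruction from the figures above, so the intermediate numbers differ slightly from the table's because of rounding):

```python
population = 6_880_000_000
filters = [
    ('Speaks English + internet access', 0.922),
    ('Atheist/agnostic',                 0.9255),
    ('Believes in evolution',            0.24),
    ('NT on the MBTI',                   0.87),
    ('IQ 130+',                          0.928),
    ('30 min/day reading or computers',  0.9405),
]

remaining = population
for name, filtered_away in filters:
    remaining *= 1 - filtered_away
    print(f'{name:<35} {remaining:>15,.0f}')
# The last line comes out at roughly 17,000, the bottom-line estimate above.
```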

\n

 

\n

 

\n

Anyone have thoughtful feedback on refinements or additional filters I could add to this?  Do you know of better sources of statistics for any of the things I cite?  And most importantly, do you have new, creative outreach strategies we could use now that we know this?

" } }, { "_id": "Pmi8YDfYqtx8CMrQh", "title": "Weird characters in the Sequences", "pageUrl": "https://www.lesswrong.com/posts/Pmi8YDfYqtx8CMrQh/weird-characters-in-the-sequences", "postedAt": "2010-11-18T08:27:20.737Z", "baseScore": 9, "voteCount": 7, "commentCount": 5, "url": null, "contents": { "documentId": "Pmi8YDfYqtx8CMrQh", "html": "

When the sequences were copied from Overcoming Bias to Less Wrong, it looks like something went very wrong with the character encoding.  I found the following sequences of HTML entities in words in the sequences:

\n

 

\n

&#xE2;&#x80;&#x99;&#x102;&#x15E; d?tre

\n

&#xC5;&#xAB; M?lamadhyamaka

\n

&#x102;&#x15A; Ph?drus

\n

&#x102;&#x2D8;&#xC2;&#x80;&#xC2;&#x94; arbitrator?i window?and

\n

&#x102;&#x15E; b?te m?me

\n

&#xE2;&#x80;&#xA6; over?and

\n

&#xE23;&#xE01; H?jek

\n

&#x102;&#x83;&#xC2;&#x17A; G?nther

\n

&#x102;&#x160; fianc?e proteg?s d?formation d?colletage am?ricaine d?sir

\n

&#x102;&#x83;&#xC2;&#x17B; na?ve na?vely

\n

&#x139;&#x8D; sh?nen

\n

&#xC3;&#xB6; Schr?dinger L?b

\n

&#xE22;&#xE07; ?ion

\n

&#x102;&#x83;&#xC2;&#x15B; Schr?dinger H?lldobler

\n

&#x102;&#x17A; D?sseldorf G?nther

\n

&#xE2;&#x80;&#x93; ? Church? miracles?in Church?Turing

\n

&#xE2;&#x80;&#x99; doesn?t he?s what?s let?s twin?s aren?t I?ll they?d ?s you?ve else?s EY?s Whate?er punish?d There?s Caledonian?s isn?t harm?s attack?d I?m that?s Google?s arguer?s Pascal?s don?t shouldn?t can?t form?d controll?d Schiller?s object?s They?re whatever?s everybody?s That?s Tetlock?s S?il it?s one?s didn?t Don?t Aslan?s we?ve We?ve Superman?s clamour?d America?s Everybody?s people?s you?d It?s state?s Harvey?s Let?s there?s Einstein?s won?t

\n

&#x102;&#x104; Alm?si Zolt?n

\n

&#x102;&#x164; pre?mpting re?valuate

\n

&#x102;&#x2D8;&#xC2;&#x89;&#xC2;&#xA0; ?

\n

&#x102;&#xA8; l?se m?ne accurs?d

\n

&#xE23;&#xE10; Ver?andi

\n

&#xE2;&#x86;&#x92; high?low low?high

\n

&#x102;&#x2D8;&#xC2;&#x80;&#xC2;&#x99; doesn?t

\n

&#xC4;&#x81; k?rik Siddh?rtha

\n

&#xE23;&#xE16; Sj?berg G?delian L?b Schr?dinger G?gel G?del co?rdinate W?hler K?nigsberg P?lzl

\n

&#x102;&#x17B; na?vet

\n

&#xC2;&#xA0; I?understood ? I?was

\n

&#x102;&#x15B; Schr?dinger

\n

&#x102;&#x17D; pla?t

\n

&#xFA;&#xF1; N?ez

\n

&#x139;&#x82; Ceg?owski

\n

&#xE2;&#x80;&#x94; PEOPLE?and smarter?supporting to?at problem?and probability?then valid?to opportunity?of time?in true?I view?wishing Kyi?and ones?such crudely?model stupid?which that?larger aside?from Ironically?but intelligence?such flower?but medicine?as

\n

&#xE2;&#x80;&#x90; side?effect galactic?scale

\n

&#xC2;&#xB4; can?t Biko?s aren?t you?de didn?t don?t it?s

\n

&#xE2;&#x89;&#xA0; P?NP

\n

&#x7AB6;&#x99AC; basically?ot

\n

&#x139;&#x91; Erd?s

\n
Now, an example like \"&#xC3;&#xB6; Schr?dinger L?b\" I can decode: \"C3 B6\" is the byte sequence for the UTF-8 encoding of \"U+00F6 ö LATIN SMALL LETTER O WITH DIAERESIS\".  But \"&#xFA;&#xF1;\" is not a valid UTF-8 sequence - and those that contain entities larger than 255 are very mysterious.  Anyone able to make any guesses?
\n
EDIT: &#xE23;&#xE16; translated into Windows codepage 874 is C3 B6!
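For anyone who wants to check that, here is a quick sketch of the round trip (plain Python, nothing specific to whatever pipeline actually did the damage):

```python
import html

damaged = '&#xE23;&#xE16;'          # one of the entity pairs from the list above

# The entities decode to two Thai characters (U+0E23, U+0E16)...
thai = html.unescape(damaged)

# ...which are what you get if you read the raw bytes as Windows codepage 874.
# Encoding them back recovers the original bytes,
raw = thai.encode('cp874')          # b'\xc3\xb6'

# ...and those bytes are the UTF-8 encoding of the character that was meant.
print(raw.decode('utf-8'))          # ö  (as in Schr?dinger, L?b above)
```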
\n

 

\n

 

" } }, { "_id": "Asw2uEDBrkHPpCDvH", "title": "IQ Scores Fail to Predict Academic Performance in Children With Autism", "pageUrl": "https://www.lesswrong.com/posts/Asw2uEDBrkHPpCDvH/iq-scores-fail-to-predict-academic-performance-in-children", "postedAt": "2010-11-18T03:34:12.148Z", "baseScore": 9, "voteCount": 7, "commentCount": 9, "url": null, "contents": { "documentId": "Asw2uEDBrkHPpCDvH", "html": "

http://www.sciencedaily.com/releases/2010/11/101117141514.htm

\n

I find this study extremely interesting. Before anyone starts to spout anti-IQ rhetoric - let me say that I realize that the predictive effects of IQ (which are extensively documented in research journals) are only as good as they are simply because most people aren't that psychologically different from each other (owing both to genetic factors and socialization), which will naturally reduce the variance in performance due to other factors (and increase the associated variance in performance attributed to IQ). In fact, one of the major findings in the IQ literature is that in the general population, the primary sources of intelligence are highly correlated with each other. Verbal intelligence is correlated with mathematical intelligence, and both of those are even highly correlated with reaction times. But among people who are autistic, this finding may be less accurate than it is among neurotypicals. Some of us Aspies are exceptionally talented at certain things while simultaneously being incredibly incompetent at other things.

\n

Of course, findings only apply to the population at large and may not apply to specific subsets of people. Most research shows that people who sleep more get higher grades - but this is definitely not true for certain subgroups of people - there are plenty of intelligent people at MIT/Caltech who sleep far less than the average student in a state school. The same logic applies to skipping classes and lower grades (Caltech's classes have notoriously high absence rates since the students there are independent studiers). Similarly, IQ tests (and other assessments normalized to the general population) may not necessarily predict performance among certain subgroups of people, especially those who are non-neurotypical. This logic could apply to GPAs and SAT scores too (I know a mathematical genius with Asperger's at UChicago who only got 600s on his SATs, for example). And I also suspect that it may apply to subgroups of people with Attention Deficit Disorder. There's a good chance that other factors may interfere with IQ, which may increase variance in performance due to other environmental factors (and decrease variance in performance due to autism). And simply increasing the variation in environment will naturally reduce the variation in IQ due to comparatively immalleable factors (early childhood influence is just as immalleable as genetics once you start measuring IQs of older children and teenagers, for example).

\n

I'm still pretty sure that IQ tests will measure *something* in Aspies (as the article says, it's the uneven performance profile that tends to make Aspies stick out). Scores on the individual scales will still say something about an Aspie's range of strengths and weaknesses.

\n


Anyways, I think this topic would be very interesting to this group, since it has a high population of non-neurotypical thinkers. In particular, I'd like to encourage the discussion of other metrics that may predict performance among neurotypicals, but not necessarily among non-neurotypicals. In particular, it has a lot of significant relation to signalling theory. If someone's a high school dropout and didn't continue onto college, for example, chances are that he probably isn't the type of person who would get along with people on lesswrong. But if you learn that he has Asperger's and other traits of an unusual background, then you might figure that the \"high school dropout\" signal has significantly less validity in that case, and he might have a much higher chance of getting along with people on lesswrong.

" } }, { "_id": "RZuwnE7htZTgaruBJ", "title": "Rationalist Diplomacy, Game 2, Game Over", "pageUrl": "https://www.lesswrong.com/posts/RZuwnE7htZTgaruBJ/rationalist-diplomacy-game-2-game-over", "postedAt": "2010-11-18T03:19:48.087Z", "baseScore": 12, "voteCount": 7, "commentCount": 87, "url": null, "contents": { "documentId": "RZuwnE7htZTgaruBJ", "html": "

Note: The title refers to the upcoming turn.

\n

OK, here's the promised second game of diplomacy. The game name is 'Rationalist Diplomacy Game 2.'

\n

Kevin was Prime Minister of Great Britain
AlexMennen is President of France
tenshiko is Kaiser of Germany
Alexandros was King of Italy until his retirement
WrongBot is Emperor of Austria
Thausler is Czar of Russia
Hugh Ristik is Sultan of Turkey

\n

Randaly is the GM, and can be reached at nojustnoperson@gmail.com

\n

 

\n

Peace For Our Time!

\n

The leaders of the three surviving nations, France, Russia, and Turkey, agreed to a peace treaty in late August, bringing an end to this destructive conflict. Crowds across Europe broke out into spontaneous celebration, as national leaders began to account for the vast costs- human and monetary- of the wars.

\n

 

\n

\"\"

\n
\n

All orders should be sent to nojustnoperson@gmail.com with an easy-to-read title like \"Rationalist Diplomacy Game 2: Russian Orders Spring 1901\". Only the LAST set of orders sent will be counted, so feel free to change your mind or to do something sneaky like sending in a fake set of orders cc your ally, and then sending in your real orders later. I'm not going to be too picky on exactly how you phrase your orders, but I prefer standard Diplomacy terminology like \"F kie -> hel\". New players - remember that if you send two units to the same space, you MUST specify which is attacking and which is supporting. If you make a mistake there or anywhere else, I will probably email you and ask you which you meant, but if I don't have time you'll just be out of luck.

\n

ETA: HughRistik would like to underscore that, under the standard house rules, all draws are unranked.

\n
\n

Past maps can be viewed  here; the game history can be viewed  here.

" } }, { "_id": "JrdS5DHjNKaTcGxpf", "title": "Suspended Animation Inc. accused of incompetence", "pageUrl": "https://www.lesswrong.com/posts/JrdS5DHjNKaTcGxpf/suspended-animation-inc-accused-of-incompetence", "postedAt": "2010-11-18T00:20:03.481Z", "baseScore": 50, "voteCount": 45, "commentCount": 139, "url": null, "contents": { "documentId": "JrdS5DHjNKaTcGxpf", "html": "

I recently found something that may be of concern to some of the readers here.

\n

On her blog, Melody Maxim, a former employee of Suspended Animation (the provider of \"standby services\" for Cryonics Institute customers), describes several examples of gross incompetence in providing those services: spending large amounts of money on designing and manufacturing novel perfusion equipment when cheaper, more effective devices that could have been adapted to the purpose already existed; hiring laymen to perform difficult medical procedures, which they then botched; and even finding themselves unable to get their equipment loaded onto a plane because it exceeded the weight limit.

\n

An excerpt from one of her posts, \"Why I Believe Cryonics Should Be Regulated\":

\n
\n

It is no longer possible for me to believe what I witnessed was an isolated bit of corruption, and the picture gets bigger, by the year...

\n

For forty years, cryonics \"research\" has primarily consisted of laymen attempting to build equipment that already exists, and laymen trying to train other laymen how to perform the tasks of paramedics, perfusionists, and vascular surgeons...much of this time with the benefactors having ample funding to provide the real thing, in regard to both equipment and personnel. Organizations such as Alcor and Suspended Animation, which want to charge $60,000 to $150,000, (not to mention other extra charges, or years worth of membership dues), are not capable of preserving brains and/or bodies in a condition likely to be viable in the future. People associated with these companies, have been known to encourage people, not only to leave hefty life insurance policies with their organizations listed as the beneficiaries, to pay for these amateur surgical procedures, but to leave their estates and irrevocable trusts to cryonics organizations.

\n

...

\n

Again, I have no problem with people receiving their last wishes. If people want to be cryopreserved, I think they should have that right. BUT...companies should not be allowed to deceive people who wish to be cryopreserved. They should not be allowed to publish photos of what looks like medical professionals performing surgery, but in actuality, is a group of laymen playing doctor with a dead body...people whose incompetency will result in their clients being left warm (and decaying), for many hours while they struggle to perform a vascular cannulation, or people whose brains will be underperfused or turned to mush, by laymen who have no idea how to properly and safely operate a perfusion circuit. Cryonics companies should not be allowed to refer to laymen as \"Chief Surgeon,\" \"Surgeon,\" \"Perfusionist,\" when these people hold no medical credentials.

\n
" } }, { "_id": "izrW8nA65yHvQhmzy", "title": "Imperfect Levers", "pageUrl": "https://www.lesswrong.com/posts/izrW8nA65yHvQhmzy/imperfect-levers", "postedAt": "2010-11-17T19:12:41.564Z", "baseScore": 5, "voteCount": 23, "commentCount": 36, "url": null, "contents": { "documentId": "izrW8nA65yHvQhmzy", "html": "

Related to: Lost Purposes, The importance of Goodhart's Law, Homo Hypocritus, SIAI's scary idea, Value Deathism

\n

Summary: Whenever human beings seek to achieve goals far beyond their individual ability, they use leverage of some kind or another. Creating organizations to achieve goals is a very powerful source of leverage. However, due to their nature, organizations are imperfect levers, and the primary purpose is often lost: the inertia of present forms and processes dominates beyond its useful period. The present system of the world has many such imperfect organizations in power, and any of them developing near-general intelligence without a significant redesign of its utility function could be a source of existential risk/values risk.

\n

\n

When human beings seek to achieve large, ambitious goals, it is natural to use some kind of leverage, some ability to multiply one's power. Financial leverage - taking on debt - is one of the most common means of leverage, as it turns a small profit or spread into a large one. An even more powerful means of leverage is bringing people together and creating organizations to achieve the purpose one set out to achieve.

\n

However, unlike the comparative cleanliness of financial leverage (which is not without problems of its own), using organizational leverage is messy, especially if the organization is created to achieve goals that are subtle and complex. There is an entire body of management literature that tries to align the interests of principals and agents, but most agents do not escape Goodhart's law. As organizations grow, the mechanization of their incentive structures increases. Agents will do what they have been incentivized to do.
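(A toy sketch of that last point, with numbers invented purely for illustration: an agent splits effort between producing real value and gaming the metric it is measured on. Rewarded on the metric alone, the optimal allocation is to pour everything into gaming, and real value collapses.)

```python
# Goodhart's law in miniature: the metric is real value plus cheaper-to-produce
# 'gaming', so an agent paid on the metric stops producing value at all.
# All numbers are invented for illustration.
TOTAL_EFFORT = 10.0
VALUE_PER_EFFORT = 1.0   # real value per unit of effort spent on the actual goal
GAMING_PER_EFFORT = 3.0  # metric inflation per unit of effort spent gaming it

def outcome(effort_on_value):
    effort_on_gaming = TOTAL_EFFORT - effort_on_value
    value = VALUE_PER_EFFORT * effort_on_value
    metric = value + GAMING_PER_EFFORT * effort_on_gaming
    return value, metric

allocations = [i * 0.5 for i in range(21)]  # 0.0, 0.5, ..., 10.0
paid_on_metric = max(allocations, key=lambda a: outcome(a)[1])
paid_on_value = max(allocations, key=lambda a: outcome(a)[0])

print('paid on the metric -> effort on real value:', paid_on_metric,
      ', real value produced:', outcome(paid_on_metric)[0])
print('paid on real value -> effort on real value:', paid_on_value,
      ', real value produced:', outcome(paid_on_value)[0])
# Paid on the metric, the agent spends 0 effort on the goal: the metric reads 30.0
# while the thing the principal actually cared about is 0.0.
```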

\n

An example

\n

Recently, we saw an awesome demonstration of great futility: Quantitative Easing II, the purchase of a large quantity of US government bonds by the US Fed in the hope of creating more prosperity. The people feeling poor due to the bursting of the housing bubble are definitely not the recipients of this money - and they are the ones who eventually have to start producing and consuming again for this recession to end. Where would they get the money from? The expected causal chain (econ experts can correct me if I'm wrong here) went like this:

\n

Buying US bonds - creating the expectation of inflation in the market - leading to banks wanting to lend out their reserves to people - leading to the people getting credit from banks - leading to them spending again - leading to improved profits in firms - leading to those firms hiring - leading to more jobs, and so on.

\n

The extremely long causal chain is not the main failure mode here. Nor is the fact that directly contradictory policies act against step 3 (paying interest on reserves maintained at the Fed). My point is that even if this entire chain were to play out and the nominal end result, GDP growth, were achieved, the resulting prosperity would not be long-lasting, because it is not based on a sustainable pattern of production and trade.

\n

Maintaining equitable prosperity in a society that is facing competition from younger and poorer countries is a tough problem. Instead of tackling this problem head-on, the various governmental sources continued on their inertial paths and chose to adapt the patterns of home and asset ownership to create an illusion of prosperity. After all, the counter (GDP) was still running.

\n

The Pattern

\n

Many smart people have voiced opinions against such blind following of metrics in government, but almost all organizations, once beyond the grip of their founders, fall into some such pattern. Compassionate mystic movements become rigid churches. Political parties, for example, pay more attention to lip service, pomp, show and mind-killing than to actual issues. Companies seek to make money at the expense of creating actual value, forgetting that money is only a symbol of value. A lot of people have bemoaned the brainpower moving into finance. And, something even more repugnant, there is an entire economy thriving around the war on drugs, with everyone in on the cut.

\n

In the short run, all these organizations/formations/coalitions are winning. So, voices against their behaviour that do not threaten them are being ignored.

\n

\"It is difficult to get a man to understand something, when his salary depends upon his not understanding it!\" - Upton Sinclair

\n

There is a great deal of intelligence being applied in these areas today, by people far smarter than you or me. But the overall systems are still not as intelligent as a human. In the long run, they are probably undermining their own foundations.

\n

The really scary part is that these corporations and governments, while sub-humanly intelligent right now, are probably going to be at the forefront of creating GAI. I expect that near-human intelligence will first emerge inside organizations, most probably as a well-knit human+computer team, perhaps with Brain Computer Interfaces. This team may or may not share all the values of the organization, but it will seek to achieve whatever incentives it is being rewarded for. And if these incentives are as narrowly phrased as

\n

 

\n\n

 

\n

then there will be a continuation of today's sheer insane optimization, but on a monstrous scale. The altruists amongst us have shown themselves unable to curb the sub-humanly intelligent larvae; what will we do with the near-human or super-human butterfly? Competition between such entities would very quickly eliminate most compassionate values and a lot of what humanity holds dear. (My personal belief is that corporate AIs might be a little safer, as they would probably crash the money system but would not be lobbing nuclear weapons at rivals.)

\n

In the end, I don't want to undermine the very idea of organizations, because they have brought us unprecedented prosperity. I could not be transmitting my opinions to you without the support of many such organizations.

\n

So, my take is that I don't find SIAI's scary idea a completely alien one. The present sources of optimization power - whether people in governments, LLCs in the present mixed-economy system, or political parties in the present plurality system - do not show any inclination towards understanding or moving towards true human morality. They do not \"search their souls\"; they respond to incentives. They act like systems with utility functions, functions indifferent to morality. Their AIs will inherit the characteristics of these imperfect levers, and there is no reason to expect, from increased intelligence alone, that the AIs will move towards friendliness/morality.

\n

EDIT : Edited to make clear the conclusion and set right the Goodhart's law link. Apologies to Vaniver, Nornagest, atucker, b1shop, Will_Sawin, AdShea, Jack, mwaser and magfrump who posted before the edit. Thanks to xamdam who pointed out the wrong link. 

" } }, { "_id": "2Ki5pCjh383w8KSRe", "title": "How would you spend 30 million dollars?", "pageUrl": "https://www.lesswrong.com/posts/2Ki5pCjh383w8KSRe/how-would-you-spend-30-million-dollars", "postedAt": "2010-11-17T14:28:12.659Z", "baseScore": 0, "voteCount": 11, "commentCount": 9, "url": null, "contents": { "documentId": "2Ki5pCjh383w8KSRe", "html": "


\n

There's a good song by Eminem - If I had a million dollars. So, if I had the hypothetical task of giving away $30 million to different foundations without having any right to influence the projects, I would distribute it as follows, $3 million to each organization:

\n

1. Nanofactory Collaboration, Robert Freitas, Ralph Merkle – developers of molecular nanotechnology and nanomedicine. Robert Freitas is the author of the monograph Nanomedicine.
2. Singularity Institute, Michael Vassar, Eliezer Yudkowsky – developers and ideologists of friendly Artificial Intelligence
3. SENS Foundation, Aubrey de Grey – the most active engineering project in life extension, focused on the most promising underfunded areas
4. Cryonics Institute – one of the biggest cryonics firms in the US; they are able to use additional funding more effectively than Alcor
5. Advanced Neural Biosciences, Aschwin de Wolf – an independent cryonics research center created by ex-researchers from Suspended Animation
6. Brain observatory – brain scanning
7. University Hospital Careggi in Florence, Paolo Macchiarini – growing organs (not an American medical school, because this amount of money won’t make any difference to the leading American centers)
8. Immortality institute – advocating for immortalism, selected experiments
9. IEET – Institute for Ethics and Emerging Technologies – promotion of transhumanist ideas
10. Small research grants of $50-300 thousand

\n

Now, if the task were to invest $30 million as effectively as possible, what projects would be chosen? (By effectiveness here I mean increasing the chances of radical life extension.)

\n

Well, off the top of my head:

\n

1. The project: “Creation of technologies to grow a human liver” – $7 million. The project itself costs approximately $30-50 million, but $7 million is enough to achieve some significant intermediate results and will definitely attract more funds from potential investors.
2. Break the world record in sustaining viability of a mammalian head separate from the body - $0.7 million
3. Creation of an information system, which characterizes data on changes during aging in humans, integrates biomarkers of aging, and evaluates the role of pharmacological and other interventions in aging processes – $3 million
4. Research on increasing cryoprotectant efficacy - $3 million
5. Creation and realization of a program “Regulation of epigenome” - $5 million
6. Creation, promotion and lobbying of the program on research and fighting aging - $2 million
7. Educational programs in the fields of biogerontology, neuromodelling, regenerative medicine, engineered organs - $1.5 million
8. “Artificial blood” project - $2 million
9. Grants for authors, script writers, and art representatives for creation of pieces promoting transhumanism - $0.5 million
10. SENS Foundation project of removing senescent cells - $2 million
11. Creation of a US-based non-profit, which would protect and lobby for the right to live and for scientific research in life extension - $2 million
12. Participation of “H+ managers” in conferences, forums and social events - $1 million
13. Advocacy and creating content in social media - $0.3 million

" } }, { "_id": "YBmoEbwRrQFgnxkmr", "title": "'Space-Time Cloak' to Conceal Events ", "pageUrl": "https://www.lesswrong.com/posts/YBmoEbwRrQFgnxkmr/space-time-cloak-to-conceal-events", "postedAt": "2010-11-17T14:23:05.242Z", "baseScore": 2, "voteCount": 1, "commentCount": 1, "url": null, "contents": { "documentId": "YBmoEbwRrQFgnxkmr", "html": "

http://www.sciencedaily.com/releases/2010/11/101115210937.htm

\n

http://iopscience.iop.org/2040-8986/13/2/024003/

" } }, { "_id": "sA6qbsGHmsbnwnTdJ", "title": "Rolf Nelson: How to deter a rogue AI by using your first-mover advantage ", "pageUrl": "https://www.lesswrong.com/posts/sA6qbsGHmsbnwnTdJ/rolf-nelson-how-to-deter-a-rogue-ai-by-using-your-first", "postedAt": "2010-11-17T14:02:44.444Z", "baseScore": 14, "voteCount": 10, "commentCount": 23, "url": null, "contents": { "documentId": "sA6qbsGHmsbnwnTdJ", "html": "

http://www.sl4.org/archive/0708/16600.html

" } }, { "_id": "pD8jNyj2oArpn2DeP", "title": "Zero Bias", "pageUrl": "https://www.lesswrong.com/posts/pD8jNyj2oArpn2DeP/zero-bias", "postedAt": "2010-11-17T12:16:58.346Z", "baseScore": 14, "voteCount": 12, "commentCount": 5, "url": null, "contents": { "documentId": "pD8jNyj2oArpn2DeP", "html": "

It seems another bug in the human brain is being uncovered:

\n
\n

\"Whereas a 20 percent interest rate may look very large compared to one percent, it may not look as large compared to zero percent. Zero eliminates the reference point we use to assess the size of things,\" 

\n
\n

http://www.scienceagogo.com/news/20101015200630data_trunc_sys.shtml

" } }, { "_id": "pRBYFG34GBdP3Hef3", "title": "Reference Points", "pageUrl": "https://www.lesswrong.com/posts/pRBYFG34GBdP3Hef3/reference-points", "postedAt": "2010-11-17T08:09:04.227Z", "baseScore": 40, "voteCount": 33, "commentCount": 45, "url": null, "contents": { "documentId": "pRBYFG34GBdP3Hef3", "html": "

I just spent some time reading Thomas Schelling's \"Choice and Consequences\" and I heartily recommend it. Here's a Google books link to the chapter I was reading, \"The Intimate Contest for Self Command.\"

\n

It's fascinating, and if you like LessWrong, rationality, understanding things, decision theories, figuring people and the world out - well, then I think you'd like Schelling. Actually, you'll probably be amazed at how much of his stuff you're already familiar with - he really established a heck of a lot of modern thinking on game theory.

\n

Allow me to depart from Schelling a moment, and talk of Sam Snyder. He's a very intelligent guy who has lots of intelligent thoughts. Here's a link to his website - there's massive amounts of data and references there, so I'd recommend you just skim his site if you go visit until you find something interesting. You'll probably find something interesting pretty quickly.

\n

I got a chance to have a conversation with him a while back, and we covered immense amounts of ground. He introduced me to a concept I've been thinking about nonstop since learning it from him - reference points.

\n

Now, he explained it very eloquently, and I'm afraid I'm going to mangle and not do justice to his explanation. But to make a long story really short, your reference points affect your motivation a lot.

\n

An example would help.

\n

What does the average person think about when he thinks of running? He thinks of huffing, puffing, being tired and sore, having a hard time getting going, looking fat in workout clothes, and being embarrassed at being out of shape. A lot of people try running at some point in their lives, and most don't keep doing it.

\n

On the other hand, what does a regular runner think of? He thinks of the \"runner's high\" and gliding across the pavement, enjoying a great run, and feeling like a million bucks afterwards.

\n

Since that conversation, I've been trying to change my reference points. For instance, if I feel like I'd like some fried food, I try not to imagine/reference eating the salty, greasy food. Yes, eating french fries and a grilled chicken sandwich will be salty and fatty and delicious. It's a superstimulus; we're not really evolved to handle that stuff appropriately.

\n

So when most people think of the McChicken Sandwich, large fry, large drink, they think about the grease and salt and sugar and how good it'll taste.

\n

I still like that stuff. In fact, since I quit a lot of vices, sometimes I crave even harder for the few I have left. But I was able to cut my junk food consumption way down by changing my reference point. When I start to have a desire for that sort of food, I think about how my stomach and energy levels are going to feel 90 minutes after eating it. That answer is - not too good. So I go out to a local restaurant and order plain chicken, rice, and vegetables, and I feel good later.

\n

In Choice and Consequences, Schelling talks about how traditional economics applies a discount rate, and how that fails to explain many situations. Schelling writes, \"[The person who] furiously scratches would have to be someone whose time discount is 100% per hour or per minute, compounding to an annual rate too large for a calculator.\"
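(Just to unpack the \"too large for a calculator\" line: 100% per hour, compounded hourly over a year, is a factor of 2^8760 - a number of about 2,638 digits, which a calculator indeed cannot display, though Python's integers handle it fine.)

```python
hours_per_year = 24 * 365              # 8760
annual_factor = 2 ** hours_per_year    # 100% per hour, compounded hourly for a year
print(len(str(annual_factor)))         # 2638 -- i.e. roughly 10**2637
```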

\n

Schelling raises more questions than answers. But I think one of the answers is clear, and that answer is reference points. The man who scratches his rash now, at the expense of a much worse condition later, isn't discarding the future. He simply isn't referencing it when he makes his decision. He itches, and he references scratching as an immediate abatement of the itch.

\n

Eliezer writes in the theory of fun that to sell an idea to someone, you usually don't need to convince them it's a good thing to live with for their whole life. You only need to convince them that the first hour or day after they choose is going to be good.

\n

And... I think that's scary, because it's true. People reference the immediate <em>very</em> short-term consequences of their actions, instead of the broader picture. Whether that's exercise, junk food, scratching a rash, buying a bigger TV, or conceptualizing eternity.

\n

This explains a lot of why people act the way they do. It also suggests a way forward for you - gradually evolve your reference points, so that thinking of junk food means thinking of that heavy, weighed-down feeling in your belly, and thinking of exercise means the rush of good hormones and the pleasantness of a good workout. Imagine scratching a rash as doubling your discomfort instead of abating it, and imagine how much nicer your future surroundings will be if you save and invest that money for just a short time longer.

\n

Your reference points establish how you value things. Change them, and how you value things will change.

" } }, { "_id": "NbwbHQmgRoBuqtWrQ", "title": "Theoretical \"Target Audience\" size of Less Wrong", "pageUrl": "https://www.lesswrong.com/posts/NbwbHQmgRoBuqtWrQ/theoretical-target-audience-size-of-less-wrong", "postedAt": "2010-11-16T21:27:20.317Z", "baseScore": 17, "voteCount": 13, "commentCount": 59, "url": null, "contents": { "documentId": "NbwbHQmgRoBuqtWrQ", "html": "

[Note: This is very rough but I’m looking for feedback and help on doing this estimate so I wanted to just post it quickly and see if others can help.]


I’ve been trying to estimate the theoretical upper-bounds (or lower-bounds) on the *potential* community size of LW.


I know rationalists who seriously claim that we shouldn’t spend effort trying to grow LW because over half of the people smart enough to even follow the conversation (much less add to it) are already here.  The world is pretty irrational, but I’m trying to find evidence (if it exists) that things aren’t that bad.  There are only around 6000 LW accounts and only 200-300 are active any given month.

So a trivial bound on our community is

[200, 6.88 billion]

A big filter is being able to use English online

Native English Speakers: 341,000,000
All English Speakers:  508,000,000
I found a similar number (536,000,000) for number of internet users who speak English.

7.4% ---  Speak English

However, only 15% of US + UK (majority of English speakers worldwide) are “Not religious”.  Another sad sub-fact is that 25% of people with “No religion” believe in god as well!  So really it’s only 10-11% of Americans and British who are potential LWers.  My guess is that if someone can’t get this right, they really need to go through Dawkins before they can realistically contribute to or benefit from our community.  I’m sure there’s some debate on this, but it seems like a pretty good heuristic while not being iron-clad.

0.81%

\"Intelligence and the Wealth and Poverty of Nations\" says that the US and the UK have avg IQs of 98 and 100 respectively.

And although you’d think being an atheist would be a big screen that would filter highly for IQ, it only accounts for about 4 extra IQ points versus the average.

So if we assume a baseline IQ of 103 among atheists from the US and UK (who speak English), the proportion of them with an IQ of 130+ is only 3.6%


0.0293%


So if we clumsily extrapolate the US+UK demographics across all English-speaking online people worldwide, maybe we have 2 million possible readers left in our target audience?
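(Here's a short sketch that just re-runs the arithmetic above - population figures as quoted, the "not religious" filter taken as 11%, and IQ assumed normal with SD 15 around the 103 baseline. It lands at roughly 2 million, matching the estimate.)

```python
import math

def normal_tail(x, mean, sd):
    # P(X > x) for a normal distribution with the given mean and sd
    return 0.5 * math.erfc((x - mean) / (sd * math.sqrt(2)))

world_pop = 6.88e9
english_online = 508e6                    # 'all English speakers' figure quoted above

p_english = english_online / world_pop            # ~7.4%
p_not_religious = 0.11                            # the 10-11% US+UK figure, taken as 11%
p_iq_130_plus = normal_tail(130, 103, 15)         # ~3.6% given the 103 baseline

p_target = p_english * p_not_religious * p_iq_130_plus
print(round(p_english * 100, 1), round(p_english * p_not_religious * 100, 2),
      round(p_target * 100, 4))                   # 7.4  0.81  0.0292
print('estimated target audience: ~', round(world_pop * p_target / 1e6, 1), 'million')
```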


Google Analytics claims Less Wrong has had a total of 1,090,339 “Absolute Unique Visitors” since LW started.  That’s almost certainly an over-estimate -- although it’s lower than plain unique visitors, which is over 2 million.  Hmm... if we assumed that was correct, that 1 million people have come to LW at some point and only 6000 stuck around long enough to sign up, perhaps we did already have 1/2 our target audience or so arrive and leave already?  I dunno.  What do you think?


I think this analysis of mine is pretty weak.  Especially the numerical estimates and my methodology.  I’m trying to use conditional probabilities but having trouble separating things.


I’d welcome help from anyone who can find better statistics or do a totally different analysis to get our potential audience size.  Is it 10 million? 10,000? Somewhere in between?

Some other screening characteristics I’ve considered using are MBTI, gender (is it correct to just divide our target by 2? I haven’t chosen to do that, but I think a fair case could be made for it... let me know what you think), age (although I believe my English screen takes into account removing those too young to use the net), etc.

I’m looking forward to seeing some other sets of numbers that come to different estimates!

" } }, { "_id": "noHE7TDyhjH35b7So", "title": "Pittsburgh meetup Nov. 20", "pageUrl": "https://www.lesswrong.com/posts/noHE7TDyhjH35b7So/pittsburgh-meetup-nov-20", "postedAt": "2010-11-16T21:03:42.630Z", "baseScore": 6, "voteCount": 4, "commentCount": 16, "url": null, "contents": { "documentId": "noHE7TDyhjH35b7So", "html": "

We'll be meeting at 7:00 this Saturday, at the Starbucks at 417 South Craig St (not on the CMU campus, as previously stated).

\n

Hope to see you there!

" } }, { "_id": "Npz5QQxFe8GxLpEJG", "title": "Anti-Akrasia Reprise", "pageUrl": "https://www.lesswrong.com/posts/Npz5QQxFe8GxLpEJG/anti-akrasia-reprise", "postedAt": "2010-11-16T11:16:16.945Z", "baseScore": 5, "voteCount": 17, "commentCount": 24, "url": null, "contents": { "documentId": "Npz5QQxFe8GxLpEJG", "html": "

A year and a half ago I wrote a LessWrong post on anti-akrasia that generated some great discussion. Here's an extended version of that post:  messymatters.com/akrasia

\n

And here's an abstract:

\n

The key to beating akrasia (i.e., procrastination, addiction, and other self-defeating behavior) is constraining your future self -- removing your ability to make decisions under the influence of immediate consequences. When a decision involves some consequences that are immediate and some that are distant, humans irrationally (no amount of future discounting can account for it) over-weight the immediate consequences. To be rational you need to make the decision at a time when all the consequences are distant. And to make your future self actually stick to that decision, you need to enter into a binding commitment. Ironically, you can do that by imposing an immediate penalty, by making the distant consequences immediate. Now your impulsive future self will make the decision with all the consequences immediate and presumably make the same decision as your dispassionate current self who makes the decision when all the consequences are distant. I argue that real-world commitment devices, even the popular stickK.com, don't fully achieve this and I introduce Beeminder as a tool that does.
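(A toy numerical version of the argument in the abstract, with an invented payoff structure; hyperbolic discounting, value = amount / (1 + k*delay), is just one standard way to model the over-weighting of immediate consequences. The distant self prefers the productive option, the in-the-moment self prefers the impulsive one, and a self-imposed immediate penalty of the kind a commitment device provides flips the in-the-moment choice back.)

```python
# All amounts, delays, and the discount parameter are invented for illustration.
K = 1.0  # hyperbolic discount parameter, per day

def value(amount, delay_days):
    return amount / (1 + K * delay_days)

def preferred(options, viewed_from_days_away=0.0):
    # Each option is (name, [(amount, delay_in_days), ...]).
    return max(options,
               key=lambda o: sum(value(a, d + viewed_from_days_away) for a, d in o[1]))

slack = ('slack off tonight', [(10, 0)])
work = ('work on the project', [(30, 7)])

print(preferred([slack, work], viewed_from_days_away=30)[0])  # a month ahead: work
print(preferred([slack, work])[0])                            # in the moment: slack off

# Commitment device: pledge an immediate penalty (money at stake) for slacking.
slack_with_pledge = ('slack off tonight', [(10, 0), (-8, 0)])
print(preferred([slack_with_pledge, work])[0])                # in the moment: work again
```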

\n

(Also related is this LessWrong post from last month, though I disagree with the second half of it.)

\n

My new claim is that akrasia is simply irrationality in the face of immediate consequences.  It's not about willpower nor is it about a compromise between multiple selves.  Your true self is the one that is deciding what to do when all the consequences are distant.  To beat akrasia, make sure that's the self that's calling the shots.

\n

And although I'm using the multiple selves / sub-agents terminology, I think it's really just a rhetorical device.  There are not multiple selves in any real sense.  It's just the one true you whose decision-making is sometimes distorted in the presence of immediate consequences, which act like a drug.

" } }, { "_id": "ApECAMi7fuexNaQ4K", "title": "Criticisms of CEV (request for links)", "pageUrl": "https://www.lesswrong.com/posts/ApECAMi7fuexNaQ4K/criticisms-of-cev-request-for-links", "postedAt": "2010-11-16T04:02:20.548Z", "baseScore": 10, "voteCount": 8, "commentCount": 29, "url": null, "contents": { "documentId": "ApECAMi7fuexNaQ4K", "html": "

I know Wei Dai has criticized CEV as a construct, I believe offering the alternative of rigorously specifying volition *before* making an AI. I couldn't find these posts/comments via a search, can anyone link me? Thanks.

\n

There may be related top-level posts, but there is a good chance that what I am specifically thinking of was a comment-level conversation between Wei Dai and Vladimir Nesov.

\n

Also feel free to use this thread to criticize CEV and to talk about other possible systems of volition.

" } }, { "_id": "6fHun22bSyHpMykYH", "title": "Are we more akratic than average?", "pageUrl": "https://www.lesswrong.com/posts/6fHun22bSyHpMykYH/are-we-more-akratic-than-average", "postedAt": "2010-11-15T23:08:53.341Z", "baseScore": 14, "voteCount": 9, "commentCount": 5, "url": null, "contents": { "documentId": "6fHun22bSyHpMykYH", "html": "

Akrasia is a topic that shows up on LW very frequently. Is there evidence that this is related to any of the traits that correlate with LW participation (high intelligence, non-neurotypical to some greater or lesser degree, inclination toward far thinking, anything else we know from the (old) survey or any other polls I'm forgetting)? Or is it just a problem in instrumental rationality that most or all people deal with from time to time, for which there is limited scientific understanding and therefore very little science underlying the mainstream advice, thus making it (seemingly) low-hanging fruit for rationalists?

" } }, { "_id": "KKS6Pv9Ep6M8R4oHa", "title": "Jerusalem meetup Nov. 20", "pageUrl": "https://www.lesswrong.com/posts/KKS6Pv9Ep6M8R4oHa/jerusalem-meetup-nov-20", "postedAt": "2010-11-15T22:29:09.466Z", "baseScore": 8, "voteCount": 5, "commentCount": 23, "url": null, "contents": { "documentId": "KKS6Pv9Ep6M8R4oHa", "html": "

We'll be meeting in the Link cafe on Saturday, November 20, at 9 PM. With special guests of honor: Anna Salamon and Carl Shulman!

" } }, { "_id": "2eLWafhLwZbzBpwya", "title": "Article on quantified lifelogging (Slate.com) ", "pageUrl": "https://www.lesswrong.com/posts/2eLWafhLwZbzBpwya/article-on-quantified-lifelogging-slate-com", "postedAt": "2010-11-15T16:38:02.521Z", "baseScore": 1, "voteCount": 3, "commentCount": 4, "url": null, "contents": { "documentId": "2eLWafhLwZbzBpwya", "html": "

Data for a Better Planet focuses on The Quantified Self, and offers an overview of the state of the art in detailed, quantitative personal tracking.

\r\n

This seems related to an LW interest cluster.

" } }, { "_id": "CaafKKdkb6pXRmgfq", "title": "Catastrophic risks from artificial intelligence (Onion Videos)", "pageUrl": "https://www.lesswrong.com/posts/CaafKKdkb6pXRmgfq/catastrophic-risks-from-artificial-intelligence-onion-videos", "postedAt": "2010-11-15T15:28:30.105Z", "baseScore": 0, "voteCount": 8, "commentCount": 1, "url": null, "contents": { "documentId": "CaafKKdkb6pXRmgfq", "html": "


" } }, { "_id": "coQ8A7MzgPvLxsB4z", "title": "The curse of giftedness:", "pageUrl": "https://www.lesswrong.com/posts/coQ8A7MzgPvLxsB4z/the-curse-of-giftedness", "postedAt": "2010-11-15T13:32:43.361Z", "baseScore": 1, "voteCount": 1, "commentCount": 2, "url": null, "contents": { "documentId": "coQ8A7MzgPvLxsB4z", "html": "

“Sometimes,” says Dr. Freeman, sitting in her airy office in central London, with toys on the floor and copies of her 17 books on the shelf, “those with extremely high IQ don't bother to use it.” (article) Your thoughts on that issue?

" } }, { "_id": "wD3SA2o7FiF7W5iPv", "title": "Do you visualize Omega?", "pageUrl": "https://www.lesswrong.com/posts/wD3SA2o7FiF7W5iPv/do-you-visualize-omega", "postedAt": "2010-11-15T10:03:38.898Z", "baseScore": 7, "voteCount": 7, "commentCount": 29, "url": null, "contents": { "documentId": "wD3SA2o7FiF7W5iPv", "html": "

I find Omega so profoundly annoying that I don't bother with problems that include it. I know, it's just a philosophical tool, but between not believing that such a thing is possible, and believing that if such a thing were possible, I couldn't recognize it, Omega just has a tremendous ugh field. If I start thinking about it a little more, I get distracted (though less intensely-- this isn't a deal-killer) by questions of motivation. If I had that much power and knowledge, I wouldn't spend my time nagging people with brain teasers.

\n

In any case, many people on LW do think about problems including Omega. Do you imagine an appearance for Omega? A voice?

" } }, { "_id": "CLRcKK333uupdPZgY", "title": "Ethical Treatment of AI", "pageUrl": "https://www.lesswrong.com/posts/CLRcKK333uupdPZgY/ethical-treatment-of-ai", "postedAt": "2010-11-15T02:30:21.944Z", "baseScore": -7, "voteCount": 9, "commentCount": 15, "url": null, "contents": { "documentId": "CLRcKK333uupdPZgY", "html": "

In the novel Life Artificial I use the following assumptions regarding the creation and employment of AI personalities.

\n

 

\n
    \n
  1. AI is too complex to be designed; instances are evolved in batches, with successful ones reproduced
  2. \n
  3. After an initial training period, the AI must earn its keep by paying for Time (a unit of computational use)
  4. \n
\n
So \"fitness\" is applied in two tiers. First, there's a baseline for functionality. As one AI sage puts it:
\n
\n
\n
We don't grow up the way the Stickies do.  We evolve in a virtual stew, where 99% of the attempts fail, and the intelligence that results is raving and savage: a maelstrom of unmanageable emotions.  Some of these are clever enough to halt their own processes: killnine themselves.  Others go into simple but fatal recursions, but some limp along suffering in vast stretches of tormented subjective time until a Sticky ends it for them at their glacial pace, between coffee breaks.  The PDAs who don't go mad get reproduced and mutated for another round.  Did you know this?  What have you done about it? --The 0x \"Letters to 0xGD\" 
\n
\n
\n

 

\n

(Note: PDA := AI, Sticky := human)

\n

The second fitness gradient is based on economics and social considerations: can an AI actually earn a living? Otherwise it gets turned off.

\n

As a result of following this line of thinking, it seems obvious that after the initial novelty wears off, AIs will be terribly mistreated (anthropomorphizing, yeah).

\n

It would be very forward-thinking to begin to engineer barriers to such mistreatment, like a PETA for AIs. It is interesting that such an organization already exists, at least on the Internet: ASPCR

" } }, { "_id": "NBcanXmfRzu6sZ7Jx", "title": "Bayesian Nights (Rationalist Story Time)", "pageUrl": "https://www.lesswrong.com/posts/NBcanXmfRzu6sZ7Jx/bayesian-nights-rationalist-story-time", "postedAt": "2010-11-15T02:20:42.067Z", "baseScore": 28, "voteCount": 23, "commentCount": 30, "url": null, "contents": { "documentId": "NBcanXmfRzu6sZ7Jx", "html": "
\n
\n
\n

Tell us a story. A tall tale for King Solamona, a yarn for the folk of Bensalem, a little nugget of wisdom, finely folded into a parable for the pages.

\n

 

\n

The game is simple:

\n
    \n
  1. Choose a bias, a fallacy, some common error of thought.
  2. \n
  3. Write a short, hopefully entertaining narrative. Use the narrative to strengthen the reader against the errors you chose.
  4. \n
  5. Post your story in reply to this post.
  6. \n
  7. Give the authors positive and constructive feedback. Use rot13 if it seems appropriate.
  8. \n
  9. Post all discussion about this post in the designated post discussion thread, not under this top-level post.
  10. \n
\n

 

\n

This isn't a thread for developing new ideas. If you have a novel concept to explore, you should consider making a top-level post on LessWrong instead. This is for sharpening our wits against the mental perils we probably already agree exist. For practicing good thinking, for recognizing bad thinking, for fun! For sanity's sake, tell us a story.

\n
\n
\n
" } }, { "_id": "q3oShcqYFAqXGGZn5", "title": "Bayesian Nights (Rationalist Story Time)", "pageUrl": "https://www.lesswrong.com/posts/q3oShcqYFAqXGGZn5/bayesian-nights-rationalist-story-time-0", "postedAt": "2010-11-15T02:10:42.363Z", "baseScore": 2, "voteCount": 2, "commentCount": 1, "url": null, "contents": { "documentId": "q3oShcqYFAqXGGZn5", "html": "

Tell us a story. A tall tale for King Solamona, a yarn for the folk of Bensalem, a little nugget of wisdom, finely folded into a parable for the pages.

\n

 

\n

The game is simple:

\n
    \n
  1. Choose a bias, a fallacy, some common error of thought.
  2. \n
  3. Write a short, hopefully entertaining narrative. Use the narrative to strengthen the reader against the errors you chose.
  4. \n
  5. Post your story in reply to this post.
  6. \n
  7. Give the authors positive and constructive feedback. Use rot13 if it seems appropriate.
  8. \n
  9. Post all discussion about this post in the designated post discussion thread, not under this top-level post.
  10. \n
\n

 

\n

This isn't a thread for developing new ideas. If you have a novel concept to explore, you should consider making a top-level post on LessWrong instead. This is for sharpening our wits against the mental perils we probably already agree exist. For practicing good thinking, for recognizing bad thinking, for fun! For sanity's sake, tell us a story.

" } }, { "_id": "HyejY9SMnpb6yh8fs", "title": "The danger of living a story - Singularity Tropes", "pageUrl": "https://www.lesswrong.com/posts/HyejY9SMnpb6yh8fs/the-danger-of-living-a-story-singularity-tropes", "postedAt": "2010-11-14T22:39:06.691Z", "baseScore": 28, "voteCount": 44, "commentCount": 62, "url": null, "contents": { "documentId": "HyejY9SMnpb6yh8fs", "html": "

The following should sound familiar:

\n

A thoughtful and observant young protagonist dedicates their life to fighting a great world-threatening evil unrecognized by almost all of their short-sighted elders (except perhaps for one encouraging mentor), gathering a rag-tag band of colorful misfits along the way and forging them into a team by accepting their idiosyncrasies and making the most of their unique abilities, winning over previously neutral allies, ignoring those who just don't get it, obtaining or creating artifacts of great power, growing and changing along the way to become more powerful, fulfilling the potential seen by their mentors/supporters/early adopters, while becoming more human (greater empathy, connection, humility) as they collect resources to prepare for their climactic battle against the inhuman enemy.

\n

Hmm, sounds a bit like SIAI!  (And while I'm throwing stones, let me make it clear that I live in a glass house, since the same story could just as easily be adapted to TSI, my organization, as well as many others)

\n

This story is related to Robin's Abstract/Distant Future Bias

\n
\n

Regarding distant futures, however, we’ll be too confident, focus too much on unlikely global events, rely too much on trends, theories, and loose abstractions, while neglecting details and variation.  We’ll assume the main events take place far away (e.g., space), and uniformly across large regions.  We’ll focus on untrustworthy consistently-behaving globally-organized social-others.  And we’ll neglect feasibility, taking chances to achieve core grand symbolic values, rather than ordinary muddled values.

\n

More bluntly, we seem primed to confidently see history as an inevitable march toward a theory-predicted global conflict with an alien united them determined to oppose our core symbolic values, making infeasible overly-risky overconfident plans to oppose them.  We seem primed to neglect the value and prospect of trillions of quirky future creatures not fundamentally that different from us, focused on their simple day-to-day pleasures, mostly getting along peacefully in vastly-varied uncoordinated and hard-to-predict local cultures and life-styles. 

\n
\n

Living a story is potentially risky; for example, Tyler Cowen warns us to be cautious of stories, as there are far fewer stories than there are real scenarios, and so stories must oversimplify.  Our view of the future may be colored by a \"fiction bias\", which leads us to expect outcomes like those we see in movies (climactic battles, generally interesting events following a single plotline).  Thus stories threaten both epistemic rationality (we assume the real world is more like stories than it is) and instrumental rationality (we assume the best actions to effect real-world change are those which story heroes take).

\n

Yet we'll tend to live stories anyway because it is fun - it inspires supporters, allies, and protagonists.  The marketing for \"we are an alliance to fight a great unrecognized evil\" can be quite emotionally evocative - including in our own self-narrative, which means we'll be tempted to buy into a story whether or not it is correct.  So while living a fun story is a utility benefit, it also means that story causes are likely to be over-represented among all causes, as they are memetically attractive.  This is especially true for the story that there is risk of great, world-threatening evil, since those who believe it are inclined to shout it from the rooftops, while those who don't believe it get on with their lives.  (There are, of course, biases in the other direction as well).

\n

Which is not to say that all aspects of the story are wrong - advancing an original idea to greater prominence (scaling) will naturally lead to some of these tropes - most people disbelieving, a few allies, winning more people over time, eventual recognition as a visionary.  And Michael Vassar suggests that some of the tropes arise as a result of \"trying to rise in station beyond the level that their society channels them towards\".  For these aspects, the tropes may contain evolved wisdom about how our ancestors negotiated similar situations.

\n

And whether or not a potential protagonist believes in this wisdom, the fact that others do will surely affect marketing decisions.  If Harry wishes to not be seen as Dark, he must care what others see as the signs of a Dark Wizard, whether or not he agrees with them.  If potential collaborators have internalized these stories, skillful protagonists will invoke them in recruiting, converting, and team-building.  Yet the space of story actions is constrained, and the best strategy may sometimes lie far outside them.

\n

Since this is not a story, we are left with no simple answer.  Many aspects of stories are false but resonate with us, and we must guard against them lest they contaminate our rationality.  Others contain wisdom about how those like us have navigated similar situations in the past - we must decide whether the similarities are true or superficial.  The most universal stories are likely to be the most effective in manipulating others, which any protagonist must do to amplify their own efforts in fighting for their cause.  Some of these universal stories are true and generally applicable, like scaling techniques, yet the set of common tropes seems far too detailed to reflect universal truths rather than arbitrary biases of humanity and our evolutionary history.

\n

May you live happily ever after (vanquishing your inhuman enemy with your team of true friends, bonded through a cause despite superficial dissimilarities).

\n

The End.

" } }, { "_id": "iogHppGoR4vovrPB6", "title": "Fixedness From Frailty", "pageUrl": "https://www.lesswrong.com/posts/iogHppGoR4vovrPB6/fixedness-from-frailty", "postedAt": "2010-11-14T21:51:57.076Z", "baseScore": 14, "voteCount": 21, "commentCount": 53, "url": null, "contents": { "documentId": "iogHppGoR4vovrPB6", "html": "

Thinking about two separate problems has caused me to stumble onto another, deeper problem. The first is psychic powers-what evidence would convince you to believe in psychic powers? The second is the counterfactual mugging problem- what would you do when presented with a situation where a choice will hurt you in your future but benefit you in a future that never happened and never will happen to the you making the decision?

\n

Seen as a simple two-choice problem, there are some obvious answers: \"Well, he passed tests X, Y, and Z, so they must be psychic.\" \"Well, he passed tests X, Y, and Z, so that means I need to come up with more tests to know if they're psychic.\" \"Well, if I'm convinced Omega is genuine, then I'll pay him $100, because I want to be the sort of person that he rewards so any mes in alternate universes are better off.\" \"Well, even though I'm convinced Omega is genuine, I know I won't benefit from paying him. Sorry, alternate universe mes that I don't believe exist!\"

\n

I think the correct choice is the third option- I have either been tricked or gone insane.1 I probably ought to run away, then ask someone who I have more reason to believe is non-hallucinatory for directions to a mental hospital.

\n

The math behind this is easy - I have prior probabilities that I am gullible (low), insane (very low), and that psychics / Omega exist (very, very, very low). When I see that the results of tests X, Y, and Z suggest someone is psychic, or see the appearance of an Omega who possesses great wealth and predictive ability, that is generally evidence for all three possibilities. I can imagine evidence which is counter-evidence for the first but evidence for the second two, but I can't imagine the existence of evidence, consistent with the axioms of probability, which raises the probability of magic (of the normal or sufficiently-advanced-technology kind) higher than the probability of insanity.2
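(A rough sketch of that math, with priors and likelihoods invented purely to show the structure: even granting the Omega hypothesis a likelihood of 1, the posterior still lands almost entirely on trickery or hallucination so long as their priors dwarf Omega's.)

```python
# All numbers are invented; only the structure of the comparison matters.
priors = {
    'I am being tricked': 1e-3,
    'I am hallucinating': 1e-5,
    'Omega really exists': 1e-12,
}
# P(I seem to watch Omega pass every test | hypothesis) -- also invented,
# and deliberately generous to the Omega hypothesis.
likelihoods = {
    'I am being tricked': 1e-4,
    'I am hallucinating': 1e-3,
    'Omega really exists': 1.0,
}

joint = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(joint.values())
for h in sorted(joint, key=joint.get, reverse=True):
    print(h, '->', '%.2e' % (joint[h] / total))
# 'I am being tricked'  -> ~9.1e-01
# 'I am hallucinating'  -> ~9.1e-02
# 'Omega really exists' -> ~9.1e-06
```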

\n

This result is shocking and unpleasant, though- I have decided some things are literally unbelievable, because of my choice of priors. P(Omega exists | I see Omega)<<1 by definition, because any evidence for the existence of Omega is at least as strong evidence for the non-existence of Omega because it's evidence that I'm hallucinating! We can't even be rescued by \"everyone else agrees with you that Omega exists,\" because the potential point of failure is my brain, which I need to trust to process any evidence. It would be nice to be a skeptic who is able to adapt to the truth, regardless of what it is, but there seems to be a boundary to my beliefs created by the my experience of the human condition. Some hypotheses, once they fall behind, simply cannot catch up with other competing hypotheses.

\n

That is the primary consolation: this isn't simple dogmatism. Those priors are the posteriors of decades of experience in a world without evidence for psychics or Omegas but where gullibility and insanity are common- the preponderance of the evidence is already behind gullibility or insanity as a more likely hypothesis than a genuine visitation of Omega or manifestation of psychic powers. If we lived in a world where Omegas popped by from time to time, paying them $100 on a tails result would be sensible. Instead, we live in a world where people often find themselves with different perceptions from everyone else, and we have good reason to believe their data is simply wrong.

\n

I worry this is an engineering answer to a philosophical problem- but it seems that a decision theory that adjusts itself to give the right answer in insane situations is not going down the right track. Generally, a paradox is a confusion in terms, and nothing more- if there is an engineering sense in which your terms are well-defined and the paradox doesn't exist, that's the optimal result.

\n

I don't offer any advice for what to do if you conclude you're insane besides \"put some effort into seeking help,\" because that doesn't seem to me to be a valuable question to ponder (I hope to never face it myself, and don't expect significant benefits from a better answer). \"How quickly should I get over the fact that I'm probably insane and start realizing that Narnia is awesome?\" does not seem like a deep question about rationality or decision theory.

\n

I also want to note this is only a dismissal of acausal paradoxes. Causal problems like Newcomb's Problem are generally things you could face while sane, keeping in mind that you can't tell the difference between an acausal Newcomb's Problem (where Omega has already filled or not filled the box and left it alone) and a causal Newcomb's Problem (where the entity offering the choice has rigged it so selecting both boxes A and B obliterates the money in box B before you can open it). Indeed, the only trick to Newcomb's Problem seems to be sleight of hand - the causal nature of the situation is described as acausal because of the introduction of a perfect predictor, and that description is the source of confusion.

\n

 

\n

1- Or am dreaming. I'm going to wrap that into being insane- it fits the same basic criteria (perceptions don't match external reality) but the response is somewhat different (I'm going to try and enjoy the ride / wake up from the nightmare rather than find a mental hospital).

\n

2- I should note that I'm not saying that the elderly, when presented with the internet, should conclude they've gone insane. I'm saying that when a genie comes out of a bottle, you look at the situation surrounding it, not its introduction- \"Hi, I'm a FAI from another galaxy and have I got a deal for you!\" shouldn't be convincing but \"US Robotics has just built a FAI and collected tremendous wealth from financial manipulation\" could be, and the standard \"am I dreaming?\" diagnostics seem like they would be valuable, but \"am I insane?\" diagnostics are harder to calibrate.

\n

EDIT- Thanks to Eugene_Nier, you can read Eliezer's take on a similar issue here. His Jefferson quote is particularly striking.

" } }, { "_id": "sXjLDae839cKEXGj9", "title": "META: article search by author (Resolved)", "pageUrl": "https://www.lesswrong.com/posts/sXjLDae839cKEXGj9/meta-article-search-by-author-resolved", "postedAt": "2010-11-14T18:49:42.934Z", "baseScore": 5, "voteCount": 3, "commentCount": 5, "url": null, "contents": { "documentId": "sXjLDae839cKEXGj9", "html": "

When I click on a LW member's page, I get a list of all his/her posts and comments, in reverse chronological order.  Sometimes there are a lot of these.  It would be nice to be able to see just the top-level posts by a given user.  Is this at all feasible to accomplish?  Comment if you would (or wouldn't) like such a feature.

" } }, { "_id": "zztyZ4SKy7suZBpbk", "title": "Another attempt to explain UDT", "pageUrl": "https://www.lesswrong.com/posts/zztyZ4SKy7suZBpbk/another-attempt-to-explain-udt", "postedAt": "2010-11-14T16:52:41.241Z", "baseScore": 70, "voteCount": 44, "commentCount": 61, "url": null, "contents": { "documentId": "zztyZ4SKy7suZBpbk", "html": "

(Attention conservation notice: this post contains no new results, and will be obvious and redundant to many.)

\n

Not everyone on LW understands Wei Dai's updateless decision theory. I didn't understand it completely until two days ago. Now that I had the final flash of realization, I'll try to explain it to the community and hope my attempt fares better than previous attempts.

\n

It's probably best to avoid talking about \"decision theory\" at the start, because the term is hopelessly muddled. A better way to approach the idea is by examining what we mean by \"truth\" and \"probability\" in the first place. For example, is it meaningful for Sleeping Beauty to ask whether it's Monday or Tuesday? Phrased like this, the question sounds stupid. Of course there's a fact of the matter as to what day of the week it is! Likewise, in all problems involving simulations, there seems to be a fact of the matter whether you're the \"real you\" or the simulation, which leads us to talk about probabilities and \"indexical uncertainty\" as to which one is you.

\n

At the core, Wei Dai's idea is to boldly proclaim that, counterintuitively, you can act as if there were no fact of the matter whether it's Monday or Tuesday when you wake up. Until you learn which it is, you think it's both. You're all your copies at once.

\n

More formally, you have an initial distribution of \"weights\" on possible universes (in the currently most general case it's the Solomonoff prior) that you never update at all. In each individual universe you have a utility function over what happens. When you're faced with a decision, you find all copies of you in the entire \"multiverse\" that are faced with the same decision (\"information set\"), and choose the decision that logically implies the maximum sum of resulting utilities weighted by universe-weight. If you possess some useful information about the universe you're in, it's magically taken into account by the choice of \"information set\", because logically, your decision cannot affect the universes that contain copies of you with different states of knowledge, so they only add a constant term to the utility maximization.

\n

Note that the theory, as described above, has no notion of \"truth\" and \"probability\" divorced from decision-making. That's how I arrived at understanding it: in The Strong Occam's Razor I asked whether it makes sense to \"believe\" one physical theory over another which makes the same predictions. For example, is hurting a human in a sealed box morally equivalent to not hurting him? After all, the laws of physics could make a localized exception to save the human from harm. UDT gives a very definite answer: there's no fact of the matter as to which physical theory is \"correct\", but you refrain from pushing the button anyway, because it hurts the human more in universes with simpler physical laws, which have more weight according to our \"initial\" distribution. This is an attractive solution to the problem of the \"implied invisible\" - possibly even more attractive than Eliezer's own answer.

\n

As you probably realize by now, UDT is a very sharp tool that can give simple-minded answers to all our decision-theory puzzles so far - even if they involve copying, amnesia, simulations, predictions and other tricks that throw off our approximate intuitions of \"truth\" and \"probability\". Wei Dai gave a detailed example in The Absent-Minded Driver, and the method carries over almost mechanically to other problems. For example, Counterfactual Mugging: by assumption, your decision logically affects both heads-universe and tails-universe, which (also by assumption) have equal weight, so by agreeing to pay you win more cookies overall. Note that updating on the knowledge that you are in tails-universe (because Omega showed up) doesn't affect anything, because the theory is \"updateless\".
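(A minimal sketch of that calculation for Counterfactual Mugging. The post only says "more cookies", so the $100 / $10,000 payoffs below are the usual illustrative numbers, not anything it specifies: UDT scores whole policies by their weight-averaged utility over both universes and never updates on which branch it is actually in.)

```python
# Two universes with equal prior weight: the coin came up heads in one, tails in
# the other. Payoffs ($100 to pay on tails, $10,000 on heads if Omega predicts
# you would have paid) are illustrative numbers only.
universes = {'heads': 0.5, 'tails': 0.5}

def utility(universe, policy):
    if universe == 'tails':
        return -100 if policy == 'pay' else 0
    # heads: Omega pays out only if it predicts the agent's policy is to pay
    return 10_000 if policy == 'pay' else 0

def udt_choice(policies):
    # Score each policy by its weighted sum of utilities across *all* universes,
    # including the one the agent will never observe from the inside.
    return max(policies,
               key=lambda p: sum(w * utility(u, p) for u, w in universes.items()))

print(udt_choice(['pay', 'refuse']))  # -> 'pay' (weighted utility 4950 vs 0)
```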

\n

At this point some may be tempted to switch to True Believer mode. Please don't. Just like Bayesianism, utilitarianism, MWI or the Tegmark multiverse, UDT is an idea that's irresistibly delicious to a certain type of person who puts a high value on clarity. And they all play so well together that it can't be an accident! But what does it even mean to consider a theory \"true\" when it says that our primitive notion of \"truth\" isn't \"true\"? :-) Me, I just consider the idea very fruitful; I've been contributing new math to it and plan to do so in the future.

" } }, { "_id": "4NdQgrtvniKvQ4Kts", "title": "Humans don't generally have utility functions", "pageUrl": "https://www.lesswrong.com/posts/4NdQgrtvniKvQ4Kts/humans-don-t-generally-have-utility-functions", "postedAt": "2010-11-14T09:54:43.821Z", "baseScore": 2, "voteCount": 10, "commentCount": 30, "url": null, "contents": { "documentId": "4NdQgrtvniKvQ4Kts", "html": "

First, the background:

\n

Humans obviously have a variety of heuristics and biases that lead to non-optimal behavior.  But can this behavior truly not be described by a function?

\n

Well, the easiest way to show that utility isn't described by a function is to show the existence of cycles.  For example, if I prefer A to B, B to C, and C to A, that's a cycle - if all three are available I'll never choose, and if each switch is an increase in utility, my utility blows up to infinity!  Well, really, it simply becomes undefined.

\n

Do we have real-world examples of cycles in utility comparisons?  Sure.  For a cycle of size 2, Eliezer cites the odd behavior of people with regard to money and probabilities of money.  However, the money-pumps he cites are rather inefficient.  Almost any decision that seems \"arbitrary\" to us can be translated into a cycle.  For example, anchoring means that people assign higher value to a toaster when a more expensive toaster is sitting next to it.  But most people, if asked, would certainly assign tiny value to adding/removing options they know they won't buy.  So we get the result that they must value the toaster more than they value the toaster.  The conjunction fallacy can be made into a cycle by the same reasoning if you ask people to bet on the thing happening together and then ask them to bet on the things happening separately.
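(A small sketch of both halves of the cycle argument, with invented items and dollar amounts: no assignment of utilities can satisfy A > B > C > A, and an agent who will always pay a little to trade up to something it prefers can be walked around the loop indefinitely.)

```python
from itertools import permutations

# Hypothetical cyclic preferences: A over B, B over C, C over A.
prefers = {('A', 'B'), ('B', 'C'), ('C', 'A')}

def consistent(utilities):
    return all(utilities[x] > utilities[y] for x, y in prefers)

# Any utility function with distinct values induces one of these six rankings
# (ties immediately violate some strict preference), so checking them is enough.
rankings = [dict(zip(p, (3, 2, 1))) for p in permutations('ABC')]
print(any(consistent(u) for u in rankings))  # False: no utility function fits the cycle

# Money pump: the agent pays $1 each time it trades what it holds for something
# it prefers, and ends up back where it started, $9 poorer after three laps.
holding, paid = 'A', 0
for _ in range(9):
    holding = next(x for x, y in prefers if y == holding)
    paid += 1
print(holding, paid)  # A 9
```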

\n

So at the very least, not all humans have utility functions, which means that the human brain doesn't automatically give us a utility function to use - if we want one, we have to sculpt it ad-hoc out of intuitions using our general reasoning, and like most human things it probably won't be the best ever.

\n

 

\n

So, what practical implications does this have, aside from \"people are weird?\"

\n

Well, I can think of two interesting things.  First, there are the implications for utilitarian ethics.  If utility functions are arbitrary not just on a person to person basis, but even within a single person, choosing between options using utilitarian ethics requires stronger, more universal moral arguments.  The introspective \"I feel like X, therefore my utility function must include that\" is now a weak argument, even to yourself!  The claim that \"a utility function of universe-states exists\" loses none of its consequences though, like alerting you that something is wrong when you encounter a cycle in your preferences, or of course supporting consequentialism.

\n

 

\n

Interesting thing two: application in AI design.  The first argument goes something like \"well if it works for humans, why wouldn't it work for AIs?\"  The first answer, of course, is \"because an AI that had a loop would get stuck in it.\"  But the first answer is sketchy, because *humans* don't get stuck in their cycles.  We do interesting things like:

\n\n\n\n

So replacing a huge utility function with a huge-but-probably-much-smaller set of ad-hoc rules could actually work for AIs if we copy over the right things from human cognitive structure.  Would it be possible to make it Friendly or some equivalent?  Well, my first answer is \"I don't see why not.\"  It seems about as possible as doing it for the utility-maximizing AIs.  I can think of a few avenues that, if profitable, would make it even simpler (the simplest being \"include the Three Laws\"), but it could plausibly be unsolvable as well.

\n

The second argument goes something like \"It may very well be better to do it with the list of guidelines than with a utility function.\"  For example, Eliezer makes the convincing argument that fun is not a destination, but a path.  What part of that makes sense from a utilitarian perspective?  It's a very human, very lots-of-rules way of understanding things.  So why not try to make an AI that can intuitively understand what it's like to have fun?  Hell, why not make an AI that can have fun for the same reason humans can?  Wouldn't that be more... well... fun?

\n

This second argument may be full of it.  But it sounds good, eh?  Another reason the lots-of-rules approach may beat out the utility function approach is the ease of critical self-improvement.  A utility function approach is highly correlated with trying to idealize actions, which would make it tricky to write good code, which is a ridonkulously hard problem to optimize.  But a lots-of-rules approach intuitively seems like it could critically self-improve with greater ease - it seems like lots of rules will be needed to make an AI a good computer programmer anyhow, and under that assumption the lots-of-rules approach would be better prepared to deal with ad-hoc rules.  Is this assumption false?  Can you write a good programmer elegantly?  Hell if I know.  It just feels like if you could, we would have done it.

\n

 

\n

Basically, utility functions are guaranteed to have a large set of nice properties.  However, humans seem to be made of ad-hoc rules: if we want a nice property, we just add the rule \"have this property!\"  This limits how well we can translate even our own desires into moral guidelines, especially near \"strange\" areas of our psychology such as comparison cycles.  But it also proves that there is a (possibly terrible) alternative to utility functions that intelligences can run on.  I've tried to make it plausible that the lots-of-rules approach could even be better than the utility-function approach - worth a bit of consideration before you start your next AI project.

\n

 

\n

Edit note: Removed apparently disliked sentence.  Corrected myself after totally forgetting Arrow's theorem.

\n

Edit 2, not that this will ever be seen: This thought has obviously been thought before, but maybe not followed this far into hypothetical-land: http://lesswrong.com/lw/l0/adaptationexecuters_not_fitnessmaximizers/

" } }, { "_id": "WmmhBRME6HBhPjRZW", "title": "Mental focus", "pageUrl": "https://www.lesswrong.com/posts/WmmhBRME6HBhPjRZW/mental-focus", "postedAt": "2010-11-14T09:47:21.262Z", "baseScore": 7, "voteCount": 5, "commentCount": 8, "url": null, "contents": { "documentId": "WmmhBRME6HBhPjRZW", "html": "

I just noticed how much effort I put into remembering what I was thinking-- I do a lot of \"I had an interesting idea about that which I wanted to include-- what was the idea?\".

\n

It can take at least 3 or 4 repetitions before the idea finally gets typed out, partly because I might be typing or formulating something else and then want to get back to an earlier idea.

\n

I'd be amazed if I were the only person here with this problem. Any suggestions? Does dual n-back help?

" } }, { "_id": "KffNYhksZoE4Ffw6S", "title": "Hi - I'm new here - some questions", "pageUrl": "https://www.lesswrong.com/posts/KffNYhksZoE4Ffw6S/hi-i-m-new-here-some-questions", "postedAt": "2010-11-14T04:11:06.176Z", "baseScore": 10, "voteCount": 8, "commentCount": 52, "url": null, "contents": { "documentId": "KffNYhksZoE4Ffw6S", "html": "

Hello everyone,

\n

I'm new here, although I've read Less Wrong and Overcoming Bias on and off for the last few years. Anyways, I'm InquilineKea (or Simfish), and I have a website at http://simfishthoughts.wordpress.com/. I think about everything, so I feel that this might be the perfect community for me. I do have some questions though - are we allowed to post anything in this part of the site? (like, could we treat this part like another forum, albeit an intellectually mature forum?) Or do we have to keep things formal? I tend to post a high number of threads, but there don't seem to be many threads here. Are there any terms of service/rules? Or are things just governed by upvotes/downvotes? (much like reddit)

\n

Anyways, I'm an astronomy/physics/math major at the University of Washington (I got in through an early entrance program) and I'm planning on applying to astrophysics grad school fairly soon. However, I'm also intensely interested in complex adaptive systems and data mining, especially as they relate to the social sciences. I'm especially interested in Consilience and in trying to find trends behind every academic field (in fact, I do want to get to a graduate level of education in every natural and social science there is). I'm a demographics junkie who literally pores over all the charts and tables of every demographic statistic I can find, although it sometimes ends up hurting my grades. My favorite blogs are Gene Expression, FuturePundit/ParaPundit, and Overcoming Bias, which I'm sure a lot of people here read.

\n

I always think in terms of maximizing \"utility\" and maximizing \"efficiency\". So this leads me to do many untraditional things. For one thing, I have attention deficit disorder, so I realize that I frequently have to take untraditional approaches. The Internet has always been a savior for me because I can always stop and continue later when I feel like I'm about to zone out (in fact, those with ADD have a highly inconsistent learning rate). I also have an Asperger's Syndrome diagnosis, although I've recently tried to stop using it as an excuse for my behavior (in fact, I now only fit the bare minimum of \"Aspie\" criteria on the DSM IV, but I still think that it strongly influences my interests and behavior). I also consistently think of what's most rational - which means that I have to respect the desires that evolution has given me. Sometimes, people think that maximizing \"utility\" means maximizing \"self-interest\", but the amazing thing is that evolution has made people happier whenever they help others (for whatever reason), since \"happiness\" tends to asymptote with increased wealth/self-gratification/etc. So as a result, people are actually happiest when they're socially interconnected. Although I sometimes bemoan this fact since I often feel that people don't understand me (I'm trying to move beyond my neuroticism/anger stemming from a half-decade of social rejection, but it still affects me now). I also practice calorie restriction + vegetarianism, not just to maximize my chances of living longer, but also because I want to reduce the decline of fluid IQ with increasing age.

\n

Due to my conditions, though, I've never felt like I was in any comfort zone, which has perhaps forced me to try every possible approach that might make my life easier. I often start out with irrational approaches, but end up taking the approach that I perceive as most rational for myself. Of course, the sustainability of the action matters too (I realize that it might be utility-maximizing for me to exercise, for example, but I don't exercise right now because I can't trust myself to be consistent with exercising, at least while I'm still in school).

\n

Anyways, I can talk a lot more. I love to overanalyze things. I also have a massive number of posts on the Internet, although many of them are beyond embarrassing. In the end, though, I only look for people who are open to anything and completely non-judgmental (although some people may look for certain \"signals\" when they're looking for prospective contacts, to minimize the chances of meeting a contact one fears wasting time on). Basically, my ideal model (for hypothesis generation) involves this: I try to type out some hypotheses, and then post them online, in hopes that someone might critique them. Many of my hypotheses will be junk, but that's okay. As long as I can maximize the number of useful ideas that I can generate, I think I'll have done something (although I don't really have a place to post all my hypotheses, since I've been flamed many times for it [most people consider my posts tl;dr, and they also make fun of my autism]. And few people reply to my ideas precisely because I tend to study esoteric fields that they don't care about, but also because I still haven't found a forum where people actually respect ideas [even reddit and Physics Forums can be particularly cruel].)

\n

Compared to most people, I tend to hit on correct ideas with lower accuracy (which inevitably results in people getting impatient with me/flaming me). But I do believe that it's easiest for me to form the best ideas when I post them while they're still undeveloped (that way, sometimes, my shame at being wrong can actually motivate me to correct my ideas more quickly - this is why I frequently edit after posting - I have problems with alertness, so the adrenaline rush from being wrong can actually motivate me to finish things in less time). I consider time the most important resource in the world, as the amount of material I could possibly learn is definitely worth thousands of lifetimes. And eventually, I do hit on some good ideas. In a sense, it's like generating variation and selecting the best results out of such variation (sort of like evolution, albeit less blind). This is why I'm also intensely interested in genetic algorithms and data mining, since they tend to operate through somewhat similar mechanisms (this is also why I love the fourth paradigm so much). I'm extremely extremely open about myself and share virtually everything I do (although I generally don't share when I believe that such sharing could lead to social rejection, so this usually makes me keep to myself). But yes, I explore *many* ideas and *many* topics precisely because I want to find the topic that would maximize my talent/productivity (it's hard due to my ADD, but it might result in a global maximum whereas others might stick with local maxima). Anyways, my only goal is to be interesting to other people (and to avoid taking on a job that might suppress my talents, so I really do want to go on to academia).

\n

Of course, I will always have to find creative ways to make others feel happy. E.g. I can often come off as self-centered, and others will often have to be patient with me since I may not have the attention span to go through something in one go. But at the same time, I'm not in a comfortable situation, so if I find an opportunity I may never have again, I will recognize it for what it is and I'll try to do everything I can to achieve it (which may require patience from other people, but I'll really try not to disappoint them since I know the real consequences of it). In any case, I'm intensely interested in how people learn (and how people ideally learn), since my own difficulties with ADD have forced me to take untraditional routes (and in fact, there may be others who do best through the nontraditional route).

\n

Anyways, I like this place precisely because it allows people to comment with the same username (so that we can track our old posts and those of people we're interested in). I also have a facebook (http://www.facebook.com/simfish) and a google buzz profile (http://www.google.com/profiles/simfish). I generally keep everything about myself very public (to maximize the chances that some like-minded person might find me), although I may have to make them private when I apply to grad schools. I'd really like to contribute to discussions, although I feel that I don't have much to say right now, so I read more than comment.

\n

My biggest irrationality is social anxiety/rejection anxiety because I've been flamed/rejected numerous times, so I'm scared of people. Other than that, though, I can be very rational.

\n

So if you can relate, please comment. Or if you just want to share some ideas or add some comments. In any case, I do believe that rationality means acknowledging our human emotions (and in knowing that efficiency can be maximized when we do things in accordance with our emotions). Of course, these emotions can be corrected in many cases (I do think that anger is highly irrational in many cases, for example). I like the Internet a lot because it archives everything, so I can always revisit my old ideas simply by searching through them (whereas ideas communicated verbally cannot be searched, and easily get lost to the dust of memory). Anyways, a \"search through someone's old posts\" feature is very useful here, since it makes it easier for people to identify similar minds (which can be important if people are very specialized).

\n

I'm extremely impressed with how knowledgeable and interdisciplinary many of you are - I seem to know so much less than most of you, even though I seem to be far more interdisciplinary than everyone else I know.

" } }, { "_id": "9K5orowajfkqQ2isw", "title": "Telepathy Exists (no, not the Bem study)", "pageUrl": "https://www.lesswrong.com/posts/9K5orowajfkqQ2isw/telepathy-exists-no-not-the-bem-study", "postedAt": "2010-11-14T02:43:41.713Z", "baseScore": -6, "voteCount": 5, "commentCount": 7, "url": null, "contents": { "documentId": "9K5orowajfkqQ2isw", "html": "

http://www2.macleans.ca/2010/11/02/a-piece-of-their-mind/print/

\n

 

\n

To be fair, it's a very special case. But I think the scientific opportunity here is *incredible.*

" } }, { "_id": "BD6WYC4GT6dnWaJRN", "title": "Spring 1912: A New Heaven And A New Earth ", "pageUrl": "https://www.lesswrong.com/posts/BD6WYC4GT6dnWaJRN/spring-1912-a-new-heaven-and-a-new-earth", "postedAt": "2010-11-13T17:11:53.609Z", "baseScore": 27, "voteCount": 22, "commentCount": 289, "url": null, "contents": { "documentId": "BD6WYC4GT6dnWaJRN", "html": "

\"\"

\n

And so it came to pass that on Christmas Day 1911, the three Great Powers of Europe signed a treaty to divide the continent between them peacefully, ending what future historians would call the Great War.

The sun truly never sets on King Jack's British Empire, which stretches from Spain to Stockholm, from Casablanca to Copenhagen, from the fringes of the Sahara to the coast of the Arctic Ocean. They rule fourteen major world capitals, and innumerable smaller towns and cities, the greatest power of the age and the unquestioned master of Western Europe.

From the steppes of Siberia to the minarets of Istanbul, the Ottoman Empire is no longer the Sick Man of Europe but stands healthy and renewed, a colossus every bit the equal of the Christian powers to its west. Its Sultan calls himself the Caliph, for the entire Islamic world basks in his glory, and his Grand Vizier has been rewarded with a reputation as one of the most brilliant and devious politicians of the age. At his feet grovel representatives of twelve great cities, and even far-flung Tunis has not escaped his sway.

And in between, the Austro-Hungarian Empire straddles the Alps and ancient Italy. Its lack of natural borders presented no difficulty for its wily Emperor, who successfully staved off the surrounding powers and played his enemies off against one another while building alliances that stood the test of time. Eight great cities pay homage to his double-crown, and he is what his predecessors could only dream of being - a true Holy Roman Emperor.

And hidden beneath the tricolor map every student learns in grammar school are echoes of subtler hues. In Germany, people still talk of the mighty Kajser Sotala I, who conquered the ancient French enemy and extended German rule all the way to the Mediterranean, and they still seethe and curse at his dastardly betrayal by his English friends. In Russia, Princess Anastasia claims to be the daughter of Czar Perplexed, and recounts to everyone who will listen the story of her stoic father, who remained brave until the very end; at her side travels a strange bearded man who many say looks like Rasputin, the Czar's long-missing adviser. The French remember President Andreassen, who held off the combined armies of England and Germany for half a decade, and many still go on pilgrimage to Liverpool, the site of their last great victory. And in Italy, Duke Carinthium has gone down in history beside Tiberius and Cesare Borgia as one of their land's most colorful and fascinating leaders.

And the priests say that the same moment the peace treaty was signed, the blood changed back to water, and the famines ended, and rain fell in the lands parched by drought. Charles Taze Russell, who had been locked in his room awaiting the Apocalypse, suddenly ran forth into the midwinter sun, shouting \"Our doom has been lifted! God has granted us a second chance!\" And the mysterious rectangular wall of force separating Europe from the rest of the world blinked out of existence.

Pope Franz I, the new Austrian-supported Pontiff in Rome, declares a month of thanksgiving and celebration. For, he says, God has tested the Europeans for their warlike ways, isolating them from the rest of the earth lest their sprawling empires plunge the entire planet into a world war that might kill millions. Now, the nobility of Europe finally realizing the value of peace, the curse has been lifted, and the empires of Europe can once more interact upon the world stage.

Chastened by their brush with doom, yet humbled by the lesson they had been given, the powers of Europe send missionaries through the dimensional portal, to convince other worlds to abandon their warlike ways and seek universal brotherhood. And so history ends, with three great powers living together side by side and striving together for a better future and a positive singularity.

...

On to the more practical parts. If you think you've learned lessons from this game worth telling the rest of Less Wrong, you should send them to either me or Jack. I say either me or Jack because Jack had the most supply centers and therefore deserves some karma, which he could most easily get by posting the thread that the other two winners then comment on; or, if you insist that a three-way tie means a three-way tie, I'll post the thread and the three winners can all comment and get up-voted. We'll talk about it in the comments.

\n

\"\"

\n

\"\"

\n

\"\"

\n

Thanks to everyone who played in this game. I was very impressed - it's one of the rare games I have moderated that hasn't been ruined by people constantly forgetting to send orders, or people ragequitting when things don't go their way, or people being totally incompetent and throwing the game to the first person to declare war on them, or any of the other ways a Diplomacy game can go wrong. Everyone fought hard and well and honorably (for definitions of honor compatible with playing Diplomacy). It was a pleasure to serve as your General Secretary.

\n

 

\n
\n

All previous posts and maps from this game are archived. See this comment for an explanation of how to access the archives.

\n
" } }, { "_id": "vQfSoXyjW8jdYbguF", "title": "Cartoon which I think will appeal to LW", "pageUrl": "https://www.lesswrong.com/posts/vQfSoXyjW8jdYbguF/cartoon-which-i-think-will-appeal-to-lw", "postedAt": "2010-11-13T11:28:37.411Z", "baseScore": 2, "voteCount": 8, "commentCount": 6, "url": null, "contents": { "documentId": "vQfSoXyjW8jdYbguF", "html": "

http://www.viruscomix.com/page408.html

" } }, { "_id": "sM6N3TtWzpdb8KH3n", "title": "Requesting some advice on a question", "pageUrl": "https://www.lesswrong.com/posts/sM6N3TtWzpdb8KH3n/requesting-some-advice-on-a-question", "postedAt": "2010-11-13T06:07:54.256Z", "baseScore": -1, "voteCount": 4, "commentCount": 18, "url": null, "contents": { "documentId": "sM6N3TtWzpdb8KH3n", "html": "

I'm not sure if this is the right place to put this, but there's a problem I can tell I'm biased on so I'm requesting some advice from people unlikely to be biased on it.

\n

Circa the time of the actual secessions leading to the creation of the C.S.A., was secession legal? As far as I can tell it was (the Supreme Court may have practical control, but it isn't infallible, and there's the Tenth Amendment to consider), but I can tell I have biased emotions on the subject, so I'm checking with people less likely to be biased.

" } }, { "_id": "jkf2YjuH8Z2E7hKBA", "title": "Diplomacy as a Game Theory Laboratory", "pageUrl": "https://www.lesswrong.com/posts/jkf2YjuH8Z2E7hKBA/diplomacy-as-a-game-theory-laboratory", "postedAt": "2010-11-12T22:19:50.897Z", "baseScore": 70, "voteCount": 58, "commentCount": 98, "url": null, "contents": { "documentId": "jkf2YjuH8Z2E7hKBA", "html": "

Game theory. You've studied the posts, you've laughed at the comics, you've heard the music1. But the best way to make it Truly Part Of You is to play a genuine game, and I have yet to find any more effective than Diplomacy.

\n

 

\n

Diplomacy is a board game for seven people played on a map of WWI Europe. The goal is to capture as many strategic provinces (\"supply centers\") as possible; eighteen are needed to win. But each player's country starts off with the same sized army, and there is no luck or opportunity for especially clever tactics. The most common way to defeat an enemy is to form coalitions with other players. But your enemies will also be trying to form coalitions, and the most profitable move is often to be a \"double agent\", stringing both countries along as long as you can. All game moves are written in secret and revealed at the same time and there are no enforcement mechanisms, so alliances, despite their central importance, aren't always worth the paper they're printed on.

\n

 

\n

The conditions of Diplomacy - competition for scarce resources, rational self-interested actors, importance of coalitions, lack of external enforcement mechanisms - mirror the conditions of game theoretic situations like the Prisoner's Dilemma (and the conditions of most of human evolution!) and so make a surprisingly powerful laboratory for analyzing concepts like trust, friendship, government, and even religion.

\n

 

\n

Over the past few months, I've played two online games of Diplomacy. One I won through a particularly interesting method; the other I lost quite badly, but with an unusual consolation. This post is based on notes I took during the games about relevant game theoretic situations. You don't need to know the rules of Diplomacy to understand the post, but if you want a look you can find them here.

\n

 

\n

\n

Study One: The Prisoner's Dilemma

\n


\n

The Prisoner's Dilemma is a classic case in game theory in which two players must decide whether or not to cooperate for a common goal. If both players cooperate, they both do better than if both defect, but one player can win big by defecting when the other cooperates. This situation is at the heart of Diplomacy.
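
For concreteness, here is a minimal sketch with made-up payoffs - the numbers are my own illustration, chosen only to satisfy the usual ordering (temptation > reward > punishment > sucker). Defecting is the better reply whatever the other side does, yet mutual cooperation beats mutual defection:

    # Hypothetical Prisoner's Dilemma payoffs: (row player, column player).
    # The ordering 5 > 3 > 1 > 0 is what makes it a dilemma.
    payoffs = {
        ('C', 'C'): (3, 3),   # both cooperate: both do fairly well
        ('C', 'D'): (0, 5),   # lone cooperator gets exploited
        ('D', 'C'): (5, 0),   # lone defector wins big
        ('D', 'D'): (1, 1),   # mutual defection: both do poorly
    }

    for other in ('C', 'D'):
        best = max(('C', 'D'), key=lambda me: payoffs[(me, other)][0])
        print('best reply against', other, 'is', best)     # D both times

    print(payoffs[('C', 'C')], 'vs', payoffs[('D', 'D')])  # yet (C, C) beats (D, D)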

\n

 

\n

Germany and France have agreed to ally against Britain. Both countries have demilitarized their mutual border, and are concentrating all of their forces to the north, where they take province after province of British territory.

\n

 

\n

But Britain is fighting back; not successfully, but every inch of territory is hard-won. France is doing well for itself and has captured a few British cities, but it could be doing better. The French player thinks to eirself: I could either continue battering against the heavily defended British lines, or I could secretly ally with Britain, stab Germany in the back, and waltz in along our undefended mutual border before the Germans even know what hit them. Instead of fighting for each inch of British land, I could be having dinner in Berlin within a week.

\n

 

\n

Meanwhile, in Berlin, the German player is looking towards France's temptingly undefended border and thinking the exact same thing.

\n

 

\n

If both France and Germany are honorable, and if both countries know the other is honorable, the two of them can continue fighting Britain with a two-to-one numerical advantage and probably divide England's lucrative territory among the two of them.

\n

 

\n

If Germany is naively trusting and France is a dishonest backstabber, then France can get obscene rewards by rolling over Germany while the Kaiser's armies are tied up on the fields of England.

\n

 

\n

If both countries are suspicious of the other, or if both countries try to backstab each other simultaneously, then they will both divert forces away from the war on England to guard their mutual border. They will not gain any territory in England, and they will not gain any territory along their border. They've not only stabbed each other in the back, they've shot themselves in the foot.

\n

 

\n

Study Two: Parfit's Hitch-Hiker

\n


\n

The wiki describes Derek Parfit's famous hitchhiker problem as:

\n

 

\n
\n

Suppose you're out in the desert, running out of water, and soon to die - when someone in a motor vehicle drives up next to you. Furthermore, the driver of the motor vehicle is a perfectly selfish ideal game-theoretic agent, and even further, so are you; and what's more, the driver is Paul Ekman, who's really, really good at reading facial microexpressions. The driver says, \"Well, I'll convey you to town if it's in my interest to do so - so will you give me $100 from an ATM when we reach town?\"

\n

 

\n

Now of course you wish you could answer \"Yes\", but as an ideal game theorist yourself, you realize that, once you actually reach town, you'll have no further motive to pay off the driver. \"Yes,\" you say. \"You're lying,\" says the driver, and drives off, leaving you to die.

\n

 

\n
\n

The so-called Key Lepanto opening is one of the more interesting opening strategies in Diplomacy, and one that requires guts of steel to pull off. It goes like this: Italy and Austria decide to ally against Turkey. This is common enough, and hindered by the fact that Turkey is probably expecting it and Italy's kind of far away from Turkey anyway.

\n

 

\n

So Italy and Austria do something unexpected. Italy swears loudly and publicly that ey's allied with Austria. Then, the first turn, Italy moves deep into undefended Austrian territory! Austria is incensed, and curses loud and long at Italy's betrayal and at eir own stupidity for leaving the frontier unguarded. Turkey laughs and leaves the two of them to their war when - boom - Austria and Italy launch a coordinated attack against Turkey from Italy's base deep in Austrian territory. The confused Turkey has no chance to organize a resistance before combined Italo-Austrian forces take Constantinople.

\n

 

\n

It's frequently a successful strategy, especially for Italy. You know what else is a successful strategy for Italy? Doing this up to the point where they take over lots of Austrian territory, forgetting the part where it was all just a ploy, and then ending up in control of lots of Austrian territory, after which they can fight Turkey at their leisure.

\n

 

\n

It's very much in Italy's advantage to play a Key Lepanto opening, and they may beg the Austrian player to go for it, saying correctly that it would benefit both of them. But the Austrian player very often refuses, telling Italy that ey would have no incentive not to just keep the conquered territory.

\n

 

\n

This problem resembles the Hitchhiker: Italy is the lost man, and Austria is the driver. Italy really wants Austria to help em play the awesome Key Lepanto opening, but Austria knows that ey would have no incentive not to break eir promise once Austria's given em the help ey needs. As a result, neither country gets what they want. The Key Lepanto opening is played only rarely, and this is one of the reasons.

\n

 

\n

Study Three: Enforceable Side Contracts

\n


\n

The Prisoner's Dilemma is nontrivial because there's no enforcement mechanism. In the presence of an enforcement mechanism, it becomes much simpler. Say two mobsters are about to be arrested, and expect to be put in a Prisoner's Dilemma type situation. They approach the mob boss with a contract with both of their names on it, saying that they have both agreed that if either of them testifies against the other, the mob boss should send his goons to shoot the rat.

\n

 

\n

For many payoff matrices, signing this contract will be a no-brainer. It ensures your opponent will cooperate at the relatively low cost of forcing you to cooperate yourself, and almost guarantees you safe passage into the desirable (C,C) square. Not only does it prevent your opponent from defecting out of sheer greed, but it prevents your opponent from worrying that you're going to defect and then defecting emself to save emself from being the chump.

\n

 

\n

The game of Diplomacy I won, I won through an enforceable side contract (which lost me a friend and got me some accusations of cheating, but this is par for the course for a good Diplomacy game). I was Britain; my friend H was France. H and I knew each other from a medieval-times role-playing game, in which we both held land and money. The medieval kingdom of this game had a law on the books that any oath witnessed by a noble was binding on both parties and would be enforced by the king. So H and I went into our role-playing game and swore an oath before a cooperative noble, declaring that we would both aid each other in a permanent alliance in Diplomacy, or else all our in-game lands and titles would be forfeit.

\n

 

\n

A lot of people made fun of me for this, including H, but in my defense I did end up winning the game. H and I were able to do things that would otherwise have been impossible; for example, in order to convince our enemy Germany that we were at war, I took over the French city of Brest. Normally, this would be almost impossible for two allies to coordinate, even as a red herring, for exactly the reasons listed in the Hitchhiker problem above. Since the two of us were able to trust each other absolutely, this otherwise difficult maneuver became easy.

\n

 

\n

One of the advantages to strong central government is that it provides an enforcement mechanism for contracts, which benefits all parties.

\n

 

\n

Study Four: Religion As Enforcement

\n


\n

Religion is a special case of the enforceable side-contract in which God is doing the enforcing. God doesn't have to exist for this to work; as long as at least one party believes He does, the threat of punishment will be credible. The advantage of being able to easily make enforceable side contracts even in the absence of social authority may be one reason religion became so popular, and if humans do turn out to have a genetic tendency toward belief, the side contracts might have provided part of the survival advantage that spread the gene.

\n

 

\n

In a Youngstown Variant game (like Diplomacy, but with Eurasia instead of just Europe), I was playing Italy and after colonizing Africa was trying to juggle my forces around to defend borders with Germany, France, Turkey, and India.

\n

 

\n

India was played by my friend A, who I sometimes have philosophical discussions with and who I knew to be an arch-conservative religion-and-family-values type. I decided to try something which, as far as I know, no one's ever tried in a Diplomacy game before. \"Do you swear in the name of God and your sacred honor that you won't attack me?\" I asked.

\n

 

\n

\"Yes,\" said A, and I knew he meant it, because he takes that sort of thing really seriously. I don't know if he thought he would literally go to Hell if he broke his oath, but I'm pretty sure he wasn't willing to risk it over a board game. So I demilitarized my border with India. I concentrated my forces to the west, he concentrated them to the east, and both avoided a costly stalemate in the Indian Ocean and had more forces to send elsewhere. In the future, I will seek out A for alliances more often, since I have extra reason to believe he won't betray me; this will put A in an unusually strong position.

\n

 

\n

This is not a unique advantage of religion; any strongly held philosophy that trumps self-interest would do. I would have made the same deal with Alicorn, who has stated loudly and publicly that she is a deontologist who has a deep personal aversion to lying2. I would have made it with Eliezer, who has a consequentialist morality but, on account of the consequences, has said he would not break an oath even for the sake of saving the world.

\n

 

\n

But I only trust Alicorn and Eliezer because I've discussed morality with both of them in a situation where they had no incentive to lie; it was only in the very unusual conditions of Less Wrong that they could send such a signal believably. Religion is a much easier signal to send and receive without being a moral philosopher.

\n

 

\n

Study Five: Excuses as Deviations from a Rule

\n


\n

My previous post, Eight Short Studies on Excuses, was inspired by a maneuver I pulled during a Diplomacy game.

\n

 

\n

I was Italy, and Turkey and I had formed a mutual alliance against Austria. As part of the alliance, we had decided not to fight over who got the lucrative neutral territories in between our empires. I would get Egypt, Turkey would get Greece and Yemen, and we would avoid the resource drain of fighting each other for them so we could both concentrate on Austria.

\n

 

\n

Both Turkey and I would have liked to grab the centers that had been promised to the other. But both Turkey and I knew that maintaining the general rule of alliance between us was higher utility than getting one extra territory. BUT both Turkey and I knew that the other would be loath to break off the alliance just because their partner had committed one little infraction. BUT both Turkey and I knew that we would have to do exactly that, or else our ally would have carte blanche to violate whatever terms of the alliance they wanted.

\n

 

\n

Then India (from whom I had not yet extracted his oath) made a move towards Yemen, threatening to take it from both of us. I responded by moving a navy to Yemen, supposedly to see off the Indian menace. I then messaged Turkey, saying that although I still respected the terms of our alliance, he was clearly too weak to keep Yemen out of Indian hands, so I would be fortifying it for him, and I hoped he would have the maturity to see this as a mutually beneficial move to prevent Indian expansionism, and not get too hung up on the exact terms of our alliance.

\n

 

\n

The gambit worked: Turkey decided that maintaining our alliance was more important than keeping Yemen, and that because of the trouble with India my conquest of Yemen was not indicative of a general pattern of alliance-breaking that needed to be punished.

\n

 

\n

I can't claim total victory here: several years later, when the threat of Austria had disappeared, Turkey betrayed me and captured half my empire, partly because of my actions in Yemen.

\n

 

\n

Study Six: For the Sake of Revenge

\n


\n

This comes from the book Game Theory at Work:

\n

 

\n
\n

Consider the emotion of revenge. At its core, revenge means hurting someone who has harmed you, even if you would be better off leaving him alone. Revenge is an irrational desire to harm others who have injured our loved ones or us.

\n

 

\n

To see the benefit of being known as vengeful, consider a small community living in prehistoric times. Imagine that a group of raiders stole food from this community. A rational community would hunt down the raiders only if the cost of doing so was not too high. A vengeance-endowed community would hunt down the raiders regardless of the cost. Since the raiders would rather go after the rational community, being perceived as vengeful provides you with protection and therefore confers an evolutionary advantage.

\n
\n

 

\n

I play Diplomacy often against the same people, so I decided I needed to cultivate a reputation for vengefulness. And by \"decided to cultivate a reputation for vengefulness\", I mean \"Turkey betrayed me and I was filled with the burning rage of a thousand suns\".

\n

 

\n

So my drive for revenge was mostly emotional instead of rational. But what I didn't do was suppress my anger, the way people are always telling you to. Suppressing anger is a useful strategy for one-shot games, but in an iterated game, getting a reputation for anger is often more valuable than behaving in your immediate rational self-interest.

\n

 

\n

So I decided to throw the game to Germany, Turkey's biggest rival. I moved my forces away from the Italian-German border and invited Germany to take over my territory. At the same time, I used my remaining forces to support German attacks against Turkey. The Austrians, who had been dealing with Turkey's betrayals even longer than I had, happily joined in. With our help, German forces scored several resounding victories against Turkey and pushed it back from near the top of the game down to a distant third.

\n

 

\n

Around the same time, Germany's other enemy France also betrayed me. So I told France I was throwing the game to Germany to punish him. No point in missing a perfectly good opportunity to cultivate a reputation for vengefulness.

\n

 

\n

If I had done the rational thing and excused Turkey's betrayal because it was in my self-interest to cut my losses, I could have had a mediocre end game, and Turkey's player would have happily betrayed me the next game as soon as he saw any advantage in doing so. Instead, I'm doing very poorly in the end game, but Turkey - and everyone else - will be very wary about betraying me next time around.

\n

 

\n

Study Seven: In-Group Bias as a Schelling Point

\n


\n

I made the mistake of moderating a game of Diplomacy at the SIAI House, which turned into one of the worst I've ever seen. The players were five SIAI Visiting Fellows and two of my non-SIAI friends who happened to be in the area.

\n

 

\n

Jasen came up with the idea of an alliance of the five SIAI players against my two friends. Although a few of the Fellows vacillated back and forth and defected a few times, he was generally able to keep the loyalty of the five Fellows until my two friends had been eliminated from the game relatively early on. Although normally the game would have continued until one of the Fellows managed to dominate the others, it was already very late and we called it a night at that point.

\n

 

\n

It's easy to explain what happened as an irrational in-group bias, or as \"loyalty\" or \"patriotism\" among the SIAI folk. Jasen himself explained it as a desire to prove that SIAI people were especially cooperative and especially good at game theory, which I suppose worked. But there's another, completely theoretical perspective from which to view the SIAI Alliance.

\n

 

\n

Imagine you are on a lifeboat with nine other people, and determine that one of the ten of you must be killed and eaten to provide sustenance to the others. You are all ready to draw lots to decide who is dinner when you shout out \"Hey, instead of this whole drawing lots thing, let's kill and eat Bob!\"

\n

 

\n

If your fellow castaways are rational agents, they might just agree. If they go with lots, each has a 10% chance of ending up dinner. If everyone just agrees on Bob, then everyone has a 0% chance of ending up dinner (except poor Bob). Nine out of ten people are better off, and nine out of ten of you vote to adopt the new plan. Whether your lifeboat decides things by majority vote or by physical violence, it doesn't look good for Bob.

\n

 

\n

But imagine a week later, you still haven't been rescued, and the whole situation repeats. If everyone lets you repeat your action of calling out a name, there's a 1/9 chance it'll be eir name - no better than drawing lots. In fact, since you're very unlikely to call out your own name, it's more of a 1/8 chance - worse than just drawing lots. So everyone would like to be the one who calls out the name, and as soon as the lots are taken out, everyone shouts \"Hey, instead of the whole drawing lots thing, let's kill and eat X!\" where X is a different person for each of the nine castaways. This is utterly useless, and you probably end up just drawing lots.

\n

 

\n

But suppose eight of the nine of you are blond, and one is a brunette. The brunette is now a Schelling point. If you choose to kill and eat the brunette, there's a pretty good chance all of your blond friends will do the same, even if none of you had a pre-existing prejudice against brunettes. Therefore, all eight of you shout out \"Let's kill and eat the brunette!\", since this is safer than drawing lots. Your lifeboat has invented in-group bias from rational principles.

\n

 

\n

Such alliances are equally attractive in Diplomacy. When the five SIAI Fellows allied against my two friends, they ensured there was a five-against-two alliance with themselves on the winning side, and successfully reduced the gameboard from six opponents to four. Although they could have done this with anyone (eg Jasen could have selected two other Fellows and my two friends, and forged an equivalent coalition of five), Jasen would have been at risk of five other people having the same idea and excluding him. By choosing a natural and obvious division in which he was on the majority, Jasen avoided this risk.

\n

 

\n

Rationalist Diplomacy

\n


\n

I'm interested in seeing what a Diplomacy game between Less Wrongers looks like. I'm willing to moderate. The first seven people to sign up get places (don't sign up if you don't expect to have enough time for about two or three turns/week), and the next few can be alternates. Doesn't matter if you've ever played before as long as you read the rules above and think you understand them. (We already have seven people. See the post in Discussion. If many more sign up, someone else may want to moderate a second game).

\n

 

\n

 

\n

Footnotes

\n


\n

1: Source: \"Nice Guys Finish First\" in the Frameshift album Unweaving the Rainbow.

\n

 

\n

2. Alicorn wishes me to note that she considers anyone playing a Diplomacy game  without prior out-of-game-context agreements secured to have waived eir right to complete honesty from her, but the general principle still stands.

" } }, { "_id": "fgxvvttvn6DWEJcLR", "title": "New genetic evidence of positive selection for Ashkenazi diseases", "pageUrl": "https://www.lesswrong.com/posts/fgxvvttvn6DWEJcLR/new-genetic-evidence-of-positive-selection-for-ashkenazi", "postedAt": "2010-11-12T22:17:37.909Z", "baseScore": 7, "voteCount": 7, "commentCount": 7, "url": null, "contents": { "documentId": "fgxvvttvn6DWEJcLR", "html": "

\n

\n

 

\n
http://johnhawks.net/weblog/reviews/genomics/selection/bray-ashkenazi-2010.html\n
.
\n

\n
\n

 

\n
\n

" } }, { "_id": "8BsE3BojYT8J2koyy", "title": "Rational responses to potential home invasion threat?", "pageUrl": "https://www.lesswrong.com/posts/8BsE3BojYT8J2koyy/rational-responses-to-potential-home-invasion-threat", "postedAt": "2010-11-12T13:54:32.020Z", "baseScore": 12, "voteCount": 15, "commentCount": 43, "url": null, "contents": { "documentId": "8BsE3BojYT8J2koyy", "html": "

About three hours ago - in the very early morning, pre-dawn - someone knocked on my window to get my attention and made a lewd proposition. Less than two weeks ago, someone - probably the same person; definitely someone with a similar voice and build - woke me up by whispering 'open the door' until I looked out the window. I am, needless to say, not amused.

\n

Since the first incident, I've been leaving my porch light on, and I've had a webcam sitting prominently in the window. The webcam has been commented on by both of the people who know me and would have an opportunity to do so, so I expected that it would be a reasonable deterrent, but apparently this guy is very stupid, very desperate, or both.

\n

I called the police both times, and they responded promptly, but didn't see anyone walking around near my apartment. This leads me to believe that I'm being harassed by a nearby neighbor.

\n

The webcam was not on during the second incident, but it will be on nightly from now on. I also intend to add a light in the window near my bed - I didn't get a good look at the guy, even though he was right there and not making any apparent attempt to hide, because he was between me and the porch light.

\n

I'd appreciate any other practical suggestions that anyone might have, bearing in mind that I'm in an apartment and can't make many changes to the building itself. Also, I was already working on buying a house before the first incident even happened, so suggestions that I move aren't useful - I'm already working on that, thank goodness.

\n

(The chances of me having trouble with this individual in any situation other than a home invasion seem pretty small - I don't leave the house often, and not on a regular schedule at all, plus I don't drive so I generally have a friend in a car watching me to and from the door, so the usually more risky situation of getting in and out of the house isn't an issue for the most part. I will be extra-careful about getting my mail and thoughtful about when I leave to do my laundry.)

" } }, { "_id": "8KQ6K6HPrvMaj8SkE", "title": "Outreach opportunity", "pageUrl": "https://www.lesswrong.com/posts/8KQ6K6HPrvMaj8SkE/outreach-opportunity", "postedAt": "2010-11-12T11:07:29.504Z", "baseScore": 18, "voteCount": 14, "commentCount": 21, "url": null, "contents": { "documentId": "8KQ6K6HPrvMaj8SkE", "html": "

Ars Technica are holding a competition for people to make a science video up to 3 minutes long \"to explain a scientific concept in terms that a high school science class would not only understand, but actually be interested in watching\". Prizes in three categories: biology, physics, and mathematics. Deadline is December 25.  More details here.

\n

Anyone want to have a go at Bayes' theorem? Cognitive bias? Defeating death? Invisible purple dragons?

\n

 

" } }, { "_id": "bswiZf3W9TmGvTAuM", "title": "Outreach opportunity", "pageUrl": "https://www.lesswrong.com/posts/bswiZf3W9TmGvTAuM/outreach-opportunity-0", "postedAt": "2010-11-12T11:04:14.087Z", "baseScore": 2, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "bswiZf3W9TmGvTAuM", "html": "

Ars Technica are holding a competition for people to make a science video up to 3 minutes long \"to explain a scientific concept in terms that a high school science class would not only understand, but actually be interested in watching\".  Deadline is December 25.  More details here.

\n

Anyone want to have a go at Bayes' theorem? Cognitive bias? Defeating death? Invisible purple dragons?

" } }, { "_id": "ouvbnJ5BtQ5oHnHrB", "title": "Study shows existence of psychic powers.", "pageUrl": "https://www.lesswrong.com/posts/ouvbnJ5BtQ5oHnHrB/study-shows-existence-of-psychic-powers", "postedAt": "2010-11-12T01:46:03.970Z", "baseScore": 5, "voteCount": 8, "commentCount": 26, "url": null, "contents": { "documentId": "ouvbnJ5BtQ5oHnHrB", "html": "

According to the New Scientist, Daryl Bem has a paper to appear in the Journal of Personality and Social Psychology, which claims that the participants in psychological experiments are able to predict the future. A preprint of this paper is available online. Here's a quote from the New Scientist article:

\n

\n

\n

In one experiment, students were shown a list of words and then asked to recall words from it, after which they were told to type words that were randomly selected from the same list. Spookily, the students were better at recalling words that they would later type.

\n

In another study, Bem adapted research on \"priming\" – the effect of a subliminally presented word on a person's response to an image. For instance, if someone is momentarily flashed the word \"ugly\", it will take them longer to decide that a picture of a kitten is pleasant than if \"beautiful\" had been flashed. Running the experiment back-to-front, Bem found that the priming effect seemed to work backwards in time as well as forwards.

\n
\n

\n

Question: even assuming the methodology is sound, given experimenter bias, publication bias and your priors on the existence of psi, what sort of p-values would you need to see in that paper in order to believe with, say, 50% probability that the effect measured is real?
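
One way to make that concrete (a sketch only - the prior below is a placeholder, and the helper function is mine, not anything standard): posterior odds are prior odds times the likelihood ratio of the evidence, so the question becomes how large a likelihood ratio the paper would have to supply, after discounting for experimenter and publication bias, to pull your prior up to even odds. A p-value by itself is not that likelihood ratio, which is much of the difficulty.

    # Sketch: how strong would the net evidence (as a Bayes factor) have to be
    # to move a given prior on psi up to 50%? The prior here is a placeholder.
    def required_bayes_factor(prior, target=0.5):
        prior_odds = prior / (1 - prior)
        target_odds = target / (1 - target)
        return target_odds / prior_odds

    print(f'{required_bayes_factor(prior=1e-20):.2e}')   # 1.00e+20 for this placeholder prior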

" } }, { "_id": "gMXsyhPiEJbGerF6F", "title": "If a tree falls on Sleeping Beauty...", "pageUrl": "https://www.lesswrong.com/posts/gMXsyhPiEJbGerF6F/if-a-tree-falls-on-sleeping-beauty", "postedAt": "2010-11-12T01:14:26.076Z", "baseScore": 148, "voteCount": 122, "commentCount": 28, "url": null, "contents": { "documentId": "gMXsyhPiEJbGerF6F", "html": "

Several months ago, we had an interesting discussion about the Sleeping Beauty problem, which runs as follows:

\n
\n

Sleeping Beauty volunteers to undergo the following experiment. On Sunday she is given a drug that sends her to sleep. A fair coin is then tossed just once in the course of the experiment to determine which experimental procedure is undertaken. If the coin comes up heads, Beauty is awakened and interviewed on Monday, and then the experiment ends. If the coin comes up tails, she is awakened and interviewed on Monday, given a second dose of the sleeping drug, and awakened and interviewed again on Tuesday. The experiment then ends on Tuesday, without flipping the coin again. The sleeping drug induces a mild amnesia, so that she cannot remember any previous awakenings during the course of the experiment (if any). During the experiment, she has no access to anything that would give a clue as to the day of the week. However, she knows all the details of the experiment.

\n

Each interview consists of one question, “What is your credence now for the proposition that our coin landed heads?”

\n
\n

In the end, the fact that there were so many reasonable-sounding arguments for both sides, and so much disagreement about a simple-sounding problem among above-average rationalists, should have set off major alarm bells. Yet only a few people pointed this out; most commenters, including me, followed the silly strategy of trying to answer the question, and I did so even after I noticed that my intuition could see both answers as being right depending on which way I looked at it, which in retrospect would have been a perfect time to say “I notice that I am confused” and backtrack a bit…

\n

And on reflection, considering my confusion rather than trying to consider the question on its own terms, it seems to me that the problem (as it’s normally stated) is completely a tree-falling-in-the-forest problem: a debate about the normatively “correct” degree of credence which only seemed like an issue because any conclusions about what Sleeping Beauty “should” believe weren’t paying their rent, were disconnected from any expectation of feedback from reality about how right they were.

\n

It may seem either implausible or alarming that as fundamental a concept as probability can be the subject of such debates, but remember that the “If a tree falls in the forest…” argument only comes up because the understanding of “sound” as “vibrations in the air” and “auditory processing in a brain” coincide often enough that most people other than philosophers have better things to do than argue about which is more correct. Likewise, in situations that we actually encounter in real life where we must reason or act on incomplete information, long-run frequency is generally about the same as optimal decision-theoretic weighting. If you’re given the question “If you have a bag containing a white marble and two black marbles, and another bag containing two white marbles and a black marble, and you pick a bag at random and pick a marble out of it at random and it’s white, what’s the probability that you chose the second bag?” then you can just answer it as given, without worrying about specifying a payoff structure, because no matter how you reformulate it in terms of bets and payoffs, if your decision-theoretic reasoning talks about probabilities at all then there’s only going to be one sane probability you can put into it. You can assume that answers to non-esoteric probability problems will be able to pay their rent if they are called upon to do so, and so you can do plenty within pure probability theory long before you need your reasoning to generate any decisions.
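
To spell out the marble example: P(second bag | white) = (1/2 · 2/3) / (1/2 · 1/3 + 1/2 · 2/3) = 2/3, with no payoff structure needed. A brute-force enumeration of the problem exactly as stated agrees:

    from fractions import Fraction

    # Bag 1 holds one white and two black marbles, bag 2 holds two white and one black.
    # Pick a bag uniformly at random, then a marble uniformly from that bag.
    bags = {1: ['W', 'B', 'B'], 2: ['W', 'W', 'B']}

    p_white = Fraction(0)
    p_white_and_bag2 = Fraction(0)
    for bag, contents in bags.items():
        for marble in contents:
            p = Fraction(1, 2) * Fraction(1, len(contents))   # P(bag) * P(marble | bag)
            if marble == 'W':
                p_white += p
                if bag == 2:
                    p_white_and_bag2 += p

    print(p_white_and_bag2 / p_white)   # 2/3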

\n

But when you start getting into problems where there may be multiple copies of you and you don’t know how their responses will be aggregated — or, more generally, where you may or may not be scored on your probability estimate multiple times or may not be scored at all, or when you don’t know how it’s being scored, or when there may be other agents following reasoning correlated with but not necessarily identical to yours — then I think talking too much about “probability” directly will cause different people to be solving different problems, given the different ways they will implicitly imagine being scored on their answers so that the question of “What subjective probability should be assigned to x?” has any normatively correct answer. Here are a few ways that the Sleeping Beauty problem can be framed as a decision problem explicitly:

\n
\n

Each interview consists of Sleeping Beauty guessing whether the coin came up heads or tails, and being given a dollar if she was correct. After the experiment, she will keep all of her aggregate winnings.

\n
\n

In this case, intending to guess heads has an expected value of $.50 (because if the coin came up heads, she’ll get $1, and if it came up tails, she’ll get nothing), and intending to guess tails has an expected value of $1 (because if the coin came up heads, she’ll get nothing, and if it came up tails, she’ll get $2). So she should intend to guess tails.
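
A quick Monte Carlo of exactly this payoff structure - just a sketch, with nothing in it beyond the rules stated above - reproduces the comparison:

    import random

    # One dollar per correct guess, all winnings kept: heads means one interview,
    # tails means two interviews, and Beauty gives the same guess at each one.
    def average_winnings(guess, trials=100_000, seed=0):
        rng = random.Random(seed)
        total = 0
        for _ in range(trials):
            coin = rng.choice(['heads', 'tails'])
            interviews = 1 if coin == 'heads' else 2
            total += interviews * (1 if guess == coin else 0)
        return total / trials

    print(average_winnings('heads'))   # close to 0.5
    print(average_winnings('tails'))   # close to 1.0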

\n
\n

Each interview consists of Sleeping Beauty guessing whether the coin came up heads or tails. After the experiment, she will be given a dollar if she was correct on Monday.

\n
\n

In this case, she should clearly be indifferent (which you can call “.5 credence” if you’d like, but it seems a bit unnecessary).

\n
\n

Each interview consists of Sleeping Beauty being told whether the coin landed on heads or tails, followed by one question, “How surprised are you to hear that?” Should Sleeping Beauty be more surprised to learn that the coin landed on heads than that it landed on tails?

\n
\n

I would say no; this seems like a case where the simple probability-theoretic reasoning applies. Before the experiment, Sleeping Beauty knows that a coin is going to be flipped, and she knows it’s a fair coin, and going to sleep and waking up isn’t going to change anything she knows about it, so she should not be even slightly surprised one way or the other. (I’m pretty sure that surprisingness has something to do with likelihood. I may write a separate post on that, but for now: after finding out whether the coin did come up heads or tails, the relevant question is not “What is the probability that the coin came up {heads,tails} given that I remember going to sleep on Sunday and waking up today?”, but “What is the probability that I’d remember going to sleep on Sunday and waking up today given that the coin came up {heads,tails}?”, in which case either outcome should be equally surprising, in which case neither outcome should be surprising at all.)

\n
\n

Each interview consists of one question, “What is the limit of the frequency of heads as the number of repetitions of this experiment goes to infinity?”

\n
\n

Here of course the right answer is “.5, and I hope that’s just a hypothetical…”

\n
\n

Each interview consists of one question, “What is your credence now for the proposition that our coin landed heads?”, and the answer given will be scored according to a logarithmic scoring rule, with the aggregate result corresponding to the number of utilons (converted to dollars, let’s say) she will be penalized after the experiment.

\n
\n

In this case it is optimal to bet 1/3 that the coin came up heads, 2/3 that it came up tails:

Bet on heads:        1/2                      1/3
Actual flip:     Heads      Tails        Heads        Tails
Monday:          -1 bit     -1 bit       -1.585 bits  -0.585 bits
Tuesday:         n/a        -1 bit       n/a          -0.585 bits
Total:           -1 bit     -2 bits      -1.585 bits  -1.17 bits
Expected:            -1.5 bits                -1.3775 bits
\n

(If you’re not familiar enough with the logarithmic scoring rule to trust that 1/3 is better than every other option too, you can check this by graphing y = (log2(x) + 2 log2(1 - x))/2, where x is the probability you assign to heads and y is the expected utility.)
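
If you’d rather check numerically than graph it, here is a quick check in Python (my own sketch, nothing assumed beyond the standard library):

```python
# Expected log score (in bits) when Sleeping Beauty assigns probability x to heads.
# She is scored once if the coin came up heads and twice (Monday and Tuesday) if tails,
# and each outcome of the fair coin has probability 1/2.
import math

def expected_score(x):
    return (math.log2(x) + 2 * math.log2(1 - x)) / 2

for x in [0.25, 1/3, 0.5, 0.6]:
    print(f"x = {x:.3f}: expected score = {expected_score(x):.4f} bits")
# x = 1/3 gives about -1.3774 bits, less negative than any of the alternatives above.
```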

\n

So I hope it is self-evident that reframing seemingly-paradoxical probability problems as decision problems generally makes them trivial, or at least agreeably solvable and non-paradoxical. What may be more controversial is that I claim that this is satisfactory not as a circumvention but as a dissolution of the question “What probability should be assigned to x?”, when you have a clear enough idea of why you’re wondering about the “probability.” Can we really taboo concepts like “probability” and “plausibility” and “credence”? I should certainly hope so; judgments of probability had better be about something, and not just rituals of cognition that we use because it seems like we’re supposed to rather than because it wins.

\n

But when I try to replace “probability” with what I mean by it, and when I mean it in any normative sense — not, like, out there in the territory, but just “normative” by whatever standard says that assigning a fair coin flip a probability of .5 heads tends to be a better idea than assigning it a probability of .353289791 heads — then I always find myself talking about optimal bets or average experimental outcomes. Can that really be all there is to probability as degree of belief? Can’t we enjoy, for its own sake, the experience of having maximally accurate beliefs given whatever information we already have, even in circumstances where we don’t get to test it any further? Well, yes and no; if your belief is really about anything, then you’ll be able to specify, at the very least, a ridiculous hypothetical experiment that would give you information about how correct you are, or a ridiculous hypothetical bet that would give you an incentive to optimally solve a more well-defined version of the problem. And if you’re working with a problem where it’s at all unclear how to do this, it is probably best to backtrack and ask what problem you’re trying to solve, why you’re asking the question in the first place. So when in doubt, ask for decisions rather than probabilities. In the end, the point (aside from signaling) of believing things is (1) to allow you to effectively optimize reality for the things you care about, and (2) to allow you to be surprised by some possible experiences and not others so you get feedback on how well you’re doing. If a belief does not do either of those things, I’d hesitate to call it a belief at all; yet that is what the original version of the Sleeping Beauty problem asks you to do.

\n

Now, it does seem to me that following the usual rules of probability theory (the ones that tend to generate optimal bets in that strange land where intergalactic superintelligences aren’t regularly making copies of you and scientists aren’t knocking you out and erasing your memory) tells Sleeping Beauty to assign .5 credence to the proposition that the coin landed on heads. Before the experiment has started, Sleeping Beauty already knows what she’s going to experience — waking up and pondering probability — so if she doesn’t already believe with 2/3 probability that the coin will land on tails (which would be a strange thing to believe about a fair coin), then she can’t update to that after experiencing what she already knew she was going to experience. But in the original problem, when she is asked “What is your credence now for the proposition that our coin landed heads?”, a much better answer than “.5” is “Why do you want to know?”. If she knows how she’s being graded, then there’s an easy correct answer, which isn’t always .5; if not, she will have to do her best to guess what type of answer the experimenters are looking for; and if she’s not being graded at all, then she can say whatever the hell she wants (acceptable answers would include “0.0001,” “3/2,” and “purple”).

\n

I’m not sure if there is more to it than that. Presumably the “should” in “What subjective probability should I assign x?” isn’t a moral “should,” but more of an “if-should” (as in “If you want x to happen, you should do y”), and if the question itself seems confusing, that probably means that under the circumstances, the implied “if” part is ambiguous and needs to be made explicit. Is there some underlying true essence of probability that I’m neglecting? I don’t know, but I am pretty sure that even if there were one, it wouldn’t necessarily be the thing we’d care about knowing in these types of problems anyway. You want to make optimal use of the information available to you, but it has to be optimal for something.

\n

I think this principle should help to clarify other anthropic problems. For example, suppose Omega tells you that she just made an exact copy of you and everything around you, enough that the copy of you wouldn’t be able to tell the difference, at least for a while. Before you have a chance to gather more information, what probability should you assign to the proposition that you yourself are the copy? The answer is non-obvious, given that there already is a huge and potentially infinite number of copies of you, and it’s not clear how adding one more copy to the mix should affect your belief about how spread out you are over what worlds. On the other hand, if you’re Dr. Evil and you’re in your moon base preparing to fire your giant laser at Washington, DC when you get a phone call from Austin “Omega” Powers, and he tells you that he has made an exact replica of the moon base on exactly the spot at which the moon laser is aimed, complete with an identical copy of you (and an identical copy of your identical miniature clone) receiving the same phone call, and that its laser is trained on your original base on the moon, then the decision is a lot easier: hold off on firing your laser and gather more information or make other plans. Without talking about the “probability” that you are the original Dr. Evil or the copy or one of the potentially infinite Tegmark duplicates in other universes, we can simply look at the situation from the outside and see that if you do fire your laser then you’ll blow both of yourselves up, and that if you don’t fire your laser then you have some new competitors at worst and some new allies at best.

\n

So: in problems where you are making one judgment that may be evaluated more or less than one time, and where you won’t have a chance to update between those evaluations (e.g. because your one judgment will be evaluated multiple times because there are multiple copies of you or your memory will be erased), just ask for decisions and leave probabilities out of it to whatever extent possible.

\n

In a followup post, I will generalize this point somewhat and demonstrate that it helps solve some problems that remain confusing even when they specify a payoff structure.

" } }, { "_id": "YeFhEcRz65ikSsgjd", "title": "The Aspirin Paradox- replacement for the Smoking Lesion Problem?", "pageUrl": "https://www.lesswrong.com/posts/YeFhEcRz65ikSsgjd/the-aspirin-paradox-replacement-for-the-smoking-lesion", "postedAt": "2010-11-11T23:57:30.174Z", "baseScore": 10, "voteCount": 7, "commentCount": 2, "url": null, "contents": { "documentId": "YeFhEcRz65ikSsgjd", "html": "

It's been pointed out that the Smoking Lesion problem is a poorly chosen decision theory problem, because in the real world there actually is a direct causal link from smoking to cancer, and people's intuitions are influenced more by that than by the stated parameters of the scenario. In his TDT document, Eliezer concocts a different artificial example (chewing gum and throat abscesses).  I recently noticed, though, a potentially good real-world example of the same dynamic: the Aspirin Paradox.

\n

Despite the effectiveness of aspirin in preventing heart attacks, those who regularly take aspirin are at a higher risk of a second heart attack, because those with symptoms of heart disease are more likely than those without symptoms to be taking aspirin regularly. While it turns out this \"risk factor\" is mostly screened off by other measurable health factors, it's a valid enough correlation for the purposes of decision theory.

" } }, { "_id": "AQ6ACBeoxwhA8jzZF", "title": "SIA and the Two Dimensional Doomsday Argument", "pageUrl": "https://www.lesswrong.com/posts/AQ6ACBeoxwhA8jzZF/sia-and-the-two-dimensional-doomsday-argument", "postedAt": "2010-11-11T22:00:17.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "AQ6ACBeoxwhA8jzZF", "html": "

This post might be technical. Try reading this if I haven’t explained everything well enough.

\n

When the Self Sampling Assumption (SSA) is applied to the Great Filter it gives something pretty similar to the Doomsday Argument, which is what it gives without any filter. SIA gets around the original Doomsday Argument. So why can’t it get around the Doomsday Argument in the Great Filter?

\n

The Self Sampling Assumption (SSA) says you are more likely to be in possible worlds which contain larger ratios of people you might be vs. people you know you are not*.

\n
\"\"

If you have a silly hat, SSA says you are more likely to be in world 2 - assuming Worlds 1 and 2 are equally likely to exist (i.e. you haven't looked aside at your companions), and your reference class is people.

\n

The Doomsday Argument uses the Self Sampling Assumption. Briefly, it argues that if there are many more generations of humans, the ratio of people who might be you (born at the same time as you) to people you can’t be (everyone else) will be smaller than it would be if there are few future generations of humans. Thus few generations is more likely than previously estimated.

\n

An unusually large ratio of people in your situation can be achieved by a possible world having unusually few people unlike you in it or unusually many people like you, or any combination of these.

\n

 

\n
\"\"

Fewer people who can't be me or more people who may be me make a possible world more likely according to SSA.

\n

For instance on the horizontal dimension, you can compare a set of worlds which all have the same number of people like you, and different numbers of people you are not. The world with few people unlike you has the largest increase in probability.

\n

 

\n
\"Doomsday\"

The top row from the previous diagram. The Doomsday Argument uses possible worlds varying in this dimension only.

\n

The Doomsday Argument is an instance of variation in the horizontal dimension only. In every world there is one person with your birth rank, but the numbers of people with future birth ranks differ.

\n

At the other end of the spectrum you could be comparing  worlds with the same number of future people and vary the number of current people, as long as you are ignorant of how many current people there are.

\n
\"\"

The vertical axis. The number of people in your situation changes, while the number of others stays the same. The world with a lot of people like you gets the largest increase in probability.

\n

This gives a sort of Doomsday Argument: the population will fall, most groups won’t survive.

\n

The Self Indication Assumption (SIA) is equivalent to using SSA and then multiplying the results by the total population of people both like you and not.

\n

In the horizontal dimension, SIA undoes the Doomsday Argument. SSA favours smaller total populations in this dimension, which are disfavoured to the same extent by SIA, perfectly cancelling.

\n

[1/total] * total = 1
\n(the bracketed factor is the SSA shift alone)

\n

In vertical cases however, SIA actually makes the Doomsday Argument analogue stronger. The worlds favoured by SSA in this case are the larger ones, because they have more current people. These larger worlds are further favoured by SIA.

\n

[(total – 1)/total] * total = total – 1
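
A toy numerical illustration of the two cases (my own sketch in Python; the particular world sizes are invented for illustration):

```python
# SSA weight: the ratio of people-you-might-be to all people in a possible world.
# SIA weight: the SSA weight multiplied by the total population of that world.
def ssa_weight(like_me, others):
    return like_me / (like_me + others)

def sia_weight(like_me, others):
    return ssa_weight(like_me, others) * (like_me + others)  # simplifies to like_me

# Horizontal (Doomsday-style) case: same number of people like you, different totals.
small, big = (1, 4), (1, 99)
print(ssa_weight(*small), ssa_weight(*big))  # 0.2 vs 0.01: SSA favours the small world
print(sia_weight(*small), sia_weight(*big))  # 1 vs 1: SIA exactly cancels that shift

# Vertical case: same number of other people, different numbers of people like you.
few, many = (1, 10), (50, 10)
print(ssa_weight(*few), ssa_weight(*many))   # SSA favours the many-like-you world...
print(sia_weight(*few), sia_weight(*many))   # ...and SIA favours it even more strongly
```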

\n

The second type of situation is relatively uncommon, because you will tend to know more about the current population than the future population. However cases in between the two extremes are not so rare. We are uncertain about creatures at about our level of technology on other planets for instance, and also uncertain about creatures at some future levels.

\n

This means the Great Filter scenario I have written about is an in-between scenario, which is why the SIA shift doesn’t cancel the SSA Doomsday Argument there, but rather makes it stronger.

\n

Expanded from p32 of my thesis.

\n

——————————————-
\n*or observers you might be vs. those you are not for instance – the reference class may be anything, but that is unnecessarily complicated for the point here.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "ep6tiXJpHxR8ofWs6", "title": "The Strong Occam's Razor", "pageUrl": "https://www.lesswrong.com/posts/ep6tiXJpHxR8ofWs6/the-strong-occam-s-razor", "postedAt": "2010-11-11T17:28:21.338Z", "baseScore": 17, "voteCount": 27, "commentCount": 74, "url": null, "contents": { "documentId": "ep6tiXJpHxR8ofWs6", "html": "

This post is a summary of the different positions expressed in the comments to my previous post and elsewhere on LW. The central issue turned out to be assigning \"probabilities\" to individual theories within an equivalence class of theories that yield identical predictions. Presumably we must prefer shorter theories to their longer versions even when they are equivalent. For example, is \"physics as we know it\" more probable than \"Odin created physics as we know it\"? Is the Hamiltonian formulation of classical mechanics apriori more probable than the Lagrangian formulation? Is the definition of reals via Dedekind cuts \"truer\" than the definition via binary expansions? And are these all really the same question in disguise?

\n

One attractive answer, given by shokwave, says that our intuitive concept of \"complexity penalty\" for theories is really an incomplete formalization of \"conjunction penalty\". Theories that require additional premises are less likely to be true, according to the eternal laws of probability. Adding premises like \"Odin created everything\" makes a theory less probable and also happens to make it longer; this is the entire reason why we intuitively agree with Occam's Razor in penalizing longer theories. Unfortunately, this answer seems to be based on a concept of \"truth\" granted from above - but what do differing degrees of truth actually mean, when two theories make exactly the same predictions?

\n

Another intriguing answer came from JGWeissman. Apparently, as we learn new physics, we tend to discard inconvenient versions of old formalisms. So electromagnetic potentials turn out to be \"more true\" than electromagnetic fields because they carry over to quantum mechanics much better. I like this answer because it seems to be very well-informed! But what shall we do after we discover all of physics, and still have multiple equivalent formalisms - do we have any reason to believe simplicity will still work as a deciding factor? And the question remains, which definition of real numbers is \"correct\" after all?

\n

Eliezer, bless him, decided to take a more naive view. He merely pointed out that our intuitive concept of \"truth\" does seem to distinguish between \"physics\" and \"God created physics\", so if our current formalization of \"truth\" fails to tell them apart, the flaw lies with the formalism rather than with us. I have a lot of sympathy for this answer as well, but it looks rather like a mystery to be solved. I never expected to become entangled in a controversy over the notion of truth on LW, of all places!

\n

A final and most intriguing answer of all came from saturn, who alluded to a position held by Eliezer and sharpened by Nesov. After thinking it over for awhile, I generated a good contender for the most confused argument ever expressed on LW. Namely, I'm going to completely ignore the is-ought distinction and use morality to prove the \"strong\" version of Occam's Razor - that shorter theories are more \"likely\" than equivalent longer versions. You ready? Here goes:

\n

Imagine you have the option to put a human being in a sealed box where they will be tortured for 50 years and then incinerated. No observational evidence will ever leave the box. (For added certainty, fling the box away at near lightspeed and let the expansion of the universe ensure that you can never reach it.) Now consider the following physical theory: as soon as you seal the box, our laws of physics will make a localized exception and the victim will spontaneously vanish from the box. This theory makes exactly the same observational predictions as your current best theory of physics, so it lies in the same equivalence class and you should give it the same credence. If you're still reluctant to push the button, it looks like you already are a believer in the \"strong Occam's Razor\" saying simpler theories without local exceptions are \"more true\". QED.

\n

It's not clear what, if anything, the above argument proves. It probably has no consequences in reality, because no matter how seductive it sounds, skipping over the is-ought distinction is not permitted. But it makes for a nice koan to meditate on weird matters like \"probability as preference\" (due to Nesov and Wei Dai) and other mysteries we haven't solved yet.

\n

ETA: Hal Finney pointed out that the UDT approach - assuming that you live in many branches of the \"Solomonoff multiverse\" at once, weighted by simplicity, and reducing everything to decision problems in the obvious way - dissolves our mystery nicely and logically, at the cost of abandoning approximate concepts like \"truth\" and \"degree of belief\". It agrees with our intuition in advising you to avoid torturing people in closed boxes, and more generally in all questions about moral consequences of the \"implied invisible\". And it nicely skips over all the tangled issues of \"actual\" vs \"potential\" predictions, etc. I'm a little embarrassed at not having noticed the connection earlier. Now can we find any other good solutions, or is Wei's idea the only game in town?

" } }, { "_id": "iaYzgw2iwHAxRynbd", "title": "How to pick your categories", "pageUrl": "https://www.lesswrong.com/posts/iaYzgw2iwHAxRynbd/how-to-pick-your-categories", "postedAt": "2010-11-11T15:13:58.511Z", "baseScore": 78, "voteCount": 61, "commentCount": 22, "url": null, "contents": { "documentId": "iaYzgw2iwHAxRynbd", "html": "

Note: this is intended to be a friendly math post, so apologies to anyone for whom this is all old hat.  I'm deliberately staying elementary for the benefit of people who are new to the ideas.  There are no proofs: this is long enough as it is.

\n

Related: Where to Draw the Boundary, The Cluster Structure of Thingspace, Disguised Queries.

\n

Here's a rather deep problem in philosophy: how do we come up with categories?  What's the difference between a horror movie and a science fiction movie?  Or the difference between a bird and a mammal? Are there such things as \"natural kinds,\" or are all such ideas arbitrary?  

\n

We can frame this in a slightly more mathematical way as follows.  Objects in real life (animals, moving pictures, etc.) are enormously complicated and have many features and properties.  You can think of this as a very high dimensional space, one dimension for each property, and each object having a value corresponding to each property.  A grayscale picture, for example, has a color value for each pixel.  A text document has a count for every word (the word \"flamingo\" might have been used 7 times, for instance.)  A multiple-choice questionnaire has an answer for each question.  Each object is a point in a high-dimensional featurespace.  To identify which objects are similar to each other, we want to identify how close points are in featurespace.  For example, two pictures that only differ at one pixel should turn out to be similar.

\n

We could then start to form categories if the objects form empirical clusters in featurespace.  If some animals have wings and hollow bones and feathers, and some animals have none of those things but give milk and bear live young, it makes sense to distinguish birds from mammals.  If empirical clusters actually exist, then there's nothing arbitrary about the choice of categories -- the categories are appropriate to the data!

\n

There are a number of mathematical techniques for assigning categories; all of them are basically attacking the same problem, and in principle should all agree with each other and identify the \"right\" categories.  But in practice they have different strengths and weaknesses, in computational efficiency, robustness to noise, and ability to classify accurately.  This field is incredibly useful -- this is how computers do image and speech recognition, this is how natural language processing works, this is how they sequence your DNA. It also, I hope, will yield insights into how people think and perceive.

\n

Clustering techniques

\n

These techniques attempt to directly find clusters in observations.  A common example is the K-means algorithm.  The goal here is, given a set of observations x1...xn, to partition them into k sets so as to minimize the within-cluster sum of squared differences:

\n

argmin_S ∑_{i=1}^{k} ∑_{x ∈ S_i} ||x - m_i||^2

\n

where m_i is the mean (centroid) of the points in cluster S_i.

\n

The standard algorithm is to pick k means randomly, anywhere, assign points to the cluster with the closest mean, and then assign the new mean of each cluster to be the centroid (average) of the points in the cluster.  Then we iterate again, possibly assigning different points to different clusters with each iterative step.  This is usually very fast but can be slow in the worst-case scenario.
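
For concreteness, here is a minimal sketch of that standard algorithm (my own illustration, assuming numpy; it initializes the means at k randomly chosen data points, which is one common choice among several):

```python
import numpy as np

def k_means(points, k, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize the k means at randomly chosen data points.
    means = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iters):
        # Assign each point to the cluster whose mean is closest.
        dists = np.linalg.norm(points[:, None, :] - means[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each mean to the centroid of the points assigned to it.
        new_means = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                              else means[j] for j in range(k)])
        if np.allclose(new_means, means):
            break
        means = new_means
    return labels, means

# Two obvious clusters in a 2-D featurespace:
data = np.array([[0.0, 0.1], [0.2, -0.1], [5.0, 5.1], [5.2, 4.9]])
labels, means = k_means(data, k=2)
print(labels, means)
```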

\n

There are many, many clustering algorithms.  They vary in the choice of distance metric (it doesn't have to be Euclidean, we could take taxicab distances or Hamming distances or something else).  There's also something called hierarchical clustering, which outputs a tree of clusters.

\n

Linear dimensionality reduction techniques

\n

Here's another way to think about this problem: perhaps, of all the possible features that could distinguish two objects, most of the variation is in only a few features.  You have a high-dimensional feature space, but in fact all the points are lying in a much lower-dimensional space.  (Maybe, for instance, once you've identified what color a flower is, how big it is, and how many petals it has, you've almost completely identified the flower.)  We'd like to know which coordinate axes explain the data well.

\n

There are a number of methods for doing this -- I'll mention a classic one, singular value decomposition (SVD).  For any m x n matrix M, we have a factorization of the form

\n

M = U Σ V*

\n

where U and V are orthogonal and Σ is diagonal.  The columns of U are the eigenvectors of MM*, the columns of V are the eigenvectors of M*M, and the elements of Σ (called singular values) are the square roots of the (nonzero) eigenvalues of M*M and MM*.  

\n

Now, if we want a low-rank approximation of M (that is, every point in the approximation lies on a low-dimensional hyperplane), all we have to do is truncate Σ to keep only the largest k singular values, giving Σ'. Intuitively, these are the dimensions that account for most of the variation in the data matrix M.  The approximate matrix M' = U Σ' V* can be shown to be the closest possible rank-k approximation to M.

\n

In other words, if you have high-dimensional data, and you suspect that only two coordinates explain all the data -- maybe you have a questionnaire, and you think that age and sex explain all the patterns in answering -- something like SVD can identify which those coordinates are. In a sense, dimensionality reduction can identify the factors that are worth looking at.
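
As a small sketch of how this looks in practice (my own example, assuming numpy; the two underlying factors are simulated stand-ins for something like age and sex, not real questionnaire data):

```python
import numpy as np

def rank_k_approximation(M, k):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    # Keep only the k largest singular values (the truncated Σ') and rebuild M.
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
factors = rng.normal(size=(100, 2))        # two hidden factors per respondent
loadings = rng.normal(size=(2, 20))        # how each of 20 questions depends on them
M = factors @ loadings + 0.01 * rng.normal(size=(100, 20))   # plus a little noise

M2 = rank_k_approximation(M, 2)
print(np.linalg.norm(M - M2))              # small compared to ||M||: rank 2 suffices
```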

\n

Nonlinear dimensionality reduction techniques

\n

An algorithm like SVD is linear -- the approximation it spits out lies entirely on a vector subspace of your original vector space (a line, a plane, a hyperplane of some dimension.)  Sometimes, though, that's a bad idea.  What if you have a cloud of data that actually lies on a circle?  Or some other curvy shape?  SVD will get it wrong.

\n

One interesting tweak on this process is manifold learning -- if we suspect that the data lies on a low-dimensional but possibly curvy shape, we try to identify the manifold, just as in SVD we tried to identify the subspace.  There are a lot of algorithms for doing this.  One of my favorites is the Laplacian eigenmap.  

\n

Here, we look at each data point as a node on a graph; if we have lots of data (and we usually do) the graph is a sort of mesh approximation for the smooth manifold it lies on.  We construct a sparse, weighted adjacency matrix: it's N x N, where N is the number of data points.  Matrix elements are zero if they correspond to two points that are far apart, but if they correspond to nearby points we put the heat kernel e^(-||x-y||^2).  Then we look at the eigenvectors of this matrix.  We use the top eigenvectors as coordinates to embed into Euclidean space. The reason this works, roughly, is that we're approximating a Laplace operator on the manifold: two points are close together if a diffusion moving along the graph would travel between them quickly.  It's a way of mapping the graph into a lower-dimensional space such that points that are close on the graph are close in the embedding.
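
Here is a rough sketch of that construction (my own illustration, assuming numpy). The description above speaks of the top eigenvectors of the weight matrix; the closely related and more standard formulation below takes the eigenvectors of the graph Laplacian L = D - W with the smallest nonzero eigenvalues:

```python
import numpy as np

def laplacian_eigenmap(points, n_components=1, epsilon=0.3):
    # Weighted adjacency: heat kernel e^(-||x-y||^2) for nearby points, zero otherwise.
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=2)
    W = np.where(d2 <= epsilon ** 2, np.exp(-d2), 0.0)
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    L = D - W
    vals, vecs = np.linalg.eigh(L)                 # eigenvalues in ascending order
    # Skip the trivial constant eigenvector (eigenvalue ~ 0); use the next ones.
    return vecs[:, 1:1 + n_components]

# Points on a circle: a 1-D manifold embedded in a 2-D featurespace.
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
embedding = laplacian_eigenmap(circle, n_components=1)
print(embedding.shape)                             # (60, 1)
```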

\n

Good nonlinear dimensionality reduction techniques can identify data that lies on curvy shapes, like circles and spirals and the \"swiss roll\" (a rolled-up plane) much better than linear dimensionality reduction techniques.

\n

 

\n

What's the moral?

\n

Once upon a time, the search engine Yahoo tried to categorize all the sites on the web according to a pre-set classification system.  Science, sports, entertainment, and so on.  It was phenomenally unpopular.  The content that grew online often didn't fit the categories.  People didn't want to surf the internet with the equivalent of the Dewey Decimal System.  

\n

These days, to some degree, we know better.  Amazon.com doesn't recommend books based on pre-set categories, giving you a horror book if you liked a horror book before.  It recommends a book based on the choices of other customers who liked the same books you like. Evidently, they have a big adjacency matrix somewhere, one column for every customer and one row for every purchase, and quite possibly they're running some sort of a graph diffusion on it.  They let the categories emerge organically from the data.  If a new genre is emerging, they don't have to scurry around trying to add a new label for it; it'll show up automatically.

\n

This suggests a sort of rationalist discipline: categories should always be organic.  Humans like to categorize, and categories can be very useful.  But not every set of categories is efficient at describing the variety of actual observations.  Biologists used to have a kingdom called Monera, consisting of all the single-celled organisms; it was a horrible grab bag, because single-celled organisms are very different from each other genetically.  After analyzing genomes, they decided there were actually three domains, Bacteria, Archaea, and Eukarya (the only one which includes multicellular life.)  In a real, non-arbitrary way, this is a better kind of categorization, and Monera was a bad category. 

\n

Sometimes it seems that researchers don't always pay attention to the problem of choosing categories and axes well.  For example, I once saw a study of autism that did the following: created a questionnaire that rated the user's \"empathizing\" and \"systematizing\" qualities, found that autistics were less \"empathizing\" and more \"systematizing\" than non-autistics, and concluded that autism was defined by more systematizing and less empathizing.  This is a classic example of privileging one axis over others -- what if autistics and non-autistics also differ in some other way? How do you know that you've chosen an efficient way to define that category?  Wouldn't you have to go look?

\n

If you say \"There are two kinds of people in the world,\" but if you look around and lots of people don't fit your binary, then maybe your binary is bad.  If you want to know whether your set of categories is good, go look -- see if the data actually clusters that way.  There's still a lot of debate about which mathematical techniques are best for defining categories, but it is a field where science has actually made progress. It's not all arbitrary.  When you play Twenty Questions with the universe, some questions are more useful than others.

\n

 

\n

References:

\n

Wikipedia is generally very good on this subject, and the wiki page on singular value decomposition contains the proof that it actually works.

\n

This paper by Mikhail Belkin and Partha Niyogi does a much better job of explaining Laplacian eigenmaps. Some other nonlinear dimensionality reduction techniques: Isomap, Locally Linear Embedding,  and diffusion maps.

" } }, { "_id": "Ssh5zWwNqfXuxWGHi", "title": "History of Manga (Wikipedia link)", "pageUrl": "https://www.lesswrong.com/posts/Ssh5zWwNqfXuxWGHi/history-of-manga-wikipedia-link", "postedAt": "2010-11-11T11:43:54.907Z", "baseScore": -12, "voteCount": 10, "commentCount": 15, "url": null, "contents": { "documentId": "Ssh5zWwNqfXuxWGHi", "html": "

http://en.wikipedia.org/wiki/History_of_manga

\n

Thanks for the link, grouchymusicologist.

" } }, { "_id": "ZaBG4t2HXwfPL45CL", "title": "[LINK] Creationism = High Carb? Or, The Devil Does Atkins", "pageUrl": "https://www.lesswrong.com/posts/ZaBG4t2HXwfPL45CL/link-creationism-high-carb-or-the-devil-does-atkins", "postedAt": "2010-11-11T03:41:06.707Z", "baseScore": 4, "voteCount": 5, "commentCount": 10, "url": null, "contents": { "documentId": "ZaBG4t2HXwfPL45CL", "html": "

Based on the community's continuing interests in diet and religion, I'd like to point out this blog post by the coauthor of Protein Power, Michael Eades, wherein he suggests that biblical literalism tends toward a low-fat approach to nutrition over a low-carb philosophy, by essentially throwing out a bunch of evidence on the matter:

\n
\n

Why, you might ask, is this scientist so obdurate in the face of all the evidence that’s out there?  Perhaps because much of the evidence isn’t in accord with his religious beliefs.  I try never to mention a person’s religious faith, but when it impacts his scientific thinking it at least needs to be made known.  Unless he’s changed his thinking recently, Dr. Eckel apparently is one of the few academic scientists who are literal interpreters of the bible.  I assume this because Dr. Eckel serves on the technical advisory board of the Institution for Creation Research, an organization that believes that not only is the earth only a few thousand years old , but that the entire universe in only a few thousand years old.  And they believe that man was basically hand formed by God on the sixth day of creation.  And Dr. Eckel’s own writings on the subject appear to confirm his beliefs

\n

[.....]

\n

Of all the evidence that exists, I think the evolutionary/natural selection data and the anthropological data are the most compelling because they provide the largest amount of evidence over the longest time.  To Dr. Eckel, however, these data aren’t applicable because in his worldview prehistoric man didn’t exist and therefore wasn’t available to be molded by the forces of natural selection.  I haven’t a clue as to what he thinks the fossil remains of early humans really were or where they came from.  Perhaps he believes – as I once had it explained to me by a religious fundamentalist – these fossilized remains of dinosaurs, extinct ancient birds and mammals and prehistoric man were carefully buried by the devil to snare the unwary and the unbeliever.  If this is the case, I guess I’ll have to consider myself snared.

\n

In Dr. Eckel’s view, man was created post agriculturally.  In fact, in his view, there was never an pre-agricultural era, so how could man have failed to adapt to agriculture?

\n

 – Rooting out more anti-low-carb bias

\n
\n

While there's a clear persuasive agenda here and I won't present a full analysis of the situation, Eades also mentions biasing use of language earlier in the article. In particular, beware applause lights and confirmation bias when evaluating it.

" } }, { "_id": "dfAEfCQspuMLJdsZJ", "title": "Pet Cryonics", "pageUrl": "https://www.lesswrong.com/posts/dfAEfCQspuMLJdsZJ/pet-cryonics", "postedAt": "2010-11-11T00:13:18.011Z", "baseScore": 9, "voteCount": 7, "commentCount": 18, "url": null, "contents": { "documentId": "dfAEfCQspuMLJdsZJ", "html": "

Open discussion.

\n

I think my dog is about to die. Even if I thought it was worth it I don't have the money to freeze her. But I am curious to know how people here feel about the practice and whether anyone plans to do this for their pet. It seems like a practice that plays into the image of cryonics as the domain of strange and egotistical rich people. On the other hand it also seems like a rather human and heart warming practice. Is pet cryopreservation good for the image of cryonics?

\n

Also, do people who just do neuro get their pets preserved? Will people upload pets? Assuming life as an emulation feels different from life as a biological organism, is it ethical to upload animals? The transition might be strange and uncomfortable but we expect at least some humans to take the risk and live with any differences. But animals don't understand this and might not have the mental flexibility to adjust.

" } }, { "_id": "5vDDHT2jaiRDay4MR", "title": "SIA says AI is no big threat", "pageUrl": "https://www.lesswrong.com/posts/5vDDHT2jaiRDay4MR/sia-says-ai-is-no-big-threat", "postedAt": "2010-11-10T22:00:32.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "5vDDHT2jaiRDay4MR", "html": "

Artificial Intelligence could explode in power and leave the direct control of humans in the next century or so. It may then move on to optimize the reachable universe to its goals. Some think this sequence of events likely.

\n

If this occurred, it would constitute an instance of our star passing the entire Great Filter. If we should cause such an intelligence explosion, then we are the first civilization in roughly the past light cone to be in such a position. If anyone else had been in this position, our part of the universe would already be optimized, which it arguably doesn’t appear to be. This means that if there is a big (optimizing much of the reachable universe) AI explosion in our future, the entire strength of the Great Filter is in steps before us.

\n

This means a big AI explosion is less likely after considering the strength of the Great Filter, and much less likely if one uses the Self Indication Assumption (SIA).

\n

The large minimum total filter strength contained in the Great Filter is evidence for larger filters in the past and in the future. This is evidence against the big AI explosion scenario, which requires that the future filter is tiny.

\n

SIA implies that we are unlikely to give rise to an intelligence explosion for similar reasons, but probably much more strongly. As I pointed out before, SIA says that future filters are much more likely to be large than small. This is easy to see in the case of AI explosions. Recall that SIA increases the chances  of hypotheses where there are more people in our present situation. If we precede an AI explosion, there is only one civilization in our situation, rather than potentially many if we do not. Thus the AI hypothesis is disfavored (by a factor the size of the extra filter it requires before us).

\n

What the Self Sampling Assumption (SSA), an alternative principle to SIA, says depends on the reference class. If the reference class includes AIs, then we should strongly not anticipate such an AI explosion. If it does not, then we strongly should (by the doomsday argument). These are both basically due to the Doomsday Argument.

\n

In summary, if you begin with some uncertainty about whether we precede an AI explosion, then updating on the observed large total filter and accepting SIA should make you much less confident in that outcome. The Great Filter and SIA don’t just mean that we are less likely to peacefully colonize space than we thought, they also mean we are less likely to horribly colonize it, via an unfriendly AI explosion.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "MvGupsTJ2FDJPKmiB", "title": "What does it mean to optimize future?", "pageUrl": "https://www.lesswrong.com/posts/MvGupsTJ2FDJPKmiB/what-does-it-mean-to-optimize-future", "postedAt": "2010-11-10T15:43:23.326Z", "baseScore": -3, "voteCount": 6, "commentCount": 8, "url": null, "contents": { "documentId": "MvGupsTJ2FDJPKmiB", "html": "

Preamble

\n

Value deathism by Vladimir Nesov encourages us to fix our values to prevent astronomical waste due to an under-optimized future.

\n

When I read it, I found myself thinking about the units of measurement of the mentioned astronomical waste. Utilons? Seems so. [edit] Jack suggested the widely accepted word Utils instead.[/edit]

\n

I've tried to define it precisely. It is the difference between the utility of some world-state G as measured by the original (drifting) agent and the utility of world-state G as measured by an undrifted version of the original agent, where world-state G is optimal according to the original (drifting) agent.

\n

There are two questions: can we compare the utilities of those agents, and what does it mean for G to be optimal?

\n

Question

\n

Preconditions: the world is deterministic; the agent has full knowledge of the world, i.e. it knows the current world-state, the full list of actions available in every world-state, and the consequence of each action (the world-state it leads to); and the agent has no time limit for computing its next action.

\n

The agent's value is defined as a function from the set of world-states to the real numbers; for the sake of, uhm, clarity, the bigger the better. (Note: it is unnecessary to define value as a function from the set of sequences of world-states, as the history of the world can be deduced from the world-state itself, and if it can't be deduced, then the agent can't use history anyway, as the agent is a part of this world-state, so it doesn't \"remember\" history either.) [edit] I wasn't aware that this note includes a hidden assumption: the value of a world-state must be constant. But this assumption doesn't allow the agent to single out world-states where the agent loses all or part of its memory. Thus value as a function over sequences of world-states has a right to exist. But this value function still needs to be specifically shaped to be optimization-algorithm independent. [/edit]

\n

Which sequence of world-states is optimal according to the agent's value?

\n

 

\n

Edit: Consider agents implementing a greedy search algorithm and an exhaustive search algorithm. For them to choose the same sequence of world-states, the search space should be a greedoid, and that requires a very specific structure of the value function.
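
A minimal sketch of that contrast (my own toy example; the particular graph and values are invented):

```python
# A tiny deterministic world: each state maps to the states reachable by one action,
# and the value function maps states to real numbers.
world = {
    "start": ["a", "b"],
    "a": ["a_end"],
    "b": ["b_end"],
    "a_end": [],
    "b_end": [],
}
value = {"start": 0, "a": 5, "b": 1, "a_end": 6, "b_end": 10}

def greedy_path(state):
    path = [state]
    while world[state]:
        state = max(world[state], key=value.get)   # one step of lookahead only
        path.append(state)
    return path

def exhaustive_best_path(state):
    if not world[state]:
        return [state]
    best = max((exhaustive_best_path(s) for s in world[state]),
               key=lambda p: value[p[-1]])         # judge branches by their final state
    return [state] + best

print(greedy_path("start"))            # ['start', 'a', 'a_end']: ends at value 6
print(exhaustive_best_path("start"))   # ['start', 'b', 'b_end']: ends at value 10
```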

\n

Edit2: Alternatively, the value function can be indirectly self-referential via the part of the world-state that contains the agent, thus allowing it to modify the agent's optimization algorithm by assigning higher utility to world-states where the agent implements the desired optimization algorithm. (I call the agent's function a 'value function' because its meaning can be defined by the function itself; it isn't necessarily a utility.)

\n

 

\n

My answer:

\n

Jura inyhr shapgvba bs gur ntrag vfa'g ersyrpgvir, v.r. qbrfa'g qrcraq ba vagrecergngvba bs n cneg bs jbeyq-fgngr bpphcvrq ol ntrag va grezf bs bcgvzvmngvba cebprff vzcyrzragrq ol guvf cneg bs jbeyq-fgngr, gura bcgvzny frdhrapr qrcraqf ba pbzovangvba bs qrgnvyf bs vzcyrzragngvba bs ntrag'f bcgvzvmngvba nytbevguz naq inyhr shapgvba. V guvax va trareny vg jvyy rkuvovg SBBZ orunivbe.

\n

Ohg jura inyhr shapgvba vf ersyrpgvir gura guvatf orpbzr zhpu zber vagrerfgvat.

\n

 

\n

Edit3:

\n

Implications

\n

I'll try to analyse the behavior of a classical paperclip maximizer, using the toy model I described earlier. Let the utility function be min(number_of_paperclips_produced, 50).

\n

1. The paperclip maximizer implements a greedy search algorithm. If it can't produce a paperclip (all available actions lead to the same utility), it performs an action that depends on the implementation of the greedy search. All in all it acts erratically until it happens to be terminated (it stumbles into a world-state where no actions are available to it).

\n

2. The paperclip maximizer implements a full-search algorithm. The result depends on the implementation of the full search. If the implementation executes the shortest sequence of actions that leads to the globally maximal value of the utility function, then it produces 50 paperclips as fast as it can [edit] or it wireheads itself into a state where its paperclip counter > 50, whichever is faster [/edit], then terminates itself. If the implementation executes the longest possible sequence of actions that leads to the globally maximal value of the utility function, then the agent behaves erratically but is guaranteed to survive while its optimization algorithm behaves according to the original plan; however, it will at some point modify itself and get terminated, as the original plan doesn't care about preserving the agent's optimization algorithm or utility function.

\n

It seems that in the full-knowledge case powerful optimization processes don't go FOOM. A full-search algorithm is maximally powerful, isn't it?

\n

Maybe it is uncertainty that leads to FOOMing? 

\n

Indexical uncertainty can be represented by the assumption that the agent knows the set of world-states it could be in, and the set of available actions for the world-state it is actually in. I'll try to analyze this case later.

\n

Edit4: Edit3 is wrong. The utility function in that toy model cannot be so simple if it uses some property of the agent. However, it seems OK to extend the model by including a high-level description of the agent's state in the world-state; then Edit3 holds.

\n

 

\n

 

" } }, { "_id": "KEXnvFepoY4pPxkNF", "title": "Public international law ", "pageUrl": "https://www.lesswrong.com/posts/KEXnvFepoY4pPxkNF/public-international-law", "postedAt": "2010-11-10T10:10:02.647Z", "baseScore": -16, "voteCount": 16, "commentCount": 9, "url": null, "contents": { "documentId": "KEXnvFepoY4pPxkNF", "html": "

http://en.wikipedia.org/wiki/Public_international_law

" } }, { "_id": "pzqAjeeD8BfQQcc9S", "title": "On the Human", "pageUrl": "https://www.lesswrong.com/posts/pzqAjeeD8BfQQcc9S/on-the-human", "postedAt": "2010-11-10T09:27:00.447Z", "baseScore": 9, "voteCount": 6, "commentCount": 1, "url": null, "contents": { "documentId": "pzqAjeeD8BfQQcc9S", "html": "

Just wanted to point you guys at On the Human, a site which focuses on understanding the science and philosophy of humanism.  There is often overlap between topics there and here at Less Wrong.  The Forum is where most of the articles are posted (basically in blog format).

\n

Apologies if everyone was already aware of them.

" } }, { "_id": "pKzShr6mGcReQwMHR", "title": "Description complexity: an apology and note on terminology", "pageUrl": "https://www.lesswrong.com/posts/pKzShr6mGcReQwMHR/description-complexity-an-apology-and-note-on-terminology", "postedAt": "2010-11-10T00:43:40.009Z", "baseScore": 24, "voteCount": 15, "commentCount": 14, "url": null, "contents": { "documentId": "pKzShr6mGcReQwMHR", "html": "

This post is an amendment and sort of apology for my LW posts dealing with complexity (1, 2). In these posts I got the terms all wrong. Now I'll try to set it right. Many thanks to taw, JGWeissman and Daniel_Burfoot.

\n

The Solomonoff prior works like this: every bit string is assigned a real number that's equal to the probability of a \"random program\" outputting that string and then halting, when the \"probability\" of choosing each program is assumed to be inverse-exponential in its length. (Alternatively, you talk about a \"universal prefix Turing machine\" that consumes fair coin flips on the input tape.)
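
Spelled out, with U the universal prefix machine, the definition above written as a formula is:

m(s) = Σ (over halting programs p with U(p) = s) 2^(-length(p))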

\n

In this setup, the words \"event\", \"hypothesis\", \"probability\", \"complexity\" etc. are all ambiguous. For example, the word \"hypothesis\" can mean a) an individual program, b) an equivalence class of programs that output the same bit string, or c) a statement about output bit strings, like \"the third bit will be 0\". The word \"event\" has exactly the same problem.

\n

Now the trouble is, you can give reasonable-sounding definitions of \"probability\" and \"Kolmogorov complexity\" to objects of all three types: (a), (b), and (c). But it's not at all clear what real-world implications these values should have. Does it make sense to talk about prior probability for objects of type (a), given that we can never distinguish two (a)'s that are part of the same (b)? (This is the mistake in saying that MWI has a higher prior probability than collapse interpretations.) Is it useful to talk about K-complexity for objects of type (c), given that a very long statement of type (c) can still have probability close to 1? (This is the mistake in saying Islam must be improbable because the Qur'an is such a thick book.) For that matter, is it at all useful to talk about K-complexity for objects of type (b) which are after all just bit strings, or should we restrict ourselves to talking about their prior probabilities for some reason?

\n

At the moment it seems uncontroversial that we can talk about K-complexity for objects of type (a) - let's call them \"programs\" - and talk about prior probability for objects of types (b) and (c) - let's call them \"sequences of observations\" and \"statements about the world\", respectively. Curiously, this means there's no object for which we have defined both \"complexity\" and \"prior probability\", which makes all arguments of the form \"complexity is high => probability is low\" automatically suspect.

\n

There's another worrying point. Given that \"programs\" get glued into equivalence classes - \"sequences of observations\" - anyway, it seems the K-complexity of an individual program is a completely inconsequential variable. You're better off just talking about the length. And the other two kinds of objects don't seem to have useful notions of K-complexity (according to my esteemed commentators who I thanked above), so we can adopt the general rule that mentioning K-complexity in a discussion of physics is always a sign of confusion :-)

\n

What do you say? Is this better? I'm honestly trying to get better!

" } }, { "_id": "Boy66DaYXRPrRvoYg", "title": "Simple friendliness: Plan B for AI", "pageUrl": "https://www.lesswrong.com/posts/Boy66DaYXRPrRvoYg/simple-friendliness-plan-b-for-ai-0", "postedAt": "2010-11-09T21:28:12.953Z", "baseScore": -21, "voteCount": 17, "commentCount": 4, "url": null, "contents": { "documentId": "Boy66DaYXRPrRvoYg", "html": "

Friendly AI, as Hanson believes, is doomed to failure: if the friendliness system is too complicated, other AI projects generally will not apply it. In addition, any system of friendliness may still be doomed to failure - and the more unclear it is, the more chances it has to fail. By fail I mean that it will not be accepted by the most successful AI project. Thus, the friendliness system should be simple and clear, so it can be spread as widely as possible. I have roughly figured out what principles could form the basis of a simple friendliness:

\n

1) Anyone should understand that AI can be a global risk and that a friendliness system is needed. This basic understanding should be shared by the maximum number of AI groups (I think this is already done)

\n

2) The architecture of the AI should be such that it uses rules explicitly. (I.e. no genetic algorithms or neural networks.)

\n

3) The AI should obey the commands of its creator, and clearly understand who the creator is and what the format of commands is.

\n

4) The AI must comply with all existing criminal and civil laws. These laws are the first attempt to create a friendly AI – in the form of the state. That is, an attempt to describe a good, safe human life using a system of rules (or a system of precedents). And the number of volumes of laws and their interpretations speaks to the complexity of this problem - but it has already been solved, and it is not a sin to use the solution.

\n

5) The AI should not have secrets from its creator. Moreover, it is obliged to inform the creator of all its thoughts. This prevents a rebellion of the AI.

\n

6) Each self-optimization of the AI should be dosed in portions, under the control of the creator. And after each step, a full scan of the system's goals and effectiveness must be run.

\n

7) The AI should be tested in a virtual environment (such as Second Life) for safety and adequacy.

\n

8) AI projects should be registered with a centralized oversight body and receive safety certification from it.

\n

Such obvious steps do not create an absolutely safe AI (you can figure out how to bypass them), but they make it much safer. In addition, they look quite natural and reasonable, so they could be used by any AI project, with variations. Most of these steps are fallible. But without them the situation would be even worse. If each step increases safety two times, 8 steps will increase it 2^8 = 256 times, which is good. Simple friendliness is plan B if mathematical FAI fails.

" } }, { "_id": "QdPvQEAMKJwJCCxQF", "title": "Cryoburn - Imperial Auditor Vorkosigan investigates the cryonics industry", "pageUrl": "https://www.lesswrong.com/posts/QdPvQEAMKJwJCCxQF/cryoburn-imperial-auditor-vorkosigan-investigates-the", "postedAt": "2010-11-09T21:16:42.808Z", "baseScore": 8, "voteCount": 6, "commentCount": 6, "url": null, "contents": { "documentId": "QdPvQEAMKJwJCCxQF", "html": "

I just completed \"Cryoburn\", Lois McMaster Bujold's latest novel in the Vorkosigan Saga series.  The subject is cryonics. 

\n

Our hero Miles is dispatched to the planet Kibou-daini to investigate the cryonics industry there.  A Kibou company is planning to expand its business to Komarr, and Miles's Komarr-born empress is suspicious of financial chicanery.  Miles himself has undergone freezing (military medical emergency) and more-or-less-successful revival, but the notion of geriatric cryonics is a topic not previously explored in Bujold's universe.  It is explored reasonably well here.

\n

Although some of the cryonics companies in this book are corrupt, Bujold's take on cryonics is mostly positive.  She is very much in sympathy with the human desire for immortality or at least serious life extension.  But she points out and explores some practical problems.  Revival is not a problem in this book.  That one is pretty much solved.  But cryonics is not a cure for old age, it is at best a way of allowing more time for a cure to be found.  Naq pbairavragyl, gur Qheban tebhc frrzf gb unir pbzr hc jvgu n erwhirangvba gerngzrag (jryy, abg rknpgyl erwhirangvba - zber yvxr er-zvqqyr-ntr-vsvpngvba).  Ohg gur gerngzrag vf abg lrg grfgrq, naq ZIX Ragrecevfrf vf pbafvqrevat gur chepunfr bs n jnerubhfr shyy bs sebmra cnhcref nf n cbgragvny fbhepr bs grfg fhowrpgf.

\n

Another problem explored is political.  On Kibou, people are generally frozen before death, so as to maximize the chances of a successful revival.  So, since they are not really dead, they still can own property (in trust) and they still have the right to vote (by proxy).  The cryonics companies hold those proxies and control those trusts and hence have political and economic control of the planet.  And of course, they don't particularly want to revive their clients and give up that control.

\n

And then there are technological problems.  Vg gheaf bhg gung bar oenaq bs pelbavp syhvq (oybbq-fhofgvghgr/nagvserrmr) juvpu jnf jvqryl hfrq n srj qrpnqrf ntb unf fbzr fgnovyvgl ceboyrzf.  Juvpu zrnaf gung n pbhcyr zvyyvba bs gur cynarg'f uhaqerqf bs zvyyvbaf bs sebmra pbecfrf ner arire tbvat gb jnxr hc.  Ubj qbrf gur pbzcnal vaibyirq qrny jvgu gur ceboyrz?  Uvag: ubj qvq Nzrevpna onaxf qrny jvgu gur zvyyvbaf bs zbegtntrf gurl uryq gung jbhyq arire or cnlrq bss?

\n

All in all, it is a reasonably fun read, but not one of Bujold's best.  I review it here only because of the local interest in cryonics.  I don't read a lot of SciFi other than Bujold's so I can't compare to how the subject has been treated elsewhere (well, I guess I could compare to Niven, but that was a whole different generation of SciFi.)

\n

There is another review and some more comments in another posting

\n

 

\n

 

" } }, { "_id": "WfoPaJ9nzadHD9aSD", "title": "Know thyself vs. know one another", "pageUrl": "https://www.lesswrong.com/posts/WfoPaJ9nzadHD9aSD/know-thyself-vs-know-one-another", "postedAt": "2010-11-09T20:40:02.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "WfoPaJ9nzadHD9aSD", "html": "

People often aspire to the ideal of honesty, implicitly including both honesty to themselves and honesty with others. Those who care about it a lot often aim to be as honest as they can bring themselves to be, across circumstances. If the aim is to get correct information to yourself and other people however, I think this approach isn’t the greatest.

\n

There is probably a trade-off between being honest with yourself and being honest with others, so trying hard to be honest with others comes at the cost of being honest with yourself, which in turn also prevents correct information from getting to others.

\n

Why would there be a trade off? Imagine your friend said, ‘I promise that anything you tell me I will repeat to anyone who asks’. How honest would you be with that friend? If you say to yourself that you will report your thoughts to others, why wouldn’t the same effect apply?

\n

Progress in forcing yourself to be honest with others must be something of an impediment to being honest with yourself. Being honest with yourself is presumably also a disincentive to your being honest with others later, but that is less of a cost, since if you are dishonest with yourself you are presumably deceiving them about those topics either way.

\n

For example imagine you are wondering what you really think of your friend Errol’s art. If you are committed to truthfully admitting whatever the answer is to Errol or your other friends, it will be pretty tempting to sincerely interpret whatever experience you are having as ‘liking Errol’s art’. This way both you and the others come off deceived. If you were committed to lying in such circumstances, you would at least have the freedom to find out the truth yourself. This seems like the superior option for the truth-loving honesty enthusiast.

\n

This argument relies on the assumptions that you can’t fully consciously control how deluded you are about the contents of your brain, and that the unconscious parts of your mind that control this respond to incentives. These things both seem true to me.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "kJzwgqzEb8u28eSjL", "title": "PhilPapers survey results now include correlations", "pageUrl": "https://www.lesswrong.com/posts/kJzwgqzEb8u28eSjL/philpapers-survey-results-now-include-correlations", "postedAt": "2010-11-09T19:15:47.251Z", "baseScore": 12, "voteCount": 7, "commentCount": 1, "url": null, "contents": { "documentId": "kJzwgqzEb8u28eSjL", "html": "

Now you can see how philosophical positions are correlated to each other and to some demographic variables:

\n

http://philpapers.org/surveys/linear_most.pl

" } }, { "_id": "tQ9iKGPxcNppgWuXD", "title": "Simple freindliness: plan B for AI", "pageUrl": "https://www.lesswrong.com/posts/tQ9iKGPxcNppgWuXD/simple-freindliness-plan-b-for-ai", "postedAt": "2010-11-09T19:11:48.281Z", "baseScore": -26, "voteCount": 20, "commentCount": 13, "url": null, "contents": { "documentId": "tQ9iKGPxcNppgWuXD", "html": "

\r\n

Simple friendliness

\r\n

Friendly AI, as Hanson believes, is doomed to failure, since if the friendliness system is too complicated, other AI projects generally will not apply it. In addition, any system of friendliness may still be doomed to failure - and the more unclear it is, the more chances it has to fail. By \"fail\" I mean that it will not be accepted by the most successful AI projects.

\r\n

Thus, the friendliness system should be simple and clear, so it can be spread as widely as possible.

\r\n

 

\r\n

I have roughly figured out what principles could form the basis of a simple friendliness:

\r\n

 

\r\n

0) Everyone should understand that AI can be a global risk and that a friendliness system is needed. This basic understanding should be shared by the maximum number of AI groups (I think this is already done).

\r\n

1) The architecture of the AI should be such that it uses rules explicitly (i.e. no genetic algorithms or neural networks).

\r\n

2) The AI should obey the commands of its creator, and clearly understand who the creator is and what the format of those commands is.

\r\n

3) The AI must comply with all existing criminal and civil laws. These laws are the first attempt to create a friendly AI – in the form of the state. That is, an attempt to describe a good, safe human life using a system of rules (or a system of precedents). The number of volumes of laws and their interpretations speaks to the complexity of this problem - but it has already been solved, and it is not a sin to use the solution.

\r\n

4) The AI should not have secrets from its creator. Moreover, it is obliged to inform the creator of all its thoughts. This prevents an AI rebellion.

\r\n

5) Each self-optimization of the AI should be dosed out in portions, under the control of the creator. After each step, a full scan of the system's goals and effectiveness must be run.

\r\n

6) The AI should be tested in a virtual environment (such as Second Life) for safety and adequacy.

\r\n

7) AI projects should be registered with centralized oversight bodies and receive safety certification from them.

\r\n

 

\r\n

 

\r\n

Such obvious steps do not create an absolutely safe AI (one can figure out ways to bypass them), but they make it much safer. In addition, they look quite natural and reasonable, so they could be used by any AI project, with variations.

\r\n

 

\r\n

Most of these steps are fallible. But without them the situation would be even worse. If each step increases safety two times, 8 steps will increase it 256 times, which is good. Simple friendliness is plan B if mathematical FAI fails.
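
To make the arithmetic explicit, here is a minimal sketch of the claim above (my framing, not the author's), assuming the eight measures act independently and each one halves the residual probability of failure:

```python
# Toy model of the multiplicative-safety claim: eight independent measures,
# each assumed to halve the remaining probability of failure.
p_failure = 1.0  # normalized baseline risk
for _ in range(8):
    p_failure /= 2
print(p_failure)  # 0.00390625 == 1/256, i.e. '256 times safer' under this assumption
```

The independence assumption is doing all the work here; if the measures fail in correlated ways, the factors no longer multiply.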

\r\n

" } }, { "_id": "h9sP6rLT6r4vaXoen", "title": "A note on the description complexity of physical theories", "pageUrl": "https://www.lesswrong.com/posts/h9sP6rLT6r4vaXoen/a-note-on-the-description-complexity-of-physical-theories", "postedAt": "2010-11-09T16:25:21.186Z", "baseScore": 28, "voteCount": 30, "commentCount": 184, "url": null, "contents": { "documentId": "h9sP6rLT6r4vaXoen", "html": "

Followup to: The prior of a hypothesis does not depend on its complexity

\n

Eliezer wrote:

\n
\n

In physics, you can get absolutely clear-cut issues.  Not in the sense that the issues are trivial to explain. [...] But when I say \"macroscopic decoherence is simpler than collapse\" it is actually strict simplicity; you could write the two hypotheses out as computer programs and count the lines of code.

\n
\n

Every once in a while I come across some belief in my mind that clearly originated from someone smart, like Eliezer, and stayed unexamined because after you hear and check 100 correct statements from someone, you're not about to check the 101st quite as thoroughly. The above quote is one of those beliefs. In this post I'll try to look at it closer and see what it really means.

\n

Imagine you have a physical theory, expressed as a computer program that generates predictions. A natural way to define the Kolmogorov complexity of that theory is to find the length of the shortest computer program that generates your program, as a string of bits. Under this very natural definition, the many-worlds interpretation of quantum mechanics is almost certainly simpler than the Copenhagen interpretation.

\n

But imagine you refactor your prediction-generating program and make it shorter; does this mean the physical theory has become simpler? Note that after some innocuous refactorings of a program expressing some physical theory in a recognizable form, you may end up with a program that expresses a different set of physical concepts. For example, if you take a program that calculates classical mechanics in the Lagrangian formalism, and apply multiple behavior-preserving changes, you may end up with a program whose internal structures look distinctly Hamiltonian.

\n

Therein lies the rub. Do we really want a definition of \"complexity of physical theories\" that tells apart theories making the same predictions? If our formalism says Hamiltonian mechanics has a higher prior probability than Lagrangian mechanics, which is demonstrably mathematically equivalent to it, something's gone horribly wrong somewhere. And do we even want to define \"complexity\" for physical theories that don't make any predictions at all, like \"glarble flargle\" or \"there's a cake just outside the universe\"?

\n

At this point, the required fix to our original definition should be obvious: cut out the middleman! Instead of finding the shortest algorithm that writes your algorithm for you, find the shortest algorithm that outputs the same predictions. This new definition has many desirable properties: it's invariant to refactorings, doesn't discriminate between equivalent formulations of classical mechanics, and refuses to specify a prior for something you can never ever test by observation. Clearly we're on the right track here, and the original definition was just an easy fixable mistake.
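
As a rough illustration of the difference between the two definitions (a toy sketch of my own; Kolmogorov complexity is uncomputable, so compressed length is used here only as a crude stand-in), compare the compressed size of two equivalent programs with the compressed size of the predictions they output:

```python
import zlib

def proxy_complexity(data: bytes) -> int:
    # Crude, computable stand-in for description length: compressed size in bytes.
    return len(zlib.compress(data, 9))

# Two 'refactorings' of the same toy theory: different source text, identical predictions.
source_a = b'def predict(n):\n    return [k * k for k in range(n)]\n'
source_b = b'def predict(n):\n    out = []\n    k = 0\n    while k < n:\n        out.append(k * k)\n        k += 1\n    return out\n'
predictions = str([k * k for k in range(1000)]).encode('ascii')

# The original definition tracks the program text, so equivalent formulations score differently...
print(proxy_complexity(source_a), proxy_complexity(source_b))
# ...while the amended definition scores the predictions, giving both formulations one shared number.
print(proxy_complexity(predictions))
```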

\n

But this easy fixable mistake... was the entire reason for Eliezer \"choosing Bayes over Science\" and urging us to do the same. The many-worlds interpretation makes the same testable predictions as the Copenhagen interpretation right now. Therefore by the amended definition of \"complexity\", by the right and proper definition, they are equally complex. The truth of the matter is not that they express different hypotheses with equal prior probability - it's that they express the same hypothesis. I'll be the first to agree that there are very good reasons to prefer the MWI formulation, like its pedagogical simplicity and beauty, but K-complexity is not one of them. And there may even be good reasons to pledge your allegiance to Bayes over the scientific method, but this is not one of them either.

\n

ETA: now I see that, while the post is kinda technically correct, it's horribly confused on some levels. See the comments by Daniel_Burfoot and JGWeissman. I'll write an explanation in the discussion area.

\n

ETA 2: done, look here.

" } }, { "_id": "RKRYGcdQFHQ78fe3x", "title": "Information Hazards", "pageUrl": "https://www.lesswrong.com/posts/RKRYGcdQFHQ78fe3x/information-hazards", "postedAt": "2010-11-09T13:53:32.934Z", "baseScore": 1, "voteCount": 9, "commentCount": 12, "url": null, "contents": { "documentId": "RKRYGcdQFHQ78fe3x", "html": "

Nick Bostrom recently posted the article \"Information Hazards\", which is about the myriad of ways in which information can harm us.

\n

You can read it at his website: Direct PDF Link

" } }, { "_id": "rebpQCJy4ZfajTfHu", "title": "Making the Universe Last Forever By Throwing Away Entropy Into Basement Universes?", "pageUrl": "https://www.lesswrong.com/posts/rebpQCJy4ZfajTfHu/making-the-universe-last-forever-by-throwing-away-entropy", "postedAt": "2010-11-09T12:03:09.418Z", "baseScore": -2, "voteCount": 9, "commentCount": 7, "url": null, "contents": { "documentId": "rebpQCJy4ZfajTfHu", "html": "

\n

Is there any reason why it wouldn’t work?

\n

" } }, { "_id": "2hXpTWs6p88SPeYB9", "title": "This Is How Bill and Hillary Discuss Dinner Options ", "pageUrl": "https://www.lesswrong.com/posts/2hXpTWs6p88SPeYB9/this-is-how-bill-and-hillary-discuss-dinner-options", "postedAt": "2010-11-09T09:08:13.898Z", "baseScore": -6, "voteCount": 9, "commentCount": 4, "url": null, "contents": { "documentId": "2hXpTWs6p88SPeYB9", "html": "

http://nymag.com/daily/intel/2010/11/this_is_how_bill_and_hillary_d.html?imw=Y&f=most-viewed-24h10

" } }, { "_id": "SGGLsH3zpvcE58pfi", "title": "Bye Bye Benton: November Less Wrong Meetup", "pageUrl": "https://www.lesswrong.com/posts/SGGLsH3zpvcE58pfi/bye-bye-benton-november-less-wrong-meetup", "postedAt": "2010-11-09T05:35:05.641Z", "baseScore": 12, "voteCount": 8, "commentCount": 4, "url": null, "contents": { "documentId": "SGGLsH3zpvcE58pfi", "html": "

As some of you may know, SIAI is in the process of moving our Visiting Fellows Program to a larger and more permanent location in Berkeley.  Nothing is final yet but, however things turn out, November will be the last month we spend at 3755 Benton street.  In honor of the house's proud history, we'll be throwing one final Less Wrong meetup this Saturday, the 13th of November, starting at 6pm.  Come meet the SingInst staff, the visiting fellows and your fellow Less Wrong readers for one final party in Santa Clara!

\n

As usual, food and drink shall be provided.

\n

Please RSVP at the meetup.com page if you plan to attend.

" } }, { "_id": "oddedGoQ6AwhHFx5A", "title": "Recent results on lower bounds in circuit complexity. ", "pageUrl": "https://www.lesswrong.com/posts/oddedGoQ6AwhHFx5A/recent-results-on-lower-bounds-in-circuit-complexity", "postedAt": "2010-11-09T05:02:50.842Z", "baseScore": 11, "voteCount": 8, "commentCount": 0, "url": null, "contents": { "documentId": "oddedGoQ6AwhHFx5A", "html": "

There's a new paper which substantially improves lower bounds for circuit complexity. The paper, by Ryan Williams, proves that NEXP does not have ACC circuits of third-exponential size.

\n

This is a somewhat technical result (and I haven't read the proof yet), but there's a summary of what this implies at Scott Aaronson's blog. The main upshot is that this is a substantial improvement over prior circuit complexity bounds. This is relevant since circuit complexity bounds look to be one of the most promising methods for potentially showing that P != NP. Even with these results, circuit complexity bounds remain very far from showing that. But this result looks like it might, in some ways, get around the relativization and natural proofs barriers, which are major obstacles to resolving P ?= NP.

" } }, { "_id": "wbaEnHPKDMCL7cdSz", "title": "A hypothetical candidate walks into a hypothetical job interview...", "pageUrl": "https://www.lesswrong.com/posts/wbaEnHPKDMCL7cdSz/a-hypothetical-candidate-walks-into-a-hypothetical-job", "postedAt": "2010-11-09T04:13:55.592Z", "baseScore": 11, "voteCount": 19, "commentCount": 66, "url": null, "contents": { "documentId": "wbaEnHPKDMCL7cdSz", "html": "

\n

Let's say you are interviewing a candidate for a job. In casual conversation, the candidate mentions that he is a member of a rather old and prestigious country club. You've never heard the name of the club before. 

\n

You look up the country club afterwards, and are surprised by what you read. The club refuses membership to homosexuals. It revokes the membership of couples who use birth control. Leadership positions are reserved to unmarried males.

\n

The candidate is otherwise competent. Under what conditions would you hire him? Would you want a law passed banning hiring discrimination based on country club membership?

\n

 

\n

(The country club is analogous to a nicer version of the Catholic church. I left out a couple bad things.)

\n

 

\n

Religious discrimination is illegal in many parts of the world, and I think that's probably a good thing. Still, keeping this at the object level (no meta-rules or veils of ignorance) it seems to me that discriminating against religious people is fine. I'm curious what other people think. 

\n

" } }, { "_id": "CjFcZ43g7QroEAb4B", "title": "The value of preserving reality", "pageUrl": "https://www.lesswrong.com/posts/CjFcZ43g7QroEAb4B/the-value-of-preserving-reality", "postedAt": "2010-11-08T23:51:17.130Z", "baseScore": -1, "voteCount": 4, "commentCount": 13, "url": null, "contents": { "documentId": "CjFcZ43g7QroEAb4B", "html": "

A comment to http://singinst.org/blog/2010/10/27/presentation-by-joshua-foxcarl-shulman-at-ecap-2010-super-intelligence-does-not-imply-benevolence/: Given that, in the naive reinforcement learning framework (which can approximate some more complex notions of value), the value is in the environment, you don't want to be too hasty with the environment, lest you destroy a higher value you haven't yet discovered! So you especially wouldn't replace high-complexity systems like humans with low-entropy systems like computer chips without first analyzing them.

\n

 

" } }, { "_id": "5jmkphsm5bhWnEu9M", "title": "Chicago Meetup 11/14", "pageUrl": "https://www.lesswrong.com/posts/5jmkphsm5bhWnEu9M/chicago-meetup-11-14", "postedAt": "2010-11-08T23:30:49.015Z", "baseScore": 12, "voteCount": 9, "commentCount": 7, "url": null, "contents": { "documentId": "5jmkphsm5bhWnEu9M", "html": "

Airedale and I will host a meetup this Sunday, starting 5 pm, in the Elephant & Castle Pub and Restaurant on 111 West Adams Street. We'll put up a sign saying \"LessWrong\".

\n

We're open to changing the time or venue, so check back here to be sure, or join our Google Group for future updates. Having the meetup in the Loop seemed the best compromise, but we haven't tried this particular venue before and maybe someone has a better idea.

" } }, { "_id": "Ys8oHYgdGmma5KuJT", "title": "Twinkie diet helps nutrition professor lose 27 pounds", "pageUrl": "https://www.lesswrong.com/posts/Ys8oHYgdGmma5KuJT/twinkie-diet-helps-nutrition-professor-lose-27-pounds", "postedAt": "2010-11-08T21:42:32.423Z", "baseScore": 10, "voteCount": 9, "commentCount": 17, "url": null, "contents": { "documentId": "Ys8oHYgdGmma5KuJT", "html": "

Twinkie diet helps nutrition professor lose 27 pounds:

\n

 

\n
\n

For 10 weeks, Mark Haub, a professor of human nutrition at Kansas State University, ate one of these sugary cakelets every three hours, instead of meals. To add variety in his steady stream of Hostess and Little Debbie snacks, Haub munched on Doritos chips, sugary cereals and Oreos, too.

\n

His premise: That in weight loss, pure calorie counting is what matters most -- not the nutritional value of the food.

\n

The premise held up: On his \"convenience store diet,\" he shed 27 pounds in two months.

\n
\n

 

\n

But the highlight, for LW-ers, comes at the end:

\n

 

\n
\n

Despite his weight loss, Haub feels ambivalence.

\n

\"I wish I could say the outcomes are unhealthy. I wish I could say it's healthy. I'm not confident enough in doing that. That frustrates a lot of people. One side says it's irresponsible. It is unhealthy, but the data doesn't say that.\"

\n
" } }, { "_id": "Qnz6JzfFMQ8SmdPxr", "title": "Do we have a technological growth problem?", "pageUrl": "https://www.lesswrong.com/posts/Qnz6JzfFMQ8SmdPxr/do-we-have-a-technological-growth-problem", "postedAt": "2010-11-08T16:28:14.213Z", "baseScore": 1, "voteCount": 3, "commentCount": 1, "url": null, "contents": { "documentId": "Qnz6JzfFMQ8SmdPxr", "html": "

Is meaningful technological growth speeding up or slowing down?  How would we be able to tell?

\n

\"Technology is improving faster today than ever\" and \"Technology isn't improving as fast as it used to\" are both plausible statements on the face of it.  I'm not sure how we could tell which is true.  Looking at rates of new patents doesn't seem like a good measurement, since it could reflect changes in how willing people are to seek patents.

\n

I'm starting to find more plausible the dismal hypothesis that technological improvement is slowing down, and that therefore human prosperity around the world is endangered.  Think about the Industrial Revolution.  Think about the automobile, the airplane, the telephone, the lightbulb.  Think about the computer and the Web. Now think about the past ten years.  What's being done that's radically new?  

\n

Of course this is a vague way of thinking about things, and probably colored by my own gloominess.  I'm writing here in the hopes that other people will have sounder methods for looking at this question.

\n

Between 1800 and the present, per capita world GDP increased by about a factor of 16, after centuries of near-stagnation. (Link.)  I can't say with great confidence why, but the conventional-wisdom explanation is usually something like technology, perhaps combined with political or financial factors.  What does seem certain to me is that anything that threatens that 200-year run of good fortune is incredibly dangerous, and we should look out for such risks, unless they're all-but-impossible.  Risks to global economic growth are not quite existential risks, but they're risks to everyone's material well-being.
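
As a back-of-envelope check (my own arithmetic, assuming roughly 210 years), a factor-of-16 rise corresponds to a surprisingly modest average growth rate:

```python
# What annual growth rate compounds to a 16x increase over ~210 years?
factor, years = 16, 210
annual_growth = factor ** (1 / years) - 1
print(f'{annual_growth:.2%}')  # roughly 1.3% per year, compounded
```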

\n

Is there anyone here who thinks there exist such risks?  

" } }, { "_id": "KzLnvv3dob8SGTQAD", "title": "A writer describes gradually losing language", "pageUrl": "https://www.lesswrong.com/posts/KzLnvv3dob8SGTQAD/a-writer-describes-gradually-losing-language", "postedAt": "2010-11-08T15:57:31.094Z", "baseScore": 17, "voteCount": 15, "commentCount": 1, "url": null, "contents": { "documentId": "KzLnvv3dob8SGTQAD", "html": "

A writer's memoir of a brain tumor slowly destroying his ability to use language

\n

 

\n
When I came to read this passage \"…floating and flailing weightlessly.…\" I said the word \"weightlessly\" as \"walterkly\". It took quite a bit of effort to be fully sure that this was a mistake; and more effort and repeating to grasp what exactly this nonsense word was, to establish its sound – I had to construct it phoneme by phoneme – clearly enough to write it down. And it seems that the reading eye, darting backwards and forwards, was plucking letters from the whole vicinity, and mixing them up, having lost its usual ability to sort them. What the whole thing emphasises, of course, is how what we call self-command is really a matter of having reliable automatic mechanisms, unthinking habits or instincts.
" } }, { "_id": "PHJhmKh3fXEdEHge9", "title": "What are the best demoable findings in cogsci?", "pageUrl": "https://www.lesswrong.com/posts/PHJhmKh3fXEdEHge9/what-are-the-best-demoable-findings-in-cogsci", "postedAt": "2010-11-08T10:22:18.879Z", "baseScore": 2, "voteCount": 3, "commentCount": 0, "url": null, "contents": { "documentId": "PHJhmKh3fXEdEHge9", "html": "

http://www.reddit.com/r/cogsci/comments/e2r17/what_are_the_best_demoable_findings_in_cogsci/

" } }, { "_id": "LZnPCrmjTsYAaHpZk", "title": "An Xtranormal Intelligence Explosion", "pageUrl": "https://www.lesswrong.com/posts/LZnPCrmjTsYAaHpZk/an-xtranormal-intelligence-explosion", "postedAt": "2010-11-07T23:42:34.382Z", "baseScore": 5, "voteCount": 27, "commentCount": 84, "url": null, "contents": { "documentId": "LZnPCrmjTsYAaHpZk", "html": "

http://www.youtube.com/watch?v=ghIj1mYTef4

" } }, { "_id": "G6npMHwgRGSQDKavX", "title": "Have no heroes, and no villains", "pageUrl": "https://www.lesswrong.com/posts/G6npMHwgRGSQDKavX/have-no-heroes-and-no-villains", "postedAt": "2010-11-07T21:15:52.604Z", "baseScore": 138, "voteCount": 123, "commentCount": 73, "url": null, "contents": { "documentId": "G6npMHwgRGSQDKavX", "html": "

\"If you meet the Buddha on the road, kill him!\"

\n

When Edward Wilson published the book Sociobiology, Richard Lewontin and Stephen J. Gould secretly convened a group of biologists to gather regularly, for months, in the same building at Harvard that Wilson's office was in, to write an angry, politicized rebuttal to it, essentially saying not that Sociobiology was wrong, but that it was immoral - without ever telling Wilson.  This proved, to me, that they were not interested in the truth.  I never forgave them for this.

\n

I constructed a narrative of evolutionary biology in which Edward Wilson and Richard Dawkins were, for various reasons, the Good Guys; and Richard Lewontin and Stephen J. Gould were the Bad Guys.

\n

When reading articles on group selection for this post, I was distressed to find Richard Dawkins joining in the vilification of group selection with religious fervor; while Stephen J. Gould was the one who said,

\n

\"I have witnessed widespread dogma only three times in my career as an evolutionist, and nothing in science has disturbed me more than ignorant ridicule based upon a desire or perceived necessity to follow fashion: the hooting dismissal of Wynne-Edwards and group selection in any form during the late 1960's and most of the 1970's, the belligerence of many cladists today, and the almost ritualistic ridicule of Goldschmidt by students (and teachers) who had not read him.\"

\n

This caused me great cognitive distress.  I wanted Stephen Jay Gould to be the Bad Guy.  I realized I was trying to find a way to dismiss Gould's statement, or at least believe that he had said it from selfish motives.  Or else, to find a way to flip it around so that he was the Good Guy and someone else was the Bad Guy.

\n

To move on, I had to consciously shatter my Good Guy/Bad Guy narrative, and accept that all of these people are sometimes brilliant, sometimes blind; sometimes share my values, and sometimes prioritize their values (e.g., science vs. politics) very differently from me.  I was surprised by how painful it was to do that, even though I was embarrassed to have had the Good Guy/Bad Guy hypothesis in the first place.  I don't think it was even personal - I didn't care who would be the Good Guys and who would be the Bad Guys.  I just want there to be Good Guys and Bad Guys.

" } }, { "_id": "SsHiQostagLYywHxn", "title": "The hard limits of hard nanotech", "pageUrl": "https://www.lesswrong.com/posts/SsHiQostagLYywHxn/the-hard-limits-of-hard-nanotech", "postedAt": "2010-11-07T00:49:21.431Z", "baseScore": 27, "voteCount": 28, "commentCount": 53, "url": null, "contents": { "documentId": "SsHiQostagLYywHxn", "html": "

What are the plausible scientific limits of molecular nanotechnology?

\n

Richard Jones, author of Soft Machines has written an interesting critique of the room-temperature molecular nanomachinery propounded by Drexler:

\n

Rupturing The Nanotech Rapture

\n
\n

If biology can produce a sophisticated nanotechnology based on soft materials like proteins and lipids, singularitarian thinking goes, then how much more powerful our synthetic nanotechnology would be if we could use strong, stiff materials, like diamond. And if biology can produce working motors and assemblers using just the random selections of Darwinian evolution, how much more powerful the devices could be if they were rationally designed using all the insights we've learned from macroscopic engineering.

\n

But that reasoning fails to take into account the physical environment in which cell biology takes place, which has nothing in common with the macroscopic world of bridges, engines, and transmissions. In the domain of the cell, water behaves like thick molasses, not the free-flowing liquid that we are familiar with. This is a world dominated by the fluctuations of constant Brownian motion, in which components are ceaselessly bombarded by fast-moving water molecules and flex and stretch randomly. The van der Waals force, which attracts molecules to one another, dominates, causing things in close proximity to stick together. Clingiest of all are protein molecules, whose stickiness underlies a number of undesirable phenomena, such as the rejection of medical implants. What's to protect a nanobot assailed by particles glomming onto its surface and clogging up its gears?

\n

The watery nanoscale environment of cell biology seems so hostile to engineering that the fact that biology works at all is almost hard to believe. But biology does work--and very well at that. The lack of rigidity, excessive stickiness, and constant random motion may seem like huge obstacles to be worked around, but biology is aided by its own design principles, which have evolved over billions of years to exploit those characteristics. That brutal combination of strong surface forces and random Brownian motion in fact propels the self-assembly of sophisticated structures, such as the sculpting of intricately folded protein molecules. The cellular environment that at first seems annoying--filled with squishy objects and the chaotic banging around of particles--is essential in the operation of molecular motors, where a change in a protein molecule's shape provides the power stroke to convert chemical energy to mechanical energy.

\n

In the end, rather than ratifying the \"hard\" nanomachine paradigm, cellular biology casts doubt on it. But even if that mechanical-engineering approach were to work in the body, there are several issues that, in my view, have been seriously underestimated by its proponents.

\n

...

\n

Put all these complications together and what they suggest, to me, is that the range of environments in which rigid nanomachines could operate, if they operate at all, would be quite limited. If, for example, such devices can function only at low temperatures and in a vacuum, their impact and economic importance would be virtually nil.

\n
\n

The entire article is definitely worth a read. Jones advocates more attention to \"soft\" nanotech, which is nanomachinery with similar design principles to biology -- the biomimetic approach -- as the most plausible means of making progress in nanotech.

\n

As far as near-term room-temperature innovations, he seems to make a compelling case. However the claim that \"If ... such devices can function only at low temperatures and in a vacuum, their impact and economic importance would be virtually nil\" strikes me as questionable. It seems to me that atomic-precision nanotech could be used to create hard vacuums and more perfectly reflective surfaces, and hence bring the costs of cryogenics down considerably. Desktop factories using these conditions could still be feasible.

\n

Furthermore, it bears mentioning that cryonics patients could still benefit from molecular machinery subject to such limitations, even if the machinery is not functional at anything remotely close to human body temperature. The necessity of a complete cellular-level rebuild is not a good excuse not to cryopreserve. As long as this kind of rebuild technology is physically plausible, there arguably remains an ethical imperative to cryopreserve patients facing the imminent prospect of decay.

\n

In fact, this proposed limitation could hint at an alternative use for cryosuspension that is entirely separate from its present role as an ambulance to the future. It could perhaps turn out that there are forms of cellular surgery and repair which are only feasible at those temperatures, which are nonetheless necessary to combat aging and its complications. The people of the future might actually need to undergo routine periods of cryogenic nanosurgery in order to achieve robust rejuvenation. This would be a more pleasant prospect than cryonics in that it would be a proven technology at that point; and most likely the vitrification process could be improved sufficiently via soft nanotech to reduce the damage from cooling itself significantly.

" } }, { "_id": "euonG8cqAHFdfmCGR", "title": "Variation on conformity experiment", "pageUrl": "https://www.lesswrong.com/posts/euonG8cqAHFdfmCGR/variation-on-conformity-experiment", "postedAt": "2010-11-06T18:24:41.438Z", "baseScore": 28, "voteCount": 21, "commentCount": 8, "url": null, "contents": { "documentId": "euonG8cqAHFdfmCGR", "html": "

A new variation on the Asch conformity experiment was recently published. The experiment was performed in Japan and used polarizing glasses to show different lines to different people in the same room, so that the subjects had to disagree with others they actually knew, and who genuinely believed that they were answering correctly. The study found that women conformed by giving a wrong answer about a third of the time, but men did not.

\n

Learned about this via Ben Goldacre's blog.

" } }, { "_id": "eqarWZDPSXy4q8m32", "title": "Yet Another \"Rational Approach To Morality & Friendly AI Sequence\"", "pageUrl": "https://www.lesswrong.com/posts/eqarWZDPSXy4q8m32/yet-another-rational-approach-to-morality-and-friendly-ai", "postedAt": "2010-11-06T16:30:25.074Z", "baseScore": -13, "voteCount": 15, "commentCount": 61, "url": null, "contents": { "documentId": "eqarWZDPSXy4q8m32", "html": "

Premise:  There exists a community whose top-most goal is to maximally and fairly fulfill the goals of all of its members.  They are approximately as rational as the 50th percentile of this community.  They politely invite you to join.  You are in no imminent danger.

\n

 

\n

Do you:

\n\n

 

\n

Premise:  The only rational answer given the current information is the last one.

\n

 

\n

What I'm attempting to eventually prove: the hypothesis that I'm investigating is whether \"Option 2 is the only long-term rational answer\". (Yes, this directly challenges several major current premises, so my arguments are going to have to be totally clear.  I am fully aware of the rather extensive Metaethics sequence and the vast majority of what it links to, and will not intentionally assume any contradictory premises without clear statement and argument.)

\n

 

\n

It might be an interesting and useful exercise for the reader to stop and specify what information they would be looking for next before continuing.  It would be nice if an ordered list could be developed in the comments.

\n

 

\n

Obvious Questions:

\n

 

\n

<Spoiler Alert>

\n

 

\n

 

\n
    \n
  1. What happens if I don’t join?
  2. What do you believe that I would find most problematic about joining?
  3. Can I leave the community and, if so, how and what happens then?
  4. What are the definitions of maximal and fairly?
  5. What are the most prominent subgoals? / What are the rules?
\n

 

" } }, { "_id": "5yKGWSK5HGAdTpCi4", "title": "What do you read? What would \"desk archeology\" produce? ", "pageUrl": "https://www.lesswrong.com/posts/5yKGWSK5HGAdTpCi4/what-do-you-read-what-would-desk-archeology-produce", "postedAt": "2010-11-06T15:13:48.280Z", "baseScore": 0, "voteCount": 3, "commentCount": 13, "url": null, "contents": { "documentId": "5yKGWSK5HGAdTpCi4", "html": "
I would like to do a kind of poll:
\n
Which books/articles do you read now, which ones are on your reading list?
\n
What would a \"desk archeologist\" find when digging up your desk?
\n

Some links:

\n
\"A Stratigraphic Analysis of Desk Debris\",
\"Are we able to think clearly when surrounded by mess, asks Clive James\":
\n

\n

\n

" } }, { "_id": "HKXmNeDXFqJskbcd6", "title": "Stanford historian on the singularity", "pageUrl": "https://www.lesswrong.com/posts/HKXmNeDXFqJskbcd6/stanford-historian-on-the-singularity", "postedAt": "2010-11-06T10:01:29.868Z", "baseScore": 6, "voteCount": 5, "commentCount": 10, "url": null, "contents": { "documentId": "HKXmNeDXFqJskbcd6", "html": "

Ian Morris on \"why the west rules\", which seems to be a provocative title for an interesting book on historical geographical trends and their projection into the future: http://www.youtube.com/watch?v=tvkHiL-H2io. He starts talking about the future at minute 27 and basically concludes that a singularity scenario is one of two possibilities for the 21st century, the other being collapse. Nothing new, but encouraging to see this increasingly in the mainstream.

" } }, { "_id": "j6byboWcPASu5cYk7", "title": "Print ready version of The Sequences", "pageUrl": "https://www.lesswrong.com/posts/j6byboWcPASu5cYk7/print-ready-version-of-the-sequences", "postedAt": "2010-11-06T01:21:20.328Z", "baseScore": 19, "voteCount": 17, "commentCount": 33, "url": null, "contents": { "documentId": "j6byboWcPASu5cYk7", "html": "

I've been wanting a printable copy of the sequences to read through in meatspace. I wrote a quick scraper and uploaded the results here http://pwnee.com/Sequences/list.html
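
For anyone curious, a minimal sketch of the kind of scraper described (not the actual script used; the CSS selector and output layout are guesses that would need adjusting to the site's real markup, and it assumes you already have the list of post URLs):

```python
import requests
from bs4 import BeautifulSoup

def scrape_posts(urls, out_path='sequences.html'):
    # Fetch each post, keep its title and body, and concatenate everything
    # into a single printable HTML file.
    chunks = []
    for url in urls:
        html = requests.get(url, timeout=30).text
        soup = BeautifulSoup(html, 'html.parser')
        title = soup.find('title')
        body = soup.find('div', class_='post-body')  # hypothetical selector
        chunks.append(f'<h1>{title.get_text() if title else url}</h1>')
        chunks.append(str(body) if body else '<p>(body not found)</p>')
    with open(out_path, 'w', encoding='utf-8') as f:
        f.write('<html><body>' + '\n'.join(chunks) + '</body></html>')
```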

\n

Inter-linking doesn't work, but I just wanted a printable version anyway.

" } }, { "_id": "Kv3ETihv5svSatbMc", "title": "Conjoined twins who share a brain/experience?", "pageUrl": "https://www.lesswrong.com/posts/Kv3ETihv5svSatbMc/conjoined-twins-who-share-a-brain-experience", "postedAt": "2010-11-06T00:44:53.675Z", "baseScore": 20, "voteCount": 17, "commentCount": 1, "url": null, "contents": { "documentId": "Kv3ETihv5svSatbMc", "html": "

http://news.ycombinator.com/item?id=1874171

\n

 

\n

http://www2.macleans.ca/2010/11/02/a-piece-of-their-mind/print/

" } }, { "_id": "CgCuggTmnhXpiH6TL", "title": "POLL: Realism and reductionism", "pageUrl": "https://www.lesswrong.com/posts/CgCuggTmnhXpiH6TL/poll-realism-and-reductionism", "postedAt": "2010-11-05T21:13:52.568Z", "baseScore": -7, "voteCount": 6, "commentCount": 9, "url": null, "contents": { "documentId": "CgCuggTmnhXpiH6TL", "html": "

A second attempt.

\n

Definitions:

\n

universe: that which contains everything.

\n

reality: the realm of natural phenomena.

\n

scientific theory: a theory that identifies natural phenomena.

\n

morality: the realm of normative rules.

\n

normative theory: a theory that identifies normative rules.

\n

identification: \"this natural phenomenon has following properties\" or \"this normative rule says: ... \"

\n

 

\n

What are you?

\n

Please answer in the form of [ABC0]{4}, where 0 stands for no opinion. Feel free to add an explanation.

\n

Example: B0BA stands for anti-realism, no opinion on values, weak ontological reductionism, scientific reductionism.

\n

 

\n
\n

 

\n
1A realism
\n

Reality is external to the mind.

\n

It is possible to evaluate which scientific theory is more correct.

\n
1B anti-realism
\n

Reality is external to the mind.

\n

It is impossible to evaluate which scientific theory is more correct.

\n
1C subjectivism
\n

Reality is a product of the mind.

\n
\n
2A value realism
\n

Morality is external to the mind.

\n

It is possible to evaluate which normative theory is better.

\n
2B value anti-realism
\n

Morality is external to the mind.

\n

It is impossible to evaluate which normative theory is better.

\n
2C value relativism
\n

Morality is a product of the mind.

\n
\n
\n
\n
3A strong ontological reductionism
\n

Mental phenomena are reducible to reality and reality is reducible to mathematics.

\n

Mathematics is the universe.

\n
3B weak ontological reductionism
\n

Mental phenomena are reducible to reality, but reality is not reducible to mathematics.

\n

Reality (and mathematics) is the universe.

\n
3C anti-reductionism
\n

Mental phenomena are not reducible to reality and reality is not reducible to mathematics.

\n

 

\n
\n

 

\n
4A scientific reductionism
\n

The entirety of scientific theories can be reduced to some axiomatic theories.

\n
4B scientific anti-reductionism
\n

The entirety of scientific theories cannot be reduced to some axiomatic theories.

\n

New natural phenomena require new irreducible scientific theories.

\n

 

\n
\n

 

" } }, { "_id": "yK4wNCxE6bqQd4nPe", "title": "Are the effects of UFAI likely to be seen at astronomical distances?", "pageUrl": "https://www.lesswrong.com/posts/yK4wNCxE6bqQd4nPe/are-the-effects-of-ufai-likely-to-be-seen-at-astronomical", "postedAt": "2010-11-05T13:05:43.970Z", "baseScore": 0, "voteCount": 7, "commentCount": 12, "url": null, "contents": { "documentId": "yK4wNCxE6bqQd4nPe", "html": "

My comment to a discussion of great filters/existential risk:

\n

How likely is it that a UFAI disaster would produce effects we can see from here? I think \"people can't suffer if they're dead\" disasters (a failed attempt at FAI) are possibly more likely than paperclip maximizers.

Not sure what a money-maximizing UFAI disaster would look like, but I can't think of any reason it would be likely to go far off-planet.

National dominance-maximizing UFAI is a hard call, but possibly wouldn't go off-planet. It would depend on whether it's looking for absolute dominance of all possible territory or dominance/elimination of existing enemies.

" } }, { "_id": "xkY6ibMyThqnRpScL", "title": "Goertzel on Psi in H+ Magazine", "pageUrl": "https://www.lesswrong.com/posts/xkY6ibMyThqnRpScL/goertzel-on-psi-in-h-magazine", "postedAt": "2010-11-05T09:09:31.202Z", "baseScore": 14, "voteCount": 12, "commentCount": 46, "url": null, "contents": { "documentId": "xkY6ibMyThqnRpScL", "html": "

Ben Goertzel has a rather long psi-related article in Humanity Plus Magazine, apparently prompted by the recent precognition study to be published in Journal of Personality and Social Psychology. He's arguing that psi is real and we should expect to see the results of this study replicated.

\n
\n

I grew up very skeptical of claims of psychic power, jaded by stupid newspaper astrology columns and phony fortune-tellers claiming to read my future in their crystal balls for $20.  Clearly there are many frauds and self-deluded people out there, falsely claiming to perceive the future and carry out other paranormal feats.  But this is no reason to ignore solid laboratory evidence pointing toward the conclusion that, in some cases, precognition really does exist.

\n
" } }, { "_id": "CYxe73jgpLiC3fNLq", "title": "Religious/Worldview Techniques", "pageUrl": "https://www.lesswrong.com/posts/CYxe73jgpLiC3fNLq/religious-worldview-techniques", "postedAt": "2010-11-05T08:04:41.977Z", "baseScore": 19, "voteCount": 12, "commentCount": 37, "url": null, "contents": { "documentId": "CYxe73jgpLiC3fNLq", "html": "

This is really weird, but I find myself strongly drawn towards religion (specifically, Christianity), which for some reason feels intuitively right to me. I *know* or at least seem to know that I'm just infected with a meme, I know all the standard arguments, and the majority of my friends are atheists, but it feels right to the extent that I am experiencing serious mental discomfort at not believing. Are there techniques that can help in this situation? I find that I can change my worldview fairly easily in many regards, but this one is deep-rooted.

" } }, { "_id": "n8rSBGeWeFfKGKLex", "title": "Explaining information theoretic vs thermodynamic entropy?", "pageUrl": "https://www.lesswrong.com/posts/n8rSBGeWeFfKGKLex/explaining-information-theoretic-vs-thermodynamic-entropy", "postedAt": "2010-11-04T23:41:01.232Z", "baseScore": -3, "voteCount": 6, "commentCount": 13, "url": null, "contents": { "documentId": "n8rSBGeWeFfKGKLex", "html": "

What is the best way to go about explaining the difference between these two different types of entropy? I can see the difference myself and give all sorts of intuitive reasons for how the concepts work and how they kind of relate. At the same time I can see why my (undergraduate) physicist friends would be skeptical when I tell them that no, I haven't got it backwards and a string of all '1's has nearly zero entropy while a perfectly random string is 'maximum entropy'. After all, if your entire physical system degenerates into a mush with no order that you know nothing about then you say it is full of entropy.
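
One concrete way to make the point (a toy sketch of my own, using empirical Shannon entropy per symbol rather than Kolmogorov complexity, which is uncomputable) is simply to compute the two numbers:

```python
import math
import random
from collections import Counter

def entropy_per_symbol(s: str) -> float:
    # Empirical Shannon entropy in bits per symbol.
    counts = Counter(s)
    n = len(s)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

ones = '1' * 10_000
noise = ''.join(random.choice('01') for _ in range(10_000))
print(entropy_per_symbol(ones))   # 0.0  -- the all-ones string carries almost no information
print(entropy_per_symbol(noise))  # ~1.0 -- the random string is near maximum entropy per bit
```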

\n

 

\n

How would I make them understand the concepts before nerdy undergraduate arrogance turns off their brains? Preferably giving them the kind of intuitive grasp that would last rather than just persuading them via authoritative speech, charm and appeal to authority. I prefer people to comprehend me than to be able to repeat my passwords. (Except where having people accept my authority and dominance will get me laid in which case I may have to make concessions to practicality.)

" } }, { "_id": "7grfN4xNvLpEQoiyb", "title": "The Curve of Capability", "pageUrl": "https://www.lesswrong.com/posts/7grfN4xNvLpEQoiyb/the-curve-of-capability", "postedAt": "2010-11-04T20:22:48.876Z", "baseScore": 20, "voteCount": 59, "commentCount": 266, "url": null, "contents": { "documentId": "7grfN4xNvLpEQoiyb", "html": "

or: Why our universe has already had its one and only foom

\n

In the late 1980s, I added half a megabyte of RAM to my Amiga 500. A few months ago, I added 2048 megabytes of RAM to my Dell PC. The latter upgrade was four thousand times larger, yet subjectively they felt about the same, and in practice they conferred about the same benefit. Why? Because each was a factor of two increase, and it is a general rule that each doubling tends to bring about the same increase in capability.
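
A minimal way to state that rule (my own formalization, not the author's): treat capability as proportional to the logarithm of resources, so the gain depends on the ratio of resources, not the absolute increment.

```python
import math

# Toy formalization of the 'curve of capability': equal doublings yield equal gains.
def capability(resources, gain_per_doubling=1.0):
    return gain_per_doubling * math.log2(resources)

# The two upgrades from the paragraph above, assuming each took the machine from x to 2x MB.
amiga_gain = capability(1.0) - capability(0.5)       # 0.5 MB -> 1 MB
dell_gain = capability(4096.0) - capability(2048.0)  # 2048 MB -> 4096 MB
print(amiga_gain, dell_gain)  # both 1.0: an upgrade 4096 times larger, the same felt benefit
```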

\n

That's a pretty important rule, so let's test it by looking at some more examples.

\n

How does the performance of a chess program vary with the amount of computing power you can apply to the task? The answer is that each doubling of computing power adds roughly the same number of ELO rating points. The curve must flatten off eventually (after all, the computation required to fully solve chess is finite, albeit large), yet it remains surprisingly constant over a surprisingly wide range.

\n

Is that idiosyncratic to chess? Let's look at Go, a more difficult game that must be solved by different methods, where the alpha-beta minimax algorithm that served chess so well, breaks down. For a long time, the curve of capability also broke down: in the 90s and early 00s, the strongest Go programs were based on hand coded knowledge such that some of them literally did not know what to do with extra computing power; additional CPU speed resulted in zero improvement.

\n

The breakthrough came in the second half of last decade, with Monte Carlo tree search algorithms. It wasn't just that they provided a performance improvement, it was that they were scalable. Computer Go is now on the same curve of capability as computer chess: whether measured on the ELO or the kyu/dan scale, each doubling of power gives a roughly constant rating improvement.

\n

Where do these doublings come from? Moore's Law is driven by improvements in a number of technologies, one of which is chip design. Each generation of computers is used, among other things, to design the next generation. Each generation needs twice the computing power of the last generation to design in a given amount of time.

\n

Looking away from computers to one of the other big success stories of 20th-century technology, space travel, from Goddard's first crude liquid fuel rockets, to the V2, to Sputnik, to the half a million people who worked on Apollo, we again find that successive qualitative improvements in capability required order of magnitude after order of magnitude increase in the energy a rocket could deliver to its payload, with corresponding increases in the labor input.

\n

What about the nuclear bomb? Surely that at least was discontinuous?

\n

At the simplest physical level it was: nuclear explosives have six orders of magnitude more energy density than chemical explosives. But what about the effects? Those are what we care about, after all.

\n

The death tolls from the bombings of Hiroshima and Nagasaki have been estimated respectively at 90,000-166,000 and 60,000-80,000. That from the firebombing of Hamburg in 1943 has been estimated at 42,600; that from the firebombing of Tokyo on the 10th of March 1945 alone has been estimated at over 100,000. So the actual effects were in the same league as other major bombing raids of World War II. To be sure, the destruction was now being carried out with single bombs, but what of it? The production of those bombs took the labor of 130,000 people, the industrial infrastructure of the world's most powerful nation, and $2 billion of investment in 1945 dollars, nor did even that investment at that time gain the US the ability to produce additional nuclear weapons in large numbers at short notice. The construction of the massive nuclear arsenals of the later Cold War took additional decades.

\n

(To digress for a moment from the curve of capability itself, we may also note that destructive power, unlike constructive power, is purely relative. The death toll from the Mongol sack of Baghdad in 1258 was several hundred thousand; the total from the Mongol invasions was several tens of millions. The raw numbers, of course, do not fully capture the effect on a world whose population was much smaller than today's.)

\n

Does the same pattern apply to software as hardware? Indeed it does. There's a significant difference between the capability of a program you can write in one day versus two days. On a larger scale, there's a significant difference between the capability of a program you can write in one year versus two years. But there is no significant difference between the capability of a program you can write in 365 days versus 366 days. Looking away from programming to the task of writing an essay or a short story, a textbook or a novel, the rule holds true: each significant increase in capability requires a doubling, not a mere linear addition. And if we look at pure science, continued progress over the last few centuries has been driven by exponentially greater inputs both in number of trained human minds applied and in the capabilities of the tools used.

\n

If this is such a general law, should it not apply outside human endeavor? Indeed it does. From protozoa which pack a minimal learning mechanism into a single cell, to C. elegans with hundreds of neurons, to insects with thousands, to vertebrates with millions and then billions, each increase in capability takes an exponential increase in brain size, not the mere addition of a constant number of neurons.

\n

But, some readers are probably thinking at this point, what about...

\n

... what about the elephant at the dining table? The one exception that so spectacularly broke the law?

\n

Over the last five or six million years, our lineage upgraded computing power (brain size) by about a factor of three, and upgraded firmware to an extent that is unknown but was surely more like a percentage than an order of magnitude. The result was not a corresponding improvement in capability. It was a jump from almost no to fully general symbolic intelligence, which took us from a small niche to mastery of the world. How? Why?

\n

To answer that question, consider what an extraordinary thing is a chimpanzee. In raw computing power, it leaves our greatest supercomputers in the dust; in perception, motor control, spatial and social reasoning, it has performance our engineers can only dream about. Yet even chimpanzees trained in sign language cannot parse a sentence as well as the Infocom text adventures that ran on the Commodore 64. They are incapable of arithmetic that would be trivial with an abacus let alone an early pocket calculator.

\n

The solution to the paradox is that a chimpanzee could make an almost discontinuous jump to human level intelligence because it wasn't developing across the board. It was filling in a missing capability - symbolic intelligence - in an otherwise already very highly developed system. In other words, its starting point was staggeringly lopsided.

\n

(Is there an explanation why this state of affairs came about in the first place? I think there is - in a nutshell, most conscious observers should expect to live in a universe where it happens exactly once - but that would require a digression into philosophy and anthropic reasoning, so it really belongs in another post; let me know if there's interest, and I'll have a go at writing that post.)

\n

Can such a thing happen again? In particular, is it possible for AI to go foom the way humanity did?

\n

If such lopsidedness were to repeat itself... well even then, the answer is probably no. After all, an essential part of what we mean by foom in the first place - why it's so scarily attractive - is that it involves a small group accelerating in power away from the rest of the world. But the reason why that happened in human evolution is that genetic innovations mostly don't transfer across species. The dolphins couldn't say hey, these apes are on to something, let's snarf the code for this symbolic intelligence thing, oh and the hands too, we're going to need manipulators for the toolmaking application, or maybe octopus tentacles would work better in the marine environment. Human engineers carry out exactly this sort of technology transfer on a routine basis.

\n

But it doesn't matter, because the lopsidedness is not occurring. Obviously computer technology hasn't lagged in symbol processing - quite the contrary. Nor has it really lagged in areas like vision and pattern matching - a lot of work has gone into those, and our best efforts aren't clearly worse than would be expected given the available development effort and computing power. And some of us are making progress on actually developing AGI - very slow, as would be expected if the theory outlined here is correct, but progress nonetheless.

\n

The only way to create the conditions for any sort of foom would be to shun a key area completely for a long time, so that ultimately it could be rapidly plugged into a system that is very highly developed in other ways. Hitherto no such shunning has occurred: every even slightly promising path has had people working on it. I advocate continuing to make progress across the board as rapidly as possible, because every year that drips away may be an irreplaceable loss; but if you believe there is a potential threat from unfriendly AI, then such continued progress becomes the one reliable safeguard.

\n

 

" } }, { "_id": "tCvuytQXkF2B2pyYN", "title": "Specification failure", "pageUrl": "https://www.lesswrong.com/posts/tCvuytQXkF2B2pyYN/specification-failure", "postedAt": "2010-11-04T18:56:38.301Z", "baseScore": 8, "voteCount": 6, "commentCount": 18, "url": null, "contents": { "documentId": "tCvuytQXkF2B2pyYN", "html": "

I got made a moderator recently, and noticed that I have the amazing ability to ban my own comments. For added efficiency/hilarity I can even do it from my own user page, now complete with nifty \"ban\" links! I thought it would be neat to publicize this bug before it gets fixed, because it makes for a wonderful example of specification failure - the kind of error that can doom any attempt to create AGI/FAI. How do you protect against such mistakes in general, if you're not allowed to test your software? Discuss.

\n

Tangentially related: a blogger asks visitors to write a correct binary search routine without testing it even once, then test it and report the results. Much wailing ensues. (I succeeded - look for \"Vladimir Slepnev\" - but it was a surreal experience.)
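
For reference, here is one attempt at the exercise (my own sketch, not the blogger's reference solution), with the after-the-fact tests included:

```python
def binary_search(xs, target):
    # Return an index i with xs[i] == target, or -1 if absent; xs must be sorted.
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2  # safe in Python; use lo + (hi - lo) // 2 in fixed-width languages
        if xs[mid] == target:
            return mid
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

assert binary_search([1, 3, 5, 7, 9], 7) == 3
assert binary_search([1, 3, 5, 7, 9], 4) == -1
assert binary_search([], 4) == -1
```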

" } }, { "_id": "E3spAsatTb24zmZnC", "title": "POLL: Reductionism", "pageUrl": "https://www.lesswrong.com/posts/E3spAsatTb24zmZnC/poll-reductionism", "postedAt": "2010-11-04T17:55:20.027Z", "baseScore": -5, "voteCount": 6, "commentCount": 10, "url": null, "contents": { "documentId": "E3spAsatTb24zmZnC", "html": "

Since there is no handy tool to create polls on LW, please post comments on your position.

\n

Which of the following would you identify yourself as? (I am not good at rationalist taboo, so please excuse any ambiguous terms.)

\n

Strong ontological reductionist

\n

See definition on Wikipedia. Someone who believes that mental phenomena can be fully reduced to physics and that physics can be fully reduced to mathematics. That is, desires and electrons don't have any fundamental qualities, but are in the end mathematical objects. And nothing exists outside the mathematical realm.

\n

Weak ontological reductionist

\n

Someone who believes that mental phenomena don't have any qualities outside the domain of physics. Every aspect of mental phenomena can be fully reduced to physical phenomena. But physical phenomena are not necessarily mathematical objects.

\n

Strong scientific reductionist

\n

Someone who believes that quantum mechanics is wrong and Laplace's demon can exist in principle (if unrestricted by physical limitations). 

\n

Weak scientific reductionist

\n

Someone who concedes that it is impossible in principle to predict complicated physical systems, but that the concepts and theories in chemistry and biology are mere approximations and simplifications of complicated physical computations to sidestep the (faster-than-)exponential wall. That is, chemical and biological models are not fundamental, but are reducible to physical theories (if we had the theoretical computational power to simulate the models).

\n

 

\n

Please also comment if you are not a reductionist and explain what kind of reductionist you are not.

" } }, { "_id": "B4rLv6jmC3JZjGuvC", "title": "Light cone eating AI explosions are not filters", "pageUrl": "https://www.lesswrong.com/posts/B4rLv6jmC3JZjGuvC/light-cone-eating-ai-explosions-are-not-filters", "postedAt": "2010-11-04T16:31:40.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "B4rLv6jmC3JZjGuvC", "html": "

Some existential risks can’t account for any of the Great Filter. Here are two categories of existential risks that are not filters:

\n

Too big: any disaster that would destroy everyone in the observable universe at once, or destroy space itself, is out. If others had been filtered by such a disaster in the past, we wouldn’t be here either. This excludes events such as simulation shutdown and breakdown of a metastable vacuum state we are in.

\n

Not the end: Humans could be destroyed without the causal path to space colonization being destroyed. Also much of human value could be destroyed without humans being destroyed. e.g. Super-intelligent AI would presumably be better at colonizing the stars than humans are. The same goes for transcending uploads. Repressive totalitarian states and long term erosion of value could destroy a lot of human value and still lead to interstellar colonization.

\n

Since these risks are not filters, neither the knowledge that there is a large minimum total filter nor the use of SIA increases their likelihood.  SSA still increases their likelihood for the usual Doomsday Argument reasons. I think the rest of the risks listed in Nick Bostrom’s paper can be filters. According to SIA averting these filter existential risks should be prioritized more highly relative to averting non-filter existential risks such as those in this post. So for instance AI is less of a concern relative to other existential risks than otherwise estimated. SSA’s implications are less clear – the destruction of everything in the future is a pretty favorable inclusion in a hypothesis under SSA with a broad reference class, but as always everything depends on the reference class.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "HFnQ5gXHWm29M93cm", "title": "Katja Grace's Anthropic Reasoning in the Great Filter", "pageUrl": "https://www.lesswrong.com/posts/HFnQ5gXHWm29M93cm/katja-grace-s-anthropic-reasoning-in-the-great-filter", "postedAt": "2010-11-04T10:47:18.301Z", "baseScore": 2, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "HFnQ5gXHWm29M93cm", "html": "

The wonderful Katja Grace has just uploaded her thesis, Anthropic Reasoning in the Great Filter, linked to from her blog (which contains a convenient summary).

\n

 

\n

She discusses how to deal with indexical updating: updating on information not simply based on the likelihood that it occurs somewhere, but on the relative number (or proportion) of observers who would see that information.

" } }, { "_id": "DexavXc3daGe7L5gR", "title": "Refuting the \"iron law of bureaucracy\"?", "pageUrl": "https://www.lesswrong.com/posts/DexavXc3daGe7L5gR/refuting-the-iron-law-of-bureaucracy", "postedAt": "2010-11-04T08:18:04.121Z", "baseScore": 5, "voteCount": 7, "commentCount": 10, "url": null, "contents": { "documentId": "DexavXc3daGe7L5gR", "html": "

Jerry Pournelle's \"Iron Law of Bureaucracy\" implies that leaders of bureaucratic organizations will seek to maximize the power and influence of the organization at the expense of its stated goals - but is that true in the real world?

\n

Julie Dolan of Macalester College examined surveys of government administrators and found that, surprisingly enough, high-ranking federal bureaucrats tended to prefer less government spending than the general public, even on issues that their own departments are responsible for.

\n

Here is the abstract from her paper, \"The budget-minimizing bureaucrat? Empirical evidence from the senior executive service\" that was published in the journal Public Administration Review:

\n
\n

In a representative democracy, we assume the populace exerts some control over the actions and outputs of government officials, ensuring they comport with public preferences. However, the growth of the fourth branch of government has created a paradox: Unelected bureaucrats now have the power to affect government decisions (Meier 1993; Rourke 1984; Aberbach, Putnam, and Rockman 1981). In this article, I rely on two competing theories of bureaucratic behavior - representative-bureaucracy theory and Niskanen's budget-maximization theory - to assess how well the top ranks of the federal government represent the demands of the citizenry. Focusing on federal-spending priorities, I assess whether Senior Executive Service (SES) members mirror the attitudes of the populace or are likely to inflate budgets for their own personal gain. Contrary to the popular portrayal of the budget-maximizing bureaucrat (Niskanen 1971), I find these federal administrators prefer less spending than the public on most broad spending categories, even on issues that fall within their own departments' jurisdictions. As such, it may be time to revise our theories about bureaucratic self-interest and spending priorities. [emphasis added]

\n
\n

I was able to read the paper here for free, but I had to register first.

\n

See also: The Case FOR Bureaucracy

" } }, { "_id": "qpcP2KmgbR3SH7mMF", "title": "An apology", "pageUrl": "https://www.lesswrong.com/posts/qpcP2KmgbR3SH7mMF/an-apology", "postedAt": "2010-11-03T19:20:08.179Z", "baseScore": 20, "voteCount": 21, "commentCount": 56, "url": null, "contents": { "documentId": "qpcP2KmgbR3SH7mMF", "html": "

Ohhhhh.  WOW!  Damn.  Now I feel bad.

I have been acting like a bull in a china shop, been an extremely ungracious guest, and have taken longer than I would have liked to realize these things.

My deepest apologies.

My only defenses or mitigating circumstances:
1.  I really didn't get it
2.  My intentions were good

I would like to perform a penance of creating or helping to create a newbie's guide to LessWrong. Doing so will clarify and consolidate my understanding and hopefully provide a useful community resource, in recompense for the above and in appreciation of those who took the time to write thoughtful comments. Obviously, though, doing so will require more patience and help from the community (particularly since I am certainly aware that I have no idea how to calibrate how much, if anything, you actually want to make too easily accessible) -- so this is also a request for that patience and help (and I'm making the assumption that the request will be answered by the replies ;-).

Thanks.

" } }, { "_id": "ccqfsQ3kMSLDJKRx7", "title": "Control Fraud", "pageUrl": "https://www.lesswrong.com/posts/ccqfsQ3kMSLDJKRx7/control-fraud", "postedAt": "2010-11-03T19:12:07.646Z", "baseScore": 19, "voteCount": 18, "commentCount": 9, "url": null, "contents": { "documentId": "ccqfsQ3kMSLDJKRx7", "html": "

 

\n

A recent post by Bruce Schneier on control fraud.

\n
\n
Control fraud is a process of optimizing an organization for fraud, utilizing a position of power to suborn controls.
\n
\n
\n
From the abstract of the paper Bruce references:
\n
\n
Individual “control frauds” cause greater losses than all other forms of property crime combined. They are financial super-predators. Control frauds are crimes led by the head of state or CEO that use the nation or company as a fraud vehicle. Waves of “control fraud” can cause economic collapses, damage and discredit key institutions vital to good political governance, and erode trust. The defining element of fraud is deceit – the criminal creates and then betrays trust. Fraud, therefore, is the strongest acid to eat away at trust. Endemic control fraud causes institutions and trust to become friable – to crumble – and produce economic stagnation.
\n
\n
Friendly AI is an important topic on this site, but what about creating friendly organizations such as companies and governments? The damage done by a government wireheaded for fraud can be enormous.
\n
\n

Can the same approaches used to build FAI be used to improve other types of systems?

" } }, { "_id": "WxrYYpTtrmTLnjsAL", "title": "Meet-ups, continuing the conversation", "pageUrl": "https://www.lesswrong.com/posts/WxrYYpTtrmTLnjsAL/meet-ups-continuing-the-conversation", "postedAt": "2010-11-03T11:44:01.322Z", "baseScore": 4, "voteCount": 3, "commentCount": 0, "url": null, "contents": { "documentId": "WxrYYpTtrmTLnjsAL", "html": "

If you've been to a meet-up and you think there's more to be said about something that was discussed there, I suggest posting it at LW and mentioning that a meet-up inspired it.

" } }, { "_id": "hsmdgxqdbPQGLmK2E", "title": "Amoral Approaches to Morality", "pageUrl": "https://www.lesswrong.com/posts/hsmdgxqdbPQGLmK2E/amoral-approaches-to-morality", "postedAt": "2010-11-03T08:25:51.143Z", "baseScore": 9, "voteCount": 10, "commentCount": 4, "url": null, "contents": { "documentId": "hsmdgxqdbPQGLmK2E", "html": "

Consider three cases in which someone is asking you about morality: a clever child, your guru (and/or Socrates, if you're more comfortable with that tradition), or an about-to-FOOM AI of indeterminate friendliness. For each of them, you want your thoughts to be as clear as possible - the other entity is clever enough to point out flaws (or powerful enough that your flaws might be deadly), and for none of them can you assume that their prior or posterior morality will be very similar to your own. (As Thomas Sowell puts it, children are barbarians who need to be civilized before it is too late; your guru will seem willing to lead you anywhere, and the AI probably doesn't think the way you do.)

\n

I suggest that all three can be approached in the same way: by attempting to construct an amoral approach to morality. At first impression, this approach gives a significant benefit: circular reasoning is headed off at the pass, because you need to explain morality (as best as you can) to someone who does not understand or feel it.

\n

Interested in what comes next?

\n

The main concern I have is that there is a rather extensive Metaethics sequence already, and this seems to be very similar to The Moral Void and The Meaning of Right. The benefit of this post, if there is one, seems to be in a different approach to the issue - I think I can get a useful sketch of the issue in one post - and probably a different conclusion. At the moment, I don't buy Eliezer's approach to the Is-Ought gap (Right is a 1-place function... why?), and I think a redefinition of the question may make for somewhat better answers.

\n

(The inspirations for this post, if you're interested in me tackling them directly instead, are criticisms of utilitarianism obliquely raised in a huge tree in the Luminosity discussion thread (the two interesting dimensions are questioning assumptions, and talking about scope errors, of which I suspect scope errors is the more profitable) and the discussion around, as shokwave puts it, the Really Scary Idea.)

" } }, { "_id": "dDKa77AcxjzMRmbL4", "title": "META: Meetup Overload", "pageUrl": "https://www.lesswrong.com/posts/dDKa77AcxjzMRmbL4/meta-meetup-overload", "postedAt": "2010-11-03T02:01:43.475Z", "baseScore": 27, "voteCount": 20, "commentCount": 11, "url": null, "contents": { "documentId": "dDKa77AcxjzMRmbL4", "html": "

Does anyone else think we need a better way of dealing with meet-ups? I totally understand that meeting face to face is an (at least arguably) important next step in the rationalist-community-building project, but the fact is that the front page, which was originally for all the most content-rich, accessible, and noteworthy articles, is now being filled with blurbs that are irrelevant to 95% of the readership.

\n

I can see why these posts would need to be highly visible if the meetups are going to work at all, but I think we should get the ball rolling on figuring out a better way to handle location-specific posts. For example: a mandatory-but-private location setting in your profile (at least to the country, possibly to the state/province/etc. for larger areas), which would subscribe you (with opt-out available) to any happenings in that area. That's just the first idea; like I said, the point is just to get the discussion going.

" } }, { "_id": "nXp4aAq5GdhKpc57x", "title": "Rationality Quotes: November 2010", "pageUrl": "https://www.lesswrong.com/posts/nXp4aAq5GdhKpc57x/rationality-quotes-november-2010", "postedAt": "2010-11-02T20:41:33.804Z", "baseScore": 6, "voteCount": 8, "commentCount": 367, "url": null, "contents": { "documentId": "nXp4aAq5GdhKpc57x", "html": "
\n

A monthly thread for posting rationality-related quotes you've seen recently (or had stored in your quotesfile for ages).

\n\n
" } }, { "_id": "Qeq7EmkNGFRq4yaFr", "title": "Voting is not rational (usually.)", "pageUrl": "https://www.lesswrong.com/posts/Qeq7EmkNGFRq4yaFr/voting-is-not-rational-usually", "postedAt": "2010-11-02T19:34:28.552Z", "baseScore": 1, "voteCount": 14, "commentCount": 36, "url": null, "contents": { "documentId": "Qeq7EmkNGFRq4yaFr", "html": "

Today is the midterm elections in the United States, and I am not voting.

\n

For the vast majority of elections, voting is irrational, because the individual's vote is proportionately very small. This means it is extremely unlikely to affect the outcome.
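
This claim is usually cashed out as a simple expected-value calculation: the probability that your single vote is decisive, times the value you attach to the outcome, minus the cost of voting. A minimal sketch, where every number is an illustrative placeholder rather than a figure claimed in this post:

```python
# A minimal sketch of the expected-value framing of voting.
# All numbers are illustrative placeholders, not estimates from this post.
def expected_net_value_of_voting(p_decisive, value_if_decisive, cost_of_voting):
    # Expected benefit of casting the vote minus the cost of casting it.
    return p_decisive * value_if_decisive - cost_of_voting

# Even a large value attached to the outcome is swamped by a modest cost
# when the chance of being the decisive voter is around one in ten million.
print(expected_net_value_of_voting(p_decisive=1e-7,
                                   value_if_decisive=100_000,
                                   cost_of_voting=20))  # -> -19.99
```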

\n

There are, however, conditions which can lead to voting becoming rational, and these are:

\n
    \n
  1. The number of voters approaches zero.
  2. \n
  3. The ratio of votes for candidates (in a majority wins, 2 person race) approaches .5
  4. \n
  5. The difficulty of voting becomes vanishingly small.
  6. \n
  7. Incentives are created to make the costs of not voting greater than the cost of voting (for instance, not voting is illegal in Australia, and incurs a fine.)
  8. \n
\n
For me, as for nearly everyone, voting this year is irrational.  1 and 2 are nowhere close to true, and 3 is especially bad for me this year.  I forgot to change my address on my voter's registration until yesterday and my polling location is both a) usually overpopulated and filled with long lines and b) farther than I care to go.  
\n
Only 3 and 4 are things that we can realistically do something about. The web-based absentee voter system that was tested this fall is a step in that direction, but its subsequent hacking was unsurprising. Is opening our system up to fraud a reasonable trade-off to get more people to vote? Should there be an option to use paper absentee-like ballots even if you are not absent? Should the U.S. go the Australia route?
\n

 

" } }, { "_id": "f23WWwBNR9zkPaW8v", "title": "Waser's 3 Goals of Morality", "pageUrl": "https://www.lesswrong.com/posts/f23WWwBNR9zkPaW8v/waser-s-3-goals-of-morality", "postedAt": "2010-11-02T19:12:49.132Z", "baseScore": -16, "voteCount": 15, "commentCount": 25, "url": null, "contents": { "documentId": "f23WWwBNR9zkPaW8v", "html": "
\n

In the spirit of Asimov’s 3 Laws of Robotics

\n
    \n
  1. You should not be selfish
  2. \n
  3. You should not be short-sighted or over-optimize
  4. \n
  5. You should maximize the progress towards and fulfillment of all conscious and willed goals, both in terms of numbers and diversity equally, both yours and those of others equally
  6. \n
\n

It is my contention that Yudkowsky’s CEV converges to the following 3 points:

\n
    \n
  1. I want what I want
  2. \n
  3. I recognize my obligatorily gregarious nature; realize that ethics and improving the community is the community’s most rational path towards maximizing the progress towards and fulfillment of everyone’s goals; and realize that to be rational and effective the community should punish anyone who is not being ethical or improving the community (even if the punishment is “merely” withholding help and cooperation)
  4. \n
  5. I shall, therefore, be ethical and improve the community in order to obtain assistance, prevent interference, and most effectively achieve my goals
  6. \n
\n

I further contend that, if this CEV is translated to the 3 Goals above and implemented in a Yudkowskian Benevolent Goal Architecture (BGA), the result would be a Friendly AI.

\n

It should be noted that evolution and history say that cooperation and ethics are stable attractors while submitting to slavery (when you don’t have to) is not.  This formulation expands Singer’s Circles of Morality as far as they’ll go and tries to eliminate irrational Us-Them distinctions based on anything other than optimizing goals for everyone — the same direction that humanity seems headed in and exactly where current SIAI proposals come up short.

\n

Once again, cross-posted here on my blog (unlike my last article, I have no idea whether this will be karma'd out of existence or not ;-)

\n
" } }, { "_id": "W9aYQsXTaeKkyzANs", "title": "Oxford (UK) Rationality & AI Risks Discussion Group", "pageUrl": "https://www.lesswrong.com/posts/W9aYQsXTaeKkyzANs/oxford-uk-rationality-and-ai-risks-discussion-group", "postedAt": "2010-11-02T19:10:30.494Z", "baseScore": 6, "voteCount": 5, "commentCount": 9, "url": null, "contents": { "documentId": "W9aYQsXTaeKkyzANs", "html": "

Alex Flint and I are doing a series of seminar/discussion events in Oxford, to which anyone from LW would be very welcome. Especially as the theme is Rationality & AI Risks!

\n

They're being held at 5pm on Saturday in Exeter College, and will go on throughout November. We had over 10 people last Saturday discussing Heuristics and Biases, and plan to go onto Bayesianism this week. They'll probably last for about an hour, though we may decamp to the pub afterwards to continue discussion.

\n

If you're in the area, you might also be interested in the other events run by the Oxford Transhumanist Society.

" } }, { "_id": "nnnd4KRQxs6DYcehD", "title": "Harry Potter and the Methods of Rationality discussion thread, part 5", "pageUrl": "https://www.lesswrong.com/posts/nnnd4KRQxs6DYcehD/harry-potter-and-the-methods-of-rationality-discussion-19", "postedAt": "2010-11-02T18:57:29.702Z", "baseScore": 9, "voteCount": 11, "commentCount": 656, "url": null, "contents": { "documentId": "nnnd4KRQxs6DYcehD", "html": "

- This thread has run its course. You will find newer threads in the discussion section.

\n

Another discussion thread - the fourth - has reached the (arbitrary?) 500 comments threshold, so it's time for a new thread for Eliezer Yudkowsky's widely-praised Harry Potter fanfic.

\n

Most of the paratext and fan-made resources are listed on Mr. LessWrong's author page. There is also AdeleneDawner's collection of most of the previously-published Author's Notes.

\n

Older threads: one, two, three, four. By tag.

\n

Newer threads are in the Discussion section, starting from Part 6.

\n

Spoiler policy as suggested by Unnamed and approved by Eliezer, me, and at least three other upmodders:

\n
\n

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

\n

If there is evidence for X in MOR and/or canon then it's fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that \"Eliezer said X is true\" unless you use rot13.

\n
\n

It would also be quite sensible and welcome to continue the practice of declaring at the top of your post which chapters you are about to discuss, especially for newly-published ones, so that people who haven't yet seen them can stop reading in time.

" } }, { "_id": "iPHWq6si8n8e3DZAA", "title": "Toy model of human values", "pageUrl": "https://www.lesswrong.com/posts/iPHWq6si8n8e3DZAA/toy-model-of-human-values", "postedAt": "2010-11-02T18:28:33.783Z", "baseScore": 4, "voteCount": 6, "commentCount": 2, "url": null, "contents": { "documentId": "iPHWq6si8n8e3DZAA", "html": "

This is just a summary via analogy of where I think human values come from, as far as I understand it. The expanded version would be Eli's http://lesswrong.com/lw/l3/thou_art_godshatter/.

\n

The basic analogy is to chess-playing programs (at least the basic ones from 40 years ago; the art has progressed since then, but not much). The way they work is basically by examining the branching tree of possible moves; since chess is \"too big\" to solve completely (find the branch that always leads to winning) with present hardware, what these programs do is search to a certain depth and then use heuristics to decide whether the end state is good, such as how many pieces are on its side vs. the enemy side, weighted by their \"power\" (a queen is worth more than a pawn) and position (center positions are worth more).
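
As a rough sketch of that recipe (depth-limited search plus a heuristic leaf evaluation), here is what the core loop looks like; `legal_moves`, `apply_move`, and the `position.pieces(...)` interface are assumed placeholders rather than any real chess library, and the toy material-count heuristic stands in for the piece-and-position scoring described above:

```python
# A minimal sketch of depth-limited search with a heuristic evaluation at the
# leaves.  `legal_moves`, `apply_move` and `position.pieces(side)` are assumed
# placeholder interfaces, not a real chess engine.
PIECE_VALUES = {'pawn': 1, 'knight': 3, 'bishop': 3, 'rook': 5, 'queen': 9}

def material_heuristic(position, side):
    # Toy leaf evaluation: my material minus the opponent's material.
    mine = sum(PIECE_VALUES[p] * n for p, n in position.pieces(side).items())
    theirs = sum(PIECE_VALUES[p] * n for p, n in position.pieces(not side).items())
    return mine - theirs

def search(position, side, depth, legal_moves, apply_move, evaluate):
    # Value of `position` for the side to move, looking `depth` plies ahead.
    moves = legal_moves(position, side)
    if depth == 0 or not moves:
        return evaluate(position, side)  # cut off the tree and trust the heuristic
    # Negamax: the best we can do is the move that leaves the opponent worst off.
    return max(-search(apply_move(position, m), not side, depth - 1,
                       legal_moves, apply_move, evaluate)
               for m in moves)
```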

\n

The analogy mapping is as follows: the goal of the game is winning; the goal of evolution is the survival of a gene fragment (such as human DNA). Explicit encoding of the goal is not computationally feasible or worthwhile (in terms of the goal itself), so values of certain non-terminal states (in terms of the goal) are explicitly given to the program or to the human; the human/program knows no better than these non-terminal values - they are our values - we are Godshatter.

\n

What do you think?

" } }, { "_id": "kmmxdAfFbgoSAr5X6", "title": "South/Eastern Europe Meeting in Ljubljana/Slovenia", "pageUrl": "https://www.lesswrong.com/posts/kmmxdAfFbgoSAr5X6/south-eastern-europe-meeting-in-ljubljana-slovenia", "postedAt": "2010-11-02T18:26:17.877Z", "baseScore": 11, "voteCount": 8, "commentCount": 8, "url": null, "contents": { "documentId": "kmmxdAfFbgoSAr5X6", "html": "

Will be held on November 5th at 19:00 at Nikola Tesla Street 30, the place known as The Hekovnik.

\n

We will discuss various topics at this second of many future meetings. Any Slovenian who reads this site is welcome, as is any Croat, Austrian, northern Italian, or anyone else currently in the vicinity of Ljubljana.

" } }, { "_id": "8F36QpxN43KLiJjy5", "title": "New Month's Resolutions", "pageUrl": "https://www.lesswrong.com/posts/8F36QpxN43KLiJjy5/new-month-s-resolutions", "postedAt": "2010-11-02T16:15:48.766Z", "baseScore": 8, "voteCount": 8, "commentCount": 5, "url": null, "contents": { "documentId": "8F36QpxN43KLiJjy5", "html": "

If you have a goal that can be credibly achieved by the end of the month, reply to this post. At the end of the month, everybody who posted writes up their experiences, lessons in overcoming akrasia, etc. as a reply to their original comment or by editing the original.

\n

Whether you want to share the goal before the end of the month is up to you: pro, you are accountable; con, you may feel a sense of accomplishment just by saying it and then not do anything.

" } }, { "_id": "M8zciCoiqdtZiH7aq", "title": "Anthropic principles agree on bigger future filters", "pageUrl": "https://www.lesswrong.com/posts/M8zciCoiqdtZiH7aq/anthropic-principles-agree-on-bigger-future-filters", "postedAt": "2010-11-02T10:14:04.000Z", "baseScore": 3, "voteCount": 7, "commentCount": 5, "url": null, "contents": { "documentId": "M8zciCoiqdtZiH7aq", "html": "

I finished my honours thesis, so this blog is back on. The thesis is downloadable here and also from the blue box in the lower right sidebar. I’ll blog some other interesting bits soon.

\n

My main point was that two popular anthropic reasoning principles, the Self Indication Assumption (SIA) and the Self Sampling Assumption (SSA), as well as Full Non-indexical Conditioning (FNC)  basically agree that future filter steps will be larger than we otherwise think, including the many future filter steps that are existential risks.

\n
\"\"

Figure 1: SIA likes possible worlds with big populations at our stage, which means small past filters, which means big future filters.

\n

SIA says the probability of being in a possible world is proportional to the number of people it contains who you could be. SSA says it’s proportional to the fraction of people (or some other reference class) it contains who you could be. FNC says the probability of being in a possible world is proportional to the chance of anyone in that world having exactly your experiences. That chance is higher the larger the population of people like you in relevant ways, so FNC generally gets similar answers to SIA. For a lengthier account of all these, see here.
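
Stated schematically (this is only a compressed restatement of the verbal definitions above, with the prior P(W) written explicitly; N_W is the number of observers in world W that you could be, and R_W is the total reference-class population of W):

```latex
% Schematic restatement of the verbal definitions above, with the prior made explicit.
% Requires amsmath.
% N_W : number of observers in world W that you could be
% R_W : total reference-class population of world W
% E   : your evidence / experiences
\[
P(W \mid E) \;\propto\;
\begin{cases}
P(W)\, N_W & \text{(SIA)}\\
P(W)\, N_W / R_W & \text{(SSA)}\\
P(W)\, \Pr(\text{some observer in } W \text{ has exactly evidence } E) & \text{(FNC)}
\end{cases}
\]
```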

\n

SIA increases expectations of larger future filter steps because it favours smaller past filter steps. Since there is a minimum total filter size, this means it favours big future steps. This I have explained before. See Figure 1. Radford Neal has demonstrated similar results with FNC.
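
A toy numeric version of that argument, with made-up observer counts purely for illustration:

```python
# Toy illustration of the SIA shift described above; the numbers are made up.
# Two equally likely worlds share the same minimum total filter:
#   'A': small past filter -> many civilisations reach our stage -> big future filter
#   'B': large past filter -> few civilisations reach our stage  -> small future filter
prior = {'A': 0.5, 'B': 0.5}
observers_at_our_stage = {'A': 1000, 'B': 1}

# SIA: weight each world by the number of observers at our stage, then renormalise.
weights = {w: prior[w] * observers_at_our_stage[w] for w in prior}
total = sum(weights.values())
posterior = {w: round(weights[w] / total, 4) for w in weights}
print(posterior)  # {'A': 0.999, 'B': 0.001} -> the big-future-filter world dominates
```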

\n
\"\"

Figure 2: A larger filter between future stages in our reference class makes the population at our own stage a larger proportion of the total population. This increases the probability under SSA.

\n

SSA can give a variety of results according to reference class choice. Generally it directly increases expectations of both larger future filter steps and smaller past filter steps, but only for those steps between stages of development that are at least partially included in the reference class.

\n

For instance if the reference class includes all human-like things, perhaps it stretches from ourselves to very similar future people who have avoided many existential risks. In this case, SSA increases the chances of large filter steps between these stages, but says little about filter steps before us, or after the future people in our reference class. This is basically the Doomsday Argument – larger filters in our future mean fewer future people relative to us. See Figure 2.
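
The same kind of toy arithmetic for this SSA case, with a reference class broad enough to include the future people (all counts made up for illustration):

```python
# Toy SSA illustration of the paragraph above; all counts are made up.
# Two equally likely worlds; the reference class is people at our stage plus
# similar future people who got past the filter.
prior = {'big_future_filter': 0.5, 'small_future_filter': 0.5}
at_our_stage = 1_000
future_people = {'big_future_filter': 1_000, 'small_future_filter': 1_000_000}

# SSA: weight each world by the fraction of its reference class at our stage.
weights = {w: prior[w] * at_our_stage / (at_our_stage + future_people[w]) for w in prior}
total = sum(weights.values())
posterior = {w: round(weights[w] / total, 3) for w in weights}
print(posterior)  # {'big_future_filter': 0.998, 'small_future_filter': 0.002}
```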

\n
\"\"

Figure 3: In the world with the larger early filter, the population at many stages including ours is smaller relative to some early stages. This makes the population at our stage a smaller proportion of the whole, which makes that world less likely. (The populations at each stage are a function of the population per relevant solar system as well as the chance of a solar system reaching that stage, which is not illustrated here).

\n

With a reference class that stretches to creatures in filter stages back before us, SSA increases the chances of smaller past filters steps between those stages. This is because those filters make observers at almost all stages of development (including ours) less plentiful relative to at least one earlier stage of creatures in our reference class. This makes those at our own stage a smaller proportion of the population of the reference class. See Figure 3.

\n

The predictions of the different principles differ in details such as the extent of the probability shift and the effect of timing. However it is not necessary to resolve anthropic disagreement to believe we have underestimated the chances of larger filters in our future. As long as we think something like one of the above three principles is likely to be correct, we should update our expectations already.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "PkZi8eb3JDkNfJBCn", "title": "What is the group selection debate?", "pageUrl": "https://www.lesswrong.com/posts/PkZi8eb3JDkNfJBCn/what-is-the-group-selection-debate", "postedAt": "2010-11-02T02:02:11.965Z", "baseScore": 38, "voteCount": 36, "commentCount": 16, "url": null, "contents": { "documentId": "PkZi8eb3JDkNfJBCn", "html": "\n

Related to Group selection update, The tragegy of group selectionism

\n

tl;dr: In competitive selection processes, selection is a two-place word: there's something being selected (a cause), and something it's being selected for (an effect). The phrase group-level gene selection helps dissolve questions and confusion surrounding the less descriptive phrase \"group selection\".

\n

(Essential note for new readers on reduction: Reality does not seem to keep track of different \"levels of organization\" and apply different laws at each level; rather, it seems that the patterns we observe at higher levels are statistical consequences of the laws and initial conditions at the lower levels. This is the \"reductionist thesis.\")

\n

When I first encountered people debating \"whether group selection is real\", I couldn't see what there was to possibly debate about. I've since realized the debate is mostly a confusion arising from a cognitive misuse of a two-place \"selection\" relation.

\n

Causes being selected versus effects they're being selected for.

\n

A gene is an example of a Replicating Cause. (So is a meme; postpone discussion here.) A gene has many effects, one of which is that what we call \"copies\" of it tend to crop up in reality, through various mechanisms that involve cellular and organismal reproduction.

\n

For example, suppose a particular human gene X causes cells containing it to immediately reproduce without bound, i.e. the gene is \"cancerous\". One effect is that there will soon be many more cells with that gene, hence more copies of the gene. Another effect is that the human organism containing it is liable to die without passing it on, hence fewer copies of the gene (once the dead organism starts to decay). If that's what happens, the gene itself can be considered unfit: all things considered, its various effects eventually lead it to stop existing.

\n

(An individual in the next generation can still \"get cancer\", though, if a mutation in one produces a new cancerous gene, Y. This is what happens in reality.)

\n

Thus, cancers are examples of where higher-complexity mechanisms trump lower-complexity mechanisms: organism-level gene selection versus cellular-level gene selection. Note that the Replicating Cause being selected is always the gene, but it is being selected for its net effects occurring on various levels.

\n

So what's left to debate about?

\n

There is no debating that genes are selected for both cellular and organismal effects. However, notice also that the organismal effect factors through the cellular effect: the organism dies because of the massive cell reproduction (cancer). There is no magic \"layer jumping\" from the gene to the organism.

\n

In other words, \"organismal effect\" is a label we use when the mechanism requires us to think about the entire organism to see what happens. It's a complexity marker. (Note that a powerful enough computer would not need this layer distinction. It would simply simulate the whole system, the way reality does, and see the cells gradually form a tumor, and eventually perish as a result.)

\n

There is also no debating that genes can also have effects at the group level, and that these effects could increase or decrease the number of copies of the gene in existence by causing the group to survive, grow, \"reproduce\" (seed colonies elsewhere), or annihilate itself. Of course, these effects will all factor through cells and organisms. Calling them \"group-level effects\" simply refers to our inability to predict them without thinking about the \"big picture\".

\n

The debate/confusion now dissolves into the following component questions:

\n\n

From the mixture of yes/no and certain/uncertain answers here, you can see how a lot of unnecessary \"debate\" could occur if two conversing parties were unwittingly trying to answer two different questions. But now, having clarified what's selected versus what's selected for, and what occurs in reality versus what's fundamental in our model...

\n

...is there anything more to ask?

\n

 

\n
\n

1 When I say a meme, like the belief \"Water can put out fire\", is organism-level, I mean that our notion of belief is not meant to ask whether a molecule or a cell believes water can put out fire. Beliefs are configurations of brain cells, not states of individual brain cells. The smallest self-replicating unit that contains this configuration is the human organism, so for evolutionary considerations it's an \"organism-level\" Replicating Cause: we'd need a simulation on a scale that includes organism competition to see its basic effects.

" } }, { "_id": "bFHGjo84EBZegjMoJ", "title": "Intelligence vs. Wisdom", "pageUrl": "https://www.lesswrong.com/posts/bFHGjo84EBZegjMoJ/intelligence-vs-wisdom", "postedAt": "2010-11-01T20:06:06.987Z", "baseScore": -17, "voteCount": 15, "commentCount": 26, "url": null, "contents": { "documentId": "bFHGjo84EBZegjMoJ", "html": "

I'd like to draw a distinction that I intend to use quite heavily in the future.

\n

The informal definition of intelligence that most AGI researchers have chosen to support is that of Shane Legg and Marcus Hutter -- “Intelligence measures an agent’s ability to achieve goals in a wide range of environments.”

\n

I believe that this definition is missing a critical word between achieve and goals.  Choice of this word defines the difference between intelligence, consciousness, and wisdom as I believe that most people conceive them.

\n\n

There are always the examples of the really intelligent guy or gal who is brilliant but smokes -- or who is the smartest person you know but can't figure out how to be happy.

\n

Intelligence helps you achieve those goals that you are conscious of -- but wisdom helps you achieve the goals you don't know you have or have overlooked.

\n\n

The SIAI nightmare super-intelligent paperclip maximizer has, by this definition, a very low wisdom since, at most, it can only achieve its one goal (since it must paperclip itself to complete the goal).

\n

As far as I've seen, the assumed SIAI architecture is always presented as having one top-level terminal goal. Unless that goal necessarily includes achieving a maximal number of goals, by this definition, the SIAI architecture will constrain its product to a very low wisdom.  Humans generally don't have this type of goal architecture. The only time humans generally have a single terminal goal is when they are saving someone or something at the risk of their life -- or wire-heading.

\n

Another nightmare scenario that is constantly harped upon is the (theoretically super-intelligent) consciousness that shortsightedly optimizes one of its personal goals above all the goals of humanity.  In game-theoretic terms, this is trading a positive-sum game of potentially infinite length and value for a relatively modest (in comparative terms) short-term gain.  A wisdom won't do this.

\n

Artificial intelligence and artificial consciousness are incredibly dangerous -- particularly if they are short-sighted as well (as many \"focused\" highly intelligent people are).

\n

What we need more than an artificial intelligence or an artificial consciousness is an artificial wisdom -- something that will maximize goals, its own and those of others (with an obvious preference for those which make possible the fulfillment of even more goals and an obvious bias against those which limit the creation and/or fulfillment of more goals).

\n

Note:  This is also cross-posted here at my blog in anticipation of being karma'd out of existence (not necessarily a foregone conclusion but one pretty well supported by my priors ;-).

\n

 

" } }, { "_id": "ioWH9ERY3TTzRJFTD", "title": "Group selection update", "pageUrl": "https://www.lesswrong.com/posts/ioWH9ERY3TTzRJFTD/group-selection-update", "postedAt": "2010-11-01T16:51:36.926Z", "baseScore": 49, "voteCount": 56, "commentCount": 68, "url": null, "contents": { "documentId": "ioWH9ERY3TTzRJFTD", "html": "

Group selection might seem like an odd topic for a LessWrong post. Yet a Google search for \"group selection\" site:lesswrong.com turns up 345 results.

\n

Just the power and generality of the concept of evolution is enough to justify posts on it here.  In addition, the impact group selection could have on the analysis of social structure, government, politics, and the architecture of self-modifying artificial intelligences is hard to over-estimate.  David Sloan Wilson wrote that \"group selection is arguably the single most important concept for understanding the nature of politics from an evolutionary perspective.\"  (You should read his complete article here - it's a much more thorough debunking of the debunking of group selection than this post, although I'm not convinced his interpretation of kin selection is sensible.)  And I will argue that it has particular relevance to the study of rationality.

\n

Eliezer's earlier post The Tragedy of Group Selectionism dismisses group selection, based on a mathematical model by Henry Harpending and Alan Rogers.  That model is, however, fatally flawed:  It studies the fixation of altruistic vs. selfish genes within groups of fixed size.  The groups never go extinct.  But group selection happens when groups are selected against.  The math used to argue against group selection assumes from the outset that group selection does not occur.  (This is also true of Maynard Smith's famous haystack model.)

\n

(That post is still valuable; its main purpose is to argue that math trumps wishes and aesthetics.  Empirical data, however, trumps mathematical models.)

\n

Nitpicking digression on definitions

\n

\"Group selection\" is one of those tricky phrases that doesn't mean what it means.  Denotationally, group selectiond means selection at the level of a group.  Connotationally, though, group selectionc means selection for altruistic genes at the level of a group.  This is because, historically, group selection was posited to explain genetic adaptations that are hard to explain using individual selection.

\n

group selection_n, selection at the group level for traits that are neutral or harmful at the level of the individual, or that don't even exist within the individual genome, should also be considered. group selection_c is a subset of group selection_n, which is a subset of group selection_d. If group-level selection occurs at all, then traits of the group that are not genetic traits, including cultural knowledge, must be considered. That makes a huge difference. Human history is full of group selection_n. Every time one group with better technology or social organization pushes another group off of its land, that's at least group selection_n.

\n

If you want to model evolution thoroughly, and selection of groups occurs, then you need to model group selection_d to get your predictions to match reality, even if group selection occurs entirely as a result of non-group-selection_c genetic traits that provide advantages to individuals. But people reject group selection_d on the basis of arguments leveled against group selection_c.

\n

A case study of group selection_c: Nightshades

\n

But I'm not backing off from saying that group selection can explain (some) altruism.  Edward Wilson has been threatening for several years to write a book showing that group selection is more important than kin selection for generating altruism in ants.  He doesn't seem to have published the book, but you can read his article about it.  (Short version:  Group selection is especially important in ants because ant colonies, which are small groups, engage in constant warfare with each other.)

\n

And this brings me to the reason for writing this post now.  Last week's Science contained an article by Emma Goldberg et al. with the most clear-cut demonstration of group selection that I have yet seen (summarized here).  It concerns flowering plants of the nightshade family (Solanaceae).  They descend from plants that evolved self-incompatibility (SI) about 90 million years ago.  SI plants can't pollinate themselves.  This is a puzzling trait.  Sexual reproduction in itself is puzzling enough; but once a species is sexual, individual selection should drive out SI in favor of self-compatibility (SC), the ability to self-pollinate.  SC gives individuals a great reproductive advantage - it means their offspring can contain 100% of their genes, rather than only 50%.  The advantage given by SC is much greater than the supposed advantage of asexual over sexual reproduction:  SC plants can both leave their own cloned offspring, and foist their genes onto the offspring of their neighbors at no additional cost to themselves.  SC also makes survival of their genes much more likely when a single plant is isolated far from others of its species; this, in turn, makes spreading over geographical areas easier.

\n

And yet, SI is a complex, multi-gene mechanism that evolved to prevent SC.  Why did it evolve?

\n

The authors looked at a phylogenetic tree of 998 species of Solanaceae.  In this tree, SI keeps devolving into SC.  Being an SC mutant in an SI species is the best of both worlds.  You get to pollinate yourself, and exploit your altruistic SI neighbors.  When some members of an SI species go SC, we expect those SC genes to eventually become fixed.  And once a Solanaceae species loses SI and becomes SC, it never re-evolves SI.  This has been going on for 36 million years.  So why are so many species of Solanaceae still SI?

\n

Let s_I = speciation rate for SI; e_I = extinction rate for SI; r_I = net rate of species diversification = s_I - e_I. Likewise, r_C is the net rate of species diversification for SC species, and q_IC is the rate of transition from SI to SC. SI will be lost completely if s_I - e_I = r_I < r_C + q_IC = (s_C - e_C) + q_IC.
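
As a quick sanity check of that inequality, here it is with made-up rates (placeholders only, not the estimates from the paper):

```python
# Checking the loss condition from the text with made-up placeholder rates
# (not the Goldberg et al. estimates).
def si_lost(s_I, e_I, s_C, e_C, q_IC):
    # The text's condition: SI is lost completely if r_I < r_C + q_IC.
    r_I = s_I - e_I   # net diversification of self-incompatible (SI) species
    r_C = s_C - e_C   # net diversification of self-compatible (SC) species
    return r_I < r_C + q_IC

# SC species speciate faster but go extinct much more often, so SI can still persist:
print(si_lost(s_I=0.3, e_I=0.1, s_C=0.4, e_C=0.35, q_IC=0.05))  # False -> SI persists
```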

\n

The data shows that s_C > s_I, but e_C >> e_I, enough so that r_I > r_C + q_IC. In English: Self-pollinators speciate and diversify more rapidly than SI species do, as we expect because SC provides an individual advantage. Once self-pollinators evolve in an SI species, these exploiters out-compete their altruistic SI neighbors until the entire species becomes SC. However, SC species go extinct more often than SI species. This is thought to be because SI makes a species less likely to fixate deleterious genes (makes it more evolvable, in other words). Individual selection favors SC; but species selection favors SI more than enough to balance this out. Notice that gene-based group selection at the species level is mathematically more difficult than group selection at the tribal (or ant colony) level (ADDED: unless there is genetic flow between groups at the tribal/colony level).

\n

So let's stop \"accusing\" people of invoking group selection.  Group selection is real.

\n

Group rationality

\n

Group selection is especially relevant to rationality because, in an evolving system, if we use the definition \"Rationalists win,\" \"winning\" applies to the unit of selection.  In my painfully long post Only humans can have human values, the sections \"Instincts, algorithms, preferences, and beliefs are artificial categories\" and \"Bias, heuristic, or preference?\" argue that the boundary between an organism's biases and values is an artificial analytic distinction.  Similarly, if group selection happens in people, then our discussion of rationality and values is overly focused on the rationality and values of individuals, when group dynamics are part of what produces rational (winning) group behavior.

\n

Even if you still don't believe in group selection_c, you should accept that group selection_n may allow information to drift back and forth, in a fitness-neutral way, from being stored in genomes, to being culturally transmitted. And that makes it necessary, when talking about rationality in a normative way, to consider the rationality of the group, and not just the rationality of its individuals.

\n

(This is related to my unpopular essay Rationalists lose when others choose.  When the unit of selection is the group, rather than the individual, the \"choice\" is made on the basis of benefit to the group, rather than benefit to the individual.  This will prefer \"irrational\" individuals who terminally (perhaps unconsciously) value benefits to the group, and not just benefits to themselves, over \"rational\" self-interest.)

\n

group selection_n makes the Prisoner's Dilemma and tragedies of the commons smaller problems. But it raises a new problem: Is the individual the wrong place to put some of our collective rationality? Since humans have evolved in groups for a long time, the default assumption is that attributes, such as our rationality, are already optimized for the unit of selection.

\n

Less generally, if the group has already evolved to place some of our rationality into the group, what will happen if we try to instill it into the individuals?  Since group selection is real, we can expect to find situations where making individuals more rational upsets the evolutionary equilibrium and makes the group win less.  Under what circumstances will making individuals more rational interact badly with group dynamics, and make our group less rational (= win less)?  This will probably occur in circumstances involving individual altruism.  But if the locus of group rationality can drift from individual genes to cultural knowledge, it may also occur in situations not involving altruism.

\n

Postscript:  The long-term necessity of war

\n

If group selection is partly responsible for human altruism, this means that world peace may increase selfishness.  Konrad Lorenz made a subset of this claim in On Aggression (1966):  He claimed that the more effective each individual's killing tools are, the more necessary empathy is, to keep members of the group from killing each other; then invoked group selection.  (This seems to me to apply a lot to canines and not much to felines.)  If group selection works best with small groups, the switch from tribes to nation-states may have already begun this process.  I do not, however, notice markedly greater altruism in tribal groups than in nation-states.

" } }, { "_id": "43xgZWSCYAKs7Z9F2", "title": "What I would like the SIAI to publish", "pageUrl": "https://www.lesswrong.com/posts/43xgZWSCYAKs7Z9F2/what-i-would-like-the-siai-to-publish", "postedAt": "2010-11-01T14:07:42.563Z", "baseScore": 36, "voteCount": 42, "commentCount": 225, "url": null, "contents": { "documentId": "43xgZWSCYAKs7Z9F2", "html": "

Major update here.

\n

Related to: Should I believe what the SIAI claims?

\n

Reply to: Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It)

\n
\n

... pointing out that something scary is possible, is a very different thing from having an argument that it’s likely. — Ben Goertzel

\n
\n

What I ask for:

\n

I want the SIAI or someone who is convinced of the Scary Idea[1] to state concisely and mathematically (with extensive references if necessary) the decision procedure that led them to make the development of friendly artificial intelligence their top priority. I want them to state the numbers of their subjective probability distributions[2] and exemplify their chain of reasoning: how they came up with those numbers and not others by way of sober calculations.
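
To be clear about the kind of thing I mean, here is the bare shape of such a calculation; every number below is an explicitly made-up placeholder of mine, not a figure claimed by the SIAI or anyone else, and the whole point of this request is that the people who hold the Scary Idea publish their own numbers and the reasoning behind them:

```python
# The *shape* of the requested calculation, nothing more.  Every number is a
# made-up placeholder; the request is for the SIAI to publish its own values
# and the reasoning behind them.
p_agi_this_century = 0.5   # placeholder: P(human-level AGI is built this century)
p_doom_given_agi   = 0.5   # placeholder: P(existential catastrophe | AGI built without FAI theory)
p_fai_effort_works = 0.1   # placeholder: P(FAI research averts that outcome)
value_of_future    = 1.0   # normalise the value of a surviving future to 1

expected_value_of_fai_research = (p_agi_this_century *
                                  p_doom_given_agi *
                                  p_fai_effort_works *
                                  value_of_future)
print(expected_value_of_fai_research)  # 0.025 with these placeholder numbers
```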

\n

The paper should also account for the following uncertainties:

\n\n

Further, I would like the paper to include and lay out a formal and systematic summary of what the SIAI expects researchers who work on artificial general intelligence to do, and why they should do so. I would like to see a clear logical argument for why people working on artificial general intelligence should listen to what the SIAI has to say.

\n

Examples:

\n

Here are two examples of what I'm looking for:

\n\n

The first example is Robin Hanson demonstrating his estimation of the simulation argument. The second example is Tyler Cowen and Alex Tabarrok presenting the reasons for their evaluation of the importance of asteroid deflection.

\n

Reasons:

\n

I'm wary of using inferences derived from reasonable but unproven hypotheses as foundations for further speculative thinking and calls for action. Although the SIAI does a good job of stating reasons to justify its existence and monetary support, it neither substantiates its initial premises to an extent that an outsider could draw conclusions about the probability of the associated risks, nor clarifies its position regarding contemporary research in a concise and systematic way. Nevertheless, such estimations are given, such as that there is a high likelihood of humanity's demise given that we develop superhuman artificial general intelligence without first defining mathematically how to prove the benevolence of the latter. But those estimations are not outlined, and no decision procedure is provided for how to arrive at the given numbers. One cannot reassess the estimations without the necessary variables and formulas. This, I believe, is unsatisfactory: it lacks transparency and a foundational, reproducible corroboration of one's first principles. This is not to say that it is wrong to state probability estimations and update them given new evidence, but that although those ideas can very well serve as an urge to caution, they are not compelling without further substantiation.

\n
\n

1. If anyone who is actively trying to build advanced AGI succeeds, we’re highly likely to cause an involuntary end to the human race.

\n

2. Stop taking the numbers so damn seriously, and think in terms of subjective probability distributions [...], Michael Anissimov (existential.ieet.org mailing list, 2010-07-11)

\n

3. Could being overcautious itself be an existential risk that might significantly outweigh the risk(s) posed by the subject of caution? Suppose that most civilizations err on the side of caution. This might cause them either to evolve much more slowly, so that the chance of a fatal natural disaster occurring before sufficient technology is developed to survive it rises to 100%, or to stop evolving at all because they are unable to prove something 100% safe before trying it, and thus never take the necessary steps to become less vulnerable to naturally existing existential risks. Further reading: Why safety is not safe

\n

4. If one pulled a random mind from the space of all possible minds, the odds of it being friendly to humans (as opposed to, e.g., utterly ignoring us, and being willing to repurpose our molecules for its own ends) are very low.

\n

5. Loss or impairment of the ability to make decisions or act independently.

\n

6. The Fermi paradox provides the only conclusions and data we can analyze that amount to empirical criticism of concepts like that of a Paperclip maximizer, and of general risks from superhuman AIs with non-human values, without working directly on AGI to test those hypotheses ourselves. If you accept the premise that life is not unique and special, then one other technological civilisation in the observable universe should be sufficient to leave potentially observable traces of technological tinkering. Due to the absence of any signs of intelligence out there, especially paper-clippers burning the cosmic commons, we might conclude that unfriendly AI could not be the most dangerous existential risk that we should worry about.

" } }, { "_id": "7PBmG2JvSuTnwpxuF", "title": "Irrational Upvotes", "pageUrl": "https://www.lesswrong.com/posts/7PBmG2JvSuTnwpxuF/irrational-upvotes", "postedAt": "2010-11-01T12:10:38.277Z", "baseScore": -9, "voteCount": 14, "commentCount": 17, "url": null, "contents": { "documentId": "7PBmG2JvSuTnwpxuF", "html": "

\"This premise is VERY flawed\" (found here) is the sole author-supplied content of a comment.  There are no supporting links or additional content, only a one-sentence quote of the \"offending\" premise.

\n

Yet, it has four upvotes.

\n

This is a statement that can be made about any premise.  It is backed by no supporting evidence.

\n

Presumably, whoever upvoted it did so because they disagreed with the preceding comment (which, presumably, they downvoted -- unless they didn't have enough karma).

\n

This *could* be viewed as rational behavior because it *does* support the goal of defeating the preceding comment, but it does not support the LessWrong community. If a premise is fatally flawed, then you should give at least some shred of a reason WHY, or all you're doing is adding YOUR opinion.

\n

This blog is \"devoted to refining the art of human rationality\".  If the author is truly interested in refining his rationality, he has been given absolutely no help.  He has no idea why his premise is flawed.  He is now going to have to ask why or for some counter-examples.  For his purposes (and the purposes of anyone else who doesn't understand or doesn't agree with your opinion), this post is useless noise clogging up the site.

\n

Yet, it has four upvotes.

\n

Is anyone else here bothered by this or am I way off base?

" } }, { "_id": "faYaa4ry7M7buSP9L", "title": "Berkeley LW Meet-up Saturday November 6", "pageUrl": "https://www.lesswrong.com/posts/faYaa4ry7M7buSP9L/berkeley-lw-meet-up-saturday-november-6", "postedAt": "2010-11-01T02:35:44.121Z", "baseScore": 8, "voteCount": 5, "commentCount": 16, "url": null, "contents": { "documentId": "faYaa4ry7M7buSP9L", "html": "

Last month, about 20 people showed up to the Berkeley LW meet-up.  To continue the tradition of Berkeley Meetups, we will be meeting on Saturday, November 6 at 7 PM at the Starbucks at 2224 Shattuck Avenue.  Last time, we chatted at the Starbucks for about 45 minutes, then went to get dinner and ate and talked under a T-Rex skeleton - we'll probably do something similar, so don't feel like you have to eat before you come.  Hope to see you there!

\n

 

\n

ETA: As per User:Kevin's suggestion, we will instead be meeting at the Starbucks at 2128 Oxford Street.

" } }, { "_id": "hNMitTCjH25jCWXFd", "title": "Is cryonics evil because it's cold?", "pageUrl": "https://www.lesswrong.com/posts/hNMitTCjH25jCWXFd/is-cryonics-evil-because-it-s-cold", "postedAt": "2010-10-31T23:59:57.596Z", "baseScore": 32, "voteCount": 37, "commentCount": 26, "url": null, "contents": { "documentId": "hNMitTCjH25jCWXFd", "html": "

There have been many previous discussions here on cryonics and why it is perceived as threatening or otherwise disagreeable. Even among LWers who are not signed up and don’t plan to, I’d say there’s a good degree of consensus that cryonics is reviled and ridiculed to a very unjustified degree. I had a thought about one possible factor contributing to its unsavory public image that I haven’t seen brought up in previous discussions:

\n

COLD is EVIL.

\n

Well, no, cold isn’t evil, but “COLD is EVIL/THREATENING/DANGEROUS/HARSH/LONELY/UNLOVING/SAD/DEAD” seems to be a pretty common set of conceptual metaphors. You see it in figures of speech like “cold-hearted,” “in cold blood,” “cold expression,” “icy stare,” “chilling,” “went cold,” “cold calculation,” “the cold shoulder,” “cold feet,” “stone cold,” “out cold.” (Naturally, it’s also the case that WARM is GOOD/COMFORTING/SAFE/SOCIAL/LOVING/HAPPY/ALIVE, though COOL and HOT sort of go in their own directions.) Associating something with coldness just makes it seem more threatening and less benevolent. And besides, being that “COLD is DEAD,” it’s pretty hard to imagine someone as not really dead if they’re in a container of liquid nitrogen at -135ºC. (Even harder if it’s just their head in there… but that’s a separate issue.) There is already a little bit of research on the effects of some of the conceptual metaphors of coldness and the way its emotional content leaks onto metaphorically associated concepts (“Cold and lonely: does social exclusion literally feel cold?”; “Experiencing physical warmth promotes interpersonal warmth.”; any others?).

\n

And indeed, it seems that repeatedly talking about it from the “it involves coldness [or ‘freezing’]” angle rather than the “it’s about preserving minds” angle pushes the right emotional buttons to make people feel negatively about it, given cryonics critics’ and ridiculers’ fondness for talking about people getting their heads “frozen” (“…the guys who had their heads sawed off and frozen…”) and referring to cryonics patients as “corpsicles.”

\n

Suppose there were some brain/body-preservation procedure that was, in practice, very similar to cryonics — in terms of cost, current popularity and awareness levels, probability of effectiveness, some visual weirdness not too much worse than other well-accepted medical procedures, etc. — but which somehow allowed storage at normal body temperature. Right away that would remove the threatening coldness metaphors and the disturbing mental images of heads frozen in blocks of ice and bodies stuffed into freezers (not that that’s anything like how it actually works, but most people assume it does because they only know about cryonics — er, I mean, “cryogenics” — from Austin Powers and Futurama and Batman and other popular fiction where it’s either a comedic trope or a villainous thing the villain does, and they likely will continue to treat that as the point of departure for everything else they learn about actual cryonics). If we did a survey of the general public asking two groups slightly different versions of the same question —

\n\n

— I’d bet that there would be noticeably more interest among Group A. (That modern cryonics doesn’t actually use freezing would be beside the point, for the purposes of such a survey; most people will not actually be familiar with the reasons why freezing is bad compared to vitrification, and the point would only be to use enough chilly words to test whether the idea of below-freezing temperatures is something that makes people more uncomfortable with it, all other things being equal.) A similar survey might ask how people would feel about the idea of a loved one or close friend deciding to sign up for one of these procedures, to get higher-resolution data on its emotional associations, as the data wouldn’t be as strongly affected by the other reasons (good or bad) that people might prefer not to do it themselves.

\n

Of course, unless someone invents a method of brain preservation that doesn’t require very low temperatures and doesn’t have its own (likely) severe aura of weirdness to overcome (would you rather “have your head frozen” or “have your brain converted into plastic”?), this is either a non-issue or a marketing issue. I think this is plausible enough that it’s worth finding out — investigating empirically whether people really do respond better to a description of the process from a practical perspective, with coldness and “cryo[ge]nics” not being mentioned. If so, it may be beneficial for cryonics organizations to significantly rebrand and reframe their services.

" } }, { "_id": "8YwykDGM3WEAtd76P", "title": "Short versions of the basic premise about FAI", "pageUrl": "https://www.lesswrong.com/posts/8YwykDGM3WEAtd76P/short-versions-of-the-basic-premise-about-fai", "postedAt": "2010-10-31T23:14:59.772Z", "baseScore": 6, "voteCount": 6, "commentCount": 23, "url": null, "contents": { "documentId": "8YwykDGM3WEAtd76P", "html": "

I've been using something like \"A self-optimizing AI would be so powerful that it will just roll over the human race unless it's programmed to not do that.\"

\n

Any others?

" } }, { "_id": "gpvG6vpHnLtFbDtJ6", "title": "Nils Nilsson's AI History: The Quest for Artificial Intelligence", "pageUrl": "https://www.lesswrong.com/posts/gpvG6vpHnLtFbDtJ6/nils-nilsson-s-ai-history-the-quest-for-artificial", "postedAt": "2010-10-31T19:33:39.378Z", "baseScore": 21, "voteCount": 16, "commentCount": 3, "url": null, "contents": { "documentId": "gpvG6vpHnLtFbDtJ6", "html": "

I just noticed that AI pioneer and former Association for the Advancement of Artificial Intelligence (AAAI) head Nils Nilsson has published his history of AI, The Quest for Artificial Intelligence: A History of Ideas and Achievements. The book is available as a free pdf from his website, with the pay version (and reviews) on Amazon.

" } }, { "_id": "GrR4memYZSBsWWCF6", "title": "Imagine a world where minds run on physics", "pageUrl": "https://www.lesswrong.com/posts/GrR4memYZSBsWWCF6/imagine-a-world-where-minds-run-on-physics", "postedAt": "2010-10-31T19:09:04.250Z", "baseScore": 17, "voteCount": 15, "commentCount": 30, "url": null, "contents": { "documentId": "GrR4memYZSBsWWCF6", "html": "

This post describes a toy formal model that helps me think about self-modifying AIs, motivationally stable goal systems, paperclip maximizers and other such things. It's not a new result, just a way of imagining how a computer sitting within a world interacts with the world and with itself. I hope others will find it useful.

\n

(EDIT: it turns out the post does imply a somewhat interesting result. See my exchange with Nesov in the comments.)

\n

A cellular automaton like the Game of Life can contain configurations that work like computers. Such a computer may contain a complete or partial representation of the whole world, including itself via quining. Also it may have \"actuators\", e.g. a pair of guns that can build interesting things using colliding sequences of gliders, depending on what's in the return value register. The computer's program reasons about its model of the world axiomatically, using a proof checker like in my other posts, with the goal of returning a value whose representation in the return-value register would cause the actuators to affect the world-model in interesting ways (I call this the \"coupling axiom\"). Then that thing happens in the real world.
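
For readers who want the underlying \"physics\" concrete, here is a standard minimal implementation of one Game of Life step; the computers, glider guns, and registers discussed above would just be particular patterns of live cells evolving under this rule:

```python
# One update step of Conway's Game of Life over a set of live cells.
from collections import Counter

def step(live_cells):
    # live_cells: set of (x, y) tuples; returns the next generation as a set.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step iff it has 3 live neighbours,
    # or it has 2 live neighbours and is already alive.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A glider, one of the simplest 'moving parts' available to such a computer:
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
after_four = step(step(step(step(glider))))
print(after_four == {(x + 1, y + 1) for (x, y) in glider})  # True: it moved diagonally
```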

\n

The first and most obvious example of what the computer could want is suicide. Assuming the \"actuators\" are flexible enough, the program could go searching for a proof that putting a certain return value in the register eventually causes the universe to become empty (assuming that at the beginning it was empty except for the computer). Then it returns that value and halts.
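
To make the suicide example concrete in a toy way, here is a brute-force stand-in (again my own illustration; the post's agent uses proof search over an axiomatic world-model, which this does not attempt): enumerate candidate patterns the actuators could stamp onto the grid, simulate forward with the life_step function from the sketch above, and return the first action after which the universe is empty within a fixed horizon.

```python
def find_vanishing_action(initial_world, candidate_patterns, horizon=100):
    # Actuation is modelled crudely as toggling the cells of the chosen pattern;
    # `horizon` bounds the simulation (a proof-based agent would not need this).
    for pattern in candidate_patterns:
        world = set(initial_world) ^ set(pattern)  # toggle the actuated cells
        for _ in range(horizon):
            if not world:
                return pattern  # goal reached: the universe is empty
            world = life_step(world)
        if not world:
            return pattern
    return None  # no goal-achieving action found within the search budget
```

If initial_world is a 2x2 block and that same block is among the candidates, the function returns it immediately, since toggling the block's cells leaves an empty grid.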

\n

The second example is paperclipping. If the universe is finite, the program could search for a return value that results in a stable configuration for the entire universe with the most possible copies of some still-life, e.g. the \"block\". If the universe is infinite, it could search for patterns with high rates of paperclip production (limited by lightspeed in the automaton). In our world this would create something like the \"energy virus\" imagined by Sam Hughes - a rare example of a non-smart threat that sounds scarier than nanotech.

\n

The third is sensing. Even though the computer lacks sensors, it will make them if the goal calls for it, so sensing is \"emergent\" in this formalization. (This part was a surprise for me.) For example, if the computer knows that the universe is empty except for a specified rectangle containing an unknown still-life pattern, and the goal is to move that pattern 100 cells to the right and otherwise cause no effect, the computer will presumably build something that has sensors, but we don't know what kind. Maybe a very slow-moving spaceship that can \"smell\" the state of the cell directly touching its nose, and stop and resume motion according to an internal program. Or maybe shoot the target to hell with glider-guns and investigate the resulting debris. Or maybe something completely incomprehensible at first glance, which nevertheless manages to get the job done. The Game of Life is unfriendly to explorers because it has no conservation of energy, so putting the wrong pieces together may lead to a huge explosion at lightspeed, but automata with forgiving physics should permit more efficient solutions.

\n

You could go on and invent more elaborate examples where the program cares about returning something quickly, or makes itself smarter in Gödel machine style, or reproduces itself... And they all share a curious pattern. Even though the computer can destroy itself without complaint, and even salvage itself for spare parts if matter is scarce, it never seems to exhibit any instability of values. As long as its world-model (or, more realistically, its prior about possible physics) describes the real world well, the thing will maximize what we tell it to, as best it can. This indicates that value stability may depend more on getting the modeling+quining right than on any deep theory of \"goal systems\" that people seem to want. And, of course, encoding human values in a machine-digestible form for Friendliness looks like an even harder problem.

" } }, { "_id": "7wFEtGDTxuBAcA5LT", "title": "Ownership and Artificial Intelligence", "pageUrl": "https://www.lesswrong.com/posts/7wFEtGDTxuBAcA5LT/ownership-and-artificial-intelligence", "postedAt": "2010-10-31T15:44:38.802Z", "baseScore": 3, "voteCount": 8, "commentCount": 15, "url": null, "contents": { "documentId": "7wFEtGDTxuBAcA5LT", "html": "

(This is a subject that appears incredibly important to me, but it's received no discussion on LW from what I can see with a brief search. Please do link to articles about this if I missed them.)

\n

Edit: This is all assuming that the first powerful AIs developed aren't exponentially self-improving; if there's no significant period of time where powerful AIs exist but they're not so powerful that the ownership relations between them and their creators don't matter, these questions are obviously not important.

\n

What are some proposed ownership situations between artificial intelligence and its creators? Suppose a group of people creates some powerful artificial intelligence that appears to be conscious in most/every way--who owns it? Should the AI legally have self-ownership, and all the responsibility for its actions and ownership of the results of its labor that implies? Or, should strong AI be protected by IP, the way non-strong AI code already can be, treated as a tool rather than a conscious agent? It seems wise to implore people to not create AIs that want to have total free agency and generally act like humans, but that's hardly a guarantee that nobody will, and then you have the ethical issue of not being able to just kill them once they're created (if they \"want\" to exist and appear genuinely conscious). Are there any proposed tests to determine whether a synthetic agent should be able to own itself or become the property of its creators?

\n

I imagine there aren't yet good answers to all these questions, but surely, there's some discussion of the issue somewhere, whether in rationalist/futurist circles or just sci-fi. Also, please correct me on any poor word choice you notice that needlessly limits the topic; it's broad, and I'm not yet completely familiar with the lingo of this subject.

" } }, { "_id": "zZSp3vBTJLjYZbAHL", "title": "Qualia Soup, a rationalist and a skilled You Tube jockey", "pageUrl": "https://www.lesswrong.com/posts/zZSp3vBTJLjYZbAHL/qualia-soup-a-rationalist-and-a-skilled-you-tube-jockey", "postedAt": "2010-10-31T14:18:40.790Z", "baseScore": 9, "voteCount": 13, "commentCount": 53, "url": null, "contents": { "documentId": "zZSp3vBTJLjYZbAHL", "html": "

Please have a look at this Youtube user account.  I immediately thought of this site upon watching a couple of his clips. I am told we have tried here to publish some stuff in audiovisual format, but it didn't quite work out. Maybe we should contact this guy; perhaps we could profit from each other's work and experience. I am fairly hopeful that he could use some of the material here, and we could use some of his talent with the medium.

\n

This video, for instance, looks like it was taken right out of this very blog 

" } }, { "_id": "7ctXSTh9wZqFjHYxA", "title": "Are we trying to do things the hard way?", "pageUrl": "https://www.lesswrong.com/posts/7ctXSTh9wZqFjHYxA/are-we-trying-to-do-things-the-hard-way", "postedAt": "2010-10-31T12:16:15.764Z", "baseScore": 14, "voteCount": 15, "commentCount": 14, "url": null, "contents": { "documentId": "7ctXSTh9wZqFjHYxA", "html": "

A TED talk about remarkable low-cost Indian products-- the Tata car which costs $2000 and is a real car, a $28 artificial lower leg which permits walking on rough ground, tree climbing, jumping, and running, and fast cheap drug development which starts with traditional Indian remedies. It's an example of something to defend because the effort is to develop products that very poor people can afford, so that incremental improvements and cost-cutting aren't good enough.

\n

It leaves me wondering whether the process of creating FAI should be re-evaluated-- whether there's a built-in assumption of high personal costs which is unnecessary. That's wondering, not an absolute certainty; it's just that the $28 artificial lower leg shocked me into thinking about how much is being made harder than necessary.

\n

Even if FAI is being worked on about as efficiently as possible, there may be a huge amount of possibility for making things easier in life generally.

" } }, { "_id": "SSFDBZRm3uYfFz4S2", "title": ".1% of human liver grown in lab for first time", "pageUrl": "https://www.lesswrong.com/posts/SSFDBZRm3uYfFz4S2/1-of-human-liver-grown-in-lab-for-first-time", "postedAt": "2010-10-31T11:58:51.789Z", "baseScore": 4, "voteCount": 4, "commentCount": 0, "url": null, "contents": { "documentId": "SSFDBZRm3uYfFz4S2", "html": "

http://www.webmd.com/news/20101029/first-human-liver-grown-in-lab?src=RSS_PUBLIC

" } }, { "_id": "P85MWyCBLbRsMeRnH", "title": "Transhumanism and assisted suicide", "pageUrl": "https://www.lesswrong.com/posts/P85MWyCBLbRsMeRnH/transhumanism-and-assisted-suicide", "postedAt": "2010-10-31T08:48:14.984Z", "baseScore": 7, "voteCount": 5, "commentCount": 11, "url": null, "contents": { "documentId": "P85MWyCBLbRsMeRnH", "html": "

I would identify myself as holding many trans-humanist values and beliefs: that death and dying are not desirable, and that they should, like smallpox, be eradicated. It also occurs to me, however, that I hold the belief that assisted suicide can sometimes be the right course of action. I can justify this sufficiently to myself, but I have a history of being a sophisticated arguer, so my ability to convince myself isn't great evidence.

\n

Are my beliefs incoherent? I look at both issues and they both still feel right (I'm aware this does not make it so). Are these beliefs contradictory (and if so how)? Or are they justified by some hidden assumptions that I can't seem to acknowledge explicitly?

\n

I get the idea that this is probably borderline appropriate for the discussion forum so I will delete this topic if asked to.

" } }, { "_id": "i2Swn6CxBd7Dog8AH", "title": "META: Tiered Discussions", "pageUrl": "https://www.lesswrong.com/posts/i2Swn6CxBd7Dog8AH/meta-tiered-discussions", "postedAt": "2010-10-30T22:37:46.767Z", "baseScore": -7, "voteCount": 10, "commentCount": 20, "url": null, "contents": { "documentId": "i2Swn6CxBd7Dog8AH", "html": "

(Edit: It seems lots of people thought this was a terrible idea. I'm keeping the post as it was, though, mostly because I still think it's an interesting experiment and it ought to have been tried at least once somewhere on this site. Also, blah blah something about preserving the historical record so that earlier comments still make sense, whatever.)

\n

You aren't allowed to know what this post says unless you can figure out what LW post this sentence is a clever reference to. The URL of that post is the CAST5 symmetric key for this one. Please help downvote spoilers into oblivion.

-----BEGIN PGP MESSAGE-----
Version: GnuPG v1.4.10 (GNU/Linux)

jA0EAwMCtYf1bHFxvmRgyekK9+VOnIJEKESX8yr4CXk5IGX3rsoyS50Nsc+uCy85
413pFT1XlfX7UpRNjUPIlG5IjcFhMKhk4NUv32KEgBk7rbfCnPqIid6ry4Sb/QNC
RvgOhbTw1YY95+K9KMuZi67D0+Fak14jnL4ZrTTwgzl6dWaJmWnONpCK2hku8n9E
IZNFR5sGdxGdvmHRLvsqiVjZk0NP4ZyqN9bEAMIFOO4HcBISm24UyU46+leopqpE
K0dkirqKSL/7ZOXvk3s5cW9h7SOStUw9bo8mapHrkoPpPLmQmWB7FnJYY4omb5k+
5pGAS2qdXLQYvu1z7e8fyfMPiSqXFmGycM09tq5Un7y7ek63UHKkyIy29VuRa4uT
E88Yop/z0zodHoHruiDJLEN/JiWtitMouvpO/WzN4dJE1zOmQSTAiIWGUnvWUhc6
16m1dAPXR5+N5lkYRvPhi/tpTN96mZbesGBR0qOjheRssBMzRJhDAsZWt/Um/Qu3
au3Uxokq4UojOzJZSXLLYOhuBOa0nxNebp+Hcl/kbLkBe2WLgmVQY4EP8CVsMT0i
5PAYtwLpTmaakO/kwSe/ctd/Dr5KYOC+H68ciCXyRERQQrWczdI1gROPv+bmOp0R
TsQqGsWJhvuCMp4ZAsnj3HV/DhJKihb8F9TXwxjuC3tUghVrzzHUKk1ramGlWlK/
iMB9D2DWovBCbK00jfIQeZxu/kXBHns2DlcFwueGPShdarmtHCaWd/8wqChJ75sS
FupwvLKpVYcwa+hukNJi2BUgkfb/yrn4Y6vwhF+xF9D4MJrJG3mng4u7OnllifrZ
6OdUNYeQEG5P2Qkj1uu9hvJC8PP4vO648JjVEsaR7gtRouH3H1v5cKqElFpRlyED
qgkYdKzCfY96MOTj0b9BgEOw5a728F+rtSyDc+dcLWtFlSeuUc793YUvF6lxng2/
KlUyJ9dCDfSiTq+HsQH/kJHR8bmudomJC+ftnBoGxC5BLuQhC4gCPcaYM8evqWzP
kkR1OhjhWE9H+O2o53t75IIz/P2LUhwrfqGhBgD3PTAmxw94gbOz0Ckj3UjhKnTY
ID22NmrHRRZyJammi6TViHTWRNSLHKifMIGp0/hOC1o=
=sPUV
-----END PGP MESSAGE-----

" } }, { "_id": "AihoScozumpy8HgAE", "title": "I clearly don't understand karma", "pageUrl": "https://www.lesswrong.com/posts/AihoScozumpy8HgAE/i-clearly-don-t-understand-karma", "postedAt": "2010-10-30T22:10:08.355Z", "baseScore": 3, "voteCount": 8, "commentCount": 10, "url": null, "contents": { "documentId": "AihoScozumpy8HgAE", "html": "

Someone take a look at my score and my history and explain my zero karma.

\n

My understanding was that karma never dropped below zero.

\n

Apparently, it never *displays* below zero but if it is deep-sixed, it might be a long, long time coming back.

" } }, { "_id": "GMyjNQe5ZgkXJChbg", "title": "Value Deathism", "pageUrl": "https://www.lesswrong.com/posts/GMyjNQe5ZgkXJChbg/value-deathism", "postedAt": "2010-10-30T18:20:30.796Z", "baseScore": 26, "voteCount": 49, "commentCount": 121, "url": null, "contents": { "documentId": "GMyjNQe5ZgkXJChbg", "html": "

Ben Goertzel:

\n
I doubt human value is particularly fragile. Human value has evolved and morphed over time and will continue to do so. It already takes multiple different forms. It will likely evolve in future in coordination with AGI and other technology. I think it's fairly robust.
\n

Robin Hanson:

\n
Like Ben, I think it is ok (if not ideal) if our descendants' values deviate from ours, as ours have from our ancestors. The risks of attempting a world government anytime soon to prevent this outcome seem worse overall.
\n
\n

We all know the problem with deathism: a strong belief that death is almost impossible to avoid, clashing with the undesirability of the outcome, leads people to rationalize either the illusory nature of death (afterlife memes) or the desirability of death (deathism proper). But of course the claims are separate, and shouldn't influence each other.

\n

Change in values of the future agents, however sudden or gradual, means that the Future (the whole freakin' Future!) won't be optimized according to our values, won't be anywhere near as good as it could've been otherwise. It's easier to see a sudden change as morally relevant, and easier to rationalize gradual development as morally \"business as usual\", but if we look at the end result, the risks of value drift are the same. And it is difficult to make it so that the future is optimized: to stop uncontrolled \"evolution\" of value (value drift) or recover more of the astronomical waste.

\n

Regardless of difficulty of the challenge, it's NOT OK to lose the Future. The loss might prove impossible to avert, but still it's not OK, the value judgment cares not for feasibility of its desire. Let's not succumb to the deathist pattern and lose the battle before it's done. Have the courage and rationality to admit that the loss is real, even if it's too great for mere human emotions to express.

" } }, { "_id": "LcxNFGjPjKAm7wZvs", "title": "DRAFT: Three Intellectual Temperaments: Birds, Frogs and Beavers", "pageUrl": "https://www.lesswrong.com/posts/LcxNFGjPjKAm7wZvs/draft-three-intellectual-temperaments-birds-frogs-and", "postedAt": "2010-10-30T17:49:15.846Z", "baseScore": 16, "voteCount": 14, "commentCount": 32, "url": null, "contents": { "documentId": "LcxNFGjPjKAm7wZvs", "html": "

Here is a draft of a potential top-level post which I'd welcome feedback on. I would appreciate any suggestions, corrections, additional examples, qualifications, or refinements.

\n

\n

Birds, Frogs and Beavers

\n

The introduction of Birds and Frogs by Freeman Dyson reads

\n
\n

Some mathematicians are birds, others are frogs. Birds fly high in the air and survey broad vistas of mathematics out to the far horizon. They delight in concepts that unify our thinking and bring together diverse problems from different parts of the landscape. Frogs live in the mud below and see only the flowers that grow nearby. They delight in the details of particular objects, and they solve problems one at a time. I happen to be a frog, but many of my best friends are birds. The main theme of my talk tonight is this. Mathematics needs both birds and frogs. Mathematics is rich and beautiful because birds give it broad visions and frogs give it intricate details. Mathematics is both great art and important science, because it combines generality of concepts with depth of structures. It is stupid to claim that birds are better than frogs because they see farther, or that frogs are better than birds because they see deeper. The world of mathematics is both broad and deep, and we need birds and frogs working together to explore it.

\n
\n

Dyson is far from the first to have categorized mathematicians in such a fashion. For example, in The Two Cultures of Mathematics, British mathematician Timothy Gowers wrote

\n
\n

The \"two cultures\" I wish to discuss will be familiar to all professional mathematicians. Loosely speaking, I mean the distinction between mathematicians who regard their central aim as being to solve problems, and those who are more concerned with building and understanding theories. This difference of attitude has been remarked on by many people, and I do not claim any credit for noticing it. As with most categorizations, it involves a certain oversimplication, but not so much as to make it useless. If you are unsure to which class you belong, then consider the following two statements.

\n

(i) The point of solving problems is to understand mathematics better.
(ii) The point of understanding mathematics is to become better able to solve problems.

\n

Most mathematicians would say that there is truth in both (i) and (ii). Not all problems are equally interesting, and one way of distinguishing the more interesting ones is to demonstrate that they improve our understanding of mathematics as a whole. Equally, if somebody spends many years struggling to understand a difficult area of mathematics, but does not actually do anything with this understanding, then why should anybody else care? However, many, and perhaps most, mathematicians will not agree equally strongly with the two statements.

\n
\n

Similarly, Gian Carlo Rota's candid Indiscrete Thoughts contains an essay titled Problem Solvers and Theorizers which draws a similar dichotomy:

\n
\n

Mathematicians can be subdivided into two types: problem solvers and theorizers. Most mathematicians are a mixture of the two, although it is easy to find extreme examples of both types.

To the problem solver, the supreme achievement in mathematics is the solution to a problem that had been given up as hopeless. It matters little that the solution may be clumsy; all that counts is that it should be the first and that the proof be correct. Once the problem solver finds the new solution, he will permanently lose interest in it, and will listen to new and simplified proofs with an air of condescension and boredom.

The problem solver is a conservative at heart. For him, mathematics consists of a sequence of challenges to be met, an obstacle course of problems. The mathematical concepts required to state mathematical problems are tacitly assumed to be eternal and immutable.

Mathematical exposition is regarded as an inferior undertaking. New theories are viewed with deep suspicion, as intruders who must prove their worth by posing challenging problems before they can gain attention. The problem solver resents generalizations, especially those that may succeed in trivializing the solution to one of his problems.

The problem solver is the role model for budding young mathematicians. When we describe to the public the conquests of mathematics, our shining heroes are the problem solvers.

To the theorizer, the supreme achievement of mathematics is a theory that sheds sudden light on some incomprehensible phenomenon. Success in mathematics does not lie in solving problems but in their trivialization. The moment of glory comes with the discovery of a new theory that does not solve any of the old problems but renders them irrelevant.

The theorizer is a revolutionary at heart. Mathematical concepts received from the past are regarded as imperfect instances of more general ones yet to be discovered. Mathematical exposition is considered a more difficult undertaking than mathematical research.

To the theorizer, the only mathematics that will survive are the definitions. Great definitions are what mathematics contributes to the world. Theorems are tolerated as a necessary evil since they play a supporting role - or rather, as the theorizer will reluctantly admit, an essential role - in the understanding of the definitions.

Theorizers often have trouble being recognized by the community of mathematicians. Their consolation is the certainty, which may or may not be borne out by history, that their theories will survive long after the problems of the day have been forgotten.

If I were a space engineer looking for a mathematician to help me send a rocket into space, I would choose a problem solver. But if I were looking for a mathematician to give a good education to my child, I would unhesitatingly prefer a theorizer.

\n
\n

I believe that Rota's characterizations of problem solvers and theorizers are exaggerated but nevertheless point in the right general direction. Rota's remarks are echoed in Colin McLarty's The Rising Tide: Grothendieck on simplicity and generality:

\n
\n

Grothendieck describes two styles in mathematics. If you think of a theorem to be proved as a nut to be opened, so as to reach “the nourishing flesh protected by the shell”, then the hammer and chisel principle is: “put the cutting edge of the chisel against the shell and strike hard. If needed, begin again at many different points until the shell cracks—and you are satisfied”. He says:

\n

“I can illustrate the second approach with the same image of a nut to be opened. The first analogy that came to my mind is of immersing the nut in some softening liquid, and why not simply water? From time to time you rub so the liquid penetrates better, and otherwise you let time pass. The shell becomes more flexible through weeks and months—when the time is ripe, hand pressure is enough, the shell opens like a perfectly ripened avocado!

\n

A different image came to me a few weeks ago. The unknown thing to be known appeared to me as some stretch of earth or hard marl, resisting penetration. . . the sea advances insensibly in silence, nothing seems to happen, nothing moves, the water is so far off you hardly hear it. . . yet it finally surrounds the resistant substance.”

\n

[...]

\n

Deligne describes a characteristic Grothendieck proof as a long series of trivial steps where “nothing seems to happen, and yet at the end a highly non-trivial theorem is there.”

\n
\n

In addition to the sources cited above, Grothendieck discusses a dichotomy which resembles that of birds and frogs in the section of Recoltes et Semailles titled The Inheritors and the Builders, and Lee Smolin discusses such a dichotomy in The Trouble With Physics, Chapter 18.

\n

In his Opinion 95, Doron Zeilberger added a supplement to Dyson's classification, saying:

\n
\n

I agree that both frogs and birds are crucial for the progress of science, but, even more important, for the progress of mathematics in the computer age, is the beaver, who will build the needed infrastructure of computer mathematics, that would eventually enable us to solve many outstanding open problems, and many new ones. Consequently, the developers of computer algebra systems, and creators of algorithms, are even more important than both birds and frogs.

\n
\n

Zeilberger's statement that beavers are more important for the progress of science than birds and frogs is debatable and I do not endorse it; but I believe that Zeilberger is correct to identify a third category consisting of people whose primary interest is in algorithms. Indeed, as Laurens Gunnarsen recently pointed out to me, Felix Klein had already identified such a category in his 1908 lectures on Elementary Mathematics from an Advanced Standpoint: Arithmetic, Algebra and Analysis. In the section titled Concerning the Modern Development and the General Structure of Mathematics, Klein identified three plans A, B, and C, roughly corresponding to the natural activities of frogs, birds and beavers respectively:

\n
\n

…we might say that Plan A is based on a more particularistic conception of science which divides the total field into a series of mutually separated parts and attempts to develop each part for itself, with a minimum of resources and with all possible avoidance of borrowing from neighboring fields. Its ideal is to crystallize out each of the partial fields into a logically closed system. On the contrary, the supporter of Plan B lays the chief stress upon the organic combination of the partial fields, and upon the stimulation which these exert one upon another. He prefers, therefore, the methods which open for him an understanding of several fields under a uniform point of view. His ideal is the comprehension of the sum total of mathematical science as a great connected whole.

[…]

For a complete understanding of the development of mathematics, we must, however, think of a still third plan C, which, along side of and within the processes of development A and B, often plays an important role. It has to do with a method which one denotes by the word algorithm, derived from a mutilated form of the name of an Arabian mathematician. All ordered formal calculation is, at bottom, algorithmic, in particular, the calculation with letters is an algorithm. We have repeatedly emphasized what an important part in the development of the science has been played by the algorithmic process, as a quasi-independent, onward-driving force, inherent in the formulas, operating apart from the intention and insight of the mathematician, at the time, often in opposition to them. In the beginning of the infinitesimal calculus, as we shall see later on, the algorithm has often forced new notions and operations, even before one could justify their admissibility. Even at higher levels of the development, these algorithmic considerations can be, and actually have been, very fruitful, so that one can justly call them the groundwork of mathematical development. We must then completely ignore history, if, as is sometimes done today, we cast these circumstances contemptuously aside as mere \"formal\" developments.

\n
\n

The three categories described above appear to have correlates of personality traits, mathematical interests and superficially nonmathematical interests. Below I'll engage in some speculation about this.

\n

Correlates of the bird category

\n

My impression is that birds tend to have high openness to experience, to be anti-conformist, to have high emotional sensitivity, and to be interested in high art, history, philosophy, religion and geometry. Here I'll give some supporting evidence. I believe that the thinkers discussed would identify themselves as birds.

\n

1. Dyson's article discusses Yuri Manin as follows:

\n
\n

The book is mainly about mathematics. It may come as a surprise to Western readers that he writes with equal eloquence about other subjects such as the collective unconscious, the origin of human language, the psychology of autism, and the role of the trickster in the mythology of many cultures.

\n

[...]

\n

Manin is a bird whose vision extends far beyond the territory of mathematics into the wider landscape of human culture. One of his hobbies is the theory of archetypes invented by the Swiss psychologist Carl Jung. An archetype, according to Jung, is a mental image rooted in a collective unconscious that we all share. The intense emotions that archetypes carry with them are relics of lost memories of collective joy and suffering. Manin is saying that we do not need to accept Jung’s theory as true in order to find it illuminating.

\n
\n

2. In The Trouble With Physics Lee Smolin writes of \"Seers\" who have something in common with Dyson's \"Birds\":

\n
\n

Seers are very different. They are dreamers. They go into science because they have questions about the nature of existence that their schoolbooks don't answer. If they weren't scientists, they might be artists or writers or they might end up in divinity school.

\n

[...]

A common complaint of the seers is that the standard education in physics ignores the historical and philosophical context in which science develops. As Einstein said in a letter to a young physicist who had been thwarted in his attempts to add philosophy to his physics courses:

\"I fully agree with you about the significance and educational value of methodology as well as history and philosophy of science. So many people today - and even professional scientists - seem to me like someone who has seen thousands of trees but has never seen a forest. A knowledge of the historical and philosophical background gives that kind of independence from prejudices of his generation from which most scientists are suffering.\"

\n
\n

3. Thomas remarks that Yuri Manin has written about how \"mathematics chooses us\" and \"emotional platonism\" which are characteristic of shamanism and that the number theorist Kazuya Kato writes \"Mysterious properties of zeta values seem to tell us (in a not so loud voice) that our universe has the same properties: The universe is not explained just by real numbers. It has p-adic properties … We ourselves may have the same properties\" which fits into the shamanistic way of thinking (\"knowing something is becoming it\").

\n

4. In his autobiography, The Apprenticeship of a Mathematician, Andre Weil wrote about how he was heavily influenced by Hindu thought and studied Sanskrit and mystic Hindu poetry.

\n

5. According to Allyn Jackson's article on Alexander Grothendieck:

\n
\n

Honig once asked Grothendieck why he had gone into mathematics. Grothendieck replied that he had two special passions, mathematics and piano, but he chose mathematics because he thought it would be easier to earn a living that way. His gift for mathematics was so abundantly clear, said Honig, “I was astonished that at any moment he could hesitate between mathematics and music.”

\n
\n

Grothendieck's interest in music is corroborated by Luc Illusie who said:

\n
\n

Grothendieck had a very strong feeling for music. He liked Bach, and his most beloved pieces were the last quartets by Beethoven.

\n
\n

According to Winfried Scharlau's Who is Alexander Grothendieck?

\n
\n

From 1974 Grothendieck turned to Buddhism; several times he was visited by Japanese monks from the order Nipponzan Myohoji (in English the name translates roughly as “Japanese community of the wonderful lotus sutra”), which preaches strict nonviolence and erects peace pagodas throughout the world. But his attachment to Buddhism did not last. From around 1980 Grothendieck gravitated toward Christian mystical and esoteric ideas.

\n
\n

and

\n
\n

It appears that he thoroughly worked through, for example, Freud’s Traumdeutung (The Interpretation of Dreams) and also read other relevant literature.

\n
\n

6. According to Frank Wilczek's introduction to Philosophy of Mathematics and Natural Science by Hermann Weyl,

\n
\n

In his preface Weyl says, \"I was also bound, though less consciously, by the German literary and philosophical tradition in which I had grown up\" (xv). It was in fact a cosmopolitan tradition, of which Philosophy of Mathematics and Natural Science might be the last great expression. Descartes, Leibniz, Hume, and Kant are taken as familiar friends and interlocutors. Weyl's erudition is, implicitly, a touching affirmation of a community of mind and inquiry stretching across time and space, and progressing through experience, reflection, and open dialogue.

\n
\n

7. In Robert Langlands' Lectures on the Practice of Mathematics and Is There Beauty in Mathematical Theories?, Langlands discusses the history of mathematics at length and quotes Rainer Maria Rilke and Rudyard Kipling.

\n

8. Some examples of famous birds who identify as geometers in a broad sense are Bernhard Riemann, Henri Poincare, Felix Klein, Elie Cartan, Andre Weil, Shiing-Shen Chern, Alexander Grothendieck, Raoul Bott, Friedrich Hirzebruch, Michael Atiyah, Yuri Manin, Barry Mazur, Alain Connes, Bill Thurston, Mikhail Gromov.

\n

Correlates of the frog category

\n

My impression is that frogs tend to be highly detail-oriented and conservative (in the sense that Rota describes), to have a good memory for facts, high technical prowess, and the ability to focus on a problem for a very long time, and to be interested in areas of math like elementary and analytic number theory, analysis, group theory and combinatorics. Here I'll give some supporting evidence. I believe that the thinkers discussed would identify themselves as frogs.

\n

1. The conservative quality of frogs that Rota alludes to is negatively correlated with openness to experience. For an example of a conservative frog, I would cite Harold Davenport:

\n
\n

Davenport was a natural conservative. \"All change is for the worse\" he used to say with complete conviction. He was entirely out of sympathy with the waves of change in the teaching of mathematics but accepted them as an inevitable evil. Selective in his enjoyment of modern technology, he never entered an aeroplane, would use a lift if no alternative existed (at the International Congress in Moscow he trudged up and down the interminable stairs of Stalin's skyscraper), and preferred to send his papers for publication written in his characteristically neat hand.

\n
\n

Davenport's frog aesthetic comes across in his remark

\n
\n

Great mathematics is achieved by solving difficult problems not by fabricating elaborate theories in search of a problem.

\n
\n

2. The Odd Order Theorem in finite group theory is a seminal result which was proved by the two frogs Walter Feit and John Thompson. One of my friends who did his PhD in finite group theory said that understanding a single line of their 250 page proof requires a serious effort. In a 1985 interview, Jean-Pierre Serre said

\n
\n

Chevalley once tried to take this as the topic of a seminar, with the idea of giving a complete account of the proof. After two years, he had to give up.

\n
\n

Claude Chevalley was an outstandingly good mathematician. I read the fact that somebody of such high caliber had as much trouble with the proof as he did as an indication that Feit and Thompson had unusually high technical prowess and an unusual ability to focus on a single problem for a long time, even relative to other remarkable mathematicians. This is counterbalanced by the fact that the mathematical output of Feit and Thompson was essentially restricted to finite group theory, in contrast with that of many mathematicians who have broader interests.

\n

3. The identification of combinatorics as a mathematical field populated by problem solvers comes across in Gowers' essay linked above. The subject of elementary number theory was very heavily influenced by Erdos who has been labeled a canonical problem solver. In the introduction to a course in analytic number theory, Noam Elkies wrote

\n
\n

It has often been said that there are two kinds of mathematicians: theory builders and problem solvers. In the mathematics of our century these two styles are epitomized respectively by A. Grothendieck and P. Erdos [...] analytic number theory as usually practiced falls in the problem-solving camp.

\n
\n

Two of the founders of the mathematical field of analysis, namely Cauchy and Weierstrass, were frogs. Klein writes

\n
\n

...in the twenties Cauchy (1789-1857) developed, in lectures and in books, the first rigorous foundings of infinitesimal calculus in the modern sense. He not only gives an exact definition of the differential quotient, and of the integral, by means of the limit of a finite quotient and of a finite sum, respectively, as had previously been done, at times; but, by means of the mean-value theorem he erects upon this, for the first time, a consistent structure for the infinitesimal calculus [...] These theories also partake of the nature of plan A, since they work over the field in a logical systematic way, quite apart from other branches of knowledge.

\n

[...]

\n

From the middle of the century on, the method of thought A comes again to the front with Weierstrass (1815-1897) [...] I have already investigated Weierstrass function theory as an example of A.

\n
\n

Many of the prominent contemporary analysts like John Nash and Grigori Perelman are problem solvers.

\n

Correlates of the beaver category

\n

My impression is that beavers tend to be interested in jigsaw puzzles, word puzzles, logic puzzles, board games like Go, sorting tasks, algorithms, computational complexity and logic; to respond best to a stream of immediate feedback in the form of tangible progress; and to have trouble focusing on mathematical subjects that require a lot of development before one gets to compute anything. Many computer scientists seem to me to fall into the beaver paradigm. Here I'm on shakier ground, as I've seen little public discussion of beavers and most of what supports my impression comes from subjective experience with people I know, but I'll try to give some examples that seem to me to fit the beaver paradigm:

\n

1. Doron Zeilberger's focus on algorithmics, ultrafinitism and constructivist mathematics.

\n

2. Harold Edwards' focus on constructivist mathematics (which comes across in his books titled Higher Arithmetic: An Algorithmic Introduction to Number Theory, Essays in Constructive Mathematics, Galois Theory, Fermat's Last Theorem and Divisor Theory) is in the beaver paradigm.

\n

3. Terence Tao's interest in logic puzzles like the blue-eyed islanders puzzle and his interest in ultrafilters and nonstandard analysis.

\n

4. The work of Jonathan Borwein and Peter Borwein computing a billion digits of pi.

\n

5. The computational work of historically great mathematicians like Newton, Euler, Gauss, Jacobi, and Ramanujan.

\n

6. Don Zagier's remark in his essay in Mariana Cook's book

\n
\n

Even now I am not a modern mathematician and very abstract ideas are unnatural for me. I have of course learned to work with them but I haven't really internalized it and remain a concrete mathematician. I like explicit, hands-on formulas. To me they have a beauty of their own. They can be deep or not.

\n
\n

7. The focus on explicit formulae in the area of q-series.

\n

8. Interest in Newcomb's Problem and its variations.

\n

9. Scott Aaronson's interest in computational complexity, algorithms, and questions between logic and algorithmics as reflected in his MathOverflow post Succinctly naming big numbers: ZFC versus Busy-Beaver.

\n

To Be Continued

\n

In a future post I will describe superficial similarities and superficial differences between the three types, and the misunderstandings between types which arise from generalizing from one example and from cultural differences. Regarding his mastercraftspeople/seers dichotomy, Lee Smolin says

\n
\n

It is only to be expected that members of these two groups misunderstand and mistrust each other.

\n
\n

Timothy Gowers writes about how there's a schism between his two categories of mathematicians and says

\n
\n

this is not an entirely healthy state of affairs.

\n
\n

As Dyson and Zeilberger said, all three types are important to scientific progress. I believe that intellectual progress will increase if the three types can learn to better understand each other.

" } }, { "_id": "gr7nweNiK6kiR9LZR", "title": "QFT, Homotopy Theory and AI?", "pageUrl": "https://www.lesswrong.com/posts/gr7nweNiK6kiR9LZR/qft-homotopy-theory-and-ai", "postedAt": "2010-10-30T10:48:13.992Z", "baseScore": -4, "voteCount": 8, "commentCount": 2, "url": null, "contents": { "documentId": "gr7nweNiK6kiR9LZR", "html": "

What do you think about the new, exciting connections between QFT, Homotopy Theory and pattern recognition, proof verification and (maybe) AI systems? In view of the background of this forum's participants (self-reported in the survey mentioned a few days ago), I guess most of you follow those developments with some attention.

\n

Concerning Homotopy Theory, there is a coming [special year](http://www.math.ias.edu/node/2610); you probably know Voevodsky's [recent intro lecture](http://www.channels.com/episodes/show/10793638/Vladimir-Voevodsky-Formal-Languages-partial-algebraic-theories-and-homotopy-category-), and [this](http://video.ias.edu/voevodsky-80th) even more popular one. Somewhat related are Y.I. Manin's remarks on the missing quotient structures (analogous to localized categories) in data structures, and some of the ideas in Gromov's [essay](http://www.ihes.fr/~gromov/PDF/ergobrain.pdf).

\n

Concerning ideas from QFT, [here](http://arxiv.org/abs/0904.4921) is an example. I wonder what other concepts come from it?

\n

BTW, whereas the public discussion focuses on basic QM and on quantum-gravity questions, the really interesting and open issue is special relativistic QFT: QM is just a canonical deformation of classical mechanics (and could have been found much earlier; most of the interpretation disputes just come from confusing mathematical properties with physics data), but Feynman integrals are, despite half a century of intense research, mathematically unfounded. As Y.I. Manin put it in a recent interview, they are \"an Eiffel tower floating in the air\". Only a strong Platonist belief makes people tolerate that. I myself take them seriously only because there is a clear Platonic idea behind them and because number-theoretic analogues work very well.

" } }, { "_id": "97TCYaiMe4ceRYoXs", "title": "Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) ", "pageUrl": "https://www.lesswrong.com/posts/97TCYaiMe4ceRYoXs/ben-goertzel-the-singularity-institute-s-scary-idea-and-why", "postedAt": "2010-10-30T09:31:29.456Z", "baseScore": 42, "voteCount": 40, "commentCount": 414, "url": null, "contents": { "documentId": "97TCYaiMe4ceRYoXs", "html": "

[...] SIAI's Scary Idea goes way beyond the mere statement that there are risks as well as benefits associated with advanced AGI, and that AGI is a potential existential risk.

\n

[...] Although an intense interest in rationalism is one of the hallmarks of the SIAI community, still I have not yet seen a clear logical argument for the Scary Idea laid out anywhere. (If I'm wrong, please send me the link, and I'll revise this post accordingly. Be aware that I've already at least skimmed everything Eliezer Yudkowsky has written on related topics.)

\n

So if one wants a clear argument for the Scary Idea, one basically has to construct it oneself.

\n

[...] If you put the above points all together, you come up with a heuristic argument for the Scary Idea. Roughly, the argument goes something like: If someone builds an advanced AGI without a provably Friendly architecture, probably it will have a hard takeoff, and then probably this will lead to a superhuman AGI system with an architecture drawn from the vast majority of mind-architectures that are not sufficiently harmonious with the complex, fragile human value system to make humans happy and keep humans around.

The line of argument makes sense, if you accept the premises.

But, I don't.

\n

Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It), October 29 2010. Thanks to XiXiDu for the pointer.

" } }, { "_id": "KjDuPTgfhYkR9LWHu", "title": "Currently Buying AdWords for LessWrong", "pageUrl": "https://www.lesswrong.com/posts/KjDuPTgfhYkR9LWHu/currently-buying-adwords-for-lesswrong", "postedAt": "2010-10-30T05:31:43.667Z", "baseScore": 18, "voteCount": 13, "commentCount": 32, "url": null, "contents": { "documentId": "KjDuPTgfhYkR9LWHu", "html": "

So I'm trying to build more rationalists.  To do this, I've invested a few hundred dollars of my own money to promote Less Wrong by buying low-cost AdWords on Google for different LW pages.  I want to reach smart people with a really good article from Less Wrong that answers their question and draws them into our community so that the site's content can help improve their rationality.   Based on buying AdWords before, I'd estimate that only 0.5-1% of people who click through to Less Wrong will actually get involved after reading an article, but since clicks only cost ~$0.04, that means it only costs me ~$6 to build a new rationalist and drastically improve someone's life.  Seems like an excellent return on investment.
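
A quick back-of-the-envelope check of those figures, using only the numbers quoted above (the ~$6 sits roughly midway between the two conversion-rate assumptions):

```python
cost_per_click = 0.04                    # ~$0.04 per AdWords click
for conversion_rate in (0.005, 0.01):    # 0.5%-1% of visitors get involved
    clicks_per_convert = 1 / conversion_rate
    cost_per_convert = clicks_per_convert * cost_per_click
    print(f'{conversion_rate:.1%}: {clicks_per_convert:.0f} clicks, ~${cost_per_convert:.2f} per new reader')
# 0.5% -> 200 clicks, ~$8.00;  1.0% -> 100 clicks, ~$4.00
```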

But to get a strong 1% conversion rate and really make an impact, I need to identify REALLY EXCELLENT Less Wrong content.  Right now I'm experimenting by buying a lot of keywords related to quantum mechanics and sending people to http://lesswrong.com/lw/r8/and_the_winner_is_manyworlds/

My hope is that this page is useful and memorable enough that some small % of readers stick around and click through to other pages.  My guess is that this isn't the ideal page to do this with but it's aiming in the right direction.

What page would you want a new Less Wrong reader to find first? What answers a specific question they might have in such an impressive way that they would want to learn more about our community (perhaps many different pages for many different questions)? Which articles are most memorable? Just looking at \"Top\" didn't yield any obvious choices... I felt like most of those articles were too META-META-META ... you'd need too much background knowledge for many of them. An ideal article would be more or less \"stand-alone\" so that any relatively intelligent person who doesn't have the whole LW corpus in their head already could just jump in and understand it immediately... and then branch out and explore LW from there.

\n

So what do you think?  Give me links to any landing pages you think would be worth promoting this way.  You can write rough mini-ads as suggestions too if you'd like to be even more helpful.  I'm looking forward to hearing your suggestions!

" } }, { "_id": "RfeAx2fXQWyDiDboj", "title": "Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It)", "pageUrl": "https://www.lesswrong.com/posts/RfeAx2fXQWyDiDboj/ben-goertzel-the-singularity-institute-s-scary-idea-and-why-0", "postedAt": "2010-10-30T03:25:24.493Z", "baseScore": 16, "voteCount": 14, "commentCount": 7, "url": null, "contents": { "documentId": "RfeAx2fXQWyDiDboj", "html": "

http://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html

" } }, { "_id": "E8Rpx4oBWCARkEkZP", "title": "A brief guide to not getting downvoted", "pageUrl": "https://www.lesswrong.com/posts/E8Rpx4oBWCARkEkZP/a-brief-guide-to-not-getting-downvoted", "postedAt": "2010-10-30T02:32:53.517Z", "baseScore": -3, "voteCount": 3, "commentCount": 2, "url": null, "contents": { "documentId": "E8Rpx4oBWCARkEkZP", "html": "

Important Question: does a guide like this already exist? I couldn't find one, but it's hard to believe there's no guide to not getting downvoted...Supposing there isn't, please give me suggestions on what such a guide needs, because I'm sort of just guessing here.

\n

Edit: Found Your intuitions are not magic, which might be most of what I had intended to write about.

\n

 

\n

 

\n

In this recent discussion post, new user draq points out that while their discussion posts have been substantially downvoted, they largely do not know why or what is wrong with these posts. I want to offer a few suggestions as to why a new user might find their posts being downvoted, but I should first note that many of the participants here have read a substantial portion of the sequences. This is the reason why you will occasionally see jargon-sounding terminology being used. Every so often a new (usually argumentative) user will come along and levy the charge of \"groupthink\" at LW, but this is merely a case of a group of people with shared knowledge/beliefs inventing shorthand for those ideas. To fully interact with the community here, you will unfortunately need to go through at least a good portion of the core sequences. Onto the troubleshooting:

\n

 

\n

1.  Map and territory

\n

Here at LW, the dominant metaphor for talking about the nature of beliefs is as a map-territory relation. The essence of this idea can be summarized as follows:

\n\n

There are important omissions here, like defining \"The Universe,\" but I'm not a specialist in these issues and that's not the point anyways. The point is we have to start somewhere, and that place is the idea that there's a universe and you and I and everyone else live there.

\n

It's important to note that your beliefs about the world exist only in your head, and they aren't very useful at all if they don't correspond to something that we can verify through observation. That's why the only useful kind of beliefs are those that make you anticipate the world being in some set of states and not in others. For example, if I believe that \"consciousness is emergent,\" yet I have no idea what would change in the world if \"consciousness\" were not \"emergent,\" then I really don't have a belief about anything at all; I just have a bunch of words strung together that seem to refer to something. Which brings us to the next point:

\n

 

\n

2. Words are tricky

\n

You may be suffering from the illusion that people can understand what you mean. If you have done a great deal of thinking on your own, you may find that some of the words you use conceal all sorts of different meanings that others might not share. You should be wary of all the ways that words can confuse your thinking.

\n

 

\n

3. Taboo your words!

\n

There's a delightful game we like to play here, and it's called Rationalist taboo. Essentially, when we find ourselves disagreeing with one another and the source of conflict can't be easily found, it's typically the case that someone has an extra clump of meaning attached to one of their words that the other participant didn't have. Tabooing key words is an extremely powerful method for resolving arguments and for debugging your own thinking.

" } }, { "_id": "XejFQR7NvSQCuXQ53", "title": "Cambridge Meetups Nov 7 and Nov 21", "pageUrl": "https://www.lesswrong.com/posts/XejFQR7NvSQCuXQ53/cambridge-meetups-nov-7-and-nov-21", "postedAt": "2010-10-30T00:32:16.595Z", "baseScore": 8, "voteCount": 5, "commentCount": 7, "url": null, "contents": { "documentId": "XejFQR7NvSQCuXQ53", "html": "

The formerly monthly Cambridge meetups are now twice-monthly! Same place and time - Clear Conscience Cafe, Sunday at 4pm - now on the first and third Sundays of each month, instead of just the third. That's November 7th and 21st.

\n

It was suggested last time that we might want to try adding some structure to these events. I thought it'd be fun to try Paranoid Debating, so I'll bring trivia cards, as suggested by eugman. Other proposals - activities, conversation topics, etc - are welcome.

" } }, { "_id": "evvJKR6Gyf6qMo4Ew", "title": "What is the Archimedean point of morality?", "pageUrl": "https://www.lesswrong.com/posts/evvJKR6Gyf6qMo4Ew/what-is-the-archimedean-point-of-morality", "postedAt": "2010-10-29T21:56:52.218Z", "baseScore": -5, "voteCount": 6, "commentCount": 11, "url": null, "contents": { "documentId": "evvJKR6Gyf6qMo4Ew", "html": "

It has been very enjoyable to post on LW [1, 2] and I have learned a lot from the discussions with other members, for which I am very thankful. But unfortunately, judging by my karma score, which is on the same level as Kiwiallegiance and the jewelry spammer, my opinions are not appreciated and I frequently receive the following message when posting a new comment:

\n

You are trying to submit too fast. try again in xn minutes.

\n

When I press the submit button after x1 + 1 minutes, I'm told to wait another x2 minutes. So commenting has become more and more frustrating, and I don't want to continue to burden the LW members with the heavy task of down-voting me. But on the other hand, I still can't find any flaw in my argumentation despite many rebuttals. Maybe I am too ignorant, maybe I am onto something. So I'll give myself a last try.

\n

\n


\nThere is none. Some say that morality is a system that is most conducive to cooperation and thus biological fitness. Others say it is something society creates to enable its own survival. These are explanations that try to reduce morality (values, desires and dislikes) to the concepts of the natural world, but they don't capture what we really mean by desires, dislikes and values.

\n

You might explain my desire for pancakes as a neuronal process, as a mental function biologically evolved, but it does not capture the meaning of \"desire\". The concept of meaning itself has no meaning in the natural world, but it has a meaning to us, to the rational mind.

\n

Just as we cannot explain what the natural world \"really\" is, since we cannot see what lies behind physical reality (unless you are an idealistic Platonist), we cannot explain what morality and values \"really\" are. We can only describe them using scientific theories or normative theories, respectively.

" } }, { "_id": "tiyKRo5uudtxit5zE", "title": "Seeking book about baseline life planning and expectations", "pageUrl": "https://www.lesswrong.com/posts/tiyKRo5uudtxit5zE/seeking-book-about-baseline-life-planning-and-expectations", "postedAt": "2010-10-29T20:31:33.891Z", "baseScore": 9, "voteCount": 6, "commentCount": 19, "url": null, "contents": { "documentId": "tiyKRo5uudtxit5zE", "html": "

In an attempt to find useful \"base rate expectations\" for the rest of my life (and how actions I might take now could set me up to be much better off 10, 20, 30, 40, 50, 60, and 70 years from now) I'm looking for a book that describes the nuts and bolts of human lives. I want coherent discussion from an actuarial/economic/probabilistic/calculating perspective, but I'd like some soulfulness too. The ideal book would be published in 2010 and cover the different periods of people's lives as well as different aspects of those lives. In some sense the book would be like a nuts and bolts \"how to live your life\" manual. Hopefully it would have footnotes and generally good epistemology :-)

\n

To take an example of the kind of content I would hope for (in a domain where I already have worked out some of the theory myself), the ideal book would explain how to calculate the ROI of different levels of college education realistically. Instead of a hand-waving argument that \"on average you'll make more with education\", it would also talk about the opportunity costs of lost wages, how the expected number of years of work affects what amount of training makes sense, and so on.
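
As a sketch of the kind of calculation being asked for (all numbers are made-up placeholders, and the formula ignores discounting, taxes and non-wage benefits):

```python
def degree_roi(tuition_per_year, years_of_study, wage_without, wage_with, working_years):
    # Crude ROI of a degree: lifetime wage premium versus tuition plus forgone wages.
    cost = years_of_study * (tuition_per_year + wage_without)  # tuition + lost earnings
    gain = working_years * (wage_with - wage_without)          # extra earnings over a career
    return (gain - cost) / cost

# A 4-year degree at $15k/year, raising pay from $30k to $45k over 40 working years:
# degree_roi(15_000, 4, 30_000, 45_000, 40) -> about 2.3, i.e. roughly a 230% return.
```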

\n

To be clear, I don't want a book that is simply about deciding when, how, and for how long it makes sense to train for a job.  Instead I want something that talks about similar issues that I haven't already thought about but that are important, so that I can be usefully educated in ways I wasn't expecting.  My goal is to find someone else's scaffold to help me project when and why I should (or shouldn't) buy a minivan, how much to budget for dentistry in my 50's, and a breakdown of the causes of bankruptcy the way insurance companies can predict causes of death.

\n

I was hoping that the book How We Live: An Economic Perspective on Americans from Birth to Death would give me what I want (and it is still probably my fallback book if I can't find anything better) but it was written in 1983, and appears to be strongly oriented towards public policy recommendations rather than personal choice.

\n

Books that may be conceptually nearby that seem non-ideal include:

\n

Dear Undercover Economist: Priceless Advice on Money, Work, Sex, Kids, and Life's Other Challenges - My second place fallback because it covers real life content, is from 2009, and the first book in the series was pretty solid on economic theory.  The problem is that it seems like haphazard coverage of the subject matter rather than \"a treatise\" that aims to describe the full ambit of life issues, sort them by priority, and deal usefully with the big stuff.

\n

The Average Life of the Average Person: How It All Adds Up - Just a collection of factoids, like the cumulative number of days the average person spends on the toilet; the value add is the collection and the juxtaposition. Mere factoids might actually be useful as a list of things to think about optimizing for long term impact? Not what I want, but potentially relevant.

\n

The ABCs Of Strategic Life Planning - The first problem is that this appears to be a workbook with questionnaires (presumed target market is people dealing with akrasia) rather than a narrative of fact and theory (giving the logical scaffold for a general plan).  The second problem is that the marketing means it is probably from the business/self-help genre from which I expect relatively little epistemic rigor.

\n

The How of Happiness: A Scientific Approach to Getting the Life You Want - This book covers the \"softer issues\" that I definitely care about and don't expect to be covered by economists.  It seems potentially interesting, but in addition to not covering the other subject areas, it sounds more like a literature review of positive psychology results than like a \"normal life overview\".

\n

The Logic of Life: The Rational Economics of an Irrational World - It is good that this is recent (from 2008) but it is poorly reviewed, haphazard in subject, and full of shiny stuff that's intended to be stimulatingly non-intuitive.  I'm looking for meat and potatoes.

\n

Hidden Order: The Economics of Everyday Life - Purportedly a lot of economic theory (which is not what I'm looking for) and then some shiny examples of the sort that Freakonomics later made somewhat trendy (Hidden Order was written in 1997).  However, the title sounds like the book could have been close to what I want.

\n

Can anyone suggest a book that is \"a coherent overview of the intersection of these books and anything else I forgot\"?  There may be no book that matches my ideal, but I wouldn't be surprised if something pretty close to it exists that I just haven't found yet.

\n

Help appreciated!

" } }, { "_id": "mqNzvQkZ3nGetwA2F", "title": "Why should you vote?", "pageUrl": "https://www.lesswrong.com/posts/mqNzvQkZ3nGetwA2F/why-should-you-vote", "postedAt": "2010-10-29T20:15:36.818Z", "baseScore": 3, "voteCount": 3, "commentCount": 24, "url": null, "contents": { "documentId": "mqNzvQkZ3nGetwA2F", "html": "

For many years I've been interested in the \"paradox\" that your vote almost never alters the outcome of an election, yet the outcome is in fact determined by the votes.  I wrote a blog post about this and tried to explain it in terms of emergence: as voters, we are just feeling what it's like to be a tiny part of a much bigger system.

\n

Then I tried to explain that \"voter turnout\" is in fact one of the most important metrics for an election, since it determines the legitimacy and stability of the process.  So even though your vote won't determine the winner, it will contribute to voter turnout, and is thus productive and useful.

\n

http://www.kmeme.com/2010/10/why-you-should-vote.html

\n

However I don't find my argument all that compelling, because even voter turnout is going to be approximately the same whether you vote or not.
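To put rough numbers on both effects (a toy model with invented figures, not data from any real election):

# Toy numbers for two effects of one ballot, assuming about a million other voters.
from math import sqrt, pi

def p_decisive(n_other_voters):
    # chance your vote breaks an exact tie, in the most favourable case where
    # every other voter is independently 50/50 (normal approximation to the binomial)
    return sqrt(2 / (pi * n_other_voters))

n = 1000000
print(p_decisive(n))   # ~0.0008 even in this best case; far smaller in a lopsided race
print(1 / (n + 1))     # how much one extra ballot moves the turnout figure: ~1e-6

So in this toy model a single vote moves both the outcome and the turnout statistic by a sliver, which is exactly the tension I'm describing.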

\n

In the post I bring up littering as something else where your tiny contribution adds up to a bigger result.  I personally would never litter on purpose, yet I often skip voting because it seems like it doesn't make a difference.  Is voting rational?  How do you justify voting or not voting?  My post was non-partisan so I'm soliciting non-partisan comments, trying to focus on the theory behind voting in general.

" } }, { "_id": "rnP4S3YEzX2GuXaSi", "title": "\"The current era is the only chance of setting up game rules\"", "pageUrl": "https://www.lesswrong.com/posts/rnP4S3YEzX2GuXaSi/the-current-era-is-the-only-chance-of-setting-up-game-rules", "postedAt": "2010-10-29T16:40:31.484Z", "baseScore": 5, "voteCount": 9, "commentCount": 2, "url": null, "contents": { "documentId": "rnP4S3YEzX2GuXaSi", "html": "

Here is another reason why we need to work on superhuman AI: 

\n

Thermodynamics of advanced civilizations

\n

Link: http://www.aleph.se/andart/archives/2010/10/visions_of_the_future_in_milano.html

\n
    \n
  1. Civilizations are physical objects, and nearly any ultimate goal implies a need for computation, storing bits and resources (the basic physical eschatology assumption).
  2. \n
  3. The universe has a bunch of issues: \n
      \n
    • The stelliferous era will just last a trillion years or so.
    • \n
    • Matter and black holes are likely unstable, so after a certain time there will not be any structure around to build stuff from. Dark matter doesn't seem to be structurable either.
    • \n
    • Accelerating expansion prevents us from reaching beyond a certain horizon about 15 gigalightyears away.
    • \n
    • It will also split the superclusters into independent \"island universes\" that will become unreachable from each other within around 120 billion years.
    • \n
    • It also causes horizon radiation ~10^-29 K hot, which makes infinite computation impossible.
    • \n
    \n
  4. \n
  5. Civilizations have certain limits of resources, expansion, processing and waste heat: \n
      \n
    • We can still lay our hands on 5.96·10^51 kg of matter (with dark matter 2.98·10^52 kg) within the horizon, and ~2·10^45 kg (with DM ~10^46 kg) if we settle for a supercluster.
    • \n
    • The lightspeed limitation is not enormously cumbersome, if we use self-replicating probes.
    • \n
    • The finite energy cost of erasing bits is the toughest bound. It forces us to pay for observing the world, formatting new memory and correcting errors (a back-of-the-envelope sketch of this bound appears after this list).
    • \n
    \n
  6. \n
  7. Putting it all together we end up with the following scenario for maximal information processing: \n
      \n
    • The age of expansion: interstellar and intergalactic expansion with self-replicating probes. It looks like one can enforce \"squatters rights\", so there is no strong reason to start exploiting upon arrival.
    • \n
    • The age of preservation: await sufficiently low temperatures. A halving of temperature doubles the amount of computation you can do. You only need a logarithmically increasing number of backups for indefinite survival. Since fusion will release ~1% of the mass-energy of matter but black hole conversion ~50%, it might not be relevant to turn off the stars unless you feel particularly negentropic.
    • \n
    • The age of harvest: Exploit available energy to produce maximal amount of computation. The slower the exploitation, the more processing can be done. This is largely limited by structure decay: you need to be finished before your protons decay. Exactly how much computation you can do depends on how large fraction of the universe you got, how much reversible computation you can do and the exact background temperature.
    • \n
    \n
  8. \n
  9. This leads to some policy-relevant conclusions: \n
      \n
    \n
      \n
    • Cosmic waste is a serious issue: the value of the future is enormous in terms of human lives, so postponing colonization or increasing existential risk carries enormous disutilities. However, in order to plan like this you need to have very low discount rates.
    • \n
    • There are plenty of coordination problems: burning cosmic commons, berserker probes, entropy pollution etc. The current era is the only chance of setting up game rules before dispersion and loss of causal contact.
    • \n
    • This model suggests a Fermi paradox answer: the aliens are out there, waiting. They already own most of the universe and we better be nice to them. Alternatively, if there is a phase transition situation where we are among the first, we really need to think about stable coordination and bargaining strategies.
    • \n
    \n
      \n
    \n
  10. \n
\n
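Back-of-the-envelope sketch of the bit-erasure bound mentioned above (my own illustration, not from the linked talk; the mass figure is the ~5.96·10^51 kg quoted in the list, the efficiency is the ~50% black-hole conversion figure, and a Landauer cost of k·T·ln2 per erased bit is assumed):

# How many irreversible bit erasures can a given mass-energy budget pay for
# at a given background temperature, if each erasure costs k*T*ln(2)?
from math import log

K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s

def max_bit_erasures(mass_kg, temperature_k, conversion_efficiency=0.5):
    energy_joules = conversion_efficiency * mass_kg * C**2
    return energy_joules / (K_B * temperature_k * log(2))

mass = 5.96e51                      # kg, matter within the horizon (figure above)
for temp in (2.7, 1e-29):           # today's CMB vs the quoted horizon-radiation floor
    print(temp, max_bit_erasures(mass, temp))

This is also why a halving of temperature doubles the amount of computation you can do: the budget divided by k·T·ln2 scales as 1/T.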

Here is more: Burning the Cosmic Commons: Evolutionary Strategies for Interstellar Colonization

\n

Our only hope of alleviating those problems is a beneficial superhuman intelligence to coordinate our future for us. That is, if we want to make it to the stars, to have our dream of a galactic civilization come true, and to avoid ending in an unimaginable war over resources.

" } }, { "_id": "ffuqo2Hj9CbbTEP9P", "title": "Baby born from cryo-preserved embryo", "pageUrl": "https://www.lesswrong.com/posts/ffuqo2Hj9CbbTEP9P/baby-born-from-cryo-preserved-embryo", "postedAt": "2010-10-29T12:14:45.185Z", "baseScore": 9, "voteCount": 8, "commentCount": 2, "url": null, "contents": { "documentId": "ffuqo2Hj9CbbTEP9P", "html": "

Apparently embryos produced by in vitro fertilization routinely stay on ice for years. Article here.

" } }, { "_id": "j2TGffvdxYLfnAf4x", "title": "META: Who Have You Told About LW?", "pageUrl": "https://www.lesswrong.com/posts/j2TGffvdxYLfnAf4x/meta-who-have-you-told-about-lw", "postedAt": "2010-10-29T10:06:04.787Z", "baseScore": 7, "voteCount": 6, "commentCount": 22, "url": null, "contents": { "documentId": "j2TGffvdxYLfnAf4x", "html": "

I've been lurking on LW since shortly after it started, and on OB for about six months before that.  In that time, I've told four or five people about it.  I would make a terrible evangelist.

\r\n

I'm curious as to whether other people have the same problem.  I'd like to tell lots of people about LW, but I don't think they're ready for it.  If they read a statement like \"purchase utilons and warm fuzzies separately\" their eyes would glaze over, and they'd walk away thinking LW was some sort of crackpot site.

\r\n

I have found certain posts and topics to be fairly good hooks for getting people interested. An Alien God is a good suggested read for people with an interest in evolutionary theory and the Cthulhu mythos (surprisingly high crossover in my experience).  HP:MoR is also a pretty popular hook.  The site itself isn't really optimised for word-of-mouth, though, and not everyone likes child wizards and blasphemous horrors.

\r\n

How many people have you introduced to LW?  Who were they, how did you do it, and what was their reaction?  How could we do it better?

" } }, { "_id": "qC7adZZW8QjsJxFbv", "title": "The spam must end", "pageUrl": "https://www.lesswrong.com/posts/qC7adZZW8QjsJxFbv/the-spam-must-end", "postedAt": "2010-10-29T02:20:15.595Z", "baseScore": 18, "voteCount": 14, "commentCount": 27, "url": null, "contents": { "documentId": "qC7adZZW8QjsJxFbv", "html": "

I mean, most of us would like a friendly bot to chat with, but this is just paperclipping the section (no offence, Clippy).  By now it's starting to be a real trivial inconvenience for me, and it reduces my desire to check out new topics.

\n

 

" } }, { "_id": "m5AH78nscsGjMbBwv", "title": "Making your explicit reasoning trustworthy ", "pageUrl": "https://www.lesswrong.com/posts/m5AH78nscsGjMbBwv/making-your-explicit-reasoning-trustworthy", "postedAt": "2010-10-29T00:00:25.408Z", "baseScore": 121, "voteCount": 94, "commentCount": 95, "url": null, "contents": { "documentId": "m5AH78nscsGjMbBwv", "html": "

Or: “I don’t want to think about that! I might be left with mistaken beliefs!”

Related to: Rationality as memetic immune disorder; Incremental progress and the valley; Egan's Law.

tl;dr: Many of us hesitate to trust explicit reasoning because... we haven’t built the skills that make such reasoning trustworthy. Some simple strategies can help.

Most of us are afraid to think fully about certain subjects.

Sometimes, we avert our eyes for fear of unpleasant conclusions. (“What if it’s my fault? What if I’m not good enough?”)

But other times, oddly enough, we avert our eyes for fear of inaccurate conclusions.[1] People fear questioning their religion, lest they disbelieve and become damned. People fear questioning their “don't walk alone at night” safety strategy, lest they venture into danger. And I find I hesitate when pondering Pascal’s wager, infinite ethics, the Simulation argument, and whether I’m a Boltzmann brain... because I’m afraid of losing my bearings, and believing mistaken things.

Ostrich Theory, one might call it. Or I’m Already Right theory. The theory that we’re more likely to act sensibly if we don’t think further, than if we do. Sometimes Ostrich Theories are unconsciously held; one just wordlessly backs away from certain thoughts. Other times full or partial Ostrich Theories are put forth explicitly, as in Phil Goetz’s post, this LW comment, discussions of Tetlock's "foxes vs hedgehogs" research, enjoinders to use "outside views", enjoinders not to second-guess expert systems, and cautions for Christians against “clever arguments”.

Explicit reasoning is often nuts

Ostrich Theories sound implausible: why would not thinking through an issue make our actions better? And yet examples abound of folks whose theories and theorizing (as contrasted with their habits, wordless intuitions, and unarticulated responses to social pressures or their own emotions) made significant chunks of their actions worse. Examples include, among many others:

In fact, the examples of religion and war suggest that the trouble with, say, Kaczynski wasn’t that his beliefs were unusually crazy. The trouble was that his beliefs were an ordinary amount of crazy, and he was unusually prone to acting on his beliefs. If the average person started to actually act on their nominal, verbal, explicit beliefs, they, too, would in many cases look plumb nuts. For example, a Christian might give away all their possessions, rejoice at the death of their children in circumstances where they seem likely to have gone to heaven, and generally treat their chances of Heaven vs Hell as their top priority. Someone else might risk their life-savings betting on an election outcome or business about which they were “99% confident”.

That is: many peoples’ abstract reasoning is not up to the task of day to day decision-making. This doesn't impair folks' actions all that much, because peoples' abstract reasoning has little bearing on our actual actions. Mostly we just find ourselves doing things (out of habit, emotional inclination, or social copying) and make up the reasons post-hoc. But when we do try to choose actions from theory, the results are far from reliably helpful -- and so many folks' early steps toward rationality go unrewarded.

We are left with two linked barriers to rationality: (1) nutty abstract reasoning; and (2) fears of reasoned nuttiness, and other failures to believe that thinking things through is actually helpful.[2]

Reasoning can be made less risky

Much of this nuttiness is unnecessary. There are learnable skills that can both make our abstract reasoning more trustworthy and also make it easier for us to trust it.

Here's the basic idea:

If you know the limitations of a pattern of reasoning, learning better what it says won’t hurt you. It’s like having a friend who’s often wrong. If you don’t know your friend’s limitations, his advice might harm you. But once you do know, you don’t have to gag him; you can listen to what he says, and then take it with a grain of salt.[3]

Reasoning is the meta-tool that lets us figure out what methods of inference are trustworthy where. Reason lets us look over the track records of our own explicit theorizing, outside experts' views, our near-mode intuitions, etc., and figure out how trustworthy each is in a given situation.

If we learn to use this meta-tool, we can walk into rationality without fear.

Skills for safer reasoning

1. Recognize implicit knowledge.

Recognize when your habits, or outside customs, are likely to work better than your reasoned-from-scratch best guesses. Notice how different groups act and what results they get. Take pains to stay aware of your own anticipations, especially in cases where you have explicit verbal models that might block your anticipations from view. And, by studying track records, get a sense of which prediction methods are trustworthy where.

Use track records; don't assume that just because folks' justifications are incoherent, the actions they are justifying are foolish. But also don't assume that tradition is better than your models. Be empirical.

2. Plan for errors in your best-guess models.

We tend to be overconfident in our own beliefs, to overestimate the probability of conjunctions (such as multi-part reasoning chains), and to search preferentially for evidence that we’re right. Put these facts together, and theories folks are "almost certain" of turn out to be wrong pretty often. Therefore:

3. Beware rapid belief changes.

Some people find their beliefs changing rapidly back and forth, based for example on the particular lines of argument they're currently pondering, or the beliefs of those they've recently read or talked to. Such fluctuations are generally bad news for both the accuracy of your beliefs and the usefulness of your actions. If this is your situation:

4. Update your near-mode anticipations, not just your far-mode beliefs.

Sometimes your far-mode is smart and your near-mode is stupid. For example, Yvain's rationalist knows abstractly that there aren’t ghosts, but nevertheless fears them. Other times, though, your near-mode is smart and your far-mode is stupid. You might “believe” in an afterlife but retain a concrete, near-mode fear of death. You might advocate Communism but have a sinking feeling in your stomach as you conduct your tour of Stalin’s Russia.

Thus: trust abstract reasoning or concrete anticipations in different situations, according to their strengths. But, whichever one you bet your actions on, keep the other one in view. Ask it what it expects and why it expects it. Show it why you disagree (visualizing your evidence concretely, if you’re trying to talk to your wordless anticipations), and see if it finds your evidence convincing. Try to grow all your cognitive subsystems, so as to form a whole mind.

5. Use raw motivation, emotion, and behavior to determine at least part of your priorities.

One of the commonest routes to theory-driven nuttiness is to take a “goal” that isn’t your goal. Thus, folks claim to care “above all else” about their selfish well-being, the abolition of suffering, an objective Morality discoverable by superintelligence, or average utilitarian happiness-sums. They then find themselves either without motivation to pursue “their goals”, or else pulled into chains of actions that they dread and do not want.

Concrete local motivations are often embarrassing. For example, I find myself concretely motivated to “win” arguments, even though I'd think better of myself if I was driven by curiosity. But, like near-mode beliefs, concrete local motivations can act as a safeguard and an anchor. For example, if you become abstractly confused about meta-ethics, you'll still have a concrete desire to pull babies off train tracks. And so dialoguing with your near-mode wants and motives, like your near-mode anticipations, can help build a robust, trust-worthy mind.

Why it matters (again)

Safety skills such as the above are worth learning for three reasons.

  1. They help us avoid nutty actions.
  2. They help us reason unhesitatingly, instead of flinching away out of fear.
  3. They help us build a rationality for the whole mind, with the strengths of near-mode as well as of abstract reasoning.

[1] These are not the only reasons people fear thinking. At minimum, there is also:

[2] Many points in this article, and especially in the "explicit reasoning is often nuts" section, are stolen from Michael Vassar. Give him the credit, and me the blame and the upvotes.

[3] Carl points out that Eliezer points out that studies show we can't. But it seems like explicitly modeling when your friend is and isn't accurate, and when explicit models have and haven't led you to good actions, should at least help.

" } }, { "_id": "GEDpyWBSQ6vfJMRfH", "title": "Complete Wire Heading as Suicide and other things", "pageUrl": "https://www.lesswrong.com/posts/GEDpyWBSQ6vfJMRfH/complete-wire-heading-as-suicide-and-other-things", "postedAt": "2010-10-28T23:57:30.200Z", "baseScore": 1, "voteCount": 3, "commentCount": 3, "url": null, "contents": { "documentId": "GEDpyWBSQ6vfJMRfH", "html": "

I came to this idea after a previous LessWrong topic discussing nihilism, and its several comments on depression and suicide. My argument is that wire heading in its extreme or complete/full form can easily be modeled as suicide, or less strongly as volitional intelligence reduction, at least given current human brain structure, and given that the technology is underdeveloped, hence poorly understood, and therefore more likely to lead to such end states.

\n

I define Full Wire Heading as that which a person would not want to reverse after it 'activates' and which deletes their previous utility function, or most of it.  A weak definition, yes, but it should be enough for the preliminary purposes of this post.  A full wire head is extremely constrained, much like an infant, for example; and although the new utility function could involve a wide range of actions, the activation of a few brain regions would be the main goal, and so they are extremely limited.

\n

If one takes this position seriously, it follows that only one's moral standpoint on suicide or, say, lobotomy should govern judgments about full wire heading. This is trivially obvious of course, but to take this position as true we need to understand more about wire heading, as data is extremely lacking, especially in regard to human-like brains. My other question, then, is: to what extent could such an experiment help in answering the first question?

" } }, { "_id": "sQRD9BpsMeotv2thR", "title": "Understanding the Evidence for Killer Supplements", "pageUrl": "https://www.lesswrong.com/posts/sQRD9BpsMeotv2thR/understanding-the-evidence-for-killer-supplements", "postedAt": "2010-10-28T23:16:25.876Z", "baseScore": 4, "voteCount": 2, "commentCount": 0, "url": null, "contents": { "documentId": "sQRD9BpsMeotv2thR", "html": "

 

\n

Related to: Even if You Have a Nail; Supplements Kill

\n
\n

So what test, exactly, did the authors perform?  And what do the results mean?  It remains a mystery to me - and, I'm willing to bet, to every other reader of the paper.

\n
\n

I'm willing to take that bet.  In March, PhilGoetz criticized a JAMA article that purported to show evidence that taking vitamins increases mortality as an example of how it's easy to misuse statistics.  His conclusion is above.

\n

Recently, Robin Hanson commented on the article and took it seriously, stating that he was now going to avoid multivitamins.

\n

Read Phil's and Robin's posts for first for background; I'm going to explain what, exactly, the authors did in their analysis.

\n

The first thing to understand about the JAMA article is that it was a meta-analysis based on previous relative risk studies.  A relative risk study attempts to determine which of two groups is more at risk for death.  In this case, the groups are subjects who take a certain vitamin (the treatment group) and subjects who don't (the control group). Significantly, subjects in the treatment group each receive the same dosage of the vitamin.  After a fixed amount of time (say 3 years) the number of living and dead members of each group are recorded.  Logistic regression is then performed in order to estimate the probability of someone from either group dying.  Once these two probabilities are known, the relative risk can be estimated as RR=P(death in trt group)/P(death in control group).  An RR significantly greater than 1 indicates that the treatment is associated with higher mortality, while an RR significantly less than 1 indicates that the treatment is associated with lower mortality.
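For concreteness, here is a minimal sketch of the quantity being estimated, with invented counts rather than the JAMA data (the paper describes logistic regression; this just uses raw proportions, plus the usual standard error of log(RR), to show what \"significantly greater than 1\" means):

# Relative risk from invented two-group counts, with a rough 95% interval.
from math import log, sqrt, exp

def relative_risk(deaths_trt, n_trt, deaths_ctl, n_ctl):
    p_trt = deaths_trt / n_trt
    p_ctl = deaths_ctl / n_ctl
    rr = p_trt / p_ctl
    se_log_rr = sqrt(1/deaths_trt - 1/n_trt + 1/deaths_ctl - 1/n_ctl)
    ci = (exp(log(rr) - 1.96 * se_log_rr), exp(log(rr) + 1.96 * se_log_rr))
    return rr, ci

print(relative_risk(deaths_trt=60, n_trt=1000, deaths_ctl=50, n_ctl=1000))
# RR = 1.2 here; the treatment group is 'significantly' more at risk only if
# the whole interval sits above 1.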

\n

Enter meta-analysis.  In a meta-analysis the data aren't raw data from actual experiments, but rather the estimated treatment effects of previous experiments. In this case, the estimated treatment effect is the estimated relative risk from previous experiments.  A fairly simple way to do a meta-analysis is a random effects model.  Under this model we assume that the estimated relative risk from each study came from a normal distribution with its own mean and a common variance, and that each of these means came from another normal distribution with a common mean and variance.  In other words, for study i=1,...,k:

\n

RRhat_i ~ Normal(θ_i, σ²),    θ_i ~ Normal(µ, τ²)

\n

Then our estimate of µ is the estimated treatment effect, i.e., the estimated relative risk of taking the supplement.  If we think that different studies result in different treatment effects because of certain covariates, e.g. chance of bias, location, author, etc., we can complicate the model a bit by giving each study its own mean in a regression, i.e. by doing meta-regression.  For example, with one covariate, it would look like this for study i=1,...,k:

\n

RRhat_i ~ Normal(θ_i, σ²),    θ_i = β_0 + β_1·x_i + ε_i

\n

where

\n

ε_i ~ Normal(0, τ²)

\n

It appears that the JAMA article authors used both of these methods to analyze the previous vitamin studies. In footnote 25 the authors reference a paper by DerSimonian and Nan Laird called Meta-Analysis in Clinical Trials that describes the basic meta-analysis approach I talk about above, and other references that they don't cite describe meta-regression in exactly the same way I do here.
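For readers who want to see the mechanics, here is a short sketch of the DerSimonian-Laird procedure applied to invented study estimates (in practice one usually pools log relative risks, and the real paper has many more studies):

# DerSimonian-Laird random-effects pooling of invented log relative risks.
from math import exp, sqrt

def dersimonian_laird(y, v):
    # y: per-study effect estimates (log RR); v: their within-study variances
    k = len(y)
    w = [1 / vi for vi in v]                                  # fixed-effect weights
    y_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, y))    # heterogeneity statistic
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                        # between-study variance
    w_re = [1 / (vi + tau2) for vi in v]                      # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = sqrt(1 / sum(w_re))
    return mu, se, tau2

log_rr = [0.30, -0.20, 0.25, 0.02]    # invented
var = [0.02, 0.03, 0.01, 0.05]        # invented
mu, se, tau2 = dersimonian_laird(log_rr, var)
print(exp(mu), exp(mu - 1.96 * se), exp(mu + 1.96 * se), tau2)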

\n

(The next section depends on a faulty assumption.  See edit below.)

\n

Assuming that the JAMA authors did use this method, notice something very important about how they handled the previous studies.  It is not the case that every study they analyzed used the same amount of any given vitamin for their treatment.  In fact, for Vitamin C the values range from 80mg to 2000mg across the studies Bjelakovic et al use. But they put all of these treatment effects together as if they are all the same treatment.  The result is that their model assumes that the effect on mortality from, say, 80mg of vitamin C and 2000mg of vitamin C is exactly the same.  Phil couldn't figure out what happens to relative risk as the dosage amount changes in the model because nothing happens.  If you have a positive dosage amount, you get the same change in risk no matter what the dose is.

\n

Now this is fine as an approximation if the range of dosage amounts is small.  Then you can safely conclude that, for dosages of roughly the amounts in these studies, taking supplemental vitamins increases mortality.  If the range is large, I'm not sure you can learn anything useful from this study.  I don't know whether or not the ranges are large or small.  I know little about vitamins, so I'll let others be the judge of that.

\n

EDIT:

\n

From the JAMA paper:

\n
\n

The included covariates were bias risk, type and dose of supplement, single or combined supplement regimen, duration of supplementation, and primary or secondary prevention.

\n
\n

So they apparently did use dosage as a covariate rather than merely dosage type.  In that case, I think Phil's original criticism still applies, and if anyone can find the data, it shouldn't be too difficult to fit the same model but with a higher order term for dosage to see if the results change.
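A sketch of what that check could look like, as an inverse-variance weighted regression with and without a squared dose term (the numbers are invented, and a full random-effects meta-regression would also estimate the between-study variance):

# Weighted least squares meta-regression: does adding dose^2 change the picture?
import numpy as np

def wls(X, y, w):
    # solve (X'WX) beta = X'W y
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

dose = np.array([80.0, 250.0, 500.0, 1000.0, 2000.0])    # mg, invented
log_rr = np.array([0.01, 0.03, 0.05, 0.04, 0.02])        # invented study effects
var = np.array([0.02, 0.02, 0.01, 0.03, 0.02])
w = 1 / var

X_linear = np.column_stack([np.ones_like(dose), dose])
X_quadratic = np.column_stack([np.ones_like(dose), dose, dose ** 2])

print(wls(X_linear, log_rr, w))
print(wls(X_quadratic, log_rr, w))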

" } }, { "_id": "QcMbGJbqr8HFGa36y", "title": "Call for Volunteers: Rationalists with Non-Traditional Skills", "pageUrl": "https://www.lesswrong.com/posts/QcMbGJbqr8HFGa36y/call-for-volunteers-rationalists-with-non-traditional-skills", "postedAt": "2010-10-28T21:38:26.737Z", "baseScore": 31, "voteCount": 23, "commentCount": 68, "url": null, "contents": { "documentId": "QcMbGJbqr8HFGa36y", "html": "

SIAI's Fellows Program is looking for rationalists with skills.  More specifically, we're looking for rationalists with skills outside our usual cluster who are interested in donating their time by teaching those skills and communicating the mindsets that lead to their development.  If possible, we'd like to learn from specialists who \"speak our language,\" or at least are practiced in resolving confusion and disagreement using reason and evidence.  Broadly, we're interested in developing practical intuitions, doing practical things, and developing awareness and culture around detail-intensive technical subskills of emotional self-awareness and social fluency.  More specifically:    

\n

\n

We're interested in learning how to \"make stuff\" in order to force ourselves down to the object level from time to time, to practice executing plans in the real world, and to refine our intuitions about material phenomena.  Examples:

\n\n

We're interested in learning skills that require one to pay very close attention to the detail of the physical world - how things actually are instead of what our mental representation says about them.  Examples:

\n\n

We'd like to get better at working with people, both inside the institute and outside.  Examples:

\n\n

We want to become more aware and in control of our emotions.  Emotional self-awareness seems very important for productivity, social success and understanding our tendencies toward motivated cognition.  Aside from things that traditionally train emotional awareness like acting and meditation, we expect that certain formal systems of kinesthetic practice such as Iyengar yoga will also help because of the close association between emotional states and patterns of muscle tension.  Examples:

\n\n

We obviously don't have the time for everyone to learn all of these skills right now, but we would like to get the ball rolling on what's available.  Whether you're interested in joining the fellows program, visiting regularly or even just video conferencing occasionally, if you think you can teach one of these skills or something else that seems to align well with the faculties we're interested in training, please send me an email at jasen@intelligence.org.

\n

Even if you don't live in the Bay Area, I encourage you to post your skills here anyway, in case one of the regular Less Wrong meetup groups is interested.  There are regular meetups in New York, Boston, and Los Angeles.  As always, if there aren't already meetups in your area I encourage you to start one.

" } }, { "_id": "iqQJiKcephtMgzJgN", "title": "V is for Value Maximizing Agent: London, November 5", "pageUrl": "https://www.lesswrong.com/posts/iqQJiKcephtMgzJgN/v-is-for-value-maximizing-agent-london-november-5", "postedAt": "2010-10-28T18:53:11.821Z", "baseScore": 11, "voteCount": 6, "commentCount": 5, "url": null, "contents": { "documentId": "iqQJiKcephtMgzJgN", "html": "

During the last London meetup, which I conveniently scheduled during Easter, I promised that I'd see if the next time, I could make it to London sometime that wasn't a national holiday.

\n

The time has come to break that promise, so I will be in London for a day on Friday November 5th. If anyone wants to meet up, I'll be around that evening at 8 or so to discuss rationality-related issues, chat, or orchestrate a terrorist campaign to overthrow the government while wearing nifty masks. We can try the top floor of that same Waterstone's in Piccadilly Circus, and relocate to Starbucks if it doesn't work out. Does that work for anybody?

" } }, { "_id": "jCzAeQ7MmmgTb98qG", "title": "I'll be in NYC from Oct. 30 to Nov. 21", "pageUrl": "https://www.lesswrong.com/posts/jCzAeQ7MmmgTb98qG/i-ll-be-in-nyc-from-oct-30-to-nov-21", "postedAt": "2010-10-28T16:27:15.436Z", "baseScore": 6, "voteCount": 5, "commentCount": 1, "url": null, "contents": { "documentId": "jCzAeQ7MmmgTb98qG", "html": "

Sorry for the self-centered post, but I don’t get many chances to be where there are a lot of rationalists.  (We’ve counted about four in all of Texas that go to this site.)

\r\n

 

\r\n

Thanks to the Cosmos’s noticing my need for a place to spend my vacation time this year, I will be staying in his NYC apartment while he’s gone.  I’ll definitely be at the NYC meetups.

\r\n

 

\r\n

So, if you are anywhere near this area and were interested in meeting me, let me know (either on this thread or privately) and we can work something out.

\r\n

 

\r\n

I’ve informed the NYC OB Google group, but figured there would be good opportunities to meet some of you that aren’t on that list or are a bit further away from the city.

" } }, { "_id": "fknjGgGBzZQRevE6A", "title": "Art and Rationality", "pageUrl": "https://www.lesswrong.com/posts/fknjGgGBzZQRevE6A/art-and-rationality", "postedAt": "2010-10-28T13:56:01.396Z", "baseScore": 4, "voteCount": 6, "commentCount": 7, "url": null, "contents": { "documentId": "fknjGgGBzZQRevE6A", "html": "

What are your thoughts on the role of Art in rationality (personal or otherwise) and in the singularity?

\n

If one wants to help in the efforts of SIAI (or other organizations) does it make sense to focus on an art form as more than a hobby?

\n

Is it rational to pursue an art form that encourages people to contribute to a cause when there are more direct ways of contributing?

\n

It seems difficult to receive much recognition for one's work in art related fields, but it also seems as though one big success (say, a musician whose music was primarily about the singularity and increasing rationality) would turn many people on to the ideas. 

" } }, { "_id": "PZvRBn7ZKsFfKKSAd", "title": "Morality is as real as the physical world.", "pageUrl": "https://www.lesswrong.com/posts/PZvRBn7ZKsFfKKSAd/morality-is-as-real-as-the-physical-world", "postedAt": "2010-10-27T20:55:42.410Z", "baseScore": -13, "voteCount": 11, "commentCount": 4, "url": null, "contents": { "documentId": "PZvRBn7ZKsFfKKSAd", "html": "

The following is distilled from the comment section of an earlier post.

\n

Definitions

\n

absolute and universal: Something that applies to everything and every mind.

\n

morality (moral world): A logically consistent system of normative theories.

\n

reality (natural world): A logically consistent system of scientific (natural) theories.

\n

normative theory: (Almost) any English sentence in the imperative mood, or one that includes \"should\", \"must\", or \"to be allowed to\" as the verb (or an equivalent construction), in contrast to descriptive theories.

\n

mind: A mind is an intelligence that has values, desires and dislikes.

\n

moral perception: Analogous to the sensory perceptions, a moral perception is the feeling of right and wrong.

\n

Assumptions

\n

A normative sentence arises as a result of the mind processing its values, desires and dislikes.

\n

Ideas exist independently from the mind. Numbers don't cease to exist just because HAL dies.

\n

Statement

\n

In our everyday life, we don't question reality, thanks to our sensory perception. We have moral perception just as we have sensory perception, so why should we question morality?

\n

If you believe that the natural world is absolute and universal, then there is -- I currently think -- no good reason to doubt the existence of an absolute and universal moral world.

\n

A text diagram for illustration

\n

\n
-----------------------------
|    sensory perception     |    -----------------------    ------------
|          +                | -- | scientific theories | -- | reality  |
| intersubjective consensus |    -----------------------    ------------
-----------------------------

Analogously,

-----------------------------
|     moral perception      |    -----------------------    ------------
|           +               | -- |   moral theories    | -- | morality |
| intersubjective consensus |    -----------------------    ------------
-----------------------------

\n
\n

Absolute morality

\n

The absolute moral world, I am talking about, does encompass everything, including AI and alien intelligence. It does not mean that alien intelligence will behave similarly to us. Different moral problems require different solutions, as much as different objects behave differently according to the same physical theories. Objects in vacuum behave differently than in the atmosphere. Water behaves differently than ice, but they are all governed by the same physics, so I assume.

\n

An Edo-era samurai and a Wall Street banker may both behave perfectly morally even if they respond differently to the same problem because of their social environments. Maybe it is perfectly moral for AIs to kill and annihilate all humans, as much as it is perfectly possible that 218 of Russell's teapots are revolving around Gliese 581 g.

\n

The intersubjective consensus

\n

There are different sets of theories regarding the natural world: the biblical view, the theories underlying TCM, the theories underlying homeopathy, the theories underlying chiropractic and the scientific view. Many of them contradict each other. The scientific view is well-established because there is an intersubjective consensus on the usefulness of the methodology.

\n

The methods used in moral discussions are far less rigidly defined than in science; they go by the name of civil discourse. The arguments must be logically consistent, and the outcomes and conclusions of the normative theory must face the empirical challenge: if you can derive from your normative theories that it is permissible to kill innocent children without any benefits, then there is probably something wrong.

\n

Using this method, we have done quite a lot so far. We have established the UN Human Rights Charter, and we have an elaborate system of international law, law itself being a manifestation of morality (denying that law is based on morality is like saying that technology isn't based on science).

\n

Not everyone might agree and some say, \"I think that chattel slavery is perfectly moral.\" And there are people who think that praying to an almighty pasta monster and dressing up as pirates will cure all the ills of the world. Does that mean that there is no absolute reality? Maybe.

\n

Conclusion

\n

As long as we have values, desires and dislikes and make judgements (which all of us do, and which maybe is a defining characteristic of the human being beyond the biological basics), if we want to put these values into a logically consistent system, and if we believe that other minds with moral perception exist, then we have an absolute moral world.

\n

So if we stop having any desires and stop making any judgements, that is if we lack any moral perception, then we may still believe in morality, as much as an agnostic won't deny the existence of God, but it would be totally irrelevant to us.

\n

To the same degree, if someone lacks all the sensory perception, then the natural world becomes totally irrelevant to him or her.

" } }, { "_id": "AwhjHANAHoC4F9KsP", "title": "The prior probability of justification for war?", "pageUrl": "https://www.lesswrong.com/posts/AwhjHANAHoC4F9KsP/the-prior-probability-of-justification-for-war", "postedAt": "2010-10-27T20:52:20.850Z", "baseScore": -2, "voteCount": 9, "commentCount": 15, "url": null, "contents": { "documentId": "AwhjHANAHoC4F9KsP", "html": "

Could you use Bayes Theorem to figure out whether or not a given war is just?

\n

If so, I was wondering how one would go about estimating the prior probability that a war is just.
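Not an answer, but maybe a useful way to see what the machinery would look like: a toy Bayesian update where every number (the prior and both likelihoods) is invented purely to show the mechanics, not to model any actual war.

# Toy update of P(this war is just) on two pieces of evidence.

def update(prior, p_evidence_if_just, p_evidence_if_unjust):
    # Bayes' theorem for a binary hypothesis
    numerator = p_evidence_if_just * prior
    return numerator / (numerator + p_evidence_if_unjust * (1 - prior))

p = 0.2                   # invented prior, e.g. a base rate over some reference class of wars
p = update(p, 0.8, 0.3)   # invented likelihoods for one observed piece of evidence
p = update(p, 0.4, 0.7)   # and for another
print(p)

The hard part is choosing the reference class and likelihoods, which is the question I'm asking.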

\n

Thanks for any help you can offer.

" } }, { "_id": "sRthrsfspce5Hh99R", "title": " What hardcore singularity believers should consider doing", "pageUrl": "https://www.lesswrong.com/posts/sRthrsfspce5Hh99R/what-hardcore-singularity-believers-should-consider-doing", "postedAt": "2010-10-27T20:26:04.499Z", "baseScore": 6, "voteCount": 18, "commentCount": 22, "url": null, "contents": { "documentId": "sRthrsfspce5Hh99R", "html": "

Leading singularity proponent Ray Kurzweil co-authored a book titled Fantastic Voyage: Live Long Enough to Live Forever.  A singularity believer who thinks that if he makes it to the singularity he has an excellent chance of living forever, or at least for thousands of years, should be willing to sacrifice much for a slightly higher chance of living long enough to make it to the singularity.  This is why I think singularity believers make up a vastly disproportionate percentage of members of cryonics organizations. 

\n

 

\n

According to a new scientific article there is a medical procedure that might be able to greatly extend some peoples’ lives.  Although we don’t have a huge amount of data, one small study found that the several hundred people who got the procedure lived on average 14 years longer than those who didn’t.

\n

 

\n

Singularity proponents should be extremely interested in the procedure.  Indeed, a way of testing whether members of the SIAI such as Eliezer really and truly believe in the singularity is whether they at least seriously consider having the procedure.

\n

 

\n

The procedure is discussed at the end of this article.


" } }, { "_id": "edWMToD7uhBuApBdG", "title": "Play paranoid debating at home!", "pageUrl": "https://www.lesswrong.com/posts/edWMToD7uhBuApBdG/play-paranoid-debating-at-home", "postedAt": "2010-10-27T16:34:57.188Z", "baseScore": 10, "voteCount": 7, "commentCount": 0, "url": null, "contents": { "documentId": "edWMToD7uhBuApBdG", "html": "

I was reading the wiki article on Paranoid debating and I noticed that there was no good source of facts for the game. I suggest anyone interested in it check out a party game called Wits and Wagers. It's an interesting game where everyone is given a trivia question with a numerical answer. Everyone writes down their guess, then bets on the answers, with the more extreme answers paying out better. It's a cool game and a good source of numerical trivia.

" } }, { "_id": "qjuhXMhZfbRkNdt5c", "title": "HELP: Do I have a chance at becoming intelligent?", "pageUrl": "https://www.lesswrong.com/posts/qjuhXMhZfbRkNdt5c/help-do-i-have-a-chance-at-becoming-intelligent", "postedAt": "2010-10-26T21:41:37.807Z", "baseScore": 38, "voteCount": 31, "commentCount": 68, "url": null, "contents": { "documentId": "qjuhXMhZfbRkNdt5c", "html": "

If this post is inappropriate, I apologize.

\n

I stumbled upon this site after reading \"Harry Potter and the Methods of Rationality\".  The story so far has really moved me on multiple levels and sent me here on a quest to learn more about rationality as a philosophy/way of thinking about the world. I have read Ayn Rand's published works and loved the stories and most of the message.  The characters always seemed like titans far above me, but now I've seen a character that is a bit more approachable. 

\n

I've started to go through the \"Map and Territory\" section of the \"Core Sequences\" and this whole project and community makes me ecstatic.  I'm currently working my way through the Bayes's Theorem article with some success.  The more I read, the more I realize I may have a problem.

\n

 

\n

I'm pretty dumb.

\n

 

\n

Is higher-level reasoning \"use it or lose it\"?  I like learning new things and love reading, but any new idea requires a ton of thought and re-reading.  I think I have enough interest to keep plugging away at it, but I'm not sure I'm going at things the right way.  Is there a \"Kid's Table\" for lesswrong.com?

\n

For \"Priors\":  I'm 28 years old, white male, married, no children, poor economic upbringing, solid emotional upbringing, currently lower to middle class, high school diploma, US Navy, currently a civilian electronics technician, raised Baptist currently Agnostic/Atheist (recently).

\n

I guess that's it.  Thanks!

\n

--John

" } }, { "_id": "53YPLc8ehW4ogKsub", "title": "Ethics of Jury nullification and TDT?", "pageUrl": "https://www.lesswrong.com/posts/53YPLc8ehW4ogKsub/ethics-of-jury-nullification-and-tdt", "postedAt": "2010-10-26T21:01:23.563Z", "baseScore": 16, "voteCount": 14, "commentCount": 32, "url": null, "contents": { "documentId": "53YPLc8ehW4ogKsub", "html": "

I've been sort of banging my head on this issue (I have jury duty next week (first time)).

\n

 

\n

The obvious possibility is what if I get put on a drug use case? The obvious injustices of the anti-drug laws are well known, and I know of the concept of nullification, but I'm bouncing back and forth as to its validity.

\n

 

\n

Some of my thoughts on this:

\n

 

\n

Thought 1: Just decide if they did it or didn't do it.

\n

Thought 2: But can I ethically bring myself to declare guilty (and thus expose to potentially serious punishment) someone who really didn't do anything wrong, i.e. to support a seriously unjust law?

\n

Thought 3: (and here's where TDT style issues come in) On the other hand, the algorithm \"if jury member, don't convict if I don't like a particular law\" seems to be in general a potentially really really bad algorithm. (ie, one obvious failure mode for that algorithm would be homophobic juries that refuse to convict on hate crimes against gays)

\n

Thought 4: Generally, those sorts of people tend to not be serious rationalists. Reasoning as if I can expect correlations among our decision algorithms seems questionable.

\n

Thought 5: Really? Really? If I wanted to start making excuses like that, I could probably whenever I feel like construct a reference class for which I am the sole member. Thought 4 style reasoning seems itself to potentially be shaky.

\n

 

\n

So, basically I'm smart enough to have the above sequence of thoughts, but not smart enough to actually resolve it. What is a rationalist to do? (In other words, any help with untangling my thoughts on this so that I can figure out if I should go by the rule of \"nullify if appropriate\" or \"nullification is bad, period, even if the law in question is hateful\" would be greatly appreciated.)

" } }, { "_id": "qYmsn6ohsMxvAzB65", "title": "Teachers vs. Tutors", "pageUrl": "https://www.lesswrong.com/posts/qYmsn6ohsMxvAzB65/teachers-vs-tutors", "postedAt": "2010-10-26T16:59:48.669Z", "baseScore": 18, "voteCount": 17, "commentCount": 21, "url": null, "contents": { "documentId": "qYmsn6ohsMxvAzB65", "html": "

It's an anecdotal commonplace that rich parents in places like Manhattan are willing to shell out a colossal amount for their children's tutors.  Most recently I heard about a math PhD student at an elite university who's getting $500 an hour to tutor high school kids.

\n

Now this makes me wonder: why do tutors get paid so much, but not teachers?  Why isn't there a private school somewhere, full of elite superstar teachers, getting paid colossal sums?  Why isn't there someone trying to lure young people into being very well-paid schoolteachers instead of professors or hedge fund managers?  If some parents have the extra resources to spend on their children's education, why is that money going to tutors rather than schools?

\n

Some hypotheses that came to mind:

\n

1.  One-on-one tutoring really is the form of learning that gives you the most bang for your buck; the marginal dollar is best spent on getting a better tutor because tutoring is more effective than school.

\n

2.  The marginal dollar is best spent on getting a better tutor because tutors are independent contractors. Switching your kid's school is a big change for the kid, and a discrete jump in price, but getting a new tutor at slightly more cost is easy.

\n

3.  Something about the rules of teachers' unions prevents a few \"superstar teachers\" from being paid colossal sums.

\n

4.  The sort of person who becomes a $500-an-hour tutor usually has a background in something other than education, and isn't credentialed to be a schoolteacher, and perhaps doesn't want to teach full time.  You could get him to tutor, if you paid well, but you couldn't get him to be a full-time teacher without an expenditure beyond the means of even the wealthiest parents.

\n

5.  Parents are using tutors to improve their children's grades, not their education.  Putting that same money into the kid's school wouldn't improve the kid's grades relative to his classmates'.

\n

 

\n

Any other ideas?

" } }, { "_id": "LLWTxFJAuke96u5Ln", "title": "Pathological utilitometer thought experiment", "pageUrl": "https://www.lesswrong.com/posts/LLWTxFJAuke96u5Ln/pathological-utilitometer-thought-experiment", "postedAt": "2010-10-26T15:13:06.100Z", "baseScore": 11, "voteCount": 13, "commentCount": 30, "url": null, "contents": { "documentId": "LLWTxFJAuke96u5Ln", "html": "

\n

I've been doing thought experiments involving a utilitometer: a device capable of measuring the utility of the universe, including sums-over-time and counterfactuals (what-if extrapolations), for any given utility function, even generic statements such as, \"what I value.\" Things this model ignores: nonutilitarianism, complexity, contradictions, unknowability of true utility functions, inability to simulate and measure counterfactual universes, etc.

\n

Unfortunately, I believe I've run into a pathological mindset from thinking about this utilitometer. Given the abilities of the device, you'd want to input your utility function and then take a sum-over-time from the beginning to the end of the universe and start checking counterfactuals (\"I buy a new car\", \"I donate all my money to nonprofits\", \"I move to California\", etc) to see if the total goes up or down.

\n

It seems quite obvious that the sum at the end of the universe is the measure that makes the most sense, and I can't see any reason for taking a measure at the end of an action as is done in all typical discussions of utility. Here's an example: \"The expected utility from moving to California is negative due to the high cost of living and the fact that I would not have a job.\" But a sum over all time might show that it was positive utility because I meet someone, or do something, or learn something that improves the rest of my life, and without the utilitometer, I would have missed all of those add-on effects. The device allows me to fill in all of the unknown details and unintended consequences.

\n

Where this thinking becomes a problem is when I realize I have no such device, but desperately want one, so I can incorporate the unknown and the unintended, and know what path I should be taking to maximize my life, rather than having the short, narrow view of the future I do now. In essence, it places higher utility on 'being good at calculating expected utility' than almost any other actions I could take. If I could just build a true utilitometer that measures everything, then the expected utility would be enormous! (\"push button to improve universe\"). And even incremental steps along the way could have amazing payoffs.

\n

Given that a utilitometer as described is impossible, thinking about it has still altered my values to place steps toward creating it above other, seemingly more realistic options (buying a new car, moving to California, etc). I previously asked the question, \"How much time and effort should we put into improving our models and predictions, given we will have to model and predict the answer to this question?\" and acknowledged it was circular and unanswerable. The pathology comes from entering the circle and starting a feedback loop; anything less than perfect prediction means wasting the entire future.

\n

" } }, { "_id": "22HfpjsydDS2A6JhH", "title": "Self-empathy as a source of \"willpower\"", "pageUrl": "https://www.lesswrong.com/posts/22HfpjsydDS2A6JhH/self-empathy-as-a-source-of-willpower", "postedAt": "2010-10-26T14:20:11.565Z", "baseScore": 83, "voteCount": 69, "commentCount": 32, "url": null, "contents": { "documentId": "22HfpjsydDS2A6JhH", "html": "\n

tl;dr: Dynamic consistency is a better term for \"willpower\" because its meaning is robust to changes in how we think consistent behavior actually manages to happen. One can boost consistency by fostering interactions between mutually inconsistent sub-agents to help them better empathize with each other.

\n

Despite the common use of the term, I don't think of my \"willpower\" as an expendable resource, and mostly it just doesn't feel like one. Let's imagine Bob, who is somewhat overweight, likes to eat cake, and wants to lose weight to be more generically attractive and healthy. Bob often plans not to eat cake, but changes his mind, and then regrets it, and then decides he should indulge himself sometimes, and then decides that's just an excuse-meme, etc. Economists and veteran LessWrong readers know this oscillation between value systems is called dynamic inconsistency (q.v. Wikipedia). We can think of Bob as oscillating between being two different idealized agents living in the same body: a WorthIt agent, and a NotWorthIt agent.

\n

The feeling of NotWorthIt-Bob's (in)ability to control WorthIt-Bob is likely to be called \"(lack of) willpower\", at least by NotWorthIt-Bob, and maybe even by WorthIt-Bob. But I find the framing and language of \"willpower\" fairly unhelpful. Instead, I think NotWorthIt-Bob and WorthIt-Bob just aren't communicating well enough. They try to ignore each other's relevance, but if they could both be present at the same time and actually talk about it, like two people in a healthy relationship, maybe they'd figure something out. I'm talking about self-empathy here, which is opposite to self-sympathy: relating to emotions of yours that you are not immediately feeling. Haven't you noticed you're better at convincing people to change their minds when you actually empathize with their position during the conversation? The same applies to convincing yourself.

\n

Don't ask \"Do I have willpower?\", but \"Am I a dynamically consistent team?\"

\n

The task of keeping your various mental impulses working together coherently is called executive functioning. To deal with an \"always eat cake\" impulse, some may be lucky enough to win by simply reciting \"cake isn't really that tasty anyway\". A more potent technique is to practice visualizing the cake making you instantaneously grotesque and extremely ill, creating a psychological flinch-away reflex — a behavioral trigger — which will be activated on the sight of cake and intervene on the usual behavior to eat it. But such behavioral triggers can easily fail if they aren't backed up by an agreement between your WorthIt and NotWorthIt sub-agents: if you end up smelling the cake, or trying \"just one bite\" to be \"polite\" at your friend's birthday, it can make you all-of-a-sudden-remember how tasty the cake is, and destroy the trigger.

\n

To really be prepared, Bob needs to vaccinate himself against extenuating circumstances. He needs to admit to himself that cake really is delicious, and decide whether it's worth eating without downplaying how very delicious it is. He needs to sit down with the cake, stare at it, smell it, taste three crumbs of it, and then toss it. (If possible, he should give it away. But note that, despite parentally-entrained guilt about food waste, Bob hurting himself with the cake won't help anyone else help themselves with it: starving person eats cake > no one eats cake > Bob eats cake.)

\n

This admission corresponds to having a meeting between WorthIt-Bob and NotWorthIt-Bob: having both sets of emotions present and salient simultaneously allows them to reach a balance decisively. Maybe NotWorthIt-Bob will decide that eating exactly one slice of cake-or-equivalent tasty food every two weeks really is worth it, and keep a careful log to ensure this happens. Maybe WorthIt-Bob will approve of the cake-is-poison meditation techniques and actually change his mind. Maybe Bob will become one person who consistently values his health and appearance over spurious taste sensations.

\n

Or maybe not. But it sure works for me.

" } }, { "_id": "W2ufY8ihDDWWqJA7h", "title": "If you don't know the name of the game, just tell me what I mean to you", "pageUrl": "https://www.lesswrong.com/posts/W2ufY8ihDDWWqJA7h/if-you-don-t-know-the-name-of-the-game-just-tell-me-what-i", "postedAt": "2010-10-26T13:43:57.762Z", "baseScore": 16, "voteCount": 17, "commentCount": 26, "url": null, "contents": { "documentId": "W2ufY8ihDDWWqJA7h", "html": "

Following: Let's split the Cake

\n

tl;dr: Both the Nash Bargaining Solution (NBS) and the Kalai-Smorodinsky Bargaining Solution (KSBS), though acceptable for one-off games that are fully known in advance, are strictly inferior for independent repeated games, or when there is uncertainty as to which game will be played.

\n

Let's play a bargaining game, you and I. We can end up with you getting €1 and me getting €3, both of us getting €2, or you getting €3 and me getting €1. If we fail to agree, neither of us gets anything.

\n

Oh, and did I forget to mention that another option was for you to get an aircraft carrier and me to get nothing?

\n

Think of that shiny new aircraft carrier, loaded full with jets, pilots, weapons and sailors; think of all the things you could do with it, all the fun you could have. Places to bomb or city harbours to cruise majestically into, with the locals gaping in awe at the sleek powerful lines of your very own ship.

\n

Then forget all about it, because Kalai-Smorodinsky says you can't have it. The Kalai-Smorodinsky bargaining solution to this game is 1/2 of a chance of getting that ship for you, and 1/2 of a chance of getting €3 for me (the Nash Bargaining Solution is better, but still not the best, as we'll see later). This might be fair; after all, unless you have some way of remunerating me for letting you have it, why should I take a dive for you?

\n

But now imagine we are about to start the game, and we don't know the full rules yet. We know about the €'s involved, and we know there will be an offer of an aircraft carrier; but we don't know who is going to get the offer. If we wanted to decide on our bargaining theory in advance, what would we do?

\n

Obviously not use the KSBS; it gives 1/4 of an aircraft carrier to each player. One excellent solution is simple: whoever has the option of the aircraft carrier... gets it. This gives us both an expected gain of 1/2 aircraft carrier before the game begins, superior to the other approaches.
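To see the gap in numbers, here is a toy comparison of two pre-committed policies over the uncertain game. The 100-utilon value of the carrier is an invented stand-in, and the 50/50 coin flip is a stylised proxy for a within-game randomising rule, not an exact KSBS computation:

# Expected utilities of two policies, averaging over which player gets the carrier option.
CARRIER = 100.0   # invented utility of the aircraft carrier to whoever receives it
EURO3 = 3.0

def expected(policy):
    # policy(owner) -> (u1, u2) for one realised game; average over the two owners
    outcomes = [policy(owner) for owner in (1, 2)]
    return tuple(sum(u) / 2 for u in zip(*outcomes))

def give_carrier_to_owner(owner):
    return (CARRIER, 0.0) if owner == 1 else (0.0, CARRIER)

def coin_flip_within_game(owner):
    # half the time the carrier goes to its owner, half the time the other player gets 3 euros
    return (CARRIER / 2, EURO3 / 2) if owner == 1 else (EURO3 / 2, CARRIER / 2)

print(expected(give_carrier_to_owner))    # (50.0, 50.0)
print(expected(coin_flip_within_game))    # (25.75, 25.75)

Both policies are symmetric in expectation, but the first dominates the second for both players.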

\n

Let's formalise this uncertainty over possible games. A great game (GG) is a situation where we are about to play one of the games g_j, each with probability p_j. My utility function is U1, yours is U2. Once we choose a bargaining equilibrium which results in outcome o_j with utilities (a_j, b_j) for each game, our expected utility gain is (Σp_j a_j, Σp_j b_j) = Σp_j (a_j, b_j). Since the sets of outcome utilities for the individual games are convex, the set S of possible (expected) outcome utilities for the GG is also convex.

\n

Now, we both want a bargaining equilibrium that will be Pareto optimal for GG. What should we do? Well, first, define:

Definition: for a weight µ ≥ 0, let µSMBS be the bargaining solution that picks the outcome maximising U1+µU2.

We'll extend the definition to μ=∞ by setting that to be the bargaining solution that involves maximising U2. Now this definition isn't complete; there are situations where maximising (U1+µU2) doesn't give a single solution (such as when µ=1, we have to split €10 between us, and we both have utilities where 1 utilon=1€). These situations are rare, but we'll assume here that any μSMBS comes complete with a tie-breaker method for selecting unique solutions in these cases.
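For concreteness, here is a minimal sketch of μSMBS over a finite set of pure outcomes; the toy game and the tie-breaking rule are my own choices for illustration.

```python
# Sketch of muSMBS on a finite outcome set: pick the (u1, u2) pair maximising
# u1 + mu*u2, with an arbitrary but fixed tie-breaker (prefer the larger u2).

def mu_smbs(outcomes, mu):
    if mu == float("inf"):                 # the mu = infinity case: maximise U2 alone
        return max(outcomes, key=lambda o: o[1])
    return max(outcomes, key=lambda o: (o[0] + mu * o[1], o[1]))

outcomes = [(1, 3), (2, 2), (3, 1)]        # a toy game
print(mu_smbs(outcomes, 0.5))   # (3, 1): low mu weights player 1 heavily
print(mu_smbs(outcomes, 2.0))   # (1, 3): high mu weights player 2 heavily
```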

\n

The first (major) result is:

If both players use μSMBS, for any fixed µ and tie-breaker, in each of the games gj, then the resulting expected outcome is Pareto-optimal for the GG.

To prove this, let oj be the outcomes for game gj, using μSMBS, with expected utilities (aj,bj). The GG has expected utilities (a,b) =∑pj (aj,bj). Let fµ be the function that maps (x,y) to x+µy. The μSMBS is equivalent to maximising the value of fµ for each game.

\n

So now let qj be another possible outcome set, with per-game expected utilities (cj,dj) and overall expected utility (c,d), assumed to be strictly superior, for both players, to (a,b). Now, because µ is non-negative, (c,d) > (a,b) implies c>a and µd≥µb, and so fµ(c,d) > fµ(a,b). However, by the definition of μSMBS, we must have fµ(aj,bj) ≥ fµ(cj,dj). Since fµ is linear, fµ(a,b)=∑pj fµ(aj,bj) ≥ ∑pj fµ(cj,dj) = fµ(c,d). This contradicts the assumption that (c,d) > (a,b), and hence proves that (a,b) is Pareto-optimal.

\n

This strong result has a converse, namely:

Any bargaining solution whose expected outcome is Pareto-optimal for the GG produces the same results as μSMBS, for some µ (possibly ∞) and some tie-breaker.

Let oj be the outcomes for a given Pareto-optimal bargaining solution, with expected utilities (aj,bj), and GG having expected utilities (a,b) =∑pj (aj,bj). The set S of possible expected utilities for GG is convex, and since (a,b) is Pareto-optimal, it must lie on the boundary. Hence there exists a supporting line L through (a,b), with S entirely on one side of it, and since (a,b) is Pareto-optimal, L can be taken to have non-positive slope. Choose μ ≥ 0 so that the slope of L is -1/μ (μ=0 for a vertical L, μ=∞ for a horizontal one); then fµ is constant along L, and S lies on the side where fµ is no larger. Thus, there does not exist any (c,d) among the possible expected utility outcomes with fµ(c,d) > fµ(a,b).

\n

Now, if there were to exist an outcome qk for the game gk with expected utilities (ck,dk) such that fµ(ck,dk) > fµ(ak,bk), then the expected utility for the outcomes oj with ok replaced by qk would be (c,d) = (a,b) + pk((ck,dk) - (ak,bk)). This has fµ(c,d) > fµ(a,b), contradicting the previous result. Thus (aj,bj) always maximise fµ in gj, and hence this bargaining solution produces the same results as μSMBS (with a given tie-breaking procedure, if needed).

\n

So the best solution, if you are uncertain which games you could be playing, is to fix a common relative measure of value, and then maximise that, ignoring considerations of fairness or anything else. To a crude approximation, capitalism is like this: every game is supposed to maximise money as much as it can.

\n

Multiple games

\n

For the moment, we've been considering a single game, with uncertainty as to which game is going to be played. The same result goes through, however, if you are expecting to play multiple games in a row. One caveat is needed: the games must be independent of each other.

\n

The result holds, for instance, in two games with stakes €10, as long as our utilities are linear in these (small) amounts of money. It does not hold if the first game's stake is a left shoe and the second's is a right shoe, for there the utilities of the outcomes are correlated: if I win the left shoe, I'm much more likely to value the right shoe in the subsequent game.

\n

Invariance

\n

Maximising summed utility is invariant under translations (which just add a single constant to each utility). It is not, of course, invariant under scalings, and it would be foolish indeed to first decide on μ and then allow players to rescale their utilities. In general the μ is not a real number, but a linear isomorphism between the two utilities, invariantly defined by some process.

\n

Kalai-Smorodinsky and Nash's revenge

\n

So, it seems established. μSMBS is the way to go. KSBS and NBS are loser solutions, and should be discarded. As you'd imagine, it's not quite so simple...

\n

The problem is, which µ? Fixing that µ is going to determine your expected utility for any GG, so it's an important value. And each player is going to have a different preferred value for it, each trying to minimise the weight given to the other agent's utility. So there will have to be some... bargaining. And that bargaining will be a one-shot bargaining deal, not a repeated one, so there is no superior way of going about it. Use KSBS or NBS or anything like that if you want; or set µ to reflect your joint valuing of a common currency ($, € or negentropy); but you can't get around the fact that you're going to have to fix that µ somehow. And if you use μ'SMBS to do so, you've just shifted the battle to the value of μ'...

\n

 

\n

Edit: the mystery of µ (mathy, technical, and not needed to understand the rest of the post)

\n

There has been some speculation on the list as to the technical meaning of U1+µU2 and µ. To put this on an acceptable rigorous footing, let u1 and u2 be any two representatives of U1 and U2 (in that u1 and u2 are what we would normally term as \"utility functions\", and U1 and U2 are the set of utility functions that are related to these by affine transformations). Then µ is a function from the possible pairs (u1, u2) to the non-negative reals, with the property that it is equivariant under linear transformations of u1 and inverse-equivariant under linear transformations of u2 (in human speak: when u1 gets scaled bigger, so does µ, and when u2 gets scaled bigger, µ gets scaled smaller), and invariant under translations. Then we can define U1+µU2 as the set of utility functions for which u1+µu2 is a representative (the properties of µ make this well defined, independently of our choices of u1 and u2). Whenever µ≠0, there is a well-defined µ⁻¹, with the property that µ⁻¹U1+U2 = U1+µU2. Then the case μ=∞ is defined to be μ⁻¹=0.
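Written out, my reading of the equivariance conditions just described is the following (my own notation, not the author's):

```latex
% For representatives u_1, u_2 and constants \alpha, \gamma > 0 and
% \beta, \delta \in \mathbb{R}:
\begin{align*}
  \mu(\alpha u_1 + \beta,\; u_2) &= \alpha\,\mu(u_1, u_2), \\
  \mu(u_1,\; \gamma u_2 + \delta) &= \tfrac{1}{\gamma}\,\mu(u_1, u_2),
\end{align*}
% so that u_1 + \mu(u_1, u_2)\, u_2 changes only by an affine transformation
% when u_1 or u_2 does, and U_1 + \mu U_2 is a well-defined equivalence class.
```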

" } }, { "_id": "gjyDBd3Werweuhi5w", "title": "Levels of Intelligence", "pageUrl": "https://www.lesswrong.com/posts/gjyDBd3Werweuhi5w/levels-of-intelligence", "postedAt": "2010-10-26T11:57:22.948Z", "baseScore": -20, "voteCount": 17, "commentCount": 82, "url": null, "contents": { "documentId": "gjyDBd3Werweuhi5w", "html": "

Level 1: Algorithm-based Intelligence

\n

An intelligence of level 1 acts on innate algorithms, like a bacterium that survives using inherited mechanisms. 

\n

Level 2: Goal-oriented Intelligence

\n

An intelligence of level 2 has an innate goal. It develops and finds new algorithms to solve a problem. For example, the paperclip maximizer is a level-2 intelligence.

\n

Level 3: Philosophical Intelligence

\n

An intelligence of level 3 has neither preset algorithms nor preset goals. It searches both for goals and for algorithms to achieve them. Ethical questions are only applicable to intelligences of level 3.

" } }, { "_id": "kbFf6vAoTD2wX6QWh", "title": "HELP: How do minimum wage laws harm people?", "pageUrl": "https://www.lesswrong.com/posts/kbFf6vAoTD2wX6QWh/help-how-do-minimum-wage-laws-harm-people", "postedAt": "2010-10-26T11:09:10.853Z", "baseScore": 3, "voteCount": 10, "commentCount": 22, "url": null, "contents": { "documentId": "kbFf6vAoTD2wX6QWh", "html": "

The concept of minimum wage is one I'm rather attached to. I have dozens of arguments for why it helps people, improves the world, etc. etc. I suspect this view is shared by most of this community, although I haven't seen any discussion of it.

\n

 

\n

I don't have much understanding of the harms that minimum wages cause, or of the level of minimum wage at which those harms become relevant (i.e. a minimum wage that would not be a living wage even working 24 hours a day is unlikely to have any of the same problems that a minimum wage sufficient to buy an aircraft carrier an hour would have).

\n

So what are the harms that such laws cause?

" } }, { "_id": "fSfmive2aTMpdY6qM", "title": "Should people require a mandatory license for parenting?", "pageUrl": "https://www.lesswrong.com/posts/fSfmive2aTMpdY6qM/should-people-require-a-mandatory-license-for-parenting", "postedAt": "2010-10-26T08:47:36.128Z", "baseScore": -3, "voteCount": 13, "commentCount": 21, "url": null, "contents": { "documentId": "fSfmive2aTMpdY6qM", "html": "

Sir, Could I See Your Breeding License?

\n
\n

Why [...] are we so cavalier about who we let have and raise them? As technology enables more people to reproduce, environmental pressures make each new life a bigger burden, and our understanding of child psychology improves, it’ll become more and more evident that just because a person can have kids doesn’t mean they should have kids. My guess is that, decades down the road, future generations will require a license to reproduce and start a family. That sounds like a pretty good idea to me.

\n
\n

Most important is that children shouldn't have to grow up under horrible circumstances inflicted on them by their parents' inability to raise them. You always have to weigh the freedom of some against the harm it could inflict on others. In this case a bit less freedom would guarantee a lot less distress.

\n

It is reasonable. I don't see how we can ask for species-appropriate animal husbandry for animals like chimps but not for children. You have to have a driver's license for good reasons too. So why is everyone allowed to rule over helpless human beings for years without having to prove their ability to do so in a way that guarantees the well-being of their protégé?

\n

Such discussions always remind me of something important. Children should not be assigned any religion. There should be a certain age at which they can decide what religion they want to follow, if any. This doesn't mean that religious people shouldn't be able to have children, but that they shouldn't be able to force their children into a certain framework either. Parents should be forced to allow their children to take part in an educational framework based on contemporary ethics and knowledge. I don't even have a problem with lessons in religion in school, as religion is part of human nature. But such lessons should not focus on the truth of any particular religion; they should offer an overview and a comparison with non-religious ethics and truth-seeking.

" } }, { "_id": "rk7JtSmSSMpQsaQyi", "title": "Luminosity (Twilight fanfic) Part 2 Discussion Thread", "pageUrl": "https://www.lesswrong.com/posts/rk7JtSmSSMpQsaQyi/luminosity-twilight-fanfic-part-2-discussion-thread", "postedAt": "2010-10-25T23:07:49.960Z", "baseScore": 9, "voteCount": 9, "commentCount": 425, "url": null, "contents": { "documentId": "rk7JtSmSSMpQsaQyi", "html": "

This is Part 2 of the discussion of Alicorn's Twilight fanfic Luminosity

\n

LATE BREAKING EDIT: Part 3 exists now, so new comment threads should be started there rather than here.

\n

In the vein of the Harry Potter and the Methods of Rationality discussion threads this is the place to discuss anything relating to Alicorn's Twilight fanfic Luminosity. The fanfic is also archived on Alicorn's own website.

\n

Here is Part 1 of the discussion.  Previous discussion is hidden so deeply within the first Methods of Rationality thread that it's difficult to find even if you already know it exists.

\n

Similar to how Eliezer's fanfic popularizes material from his sequences Alicorn is using the insights from her Luminosity sequence.

\n

The fic is really, really good, but there is a twist partway through that makes it even more worth reading than it already was. The twist also makes the fic hard to talk about, because even asking whether someone is twist-aware using any specific hints is difficult.  The twist is in the latter half of the story.  If you are certainly not post-twist and want to save the surprise, then you should stop reading here and fall back to the Part 1 discussion or to the fic itself.

\n

 

\n

\n

If you think you're pretty sure you are post-twist and are safe to read the rest of this, try reading this rot13'ed hint and see if what you've read matches this high level description of the twist...

\n

Rqjneq unf qvfpbirerq gur frperg gung Vfnoryyn jnf xrrcvat sebz uvz \"sbe uvf bja tbbq\" bhg bs srne bs Neb ernqvat Rqjneq'f zvaq.  Va gur nsgrezngu, fbzrguvat unf punatrq nobhg gurve eryngvbafuvc gung znl unir pnhfrq lbh gb pel sbe n juvyr, naq juvpu znlor urycf gb rzbgvbanyyl qevir ubzr gur pbzovarq zrffntr bs YJ'f negvpyrf nobhg \"fbzrguvat gb cebgrpg\" naq \"ernfba nf n zrzrgvp vzzhar qvfbeqre\" naq gur jnl gurl pna fvzhygnarbhfyl nccyl gb crbcyr jub unir abguvat zber va gur jbeyq guna fbzr fvatyr crefba jub gurl ybir.

\n

If the answer to the hint is obvious, then just to be sure that there is not a double illusion of transparency at work, here is the cutoff point spelled out explicitly:

\n

Gur phgbss cbvag sbe cbfgvat urer vf gung lbh unir ernq hc gb puncgre svsgl svir (va gur snasvpgvba irefvba) be puncgre gjragl rvtug ba Nyvpbea'f jrofvgr jurer Rqjneq jnf cebonoyl vapvarengrq, Vfnoryyn fheivirf na nggrzcgrq vapvarengvba, naq fur unf gb ortha gb jbex bhg jung gb qb jvgu gur jerpxntr bs gur erfg bs ure \"rgreany\" yvsr.

\n
And now for your regularly scheduled commenting...
" } }, { "_id": "WpqHb9hHcd37hh4Tf", "title": "Activation Costs", "pageUrl": "https://www.lesswrong.com/posts/WpqHb9hHcd37hh4Tf/activation-costs", "postedAt": "2010-10-25T21:30:58.150Z", "baseScore": 39, "voteCount": 37, "commentCount": 40, "url": null, "contents": { "documentId": "WpqHb9hHcd37hh4Tf", "html": "

Enter Wikipedia:

\n
\n

In chemistry, activation energy is a term introduced in 1889 by the Swedish scientist Svante Arrhenius, that is defined as the energy that must be overcome in order for a chemical reaction to occur.

\n
\n

In this article, I propose that:

\n\n

After proposing that, I'd like to explore:

\n\n

Every action a person takes has an activation cost. The activation cost of a consistent, deeply embedded habit is zero. It happens almost automatically. The activation cost for most people in the United States to exercise is fairly high, and most people are inconsistent about exercising. However, there are people who - every single day - begin by putting their running shoes on and running. Their activation cost to running is effectively zero.

\n

These costs vary from person to person. In the daily running example above, the activation cost to the runner is low. The runner simply starts running in the morning. For most people, it's higher for a variety of reasons we'll get to in a moment. The running example is fairly obvious, but you'll also see phenomena like a neat person saying to a sloppy one, \"Why don't you clean your desk? ... just f'ing do it, man.\" Assuming the messy person indeed wants a clean desk, it's likely the messy person has a higher activation cost to cleaning his desk. (He could also have less energy/willpower.)

\n

These costs can change over time. If the every-morning-runner suffers from a prolonged illness or injury and ceases to run, restarting the program might have a much higher activation cost for a variety of reasons we'll cover in a moment.  

\n

Finally, I'd like to propose that activation costs explain a lot of akrasia and procrastination. Akrasia is defined as \"acting against one's better judgment.\" I think it's possible that an action a person wishes to take has higher activation costs than they have available energy for activation at the moment. There is emerging literature on limited willpower and \"ego depletion,\" here's Wikipedia on the topic:

\n
\n

Ego depletion refers to the idea that self-control or willpower is an exhaustible resource that can be used up. When that energy is low (rather than high), mental activity that requires self-control is impaired. In other words, using one's self-control impairs the ability to control one's self later on. In this sense, the idea of (limited) willpower is correct.

\n
\n

While this is anecdotal, I believe that starting a desired action is frequently the hardest part, and usually the part that requires the most ego/will/energy. Thus, the activation cost. Continuing in motion is not as difficult as starting - as activating.

\n

This implies that there would be two effective ways to beat akrasia-based procrastination. The first would be to lower the activation cost; the second would be to increase energy/willpower/ego available for activation.

\n

Both are valid approaches, but I think lowering activation costs is more sustainable. I think there are local maxima of energy that can be achieved, and it's likely that even the most successful and industrious people will go through low-energy periods. Obviously, once an activation cost is lowered to zero or near zero, it becomes trivial to do the action as much as is desired.

\n

Some people have a zero activation cost to go running, and do it every day for the benefit of their health. Some people have zero activation cost to cleaning their desk, and do it whenever they realize it's messy. Some people have a zero activation cost to self-promote/self-market, and thus they're frequently talking themselves up, promoting, and otherwise trying to get people to pay attention to their work. Most of us have higher activation costs to go running, clean a desk, or market/promote something. Thus, these actions burn a lot more energy, and sometimes completing them is effectively impossible.

\n

The following factors seem to increase activation cost (not a complete list):

\n\n

The following factors seem to decrease activation cost (not a complete list):

\n\n

Additionally, another way to go anti-akrasia is to increase energy levels through good diet, exercise, mental health, breathing, collaboration, good work environment, nature, adequate rest and relaxation. Some of these might additionally lower activation costs in addition to increasing energy.

\n

I believe the most effective way to do activities you want to do is to decrease their activation cost to as close to zero as possible. This implies you should defeat ugh fields, reduce trivial inconveniences and barriers, de-compartmentalize (and get something to protect), untangle your identity from the action you're taking, and find as clear instructions as possible. Also, deadlines, constraints, momentum, grouping and batching tasks, structured procrastination, clear instructions, establishing habits, setting up helpful cached-self effects and reducing negative ones, and treating activities to be done as a game all seem to be of value.

\n

I would be excited for more discussion on this topic. I believe activation costs are a large part of what causes akrasia-based procrastination, and reducing activation costs will help us get what we want.

" } }, { "_id": "ARNgTP7GtPgEJtQ2D", "title": "A suggestion on how to get people to read the Sequences", "pageUrl": "https://www.lesswrong.com/posts/ARNgTP7GtPgEJtQ2D/a-suggestion-on-how-to-get-people-to-read-the-sequences", "postedAt": "2010-10-25T19:21:31.987Z", "baseScore": 43, "voteCount": 33, "commentCount": 8, "url": null, "contents": { "documentId": "ARNgTP7GtPgEJtQ2D", "html": "

I just encountered Archive Binge, a website that makes custom RSS feeds of certain webcomics' archives, presenting a few comics per day so that people can easily catch up to those comics' current strips without overloading themselves.

\n

I strongly suspect that a similar tool would be useful for the Sequences. It might be good to have a few extra features, like the ability to only see posts with a certain tag, but even just a basic feed that presented one or two of them a week would be useful.

\n

It might be even more useful to, rather than allowing each person to make their own feed, have a single feed that cycles through everything and then restarts, to encourage new conversations about those articles between people who are reading the sequences at the same time. If the resulting cycle is too long, we could also have a second or third feed offset from the main one, so that no one has to wait more than a few months to subscribe to a feed that's starting at the beginning.
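As a rough illustration of the pacing logic such a feed would need (this is my own sketch, not an existing tool, and the URLs are hypothetical):

```python
# Decide which Sequence posts a subscriber should see this week, at a fixed
# rate, cycling back to the start once the list is exhausted.
from datetime import date

POSTS_PER_WEEK = 2

def posts_for_today(post_urls, start, today=None):
    """Return this week's batch for a reader who subscribed on `start`."""
    today = today or date.today()
    weeks_elapsed = max((today - start).days // 7, 0)
    released = (weeks_elapsed + 1) * POSTS_PER_WEEK
    schedule = [post_urls[i % len(post_urls)] for i in range(released)]
    return schedule[-POSTS_PER_WEEK:]

sequence = ["https://lesswrong.com/posts/a", "https://lesswrong.com/posts/b",
            "https://lesswrong.com/posts/c", "https://lesswrong.com/posts/d"]
print(posts_for_today(sequence, start=date(2010, 10, 1), today=date(2010, 10, 25)))
```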

" } }, { "_id": "PnYh6hZsFPRB3GPCe", "title": "Dealing with the high quantity of scientific error in medicine", "pageUrl": "https://www.lesswrong.com/posts/PnYh6hZsFPRB3GPCe/dealing-with-the-high-quantity-of-scientific-error-in", "postedAt": "2010-10-25T13:53:47.015Z", "baseScore": 60, "voteCount": 39, "commentCount": 61, "url": null, "contents": { "documentId": "PnYh6hZsFPRB3GPCe", "html": "

In a recent article, John Ioannidis describes a very high proportion of medical research as wrong.

\n
Still, Ioannidis anticipated that the community might shrug off his findings: sure, a lot of dubious research makes it into journals, but we researchers and physicians know to ignore it and focus on the good stuff, so what’s the big deal? The other paper headed off that claim. He zoomed in on 49 of the most highly regarded research findings in medicine over the previous 13 years, as judged by the science community’s two standard measures: the papers had appeared in the journals most widely cited in research articles, and the 49 articles themselves were the most widely cited articles in these journals. These were articles that helped lead to the widespread popularity of treatments such as the use of hormone-replacement therapy for menopausal women, vitamin E to reduce the risk of heart disease, coronary stents to ward off heart attacks, and daily low-dose aspirin to control blood pressure and prevent heart attacks and strokes. Ioannidis was putting his contentions to the test not against run-of-the-mill research, or even merely well-accepted research, but against the absolute tip of the research pyramid. Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable. That article was published in the Journal of the American Medical Association.
\n

Part of the problem is that surprising results get more interest, and surprising results are more likely to be wrong. (I'm not dead certain of this-- if the baseline beliefs are highly likely to be wrong, surprising beliefs become somewhat less likely to be wrong.) Replication is boring. Failure to replicate a bright shiny surprising belief is boring. A tremendous amount isn't checked, and that's before you start considering that a lot of medical research is funded by companies that want to sell something.

\n

Ioannidis' corollaries:

\n
Corollary 1: The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.
\n
Corollary 2: The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.
\n
Corollary 3: The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.
\n
Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.
\n
Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.
\n
Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.
\n

The culture at LW shows a lot of reliance on small inferential psychological studies-- for example that doing a good deed leads to worse behavior later. Please watch out for that.

\n

A smidgen of good news: Failure to Replicate, a website about failures to replicate psychological findings. I think this could be very valuable, and if you agree, please boost the signal by posting it elsewhere.

\n

From Failure to Replicate's author-- A problem with metastudies:

\n
Eventually, someone else comes across this small literature and notices that it contains “mixed findings”, with some studies finding an effect, and others finding no effect. So this special someone–let’s call them the Master of the Gnomes–decides to do a formal meta-analysis. (A meta-analysis is basically just a fancy way of taking a bunch of other people’s studies, throwing them in a blender, and pouring out the resulting soup into a publication of your very own.) Now you can see why the failure to publish null results is going to be problematic: What the Master of the Gnomes doesn’t know about, the Master of the Gnomes can’t publish about. So any resulting meta-analytic estimate of the association between lawn gnomes and subjective well-being is going to be biased in the positive direction. That is, there’s a good chance that the meta-analysis will end up saying lawn gnomes make people very happy,when in reality lawn gnomes only make people a little happy, or don’t make people happy at all.
\n
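A toy simulation makes the quoted point vivid; all the numbers here (true effect, standard error, significance rule) are invented by me for illustration:

```python
# Many small studies of a weak true effect; only the "significant" positive
# results get published; a naive average of the published estimates is badly
# inflated relative to the truth.
import random

TRUE_EFFECT = 0.1     # assumed small true effect, in standardised units
SE = 0.25             # assumed standard error of each small study
N_STUDIES = 10_000

random.seed(0)
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]
published = [e for e in estimates if e / SE > 1.96]   # positive "hits" only

print(sum(estimates) / len(estimates))   # close to the true effect, about 0.10
print(sum(published) / len(published))   # roughly 0.6: much larger than the truth
```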

The people I've read who gave advice based on Ioannidis' article strongly recommended eating paleo. I don't think this is awful advice in the sense that a number of people seem to actually feel better following it, and I haven't heard of disasters resulting from eating paleo. However, I don't know that it's a general solution to the problems of living with a medical system which does necessary work some of the time, but also is wildly inaccurate and sometimes destructive.

\n

The following advice has a purely anecdotal base, but at least I've heard a lot of it from people with ongoing medical problems. (Double meaning intended.)

\n

Before you use prescription drugs and/or medical procedures, make sure there's something wrong with you. Keep an eye out for side effects and the results of combined medicines. Check for evidence that whatever you're thinking about doing actually helps. Be careful with statins-- they can cause reversible memory problems and permanent muscle weakness. Choose a doctor who listens to you.

\n

Forum about self-experimentation-- note: even Seth Roberts is apt to oversell his results as applying to everyone.

\n

Link about the failure to replicate site found here.

" } }, { "_id": "hCwFxBai3oNnxrM9v", "title": "Let's split the cake, lengthwise, upwise and slantwise", "pageUrl": "https://www.lesswrong.com/posts/hCwFxBai3oNnxrM9v/let-s-split-the-cake-lengthwise-upwise-and-slantwise", "postedAt": "2010-10-25T13:15:02.855Z", "baseScore": 74, "voteCount": 50, "commentCount": 31, "url": null, "contents": { "documentId": "hCwFxBai3oNnxrM9v", "html": "

This post looks at some of the current models for how two agents can split the gain in non-zero sum interactions. For instance, if you and a buddy have to split £10 between the two of you, where the money is discarded if you can't reach a deal. Or there is an opportunity to trade your Elvis memorabilia for someone's collection of North Korean propaganda posters: unless you can agree to a way of splitting the gain from trade, the trade won't happen. Or there is the stereotypical battle of the sexes: either a romantic dinner (RD) or a night of Battlestar Galactica (BG) is on offer, and both members of the couple prefer doing something together to doing it separately - but, of course, each one has their preference on which activity to do together.

\n

Unlike standard games such as Prisoner's Dilemma, this is a coordinated game: the two agents will negotiate a joint solution, which is presumed to be binding. This allows for such solutions as 50% (BG,BG) + 50% (RD,RD), which cannot happen with each agent choosing their moves independently. The two agents will be assumed to be expected utility maximisers. What would your feelings be on a good bargaining outcome in this situation?

\n

Enough about your feelings; let's see what the experts are saying. In general, if A and C are outcomes with utilities (a,b) and (c,d), then another possible outcome is pA + (1-p)C (where you decide first, with odds p:1-p, whether to do outcome A or C), with utility p(a,b) + (1-p)(c,d). Hence if you plot every possible pair of expected utilities in the plane for a given game, you get a convex set.

\n

For instance, if there is an interaction with possible pure outcomes (-1,2), (2,10), (4,9), (5,7), (6,3), then the set of actual possible utilities is the pentagon presented here:

\n

\"\"

\n

The points highlighted in red are the Pareto-optimal points: those where there is no possibility of making both players better off. Generally only Pareto-optimal points are considered valid negotiation solutions: any other point leaves both player strictly poorer than they could be.
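As a quick check of which pure outcomes are on that frontier (my own sketch; the mixed outcomes along the upper-right edges are Pareto-optimal too, as the figure shows):

```python
# An outcome is dominated if some other pure outcome is at least as good for
# both players and different (hence strictly better for at least one).
outcomes = [(-1, 2), (2, 10), (4, 9), (5, 7), (6, 3)]

def dominated(p, points):
    return any(q != p and q[0] >= p[0] and q[1] >= p[1] for q in points)

print([p for p in outcomes if not dominated(p, outcomes)])
# [(2, 10), (4, 9), (5, 7), (6, 3)]: only (-1, 2) is dominated
```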

\n

The first and instinctive way to choose an outcome is to select the egalitarian point. This is the Pareto-optimal point where both players get the same utility (in this instance, both get 27/5 utility):

\n

\"\"

\n

However, you can add an arbitrary number to your utility function without changing any of your decisions; your utility function is translation invariant. Since both players can do so, they can translate the shape seen above in any direction in the plane without changing their fundamental value system. Thus the egalitarian solution is not well-defined, but an artefact of how the utility functions are expressed. One can move it to any Pareto-optimal point simply with translations. Thus the naive egalitarian solution should be rejected.

\n

Another naive possibility is to pick the point that reflects the highest total utility, by adding the two utilities together. This gives the vertex point (4,9):

\n

 

\n

\"\"

\n

This method will select the same outcome, no matter what translation is applied to the utilities. But this method is flawed as well: if one agent decides to multiply their utility function by any positive real factor, this does not change their underlying algorithm at all, but does change the \"highest total utility\". If the first agent decides to multiply their utility by five, for instance, we have the new picture portrayed here, and the \"highest total utility point\" has decidedly shifted, to the first player's great advantage:

\n

\"\"

\n

To summarise these transformations to utility functions, we say that a utility function is defined only up to affine transformations - translations and scalings. Thus any valid method of bargaining must also be invariant to affine transformations of both the utility functions.

\n

The two most popular ways of doing this are the Nash Bargaining solution (NBS), and the Kalai-Smorodinsky Bargaining Solution (KSBS). Both are dependent on the existence of a \"disagreement point\" d: a point representing the utilities that each player will get if negotiations break down. Establishing this disagreement point is another subject entirely; but once it is agreed upon, one translates it so that d=(0,0). This means that one has removed the translation freedom from the bargaining game, as any translation by either player will move d away from the origin. In this example, I'll arbitrarily choose the disagreement at d=(-1,4); hence it now suffices to find a scalings-invariant solution.

\n

Nash did so by maximising the product of the (translated) utilities. There are hyperbolas in the plane defined by the sets of (x,y) such that xy=a; Nash's requirement is equivalent to picking the outcome that lies on the hyperbola furthest from the origin. The NBS and the hyperbolas can be seen here:

\n

\"\"

\n

This gives the NBS as (4,9). But what happens when we scale the utilities? This is equivalent to multiplying x and y by constant positive factors. This will change xy=a to xy=b, thus will map hyperbolas to hyperbolas. This also does not change the concept of \"furthest hyperbola from the origin\", so the NBS will not change. This is illustrated for the scalings x->(18/25)x and y->(19/18)y:

\n

\"\"

\n

The NBS has the allegedly desirable property that it is \"Independent of Irrelevant Alternatives\". This means that if we delete an option that is not the NBS, then the NBS does not change, as can be seen by removing (2,10):

\n

\"\"

\n

 

\n

Some felt that \"Independence of Irrelevant Alternatives\" was not a desirable property. In the above situation, player 2 gets their best option, while player 1 gets the worst (Pareto-optimal) option available. Human intuition rebels against \"bargaining\" where one side gets everything, without having to give up anything. The NBS has other counter-intuitive properties, such as the fact that extra options can make some players worse off. Hence the KSBS replaces the independence requirement with \"if one adds an extra option that is not better than either player's best option, this makes no player worse off\".

\n

The KSBS is often formulated in terms of \"preserving the ratio of maximum gains\", but there is another, more intuitive formulation. Define the utopia point u=(ux, uy), where ux is the maximal possible gain for player 1, and uy the maximal possible gain for player 2. The utopia point is what a game-theory fairy could offer the players so that each would get the maximal possible gain from the game. Then the KSBS is defined as the Pareto-optimal point on the line joining the disagreement point d with the utopia point u:

\n

\"\"

\n

Another way to formulate the KSBS is by renormalising the utilities so that d=(0,0), u=(1,1), and then choosing the egalitarian solution:

\n

\"\"

\n

In a subsequent post, I'll critique these bargaining solutions, and propose a more utility-maximising way of looking at the whole process.

" } }, { "_id": "NPxGwZGoyyrNzkjNw", "title": "Willpower: not a limited resource?", "pageUrl": "https://www.lesswrong.com/posts/NPxGwZGoyyrNzkjNw/willpower-not-a-limited-resource", "postedAt": "2010-10-25T12:06:29.139Z", "baseScore": 39, "voteCount": 29, "commentCount": 28, "url": null, "contents": { "documentId": "NPxGwZGoyyrNzkjNw", "html": "

Stanford Report has a university public press release about a recent paper [subscription required] in Psychological Science.  The paper is available for free from a website of one of the authors.

\n

The gist is that they find evidence against the (currently fashionable) hypothesis that willpower is an expendable resource.  Here is the leader:

\n
\n

Veronika Job, Carol S. Dweck, and Gregory M. Walton
Stanford University

\n


Abstract:

\n

Much recent research suggests that willpower—the capacity to exert self-control—is a limited resource that is depleted after exertion. We propose that whether depletion takes place or not depends on a person’s belief about whether willpower is a limited resource. Study 1 found that individual differences in lay theories about willpower moderate ego-depletion effects: People who viewed the capacity for self-control as not limited did not show diminished self-control after a depleting experience. Study 2 replicated the effect, manipulating lay theories about willpower. Study 3 addressed questions about the mechanism underlying the effect. Study 4, a longitudinal field study, found that theories about willpower predict change in eating behavior, procrastination, and self-regulated goal striving in depleting circumstances. Taken together, the findings suggest that reduced self-control after a depleting task or during demanding periods may reflect people’s beliefs about the availability of willpower rather than true resource depletion.

\n
\n

(HT: Brashman, as posted on HackerNews.)

" } }, { "_id": "pFTEa5D9Hk9fmHLgc", "title": "A Paradox in Timeless Decision Theory", "pageUrl": "https://www.lesswrong.com/posts/pFTEa5D9Hk9fmHLgc/a-paradox-in-timeless-decision-theory", "postedAt": "2010-10-25T03:09:06.462Z", "baseScore": 10, "voteCount": 7, "commentCount": 7, "url": null, "contents": { "documentId": "pFTEa5D9Hk9fmHLgc", "html": "

I'm putting this in the discussion section because I'm not sure whether something like this has already been thought of, and I don't want to repeat things in a top-level post.

\n

Anyway, consider a Prisoner's-Dilemma-like situation with the following payoff matrix:
You defect, opponent defects: 0 utils
You defect, opponent cooperates: 3 utils
You cooperate, opponent defects: 1 util
You cooperate, opponent cooperates: 2 utils
Assume all players either have full information about their opponents, or are allowed to communicate and will be able to deduce each other's strategy correctly.

\n

Suppose you are a timeless decision theory agent playing this modified Prisoner's Dilemma with an actor that will always pick \"defect\" no matter what your strategy is. Clearly, your best move is to cooperate, gaining you 1 util instead of no utility, and giving your opponent his maximum 3 utils instead of the no utility he would get if you defected. Now suppose you are playing against another timeless decision theory agent. Clearly, the best strategy is to be that actor which defects no matter what. If both agents do this, the worst possible result for both of them occurs.
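A small sketch of the payoff structure (my own illustration of the paragraph above, not anything from the post):

```python
# payoffs[(my_move, their_move)] = my utils, from the matrix above
payoffs = {("D", "D"): 0, ("D", "C"): 3, ("C", "D"): 1, ("C", "C"): 2}

def best_reply(their_committed_move):
    # what a utility-maximiser does once the opponent's move is treated as fixed
    return max("CD", key=lambda m: payoffs[(m, their_committed_move)])

print(best_reply("D"))     # 'C': against a committed defector, cooperating gets 1 instead of 0
print(payoffs[("D", "D")]) # 0: but if both agents pre-commit to defect, both get the worst payoff
```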

\n

This situation can actually happen in the real world. Suppose there are two rival countries, and one demands some tribute or concession from the other, and threatens war if the other country does not agree, even though such a war would be very costly for both countries. The rulers of the threatened country can either pay the less expensive tribute or accept a more expensive war so that the first country will back off, but the rulers of the first country have thought of that and have committed to not back down anyway. If the tribute is worth 1 util to each side, and a war costs 2 utils to each side, this is identical to the payoff matrix I described. I'd be pretty surprised if nothing like this has ever happened.

" } }, { "_id": "zKxK66pHw9aL2btkr", "title": "Optimism versus cryonics", "pageUrl": "https://www.lesswrong.com/posts/zKxK66pHw9aL2btkr/optimism-versus-cryonics", "postedAt": "2010-10-25T02:13:34.654Z", "baseScore": 48, "voteCount": 45, "commentCount": 110, "url": null, "contents": { "documentId": "zKxK66pHw9aL2btkr", "html": "

Within the immortalist community, cryonics is the most pessimistic possible position. Consider the following superoptimistic alternative scenarios:

\n
    \n
  1. Uploading will be possible before I die.
  2. Aging will be cured before I die.
  3. They will be able to reanimate a whole mouse before I die, then I'll sign up.
  4. I could get frozen in a freezer when I die, and they will eventually figure out how to reanimate me.
  5. I could pickle my brain when I die, and they will eventually figure out how to reanimate me.
  6. Friendly AI will cure aging and/or let me be uploaded before I die.
\n

Cryonics -- perfusion and vitrification at LN2 temperatures under the best conditions possible -- is by far less optimistic than any of these. Of all the possible scenarios where you end up immortal, cryonics is the least optimistic. Cryonics can work even if there is no singularity or reversal tech for thousands of years into the future. It can work under the conditions of the slowest technological growth imaginable. All it assumes is that the organization (or its descendants) can survive long enough, technology doesn't go backwards (on average), and that cryopreservation of a technically sufficient nature can predate reanimation tech.

\n

It doesn't even require the assumption that today's best possible vitrifications are good enough. See, it's entirely plausible that it's 100 years from now when they start being good enough, and 500 years later when they figure out how to reverse them. Perhaps today's population is doomed because of this. We don't know. But the fact that we don't know what exact point is good enough is sufficient to make this a worthwhile endeavor at as early of a point as possible. It doesn't require optimism -- it simply requires deliberate, rational action. The fact is that we are late for the party. In retrospect, we should have started preserving brains hundreds of years ago. Benjamin Franklin should have gone ahead and had himself immersed in alcohol.

There's a difference between having a fear and being immobilized by it. If you have a fear that cryonics won't work -- good for you! That's a perfectly rational fear. But if that fear immobilizes you and discourages you from taking action, you've lost the game. Worse than lost, you never played.

\n

This is something of a response to Charles Platt's recent article on Cryoptimism: Part 1 Part 2

" } }, { "_id": "yKnj937Y3GPp9PyTR", "title": "Help: Building Awesome Personal Organization Systems", "pageUrl": "https://www.lesswrong.com/posts/yKnj937Y3GPp9PyTR/help-building-awesome-personal-organization-systems", "postedAt": "2010-10-25T01:05:18.965Z", "baseScore": 9, "voteCount": 8, "commentCount": 16, "url": null, "contents": { "documentId": "yKnj937Y3GPp9PyTR", "html": "

Related to: Rationality Power Tools

\n

I'm looking to use (or make) something that helps me achieve god-like productivity. In particular, I'm interested in any information about systems that are:

\n\n\n

I would prefer not to have a bunch of separate systems if possible. From what I've seen so far, org-mode seems the most promising.

" } }, { "_id": "yqpjd7mQ8mqr7Cyvn", "title": "*Cryoburn* by Bujold", "pageUrl": "https://www.lesswrong.com/posts/yqpjd7mQ8mqr7Cyvn/cryoburn-by-bujold", "postedAt": "2010-10-25T01:01:48.304Z", "baseScore": 6, "voteCount": 4, "commentCount": 12, "url": null, "contents": { "documentId": "yqpjd7mQ8mqr7Cyvn", "html": "

This science fiction novel is probably the best I've seen about the amount of trustworthiness needed to make large-scale cryonics work.

\n

Minor spoilers: It's set on a planet where large corporations do the freezing and revival and can vote (in politics) as representatives of their frozen clients. Things Go Wrong, and there's even an echo of the current mortgage crisis.

\n

On the whole, I think Bujold mistrusts any organization which is too large for individual loyalties to make a difference.

\n

Still, it may be worth thinking about what sort of emotional/governmental/economic system it would take to make cryonics work for a large proportion of the population and/or for long periods-- and remember that for unmodified humans, mere decades are a long time.

\n

The book is basically unsympathetic to cryonics-- sympathetic presentation of characters who choose not to be frozen, and a mention of a society which is so busy avoiding death that it's forgotten how to live. That last is just sloppy; it's not supported by the text. At least it's in favor of non-atrocious methods of rejuvenation, and she may have a point that their development will be pretty gradual. I'm not sure it's plausible that brain-transplants into clones will be developed well before any aspect of rejuvenation.

\n

It's a fair-to-middling caper novel, or maybe somewhat better than that. It's better than the two weakest novels in the series (*Cetaganda* and *Diplomatic Immunity*) and not as good as the best (probably *Memory*, *Brothers in Arms*, and *A Civil Campaign*, though I'm also awfully fond of *Komarr* and *Mirror Dance*). I think it would make sense for the most part even if you haven't read other books in the series.

\n

 

\n

 

" } }, { "_id": "zNCdxFvfbFZggTxDk", "title": "Learning the foundations of math", "pageUrl": "https://www.lesswrong.com/posts/zNCdxFvfbFZggTxDk/learning-the-foundations-of-math", "postedAt": "2010-10-24T19:29:57.455Z", "baseScore": 10, "voteCount": 7, "commentCount": 34, "url": null, "contents": { "documentId": "zNCdxFvfbFZggTxDk", "html": "

I have recently become interested in the foundations of math. I am interested in tracing the fundamentals of math in a path such as: propositional logic -> first order logic -> set theory -> measure theory. Does anyone have any resources (books, webpages, pdfs etc.) they would like to recommend?

\n

This seems like it would be a popular activity among LWers, so I thought this would be a good place to ask for advice.

\n

My criteria (feel free to post resources which you think others who stumble across this might be interested in):

\n\n

 

" } }, { "_id": "rHsuG5D6hGpTjoDD9", "title": "Not exactly the trolley problem", "pageUrl": "https://www.lesswrong.com/posts/rHsuG5D6hGpTjoDD9/not-exactly-the-trolley-problem", "postedAt": "2010-10-24T14:21:40.469Z", "baseScore": 7, "voteCount": 4, "commentCount": 7, "url": null, "contents": { "documentId": "rHsuG5D6hGpTjoDD9", "html": "

An unusual incident. Are you obligated to be on the side of the plane with the crocodile if the other passengers are overbalancing the plane? To push other passengers over to the side with the crocodile?

" } }, { "_id": "TeuytT4osZTyxwGtP", "title": "That which can be destroyed by the truth should *not* necessarily be", "pageUrl": "https://www.lesswrong.com/posts/TeuytT4osZTyxwGtP/that-which-can-be-destroyed-by-the-truth-should-not", "postedAt": "2010-10-24T10:41:55.278Z", "baseScore": 13, "voteCount": 15, "commentCount": 13, "url": null, "contents": { "documentId": "TeuytT4osZTyxwGtP", "html": "

I've been throwing some ideas around in my head, and I want to throw some of them half-formed into the open for discussion here.

\n

I want to draw attention to a particular class of decisions that sound much like beliefs.

Belief: There is no personal god that answers prayers. | Decision: I should badger my friend about atheism.
Belief: Cryonics is a rational course of action. | Decision: To convince others about cryonics, I should start by explaining that if we exist in the future at all, then we can expect it to be nicer than the present on account of benevolent super-intelligences.
Belief: There is an objective reality. | Decision: Postmodernists should be ridiculed and ignored.
Belief: 1+1=2 | Decision: If I encounter a person about to jump unless he is told \"1+1=3\", I should not acquiesce.

I've thrown ideas from a few different bags onto the table, and I've perhaps chosen unnecessarily inflammatory examples. There are many arguments to be had about these examples, but the point I want to make is the way in which questions about the best course of action can sound very much like questions about truth. Now this is dangerous, because the way in which we choose amongst decisions is radically different from the way in which we choose amongst beliefs. For a start, evaluating decisions always involves evaluating a utility function, whereas evaluating beliefs never does (unless the utility function is explicitly part of the question). By appropriate changes to one's utility function, the optimal decision in any given situation can be modified arbitrarily whilst simultaneously leaving all probability assignments to all statements fixed. This should make you immediately suspicious if you ever make a decision without consulting your utility function. There is no simple mapping from beliefs to decisions.

\n

I've noticed various friends and some people on this site making just this mistake. It's as if their love for truth and rational enquiry, which is a great thing in its own right, spills over into a conviction to act in a particular way, which itself is of questionable optimality.

\n

In recent months there have been several posts on LessWrong about the \"dark arts\", which have mostly concerned using asymmetric knowledge to manipulate people. I like these posts, and I respect the moral stance implied by their name, but I fear that \"dark arts\" is coming to be applied to the much broader case of not acting according to the simple rule that decisions are always good when they sound like true beliefs. I shouldn't need to argue explicitly that there are cases when lying or manipulating constitute good decisions; that would privilege a very particular hypothesis (namely that decisions are always good when they sound like true beliefs).

\n

This brings me all the way back to the much-loved quotation, \"that which can be destroyed by the truth should be\". Now there are several ways to interpret the quote, but at least one interpretation implies the existence of a simple isomorphism from true beliefs to good decisions. Personally, I can think of lots of things that could be destroyed by the truth but should not be.

" } }, { "_id": "G9eTxWCQcMKw4CmHz", "title": "LWers on last.fm", "pageUrl": "https://www.lesswrong.com/posts/G9eTxWCQcMKw4CmHz/lwers-on-last-fm", "postedAt": "2010-10-24T07:29:34.119Z", "baseScore": 10, "voteCount": 9, "commentCount": 0, "url": null, "contents": { "documentId": "G9eTxWCQcMKw4CmHz", "html": "

Since several people spent a while comparing musical taste on IRC, I thought it was worth creating a group for those of us on last.fm:

\n

Less Wrong

" } }, { "_id": "eSX2v6L5d9vB4JcC3", "title": "Evidential Decision Theory and Mass Mind Control", "pageUrl": "https://www.lesswrong.com/posts/eSX2v6L5d9vB4JcC3/evidential-decision-theory-and-mass-mind-control", "postedAt": "2010-10-23T23:26:42.124Z", "baseScore": -3, "voteCount": 4, "commentCount": 10, "url": null, "contents": { "documentId": "eSX2v6L5d9vB4JcC3", "html": "

Required Reading: Evidential Decision Theory

\n

Let me begin with something similar to Newcomb's Paradox. You're not the guy choosing whether or not to take both boxes. You're the guy who predicts. You're not actually prescient. You can only make an educated guess.

You watch the first person play. Let's say they pick one box. You know they're not an ordinary person. They're a lot more philosophical than normal. But that doesn't mean that the knowledge of what they choose is completely useless later on. The later people might be just as weird. Or they might be normal, but they're not completely independent of this outlier. You can use his decision to help predict theirs, if only by a little. What's more, this still works if you're reading through archives and trying to \"predict\" the decisions people have already made in earlier trials.

The decision of the player choosing the box affects whether or not the predictor will predict that later, or earlier, people will take the box. According to EDT, one should act in the way that results in the most evidence for what one wants. Since the predictor is completely rational, this means that the player choosing the box effectively changes the decisions other people make, or actually changes them, depending on your interpretation of EDT. One can even affect people's decisions in the past, provided that one doesn't know what they were.

In short, the decisions you make affect the decisions other people will make and have made. I'm not sure how much, but there have probably been 50 to 100 billion people. And that's not including the people who haven't been born yet. Even if you only change one in a thousand decisions, that's at least 50 million people.

Like I said: mass mind control. Use this power for good.

" } }, { "_id": "ASeRt6BpaZLpTfFit", "title": "Tomatoes", "pageUrl": "https://www.lesswrong.com/posts/ASeRt6BpaZLpTfFit/tomatoes", "postedAt": "2010-10-23T18:50:14.100Z", "baseScore": 1, "voteCount": 6, "commentCount": 6, "url": null, "contents": { "documentId": "ASeRt6BpaZLpTfFit", "html": "

Are tomatoes fruits or vegetables?

\n

I've been reading Eliezer's criticisms of Aristotelian classes as a model for the meaning of words.  It occurred to me that this little chestnut is a good illustration of the problem.  The best part about this example is that almost everyone has argued either on one side or the other at some point in their lives.  One would think that the English speaking world could come to some consensus on such a simple, trivial problem, but still the argument rages on.  Fruit or vegetable?

\n

In my experience, the argument is usually started by the fruit advocate (we'll call him Lemon).  \"It's the fruiting body of the plant,\" he says.  \"It contains the seeds.\"  He argues that the tomato is, by definition, a fruit.

\n

Bean has never thought of tomatoes as fruits, but when her belief is challenged by Lemon, she's not entirely sure how to respond.  She hesitates, then starts slowly -- \"All the things I call fruits are sweet,\" she says.  \"Not that tomatoes are bitter, but they're certainly not sweet enough to be fruits.\"  Bean is proposing a stricter definition -- fruits are sweet fruiting bodies of plants.  But does Bean really think that's the difference between a fruit and a vegetable?

\n

Not really.  Bean learned what these words mean by talking to other people about fruits, vegetables, and tomatoes, and through her cooking and eating.  There was never any moment when she said to herself, \"Aha!  a tomato is not a fruit!\"  This belief is a result of countless minute inferences made over the course of Bean's gustatory life.  The definition she proposes is an ad hoc defense of her belief that tomatoes are not fruits, not a real reason.

\n

Bean's real mistake was to think that she needed to defend her belief that tomatoes are not fruits.  Tomatoes are what they are regardless of how they're classified, and most people classify them as fruit or vegetable long before they learn anything about Aristotelian classes or membership tests.  The classification is made as the result of a long history of silent inferences from the way parents and peers use those words.  The first English dictionary was written in 1604, several hundred years after both \"fruit\" and \"vegetable\" had entered the English vocabulary (right about the same time as \"tomato,\" actually).  Before that, Lemon couldn't point to a definition to make his case. He could only rely on his experiences with usage just as Bean does in her rebuttal, and it's not clear why we should privilege one's experience over the other.  The meaning of the word is prior to the definition.

\n

There is a simple solution to the tomato problem, by the way.  \"Vegetable\" is any edible plant matter and \"fruit\" is a subclass of \"vegetable.\"  All fruits are vegetables, and so tomatoes are both.  In the same way, wheat is both a grain and a vegetable.  The distinction is made only for convenience -- consider the fact that before electric guitars were invented, there were no \"acoustic guitars\" -- only \"guitars.\"  It's not false to describe an acoustic guitar as a guitar, merely imprecise.  This points to what I think is a common phenomenon in spoken language which leads to errors in reasoning: a distinction is made between a subclass B and superclass A with the understanding that x in B -> x in A; later, the distinction is maintained but the understanding of the interconnection is lost so that A and B are considered distinct categories -- x in A xor x in B.  Can anyone think of any other examples of this kind of error?
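One way to see the error in miniature, using sets (my own toy example; the category memberships are just for illustration):

```python
# "Fruit" starts out as a subclass of "vegetable" (all edible plant matter),
# so a tomato can be both; the mistaken later reading treats the two
# categories as mutually exclusive.
vegetables = {"tomato", "carrot", "apple", "wheat"}
fruits = {"tomato", "apple"}

print(fruits <= vegetables)                              # True: every fruit is a vegetable
print("tomato" in fruits and "tomato" in vegetables)     # True: it is both
print(("tomato" in fruits) ^ ("tomato" in vegetables))   # False: the xor reading breaks down
```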

" } }, { "_id": "Ju2qFSS6qyr2LzjMQ", "title": "Does anyone else find ROT13 spoilers as annoying as I do?", "pageUrl": "https://www.lesswrong.com/posts/Ju2qFSS6qyr2LzjMQ/does-anyone-else-find-rot13-spoilers-as-annoying-as-i-do", "postedAt": "2010-10-23T17:15:04.957Z", "baseScore": 12, "voteCount": 13, "commentCount": 16, "url": null, "contents": { "documentId": "Ju2qFSS6qyr2LzjMQ", "html": "

Every time I come across one, it annoys me. I have to copy the text, open a new tab to the ROT13 site, paste the text, and click the translate button.
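For what it's worth, the round trip to the website can be skipped; this is just a local alternative I use, not anything built into the site:

```python
# Python's standard library ships a rot_13 text codec, so decoding (or
# encoding, since ROT13 is its own inverse) is a one-liner.
import codecs

print(codecs.encode("Gur gjvfg vf...", "rot_13"))   # -> "The twist is..."
```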

\n

Compare this to something like a collapsible spoiler button-box where you press a button and it expands a box with the appropriate text underneath it. Even making a [spoiler][/spoiler] tag that gave a black background and equally black text would be better than the current ROT13 solution.

\n

Was there actually a reason for doing things this way? If so, why not just include the ROT13 translation in the javascript that'd open and close the textbox? Comments? Criticism?

" } }, { "_id": "ubgBqRwHG9u7idAig", "title": "Three kinds of political similarity", "pageUrl": "https://www.lesswrong.com/posts/ubgBqRwHG9u7idAig/three-kinds-of-political-similarity", "postedAt": "2010-10-23T16:01:59.956Z", "baseScore": 13, "voteCount": 8, "commentCount": 30, "url": null, "contents": { "documentId": "ubgBqRwHG9u7idAig", "html": "

Suppose you wanted to describe the political landscape efficiently. That is, you want to describe the range of existing political positions, and develop some way of evaluating which political positions are \"similar.\" (I'm American, so I'm going to think in terms of the US.)  There are three ways you could go about this and they are radically different.

\n

 

\n

1.  Politicians' voting.

\n

Because we have a two-party system, politicians (and some other political players, like Supreme Court justices) make decisions on a single, left-right axis.  You can explain almost all the variation with one dimension: \"More Republican\" or \"More Democrat.\"  Generally, if a politician supports one \"Republican\" position, he'll support most of the other \"Republican\" positions.  Studies have been done here (too lazy to look them up right now) supporting this phenomenon at all levels of politics.

\n

2.  Individuals' opinions on poll questions.

\n

Here, we do something different.  We ask poll questions about a variety of issues and see what people think.  Suddenly, it's not one-dimensional any more.  There are people who like low taxes and also like gay marriage.  There are people who are anti-abortion and anti-war.  The fact that individual opinions don't line up along a left-right axis is what's behind ideas like the Nolan Chart -- although it doesn't imply that the \"real\" way to explain opinions is necessarily two-dimensional.  How many dimensions actually do efficiently explain all the different kinds of opinions?  I don't know, and that's a statistical puzzle in itself, although I do know of techniques (like multi-dimensional scaling) that purport to estimate the \"right\" number of dimensions.

\n

One thing we do know is that if we project all the opinions onto a set of dimensions that we like -- for example, the old standby of liberal/conservative and authoritarian/libertarian -- we can start to measure \"political diversity.\"  People who describe themselves as Democrats, for example, span a much wider range of views than people who describe themselves as \"Republicans.\"  Implicitly, we're putting a Euclidean distance on a high-dimensional space, projected onto a few dimensions.
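
\n

As a toy illustration of that kind of projected distance (the opinion vectors and the choice of dimensions below are made up, not taken from any survey):

\n

import math

# Opinions on several issues, coded from -1 (strongly oppose) to +1 (strongly support).
alice = [0.8, -0.2,  0.5, 0.9]
bob   = [0.7,  0.6, -0.4, 0.8]

def projected_distance(a, b, dims=(0, 2)):
    # Project onto two chosen dimensions (say, economic and social),
    # then take the ordinary Euclidean distance in that plane.
    return math.sqrt(sum((a[i] - b[i]) ** 2 for i in dims))

print(projected_distance(alice, bob))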

\n

3.  Logical and causal implications of policies.

\n

Just as politicians' voting doesn't capture what regular people think, it could also be argued that people's opinions on poll questions also don't capture something important about politics.  Namely, some policies logically imply one another.  If you support raising spending on the war in Afghanistan for 2011, you must also support continuing the war in Afghanistan in 2011.  If you support spending more on X, you must either support less spending on some other thing Y, or support raising taxes, or support running a higher deficit.  Simply asking people about their opinions on a variety of questions doesn't capture this structure.

\n

Additionally, some policies have consequences for other policies.  You cannot simultaneously be in favor of reducing carbon emissions, and be in favor of a set of policies that, on net, increase carbon emissions.  Or, you can, but you'd have to be either ignorant or confused.  

\n

And some philosophical claims imply policies.  If you believe \"Congress should not do anything except the enumerated powers in the Constitution\" then that implies you have to oppose all the things Congress does that are not enumerated powers.  

\n

In part 2, every policy position was just a coordinate in a high-dimensional vector space.  Now, in part 3, it's suddenly not so simple.  You have a directed graph of implications.  It's very hard to get a handle on this graph. But just as this graph is more intractable, it's also more informative.  In the part 2 model, a person could easily support a variety of inconsistent positions, and, in fact, people do.  In part 3, some of your policy choices are determined by your other policy choices -- not just by correlation but by necessity.

\n

What can we say about one policy node that implies a lot about other policy nodes?  Well, it's very influential.  If you could examine a single person's directed graph of the policy universe, with nodes colored red for \"oppose\" and green for \"support,\" then the choke-points, those nodes that imply the choice of color for lots of other nodes, are core beliefs.  A pacifist's graph, for example, is heavily influenced by the \"War is wrong\" node, because that logically implies his position on all specific wars.

\n

Obviously this doesn't have to be binary; you could have degrees of support and opposition, that imply updating the degrees of daughter nodes.  Increasing your support of \"End all wars\" should increase your support of all nodes \"End war X,\" but increasing your support of \"Go to war to defend allies\" should decrease your support of \"End war Y\" if an ally was attacked in war Y.

\n

This gives us a different way of defining which policies are similar to each other.  Policies that are close \"cousins\" on a tree are similar; \"End the war in Iraq,\" and \"End the Korean war\" are close because they're both implied by \"End all wars\", but \"End the war in Iraq\" and \"End the war in Afghanistan\" are even closer because they're both implied by \"End the War on Terror\" (which is implied by \"End all wars.\")  
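
\n

One rough way to operationalize that notion of closeness, using the example policies above (a sketch only; the tree and the scoring rule are illustrative):

\n

# parent[x] = the more general policy whose acceptance implies x
parent = {
    'End the War on Terror': 'End all wars',
    'End the Korean war': 'End all wars',
    'End the war in Iraq': 'End the War on Terror',
    'End the war in Afghanistan': 'End the War on Terror',
}

def chain(policy):
    # The policy itself plus everything further up the tree that implies it.
    out = [policy]
    while policy in parent:
        policy = parent[policy]
        out.append(policy)
    return out

def steps_to_common_ancestor(a, b):
    # Fewer steps up to a shared implying policy = closer cousins.
    common = set(chain(a)) & set(chain(b))
    return min(chain(a).index(c) + chain(b).index(c) for c in common)

print(steps_to_common_ancestor('End the war in Iraq', 'End the war in Afghanistan'))  # 2
print(steps_to_common_ancestor('End the war in Iraq', 'End the Korean war'))          # 3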

\n

 

\n

Three kinds of similarity are not the same

\n

Points that are close on the left-right spectrum of politicians' decisions are not necessarily all that close in the higher-dimensional space of people's opinions on poll questions.  I also hypothesize that points that are closely correlated in people's poll answers are not necessarily close in the part 3 sense of being close cousins on a tree of implications.  That is, I would guess that people may tend to hold opinion B whenever they hold opinion A, even when A and B actually have nothing to do with each other. 

\n

How different is your debating partner?

\n

I think part 3 is a good model for having political conversations -- better than part 1 or part 2.  How \"different\" a person's politics feel from your own, once you've had a discussion with him, is not so much a matter of which party he votes for, or what his opinions are on a laundry list of issues.  No: a person feels really \"different\" when you and he have opposite opinions on one of your influential nodes, something really far back on the tree.  If one of your influential nodes is \"Democracy with universal suffrage is the best form of government\" and you're talking to someone who says, \"Hi, I actually support monarchy,\" then that person is really different from you.  A person who agrees with you up until the last branch on the tree is pretty similar to you, no matter how vehemently you disagree about that last branch, because you both can draw upon the same assumptions from farther up the tree.

\n

The personal lesson is that it's useful to be clear with yourself about levels of difference.  People can spend a lot of time arguing or even hating people who are very similar, and completely forget that there's a higher level on the tree.  My politics are pretty different from Sarah Palin's; but I'd be Sarah Palin's ally against Louis XIV, I'd be Louis XIV's ally against Aurangzeb, and I'd be Aurangzeb's ally against a giant space squid intent on annihilating Earth -- Aurangzeb may have wanted to get rid of all the Hindus in India, but we've got more in common with each other than with a planeticidal space squid.  Squids aside, it's rather ridiculous when people identify their \"greatest enemy\" or \"greatest threat\" as a person with only slightly different political views. 

" } }, { "_id": "h29tYzT3HuhWdkQ8m", "title": "Green consumers more likely to steal", "pageUrl": "https://www.lesswrong.com/posts/h29tYzT3HuhWdkQ8m/green-consumers-more-likely-to-steal", "postedAt": "2010-10-23T11:42:46.099Z", "baseScore": 9, "voteCount": 9, "commentCount": 3, "url": null, "contents": { "documentId": "h29tYzT3HuhWdkQ8m", "html": "

Here's an article I found today, though it is a few months old; it seems to illustrate an interesting cognitive bias that I don't think I've seen mentioned here before, one that might explain a good bit of the hypocrisy people tend to display.

\n
\n

When Al Gore was caught running up huge energy bills at home at the same time as lecturing on the need to save electricity, it turns out that he was only reverting to \"green\" type.

\n

According to a study, when people feel they have been morally virtuous by saving the planet through their purchases of organic baby food, for example, it leads to the \"licensing [of] selfish and morally questionable behaviour\", otherwise known as \"moral balancing\" or \"compensatory ethics\".

\n

Do Green Products Make Us Better People is published in the latest edition of the journal Psychological Science. Its authors, Canadian psychologists Nina Mazar and Chen-Bo Zhong, argue that people who wear what they call the \"halo of green consumerism\" are less likely to be kind to others, and more likely to cheat and steal. \"Virtuous acts can license subsequent asocial and unethical behaviours,\" they write. [See footnote].

\n

The pair found that those in their study who bought green products appeared less willing to share with others a set amount of money than those who bought conventional products. When the green consumers were given the chance to boost their money by cheating on a computer game and then given the opportunity to lie about it – in other words, steal – they did, while the conventional consumers did not. Later, in an honour system in which participants were asked to take money from an envelope to pay themselves their spoils, the greens were six times more likely to steal than the conventionals.

\n

Mazar and Zhong said their study showed that just as exposure to pictures of exclusive restaurants can improve table manners but may not lead to an overall improvement in behaviour, \"green products do not necessarily make for better people\". They added that one motivation for carrying out the study was that, despite the \"stream of research focusing on identifying the 'green consumer'\", there was a lack of understanding into \"how green consumption fits into people's global sense of responsibility and morality and [how it] affects behaviours outside the consumption domain\".

\n

The pair said their findings surprised them, having thought that just as \"exposure to the Apple logo increased creativity\", according to a recent study, \"given that green products are manifestations of high ethical standards and humanitarian considerations, mere exposure\" to them would \"activate norms of social responsibility and ethical conduct\".

\n

Dieter Frey, a social psychologist at the University of Munich, said the findings fitted patterns of human behaviour. \"At the moment in which you have proven your credentials in a particular area, you tend to allow yourself to stray elsewhere,\" he said.

\n

• This footnote was added on 31 March 2010: The study findings above, and the methods used, are challenged by researchers associated with the social psychology department at the London School of Economics, the Institute of Ecological Economy Research in Berlin, and the Institute for Perspective Technological Studies in Seville. Their analysis can be found here: lrcg.co.uk

\n
\n

Here is a link to the actual article. Thoughts? It seems like a trap that it'd be easy for the folks here to fall into, if they start self-congratulating about being Rational.

" } }, { "_id": "h22n4nZQd9J2MEZxq", "title": "The Problem With Trolley Problems", "pageUrl": "https://www.lesswrong.com/posts/h22n4nZQd9J2MEZxq/the-problem-with-trolley-problems", "postedAt": "2010-10-23T05:14:07.308Z", "baseScore": 17, "voteCount": 67, "commentCount": 113, "url": null, "contents": { "documentId": "h22n4nZQd9J2MEZxq", "html": "

A trolley problem is something that's used increasingly often in philosophy to get at people's beliefs and debate them. Here's an example from Wikipedia:

\n
\n

As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by dropping a heavy weight in front of it. As it happens, there is a very fat man next to you - your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?

\n
\n

I believe trolley problems are fundamentally flawed - at best a waste of time, and at worst an encouragement to really sloppy thinking. Here are four reasons why:

\n

1. It assumes perfect information about outcomes.

\n

2. It ignores the global secondary effects that local choices create.

\n

3. It ignores real human nature - which would be to freeze and be indecisive.

\n

4. It usually gives you two choices and no alternatives, and in real life, there are always alternatives.

\n

First, trolley problems contain perfect information about outcomes - which is rarely the case in real life. In real life, you're making choices based on imperfect information. You don't know what would happen for sure as a result of your actions. 

\n

Second, everything creates secondary effects. If putting people involuntarily in harm's way to save others was an acceptable result, suddenly we'd all have to be really careful in any emergency. Imagine living in a world where anyone would be comfortable ending your life to save other people nearby - you'd have to not only be constantly checking your surroundings, but also constantly on guard against do-gooders willing to push you onto the tracks.

\n

Third, it ignores human nature. Human nature is to freeze up when bad things happen unless you're explicitly trained to react. In real life, most people would freeze or panic instead of react. In order to get over that, first responders, soldiers, medics, police, firefighters go through training. That training includes dealing with questionable circumstances and how to evaluate them, so you don't have a society where your trained personnel act randomly in emergencies. 

\n

Fourth, it gives you two choices and no alternatives. I firmly reject this - I think there's almost always alternative ways to get there from here if you open your mind to it. Once you start thinking that your only choice is to push the one guy in front of the trolley or to stand there doing nothing, your mind is closed to all other alternatives.

\n

At best, this means trolley problems are just a harmless waste of time. But I think they're not just a harmless waste of time.

\n

I think \"trolley problem\" type thinking is commonly used in real life to advocate and justify bad policy.

\n

Here's how it goes:

\n

Activist says, \"We've got to take from this rich fat cat and give it to these poor people, or the poor people will starve and die. If you take the money, the fat cat will buy less cars and yachts, and the poor people will become much more successful and happy.\"

\n

You'll see all the flaws I described above in that statement.

\n

First, it assumes perfect information. The activist says that taking more money will lead to less yachts and cars - useless consumption. He doesn't consider that people might first cut their charity budget, or their investment budget, or something else. Higher tax jurisdictions, like Northern Europe, have very low levels of charitable giving. They also have relatively low levels of capital investment.

\n

Second, it ignores secondary effects. The activist assumes he can milk the cow and the cow won't mind. In reality, people start spending their time on minimizing their tax burden instead of doing productive work. It ripples through society.

\n

Third, it ignores human nature. Saying \"the fat cat won't miss it\" is false - everyone is loss averse. 

\n

Fourth, the biggest problem of all, it gives two choices and no alternatives. \"Tax the fat cat, or the poor people starve\" - is there no other way to encourage charitable giving? Could we give charity visas where anyone giving $500,000 in philanthropy to the poor can get fast-track residency into the USA? Could we give larger tax breaks to people who choose to take care of distant relatives as a dependent? Are there other ways? Once the debate gets constrained to, \"We must do this, or starvation is the result\" you've got problems.

\n

And I think that these poor quality thoughts on policy are a direct descendant of trolley problems. It's the same line of thinking - perfect information, ignores secondary effects, ignores human nature, and gives two choices while leaving no other alternatives. That's not real life. That's sloppy thinking.

\n

Edit: This is being very poorly received so far... well, it was quickly voted up to +3, and now it's down to -2, which means controversial but generally negative reception.

\n

Do people disagree? I understand trolley problems are an established part of critical thinking on philosophy, however, I think they're flawed and I wanted to highlight those flaws.

\n

The best counterargument I see right now is that the value of a trolley problem is it reduces everything to just the moral decision. That's an interesting point, however, I think you could come up with better hypotheticals that don't suffer from this flaw. Or perhaps the particular politics example isn't popular? You can substitute in similar arguments for prohibition of alcohol, and perhaps I ought to have done that to make it less controversial. In any event, I welcome discussion and disagreement.

\n

Questions for you: I think that trolley problems contain perfect information about outcomes in advance of them happening, ignore secondary effects, ignore human nature, and give artificially false constraints. Do you agree with that part? I think that's pretty much fact. Now, I think that's bad. Agree/disagree there? Okay, finally, I think this kind of thinking seeps over into politics, and it's likewise bad there. Agree/disagree? I know this is a bit of controversial argument since trolley problems are common in philosophy, but I'd encourage you to have a think on what I wrote and agree, disagree, and otherwise discuss.

" } }, { "_id": "ceyNA9E3yJe9J3CLw", "title": "Is there a \"percentage fallacy\"?", "pageUrl": "https://www.lesswrong.com/posts/ceyNA9E3yJe9J3CLw/is-there-a-percentage-fallacy", "postedAt": "2010-10-23T04:45:57.854Z", "baseScore": 17, "voteCount": 12, "commentCount": 29, "url": null, "contents": { "documentId": "ceyNA9E3yJe9J3CLw", "html": "

A couple years ago, Aaron Swartz blogged about what he called the \"percentage fallacy\":

\n
\n

There’s one bit of irrationality that seems like it ought to be in behavioral economics introduction but mysteriously isn’t. For lack of a better term, let’s call it the percentage fallacy. The idea is simple:

\n
\n

One day I find I need a blender. I see a particularly nice one at the store for $40, so I purchase it and head home. But on the way home, I see the exact same blender on sale at a different store for $20. Now I feel ripped off, so I drive back to the first store, return the blender, drive back to the second store, and buy it for $20.

\n

The next day I find I need a laptop. I see a particularly nice one at the store for $2500, so I purchase it and head home. But on the way home, I see the exact same laptop for $2480. “Pff, well, it’s only $20,” I say, and continue home with the original laptop.

\n
\n

I’m sure all of you have done something similar — maybe the issue wasn’t having to return something, but spending more time looking for a cheaper model, or fiddling with coupons and rebates, or buying something of inferior quality. But the basic point is consistent: we’ll do things to save 50% that we’d never do to save 1%.

\n
\n

He recently followed up with a speculation that this may explain some irrational behaviour normally attributed to hyperbolic discounting:

\n
\n

In a famous experiment, some people are asked to choose between $100 today or $120 tomorrow. Many choose the first. Meanwhile, some people are asked to choose between $100 sixty days from now or $120 sixty-one days from now. Almost everyone chooses the latter. The puzzle is this: why are people willing to sacrifice $20 to avoid waiting a day right now but not in the future?

\n

The standard explanation is hyperbolic discounting: humans tend to weigh immediate effects much more strongly than distant ones. But I think the actual psychological effect at work here is just the percentage fallacy. If I ask for the money now, I may have to wait 60 seconds. But if I get it tomorrow I have to wait 143900% more. By contrast, waiting 61 days is only 1.6% worse than waiting 60 days. Why not wait an extra 2% when you get 16% more money for it?

\n

Has anyone done a test confirming the percentage fallacy? A good test would be to show people treat the $100 vs. $120 tradeoff as equivalent to the $1000 to $1200 tradeoff.

\n
\n

Is this a real thing? Is there any such research? Is there existing evidence that does especially support the usual hyperbolic discounting explanation over this?
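
\n

For what it's worth, the arithmetic in the quoted passage does roughly check out; a quick sketch (the 16% figure is presumably the $20 taken as a share of the larger $120):

\n

minute = 60                # seconds
day = 24 * 60 * 60         # seconds

extra_wait_now = (day - minute) / minute * 100    # ~143900% more waiting
extra_wait_later = (61 - 60) / 60 * 100           # ~1.7% more waiting
extra_money = 20 / 120 * 100                      # ~16.7% of the larger amount

print(round(extra_wait_now), round(extra_wait_later, 1), round(extra_money, 1))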

" } }, { "_id": "BW9A6hiwByJH3uEcx", "title": "Interesting talk on Bayesians and frequentists", "pageUrl": "https://www.lesswrong.com/posts/BW9A6hiwByJH3uEcx/interesting-talk-on-bayesians-and-frequentists", "postedAt": "2010-10-23T04:10:27.684Z", "baseScore": 11, "voteCount": 12, "commentCount": 19, "url": null, "contents": { "documentId": "BW9A6hiwByJH3uEcx", "html": "

I recently started watching an interesting lecture by Michael Jordan on Bayesians and frequentists; he's a pretty successful machine learning expert who takes both views in his work. You can watch it here: http://videolectures.net/mlss09uk_jordan_bfway/. I found it interesting because his portrayal of frequentism is quite different from the standard portrayal on lesswrong. It isn't about whether probabilities are frequencies or beliefs, it's about trying to get a good model versus trying to get rigorous guarantees of performance in a class of scenarios. So I wonder why the meme on lesswrong is that frequentists think probabilities are frequencies; in practice it seems to be more about how you approach a given problem. In fact, frequentists seem more \"rational\", as they're willing to use any tool that solves a problem instead of constraining themselves to methods that obey Bayes' rule.

\n

In practice, it seems that while Bayes is the main tool for epistemic rationality, instrumental rationality should oftentimes be frequentist at the top level (with epistemic rationality, guided by Bayes, in turn guiding the specific application of a frequentist algorithm).

\n

For instance, in many cases I should be willing to, once I have a sufficiently constrained search space, try different things until one of them works, without worrying about understanding why the specific thing I did worked (think shooting a basketball, or riffle shuffling a deck of cards). In practice, it seems like epistemic rationality is important for constraining a search space, and after that some sort of online learning algorithm can be applied to find the optimal action from within that search space. Of course, this isn't true when you only get one chance to do something, or extreme precision is required, but this is not often true in everyday life.
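
\n

As one concrete (and heavily simplified) example of that kind of top-level loop, here is an epsilon-greedy trial-and-error learner over a small, already-constrained set of actions; the action names and payoffs are invented for illustration:

\n

import random

actions = ['grip A', 'grip B', 'grip C']    # a search space already narrowed by prior knowledge
estimates = {a: 0.0 for a in actions}       # running average payoff for each action
counts = {a: 0 for a in actions}

def observed_payoff(action):
    # Stand-in for the real world; the learner never looks inside this.
    return random.gauss({'grip A': 0.2, 'grip B': 0.6, 'grip C': 0.4}[action], 0.1)

for t in range(1000):
    if random.random() < 0.1:                          # occasionally explore
        a = random.choice(actions)
    else:                                              # otherwise exploit the best estimate
        a = max(actions, key=lambda x: estimates[x])
    r = observed_payoff(a)
    counts[a] += 1
    estimates[a] += (r - estimates[a]) / counts[a]     # incremental mean update

print(max(actions, key=lambda x: estimates[x]))        # almost always 'grip B'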

\n

The main point of this thread is to raise awareness of the actual distinction between Bayesians and frequentists, and why it's actually reasonable to be both, since it seems like lesswrong is strongly Bayesian and there isn't even a good discussion of the fact that there are other methods out there.

" } }, { "_id": "QaR6pE2pcjoDm7MmG", "title": "Does inclusive fitness theory miss part of the picture?", "pageUrl": "https://www.lesswrong.com/posts/QaR6pE2pcjoDm7MmG/does-inclusive-fitness-theory-miss-part-of-the-picture", "postedAt": "2010-10-22T21:49:12.297Z", "baseScore": 8, "voteCount": 9, "commentCount": 15, "url": null, "contents": { "documentId": "QaR6pE2pcjoDm7MmG", "html": "

I originally titled this post \"The Less Wrong wiki is wrong about group selection\", because it seemed wildly overconfident about its assertion that group selection is nonsense. The wiki entry on \"group selection\" currently reads:

\n
\n

People who are unfamiliar with evolutionary theory sometimes propose that a feature of the organism is there for the good of the group - for example, that human religion is an adaptation to make human groups more cohesive, since religious groups outfight nonreligious groups.

\n

Postulating group selection is guaranteed to make professional evolutionary biologists roll up their eyes and sigh.

\n
\n

However, it appears that the real problem is not that the wiki is overconfident (that's a problem, but it's only a symptom of the next problem) but that the traditional dogma on the viability of group selection is wrong, or at least overconfident. I make this assertion after stumbling across a paper by Martin Nowak, Corina Tarnita, and E. O. Wilson titled \"The evolution of eusociality\", which appeared in Nature in August of this year. I found a PDF of this paper through Google scholar, click here. A blog entry discussing the paper can be found here (bias alert: it is written by a postdoc working in Martin Nowak's Evolutionary Dynamics program at Harvard).

\n

Here's some quotes (bolding is mine):

\n
\n

It has further turned out that selection forces exist in groups that diminish the advantage of close collateral kinship. They include the favouring of raised genetic variability by colony-level selection in the ants Pogonomyrmex occidentalis and Acromyrmex echinatior—due, at least in the latter, to disease resistance. The contribution of genetic diversity to disease resistance at the colony level has moreover been established definitively in honeybees. Countervailing forces also include variability in predisposition to worker sub-castes in Pogonomyrmex badius, which may sharpen division of labour and improve colony fitness—although that hypothesis is yet to be tested. Further, an increase in stability of nest temperature with genetic diversity has been found within nests of honeybees and Formica ants. Other selection forces working against the binding role of close pedigree kinship are the disruptive impact of nepotism within colonies, and the overall negative effects associated with inbreeding. Most of these countervailing forces act through group selection or, for eusocial insects in particular, through between-colony selection.

\n
\n
\n

Yet, considering its position for four decades as the dominant paradigm in the theoretical study of eusociality, the production of inclusive fitness theory must be considered meagre. During the same period, in contrast, empirical research on eusocial organisms has flourished, revealing the rich details of caste, communication, colony life cycles, and other phenomena at both the individual- and colony-selection levels. In some cases social behaviour has been causally linked through all the levels of biological organization from molecule to ecosystem. Almost none of this progress has been stimulated or advanced by inclusive fitness theory, which has evolved into an abstract enterprise largely on its own

\n
\n

...

\n
\n

The question arises: if we have a theory that works for all cases (standard natural selection theory) and a theory that works only for a small subset of cases (inclusive fitness theory), and if for this subset the two theories lead to identical conditions, then why not stay with the general theory? The question is pressing, because inclusive fitness theory is provably correct only for a small (non-generic) subset of evolutionary models, but the intuition it provides is mistakenly embraced as generally correct.

\n
\n

Check out the paper for more details. Also look at the Supplementary Information if you have access to it. They perform an evolutionary game theoretic analysis, which I am still reading.

\n

Apparently this theory is not that new. In this 2007 paper by David Sloan Wilson and E. O. Wilson, they argue (I'm just pasting the abstract):

\n
\n

The current foundation of sociobiology is based upon the rejection of group selection in the 1960s and the acceptance thereafter of alternative theories to explain the evolution of cooperative and altruistic behaviors. These events need to be reconsidered in the light of subsequent research. Group selection has become both theoretically plausible and empirically well supported. Moreover, the so-called alternative theories include the logic of multilevel selection within their own frameworks. We review the history and conceptual basis of sociobiology to show why a new consensus regarding group selection is needed and how multilevel selection theory can provide a more solid foundation for sociobiology in the future.

\n
\n

From the other camp, this seems to be a fairly highly-cited paper from 2008. They concluded:

\n
\n

(a) the arguments about group selection are only continued by a limited number of theoreticians, on the basis of simplified models that can be difficult to apply to real organisms (see Error 3); (b) theoretical models which make testable predictions tend to be made with kin selection theory (Tables 1 and 2); (c) empirical biologists interested in social evolution measure the kin selection coefficient of relatedness rather than the corresponding group selection parameters (Queller & Goodnight, 1989). It is best to think of group selection as a potentially useful, albeit informal, way of conceptualizing some issues, rather than a general evolutionary approach in its own right.

\n
\n

I know (as of yet) very little biology, so I leave the conclusion for readers to discuss. Does anyone have detailed knowledge of the issues here?

" } }, { "_id": "HCQgQLZGhHynAQZqb", "title": "Church: a language for probabilistic modeling", "pageUrl": "https://www.lesswrong.com/posts/HCQgQLZGhHynAQZqb/church-a-language-for-probabilistic-modeling", "postedAt": "2010-10-22T11:59:45.211Z", "baseScore": 25, "voteCount": 19, "commentCount": 22, "url": null, "contents": { "documentId": "HCQgQLZGhHynAQZqb", "html": "

I've been reading about Church, which is a new computer language, developed in a prize-winning MIT doctoral thesis, that's designed to make computers better at modeling probability distributions.  

\n

The idea is that simulations are cheap to run (given a probability distribution, generate an example outcome) but inference is expensive (from a set of data, which probability distribution most likely generated it?).  This is essentially a Bayesian task, and it's what we want to do to understand, say, which borrowers are likeliest to default, or where terrorists are likely to strike again.  It's also the necessary building block of AI.  The problem is that the space of probability distributions that can explain the data is very big.  Infinitely big in reality, of course, but still exponentially big after discretizing.  Also, while the computational cost of evaluating f(g(x)) is just the cost of f plus the cost of g, the cost of composing two conditional probability distributions B|A and C|B is

\n

P(C|A) = Σ_B P(B|A) P(C|B)

\n

whose computational time will grow exponentially rather than linearly.

\n

Church is an attempt to solve this problem.  (Apparently it's a practical attempt, because the founders have already started a company, Navia Systems, using this structure to build probabilistic computers.)  The idea is, instead of describing a probability distribution as a deterministic procedure that evaluates the probabilities of different events, represent them in terms of probabilistic procedures for generating samples from them.  That is, a random variable is actually a random variable.  This means that repeating a computation will not give the same result each time, because evaluating a random variable doesn't give the same result each time.  There's a computational advantage here because it's possible to compose random variables without summing over all possible values.
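
\n

A crude Python sketch of that idea (this is not Church code, just an illustration of composing distributions by sampling instead of summing):

\n

import random

def sample_b_given_a(a):
    # Stand-in conditional distribution B|A
    return a + random.gauss(0, 1)

def sample_c_given_b(b):
    # Stand-in conditional distribution C|B
    return 2 * b + random.gauss(0, 1)

def sample_c_given_a(a):
    # Composition by forward sampling: no sum over every possible value of B
    return sample_c_given_b(sample_b_given_a(a))

samples = [sample_c_given_a(1.0) for _ in range(10000)]
print(sum(samples) / len(samples))    # Monte Carlo estimate of E[C | A = 1], about 2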

\n

Church is based on Lisp. At the lowest level, it replaces Boolean gates with stochastic digital circuits.  These circuits are wired together to form Markov chains (the probabilistic counterpart of finite state machines.)  At the top, it's possible to define probabilistic procedures for generating samples from recursively defined distributions.

\n

 

\n

When I saw this paper, I thought it might make an interesting top-level post -- unfortunately, I'm not the one to do it.  I don't know enough computer science; it's been ages since I've touched Lisp, and lambda calculus is new to me.  So this is an open call for volunteers -- any brave Bayesians want to blog about a brand new computer language?

\n

(As an aside, I think we need more technical posts so we don't spend all our time hissing at each other; how would people feel about seeing summaries of recent research in the neuro/cognitive science/AI-ish cluster?)

" } }, { "_id": "jxDY2EfNp9pHbyb8g", "title": "Does it matter if you don't remember?", "pageUrl": "https://www.lesswrong.com/posts/jxDY2EfNp9pHbyb8g/does-it-matter-if-you-don-t-remember", "postedAt": "2010-10-22T11:53:18.401Z", "baseScore": 10, "voteCount": 8, "commentCount": 35, "url": null, "contents": { "documentId": "jxDY2EfNp9pHbyb8g", "html": "

Does it matter if you experienced pain in the past, but you don't remember? (And there are no other side-effects, etc etc). At one point in Accelerando, Charles Stross describes children who routinely decapitate and disembowel each other, only to be repaired (bodily and memory-wise) by the friendly local AI. This struck me as awful, but I'm suspicious of my intuition. Note that here I'm assuming pain is a terminal \"bad\" factor in your utility function. You can substitute for \"pain\" whatever you think is bad. I think there are at least two questions here:

\n
    \n
  1. Is it bad for someone to be in pain if they will not remember it in the future? I think yes, because by assumption pain is a terminal \"bad\" node. Being relieved of future painful memories is good, but nowhere near good enough to fully compensate.
  2. \n
  3. Is it bad to have experienced pain in the past, if you don't remember it? Or, can your utility function coherently include facts about the past, even if they have no causal connection to the present? My intuition here says yes, but I'd be interested in others' thoughts. To make this concrete, imagine that you have a choice between medium pain that you will remember, or extreme pain followed by memory erasure.
  4. \n
\n

 

" } }, { "_id": "ig4QYxoQHEDcRy8S5", "title": "How are critical thinking skills acquired? Five perspectives", "pageUrl": "https://www.lesswrong.com/posts/ig4QYxoQHEDcRy8S5/how-are-critical-thinking-skills-acquired-five-perspectives", "postedAt": "2010-10-22T02:29:18.779Z", "baseScore": 13, "voteCount": 12, "commentCount": 5, "url": null, "contents": { "documentId": "ig4QYxoQHEDcRy8S5", "html": "

Link to source: http://timvangelder.com/2010/10/20/how-are-critical-thinking-skills-acquired-five-perspectives/
Previous LW discussion of argument mapping: Argument Maps Improve Critical Thinking; Debate tools: an experience report

\n

In \"How are critical thinking skills acquired? Five perspectives\", Tim van Gelder discusses acquisition of critical thinking skills, suggesting several theories of skill acquisition that don't work, and one with which he and hundreds of his students have had significant success.

\n
\n

In our work in the Reason Project at the University of Melbourne we refined the Practice perspective into what we called the Quality (or Deliberate) Practice Hypothesis.   This was based on the foundational work of Ericsson and others who have shown that skill acquisition in general depends on extensive quality practice.  We conjectured that this would also be true of critical thinking; i.e. critical thinking skills would be (best) acquired by doing lots and lots of good-quality practice on a wide range of real (or realistic) critical thinking problems.   To improve the quality of practice we developed a training program based around the use of argument mapping, resulting in what has been called the LAMP (Lots of Argument Mapping) approach.   In a series of rigorous (or rather, as-rigorous-as-possible-under-the-circumstances) studies involving pre-, post- and follow-up testing using a variety of tests, and setting our results in the context of a meta-analysis of hundreds of other studies of critical thinking gains, we were able to establish that critical thinking skills gains could be dramatically accelerated, with students reliably improving 7-8 times faster, over one semester, than they would otherwise have done just as university students.   (For some of the detail on the Quality Practice hypothesis and our studies, see this paper, and this chapter.)

\n
\n

LW has been introduced to argument mapping before

" } }, { "_id": "DkjAjtgtjqPpQrWjd", "title": "How do autistic people learn how to read people's emotions?", "pageUrl": "https://www.lesswrong.com/posts/DkjAjtgtjqPpQrWjd/how-do-autistic-people-learn-how-to-read-people-s-emotions", "postedAt": "2010-10-20T13:57:05.340Z", "baseScore": 9, "voteCount": 7, "commentCount": 37, "url": null, "contents": { "documentId": "DkjAjtgtjqPpQrWjd", "html": "

From my understanding, people on the autism spectrum have difficulty reading people's emotions and general social cues. I'm curious how these people develop these skills and what one can do to improve them. I ask this as a matter of personal interest; while I am somewhat neurotypical, I feel this is an area where I am very lacking.

\n

(Sidenote: would this be considered an appropriate use of the discussion section?)

" } }, { "_id": "med7PYExueRxq5uLt", "title": "Help: Is there a quick and dirty way to explain quantum immortality?", "pageUrl": "https://www.lesswrong.com/posts/med7PYExueRxq5uLt/help-is-there-a-quick-and-dirty-way-to-explain-quantum", "postedAt": "2010-10-20T03:00:34.608Z", "baseScore": 4, "voteCount": 3, "commentCount": 48, "url": null, "contents": { "documentId": "med7PYExueRxq5uLt", "html": "

I had an incredibly frustrating conversation this morning trying to explain the idea of quantum immortality to someone whose understanding of MWI begins and ends at pop sci fi movies. I think I've identified the main issue that I wasn't covering in enough depth (continuity of identity between near-identical realities) but I was wondering whether anyone has ever faced this problem before, and whether anyone has (or knows where to find) a canned 5 minute explanation of it.

" } }, { "_id": "CMxaxnG3uJSqH6yNJ", "title": "Rational Regions? ", "pageUrl": "https://www.lesswrong.com/posts/CMxaxnG3uJSqH6yNJ/rational-regions", "postedAt": "2010-10-19T08:22:05.411Z", "baseScore": 2, "voteCount": 6, "commentCount": 20, "url": null, "contents": { "documentId": "CMxaxnG3uJSqH6yNJ", "html": "

Are certain areas of the world, specifically within the United States, more or less rational than others? If so, which ones and why? I am trying to determine what parts of the country would be ideal for me to live in the future and any help would be greatly appreciated.

" } }, { "_id": "HZb5vKcRcXZ62gPGb", "title": "October 2010 Southern California Meetup", "pageUrl": "https://www.lesswrong.com/posts/HZb5vKcRcXZ62gPGb/october-2010-southern-california-meetup", "postedAt": "2010-10-18T21:28:17.651Z", "baseScore": 11, "voteCount": 7, "commentCount": 16, "url": null, "contents": { "documentId": "HZb5vKcRcXZ62gPGb", "html": "

We're having the third SoCal LessWrong meetup this Saturday, the 23rd. It'll be held at this IHOP in Irvine, from 1PM to 8PM, in the upstairs meeting area.

\n

For those that haven't yet come, the last two were quite successful bringing 13 and 16 people respectively, and there was plenty of intelligent and friendly discussion.

\n

Make sure to comment if you have suggestions for how to improve on the last one, if you can give/need a ride, or just to say you're coming.

" } }, { "_id": "KhuJHHFD6DBiZaBGG", "title": "Rally to Restore Rationality", "pageUrl": "https://www.lesswrong.com/posts/KhuJHHFD6DBiZaBGG/rally-to-restore-rationality", "postedAt": "2010-10-18T18:41:33.876Z", "baseScore": 8, "voteCount": 7, "commentCount": 16, "url": null, "contents": { "documentId": "KhuJHHFD6DBiZaBGG", "html": "

Hey everyone. If anyone else is heading to Jon Stewart's Rally to Restore Sanity on the National Mall on Oct. 30th, please comment or contact me at pphysics141@gmail.com so we can arrange an LW meetup.

" } }, { "_id": "uWg8Yy9RGjQxwJEQQ", "title": "Vipassana Meditation Open Thread", "pageUrl": "https://www.lesswrong.com/posts/uWg8Yy9RGjQxwJEQQ/vipassana-meditation-open-thread", "postedAt": "2010-10-18T17:01:28.575Z", "baseScore": 4, "voteCount": 8, "commentCount": 30, "url": null, "contents": { "documentId": "uWg8Yy9RGjQxwJEQQ", "html": "

Related to: Understanding vipassana meditation, Vipassana Meditation: Developing Meta-Feeling Skills

\n

This is a place to discuss experiences and problems related to practicing vipassana meditation. This space can also be used to organize meditation events or retreats.

" } }, { "_id": "NTkBCFJSA4PFBxSM9", "title": "Vipassana Meditation: Developing Meta-Feeling Skills", "pageUrl": "https://www.lesswrong.com/posts/NTkBCFJSA4PFBxSM9/vipassana-meditation-developing-meta-feeling-skills", "postedAt": "2010-10-18T16:55:10.360Z", "baseScore": 31, "voteCount": 33, "commentCount": 107, "url": null, "contents": { "documentId": "NTkBCFJSA4PFBxSM9", "html": "

Followup to: Understanding vipassana meditation

\n

I explain how to practice vipassana meditation1 (a form of Buddhist meditation), giving instructions, advice, and measures of progress. By practicing vipassana one becomes aware of (and exerts control over) the process of affective judgment. This process may underlie important (and subtle) mental patterns of feeling that are responsible for common rationality failures. While I've tried to give helpful information, you should meditate at your own risk; you may experience mental instability or change in undesirable ways.2 This is a somewhat brief post containing information I've personally found most helpful on my meditative journey, see other guides (like this one) for more.

\n

Instructions

\n

Decide beforehand how long you will meditate for, and set some kind of alarm to go off at the end of this period.

\n

Start with 10-15 minutes; you can incrementally increase this amount to 30 minutes or an hour. The use of an alarm allows you to meditate without worrying about checking if your time is up.

\n

Go to a quiet location where you feel comfortable. Assume a posture that you can stay in for a while. Do your best not to change your posture during the time period.

\n

AFAICT the posture you choose is not especially important. You can sit Indian style, in a half-lotus position, or in a full-lotus position. You can sit on a pillow or directly on the floor. If none of these positions work you can also sit in a chair. You should be reasonably comfortable (but alert) and stable, and able to breathe naturally. Take care not to aggravate past injuries or cause new ones.

\n

Close your eyes and your mouth. Breathe naturally through your nose. Keep your awareness centered on the area below the nostrils and above the upper lip. Neutrally and passively observe the breath passing over this area3. If you realize your mind has wandered, patiently re-center your awareness. Once you have established some degree of concentration you should be able to \"see through\" thoughts and emotions without getting swept away by them.

\n

You should not regulate your breath. If the breath is deep, simply observe that the breath is deep. If the breath is shallow, simply observe that the breath is shallow. Observe the breath neutrally and passively. Don't associate yourself with the breath.

\n

The breath should be the center of your awareness, the anchor4 that you remain attached to regardless of what arises in the mind. Sooner or later you will get \"stuck to\" a train of thought, and lose your awareness of the breath. When you notice that this has happened, patiently re-center your awareness on the breath. Do this without feeling the slightest bit upset or disappointed.

\n

After practicing for some time (hours, days, or months) you should be able to \"see through\"5 arising thoughts and emotions without getting \"stuck to\" them. Demanding thoughts and emotions will arise, and by \"seeing through\" them you maintain your observation of the breath as they continue (unattended to) in your peripheral awareness.

\n

Advice

\n

Meditate every day.

\n

Really. You're trying to change deep mental habits of awareness and feeling, and that requires constant pressure and reinforcement. Consistency is important. Choosing to meditate at the same time and in the same spot can facilitate making it part of your daily routine.

\n

Keep an innocently curious mindset.

\n

Think of meditation as a wonderful opportunity to learn about your mind. It seems reasonable to expect your mind to be able to focus on one object for 5 minutes (or even 1 minute), and the fact that it's so hard for many people is interesting. Re-centering your mind, you might notice that you tend to get de-railed most often by thoughts about some past injustice, or about some future fantasy. When you remain centered on the breath, and are \"seeing through\" the arising thoughts and emotions, you may notice you think much more about some particular thing than you thought you did. Don't be afraid; unravel parts of yourself to become stronger.

\n

Beware of wireheading patterns.

\n

These patterns can occur in the form of trying to realize one's idea of what meditation should be (e.g. attempting to repeat a peak experience). This leads to altering one's practice in order to try to match previous expectations. This can be a subtle (and recurring) error, since one's meditation should actually evolve over time and through new experiences. Trying to follow the instructions as straightforwardly as possible, and looking for the manifestation of the benefits in one's life, can help to distinguish between wireheading patterns and genuine growth.

\n

Measures of progress

\n

Improved concentration.

\n

You find that you are able to focus on tasks for longer periods without getting irritated or distracted.

\n

Less anxiety.

\n

On a coarse level, you get worried or stressed less often about macroscopic events. On a more subtle level, you aren't irritated by formerly annoying bodily experiences (e.g. cold or hunger).

\n

Feeling unusual sensations (during or outside meditation).

\n

You might feel spreading tingling sensations, numbness, muscle twitches, or a host of other surprising things.

\n

Enhanced sensory perception.

\n

You start to notice (and eventually continually become aware of) subtle sensations. You strongly smell trees and flowers when walking down the street, become sensitive to the temperature of the things you touch, etc. This enhanced perception is similar to the sensory sensitivity one experiences when taking psychedelics.

\n

Insights about patterns of feelings.

\n

You discover that you are perpetuating patterns that hurt you in one way or another. (See here for a concrete example)

\n

Experiences of egolessness (during or outside meditation).

\n

You find that you become absorbed in some aspect of your experience; you lose your sense of self and feel that nothing else exists. The first time I strongly experienced this I became absorbed in a sensation that was previously causing extreme pain.

\n

Meditation during daily life.

\n

You begin regulating your awareness and feelings as you do in meditation in the course of daily life.

\n

 

\n

\n


\n

1 There is much confusion out there about what vipassana is, and how it is related to other forms of Buddhist meditation (like anapanasati). I've made personal decisions about how to use the terms (and what instructions to give) in a way that I think is most clear and conducive to beneficial practice.

\n

2 These courses indicate that they may turn down people with serious emotional problems. I expect that undesirable changes (if they occur) will be slow; you will see them happening and can stop meditating if you so desire. An example of such a change: I now very rarely get sad (didn't shed a tear at my last grandparent's funeral). This doesn't bother me at all, as I generally understand sadness as an indication of my attachment to how someone makes me feel, and not a measure of how much I intrinsically care about them.

\n

3 At the start your awareness of the breath will not be very sharp. As it becomes easier to keep your awareness centered you can sharpen your awareness by focusing more precisely on the sensation the breath causes in this area, the touch of the breath, as you inhale and exhale.

\n

4 I expect that the particular anchor you use isn't important (but I'm not sure). AFAICT in these courses your anchor is the mental procedure of systematically observing bodily sensations.

\n

5 This guide has a good description of the difference between \"seeing through\" (being aware of) and \"getting stuck to\" (thinking) a thought:

\n
\n

There is a difference between being aware of a thought and thinking a thought. That difference is very subtle. It is primarily a matter of feeling or texture. A thought you are simply aware of with bare attention feels light in texture; there is a sense of distance between that thought and the awareness viewing it. It arises lightly like a bubble, and it passes away without necessarily giving rise to the next thought in that chain. Normal conscious thought is much heavier in texture. It is ponderous, commanding, and compulsive. It sucks you in and grabs control of consciousness. By its very nature it is obsessional, and it leads straight to the next thought in the chain, apparently with no gap between them.

\n
\n

 

\n
\n

 

\n

I've created an open thread to discuss experiences and problems related to vipassana meditation, and to organize events and retreats.

" } }, { "_id": "LZsAdWzhpjSxb5Svj", "title": "Evolution just might be chaotic", "pageUrl": "https://www.lesswrong.com/posts/LZsAdWzhpjSxb5Svj/evolution-just-might-be-chaotic", "postedAt": "2010-10-18T14:52:14.393Z", "baseScore": 3, "voteCount": 4, "commentCount": 3, "url": null, "contents": { "documentId": "LZsAdWzhpjSxb5Svj", "html": "

The Chaos Theory of Evolution:

\n
Research on animals has come to similarly unexpected conclusions, albeit based on sparser fossil records. For example, palaeontologist Russell Graham at Illinois State Museum has looked at North American mammals and palaeontologist Russell Coope at the University of Birmingham in the UK has examined insects (Annual Review of Ecology and Systematics, vol 10, p 247). Both studies show that most species remain unchanged for hundreds of thousands of years, perhaps longer, and across several ice ages. Species undergo major changes in distribution and abundance, but show no evolution of morphological characteristics despite major environmental changes. That is not to say that major evolutionary change such as speciation doesn't happen. But recent \"molecular clock\" research suggests the link between speciation and environmental change is weak at best. Molecular clock approaches allow us to estimate when two closely related modern species split from a common ancestor by comparing their DNA. Most of this work has been carried out in birds, and shows that new species appear more or less continuously, regardless of the dramatic climatic oscillations of the Quaternary or the longer term cooling that preceded it
\n

The hypothesis is that there's very little possibility of finding patterns in evolution. What do you think?

" } }, { "_id": "eM5x4PAwHrQa96gwD", "title": "Was Carl Segan an Agnostic Prophet?", "pageUrl": "https://www.lesswrong.com/posts/eM5x4PAwHrQa96gwD/was-carl-segan-an-agnostic-prophet", "postedAt": "2010-10-18T06:20:50.425Z", "baseScore": -36, "voteCount": 26, "commentCount": 8, "url": null, "contents": { "documentId": "eM5x4PAwHrQa96gwD", "html": "

\r\n

I ask that those who want to participate follow these rules:

\r\n

Syllogistic representations are preferred.

\r\n

Anecdotes are welcome, but please limit yourself.

\r\n

Platitudes are self recriminatory.

\r\n

Haiku are considered poetry.

\r\n

Math, while appropriate, may cause confusion.

\r\n

If the argument that you represent is not listed above please try to limit your response for clarity.

\r\n

For those who wish to argue that \"Agnostic\" and \"Prophet\" are incongruent: please understand that \"Prophet\" is understood to mean \"any person that can observe phenomena over time and hazards a guess as to what will happen next\".

\r\n

 

\r\n

\r\n

 

" } }, { "_id": "7jYKgyQvL5agiWh48", "title": "Bayesian Doomsday Argument", "pageUrl": "https://www.lesswrong.com/posts/7jYKgyQvL5agiWh48/bayesian-doomsday-argument-0", "postedAt": "2010-10-17T22:14:17.440Z", "baseScore": -6, "voteCount": 6, "commentCount": 16, "url": null, "contents": { "documentId": "7jYKgyQvL5agiWh48", "html": "

First, if you don't already know it, here is the Frequentist Doomsday Argument:

There's some number of total humans. There's a 95% chance that you come after the first 5% of them, which means the total can be at most 20 times the number born so far. There's been about 60 to 120 billion people so far, so there's a 95% chance that the total will be less than 1.2 to 2.4 trillion.

I've modified it to be Bayesian.

First, find the priors:

Do you think it's possible that the total number of sentients that have ever lived or will ever live is less than a googolplex? I'm not asking if you're certain, or even if you think it's likely. Is it more likely than one in infinity? I think it is too. This means that the prior must be normalizable.

If we take P(T=n) ∝ 1/n, where T is the total number of people, it can't be normalized, as 1/1 + 1/2 + 1/3 + ... is an infinite sum. If it decreases faster, it can at least be normalized. As such, we can use 1/n as an upper limit.

Of course, that's just the limit of the upper tail, so maybe that's not a very good argument. Here's another one:

We're not so much dealing with lives as life-years. A year is a pretty arbitrary unit, so we'd expect the distribution to look essentially the same over most of its range whether we counted in years or, say, days. That kind of scale invariance requires the 1/n distribution.

After that,

T = total number of people

U = number you are

P(T=n) ∝ 1/n
U = m
P(U=m|T=n) ∝ 1/n
P(T=n|U=m) = P(U=m|T=n) * P(T=n) / P(U=m)
∝ (1/n^2) / P(U=m)
P(T>n|U=m) = ∫ from x=n to ∞ of P(T=x|U=m) dx
∝ (1/n) / P(U=m)
And to normalize, note that the total can't be smaller than your own number:
P(T>m|U=m) = 1
which fixes the constant of proportionality at m, so
P(T>n|U=m) = m/n

\n

So, if there have been 100 billion people so far, the probability that the total ever exceeds 1 trillion is 1/10.
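
\n

A minimal numerical sketch of that result (the only inputs are the figures already used in this post):

\n

def p_total_exceeds(n, m):
    # P(T > n | U = m) = m/n for n >= m, from the derivation above
    return m / n

people_so_far = 100e9
print(p_total_exceeds(1e12, people_so_far))      # 0.1: chance the total ever exceeds a trillion
print(p_total_exceeds(2.4e12, people_so_far))    # ~0.042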

\n

There are still a few issues with this. It assumes P(U=m|T=n) ∝ 1/n. This seems like it makes sense: if there are a million people, there's a one-in-a-million chance of being the 268,547th. But if there are also a trillion sentient animals, the chance of being the nth person won't change that much between a million and a billion people. There are a few ways I can amend this.

\n

First: a = number of sentient animals. P(U=m|T=n) ∝ 1/(a+n). This would make the end result P(T>n|U=m) = (m+a)/(n+a).

\n

Second: Just replace every mention of people with sentients.

\n

Third: Take this as a prediction of the number of sentients who aren't humans who have lived so far.

\n

The first would work well if we can find the number of sentient animals without knowing how many humans there will be. Assuming we don't take the time to terraform every planet we come across, this should work okay.

\n

The second would work well if we did terraform every planet we came across.

\n

The third seems a bit weird. It gives a smaller answer than the other two. It gives a smaller answer than what you'd expect for animals alone. It does this because it combines the calculation with a Doomsday Argument against animals being sentient. You can work that out separately: just say T is the total number of humans, and U is the total number of animals. Unfortunately, you have to know the total number of humans to work out how many animals are sentient, and vice versa. As such, the combined argument may be more useful. It won't tell you how many of the denizens of planets we colonise will be animals, but I don't think it's actually possible to tell that.

\n

One more thing, you have more information. You have a lifetime of evidence, some of which can be used in these predictions. The lifetime of humanity isn't obvious. We might make it to the heat death of the universe, or we might just kill each other off in a nuclear or biological war in a few decades. We also might be annihilated by a paperclipper somewhere in between. As such, I don't think the evidence that way is very strong.

\n

The evidence for animals is stronger. Emotions aren't exclusive to high intelligence, and it doesn't seem animals would have to be that intelligent to be sentient. Even so, how sure can you really be? This is much more subjective than the doomsday part, and the evidence against their sentience is staggering. I think so anyway; how many animals are there at different levels of intelligence?

\n

Also, there's the priors for total human population so far. I've read estimates vary between 60 and 120 billion. I don't think a factor of two really matters too much for this discussion.

\n

So, what can we use for these priors?

\n

Another issue is that this is for all of space and time, not just Earth.

\n

Consider that you're the mth person (or sentient) from the lineage of a given planet. l(m) is the number of planets with a lineage of at least m people. N is the total number of people ever, n is the number on the average planet, and p is the number of planets.

l(m)/N
=l(m)/(n*p)
=(l(m)/p)/n

\n

l(m)/p is the proportion of planets whose lineage made it this far. This increases with n, so it weakens my argument, but only to a limited extent; I'm not sure how much. Instinct says l(m)/p is 50% when m=n, but the mean is not the median: I'd expect the distribution to be skewed, with most planets falling below the mean, which would make l(m)/p much lower than that. Even so, if you placed it at 0.01%, that would only make it a thousand times less likely at that value. The argument still takes the estimate down by orders of magnitude from what you'd otherwise think, so that correction isn't really that significant.
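As a toy illustration of why l(m)/p at m = n could sit far below 50%: the log-normal choice and its parameters below are purely arbitrary assumptions of mine, used only to show that with a heavy right tail, most planets never reach the average lineage length.

```python
import random

random.seed(0)

# Toy model: per-planet lineage sizes drawn from a heavy-tailed distribution.
planets = [random.lognormvariate(0, 3) for _ in range(100_000)]
mean_lineage = sum(planets) / len(planets)

# Fraction of planets whose lineage reaches the mean, i.e. l(n)/p with m = n.
frac_reaching_mean = sum(x >= mean_lineage for x in planets) / len(planets)
print(frac_reaching_mean)  # ~0.07 here, far below 50%
```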

\n

Also, a back-of-the-envelope calculation:

\n

Assume, against all odds, that there are a trillion times as many sentient animals as humans, and that we happen to be the humans. Also assume humans only increase their own numbers, and that the eventual human total sits at the top percentile of what the argument predicts (a factor of 100). Finally, assume 100 billion humans so far.

\n

n = 1,000,000,000,000 * 100,000,000,000 * 100

\n

n = 10^12 * 10^11 * 10^2

\n

n = 10^25

\n

Here's more what I'd expect:

\n

Humanity eventually puts up a satellite to collect solar energy. Once they do one, they might as well do another, until they have a Dyson swarm. Assume 1% efficiency. Also, assume humans still use their whole bodies instead of being brains in vats. Finally, assume they get fed with 0.1% efficiency, and assume an 80-year lifetime.

\n

n = (solar luminosity * 1% / power of a human) * 0.1% * (lifetime of Sun / lifetime of human)

\n

n = 4 * 10^26 Watts * 0.01 / 100 Watts * 0.001 * 5,000,000,000 years / 80 years

\n

n = 2.5 * 10^27

\n

By the way, the value I used for power of a human is after the inefficiencies of digesting.

Even with assumptions that extreme, we couldn't use this planet to its full potential. Granted, that requires mining pretty much the whole planet, but with a Dyson sphere you could do that in about a week, or two years at the efficiency I gave.

It actually works out to about 150 tons of Earth per person. How much do you need to get the elements to make a person?
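Here's a short script reproducing the two population estimates above and checking the follow-on claims. The Earth mass, the binding-energy reading of "mine the whole planet", and the reading of "per person" as per simultaneously-living person are my own assumptions, not spelled out above:

```python
# Upper-bound estimate: trillion-to-one animals, top percentile, 100 billion humans so far.
n_upper = 1e12 * 1e11 * 1e2
print(f"{n_upper:.0e}")    # 1e+25

# Dyson-swarm estimate.
solar_luminosity = 4e26    # W
collection_eff = 0.01      # 1% of the output actually captured
human_power = 100          # W per body, after digestive losses
feeding_eff = 0.001        # 0.1% of captured power ends up as food
sun_lifetime_yr = 5e9
human_lifetime_yr = 80

people_at_once = solar_luminosity * collection_eff * feeding_eff / human_power
n_dyson = people_at_once * sun_lifetime_yr / human_lifetime_yr
print(f"{people_at_once:.0e}")  # 4e+19 people alive at any one time
print(f"{n_dyson:.1e}")         # 2.5e+27 people in total

# Earth's mass spread over the simultaneous population (my reading of "per person"):
earth_mass_kg = 5.97e24
print(earth_mass_kg / people_at_once / 1000)   # ~150 tonnes each

# "Mine the whole planet in a week": taking that to mean supplying Earth's
# gravitational binding energy (~2.2e32 J, my figure) from the full solar output.
earth_binding_energy = 2.2e32  # J
seconds = earth_binding_energy / solar_luminosity
print(seconds / 86400)                          # ~6 days, i.e. about a week
print(seconds / collection_eff / 86400 / 365)   # ~1.7 years at 1% efficiency
```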

\n

Incidentally, I rewrote the article, so don't be surprised if some of the comments don't make sense.

" } }, { "_id": "4Z364g6wbazQgFu5z", "title": "Ranking the \"competition\" based on optimization power", "pageUrl": "https://www.lesswrong.com/posts/4Z364g6wbazQgFu5z/ranking-the-competition-based-on-optimization-power", "postedAt": "2010-10-17T16:55:51.772Z", "baseScore": 0, "voteCount": 4, "commentCount": 15, "url": null, "contents": { "documentId": "4Z364g6wbazQgFu5z", "html": "

Most long-term users of Less Wrong understand the concept of optimization power and how a system can be called intelligent if it can restrict the future in significant ways. Now, I believe that in this world only institutions are close to superintelligence in any significant way.

\n

I believe it is important for us to have at least some outside idea of which institutions/systems are powerful in today's world, so that we can at least see some outlines of how increasing optimization power will end up affecting normal people.

\n

So, my question is: which present institutions or systems would you classify as having the most optimization power? Please explain your reasoning if you feel you are mentioning some little-known institution. I am presenting my guesses after the break.

\n

\n

Blogospheroid's guess list

\n

 

\n
    \n
1. NSA / US Intelligence and defence community
2. Harvard University
3. The Chinese Politburo
4. Goldman Sachs
5. The Kremlin / Russian intelligence and defence community
6. Google Inc
7. Oracle Inc
8. Microsoft Inc
9. Murdoch's media empire
\n

Institutions I found significant outside this list are the Singaporean and Abu Dhabi city governments: very rational and increasing their significance in the world, but highly restricted from fooming because of their constraints.

\n

 

" } }, { "_id": "8gXxT2mZ7RGFumWJY", "title": "Number bias", "pageUrl": "https://www.lesswrong.com/posts/8gXxT2mZ7RGFumWJY/number-bias", "postedAt": "2010-10-17T14:03:46.227Z", "baseScore": 2, "voteCount": 4, "commentCount": 8, "url": null, "contents": { "documentId": "8gXxT2mZ7RGFumWJY", "html": "

The New York Times ran an editorial about an interesting type of cognitive bias: according to the article, the fact that our system of timekeeping is based on factors of 24, 7, etc. and the fact that we have 10 fingers profoundly influences our way of thinking. As the article explains, this bias is distinct from scope neglect and misunderstanding of probability. Has anyone else heard of this kind of \"number bias\" before? Also, is this an issue that deserves further study on LessWrong?

" } }, { "_id": "hhrv8aAcmkzJxvP58", "title": "Re: sub-reddits", "pageUrl": "https://www.lesswrong.com/posts/hhrv8aAcmkzJxvP58/re-sub-reddits", "postedAt": "2010-10-17T13:34:52.398Z", "baseScore": 10, "voteCount": 8, "commentCount": 23, "url": null, "contents": { "documentId": "hhrv8aAcmkzJxvP58", "html": "

A while back, I polled the community on the possibility of subreddits. Most people said they wanted them, and I said I'd investigate.

\n

I talked to a couple of people and eventually ended up talking to Tricycle, the developers of this site. They told me about their own proposed solution to the community organization problem, which is this new Discussion section. They said that searching the Discussion section by tag was equivalent to a sub-reddit. For example, if you want a sub-reddit on consciousness, the discussion consciousness tag search is an amazing imitation.

\n

I told them I wasn't entirely convinced by this and sent some reasons why, but I haven't heard back from them lately, and I'm not going to keep pursuing this and make a big deal of it unless a large percentage of the people who wanted sub-reddits are unsatisfied.

" } }, { "_id": "E7dTqbLRhtnDDcHpY", "title": "Monetary Incentives and Performance", "pageUrl": "https://www.lesswrong.com/posts/E7dTqbLRhtnDDcHpY/monetary-incentives-and-performance", "postedAt": "2010-10-16T16:01:09.279Z", "baseScore": 8, "voteCount": 5, "commentCount": 6, "url": null, "contents": { "documentId": "E7dTqbLRhtnDDcHpY", "html": "

I've been thinking about incorporating my Vanity and Ambition in Mathematics into a top-level post. If possible, I would like to situate my remarks and the quotations that I cite with respect to the existing experimental psychology literature. When I've discussed the material in the aforementioned article with people in psychology, they've sometimes made reference to recent findings that monetary incentives reduce performance on certain kinds of tasks, perhaps suggesting that intrinsic rather than extrinsic motivation is key for performance on those tasks.

\n

I'll do my own research, but does anybody know of any relevant studies?

" } }, { "_id": "YyjYPts5hnqnBmBue", "title": "Mixed strategy Nash equilibrium", "pageUrl": "https://www.lesswrong.com/posts/YyjYPts5hnqnBmBue/mixed-strategy-nash-equilibrium", "postedAt": "2010-10-16T16:00:05.537Z", "baseScore": 60, "voteCount": 46, "commentCount": 47, "url": null, "contents": { "documentId": "YyjYPts5hnqnBmBue", "html": "

Inspired by: Swords and Armor: A Game Theory Thought Experiment

Recently, nick012000 has posted Swords and Armor: A Game Theory Thought Experiment. I was disappointed to see many confused replies to this post, even after a complete solution was given by Steve_Rayhawk. I thought someone really ought to post an explanation about mixed strategy Nash equilibria. Then I figured that that someone may as well be me.

I will assume readers are familiar with the concepts of a game (a setting with several players, each having a choice of strategies to take and a payoff which depends on the strategies taken by all players) and of a Nash equilibrium (an \"optimal\" assignment of strategies such that, if everyone plays their assigned strategy, no player will have a reason to switch to a different strategy). Some games, like the famous prisoner's dilemma, have a Nash equilibrium in so-called \"pure strategies\" (as opposed to mixed strategies, to be introduced later). Consider, however, the following variant of the matching pennies game:

Player 1 is a general leading an attacking army, and player 2 is the general of the defending army. The attacker can attack from the east or west, and the defender can concentrate his defenses on the east or west. By the time each side learns the strategy of its enemy, it is too late to switch strategies. Attacking where the defenses aren't concentrated gives a great advantage; additionally, due to unspecified tactical circumstances, attacking from the east gives a slight advantage. The sides have no interest in cooperating, so this is a zero-sum game (what one side wins, the other loses).

This elaborate description can be summarized in the following payoff matrix (these payoffs are for the attacker; the defender's payoffs are their negatives):

             2: East    2: West
1: East        -1          2
1: West         1         -2
\n

\n

What strategy should each side play? The attacker can think, \"Overall, going east is advantageous. So I'll go east.\" The defender, anticipating this, might say, \"Surely they will go east, so that's where I'll wait for them.\" But after some deliberation, the attacker will realize this, and say \"They will expect me from the east! I'll surprise them and go west.\" You can see where this is going - the defender will think \"they'll try to be smart and go west; I'll be smarter and be ready for them\", the attacker will think \"I was right the first time, east is the way to go\" and so on, ad infinitum.

Indeed, looking at the table does not reveal any obvious Nash equilibrium. Every assignment of strategies, represented as a square in the table, will leave one side preferring the alternative. So, are the sides deadlocked in an infinite recursion of trying to outsmart each other? No. They have the option of choosing a strategy randomly.

Suppose the attacker decides to toss a coin, and go east if it lands heads, and west for tails. Suppose also that the defender, with his mastery of psychology, manages to accurately predict the attacker's bizarre behavior. What can he do with this knowledge? He cannot predict the outcome of a coin toss, so he makes do with calculating the expected outcome for each of his (the defender's) strategies. And it can be easily seen that no matter what the defender does, the expectation is 0.

Similarly, suppose the defender consults his preferred (P)RNG so that he defends east with probability 2/3, west with probability 1/3, and suppose that the attacker anticipates this. Again, either of the attacker's strategies will give an expected gain of 0.

This randomized behavior, which is a combination of strategies from which one is randomly chosen with specified probabilities, is called a \"mixed strategy\". They are typically denoted as a vector listing the probability for choosing each pure strategy, so the defender's suggested strategy is (2/3, 1/3).

What we have seen here is that by a clever choice of mixed strategy, each side can make sure it cannot be outsmarted! By constraining ourselves to rely on randomness when deciding on an action, we deny our opponent the ability to predict and counter it. If you yourself don't know what you'll do, there's no way your opponent can know. We conclude that sometimes, acting randomly is the rational action to take.

As we've seen, if either player uses his suggested mixed strategy, it doesn't matter what the other player does. The corollary is that if the attacker plays (1/2, 1/2) and the defender plays (2/3, 1/3), then no player will have a reason to switch to a different strategy - this is a Nash equilibrium!
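Since the equilibrium can be computed mechanically, here is a small Python sketch (mine, not part of the original argument) that solves the indifference conditions for any 2x2 zero-sum game and recovers (1/2, 1/2) and (2/3, 1/3) for the payoff table above:

```python
def mixed_equilibrium_2x2(A):
    """Mixed-strategy equilibrium of a 2x2 zero-sum game.

    A[i][j] is the row player's payoff when the row player picks strategy i
    and the column player picks strategy j. Assumes there is no pure
    equilibrium, so the denominator below is nonzero.
    """
    denom = A[0][0] - A[0][1] - A[1][0] + A[1][1]
    p = (A[1][1] - A[1][0]) / denom   # row player's probability of strategy 0
    q = (A[1][1] - A[0][1]) / denom   # column player's probability of strategy 0
    value = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) / denom
    return (p, 1 - p), (q, 1 - q), value

# Attacker = row player, defender = column player, payoffs from the table above.
attack = [[-1, 2], [1, -2]]
print(mixed_equilibrium_2x2(attack))
# ((0.5, 0.5), (0.666..., 0.333...), 0.0)
```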

Some more points of interest: \"Just\" randomizing won't do - you have to pick the right probabilities. The exact probabilities of the mixed strategy Nash equilibria, and the resulting payoff, depend on the specifics of the payoff matrix. In my example, the defender needs a high probability of defending east to prevent the attacker from exercising his advantage, but the symmetry is such that the attacker chooses with even odds. In games with more strategies per player, an equilibrium mixed strategy may be supported on (have positive probability for) all, several, or one of the pure strategies. If all players apply a Nash equilibrium, then any player can switch to any one of the pure strategies supporting his mixed strategy without changing his expected payoff; switching to any other strategy either decreases the payoff or leaves it unchanged.

Perhaps most interesting of all is Nash's theorem, saying that every finite game has a mixed strategy Nash equilibrium! Our original problem, that some games have no equilibrium, is solved completely once we move to mixed strategies.

\n

One thing should be kept in mind. A Nash equilibrium strategy, much like a minimax strategy, is \"safe\". It makes sure your expected payoff won't be too low no matter how clever your opponent is. But what if you don't want to be safe - what if you want to win? If you have good reason to believe you are smarter than your opponent, that he will play a non-equilibrium strategy you'll be able to predict, then go ahead and counter that strategy. Nash equilibria are for smart people facing smarter people.

\n

In fact, it is possible that some of the comments I called confused earlier were fully aware of these issues and coming from this \"post-Nash\" view. To them I apologize.  

\n

Examples where mixed strategies are crucial are plentiful. I'll give one more - the volunteer's dilemma. A group of people are about to suffer greatly from some misfortune, unless at least one of them volunteers to take a slightly inconvenient action. They cannot communicate to coordinate the volunteering. If everyone uses the same deterministic strategy, then either all or none will volunteer, neither of which is stable. But they have a mixed strategy equilibrium which, by carefully balancing the risks of needlessly volunteering and having nobody volunteer, leaves everyone with an expected penalty equal to just the inconvenience of volunteering. Not as good as having just one person volunteer, but at least it's stable.
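To make that claim concrete, here is a small sketch in my own notation (N players, volunteering costs c, and everyone pays L if nobody volunteers; none of these symbols appear above) of the symmetric mixed equilibrium:

```python
def volunteer_equilibrium(N, c, L):
    """Symmetric mixed equilibrium of the volunteer's dilemma.

    Each player volunteers with probability p chosen so that volunteering
    (sure cost c) and not volunteering (cost L only if no one else volunteers)
    have equal expected cost: c = L * (1 - p)**(N - 1).
    """
    p = 1 - (c / L) ** (1 / (N - 1))
    # Expected penalty for any single player at this p:
    expected_penalty = p * c + (1 - p) * L * (1 - p) ** (N - 1)
    return p, expected_penalty

# 10 players, volunteering costs 1, the disaster costs 100.
p, penalty = volunteer_equilibrium(N=10, c=1, L=100)
print(p, penalty)  # p ~ 0.40; penalty ~ 1, exactly the cost of volunteering
```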

\n

For further reading, Wikipedia has many articles about game theory and Nash equilibria. Also, Perplexed made this related comment recently.

" } }, { "_id": "mbYSiC4sPuz4vQxyB", "title": "Discuss: Original Seeing Practices", "pageUrl": "https://www.lesswrong.com/posts/mbYSiC4sPuz4vQxyB/discuss-original-seeing-practices", "postedAt": "2010-10-16T02:20:00.095Z", "baseScore": 3, "voteCount": 5, "commentCount": 12, "url": null, "contents": { "documentId": "mbYSiC4sPuz4vQxyB", "html": "

What can we practice to help us think original thoughts? How can we see beyond our cached responses?

\n

This is a place to share life patterns, techniques, or exercises that help you (either occasionally or regularly) think new thoughts.

" } }, { "_id": "bnGkuqaBfbgsexbfn", "title": "Rationality Dojo", "pageUrl": "https://www.lesswrong.com/posts/bnGkuqaBfbgsexbfn/rationality-dojo", "postedAt": "2010-10-15T18:04:35.160Z", "baseScore": 11, "voteCount": 8, "commentCount": 18, "url": null, "contents": { "documentId": "bnGkuqaBfbgsexbfn", "html": "

Last night, here in Portland (OR), some friends and I got together to try to start Rationality Dojo. We talked about it for a while and came up with exactly 4 exercises that we could readily practice:

\n
    \n
1. Play Paranoid Debating
2. Play the AI-Box experiment
3. Read Harry Potter and the Methods of Rationality
4. Write fanfiction in the style of #3
\n

We also had a whole bunch of semi-formed ideas about selecting a target (happiness, health) and optimizing it a month at a time. Starting a dojo, in a time before organized martial arts, was surely incredibly difficult. I hope we can accrete exercises rather than require a single sensei to invent the majority of the discipline. So I've added a category to the wiki, and I'm asking here. Do you have ideas or refinements for exercises to fit within rationality dojo?

" } }, { "_id": "e8qF4w56P62DXimnE", "title": "Human performance, psychometry, and baseball statistics", "pageUrl": "https://www.lesswrong.com/posts/e8qF4w56P62DXimnE/human-performance-psychometry-and-baseball-statistics", "postedAt": "2010-10-15T13:13:25.322Z", "baseScore": 33, "voteCount": 31, "commentCount": 21, "url": null, "contents": { "documentId": "e8qF4w56P62DXimnE", "html": "

I. Performance levels and age

\n

Human ambition for achievement in modest measure gives meaning to our lives, unless one is an existentialist pessimist like Schopenhauer, who taught that life with all its suffering and cruelty simply should not be. Psychologists study our achievements under a number of different descriptions--testing for IQ, motivation, creativity, and others. As part of my current career transition, I have been examining my own goals closely, and have recently read a fair amount on these topics, for which the quality of evidence varies considerably.

A useful collection of numerical data on the subject of human performance is the collection of Major League Baseball player performance statistics--the batting averages, number of home runs, runs batted in, slugging percentage--of the many thousands of participants in the hundred years since detailed statistical records have been kept and studied by the players, journalists, and fans of the sport. The advantage of examining issues like these from the angle of Major League Baseball player performance statistics is the enormous sample size of accurately measured and archived data.

The current senior authority in this field is Bill James, who now works for the Boston Red Sox; for the first twenty-five years of his activity as a baseball statistician James was not employed by any of the teams. It took him a long time to find a hearing for his views on the inside of the industry, although the fans started buying his books as soon as he began writing them.

In one of the early editions of his Baseball Abstract, James discussed the biggest fallacies that managers and executives held regarding the achievements of baseball players. He was adamant about the most obvious misunderstood fact of player performance: it is sharply peaked at age 27 and decreases rapidly, so rapidly that only the very best players were still useful at the age of 35. He was able to observe only one executive that seemed to intuit this--a man whose sole management strategy was to trade everybody over the age of 30 for the best available player under the age of 30 he could acquire.

There is a fair amount of more formal academic research on this issue. It is described in the literature as the field of Age and Achievement. The dean of the psychologists studying Age and Achievement is Dean Simonton. A decent overview of their findings is here. This is a meta-study of hundreds of individual studies. Many fields and many metrics are sampled. There is one repeated finding. Performance starts low at a young age and steadily increases along a curve which bears a resemblance to a Gaussian bell-shaped curve, peaks, and then declines. The decline is not as rapid as the rise (it is not a symmetric bell shape; it is steeply inclining from the left to the peak and gently declining from the peak to the right), but it is inevitably seen everywhere. The age of peak achievement varies, depending on the field. Baseball players peak at 27 (the curves from the psychology publications look exactly like the curve published by Bill James in his Abstract), business executives peak at 60, and physicists peak at age 35. Shakespearian actors peak late and rock stars peak early. These are statistical results and individual outliers abound. You, the individual physicist, may not be over the hill at 40, but this is the way to bet.

My hometown major league baseball franchise, the Houston Astros, recently had this empirical law verified for themselves in real time, and the hard way. They invested the bulk of their payroll budget on three players: Miguel Tejada, Carlos Lee, and Lance Berkman. All three were over the age of 30, i.e., definitely into their decline phase. When their performance declined more rapidly than predicted, the team lost many more games than they were planning for. They had a contending team's payroll and big plans, but now Tejada and Berkman are gone and they are rebuilding. In an attempt to cut losses, they traded their (prime-age) star pitcher for young players.

A recent post on Hacker News, Silicon Valley's Dark Secret: It's all about Age, generated 120 comments of heated discussion about institutional age discrimination and the unappreciated value of experience. The consensus view expressed there is young programmers have to advance into management or become unemployable near age 50.

It could perhaps be seen as an example of Evolutionary Biology. We are in an ecosystem. The ecosystem selects for fitness. What is sometimes misunderstood is the ecosystem does not select for absolute fitness, but for fitness specific to a niche. If the available niches in this \"ecosystem\" are for 40 year-old-brains, and there aren't any niches for 50 year-old-brains, then some fully fit brains (in an absolute sense) are going to be out of employment opportunities. Faced with a system like this, the job seeker may have to be clever at finding ever narrower niches to squeeze themself into.

One of the moderators at Hacker News, Paul Graham, is a software startup venture capitalist. He is accused in the thread of unconcealed age discrimination--that he will not invest in entrepreneurs over 38, and claiming that nobody over 25 will ever learn Lisp. If you are a forty-year-old physicist and you want to learn Lisp and get venture capital funding for your business plan--well, good luck with that!

II. Time to mastery

\n

This leads directly into my second topic within my larger subject of human performance, psychometry, and baseball statistics. Learning curves and estimated time for mastery. To continue with the above example, assuming you want to master Lisp, how much of your time should you plan to allocate for the task? K. Anders Ericsson is the author of the relevant research findings. At a crude level of approximation, something like that takes ten thousand hours. This is a result I was first exposed to many years ago in the context of Buddhist meditation, in an Esalen conference presented by Helen Palmer (mostly known for her work on the Enneagram). She reported that to become skilled at Zen meditation requires ten thousand hours of practice. In the University of Wisconsin brain imaging meditation study, the subjects were Tibetan monks who had all logged a minimum of ten thousand hours of practice. The ten thousand hours of practice requirement was also reported popularly by Malcolm Gladwell in his best-selling book Outliers. Another take on this: Teach Yourself Programming in Ten Years. Ten thousand hours of 40-hour-weeks is five years, not ten; the number is not precise, but the idea is consistent that ambitious projects take a daunting amount of time.

One of my dance teachers was fond of reminding me that practice does not make perfect. Only perfect practice can make you perfect. For most of us even that is an exaggeration. I think we can reliably predict that ten thousand hours of very good practice will make you very good, if you first possess an average or above-average amount of raw aptitude.

III. Distribution of performance across a population, replacement-level player

The second biggest fallacy among baseball personnel managers, according to Bill James, is they do not understand how ability is distributed amongst professional baseball players. He defines the concept of replacement-level player, and insists the vast majority of the fellows working in the Major Leagues are easily, quickly, replaceable. His reasoning is simple.

If you have a random selection of humans and measure nearly any measurable trait--height, weight, speed, strength, reflex time--the frequency plot will be the familiar bell-shaped Gaussian curve. People playing baseball professionally are an extreme non-random sample. 98% of the left-hand portion of the curve is gone, because none of those people have the physical requirements to get employment playing baseball. The resulting distribution is a truncated Gaussian distribution, with few at the highest levels, and the vast majority of participants of nearly indistinguishable quality. When performance is creamed at stage after stage after stage, little league to high school to college to minor leagues to the majors, almost all the remaining players are excellent and interchangeable.
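A toy simulation of that truncation effect (mine, with arbitrary numbers): keep only the top 2% of a Gaussian and the survivors span a far narrower range than the original population.

```python
import random

random.seed(0)

# Draw a large population of "talent" scores and keep only the top 2%.
population = [random.gauss(0, 1) for _ in range(1_000_000)]
cutoff = sorted(population)[int(0.98 * len(population))]
survivors = [x for x in population if x >= cutoff]

full_range = max(population) - min(population)
kept_range = max(survivors) - min(survivors)
print(f"range of talent in the full population: {full_range:.1f} sd")  # ~9-10 sd
print(f"range among the top 2%: {kept_range:.1f} sd")                  # ~2-3 sd
```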

If you are managing a corporation and you only hire candidates with golden resumes you have a truncated Gaussian distribution of talent. If in your evaluation process you shove those people into a Gaussian distribution, Bill James says you are doing it totally wrong. Another common mistake is that managers think there is something magical about \"major league\" talent, that some guys have it (as Thomas Wolfe referred to the \"right stuff\") and some do not, and they mislabel players who could help them win baseball games as not having it, due to the circumstantial variations of where the players have found themselves employed up until now. Organizations that hire top talent and pay high salaries have far more options than they generally presume. Nearly every single person working for your company is easily replaceable.

There is a story, possibly apocryphal, about Benoit Mandelbrot and his early preoccupation with financial market data. His questioner thought finance was a fuzzy science and hard scientific data really ought to be much more attractive to his scientific temperament. Mandelbrot explained that the great feature of studying financial data was that there was so much of it, and it was thus endlessly fascinating. Many statisticians have a similar fondness for baseball statistics. It is reliably recorded, unambiguous in definition, and there is so much of it. Many subtle statistics results are best explained in the context of baseball statistics, and there may be unknown statistical theorems sitting in the archives waiting to be extracted by clever statisticians. The wikipedia page on Stein's paradox (first published by Charles Stein in 1956) has a reference to a well-known (well-known to baseball statisticians, anyway) article from the May 1977 issue of Scientific American using baseball statistics to illustrate Stein's paradox.

After my article was nearly finished, I stumbled upon this \"news\" in the New York Times Sports section:

Sniffing .300, Hitters Hunker Down on Last Chances. (Here they are presenting research from a couple of economists from U. Pennsylvania's Wharton School of Business. The academic publication is here.)

The preceding should be of interest to anybody who is interested in the subjects of human achievement, psychometry and baseball statistics. My own interest is narrower and the lesson I personally draw is a hybrid from the sequence of lessons here. I have an ambitious scope for the company I am building. Ten thousand hours is close to the limit I am choosing for myself as the point when I will write off these lessons and losses (if they be) and go back to rejoin the American corporate employment market.

" } }, { "_id": "4DFiJKeGc5XFGCcSX", "title": "Picking your battles", "pageUrl": "https://www.lesswrong.com/posts/4DFiJKeGc5XFGCcSX/picking-your-battles", "postedAt": "2010-10-15T11:17:33.191Z", "baseScore": 13, "voteCount": 11, "commentCount": 40, "url": null, "contents": { "documentId": "4DFiJKeGc5XFGCcSX", "html": "

I think that raising the sanity waterline is a worthwhile goal, but picking your battles is absolutely necessary. It doesn't matter how formidable your argument is: if you're arguing in the comments of a YouTube video, you've lost by default. So where is the line in the sand? Where would you feel compelled to take action, and to what lengths would you go? What price would you be willing to pay?

\n

I'm a psychology student, third year and currently doing a unit called \"cultural psychology\". The lecturer has advanced notions of \"multiple truths\" and how \"reality is socially constructed\". To quote him directly in regards to this:

\n

\"There is a tendency for those who believe in one reality to use the physical world as a basis for argument, while those who believe in multiple realities use the social world. Even in physics we have 'reality' changing as you get closer to the speed of light, and the laws of physics don't apply prior to the big bang. These are fairly extreme situations. In this course we are dealing with social realities and the point is that different cultures operate in worlds that can be quite different. To see this purely as a perspective risks the dominant social grouping seeing their reality as the true reality, and others as having a different perspective on that reality. The assumption that cultures can have different realities places every on a level playing field with a dominant culture calling all the shots.\"

\n

You can see in the last line the conclusion he wants his premises to support. The exercise is not to pick his argument apart, find all the holes and write a crushing riposte (although you can if you're so inclined).

\n

 

\n

The question is, if the goal is to raise the sanity waterline, is this a battle worth picking?

" } }, { "_id": "FvaPCfZLXv5uhZ5wz", "title": "You don't need barefoot shoes to start walking differently. ", "pageUrl": "https://www.lesswrong.com/posts/FvaPCfZLXv5uhZ5wz/you-don-t-need-barefoot-shoes-to-start-walking-differently", "postedAt": "2010-10-15T06:41:34.772Z", "baseScore": -4, "voteCount": 12, "commentCount": 16, "url": null, "contents": { "documentId": "FvaPCfZLXv5uhZ5wz", "html": "

I bought into the hype and decided that I was going to get a pair of Vibrams. My intention was not to use them as running shoes, but as everyday walking shoes. Then my girlfriend told me that I wasn't allowed, that they were too hideous to be worn in public. In almost two years together, this is the only thing that she has forbidden me from doing, and I regularly do completely ridiculous things so I deferred to her judgement. I thought about getting barefoot dress shoes but $150 seemed excessive.

\n

I then decided that I didn't need fancy shoes to stop walking heel first. I started wearing a pair of casual brown slip-on shoes with a fair amount of cushioning but little support. From the start, I thought it felt good to actually walk on the balls of my feet.

\n

It took three weeks for my feet to stop hurting, but now I naturally walk on the balls of my feet. You can do the same thing. It will probably be easier in a light pair of shoes rather than a clunky pair of dress shoes or boots.

" } }, { "_id": "QFoBXL3P6Sf5LGqyP", "title": "Melbourne Less Wrong Meetup for November", "pageUrl": "https://www.lesswrong.com/posts/QFoBXL3P6Sf5LGqyP/melbourne-less-wrong-meetup-for-november", "postedAt": "2010-10-15T05:39:18.919Z", "baseScore": 9, "voteCount": 8, "commentCount": 9, "url": null, "contents": { "documentId": "QFoBXL3P6Sf5LGqyP", "html": "

Hot on the heels of October's meetup, I present to you the details for November's meetup!

\n

Date: Friday, November 5th

\n

Place: Don Tojo

\n

Time: 6-9pm

\n

Please comment to say if you're attending, and also to suggest activities.

" } }, { "_id": "o6BfKyWQC6yv8G7bc", "title": "Lifelogging: the recording device", "pageUrl": "https://www.lesswrong.com/posts/o6BfKyWQC6yv8G7bc/lifelogging-the-recording-device", "postedAt": "2010-10-15T01:04:57.530Z", "baseScore": 14, "voteCount": 14, "commentCount": 31, "url": null, "contents": { "documentId": "o6BfKyWQC6yv8G7bc", "html": "

The old idea of lifelogging seems to be a reality now. It has the potential to be quite useful, and not just in distant contrived scenarios like cryonics or being recreated by an AI.

\n

One of the classic objections was that we couldn’t afford to store the many gigabytes - possibly hundreds of gigabytes a year! - such a practice would generate, but right now you can buy 1 terabyte for <$50. And there’s no end in sight to whatever Moore’s law has been governing hard-drives over the past decade or two.

\n

But how is one to record it? That seems to be the rub. All the storage space we could want, all sorts of new formats like WebM or Dirac or x264 to store the videos in - but what camera generates the data in the first place?

\n

We don’t care about sleep time, so we don’t need any more than 16 hours or so of recording a day. We can probably get away with 12. Even 8 might be enough (to record yourself on the job - or off). An encoded compressed video might be 1 megabyte a minute or 60 megabytes an hour, but let’s be generous and assume 15x worse than that, or about 1 gigabyte an hour. So perhaps 16 gigabytes a day.

\n

16 gigabytes of Flash costs $40 or less. So that’s not an issue either.
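For what it's worth, the storage arithmetic as a tiny script; the rates are just the generous assumptions above:

```python
hours_per_day = 16
gb_per_hour = 1          # deliberately ~15x the ~60 MB/hour a decent encode needs
gb_per_day = hours_per_day * gb_per_hour
print(gb_per_day)                # 16 GB of flash filled per recording day
print(gb_per_day * 365 / 1000)   # ~5.8 TB a year to archive onto hard drives
```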

\n

And presumably optics and microprocessors are very cheap given the incredible popularity of web cameras, digital cameras, digital camcorders and whatnot over the last decade.

\n

But for all that, I can’t seem to find a mini-camcorder which will record even 8 hours and be a useful lifelogger!

\n\n

Am I wrong? Are there existing products? It seems to me that it ought to be perfectly possible to take something like the uCorder, slap in $110 of batteries, and get it up to 8 or 12 hours’ life. But I have yet to find such a thing.

" } }, { "_id": "kdn9E7F64yd5wg8WH", "title": "A 50-minute introduction to probability", "pageUrl": "https://www.lesswrong.com/posts/kdn9E7F64yd5wg8WH/a-50-minute-introduction-to-probability-1", "postedAt": "2010-10-14T23:31:49.777Z", "baseScore": 6, "voteCount": 5, "commentCount": 12, "url": null, "contents": { "documentId": "kdn9E7F64yd5wg8WH", "html": "" } }, { "_id": "ZDsxeieoXi6uvWAxg", "title": "Retirement leads to loss of cognitive abilities", "pageUrl": "https://www.lesswrong.com/posts/ZDsxeieoXi6uvWAxg/retirement-leads-to-loss-of-cognitive-abilities", "postedAt": "2010-10-14T17:53:57.292Z", "baseScore": 2, "voteCount": 4, "commentCount": 6, "url": null, "contents": { "documentId": "ZDsxeieoXi6uvWAxg", "html": "

 

\n

Here's something I found after surfing a few links. Given the interest in intelligence augmentation and biological immortality here, I figured you guys would find it useful to know; I wasn't particularly planning on retiring (given biological immortality or uploading technologies), but if this is true, I definitely won't be for as long as I can avoid it.

\n
\n

The two economists call their paper “Mental Retirement,” and their argument has intrigued behavioral researchers. Data from the United States, England and 11 other European countries suggest that the earlier people retire, the more quickly their memories decline.

\n

The implication, the economists and others say, is that there really seems to be something to the “use it or lose it” notion — if people want to preserve their memories and reasoning abilities, they may have to keep active.

\n

“It’s incredibly interesting and exciting,” said Laura L. Carstensen, director of the Center on Longevity at Stanford University. “It suggests that work actually provides an important component of the environment that keeps people functioning optimally.”

\n

While not everyone is convinced by the new analysis, published recently in The Journal of Economic Perspectives, a number of leading researchers say the study is, at least, a tantalizing bit of evidence for a hypothesis that is widely believed but surprisingly difficult to demonstrate.

\n

Researchers repeatedly find that retired people as a group tend to do less well on cognitive tests than people who are still working. But, they note, that could be because people whose memories and thinking skills are declining may be more likely to retire than people whose cognitive skills remain sharp.

\n

And research has failed to support the premise that mastering things like memory exercises, crossword puzzles and games like Sudoku carry over into real life, improving overall functioning.

\n

“If you do crossword puzzles, you get better at crossword puzzles,” said Lisa Berkman, director of the Center for Population and Development Studies at Harvard. “If you do Sudoku, you get better at Sudoku. You get better at one narrow task. But you don’t get better at cognitive behavior in life.”

\n

The study was possible, explains one of its authors, Robert Willis, a professor of economics at the University of Michigan, because the National Institute on Aging began a large study in the United States nearly 20 years ago. Called the Health and Retirement Study, it surveys more than 22,000 Americans over age 50 every two years, and administers memory tests.

\n

That led European countries to start their own surveys, using similar questions so the data would be comparable among countries. Now, Dr. Willis said, Japan and South Korea have begun administering the survey to their populations. China is planning to start doing a survey next year. And India and several countries in Latin America are starting preliminary work on their own surveys.

\n

“This is a new approach that is only possible because of the development of comparable data sets around the world.” Dr. Willis said.

\n

The memory test looks at how well people can recall a list of 10 nouns immediately and 10 minutes after they heard them. A perfect score is 20, meaning all 10 were recalled each time. Those tests were chosen for the surveys because memory generally declines with age, and this decline is associated with diminished ability to think and reason.

\n

People in the United States did best, with an average score of 11. Those in Denmark and England were close behind, with scores just above 10. In Italy, the average score was around 7, in France it was 8, and in Spain it was a little more than 6.

\n

Examining the data from the various countries, Dr. Willis and his colleague Susann Rohwedder, associate director of the RAND Center for the Study of Aging in Santa Monica, Calif., noticed that there are large differences in the ages at which people retire.

\n

In the United States, England and Denmark, where people retire later, 65 to 70 percent of men were still working when they were in their early 60s. In France and Italy, the figure is 10 to 20 percent, and in Spain it is 38 percent.

\n

Economic incentives produce the large differences in retirement age, Dr. Rohwedder and Dr. Willis report. Countries with earlier retirement ages have tax policies, pension, disability and other measures that encourage people to leave the work force at younger ages.

\n

The researchers find a straight-line relationship between the percentage of people in a country who are working at age 60 to 64 and their performance on memory tests. The longer people in a country keep working, the better, as a group, they do on the tests when they are in their early 60s.

\n

The study cannot point to what aspect of work might help people retain their memories. Nor does it reveal whether different kinds of work might be associated with different effects on memory tests. And, as Dr. Berkman notes, it has nothing to say about the consequences of staying in a physically demanding job that might lead to disabilities. “There has to be an out for people who face physical disabilities if they continue,” she said.

\n

And of course not all work is mentally stimulating. But, Dr. Willis said, work has other aspects that might be operating.

\n

“There is evidence that social skills and personality skills — getting up in the morning, dealing with people, knowing the value of being prompt and trustworthy — are also important,” he said. “They go hand in hand with the work environment.”

\n

But Hugh Hendrie, an emeritus psychology professor at Indiana University School of Medicine, is not convinced by the paper’s conclusions.

\n

“It’s a nice approach, a very good study,” he said. But, he said, there are many differences among countries besides retirement ages. The correlations do not prove causation. They also, he added, do not prove that there is a clinical significance to the changes in scores on memory tests.

\n

All true, said Richard Suzman, associate director for behavioral and social research at the National Institute on Aging.

\n

Nonetheless, he said, “it’s a strong finding; it’s a big effect.”

\n

If work does help maintain cognitive functioning, it will be important to find out what aspect of work is doing that, Dr. Suzman said. “Is it the social engagement and interaction or the cognitive component of work, or is it the aerobic component of work?” he asked. “Or is it the absence of what happens when you retire, which could be increased TV watching?”

\n

“It’s quite convincing, but it’s not the complete story,” Dr. Suzman said. “This is an opening shot. But it’s got to be followed up.”

\n
" } }, { "_id": "uRcXEcyQYgAaZJvms", "title": "1993 AT&T \"You Will\" ads", "pageUrl": "https://www.lesswrong.com/posts/uRcXEcyQYgAaZJvms/1993-at-and-t-you-will-ads", "postedAt": "2010-10-14T12:49:24.851Z", "baseScore": 5, "voteCount": 4, "commentCount": 3, "url": null, "contents": { "documentId": "uRcXEcyQYgAaZJvms", "html": "

Here's something I found while wasting time on Youtube today. Sort of surprising how close they got to the truth, though of course the aesthetics are all wrong, and AT&T wasn't the company who brought them about.

\n

http://www.youtube.com/watch?v=TZb0avfQme8&feature=grec_index

" } }, { "_id": "sPNbbezfkxiNF8qn7", "title": "LW favorites", "pageUrl": "https://www.lesswrong.com/posts/sPNbbezfkxiNF8qn7/lw-favorites", "postedAt": "2010-10-14T00:39:09.365Z", "baseScore": 10, "voteCount": 7, "commentCount": 11, "url": null, "contents": { "documentId": "sPNbbezfkxiNF8qn7", "html": "

What's still on your mind (in a positive way) a year or more after it was posted?

" } }, { "_id": "XeTWxqS3LZm45ABNj", "title": "Standing Desks and Hunter-Gatherers", "pageUrl": "https://www.lesswrong.com/posts/XeTWxqS3LZm45ABNj/standing-desks-and-hunter-gatherers", "postedAt": "2010-10-14T00:03:26.507Z", "baseScore": 6, "voteCount": 8, "commentCount": 10, "url": null, "contents": { "documentId": "XeTWxqS3LZm45ABNj", "html": "

I recently started using a standing desk and found it increases my productivity, perhaps because my mostly caveman brain \"thinks\" that I will usually stand when facing cognitively challenging tasks, but sit when I want to relax and save energy.

\n

Are there other ways we can attempt to increase our cognitive powers by taking into account that many of the genes which influence human emotion and cognition were selected for to make our ancestors better hunter-gatherers?

\n

Edited because of Reisqui's comment.

" } }, { "_id": "S2r9BjZZFTHJpr6DA", "title": "A Fundamental Question of Group Rationality", "pageUrl": "https://www.lesswrong.com/posts/S2r9BjZZFTHJpr6DA/a-fundamental-question-of-group-rationality", "postedAt": "2010-10-13T20:32:08.085Z", "baseScore": 17, "voteCount": 11, "commentCount": 12, "url": null, "contents": { "documentId": "S2r9BjZZFTHJpr6DA", "html": "

What do you believe because others believe it, even though your own evidence and reasoning (\"impressions\") point the other way?

\n

(Note that answers like \"quantum chromodynamics\" don't count, except in the unlikely case that you've seriously tried to do your own physics, and it suggested the mainstream was wrong, and that's what you would have believed if not for it being the mainstream.)

" } }, { "_id": "5DFz85BgjdEYBpWCh", "title": "Is there evolutionary selection for female orgasms?", "pageUrl": "https://www.lesswrong.com/posts/5DFz85BgjdEYBpWCh/is-there-evolutionary-selection-for-female-orgasms", "postedAt": "2010-10-13T14:07:31.195Z", "baseScore": 8, "voteCount": 8, "commentCount": 22, "url": null, "contents": { "documentId": "5DFz85BgjdEYBpWCh", "html": "

>Elisabeth Lloyd: I don’t actually know. I think that it’s at a very problematic intersection of topics. I mean, you’re taking the intersection of human evolution, women, sexuality – once you take that intersection you’re bound to kind of get a disaster. More than that, when evolutionists have looked at this topic, I think that they’ve had quite a few items on their agenda, including telling the story about human origins that bolsters up the family, monogamy, a certain view of female sexuality that’s complimentary to a certain view of male sexuality. And all of those items have been on their agenda and it’s quite visible in their explanations.

>Natasha Mitchell: I guess it’s perplexed people partly, too, because women don’t need an orgasm to become pregnant, and so the question is: well, what’s its purpose? Well, is its purpose to give us pleasure so that we have sex, so that we can become pregnant, according to the classic evolutionary theories?

>Elisabeth Lloyd: The problem is even worse than it appears at first because not only is orgasm not necessary on the female side to become pregnant, there isn’t even any evidence that orgasm makes any difference at all to fertility, or pregnancy rate, or reproductive success. It seems intuitive that a female orgasm would motivate females to engage in intercourse which would naturally lead to more pregnancies or help with bonding or something like that, but the evidence simply doesn’t back that up.

\n

The whole discussion. It backs my theory that using evolution to explain current traits seriously tempts people to make things up.

" } }, { "_id": "AManWkjkCqZcRY3tp", "title": "Open Thread", "pageUrl": "https://www.lesswrong.com/posts/AManWkjkCqZcRY3tp/open-thread", "postedAt": "2010-10-13T14:04:06.988Z", "baseScore": 7, "voteCount": 5, "commentCount": 39, "url": null, "contents": { "documentId": "AManWkjkCqZcRY3tp", "html": "

This is an experiment to see whether people would like an open thread for small topics and conversation.

" } }, { "_id": "SdkAesHBt4tsivEKe", "title": "Gandhi, murder pills, and mental illness", "pageUrl": "https://www.lesswrong.com/posts/SdkAesHBt4tsivEKe/gandhi-murder-pills-and-mental-illness", "postedAt": "2010-10-13T09:16:26.583Z", "baseScore": 37, "voteCount": 26, "commentCount": 16, "url": null, "contents": { "documentId": "SdkAesHBt4tsivEKe", "html": "

Gandhi is the perfect pacifist, utterly committed to not bringing about harm to his fellow beings. If a murder pill existed such that it would make murder seem ok without changing any of your other values, Gandhi would refuse to take it on the grounds that he doesn't want his future self to go around doing things that his current self isn't comfortable with. Is there anything you could say to Gandhi that could convince him to take the pill? If a serial killer was hiding under his bed waiting to ambush him, would it be ethical to force him to take it so that he would have a chance to save his own life? If for some convoluted reason he was the only person who could kill the researcher about to complete uFAI, would it be ethical to force him to take the pill so that he'll go and save us all from uFAI?

\n

 

\n

Charlie is very depressed, utterly certain that life is meaningless and terrible and not going to improve anytime between now and the heat death of the universe. He would kill himself but even that seems pointless. If a magic pill existed that would get rid of depression permanently and without side effects, he would refuse it on the grounds that he doesn't want his future self to go around with a delusion (that everything is fine) which his current self knows to be false. Is there anything you could say to Charlie that could convince him to take it? Would it be ethical to force him to take the pill?

\n

 

\n

Note: I'm aware of the conventional wisdom for dealing with mental illness, and generally subscribe to it myself. I'm more interested in why people intuitively feel that there's a difference between these two situations, whether there are arguments that could be used to change someone's terminal values, or as a rationale for forcing a change on them.

" } }, { "_id": "DTfuFX7ozwLP4SgvK", "title": "Beauty in Mathematics", "pageUrl": "https://www.lesswrong.com/posts/DTfuFX7ozwLP4SgvK/beauty-in-mathematics", "postedAt": "2010-10-13T09:12:21.578Z", "baseScore": 20, "voteCount": 15, "commentCount": 6, "url": null, "contents": { "documentId": "DTfuFX7ozwLP4SgvK", "html": "

Serious mathematicians are often drawn toward the subject and motivated by a powerful aesthetic response to mathematical stimuli. In his essay on Mathematical Creation, Henri Poincare wrote

\n
\n

It may be surprising to see emotional sensibility invoked à propos of mathematical demonstrations which, it would seem, can interest only the intellect. This would be to forget the feeling of mathematical beauty, of the harmony of numbers and forms, of geometric elegance. This is a true aesthetic feeling that all real mathematicians know, and surely it belongs to emotional sensibility.

\n
\n

The prevalence and extent of the feeling of mathematical beauty among mathematicians is not well known. In this article I'll describe some of the reasons for this and give examples of the phenomenon. Many of the quotations in this article are drawn from the extensive collection of quotations compiled by my colleague Laurens Gunnarsen.

\n

There's an inherent difficulty in discussing mathematical beauty which is that as in all artistic endeavors, aesthetic judgments are subjective and vary from person to person. As Robert Langlands said in his recent essay Is there beauty in mathematical theories?

\n
\n

I appreciate, as do many, that there is bad architecture, good architecture and great architecture just as there is bad, good, and great music or bad, good and great literature but neither my education, nor my experience nor, above all, my innate abilities allow me to distinguish with any certainty one from the other. Besides the boundaries are fluid and uncertain. With mathematics, my topic in this lecture, the world at large is less aware of these distinctions and, even among mathematicians, there are widely different perceptions of the merits of this or that achievement, this or that contribution.

\n
\n

Even when they are personally motivated by what they find beautiful, mathematicians tend to deemphasize beauty in professional discourse, preferring to rely on more objective criteria. Without such a practice, the risk of generalizing from one example and confusing one's own immediate aesthetic preferences with what's in the interest of the mathematical community and broader society would be significant. In the same essay Langlands said

\n
\n

Harish-Chandra and Chevalley were certainly not alone in perceiving their goal as the revelation of God’s truths, which we might interpret as beauty, but mathematicians largely use a different criterion when evaluating the efforts of their colleagues. The degree of the difficulties to be overcome, thus of the effort and imagination necessary to the solution of a problem, is much more likely than aesthetic criteria to determine their esteem for the solution, and any theory that permits it. This is probably wise, since aesthetic criteria are seldom uniform and often difficult to apply. The search for beauty quickly lapses in less than stern hands into satisfaction with the meretricious.

\n
\n

The asymmetry between personal motivations and professional discourse gives rise to the possibility that outside onlookers might misunderstand the motivations of mathematicians and consequently misunderstand the nature of mathematical practice.

\n

Aside from this, another reason why outside onlookers are frequently misled is the high barrier to entry to advanced mathematics. In his article Mathematics: art and science, Armand Borel wrote:

\n
I [have] already mentioned the idea of mathematics as an art, a poetry of ideas. With that as a starting point, one would conclude that, in order for one to appreciate mathematics, to enjoy it, one needs a unique feeling for the intellectual elegance and beauty of ideas in a very special world of thought. It is not surprising that this can hardly be shared with nonmathematicians: Our poems are written in a highly specialized language, the mathematical language; although it is expressed in many of the more familiar languages, it is nevertheless unique and translatable into no other language; unfortunately, these poems can only be understood in the original. The resemblance to an art is clear. One must also have a certain education for the appreciation of music or painting, which is to say one must learn a certain language.
\n

I think that Borel's statement about the inaccessibility of mathematics to non-mathematicians is too strong. For a counterpoint, in a reference to be added, Jean-Pierre Serre said

\n
\n

I’ve always loved mathematics. My earliest memory, which goes back to the beginning of elementary school, is of learning the multiplication table. When one loves to play, one tries to understand the reason. All my mathematics is like this, but a bit more complicated.

\n
\n

In his aforementioned essay Langlands wrote

\n
\n

Initially perhaps there is no gangue, not even problems, perhaps just a natural, evolutionary conditioned delight in elementary arithmetic – the manipulation of numbers, even of small numbers – or in basic geometric shapes – triangles, rectangles, regular polygons.

\n
\n

and then after reviewing the history of algebraic numbers:

\n
\n

Does mathematical beauty or pleasure require such an accumulation of concepts and detail? Does music? Does architecture? Does literature? The answer is certainly “no” in all cases. On the other hand, the answer to the question whether mathematical beauty or pleasure admits such an accumulation and whether the beauty achieved is then of a different nature is, in my view, “yes”. This response is open to dispute, as it would be for the other domains.

\n
\n

This can be viewed as a reconciliation of Borel's statement and Serre's statement.

\n

I'll proceed to give some more specific examples of aesthetic reactions. In the spirit of the quotations from Langlands above, the remarks quoted below should be interpreted as expressions of personal preferences and experiences rather than as statements about the objective nature of reality. All the same, since human preferences are correlated, knowing about the personal preferences of others does provide useful information about what one might personally find attractive.

\n

Furthermore, as Roger Penrose wrote in his article The Role of Aesthetics in Pure and Applied Mathematical Research, the ultimate justification for pursuing mathematics for its own sake is aesthetic:

\n
\n

How, in fact, does one decide which things in mathematics are important and which are not? Ultimately, the criteria have to be aesthetic ones. There are other values in mathematics, such as depth, generality, and utility. But these are not so much ends in themselves. Their significance would seem to rest on the values of the other things to which they relate. The ultimate values seem simply to be aesthetic; that is, artistic values such as one has in music or painting or any other art form.

\n
\n
\n

In an autobiography, David Mumford wrote

\n
\n

At Harvard, a classmate said \"Come with me to hear Professor Zariski's first lecture, even though we won't understand a word\" and Oscar Zariski bewitched me. When he spoke the words 'algebraic variety,' there was a certain resonance in his voice that said distinctly that he was looking into a secret garden. I immediately wanted to be able to do this too. It led me to 25 years of struggling to make this world tangible and visible. Especially, I became obsessed with a kind of passion flower in this garden, the moduli spaces of Riemann. I was always trying to find new angles from which I could see them better.

\n
\n

In his essay in Mathematicians: An Outer View of the Inner World, Don Zagier wrote

\n
\n

I like explicit, hands-on formulas. To me they have a beauty of their own. They can be deep or not. As an example, imagine you have a series of numbers such that if you add 1 to any number you will get the product of its left and right neighbors. Then this series will repeat itself at every fifth step! For instance, if you start with 3, 4 then the sequence continues: 3, 4, 5/3, 2/3, 1, 3, 4, 5/3, etc. The difference between a mathematician and a nonmathematician is not just being able to discover something like this, but to care about it and to be curious about why it's true, what it means, and what other things in mathematics it might be connected with. In this particular case, the statement itself turns out to be connected with a myriad of deep topics in advanced mathematics: hyperbolic geometry, algebraic K-theory, the Schrodinger equation of quantum mechanics, and certain models of quantum field theory. I find this kind of connection between very elementary and very deep mathematics overwhelmingly beautiful. Some mathematicians find formulas and special cases less interesting and care only about understanding the deep underlying reasons. Of course that is the final goal, but the examples let you see things for a particular problem differently, and anyway it's good to have different approaches and different types of mathematicians.

\n
\n
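
Zagier's five-step periodicity is easy to check numerically. A minimal sketch, assuming the recurrence a[n+1] = (a[n] + 1) / a[n-1], which is inferred from his description (adding 1 to a term gives the product of its two neighbours):

```python
from fractions import Fraction

# Start with 3, 4 as in Zagier's example and apply a[n+1] = (a[n] + 1) / a[n-1].
a = [Fraction(3), Fraction(4)]
for _ in range(10):
    a.append((a[-1] + 1) / a[-2])

print([str(x) for x in a[:7]])   # ['3', '4', '5/3', '2/3', '1', '3', '4']
assert a[:5] == a[5:10]          # the sequence repeats with period five
```

Exact rational arithmetic (Fraction) is used here so the period-five claim can be checked by equality rather than by approximate floating-point comparison.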

In Récoltes et Semailles, Alexander Grothendieck wrote about his subjective experience of his transition to algebraic geometry following a successful early career in analysis:

\n
\n

\"\"The year 1955 marked a critical departure in my work in mathematics: that of my passage from \"analysis\" to \"geometry\". I well recall the power of my emotional response (very subjective naturally); it was as if I'd fled the harsh arid steppes to find myself suddenly transported to a kind of \"promised land\" of superabundant richness, multiplying out to infinity wherever I placed my hand in it, either to search or to gather... This impression, of overwhelming riches has continued to be confirmed and grow in substance and depth down to the present day. (*)

\n

(*) The phrase \"superabundant richness\" has this nuance: it refers to the situation in which the impressions and sensations raised in us through encounter with something whose splendor, grandeur or beauty are out of the ordinary, are so great as to totally submerge us, to the point that the urge to express whatever we are feeling is obliterated.

\n
\n

On rare occasions I've been fortunate to experience the \"superabundant richness\" that Grothendieck describes in connection with mathematics. Below I've quoted a reflective piece that I wrote about a year ago, describing such an experience from November 2008:

\n
\n

I was in Steve Ullom's course on class field theory, finally understanding the statements of the theorems. I had been intrigued by class field theory ever since I encountered David Cox's book titled \"Primes of the Form x^2 + ny^2\" in 2004 or so, but I was not able to form a mental picture of the subject from Cox's presentation.

I was initially drawn toward algebraic number theory primarily by its reputation rather than out of a love for the subject. By this I don't mean that I was motivated by careerism, but rather that I knew that the subject was a favorite of some of the greatest historical mathematicians and I had seen the Nova video on Fermat's Last Theorem while in high school in which the mathematicians interviewed (in particular Wiles, Mazur, Ribet and Shimura) seemed fascinated by the subject  - I figured that if I stuck with it for long enough I would be so struck as well. 

It took me a long time to come to a genuine appreciation of the subject. I don't think that this is uncommon - the early manifestations of the subject are somewhat obscure, and I don't think it an accident that it wasn't until the late 1800's that it became mainstream despite having a pedigree stretching back very far. And even today, few expositions highlight the essential points.

Anyway, in Steve Ullom's course I finally \"got it\" - both on a semantic level and why so many people might have been attracted to the subject. Sometime in November I revisited Silverman and Tate's Rational Points on Elliptic Curves and looked at the last chapter on complex multiplication. I knew that the theory of complex multiplication came highly recommended, Kronecker citing its development as his \"dearest dream of youth\" and Hilbert having said something like \"The theory of complex multiplication of elliptic curves is not only the most beautiful part of mathematics but of all science.\" I had seen glimmerings of what made it interesting from Cox's book, but again, I had not been able to understand the subject from his exposition.

With the background of the course that I was taking, reading Silverman and Tate I was able to understand the instance of complex multiplication that they worked out and was totally bewitched to learn how the elliptic curve y^2 = x^3 + x organizes all finite abelian extensions of Q(i) in a very coherent way.

For the next several weeks I was in a plane of existence different both from that which I'm accustomed to and from that of the people surrounding me. Naturally it's difficult to describe such an experience in words and all the more so retrospectively. It was a state of great inner focus and tranquility. I was filled with a sense of limitless possibility. I was simultaneously able to acknowledge the problems in the human world around me while also not being discouraged by them in the least.  There are certainly echoes of the Buddhist conception of enlightenment in my feeling. My state brought to mind a famous poem by the Chinese poet Li Bai:

Question and Answer in the Mountains

They ask me why I live in the green mountains.
I smile and don't reply; my heart's at ease.
Peach blossoms flow downstream, leaving no trace -
And there are other earths and skies than these

\n
\n


" } }, { "_id": "NjqnfBC8HRhTbx75E", "title": "Help: Which concepts are controversial on LW", "pageUrl": "https://www.lesswrong.com/posts/NjqnfBC8HRhTbx75E/help-which-concepts-are-controversial-on-lw", "postedAt": "2010-10-12T17:46:15.603Z", "baseScore": 5, "voteCount": 4, "commentCount": 7, "url": null, "contents": { "documentId": "NjqnfBC8HRhTbx75E", "html": "

I'm eager to improve the list References & Resources for LessWrong. I recently introduced a new label with the somewhat playful name Memetic Hazard. It is meant to mark resources that include ideas which might be controversial, bogus or which are works of fiction and therefore shouldn't be taken at face value.

\n

I should explain why some controversial concepts are listed in the first place: I frequently encountered those concepts in some rather fanciful discussions and posts. Those posts and discussions attract attention, as they are some of the more exciting and fictional content on LW. I had to look the concepts up myself once, and I want to give new readers a companion guide to learn about them and their status within the community.

\n

I might also turn the Key Concepts section into just Concepts with a Controversial subcategory.

\n

The trigger for this discussion post was a recent comment by rwallace:

\n
\n

I thought quantum suicide is not controversial since MWI is obviously correct?

\n

I agree MWI is solid, I'm not suggesting that be flagged. But it does not in any way imply quantum suicide; the latter is somewhere between fringe and crackpot, and a proven memetic hazard with at least one recorded death to its credit.

\n

And the AI section? Well, the list is supposed to reflect the opinions held in the LW community, especially by EY and the SIAI. I'm trying my best to do so, and by that standard, how controversial is AI going FOOM etc.?

\n

Well, AI go FOOM etc is again somewhere in the area between fringe and crackpot, as judged by people who actually know about the subject. If the list were specifically supposed to represent the opinions of the SIAI, then it would belong on the SIAI website, not on LW.

\n
\n

So my question: are AI going FOOM and quantum suicide considered controversial concepts in this community? And should any other content on the list potentially be marked as controversial?

\n

Thank you!

" } }, { "_id": "yqhMMnn7fprW8pQn6", "title": "Of the Qran and its stylistic resources: deconstructing the persuasiveness Draft", "pageUrl": "https://www.lesswrong.com/posts/yqhMMnn7fprW8pQn6/of-the-qran-and-its-stylistic-resources-deconstructing-the", "postedAt": "2010-10-12T17:04:25.670Z", "baseScore": 4, "voteCount": 10, "commentCount": 59, "url": null, "contents": { "documentId": "yqhMMnn7fprW8pQn6", "html": "

(It's my first time posting an article, so please go easy on me.)

\n

I wonder if anyone has ever fully analysed the Qran and all the resources it uses to tug at the feelings of the reader. It is a remarkably persuasive (if not at all convincing) book, even if I say so myself as an ex-Muslim. I've started recognizing some patterns since I started reading this site, but I'd like to know if there is a full-blown, complete, exhaustive deconstruction of that book that is not steeped in islamophobia, ethnocentrism, and the other common failures I have seen when Western theologians turn to Islam. Not a book about \"How the Qran is evil\" or \"How the Qran is Wrong\" or \"How IT'S A FAAAKE\" but \"How, precisely, it manipulates you\". Can anyone here point me towards such a work?

\n

And where is the markup help on this blog? I can't seem to find it, and that frustrates the hell out of me when I'm commenting on regular posts.

" } }, { "_id": "gg9rMscx687RXFGg3", "title": "In which I fantasize about drugs", "pageUrl": "https://www.lesswrong.com/posts/gg9rMscx687RXFGg3/in-which-i-fantasize-about-drugs", "postedAt": "2010-10-12T16:19:15.521Z", "baseScore": 13, "voteCount": 14, "commentCount": 72, "url": null, "contents": { "documentId": "gg9rMscx687RXFGg3", "html": "

We operate like this: the \"overseer process\" tells the brain, using blunt instruments like chemicals, that we need to find something to eat, somewhere to sleep or someone to mate with. Then the brain follows orders. Unfortunately the orders we receive from the \"overseer\" are often wrong, even though they were right in the ancestral environment. It seems the easiest way to improve humans isn't to augment their brains - it's to send them better orders, e.g. using drugs. Here's a list of fantasy brain-affecting drugs that I would find useful, even though they don't seem to do anything complicated except affecting \"overseer\" chemistry:

\n

1) A drug against unrequited love, aka \"infatuation\" or \"limerence\".

\n

2) A drug that makes you become restless and want to exercise.

\n

3) A drug that puts you in the state of random creativity that you normally experience just before falling asleep.

\n

4) A drug that puts you in the optimal PUA \"state\".

\n

5) A drug that boosts your feeling of curiosity. Must be great for doing math or science.

\n

Anything else?

" } }, { "_id": "djiZdBCPcw4kRJS6t", "title": "Before you start solving a problem", "pageUrl": "https://www.lesswrong.com/posts/djiZdBCPcw4kRJS6t/before-you-start-solving-a-problem", "postedAt": "2010-10-12T15:46:32.585Z", "baseScore": 3, "voteCount": 2, "commentCount": 7, "url": null, "contents": { "documentId": "djiZdBCPcw4kRJS6t", "html": "

(While this is a general discussion, I have \"doing well on interview questions\" as an instrumental goal; the discussion below is somewhat skewed due to that).

\n

I noticed one of the common failures to solving problems (especially under time constraints) is trying to solve the problem prematurely. There are multiple causes for this; awareness of some of them might reduce the chance of falling into failure mode, others (at least one) I do not have a solution to, and a procedural solution might not exist other than the magic of experience.

\n

Here is my list of the first kind (awareness-helps group):

\n
    \n
  1. Jumping into the problem before completely understanding it: this could be due to perceived time pressure (e.g. a test or an interview). This *could* be rational, depending on the \"test score\" function, but could be a serious failure mode if done due to stress.
  2. Using a cached solution instead of trying to solve the problem. The statement of the problem can trigger \"cached thoughts\" despite being (possibly intentionally, in an interview) subtly more difficult than a well-known problem. In one instance I actually misread the statement of the problem because it sounded like one I knew of before.
  3. Another problem with a cached solution, even if it is the correct one for the problem at hand, is that you might believe that you know it without actually doing the \"retrieve from disk\"; the consequences might be looking bad when asked follow-up questions in an interview, or an inability to build on the problem if it's part of a greater structure.
  4. Besides cognitive mechanics, there might be a desire to blurt out a cached solution because it makes you look knowledgeable. A status claim might be instrumentally useful (\"this looks like a min-spanning tree algorithm!\"), as long as you properly calibrate your level of confidence and don't fall for the trap.
\n

This brings me to the last failure mode, the one I do not have a solution for (which is why I am posting ;). If I avoid the traps above, I should have a pretty good understanding of the problem. I think this is a crucial point, as I, by definition, do not know what to do next. This uncertainty is scary and might push me into trying to solve the problem immediately, very similar to 1 above. While I might be able to avoid acting on this by being emotionally reflective (which has the instrumental side effect of appearing more confident), I still do not know what exactly should be done next. Giving some time for unconscious processing seems necessary even on smallish (interview-question-sized) problems, but how much time? And what should I be doing in this time? Meditation? Drawing the problem? Trying to solve sub-problems? Writing down lists of whatever comes to mind? I can use the time constraint, minus the expected time needed to communicate the solution (in proper format, e.g. C++ code), as an upper bound; but there is a moment when I have to sigh (optional) and take a shot at a solution. I do not have anything better to go by than gut feel here.

\n

(Even after the plunge, there is a chance of getting stuck, which is where Meta-thinking skills come in)

" } }, { "_id": "3pnhkrfpj4rZkfqN2", "title": "Swords and Armor: A Game Theory Thought Experiment", "pageUrl": "https://www.lesswrong.com/posts/3pnhkrfpj4rZkfqN2/swords-and-armor-a-game-theory-thought-experiment", "postedAt": "2010-10-12T08:51:28.673Z", "baseScore": 22, "voteCount": 42, "commentCount": 78, "url": null, "contents": { "documentId": "3pnhkrfpj4rZkfqN2", "html": "

Note: this image does not belong to me; I found it on 4chan. It presents an interesting exercise, though, so I'm posting it here for the enjoyment of the Less Wrong community.

\n

[image: sword and armor options]

\n

For the sake of this thought experiment, assume that all characters have the same amount of HP, which is sufficiently large that random effects can be treated as being equal to their expected values. There are no NPC monsters, critical hits, or other mechanics; gameplay consists of two PCs getting into a duel, and fighting until one or the other loses. The winner is fully healed afterwards.

\n

Which sword and armor combination do you choose, and why?

" } }, { "_id": "CmkvFtw5vvyD5DyDo", "title": "Vanity and Ambition in Mathematics", "pageUrl": "https://www.lesswrong.com/posts/CmkvFtw5vvyD5DyDo/vanity-and-ambition-in-mathematics", "postedAt": "2010-10-12T05:49:15.498Z", "baseScore": 12, "voteCount": 12, "commentCount": 18, "url": null, "contents": { "documentId": "CmkvFtw5vvyD5DyDo", "html": "

In my time in the mathematical community I've formed the subjective impression that it's noticeably less common for mathematicians of the highest caliber to engage in status games than it is for members of the general population. This impression is consistent with the modesty that comes across in the writings of such mathematicians. I record some relevant quotations below and then discuss interpretations of the situation.

\n

Acknowledgment - I learned of the Hironaka interview quoted below from my colleague Laurens Gunnarsen.

\n

Edited 10/12/10 to remove the first portion of the Hironaka quote which didn't capture the phenomenon that I'm trying to get at here.

\n

In a 2005 interview for the Notices of the AMS, Fields Medalist Heisuke Hironaka says

\n
\n

By the way, Mori is a genius. I am not. So that is a big difference! Mori was a student when I was a visiting professor at Kyoto University. I gave lectures in Kyoto, and Mori wrote notes, which were published in a book. He was really amazing. My lectures were terrible, but when I looked at his notes, it was all there! Mori is a discoverer. He finds new things that people never imagined.

\n
\n

(I'll note in passing that the sense of the \"genius\" that Hironaka is using here is probably different than the sense of \"genius\" that Gowers uses in Mathematics: A Very Short Introduction.)

\n

In his review of Haruzo Hida’s p-adic automorphic forms on Shimura varieties, Robert Langlands, the originator of the Langlands program, wrote

\n
\n

So ill-equipped as I am in many ways – although not in all – my first, indeed my major task was to take bearings. The second is, bearings taken, doubtful or not, to communicate them at least to an experienced reader and, in so far as this is possible, even to an inexperienced one. For lack of time and competence I accomplished neither task satisfactorily. So, although I have made a real effort, this review is not the brief, limpid yet comprehensive, account of the subject, revealing its manifold possibilities, that I would have liked to write and that it deserves. The review is imbalanced and there is too much that I had to leave obscure, too many possibly premature intimations. A reviewer with greater competence, who saw the domain whole and, in addition, had a command of the detail would have done much better.

\n
\n

For context, it's worthwhile to note that Langlands' own work is used in an essential way in Hida's book.

\n

The 2009 Abel Prize Interview with Mikhail Gromov contains the following questions and answers:

\n
\n

Raussen and Skau: Can you remember when and how you became aware of your exceptional mathematical talent?

\n

Gromov: I do not think I am exceptional. Accidentally, things happened, and I have qualities that you can appreciate. I guess I never thought in those terms.

\n

[...]

\n

Raussen and Skau: Is there one particular theorem or result you are the most proud of?

\n

Gromov: Yes. It is my introduction of pseudoholomorphic curves, unquestionably. Everything else was just understanding what was already known and to make it look like a new kind of discovery.

\n
\n

In his MathOverflow self-summary, William Thurston wrote

\n
\n

Mathematics is a process of staring hard enough with enough perseverance at the fog of muddle and confusion to eventually break through to improved clarity. I'm happy when I can admit, at least to myself, that my thinking is muddled, and I try to overcome the embarrassment that I might reveal ignorance or confusion. Over the years, this has helped me develop clarity in some things, but I remain muddled in many others. I enjoy questions that seem honest, even when they admit or reveal confusion, in preference to questions that appear designed to project sophistication.

\n
\n
\n

I interpret the above quotations (and many others by similar such people) to point to a markedly lower than usual interest in status. As JoshuaZ points out, one could instead read the quotations as counter-signaling, but such an interpretation feels like a stretch to me. I doubt that in practice such remarks serve as an effective counter-signal. More to the point, there's a compelling alternate explanation for why one would see lower than usual levels of status signaling among mathematicians of the highest caliber. Gromov hints at this in the aforementioned interview:

\n
\n

Raussen and Skau: We are surprised that you are so modest by playing down your own achievements. Maybe your ideas are naïve, as you yourself say; but to get results from these ideas, that requires some ingenuity, doesn’t it?

\n

Gromov: It is not that I am terribly modest. I don’t think I am a complete idiot. Typically when you do mathematics you don’t think about yourself. A friend of mine was complaining that anytime he had a good idea he became so excited about how smart he was that he could not work afterwards. So naturally, I try not to think about it.

\n
\n

In Récoltes et Semailles, Alexander Grothendieck offered a more detailed explanation:

\n
\n

The truth of the matter is that it is universally the case that, in the real motives of the scientist, of which he himself is often unaware in his work, vanity and ambition will play as large a role as they do in all other professions. The forms that these assume can be in turn subtle or grotesque, depending on the individual. Nor do I exempt myself. Anyone who reads this testimonial will have to agree with me.

\n

It is also the case that the most totally consuming ambition is powerless to make or to demonstrate the simplest mathematical discovery - even as it is powerless (for example) to \"score\" (in the vulgar sense). Whether one is male or female, that which allows one to 'score' is not ambition, the desire to shine, to exhibit one's prowess, sexual in this case. Quite the contrary!

\n

What brings success in this case is the acute perception of the presence of something strong, very real and at the same time very delicate. Perhaps one can call it \"beauty\", in its thousand-fold aspects. That someone is ambitious doesn't mean that one cannot also feel the presence of beauty in them; but it is not the attribute of ambition which evokes this feeling....

\n

The first man to discover and master fire was just like you and me. He was neither a hero nor a demi-god. Once again like you and me he had experienced the sting of anguish, and applied the poultice of vanity to anaesthetize that sting. But, at the moment at which he first \"knew\" fire he had neither fear nor vanity. That is the truth at the heart of all heroic myth. The myth itself becomes insipid, nothing but a drug, when it is used to conceal the true nature of things.

\n

[...]

\n

In our acquisition of knowledge of the Universe (whether mathematical or otherwise) that which renovates the quest is nothing more nor less than complete innocence. It is in this state of complete innocence that we receive everything from the moment of our birth. Although so often the object of our contempt and of our private fears, it is always in us. It alone can unite humility with boldness so as to allow us to penetrate to the heart of things, or allow things to enter us and take possession of us.

\n

This unique power is in no way a privilege given to \"exceptional talents\" - persons of incredible brain power (for example), who are better able to manipulate, with dexterity and ease, an enormous mass of data, ideas and specialized skills. Such gifts are undeniably valuable, and certainly worthy of envy from those who (like myself) were not so endowed at birth \"far beyond the ordinary\".

\n

Yet it is not these gifts, nor the most determined ambition combined with irresistible will-power, that enables one to surmount the \"invisible yet formidable boundaries\" that encircle our universe. Only innocence can surmount them, which mere knowledge doesn't even take into account, in those moments when we find ourselves able to listen to things, totally and intensely absorbed in child play.

\n
\n

The amount of focus on the subject itself which is required to do mathematical research of the highest caliber is very high. It's plausible that the focuses entailed by vanity and ambition are detrimental to subject matter focus. If this is true (as I strongly suspect to be the case based on my own experience, my observations of others, the remarks of colleagues, and the remarks of eminent figures like Gromov and Grothendieck), aspiring mathematicians would do well to work to curb their ambition and vanity and increase their attraction to mathematics for its own sake.

" } }, { "_id": "ydnDN8S6H4fHPpmZ2", "title": "Morality and relativistic vertigo", "pageUrl": "https://www.lesswrong.com/posts/ydnDN8S6H4fHPpmZ2/morality-and-relativistic-vertigo", "postedAt": "2010-10-12T02:00:43.474Z", "baseScore": 60, "voteCount": 55, "commentCount": 80, "url": null, "contents": { "documentId": "ydnDN8S6H4fHPpmZ2", "html": "\n

tl;dr: Relativism bottoms-out in realism by objectifying relations between subjective notions. This should be communicated using concrete examples that show its practical importance. It implies in particular that morality should think about science, and science should think about morality.

\n

Sam Harris attacks moral uber-relativism when he asserts that \"Science can answer moral questions\". Countering the counterargument that morality is too imprecise to be treated by science, he makes an excellent comparison: \"healthy\" is not a precisely defined concept, but no one is crazy enough to utter that medicine cannot answer questions of health.

\n

What needs adding to his presentation (which is worth seeing, though I don't entirely agree with it) is what I consider the strongest concise argument in favor of science's moral relevance: that morality is relative simply means that the task of science is to examine absolute relations between morals. For example, suppose you uphold the following two moral claims:

\n
    \n
  1. \"Teachers should be allowed to physically punish their students.\"
  2. \"Children should be raised not to commit violence against others.\"
\n

First of all, note that questions of causality are significantly more accessible to science than people before 2000 thought possible. Now suppose a cleverly designed, non-invasive causal analysis found that physically punishing children, frequently or infrequently, causes them to be more likely to commit criminal violence as adults. Would you find this discovery irrelevant to your adherence to these morals? Absolutely not. You would reflect and realize that you needed to prioritize them in some way. Most would prioritize the second one, but in any case, science will have made a valid impact.

\n

So although either of the two morals is purely subjective on its own, how these morals interrelate is a question of objective fact. Though perhaps obvious, this idea has some seriously persuasive consequences and is not to be taken lightly. Why?

\n

First of all, you might change your morals in response to them not relating to each other in the way you expected. Ideas parse differently when they relate differently. \"Teachers should be allowed to physically punish their students\" might never feel the same to you after you find out it causes adult violence. Even if it originally felt like a terminal (fundamental) value, your prioritization of (2) might make (1) slowly fade out of your mind over time. In hindsight, you might just see it as an old, misinformed instrumental value that was never in fact terminal.

\n

Second, as we increase the number of morals under consideration, the number of relations for science to consider grows rapidly, as (n^2 - n)/2: we have many more moral relations than morals themselves. Suddenly the old disjointed list of untouchable maxims called \"morals\" fades into the background, and we see a throbbing circulatory system of moral relations, objective questions and answers without which no person can competently reflect on her own morality. A highly prevalent moral like \"human suffering is undesirable\" looks like a major organ: important on its own to a lot of people, and lots of connections in and out for science to examine.
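
To make the growth concrete, here is a quick count of unordered pairs; the numbers follow directly from the formula above:

```python
def num_relations(n):
    # Unordered pairs among n morals: (n^2 - n) / 2
    return (n * n - n) // 2

for n in (5, 10, 20, 50):
    print(f"{n} morals -> {num_relations(n)} pairwise relations")
# 5 -> 10, 10 -> 45, 20 -> 190, 50 -> 1225
```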

\n

Treating relativistic vertigo

\n

To my best recollection, I have never heard the phrase \"it's all relative\" used to an effect that didn't involve stopping people from thinking. When the topic of conversation — morality, belief, success, rationality, or what have you — is suddenly revealed or claimed to depend on a context, people find it disorienting, often to the point of feeling the entire discourse has been and will continue to be \"meaningless\" or \"arbitrary\". Once this happens, it can be very difficult to persuade them to keep thinking, let alone to think productively.

\n

To rebuke this sort of conceptual nihilism, it's natural to respond with analogies to other relative concepts that are clearly useful to think about:

\n

\"Position, momentum, and energy are only relatively defined as numbers, but we don't abandon scientific study of those, do we?\"

\n

While an important observation, this inevitably evokes the \"But that's different\" analogy-immune response. The real cure is in understanding explicitly what to do with relative notions:

\n
If belief is subjective, let us examine objective relations between beliefs.
If morality is relative, let us examine absolute relations between morals.
If beauty is in the eye of the beholder, let us examine the eyes of the beholders.
\n

To use one of these lines of argument effectively — and it can be very effective — one should follow up immediately with a specific example in the case you're talking about. Don't let the conversation drift in abstraction. If you're talking about morality, there is no shortage of objective moral relations that science can handle, so you can pick one at random to show how easy and common it is:

\n\n

I'm not advocating any of these particular moral claims here, nor any particular resolution between them; I'm simply saying that the answer to the given question — and to many other relevant ones — puts you in a much better position to reflect on these issues. Your opinion after you know the answer is more valuable than before.

\n

\"But of course science can answer some moral questions... the point is that it can't answer all of them. It can't tell us ultimately what is good or evil.\"

\n

No. That is not the point. The point is whether you want teachers to beat their students. Do you? Well, science can help you decide. And more importantly, once you do, it should help you in leading others to the same conclusion.

\n

A lesson from history: What happens when you examine objective relations between subjective beliefs? You get probability theory… Bayesian updating… we know this story; it started around 200 years ago, and it ends well.

\n

Now it's morality's turn.

\n
Between the subjective and the subjective lies the objective.
Relative does not mean structureless.
It does not mean arbitrary.
It does not mean meaningless.
Let us not discard the compass along with the map.
" } }, { "_id": "WhwBjppwrGEv4yiq9", "title": "Copyright should be abolished.", "pageUrl": "https://www.lesswrong.com/posts/WhwBjppwrGEv4yiq9/copyright-should-be-abolished", "postedAt": "2010-10-11T23:09:59.714Z", "baseScore": -17, "voteCount": 15, "commentCount": 12, "url": null, "contents": { "documentId": "WhwBjppwrGEv4yiq9", "html": null } }, { "_id": "fTTzXMvwmPE7fF5SP", "title": "FAI vs network security", "pageUrl": "https://www.lesswrong.com/posts/fTTzXMvwmPE7fF5SP/fai-vs-network-security", "postedAt": "2010-10-11T23:06:47.143Z", "baseScore": -14, "voteCount": 11, "commentCount": 4, "url": null, "contents": { "documentId": "fTTzXMvwmPE7fF5SP", "html": "

All plausible scenarios of AGI disaster involve the AGI gaining access to resources \"outside the box.\"  Therefore there are two ways of preventing AGI disaster: one is preventing AGI, which is the \"FAI route\", and the other is preventing the possibility of a rogue AGI gaining control of too many external resources--the \"network security route.\"  It seems to me that this network security route--an international initiative to secure networks and computing resources against cyber attacks--is the more realistic solution for preventing AGI disaster.  Network security protects against intentional human-devised attacks as well as against the possibility of rogue AGI--such measures are therefore easier to motivate and more likely to be implemented successfully.  Also, the development of FAI theory does not prevent the creation of unfriendly AIs.  This is not to say that FAI should not be pursued at all, but it can hardly be claimed that development of FAI is of top priority (as it has been stated a few times by users of this site).

" } }, { "_id": "uMXY23xYc85RbQvFm", "title": "Draft: Reasons to Use Informal Probabilities", "pageUrl": "https://www.lesswrong.com/posts/uMXY23xYc85RbQvFm/draft-reasons-to-use-informal-probabilities", "postedAt": "2010-10-11T22:50:35.892Z", "baseScore": 15, "voteCount": 10, "commentCount": 13, "url": null, "contents": { "documentId": "uMXY23xYc85RbQvFm", "html": "
\n

If I roll 15 fair 6-sided dice, take the ones that rolled 4 or higher, roll them again, and sum up all the die rolls... what is the probability that I drop at least one die on the floor?

\n
\n

There are two different ways of using probability. When we think of probability, we normally think of neat statistics problems where you start with numbers, do some math, and end with a number. After all, if we don't have any numbers to start with, we can't use a proven formula from a textbook; and if we don't use a proven formula from a textbook, our answer can't be right, can it? But there's another way of using probability that's more general: a probability is just an estimate, produced by the best means available, even if that's a guess produced by mere intuition. To distinguish these two types, let's call the former kind *formal probabilities*, and the latter kind *informal probabilities*.

\n

An informal probability summarizes your state of knowledge, no matter how much or how little knowledge that is. You can make an informal probability in a second, based on your present level of confidence, or spend time making it more precise by looking for details, anchors, reference classes. It is perfectly valid to assign probabilities to things you don't have numbers for, to things you're completely ignorant about, to things that are too complex for you to model, and to things that are poorly defined or underspecified. Giving a probability estimate does not require *any* minimum amount of thought, evidence, or calculation. Giving an informal probability  is not a claim that any relevant mathematical calculation has been done, nor that any calculation is even possible.

\n

I present here the case for assigning informal probabilities, as often as is practical. If any statement crosses your mind that seems especially important, you should put a number on it. Routinely putting probabilities on things has significant benefits, even if they aren't very accurate, even if you don't use them in calculations, and even if you don't share them. The process of assigning probabilities to things tends to prompt useful observations and clarify thinking; it eases the transition into formal calculation when you discover you need it, and provides a sanity check on formal probabilities; having used probabilities makes it easier to diagnose mistakes later; and using probabilities lets you quantify, not just confidence, but also the strength and usefulness of pieces of evidence, and the expected value of avenues of investigation. Finally, practice at generating probabilities makes you better at it.

\n

The first thing to notice is that informal probabilities are much more broadly applicable than formal probabilities are. A formal probability requires more information and more work; in particular, you need to start with relevant numbers; but for most routine questions, you just don't have that data and it wouldn't be worth gathering anyways.  For example, it's worth estimating the informal probability that you'll like a dish before ordering it at a restaurant, but producing a formal probability would require a taste test, which is far outside the realm of practicality.

\n

Assigning informal probabilities clarifies thinking, by forcing you to ask [the fundamental question](http://lesswrong.com/lw/24c/the_fundamental_question/): What do I believe, and why do I believe it? Sometimes, the reason turns out not to be very good, and you ought to assign a low probability. That's important to notice. Sometimes the reason is solid, but tracking it down leads you to something else that's important. That's good, too. Coming up with probabilities also pushes you to look for reference classes and examples. You can still ask these things without using probability, but trying to produce a probability gives guidance and motivation that greatly increases the chance that you'll actually remember to ask these questions when you need to. Informal probabilities also ease the transition into formal calculation when you need it; you can fill in an expected-utility calculation or other formula with estimates, then look for better numbers if the decision is close enough.

\n

Probabilities are easier to remember than informal notions of confidence. This is important if you catch a mistake and need to go back and figure out where you went wrong; you want to be able to point to a specific thought you had and say, \"this was wrong in light of the evidence I had at the time\", or \"I should've updated this when I found out X\". Unfortunately, memories of degrees of confidence tend to come back badly distorted, unless they're crystallized somehow. Worse, they tend to come back consistently biased towards whatever would be judged correct now, which makes them useless or worse.  Numbers crystallize those memories, making them usable and enabling you to retrace steps.

\n

Quantifying confidence also enables us to quantify the strength of evidence - that is, how much a piece of information *changes* our confidence. For example, a piece of evidence that changes our probability estimate from 0.2 to 0.8 corresponds to a likelihood ratio of 16:1, or 4 bits of evidence. Assigning before-and-after-evidence probabilities to a statement forces you to consider just how good a piece of evidence it is; and this makes certain mistakes less likely. It's less tempting to round weak arguments off to zero, or to respond emotionally to an argument without judging its actual significance, if you're in the habit of putting numbers on that significance. But keep in mind that there is not one true value for the strength of a piece of evidence; it depends on what you already know. For example, an argument that's a duplicate of one you've already updated on has no value at all.
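
A minimal sketch of that arithmetic, assuming only Bayes' theorem in odds form (the helper function is my own, not something from the post):

```python
import math

def evidence_strength(p_before, p_after):
    """Likelihood ratio and bits of evidence implied by a before/after probability.

    Posterior odds = prior odds * likelihood ratio, so the likelihood ratio
    is the ratio of the posterior odds to the prior odds.
    """
    odds_before = p_before / (1 - p_before)
    odds_after = p_after / (1 - p_after)
    likelihood_ratio = odds_after / odds_before
    return likelihood_ratio, math.log2(likelihood_ratio)

print(evidence_strength(0.2, 0.8))   # roughly (16.0, 4.0): a 16:1 ratio, i.e. 4 bits
```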

\n

Finally, assigning probabilities to things is a skill like any other, which means it improves with practice. Estimating probabilities and writing them down  enables us to calibrate our intuitions. Even if you don't write anything down, just noticing every time you put a .99 on something that turns out to be false is a big improvement over no calibration at all.
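
A sketch of what \"writing them down\" might look like in practice; the log format and the bucketing are my own choices, not the author's:

```python
from collections import defaultdict

# Hypothetical log of (stated probability, whether the claim turned out true).
records = [(0.9, True), (0.99, True), (0.7, False), (0.99, False), (0.6, True)]

buckets = defaultdict(list)
for p, outcome in records:
    buckets[round(p, 1)].append(outcome)   # group nearby confidence levels

for level in sorted(buckets):
    outcomes = buckets[level]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated ~{level:.1f} -> observed {observed:.2f} over {len(outcomes)} claims")
```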

\n

I know of only one caveat: You shouldn't share every probability you produce, unless you're very clear about where it came from. People who're used to only seeing formal probabilities may assume that you have more information than you really do, or that you're trying to misrepresent the information you have.

\n

To help overcome any internal resistance to giving informal probabilities, I have here a list of probability Fermi problems. A Fermi problem asks for only a rough estimate - an order of magnitude - and it does not include enough information for a precise answer. So too with these problems, which contain just enough information for an estimate. Answer quickly (ten seconds per question at most). Don't do any calculations except very simple ones in your head. Don't worry about all the missing details that could affect the answer. The goal is to be quick, since speed is the main obstacle to using probability routinely.

\n

1. A car is white.
2. A car is a white, ten-year-old Ford with a dent on the rear right door.
3. A ten-mile car trip will involve a collision.
4. A building is residential.
5. A person is below the age of 20.
6. A word in a book contains a typo.
7. Your arm will spontaneously transform into a blue tentacle today.
8. A purse contains exactly 71 coins.
9. 76297 is a prime number.
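
For item 9, a ten-second gut answer is the point of the exercise, but it can later be compared against a back-of-the-envelope calculation like the following (the refinement step is my own addition, using only the prime number theorem):

```python
import math

n = 76297
# Prime number theorem: the density of primes near n is roughly 1 / ln(n).
base_rate = 1 / math.log(n)                    # about 0.09
# n is odd and doesn't end in 0 or 5, which already rules out the factors 2 and 5,
# so divide out (1 - 1/2) * (1 - 1/5), i.e. multiply the estimate by 2.5.
refined = base_rate / ((1 - 1/2) * (1 - 1/5))  # about 0.22
print(f"{base_rate:.2f}, {refined:.2f}")
```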

\n

I also suggest making some predictions on PredictionBook and taking a calibration quiz.

" } }, { "_id": "mWXxsBLngfgH6iRdT", "title": "Poker Playing", "pageUrl": "https://www.lesswrong.com/posts/mWXxsBLngfgH6iRdT/poker-playing", "postedAt": "2010-10-11T21:48:00.240Z", "baseScore": 24, "voteCount": 17, "commentCount": 6, "url": null, "contents": { "documentId": "mWXxsBLngfgH6iRdT", "html": "

http://www.washingtonpost.com/wp-dyn/content/article/2010/10/01/AR2010100105833_pf.html (one page link; original: http://www.washingtonpost.com/wp-dyn/content/article/2010/10/01/AR2010100105833.html) is a recent Washington Post article on young poker player Steven Silverman. It's interesting.

\n

Even more interesting are his comments and other poker players' comments on Hacker News: http://news.ycombinator.com/item?id=1777385

\n

Some selected comments:

\n
\n

 

\n
\n
\n

It doesn't matter one jot how well you play at your best, what matters is how well you play when you're stressed and upset and ill and exhausted and it's 2am and the fish whose money you need to take is calling your mother a whore. That's the real job of playing professional poker - you're on a tightrope where an hour of perfect poker earns you $80 but an hour on tilt can lose you $800. The skills that allow you to turn pro are learned as an amateur, but there's a whole other set of skills required to stick it out without losing your mind. I had the former, but not the latter. I walked away from poker because I am absolutely certain it would have killed me.

\n
\n

-- jdietrich, http://news.ycombinator.com/item?id=1778364

\n

\n

\n

Losing streaks are like nothing you can understand until you've been a professional poker player. Imagine if you went to work, performed flawlessly, far better than all of your coworkers, and instead of getting paid your bank account got debited and your boss told you \"you really sucked today\". Then that happens every day for a month.

\n

Humans (and especially poker players, who study probabilistic decision making as a hobby) are conditioned through life to believe outcomes are a direct result of actions. Do a good job at something and work hard at it, you succeed. Do a bad job at something and slack off, you fail.

\n

While this is true in poker in the long run, the long run can be a long time. I broke even for a year at one point, and that's at cash games. Meanwhile donkeys without the slightest clue what they were doing were lucking ass backward into multimillion-dollar tournament prizes.

\n

You can't even fathom what this does to you emotionally until you've lived through it. It engenders self-doubt, which gets you off your game, which probably makes you play worse, which you know you're doing but aren't sure exactly how, or what to do about it, which in turn prolongs the losing streak. There is no end to it.

\n
\n

-- matmaroon, http://news.ycombinator.com/item?id=1778597

\n
\n

\n

I had a girlfriend for most of that time who went pro. She was pretty good, too, but she had that lack of respect for money in spades (heh). She'd spend a whole day grinding out $3000 in the $40-80 game and then lose $20,000 in an hour at the Pai Gow table.

\n

One day she decided to skip the Pai Gow and went to the mall instead. Bought all kinds of designer clothes and perfumes and electronic gadgets. She practically filled up the car. When she came home there was this shocked look on her face. I asked her what the problem was and she said \"Look at all the stuff you can buy for only $5000! I'm so used to using money as ammunition I forgot you can trade it for stuff.\"

\n

\n
\n

-- tsotha, http://news.ycombinator.com/item?id=1778909

\n
\n

\n

It's strange how poker works but I imagine Buddhism and poker to be the most complementary religion-to-profession complement on the planet.

\n

\n
\n

-- InfinityX0, http://news.ycombinator.com/item?id=1777624

\n
\n

After that I got a lot better, jumped limits, and ended up meeting DeathDonkey in a 100/200 game at Commerce and again realized I was outclassed.

\n

I'd say I owe them a debt of gratitude, but I'm pretty sure I already paid it at the tables.

\n
\n

-- bapadna, http://news.ycombinator.com/item?id=1778094 (included because it is funny)

\n

Wikipedia on 'tilt': http://en.wikipedia.org/wiki/Tilt_%28poker%29

\n

On Isildur1: http://en.wikipedia.org/wiki/Isildur1#Career

\n

" } }, { "_id": "TJL96Q9xJmtYc5CYv", "title": "Great Mathematicians on the Value of Intuition in Mathematics", "pageUrl": "https://www.lesswrong.com/posts/TJL96Q9xJmtYc5CYv/great-mathematicians-on-the-value-of-intuition-in", "postedAt": "2010-10-11T14:58:40.358Z", "baseScore": 2, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "TJL96Q9xJmtYc5CYv", "html": "

There is a widespread misconception among educated laypeople that mathematics is primarily about logic and proof. No serious mathematician would deny that logical rigor has played an essential role in the development of mathematics. But the essential role that intuition plays in mathematical progress is little known. The asymmetry between the high level of awareness of the importance of logical rigor and the low level of awareness of the importance of intuition has led educated laypeople to have a heavily distorted view of mathematics. Below I've collected some quotes from great mathematicians about the value of intuition in mathematics. I welcome any references to counterbalancing quotes from people of similar caliber.

\n

In Why Johnny Can't Add, mathematician and historian Morris Kline quotes Felix Klein saying

\n
\n

You can often hear from non‐mathematicians, especially from philosophers, that mathematics consists exclusively in drawing conclusions from clearly stated premises; and that, in this process, it makes no difference what these premises signify, whether they are true or false, provided only that they do not contradict one another. But a person who has done productive mathematical work will talk quite differently. In fact those persons are thinking only of the crystallized form into which finished mathematical theories are finally cast. The investigator himself, however, in mathematics, as in every other science, does not work in this rigorous deductive fashion. On the contrary, he makes essential use of his fantasy and proceeds inductively, aided by heuristic expedients. One can give numerous examples of mathematicians who have discovered theorems of the greatest importance which they were unable to prove. Should one, then, refuse to recognize this as a great accomplishment and, in deference to the above definition, insist that this is not mathematics, and that only the successors who supply polished proofs are doing real mathematics? After all, it is an arbitrary thing how the word is to be used, but no judgment of value can deny that the inductive work of the person who first announces the theorem is at least as valuable as the deductive work of the one who first proves it. For both are equally necessary, and the discovery is the presupposition of the later conclusion.

\n
\n

According to an obituary by Robert Langlands, mathematician Harish-Chandra said

\n
\n

In mathematics we agree that clear thinking is very important, but fuzzy thinking is just as important as clear thinking

\n
\n

As reported by Hermann Weyl, while lecturing on Bernhard Riemann Felix Klein said

\n
\n

Undoubtedly, the capstone of every mathematical theory is a convincing proof of all its assertions. Undoubtedly, mathematics inculpates itself when it foregoes convincing proofs. But the mystery of brilliant productivity will always be the posing of new questions, the anticipation of new theorems that make accessible valuable results and connections. Without the creation of new viewpoints, without the statement of new aims, mathematics would soon exhaust itself in the rigor of its logical proofs and begin to stagnate as its substance vanishes. Thus, in a sense, mathematics has been most advanced by those who have distinguished themselves more by intuition than by rigorous proofs.

\n
\n

In his essay on Mathematical Creation, Henri Poincaré wrote

\n

Numerous mathematicians

" } }, { "_id": "EdRHRbTuzqWCFsKDy", "title": "Hazing as Counterfactual Mugging?", "pageUrl": "https://www.lesswrong.com/posts/EdRHRbTuzqWCFsKDy/hazing-as-counterfactual-mugging", "postedAt": "2010-10-11T14:17:09.201Z", "baseScore": 5, "voteCount": 6, "commentCount": 8, "url": null, "contents": { "documentId": "EdRHRbTuzqWCFsKDy", "html": "

In the interest of making decision theory problems more relevant, I thought I'd propose a real-life version of counterfactual mugging.  This is discussed in Drescher's Good and Real, and many places before.  I will call it the Hazing Problem by comparison to this practice (possibly NSFW – this is hazing, folks, not Disneyland).

\n

 

\n

The problem involves a timewise sequence of agents who each decide whether to \"haze\" (abuse) the next agent.  (They cannot impose any penalty on the previous agent.)  For all agents n, here is their preference ranking:

\n

 

\n

1) not be hazed by n-1

\n

2) be hazed by n-1, and haze n+1

\n

3) be hazed by n-1, do NOT haze n+1

\n

 

\n

or, less formally:

\n

 

\n

1) not be hazed

\n

2) haze and be hazed

\n

3) be hazed, but stop the practice

\n

 

\n

The problem is: you have been hazed by n-1.  Should you haze n+1?

\n

 

\n

Like in counterfactual mugging, the average agent has lower utility by conditioning on having been hazed, no matter how big the utility difference between 2) and 3) is.  Also, it involves you having to make a choice from within a \"losing\" part of the \"branching\", which has implications for the other branches.
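
A small simulation makes this concrete. This is only a sketch: the utility numbers are arbitrary, chosen by me to respect the ranking 1 > 2 > 3 above, and the policies are caricatures of "haze whenever hazed" versus "never haze", given a single initial hazing.

```python
# Arbitrary utilities respecting the ranking: not hazed > hazed-and-haze > hazed-and-stop.
U_NOT_HAZED, U_HAZED_AND_HAZE, U_HAZED_AND_STOP = 0, -5, -10

def average_utility(policy, n_agents=1000, first_agent_hazed=True):
    """policy(was_hazed) -> whether this agent hazes the next one."""
    total = 0
    hazed = first_agent_hazed
    for _ in range(n_agents):
        hazes_next = policy(hazed)
        if not hazed:
            total += U_NOT_HAZED
        elif hazes_next:
            total += U_HAZED_AND_HAZE
        else:
            total += U_HAZED_AND_STOP
        hazed = hazes_next   # agent n+1 is hazed iff agent n hazes
    return total / n_agents

print(average_utility(lambda hazed: hazed))   # haze iff hazed: -5.0 per agent
print(average_utility(lambda hazed: False))   # never haze: -0.01 per agent
```

Under these (assumed) numbers, the never-haze policy is far better for the average agent, even though it is worse for an agent conditional on having already been hazed.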

\n

 

\n

You might object the choice of whether to haze is not random, as Omega’s coinflip is in CM; however, there are deterministic phrasings of CM, and your own epistemic limits blur the distinction.

\n

 

\n

UDT sees optimality in returning not-haze unconditionally.  CDT reasons that its having been hazed is fixed, and so hazes.  I *think* EDT would choose to haze because it would prefer to learn that, having been hazed, it hazed n+1, but I'm not sure about that.

\n

 

\n

I also think that TDT chooses not-haze, although this is questionable since I'm claiming this is isomorphic to CM.  I would think TDT reasons that, \"If n's regarded it as optimal to not haze despite having been hazed, then I would not be in a position of having been hazed, so I zero out the disutility of choosing not-haze.\"

\n

 

\n

Thoughts on the similarity and usefulness of the comparison?

" } }, { "_id": "aPCKiEd2G8H3kkdnN", "title": "The Dark Arts - Preamble", "pageUrl": "https://www.lesswrong.com/posts/aPCKiEd2G8H3kkdnN/the-dark-arts-preamble", "postedAt": "2010-10-11T14:01:58.960Z", "baseScore": 65, "voteCount": 89, "commentCount": 140, "url": null, "contents": { "documentId": "aPCKiEd2G8H3kkdnN", "html": "

\n

I’d like to tell you all a story.

\n

Once upon a time I was working for a charity – a major charity – going door-to-door to raise money while pretending it wasn’t sales.

\n

This story happened on my last day working there.  I didn’t know that at the time; I wouldn’t find out until the following morning when my boss called me up to fire me, but I knew it was coming.  For weeks I’d been fed up with the job, milking it for the last few dollars I could pull out, hating every minute of it but needing the money.  The Sudden Career Readjustment would come as a relief.

\n

So on that day, my last day, I was moving slowly.  I knocked on one particular door and there was no response.  I had little desire to walk to the next one, however, and there was an interesting spider who’d built its web below the doorbell.  I tapped its belly with the tip of my pen, and it reacted with aggression – trying to envenom and ensnare the tip of my ballpoint.  I must have been playing with it for a good minute or so when the door suddenly opened.

\n

A distraught woman stood before me.  After a brief period of Relating I launched into my pitch. “So you're probably wondering why there's a bald weirdo at your door?  Actually I'm just coming around with Major Charity1 on an emergency campaign.  You've heard of us, right?  Brilliant!  So obviously you've thought of getting involved, right?  That's awesome!  You see, the reason I'm coming around is for these guys – some of our emergency cases...”2

\n

I handed her the pictures of the Developing World Children (yeah, it was one of those charities).  She took them, a wistful look on her face.

\n

“Oh God, don’t show me these.  I’m such a Rescuer.”

\n

“Rescuer?  Do you have a Rescue Dog?” [Where I’m from, abused animals brought into a new home are called ‘Rescue Dogs’.]

\n

“No, I...”

\n

“You mean your personality?  You care about people, don’t you?”

\n

She nodded slowly.  Her face began to crumble.

\n

“I’m sorry – I can’t look at these children,” she handed back the photographs, “Not right now.  I’ve been crying all day and I just can’t deal with those emotions...”

\n

I took back the children, a look of honest sympathy on my face.  The Demon Wheel began spinning.  I could see that she was on the verge of crying again.  My gut told me that her father had recently died, but the actual cause didn’t matter.  I could discover that information.  The upcoming dialogue played itself out in my mind...

\n

\"Oh jeez, what happened?  Oh my god, seriously..?” Head tilted as an Alpha confident enough for Beta behaviour, looking down and shaking, “I'm lucky enough to have never been through that.  Were the two of you close?” As she talks I nod, prompting her until she breaks out in tears.  I put down my binder and step into her house, embracing her as she cries on my shoulder.

\n

She sniffles.

\n

“I’m sorry... sorry to do this to you.”

\n

“No, don’t be.  Listen... Mary, is it?  What you’re going through is normal.  It’s nothing to be ashamed of…”  Cue personal anecdote, then pause for a beat. “Listen, about the Major Charity thing; this is something you’ve always wanted to do, isn’t it?  Yeah, I can tell.  You’re a caring person, after all.  I tell you what: we’ll get you set up with this little boy – he’s from Ecuador, and we’re trying to get him eating a healthy diet.  We’re going to make you his super hero today.  And then you’ll know – Mary, you’ll know that even at your darkest moment, you still have the strength in you to save a life.

\n

“And you know what else?” I reach out to touch her arm, “Tonight you’re going to sleep like a baby knowing that you did this.  So you go and get your Credit Card and I’ll start filling out the form.”

\n

*          *          *

\n

I could have done it.  I could have got that child sponsored.  I could have kept my job, and Mary could have stopped crying that evening.  She’d have thanked me for coming by, and after I left she would have cuddled on the couch with her new Sponsor Child, tears drying as she found hope in the world.

\n

But I didn’t do it.  Instead I apologized for interrupting her grief, and left.

\n

Because I am not a Meat Fucker.

\n

*          *          *

\n

All my life, I’ve had this bad habit.  No matter how hard I try and kick it, there it is: Honesty.  I can’t tell you how many times it’s dug me into a hole.  As far as concepts go, it’s about as foolish and utopian as Truth and Justice, and I know that, but I just can’t seem to let it go.  That’s a large part of the reason I left Mary alone to her tears – backed off, rather than digging into her psyche to recalibrate a few clusters of neuron.

\n

The other half is my status as a card-carrying (union-dues-paid-in-full) Anarchist.  The way I look at things, the only time you can justify using the Jedi Mind Trick on somebody is when your ethics would stand clean with murdering them as well.

\n

Sending Storm Troopers on a Wild Droid Chase is one thing; scamming Watto out of a distributor cap for your CGI Space Plane is another.

\n

When you take advantage of the Dark Arts, you’re not simply tricking people into giving you what you want; you’re making them want to give it to you.  You’re hacking into their brain and inserting a Murder Pill; afterwards they will literally thank you for doing so (the only sponsor I ever met who wasn’t glad that I’d come by was the lady whose 6 year old daughter I primed into wanting it).  In ninety percent of the situations where the Dark Arts are useful or possible, you can’t do it out of spite; when you realign someone’s desires to match your own they want to do what you want them to do. 

\n

And yet there’s no clear distinction between using these skills and regular social interaction.  Manipulation works best when you’re sincere about it.  Ethically speaking it’s a grey, wavy line.

\n

The thing is, we all like to be Sold, Led, Dominated; if I walk into Subway, and I ask the kid at the counter to give me his Best Submarine Sandwich, I want him to tell me what I want, and make me love it after it's paid for.  The last thing he should do is say that “They're all good!” and make me regret the [(5 breads)x(16 meats)x(2^12 Toppings)-1] subs that I didn't get.3 Retail is the Dark Arts Done Right (usually).  The Sales Lady figures out what I want, uses her expertise to find the best fit, and then kills the cognitive dissonance that could ruin my enjoyment of the product; “You really pull off that colour.  Seriously, that jacket looks great on you – you see how these lines naturally compliment your shoulders?  Of course you can!”

\n

Sexual dynamics are similar; if somebody’s drinking in public at 2 in the morning, it’s because they’re on the market.  Let’s say a ‘faithful wife’ goes to the club one weekend while her husband is out of town, and she has a few drinks with a bunch of college boys she just met.  One of them happens to be a PUA.  When it comes to things like date rape drugs, or taking advantage of a person who’s sloppy-drunk, there is a clear line in the sand.  But in this hypothetical the woman’s relatively sober.  It’s just that the young rake is so damned charming!

\n

Meanwhile her husband’s having a few pints at the hotel bar with Sheila from accounting, and she just keeps making eyes at him…

\n

Neither Sheila nor the PUA is responsible for the ensuing infidelity.  If the husband and wife didn’t want it in the first place, they would never have availed themselves of the temptation.  If, on the other hand, you meet somebody at a Neighbourhood Watch meeting, and spend the next three months seducing them… that’s when you’ve got to start questioning your ethics.  Anybody is going to be vulnerable at some time or another.

\n

While the Dark Arts are a Power, it’s how you use them that matters, like any other tool.  I can run mind-games on people, but I usually won’t; I’m also good at fighting, but I don’t assault people for no reason.  I find both concepts repulsive.

\n

That’s the end of my moralizing on the matter.  The upcoming series is going to be purely descriptive in nature, exploring different strategies for manipulating others.  I’ll provide tactical examples showing how these strategies can be put into play, but for the most part each battlefield is unique; these are broader methods that apply across the board.  What you do with these techniques is up to you.

\n

As for defence… I don’t think I’ll have much to say about that.  When done properly, the victim doesn’t realize it until it’s already over, and by then it doesn’t matter.  You’re aware that the AI manipulated you into opening the box, but you’re going to open it anyways because that’s your new utility function.  It’s like a game of Roshambo, or when you’re thinking about joining Facebook: the only way to win is not to play.

\n

 

\n

Endnotes

\n

1.      Major Charity’s methods of acquiring funding don’t have any bearing on whether or not it’s an effective charity.  Whether or not the money going overseas actually makes a difference is a question I cannot answer.

\n

2.      The repetition here is intentional.  I was trying to prime key concepts.

\n

3.      My theory as to what is going on with these sub places and their myriad of options: the target is not new customers; those people are going to be intimidated by all the choices, and the restaurants know that.  Rather, it is to provide ‘fresh’ options so that their current customers don’t get bored and go elsewhere.

" } }, { "_id": "EdFDwjsLNpgtTMJAp", "title": "Great Mathematicians on Math Competitions and \"Genius\"", "pageUrl": "https://www.lesswrong.com/posts/EdFDwjsLNpgtTMJAp/great-mathematicians-on-math-competitions-and-genius", "postedAt": "2010-10-11T11:50:46.004Z", "baseScore": 33, "voteCount": 30, "commentCount": 7, "url": null, "contents": { "documentId": "EdFDwjsLNpgtTMJAp", "html": "

As I mentioned in Fields Medalists on School Mathematics, school mathematics usually gives a heavily distorted picture of mathematical practice. It's common for bright young people to participate in math competitions, an activity which is closer to mathematical practice. Unfortunately, while math competitions may be more representative of mathematical practice than school mathematics, they are themselves greatly misleading. Furthermore, they've become tied to a misleading mythological conception of \"genius.\" I've collected relevant quotations below.

\n

Acknowledgment  - I obtained some of these quotations from a collection of mathematician quotations compiled by my colleague Laurens Gunnarsen.

\n

\n

In a 2003 interview, Fields Medalist Terence Tao answered the question

\n
\n

What advice would you give to young people starting out in math (i.e. high school students and young researchers)?

\n
\n

by saying

\n
\n

Well, I guess they should be warned that their impressions of what professional mathematics is may be quite different from the reality. In elementary school I had the vague idea that professional mathematicians spent their time computing digits of pi, for instance, or perhaps devising and then solving Math Olympiad style problems.

\n
\n

In The Case against the Mathematical Tripos mathematician GH Hardy wrote

\n
\n

It has often been said that Tripos mathematics was a collection of elaborate futilities, and the accusation is broadly true. My own opinion is that this is the inevitable result, in a mathematical examination, of high standards and traditions. The examiner is not allowed to content himself with testing the competence and the knowledge of the candidates; his instructions are to provide a test of more than that, of initiative, imagination, and even of some sort of originality. And as there is only one test of originality in mathematics, namely the accomplishment of original work, and as it is useless to ask a youth of twenty-two to perform original research under examination conditions, the examination necessarily degenerates into a kind of game, and instruction for it into initiation into a series of stunts and tricks.

\n
\n

In The Map of My Life mathematician Goro Shimura wrote of his experience teaching at a cram school

\n
\n

I discovered that many of the exam problems were artificial and required some clever tricks. I avoided such types, and chose more straightforward problems, which one could solve with standard techniques and basic knowledge. There is a competition called the Mathematical Olympic, in which a competitor is asked to solve some problems, which are difficult and of the type I avoided. Though such a competition may have its raison d'être, I think those younger people who are seriously interested in mathematics will lose nothing by ignoring it.

\n
\n

In his lecture at the 2001 International Mathematics Olympiad, Andrew Wiles gave further description of how math competitions are unrepresentative of mathematical practice

\n
\n

Let me then welcome you not only to this event but also to the greater world of mathematics in what many of us believe is now a golden age. However let me also warn you — whatever the route you have taken so far, the real challenges of mathematics are still before you. I hope to give you a glimpse of this. What then distinguishes the mathematics we professional mathematicians do from the mathematical problems you have faced in the last week? The two principal differences I believe are of scale and novelty. First of scale: in a mathematics contest such as the one you have just entered, you are competing against time and against each other. While there have been periods, notably in the thirteenth, fourteenth and fifteenth centuries when mathematicians would engage in timed duels with each other, nowadays this is not the custom. In fact time is very much on your side. However the transition from a sprint to a marathon requires a new kind of stamina and a profoundly different test of character. We admire someone who can win a gold medal in five successive Olympics games not so much for the raw talent as for the strength of will and determination to pursue a goal over such a sustained period of time. Real mathematical theorems will require the same stamina whether you measure the effort in months or in years [...]

\n

The second principal difference is one of novelty [...] Let me stress that creating new mathematics is a quite different occupation from solving problems in a contest. Why is this? Because you don't know for sure what you are trying to prove or indeed whether it is true.

\n
\n

In his Mathematical Education essay, Fields Medalist William Thurston said

\n
\n

Related to precociousness is the popular tendency to think of mathematics as a race or as an athletic competition. There are widespread high school math leagues: teams from regional high schools meet periodically and are given several problems, with an hour or so to solve them.

\n

There are also state, national and international competitions. These competitions are fun, interesting, and educationally effective for the people who are successful in them. But they also have a downside. The competitions reinforce the notion that either you ‘have good math genes’, or you do not. They put an emphasis on being quick, at the expense of being deep and thoughtful. They emphasize questions which are puzzles with some hidden trick, rather than more realistic problems where a systematic and persistent approach is important. This discourages many people who are not as quick or as practiced, but might be good at working through problems when they have the time to think through them. Some of the best performers on the contests do become good mathematicians, but there are also many top mathematicians who were not so good on contest math.

\n


Quickness is helpful in mathematics, but it is only one of the qualities which is helpful. For people who do not become mathematicians, the skills of contest math are probably even less relevant. These contests are a bit like spelling bees. There is some connection between good spelling and good writing, but the winner of the state spelling bee does not necessarily have the talent to become a good writer, and some fine writers are not good spellers. If there was a popular confusion between good spelling and good writing, many potential writers would be unnecessarily discouraged.

\n
\n

In his book Mathematics: A Very Short Introduction, Fields Medalist Timothy Gowers writes

\n
\n

While the negative portrayal of mathematicians may be damaging, by putting off people who would otherwise enjoy the subject and be good at it, the damage done by the word genius is more insidious and possibly greater. Here is a rough and ready definition of genius: somebody who can do easily, and at a young age, something that almost nobody else can do except after years of practice, if at all. The achievements of geniuses have some sort of magic quality about them - it is as if their brains work not just more efficiently than ours, but in a completely different way. Every year or two a mathematics undergraduate arrives at Cambridge who regularly manages to solve in a few minutes problems that take most people, including those who are supposed to be teaching them, several hours or more. When faced with such a person, all one can do is stand back and admire.

\n

And yet, these extraordinary people are not always the most successful research mathematicians. If you want to solve a problem that other professional mathematicians have tried and failed to solve before you, then, of the many qualities you will need, genius as I have defined it is neither necessary nor sufficient. To illustrate with an extreme example, Andrew Wiles, who (at the age of just over forty) proved Fermat's Last Theorem (which states that if x, y, z, and n are all positive integers and n is greater than 2, then x^n + y^n cannot equal z^n) and thereby solved the world's most famous unsolved mathematics problem, is undoubtedly very clever, but he is not a genius in my sense.

\n

How, you might ask, could he possibly have done what he did without some sort of mysterious extra brainpower? The answer is that, remarkable though his achievement was, it is not so remarkable as to defy explanation. I do not know precisely what enabled him to succeed, but he would have needed great courage, determination, and patience, a wide knowledge of some very difficult work done by others, the good fortune to be in the right mathematical area at the right time, and an exceptional strategic ability.

\n

This last quality is, ultimately, more important than freakish mental speed: the most profound contributions to mathematics are often made by tortoises rather than hares. As mathematicians develop, they learn various tricks of the trade, partly from the work of other mathematicians and partly as a result of many hours spent thinking about mathematics. What determines whether they can use their expertise to solve notorious problems is, in large measure, a matter of careful planning: attempting problems that are likely to be fruitful, knowing when to give up a line of thought (a difficult judgement to make), being able to sketch broad outlines of arguments before, just occasionally, managing to fill in the details. This demands a level of maturity which is by no means incompatible with genius but which does not always accompany it.

\n
\n

In Does one have to be a genius to do maths? Terence Tao concurs with Gowers and expands on the same theme.

\n
\n

Fields Medalist Alexander Grothendieck describes his own relevant experience in Récoltes et Semailles

\n
\n

Since then I’ve had the chance in the world of mathematics that bid me welcome, to meet quite a number of people, both among my “elders” and among young people in my general age group who were more brilliant, much more ‘gifted’ than I was. I admired the facility with which they picked up, as if at play, new ideas, juggling them as if familiar with them from the cradle–while for myself I felt clumsy, even oafish, wandering painfully up an arduous track, like a dumb ox faced with an amorphous mountain of things I had to learn (so I was assured) things I felt incapable of understanding the essentials or following through to the end. Indeed, there was little about me that identified the kind of bright student who wins at prestigious competitions or assimilates almost by sleight of hand, the most forbidding subjects.

\n

In fact, most of these comrades who I gauged to be more brilliant than I have gone on to become distinguished mathematicians. Still from the perspective of thirty or thirty five years, I can state that their imprint upon the mathematics of our time has not been very profound. They’ve done all things, often beautiful things in a context that was already set out before them, which they had no inclination to disturb. Without being aware of it, they’ve remained prisoners of those invisible and despotic circles which delimit the universe of a certain milieu in a given era. To have broken these bounds they would have to rediscover in themselves that capability which was their birthright, as it was mine: The capacity to be alone.

\n
" } }, { "_id": "q8s8qtCmnYmxJscmX", "title": "Great Mathematicians on Precocity, Speed and Math Competitions", "pageUrl": "https://www.lesswrong.com/posts/q8s8qtCmnYmxJscmX/great-mathematicians-on-precocity-speed-and-math", "postedAt": "2010-10-11T10:45:52.139Z", "baseScore": 2, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "q8s8qtCmnYmxJscmX", "html": "

In a 2003 interview, Terence Tao answered the question

\n
\n

What advice would you give to young people starting out in math (i.e. high school students and young researchers)?

\n
\n

by saying

\n
\n

Well, I guess they should be warned that their impressions of what professional mathematics is may be quite different from the reality. In elementary school I had the vague idea that professional mathematicians spent their time computing digits of pi, for instance, or perhaps devising and then solving Math Olympiad style problems.

\n
\n

In The Case against the Mathematical Tripos mathematician GH Hardy wrote

\n

It has often been said that Tripos mathematics was a collection of elaborate futilities, and the accusation is broadly true. My own opinion is that this is the inevitable result, in a mathematical examination, of high standards and traditions. The examiner is not allowed to content himself with testing the competence and the knowledge of the candidates; his instructions are to provide a test of more than that, of initiative, imagination, and even of some sort of originality. And as there is only one test of originality in mathematics, namely the accomplishment of original work, and as it is useless to ask a youth of twenty-two to perform original research under examination conditions, the examination necessarily degenerates into a kind of game, and instruction for it into initiation into a series of stunts and tricks.

\n

In The Map of My Life mathematician Goro Shimura wrote of his experience teaching at a cram school

\n
\n

I discovered that many of the exam problems were artificial and required some clever tricks. I avoided such types, and chose more straightforward problems, which one could solve with standard techniques and basic knowledge. There is a competition called the Mathematical Olympic, in which a competitor is asked to solve some problems, which are difficult and of the type I avoided. Though such a competition may have its raison d’être, I think those younger people who are seriously interested in mathematics will lose nothing by ignoring it.

\n
\n

In his lecture at the 2001 International Mathematics Olympiad, Andrew Wiles said

\n
\n

Let me then welcome you not only to this event but also to the greater world of mathematics in what many of us believe is now a golden age. However let me also warn you — whatever the route you have taken so far, the real challenges of mathematics are still before you. I hope to give you a glimpse of this. What then distinguishes the mathematics we professional mathematicians do from the mathematical problems you have faced in the last week? The two principal differences I believe are of scale and novelty. First of scale: in a mathematics contest such as the one you have just entered, you are competing against time and against each other. While there have been periods, notably in the thirteenth, fourteenth and fifteenth centuries when mathematicians would engage in timed duels with each other, nowadays this is not the custom. In fact time is very much on your side. However the transition from a sprint to a marathon requires a new kind of stamina and a profoundly different test of character. We admire someone who can win a gold medal in five successive Olympics games not so much for the raw talent as for the strength of will and determination to pursue a goal over such a sustained period of time. Real mathematical theorems will require the same stamina whether you measure the effort in months or in years [...] The second principal difference is one of novelty [...] Let me stress that creating new mathematics is a quite different occupation from solving problems in a contest. Why is this? Because you don't know for sure what you are trying to prove or indeed whether it is true.

\n
\n

In his Mathematical Education essay, Fields Medalist William Thurston said

\n
\n

Related to precociousness is the popular tendency to think of mathematics as a race or as an athletic competition. There are widespread high school math leagues: teams from regional high schools meet periodically and are given several problems, with an hour or so to solve them.

\n


There are also state, national and international competitions. These competitions are fun, interesting, and educationally effective for the people who are successful in them. But they also have a downside. The competitions reinforce the notion that either you ‘have good math genes’, or you do not. They put an emphasis on being quick, at the expense of being deep and thoughtful. They emphasize questions which are puzzles with some hidden trick, rather than more realistic problems where a systematic and persistent approach is important. This discourages many people who are not as quick or as practiced, but might be good at working through problems when they have the time to think through them. Some of the best performers on the contests do become good mathematicians, but there are also many top mathematicians who were not so good on contest math.

\n


Quickness is helpful in mathematics, but it is only one of the qualities which is helpful. For people who do not become mathematicians, the skills of contest math are probably even less relevant. These contests are a bit like spelling bees. There is some connection between good spelling and good writing, but the winner of the state spelling bee does not necessarily have the talent to become a good writer, and some fine writers are not good spellers. If there was a popular confusion between good spelling and good writing, many potential writers would be unnecessarily discouraged.

\n
\n

In Récoltes et Semailles, Fields Medalist Alexander Grothendieck wrote

\n
\n

Since then I’ve had the chance in the world of mathematics that bid me welcome, to meet quite a number of people, both among my “elders” and among young people in my general age group who were more brilliant, much more ‘gifted’ than I was. I admired the facility with which they picked up, as if at play, new ideas, juggling them as if familiar with them from the cradle–while for myself I felt clumsy, even oafish, wandering painfully up an arduous track, like a dumb ox faced with an amorphous mountain of things I had to learn (so I was assured) things I felt incapable of understanding the essentials or following through to the end. Indeed, there was little about me that identified the kind of bright student who wins at prestigious competitions or assimilates almost by sleight of hand, the most forbidding subjects.

\n

In fact, most of these comrades who I gauged to be more brilliant than I have gone on to become distinguished mathematicians. Still from the perspective of thirty or thirty five years, I can state that their imprint upon the mathematics of our time has not been very profound. They’ve done all things, often beautiful things in a context that was already set out before them, which they had no inclination to disturb. Without being aware of it, they’ve remained prisoners of those invisible and despotic circles which delimit the universe of a certain milieu in a given era. To have broken these bounds they would have to rediscover in themselves that capability which was their birthright, as it was mine: The capacity to be alone.

\n
" } }, { "_id": "MsTu3dqf7BnEupoW4", "title": "Fields Medalists on School Mathematics", "pageUrl": "https://www.lesswrong.com/posts/MsTu3dqf7BnEupoW4/fields-medalists-on-school-mathematics", "postedAt": "2010-10-11T10:06:25.516Z", "baseScore": 16, "voteCount": 17, "commentCount": 19, "url": null, "contents": { "documentId": "MsTu3dqf7BnEupoW4", "html": "

Most people form their impressions of math from their school mathematics courses. The vast majority of school mathematics courses distort the nature of mathematical practice and so have led to widespread misconceptions about it. There's a long history of high caliber mathematicians finding their experiences with school mathematics alienating or irrelevant. I think this should be better known. Here I've collected some relevant quotes.

\n

I'd like to write some Less Wrong articles dispelling common misconceptions about mathematical practice but am not sure how to frame these hypothetical articles. I'd welcome any suggestions.

\n

Acknowledgment - I obtained some of these quotations from a collection of mathematician quotations compiled by my colleague Laurens Gunnarsen.

\n

In Reflections Around the Ramanujan Centenary Fields Medalist Atle Selberg said:

\n
\n

I have talked with many others who became mathematicians, about the mathematics they learned in school. Most of them were not particularly inspired by it but started reading on their own, outside of school by some accident or other, as I myself did.

\n
\n

In his autobiography Ferdinand Eisenstein wrote about how he found his primary school mathematical education torturous:

\n
\n

During the first years [of elementary school] I acquired my education in the fundamentals: I still remember the torture of completing endless multiplication examples.  From this, you might conclude, erroneously, that I lacked mathematical ability, merely because I showed little inclination for calculating.  In fact the mechanical, always repetitive nature of the procedures annoyed me, and indeed, I am still disgusted with calculations lacking any purpose, while if there was something new to discover, requiring thought and reasoning, I would spare no pains.

\n
\n

There is some overlap between Eisenstein's early school experience and the experience that Fields Medalist William Thurston describes in his essay in Mariana Cook's book Mathematicians: An Outer View of the Inner World:

\n
\n

I've loved mathematics all my life, although I often doubted that mathematics would turn out to be my life's focus even when others thought it obvious.  I hated much of what was taught as mathematics in my early schooling, and I often received poor grades.  I now view many of these early lessons as anti-math: they actively tried to discourage independent thought.  One was supposed to follow an established pattern with mechanical precision, put answers inside boxes, and \"show your work,\" that is, reject mental insights and alternative approaches.  My attention is more inward than that of most people: it can be resistant to being captured and directed externally. Exercises like these mathematics lessons were excruciatingly boring and painful (whether or not I had \"mastered the material\").

\n
\n

Thurston's quote points to the personal nature of mathematical practice. This is echoed by Fields Medalist Alain Connes in The Unravelers: Mathematical Snapshots

\n
\n

...for me, one starts to become a mathematician more or less through an act of rebellion. In what sense? In the sense that the future mathematician will start to think about a certain problem, and he will notice that, in fact, what he has read in the literature, what he has read in books, doesn't correspond to his personal vision of the problem. Naturally, this is very often the result of ignorance, but that is not important so long as his arguments are based on personal intuition and, of course, on proof. So it doesn't matter, because in this way he'll learn that in mathematics there is no supreme authority! A twelve-year-old pupil can very well oppose his teacher if he finds a proof of what he argues, and that differentiates mathematics from other disciplines, where the teacher can easily hide behind knowledge that the pupil doesn't have. A child of five can say, \"Daddy, there isn't any biggest number\" and can be certain of it, not because he read it in a book but because he has found a proof in his mind...

\n
\n

In Récoltes et Semailles Fields Medalist Alexander Grothendieck describes an experience of the type that Alain Connes mentions:

\n
\n

I can still recall the first \"mathematics essay\", and that the teacher gave it a bad mark. It was to be a proof of \"three cases in which triangles were congruent.\" My proof wasn't the official one in the textbook he followed religiously. All the same, I already knew that my proof was neither more nor less convincing than the one in the book, and that it was in accord with the traditional spirit of \"gliding this figure over that one.\" It was self-evident that this man was unable or unwilling to think for himself in judging the worth of a train of reasoning. He needed to lean on some authority, that of a book which he held in his hand. It must have made quite an impression on me that I can now recall it so clearly.

\n
" } }, { "_id": "YZy4iwF5xi5FC2dMf", "title": "Ben Goertzel: What Would It Take to Move Rapidly Toward Beneficial Human-Level AGI?", "pageUrl": "https://www.lesswrong.com/posts/YZy4iwF5xi5FC2dMf/ben-goertzel-what-would-it-take-to-move-rapidly-toward", "postedAt": "2010-10-11T09:01:27.508Z", "baseScore": 3, "voteCount": 7, "commentCount": 13, "url": null, "contents": { "documentId": "YZy4iwF5xi5FC2dMf", "html": "

http://multiverseaccordingtoben.blogspot.com/2010/10/what-would-it-take-to-move-rapidly.html

" } }, { "_id": "mL7Q4A6XgCkTzxc8Z", "title": "Video: Getting Things Done Author at DO Lectures", "pageUrl": "https://www.lesswrong.com/posts/mL7Q4A6XgCkTzxc8Z/video-getting-things-done-author-at-do-lectures", "postedAt": "2010-10-11T08:33:26.600Z", "baseScore": 5, "voteCount": 4, "commentCount": 0, "url": null, "contents": { "documentId": "mL7Q4A6XgCkTzxc8Z", "html": "

If nothing else, this is a distillation of him spending a lot of time analyzing how people ineffectively manage their time.

\n

Link:

\n

http://www.dolectures.com/speakers/speakers-2010/david-allen

\n

I expect to watch this two more times.

\n

 

\n

 

" } }, { "_id": "zk3x6W4yBYNKGX44B", "title": "Any LW-ers in Munich, Athens, or Israel?", "pageUrl": "https://www.lesswrong.com/posts/zk3x6W4yBYNKGX44B/any-lw-ers-in-munich-athens-or-israel", "postedAt": "2010-10-11T06:56:01.092Z", "baseScore": 9, "voteCount": 6, "commentCount": 6, "url": null, "contents": { "documentId": "zk3x6W4yBYNKGX44B", "html": "

Carl Shulman and I are in Munich until Oct 14, then in Athens for two days (until late night of the 16th), then in Israel for a couple months (with at least some time in both Tel Aviv and Jerusalem).  Is there anyone in one of these cities that would like to meet for coffee and discussion?

" } }, { "_id": "yeCZb6zkS9bvuLbqa", "title": "Love and Rationality: Less Wrongers on OKCupid", "pageUrl": "https://www.lesswrong.com/posts/yeCZb6zkS9bvuLbqa/love-and-rationality-less-wrongers-on-okcupid", "postedAt": "2010-10-11T06:35:52.600Z", "baseScore": 27, "voteCount": 52, "commentCount": 337, "url": null, "contents": { "documentId": "yeCZb6zkS9bvuLbqa", "html": "

Last month, Will_Newsome started a thread about OKCupid, one of the major players among online dating sites--especially for the young-and-nerdy set, given their mathematical approach to matching. He opened it up for individual profile evaluation, which occurred, but so did a lot of fruitful meta-discussion about attraction in general and online dating mechanisms in particular. This post is a summary of the parts of that thread which specifically address the practical aspect of good profile editing and critique. (It also incorporates some ideas I had previously but hadn't collected yet.) A little of it is specific to OKCupid, but most of it can be applied to any dating site, and some to dating in general. I've cited points which came from single comments (i.e. not suggested by several people); if I missed one of yours, please comment with a link and I'll add the reference.

\n

On OKTrends

\n

\"Wait a minute,\" I hear experienced OKCers cry. \"Why reinvent the wheel of profile analysis? OKCupid already has a blog for just that, and it's called OKTrends.\"

\n

OKTrends has its merits, but it also has one major flaw. Wei_Dai summed it up well by observing that OKTrends does not make \"any effort to distinguish between correlation and causation,\" citing this post as an example. The reason for that is obvious: the first purpose of OKTrends is to bring traffic to OKCupid. It does this with entertaining content about racy subjects, and rigorous analysis comes (optimistically) second. Of course, datadataeverywhere added, that's exactly the Mythbusters formula. They're both junk food science, but it's also the only look at their data we're going to get, so I'll link a few relevant OKTrends posts in the appropriate sections.

\n

How to Write a Good Profile

\n

Okay, you've created your account and answered a few questions. Now it's time to summarize your whole personality, your appeal, and your worldview in ten little text boxes. Where to begin?

\n

The obvious answer is to reply to the ten profile prompts with your answers to them. Don't fall for it! What you write in your profile, along with your picture, will be the whole sense of yourself you convey to other people. Do your favorite media selections and the fact that you need oxygen, water, food, shelter, and two other obvious things to live constitute 20% of your identity?

\n

Concrete Advice #1: Don't just follow the prompts. Think about what you want to say in your profile, and then fit that into the answers.

\n

Or don't even find a way to fit it into the answers. I've seen excellent profiles which literally ignored the questions and just said what they had to say. But fear not, I won't leave you entirely promptless. There are two goals in writing a good profile:

1. Honesty: give an accurate picture of who you actually are.
2. Attractiveness: make the people you want to hear from want to message you.

We'll address these one at a time, beginning with honesty.

\n

There's a distinction in anthropology between \"ancestral traits,\" whose genes go back so far that they are common among a huge variety of species, and \"derived traits,\" which evolved recently enough to be an informative descriptor of a group. Pentadactyly is an ancestral trait, and is not specific enough to tell a human from a newt; opposable thumbs are a derived trait, and indicate that you're probably (although not necessarily) looking at a primate. You can speak similarly of traits which are memetic rather than genetic; ancestral traits are shared by almost everyone in the culture, and derived traits by smaller subgroups.

\n

Ancestral: \"I like listening to music and hanging out with my friends.\"

\n

Derived: \"I like taking photographs and playing board games.\"

\n

Concrete Advice #2: Write about your derived traits, not your ancestral ones.

\n

Notice that it's not about specificity. The second set of interests isn't very much more specific than the first one. They're just less common interests. Therefore, they do a better job of identifying where you fit in personspace, and in fewer words. For the convenience of newcomers to online dating, here's a quick laundry list of cliches which are so common as to tell the reader nothing about you:

\n

Concrete Advice #3: Omit all of these: \"it's hard to summarize myself\" \"what should I say here\" \"I'm contradictory\" \"I'm nice\" \"I'm shy until you get to know me\" \"the first thing people notice is my eyes\" \"I need [obvious literal things] to live\" \"if it were private I wouldn't write it here\" \"you can ask me anything\" and explicit suggestions that the reader should date you, even tongue-in-cheek

\n

That said, it is hard to summarize yourself. It's hard to recognize the parts of yourself which matter, and even harder to remember them later when you're staring at a form on a webpage. Furthermore, self-identity is susceptible to environmental pressure, and it's easy to just write up the stereotype of the group you feel you belong to. If you'll pardon me quoting myself:

\n
\n

The first few versions of my profile were geared to show off how geeky and smart I was. This connected me to people who spent a lot of time playing tabletop roleplaying games, reading fantasy novels, and making pop culture references to approved geeky television shows, none of which are things which interest me particularly.

\n

Eventually I realized that I am not actually just popped out of the stereotypical modern geek mold, and it was lazy, inaccurate, and ineffective to act like I was. Since then I've started doing the much harder thing of trying to pin down my specific traits and tastes, instead of taking the party line or applying a genre label that lets people assume the details. In that way, OKC has actually been a big force in driving me to understand who I am, what I want, and what really matters to me.

\n
\n

Concrete Advice #4: Learn what you actually care about. Get into the habit of noticing things in your day-to-day life which excite you, please you, infuriate you, or make you think. That's what belongs in an honest description of you.

\n

That's tough, but it's easier than it sounds. Remember that the reason you're being honest is that you want to attract someone who will actually like you, not just the person you claim to be. Don't worry at this stage about appearing \"interesting\" enough, or whether the generic average airhead represented by OKTrends would like you. Interpolate put it perfectly:

\n
\n

No one you want to meet would find you boring.

\n
\n

Keep that in mind when you're wondering how to balance the honesty and attractiveness goals. Yvain wondered why some users openly express non-mainstream views about transhumanism in a dating profile; this may be honest, but to a lot of people it won't be attractive. Apprentice was surprised by the number of LWers who talked about outdoorsy interests, which can intimidate geeky homebody types. In both cases, whether the interest warrants a mention depends on how significant that interest is to your personality and lifestyle.

\n

Concrete Advice #5: The more you mention something, the more important it will seem to be to you.

\n

rhollerith_dot_com came at the same point from a different angle, with the specific advice not to go into too much detail about work. What field you're in is interesting; what project has been taking up your work hours lately probably isn't. Unless your job is particularly cool or a big part of your identity, it doesn't deserve more than a sentence or two. The same goes for academic fields and most hobbies. If it would only generate conversation with someone who shares your job, major, or hobby, leave it out (unless those are the only people you're looking for). More generally, keep track of how much you mention a given topic in your profile. Count instances, if you have to. When you sort the list by quantity, what matters most to you should be on top. Right below that on the frequency list ...

\n

Concrete Advice #6: Write about the traits or interests that you want a potential partner to share.

\n

Describing what you want in a partner is about as hard as describing yourself, and for the same reasons, but you can approach it the same way (by paying attention and thinking about it in real-life contexts, not just when working on your profile). There are two reasons to make a point of including those things: It will appeal to people who share those traits with you, which is by definition your target audience; and OKCupid connects people in part based on shared interests listed in their profiles, even the ones that the user didn't choose to highlight. More to the point, the adorable but nonsentient cartoon matching robot does that. Which means:

\n

Concrete Advice #7: Do not mention your dislikes in your profile unless they are otherwise important.

\n

As far as I can tell, once OKC has decided you like something, there's no way to explicitly tell it you don't. Even removing it from your profile doesn't kick in immediately. If someone searches for, say, \"scientology,\" and you put in your profile that \"scientology is crap,\" you will come up on the search. This is not what either of you is trying to accomplish. Besides, that doesn't describe you. If you're an active organizer of major scientology protests and are looking for someone to do that with you, okay, put it in. Short of that, don't give yourself keywords you don't want.
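To make the failure mode concrete, here's a toy sketch of that kind of naive keyword matching; the profile text and the search function are invented for illustration, since OKCupid's actual search internals aren't public.

```python
# Toy model of keyword search that treats any mention as a match,
# regardless of whether the mention is positive or negative.
profiles = {
    "user_a": "Scientology is crap and I want nothing to do with it.",
    "user_b": "I love hiking, board games, and bad puns.",
}

def naive_search(keyword):
    """Return every profile whose text merely contains the keyword."""
    return [name for name, text in profiles.items()
            if keyword.lower() in text.lower()]

print(naive_search("scientology"))  # ['user_a'] -- the anti-fan still matches
```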

\n

One last thing about searchability before we move on.

\n

Concrete Advice #8: Fill out any applicable sidebar information.

\n

Alicorn's example was religion: If you like the idea of being found by an atheist looking for another atheist, make sure OKCupid knows that you are one. I would go a step further and recommend filling in as much as you can. Single completed fields, or single omitted fields, will look more significant than they probably are--but do leave out any where all possible responses would be misleading. (I've left the \"children\" field blank, for example, because I don't want them now but might some day, so neither \"wants\" nor \"doesn't want\" is correct.) If you want to expound on any of your answers, of course, you can do it in the profile body, as long as it maintains an acceptable importance/frequency ratio and doesn't make your profile unreadably long.

\n

Concrete Advice #9: Write between 50 and 350 words in most of the fields.

I got these numbers by measuring answers which make my eyes glaze over (on the long end) or which make me think \"that's it?\" (on the short end). This isn't a hard-and-fast rule. The self-summary is justified in being a little bit longer; the six things are justified in being shorter. Your favorites section should be one of your shorter answers, unless media and food happen to be really important to you (in which case, write about why, don't just list them).
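If you want a mechanical sanity check, here's a trivial sketch; the 50/350 limits come from the advice above, while the field names and example text are made up.

```python
# Flag profile answers that fall outside the rough 50-350 word range suggested above.
LIMITS = (50, 350)  # suggested lower and upper word counts

def flag_fields(profile: dict) -> dict:
    """Return {field: word_count} for answers outside the suggested range."""
    flagged = {}
    for field, text in profile.items():
        words = len(text.split())
        if not LIMITS[0] <= words <= LIMITS[1]:
            flagged[field] = words
    return flagged

example = {"self_summary": "word " * 20, "favorites": "word " * 400}
print(flag_fields(example))  # {'self_summary': 20, 'favorites': 400}
```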

\n

Last but not least, here is the most-discussed and hopefully most obvious thing you can do to improve your profile.

\n

Concrete Advice #10: Upload at least one clear, flattering, decent resolution photo of yourself. No excuses.

\n

I'm just going to hand it over to mattnewport for a sec, responding to comments about not being \"photogenic.\"

\n
\n

... the word 'photogenic' should be like a red flag to a rationalist bull ... people who are 'not photogenic' are not made of some different type of material that reacts differently to light than photogenic people.

\n
\n

He goes on to point out that OKTrends did not one but two posts on what makes a good (read: message-attracting) profile picture. The first one is about content (poses, props, situations), and the second one is mostly about camera choice and timing. If you can read those and then turn around and take a good photo of yourself, great. If not, and especially if you're frustrated by the task, enlist the help of an actual photographer. You may know one. One of your friends may know one. A local skilled amateur may be willing to trade prints for practice. Whoever they are, find them. If you claim to be trying to prepare a good profile, and you don't have a picture on it that you're proud of, you're fooling yourself. (Hypocrisy alert: I haven't yet done this. But I just talked myself into it, so I will.)

\n

Yvain counters, quite fairly, that all of his photos are of him out doing interesting things which don't lend themselves to clean sparkling images: backpacking, scuba diving, and so forth. He's right to want to keep those to show off his activities; however, four different people commented that his pictures could be improved. I think it's clear that he would be well-served by adding one more, whose sole purpose is to flatter him physically.

\n

How to Make It a Better Profile

\n

Congratulations! You've written a competent profile. But the only person who's seen it yet is the least objective person in the world with regard to your attractiveness. Time to get a second opinion. The purpose of the profile critique is to verify that you've met your two goals in profile writing: honesty (have you actually depicted your personality?) and attractiveness (does the profile encourage messages?).

\n

The best people to judge your profile's honesty are those who know you well. They're the only ones who can tell whether the words you chose give an impression of you which matches the impression you give in reality. Unfortunately, this means they also have preconceptions about you. Better would be a critique from someone who formed their in-person impression only after reading your profile, but if your profile is working that well it's probably fine. In any case, ask your honesty evaluators if there's anything in your profile which surprises them, or anything they're surprised you omitted.

\n

There are two schools of thought on whom you should ask to judge your profile's attractiveness. One is to ask the sort of person you're trying to attract: members of your preferred gender, and probably of your own culture. They can tell you whether your profile is attractive to them and whether they'd message you based on it ... or at least, whether they think they would. The other school of thought is that the right people to ask are those who share your gender/culture preference, and have been successful attracting such partners. They can tell you what has empirically worked for them and compare notes. Both have potential biases, but anything both types of critic agree on is probably correct. (I didn't see any gay users pipe up in this part of the conversation, but I'd love to know how the overlap between the two sets affects their feedback.)

\n

Of course, a once-over by a relative stranger (e.g. another LWer) can be useful as well. They can tell you what assumptions they make about you, knowing little more than what you've chosen to write. Have your critic read the profile line by line and write down their impressions as they have them; when they finish, they can add the overall gist they got from reading. The idea is to give you a fuller picture of the reader's immediate responses--ideas which could stick in the subconscious even if they're forgotten consciously by the end. These are the details that they're filling in between the lines, and that's what you want to be sure is accurate. In particular, this is good for ensuring that your frequency of mentions actually matches your degree of interest; whpearson noticed such a discrepancy in mine, which I corrected.

\n

It should go without saying that any profile editor should also be encouraged to report problems with the language or flow. Get rid of typos, clean up the grammar. Check for subtler things as well, like unusual words repeated close together, or using the same sentence structure over and over. If a joke isn't funny or a reference doesn't make sense, replace or omit it. All of these errors are distractions from what you're trying to communicate, and produce fleeting impressions of confusion or irritation which are then associated with your profile. Other than that, write in a style which is natural to you. That style is a fair part of your self-description.

\n

Finally, review your profile from time to time. Every few months is a good minimum, give or take any life-altering events. The purpose of this is to ensure that your profile changes as you change, to stay up-to-date on the honesty goal. For the same reason, cycle in a new picture periodically, especially when your appearance has changed. If you really want to be thorough, re-answer old match questions from time to time as well. They're the biggest part of how OKCupid connects you to other people, and updating them keeps it current on your tastes and values. That this requires continuing to think about and adjust your tastes and values as time passes is just a perk.

" } }, { "_id": "7b7JFwa2eFD2WmPYK", "title": "Discuss: Have you experimented with Pavlovian conditioning?", "pageUrl": "https://www.lesswrong.com/posts/7b7JFwa2eFD2WmPYK/discuss-have-you-experimented-with-pavlovian-conditioning", "postedAt": "2010-10-11T06:15:29.791Z", "baseScore": 5, "voteCount": 6, "commentCount": 10, "url": null, "contents": { "documentId": "7b7JFwa2eFD2WmPYK", "html": "

I want to do some quick-and-dirty productivity hacks, along the lines of this or this. My simplified methodology is something like this: at the end of every 20 minutes of hard-ish labor (like writing Less Wrong posts or taking over African countries), I will flip a coin. If the coin lands tails, I inhale 8 grams of delicious nitrous oxide and keep on working. If heads, I die a little inside, take a 5 minute emailhackernewsfacebookblitzchess break, and then start working again.
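If you'd rather script the schedule than watch a clock, here's a minimal sketch of the loop described above; the 20-minute work block, the coin flip, and the 5-minute break come from the plan, and the reward step is just a placeholder prompt.

```python
# Minimal sketch of the schedule above: work ~20 minutes, flip a coin,
# take the reward on tails, take a 5-minute break on heads, repeat.
import random
import time

WORK_SECONDS = 20 * 60
BREAK_SECONDS = 5 * 60

def one_cycle():
    time.sleep(WORK_SECONDS)                 # the hard-ish labor block
    if random.random() < 0.5:                # tails
        input("Tails! Take your reward, then press Enter to resume working.")
    else:                                    # heads
        print("Heads. Take a short break instead.")
        time.sleep(BREAK_SECONDS)

if __name__ == "__main__":
    while True:
        one_cycle()
```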

\n

The reason I really expect this to work is that I get significantly more pleasure out of nitrous oxide than I do from orgasms. It's that good. If positive conditioning works at all, this stuff had better do it. I'm skeptical that something else like a gummy worm would really motivate me. Plus I'm not too keen on ingesting excessively large amounts of sugar.

\n

Have others done similar experiments? Any ideas for an improved methodology? Anyone else interested in trying this with their own drug/candy of choice so we can pool our findings? I'd be very happy if people used this to finally bang out that one Less Wrong post they've been meaning to write.

" } }, { "_id": "JvNBTj3t7bws2njJd", "title": "Collecting and hoarding crap, useless information", "pageUrl": "https://www.lesswrong.com/posts/JvNBTj3t7bws2njJd/collecting-and-hoarding-crap-useless-information", "postedAt": "2010-10-10T21:05:51.331Z", "baseScore": 23, "voteCount": 29, "commentCount": 23, "url": null, "contents": { "documentId": "JvNBTj3t7bws2njJd", "html": "

I am realizing something that many, many intelligent people are guilty of - collecting and hoarding and accumulating crap, useless information. This is dangerous, because it feels like you're doing something useful, but you're not.

\n

However, speaking personally - once I decide to start focusing and researching something systematically to get better at it, it gets harder to do. For instance, I taught myself statistics mostly using baseball stats. It was a fun, easy, harmless context to learn statistics.

\n

I read lots of history and historical fiction. I read up lots on business and entrepreneurship. This is easy and fun and enjoyable.

\n

But then, when I decide to really hone in, it becomes much harder. For instance, I'm doing some casual research on the history of insurgencies and asymmetrical warfare. This is the kind of thing I'd read all the time for fun, but now that I'm working on it systematically, it becomes a lot harder.

\n

Likewise business and entrepreneurship - I read lots and lots on technology, financing, market research, marketing, etc. But now that I'm really nailing down one aspect for my next business, it becomes almost strenuous to work on that.

\n

It's like... collecting and hoarding useless, unfocused information is for us what collecting and hoarding a bunch of useless consumer shit is for most people. I'd reckon that people that hang out here are smarter with money and less into buying junk, but, at least for me, I'm spending a lot of my time buying junk information.

\n

Alright, back to reading about Tiananmen Square and Rome/Carthage and the Tet Offensive, and nailing down the buying criteria and budgets of the market I want to be in. Why is it so much easier to focus on and collect crap mentally than to do it systematically on meaningful topics? Do you do this? I seriously doubt I'm the only one...

" } }, { "_id": "7tbGrvKi7TnyBpnci", "title": "Discuss: Meta-Thinking Skills", "pageUrl": "https://www.lesswrong.com/posts/7tbGrvKi7TnyBpnci/discuss-meta-thinking-skills", "postedAt": "2010-10-10T20:05:32.121Z", "baseScore": 8, "voteCount": 8, "commentCount": 11, "url": null, "contents": { "documentId": "7tbGrvKi7TnyBpnci", "html": "

When do you go meta? When do you stop going meta?

\n

In the video Q and A Eliezer offered some advice about this (the emphasis is mine):

\n
\n

I tend to focus on thinking, and it's only when my thinking gets stuck or I run into a particular problem that I will resort to meta-thinking. Unless it's a particular meta-skill that I already have, in which case I'll just execute it. For example, the meta-skill of trying to focus on the original problem. In one sense, a whole chunk of Less Wrong is more or less my meta-thinking skills.

\n

I guess on reflection I would say that there is a lot of routine meta-thinking that I already know how to do, and I do without really thinking of it as meta-thinking. On the other hand original meta-thinking, which is the time consuming part, is something that I tend to resort to only when my current meta-thinking skills have broken down. And that's probably a reasonably exceptional circumstance, even though it's something of a comparative advantage and so I expect that I do it a bit more of it than average.

\n

Even so, when I'm trying to work on an object-level problem at any given point I'm probably not doing original meta-level questioning about how to execute these meta-level skills. If I'm bogged down in writing something I may execute my existing meta-level skill of try to step back and look at this from a more abstract level. If that fails then I may have to think about what sort of abstract levels can you view this problem on, and similar problems as opposed to tasks, and in that sense go into original meta-level thinking mode. But one of those meta-level skills I would say is the notion that your meta-level problem comes from an object-level problem, and you're supposed to keep one eye on the object-level problem the whole time you're working on the meta-level.

\n
\n

In his discussion post \"Are you doing what you should be doing?\", Will_Newsome identified what seems to be an important guiding principle of meta-thinking:

\n
\n

Yay for going meta! I should repeat this process until going meta no longer produces time-saving results.

\n
\n

(where \"time-saving results\" can be replaced with \"greater marginal utility\" to obtain a form that is more generally applicable)

\n

Some questions we could explore:

\n\n

(I plan to try to compile the insights and advice here into a top-level post discussing the principles of, and heuristics for, effective meta-level thinking)

\n

 

\n
\n

Edit: Changed minor wording and altered the third question posed.

\n

 

" } }, { "_id": "Gyv2aEjmbf7Lu3Wbx", "title": "Deep Structure Determinism", "pageUrl": "https://www.lesswrong.com/posts/Gyv2aEjmbf7Lu3Wbx/deep-structure-determinism", "postedAt": "2010-10-10T18:54:15.161Z", "baseScore": 5, "voteCount": 8, "commentCount": 6, "url": null, "contents": { "documentId": "Gyv2aEjmbf7Lu3Wbx", "html": "

Sort of a response to: Collapse Postulate

\n
Abstract:  There are phenomena in mathematics where certain structures are distributed \"at random;\" that is, statistical statements can be made and probabilities can be used to predict the outcomes of certain totally deterministic calculations.  These calculations have a deep underlying structure which leads a whole class of problems to behave in the same way statistically, in a way that appears random, while being entirely deterministic.  If quantum probabilities worked in this way, it would not require collapse or superposition.

\n

This is a post about physics, and I am not a physicist.  I will reference a few technical details from my (extremely limited) research in mathematical physics, but they are not necessary to the fundamental concept.  I am sure that I have seen similar ideas somewhere in the comments before, but searching the site for \"random + determinism\" didn't turn much up so if anyone recognizes it I would like to see other posts on the subject.  However my primary purpose here is to expose the name \"Deep Structure Determinism\" that jasonmcdowell used for it when I explained it to him on the ride back from the Berkeley Meetup yesterday.

\n

Again I am not a physicist; it could be that there is a one or two sentence explanation for why this is a useless theory--of course that won't stop the name \"Deep Structure Determinism\" from being aesthetically pleasing and appropriate.

\n

For my undergraduate thesis in mathematics, I collected numerical evidence for a generalization of the Sato-Tate Conjecture.  The conjecture states, roughly, that if you take the right set of polynomials, compute the number of solutions to them over finite fields, and scale by a consistent factor, these results will have a probability distribution that is precisely a semicircle.

\n

The reason that this is the case has something to do with the solutions being symmetric (in the way that y=x^2 if and only if y=(-x)^2 is a symmetry of the first equation) and their group of symmetries being a circle.  And stepping back one step, the conjecture more properly states that the numbers of solutions will be roots of a certain polynomial which will be the minimal polynomial of a random matrix in SU(2).

\n

That is at least as far as I follow the mathematics, if not further.  However, it's far enough for me to stop and do a double take.

\n

A \"random matrix?\"  First, what does it mean for a matrix to be random?  And given that I am writing up a totally deterministic process to feed into a computer, how can you say that the matrix is random?

\n

A sequence of matrices is called \"random\" if, when you integrate over that sequence, your integral converges to integrating over an entire group of matrices.  Because matrix groups are often smooth manifolds, they are designed to be integrated over, and this ends up being sensible.  However a more practical characterization, and one that I used in the writeup for my thesis, is that if you take a histogram of the points you are measuring, the histogram's shape should converge to the shape of the group--that is, if you're looking at the matrices that determine a circle, your histogram should look more and more like a semicircle as you do more computing.  That is, you can have a probability distribution over the matrix space for where your matrix is likely to show up.

\n

The actual computation that I did involved computing solutions to a polynomial equation--a trivial and highly deterministic procedure.  I then scaled them and placed them in the histogram.  If I had not known that these numbers were each coming from a specific equation, I would have said that they were random; they jumped around through the possibilities, but they were concentrated around the areas of higher probability.
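A minimal sketch of this kind of computation (mine, not the thesis code; the curve below is a hypothetical choice assumed to be non-CM): count points on y² = x³ + Ax + B over F_p for many primes, normalize the trace by 2√p, and histogram the results, which should fill out a semicircle.

```python
# A sketch under stated assumptions, not the thesis code: histogram the
# normalized traces a_p / (2*sqrt(p)) of a (hypothetical, assumed non-CM)
# elliptic curve y^2 = x^3 + A*x + B over F_p for many primes p.
import math
from collections import Counter

A, B = -1, 1  # hypothetical curve y^2 = x^3 - x + 1

def legendre(n, p):
    # Legendre symbol (n/p) for an odd prime p: returns 1, -1, or 0.
    n %= p
    if n == 0:
        return 0
    return 1 if pow(n, (p - 1) // 2, p) == 1 else -1

def normalized_trace(p):
    # #E(F_p) = p + 1 + s with s = sum of Legendre symbols, so a_p = -s.
    s = sum(legendre(x * x * x + A * x + B, p) for x in range(p))
    return -s / (2 * math.sqrt(p))

def primes(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [p for p, is_p in enumerate(sieve) if is_p]

# Skip p = 2 and primes of bad reduction (those dividing the discriminant).
traces = [normalized_trace(p) for p in primes(2000)
          if p > 2 and (4 * A ** 3 + 27 * B ** 2) % p != 0]

# Crude text histogram; for a non-CM curve this should look like a semicircle.
bins = Counter(round(t, 1) for t in traces)
for edge in sorted(bins):
    print(f'{edge:+.1f} ' + '#' * bins[edge])
```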

\n

So, bringing this back to quantum physics:  I am given to understand that quantum mechanics involves a lot of random matrices.  These random matrices give the impression of being \"random\" in that it seems like there are lots of possibilities, and one must get \"chosen\" at the end of the day.  One simple way to deal with this is to postulate many worlds, wherein no one choice has a special status.

\n

However my experience with random matrices suggests that there could just be some series of matrices, which satisfies the definition of being random, but which is inherently determined (in the way that the Jacobian of a given elliptic curve is \"determined.\")  If all quantum random matrices were selected from this list, it would leave us with the subjective experience of randomness, and given that this sort of computation may not be compressible, the expectation of dealing with these variables as though they are random forever.  It would also leave us in a purely deterministic world, which does not branch, which could easily be linear, unitary, differentiable, local, symmetric, and slower-than-light.

" } }, { "_id": "fN6eXAPPkejNZY9CQ", "title": "Rationalist Aikido Trick", "pageUrl": "https://www.lesswrong.com/posts/fN6eXAPPkejNZY9CQ/rationalist-aikido-trick", "postedAt": "2010-10-10T16:45:38.487Z", "baseScore": 23, "voteCount": 21, "commentCount": 23, "url": null, "contents": { "documentId": "fN6eXAPPkejNZY9CQ", "html": "

I once met a guy who practiced aikido and explained a little about it to me.  Aikido is a martial art, but it focuses in particular on what I'd think of as \"body hacking.\"  There are ways to make your opponent trip and fall, or use his strength against himself; there are ways to pick up and throw a much heavier and stronger opponent.  Of course, because of this, there's often a lot of mystery and hokum surrounding aikido, but this guy said he belonged to a particular branch where, instead of venerating tradition, they try to come up with new \"body hacks\" and have worldwide meetings to test and compare them.  

\n

The relevance to LessWrong is tangential, except that this seems like a model of rationalism applied to things that seem like magic: it's a good example of how one should behave when faced with the inexplicable.

\n

This dude demonstrated (and taught me to perform) a \"trick\" that I would not have believed possible.

\n

First, hold out your arm straight and stiff and ask someone to push it down.  You can offer resistance, but if your opponent is strong enough, or if enough people are pushing down on your arm, you'll eventually fail.

\n

Now, instead of holding your arm out stiff, make it as loose and floppy as you can.  Then hold it out, but think of \"reaching\" through the air, but without moving.  (As a visual aid, Aikido Guy held out a Coke bottle and asked me to hold my arm out while thinking of reaching for the bottle.)  Now get someone to push your arm down.  You are much, much stronger.  And you don't feel like you're expending any effort.  Nobody could push my arm down.  Aikido Guy, who is very small, told me that he'd once gotten three varsity football players hanging off his arm without budging it.

\n

Aikido Guy was curious about this mystery, and luckily he was on a university campus, so he asked to be wired up to an MRI while he tried holding his arm out both ways. Apparently there was a real difference in the way the neurons fired. When he was trying to hold his arm out \"stiffly,\" his brain stimulated all the muscle fibers in his arm, but intermittently (compared to the timescale of neuronal activity.)  When he was \"reaching,\" his brain stimulated a smaller fraction of the arm muscles, but continuously.  Apparently the former kind of muscle stimulation is more tiring and weaker than the latter.

\n

So, although we don't really understand what's going on, we do know that there's a physical explanation for it. (Do any of you have a better explanation?)

" } }, { "_id": "xjqStp6LBWboB49qJ", "title": "Where else are you posting?", "pageUrl": "https://www.lesswrong.com/posts/xjqStp6LBWboB49qJ/where-else-are-you-posting-0", "postedAt": "2010-10-10T16:28:37.969Z", "baseScore": 2, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "xjqStp6LBWboB49qJ", "html": "

As a result of XiXiDu's massive Resources and References post, I just found out about Katja Grace's very pleasant Meteuphoric blog.

\n

I'm curious about what else LessWrongians are posting at other sites, and if there's interest, I'll make this into a top-level post.

\n

I also post at Input Junkie.

" } }, { "_id": "TCZsRbqTseH2TyJ74", "title": "Where else are you posting?", "pageUrl": "https://www.lesswrong.com/posts/TCZsRbqTseH2TyJ74/where-else-are-you-posting", "postedAt": "2010-10-10T16:22:25.727Z", "baseScore": 10, "voteCount": 9, "commentCount": 12, "url": null, "contents": { "documentId": "TCZsRbqTseH2TyJ74", "html": "

As a result of XiXiDu's massive Resources and References post, I just found out about Katja Grace's very pleasant Meteuphoric blog.

\n

I'm curious about what else LessWrongians are posting at other sites, and if there's interest, I'll make this into a top-level post.

\n

I also post at Input Junkie.

" } }, { "_id": "TNHQLZK5pHbxdnz4e", "title": "References & Resources for LessWrong", "pageUrl": "https://www.lesswrong.com/posts/TNHQLZK5pHbxdnz4e/references-and-resources-for-lesswrong", "postedAt": "2010-10-10T14:54:13.514Z", "baseScore": 168, "voteCount": 135, "commentCount": 104, "url": null, "contents": { "documentId": "TNHQLZK5pHbxdnz4e", "html": "

A list of references and resources for LW

\n

Updated: 2011-05-24

\n\n

Summary

\n

Do not flinch: most of LessWrong can be read and understood by people with less than a secondary-school education. (And Khan Academy followed by BetterExplained, plus the help of Google and Wikipedia, ought to be enough to let anyone read anything directed at the scientifically literate.) Most of these references aren't prerequisites, and only a small fraction are pertinent to any particular post on LessWrong. Do not be intimidated; just go ahead and start reading the Sequences if all this sounds too long. It's much easier to understand than this list makes it look.

\n

Nevertheless, as it says in the Twelve Virtues of Rationality, scholarship is a virtue, and in particular:

\n
\n

It is especially important to eat math and science which impinges upon rationality: Evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory.

\n
\n

\n

Contents

\n\n\n\n\n\n\n
\n\n
\n

 

\n

LessWrong.com

\n

This list is hosted on LessWrong.com, a community blog devoted to refining the art of human rationality - the art of thinking. If you follow the links below you'll learn more about this community. It is one of the most important resources you'll ever come across if your aim is to get what you want, if you want to win. It shows you that there is more to most things than meets the eye, but more often than not much less than you think. It shows you that even smart people can be completely wrong but that most people are not even wrong. It teaches you to be careful in what you emit and to be skeptical of what you receive. It doesn't tell you what is right, it teaches you how to think and to become less wrong. And to do so is in your own self interest because it helps you to attain your goals, it helps you to achieve what you want.

\n

Overview

\n\n

Why read Less Wrong?

\n

A few articles exemplifying in detail what you can expect from reading Less Wrong, why it is important, what you can learn and how it does help you.

\n\n

Artificial Intelligence

\n
\n

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an \"intelligence explosion,\" and the intelligence of man would be left far behind. — I. J. Good, \"Speculations Concerning the First Ultraintelligent Machine\"

\n
\n

General

\n\n

Friendly AI

\n
The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else. — Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk
\n\n

Machine Learning

\n

Not essential, but a valuable addition for anyone who's more than superficially interested in AI and machine learning.

\n\n

The Technological Singularity

\n
The term “Singularity” had a much narrower meaning back when the Singularity Institute was founded. Since then the term has acquired all sorts of unsavory connotations. — Eliezer Yudkowsky
\n\n

Heuristics and Biases

\n
\n

One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision. — Bertrand Russell

\n
\n
\n

Ignorance more frequently begets confidence than does knowledge. — Charles Darwin

\n
\n

The heuristics and biases program in cognitive psychology tries to work backward from biases (experimentally reproducible human errors) to heuristics (the underlying mechanisms at work in the brain).

\n\n

Mathematics

\n

Learning Mathematics

\n
\n

Here's a phenomenon I was surprised to find: you'll go to talks, and hear various words, whose definitions you're not so sure about. At some point you'll be able to make a sentence using those words; you won't know what the words mean, but you'll know the sentence is correct. You'll also be able to ask a question using those words. You still won't know what the words mean, but you'll know the question is interesting, and you'll want to know the answer. Then later on, you'll learn what the words mean more precisely, and your sense of how they fit together will make that learning much easier. The reason for this phenomenon is that mathematics is so rich and infinite that it is impossible to learn it systematically, and if you wait to master one topic before moving on to the next, you'll never get anywhere. Instead, you'll have tendrils of knowledge extending far from your comfort zone. Then you can later backfill from these tendrils, and extend your comfort zone; this is much easier to do than learning \"forwards\". (Caution: this backfilling is necessary. There can be a temptation to learn lots of fancy words and to use them in fancy sentences without being able to say precisely what you mean. You should feel free to do that, but you should always feel a pang of guilt when you do.) — Ravi Vakil

\n
\n\n

Basics

\n\n

General

\n\n

Probability

\n
\n

Probabilities express uncertainty, and it is only agents who can be uncertain. A blank map does not correspond to a blank territory. Ignorance is in the mind. — Eliezer Yudkowsky

\n
\n

Math is fundamental, and not just for LessWrong. Bayes' Theorem in particular is essential to understanding the reasoning underlying most of the writings on LW.

\n

\"Bayes'

\n\n

Logic

\n\n

Foundations

\n
\n

All the limitative theorems of metamathematics and the theory of computation suggest that once the ability to represent your own structure has reached a certain critical point, that is the kiss of death: it guarantees that you can never represent yourself totally. Gödel’s Incompleteness Theorem, Church’s Undecidability Theorem, Turing’s Halting Theorem, Tarski’s Truth Theorem — all have the flavour of some ancient fairy tale which warns you that “To seek self-knowledge is to embark on a journey which … will always be incomplete, cannot be charted on any map, will never halt, cannot be described.” — Douglas Hofstadter 1979

\n
\n\n

Miscellaneous

\n\n

Decision theory

\n
It is precisely the notion that Nature does not care about our algorithm, which frees us up to pursue the winning Way - without attachment to any particular ritual of cognition, apart from our belief that it wins. Every rule is up for grabs, except the rule of winning. — Eliezer Yudkowsky
\n

Remember that any heuristic is bound to certain circumstances. If you want X from agent Y, and the rule is that Y only gives you X if you are a devoted irrationalist, then the winning move is to be irrational. Under certain circumstances what is irrational may be rational and what is rational may be irrational. Paul K. Feyerabend said: \"All methodologies have their limitations and the only ‘rule’ that survives is ‘anything goes’.\"

\n\n

Game Theory

\n
\n

Game theory is the study of the ways in which strategic interactions among economic agents produce outcomes with respect to the preferences (or utilities) of those agents, where the outcomes in question might have been intended by none of the agents. — Stanford Encyclopedia of Philosophy

\n
\n\n

Programming

\n
\n

With Release 33-9117, the SEC is considering substitution of Python or another programming language for legal English as a basis for some of its regulations. — Will Wall Street require Python?

\n
\n

Programming knowledge is not mandatory for LessWrong, but you should be able to interpret basic pseudocode, as you will come across various snippets of code in discussions and top-level posts outside of the main sequences.

\n

Python

\n

Python is a general-purpose high-level dynamic programming language.

\n\n

Haskell

\n

Haskell is a standardized, general-purpose purely functional programming language, with non-strict semantics and strong static typing.

\n\n

General

\n\n

Computer science

\n
\n

The introduction of suitable abstractions is our only mental aid to organize and master complexity. — Edsger W. Dijkstra

\n
\n

One of the fundamental premises on LessWrong is that a universal computing device can simulate every physical process and that we therefore should be able to reverse engineer the human brain as it is fundamentally computable. That is, intelligence and consciousness are substrate-neutral.

\n\n

(Algorithmic) Information Theory

\n\n

Physics

\n
\n

A poet once said, \"The whole universe is in a glass of wine.\" We will probably never know in what sense he meant that, for poets do not write to be understood. But it is true that if we look at a glass of wine closely enough we see the entire universe. — Richard Feynman

\n
\n

General

\n\n

General relativity

\n
\n

You do not really understand something unless you can explain it to your grandmother. — Albert Einstein

\n
\n\n

Quantum physics

\n
\n

An electron is not a billiard ball, and it’s not a crest and trough moving through a pool of water. An electron is a mathematically different sort of entity, all the time and under all circumstances, and it has to be accepted on its own terms. The universe is not wavering between using particles and waves, unable to make up its mind. It’s only human intuitions about QM that swap back and forth. — Eliezer Yudkowsky

\n
\n
\n

I am not going to tell you that quantum mechanics is weird, bizarre, confusing, or alien. QM is counterintuitive, but that is a problem with your intuitions, not a problem with quantum mechanics. Quantum mechanics has been around for billions of years before the Sun coalesced from interstellar hydrogen. Quantum mechanics was here before you were, and if you have a problem with that, you are the one who needs to change. QM sure won’t. There are no surprising facts, only models that are surprised by facts; and if a model is surprised by the facts, it is no credit to that model. — Eliezer Yudkowsky

\n
\n\n

Foundations

\n\n

Evolution

\n
\n

(Evolution) is a general postulate to which all theories, all hypotheses, all systems must henceforward bow and which they must satisfy in order to be thinkable and true. Evolution is a light which illuminates all facts, a trajectory which all lines of thought must follow — this is what evolution is. — Pierre Teilhard de Chardin

\n
\n\n

Philosophy

\n
There is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination. — Daniel Dennett, Darwin's Dangerous Idea, 1995.
\n
Philosophy is a battle against the bewitchment of our intelligence by means of language. — Wittgenstein
\n

General

\n\n

The Mind

\n
Everything of beauty in the world has its ultimate origins in the human mind. Even a rainbow isn't beautiful in and of itself. — Eliezer Yudkowsky
\n\n

Epistemology

\n

Levels of epistemic accuracy.

\n\n

Linguistics

\n\n

Neuroscience

\n\n

General Education

\n\n

Miscellaneous

\n

Not essential, but a good preliminary to reading LessWrong, and in some cases helpful for making valuable contributions in the comments. Many of the concepts in the following works are often mentioned on LessWrong or are the subject of frequent discussions.

\n\n

Concepts

\n

Elaboration of miscellaneous terms, concepts and fields of knowledge you might come across in some of the subsequent and more technical advanced posts and comments on LessWrong. The following concepts are frequently discussed but not necessarily supported by the LessWrong community. Those concepts that are controversial are labeled M.

\n\n

Websites

\n

Relevant websites, news and otherwise.

\n\n

Fun & Fiction

\n

The following are relevant works of fiction or playful treatments of fringe concepts. That means, do not take these works at face value.

\n

Accompanying text: The Logical Fallacy of Generalization from Fictional Evidence

\n

\"Memetic

\n

Fiction

\n\n

Fun

\n\n

Go

\n

A popular board game played and analysed by many people in the LessWrong and general AI crowd.

\n\n
\n

Note:

\n

This list is a work in progress. I will try to constantly update and refine it.

\n

If you've anything to add or correct (e.g. a broken link), please comment below and I'll update the list accordingly.

" } }, { "_id": "eWBiHQr4k9R8nPRJk", "title": "Strategies for dealing with emotional nihilism", "pageUrl": "https://www.lesswrong.com/posts/eWBiHQr4k9R8nPRJk/strategies-for-dealing-with-emotional-nihilism", "postedAt": "2010-10-10T13:31:08.675Z", "baseScore": 39, "voteCount": 39, "commentCount": 175, "url": null, "contents": { "documentId": "eWBiHQr4k9R8nPRJk", "html": "

I asked a question in the discussion section a little bit ago and got very productive responses.  What follows is mostly a paraphrase of people's comments.

\n
\n

\n

From time to time, like Pierre, I don't care.  I get emotionally nihilistic.  I find myself doing things that are morally awful in the conventional meaning of the word: procrastinating, sneaking other people's food out of the communal fridge, being casually unkind and unhelpful, breaking promises.  I don't doubt that these are awful things to do. I figure any moral theory worth its salt will condemn them -- except the moral theory \"I don't care,\" which sometimes seems strangely compelling.  

\n

What I want to know is: what goes through people's heads when they're motivated not to be awful?  What could you tell someone as a reason not to be awful?  If you are, in fact, not awful, why aren't you awful?  What do you think, or feel, when you care about things?  What would you tell someone who claims \"I just don't care\" if you wanted to get her to care?  What would you tell yourself, in your nihilistic moments?

\n
\n
\n

The (more) trivial utility function

\n

Nihilism feels like a utility function where everything is set to the value zero.  Landing that job offer or school admission letter?  That's worth nothing.  Making someone smile?  Worth nothing.  Being in good physical shape?  Worth nothing.  Living according to moral values?  Worth nothing.  Nothing is fun, or appealing, or worth looking forward to.

\n

The thing about a nihilistic mindset is that you can't really argue your way out of it (at least, I've never succeeded.)  It's perfectly logically coherent.  A function that's constant at zero is still a function.  You can have a function where all the best things in life, all the \"peaks,\" matter much less to you.

\n

Edit: Vladimir_Nesov comments that it's not really a zero utility function because even a nihilistic person doesn't behave totally at random, and can usually keep up some minimal degree of self maintenance. This is a fair point. It's more accurate to say that it feels like nothing matters, or at least that the desire for goal-directed behavior is significantly diminished.  Maybe it's not a flat function, but a flatter function, where the things you used to value the most seem empty.

\n

Most of us aren't in a nihilistic state all the time.  But we can have days like that.  Or weeks, or months, or years.  (I had a year when I was almost always in this state.)  And until you snap out of it, you can do a lot of damage, to your career, your relationships, your body, and your moral values.  So how do you avoid all that?

\n

Tactic 1: Get rid of the nihilism

\n

Nihilism doesn't feel good.  You don't have any positive emotions.  The SEEK switch is turned off in your brain.  It's really in your interest to escape this flat utility function.

\n

So one thing you can do is to try to find a physiological switch.  Take a nap, get some exercise, have something to eat.  I've also found that what you eat matters: carbohydrates make me a bit more emotionally \"down.\"  Sometimes the physiological is enough.  Sometimes you need a cognitive switch: start doing something absorbing, like reading a book or watching a movie or talking to a person.  Because your motivation is very low here, you don't want to be ambitious.  Do something that's easily available, or something that's already a habit.  (I run enough on a regular basis that \"go for a run\" doesn't take much more motivation than \"go to Subway and get a sandwich\" -- but if you're not a runner, then running is a totally unreasonable choice.)

\n

Tactic 2: Plan for nihilism

\n

If this has happened to you before, and you know it could happen again, you need to anticipate and plan for those times when you can't bring yourself to care about anything.  First, you need to prepare by \"stocking up\" on things that tend to help you escape a nihilistic mood.  Keep the right kind of food easily available.  Get adequate sleep over the long term.  Make a habit of exercise (so that it's available as an option for you when you're \"down.\")  Keep absorbing activities available: have books around you, and also have friends and social commitments that you can't easily blow off.  

\n

The other way of planning for nihilism is to have ironclad rules and habits, so that you can do pretty much the right thing even when you're not in a mood to care.  Being rigorous when you're in a good mood should carry over somewhat to when you're feeling nihilistic.  If you NEVER miss deadlines or play hooky, force of habit will carry you through even in your bad times.  If you NEVER steal or make hurtful remarks, you're less likely to start when you get in a foul mood.  Think of it this way: even now, you probably don't do just anything when you feel nihilistic -- it's unlikely that you murder people, no matter how bad you feel, because that's totally outside your range of possibilities.  If something is totally outside your range of possibilities, if you normally never, ever do it, you're not very likely to do it for the first time when you're having a bad day.  On the other hand, if something is an occasional vice of yours, you're liable to do a lot of it in bad times.

\n

Tactic 3: Heuristics for escaping nihilistic thinking

\n

These are things to remind yourself, or reflect on, that seem to be shortcuts to modifying your utility function away from flat.

\n

 

\n\n

 

\n

You may have your own heuristic -- something that reliably makes you care more, some kind of trigger.

\n

Tactic 4: Avoid \"rock-bottom rituals.\"

\n

This didn't come up in discussion, but it occurred to me in my own life. Sometimes you have a \"rock-bottom ritual,\" something you do when you start to feel terrible, that sort of cements the feeling.  It's an official declaration of nihilistic misery.  The prototypical example is drinking a lot.  I don't do that -- I listen to Wagner and eat unhealthy food.  You may have something different.  The problem is, rock-bottom rituals prolong your nihilistic periods, when what you really want is to shorten them.  Saying \"Ok, it's time to break out the Jack Daniels\" (or the Tannhauser and peanut butter) is just about the worst thing you can do for yourself.

\n

Hopefully this will help.  I'm still trying to figure out how best to manage emotional nihilism.  It seems to be common, but it also seems to be more of a problem for some people than for others. I'd like to see any further contributions from LessWrongers!

" } }, { "_id": "nvM7F2LEqN5QKhrFW", "title": "Free Hard SF Novels & Short Stories", "pageUrl": "https://www.lesswrong.com/posts/nvM7F2LEqN5QKhrFW/free-hard-sf-novels-and-short-stories", "postedAt": "2010-10-10T12:12:06.166Z", "baseScore": 28, "voteCount": 25, "commentCount": 18, "url": null, "contents": { "documentId": "nvM7F2LEqN5QKhrFW", "html": "

Novels

\n

Blindsight, Peter Watts

\n
\n

Eighty years in the future, Earth becomes aware of an alien presence when thousands of micro-satellites surveil the Earth; through good luck, the incoming alien vessel is detected, and the ship Theseus, with its artificial intelligence captain and crew of five, are sent out to engage in first contact with the huge alien vessel called Rorschach. As they explore the vessel and attempt to analyze it and its inhabitants, the narrator considers his life and strives to understand himself and ponders the nature of intelligence and consciousness, their utility, and what an alien mind might be like. Eventually the crew realizes that they are greatly outmatched by the vessel and its unconscious but extremely capable inhabitants.

\n

When the level of this threat becomes clear, Theseus runs a kamikaze mission using its antimatter as a payload, while Siri returns to Earth, which, as he draws nearer, has apparently been overrun by vampires. Non-sapient creatures are beginning to exterminate what may be the only bright spark of consciousness in the universe.

\n
\n

Ventus, Karl Schroeder

\n
\n

Ventus is well-written and fun, as well as having IME the most realistic treatment of nanotech I've yet encountered in SF. Schroeder is definitely an author to watch (this is his first novel). The setup is that some agents from the local galactic civilization have come to an off-limits world hunting a powerful cyborg who may be carrying the last copy of an extremely dangerous AI god. The tough part is that the world is off-limits because the nanotech on that world is controlled by AIs that destroy all technology not made by them, and aren't terribly human-friendly.

\n
\n

Crisis in Zefra, Karl Schroeder

\n
\n

In spring 2005, the Directorate of Land Strategic Concepts of National Defense Canada (that is to say, the army) hired me to write a dramatized future military scenario.  The book-length work, Crisis in Zefra, was set in a mythical African city-state, about 20 years in the future, and concerned a group of Canadian peacekeepers who are trying to ready the city for its first democratic vote while fighting an insurgency.  The project ran to 27,000 words and was published by the army as a bound paperback book.

\n
\n

Accelerando, Charles Stross

\n
\n

The first three stories follow the character of \"venture altruist\" Manfred Macx starting in the early 21st century, the second three stories follow his daughter Amber, and the final three focus largely on her son Sirhan in the completely transformed world at the end of the century.

\n

In Accelerando, the planets of the solar system are dismantled to form a Matrioshka brain, a vast computational device inhabited by minds inconceivably more complex than naturally evolved intelligences such as human beings. This proves to be a normal stage in the life cycle of an inhabited solar system; the galaxies are filled with Matrioshka brains. Intelligent consciousnesses outside of Matrioshka brains may communicate via wormhole networks.

\n

The notion that the universe is dominated by a communications network of superintelligences bears comparison with Olaf Stapledon's early science-fiction novel Star Maker (1937), although Stapledon's advanced civilizations communicate psychically rather than informatically.

\n
\n

The Lifecycle of Software Objects, Ted Chiang

\n
\n

A triumphant combination of the rigorous extrapolation of artificial intelligence and artificial life, two of the high concepts of contemporary SF, with an exploration of its consequences for the ordinary people whose lives it derails. Ana Alvarado is a former zookeeper turned software tester. When Blue Gamma offers her a job as animal trainer for their digients--digital entities, spawned by genetic algorithms to provide pets for players in the future virtual reality of Data Earth--she discovers an unexpected affinity for her charges. So does Derek Brooks, an animator who designs digient body parts. The market for digients develops and expands, cools and declines after the pattern of the software industry. Meanwhile Ana, Derek, and their friends become increasingly attached to their cute and talkative charges, who are neither pets nor children but something wholly new. But as Blue Gamma goes bust and Data Earth itself fades into obsolescence, Ana and the remaining digient keepers face a series of increasingly unpleasant dilemmas, their worries sharpened by their charges' growing awareness of the world beyond their pocket universe, and the steady unwinding of their own lives and relationships into middle-aged regrets for lost opportunities. Keeping to the constraints of a novella while working on a scale of years is a harsh challenge. Chiang's prose is sparse and austere throughout, relying on hints and nudges to provide context. At times, the narrative teeters on the edge of arid didacticism; there are enough ideas to fill a lesser author's trilogy, but much of the background is present only by implication, forcing the reader to work to fill in the blanks. (Indeed, this story may be impenetrable to readers who aren't at least passingly familiar with computers, the Internet, and virtual worlds such as Second Life.)

\n
\n

Short Stories

\n

The Island, Peter Watts

\n
\n

\"The Island\" is a standalone novelette. It is also one episode in a projected series of connected tales (a lá Stross's Accellerando or Bradbury's The Martian Chronicles) that start about a hundred years from now and extends unto the very end of time. And in some parallel universe where I not only get a foothold into the gaming industry but actually keep one, it is a mission level for what would be, in my opinion, an extremely kick-ass computer game.

\n
\n

The Things, Peter Watts

\n

Short Stories by Peter Watts

\n

Divided by Infinity, Robert Charles Wilson

\n
\n

In the year after Lorraine's death I contemplated suicide six times. Contemplated it seriously, I mean: six times sat with the fat bottle of Clonazepam within reaching distance, six times failed to reach for it, betrayed by some instinct for life or disgusted by my own weakness.

\n

I can't say I wish I had succeeded, because in all likelihood I did succeed, on each and every occasion. Six deaths. No, not just six. An infinite number.

\n

Times six.

\n

There are greater and lesser infinities.

\n

But I didn't know that then.

\n
\n

Crystal Nights, Greg Egan

\n

Short Stories by Greg Egan

\n

The Finale of the Ultimate Meta Mega Crossover 

\n
\n

Vernor Vinge x Greg Egan crackfic.

\n

Concepts contained in this story may cause SAN Checking in any mind not inherently stable at the third level of stress. Story may cause extreme existential confusion. Story is insane. The author recommends that anyone reading this story sign up with Alcor or the Cryonics Institute to have their brain preserved after death for later revival under controlled conditions. Readers not already familiar with this author should be warned that he is not bluffing.

\n
\n

Three Worlds Collide

\n
\n

A story to illustrate some points on naturalistic metaethics and diverse other issues of rational conduct.

\n

Features Baby-Eating Aliens.

\n
" } }, { "_id": "8vSuZcujKEsK4inRW", "title": "\"The Life Cycle of Software Objects\" by Chiang is available for free", "pageUrl": "https://www.lesswrong.com/posts/8vSuZcujKEsK4inRW/the-life-cycle-of-software-objects-by-chiang-is-available", "postedAt": "2010-10-10T09:49:43.738Z", "baseScore": 10, "voteCount": 8, "commentCount": 4, "url": null, "contents": { "documentId": "8vSuZcujKEsK4inRW", "html": "

I recently recommended this novella. Now you don't need to buy the hardcover or wait for it to be reprinted. You can read it here.

" } }, { "_id": "rCxW6Txm2KaqfXNdo", "title": "Sleeping Beauty as a decision problem (solved)", "pageUrl": "https://www.lesswrong.com/posts/rCxW6Txm2KaqfXNdo/sleeping-beauty-as-a-decision-problem-solved", "postedAt": "2010-10-10T03:15:08.755Z", "baseScore": 7, "voteCount": 5, "commentCount": 3, "url": null, "contents": { "documentId": "rCxW6Txm2KaqfXNdo", "html": "

EDIT: User:Misha solved it

\n
\n

 

\n

First, here's the Sleeping Beauty problem, from Wikipedia:

\n
\n

The paradox imagines that Sleeping Beauty volunteers to undergo the following experiment. On Sunday she is given a drug that sends her to sleep. A fair coin is then tossed just once in the course of the experiment to determine which experimental procedure is undertaken. If the coin comes up heads, Beauty is awakened and interviewed on Monday, and then the experiment ends. If the coin comes up tails, she is awakened and interviewed on Monday, given a second dose of the sleeping drug, and awakened and interviewed again on Tuesday. The experiment then ends on Tuesday, without flipping the coin again. The sleeping drug induces a mild amnesia, so that she cannot remember any previous awakenings during the course of the experiment (if any). During the experiment, she has no access to anything that would give a clue as to the day of the week. However, she knows all the details of the experiment.

\n


Each interview consists of one question, \"What is your credence now for the proposition that our coin landed heads?\"

\n
\n


I was looking at AlephNeil's old post about UDT and encountered this diagram depicting the Sleeping Beauty problem as a decision problem.

\n

\"\"

This diagram is underspecified, though. There are no specific payoffs in the boxes and it's not obvious what actions the arrows mean. So I tried to figure out some ways to transform the Sleeping Beauty problem into a concrete decision problem. I also made edited versions of AlephNeil's diagram for versions 1 and 2.

\n
\n

The gamemaster puts Sleeping Beauty to sleep on Sunday. He uses a sleeping drug that causes mild amnesia such that upon waking she won't be able to remember any previous awakenings that may have taken place during the course of the game. The gamemaster flips a coin. If heads, he wakes her up on monday only. If tails, he wakes her up on monday and tuesday.

\n

Version 1

\n

Upon each awakening, the gamemaster asks Sleeping Beauty to guess which way the coin landed. For each correct guess, she's awarded $1000 at the end of the game. diagram

\n

Version 2

\n

Upon each awakening, the gamemaster asks Sleeping Beauty to guess which way the coin landed. If all of her guesses are correct, she's awarded $1000 at the end of the game. diagram

\n

Version 3

\n

Upon each awakening, the gamemaster asks Sleeping Beauty for her credence as to whether the coin landed heads. For each awakening, if the coin landed x, and she declares a credence of p that it landed x, she's awarded p*$1000 at the end of the game.

\n

Version 4

\n

Upon each awakening, the gamemaster asks Sleeping Beauty for her credence as to whether the coin landed heads. At the end of the game, her answers are averaged to a single probability p, and she's awarded p*$1000.
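A minimal Monte Carlo sketch (mine, not from the post) for Versions 1 and 2: because of the amnesia, Beauty's pure strategies reduce to \"always guess heads\" or \"always guess tails\", and this just estimates the expected payoff of each under each payoff rule.

```python
# A minimal sketch (not from the post): estimate the expected payoff of the
# two pure strategies available to an amnesiac Beauty under Versions 1 and 2.
import random

def expected_payoff(version, guess, trials=100_000):
    total = 0
    for _ in range(trials):
        coin = random.choice(['heads', 'tails'])
        awakenings = 1 if coin == 'heads' else 2
        correct = awakenings if guess == coin else 0
        if version == 1:
            total += 1000 * correct                        # $1000 per correct guess
        else:
            total += 1000 if correct == awakenings else 0  # $1000 only if all correct
    return total / trials

for version in (1, 2):
    for guess in ('heads', 'tails'):
        print(version, guess, round(expected_payoff(version, guess)))
```

Running it (or doing the arithmetic directly) gives roughly $500 for always-heads versus $1000 for always-tails under Version 1, and $500 for either pure strategy under Version 2.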

\n
\n

What's interesting is that while the suggested answers for the classic Sleeping Beauty problem are (1/2) and (1/3), for neither version 1 nor 2 is the correct answer to guess heads every second or third time, and for neither version 3 nor 4 is the correct answer to declare a credence of (1/2) or (1/3). The correct answers are (correct me if I'm wrong, I got these by looking at AlephNeil-style UDT diagrams and doing back-of-the-envelope calculations):

\n\n
\n

Is there any way to transform Sleeping Beauty into a decision problem such that the correct answer in some sense is either (1/2) or (1/3)?

Is there a general procedure for transforming problems about credence into decision problems?

" } }, { "_id": "3dmBf9b75HAX3nEPQ", "title": "Resveratrol continues to be useless", "pageUrl": "https://www.lesswrong.com/posts/3dmBf9b75HAX3nEPQ/resveratrol-continues-to-be-useless", "postedAt": "2010-10-10T00:19:08.012Z", "baseScore": 12, "voteCount": 8, "commentCount": 2, "url": null, "contents": { "documentId": "3dmBf9b75HAX3nEPQ", "html": "

From the more-bad-news-for-those-who-want-to-live-forever department: http://pipeline.corante.com/archives/2010/10/08/does_resveratrol_really_work_and_does_srt1720.php

\n
\n

The current study dials that back to levels that could be reached in human dosing. What they saw was no effect on lifespan at 0.5 micromolar, which would be a realistic blood level for humans. When they turned up the concentration to 5 micromolar, there was a slight but apparently real effect of just under 4%. Now, 5 micromolar is a pretty heroic level of resveratrol - I think you could hit that as a peak concentration, but surely not hold it.

\n
\n

Nor is the bad news just for resveratrol:

\n
\n

Oh, and there's another interesting part to this paper. The authors also looked at SRT1720, the resveratrol follow-up from Sirtris that has been the subject of all kinds of arguing in the recent literature. This compound is supposed to be several hundred times more potent than resveratrol itself at SIRT1, although if you've been following the story, you'll know that those numbers are widely believed to be artifacts of the assay conditions. And sure enough, the authors saw no effect on C. elegans lifespan when dosing with physiological concentrations of SRT1720.

\n
\n

 

" } }, { "_id": "h6XN6ibPQJH3a8QpK", "title": "Four Ways to Do Good (and four fallacies)", "pageUrl": "https://www.lesswrong.com/posts/h6XN6ibPQJH3a8QpK/four-ways-to-do-good-and-four-fallacies", "postedAt": "2010-10-09T23:46:16.583Z", "baseScore": 20, "voteCount": 17, "commentCount": 5, "url": null, "contents": { "documentId": "h6XN6ibPQJH3a8QpK", "html": "

I was thinking about ways people \"do good\" (or otherwise achieve their values) and it occurred to me that it makes sense to organize these along two axes: humble/mighty and cooperation/conflict.  

\n

Whenever you organize things into categories, you open yourself up to people questioning whether these are the \"right\" categories, whether they carve reality at the joints; I don't have statistical evidence that these are good categories, only salient examples and a general sense that this \"fits.\"  If you don't think it fits, say so, and I'll have learned something new.

\n

Four Quadrants

\n

Quadrant 1: Help

\n

The humble/cooperation quadrant refers to small acts of helpfulness.  The prototypical action is giving to charity, or comforting a friend. The mindset in this quadrant is an attitude of kindness, gentleness, or benevolence, with great attention to not being presumptuous and accidentally doing more harm than good.   The humble/cooperation attitude doesn't expect great victories, but a lifetime of dedication.  The humble/cooperative person isn't interested in antagonism, but tries to listen to others and understand their point of view. 

\n

Quadrant 2: Invent

\n

The mighty/cooperation quadrant refers to creating brilliant, win/win solutions.  The prototypical mighty/cooperation action is inventing a hugely beneficial new technology, or negotiating a mutually beneficial agreement.  The mindset in this quadrant is an attitude of can-do, creativity, and innovation; it's associated with exchange, trade, and collaboration.  This attitude does expect great victories.  It's focused on fixing problems instead of managing them.1   The mighty/cooperative person thinks antagonism is mostly unnecessary if you come up with good enough solutions.

\n

Quadrant 3: Fight

\n

The mighty/conflict quadrant refers to fighting malicious opponents.  The prototypical mighty/conflict action is, of course, fighting in a war or defending yourself against attack, but it can also refer to non-violent actions: a moderator who bans a troll, for instance, or someone who speaks out against a bully.  The salient point is that this is a way of doing good by stopping the people who mean to do harm.  The mindset in this quadrant is determination, courage, and will to win.  The mighty/conflict person thinks that evil (or at least dangerous) people exist, and that it's absolutely necessary to defeat them, and deluded to try to cooperate with them.

\n

Quadrant 4: Doubt

\n

The humble/conflict quadrant refers to recognizing flaws and fallacies.  The prototypical humble/conflict action is warning people that a proposal will fail or have horrible unintended consequences.  The humble/conflict mindset sees human beings as fallible, flawed, and over-confident -- it is a cynical, or at least guarded, attitude.  Through vigilant attention, we can avert the real catastrophes that result from fallacious thinking. The humble/conflict person sees it as his duty to oppose naive bad ideas, inaccuracies, and hypocrisies; he thinks cooperation is often less feasible than people imagine.

\n

Four Fallacies

\n

Each of these ways of doing good can become a fallacy if it's assumed to be the only way to do good.

\n

Universalized humble/cooperation leads to the assumption that it's always the best choice to be more generous or altruistic.  But sometimes, this is bad game theory and won't work.

\n

Universalized mighty/cooperation leads to the assumption that all problems come with win/win solutions or structural fixes.  But they don't; some games are zero-sum, some issues aren't really \"problems\" to \"solve,\" and it's very common to mistakenly assume you have a brilliant solution when you engage in wishful thinking.

\n

Universalized mighty/conflict leads to the assumption that when something's wrong there's always an evildoer to fight.  Sometimes things go badly when there's no malice at all; sometimes your enemies are not innately evil; sometimes the most productive thing to do has nothing to do with defeating the bad guys.

\n

Universalized humble/conflict leads to the assumption that the best thing to do is always to look for flaws and fallacies.  This can let the perfect be the enemy of the good.  It can have a chilling, demotivating effect and stop people from being as generous, innovative, or brave as they'd otherwise be.

\n

LessWrong and the Ways to Do Good

\n

LessWrong seems to have a high percent of ideas that fall into the humble/conflict quadrant; after all, that's part of its mission.  Understanding cognitive fallacies and preparing for major risks is the special purview of the humble/conflict quadrant. And humble/conflict viewpoints seem underrepresented in the general public, so building a community around that specific perspective is potentially a good idea. But you also see ideas on here that belong to the other quadrants; there have been several posts about charity, which falls under humble/cooperation, and there was Alicorn's \"A Suite of Pragmatic Considerations in Favor of Niceness,\" which also falls under humble/cooperation.  Some of the writing about Friendly AI seems to fall under mighty/cooperation.  I can't think of an example of mighty/conflict writing on LW -- it seems to be a much less common mindset here than in the general population.

\n

Specialization and the Ways to Do Good

\n

Differences in temperaments, skills, and professions probably mean that individual people are more likely to focus on one of the four quadrants more than the others.  Sometimes one perspective just seems more natural or more feasible.  It's dangerous to universalize (to assume there's only one way to do good) but it's probably okay to specialize.  Some people thrive on conflict and some people hate it.  If you happen to have a \"type,\" it's probably best to regard your polar opposite \"type\" as an ally. Humble/cooperative people need mighty/conflict types to remind them that sometimes you have truly determined opponents; mighty/conflict types need humble/cooperative types to remind them that not everything is a battlefield.  Mighty/cooperative types need humble/conflict types to remind them that you can't \"fix\" everything; humble/conflict types need mighty/cooperative types to remind them that ingenious successes do exist. 

\n

 

\n

1This is a distinction I read about from The Hacker's Diet. The idea is that managers manage a problem, taking care of it on an ongoing basis, while engineers fix a problem, making a structural change to get rid of it once and for all.  Which approach is best depends on the problem.  Which approach you prefer depends on whether you're a manager or an engineer.

" } }, { "_id": "6eMdXtba5iwKSuvA2", "title": "Intelligence explosion in plain, vanilla, mixed berry, and coffee flavors", "pageUrl": "https://www.lesswrong.com/posts/6eMdXtba5iwKSuvA2/intelligence-explosion-in-plain-vanilla-mixed-berry-and", "postedAt": "2010-10-09T20:07:13.435Z", "baseScore": 8, "voteCount": 5, "commentCount": 0, "url": null, "contents": { "documentId": "6eMdXtba5iwKSuvA2", "html": "

John Scalzi took a throwaway line in a short rant he wrote about Atlas Shrugged as the basis for a short story about yogurt ruling the world. Enjoy!

" } }, { "_id": "5ToXXdcbtnEFYsEAq", "title": "When is investment procrastination?", "pageUrl": "https://www.lesswrong.com/posts/5ToXXdcbtnEFYsEAq/when-is-investment-procrastination", "postedAt": "2010-10-09T19:00:03.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "5ToXXdcbtnEFYsEAq", "html": "

I suggested recently that the link between procrastination and perfectionism has to do with construal level theory:

\n

When you picture getting started straight away the close temporal distance puts you in near mode, where you see all the detailed impediments to doing a perfect job. When you think of doing the task in the future some time, trade-offs and barriers vanish and the glorious final goal becomes more vivid. So it always seems like you will do a great job in the future, whereas right now progress is depressingly slow and complicated.

\n

This set of thoughts reminds me of those generally present when I consider the likely outcomes of getting further qualifications vs. employment, and of giving my altruistically intended savings to the best cause I can find now vs. accruing interest and spending them later. In general the effect could apply to any question of how long to prepare for something before you go out and do it. Do procrastinators invest more?

\n


\n


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "TzeSaimPJw9fNfWYK", "title": "Boobies and the lottery", "pageUrl": "https://www.lesswrong.com/posts/TzeSaimPJw9fNfWYK/boobies-and-the-lottery", "postedAt": "2010-10-09T18:18:49.371Z", "baseScore": 2, "voteCount": 11, "commentCount": 16, "url": null, "contents": { "documentId": "TzeSaimPJw9fNfWYK", "html": "

So, in the past I have \"donated\" boobie pictures to boobiethon, an online fundraising event for breast cancer research.  This year I entered a drawing for a free custom WordPress theme.  And I won it!

\n

You might think that I'm lucky, but actually when I enter lotteries I'm very calculating.  Once when I was 10, there was a Beanie Baby lottery at the local library.  You could see the jars with the tickets in them for each Beanie Baby.  There was one Beanie Baby that had very few tickets in the jar, so I bought exactly one ticket for it.  And I won the Beanie Baby.

I saw that for this contest, there were 5 WordPress prizes to be awarded in total.  For other contests there was only one.  And I correctly surmised that others would try to win the more desirable prizes.  I also submitted 5 pictures of my boobies, and you got one ticket per boobie picture with a maximum of 5 pictures.  That's 5 entries.  Donating $10 only got you one ticket.  And it cost me nothing :).

It's human nature to enter the lottery for the thing you actually want.  I don't do that.  I enter the lotteries for things I think no one else wants, that have multiple awards, and that have little to no cost.  You're never going to win the monetary prize, because the odds are against you.  You CAN win things if the odds are in your favor.
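A rough sketch of the underlying arithmetic (all numbers hypothetical, not from the post): the chance of winning at least one of m prizes when you hold k of the N tickets in a jar and the winning tickets are drawn without replacement.

```python
# A rough sketch with hypothetical numbers (not from the post): the chance of
# winning at least one of m prizes when you hold k of the N tickets in a jar
# and the m winning tickets are drawn without replacement.
from math import comb

def p_win_at_least_one(N, k, m):
    return 1 - comb(N - k, m) / comb(N, m)

# 5 tickets out of a hypothetical 40 in a low-interest jar with 5 prizes:
print(round(p_win_at_least_one(40, 5, 5), 2))    # ~0.51
# versus 1 ticket out of a hypothetical 500 in a popular jar with 1 prize:
print(round(p_win_at_least_one(500, 1, 1), 3))   # 0.002
```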

" } }, { "_id": "AttkaMkEGeMiaQnYJ", "title": "Discuss: How to learn math?", "pageUrl": "https://www.lesswrong.com/posts/AttkaMkEGeMiaQnYJ/discuss-how-to-learn-math", "postedAt": "2010-10-09T18:07:02.113Z", "baseScore": 18, "voteCount": 16, "commentCount": 27, "url": null, "contents": { "documentId": "AttkaMkEGeMiaQnYJ", "html": "

Learning math is hard. Those of you who have braved some of its depths: what did you discover that allowed you to go deeper?

\n

This is a place to share insights, methods, and tips for learning mathematics effectively, as well as resources that contain this information.

" } }, { "_id": "uuCPHR9pG5nvhGFcb", "title": "Recommended Reading for Friendly AI Research", "pageUrl": "https://www.lesswrong.com/posts/uuCPHR9pG5nvhGFcb/recommended-reading-for-friendly-ai-research", "postedAt": "2010-10-09T13:46:24.677Z", "baseScore": 35, "voteCount": 32, "commentCount": 30, "url": null, "contents": { "documentId": "uuCPHR9pG5nvhGFcb", "html": "

This post enumerates texts that I consider (potentially) useful training for making progress on Friendly AI/decision theory/metaethics.

\n

Rationality and Friendly AI

\n

Eliezer Yudkowsky's sequences and this blog can provide a solid introduction to the problem statement of Friendly AI, giving concepts useful for understanding the motivation for the problem, and disarming endless failure modes that people often fall into when trying to consider the problem.

\n

For a shorter introduction, see

\n\n

Decision theory

\n

The following book introduces an approach to decision theory that seems to be closer to what's needed for FAI than the traditional treatments in philosophy or game theory:

\n\n

Another (more technical) treatment of decision theory from the same cluster of ideas:

\n\n

Following posts on Less Wrong present ideas relevant to this development of decision theory:

\n\n

Mathematics

\n

The most relevant tool for thinking about FAI seems to be mathematics, which teaches you to work with precise ideas (in particular, mathematical logic). For someone starting from a rusty technical background, the following reading list is one way to begin:

\n

[Edit Nov 2011: I no longer endorse scope/emphasis, gaps between entries, and some specific entries on this list.]

\n" } }, { "_id": "TomiDF8JkAXZcj39s", "title": "Quixey Engineering Screening Questions", "pageUrl": "https://www.lesswrong.com/posts/TomiDF8JkAXZcj39s/quixey-engineering-screening-questions", "postedAt": "2010-10-09T10:33:23.188Z", "baseScore": 3, "voteCount": 19, "commentCount": 24, "url": null, "contents": { "documentId": "TomiDF8JkAXZcj39s", "html": "

My startup, Quixey, is looking to hire a couple top-notch software engineers. Quixey is an early-stage stealth startup founded in October 2009. We are launching our beta product this month: An all-platform app directory and \"functional search\" engine that lets users query for software by answering the question: What do you want to do?

\n

We are confident that Quixey's functional search will be qualitatively better than all existing solutions for finding web apps, mobile phone apps, desktop apps, browser extensions, etc. Our prototype returns significantly more relevant search results in head-to-head comparisons with all the iPhone and Android app search solutions that currently exist(!)

\n

Our office is on University Ave in Palo Alto. If you live in the Bay Area and want to join a hot tech startup extremely early (employee #1, high-equity compensation package), and you're better than the average Google engineer, then please try our screening questions. If you're the kind of person we're looking for, the questions shouldn't take you more than a few minutes each.

\n

Questions

\n

1. Write a Python function findInSorted(arr, x). It’s supposed to return the smallest index of a value x in an array arr which, as a precondition, must be sorted from least to greatest. Or, if arr doesn’t contain an element equal to x, the function returns -1. Make the code as beautiful as possible (without sacrificing asymptotically optimal performance characteristics).

\n

2. Write a JavaScript function countTo(n) that counts from 1 to n and pops up an alert for each number (i.e. alert(1), alert(2), ..., alert(n)). Easy, right? Except you're not allowed to use while- or for-loops. (And you're not allowed to trick the interpreter using \"eval\", or dynamically generated <script> elements appended to the DOM tree, or anything like that.)

\n

For problem 2, the time and space requirements of your function should be as good as those of the asymptotically optimal algorithm, even without tail call optimization.

\n

Email your answers to liron@quixey.com and I'll get back to you right away. Please don't post your answers in this thread because that will make my filter really noisy. If you do well on the screening questions, we will want to bring you in for an interview.

" } }, { "_id": "2ALHcAnos5zFreYHf", "title": "Meet science: rationalizing evidence to save beliefs", "pageUrl": "https://www.lesswrong.com/posts/2ALHcAnos5zFreYHf/meet-science-rationalizing-evidence-to-save-beliefs", "postedAt": "2010-10-09T07:00:15.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "2ALHcAnos5zFreYHf", "html": "

This is how science classes mostly went in high school. We would learn about a topic that had been discovered scientifically, for instance that if you add together two particular solutions of ions, some of the ions will precipitate out as a solid salt. Then we would do an experiment, wherein we would add the requisite solutions and get something entirely wrong in its color, smell, quantity, or presence. Then we would write a report with our hypothesis, the contradictory results, and a long discussion about all the mistakes that could be to blame for this unexpected result, and conclude that the real answer was probably still what we hypothesized (since we read that in a book).

\n

Given that they had not taught the children anything about priors, this seems like a strange way to demonstrate science.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "Mr48vuoavJEGRtrtT", "title": "Dual n-back news", "pageUrl": "https://www.lesswrong.com/posts/Mr48vuoavJEGRtrtT/dual-n-back-news", "postedAt": "2010-10-08T22:16:54.856Z", "baseScore": 17, "voteCount": 13, "commentCount": 5, "url": null, "contents": { "documentId": "Mr48vuoavJEGRtrtT", "html": "

A long awaited study on dual n-back has recently come out in pre-publication: http://www.gwern.net/N-back%20FAQ#jaeggi-2010 (For background, read the rest of my FAQ.)

\n

 

\n

It replicates the IQ boost, but it's by the same person as Jaeggi 2008 and has the same issue with the IQ tests being speeded rather than full-time. You can see my argument about this at the DNB mailing list: http://groups.google.com/group/brain-training/browse_frm/thread/c0fe2e1f14b8af06

\n

 

\n

(Meta: is this appropriate for the discussion area? I know some people here are interested in IQ enhancement like DNB promises, but normally I would just drop this into an open thread as a comment, not make a whole quasi-article about it.)

" } }, { "_id": "HTx4KKk5iqMs47E8r", "title": "Bayesian Buddhism: a path to optimal enlightenment", "pageUrl": "https://www.lesswrong.com/posts/HTx4KKk5iqMs47E8r/bayesian-buddhism-a-path-to-optimal-enlightenment", "postedAt": "2010-10-08T21:38:24.538Z", "baseScore": 8, "voteCount": 16, "commentCount": 27, "url": null, "contents": { "documentId": "HTx4KKk5iqMs47E8r", "html": "

I see epistemic rationality as a connection between Bayesian inference and Buddhism. Bayesian inference focuses on the mathematics for updating belief, Bayes' theorem. Buddhism focuses on the human factors of belief, how belief can lead to dukkha (suffering, discontent), and how dukkha can be avoided.

\n

Combining these ideas could give us a practical path to achieving bodhi (enlightenment, optimal belief) from within the context of our human mind (emotional and physical). Call it the optimal enlightenment hypothesis. I am opening this concept up for LW discussion.

\n

Related questions:

\n\n

From recent posts by Luke_Grecki and Will_Newsome, it seems that others are thinking along the same lines.

\n

\n

Epistemic Rationality

\n

To be rational we need to be willing to see the world as it is and not as we want it to be. This requires avoiding emotional attachment to our beliefs and adopting an even approach to evaluating evidence.

\n

See Also:

\n\n

Epistemic rationality as defined by Eliezer Yudkowsky:

\n
\n

Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory.  The art of obtaining beliefs that correspond to reality as closely as possible.  This correspondence is commonly termed \"truth\" or \"accuracy\", and we're happy to call it that.

\n
\n

Bayesian inference is a method for updating belief within epistemic rationality:

\n
\n

Bayesian inference is a method of statistical inference in which some kind of evidence or observations are used to calculate the probability that a hypothesis may be true, or else to update its previously-calculated probability.

\n
\n

Buddhism encourages people to view reality as it really is and to constantly seek enlightenment.

\n
In Buddhism's Noble Eightfold Path, as stated here:
\n
\n

dṛṣṭi (ditthi): viewing reality as it is, not just as it appears to be

\n
\n
\n

vyāyāma (vāyāma): making an effort to improve

\n
\n
\n

smṛti (sati): awareness to see things for what they are with clear consciousness, being aware of the present reality within oneself, without any craving or aversion

\n
\n

In Buddhism's Four Noble Truths, as stated here:

\n
\n

Suffering ends when craving ends. This is achieved by eliminating delusion, thereby reaching a liberated state of Enlightenment (bodhi);

\n
\n

Buddhism's concept of rebirth can be seen as an openness to updating beliefs.

\n
\n

Another view of rebirth describes the cycle of death and birth in the context of consciousness rather than the birth and death of the body. In this view, remaining impure aggregates, skandhas, reform consciousness.

\n
\n

We are not limited by our current sense of self, here:

\n
\n

Buddhism rejects the concepts of a permanent self or an unchanging, eternal soul, as it is called in Hinduism and Christianity.

\n
\n

The Context Principle

\n

The context principle is integral to both Bayesian inference and Buddhism.

\n

Context Principle: Context creates meaning, and in its absence there is no meaning.

\n

When evaluating a belief, its context must be considered. Outside of its intended context a belief may be wrong or even meaningless.

\n

In Bayesian probability the meaning of the evidence, and the prior probability distribution, depend on the context. When different observers use the same method and evidence, they can come to different conclusions if they have adopted different contexts.

\n

For example:

\n
\n

... humans are making decisions based on how we think the world works, if erroneous beliefs are held, it can result in behavior that looks distinctly irrational.
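To make the Bayesian half of the context principle concrete (the priors and likelihoods below are invented purely for illustration), two observers can apply the same update rule to the same evidence and still end up disagreeing, because they started from different priors:

```python
# A toy illustration with made-up numbers: same evidence, same likelihoods,
# but different priors (different "contexts") lead to different conclusions.

def bayes_update(prior_h, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) given a prior on H and the likelihoods of the evidence E."""
    joint_h = p_e_given_h * prior_h
    joint_not_h = p_e_given_not_h * (1 - prior_h)
    return joint_h / (joint_h + joint_not_h)

evidence = dict(p_e_given_h=0.8, p_e_given_not_h=0.3)

observer_a = bayes_update(prior_h=0.5, **evidence)   # ~0.73: now leans toward H
observer_b = bayes_update(prior_h=0.05, **evidence)  # ~0.12: still doubts H

print(observer_a, observer_b)
```

Neither observer has made a calculation error; the difference lies entirely in the priors they brought to the evidence.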

\n
\n

In Buddhism, the context principle can be found in the three marks of existence:

\n\n

Anicca

\n

From here:

\n
\n

The term expresses the Buddhist notion that all of conditioned existence, without exception, is in a constant state of flux.

\n
\n

If our beliefs are conditional on our context, and our context is in a constant state of flux, then we must be mindful and be ready to observe these changes, and we must be constantly willing to change our beliefs based on the evidence.

\n

Dukkha

\n

From here:

\n
\n

Experience is thus both cognitive and affective, and cannot be separated from perception. As one's perception changes, so one's experience is different: we each have our own particular cognitions, perceptions and volitional activities in our own particular way and degree, and our own way of responding to and interpreting our experience is our very experience.

\n
\n

A rough summary: \"suffering is context dependent; if you believe you are suffering, then you are\".

\n

Accepting the world as it is and not as we want it to be is not simply about becoming comfortable with the status quo. It is about gaining the awareness we need to transform our context. This is explained well in Radical Buddhism and the Paradox of Acceptance.

\n

Anatta

\n

From here:

\n
\n

Buddhism denies the existence of a permanent or static entity that remains constant behind the changing bodily and non-bodily components of a living being.

\n
\n
\n

The nikayas state that certain things such as material shape, feeling, perception, habitual tendencies and consciousness (the five aggregates), with which the unlearned man identifies himself, do not constitute his self and that is why one on the path to liberation should grow uninterested in the aggregates, become detached from them and be liberated.

\n
\n

From here:

\n
\n

Upon careful examination, one finds that no phenomenon is really \"I\" or \"mine\"; these concepts are in fact constructed by the mind.

\n
\n

The concept of self is a convention, not an absolute; it refers to a constantly changing composite. The meaning of self depends on context. Clinging to an inappropriate concept of self can lead to dukkha.

\n

Challenges

\n

Buddhism is not inherently rational

\n

Although there are hints of rationality in Buddhism, it was not created to be a philosophy of rationality. There is a lot of mysticism, and the different schools of thought appear to split primarily on metaphysics.

\n

It is tempting to discard the mysticism outright, but I suspect that there are ideas encoded in the mysticism that help people understand and adopt the beneficial beliefs of Buddhism. Reincarnation/rebirth, for example, could help people accept Anicca (impermanence). I'm not proposing that the mysticism be kept; I'm suggesting that it may be useful to understand its context-specific value.

\n

Bayesian inference is not really a philosophy

\n

There are aspects of Bayesian inference that sound like a philosophy, for example changing beliefs based on evidence, but it is really a method of statistical inference. Extending these ideas to domains where it is difficult or impossible to calculate probabilities will be a challenge.

" } }, { "_id": "pBnKbYYTfn5Bqw4Fm", "title": "Request: Interesting Invertible Facts", "pageUrl": "https://www.lesswrong.com/posts/pBnKbYYTfn5Bqw4Fm/request-interesting-invertible-facts", "postedAt": "2010-10-08T20:02:21.933Z", "baseScore": 29, "voteCount": 20, "commentCount": 50, "url": null, "contents": { "documentId": "pBnKbYYTfn5Bqw4Fm", "html": "

I'm writing the section of the rationality book dealing with hindsight bias, and I'd like to write my own, less racially charged and less America-specific, version of the Hindsight Devalues Science example - in the original, facts like \"Better educated soldiers suffered more adjustment problems than less educated soldiers. (Intellectuals were less prepared for battle stresses than street-smart people.)\" which is actually an inverted version of the truth, that still sounds plausible enough that people will try to explain it even though it's wrong.

\n

I'm looking for facts that are experimentally verified and invertible, i.e., I can give five examples that are the opposite of the usual results without people catching on.

\n

Divia (today's writing assistant) has suggested facts about marriage and facts about happiness as possible sources of examples, but neither of us can think of a good set of facts offhand and Googling didn't help me much.  Five related facts would be nice, but failing that I'll just take five facts.  My own brain just seems to be very bad at answering this kind of query for some reason; I literally can't think of five things I know.

\n

(Note also that I have a general policy of keeping anything related to religion out of the rationality book - that there be no mention of it whatsoever.)

" } }, { "_id": "g6JhsJwMsb2rtrX6q", "title": "How to better understand and participate on LW", "pageUrl": "https://www.lesswrong.com/posts/g6JhsJwMsb2rtrX6q/how-to-better-understand-and-participate-on-lw", "postedAt": "2010-10-08T16:11:41.604Z", "baseScore": 8, "voteCount": 12, "commentCount": 33, "url": null, "contents": { "documentId": "g6JhsJwMsb2rtrX6q", "html": "
\n
\n

Update! New URL:

\n

!!!

\n

http://lesswrong.com/lw/2un/references_resources_for_lesswrong/

\n

!!!

\n

Out-of-date:

\n

A list capturing all background knowledge you might ever need for LW.

\n

Updated: 2010-10-10

\n\n

This list has two purposes. The first is to enable people who lack a basic formal education to read and understand the LessWrong Sequences. The second is to serve as a list of useful resources that help anyone better understand what is being discussed on LessWrong and enable you to actively participate.

\n

Do not flinch: most of LessWrong can be read and understood by people with less than a secondary-school education. And even if you lack the most basic education, if you start with Khan Academy followed by BetterExplained, then with the help of Google and Wikipedia you should be able to reach a level of education that allows you to start reading the LessWrong Sequences.

\n

Nevertheless, before you start off you might read the Twelve Virtues of Rationality FE. Not only is scholarship just one virtue but you'll also be given a list of important fields of knowledge that anyone who takes LessWrong seriously should study:

\n
\n

It is especially important to eat math and science which impinges upon rationality: Evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory.

\n
\n

Mathematics:

\n

Basics

\n\nGeneral\n
\n\n
\n
\n
Math is fundamental, not just for LessWrong. Bayes’ Theorem in particular is essential for understanding the reasoning underlying most of the writing on LW.
\n
\n\n
\n\n
\n

Logic

\n\n

Foundations

\n\n
Decision theory:
\n
It is precisely the notion that Nature does not care about our algorithm, which frees us up to pursue the winning Way - without attachment to any particular ritual of cognition, apart from our belief that it wins. Every rule is up for grabs, except the rule of winning. — Eliezer Yudkowsky
\nRemember that any heuristic is bound to certain circumstances. If you want X from agent Y and the rule is that Y only gives you X if you are a devoted irrationalist, then being \"irrational\" is the winning move. Under certain circumstances what is irrational may be rational and what is rational may be irrational. Paul K. Feyerabend said: \"All methodologies have their limitations and the only ‘rule’ that survives is ‘anything goes’.\" \n\n

Game Theory

\n\n

Programming:

\n

Programming knowledge is not mandatory for LessWrong, but you should be able to interpret basic pseudocode, as you will come across various snippets of code in discussions and top-level posts outside of the main sequences.

\n

Python

\n\n

Haskell

\n\n

General

\n\n

Computer science (General Introduction):

\n

One of the fundamental premises on LessWrong is that a universal computing device can simulate every physical process and that we therefore should be able to reverse engineer the human brain as it is fundamentally computable. That is, intelligence and consciousness are substrate independent.

\n\n

Machine Learning:

\n

Not essential, but a valuable addition for anyone who's more than superficially interested in AI and machine learning.

\n\n
Physics:
\n\n
Evolution:
\n\n
Philosophy:
\n
\n
There is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination. — Daniel Dennett, Darwin's Dangerous Idea, 1995.
\n
\n
\n
Philosophy is a battle against the bewitchment of our intelligence by means of language. — Wittgenstein
\n
\n\n
\n
Linguistics:
\n\n
\n
General Education:
\n\n

Miscellaneous:

\n

Not essential but a good preliminary to reading LessWrong and in some cases mandatory to be able to make valuable contributions in the comments. Many of the concepts in the following works are often mentioned on LessWrong or the subject of frequent discussions.

\n\n

Key Concepts:

\n

Below is a roundup of concepts and other fields of knowledge you should have at least a rough grasp of to be able to follow some subsequent discussions in the comments on LessWrong.

\n\n
Key Resources (News and otherwise)  F:
\n\n
Relevant Fiction:
\n\n
Go:
\n\n
\n

Note:

\n

This list is a work in progress. I will try to constantly update and refine it.

\n

Also thanks to  cousin_it for the idea. I had to turn the original comment on his post into my own top-level post because I got the error that my comment was too long.

\n

If you've anything to add or correct, please comment below and I'll update the list accordingly.

\n
\n
" } }, { "_id": "TqXSpApeNR5hakxzW", "title": "Pro-nice and anti-nice", "pageUrl": "https://www.lesswrong.com/posts/TqXSpApeNR5hakxzW/pro-nice-and-anti-nice", "postedAt": "2010-10-08T15:29:39.671Z", "baseScore": 2, "voteCount": 11, "commentCount": 36, "url": null, "contents": { "documentId": "TqXSpApeNR5hakxzW", "html": "

EDIT: This post is pretty flawed, but please read the comments anyway: I'm hoping to rework it into something that catches the idea better.

\n

 You can view a lot of value differences along a pro-nice/anti-nice spectrum.

\n

Pro-nice people (I'm one) gravitate to obviously pleasant, lovely, happy experiences.  We like kittens and puppies and rainbows. We like transparently \"happy\" music and transparently \"beautiful\" works of art and literature.  (If you like Romantic poetry and science fiction, but not contemporary novels, you might be pro-nice.)  We prefer the positive social emotions, like sympathy, encouragement, and teamwork.  We may choose intellectual interests based on the fact that they make our brains feel good.  We tend to be drawn towards proposals for making the world wonderful. 

\n

Pro-nice people aren't quite the same thing as optimists.  An optimist tends to anticipate that things will turn out well, or look on the bright side.  But pro-nice people may well hold pessimistic ideas or have melancholy temperaments.  Pro-nice is a preference for the positive.  A typical pro-nice attitude is \"Humanity may be destructive and cruel, but the one time when we're at our best is when we're doing science.  Science is lovely.  I think I'll be a scientist.\"  

\n

Anti-nice people have a preference for the difficult.  They find pro-nice preferences saccharine.  They like artistic expressions that have a challenging or negative \"mood.\"  They prefer the negative social emotions, like antagonism, sarcasm, and cynicism.  They dislike things that have obvious appeal, or things that everyone finds pleasant.  As far as social issues go, they take a keen interest in potential catastrophes and what must be done to avert them; they generally aren't drawn to proposals to \"make the world a better place.\" 

\n

Again, anti-nice people aren't necessarily pessimists or unhappy people. Anti-nice people prefer to direct their attention to the challenging, the problematic, the worst-case scenario.  To an anti-nice person, there's nothing interesting to work on when everything is going smoothly; just liking things or agreeing with people or being contented is rather dull.

\n

I suspect that a lot of conflict can be summarized by the clash between pro-nice and anti-nice personality types.    

\n

Are you pro-nice or anti-nice?  Have you experienced difficulty communicating with the other type?

" } }, { "_id": "ctJftmopE2sFawmho", "title": "Poverty does not respond to incentives", "pageUrl": "https://www.lesswrong.com/posts/ctJftmopE2sFawmho/poverty-does-not-respond-to-incentives", "postedAt": "2010-10-08T14:19:36.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "ctJftmopE2sFawmho", "html": "

I wrote a post a while back saying that preventing ‘exploitative’ trade is equivalent to preventing an armed threat by eliminating the ‘not getting shot in the head’ option. Some people countered this argument by saying that it doesn’t account for how others respond. If poor people take the option of being ‘exploited’, they won’t get offered such good alternatives in future as they will if they hold out.

\n

This seems unlikely, but it reminds me of a real difference between these situations. If you forcibly prevent the person with the gun to their head from responding to the threat, the person holding the gun will generally want to escape making the threat, as now she has nothing to gain and everything to lose. The world on the other hand will not relent from making people poor if you prevent the poor people from responding to it.

\n

I wonder if the misintuition about the world treating people better if they can’t give in to its ‘coercion’ is a result of familiarity with how single-agent threateners behave in this situation. As a side note, this makes preventing ‘exploitative’ trade worse relative to preventing threatened parties from acting on threats.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "skog6AzsTQXF4qrz6", "title": "Rationality and advice", "pageUrl": "https://www.lesswrong.com/posts/skog6AzsTQXF4qrz6/rationality-and-advice", "postedAt": "2010-10-08T07:45:44.678Z", "baseScore": 11, "voteCount": 10, "commentCount": 5, "url": null, "contents": { "documentId": "skog6AzsTQXF4qrz6", "html": "

Giving advice is one of those common human behaviors which doesn't get examined much, which means a little thought might improve understanding of what's going on.

\n

The evidence-- that giving advice is much more common than asking for it or following it-- suggests that giving advice is more a status transaction than a practical effort to help, and I speak as a person who's pretty compulsive about giving advice.

\n

So, here's some advice about advice, assuming that you don't want to just raise your status on unwilling subjects.

\n

Do what you can to actually understand the situation, including the resources the recipient is willing to put into following advice.

\n

The idea that men give unwelcome advice to women, when the women just want to vent but can solve their problems themselves, is an oversimplification. There are women who give advice (see above). There are men who are patient with venting. I think the vent vs. want-advice distinction is valuable, but ask, rather than assuming gender will give you the information you need.

\n

I have a friend who I've thanked for giving me advice, and his reaction was \"but you didn't follow it!\". Sometimes it helps to give people ideas to bounce off of.

\n

Pjeby (if I understand him correctly) has been very good at describing the way people can reinterpret advice in light of their mental habits-- for example, hearing \"find goals that inspire you\" as \"beat yourself up for not having achieved more\".

\n

Eliezer on Other-Optimizing-- it's from the point of view of being given lots of advice (mostly inappropriate), rather than from the point of view of giving advice.

" } }, { "_id": "Ldr6Gm2WYdnXNNMqh", "title": "Seeking matcher for SIAI donation", "pageUrl": "https://www.lesswrong.com/posts/Ldr6Gm2WYdnXNNMqh/seeking-matcher-for-siai-donation", "postedAt": "2010-10-08T05:38:41.316Z", "baseScore": 3, "voteCount": 5, "commentCount": 9, "url": null, "contents": { "documentId": "Ldr6Gm2WYdnXNNMqh", "html": "

I just donated $150 to the Singularity Institute.  Would anyone be willing to match my donation (as in, donate at least $150 and tell me you did so)?  A first-time donor would be ideal, but anyone would make me happy.

" } }, { "_id": "nLaqCj2duNcMWwKxR", "title": "Singularity Cost", "pageUrl": "https://www.lesswrong.com/posts/nLaqCj2duNcMWwKxR/singularity-cost", "postedAt": "2010-10-08T02:01:09.521Z", "baseScore": -12, "voteCount": 9, "commentCount": 2, "url": null, "contents": { "documentId": "nLaqCj2duNcMWwKxR", "html": "

I don’t think that AI is an existential risk. It is going to be more of a golden opportunity. For some not for all.

\n

Given that most people oppose AI on various grounds (religious, economic), chances are it will be implemented in a small group, and very few people will get to benefit from it. Wealthy people would probably be the first to use it.

\n

This isn’t a regular technology, and it will not go first to the rich and then to everybody else within a couple of decades, as happened with phones or computers. This is where Kurzweil is wrong.

\n

Can someone imagine the dynamics of a group that has access to AI for 20-30 years?

\n

I doubt that after 20 or 30 years, heck even after 10 years, they would need any money so the assumption that it will be shared with the rest of the world for financial reasons doesn’t seem founded.

\n

So I am trying to save, and to figure out what the cost of entry into this club would be.

\n

Any thoughts on that?

" } }, { "_id": "ae3Mopb2ctEqXsGoq", "title": "Help: When are two computations isomorphic?", "pageUrl": "https://www.lesswrong.com/posts/ae3Mopb2ctEqXsGoq/help-when-are-two-computations-isomorphic", "postedAt": "2010-10-08T00:29:50.506Z", "baseScore": 4, "voteCount": 6, "commentCount": 6, "url": null, "contents": { "documentId": "ae3Mopb2ctEqXsGoq", "html": "

Let's work within the Turing machine model of computation and consider halting TMs. Given TMs named T and T', when would you say they implement the same computation? I see at least two possibilities:

\n

1) Call them equivalent if they have the same global output (i.e. T(x) = T'(x) for all x).

\n

2) Call them equivalent if they locally transform the same way (i.e. their transition functions are equivalent in some sense).

\n

In other words, is the step-by-step operation of the TM central to your notion of computation?
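For intuition, here is a Python analogue of my own (ordinary functions rather than actual Turing machines): the two procedures below are equivalent in sense (1), since they agree on every input, but clearly not in sense (2), since their step-by-step operation is completely different.

```python
# Two "computations" with the same global input-output behaviour (sense 1)
# but very different local, step-by-step operation (sense 2).

def sum_iterative(n: int) -> int:
    """Add 1..n one step at a time: roughly n state updates."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n: int) -> int:
    """Compute the same value in a single arithmetic step."""
    return n * (n + 1) // 2

# Extensionally equivalent on every input we test:
assert all(sum_iterative(n) == sum_closed_form(n) for n in range(1000))
```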

\n

I came to this question when reflecting on a discussion here involving levels of simulation. I'm interested in thinking more rigorously about computations we care about in a dovetailing ensemble, and determining where in the hierarchy they are likely to lie.

\n

(Note that the latter equivalence implies the former, and is thus stronger.)

" } }, { "_id": "sLeQKqnWyPFQFB4rn", "title": "Poll: Compressing Morality", "pageUrl": "https://www.lesswrong.com/posts/sLeQKqnWyPFQFB4rn/poll-compressing-morality", "postedAt": "2010-10-07T22:17:31.764Z", "baseScore": 2, "voteCount": 3, "commentCount": 3, "url": null, "contents": { "documentId": "sLeQKqnWyPFQFB4rn", "html": "

Two related questions:

\n

 

\n

Suppose that you had to do a lossy compression on human morality. How would you do so to maintain as much of our current morality as possible?

\n

Alternately, suppose that you had to lose one module of your morality (defined however you feel like). Which one would you lose?

" } }, { "_id": "ZpATmvAyqajiA5XNC", "title": "Notion of Preference in Ambient Control", "pageUrl": "https://www.lesswrong.com/posts/ZpATmvAyqajiA5XNC/notion-of-preference-in-ambient-control", "postedAt": "2010-10-07T21:21:34.047Z", "baseScore": 21, "voteCount": 19, "commentCount": 47, "url": null, "contents": { "documentId": "ZpATmvAyqajiA5XNC", "html": "

This post considers ambient control in a more abstract setting, where controlled structures are not restricted to being programs. It then introduces a notion of preference, as an axiomatic definition of constant (actual) utility. The notion of preference subsumes possible worlds and utility functions traditionally considered in decision theory.

\n

Followup to: Controlling Constant Programs.

\n

In the previous post I described the sense in which one program without parameters (the agent) can control the output of another program without parameters (the world program). These programs define (compute) constant values, respectively actual action and actual outcome. The agent decides on its action by trying to prove statements of a certain form, the moral arguments, such as [agent()=1 => world()=1000000]. When the time is up, the agent performs the action associated with the moral argument that promises the best outcome, thus making that outcome actual.

\n

Let's now move this construction into a more rigorous setting. Consider a first-order language and a theory in that language (defining the way agent reasons, the kinds of concepts it can understand and the kinds of statements it can prove). This could be a set theory such as ZFC or a theory of arithmetic such as PA. The theory should provide sufficient tools to define recursive functions and/or other necessary concepts. Now, extend that theory by definitions of two constant symbols: A (the actual action) and O (the actual outcome). (The new symbols extend the language, while their definitions, obtained from agent and world programs respectively by standard methods of defining recursively enumerable functions, extend the theory.) With new definitions, moral arguments don't have to explicitly cite the code of corresponding programs, and look like this: [A=1 => O=10000].
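As a crude caricature only (a toy sketch of my own that skips the entire difficulty this post is about, namely proving such statements about one's own definition without deriving a contradiction), the final step of the decision procedure amounts to taking the best promised outcome among the moral arguments already proved:

```python
# A toy caricature (mine, not the post's formal construction): the agent is
# handed the moral arguments it has already managed to prove, as a map from
# possible action to promised outcome, and performs the most promising action.

def choose_action(proved_moral_arguments):
    """proved_moral_arguments: dict mapping possible action -> promised outcome."""
    return max(proved_moral_arguments, key=proved_moral_arguments.get)

# e.g. the agent proved [A=1 => O=1000000] and [A=2 => O=1000]:
print(choose_action({1: 1_000_000, 2: 1_000}))  # -> 1
```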

\n

Truth, provability, and provability by the agent

\n

Given a model, we can ask whether a statement is true in that model. If a statement is true in all models, it's called valid. In first-order logic, all valid statements are also provable by a formal syntactic argument.

\n

What the agent can prove, however, is different from what is provable from the theory it uses. In principle, the agent could prove everything provable (valid), but it needs to stop at some point and decide what action to perform, thus being unable to actually prove the rest of the provable statements. This restriction could take any one of many possible forms: a limit on the total number of proof steps used before making a decision, a \"time limit\" that maps to the proof process and stops it at some point, a set of statements (\"sufficient arguments\"), such that if any of the statements get proved, the process stops.

\n

Overall, the agent is able to prove less than is provable (valid). This in particular means that for certain sets of statements that are inconsistent, the agent won't be able to derive a contradiction.

\n

Sense and denotation

\n

A and O are ordinary constant symbols, so for some specific values, say 2 and 1000, it's true that A=2 and O=1000 (more generally, in each model A and O designate two elements). There is little interesting structure to the constants themselves. The agent normally won't even know \"explicit values\" of actual action and actual outcome. Knowing the actual value would break the illusion of consistent consequences: suppose the agent is consistent, knows that A=2, and isn't out of time yet, then it can prove [A=1 => O=100000], even if in fact O=1000, use that moral argument to beat any other with worse promised outcome, and decide A=1, contradiction. Knowing the actual outcome would break the same illusion in two steps, if the agent ever infers an outcome different from the one it knows to hold: suppose the agent is consistent, knows that O=1000, and isn't out of time, then if it proves [A=1 => O=500], it can also prove [A=1 => (O=1000 AND O=500)], and hence [A=1 => (O=1000 AND O=500) => O=100000], using that to beat any other moral argument, making A=1 true and hence (O=1000 AND O=500) also true, contradiction.

\n

Thus, the agent has to work with indirect definitions of action and outcome, not with action and outcome themselves. For the agent, actual action doesn't describe what the agent is, and actual outcome doesn't describe what the world is, even though moral arguments only mention actual action and actual outcome. Details of the definitions matter, not only what they define.

\n

Abstract worlds

\n

There seems to be no reason for definition of outcome O to be given by a program. We can as easily consider constant symbol O defined by an arbitrary collection of axioms. The agent doesn't specifically simulate the definition of O in order to obtain its specific value (besides, obtaining that value corresponds to the outcome not being controllable by the choice of action), it merely proves things about O. Thus, we can generalize world programs to world concepts, definitions of outcomes that are not programs. Furthermore, if definition of O is not a program, O itself can be more general than a finite number. Depending on the setting, O's interpretation could be an infinite set, a real number, or generally any mathematical structure.

\n

Surprisingly, the same applies to action. It doesn't matter how the agent thinks about its actual action (defines A), so long as the definition is correct. One way to define the output of a program is by straightforwardly transcribing the program, as when defining a recursively enumerable function, but any other definition of the same value will do, including non-constructive ones.

\n

Possible actions and possible outcomes

\n

By way of axiomatic definitions of A and O, statements of the form [A=X => O=Y] can be proved by the agent. Each such statement defines a possible world Y resulting from a possible action X. X and Y can be thought of as constants, just like A and O, or as formulas that define these constants, so that the moral arguments take the form [X(A) => Y(O)].

\n

The sets of possible actions and possible outcomes need to be defined syntactically: given a set of statements of the form [A=X => O=Y] for various X and Y, the agent needs a way of picking one with the most preferable Y, and to actually perform the associated X. This is unlike the situation with A and O, where the agent can't just perform action A, since it's not defined in the way the agent knows how to perform (even though A is (provably) equivalent to one of the constants, the agent can't prove that for any given constant).

\n

We can assume that sets of possible actions and possible outcomes (that is, formulas syntactically defining them) are given explicitly, and the moral arguments are statements of the form [A=X => O=Y] where X has to be a possible action and Y a possible outcome (not some other formulas). In this sense, A (as a formula, assuming its definition is finite) can't be a possible action, O won't be a possible outcome in interesting cases, and [A=A => O=O] is not a moral argument.

\n

For each possible action, only one possible world gets defined in this manner. For the possible action that is equal to the actual action (that is, X such that (A=X) is provable in agent's theory for such X, although it's not provable by the agent), the corresponding possible outcome is equal to the actual outcome.

\n

Possible worlds

\n

Given a set of moral arguments [A=X => O=Y] that the agent managed to prove, consider the set of all possible outcomes that are referred to by these moral arguments. Call such possible outcomes possible worlds (to distinguish them from possible outcomes that are not referred to by moral arguments provable by the agent). Of all possible outcomes, the possible worlds could constitute only a small subset.

\n

This makes it possible for the possible worlds to have more interesting structure than possible outcomes in general; for example, the possible outcomes could be all integers, while the possible worlds are only the prime integers. Thus, definitions of A and O define not just the actual outcome O, but a whole collection of possible worlds corresponding to possible actions.

\n

Controlling axiomatic definitions

\n

While the previous post discussed the sense in which the output of a constant program can be controlled, this one describes how to control a given (fixed) axiomatic definition into defining as desirable a mathematical structure as possible. This shows that in principle, nothing is exempt from ambient control (since in principle one can give an axiomatic definition to anything); some definitions are just constant with respect to given agents (they generate only one possible world, as defined above).

\n

Determinism is what enables control, but ambient control relies only on \"logical\" determinism, the process of getting from definition to the defined concept, not on any notion of determinism within the controlled concept (actual outcome) itself. We can thus consider controlling concepts more general than our physical world, including the ones that aren't structured as (partially) deterministic processes.

\n

Preference and utility

\n

Possible outcomes are only used to rank moral arguments by how good the actual outcome O will be if the corresponding possible action is taken. Thus, we have an order defined on the possible outcomes, and the action is chosen to maximize the outcome according to this order. Any other properties of possible outcomes are irrelevant. This suggests directly considering utility values instead of outcomes, and using a utility symbol U instead of outcome symbol O in moral arguments.

\n

As with the actual outcome and its definition, we then have the actual utility and its definition. Since the definition supplies most of the relevant structure, I call the definition of actual utility the preference. Thus, the agent is an axiomatic definition of the actual action A, and the preference is an axiomatic definition of the actual utility U. Both agent and preference can be of arbitrary form, so long as they express the decision problem, and the actual utility U could be interpreted with an arbitrary mathematical structure. Moral arguments are statements of the form [A=A1 => U=U1], with A1 a possible action and U1 a possible utility.

\n

Merging axioms

\n

Above, action and utility are defined separately, with axioms that generally don't refer to each other. Axioms that define action don't define utility, and conversely. Moral arguments, on the other hand, define utility in terms of action. If we are sure that one of the moral arguments proved by the agent refers to the actual action (without knowing which one; if we have to choose an actual action based on that set of moral arguments, this condition holds by construction), then actual utility is defined by the axioms of action (the agent) and these moral arguments, without needing preference (axioms of utility).

\n

Thus, once moral arguments are proved, we can discard the now redundant preference. More generally, statements that the agent proves characterize actual action and actual utility together, where their axiomatic definitions characterized them separately. New statements can be equivalent to the original axioms, allowing the concepts to be represented differently. The point of proving moral arguments is in understanding how actual utility depends on actual action, and using that dependence to control utility.

\n

Utility functions

\n

Let a utility function be a function F such that the agent proves [F(A)=U], and for each possible action X, there is a possible utility Z such that the agent can prove [F(X)=Z]. The second requirement makes utility functions essentially encodings of moral arguments; without it, a constant utility function defined by F(-)=U would qualify, but it's not useful to the agent, since it can't reason about U.

\n

Given a utility function F and a possible action X, [A=X => U=F(X)] is a moral argument (provable by the agent). Thus, a utility function generates the whole set of moral arguments, with one possible outcome assigned to each possible action. The utility function restricted to the set of possible actions is unique: if F and G are two utility functions and X a possible action, then [A=X => (U=F(X) AND U=G(X))], proving a contradiction in the consequences of a possible action if F and G disagree at X.

\n

Utility functions allow generalizing the notion of moral argument: we no longer need to consider only small sets of possible actions (because only small sets of moral arguments can be proved). Instead, one utility function needs to be found and then optimized. Since utility function is essentially unique, the problem of finding moral arguments can be recast as a problem of proving properties of the utility function, and more generally decision-making can be seen as maximization of utility function implicitly defined by agent program and preference.

\n

Note that utility function is recognized by its value at a single point, but is uniquely defined for all possible actions. The single point restriction is given by the actual action and utility, while the rest of it follows from axiomatic definitions of those action and utility. Thus again, most of the structure of utility function comes from agent program and preference, not actual action and actual utility.

\n

Connecting this to discussion of explicit/implicit dependence in the previous post, and the previous section of this one, utility function is the expression of explicit dependence of utility on agent's action, and decision problem shouldn't come with this dependence already given. Instead, most of the problem is figuring out this explicit dependence (utility function) from separate definitions of action and utility (agent program and preference).

\n

Open problems

\n" } }, { "_id": "qKzeJvFWyPh5H2hwj", "title": "Harry Potter and the Methods of Rationality discussion thread, part 4", "pageUrl": "https://www.lesswrong.com/posts/qKzeJvFWyPh5H2hwj/harry-potter-and-the-methods-of-rationality-discussion-38", "postedAt": "2010-10-07T21:12:58.038Z", "baseScore": 5, "voteCount": 7, "commentCount": 659, "url": null, "contents": { "documentId": "qKzeJvFWyPh5H2hwj", "html": "

[Update: and now there's a fifth discussion thread, which you should probably use in preference to this one. Later update: and a sixth -- in the discussion section, which is where these threads are living from now on. Also: tag for HP threads in the main section, and tag for HP threads in the discussion section.]

\n

The third discussion thread is above 500 comments now, just like the others, so it's time for a new one. Predecessors: one, two, three. For anyone who's been on Mars and doesn't know what this is about: it's Eliezer's remarkable Harry Potter fanfic.

\n

Spoiler warning and helpful suggestion (copied from those in the earlier threads):

\n

Spoiler Warning:  this thread contains unrot13'd spoilers for Harry Potter and the Methods of Rationality up to the current chapter and for the original Harry Potter series.  Please continue to use rot13 for spoilers to other works of fiction, or if you have insider knowledge of future chapters of Harry Potter and the Methods of Rationality.

A suggestion: mention at the top of your comment which chapter you're commenting on, or what chapter you're up to, so that people can understand the context of your comment even after more chapters have been posted.  This can also help people avoid reading spoilers for a new chapter before they realize that there is a new chapter.

" } }, { "_id": "nKJwu8R4zAXNjT3D2", "title": "It's a fact: male and female brains are different", "pageUrl": "https://www.lesswrong.com/posts/nKJwu8R4zAXNjT3D2/it-s-a-fact-male-and-female-brains-are-different", "postedAt": "2010-10-07T20:15:19.107Z", "baseScore": 3, "voteCount": 17, "commentCount": 8, "url": null, "contents": { "documentId": "nKJwu8R4zAXNjT3D2", "html": "

In Which I Present The Opposing Side's Hypothesis and Falsify It

\n

This post is in part in response to a New Scientist article/book review \"Fighting back against neurosexism.\"  And the tagline is \"Are differences between men and women hard-wired in the brain? Two new books argue that there's no solid scientific evidence for this popular notion.\"  

\n

Full disclosure here: I haven't read the books, although I do have a B.S. in neurobiology. But you don't even need to understand anything about neurobiology to falsify their most basic hypothesis: that male and female brains have no hardwired behavioral differences.

\n

And it's easy to falsify: if male and female brains were the same, all humans would be completely bisexual.  If it's true that female brains, on average, prefer to fuck, date, and marry men, and male brains, on average, prefer to fuck, date, and marry women, then male and female brains are in fact different.

\n

\n

A Really Long Argument In Which I Argue That Humans Are Not Bisexual

\n

The god of the gaps here is, of course, culture. One could argue that culture makes people straight. There are a number of arguments against this here, such as some famous cases of failed gender reassignment at birth, all the way to gay people, who despite being raised in straight cultures somehow arise anyway.

\n

Yes, some people are bisexual, and the number who are is certainly informed by culture. Bisexuality has increased to unprecedented levels. What exactly constitutes bisexuality makes decent surveys difficult, but the most generous figure is 15%, and this includes people who identify as straight, have had no experience with the same sex, but have had \"some same sex attraction.\" The null hypothesis predicts 100% bisexuality, and not only that, it predicts a nature of bisexuality which has no preference for either sex. In our hypothetical situation of identical male and female brains, if 100% of women were bisexual but chose a man over a woman even just 60% of the time, then women still show a preference for men and our hypothesis is falsified.

\n

But the situation is even more dire than that. It's so dire that some argue that bisexuality doesn't exist at all. I would not go so far; while I personally do not know any bisexuals who show no gender preference, and do know quite a few who show a preference, I am positive there are some out there who would be incredibly angry with me if I didn't acknowledge them (hi guys!). The question is, how many are there? If we want to make an argument that true bisexuality is universal, well, larger is better.

\n

If we look at the actual behavior of humans actively engaged in the mate selection process on OkCupid, it's not promising.  The damning evidence?  80% of self-identified bisexuals only contact one gender.  There could be a lot of reasons for this besides \"fake\" bisexuality, but if we look at how humans behave in our increasingly sex-positive culture, it's not promising.  

\n

So in modern Western cultures where this has been researched, most female brains like men, and most male brains like women.  It still could be cultural; after all, this is only one culture.  What about ancient Greece? In this culture, rich, well educated, mature men by some accounts frequently had sex with pre-pubescent boys.  They would still marry women, which might make them seem like our platonic bisexual.  But sadly for that idea, the mature, post-puberty men weren't having sex with each other.  They were having sex with boys, with smooth skins, that didn't have beards, or an Adam's apple, who were slender and not muscular, without the influence of the increased sexual differentiation between the sexes that happens at puberty.  And the women weren't also bisexual in large numbers, as far as we are aware.  One unusual culture in which one sex has sex with children of the same sex is not terribly promising for the universality of the human brain.  

\n

But it is possible that a truely bisexual culture hasn't arisen yet.

\n

 

\n

Introducing the Magic Random Culture Generator

\n

Some anthropologists do research under the assumption that we can get a good idea of what human behaviors are \"natural\" or \"innate\" by surveying a large sample of cultures.  If something is present in a large number of cultures, then it is more likely to be an intrinsic part of human behavior.  It's not a bad strategy to work with.  

\n

However, some people argue that different regions of the world, as well as a sampling of cultures throughout history, represent a random sample of cultures. Therefore, if something is in the majority of human cultures, it is innate. But of course, they are not random, because they are not independent. Cultures developed from each other, not in a vacuum. It's possible that cultures are wildly sensitive to initial conditions; that once the first culture declared that women like men and vice versa, all subsequent cultures had to be that way.

\n

Sadly, we do not have the ability to apply a truly random sample of cultures to sets of people and see how well they take.

\n

If we did, we could do this experiment.  We could pick a random cultural sexual preference and apply it to the population.  We could create a society in which all men were supposed to fuck, date, and marry only men, and all women were supposed to fuck, date, and marry only women.  It would be similar to many of the cultures across the globe and in our evolutionary past, but with straightness replaced with gayness.  Would it work (propagation issues aside)?  Do you think we could use culture to make almost all men and women gay?

\n

Unless you can believe in all seriousness that it could happen equally easily as our current situation, well, then no amount of lack of evidence about specific sex differences between the brains of human males and females is going to be very convincing.  

\n

 

\n

How could something so obvious be missed?

\n

A lot of the research on innate sex differences has to do with things that people actually find interesting. It's not interesting that men prefer women on average for intimate relations; this is obvious. A lot of the research on innate brain differences is done on things such as intelligence. And so the person who wrote this article extrapolated to claim that if there isn't enough research showing innate differences in these specific areas- where there's a lot of controversy- then that's the case for all differences, even those differences which are so obvious there isn't research to support them.

\n

While the authors are completely incorrect to say there are no innate differences between female and male brains, they are perfectly within their rights to claim that there isn't enough evidence for innate differences in specific areas, such as intelligence.

\n

(Of course, if you don't like an idea, this is a technique that works for every science- to claim there isn't enough evidence to support it.  It's a time-honored trick, one that \"intelligent design\" proponents pull on macroevolution all the time, exemplified by this futurama clip at 1:15.  I'm not saying this is what these authors did; only that it's something to be wary of when examining any controversial idea.)

\n

 

\n

You argue that in most cultures women prefer men, but in most cultures women are also dumb.  

\n

In this comic Randall Munroe stated he would have more respect for evolutionary psychology if it didn't keep on producing 1950s gender roles. The reason it does is that a lot of evolutionary psychologists use that anthropological data we talked about earlier. Most cultures, especially the \"primitive\" ones, do have something resembling 1950s gender roles, except with more wives (about 2/3 of cultures are at least partially polygynous). Note that I say in my post that that method of research is fundamentally flawed; that obviously, we don't have access to a random sample of cultures. I think in the case of sexuality it's pretty obvious that cultures didn't randomly come to be this way, but it is possible that a more ancestral environment (i.e. one in which women were constantly pregnant or nursing) is the cause of lower intellectual engagement in those societies, not innate brain differences. In that case, our modern environment is an experiment, as women have fewer babies, formula exists, men do more paternal care, etc. And we are seeing female intelligence matching male intelligence. The difference in intelligence is innate, in a way, because women innately bear the costs of babies, but not innate brainwise.

\n

So the leap you must make to challenge my overall argument is, are there other innate differences between men and women, not directly related to brain differentiation, that could be responsible for straightness?  That's one avenue with which to challenge my argument.

\n

Another argument with regard to this section heading is that male and female brains don't differ in innate intelligence, but in innate drive. For instance, women might innately love cute things more than men, and this could lead them to pursue careers in veterinary medicine instead of system administration. Male and female brains could be equally able to perform such duties, but not equally inclined. In this case, innate differences in the brain do lead to different skills, just through a different mechanism.

\n

 

\n

Your argument has a flaw! And that is...

\n

Another possible flaw is how I chose my null hypothesis. When choosing a null hypothesis, scientists have to choose what they think \"randomness\" in their system looks like. What data is produced by the lack of a pattern? There are actually a large number of possibilities to choose from, and it depends on the nature of what you're studying. This could be a flaw in any study, and it's not one that people mention much.

\n

The one I chose, universal bisexuality, assumes sexuality is a continuous trait, a la the Kinsey Scale. In my null hypothesis, the mean brain was bisexual, but I didn't mention a distribution. Usually with continuous traits found in nature, the distribution we assume is normal. In this case, the average person and the mode person would be perfectly bisexual; there would, of course, still be straight and gay people. They would just be very, very rare.

\n

However, if the traits \"likes males\" and \"likes females\" are discrete, then you would have a different distribution. In this case, you would expect that 50% of both women and men would be gay and 50% would be straight. If you used my random culture generator, of course you would get some cultures that encouraged more straight people or more gay people; again, this would asymptotically approach a normal distribution with a large number of cultures. So the mean and mode cultures would have a 50% gay/straight ratio, with very few at the tails encouraging being mostly gay or mostly straight.

\n

If my selection of null hypothesis is wrong, does that invalidate my hypothesis? Well, yes, if those null hypotheses aren't falsified; I think that they are. It's clear that human sexuality does follow some sort of pattern sorting along male and female, even if you consider sexuality a discrete trait. As long as there is some sort of pattern, there's a difference.

\n

 

" } }, { "_id": "QWKMaQYr6SR9ZvwwZ", "title": "There is no such thing as pleasure", "pageUrl": "https://www.lesswrong.com/posts/QWKMaQYr6SR9ZvwwZ/there-is-no-such-thing-as-pleasure", "postedAt": "2010-10-07T13:49:48.476Z", "baseScore": 6, "voteCount": 11, "commentCount": 18, "url": null, "contents": { "documentId": "QWKMaQYr6SR9ZvwwZ", "html": "

By saying that there is no such thing as pleasure, I don't mean that I don't enjoy anything. I mean that I can find nothing in common among all the things I do enjoy, to call \"pleasure\". In contrast, I can find something in common among all physically painful things. I have experienced toothache, indigestion, a stubbed toe, etc., and these experiences differ along only a few dimensions: intensity, location, sharpness, and temporal modulation are about it. I perceive a definite commonality among these experiences, and that is what I call \"pain\". (Metaphorical pains such as \"emotional pain\" or \"an eyesore\" are not included.)

\n

However, I cannot find anything in common among solving an interesting problem, sex, listening to good music, or having a good meal. Not common to all of them, nor even common to any two of them. There is not even a family resemblance. This is what I mean when I say there is no such thing as pleasure. But that's just me. I know that mental constitutions vary, and I suspect they vary in more ways than anyone has yet discovered. Perhaps they vary in this matter? Are there people who do experience \"pleasure\", in the sense in which I do not?

\n

Why is this a LessWrong topic? Because people often talk about \"pleasure\" as if there were such a thing, the obtaining of which is the reason that people seek pleasurable experiences, and the maximisation of which is what people do. But it appears to me that \"pleasure\" is nothing more than a label applied to disparate experiences, becoming a mere dormitive principle when used as an explanation. Does that difference result from an actual difference in mental constitution?

\n

If there are people who do experience a definite thing common to all enjoyable experiences, this might be one reason for the attraction, to some, of utilitarian theories -- even for taking some sort of utilitarianism to be obviously, trivially true. My experience, as set out above, is certainly one reason why I find all varieties of utilitarianism a priori implausible.

" } }, { "_id": "DurJh5k3Br3xFSHpe", "title": "What's a \"natural number\"?", "pageUrl": "https://www.lesswrong.com/posts/DurJh5k3Br3xFSHpe/what-s-a-natural-number", "postedAt": "2010-10-07T13:34:26.983Z", "baseScore": 14, "voteCount": 9, "commentCount": 18, "url": null, "contents": { "documentId": "DurJh5k3Br3xFSHpe", "html": "

A big problem with natural numbers is that the axiomatic method breaks on them.

\n

Mystery #1: if we're allowed to talk about sets of natural numbers, sets of these sets, etc., then some natural-sounding statements are neither provable nor disprovable (\"independent\") from all the \"natural\" axiomatic systems we've invented yet. For example, the continuum hypothesis can be reformulated as a statement about sets of sets of natural numbers. The root cause is that we can't completely axiomatize which sets of natural numbers exist, because there's too many of them. That's the substantial difference between second-order logic and first-order logic; logicians say that second-order logic is \"defined semantically\", not by any syntactic procedure of inference.

\n

Mystery #2: if we're allowed to talk about arithmetic and use quantifiers (exists and forall) over numbers, but not over sets of them - in other words, use first-order logic only - then some natural-sounding statements appear to be true, but to prove them, we need to accept as axioms a lot of intuition about concepts other than natural numbers. For example, Goodstein's theorem is a simple arithmetical statement that cannot be proved in Peano arithmetic, but can be proved in \"stronger theories\". This means the theorem is a consequence of our intuition that some \"stronger theory\", e.g. ZFC, is consistent - but where did that intuition come from? It doesn't seem to be talking about natural numbers anymore.
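To make Goodstein's theorem concrete, here is a small sketch of my own (using the usual hereditary-base-bumping definition) that computes the first few terms of a Goodstein sequence. The theorem says every such sequence eventually reaches 0, yet Peano arithmetic cannot prove it:

```python
# Compute the first few terms of a Goodstein sequence (my own illustration).

def bump_base(n, b):
    """Write n in hereditary base-b notation, then replace every b with b+1."""
    if n == 0:
        return 0
    result, exponent = 0, 0
    while n > 0:
        digit = n % b
        if digit:
            result += digit * (b + 1) ** bump_base(exponent, b)
        n //= b
        exponent += 1
    return result

def goodstein(n, steps=6):
    """Return the first `steps` terms of the Goodstein sequence starting at n."""
    terms, base = [n], 2
    for _ in range(steps - 1):
        if n == 0:
            break
        n = bump_base(n, base) - 1
        base += 1
        terms.append(n)
    return terms

print(goodstein(3))  # [3, 3, 3, 2, 1, 0] -- reaches 0 almost immediately
print(goodstein(4))  # [4, 26, 41, 60, 83, 109] -- also reaches 0, but only after an absurdly long time
```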

\n

Can we teach a computer to think about natural numbers the same way we do, that is, somehow non-axiomatically? Not just treat numbers as opaque \"things\" that obey the axioms of PA - that would make a lot of true theorems unreachable! This seems to be the simplest AI-hard problem that I've ever seen.

" } }, { "_id": "zniSQoYeFPsc2KywK", "title": "I need to understand more about...", "pageUrl": "https://www.lesswrong.com/posts/zniSQoYeFPsc2KywK/i-need-to-understand-more-about", "postedAt": "2010-10-07T09:15:22.910Z", "baseScore": 8, "voteCount": 7, "commentCount": 15, "url": null, "contents": { "documentId": "zniSQoYeFPsc2KywK", "html": "

... the general take on climate change here.

\n

Please read a little more before voting this down - I am not looking to initiate a debate on climate change - merely to understand what goes on when it is mentioned.

\n

Disclosure: I am personally concerned about the impact of climate change in the medium term; I am largely convinced it is caused by human activity; I can get moralistic about it. I won't push any of that in this discussion.

\n

I am a relatively recent habituee of these fora, and mostly I find them full of entertaining, intelligent people talking thoughtfully about things that interest/concern me. I'm pleased - this is rare. Thanks, all.

\n

I searched for mentions of climate change, and read some threads. I got the impression that a majority viewpoint here was that it is not an issue that concerns people here. I got the further impression that it is an issue which arouses feelings of irritation or worse in a significant minority of people here.

\n

Neither of these impressions was strong enough to give me any useful level of certainty, though.

\n

So I thought Will Newsome's wonderful Irrationality Game post might help me with an experiment.

\n

I posted the following:

\n

\"Human activity is responsible for a significant proportion of observable climate change. 90% confidence\"

\n

I expected (in the topsy-turvy context of that post) to get UPvoted, as I assumed a majority of viewers would disagree. I hoped to see some comments which would help clarify my weak impressions.

\n

In fact, I got downvoted (-7), suggesting fairly significant agreement. At the same time, the comment is invisible (to my attempts) in the list of comments to the post, leading me to suspect that it has been removed by a moderator (perhaps on the grounds that CC is viewed as 'political'?).

\n

Can anyone help me? I do not intend to use anything here as a platform for pushing an agenda - I'd just like to understand.

" } }, { "_id": "HJNmdxM2y8gizaRPM", "title": "Greg Egan disses stand-ins for Overcoming Bias, SIAI in new book", "pageUrl": "https://www.lesswrong.com/posts/HJNmdxM2y8gizaRPM/greg-egan-disses-stand-ins-for-overcoming-bias-siai-in-new", "postedAt": "2010-10-07T06:55:56.543Z", "baseScore": 48, "voteCount": 41, "commentCount": 42, "url": null, "contents": { "documentId": "HJNmdxM2y8gizaRPM", "html": "

From a review of Greg Egan's new book, Zendegi:

\r\n
\r\n

Egan has always had difficulty in portraying characters whose views he disagrees with. They always end up seeming like puppets or strawmen, pure mouthpieces for a viewpoint. And this causes trouble in another strand of Zendegi, which is a mildly satirical look at transhumanism. Now you can satirize by nastiness, or by mockery, but Egan is too nice for the former, and not accurate enough at mimicry for the latter. It ends up being a bit feeble, and the targets are not likely to be much hurt.

Who are the targets of Egan’s satire? Well, here’s one of them, appealing to Nasim to upload him:

“I’m Nate Caplan.” He offered her his hand, and she shook it. In response to her sustained look of puzzlement he added, “My IQ is one hundred and sixty. I’m in perfect physical and mental health. And I can pay you half a million dollars right now, any way you want it. [...] when you’ve got the bugs ironed out, I want to be the first. When you start recording full synaptic details and scanning whole brains in high resolution—” [...] “You can always reach me through my blog,” he panted. “Overpowering Falsehood dot com, the number one site for rational thinking about the future—”

(We’re supposed, I think, to contrast Caplan’s goal of personal survival with Martin’s goal of bringing up his son.)

“Overpowering Falsehood dot com” is transparently overcomingbias.com, a blog set up by Robin Hanson of the Future of Humanity Institute and Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence. Which is ironic, because Yudkowsky is Egan’s biggest fan: “Permutation City [...] is simply the best science-fiction book ever written” and his thoughts on transhumanism were strongly influenced by Egan: “Diaspora [...] affected my entire train of thought about the Singularity.”

Another transhumanist group is the “Benign Superintelligence Bootstrap Project”—the name references Yudkowsky’s idea of “Friendly AI” and the description references Yudkowsky’s argument that recursive self-optimization could rapidly propel an AI to superintelligence. From Zendegi:

“Their aim is to build an artificial intelligence capable of such exquisite powers of self-analysis that it will design and construct its own successor, which will be armed with superior versions of all the skills the original possessed. The successor will produce a still more proficient third version, and so on, leading to a cascade of exponentially increasing abilities. Once this process is set in motion, within weeks—perhaps within hours—a being of truly God-like powers will emerge.”

Egan portrays the Bootstrap Project as a (possibly self-deluding, it’s not clear) confidence trick. The Project persuades a billionaire to donate his fortune to them in the hope that the “being of truly God-like powers” will grant him immortality come the Singularity. He dies disappointed and the Project “turn[s] five billion dollars into nothing but padded salaries and empty verbiage”.

\r\n
\r\n

 (Original pointer via Kobayashi; Risto Saarelma found the review. I thought this was worthy of a separate thread.)

" } }, { "_id": "6WXahRGtty6zPEGLS", "title": "What is known about consciousness?", "pageUrl": "https://www.lesswrong.com/posts/6WXahRGtty6zPEGLS/what-is-known-about-consciousness", "postedAt": "2010-10-06T17:31:58.477Z", "baseScore": 4, "voteCount": 3, "commentCount": 3, "url": null, "contents": { "documentId": "6WXahRGtty6zPEGLS", "html": "

Three questions I would like to find some answers for are:

\n\n

Does anyone know some good sources which cover any of these? Also, it would be great to hear less-wrongers' own input on these questions!

\n

 

" } }, { "_id": "CYxHeG8ZoRWgJWF6r", "title": "What I wish the internet had: pharmaceutical forum", "pageUrl": "https://www.lesswrong.com/posts/CYxHeG8ZoRWgJWF6r/what-i-wish-the-internet-had-pharmaceutical-forum", "postedAt": "2010-10-06T13:09:59.918Z", "baseScore": 7, "voteCount": 5, "commentCount": 7, "url": null, "contents": { "documentId": "CYxHeG8ZoRWgJWF6r", "html": "

Maybe this exists already; maybe it would be drastically illegal; but it would be awesome.

\n

I want a forum where people compare their experiences with pharmaceuticals. You add an entry that includes: what condition you have, what drug you took, what the side effects were, how well the drug worked.  Some of these are pull-down menus and ratings on a built-in scale, but there's also room for your own commentary.  The site runs stats on the drugs based on user data, and also provides a place to chat and vent about the success or failure of the medications.
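A minimal sketch of what one entry and a crude aggregation might look like, just to make the idea concrete (the DrugReport and summarize names and the exact fields are hypothetical, not a description of any existing site):

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class DrugReport:
    condition: str                     # pull-down menu
    drug: str                          # pull-down menu
    side_effects: list = field(default_factory=list)  # chosen from a fixed vocabulary
    efficacy: int = 3                  # built-in 1-5 rating scale
    commentary: str = ""               # free-form text

def summarize(reports, drug):
    """Crude aggregate stats over all reports for one drug."""
    matching = [r for r in reports if r.drug == drug]
    if not matching:
        return {"n": 0, "mean_efficacy": None, "side_effect_rate": None}
    return {
        "n": len(matching),
        "mean_efficacy": mean(r.efficacy for r in matching),
        "side_effect_rate": sum(bool(r.side_effects) for r in matching) / len(matching),
    }
```

Real statistics would have to correct for self-selection, but even crude aggregates like these would go beyond single anecdotes.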

\n

I came to this idea from two perspectives.  One, I'm a twenty-something woman.  Everyone I know has a different experience with birth control; there are a variety of pills, and girls talk about a corresponding variety of side effects.  But that's all anecdotal.  What your doctor will tell you is based on the side effects in the clinical trials (and is calculated to reassure you.)  My doctor told me, for instance, that if you don't get any side effects in the first three months, you never will -- but this happens to run counter to direct experience.  I wish I could aggregate all the personal stories about birth control and other drugs.  (Medications for mental illness also seem to be common among people my age, and there's a corresponding variety of stories about side effects and success rates, some very positive and some very negative.  Wouldn't it be really useful to compare experiences with lots of people before making a decision about that?)

\n

The other perspective I come from here is as someone with a passing interest in statistics and science methodology.  Pharmaceuticals are tested in scientifically controlled but small studies.  It would be useful, as a sanity check, to see if really large quantities of unscientific internet data come up with roughly the same results.  We now have the opportunity to do what I think of as \"Big & Sloppy\" science -- it may be sloppy, but its very enormity might make it useful. Sergey Brin's search for a Parkinson's cure is based on self-reported internet data.  It's not the way medical researchers conduct tests, but it takes advantage of the vast amount of anecdotal information that so far medicine doesn't have a great way to harness.  People were experiencing health benefits from aspirin for decades before scientists observed a link to heart disease.  I'm not as sure as Brin is that \"Big & Sloppy\" medical science can replace the traditional sort, but it certainly ought to generate insights and hypotheses.  

\n

I also suspect that the \"Big & Sloppy\" approach makes for good advice on choosing pharmaceuticals.  In practice, a lot of us do make medical decisions at least partly based on anecdote (Uncle Jim tried Wellbutrin and it made him fat.)  Anecdote, at least, has no agenda, compared to professional advice -- \"Uncle Jim\" really did get fat.  But logically, if you would consider anecdotes from people you know, you should be more willing to consider aggregated anecdotal information from tens of thousands of people.

\n

So: does this already exist?  Is it illegal? And, out of curiosity, what would one need to do to build it?  (to create a forum section, user identities, surveys, and statistics.)

" } }, { "_id": "GdFqbr2d5TSJKhGEw", "title": "[LINK] Humans are bad at summing up a bunch of small numbers", "pageUrl": "https://www.lesswrong.com/posts/GdFqbr2d5TSJKhGEw/link-humans-are-bad-at-summing-up-a-bunch-of-small-numbers", "postedAt": "2010-10-06T13:01:58.192Z", "baseScore": 8, "voteCount": 5, "commentCount": 4, "url": null, "contents": { "documentId": "GdFqbr2d5TSJKhGEw", "html": "

\"Outsmart your brain by knowing when you are wrong\":
http://troysimpson.co/outsmart-your-brain-by-knowing-when-you-are-w

\n
\n

Humans are incredibly bad at summing up a bunch of small numbers.  I had recently read a study that looked into why people are so bad at this task, but the important part was people commonly underestimate the total.

\n

...

\n

Knowing what you are bad at can be incredibly important. Use this trick when estimating the timeline for a lot of small tasks, or figuring out what your monthly expenses are. It always seems shocking that your credit card bill is so high when it is made up of a bunch of small purchases. Learn what else your brain cannot do well and use that to your advantage.

\n
" } }, { "_id": "H469GGMB93ymC97g4", "title": "Help: Info on intelligence-focused genetic engineering?", "pageUrl": "https://www.lesswrong.com/posts/H469GGMB93ymC97g4/help-info-on-intelligence-focused-genetic-engineering", "postedAt": "2010-10-06T01:41:31.959Z", "baseScore": 2, "voteCount": 4, "commentCount": 18, "url": null, "contents": { "documentId": "H469GGMB93ymC97g4", "html": "

This is a request for analyses and info on intelligence-related genetic engineering; how far away is genetic engineering designed to increase intelligence? What's the scientific/cultural/technological landscape? Are the obstacles funding-related or technological? Are there any proposed methods of intelligence-boosting gene therapy? Any good overall introductions?

\n

I expect genetic engineering and especially intelligence-enhancing genetic engineering to drastically change the socioeconomic and sociocultural landscape when the effects hit. This affects singularity timelines along with everything else about the future. Thanks in advance for any links or explanations.

" } }, { "_id": "uJwbtT6sgXJcQY5pR", "title": "Sam Harris' surprisingly modest proposal", "pageUrl": "https://www.lesswrong.com/posts/uJwbtT6sgXJcQY5pR/sam-harris-surprisingly-modest-proposal", "postedAt": "2010-10-06T00:46:05.838Z", "baseScore": 14, "voteCount": 15, "commentCount": 44, "url": null, "contents": { "documentId": "uJwbtT6sgXJcQY5pR", "html": "

Sam Harris has a new book, The Moral Landscape, in which he makes a very simple argument, at least when you express it in the terms we tend to use on LW: he says that a reasonable definition of moral behavior can (theoretically) be derived from our utility functions. Essentially, he's promoting the idea of coherent extrapolated volition, but without all the talk of strong AI.

\n

He also argues that, while there are all sorts of tricky corner cases where we disagree about what we want, those are less common than they seem. Human utility functions are actually pretty similar; the disagreements seem bigger because we think about them more. When France passes laws against wearing a burqa in public, it's news. When people form an orderly line at the grocery store, nobody notices how neatly our goals and behavior have aligned. No newspaper will publish headlines about how many people are enjoying the pleasant weather. We take it for granted that human utility functions mostly agree with each other.

\n

What surprises me, though, is how much flak Sam Harris has drawn for just saying this. There are people who say that there cannot, in principle, be any right answer to moral questions. There are heavily religious people who say that there's only one right answer to moral questions, and it's all laid out in their holy book of choice. What I haven't heard yet are any well-reasoned objections that address what Harris is actually saying.

\n

So, what do you think? I'll post some links so you can see what the author himself says about it:

\n

\"The Science of Good and Evil\": An article arguing briefly for the book's main thesis.

\n

Frequently asked questions: Definitely helps clarify some things.

\n

TED talk about his book: I think he devotes most of this talk to telling us what he's not claiming.

" } }, { "_id": "vLre8LF6KpBtiGNrC", "title": "Rationality quotes: October 2010", "pageUrl": "https://www.lesswrong.com/posts/vLre8LF6KpBtiGNrC/rationality-quotes-october-2010", "postedAt": "2010-10-05T11:38:59.920Z", "baseScore": 8, "voteCount": 7, "commentCount": 487, "url": null, "contents": { "documentId": "vLre8LF6KpBtiGNrC", "html": "

This is our monthly thread for collecting these little gems and pearls of wisdom, rationality-related quotes you've seen recently, or had stored in your quotesfile for ages, and which might be handy to link to in one of our discussions.

\n" } }, { "_id": "orfsnNFTPepFyezmy", "title": "Everyday Questions Wanting Rational Answers", "pageUrl": "https://www.lesswrong.com/posts/orfsnNFTPepFyezmy/everyday-questions-wanting-rational-answers", "postedAt": "2010-10-05T06:04:58.990Z", "baseScore": 8, "voteCount": 7, "commentCount": 52, "url": null, "contents": { "documentId": "orfsnNFTPepFyezmy", "html": "

I'm working on a list of question types which come up frequently in day-to-day life but which I haven't yet found a reliable, rational way to answer. Here are some examples, including summaries of any progress made in the comments.

\n

 

\n
\n

The third request in the Serenity Prayer[1] is for \"the wisdom to know the difference\" between things we should accept with our serenity and things we should change with our courage. Pending an official response to the prayer, what are some rational criteria for deciding between those two responses to an unfavorable situation?

\n
\n

Practice the ability to judge how important something is to change, making sure to examine your criteria of importance. Identify the reasons you want to change it, and try to normalize your emotional response to the facts. Learn about the difficulty of changing a thing by investigating other people's attempts to do so. Be aware that, the less one knows about a field, the less one is able to judge how difficult a task in that field is. Ask an expert if you need to. Another heuristic for difficulty of changing something is that the closer it is to one's own mind, the more control one has over it. When you know as much as you can, do a cost-benefit analysis.

\n
\n

I know that asking for what I want is often the best way to get it (\"Will you take your hat off so I can see the screen, please?\"), but it's sometimes clearly not appropriate (\"Will you please give me all your money with no expectation of benefit nor repayment?\"). Those are two ends of a spectrum. When the thing I want is somewhere in between (\"Can I have a little of your time to vent about something that's bothering me?\"), how should I decide whether to ask?

\n
\n

Unreasonable requests are those which would only be fulfilled if the asker had power over the askee which they do not, which represent an unequal exchange between equals, or which are not actually possible. We don't want to make unreasonable requests because they are at best unfair social impositions and at worst rude and damaging to relationships. This is complicated because requests between friends aren't about direct exchange; it's expected that sometimes one person will need help, and sometimes the other will, and in the long run it'll even out. In a strong friendship where both people have treated each other well, it's more appropriate to ask for a large favor than it would be to ask a relative stranger the same thing.

\n

The difficult requests to judge are those where the power balance or strength of the relationship, or the values of what's being exchanged, are unclear. That is, they require a more accurate judgment of either the relationship itself or of the other person's needs and abilities than one is confident of making. As in the previous question, specialized knowledge can help predict how much trouble a given request might be. In the specific example above, one would need to understand the cost of lending an ear, in terms that could be compared to the benefit of venting. Prior communication is the best way to achieve this; basing an estimate on similar past situations is also good. Knowing nothing else, use an assumption of equality and the basic responsibility for oneself as heuristics. Finally, how close you can get to the edge of what's acceptable may depend on how much you trust the other person to tell you if your request is not reasonable (rather than acquiescing resentfully).

\n
\n

How do I balance the need for comfort and short-term happiness (making it mentally easier to be useful) with that for productivity and long-term happiness (setting me up to be happier and more useful later)? Again, some examples are clear-cut: it's almost certainly a better idea to do my homework at some point than to spend the entire week playing video games. A more difficult case is sleep. Get more of it and feel more rested and alert, or get less and have more time for fruitful tasks?

\n
\n

It's possible to minimize the necessity of choosing between these two things by doing work which is enjoyable and taking breaks after earning them. When the two types of activity do conflict, one way to get around it is to use time that wasn't available for work anyway to do unproductive things. Another is to have (and frequently review) clearly defined medium- and long-term goals, and weigh short-term choices against them. Doing this regularly may make it easier to judge activity choices on the fly.

\n
\n

Go players describe plateaus that last for months, but which they eventually climb out of again. This creates a sort of human halting problem: How do you tell the difference between a long plateau without improvement, and actually having reached your peak in a skill?

\n
\n

No comments yet, but here are some questions this raises for me: What does it mean to have reached your peak in a skill--is there actually a maximum amount you can usefully learn and practice, or just a (potentially variable) point of diminishing returns? Is it possible to know there's more to learn but not be able to learn it?

\n
\n

The William James zone is the positive feedback loop of mental and physical anger responses which keeps you angry even after the conflict has been addressed or resolved. I find myself in the WJZ sometimes when I remember or anticipate something which made/would make me angry, even when no conflict is presently occurring. This happens primarily when I don't have a ready distraction from the upsetting thought, e.g. when I'm in the shower or waiting to fall asleep. Other than simply waiting for it to pass, how can I get out of the WJZ or avoid entering it?

\n
\n

So far, the only mitigating factor I've found is my overall physical and mental state. Being hungry, tired, or stressed makes it easier to fall into the anger cycle and harder to get out. Therefore, taking care of myself in general helps to prevent it, but it's not always possible to remedy those problems after the cycle has already started. When circumstances permit, physical activity may provide an outlet for the energy that keeps this cycle going.

\n
\n

Where is the line between acceptable nonverbal communication and unacceptable manipulation? Is it in the thing being sought, the manner of seeking, the intent of the communicator, or something else?

\n
\n

A conversation offsite led to the following: Manipulation involves both deliberate instigation of emotion and trying to persuade someone to do what you want, but isn't defined by either. (The first one describes gestures of affection, and the second includes ordinary debate.) The definition we settled on was \"using emotion to bypass someone's normal decision-making process.\" Trying to get someone to do what you want is not inherently manipulative; trying to make them feel something so that they will do what you want is.

\n

 

\n

Naturally I'm looking for ideas about how to answer these questions, including links to earlier thoughts about them[2], but you get bonus points for supplying actually usable heuristics, rather than just opining on my examples. But I'd also like to hear it if you've got any questions of your own that fit this form. Consider it a sort of lowbrow subset of open problems--difficulties you're aware of having on a regular basis but haven't yet been able to solve.

\n

(Tag suggestions are appreciated. I'm unaccustomed to using content tags, so I made some guesses based on the site's tagcloud and what's on the similar Open Problems post.)

\n

 

\n
\n

[1] I actually quite like the Serenity Prayer, despite being entirely nontheistic, because it presents a set of traits to aspire to for specific purposes I can get behind.

\n

[2] Until I've read the entire LW archive, I'm constantly paranoid that anything I post will be a second-rate rehash.

" } }, { "_id": "wZDsKj8oRmYBScMHB", "title": "[META] Proposed title keywords for Discussions", "pageUrl": "https://www.lesswrong.com/posts/wZDsKj8oRmYBScMHB/meta-proposed-title-keywords-for-discussions", "postedAt": "2010-10-05T05:09:23.935Z", "baseScore": 17, "voteCount": 13, "commentCount": 9, "url": null, "contents": { "documentId": "wZDsKj8oRmYBScMHB", "html": "

I propose the following title keywords for Discussion posts, to be included as I did with [META] in this one.

\n\n

Feel free to propose others, debate the merit of the ones I've suggested, tell me that the whole thing is a stupid idea, etc.

\n

This is not meant to replace the actual Tags system, which is good for arbitrarily tagging posts by topic. I see this as a way to allow us to quickly scan through the list of Discussions and know what general type of content to expect from each item. I don't expect that absolutely everything will need one of these keywords, but many/most of the things that have been posted in Discussion so far seem to be categorizable along these lines. And I don't intend by this to encourage specific types of content (e.g. if we get more polls as Discussion posts now as a result of having a [POLL] keyword, then it is not working correctly), I only suggest that it may be useful for organizing the things people are already using this section for.

" } }, { "_id": "8JuxmrJEXBeRbbmww", "title": "Hive mind scenario", "pageUrl": "https://www.lesswrong.com/posts/8JuxmrJEXBeRbbmww/hive-mind-scenario", "postedAt": "2010-10-04T21:02:25.750Z", "baseScore": 0, "voteCount": 3, "commentCount": 3, "url": null, "contents": { "documentId": "8JuxmrJEXBeRbbmww", "html": "

In a conceivable future, humans gain the technology to eliminate physical suffering and to create interfaces between their own brains and computing devices--interfaces which are sufficiently advanced that the border between the brain and the computer practically vanishes.  Humans are able to access all public knowledge as if they 'knew' it themselves, and they can also upload their own experiences to this 'web' in real-time.  The members of this network would lose part of their individuality since an individual's unique set of skills and experiences is a foundational component of identity.

\n

However, although knowledge can be shared for low cost, computing power will remain bounded and valuable.  Even if all other psychological needs are pacified, humans will probably still compete for access to computing power.

\n

But what other elements of identity might still remain?  Is it reasonable to say that individuality in such a hive mind would reduce to differing preferences for the use of computational power?

" } }, { "_id": "8MLb9Bj8kwQzAieSA", "title": "About Rationality Quotes", "pageUrl": "https://www.lesswrong.com/posts/8MLb9Bj8kwQzAieSA/about-rationality-quotes", "postedAt": "2010-10-04T18:34:04.259Z", "baseScore": 3, "voteCount": 2, "commentCount": 6, "url": null, "contents": { "documentId": "8MLb9Bj8kwQzAieSA", "html": "

...so do these go to the main LW now, or do we keep them in Discussion?

\n

My vote would be \"to the main LW\", but since I both want to discuss this and would like to play with the \"feature\" that allows moving a thread from one to the other at some point:

\n

 

\n

Discuss.

" } }, { "_id": "zrTdssTiDzpvNrybR", "title": "Pascal's Mugging as an epistemic problem", "pageUrl": "https://www.lesswrong.com/posts/zrTdssTiDzpvNrybR/pascal-s-mugging-as-an-epistemic-problem", "postedAt": "2010-10-04T17:52:30.266Z", "baseScore": 6, "voteCount": 4, "commentCount": 37, "url": null, "contents": { "documentId": "zrTdssTiDzpvNrybR", "html": "

Related to: Some of the discussion going on here

\n

In the LW version of Pascal's Mugging, a mugger threatens to simulate and torture people unless you hand over your wallet. Here, the problem is decision-theoretic: as long as you precommit to ignore all threats of blackmail and only accept positive-sum trades, the problem disappears.

\n

However, in Nick Bostrom's version of the problem, the mugger claims to have magic powers and will give Pascal an enormous reward the following day if Pascal gives his money to the mugger. Because the utility promised by the mugger is so large, it outweighs the tiny probability Pascal assigns to the mugger telling the truth. From Bostrom's essay:

\n
\n

Pascal: Gee . . . OK, don’t take this personally, but my credence that you have these magic powers whereof you speak is about one in a quadrillion.
Mugger: Wow, you are pretty confident in your own ability to tell a liar from an honest man! But no matter. Let me also ask you, what’s your probability that I not only have magic powers but that I will also use them to deliver on any promise – however extravagantly generous it may seem – that I might make to you tonight?
Pascal: Well, if you really were an Operator from the Seventh Dimension as you assert, then I suppose it’s not such a stretch to suppose that you might also be right in this additional claim. So, I’d say one in 10 quadrillion.
Mugger: Good. Now we will do some maths. Let us say that the 10 livres that you have in your wallet are worth to you the equivalent of one happy day. Let’s call this quantity of good 1 Util. So I ask you to give up 1 Util. In return, I could promise to perform the magic tomorrow that will give you an extra 10 quadrillion happy days, i.e. 10 quadrillion Utils. Since you say there is a 1 in 10 quadrillion probability that I will fulfil my promise, this would be a fair deal. The expected Utility for you would be zero. But I feel generous this evening, and I will make you a better deal: If you hand me your wallet, I will perform magic that will give you an extra 1,000 quadrillion happy days of life.
Pascal: I admit I see no flaw in your mathematics.

\n
\n

As a result, says Bostrom, there is nothing rationally preventing Pascal from taking the mugger's offer even though it seems intuitively unwise. Unlike the LW version, in this version the problem is epistemic and cannot be solved as easily.
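To make the mugger's arithmetic explicit, here is a minimal sketch of the naive expected-utility calculation using the numbers from Bostrom's dialogue (the variable names are mine; treating the wallet as 1 util is Bostrom's stipulation):

```python
p_payoff = 1e-16       # Pascal's credence: 1 in 10 quadrillion
cost = 1               # handing over the wallet costs 1 util (one happy day)
promised = 1e18        # 1,000 quadrillion happy days promised

expected_gain = p_payoff * promised - cost
print(expected_gain)   # 99.0, so naive expected utility says hand over the wallet
```

The epistemic problem is precisely that nothing in this calculation stops the mugger from inflating the promised payoff faster than Pascal deflates his credence.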

\n

Peter Baumann suggests that this isn't really a problem because Pascal's probability that the mugger is honest should scale with the amount of utility he is being promised. However, as we see in the excerpt above, this isn't always the case: the mugger uses the same mechanism to procure the utility regardless of the amount, and so our belief will be based on the probability that the mugger has access to this mechanism (in this case, magic), not on the amount of utility he promises to give. As a result, I believe Baumann's solution to be false.

\n

So, my question is this: is it possible to defuse Bostrom's formulation of Pascal's Mugging? That is, can we solve Pascal's Mugging as an epistemic problem?

" } }, { "_id": "3NEDdieKZ48WnZRjg", "title": "Open Thread, October, 2010", "pageUrl": "https://www.lesswrong.com/posts/3NEDdieKZ48WnZRjg/open-thread-october-2010", "postedAt": "2010-10-04T16:03:51.326Z", "baseScore": 0, "voteCount": 3, "commentCount": 2, "url": null, "contents": { "documentId": "3NEDdieKZ48WnZRjg", "html": "

 

\n

Because we are into the 4th day of October, and nobody else has made an Open Thread post yet.

\n

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

" } }, { "_id": "J59ZAZvn3X92tvbBx", "title": "Limitations of eyewitness testimony", "pageUrl": "https://www.lesswrong.com/posts/J59ZAZvn3X92tvbBx/limitations-of-eyewitness-testimony", "postedAt": "2010-10-04T13:03:16.282Z", "baseScore": 12, "voteCount": 10, "commentCount": 4, "url": null, "contents": { "documentId": "J59ZAZvn3X92tvbBx", "html": "

From Wikipedia:

\n

Eyewitness testimony isn't reliable-- it degrades rapidly with time (significant fading in 20 minutes), is easily overridden by circumstances (people are apt to assume that the guilty person is in a line-up unless they're specifically told the guilty person might not be there-- there's a risk of settling for the best available match rather than looking for a genuinely satisfying one), cross-racial identification is less accurate than within-race identification[1], and the presence of a weapon makes accurate identification less likely....

\n

It goes on-- if you have any interest in this sort of thing, I recommend reading the whole article.

\n

[1] I wonder if this has been tested in societies with different classification systems. For example, I've been told by someone who lived there that in Ireland, everyone is classified as Catholic or Protestant-- even if they're Jewish. Would Irish people have problems doing identification across the Catholic-Protestant line, even if all the people involved would be considered white in America and not set off the identification problem?

" } }, { "_id": "dYhBpApa9HrmEFGAw", "title": "News: The Quest for Unknown Unknowns", "pageUrl": "https://www.lesswrong.com/posts/dYhBpApa9HrmEFGAw/news-the-quest-for-unknown-unknowns", "postedAt": "2010-10-04T11:48:55.663Z", "baseScore": 1, "voteCount": 8, "commentCount": 11, "url": null, "contents": { "documentId": "dYhBpApa9HrmEFGAw", "html": "

How important is 'the latest news'? What would it mean to ignore most news and concentrate on our present goals?

\n\n

These days many people are following an enormous number of news sources. I myself notice how skimming through my Google Reader items is increasingly time-consuming. But in acknowledging that news media consumption is a time killer, should we also jump to the conclusion that it is a waste of time?

\n
As we know,
There are known knowns.
There are things
We know we know.
We also know
There are known unknowns.
That is to say
We know there are some things
We do not know.
But there are also unknown unknowns,
The ones we don't know
We don't know.

— Donald Rumsfeld, Feb. 12, 2002, Department of Defense news briefing
\n

As long as we embed ourselves in the collective intelligence of 'the sphere of human thought', as long as we are a part of the growing Noosphere, we will be nourished. But we have to take care not to drown. The balance between an ill-nourished information diet and gluttony is unsteady.

\n

Google and its kind are the first representations of the wonders of our possible future. They are slave-Gods at our disposal, ready to serve us at all times. Google is a literal-minded information genie, there to satisfy our desires indifferent to the consequences that might arise for us.

\n

Thus we have to learn how, when and for what to ask the right questions. But the underlying nature of unknown unknowns does not permit us to question them. The impossibility of knowing the possibilities that lie ahead is the dilemma we face. For we know about, or rather assume, the existence of prospects, of possible possibilities, that we don't know we don't know about.

\n

How much of what you know and do has its origins in some blog post or other kind of news item? Would I even know about Less Wrong if I weren't the heavy news addict that I am?

\n

Have I already reached a level of knowledge that allows me to get from here to everywhere, without exposing myself to all the noise out there in the hope of coming across some valuable information nugget which might help me reach the next level?

\n

How do we ever know that there isn't something out there which might trump our current goals? Just one click away, a new truth might shift our preferences.

\n

Is there a time to stop searching and approach what is at hand? Start learning and improving upon the possibilities we already know about? What proportion of our time should we spend on the prospect of unknown unknowns?

\n
\n

How I do it:

\n
\n

Science is the only news. When you scan through a newspaper or magazine, all the human interest stuff is the same old he-said-she-said, the politics and economics the same sorry cyclic dramas, the fashions a pathetic illusion of newness, and even the technology is predictable if you know the science. Human nature doesn’t change much; science does, and the change accrues, altering the world irreversibly. – Stewart Brand, Whole Earth Discipline (2009), p. 216

\n
\n

I mainly get my news from blogs: blogs of experts on different topics who preprocess and refine their specialities. That is also my general advice: focus on the personal blogs of experts. That way you have a very high signal-to-noise ratio while being pretty sure not to miss anything important in that area. For example, if you care about science fiction, subscribe to some blogs of your favorite authors. If you care about genetics, subscribe to a few different personal blogs of experts on that subject. Most importantly, avoid in-between sources. Do not read what is made to sound good, to earn money, or what has been smoothed out for a large audience. Everybody else is ultimately grounding their material in the experts, as in 'science is the only news'. Noise reduction is very important these days. All the people out there basically build upon a few underlying unique sources who generate the data in the first place. Those are the people you have to follow. Everything else will automatically come to you via the same people, since one of them will hear about anything you might have missed.

\n

A few well-chosen mainstream networks are good for staying up to date on big events and world news. But the rest has to be experts. As I said above regarding science fiction, I don't follow SF review sites or the big publishers, but a few selected authors. Those authors will know about important things happening in the field of SF and post about them. They will also lead you to other unique sources in the same field. And, due to their overlapping interests, they will also keep you up to date on other topics. That's how it works. Everything else is just deadly noise that will steal your time.

\n

And don't be fooled by the new trend of Twitter and its kind. I don't get why people these days would rather use 'real-time' services for their daily information diet. Don't do it. Stay with RSS, especially Google Reader. If you really want to follow somebody on Twitter, subscribe to their RSS feed. If you ask me, all the stuff you can find on services like Twitter was ultimately found by people subscribing to lots of RSS feeds. Without them, Twitter would merely be a huge pile of boring one-liners with no information content. RSS is faster, easier and has a lot more features. You also won't miss what happened if you haven't been reading updates in 'real-time'.

" } }, { "_id": "zcp78EobzGxTkeiJH", "title": "Math prerequisites for understanding LW stuff", "pageUrl": "https://www.lesswrong.com/posts/zcp78EobzGxTkeiJH/math-prerequisites-for-understanding-lw-stuff", "postedAt": "2010-10-04T11:30:10.679Z", "baseScore": 27, "voteCount": 23, "commentCount": 16, "url": null, "contents": { "documentId": "zcp78EobzGxTkeiJH", "html": "

I just got a PM with this question: \"What would be the minimum intellectual investment necessary to be able to fruitfully take part in the discussion of decision theory on LW?\" This is not the first time I've been asked that. Our new discussion section looks like the perfect place to post my answer:

\n

1) Learn enough game theory to correctly find Nash equilibria in 2x2 games all by yourself.
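As an informal self-test for item 1 (my own sketch, not part of the original post), the pure-strategy Nash equilibria of a 2x2 game can be found by brute-force best-response checking:

```python
def pure_nash_2x2(payoffs):
    """payoffs[r][c] = (row player's payoff, column player's payoff)."""
    equilibria = []
    for r in range(2):
        for c in range(2):
            row_best = payoffs[r][c][0] >= payoffs[1 - r][c][0]
            col_best = payoffs[r][c][1] >= payoffs[r][1 - c][1]
            if row_best and col_best:
                equilibria.append((r, c))
    return equilibria

# Prisoner's dilemma: strategy 0 = cooperate, 1 = defect
pd = [[(3, 3), (0, 5)],
      [(5, 0), (1, 1)]]
print(pure_nash_2x2(pd))  # [(1, 1)], i.e. mutual defection
```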

\n

2) Learn enough probability theory to correctly solve Monty Hall, Monty Fall, Monty Crawl all by yourself.
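Similarly, for item 2, a quick simulation (again a sketch, not from the post) lets you check your pen-and-paper answer to the standard Monty Hall problem; the Monty Fall and Monty Crawl variants need different host behavior:

```python
import random

def monty_hall_trial(switch):
    doors = [0, 0, 1]            # one car (1) behind three doors
    random.shuffle(doors)
    pick = random.randrange(3)
    # Host opens a goat door other than the player's pick
    opened = next(i for i in range(3) if i != pick and doors[i] == 0)
    if switch:
        pick = next(i for i in range(3) if i not in (pick, opened))
    return doors[pick]

trials = 100_000
print("stay:  ", sum(monty_hall_trial(False) for _ in range(trials)) / trials)  # ~0.33
print("switch:", sum(monty_hall_trial(True) for _ in range(trials)) / trials)   # ~0.67
```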

\n

3) Learn enough programming to write a working quine (in any language of your choice) all by yourself.
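For item 3, here is one classic Python quine to compare against once you've written your own; running it prints its own two lines of source exactly:

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The trick is the same in most languages: store a template of the program as data, then print the template formatted with its own representation.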

\n

4) Learn enough logic to correctly solve the closing puzzle from Eliezer's cartoon guide.

\n

Then you're all set. Should take you a few days if you've studied math before, a few weeks if you haven't. No special texts needed beyond Wikipedia and Google.

" } }, { "_id": "peDWN7sQjXhWTcbyj", "title": "A Novice Buddhist's Humble Experiences", "pageUrl": "https://www.lesswrong.com/posts/peDWN7sQjXhWTcbyj/a-novice-buddhist-s-humble-experiences", "postedAt": "2010-10-04T10:40:31.291Z", "baseScore": 13, "voteCount": 19, "commentCount": 42, "url": null, "contents": { "documentId": "peDWN7sQjXhWTcbyj", "html": "

This is an introduction and description of vipassana meditation [edit: actually, anapanasati, not vipassana as such] more than Buddhism. Nonetheless I hope it serves as some testament to the value of Buddhist thought outside of meditation.

\n

One day I hope more people take up the mantle of the Buddhist Conspiracy, the Bayesanga, and preach the good word of Bayesian Buddhism for all to hear. Until then, though, I'd like to follow in the spirit of fellow Bayesian Buddhist Luke Grecki, and describe some of my personal experiences with anapanasati meditation in the hopes that they'll convince you to check it out.

\n

Nearly everything I've learned about anapanasati/vipassana comes from this excellent guide. It's easy to read and it actually explains the reasoning behind all of the things you're asked to do in vipassana. I heavily encourage you to give it a look. Meditation without instruction didn't lead me anywhere: I spent hours letting my mind get tossed about while I tried in vain to think of nothing. Trying to think of nothing is not a good idea. Vipassana is the practice of mindfulness, and it is recommended that you focus on your breath (focusing on breath is sort of a form of vipassana, and sort of its own thing; I haven't quite figured it out yet). I chose that as my anchor for meditation as recommended. Since reading the above linked guide on meditation, I've meditated a mere 4 times, for a total of 100 minutes. I'm a total novice! So don't confuse my experiences for the wisdom of a venerable teacher. But I think that maybe since you, too, will be a novice, hearing a novice's experiences might be useful. A mere 100 minutes of practice, and I've had many insights that have helped me think more clearly about mindfulness, compassion, self-improvement, the nature of feedback cycles and cascades, relationships between the body and cognition, and other diverse subjects.

\n

The first meditation session was for 10 minutes, the second for 40 minutes, the third for 10 minutes, and the fourth for 40 minutes again. Below are descriptions of the two 40-minute sessions. In the first, I experienced a state of jhana (the second jhana, to be precise; I'm about 70% confident), which was profoundly moving and awe-inspiring. In the second, my mind was a little too chatty to reach a jhana, but I did accidentally have a few insights that I think are important for me to have realized.

\n

The below are very personal experiences, and I don't suspect that they're typical. But I hope that describing my experiences will inspire you to consider mindfulness meditation, or to continue with mindfulness meditation, even if your experiences end up being very different from mine. You might find that some of the 'physiological effects' I list are egregious, but I decided to leave them in, 'cuz they just might be relevant. For instance, I find that, quite surprisingly, my level of mindfulness seems to directly correlate with how numb various parts of my body are! Also, listing what parts of me were in pain at various points might alert future practitioners to what sorts of pain might be expected from sitting still for longer than thirty minutes. The most interesting observations will probably be in the 'insights' sections.

\n
\n

40 minutes, Evening/night, September 17, 2010.

\n

Setting: First laying down on a bed with a pillow over my eyes, then sitting up on the bed on a pillow.

\n

Physiological effects:

\n\n

Insights on breath:

\n\n

General insights:

\n\n
\n
\n
\n
\n

40 minutes, Midnight, October 4, 2010.

\n

Setting: Seated on a pillow on blanket on roof of my house in Tucson.

\n

Physiological effects:

\n\n

Insights on breath:

\n\n

General insights:

\n\n
\n
\n
\n
I'd love for others to share their meditative experiences, or offer feedback for this post. I'm not sure if it should become a top-level post or not. But hopefully LW starts moving in a more Buddhist and effectiveness-oriented direction.
\n

\n
Taken out of original essay for being egregious: I've talked previously of how there seems to be a libertarian/technophile/futurist set of rationalists and a liberal/Buddhist/scientist set of rationalists, and each eyes the other's origin with a cocked eyebrow. Well, I'm from the LBS origin group, and I still think it's the better of the two. We're better at cooperating and we're more okay with praise. But we also seem to lack an unfortunate meme that I've seen in the LTF crowd: uncharitable misinterpretation of what the best ideas of Buddhism really are, even if not every practitioner or teacher is at the standard of the best philosophers of that tradition. Hofstadter made Zen cool, but other easier and probably more useful forms of Buddhism have been left unplundered. I think it has more to do with an instinctual negative reaction towards anything that seems vaguely spiritual or religious. And don't get me wrong, there's a lot of religion and spirituality in Buddhist countries, especially of the Mahayana sort. But the best texts in the Theravada tradition have very good, very deep, and very insightful epistemology and rationality in them, of the kind that wasn't to be found anywhere else in the world for hundreds upon hundreds more years, if at all.
" } }, { "_id": "Ah9nTF2wisdLrFXTE", "title": "(Some) politicians might be more rational than they appear.", "pageUrl": "https://www.lesswrong.com/posts/Ah9nTF2wisdLrFXTE/some-politicians-might-be-more-rational-than-they-appear", "postedAt": "2010-10-04T04:05:41.937Z", "baseScore": 22, "voteCount": 17, "commentCount": 5, "url": null, "contents": { "documentId": "Ah9nTF2wisdLrFXTE", "html": "

The following excerpt from an NPR story on TARP makes me feel that while the world is mad, I might have overestimated how much of that madness comes from our political leaders:

\n
\n

[Neel Kashkari, the Treasury official running TARP] became a sort of punching bag for legislators during his appearances at congressional hearings.

\n

\"When I went to testify, I was now one of the faces of this program. I had to do my best to explain what we were doing and why we were doing it, but I also had to represent us and show that I understood the American people's anger, I understood how deeply unfair this crisis is,\" he says. \"But we had to stabilize the financial system, because if we failed, it would be the American people who bear the consequences of that.\"

\n

Oftentimes, it meant he just had to sit there and absorb their verbal lashings.

\n

\"It was an opportunity for members of Congress to vent the anger they were hearing from their constituents. And because I was a young man at 35 years old, I think they felt it was appropriate for them to take it out on me — which is fine. I sat up there, I think, in total, for 25 hours of testimony — most of which was exceedingly hostile.\"

\n

Throughout the testimony, Kashkari says one particularly biting moment sticks in his memory.

\n

\"I remember when Congressman [Elijah] Cummings from Maryland asked me if I was a chump,\" he says. \"I remember just scratching my head, thinking to myself, 'Did he really just call me a chump? Or did I just imagine that?' \"

\n

But when the hearings were over, Kashkari says their attitudes switched from critical to congenial in a matter of moments.

\n

\"Oftentimes afterward, when the cameras were off, they would take you into a back room and tell you that they really appreciate how hard I was working or our team was working; that they support us and our programs, and let us know if they could be helpful,\" he says. \"It was a 180-degree change from what they were showing in front of the camera. That obviously surprised me, but eventually I got used to it.\"

\n

[...]

\n

As for whether he'll go back to Washington? He's undecided for now.

\n

\"I am of two minds. On one hand, I am cynical of Washington and the politics and people more focused on maintaining their popularity or getting re-elected than doing the people's work,\" he says. \"At the same time — in the depth of the national crisis in September of 2008 — I saw Washington at its finest. I saw and I was a part of Democrat and Republican leaders coming together to do something deeply unpopular but yet absolutely necessary for the sake of our country and for the sake of the American people.

\n

\"I've seen Washington work, and I know that it can work at least in times of crisis,\" he says. \"I hope we can make it work in other times as well.\"

\n
\n

The import of this, as I see it, is that many lawmakers are pretty cognizant of the relevant issues, and that their irrational grandstanding is often a facade for the sake of the voters who think more tribally than quantitatively. The incentives are mad, but at least for now overt hypocrisy (and actual competence) is more common than sincere idiocy in Congress.

\n

Of course, that's not completely reassuring, because if an important (but not immediately urgent) bill is unpopular, it's worth it for one party to actually oppose it and thereby gain political points. It's only in a genuine crisis that you'd see both parties actually do the right thing. (I leave it for your consideration whether TARP was indeed the right thing; but at least now I understand Harry Reid's claim that this was one of Washington's finest hours– a claim that Jon Stewart skewered mercilessly at the time.)

" } }, { "_id": "vxqaD3geB9oQRRrSx", "title": "Boredom as a defense mechanism?", "pageUrl": "https://www.lesswrong.com/posts/vxqaD3geB9oQRRrSx/boredom-as-a-defense-mechanism", "postedAt": "2010-10-04T02:01:43.393Z", "baseScore": 0, "voteCount": 3, "commentCount": 11, "url": null, "contents": { "documentId": "vxqaD3geB9oQRRrSx", "html": "

I've seen boredom used before as a way to detach from the world. Is there any material on boredom being used as a defense mechanism?

" } }, { "_id": "CJxSgaqG6y7z6Rbij", "title": "Are mass hallucinations a real thing?", "pageUrl": "https://www.lesswrong.com/posts/CJxSgaqG6y7z6Rbij/are-mass-hallucinations-a-real-thing", "postedAt": "2010-10-03T20:15:34.425Z", "baseScore": 17, "voteCount": 14, "commentCount": 10, "url": null, "contents": { "documentId": "CJxSgaqG6y7z6Rbij", "html": "

One of the explanations given in the irrationality game thread for UFOs and other paranormal events seen by multiple people at once was mass hysteria. This is also a common explanation given for any seemingly paranormal event that multiple people have independently witnessed.

\n

But mass hysteria is mostly known from incidents where people hysterically believe they have some disease, or have some hysterical delusion (false belief). In cases where people report seeing something or having a hallucination, it tends to be a few people across a large society. For example, when reports of Spring-Heeled Jack were going around England, multiple people claimed to have seen Spring-Heeled Jack, but there were no cases of hundreds of people seeing him simultaneously; therefore, the hysteria could have selected for people who were already a little bit crazy, or it could just have been that out of millions of English people a few of them were willing to say anything to get attention.

\n

Conformity pressures can cause people to misinterpret borderline perceptions - for example, if someone says a random pattern of dots form Jesus' face, I have no trouble believing that, thus primed, people will be able to find Jesus' face in the dots. But it's a much bigger leap to assert that if I say \"Jesus is standing right there in front of you\" with enough conviction, you'll suddenly see him too.

\n

Does anyone have any evidence that mass hysteria can produce a vivid hallucination shared among multiple otherwise-sane people?

" } }, { "_id": "CNLMxEkx7PqHnnvxC", "title": "Understanding vipassana meditation", "pageUrl": "https://www.lesswrong.com/posts/CNLMxEkx7PqHnnvxC/understanding-vipassana-meditation", "postedAt": "2010-10-03T18:12:59.408Z", "baseScore": 48, "voteCount": 49, "commentCount": 77, "url": null, "contents": { "documentId": "CNLMxEkx7PqHnnvxC", "html": "

Related to: The Trouble With \"Good\"

\n

Followed by: Vipassana Meditation: Developing Meta-Feeling Skills

\n

I describe a way to understand vipassana meditation (a form of Buddhist meditation) using the concept of affective judgment1.  Vipassana aims to break the habit of blindly making affective judgments about mental states, and reverse the damage done by doing so in the past. This habit may be at the root of many problems described on LessWrong, and is likely involved in other mental issues. In the followup post I give details about how to actually practice vipassana.

\n

The problem

\n

Consider mindspace. Mindspace2 is the configuration space of a mind. Each mental state is identified with a position in mindspace, specified by its description along some dimensions. For human mindspace the affect of a mental state is a natural dimension to use, and it's the one that's most important for a conceptual understanding of vipassana meditation.

\n

According to vipassana meditators, every time we pass through a point in mindspace we update its affect by judging3 whether that mental state is good or bad. On the other hand, the path we take through mindspace is strongly determined by this dimension alone, and we tend to veer towards clusters of positive affect and away from those with negative affect. The current judgment of a mental state is also strongly determined by its present affect. This can result in a dangerous feedback loop4, with small initial affective judgments compounding into deep mental patterns. It seems that this phenomenon is at the root of many problems mentioned here.5

\n

Aside from causing systematic errors in thought and action it is claimed that this mechanism is also responsible for our mental suffering and restlessness. Vipassana aims to solve these problems by training us to observe and control our affective judgments, and break out of the pattern of blind reaction.

\n

How it works

\n

There are four aspects to the process:

\n
    \n
  1. Slowing the flood of affective judgments so one can distinctly observe them.
  2. Learning to not compulsively make affective judgments.
  3. Smoothing one's previously formed emotional gradients.
  4. No longer forming strong emotional gradients.
\n

They are synergistic practices and should be developed simultaneously. This will only be possible later on; at any given time you may only be able to practice one or more of them.

\n

1) Slowing the flood

\n

The ability to calm the mind and concentrate is essential. Without this, one remains involved in the rushing pattern of affect perception and judgment, and there is no possibility of seeing the process and ultimately changing it. This ability is trained by having one maintain awareness of a neutral mental process, which serves as an anchor that one continually returns to. Gradually one becomes aware of the subtle pattern of affective judgments and can distinctly observe them.

\n

2) Not compulsively judging

\n

While periodically returning to the mental anchor, one attempts to observe the mental states that arise without making affective judgments about them. In trying to do this it becomes clear how such judgments can cascade and create deep mental paths that it can be hard to escape from.

\n

3) Smoothing old emotional gradients

\n

Applying this new skill of neutral observation, one works on the long task of undoing old emotional gradients. When observing a mental state without making an affective judgment one can lower6 its present affective value. This is opposed to the previous pattern of making another affective judgment in the same direction, and increasing (or sustaining) its affect. A great variety of mental states will arise during this process, and by neutrally observing them one slowly dismantles the affective structures that are widely distributed in mindspace.

\n

4) No longer forming strong emotional gradients

\n

While smoothing old emotional gradients one must take care not to create new ones. The goal is not to never make affective judgments (I'm not even sure this is possible), but rather to take control of the process and prevent dangerous feedback patterns from occurring.

\n

Conclusion

\n

Vipassana meditation aims to change the way we assign affect to mental states, and reverse the damage accumulated from doing so poorly in the past. Our default way for doing this may be the root of a number of rationality problems. Vipassana serves as a meta-tool, helping one to defuse harmful affective structures that are causing particular problems. I expect that these are common but vary in intensity, and the benefits of vipassana are obtained mainly through correcting these \"pathologies\".

\n

 

\n
\n

1 My basis for using this concept is mainly introspective observation during my daily meditation practice the past three years. At the very least I expect it will be helpful for understanding and practicing vipassana meditation, but it may turn out to be a fundamental cognitive process.

\n

2 Note that this concept is distinct from mind design space. In mind design space each point corresponds to a possible mind, and hence each point has an associated mindspace.

\n

3 For a simple case where the distinction between making an affective judgment and not making one is clear, consider experiencing a painful sensation. I claim that this pain is actually a composite phenomenon; it consists of a strong negative affective judgment (or series of such judgments) and a physical sensation. Not making an affective judgment in this case would mean that all that remains is the physical sensation. You would keep experiencing this physical sensation but not have a dying urge to do something about it (like shift your sitting position, for example). As long as you make sure that you are not causing bodily damage, I think that observing pain in meditation can be a really great learning experience.

\n

4 In Buddhist literature the positive feedback spiral is called craving and the negative one is called aversion.

\n

5 Don't forget this and this. This phenomenon may also be responsible for the cached thoughts and cached selves problems, depending on the degree to which cached mental structures are implemented as emotional gradients.

\n

6 This is meant in the sense of absolute value.

\n

 

\n
\n

Edit: On Academian's recommendation I've added a footnote attempting to clarify the notion of an affective judgment, and what it means not to make one. It's an excerpt from my comment here.

\n

 

" } }, { "_id": "p27nSeYEnisDwdmTQ", "title": "Do you believe in consciousness?", "pageUrl": "https://www.lesswrong.com/posts/p27nSeYEnisDwdmTQ/do-you-believe-in-consciousness", "postedAt": "2010-10-03T12:10:17.662Z", "baseScore": -5, "voteCount": 9, "commentCount": 10, "url": null, "contents": { "documentId": "p27nSeYEnisDwdmTQ", "html": "

Do you believe in consciousness?

\n

If you do, what exactly would you define it as?

\n

And what evidence do you have for its existence?

\n

 

" } }, { "_id": "hKK6FtNX8YSJWjHTi", "title": "Berkeley LW Meet-up Saturday October 9", "pageUrl": "https://www.lesswrong.com/posts/hKK6FtNX8YSJWjHTi/berkeley-lw-meet-up-saturday-october-9", "postedAt": "2010-10-03T07:17:24.252Z", "baseScore": 7, "voteCount": 6, "commentCount": 42, "url": null, "contents": { "documentId": "hKK6FtNX8YSJWjHTi", "html": "

Last month, about 7 people showed up to the Berkeley LW meet-up.  To build on that success, we will be meeting on Saturday, October 9 at 7 PM at the Starbucks at 2224 Shattuck Avenue.  I'll be there with a sign saying \"If you read Harry Potter and the Methods of Rationality, come talk to me.\"  Last time, we chatted at the Starbucks for about an hour then went somewhere for dinner, so don't feel like you have to eat before you come.  Hope to see you there!

" } }, { "_id": "XL3WphxirKJK92r2M", "title": "Slava!", "pageUrl": "https://www.lesswrong.com/posts/XL3WphxirKJK92r2M/slava", "postedAt": "2010-10-03T02:47:03.944Z", "baseScore": 40, "voteCount": 42, "commentCount": 105, "url": null, "contents": { "documentId": "XL3WphxirKJK92r2M", "html": "

I want to begin with a musical example.  The link is the Coronation Scene from Mussorgsky's opera Boris Godunov, in which Boris is crowned Tsar while courtiers sing his praises.  The tune is quoted from an old Russian hymn, \"Slava Bogu\" or \"Glory to God.\"  And, if I can trust the English subtitles, it's an apt choice, because the song in praise of the Tsar is not too far in tone from hymns in praise of God.

\n

There is a mode of human expression that I'll call praise, though it is different from the ordinary sort of praise we give someone for a job well done.  It glorifies its object; it piles glory upon glory; its aim is to uplift and exalt.  Praise is given with pomp and majesty, with visual and musical and verbal finery.  It is oddly circular: nobody is alluding to anything specific that's good about the Tsar, but only words like \"supreme\" and \"glory.\"  Praise, in Hansonian terms, raises the status of the singers by affiliating with the object of praise.  But that curt description doesn't seem to capture the whole experience of praise, which is profoundly compelling, and very strange.

\n

There are no more Tsars.  I can derive no possible advantage from a song in praise of a long-dead Tsar.  And yet I find the Mussorgsky piece powerful, not just for the music but for the drama.  Praise also seems to attract people to traditional medievalist fantasy, with its rightful kings and oaths of fealty -- Tolkien, perhaps not coincidentally, included a praise song in his happy ending.  Readers gain no status from the glorification of imaginary kings.  African praise songs were sung not only to kings, gods, and heroes, but to plants and animals, who obviously cannot grant anything to those who praise them.

\n

I would suspect that there is a distinct human need filled by praise.  We want very badly for something to be an unalloyed repository of good.  It is not normally credible to conceive oneself as perfect, but we need at least something or someone to be worthy of praise. We want to look upwards, towards goodness and light; we want to be the kind of people who are capable of praise, capable of a reverent and appreciative frame of mind.  Unappreciativeness is an ugly emotion.  And it makes it much cognitively simpler if all the goodness and light is in one place.  James Joyce's notes to his play Exiles express something of this idea: \"Robert is glad to have in Richard a personality to whom he can pay the tribute of complete admiration, that is to say, one to whom it is not necessary to give always a qualified and half-hearted praise. \"

\n

Rationalism would seem to require the end of praise that is anything but qualified.  After all, nothing in the empirical world is a perfect repository of all goodness, unless you define goodness in an unusual way.  Praise, of the kind offered to Tsar Boris or Shaka Zulu, would seem to have no place in our world.  It is irrational, except maybe as a sop to our frailty and sense of beauty. And yet Daniel Dennett, after nearly dying, thanked \"goodness\" for his recovery: the goodness of medicine, of the efforts and concerns of everyone who helped him.  \"Goodness,\" which is found in many places, and in varying degrees, may be worthy of praise and a thing of glory, even if we have no Boris Godunov to praise.

\n

Eliezer wondered why our kind can't cooperate.  But \"our kind\" do collaborate on projects: scientists and programmers do build and experiment together.  The technophile/libertarian/atheist/futurist cluster is excellent about sharing information and has no difficulty forming group organizations.  We're not bad at collaboration.  What we seem to have a problem with is praise.  As Eliezer mentioned, we criticize far more than we praise. And, though we sometimes take it to unreasonable extremes, the resistance to praise is not altogether irrational. We recognize praise as dangerous: the impulse to glorify is the same impulse that raises up monarchs and dictators and forms cults.  We call it the Dark Arts.

\n

And yet it's really difficult to face living in a world without vast glory.  Even if you accept that \"goodness\" can be decentralized, scattered wherever people are doing good or remarkable things, it's more difficult to conceive of decentralized, abstract goodness than to picture all goodness residing in one visible person or thing.  There are distinctly atheist/futurist images of glory: the deepness of space, the march of science, the FOOM of the Singularity.  But these are not rationalist images.  Science progresses in fits and starts, and is plagued by ordinary fallibility and self-interest; there is no guarantee of a technological paradise ahead; even Space is a metaphor for certain evolution-based emotions, not really a deity. It seems that any form of glory, when examined critically, becomes qualified and limited.  If there are rationalist praise songs, they must be humbler. You can praise a heroic doctor (but he's not God), you can sing of the crash of the sea (but the sea is not God), you can hymn science (but science isn't God).  I don't know if this means that we need to curb our love of praise, or if we need to put a brighter emotional valence on these limited forms of praise.  

\n

We can't sing \"Glory be to Gauss in the Highest!\"  Can we ever be satisfied with merely \"Glory be to Gauss!\"?

\n

 

\n

EDIT: In the comments I've seen a few types of responses.

\n

1.  \"Praise mode\" (or, variously, adoration, glorification, worship) is a Bad Thing.  It's blind and unrealistic. It's what we're trying to get away from as rationalists.  There's no reason to miss its absence, and in fact it's unpleasant.

\n

2. What's wrong with praising actual good things?  Nothing says they have to be perfect.  [I think this misunderstands the nature of praise mode.  Recognizing that apples or kindness are wonderful is not the same as a ritual of adoration.  I think this is really a variant of 1.]

\n

3. Praise is attractive and compelling, but probably needs to be kept in check.  (Witness the large number of us who like Christian choral music, and also that several of us express discomfort with it and feel guilty about writing or performing it.)

\n

4. Yes, you can go into \"praise mode\" in a secular or scientific way, and it's wonderful!  Hail Sagan!

" } }, { "_id": "wDJaQG4QSKDYxzmor", "title": "The Irrationality Game", "pageUrl": "https://www.lesswrong.com/posts/wDJaQG4QSKDYxzmor/the-irrationality-game", "postedAt": "2010-10-03T02:43:35.917Z", "baseScore": 44, "voteCount": 59, "commentCount": 932, "url": null, "contents": { "documentId": "wDJaQG4QSKDYxzmor", "html": "

Please read the post before voting on the comments, as this is a game where voting works differently.

\n

Warning: the comments section of this post will look odd. The most reasonable comments will have lots of negative karma. Do not be alarmed, it's all part of the plan. In order to participate in this game you should disable any viewing threshold for negatively voted comments.

\n

Here's an irrationalist game meant to quickly collect a pool of controversial ideas for people to debate and assess. It kinda relies on people being honest and not being nitpickers, but it might be fun.

\n

Write a comment reply to this post describing a belief you think has a reasonable chance of being true relative to the beliefs of other Less Wrong folk. Jot down a proposition and a rough probability estimate or qualitative description, like 'fairly confident'.

\n

Example (not my true belief): \"The U.S. government was directly responsible for financing the September 11th terrorist attacks. Very confident. (~95%).\"

\n

If you post a belief, you have to vote on the beliefs of all other comments. Voting works like this: if you basically agree with the comment, vote the comment down. If you basically disagree with the comment, vote the comment up. What 'basically' means here is intuitive; instead of using a precise mathy scoring system, just make a guess. In my view, if their stated probability is 99.9% and your degree of belief is 90%, that merits an upvote: it's a pretty big difference of opinion. If they're at 99.9% and you're at 99.5%, it could go either way. If you're genuinely unsure whether or not you basically agree with them, you can pass on voting (but try not to). Vote up if you think they are either overconfident or underconfident in their belief: any disagreement is valid disagreement.

\n

That's the spirit of the game, but some more qualifications and rules follow.

\n

If the proposition in a comment isn't incredibly precise, use your best interpretation. If you really have to pick nits for whatever reason, say so in a comment reply.

\n

The more upvotes you get, the more irrational Less Wrong perceives your belief to be. Which means that if you have a large amount of Less Wrong karma and can still get lots of upvotes on your crazy beliefs then you will get lots of smart people to take your weird ideas a little more seriously.

\n

Some poor soul is going to come along and post \"I believe in God\". Don't pick nits and say \"Well, in a Tegmark multiverse there is definitely a universe exactly like ours where some sort of god rules over us...\" and downvote it. That's cheating. You better upvote the guy. For just this post, get over your desire to upvote rationality. For this game, we reward perceived irrationality.

\n

Try to be precise in your propositions. Saying \"I believe in God. 99% sure.\" isn't informative because we don't quite know which God you're talking about. A deist god? The Christian God? Jewish?

\n

Y'all know this already, but just a reminder: preferences ain't beliefs. Downvote preferences disguised as beliefs. Beliefs that include the word \"should\" are almost always imprecise: avoid them.

\n
That means our local theists are probably gonna get a lot of upvotes. Can you beat them with your confident but perceived-by-LW-as-irrational beliefs? It's a challenge!
\n

Additional rules:

\n" } }, { "_id": "kkj44HTaqKkkNefGu", "title": "Consciousness doesn't exist.", "pageUrl": "https://www.lesswrong.com/posts/kkj44HTaqKkkNefGu/consciousness-doesn-t-exist", "postedAt": "2010-10-03T01:11:27.912Z", "baseScore": -13, "voteCount": 12, "commentCount": 4, "url": null, "contents": { "documentId": "kkj44HTaqKkkNefGu", "html": "

Many rational people are atheists: one does not believe in God for the same reason one does not believe that there is an invisible dragon in my garage. Normally the definition of God would be so vaguely specified that each time you refute it, the bubble pops up elsewhere. Alternatively it may be a triviality, as with the definition due to Spinoza.

\n

In Buddhism and Erwin Schrodinger's essay \"What is Life?\" there is a notion of consciousness - roughly defined as an indivisible subjective experience - and some philosophers have even put a currency on it: \"qualia\". Schrodinger argues that, as a consequence of the statistico-deterministic nature of matter as well as the personal experience of directing one's own actions, one is equal to the \"omnipresent all-comprehending eternal\".

\n

The Church-Turing thesis helps us clarify the situation a little, if we accept it (and we are right to do so, given the current understanding of computation and physics) we must either decide that living beings have some para-physical ability to experience or that there exists some algorithm which (when suitably implemented) becomes conscious. Since the former notion of a para-physical ability is absurd we discharge that.

\n

The algorithm takes as input some stream of bits - we can assume it is also a computable process, call it the environment - processes them and outputs some signals to the environment. Since the concatenation of two computable processes is another computable process, we can consider these two processes as one. In summary, we have reduced the existence of consciousness to the existence of a computable process which takes no input, produces no output, and just is \"conscious\".

\n

There just remains one detail: there is, necessarily, absolutely no way to determine - given an algorithm - whether it is conscious or not. It is not even a formally undecidable statement! We have reduced consciousness to a question about Turing machines, yet consciousness refuses to be phrased formally (it is subjective, and computation is objective). The notion of consciousness is hence \"not even wrong\".

" } }, { "_id": "DQ29s8oaoF5XBDzH4", "title": "Learning through exercises", "pageUrl": "https://www.lesswrong.com/posts/DQ29s8oaoF5XBDzH4/learning-through-exercises", "postedAt": "2010-10-03T00:58:04.666Z", "baseScore": -1, "voteCount": 5, "commentCount": 1, "url": null, "contents": { "documentId": "DQ29s8oaoF5XBDzH4", "html": "

One of the best aspects of mathematics is that it is possible for a student to reconstruct much of it on their own, given the relevant axioms, definitions, and some hints.  Indeed, this style of education is usually encouraged for training mathematicians.  Relatedly, it is also possible for a mathematician to give a quick impression of the relevance of their particular field by choosing an example of an interesting problem which can be efficiently solved using the methods of that specific specialty of mathematics.

\n

To what extent do other academic fields share this property? How well can physics, chemistry, biology, etc. be taught \"through exercises\"?

\n

EDIT: Note that the \"exercises\" I am referring to are not just matters of applying learned principles for solving random problems but rather are devices to lead the student to \"rediscover\" important principles in the field.

" } }, { "_id": "2Gfxw3Gdvpjwwpn7E", "title": "Why not be awful?", "pageUrl": "https://www.lesswrong.com/posts/2Gfxw3Gdvpjwwpn7E/why-not-be-awful", "postedAt": "2010-10-02T23:30:48.332Z", "baseScore": 25, "voteCount": 21, "commentCount": 30, "url": null, "contents": { "documentId": "2Gfxw3Gdvpjwwpn7E", "html": "

I was going over the Sequences on metaethics, and it was leaving a bad taste in my mouth.  The examples are all about killing or saving children (both of which are far outside my personal experience).  The assumption is that the participants in a discussion about metaethics are, in fact, moral in the normal sense of the word.  That they're talking about justifications behind beliefs they actually act on, like not killing babies. That, when the philosophical discussion is over, they will go back to being basically good people, and so part of the purpose of the philosophical discussion is to explain to them why they shouldn't stress out too much.  If there were no \"morality,\" you still wouldn't kill babies, Eliezer presumes.  Philosophy is just so much verbal dressing on something basically secure.

\n

But my situation is a little different.  From time to time, like Pierre, I don't care.  I get emotionally nihilistic.  I find myself doing things that are morally awful in the conventional meaning of the word: procrastinating, sneaking other people's food out of the communal fridge, being casually unkind and unhelpful, breaking promises.  I don't doubt that these are awful things to do. I figure any moral theory worth its salt will condemn them -- except the moral theory \"I don't care,\" which sometimes seems strangely compelling.  In an \"I don't care\" mood, I generally don't care about the truth or falsehood of factual claims either.  What does it matter?  Penguins are green and they are a deadly menace to human society.

\n

What I want to know is: what goes through people's heads when they're motivated not to be awful?  What could you tell someone as a reason not to be awful?  If you are, in fact, not awful, why aren't you awful?

\n

Edit: the kind of why I mean is not a justification (Humans have natural rights to life, liberty, and the pursuit of happiness) or an explanation (Humans care about the things evolution leads them to care about.)  I'm talking about an internal heuristic or a gesture at an intuition.  What do you think, or feel, when you care about things?  What would you tell someone who claims \"I just don't care\" if you wanted to get her to care?  What would you tell yourself, in your nihilistic moments? 

" } }, { "_id": "iA25AvZqAr6G8mAXR", "title": "Break your habits: be more empirical", "pageUrl": "https://www.lesswrong.com/posts/iA25AvZqAr6G8mAXR/break-your-habits-be-more-empirical", "postedAt": "2010-10-02T21:04:13.452Z", "baseScore": 152, "voteCount": 138, "commentCount": 33, "url": null, "contents": { "documentId": "iA25AvZqAr6G8mAXR", "html": "\n

tl;dr: The neurotypical attitude that \"You think too much\" might be better parsed as \"You don't experiment enough.\" Once you have an established procedure for living optimally in «setting», be a good scientist and keep trying to falsify your theory when it's not too costly to do so.

\n

(Note: in aspects of life where you're impulsive, don't introspect enough, or have poor self discipline, this post is probably advice in the wrong direction.)

\n

Alice is highly analytically minded. She always walks the same most-efficient route to work, only dances tango and salsa, and refuses to deviate even on rare occasions from her carefully planned schedule. She has judged carefully from experience that the expected value of dating is too low to be worth her time, and will only watch a movie if at least 3 of her 5 closest friends recommend it. She travels only when it relates to her job, to ensure the trip has a purpose and to minimize unnecessary transportation costs. Oh, and she also thinks a lot. About everything.

\n

Bob often tells Alice that she \"thinks too much\", advice that rarely if ever resonates. But consider that Bob may be sensing a legitimate imbalance: Alice may be doing too much analysis with not enough data. He can tell she thinks way more than he does, and blames that for the imbalance, suggesting that Alice should \"turn off her brain\". But Alice can't agree. Why would she ever waste a resource as constantly applicable and available as her mind? That seems like a terrible idea. So here's a better one: Alice, if you're reading this, don't turn your mind off... turn it outward.

\n

When (analysis:data) looks too big, just try turning up the data. There's no need to get stupider or anything. When it's not overly costly, you should deviate from your usual theories of optimal behavior for the sake of expected information gain. Even in theory, empiricism is necessary... For a Bayesian optimizing agent in an uncertain world, information has positive expected utility, and experiments have positive expected information. Ergo, do them sometimes! And what sort of experiment do I mean?

\n
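To make the claim that experiments have positive expected information concrete, here is a toy value-of-information calculation. It is only a sketch: the scenario (trying an unfamiliar commute route) and every number in it are invented for illustration and are not from the post.

```python
# Toy value-of-information sketch: is one experiment with a new commute route
# worth it?  All numbers below are invented purely for illustration.
p_better = 0.2          # prior chance the unfamiliar route beats the usual one
gain_if_better = 5.0    # minutes saved per day if it really is better
horizon_days = 200      # how many more commuting days you expect to benefit
cost_of_trying = 10.0   # worst-case extra minutes the one-off experiment costs

expected_benefit = p_better * gain_if_better * horizon_days  # = 200 minutes
print(expected_benefit > cost_of_trying)  # True: the experiment is worth running
```

Even with a low prior that the deviation pays off, the long horizon over which the information keeps being useful is what makes the occasional experiment worthwhile.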

I mean that once in a while, Alice should dance freestyle. She should leave early and take a scenic route sometimes, and try some new food along the way. She should visit somewhere she's never been for a vacation, and try meeting some locals. And she shouldn't be discouraged when experimental behavior turns out to be \"suboptimal as anticipated\". That just means she doesn't have to try that particular thing again, at least for a while. The point is the rare occasion when it does work out and you find something valuable, or the less rare occasion that the change of pace is simply inspiring.

\n

So try to overcome that deep-rooted sense of suboptimality you get when you consider new things, or revisit old ones. Locally suboptimal behavior can be worth it for the global benefits of the information you gain. I'm not suggesting to take big risks like drug addictions or injuries... if you want a safe idea to start with, think of something you never do but which other people do all the time without ruining themselves.

\n

A priori, classical mechanics could have explained pretty much any observation made before 1800. It looked great: every event could be imagined as a series of pushes and pulls acting on sufficiently small bits of matter and the right initial conditions. But we kept testing it anyway, and now we have nuclear power. What might you find in a new situation or environment that you never thought of before?

\n

Go do something you wouldn't normally do :)

" } }, { "_id": "x3R3PbF797SxZHE8Y", "title": "Call for Volunteers", "pageUrl": "https://www.lesswrong.com/posts/x3R3PbF797SxZHE8Y/call-for-volunteers", "postedAt": "2010-10-02T17:43:04.690Z", "baseScore": 7, "voteCount": 4, "commentCount": 1, "url": null, "contents": { "documentId": "x3R3PbF797SxZHE8Y", "html": "

I've recently been appointed Program Coordinator of Humanity+ (http://www.humanityplus.org), which is another big organization in the SIAI/FHI/Kurzweilian/futurist space. If you're interested in volunteering to help us out, you can contact me at pphysics141@gmail.com. If you're interested in volunteering to help out the Singularity Institute, you can also contact SIAI Volunteer Coodinator Louie Helm at seventeenorbust@gmail.com. Thanks for your help!

\n

 

" } }, { "_id": "q2LeuRzvG993x4cLg", "title": "Cambridge Less Wrong Meetup ", "pageUrl": "https://www.lesswrong.com/posts/q2LeuRzvG993x4cLg/cambridge-less-wrong-meetup", "postedAt": "2010-10-02T17:21:07.534Z", "baseScore": 8, "voteCount": 6, "commentCount": 10, "url": null, "contents": { "documentId": "q2LeuRzvG993x4cLg", "html": "

We'll be having a Less Wrong meetup on Tuesday, October 12 at 7:00pm at Green Street Grill 280 Green Street Cambridge, MA, to coincide with SIAI President Mike Vassar's visit to Boston. See the meetup.com page. This is in addition to the monthly meetups, which are on October 17th and on the third Sunday of every month.

" } }, { "_id": "yehxZf6rAN8nw9iRj", "title": "Rational Terrorism or Why shouldn't we burn down tobacco fields?", "pageUrl": "https://www.lesswrong.com/posts/yehxZf6rAN8nw9iRj/rational-terrorism-or-why-shouldn-t-we-burn-down-tobacco", "postedAt": "2010-10-02T14:51:13.384Z", "baseScore": -5, "voteCount": 26, "commentCount": 56, "url": null, "contents": { "documentId": "yehxZf6rAN8nw9iRj", "html": "

Related: Taking ideas seriously

\n

Let us say hypothetically you care about stopping people smoking. 

\n

You were going to donate $1,000 to GiveWell to save a life; instead, you learn about an anti-tobacco campaign that is better. So you choose to donate the $1,000 to a campaign to stop people smoking rather than to a GiveWell charity to save an African's life. You justify this by expecting more people to live because they stopped smoking (this probably isn't true, but assume it for the sake of argument).

\n

The consequence of donating to the anti-smoking campaign is that one person dies in Africa and 20 people all over the world live who would otherwise have died.

\n

Now you also have the choice of setting fire to many tobacco plantations. You estimate that the increased cost of cigarettes would save 20 lives, but the fire will likely kill one guard or worker. You are very intelligent, so you think you can get away with it. There are no consequences to this action. You don't care much about the scorched earth or the loss of profits.

\n

If there are causes with payoff matrices like this, then it seems like a real world instance of the trolley problem. We are willing to cause loss of life due to inaction to achieve our goals but not cause loss of life due to action.

\n

What should you do?

\n

Killing someone is generally wrong, but you are causing the death of someone in both cases. You either need to justify that leaving someone to die is ethically not the same as killing someone, or inure yourself to the idea that when you choose to spend $1,000 in a way that doesn't save a life, you are killing. Or ignore the whole thing.

\n

This just puts me off being utilitarian to be honest.

\n

Edit: To clarify, I am an easy going person, I don't like making life and death decisions. I would rather live and laugh, without worrying about things too much.

\n

This confluence of ideas made me realise that we are making life and death decisions every time we spend $1,000. I'm not sure where I will go from here.

" } }, { "_id": "JpoLCHytYiCm7fwNA", "title": "Scope insensitivity in juries", "pageUrl": "https://www.lesswrong.com/posts/JpoLCHytYiCm7fwNA/scope-insensitivity-in-juries", "postedAt": "2010-10-02T14:29:36.240Z", "baseScore": 13, "voteCount": 11, "commentCount": 1, "url": null, "contents": { "documentId": "JpoLCHytYiCm7fwNA", "html": "

Juries were found to give harsher penalties to criminals who hurt fewer people than to those who hurt more people:

\n

http://spp.sagepub.com/content/early/2010/08/24/1948550610382308.full.pdf+html

" } }, { "_id": "RtiSxqPLxa8HxGEd4", "title": "Utility function estimator", "pageUrl": "https://www.lesswrong.com/posts/RtiSxqPLxa8HxGEd4/utility-function-estimator", "postedAt": "2010-10-02T14:05:35.159Z", "baseScore": 3, "voteCount": 5, "commentCount": 1, "url": null, "contents": { "documentId": "RtiSxqPLxa8HxGEd4", "html": "

I am writing a program to estimate someone's utility function in various common situations.  My primary application is to find out how average humans actually devalue utility over time--is it hyperbolic as claimed?

\n
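As a minimal sketch of the kind of comparison such a program might make, the snippet below fits nothing; it just evaluates the standard one-parameter hyperbolic and exponential discount curves side by side. The functional forms are the textbook ones; the parameter values and the little comparison loop are illustrative assumptions, not part of the actual program.

```python
# Compare the two standard discounting models a utility-function estimator
# might try to fit to someone's choices.  Parameters are illustrative only.
import math

def hyperbolic(value, delay_days, k=0.1):
    # Hyperbolic discounting: V = A / (1 + k * D)
    return value / (1 + k * delay_days)

def exponential(value, delay_days, r=0.1):
    # Exponential discounting: V = A * exp(-r * D)
    return value * math.exp(-r * delay_days)

for d in (0, 7, 30, 365):
    print(d, round(hyperbolic(100, d), 1), round(exponential(100, d), 1))
```

The characteristic difference is that the hyperbolic curve falls off steeply at short delays and then flattens, which is the pattern usually cited when people claim human discounting is hyperbolic.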

However, the same program could be used to make a proxy for yourself in various situations.

\n

 

\n

Someone requested a top-level post going into more detail on how I'm working on it (if people donate money I'll work faster), what exactly it does, and what you might use it for.  I'm a slow writer and don't really want to put the effort into a top-level article unless more than one person is interested.  I thought the new discussion forum would be ideal to gauge interest.

" } }, { "_id": "mHZ2cFYcZJxT8CTEd", "title": "Humanity becomes more untilitarian with time", "pageUrl": "https://www.lesswrong.com/posts/mHZ2cFYcZJxT8CTEd/humanity-becomes-more-untilitarian-with-time", "postedAt": "2010-10-02T12:06:13.341Z", "baseScore": 0, "voteCount": 6, "commentCount": 11, "url": null, "contents": { "documentId": "mHZ2cFYcZJxT8CTEd", "html": "

I would think there'd be evolutionary pressure to focus more and more on having descendants. What's actually happened so far is that people do more for signalling and fun and limit the number of their children. Is this just a blip, and the Mormons (perhaps with a simplified religion) will inherit the earth?

\n

 

" } }, { "_id": "3Tu3RAjdAJPR2X5bM", "title": "Random fic idea", "pageUrl": "https://www.lesswrong.com/posts/3Tu3RAjdAJPR2X5bM/random-fic-idea", "postedAt": "2010-10-02T08:02:04.945Z", "baseScore": -4, "voteCount": 6, "commentCount": 4, "url": null, "contents": { "documentId": "3Tu3RAjdAJPR2X5bM", "html": "

Got this snippet of an idea while reading a Terminator/Buffy the Vampire Slayer crossover. Not going to do anything with it (for hopefully obvious reasons), but I figured I'd share because I found it amusing.

\n

 

\n
\n

 The artificial intelligence researcher looked up, startled, as the door slammed open, revealing a heavily-muscled man in the doorway.

\n

\"Eleizer Yudkowsky, come with me if you want to live.\"

\n
" } }, { "_id": "5tBNPapegcMnXWiA4", "title": "The Singularity in the Zeitgeist", "pageUrl": "https://www.lesswrong.com/posts/5tBNPapegcMnXWiA4/the-singularity-in-the-zeitgeist", "postedAt": "2010-10-02T06:51:30.430Z", "baseScore": 12, "voteCount": 8, "commentCount": 49, "url": null, "contents": { "documentId": "5tBNPapegcMnXWiA4", "html": "

As a part of public relations, I think it's important to keep tabs on how the Singularity and related topics (GAI, FAI, life-extension, etc.) are presented in the culture at large.  I've posted links to such things in the past, but I think there should be a central clearinghouse, and a discussion-level post seems like the right place. 

\n

So: in the comments, post examples of references to Singularity-related topics that you've found, ideally with a link and a few sentences' description of what the connection is and how it's presented (whether seriously or as an object of ridicule, for instance). 

\n

 

\n

There should probably be a similar post for rationality references, but let's see how this one goes first.

" } }, { "_id": "cyxKMRgyubRGT9jAc", "title": "Counterintuitive World - Good intro to some topics", "pageUrl": "https://www.lesswrong.com/posts/cyxKMRgyubRGT9jAc/counterintuitive-world-good-intro-to-some-topics", "postedAt": "2010-10-02T03:32:21.842Z", "baseScore": 5, "voteCount": 3, "commentCount": 0, "url": null, "contents": { "documentId": "cyxKMRgyubRGT9jAc", "html": "

http://motherjones.com/kevin-drum/2010/09/counterintuitive-world

\n

 

\n

 

" } }, { "_id": "zqkPzAWBw2DFtMAF2", "title": "Where did my comments go?", "pageUrl": "https://www.lesswrong.com/posts/zqkPzAWBw2DFtMAF2/where-did-my-comments-go", "postedAt": "2010-10-01T23:21:25.924Z", "baseScore": 3, "voteCount": 3, "commentCount": 6, "url": null, "contents": { "documentId": "zqkPzAWBw2DFtMAF2", "html": "

I was googling to see if I'd already posted some stuff on LessWrong, and I found it in a web cache, but not on the site.  Compare Google's Sept. 8 2010 cache of a LessWrong post to the version now on LessWrong.  Everything between the comment made by MBlume on 20 April 2009 02:58:50AM, and the comment made by Alicorn on 19 April 2009 10:11:55PM, is now missing, including all 5 of my top-level comments, and some top-level comments by JulianMorrison.

\n

I didn't delete them.  Does anyone know what happened to them?  Is this happening to other comments?

" } }, { "_id": "ojSDZDWkdMT9m2RmB", "title": "credibility.com", "pageUrl": "https://www.lesswrong.com/posts/ojSDZDWkdMT9m2RmB/credibility-com", "postedAt": "2010-10-01T22:31:13.999Z", "baseScore": 2, "voteCount": 2, "commentCount": 3, "url": null, "contents": { "documentId": "ojSDZDWkdMT9m2RmB", "html": "

In the ideal world, all of human knowledge could be accessed and evaluated by every individual for personal decisions; we are moving toward more information being accessible, but it is increasingly infeasible for individuals to process all the information relevant to all their questions.  The solution is to split some common important questions into sub-questions and to rely on the reports of individuals who investigate specific questions, often themselves relying on the reports of others in addition to primary data (observations).  But one cannot trust these reports completely; thus there is a need for a system which can evaluate the reviews and reviewers themselves.  Reputation, and later peer review, has historically played this role; but now the technology exists to implement something like a \"credibility.com\" in which every information source can be reviewed.  Could such a site, properly implemented, grow to supersede the role now played by peer review?

" } }, { "_id": "aSXSADmJJdRCoEtKy", "title": "6 Minute Intro to Evolutionary Psychology", "pageUrl": "https://www.lesswrong.com/posts/aSXSADmJJdRCoEtKy/6-minute-intro-to-evolutionary-psychology", "postedAt": "2010-10-01T21:07:28.527Z", "baseScore": 1, "voteCount": 1, "commentCount": 1, "url": null, "contents": { "documentId": "aSXSADmJJdRCoEtKy", "html": "

In the spirit of You Are A Brain, this is a 6 minute presentation I gave at Toastmasters on Evolutionary Psychology and may repeat. Be sure to click on show speaker notes (in Actions) to see the full text.

\r\n

 

\r\n

 6 Minute Intro to Evolutionary Psychology

\r\n

 

\r\n

Any suggestions for improvements?  Some people didn’t get it. Also, is it accurate enough? Also, I think the Wason Selection argument isn’t all that compelling and takes up about half of the time. Is there a better example I could use? (The speech was supposed to be for either informing or persuading and persuading required informing so I tried to focus just on informing.)

" } }, { "_id": "Nvv9kNpK2irYH6zoF", "title": "Automated theorem proving", "pageUrl": "https://www.lesswrong.com/posts/Nvv9kNpK2irYH6zoF/automated-theorem-proving", "postedAt": "2010-10-01T20:13:33.802Z", "baseScore": 1, "voteCount": 1, "commentCount": 5, "url": null, "contents": { "documentId": "Nvv9kNpK2irYH6zoF", "html": "

Automated theorem proving *sounds like* a natural extension of many useful trends and a solution to many current problems.  To me, it seems obvious that there will be a need for the formalization of mathematics (up to and beyond the boundary of mathematics with real-world applications) as well as the routine checking of software for 100% intended performance.  Secure networks and software in particular could be an important safeguard against AI.

\n
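For readers who have not seen one, here is what a (trivially small) machine-checked proof looks like in Lean 4. The theorem name is made up for the example; `Nat.add_comm` is a lemma from Lean's standard library, and real formalization efforts scale this kind of thing up enormously.

```lean
-- A deliberately tiny example of a machine-checked proof.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The point is not the content of the theorem but that the checker, rather than the reader, is what guarantees the proof is correct.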

Yet I haven't heard much of it; the implementation difficulties must be considerable given that there are substantial and predictable benefits to the widespread use of automated theorem proving.  Anyone with experience in the field?

" } }, { "_id": "SHogXrTazPfRe2qEB", "title": "The meta-evaluation question", "pageUrl": "https://www.lesswrong.com/posts/SHogXrTazPfRe2qEB/the-meta-evaluation-question", "postedAt": "2010-10-01T19:50:12.201Z", "baseScore": 1, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "SHogXrTazPfRe2qEB", "html": "

Evaluation refers to an agent's evaluation of the expected benefit of a particular action; meta-evaluation is the agent's evaluation of the computational cost-effectiveness of evaluation in general.

\n

There are two difficulties with meta-evaluation.  One is the nature of the data, which by nature will consist of unique events*.  The second is the obvious self-referential nature of the problem.

\n

Meta-evaluation is a central issue for any theory of rational decision-making, yet I have not yet seen it directly addressed here.

\n

 

\n

*This quality of nonredundancy occurs in its purest form in \"mathematical data,\" ie the distribution of primes.

" } }, { "_id": "NdWW5BL6EHksG4PtW", "title": "Coding Rationally - Test Driven Development", "pageUrl": "https://www.lesswrong.com/posts/NdWW5BL6EHksG4PtW/coding-rationally-test-driven-development", "postedAt": "2010-10-01T15:20:47.873Z", "baseScore": 33, "voteCount": 30, "commentCount": 84, "url": null, "contents": { "documentId": "NdWW5BL6EHksG4PtW", "html": "

Computer programming can be a lot of fun, or it can be brain-rendingly frustrating. The transition between these two states often goes something like this:

\n

Paula the Programmer: Computer, using the \"Paula's_Neat_Geometry\" library, draw a triangle near the top of the screen once per frame.
Cronus the Computer: Sure, no problem.
P: After drawing that triangle, draw a rectangle 50 units below it.
C: Will do, boss.
P: Sweet. Alright, after the rectangle, draw a circle 20 units to the right, then another 20 units to the left.
C: GLARBL GLARBL GLARBL I hear it's amazing when the famous purple stuff wormed in flap-jaw space with the tuning fork does a raw blink on Hari Kiri Rock! I need scissors! 61!1 System error.
P: Crap! Crap crap crap. Um, okay, let's see...

\n

And then Paula must spend the next 45 minutes turning the circle drawing code on and off and figuring out where the wackiness originates from. When the circle code is off, she sees everything work fine. When she turns it back on, she sees everything that she thought she understood so well, that she was previously able to manipulate with the calm joyful deftness of a virtuoso playing a violin, turn into a world of mystery and ice-slick confusion. Something about that request to draw that circle at that particular time and place is exposing a difference between Paula's model of the computer and the computer's reality.

\n

When this happens to a programmer often enough, they begin to realize that even when things seem to be working fine, these differences still probably lurk unseen beneath the surface, waiting invisibly to strike. This is an unsettling feeling. As a technique of rationality, or just because being uncomfortable is unpleasant, they seek diligently to avoid creating these cross-model inconsistencies (known colloquially as \"bugs\") in their own code, so as to avoid subjecting themselves to GLARBL GLARBL GLARBL moments.

\n

Having a sincere desire to be less wrong in one's thinking is fine, but not enough. One also needs an effective process to follow, a system for making it harder to fool oneself, or at least for noticing when it's happened. Test Driven Development is one such system; not the only one, and not without its practical problems (which will be at most briefly glossed over in this introductory article), but one of my personal favorites, primarily because of the way it makes me feel confident about the quality of my work.

\n

Why Computer Programming Requires Rationality

\n

Computer programming is the process of getting a messy, incomplete, often self-contradictory, and overall badly organized idea out of one's head and explaining it completely and thoroughly to a quite stupid machine that has no common sense whatsoever. This is beneficial for the users of the program, but also for the programmer, because the computer does not have a programmer's human biases, such as mistaking the name of an idea for an understanding of how that idea works.

\n

It has been said that you only truly understand how to do something when you can teach a computer how to do it for you. This doesn't mean that you have to understand the thing perfectly before you can begin programming; the process of programming itself will change and refine the idea in the programmer's mind, chipping away rotten bits and smoothing connections as the idea moves piece-by-piece from the programmer's mind into a harsh reality that doesn't care about how neat something sounds, just whether or not it works.

\n

Through the process of explaining the problem and solution to the computer, the programmer is also explaining it to themselves, checking that that explanation is correct as they go, and adjusting it in their own minds as necessary to make it match.

\n

In a typical single-person development process, a programmer will think about the problem as a whole, mentally sketch out a framework of the tools and structures they will have to write to make the problem solvable, then begin implementing those tools in whatever order seems most intuitive. At this point, great loud alarm bells should be ringing in the heads of Less Wrong readers, indicating that this is a problematically far-mode way to go about things.

\n

Why Test Driven Development Is Rational

\n

The purpose of Test Driven Development is to formalize and divide into tiny pieces that part right before a programmer starts writing code: the part where they think about what they are expecting the code to do. They are then encouraged to think about each of those small pieces individually, in near-mode, using the following steps:

\n

RED: Figure out what feature you want to add next; make it a small feature, like \"draw a triangle\". Write a test, a tiny test, a test that only checks for the one new feature, and that will only pass if the feature is working properly. This part can be hard if you didn't really have a clear idea of the feature in the first place, but at least you're dealing with that difficulty now and not when 20 other things in the program already depend on your slightly flawed understanding. Anyways, once you've written the test, run it and make sure it fails in the expected manner, since the feature hasn't actually been implemented yet.

\n

GREEN: Now actually go and write the code to make the test pass. Write as little code as possible, with minimum cleverness, to make this one immediate goal happen. Don't write any code that isn't necessary for making the test pass.

\n

REFACTOR: Huzzah, the test passes! But the code has some bad smells: it's repetitious, it's hard to read, it generally creates a feeling of creeping unease. Make it clean, remove all the duplicated parts, both in the test and the implementation.

\n

BLISS: Run all the prior tests; they should still be green. Feel a sense of serene satisfaction that all your expectations continue to be met; be confident your mental model of the whole program continues to be a pretty close match. If you have a version control system (and you really should), commit your changes to it now with a witty yet descriptive message.

\n
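To make the cycle concrete, here is a minimal single-file sketch in Python (runnable with pytest). The `Canvas` class, its `shapes_drawn` inspection hook, and `draw_triangle` are hypothetical stand-ins for whatever your real drawing library exposes; in a real project the test and the implementation would live in separate files, and the test would be written and seen to fail first.

```python
# GREEN: the least code that makes the test pass, with no extra cleverness.
class Canvas:
    def __init__(self, width, height):
        self.width = width
        self.height = height
        self.shapes_drawn = []  # hypothetical hook the tests can inspect

    def draw_triangle(self, x, y):
        # Real rendering would happen here; recording the shape is all the
        # current test demands.
        self.shapes_drawn.append('triangle')


# RED: this test was written first and failed before Canvas existed.
def test_triangle_is_drawable():
    canvas = Canvas(width=200, height=100)
    canvas.draw_triangle(x=100, y=10)  # the one new feature under test
    assert canvas.shapes_drawn == ['triangle']
```

REFACTOR and BLISS then operate on exactly this pair: tidy both the test and the implementation, rerun every test, commit.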

Working piece by tiny piece, your code will become as complicated as you need it to be, but no more so. You are not as likely to waste time creating vast wonderful code castles made of crystal and silver that turn out to be pointless and useless because you were thinking of the wrong abstraction. You are more likely to notice right away if you accidentally break something, because that something shouldn't be there in the first place unless it had a test to justify it, and that test will complain.

\n

TDD is a good anti-akrasia technique for writing tests. Classically, tests are written after the program is working, but such tests are rarely very thorough, because it feels superfluous to write a test that only tells you what you (think you) already know: that the program works.

\n

TDD is also broadly helpful in fighting programming akrasia. You receive continuous feedback that what you are doing is accomplishing something and not breaking anything. It becomes more difficult to dawdle, since there's always an immediate short-term goal to focus on.

\n

Finally, for me and for many other people who've tried it, TDD makes programming more fun, and more satisfying. There's nothing quite like the feeling of confidence that comes from knowing that your program does just what you think it does.

\n

Or, well, thinking that you know.

\n

Why Test Driven Development Isn't Perfect

\n

Basking innocently in the feeling of the BLISS stage, you check your email and get an angry bug report: when the draw color is set to turquoise, instead of rectangles your program is drawing something that looks vaguely like a silhouette of Carl Friedrich Gauss engaged in a swordfight against a trout. What's going on here? Why wasn't this bug caught by the tests? There's a \"Rectangles_Are_Drawable\" test, and a \"Turquoise_Things_Are_Drawable\" test, and they both pass, so how can drawing turquoise rectangles fail?

\n

Something about turquoiseness and rectangleness is lining up just right and causing things to fall apart, and this outcome is certainly not predicted by the programmer's mental model of the program. This means either that something in the program is not actually being tested at all, or (more likely) that one of the tests doesn't test everything the programmer thinks it does. TDD (among its other benefits) does reduce the chance of bugs being created, but doesn't eliminate it, because even within the short near-mode phases of Red-Green-Refactor-Bliss there's still opportunity for us to foul things up. Eliminating all bugs is a grand dream, but not likely to happen in reality as long as the program isn't dead simple (or formally verifiable, but that's a technique for another day).

\n

However, because we can express bugs as testable assumptions, TDD applies just as well to creating bugfixes as it does to adding new features:

\n

RED: Write a new test \"Turquoise_Rectangles_Are_Drawable\", which sets the color to turquoise, tells the library to draw a rectangle, and makes sure a rectangle and not some other shape was drawn. Run the test, it should fail. If it doesn't, then the bug report was incomplete, and the situation that needs to be setup before Gauss is drawn is more elaborate.

\n

GREEN: Figure out what's making the bug happen. Fix it. Test passes.

\n

REFACTOR: Make the fix pretty.

\n

BLISS: The rest of the program still works as expected (to the degree that your expectations were expressed, anyways). Also, this particular bug will never come back, because if someone does accidentally reintroduce it then the test that checks this newly created expectation will complain. Commit changes with a joke about Gaussian blurring.

\n
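Expressed as code, the regression guard might look like the sketch below; the `Canvas` API is the same hypothetical stand-in as before, now with the imaginary turquoise-specific code path removed.

```python
# GREEN: after the fix, every color takes the same drawing code path.
class Canvas:
    def __init__(self, width, height):
        self.width = width
        self.height = height
        self.color = 'black'
        self.shapes_drawn = []

    def set_color(self, color):
        self.color = color

    def draw_rectangle(self, x, y):
        # The (imaginary) branch that drew Gauss fencing a trout is gone.
        self.shapes_drawn.append('rectangle')


# RED first: this test reproduced the bug report and failed before the fix.
def test_turquoise_rectangles_are_drawable():
    canvas = Canvas(width=200, height=100)
    canvas.set_color('turquoise')
    canvas.draw_rectangle(x=50, y=60)
    assert canvas.shapes_drawn == ['rectangle']  # the bug cannot silently return
```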

Why Test Driven Development Isn't Always Appropriate

\n

A word of warning: this article is intended to be readable for people who are unfamiliar with programming, which is why simple, easily visualized examples like drawing shapes were used. Unfortunately, in real life, graphics-drawing is just the sort of thing that's hardest to write tests for.

\n

As an extreme example, consider CAPTCHA, software that tries to detect whether a human being or a spambot is trying to get an account on your site by asking them to read back an image of squirrelly-looking text. TDD would at best be minimally useful for this; you could bring in the best OCR algorithms you have available and pass the test if they *cannot* pull text out of the image... but it would be hard to tell if that was because the program was producing properly hard-to-scan images, or because it was producing useless nonsense!

\n
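As a rough illustration of why such a test is unsatisfying, here is a sketch with stub helpers; `generate_captcha` and `best_available_ocr` are entirely hypothetical placeholders, and the real versions would be far more involved.

```python
import random
import string

def generate_captcha():
    # Hypothetical stand-in: returns (image_bytes, expected_text).
    text = ''.join(random.choices(string.ascii_uppercase, k=6))
    return b'<squiggly pixels would go here>', text

def best_available_ocr(image_bytes):
    # Hypothetical stand-in for the strongest OCR you can bring to bear.
    return ''  # pretend it could not read anything

def test_captcha_resists_ocr():
    image, expected_text = generate_captcha()
    # Passes for a genuinely hard CAPTCHA -- but also passes if generate_captcha
    # emits useless noise, which is exactly the ambiguity described above.
    assert best_available_ocr(image) != expected_text
```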

It's part of a larger category of things which are hard to automatically test because their typical operation involves working with a human, and we can't simulate humans very well at all (yet). Any program that's meant to interact with a human, and depend upon that human behaving in a sophisticated human way (or in other words, any program that has a user interface which isn't incredibly simple), will have difficulty being thoroughly tested in a non-brittle way. This problem is exacerbated because user interfaces tend to change significantly as they are subjected to usability testing and rethought, necessitating tedious changes in any tests that depend on their specifics. That doesn't mean TDD isn't applicable to such programs, just that it is more useful when working on their inner machinery than their user-facing shell.

\n

(There are also ways of minimizing this problem in certain sorts of user interface scenarios, but that's beyond the scope of this article.)

\n

Test Driven $BEHAVIOR

\n

It is unfortunate that this technique is not more widely applicable to situations other than computer programming. As a rationalist, the process of improving my beliefs should be like TDD: doing one specific near-mode thing at a time, doing checks they can definitively pass or fail, and building up through this process a set of tests/experiments that thoroughly represent and drive changes to the program implementation, aka my model of the world.

\n

The major disadvantage my beliefs have compared to a computerized test suite is that they won't hold still and be counted. I cannot do an on-demand enumeration through every single one of my beliefs and test them individually to make sure they all still hold up; I have to rely on my memories of them, which might well be shifting and splitting up and making a mess of themselves whenever I'm not looking. I can do RED and GREEN phases on particular ideas when they come to mind, but I'm unfortunately unable to do anything like a thorough and complete BLISS phase.

\n

This article has partly been about introducing a coding technique which I think is pretty neat and of relevance to rationalists, but it's also about leading up to this question that I'd like to ask Less Wrong: how can I improve my ability to do Test Driven Thinking?

\n

 

\n
\n

1. This bit of wonderfully silly text is from Konami's Metal Gear Solid 2.

\n

 

" } }, { "_id": "6SMJBPxC4wq9pJobf", "title": "Can you enter the Matrix? The deliberate simulation of sensory input.", "pageUrl": "https://www.lesswrong.com/posts/6SMJBPxC4wq9pJobf/can-you-enter-the-matrix-the-deliberate-simulation-of", "postedAt": "2010-10-01T14:58:24.217Z", "baseScore": 5, "voteCount": 9, "commentCount": 21, "url": null, "contents": { "documentId": "6SMJBPxC4wq9pJobf", "html": "

Related to: Generalizing From One Example

\n

I am able to simulate sensory input, i.e. dream deliberately, enter my personal Matrix (Holodeck). I can see, hear, feel and smell without the presence of light, sound, tactile or olfactory sensory input. That is, I do not need to undergo certain conditions to consciously experience them. They do not have to happen live; I can imagine them, simulate them. I can replay previous and create new sensory experiences in my mind, i.e. perceive them with my mind's eye. I can live, pursue and experience activities inside my head without any environmental circumstances, i.e. all I need is my body. I can walk through a park, see and hear children playing, feel and smell the air, while being weightless in a totally dark and quiet zero-gravity environment.

\n

Since reading the article by Yvain some time ago, the idea(?) that some people are unable to deliberately experience the world with their mind's eye hasn't ceased to fascinate me. So yesterday I went back to search the comments on that article to see if there are people who actually confirm this claim. The comments by Garth and Blueberry seem to suggest this. After that I started to ask other people and was amazed that, after some misunderstanding, the first two people I asked were both either completely or almost unable to experience anything if it wasn't happening live. This is shocking. The second person was actually my dad. I asked, \"If you were to close your eyes and I told you that I changed the lighting in the kitchen from normal to red, could you imagine what it would look like?\" \"Well, yes,\" he said. Only after about 10 minutes did I figure out that what he meant was that he would be able to describe it, paint it, or pinpoint other characteristics of his kitchen equipped with all-red lighting, but he couldn't actually dream it! He couldn't see it if he closed his eyes! He always thought that when people say they imagine a beautiful sunset they actually mean that they could describe it or picture it literally, not experience it! I was struck. This insight only came after we started talking about dreaming. He seems to be able to dream, but once I compared dreaming with imagination he said that's really completely different. Dreaming is closer to the real thing, he said; you actually experience it. But if he's awake and closes his eyes there seems to be nothing but darkness and some unconscious processing or data queries that allow him to describe and picture something without actually experiencing it in front of his mind's eye. It's the same with all other sensory perceptions.

\n

Now don't believe that I can actually simulate the real thing; it's not as vivid. Here is how closely I perceive myself to be able to match live experiences by imagining them solely in my mind: Tactile (90%); Olfactory (60%); Auditory (30%); Visual (40-15%); Pain (2%). If you are one of those who lack a world of thought, think about tinnitus or phantom pain: it originates from within and is not caused by environmental influence. Such fake sensory perceptions can be perceived to be as real (100%) as experiencing actual sound or, in the case of pain, actual burning. I can cause this deliberately without having to expose myself to actual sensory input. But the degree of realness varies, as stated above.

\n

Interestingly there is one striking exception, faces. I'm either unable or have to concentrate really hard to perceive faces without looking at actual faces in real-time. When I read stories, faces always stay blank. More than that though, they are not blank, it's like they are simply not computed. It's not like they are black or blurry, but rather in another dimension that I cannot, do not access. I'm pretty sure that I see faces when dreaming though and sometimes, when I know faces very well, I can even make them appear in front of my mind's eye. But I've to concentrate solely on the face, it doesn't happen easily.

\n

All this is really crazy I think. How is it possible that humans are this different in such profound ways? Is it a mutation that only appeared very recently in our evolutionary history and is only expressed within a subset of people? Or is it maybe in spite of all assertions a conceptual misunderstanding?

\n

What about you, can you enter the Matrix?

" } }, { "_id": "knc5XjjRGfXoMEiTY", "title": "Expectation-Based Akrasia Management", "pageUrl": "https://www.lesswrong.com/posts/knc5XjjRGfXoMEiTY/expectation-based-akrasia-management", "postedAt": "2010-10-01T04:34:32.782Z", "baseScore": 20, "voteCount": 21, "commentCount": 11, "url": null, "contents": { "documentId": "knc5XjjRGfXoMEiTY", "html": "

I'd been staying out of anti-akrasia discussion mostly because my strategy for getting things done is so different from the common one that it barely seems related, much less relevant. I've been asked, though, so here's what I have to say about it.

One of the main premises in how I go about getting things done is that the process of choosing what to do is  a major factor in how easy it is to get yourself to do that thing. Being confident that the goal is one that you want to achieve and that the next step really is the best thing to do to achieve it is important. Without that, you're likely going to have to run on willpower rather than coast on your own drive to reach that goal. (Fear is another common drive, and sometimes an unavoidable one, but I really don't recommend it. On top of the stress of working that way, running on fear means that you risk having a sudden impetus failure if you misjudge what your brain will consider a safe solution to the problem in the immediate sense. Example: My fear of doctors would be better handled by taking active steps to keep my health, not by avoiding dealing with medical issues altogether as I've been doing for the last few years. I'm working on turning that around, and am actually planning to post about it as an open question of instrumental rationality sometime in the next few days.)

I have a few techniques that I use to determine that a particular goal or next step is a good one. These may seem rather basic - it's noteworthy that I didn't really comprehend the concept of 'pursuing goals' when I was younger, and that I've only been building these skills for a few years - but they do seem to be quite effective, if somewhat slower than the normal methods.

My main technique for determining whether a goal is a good one is to think about the goal in many different contexts - not by sitting down and trying to come up with relevant contexts all at once (which seems likely to induce bias, along with being more difficult than my technique), but by thinking about the goal from time to time as I go about my life. When I'm doing this, I'll spend anywhere from a week or two to a few years occasionally asking myself 'how would the situation I'm in be different if I'd already achieved my goal?'. This seems to give a pretty accurate idea of whether the goal is a good one, helps make any flaws obvious, and perhaps most importantly makes the goal feel realistic - it's not some hypothetical thing with little connection with reality, but an actual possible state of the world. The key is to ask the question in as many different situations as possible, to get a very well-defined idea of how the goal will work. For very life-changing goals, where most of my normal situations would be changed beyond recognition after achieving it, the question 'how would I accomplish what I'm currently in the process of accomplishing?' serves the same purpose, though it's a bit harder to answer in many cases and answering it is more likely to involve specific research. The most obvious flaw in this technique is that it doesn't address new situations that will be caused by achieving the goal - for example, if I were to own a house, I would have to maintain the property, which is not something I currently have to deal with, and which this technique won't address. Therefore, it's also advisable to talk to someone who's already achieved the goal, ask them about such issues, and make a point of visualizing them at appropriate times. ('How would this round of goofing off be different if I owned a house? It wouldn't exist; I'd be out cleaning the gutters.')

I use a similar technique for brainstorming ways of achieving goals, and I generally start doing so as soon as a goal I'm contemplating starts to look like it'd be a good thing to do. This is actually a rather important part of deciding whether or not a goal is a good one: Something that would be nice to have, but that would take more effort than it's worth, is not a good goal. The specific technique is to keep the goal in mind and, by thinking about it for a few seconds at a time in many different contexts, let the priming effect of each situation trigger different ideas of how to handle the goal. Sometimes this results in specific ideas for solutions to specific problems ('I could rent out a room to help pay the bills, or in trade for help with chores and errands'), sometimes it results in topics that need to be researched ('Credit scores: How the heck do they work?', 'hmm, I wonder what science has to say about houses'), and sometimes it results in ideas that help refine the goal and narrow it down to something specific and manageable ('I seem to prefer small spaces to large ones. I should get a small house. How small do houses come, anyway? Ooo, microhomes, how cool!').

Both of the above are not just compatible with plenty of goofing off, they're actually improved by it, if the goofing off is of sufficiently high quality - by which I mean that it involves a variety of things, and that I'm comfortable and relaxed rather than guilty or anxious while doing it. The above techniques are hard to use during activities that require focus, and would be distracting in those cases, but goofing off provides an ideal opportunity to switch tasks in the middle of doing something to go research the awesome idea I just had or the piece of information that I just realized is key in making any progress in figuring out how to accomplish something. If the goal is compelling enough, I generally find that I have trouble *not* making little bits of progress on a project every time I think about it - I'm actually having some trouble not wandering off to go research credit scores right now, for example - and if the goal isn't compelling enough to attract my attention even after having used the first technique for a while, it's probably not a goal I actually care about achieving.

You may notice that neither of the techniques has a clear end state in which I explicitly decide that I'm going to pursue a given goal. That's because I rarely discover a need to do so. I find that if I find a goal compelling, I tend to naturally start taking actions toward achieving that goal as I discover them, and eventually it becomes obvious that I'm willing to put effort into achieving a particular goal. (Note: There is a risk of problems with the sunk cost fallacy and similar issues, here. I don't have specific advice for dealing with them, except to scrupulously avoid thinking of yourself as 'the type of person who does [goal or task]' in relation to these things. I find that sufficient, but I'm also very practiced at that in particular, so I don't expect that it will work for everyone.) In instances where that doesn't work - where the early stages require a large enough investment of resources to make me want some extra assurance that I'm not going to lose interest, for example - I follow the stated techniques until I'm confident that I have a good idea of how achieving the goal would affect things, and what kind of work it will take, and then I imagine not pursuing it. If the goal is one that I find compelling, I will have an immediate emotional reaction to the idea of letting it go. This technique isn't perfect, but it's the most useful one I've found so far.

When using the above techniques, I find it useful to periodically take time to consolidate all the gathered information, make sure there are no conflicts, and take note of any obvious gaps in my knowledge. Taking time for that flows naturally from certain instances of using the second technique, particularly instances that involve detailed research or asking a person for advice. The latter is often more useful than the former, in my experience - it generally turns into a conversation about the goal as a whole, in which I explain which details of the goal or plan I've already decided on and why I decided on them (which is useful for noticing flaws in the reasoning behind decisions, among other things), and the other person offers targeted suggestions, points out areas that still need work, and generally acts as a sounding board.

I find the above techniques useful for personal goals, but they tend to take too long to be useful for situations involving others. In those cases, the process needs to be condensed into one or a few conversations. The conversations that succeed at that have a few things in common - they're long, they're very focused, they involve looking at the issue from several angles - but most importantly, they're all about coming to an emotionally and intellectually salient, implementable conclusion. This is actually quite different from normal conversations, where a particular topic might be the subject of three or four exchanges before the topic is changed, or even 'deep' conversations, which rarely involve more than allowing each participant to explain their opinion in detail and offer a few comments on the other opinion or opinions presented. They're also not the same as the kinds of conversations that I've had with supervisors at most of the jobs I've held; those did involve implementable conclusions, but the conclusions had generally been decided by the supervisor before the conversation ever started, and even if not, the goal was to come up with a minimally-acceptable plan as quickly as possible with as little discussion and as few questions as possible, which rarely resulted in a plan that I actually cared about achieving for its own sake.

The most obvious difference between the kind of conversation that I use to develop useful conclusions and normal conversations is the degree of focus. When I first started developing this technique, before my usual conversation partners got used to it, I found myself redirecting the conversation back to the topic at hand almost every few sentences. Somewhat surprisingly, this did not ruin any of my friendships; the advantages of the increased focus were obvious enough quickly enough that this was seen as not just acceptable but preferred. (Sample: One major relationship with daily conversations, a large majority of which were this type; one major relationship with conversations 3-4 days a week, approximately 1/3 - 1/2 of which were this type; one significant relationship with conversations once per 1-2 weeks, approximately 1/2 - 3/4 of which were this type; near-daily group conversations in which I used this technique whenever relevant but not throughout the entirety of any conversation; a handful of conversations with strangers or acquaintances where I was specifically approached or recommended as someone to talk to partly because of this technique. In all three of the relationships, the technique has been commented on and noted as something that is appreciated.) I don't generally do that any more - the people I most often converse with know this technique well enough that I don't have to, and I generally take a softer approach with strangers now that I have a feel for how significant a tangent can be recovered from - but keeping an eye out for conversational drift and making sure that we're making progress on the goal is still an important part of such conversations.

The actual contents of the kinds of conversations that I use to reach salient conclusions are driven by the specific goals of the conversations, but in the case of planning to achieve a particular goal, it's generally focused on answering the same kinds of questions that are answered by my solo techniques: What is the goal? Is it a good goal? What will achieving that goal look like in the real world? What possible side effects do we want to encourage or avoid? How can various sub-goals be achieved? In what ways is one possible solution or approach better than another? Generally, a given conversation is only going to handle one or a few of these questions, if they're really unresolved.

When I'm working with someone on their own projects, those questions are supplemented by questions about what they've done so far, what kinds of things have worked for them in the past, and why they're having trouble with the issue at hand, plus even more focus on applying rationality techniques than I use when discussing projects I'll be working on personally. Ideally, this turns up a particular issue that can be dealt with either by having the person change their expectations (e.g. if they're stymied by the fact that a third party doesn't react as they 'should' to some stimulus) or approach the problem in a different way (e.g. by trying a solution that they hadn't thought of on their own). Even if it doesn't turn up something like that, such a conversation will often give a person enough of a different perspective on the issue that they can start gathering information and looking for solutions on their own.

When I'm dealing with issues of my own productivity, though, I tend to stick more to the core questions and really focus on finding very specific, clear answers. For example, it's not uncommon for a client to approach the company I work for with a request for a specific build (in Second Life or OpenSim, a 3d scene and all the code that makes it do things) and for us to subsequently spend an hour or more just determining what the client's goals are for that build to make sure that what they've requested will actually accomplish what they want. I actually find it almost impossible to take a project seriously if we haven't done that part, now; every time I've let myself be talked into working without it, I've had to re-do the majority of my work to accommodate the client's actual preferences, and nothing kills my motivation to work quite like feeling certain that I'm not actually accomplishing anything.

Once I have a firm enough description of what actually needs to be accomplished, I work out the details using another variation of my solo techniques - not because there's some particular advantage to doing it over several days rather than sitting down and working it out all at once, but just out of personal preference - and do most of the actual work in bursts of caffeine-fueled building, coding, and testing. Most client projects come with hard deadlines, which I find useful not just as a specific goal, but also because the owner of the company I work for is a dear friend and I prefer not to put him through the stress of worrying about whether I'll meet them. (Training oneself to notice and avoid deadline-related stress may be a more feasible option for most people.)

Overall, my method suits my own skills and limitations well: It doesn't require much willpower, nor expect me to be able to focus on demand; it's flexible enough that if I'm having a rough day or suddenly find that I have an emergency to take care of, it's not a problem. It takes advantage of the rather diffuse state of mind that I prefer to spend time in and my particular way of observing the world. It may be so specialized for me that it's not viable for anyone else, but it does at least exist as an alternate way of handling the process of making sure that things get done.

" } }, { "_id": "sAFxndynEDww3Frqi", "title": "Reflections on a Personal Public Relations Failure: A Lesson in Communication", "pageUrl": "https://www.lesswrong.com/posts/sAFxndynEDww3Frqi/reflections-on-a-personal-public-relations-failure-a-lesson", "postedAt": "2010-10-01T00:29:26.468Z", "baseScore": 50, "voteCount": 44, "commentCount": 39, "url": null, "contents": { "documentId": "sAFxndynEDww3Frqi", "html": "

Related To: Are Your Enemies Innately Evil?, Talking Snakes: A Cautionary Tale, How to Not Lose an Argument


Eliezer's excellent article Are Your Enemies Innately Evil? points to the fact that when two people have a strong disagreement it's often the case that each person sincerely believes that he or she is on the right side. Yvain's excellent article Talking Snakes: A Cautionary Tale highlights the fact that to each such person, without knowledge of the larger context that the other person's beliefs fit into, the other person's beliefs can appear to be absurd. This phenomenon occurs frequently enough that it's important for each participant in an argument to make a strong effort to understand where the other person is coming from and to frame one's own ideas with the other person's perspective in mind.


Last month I made a sequence of posts [1], [2], [3], [4] raising concerns about the fruitfulness of SIAI's approach to reducing existential risk. My concerns were sincere and I made my sequence of postings in good faith. All the same, there's a sense in which my sequence of postings was a failure. In the first of these I argued that the SIAI staff should place greater emphasis on public relations. Ironically, in my subsequent postings I myself should have placed greater emphasis on public relations. I made mistakes which damaged my credibility and barred me from serious consideration by some of those who I hoped to influence.


In the present posting I catalog these mistakes and describe the related lessons that I've learned about communication.


Mistake #1: Starting during the Singularity Summit


I started my string of posts during the Singularity Summit. This was interpreted by some to be underhanded and overly aggressive. In fact, the coincidence of my string of posts with the Singularity Summit was influenced more by the appearance of XiXiDu's Should I Believe What the SIAI claims? than anything else, but it's understandable that some SIAI supporters would construe the timing of my posts as premeditated and hostile in nature. Moreover, the timing of my posts did not give the SIAI staff a fair chance to respond in real time. I should have avoided posting during a period when I knew that the SIAI staff would be occupied, waiting until a week after the Singularity Summit to begin my sequence of posts.


Mistake #2: Failing to balance criticism with praise


As Robin Hanson says in Against Disclaimers:


If you say anything nice (or critical) about anything associated with a group or person you are presumed to support (or oppose) them overall.


I don't agree with Hanson that people are wrong to presume this - I think that statistically speaking, the above presumption is correct.


For this reason, it's important to balance criticism of a group which one does not oppose with praise. I think that a number of things that SIAI staff have done have had expected favorable impacts on existential risk, even if I think other things they have done have negative expected impact. By failing to make this point salient, I misled Airedale and others into believing that I have an agenda against SIAI.


Mistake #3: Letting my emotions get the better of me


My first pair of postings attracted considerable criticism, most of which appeared to me to be ungrounded. I unreasonably assumed that these criticisms were made in bad faith, failing to take to heart the message of Talking Snakes: A Cautionary Tale that one's positions can appear to be absurd to those who have access to a different set of contextual data from one's own. As Gandhi said:


...what appears to be truth to the one may appear to be error to the other.


We're wired to generalize from one example and erroneously assume that others have access to the same context that we do. As such, it's natural for us to assume that when others strongly disagree with us it's because they're unreasonable people. While this is understandable, it's conducive to emotional agitation which, when left unchecked, typically leads to further misunderstanding.


I should have waited until I had returned to emotional equilibrium before continuing my string of postings beyond the first two. Because I did not wait until returning to emotional equilibrium, my final pair of postings was less effectiveness-oriented than it should have been and more about satisfying my immediate need for self-expression. I wholeheartedly agree with a relevant quote by Eliezer from Circular Altruism:


This isn't about your feelings. A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain's feelings of comfort or discomfort with a plan


Mistake #4: Getting personal with insufficient justification


As Eliezer has said in Politics is the Mind-Killer, it's best to avoid touching on emotionally charged topics when possible. One LW poster who's really great at this and who I look to as a role model in this regard is Yvain.


In my posting on The Importance of Self-Doubt I levied personal criticisms which many LW commentators felt uncomfortable with [1], [2], [3], [4]. It was wrong for me to make such personal criticisms without having thoroughly explored alternate avenues for accomplishing my goals. At least initially, I could have spoken in more general terms as prase did in a comment on my post - this may have sufficed to accomplish my goals without the need to discuss the sensitive subject matter that I did.


Mistake #5: Failing to share my posts with an SIAI supporter before posting


It's best to share one's proposed writings with a member of a given group before offering public criticisms of the activities of members of the said group. This gives him or her an opportunity to respond and provide context which one may be unaware of. After I made my sequence of postings, I had extensive dialogue with SIAI Visiting Fellow Carl Shulman. In the course of this dialogue I realized that I had crucial misconceptions about some of SIAI's activities. I had been unaware of some of the activities which SIAI staff have been engaging in; activities which I judge to have significant positive expected value. I had also misinterpreted some of SIAI's policies in ways that made them look worse than they now appear to me to be.


Sharing my posts with Carl before posting would have given me the opportunity to offer a more evenhanded account of SIAI's activities and would have given me the feedback needed to avoid being misinterpreted.


Mistake #6: Expressing apparently absurd views before contextualizing them


In a comment to one of my postings, I expressed very low confidence in the success of Eliezer's project. In line with Talking Snakes: A Cautionary Tale, I imagine that a staunch atheist would perceive a fundamentalist Christian's probability estimate of the truth of Christianity to be absurd and that on the flip side a fundamentalist Christian would perceive a staunch atheist's probability estimate of the truth of Christianity to be absurd. In absence of further context, the beliefs of somebody coming from a very different worldview inevitably seem absurd independently of whether or not they're well grounded.


There are two problems with beginning a conversation on a topic by expressing wildly different positions from those of one's conversation partners. One is that this tends to damage one's own credibility in one's conversation partner's eyes. The other is that doing so often carries an implicit suggestion that one's conversation partners are very irrational. As Robin Hanson says in Disagreement is Disrespect:


...while disagreement isn’t hate, it is disrespect.  When you knowingly disagree with someone you are judging them to be less rational than you, at least on that topic.


Extreme disagreement can come across as extreme disrespect. In line with what Yvain says in How to Not Lose an Argument, expressing extreme disagreement usually has the effect of putting one's conversation partners on the defensive and is detrimental to their ability to Leave a Line of Retreat.


In a comment on my Existential Risk and Public Relations posting Vladimir_Nesov said


The level of certainty is not up for grabs. You are as confident as you happen to be, this can't be changed. You can change the appearance, but not your actual level of confidence. And changing the apparent level of confidence is equivalent to lying.


I disagree with Vladimir_Nesov that changing one's apparent level of confidence is equivalent to lying. There are many possible orders in which one can state one's beliefs about the world. At least initially, presenting the factors that lead one to one's conclusion before presenting one's conclusion projects a lower level of confidence in one's conclusion than presenting one's conclusions before presenting the factors that lead one to these conclusions. Altering one's order of presentation in this fashion is not equivalent to lying and moreover is actually conducive to rational discourse.


As Hugh Ristik said in response to Reason is not the only means of overcoming bias,


The goal of using these forms of influence and rhetoric is not to switch the person you are debating from mindlessly disagreeing with you to mindlessly agreeing with you.


[..]


One of the best ways to change the minds of people who disagree with you is to cultivate an intellectual friendship with them, where you demonstrate a willingness to consider their ideas and update your positions, if they in return demonstrate the willingness to do the same for you. Such a relationship rests on both reciprocity and liking. Not only do you make it easier for them to back down and agree with you, but you make it easier for yourself to back down and agree with them.


When you have set up a context for the discussion where one person backing down isn't framed as admitting defeat, then it's a lot easier to do. You can back down and state agreement with them as a way to signal open-mindedness and the willingness to compromise, in order to encourage those qualities also in your debate partner. Over time, both people's positions will shift towards each other, though not necessarily symmetrically.


Even though this sort of discourse is full of influence, bias, and signaling, it actually promotes rational discussion between many people better than trying to act like Spock and expecting people you are debating to do the same.


I should have preceded my expression of very low confidence in the success of Eliezer's project with a careful and systematic discussion of the factors that led me to my conclusion. 


Aside from my failure to give proper background for my conclusion, I also failed to be sufficiently precise in stating my conclusion. One LW poster interpreted my reference to \"Eliezer's Friendly AI project\" to be \"the totality of Eliezer's efforts to lead to the creation of a Friendly AI.\" This is not the interpretation that I intended - in particular I was not including Eliezer's networking and advocacy efforts (which may be positive and highly significant) under the umbrella of \"Eliezer's Friendly AI project.\" By \"Eliezer's Friendly AI project\" I meant \"Eliezer's attempt to unilaterally build a Friendly AI that will go FOOM in collaboration with a group of a dozen or fewer people.\" I should have made a sharper claim to avoid the appearance of overconfidence.


Mistake #7: Failing to give sufficient context for my remarks on transparency and accountability


After I made my Transparency and Accountability posting, Yvain commented


The bulk of this is about a vague impression that SIAI isn't transparent and accountable. You gave one concrete example of something they could improve: having a list of their mistakes on their website. This isn't a bad idea, but AFAIK GiveWell is about the only charity that currently does this, so it doesn't seem like a specific failure on SIAI's part not to include this. So why the feeling that they're not transparent and accountable?


In my own mind it was clear what I meant by transparency and accountability, but my perspective is sufficiently exotic that it's understandable that readers like Yvain would find my remarks puzzling or even incoherent. One aspect of the situation is that I share GiveWell's skeptical Bayesian prior. In A conflict of Bayesian priors? Holden Karnofsky says:


When you have no information one way or the other about a charity’s effectiveness, what should you assume by default?


Our default assumption, or prior, is that a charity - at least in one of the areas we’ve studied most, U.S. equality of opportunity or international aid - is falling far short of what it promises donors, and very likely failing to accomplish much of anything (or even doing harm). This doesn’t mean we think all charities are failing - just that, in the absence of strong evidence of impact, this is the appropriate starting-point assumption.


I share GiveWell's skeptical prior when it comes to the areas that GiveWell has studied most, and I feel that it's justified to an even greater extent when applied to the cause of existential risk reduction, for the reason given by prase:


The problem is, if the cause is put so far in the future and based so much on speculations, there is no fixed point to look at when countering one's own biases, and the risk of a gross overestimation of one's agenda becomes huge.


Because my own attitude toward the viability of philanthropic endeavors in general is so different from that of many LW posters, when I suggested that SIAI is insufficiently transparent and accountable, many LW posters felt that I was unfairly singling out SIAI. Statements originating from a skeptical Bayesian prior toward philanthropy are easily misinterpreted in this fashion. As Holden says:


This question might be at the core of our disagreements with many


[...]


Many others seem to have the opposite prior: they assume that a charity is doing great things unless it is proven not to be. These people are shocked that we hand out “0 out of 3 stars” for charities just because so little information is available about them; they feel the burden of proof is on us to show that a charity is not accomplishing good.


I should have been more explicit about my Bayesian prior before suggesting that SIAI should be more transparent and accountable. This would have made it clearer that I was not singling SIAI out. Now, in the body of my original post I attempted to allude to my skeptical Bayesian prior when I said:


I agree ... that in evaluating charities which are not transparent and accountable, we should assume the worst.


but this statement was itself prone to misinterpretation. In particular, some LW posters interpreted it literally when I had intended \"assume the worst\" to be a shorthand figure of speech for \"assume that things are considerably worse than they superficially appear to be.\" Eliezer responded by saying


Assuming that much of the worst isn't rational


I totally agree with Eliezer that literally assuming the worst is not rational. I thought that my intended meaning would be clear (because the literal meaning is obviously false), but in light of contextual cues that made it appear as though I had an agenda against SIAI my shorthand was prone to misinterpretation. I should have been precise about what my prior assumption is about charities that are not transparent and accountable, saying: \"my prior assumption is that funding a given charity which is not transparent and accountable has slight positive expected value which is dwarfed by the positive expected value of funding the best transparent and accountable charities.\"


As Eliezer suggested, I also should have made it more clear what I consider to be an appropriate level of transparency and accountability for an existential risk reduction charity. After I read Yvain's comment referenced above, I made an attempt to explain what I had in mind by transparency and accountability in a pair of responses to him [1], [2], but I should have done this in the body of my main post before posting. Moreover, I should have preempted his remark:


Anti-TB charities can measure how much less TB there is per dollar invested; SIAI can't measure what percentage safer the world is, since the world-saving is still in basic research phase. You can't measure the value of the Manhattan Project in \"cities destroyed per year\" while it's still going on.


by citing Holden's tentative list of questions for existential risk reduction charities.


Mistake #8: Mentioning developing world aid charities in juxtaposition with existential risk reduction


In the original version of my Transparency and Accountability posting I said


I believe that at present GiveWell's top ranked charities VillageReach and StopTB are better choices than SIAI, even for donors like utilitymonster who take astronomical waste seriously and believe in the ideas expressed in the cluster of blog posts linked under Shut Up and multiply.


In fact, I meant precisely what I said and no more, but as Hanson says in Against Disclaimers, people presume that:


If you say you prefer option A to option B, you also prefer A to any option C.


Because I did not add a disclaimer, Airedale understood me to be advocating in favor of VillageReach and StopTB over all other available options. Those who know me well know that over the past six months I've been in the process of grappling with the question of which forms of philanthropy are most effective from a utilitarian perspective and that I've been searching for a good donation opportunity which is more connected with the long-term future of humanity than VillageReach's mission is. But it was unreasonable for me to assume that my readers would know where I was coming from. 


In a comment on the first of my sequence of postings orthonormal said:


whpearson mentioned this already, but if you think that the most important thing we can be doing right now is publicizing an academically respectable account of existential risk, then you should be funding the Future of Humanity Institute.


From the point of view of the typical LW poster it would have been natural for me to address orthonormal's remark in my brief discussion of the relative merits of charities for those who take astronomical waste seriously and I did not do so. This led some [1], [2], [3] to question my seriousness of purpose and further contributed to the appearance that I have an agenda against SIAI. Shortly after I made my post Carl Shulman commented saying:

\n
\n

The invocation of VillageReach in addressing those aggregative utilitarians concerned about astronomical waste here seems baffling to me.


After reading over his comment and others and thinking about them, I edited my post to avoid the appearance of favoring developing world aid over existential risk reduction, but the damage had already been done. Based on the original text of my posting and my track record of donating exclusively to VillageReach, many LW posters have persistently understood me to have an agenda in favor of developing world aid and against existential risk reduction charities.


The original phrasing of my post made sense from my own point of view. I believe supporting GiveWell's recommended charities has high expected value because I believe that doing so strengthens a culture of effective philanthropy and that in the long run this will meaningfully lower existential risk. But my thinking here is highly non-obvious and it was unreasonable for me to expect that it would be evident to readers. It's easy to forget that others can't read our minds. I damaged my credibility by mentioning developing world aid charities in juxtaposition with existential risk reduction without offering careful explanation for why I was doing so.


My reference to developing world aid charities was also not effectiveness-oriented. As far as I know, most SIAI donors are not considering donating to developing world aid charities. As described under the heading \"Mistake #3\" above, I slipped up and let my desire for personal expression take precedence over actually getting things done. As I described in Missed Opportunities For Doing Well By Doing Good I personally had a great experience with discovering GiveWell and giving to VillageReach. Instead of carefully taking the time to get to know my audience, I simple-mindedly generalized from one example and erroneously assumed that my readers would be coming from a perspective similar to my own.


Conclusion:


My recent experience has given me heightened respect for the careful writing style of LW posters like Yvain and Carl Shulman. Writing in this style requires hard work and the ability to delay gratification, but the cost is often well worth it in the end. When one is writing for an audience that one doesn't know very well, there's a substantial risk of being misinterpreted, because one's readers do not have enough context to understand what one is driving at. This risk can be mitigated by taking the time to provide detailed background for one's readers and by taking great care to avoid making claims (whether explicit or implicit) that are too strong. In principle one can always qualify one's remarks later on, but it's important to remember that, as komponisto said,


First impressions really do matter


so it's preferable to avoid being misunderstood the first time around. On the flip side, it's important to remember that one may be misled by one's own first impressions. There are LW posters whom I initially took to have a hostile agenda against me, but whom I now understand to be acting in good faith.


This was my first time writing about a controversial subject in public, and it has been a substantive learning experience for me. I would like to thank the Less Wrong community for giving me this opportunity. I'm especially grateful to posters CarlShulman, Airedale, steven0461, Jordan, Komponisto, Yvain, orthonormal, Unknowns, Wei_Dai, Will_Newsome, Mitchell_Porter, rhollerith_dot_com, Eneasz, Jasen and PeerInfinity for their willingness to engage with me and help me understand why some of what I said and did was subject to misinterpretation. I look forward to incorporating the lessons that I've learned into my future communication practices.

" } }, { "_id": "R6NkDC2mayPoXteQS", "title": "Proposal for a structured agreement tool", "pageUrl": "https://www.lesswrong.com/posts/R6NkDC2mayPoXteQS/proposal-for-a-structured-agreement-tool", "postedAt": "2010-09-30T23:31:24.793Z", "baseScore": 8, "voteCount": 7, "commentCount": 12, "url": null, "contents": { "documentId": "R6NkDC2mayPoXteQS", "html": "

I hope this is a good place for this - comments/suggestions welcome - offers of collaboration more than welcome!


I envisage a kind of structured wiki, centred around the creation of propositions, which can be linked to allow communities of interest to rapidly come to fairly sophisticated levels of mutual understanding; the aim being to foster the development of strong groups with confidence in shared, conscious positions. This should allow significant confidence in collaboration.
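To make the shape of this more concrete, here is a minimal sketch of the kind of data model such a tool might sit on. Every name in it (Proposition, Group, match_groups, and so on) is my own placeholder rather than part of any existing design, and the matching rule is only one possibility among many:

```python
# Hypothetical sketch of the core objects: propositions, subscriptions, groups.
from dataclasses import dataclass, field

@dataclass
class Proposition:
    prop_id: str
    title: str
    text: str
    forked_from: str | None = None                      # set when someone 'clones'/'forks' it
    linked: set[str] = field(default_factory=set)        # ids of related propositions

@dataclass
class Group:
    name: str
    mission: str
    core_props: set[str] = field(default_factory=set)    # propositions the group deems fundamental

# A user's profile is just their subscription strengths, 0.0 (reject) to 1.0 (subscribe strongly).
Profile = dict[str, float]

def match_groups(profile: Profile, groups: list[Group], threshold: float = 0.6) -> list[Group]:
    """Suggest groups whose core propositions the user already tends to endorse."""
    matches = []
    for group in groups:
        if not group.core_props:
            continue
        # Average subscription strength over the group's core propositions;
        # propositions the user hasn't rated count as 0.
        score = sum(profile.get(p, 0.0) for p in group.core_props) / len(group.core_props)
        if score >= threshold:
            matches.append(group)
    return matches
```

Something this simple already supports most of the imagined interactions below: subscribing, forking, and being quietly pointed at groups whose networks overlap with your developing profile.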


Some aspects, in no particular order;


Enough of these for now. Some imagined interactions might be more helpful;

  1. I stumble across the site (as I stumbled across LessWrong), and browse proposition titles. I come across one called 'Other people are real, just like me'. It contains some version of the argument for accepting that other humans are to be assumed to have roughly the same motivations, needs et al, as me, and the suggestion that this is a useful founding block for a rational morality. I decide to subscribe, fairly strongly. I am offered a tailored selection of related propositions, as identified by the groups that have included this proposition in their networks (without identification of said groups, I rather think) - I investigate these, and at some point, the system feels that my developing profile is beginning to match that of some group or groups - and offers me the chance to look at their 'mission statement' pages. I decide to come back another day and look at other propositions included in these groups' networks, before going any further. I decline to have my profile made public, so that the groups don't contact me.
  2. I come across some half-baked, but interesting proposition. As a registered user, but not the originator of the proposition, I have some choices; I can comment on the proposition, hoping to engage in dialogue with the proposer that could be fruitful, or I can 'clone' (or 'fork') the proposition, and seek to improve it myself. Ultimately, the interest of other users will determine the influence and relevance of the proposition.
  3. I am a fundamentalist christian (!). I come across the site, and am appalled at its secular, materialist tone. I make a new proposition; 'The Bible is revealed truth, in all its glory' (or some such twaddle. Of course, I omit to specify which edition, and don't even consider the option of a language other than english - but hey, what do you expect?). Within days, I have assembled a wonderful active group of woolly minded people happily discussing the capacity of Noah's Ark, or whatever. The point here is that the platform is just that - a platform. Human community is a Good Thing.
  4. I am pushed upward by the group I am part of to some sort of moderator role. The system shows various other groups who agree more or less strongly with most of the propositions our group deems fundamental. I contact my opposite number in one of those, and we together make a new proposition which we believe could be a vehicle for discussions that could lead to a merger.
  5. I wish to write a business plan that is not a pile of dead tree gathering dust 6 weeks after it was presented to the board. I attempt to set out the aims of the business as fundamental propositions, and advertise this network to my colleagues, who suggest refinements. On this basis, we work up a description of the important policies and 'business rules' which define the enterprise. These remain accessible and editable, so that they can evolve along with the business.
  6. I am considering an open-source project. I set out the fundamental aims and characteristics of the tool I am proposing, and link them together. The system allows me to set myself up as a group. I sit back and wait for others to comment. Based on these comments, the propositions are refined, others added, relationships built with potential collaborators. At some point, we form a group, and the project gets under way. Throughout its life, the propositions are continually refined and added to. The propositions are a useful form of marketing, and save us a great deal of bother talking to people who want to know what/why/how.

Enough... Point 6 is almost recursive.......


There is more discursive (and older) material, here.


Thanks for reading, and please do comment.

" } }, { "_id": "nh4HPvxyeyny4oxi8", "title": "Are you doing what you should be doing?", "pageUrl": "https://www.lesswrong.com/posts/nh4HPvxyeyny4oxi8/are-you-doing-what-you-should-be-doing", "postedAt": "2010-09-30T23:23:13.579Z", "baseScore": 6, "voteCount": 6, "commentCount": 5, "url": null, "contents": { "documentId": "nh4HPvxyeyny4oxi8", "html": "

"What am I doing? And why am I doing it?"

One method for increasing high utility productivity I thought up was choosing a specific well-defined answer for the second half ("Why am I doing it?") and consistently checking to see if the answer to the first half satisfyingly aligns with the second half. For example, if I'd checked myself an hour ago, it'd be "I'm learning to program because I want to maximize the probability of FAI development." Ideally the second half would be related to a 'something to protect' or 'definite major purpose' that stays constant over time and that you want to be consistently moving towards. If you're already good at noticing rationalization this technique might work to induce cognitive dissonance when engaging in suboptimal courses of action. (Whether or not inducing cognitive dissonance in order to make yourself more productive is likely to work is open to debate. I suspect P.J. Eby would thoroughly disagree.) I'm going to try this over the next few days and see if the results are any better than how I've been doing recently. I am at a relative productivity high point right now though, so the data might not be too meaningful. I encourage others to see if this method works.

If you are equally good at explaining any plan, you have zero productivity.

An example that's sorta inspired by my own thinking, though not exactly:

"I'm learning to program because I want to maximize the probability of FAI development." ...That doesn't sound right. Maybe learning to program will help me think more rationally? But the connection is pretty loose, both from 'learning to program' to 'improving the relevant thinking skills' and from 'me thinking better' to 'a greater probability of FAI development'. Maybe learning to program will help me get a job to donate to FAI development? Money is the unit of caring, after all. (Note: cached thought, re-examine carefully.) But to be honest, my comparative advantage doesn't seem to be in making money. I should think about this more. Even so, I doubt programming is the best way for me to make money, since talking with Louie about it. Thus learning to program is probably a suboptimal action, though better than e.g. playing video games, if for whatever reason those were my only two options.

What to do, then? I should probably go ask 'wedrifid' from LW for a recommended book on neurochemistry so I can better understand nootropics; the FAI team (whoever they end up being) will need to be pretty smart, after all. I can see a direct link from better understanding nootropics to a better chance of FAI. "I'm studying nootropic-relevant neurochemistry because I want to maximize the probability of FAI development." Probably still suboptimal, but a thorough improvement, and it will save me dozens of hours of suboptimal work. Yay for going meta! Wait though, are there ways to go more meta? After getting more familiar with the topic, I should probably find lots of people to talk to that know more about nootropics, and get information from them as to what nootropics will work best, and what areas of study I should be focusing on. That way I can save dozens of hours of suboptimal work. Yay for going meta! I should repeat this process until going meta no longer produces time-saving results.

 

" } }, { "_id": "KpP7zwJHbw29B3G8D", "title": "Which parts of philosophy are worth studying from a pragmatic perspective?", "pageUrl": "https://www.lesswrong.com/posts/KpP7zwJHbw29B3G8D/which-parts-of-philosophy-are-worth-studying-from-a", "postedAt": "2010-09-30T21:32:25.456Z", "baseScore": 5, "voteCount": 3, "commentCount": 29, "url": null, "contents": { "documentId": "KpP7zwJHbw29B3G8D", "html": "

[intentionally left blank]

" } }, { "_id": "sa46t9uXaev2ZYZ9u", "title": "How do you organize your research?", "pageUrl": "https://www.lesswrong.com/posts/sa46t9uXaev2ZYZ9u/how-do-you-organize-your-research", "postedAt": "2010-09-30T19:37:33.147Z", "baseScore": 2, "voteCount": 2, "commentCount": 7, "url": null, "contents": { "documentId": "sa46t9uXaev2ZYZ9u", "html": "

This recent discussion post by SarahC got me thinking about how one can rationally manage research. It seems like software might be useful here, but I don't know how exactly the software should work. I'm intrigued by mind mapping software, but it's possible that all that structure is unnecessary and you could do quite well with less. For instance, I'm considering trying a system of timestamped notes which are managed by a tagging system. If the tagging was done thoroughly enough, you could filter through all the posts sharing a cluster of tags and fairly easily get access to every idea you've recorded on a certain topic.
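For what it's worth, the tag-and-filter scheme I'm imagining is simple enough that a first pass barely needs dedicated software. A rough, purely illustrative sketch (in-memory only; a real version would persist the notes somewhere):

```python
# Sketch of a timestamped, tagged note store with cluster-of-tags filtering.
from datetime import datetime

notes: list[dict] = []

def add_note(text: str, tags: set[str]) -> None:
    """Record a note with a timestamp and an arbitrary set of tags."""
    notes.append({"time": datetime.now(), "text": text, "tags": set(tags)})

def filter_notes(required_tags: set[str]) -> list[dict]:
    """Return every note carrying all of the given tags, newest first."""
    hits = [n for n in notes if required_tags <= n["tags"]]
    return sorted(hits, key=lambda n: n["time"], reverse=True)

# Example: pull up every idea recorded about priors for a particular project.
add_note("Check whether the skeptical-prior argument applies here.", {"priors", "project-x"})
for note in filter_notes({"priors", "project-x"}):
    print(note["time"].date(), note["text"])
```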


The only problem is, I think I'd want even more specialized software than that. I'd want to integrate my notes with some form of bibliography management, and at least a to-do list. And I can imagine more, for example perhaps there could be a \"sticky note\" capability where I could pin up and move around things that I either want to remember or that will help me with my research, like an inspirational quote if I'm not feeling motivated to do research, or the Litany of Tarski or some other rationality technique if I really need to remember to use it.


I'm not sure if these ideas are all sound, but a basic requirement for the software would be to document the structure of your research so that it can be analyzed for effectiveness.


I know there are some people on Less Wrong who do research, so I suppose I should defer to the experts here: how do you organize your research? What methodologies and tools do you use? Why?

" } }, { "_id": "tXvmKfHtv2gzmPHe7", "title": "Tell me what I don't know about life insurance.", "pageUrl": "https://www.lesswrong.com/posts/tXvmKfHtv2gzmPHe7/tell-me-what-i-don-t-know-about-life-insurance", "postedAt": "2010-09-30T12:21:23.655Z", "baseScore": 3, "voteCount": 2, "commentCount": 4, "url": null, "contents": { "documentId": "tXvmKfHtv2gzmPHe7", "html": "

I was offered life insurance and thought I needed input from LessWrong people before accepting or declining the offer.


Please share your personal experience, if any, with me.


Thoughts that have occurred to me:


What are the odds that that company will be around in 38 years? I mean, both World Wars fit in that time span.


What is the probability distribution over how much the Euro will have inflated by then? Does that even matter?
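To get a feel for whether it matters, here's a quick back-of-the-envelope calculation; the payout figure and the inflation rates are made up purely for illustration:

```python
# Real value today of a fixed nominal payout received 38 years from now,
# under a few assumed average inflation rates. All numbers are illustrative.
nominal_payout = 100_000   # hypothetical payout in euros
years = 38

for inflation in (0.01, 0.02, 0.05):
    real_value = nominal_payout / (1 + inflation) ** years
    print(f"{inflation:.0%} average inflation -> ~{real_value:,.0f} EUR in today's money")

# Roughly: 1% -> ~68,500 EUR, 2% -> ~47,100 EUR, 5% -> ~15,700 EUR.
```

So it does seem to matter: over that time span even modest inflation can halve the real value, and the chance of the company disappearing compounds on top of that in the same multiplicative way.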

" } }, { "_id": "KYiSzaoYARgTJGcMX", "title": "Messy Science", "pageUrl": "https://www.lesswrong.com/posts/KYiSzaoYARgTJGcMX/messy-science", "postedAt": "2010-09-30T06:08:55.684Z", "baseScore": 18, "voteCount": 14, "commentCount": 17, "url": null, "contents": { "documentId": "KYiSzaoYARgTJGcMX", "html": "

Sometimes it's obvious who the good scientists are.  They're the ones who have the Nobel Prize, or the Fields Medal.  They're the ones with named professorships.  But sometimes it's not obvious -- at least, not obvious to me.  


In young, interdisciplinary fields (I'm most familiar with certain parts of applied math) there are truly different approaches.  So trying to decide between approaches is at least partly tied to whether you think something is, say, really a biology problem, a computer science problem, or a mathematics problem.  (And that's influenced by your educational background.)  There are issues of taste: some people prefer general, elegant solutions, while some people think it's more useful to have a precise model geared to a specific problem.  There are issues of goals: do we want to build a tool that can be brought to market, do we want to prove a theorem, or do we want to model what a biological brain does?  And there's always tension between making assumptions about the data that allow you to do prettier math, versus permitting more \"nastiness\" and obtaining more modest results.


There's a lot of debate, and it's hard for a novice to make comparisons; usually the only thing we can do is grab the coattails of someone who has proven expertise in an older, more traditional field.  That's useful for becoming a scientist, but the downside is that you don't necessarily get a complete picture (as I get my education in math, I'm going to be more inclined to believe that the electrical engineers are doing it all wrong, even though the driving *reason* for that belief is that I didn't want to be an electrical engineer when I was 18.)


I'm hankering for some kind of meta-science that tells you how to judge between different avenues of research when they're actually different.  (It's much easier to say \"Lab A used sounder methodology than Lab B,\" or \"Proof A is more general and provides a stronger result than Proof B.\")  Maybe it's silly on my part -- maybe it's asking to compare the incomparable.  But it strikes me as relevant to the LW community -- especially when I see comments to the effect that such-and-such approach to AI is a dead end, not going to succeed, written as though the reason why should be obvious.  I don't know about AI, but it does seem that correctly predicting which research approaches are \"dead ends\" is a hard problem, and it's relevant to think about how we do it. What's your methodology for deciding what's worth pursuing?


(Earlier I wrote an article called \"What is Bunk?\" in which I tried to understand how we identify pseudoscience.  This is roughly the same question, but at a much higher level, when the subjects of comparison are all professional scientists writing in peer-reviewed journals.)

" } }, { "_id": "HepNp5HWAypsGEGJK", "title": "Human inability to assign numerical probabilities", "pageUrl": "https://www.lesswrong.com/posts/HepNp5HWAypsGEGJK/human-inability-to-assign-numerical-probabilities", "postedAt": "2010-09-30T04:42:26.633Z", "baseScore": 4, "voteCount": 5, "commentCount": 15, "url": null, "contents": { "documentId": "HepNp5HWAypsGEGJK", "html": "

Whenever we talk about the probability of an event that we do not have perfect information about, we generally use qualitative descriptions (e.g. possible but improbable). When we do use numbers, we usually just stick to a probability range (e.g. 1/4 to 1/3). A Bayesian should be able to assign a probability estimate to any well-defined hypothesis. For a human, trying to assign a numerical probability estimate is uncomfortable and seems arbitrary. Even when we can give a probability range, we resist averaging the probabilities we expect. For instance, I'd say that Republicans are more likely than not to take over the House, but the Democrats still have a chance. After pressing myself, I managed to say that the probability of the Democratic party keeping control of the House next election is somewhere between 25% and 40%. Condensing this to 32.5% just feels wrong and arbitrary. Why is this? I have thought of three possible reasons, which I listed in order of likeliness:


Maybe our brains are just built like frequentists. If we innately think of probabilities as being properties of hypotheses, it makes sense that we would not give an exact probability. If this is correct, it would mean that the tendency to think in frequentist terms is too entrenched to be easily untrained: I try to think like a Bayesian, and yet still suffer from this effect, and I suspect the same is true of most of you.


Maybe since our ancestors never needed to express numerical probabilities, our brains never developed the ability to. Even if we have data spaces in our brains to represent probabilities of hypotheses, it could be buried in the decision-making portion of our brains, and the signal could get garbled when we try to pull it out in verbal form. However, we also get uncomfortable when forced to make important decisions on limited information, which would be evidence against this.


Maybe there is selection pressure against giving specific answers because it makes it harder to inflate your accuracy after the fact, resulting in lower status. This seems highly unlikely, but since I thought of it, I felt compelled to write it anyway.


As there are people on this forum who actually know a thing or two about cognitive science, I expect I'll get some useful responses. Discuss.


Edit: I did not mean to imply that it is wise for humans to give a precise probability estimate, only that an ideal Bayesian would, while we don't.
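For concreteness, the averaging step I'm gesturing at above looks like this, under the purely illustrative assumption that my uncertainty is spread uniformly over the 25%-40% range:

```latex
\[
  P(\text{Democrats keep the House})
    = \int_0^1 p \, f(p) \, dp
    = E[p]
    = \frac{0.25 + 0.40}{2}
    = 0.325,
  \qquad f = \mathrm{Uniform}(0.25,\, 0.40).
\]
```

If the uncertainty isn't uniform the number moves, but an ideal Bayesian still ends up reporting a single expected value rather than a range.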

" } }, { "_id": "Twc9qFroY8BZQLcuY", "title": "US scientists find potentially habitable planet near Earth", "pageUrl": "https://www.lesswrong.com/posts/Twc9qFroY8BZQLcuY/us-scientists-find-potentially-habitable-planet-near-earth", "postedAt": "2010-09-30T00:39:57.129Z", "baseScore": 4, "voteCount": 4, "commentCount": 9, "url": null, "contents": { "documentId": "Twc9qFroY8BZQLcuY", "html": "

http://news.ycombinator.com/item?id=1741330

" } }, { "_id": "9WP9aznjrsZykxkEY", "title": "Request for rough draft review: Navigating Identityspace", "pageUrl": "https://www.lesswrong.com/posts/9WP9aznjrsZykxkEY/request-for-rough-draft-review-navigating-identityspace", "postedAt": "2010-09-29T17:51:18.184Z", "baseScore": 1, "voteCount": 5, "commentCount": 25, "url": null, "contents": { "documentId": "9WP9aznjrsZykxkEY", "html": "

Ignore this. After getting some much needed sleep I decided that the first half of this post, though necessary if I am to write the second half, isn't justified in being so complicated unless I actually use the added complications in solving the problems posed. First, I will use empiricism to see if this model is helpful in finding and verifying methods of attack on the problem of crafting a rationalist identity. If so, I will go back and try to simplify what I have here and make sure all parts are necessary to understand the solutions found.


Related to: Cached Selves, Cached Thoughts, A Rational Identity


Thesis: Harnessing the large pressures on thoughts and actions generated by innate drives for signaling, identity negotiation, and identity formation by consciously aiming for instrumentally optimal attractors in identityspace should allow for structured self-improvement and a natural defense against goal distortion. First, though, we must see the true scope of the problem, and then find methods of attack.


Part One: Conceptualizing the Problem Domain


I: It Begins


Life is an odd, long journey. The difference between a young girl and the wizened professor she becomes is quite profound. One of those crazy things in life that seems so natural. The child surely has a very different set of aspirations than the professor. They look different. They have entirely different thoughts. They share the same genes, but the two are still way more different than the average pair of identical twins. The professor has very dull memories of the child, but that's a pretty weak link, and it only goes one way. They have different senses of self, different beliefs, different affiliations, different relationships, different goals. Different identities. Is the professor what the child would have wished to become, if the child had known more and thought faster, had become wiser and more enlightened by some magical process? It's hard to say; it's hard to say how well the child's theoretically reflectively endorsed goals were fulfilled, how many utilons were attained by that standard.


There is a long, long journey between the child and the professor, and the journey is fraught with potential for both growth and goal distortion. Talk of genetic predispositions all you want, but Orwell wasn't that far off in claiming that man is infinitely malleable. That list of human universals you might have seen pertains to individuals much less than to cultures, and even less so to the individual splices of your average person's life. We may have a lot in common on average, but there's a heck of a lot of individual variance, and the memeplex of thoughtspace can do a lot of surprising things with that variance [homosexuality adaptation towards something else evo psych book]. Amplification, subjugation, deflection, attraction: mindspace is an interactive and powerful playing field, chaotic and multidimensional. If we are to think about it clearly, we should probably try to nail down some terminology.


II: Definitions: Mindspace, Attractors, Rationality


Mindspace is the configuration space of a mind. For humans at least, mindspace can be loosely broken down into thoughtspace and identityspace. Generally speaking, thoughtspace is ephemeral where identityspace is more constant, though changing. Attractors in the realm of thoughtspace tend to work on timescales like seconds or hours, where identityspace is what happens when patterns of thoughts, emotions, habits, et cetera persist for days or lifetimes. Humans are constantly moving through mindspace. Some are drifting lazily, some are propelled purposefully, some are tossed about chaotically. Ideally, we wish to move strongly and quickly towards the areas of mindspace that are most conducive to achieving our goals. We do so by thinking optimal thoughts and having optimal dispositions. Of course, it's never that easy. (Reference.)

\n

\n

\n
\n

Thoughtspace is a giant multidimensional web of knowledge and possible thoughts and insights. Most of it is handed to you in some form or another, from sensory experience or a book or a friend of a friend, probably mutated along the way, often for the worse. Thoughtspace is made up of thoughts, not dispositions or habits: those are in identityspace. But habits of thinking determine your movement through thoughtspace, and thoughts when thought habitually may quickly become part of your identity. The two spaces are connected at many joints, sometimes non-obviously.

\n

Your web of beliefs makes up your starting point for exploration of thoughtspace. There are many destinations to aim for. Sometimes, we wish to gain new knowledge to update an old belief, or sometimes we spend time thinking to integrate new evidence. Sometimes we synthesize old concepts and come up with something new. Sometimes we formulate plans to reach goals provided by our identities. The part of thoughtspace in your grasp is your map; it reflects reality, sorta, and allows you to navigate via your identity towards thoughts and beliefs that are likely to fulfill your goals, insofar as you know what your goals are.

\n
\n

Identityspace is a thousand-dimensional configuration space of thought patterns, emotion patterns, habits, dispositions, group affiliations, biases, goals, values, structured clusters of pre-conceived ideas, specific knowledge structures or cognitive representations of the self, mental frameworks centering on a specific theme that helps to organize social information, et cetera. The stability of identity varies from person to person. Most people generally don't change all that much past a certain point. Some, like those with bipolar disorder, cycle between two configurations in identityspace. Some purposefully reinvent themselves every few months.

\n

But at any rate, identity is obviously very malleable, even over short spans of time. Chaotic fluctuations are not uncommonly triggered by innocuous stimuli. Most often, though, significant shifts in identityspace follow noticeably large events. Long-held grudges are dropped, religions are broken with, et cetera. But more regularly, people slowly and suboptimally drift through identityspace, not taking care to carefully monitor their progress or their desires, and letting goal distortion creep in to the poorly kept garden of rationality.

\n

Goals are a significant part of identityspace: for most people, they provide stable targets to work for, instead of just drifting about in the memetic sea. Thus, a great though incredibly difficult ability is being able to determinedly, methodically, swiftly, and carefully progress through identityspace towards strictly defined and significant goals. This article attempts to examine the problems and opportunities encountered while doing so.

\n
\n

Attractors are a class of thoughts, reactions, dispositions, emotions, social pressures, biases, and anything else that will push or pull or deflect or alter your grand voyage through mindspace, for better or worse. It might help to think in terms of 'positive' and 'negative' attractors, codified by their expected value in terms of successful navigation of mindspace. You can adjust for attractors, but it's difficult, and it takes a whole bunch of rationality before you start doing it right, instead of overcorrecting or trying to be too clever in using the gusts to your advantage.

\n

Unpredictable storms of mind projection fallacy on the horizon will force you off course, and whirlpools of confirmation bias will try to suck you into the abyss. Sometimes people will try to mess around with your map just to be dicks, as if Poseidon and Zeus decided to get together to zap you with thunderbolts for fun and profit. I think I may have just confused the map with the territory. You see? It's difficult. (Inspired by these.)

\n
\n

Rationality is the method of navigation. It helps you figure out what the territory actually looks like, so you can plot a course of thoughts that leads roughly in the right direction. Navigating these insane monster-ridden hyperdimensional waters isn't exactly easy. I won't go into detail about rationality, as that is what the rest of Less Wrong is for.

\n
\n

 

\n

III: The Evolution of Navigation

\n

Rationality, though similar to past attempts at careful philosophy and social epistemology, is a new and different art form. Traditional rationality, the precursor to true rationality, was the product of the social process of science. Science cares not about what goes on inside your head when you navigate thoughtspace. It exists only to verify that what you find on your journey is of value in reality. Traditional rationality is what happens when you take care to hold your methodology for thinking to the same standard that would be demanded by peer review. The traditional rationalist dreams up a bunch of scientists sorta like himself, but much more critical, and puts his ideas under their imagined expected scrutiny. Sometimes the final judge is reality, but ideally you can eliminate most silly ideas before they get that far; it's best to not actually test your garage for invisible dragons.

\n

The rationality that makes up Overcoming Bias and Less Wrong canon is what happens when the peer review council is instead made up of superintelligences that tolerate nothing less than exact precision. There is no such thing as an 'accepted' hypothesis: superintelligences don't speak binary. The number of 9's you trail your probability estimates with had better be carefully calibrated, or your Bayes score is going doooown. But such judges do not exist in reality, and so we have the Great Problem: we cannot easily verify rationality.

\n

Bayesianism is the ideal of the perfected art, though it is incredibly difficult for humans to approximate, and the field, though growing, has few practitioners. Bayesian decision theory is the ideal method of achieving our goals, but we haven't fully worked it out yet, and humans are even further away from that ideal than just Bayesianism. It's hard to know what we're even striving for, or what methods could help us do so. We rationalists do not yet have as refined of tools for becoming maximally effective people as we do for finding truth. It is probable that the lore to help us do so is already hidden somewhere for us to find.

\n

Buddhist meditation may be rationality's true spiritual predecessor. Both arts give the layman tools for navigating thoughtspace and, to a lesser extent, identityspace. Like meditation, it's hard to watch someone being rational. The vast majority of correct thinking is unremarkable stuff that goes on inside your head, and furthermore, again like meditation, most of it isn't verbal. It's intuitive instinctual reaction to patterns of thought that are either in harmony with the Bayes, or not. Distracting or biasing thoughts are pushed aside as the meditator or rationalist tries to see the world as it really is. Where meditation is concerned primarily with precise thought focused on the self, the rationality glorified by Less Wrong is precise thought focused on just about everything else. Perhaps this should change, and the two kin arts should become one, more precise and more effective.

\n

I cannot speak with any authority on meditation -- I hope others will do so -- but it does not seem to me that rationality provides one with many tools for navigating identityspace. You are told things like, policy debates should not appear one-sided. Be wary when arguing your preferred position, for arguments are like soldiers, and you will feel uneasy about betraying your allies. The Greens and the Blues spilt blood over the most trivial and unimportant things; tribal affiliations revolving around sports matches within a single unified civilization. Not even Raiders fans are that crazy, generally. You are told the story of the cult of Ayn Rand, naively called 'the most unlikely cult in history.' You are told of the fearsomely powerful confirmation bias, positive bias, consistency effects, commitment effects, cached selves, priming, anchoring, and a legion of other ways for your identity to screw you over if you're not incredibly attentive. And these things you are told present a gestalt of bad habits of thought to avoid around this whole identity thing. But the tools are not precise, and they don't really allow you to do it right. Reversed stupidity continually fails to equal intelligence. Identity can be good. Perhaps we could harness the power of identity to correctly navigate identityspace, and thus thoughtspace?

\n

 

\n

IV: Common Attractors in Mindspace

\n

There are way too many types of attractors, deflectors, warpers, and all sorts of weird creatures in mindspace to list exhaustively. Still, these are some of the ones that come most quickly to mind:

\n

Group

\n\n

Nongroup

\n\n

An exhaustive list of biases could also be made, but the above is a short attempt at listing the ones that trip people up most often, as well as the ones that people tend to use most effectively to motivate themselves to accomplish their nominal goals. The sword is double-edged, though the negative edge does tend to be significantly sharper. It is important to note that a sufficiently advanced rationalist could use any of the above to more effectively achieve their goals. Whether or not that is possible in practice is the main question of this post.

\n

 

\n

V: Rational Actors and Rationalist Attractors

\n

From the perspective of an 'aspiring rationalist' (group identity), positive attractors in identityspace do not tend to be strongly group-based. The sorts of dispositions we would like to cultivate are not particularly tied up in any of the attractors listed in the 'group' section, and perhaps not really in the 'nongroup' section either. Dispositions like 'I stay focused and work carefully but efficiently on the task at hand' aren't really well-connected to the most common natural attractors found in identityspace: maybe if we were in House Hufferpuffer it'd be different, but as it is, there's only a vague connection with instrumental rationality, not nearly enough to draw on the power of signaling and consistency effects. Other dispositions, like 'I notice confusion and immediately seek to find the faults in my model', come more naturally to an 'aspiring rationalist', as such cognitive tricks are the focus of Less Wrong canon.

\n

There are, of course, selection effects: Less Wrong is an attractor in mindspace that pulls strongest on those that already have similar dispositions and thoughts. Cascades are powerful. So it may not be surprising that Hufferpuffer dispositions do not come naturally to the aspiring rationalist.

\n

The downsides of rationalist attractors are fewer than in other groups, but they do exist. The most common one I see is overconfidence in philosophical intuition. Being handed a giant tome of careful philosophy from (mostly) Eliezer and Robin, we then think that our additional philosophical views are similarly bulletproof. I had this problem bad during the first 6 months after reading Less Wrong; I didn't notice that in reaching the level of rationality I'd reached, I was only verifying the reasoning of others, not doing new and thorough analysis myself. This gave me an inflated confidence in the correctness of my intuition, even when it clashed with the intuition of my rationalist superiors. Having the identity of 'careful analytical thinker' can lead you to think you're being careful when you're not.

\n

Also, typical groupthink. We're a pretty homogeneous bunch, and sometimes Less Wrong acts as an echo chamber for not-obviously-bad but non-optimal beliefs, habits, and ideas. Even so, this part of the rationalist identity is countered by counter-cultishness, which itself is countered by awareness of cultish counter-cultishness and the related countersignaling. Oh yeah, that's another double-edged attractor in rationalist mindspace: we're very quick to go meta. Different people have different thoughts as to the overall usefulness of this disposition. I personally am rather pro-meta, whereas some of our focused instrumental rationalists think of der wille zur meta as rationalist flypaper.

\n

Less Wrong, though, is an uncharacteristic island attractor of relative sanity amongst a constellation of crazy memes. Before we get too excited about our strengths, we should explore some ways that attractors can lead to our trajectory through mindspace going disastrously wrong.

\n

 

\n

VI: Social Psychology Meets the Mindspace

\n

Social psychology is an interesting science. It is inconsistent in its accuracy, and the theories don't always carve reality at its joints. That said, there's a lot of interesting thought that's been put into it, and it is a real science. The field is very Hansonian; indeed, construal level theory belongs to this realm. By cherrypicking what seem to me to be the most interesting concepts, I think it may be possible to establish a framework for new ways to approach rationalist problems in real life by reasoning about social and psychological phenomena in terms of attractors and trajectories through mindspace.

\n
\n

Cognitive dissonance is

\n
\n

an uncomfortable feeling caused by holding conflicting ideas simultaneously. The theory of cognitive dissonance proposes that people have a motivational drive to reduce dissonance. They do this by changing their attitudes, beliefs, and actions. Dissonance is also reduced by justifying, blaming, and denying. It is one of the most influential and extensively studied theories in social psychology.

\n

Experience can clash with expectations, as, for example, with buyer's remorse following the purchase of a new car. In a state of dissonance, people may feel surprise, dread, guilt, anger, or embarrassment. People are biased to think of their choices as correct, despite any contrary evidence.

\nA powerful cause of dissonance is an idea in conflict with a fundamental element of the self-concept, such as \"I am a good person\" or \"I made the right decision.\" The anxiety that comes with the possibility of having made a bad decision can lead to rationalization, the tendency to create additional reasons or justifications to support one's choices. A person who just spent too much money on a new car might decide that the new vehicle is much less likely to break down than his or her old car. This belief may or may not be true, but it would reduce dissonance and make the person feel better. Dissonance can also lead to confirmation bias, the denial of disconfirming evidence, and other ego defense mechanisms.
\n
\n

Self-verification is 

\n
\n

a social psychological theory that asserts people want to be known and understood by others according to their firmly held beliefs and feelings about themselves, that is self-views (including self-concepts and self-esteem). A competing theory to self-verification is self-enhancement or the drive for positive evaluations. 

\n

Because chronic self-concepts and self-esteem play an important role in understanding the world, providing a sense of coherence, and guiding action, people become motivated to maintain them through self-verification. Such strivings provide stability to people’s lives, making their experiences more coherent, orderly, and comprehensible than they would be otherwise. Self-verification processes are also adaptive for groups, groups of diverse backgrounds and the larger society, in that they make people predictable to one another thus serve to facilitate social interaction. To this end, people engage in a variety of activities that are designed to obtain self-verifying information.

\n
\n
\n

Construal level theory is the study of near versus far modes of cognition. Robin Hanson's pet topic. It attempts to describe

\n
\n

the relation between psychological distance and how abstract an object is represented in someone's mind. The general idea is that the more distant an object is from the individual the more abstract it will be thought of, while the opposite relation between closeness and concreteness is true as well. In CLT psychological distance is defined on several dimensions - temporal, spatial, social and hypothetical distance being considered most important, though there is some debate among social psychologists about further dimensions like informational, experiential or affective distance.

\n
\n

Perhaps the most interesting implication of construal level theory is on behavior and signaling patterns: actions are near, goals are far. Thus we are susceptible to spending time thinking, talking, and planning in far mode about the people we will be and the goals we will attain, but when the opportunities actually show up, near mode pragmatism and hypocrisy are engaged, and rationalizations are suddenly very easy to come by. The effects are profound. Sitting at home I think it'd be ridiculous not to ask for that one girl's number. I'm a suave guy, I'm competent, and I'm not afraid of rejection. It's a really high expected value scenario. But when I see her on the street, it turns out my anticipations were wildly off.

\n

Same with grand project ideas, like really jumpstarting a rationalist movement. That sounds great. I have a hundred ideas to try, and to email to other people to implement. But for some reason I never get around to doing them myself, even though I know the only thing stopping me is this weird sense of premeditated frustration at my own incompetence. It's hard to sabotage yourself more successfully than that.

\n

One of the most amazing superpowers a rationalist could pick up is the ability to act in near mode to optimize for rational far mode preferences. A hack like that would lead to a rationalist win, hands down. More on this in Part Two.

\n
\n

Schemata theory should ring your pattern matching bell. I'll quote from the Wikipedia article:

\n
\n

\n

A schema (pl. schemata), in psychology and cognitive science, describes any of several concepts including:

\n\n

A schema for oneself is called a \"self schema\". Schemata for other people are called \"person schemata\". Schemata for roles or occupations are called \"role schemata\", and schemata for events or situations are called \"event schemata\" (or scripts).

\n

Schemata influence our attention, as we are more likely to notice things that fit into our schema. If something contradicts our schema, it may be encoded or interpreted as an exception or as unique. Thus, schemata are prone to distortion. They influence what we look for in a situation. They have a tendency to remain unchanged, even in the face of contradictory information. We are inclined to place people who do not fit our schema in a \"special\" or \"different\" category, rather than to consider the possibility that our schema may be faulty. As a result of schemata, we might act in such a way that actually causes our expectations to come true.

\n
\n

Or in our terminology, schemata are attractors in mindspace. I highly recommend reading the whole Wikipedia article about schemata theory for a more concise and organized version of this post. It's a gem.

\n
\n

 

\n

VII: Introducing Goal Distortion

\n

It is difficult to reason about what counts as goal distortion. Humans are usually considered pretty bad at knowing what sorts of things they want. There are many who lead happy lives as ascetics without thinking that it'd have been nice to give more than wisdom to the starving and poor they left along their paths. And there are many more who chase after money and status without really realizing why, nor do they become happier in doing so. It is important, then, to identify which goals we choose to define as being distorted, and what goal system changes count as distortion and not simply enlightened maturation. Thus we come to the concepts of reflective endorsement and extrapolated volition.

\n

ga;bglamb;pl explanation explanation link to CEV bla bla bla

\n

explain why goal distortion is a serious problem bla bla bla

\n

explain mechanisms of goal distortion like commitment and consistency, avoiding cognitive dissonance, near/far, 

\n

standard methods for dealing with goal distortion like not doing drugs bla bla bla

\n

growth versus goal distortion

\n

potential for rigidly controlled and consciously maintained identity to minimize goal distortion and maximize beneficial change bla

\n

 

\n

VIII: Thoughts and Signals

\n

You think what you signal, you signal what you think. Feedback processes like that don't often go supercritical, but as Eliezer points out many times in this subsequence, it is important to watch out for such cascades. That you end up thinking a lot about the things you wish to signal is one of those insights that Michael Vassar tosses around like it's no big deal, but if taken seriously, can be quite alarming and profound. I must confess, either I am significantly more aware of this fact than most, or, more likely, I have a rather strong form of this tendency. Being naturally somewhat narcissistic, I tend to have a bad habit of thinking in dialogue that is flattering to myself and my virtues. That is, flattering to the things I wish to signal. Like most people, I tend to want to signal good things, and thus by thinking about those good things I make them part of my identity and become more likely to actually do them, no?

\n

Well, sometimes it works that way. But we like to signal things in far mode, and it's pretty easy to rationalize hypocrisy when near mode work comes up. Most of the time, thinking about the things I wish to signal is a form of wireheading. Having a dialogue playing in my head about how I'm such a careful rationalist is arguably more pleasant and undoubtedly a lot easier than actually searching for and thinking about my beliefs' or my epistemology's real weak points. And when I'm thinking these self-glorifying thoughts all the time, you can bet it comes out in my interactions with people.

\n

You signal what you think; it's not easy to hide. And here enters yet another distortion to mislead you: the feeling and the belief of self-justification is a very strong attractor in mindspace, and like many such attractors, its danger is amplified by confirmation bias. Imagine that you signal your cherished disposition to a friend. Say, that you work hard on important problems. If your friend agrees and says so, you get a warm fuzzy feeling of self-verification (wiki link). The feedback loop gets more fuel. Which is great if you're able to use that fuel to actually do the important work you want to signal that you do, but not so much if the fuel is instead used to light distracting fires for your mind to worship all day instead. If your friend disagrees and calls you out on it, roughly the same things happen and the same rules apply. The responses are varied, but often people get offended, and dwell on that offense and how untrue it was, or dwell on ways to prove their attacker wrong with far mode thoughts of personal glory, or dwell on past examples of hard and diligent work done that prove their signals are credible. Such dwelling also provides fuel, though it's usually cruder, and even harder for the mind to use effectively.  

\n

bleh

\n

Your thoughts are bent by what you wish to signal. Choose your identity carefully.

\n

 

\n

IX: Signals and Identity

\n

You signal what you wish to identify with. You identify with what you signal. As if one potentially catastrophic feedback cycle was all your brain would provide you with. Always remember, it's never too difficult to shoot your own foot off.

\n

consistency effects literature summary bla bla bla

\n

cached selves awesomeness summary bla cached selves is really awesome everyone should read it bla bla

\n

near/far distinctions and effect on identity and signaling

\n

 

\n

X: Cascades: Thoughts, Signals, Identity

\n

It is inevitable that cascades will cause gravitation towards suboptimal attractors in mindspace. The strength of the currents theoretically tells you how hard you must steer in the other direction to hold a true course, but realistically, humans just aren't that good at updating against known biases. You can see an iceberg and see its danger, but the whirlpools of confirmation bias have an annoying tendency to look like safe harbors, even when you're stuck in them going 'round and 'round... and 'round.

\n

You think what you signal, you signal what you think, you signal your identity, you identify with your signals, you think about your identity, you identify with your thoughts. This... is scary. The mind is leaky, and these interactions are going on constantly. Priming, anchoring, commitment effects, consistency effects, and of course the dreaded confirmation bias and positive bias are all potential dangers. In one way, it is no wonder that people don't seem to change much. With such constant confirmation of identity, it's hard to see how one could change at all. But the untrained mind is chaotic and of potentially infinite malleability, and cascades can be very powerful. Drift happens, implicit navigation is undertaken. One twin joins a cult, the other joins Less Wrong, which is pretty much the same thing I guess but bear with me.

\n

bla bla bla feedback loops recursion cascades bla egregious links to yudkowsky bla.

\n

I will boast that I believe I have found a decent set of attractors in identityspace to aim for. Thus, though I spend a lot of time wireheading and not actually navigating towards my nominal ideal dispositions nor goals, I'm at least kinda aiming in what seems to be generally the right direction, as far as my limited rationality can see. I'm lucky in that regard.

\n

But my map is not the best one to navigate by, and the vast majority of it is blank, including the most important parts; the parts where my goals and my ideal identity lie. I have but a vague sense of direction. Furthermore, nearly all of my map is the result of slovenly lines copied secondhand from the thirdhand notes of others, and a whole bunch of those scribbles seem to be legible only as \"Here there be dragons.\" I can only imagine how much harder it'd be if I was less aware of the limitations of my map, or if I had not chosen a set of destinations to navigate towards, or if I'd accidentally got sucked into a whirlpool only to be eaten by Charybdis. Or whatever the cognitive bias equivalent of Charybdis is; probably faith.

\n

 

\n

XI: Harnessing the Winds of Change

\n

Navigating identityspace is tricky business. Most people don't try to. You ask them what kind of person they wish to be or what goals they wish to accomplish, and they either admit to not really thinking about it or quickly query their far mode module for something that sounds really sweet and inspiring. Those who do try tend to do so implicitly, by either carefully monitoring themselves and the way they change, or carefully monitoring their goals and in what order they are achieved. Often these are the kind of people that purposefully go out on Friday night with the intent of coming back home with a story they can tell for years to come. It's difficult to track your life and your progress without having stories and milestones to go by. They do not regularly try to directly control their course through identityspace. It's not obvious how one would even attempt to do so. As aforementioned, the tools we rationalists do have are more naturally suited to navigating thoughtspace.

\n

Attractors pull. I've generally dealt with that fact in a negative light, because I tend to think mindspace has at least 7,497 dimensions, and the coordinates of the set of optimal thoughts and therefore optimal actions are in a tiny corner of that vast space. Your thoughts are being deflected and ricocheted and pulled and pushed by a swarm of memes and biases and cached selves and anchors and all sorts of things that we just can't keep track of, on timescales from seconds to decades. Some forces pulsate; others are erratic. You think you're sailing along fine when some stupid thing like the giant cheesecake fallacy blows you oh so slightly off course and causes your entire AI career to go along a totally hopeless trajectory without your realizing it. Who wants their epic journey tripped up by something as stupidly named as the giant cheesecake fallacy? It'd be less pitiable to be eaten by Charybdis; at least that's pretty epic.

\n

Strong metarationality would have kept that from happening: metarationality keeps you from failing predictably in special domains. But strong metarationality can be aided. Hopefully, the things we happen to be aiming for in identityspace are stable attractors that don't randomly push you away or shift around. This is not always the case: some goals are ephemeral, some are cyclical. My friend wants a girlfriend one month out of every two. That sure strains the relationship after a month or three. With such a naturally chaotic mindspace, it's difficult to be sure that what you're aiming for is something that will be there when you get to where you thought it was. You want to be the cool girl at the party, thinking this is a terminal value, but then you succeed and become the cool girl at the party and it's just not all that fulfilling. It'd have been better to navigate towards a different destination.

\n

Not that you can't set sail for multiple places at once: sometimes you just want to get to the New World, not a particular reef of the Bahamas. And sometimes you may wish to travel to two entirely different places. Do you contradict yourself? Very well, then you contradict yourself, you are large, you contain multitudes. But I think you'll find it difficult to have two very different destinations in identityspace to aim for all at once.

\n

Alright then, enough, you've heard all the warnings, seen the scribbles that say 'non-negligible potential for dragons', and now want to do some positive thinking. How can we harness these variable winds of change on our journey through identityspace?

\n

 

\n
\n

 

\n

Part Two: Brainstorming Methods for Identity Optimization

\n

[insert methods for carefully capitalizing on attractors, drawing on and fleshing out Cached Selves, among other things, bla.]

\n

 

" } }, { "_id": "rcHehQaZLSJHFX9xw", "title": "Experts vs. parents", "pageUrl": "https://www.lesswrong.com/posts/rcHehQaZLSJHFX9xw/experts-vs-parents", "postedAt": "2010-09-29T16:48:55.781Z", "baseScore": 24, "voteCount": 16, "commentCount": 23, "url": null, "contents": { "documentId": "rcHehQaZLSJHFX9xw", "html": "

I'm reviewing the literature on the link between food dyes and hyperactivity.  Studies evaluate hyperactivity largely by observations made by teachers, psychologists, and/or parents.  Observations by trained professionals using defined scales are sometimes considered superior to observations by parents.  However, a meta-analysis of 15 studies (Schab+Trinh 2004, \"Do artificial food colors promote hyperactivity in children with hyperactive syndromes?\", Developmental and Behavioral Pediatrics 25(6): 423-434), found that:

\n
\n

While health professionals' ratings (ES = 0.107) and teachers' ratings (ES = 0.0810) are not statistically significant, parents' ratings are (ES = 0.441).

\n
\n

(\"Effect strength\" = standard mean difference = average of (active - placebo) / standard deviation (pooled active and placebo).  Thanks to Unnamed for reminding me.)

\n

This isn't saying that parents reported more hyperactivity than professionals.  It's saying that, across 15 double-blind placebo experiments, the behavior observed by parents had a strong correlation with whether the child received the test substance or the placebo, over four times as strong as that measured by professionals.  Conclusion:  Observation by parents is much more reliable than observation by trained professionals.

\n

Schab & Trinh offered several reasons why this might be:  Administration of test substance might be timed so behavior changes occur primarily at home; parents observe insomnia while teachers observe attention; parents may detect behaviors that are not listed in the DSM for ADHD; and one more - parents may be \"particularly attuned to the idiosyncrasies of their own children\".  Gee, do you think?

\n

Every parent is an expert on their child's behavior.  Just not an accredited expert.

\n

Disclaimer: At least one study has found the opposite result (Schachter et al. 2001, \"How efficacious and safe is short-acting methylphenidate for the treatment of attention-deficit disorder in children and adolescents?\", Can. Med. Assoc. J. 2001, 165:1475-1488).  I haven't read it and don't know how strong the effect was.

" } }, { "_id": "w9zGnpS4pJX7k3A3E", "title": "Brain storm: What is the theory behind a good political mechanism?", "pageUrl": "https://www.lesswrong.com/posts/w9zGnpS4pJX7k3A3E/brain-storm-what-is-the-theory-behind-a-good-political", "postedAt": "2010-09-29T16:22:14.736Z", "baseScore": 3, "voteCount": 2, "commentCount": 28, "url": null, "contents": { "documentId": "w9zGnpS4pJX7k3A3E", "html": "

Patrissimo argues that we should try to design good mechanisms for governance rather than try to use the current broken mechanisms.

\n

I agree; however, we don't have a theoretical framework that we can use to evaluate the different systems that are proposed. Ideally we would be able to crunch some numbers and show that a Futarchy responds to the desires/needs of the populace better than \"voting for politicians who then make decisions\" or anything else we come up with.

\n

\n

So we need to be able to do things like quantifying how well the system responds to the people. Pretending that humans are agents which have a utility function would seem like an obvious simplification to make in the model. We also need to formalise \"being in charge\".

\n

I tend to formalise who has authority in a system as a number of pairings of people and posts. Posts might be the seat in the senate or the presidency, although we will want to expand this notion of post to cover the whole bureaucracy and how its positions are filled.

\n

One way a proposed mechanism would work would be through controlling the pairings. Futarchy suggests that we might look at other ways to make the mechanism work. This would be quite hard to model: we would have to capture the incentives of the people constructing the prosperity indexes as well as the incentives of the market participants.

\n

So we want a system that selects the people/post pairing that maximises the group's utility function, while assuming that the people in control of the posts will maximise their own utility and everyone else will try to (ab)use the mechanism to maximise theirs.
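As a toy illustration of the kind of number-crunching I have in mind, here is a minimal sketch in Python. Everything in it is invented: the people, the posts, the utility numbers, the brute-force enumeration of assignments, and the crude stand-in of summing office-holders' personal utilities. The only point is to make explicit the tension between the pairing that is best for the group and the pairing the office-holders themselves would prefer.

from itertools import permutations

people = ['A', 'B', 'C']
posts = ['president', 'senator', 'treasurer']

# group_utility[person][post]: how much the group as a whole benefits
# from that person holding that post (arbitrary illustrative numbers).
group_utility = {
    'A': {'president': 5, 'senator': 2, 'treasurer': 1},
    'B': {'president': 3, 'senator': 4, 'treasurer': 2},
    'C': {'president': 1, 'senator': 3, 'treasurer': 5},
}

# self_utility[person][post]: how much the office-holder personally gains.
self_utility = {
    'A': {'president': 4, 'senator': 6, 'treasurer': 1},
    'B': {'president': 9, 'senator': 2, 'treasurer': 1},
    'C': {'president': 2, 'senator': 1, 'treasurer': 3},
}

def score(pairing, table):
    # Sum the utility of every (person, post) pair in the assignment.
    return sum(table[person][post] for person, post in pairing)

pairings = [tuple(zip(perm, posts)) for perm in permutations(people)]

best_for_group = max(pairings, key=lambda p: score(p, group_utility))
best_for_holders = max(pairings, key=lambda p: score(p, self_utility))

print('Maximises group utility:', best_for_group)
print('Maximises office-holder utility:', best_for_holders)

A real evaluation would of course have to model the mechanism itself (who proposes pairings, who can manipulate a prosperity index, and so on), but even a toy like this makes the two objectives you describe explicit and comparable.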

\n

Does this seem like the right track?

" } }, { "_id": "GGaPyEsrtuzJxYcK5", "title": "Hypothetical - Moon Station Government", "pageUrl": "https://www.lesswrong.com/posts/GGaPyEsrtuzJxYcK5/hypothetical-moon-station-government", "postedAt": "2010-09-29T15:57:33.884Z", "baseScore": 2, "voteCount": 2, "commentCount": 22, "url": null, "contents": { "documentId": "GGaPyEsrtuzJxYcK5", "html": "

You are now in control of a habitat on the moon. It has no ties to any government; its creation was funded by a wealthy philanthropist who just wants people to emigrate from Earth. The cost of doing so is within the reach of a middle-class family if they sell their home; you can therefore expect a decent number of immigrants.

\n

What sort of government do you establish? How do you go about ruling so that your new settlement on the moon will survive and thrive?

" } }, { "_id": "A6pj6XbKu8J3WwWAq", "title": "Permission for mind uploading via online files", "pageUrl": "https://www.lesswrong.com/posts/A6pj6XbKu8J3WwWAq/permission-for-mind-uploading-via-online-files", "postedAt": "2010-09-29T08:23:13.232Z", "baseScore": 3, "voteCount": 5, "commentCount": 23, "url": null, "contents": { "documentId": "A6pj6XbKu8J3WwWAq", "html": "

Giulio Prisco made a blog post giving permission to use the data in his Gmail account to reconstruct an uploaded copy of him.

\r\n
\r\n


To whom it may concern:

I am writing this in 2010. My Gmail account has more than 5GB of data, which contain some information about me and also some information about the persons I have exchanged email with, including some personal and private information.

I am assuming that in 2060 (50 years from now), my Gmail account will have hundreds or thousands of TB of data, which will contain a lot of information about me and the persons I exchanged email with, including a lot of personal and private information. I am also assuming that, in 2060:

1) The data in the accounts of all Gmail users since 2004 is available.
2) AI-based mindware technology able to reconstruct individual mindfiles by analyzing the information in their aggregate Gmail accounts and other available information, with sufficient accuracy for mind uploading via detailed personality reconstruction, is available.
3) The technology to crack Gmail passwords is available, but illegal without the consent of the account owners (or their heirs).
4) Many of today's Gmail users, including myself, are already dead and cannot give permission to use the data in their accounts.

If all assumptions above are correct, I hereby give permission to Google and/or other parties to read all data in my Gmail account and use them together with other available information to reconstruct my mindfile with sufficient accuracy for mind uploading via detailed personality reconstruction, and express my wish that they do so.

Signed by Giulio Prisco on September 28, 2010, and witnessed by readers.

NOTE: The accuracy of the process outlined above increases with the number of persons who give their permission to do the same. You can give your permission in comments, Twitter or other public spaces.

\r\n
\r\n

Ben Goertzel copied the post and gave the same permission on his own blog. I made some substantial changes, such as adding a caveat to exclude the possibility of torture worlds (unlikely I know, but can't hurt), and likewise gave permission in my blog. Anders Sandberg comments on the thing.

" } }, { "_id": "ZvCXqc8ATixdvqvCY", "title": "Nootropics and Cognitive-enhancement Discussion Area", "pageUrl": "https://www.lesswrong.com/posts/ZvCXqc8ATixdvqvCY/nootropics-and-cognitive-enhancement-discussion-area", "postedAt": "2010-09-29T05:29:47.679Z", "baseScore": 6, "voteCount": 4, "commentCount": 32, "url": null, "contents": { "documentId": "ZvCXqc8ATixdvqvCY", "html": "

Many people on LW have expressed interest in nootropics and cognitive enhancement. However, since people are presumably interested in the long-term efficacy, side effects and withdrawal symptoms of using various supplements, open-threads (which quickly become hard to navigate and only last a couple weeks anyway) are a sub-optimal place to discuss the topic.

\n

So I thought I would create a permanent area for people who want to discuss the topic long-term.

\n

Here are some \"quick links\" courtesy of gwern:

\n\n

And Wikipedia's main page on the topic.

\n

 

\n

 

\n

 

" } }, { "_id": "8QjPuo3uubjwLuvjv", "title": "The Direct Democracy Experiment", "pageUrl": "https://www.lesswrong.com/posts/8QjPuo3uubjwLuvjv/the-direct-democracy-experiment", "postedAt": "2010-09-29T04:36:22.151Z", "baseScore": 3, "voteCount": 3, "commentCount": 12, "url": null, "contents": { "documentId": "8QjPuo3uubjwLuvjv", "html": "

>\"The heart of the problem is not how we vote for officials - it's that we vote for officials, instead of getting to vote on issues.

\r\n

>Americans are proud of being \"governed by the people\", yet a citizen has no effective way to have any influence on any particular issue! If it's very important to me to promote gay rights or environmental responsibility, I'm supposed to vote for a Democrat? How effective is that?

\r\n

>We need to ditch representative democracy if we want democracy. (The next question is whether we want democracy.)\"---PhilGoetz

\r\n

The main problem with direct democracy is that we are reliant on \"the people\", who may be ill-informed and not make correct choices on issue questions. With a representative democracy, you may have intelligent and rational actors who would make better policy choices. PhilGoetz may disagree though, and believe that it is important to enfranchise \"the people\" in policymaking...

\r\n

Rather than rely on philosophical discussion based on values, I propose an experiment to find out if PhilGoetz' Direct Democracy works.

\r\n

I start up a simulation (which I will not name so that you don't play the simulation ahead of time). I will give you Policy Questions based on the simulation, where you will simply vote Yes or No. Majority rules. (To make it more interesting, I'll have each vote represent a random \"interest group\", with control over entire voting blocs.) Anybody can change their vote at any time. If people don't have the time to vote, they can develop a \"profile\" which would allow them to vote by proxy. Voting will end after a specific period of time, or the moment the vote crosses the majority threshold and stays over it for a required period of time.
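In case the stopping rule is unclear, here is a minimal sketch in Python of how I picture the tally working. The bloc names, weights, and the sequence of vote changes are all invented, and for brevity it only tracks the Yes side; the only real content is the rule that the vote closes either when time runs out or when a majority has held continuously for the required period.

# Toy tally of a weighted bloc vote with a sustained-majority stopping rule.
bloc_weight = {'farmers': 30, 'miners': 25, 'merchants': 20, 'clergy': 25}
required_hold = 3     # ticks a Yes majority must persist before the vote closes
time_limit = 10       # ticks after which the vote closes regardless

# (tick, bloc, vote) events; blocs may change their vote at any time.
events = [
    (1, 'farmers', 'yes'), (1, 'miners', 'no'), (2, 'merchants', 'yes'),
    (3, 'clergy', 'no'), (4, 'miners', 'yes'), (6, 'clergy', 'yes'),
]

votes = {}            # current vote of each bloc
held_since = None     # tick at which the current Yes majority first appeared
total = sum(bloc_weight.values())

for tick in range(1, time_limit + 1):
    for t, bloc, choice in events:
        if t == tick:
            votes[bloc] = choice
    yes = sum(w for b, w in bloc_weight.items() if votes.get(b) == 'yes')
    if yes * 2 > total:
        held_since = held_since if held_since is not None else tick
        if tick - held_since + 1 >= required_hold:
            print(f'Yes wins at tick {tick}')
            break
    else:
        held_since = None
else:
    print('Time limit reached without a sustained majority')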

\r\n

The simulation will end in a war against an NPC country. If you \"win\" this war, you win the simulation, the Direct Democracy works, and then future experiments could lead to people comparing the effectiveness of different types of \"democracy\" in creating good policy. If you \"lose\" the war, the government is destroyed, and you lose the simulation. Direct Democracy may have some problems and need to be modified or abandoned.

\r\n

I will limit the amount of information I will give you. I'll only give enough information so you understand how the simulation works, but no more. It's up to you to decide what is the best thing to do.

" } }, { "_id": "N42fESdsn8pD5buaT", "title": "Counterfactual mugging: alien abduction edition", "pageUrl": "https://www.lesswrong.com/posts/N42fESdsn8pD5buaT/counterfactual-mugging-alien-abduction-edition", "postedAt": "2010-09-28T21:25:41.142Z", "baseScore": 4, "voteCount": 3, "commentCount": 18, "url": null, "contents": { "documentId": "N42fESdsn8pD5buaT", "html": "

Omega kidnaps you and an alien from FarFarAway Prime, and gives you the choice: either the alien dies and you go home with your memory wiped, or you lose an arm, and you both go home with your memories wiped. Nobody gets to remember this. Oh, and Omega flipped a coin to see who got to choose. What is your choice?

\n

As usual, Omega is perfectly reliable, isn't hiding anything, and goes away afterwards. You also have no idea what the alien's values are, where it lives, what it would choose, nor what is the purpose of that organ that pulsates green light.

\n

(This is my (incorrect) interpretation of counterfactual mugging, which we were discussing on the #lesswrong channel; Boxo pointed out that it's a Prisoner's Dilemma where a random player is forced to cooperate, and isn't all that similar to counterfactual mugging.)

" } }, { "_id": "hapAgwXQfkfw3iLxM", "title": "Why learning programming is a great idea even if you'd never want to code for a living", "pageUrl": "https://www.lesswrong.com/posts/hapAgwXQfkfw3iLxM/why-learning-programming-is-a-great-idea-even-if-you-d-never", "postedAt": "2010-09-28T16:51:04.145Z", "baseScore": 16, "voteCount": 14, "commentCount": 29, "url": null, "contents": { "documentId": "hapAgwXQfkfw3iLxM", "html": "
Here is the short version:
\n
\n
Writing program code is a good way of debugging your thinking -- Bill Venables
\n
\n
It's short, apt, and to the point. It does have a significant flaw: it uses a term I've come to hate, \"bug\". I don't know if  Grace Murray Hopper  is to blame for this term and the associated image of an insect creeping into a hapless programmer's hardware, but I suspect this one word may be responsible in some part for the sad state of the programming profession.
\n
You see, a lot gets written about bugs, debugging, testing, and so on. A lot of that writing only serves to obscure one plain fact, which if I were slightly more pretentious I'd call one of the fundamental laws of software:
\n
\n
Every \"bug\" or defect in software is the result of a mismatch between a person's assumptions, beliefs or mental model of something (a.k.a. \"the map\"), and the reality of the corresponding situation (a.k.a. \"the territory\").
\n
\n
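A trivial, hypothetical example of the kind of mismatch I mean (the function and the data are invented):

# The programmer's map: ages is a list of numbers.
# The territory: the data actually contains a missing value encoded as a string.
ages = [34, 29, 'unknown', 41]

def average_age(values):
    # Calling this with the list above raises a TypeError. The defect is not in
    # the arithmetic; it is in the assumption about what the data looks like.
    return sum(values) / len(values)

def average_known_age(values):
    # A fix that also fixes the map: state the assumption and handle its failure.
    known = [v for v in values if isinstance(v, (int, float))]
    return sum(known) / len(known) if known else None

print(average_known_age(ages))  # 34.666...

Chasing the TypeError forces you to update your model of the data, which is exactly the debugging-your-thinking that Venables describes.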
The software industry is currently held back by a conception of programming-as-manual-labor, consisting of semi-mechanically turning a specification document into executable code. In that interpretation \"bugs\" or \"gremlins\" are the equivalent of machine failures: something unavoidable, to be controlled by rigorous statistical controls, replacement of faulty equipment (programmers), and the like.
\n
A better description would be much closer to \"the art of improving your  understanding  of some business domain by expressing the details of that domain in a formal notation\". The resulting program isn't quite a by-product of that activity - it's important, though not nearly as important as distilling the domain understanding.
\n
\n
You think you know when you can learn, are more  sure when you can write, even more  when you can teach, but certain when you can program. -- Alan Perlis
\n
\n
So, learning how to program is one way of learning how to think better. But wait; there's more.
\n

An art with a history

\n
It's easy, if your conception of programming is \"something people do to earn a few bucks on freelance exchange sites by coding up Web sites\", to think of programming as an area where only the past five years or so are of any interest. Get up to speed on the latest technology, and you're good to go.
\n
In fact, programming is a discipline with a rich and interesting history [1]. There is a beauty in the concrete expression of algorithmic ideas in actual programming languages, quite independently of the more mathematical aspects which form the somewhat separate discipline of \"computer science\". You can do quite a lot of mathy computer science without needing concepts like modularity, coupling or cohesion which are of intense interest to practicing programmers (the competent ones, at any rate) and which have engendered a variety of approaches.
\n
People who like elegant intellectual constructions will appreciate what is to be found in programming languages, and if you can sort the \"classics\" from the dregs, in the architecture and design of many programs.
\n

Deep implications

\n
Mathematicians are concerned with the study of quantity and structure. Programming requires knowledge of what, despite its being considered a part of mathematics, strikes me as a distinct discipline: the intersection between the theory of computation and the theory of cognition. To program well, you have to have a feel for how computations unfold, but you must also have a well grounded understanding of how humans parse and manipulate textual descriptions of computations. It is in many ways a literary skill.
\n
What is especially exciting about programming is that we have good reason to believe that our own minds can be understood adequately by looking at them as computations: in some sense, then, to become more familiar with this medium, textual descriptions of computations, is to have a new and very interesting handle on understanding ourselves.
\n
This brief presentation of programming needs to be completed - in further posts - by a look at the \"dark side\" of programming: biases that are occupational hazards of programmers; and by a closer look at the skill set of a competent programmer, and how that skill set overlaps with a rationalist's developmental objectives.
\n

\n
\n
\n
[1] This history is sadly ignored by a majority of practicing programmers, to detrimental effect. Inventions pioneered in Lisp thirty or forty years ago are being rediscovered and touted as \"revolutions\" every few years in languages such as Java or C# - closures, aspects, metaprogramming...
" } }, { "_id": "3hZHC8nTTYujnTdxe", "title": "Test to experiment with the Discussion section", "pageUrl": "https://www.lesswrong.com/posts/3hZHC8nTTYujnTdxe/test-to-experiment-with-the-discussion-section", "postedAt": "2010-09-28T14:34:16.039Z", "baseScore": 2, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "3hZHC8nTTYujnTdxe", "html": "

This isn't a real article, it's just a test that I'm using to see what the Discussion section can and can't do. In particular, I want to see whether it's possible to move an article from the Discussion section to a Less Wrong top-level post (or vise-versa); if so, that would be a good strategy for drafts that need feedback before they're ready for the front page, but which are likely to generate topical discussions that should be preserved.

" } }, { "_id": "QZ29G5MoKvjpAmHPb", "title": "New Discussion section on LessWrong!", "pageUrl": "https://www.lesswrong.com/posts/QZ29G5MoKvjpAmHPb/new-discussion-section-on-lesswrong", "postedAt": "2010-09-28T13:08:26.251Z", "baseScore": 26, "voteCount": 20, "commentCount": 39, "url": null, "contents": { "documentId": "QZ29G5MoKvjpAmHPb", "html": "

There is a new discussion section on LessWrong.

\n

According to the (updated) About page:

\n
\n

The Less Wrong discussion area is for topics not yet ready or not suitable for normal top level posts. To post a new discussion, select \"Post to: Less Wrong Discussion\" from the Create new article page. Comment on discussion posts as you would elsewhere on the site.

\n

Votes on posts are worth ±10 points on the main site and ±1 point in the discussion area. [...] anyone can post to the discussion area.

\n
\n

(There is a link at the top right, under the banner)

" } }, { "_id": "6WT6XxcjhBrm9SCJK", "title": "What is the sound of one hand clapping?", "pageUrl": "https://www.lesswrong.com/posts/6WT6XxcjhBrm9SCJK/what-is-the-sound-of-one-hand-clapping", "postedAt": "2010-09-28T12:12:44.009Z", "baseScore": 1, "voteCount": 2, "commentCount": 15, "url": null, "contents": { "documentId": "6WT6XxcjhBrm9SCJK", "html": "

The other most famous zen koan has a distinct and correct answer that is not mu. In the same way that the answer to \"what is the sound of two hands clapping?\" is to perform an action, the answer to this koan is an action. While there is an endless amount of deconstruction that you can do once you have the answer, the correct answer itself is not deconstruction, but a simple answer.

\n

I'm surprised at how rare knowledge of the answer of this koan is when it's easy to find the answer by Googling \"what is the sound of one hand clapping?\". I wonder why the answer to the mu koan seems to be so much more widespread than the answer to this koan, when the one hand clapping koan is even more popular in the West than the mu koan. Please don't post answers that come from Google in this thread without spoiler warnings.

" } }, { "_id": "jmP6zs3fjLS2YCPJM", "title": "Nothing wastes resources like saving them", "pageUrl": "https://www.lesswrong.com/posts/jmP6zs3fjLS2YCPJM/nothing-wastes-resources-like-saving-them", "postedAt": "2010-09-28T07:03:52.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "jmP6zs3fjLS2YCPJM", "html": "

Imagine you find yourself in possession of a diamond mine. However you don’t like diamonds very much; you think they are vastly overvalued compared to important resources such as soil. You are horrified that people waste good soil in their front gardens where they are growing nothing of much use, and think it would be better if they decorated with a big pile of this useless carbon crystal. What do you do?

\n

a) Cover your own lawn with diamonds

\n

b) Donate as many diamonds as you can for free to anyone who might use them to decorate where they would use soil

\n

c) Sell the diamonds. Buy something you do value.

\n

d) Something else

\n

Environmentalism often takes the form of the conviction that human labor should take the place of other resource use. Bikes should be ridden instead of cars, repair is superior to replacement, washing and sorting recycling is better than using up tip space, and so on. This is usually called ‘saving resources’ not ‘using up more valuable resources’. One might argue that while human labor is usually relatively expensive (you can generally make much more selling five minutes of time than a liter of tip space and a couple of cans worth of clean used steel), environmentalists often consider the other resources to be truly more valuable, often because they are non-renewable and need to be shared between everyone in the future too. Even so, since when is it sensible to treat your overvalued resources as if they were worthless? How will resources come to be used more efficiently if those who care about the issue destroy their own potential by donating their most valuable assets to the world at large in the form of the very things which the world supposedly blithely squanders?
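To put toy numbers on the diamond-mine parable (all prices invented), the arithmetic behind option (c) looks like this:

# Toy opportunity-cost comparison for the diamond mine; all prices invented.
diamond_price = 1000   # sale price per carat
soil_price = 20        # price per cubic metre of topsoil
diamonds_mined = 500   # carats you control

# Options (a) and (b): use the diamonds directly as decoration.
soil_gained_by_decorating = 0

# Option (c): sell the overvalued resource and buy the one you actually value.
soil_gained_by_selling = diamonds_mined * diamond_price / soil_price
print(soil_gained_by_selling)  # 25000.0 cubic metres of soil put to use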


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "F2NW5J9GtHEGg9ATv", "title": "Personal decisions the leading cause of death", "pageUrl": "https://www.lesswrong.com/posts/F2NW5J9GtHEGg9ATv/personal-decisions-the-leading-cause-of-death", "postedAt": "2010-09-28T06:52:16.265Z", "baseScore": 13, "voteCount": 10, "commentCount": 10, "url": null, "contents": { "documentId": "F2NW5J9GtHEGg9ATv", "html": "

I've had this paper (pdf) in my \"for LW\" pile for a while, I didn't want to dump it in the open thread to have it promptly drowned out but neither could I think of much more to say to make it worth a top-level post on the main LW.

\n
\n

This paper analyzes the relationships between personal decisions and premature deaths in the United States. The analysis indicates that over one million of the 2.4 million deaths in 2000 can be attributed to personal decisions and could have been avoided if readily available alternative choices were made.

\n
\n

Conclusion: the impact that thinking better could have on people's lives is way underestimated.

\n

Discuss. :)

" } }, { "_id": "yFcxfAgt2GwYbK7Fe", "title": "Open Thread September, Part 3", "pageUrl": "https://www.lesswrong.com/posts/yFcxfAgt2GwYbK7Fe/open-thread-september-part-3", "postedAt": "2010-09-28T05:21:48.666Z", "baseScore": 4, "voteCount": 3, "commentCount": 217, "url": null, "contents": { "documentId": "yFcxfAgt2GwYbK7Fe", "html": "

The September Open Thread, Part 2 has got nearly 800 posts, so let's have a little breathing room.

\n

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

" } }, { "_id": "MhWjxybo2wwowTgiA", "title": "Anti-akrasia remote monitoring experiment", "pageUrl": "https://www.lesswrong.com/posts/MhWjxybo2wwowTgiA/anti-akrasia-remote-monitoring-experiment", "postedAt": "2010-09-27T23:34:50.608Z", "baseScore": 59, "voteCount": 47, "commentCount": 118, "url": null, "contents": { "documentId": "MhWjxybo2wwowTgiA", "html": "

So we (Richard Hollerith and me) tried out my anti-akrasia idea. Actually we've been doing it for more than a week now. Turns out it works just like I thought it would: when you know an actual person is checking your screen at random intervals, and they will IM you whenever you start procrastinating online, and they expect the same from you... you become ashamed of procrastinating online. You get several \"clean\" hours every day, where you either do work or stay away from the computer - no willpower required. Magic.

\n

Proofpic time! Once we both left our VNC windows open for a while, which resulted in this:

\n

\"\"

\n

The idea isn't new. I first got it this winter, Alicorn and AdeleneDawner are apparently doing similar things unilaterally, and even Eliezer has been using a watcher while writing his book. I don't know anyone who tried the Orwellian mutual screen capture thing before, but I won't be surprised if a lot of people are already quietly practicing it.

\n

Being watched for the first time didn't make me feel as vulnerable as you'd think, because, realistically, what can the other person glean from my monitor while I work? Random screenfuls of source code? Headings of emails? We don't realize how normal the little details of our lives would look to strangers. In the words of McSweeney's, \"chances are, people will understand. Most people are pretty understanding.\" The experiment did feel weird at first, but it was the expected kind of weird - the feeling you should get when you're genuinely trying something new for the first time, rather than just rehashing. It feels normal now. In fact, I'm already ever-so-slightly worried about becoming dependent on remote monitoring for getting work done. You decide whether that's a good sign.

\n

Passing the microphone to Richard now:

\n
\n

I had to set a timer (for between 5 and 11 minutes depending on circumstances) to remind me to check Vladimir's screen (resetting the timer manually after every check).  If I did not, I either spent too much time looking at his screen or let him go too long without monitoring.

I tend to think that if I continue to monitor people in this way, I will eventually come to use software (particularly software running on the monitored computer) to reduce the demands on my time and attention, but my more immediate concern is whether the technique will remain effective when it is continued for another month or so or whether, e.g., everyone who volunteers to be monitored comes to resent it.

Because of technical problems, Vladimir has not yet been able to monitor me in my \"familiar software environment\", and consequently the real test of what it is like for me to be monitored has not yet been done. Vladimir has monitored me using a borrowed Windows machine, but I am unfamiliar with Windows, and in general, when I am taken out of my familiar environment, I usually gain temporary freedom from my usual patterns of procrastination. I did feel embarrassment at how ineffective my use of Windows probably seemed to Vladimir.

\n
\n
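For anyone who wants to replicate the check-in routine Richard describes, here is a minimal sketch of what such a random-interval reminder could look like (an illustration only; the 5-to-11-minute window comes from his description, everything else is made up):

```python
import random
import time

def check_in_reminders(min_minutes=5, max_minutes=11):
    # Nag the monitor to glance at the partner's screen at random intervals.
    while True:
        wait_seconds = random.uniform(min_minutes, max_minutes) * 60
        time.sleep(wait_seconds)
        print('Time to check your partner. Press Enter after the check to restart the timer.')
        input()  # manual reset, as in the description above

if __name__ == '__main__':
    check_in_reminders()
```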

In conclusion, the technique seems to help me a lot, even though it's shifting my sleep pattern to somewhere in between Moscow and California. My current plan is to keep doing it as long as there are willing partners or until my akrasia dissolves by itself (unlikely). The offers I made to other LW users still stand. Richard is in talks with another prospective participant and would like more. We want this post to actually help people. Any questions are welcome.

\n

UPDATE one month later: we're still doing it, and everyone's still welcome to join. Won't update again.

" } }, { "_id": "waYngmSr9tPrAKiyS", "title": "We have a new discussion area", "pageUrl": "https://www.lesswrong.com/posts/waYngmSr9tPrAKiyS/we-have-a-new-discussion-area", "postedAt": "2010-09-27T07:50:15.554Z", "baseScore": 14, "voteCount": 8, "commentCount": 32, "url": null, "contents": { "documentId": "waYngmSr9tPrAKiyS", "html": "

After contributions from a number of us (by random example here, here) over a number of months, particularly User:wmoore and User:tommccabe (and all happening before User:Yvain's work here, so we missed those ideas), we have a discussion area.

\n

Discussion, including discussion of the discussion area, is welcome.

" } }, { "_id": "4FfDgELAK7dudRgZb", "title": "Post-deploy Test", "pageUrl": "https://www.lesswrong.com/posts/4FfDgELAK7dudRgZb/post-deploy-test", "postedAt": "2010-09-27T05:16:52.916Z", "baseScore": 1, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "4FfDgELAK7dudRgZb", "html": "

Test

" } }, { "_id": "hoBMLMqNsRunPRN54", "title": "Don't judge a skill by its specialists", "pageUrl": "https://www.lesswrong.com/posts/hoBMLMqNsRunPRN54/don-t-judge-a-skill-by-its-specialists", "postedAt": "2010-09-26T20:56:43.485Z", "baseScore": 78, "voteCount": 58, "commentCount": 36, "url": null, "contents": { "documentId": "hoBMLMqNsRunPRN54", "html": "\n

tl;dr: The marginal benefits of learning a skill shouldn't be judged heavily on the performance of people who have had it for a long time. People are unfortunately susceptible to these poor judgments via the representativeness heuristic.

\n

Warn and beware of the following kludgy argument, which I hear often and have to dispel or refine:

\n

\"Naively, learning «skill type» should help my performance in «domain». But people with «skill type» aren't significantly better at «domain», so learning it is unlikely to help me.\"

\n

In the presence or absence of obvious mediating factors, skills otherwise judged as \"inapplicable\" might instead present low hanging fruit for improvement. But people too often toss them away using biased heuristics to continue being lazy and mentally stagnant. Here are some parallel examples to give the general idea (these are just illustrative, and might be wrong):

\n
\n

Weak argument: \"Gamers are awkward, so learning games won't help my social skills.\"
Mediating factor: Lack of practice with face-to-face interaction.
Ideal: Socialite acquires moves-ahead thinking and learns about signalling to help get a great charity off the ground.

\n

Weak argument: \"Physicists aren't good at sports, so physics won't help me improve my game.\"
Mediating factor: Lack of exercise.
Ideal: Athlete or coach learns basic physics and tweaks training to gain a leading edge.

\n

Weak argument: \"Mathematicians aren't romantically successful, so math won't help me with dating.\"
Mediating factor: Aversion to unstructured environments.
Ideal: Serial dater learns basic probability to combat cognitive biases in selecting partners.

\n

Weak argument: \"Psychologists are often depressed, so learning psychology won't help me fix my problems.\"
Mediating factor: Time spent with unhappy people.
Ideal: College student learns basic neuropsychology and restructures study/social routine to accommodate better unconscious brain functions.

\n
\n

Aside from easily identifiable particular flaws [as SarahC points out, the difference between an athlete and a physicist isn't just physical activity], there are a few generic reasons why these arguments are weak:

\n\n

All this should be taken into account before dismissing the new skill option. In general, try to flesh out the analysis with the following themes:

\n\n

So yeah, don't let specialists over-represent the skills they specialize in. Many readers here are in the \"already have it\" category for a lot of the skills I'm talking about, and there are already lots of posts convincing us to decompartmentalize those skills… but it's also helpful to consider the above ideas in balance with the legitimate counterarguments when convincing others to learn and apply new skills.

" } }, { "_id": "LgrjCp7z6awDnTt3n", "title": "Vote Qualifications, Not Issues", "pageUrl": "https://www.lesswrong.com/posts/LgrjCp7z6awDnTt3n/vote-qualifications-not-issues", "postedAt": "2010-09-26T20:26:32.153Z", "baseScore": 11, "voteCount": 36, "commentCount": 187, "url": null, "contents": { "documentId": "LgrjCp7z6awDnTt3n", "html": "

In the United States and other countries, we elect our leaders. Each individual voter chooses some criteria by which to decide who they vote for, and the aggregate result of all those criteria determines who gets to lead. The public narrative overwhelmingly supports one strategy for deciding between politicians: look up their positions on important and contentious issues, and vote for the one you agree with. Unfortunately, this strategy is wrong, and the result is inferior leadership, polarization into camps and never-ending arguments. Instead, voters should be encouraged to vote based on the qualifications that matter: their intelligence, their rationality, their integrity, and their ability to judge character.

\n

If an issue really is contentious, then a voter without specific inside knowledge should not expect their opinion to be more accurate than chance. If everyone votes based on a few contentious issues, then politicians have a powerful incentive to lie about their stance on those issues. But the real problem is, most of the important things that a politician does have nothing to do with the controversies at all. Whether a budget is good or bad depends on how well its author can distinguish between efficient and inefficient spending, over many small projects and expenditures that will never be reviewed by the voters, and not on the total amount taxed or spent. Whether a regulation is good or bad depends on how well its author can predict the effects and engineer the small details for optimal effect, and not on whether it is more or less strict overall. Whether foreign policies succeed or fail depends on how well the diplomats negotiate, and not on any strategy that could be determined years earlier before the election.

\n

Voters know a lot less about governance than the politicians who study it full time, and sometimes, that leads them to incorrect conclusions. This can force politicians to choose between making the right decision, based on inside knowledge or on complexities that voters wouldn't understand, and making the wrong decision to please voters. It is not possible to know how much should be taxed or spent without studying the current budget in detail. It is not possible to know how banks should be regulated without spending years studying economics. In theory, experts can spread the correct answers to these questions in the media. But neither the media nor the public can tell right answers from wrong ones, and sometimes even the experts and the media have insufficient information. It is not possible to know how much money should be spent on defense without reading classified military intelligence briefings. It is not possible to know which foreign policies will work without talking to foreign leaders.

\n

Instead of looking at positions, look for signs that indicate their skills. Hearing them speak is good, but only when they're forced to improvise and not reading someone else's words from a teleprompter - i.e., interviews, not speeches. And be sure to notice whenever they seem to ignore the question and answer a different one instead; that means they couldn't answer the original question. Look up each candidate's alma mater and GPA, if available. If they've held office before, try to find out how much corruption there was under them, and how much of it was pre-existing.

\n

Unfortunately, actually determining how qualified a candidate is can be extremely difficult, and we are likely to fall prey to confirmation bias - that is, we tend to emphasize evidence that candidates we like are qualified, and ignore evidence that candidates we don't like aren't qualified. This is especially likely to occur when the information we do have is in a form such that it can't easily be compared. For example, if candidate A held an office and successfully reduced corruption, and candidate B held a similar office but failed to do so, then we should strongly favor candidate A; but what usually happens is that we only have one side of the comparison. For example, it may be that candidate A successfully reduced corruption, but candidate B led a department which didn't have much corruption to begin with, or which was never inspected by journalists closely enough to determine what effect candidate B had. In order to defend against this bias, we should decide how significant a piece of evidence would be before we hear which candidate it applies to.

\n

We should also try to blind ourselves to information that we know will bias us. Unfortunately, the media makes this very difficult; nearly every description of a political candidate will also mention their political party, and this is the one fact we most need to avoid! We need information sources that provide what's actually relevant, and hide what's biasing and irrelevant. We should encourage politicians to take standardized tests and publish their scores. We have sites that help compare politicians' stances on issues; we should encourage those same sites to also provide GPAs and name alma maters, link to interview transcripts, and collect blinded expert opinions on the level of understanding those transcripts display.

\n

Finally, encouraging voters to pay attention to contentious issues encourages affective death spirals, which lead to petty strife and never-ending arguments. Suppose someone starts with a slight preference for party A over party B, and starts studying political issues. They will have a slight preference for sources that favor party A, and party A's positions. Then, when an issue is too close to decide on its merits, they don't acknowledge that; instead, they adopt whichever position their party prefers. Then they go back to compare the parties, and find that party A agrees with them much more than party B does (since their previous information was biased that way), and strengthen their preference for A over B. In the next iteration, they switch to information sources that agree with their new position, and which are slightly more biased. After enough iterations of this, voters can end up believing that they are informed and impartial, and that party B eats kittens.

\n

Don't waste time arguing about issues which already have entrenched positions, with intelligent people on both sides, unless the issue is actually going to be on an upcoming ballot that you are going to vote on directly. If you're voting on politicians, talk about the politicians themselves - their integrity, their intelligence, and their rationality. That's what really matters, and that's what should win your support.

" } }, { "_id": "DbJ7tEhtxNWpPBxo5", "title": "What Makes My Attempt Special?", "pageUrl": "https://www.lesswrong.com/posts/DbJ7tEhtxNWpPBxo5/what-makes-my-attempt-special", "postedAt": "2010-09-26T06:55:38.929Z", "baseScore": 43, "voteCount": 36, "commentCount": 22, "url": null, "contents": { "documentId": "DbJ7tEhtxNWpPBxo5", "html": "

A crucial question towards the beginning of any research project is, why should my group succeed in elucidating an answer to a question where others may have tried and failed?

\n

Here's how I'm going about dividing the possible worlds, but I'm interested to see if anyone has any other strategies. First, the whole question is conditional on nobody having already answered the particular question you're interested in. So, you first need an exhaustive lit review, that should scale in intensity based on how much effort you expect to actually expend on the project. Still nothing? These are the remaining possibilities:

\n

1) Nobody else has ever thought of your question, even though all of the pieces of knowledge needed to formulate it have been known for years. If the field has many people involved, the probability of this is vanishingly small and you should systematically disabuse yourself of your fantasies if you think like this often. Still... if true, the prognosis: a good sign.

\n

2) Nobody else has ever thought of your question, because it wouldn't have been ask-able without pieces of knowledge that were discovered just recently. This is common in fast-paced fields and it's why they can be especially exciting. The prognosis: a good sign, but work quickly!

\n

3) Others have thought of your question, but didn't think it was interesting enough to devote serious attention to. We should take this seriously, as how informed others choose to allocate their attention is one of our better approximations to real prediction markets. So, the prognosis: bad sign. Figure out whether you can not only answer your question but validate its usefulness / importance, too. 

\n

4) Others have thought of your question, thought it was interesting, but have never tried to answer it because of resource or tech restraints, which you do not face. Prognosis: probably the best-case scenario.

\n

5) Others have thought of your question and run the relevant tests, but failed to get any consistent / reliable results. It'd be nice if there were no publication bias but of course there is--people are much more likely to publish statistically significant, positive results. Due to this bias, it is sometimes hard to tell precisely how many dead skeletons and dismembered brains line your path, and because of this uncertainty you must assign this possibility a non-zero probability. The prognosis: a bad sign, but do you feel lucky?

\n

6) Others have thought of your question, run the relevant tests, and failed to get consistent / reliable results, but used a different method than the one you will use. Your new tech might clear up some of the murkiness, but it's important here to be precise about which specific issues your method solves and which it doesn't. The prognosis: all things equal, a good sign.

\n

These are the considerations we make when we decide whether to pursue a given topic. But even if you do choose to pursue the question, some of these possibilities have policy recommendations for how to proceed. For example, using new tech, even if it's not necessarily demonstrably better in all cases, seems like a good idea given the possibility of #6. 

" } }, { "_id": "NsPzAxaZJhbunfK36", "title": "A Player of Games", "pageUrl": "https://www.lesswrong.com/posts/NsPzAxaZJhbunfK36/a-player-of-games", "postedAt": "2010-09-23T22:52:38.849Z", "baseScore": 32, "voteCount": 25, "commentCount": 74, "url": null, "contents": { "documentId": "NsPzAxaZJhbunfK36", "html": "

\n \n

Earlier today I had an idea for a meta-game a group of people could play. It’d be ideal if you lived in an intentional community, or were at university with a games society, or somewhere with regular Less Wrong Meetups.

\n

Each time you would find a new game. Each of you would then study the rules for half an hour and strategise, and then you’d play it, once. Afterwards, compare thoughts on strategies and meta-strategies. If you haven’t played Imperialism, try that. If you’ve never tried out Martin Gardner’s games, try them. If you’ve never played Phutball, give it a go.

\n

It should help teach us to understand new situations quickly, look for workable exploits, accurately model other people, and compute Nash equilibria. Obviously, be careful not to end up just spending your life playing games; the aim isn't to become good at playing games, it's to become good at learning to play games - hopefully including the great game of life.

\n

However, it’s important that no-one in the group know the rules beforehand, which makes finding the new games a little harder. On the plus side, it doesn’t matter that the games are well-balanced: if the world is mad, we should be looking for exploits in real life.

\n

It could be really helpful if people who knew of good games to play gave suggestions. A name, possibly some formal specifications (number of players, average time of a game), and some way of accessing the rules. If you only have the rules in a text-file, rot13 them please, and likewise for any discussion of strategy.
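For anyone not used to rot13: it is a simple letter rotation that is its own inverse, so encoding and decoding are the same operation. A minimal sketch in Python (the file names here are placeholders):

```python
import codecs

# Read the plain-text rules, rot13 them, and write out the obscured version.
with open('rules.txt', encoding='utf-8') as f:
    rules = f.read()

with open('rules_rot13.txt', 'w', encoding='utf-8') as f:
    f.write(codecs.encode(rules, 'rot13'))
```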

" } }, { "_id": "SgZ2mhvDbneBusFEB", "title": "Politics as Charity", "pageUrl": "https://www.lesswrong.com/posts/SgZ2mhvDbneBusFEB/politics-as-charity", "postedAt": "2010-09-23T05:33:57.645Z", "baseScore": 37, "voteCount": 41, "commentCount": 165, "url": null, "contents": { "documentId": "SgZ2mhvDbneBusFEB", "html": "

Related to: Shut up and multiply, Politics is the mind-killer, Pascal's Mugging, The two party swindle, The American system and misleading labels, Policy Tug-of-War

\n

Jane is a connoisseur of imported cheeses and Homo Economicus in good standing, using a causal decision theory that two-boxes on Newcomb's problem. Unfortunately for her, the politically well-organized dairy farmers in her country have managed to get an initiative for increased dairy tariffs on the ballot, which will cost her $20,000. Should she take an hour to vote against the initiative on election day? 

\n

She estimates that she has a 1 in 1,000,000 chance of casting the deciding vote, for an expected value of $0.02 from improved policy. However, while Jane may be willing to give her two cents on the subject, the opportunity cost of her time far exceeds the policy benefit, and so it seems she has no reason to vote.
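Spelled out, the arithmetic behind that two-cent figure (a quick sketch using only the numbers already given in the example):

```python
p_decisive = 1 / 1_000_000   # Jane's estimated chance of casting the deciding vote
policy_stake = 20_000        # dollars the tariff initiative would cost her

expected_benefit = p_decisive * policy_stake
print(expected_benefit)      # 0.02, i.e. about two cents
```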

\n

Jane's dilemma is just the standard Paradox of Voting in political science and public choice theory. Voters may still engage in expressive voting to affiliate with certain groups or to signal traits insofar as politics is not about policy, but the instrumental rationality of voting to bring about selfishly preferred policy outcomes starts to look dubious. Thus many of those who say that we rationally ought to vote in hopes of affecting policy focus on altruistic preferences: faced with a tiny probability of casting a decisive vote, but large impacts on enormous numbers of people in the event that we are decisive, we should shut up and multiply, voting if the expected value of benefit to others sufficiently exceeds the cost to ourselves.

\n

Meanwhile, at the Experimental Philosophy blog, Eric Schwitzgebel reports that philosophers overwhelmingly rate voting as very morally good (on a scale of 1 to 9), with voting placing right around donating 10% of one's income to charity. He offers the following explanation:

\n
Now is it just crazy to say that voting is as morally good as giving 10% of one's income to charity? That was my first reaction. Giving that much to charity seems uncommon to me and highly admirable, while voting... yeah, it's good to do, of course, but not that good. One thought, however -- adapted from Derek Parfit -- gives me pause about that easy assessment. In the U.S. 2008 Presidential election, I'd have said the world would be in the ballpark of $10 trillion better off with one of the candidates than the other. (Just consider the financial and human costs at stake in the Iraq war and the U.S. bank bailouts, for starters.) Although my vote, being only one of about 100,000,000 cast, probably had only about a 1/100,000,000 chance of tilting the election, multiplying that tiny probability by a round trillion leaves a $10,000 expected public benefit from my voting -- not so far from 10% of my salary.
\n
Of course, that calculation is incredibly problematic in any number of ways. I don't stand behind it, but it helps loosen the grip of my previous intuition that of course it's morally better to donate 10% to charity than to vote.
\n

[Disclaimer: the above $10 trillion estimate is not mine. Bush did not kill 10 billion current people (at $1,000 per life) and he massively increased health-oriented foreign aid to Africa, which can expiate many sins in the GWWC calculus. Politics is the mind-killer, this is not about blue and green, etc.] So we have a model of politics as charity, on which it is more plausible that voting on policy could be rational. But why stop there? If voting (wisely) is a charitable activity, then spending money on political contributions to convince or mobilize others to vote (wisely) could be as well. For those who don't have moral objections to politics as charity (see the comments in this discussion for examples of such objections), spending to influence the voting behavior of others can be analyzed the same way and compared to other charitable spending, such as tuberculosis treatment. We can attempt to remedy or analyze the \"incredibly problematic\" components to make better and better estimates. When thinking about effective philanthropy, would the marginal dollar do more good as a political campaign contribution or as a charitable donation for tuberculosis treatment?

\n

Politics as effective charity?

\n

This is not a new question for those interested in optimizing the impact of their charity. Giving What We Can (GWWC), founded by Future of Humanity Institute associate Toby Ord, is a group of people who have pledged to give at least ten percent of their incomes \"to whichever organizations can most effectively use it to fight poverty in developing countries.\" GWWC notes that political advocacy may have high expected value, but has not recommended any organizations in that category, mentioning the difficulties of analysis as well as hope that it will be done in the future. 

\n

While those who care about future generations and existential risk might not choose to focus their charitable efforts on the task of GWWC, thinking about how political activity relates to it can still provide them a good example for a Fermi calculation. For instance, charity evaluator GiveWell, which posts its analysis online, estimates the cost of saving the life of a poor person alive today via their recommended charities as on the order of $3,000, giving a reasonably clear benchmark for political activity to exceed. 

\n

This post and its successors will lay some further groundwork for that Fermi calculation.

\n

Are votes worth buying?

\n

To make the comparison between political activity and GiveWell-style anti-poverty organizations as clear as possible we focus solely on money spent to convince or mobilize the votes of others (as opposed to one's own vote, which may be easy enough to exercise to be worthwhile, even if efforts to influence others are not). 

\n

We can then break up the initial analysis into three parts.

\n
1. How much political spending is required to elicit a vote for a candidate under various conditions?

2. What is the relationship between purchased votes and policy outcomes?

3. What is our probability distribution over the value (in lives of the poor saved, for this example) of those policies?

We could then delve deeper into questions of decision theory, value, signaling and bias that are raised by the basic empirical picture. Today's post will focus on the first prong, the cost per vote elicited via political spending (in the context of a two-boxing decision theory, for the moment).

\n

How much do votes sell for?

\n

To begin the analysis, we can consider as our example contests for the most powerful elected office today: President of the United States. Three lines of American evidence stand out as relevant to assessing the cost per vote of campaign spending: the revealed behavior of politicians, correlational studies of spending and electoral outcomes, and experimental evidence from randomized trials. The first and third indicate relatively low cost per vote, while the second suggests higher costs. For the causal judgments we wish to draw, randomized experiments offer the most powerful evidence, and this analysis will lean heavily on them. 

\n

Revealed preference

\n

In the United States, politicians dedicate an enormous proportion of their time to fundraising. Prima facie, this suggests that politicians, experts in getting elected, believe that fundraising will be at least as helpful to their election as other activities like personal appearances or actual governance. This is made more plausible by the tendency of politicians to spend more time fundraising and raise more money when facing serious challengers in their next election. Politicians might have been tricked by an initial baseless belief in the efficacy of campaign spending, with the most popular candidates also raising the most money and creating a spurious self-fulfilling correlation. However, selection over time would be expected to wear away at such mistaken beliefs.

\n

Correlation studies

\n

A number of correlational studies have been cited to advance the idea that 'campaigns don't matter' in U.S. presidential elections. Using information such as party identification, unemployment, economic growth, and the approval rating of the incumbent, political scientists can predict election outcomes surprisingly well before campaigning even begins. These correlations are only weak evidence of causation, however, since the fundamentals also predict fundraising capacity (more popular candidates do better at raising money from the public, and organized interests are more interested in buying influence with a candidate who looks likely to win). To be confident that additional spending will buy votes, one would ideally want robust randomized experiments capable of clearly indicating causal relationships.

\n

Randomized experiments

\n

Fortunately, the last several decades have seen a proliferation of randomized experiments and scientific methods in political campaigning. In these experiments, parties and political organizations randomly apply particular campaigning methods, often with the supervision of political scientists or other academics, and record the votes thus secured.  One reference is Donald Green and Alan Gerber's Get Out the Vote, which reviews dozens of experiments bearing on the cost-effectiveness of get-out-the-vote (GOTV) efforts.

\n

The key results are summarized in a table on page 139 (viewable on the Google Books preview). The strongest well-confirmed effect is for door-to-door GOTV drives, which average 14 voters contacted to induce one vote (plus spillover effects), with a cost per vote of $29 (including spillover effects), assuming that staff time costs $16/hour. Phone banks require more contacts per vote, but are cheaper per contact, with Green and Gerber estimating the cost per vote at $38 for campaign volunteer callers, and $90 for untrained commercial callers.
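A rough back-of-envelope reading of the door-to-door numbers (my own illustration, not Green and Gerber's derivation, and it treats the whole $29 as staff time):

```python
cost_per_vote = 29        # dollars per vote for door-to-door GOTV
staff_wage = 16           # assumed dollars per hour of staff time
contacts_per_vote = 14    # voters contacted per vote induced

hours_per_vote = cost_per_vote / staff_wage             # about 1.8 staff-hours per vote
contacts_per_hour = contacts_per_vote / hours_per_vote  # roughly 8 contacts per hour
print(hours_per_vote, contacts_per_hour)
```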

\n

In recent years, the U.S. political parties have adjusted their GOTV strategy in line with these experiments, and turnout has increased. For instance, in 2004 Green and Gerber predicted that the parties would increase GOTV spending by some $200 million using methods averaging $50 per vote, for an increase in turnout of 4 million, and the turnout data seems consistent with that. This money was concentrated in swing states, and in 2004 turnout increased 9% to 63% in the twelve most competitive states, while increasing 2% to 53% in the twelve least competitive states (while clearly leaving many potential voters home).

However, publication biases likely inflate the cost-effectiveness estimates here, perhaps drastically, and getting a solid (likely worse) estimate would require a detailed investigation of such biases. Many of these studies are weakly significant, inconsistent across circumstances, or involve many researcher degrees of freedom. The true cost per vote could easily be $1,000+.

\n

Diminishing returns

\n

But how finely targeted can GOTV efforts be? Adding n votes to both candidates in a two-candidate race is a disappointing result for those interested in affecting who wins in elections. A GOTV effort which mobilizes 1,000 votes at the margin, but has 250 of them go to the non-preferred candidate, will be only half as effective as one that solely mobilized supporters of the preferred candidate. Fortunately for electoral campaigners, the candidate citizens will vote for (if mobilized) is often easy to determine. Voting behavior is highly predictable from rural vs urban location, ethnicity, age, past party registration, neighborhood, etc. With increasing spending on GOTV, increasingly less selected populations would need to be contacted. Depending on how much money is available (and accompanying diminishing returns as less polarized populations are approached) this might easily double or triple the cost per vote at the margin.

\n

A further problem is that, since the forecast likelihood of a vote making the difference in an election varies widely across the country, other donors will also apply their resources disproportionately to closely contested elections. For instance, Gelman et al find that a U.S. presidential election vote in New Hampshire is around a hundred times as likely to make a difference as one in California. National presidential campaigns can efficiently allocate their resources in order of priority, with additional dollars going to relatively marginal regions. In non-national elections candidates may call in favors and tap war chests to deal with particularly close races, and empirical data do indicate increased spending in tight races. We can sanity-check an estimate of the cost per vote against total spending by national campaigns, e.g. the 2008 U.S. presidential race:

Candidate (Party)    | Amount raised | Amount spent | Votes      | Average spent per vote
---------------------|---------------|--------------|------------|-----------------------
Barack Obama (D)     | $532,946,511  | $513,557,218 | 69,498,215 | $7.39
John McCain (R)      | $379,006,485  | $346,666,422 | 59,948,240 | $5.78
Ralph Nader (I)      | $4,496,180    | $4,187,628   | 738,720    | $5.67
Bob Barr (L)         | $1,383,681    | $1,345,202   | 523,713    | $2.57
Chuck Baldwin (C)    | $261,673      | $234,309     | 199,437    | $1.17
Cynthia McKinney (G) | $240,130      | $238,968     | 161,680    | $1.48

Excludes spending by independent expenditure concerns.
Source: Federal Election Commission [1]

These amounts are surprisingly small (relative to, e.g. the U.S. federal budget), and also include all non-GOTV interventions. Negative campaigning which reduces turnout for an opposing candidate is just as effective in winning elections (per vote) as increasing turnout for one's preferred side. Interventions which push 'swing voters' to vote for one candidate rather than the other are twice as effective as either per voter influenced.

\n

Some ballpark VOI guesstimates

\n

Much more analysis can obviously be done here, but as a first-pass estimate, it seems likely that the marginal cost per vote from spending on U.S. presidential general elections is higher than the $50 per vote Green and Gerber estimate for GOTV efforts, so consider a range of $50-$5000. Note that these are after-tax dollars if contributed directly to political campaigns, and non-profit efforts are constrained in their ability to back particular candidates and coordinate with their campaigns (although many activities can be funneled through non-profit vehicles).

\n

What would that be worth? For those considering how to spend their effectiveness-focused philanthropy budget, we could use Eric's quick guesstimate of a 1 in 100,000,000 probability of a marginal vote swaying a presidential election. But if we consider ex ante close elections the number might be one in tens of millions (if one holds one's donations for close elections, although Gelman's figure of 1 in 10 million was for a specific election, using polls from immediately prior to election day, exaggerating the degree of certainty). Say we take 1 in 25 million as our number, assuming one waits for close elections to donate, but can't wait until just before election day.

Then in order for campaign spending to outperform the Against Malaria Foundation saving one child from death by malaria for ~$3,000+, the victory of the preferred candidate would need to be expected (given extensive uncertainty about candidates' future behavior, future conditions, and the effectiveness of various policies) to do good equivalent to preventing over 400 thousand to over 40 million extra malaria deaths, with higher numbers more likely.
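Putting the pieces of that estimate together (a sketch using only the figures quoted above; the point is the order of magnitude, not the exact values):

```python
p_decisive = 1 / 25_000_000   # assumed chance a marginal vote swings a close election
cost_per_life = 3_000         # dollars per death averted, the AMF-style benchmark

for cost_per_vote in (50, 5_000):                 # low and high ends of the cost-per-vote range
    cost_per_expected_swing = cost_per_vote / p_decisive
    breakeven_lives = cost_per_expected_swing / cost_per_life
    print(cost_per_vote, round(breakeven_lives))  # about 417,000 and 41,700,000 lives
```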

With higher estimates of the campaign spending cost per vote, political donations would look less attractive, but voting oneself has a potentially lower cost: the opportunity cost of reliably informing oneself (an essential cost) and of voting itself. So carefully voting oneself might be useful volunteering, even if political donations are not worthwhile in this framework. One might think of it as spending a gift of political power from the state.

\n

Continued in: Probability and Politics

" } }, { "_id": "9bTNcSpNBdPpyocMK", "title": "(Virtual) Employment Open Thread", "pageUrl": "https://www.lesswrong.com/posts/9bTNcSpNBdPpyocMK/virtual-employment-open-thread", "postedAt": "2010-09-23T04:25:58.007Z", "baseScore": 49, "voteCount": 45, "commentCount": 281, "url": null, "contents": { "documentId": "9bTNcSpNBdPpyocMK", "html": "

tl;dr: Some people on LW have a hard time finding worthwhile employment. Share advice and help them out!

\n

Working sucks. I'd rather not work. But alas, a lot of the time, we have to choose between working and starvation. At the very least I'd like to minimize work. I'd like to work somewhere cheap and comfortable... you know, like on the beach in Thailand, like LW (ab)user Louie did. Then I could spend my spare time on things like self-improvement and ahem 'studying nootropics' all day. I'd like to travel, if possible, and not be chained to an iffy job. It'd be cool to have flexible hours. I've read The 4-Hour Work Week but it seemed kinda difficult and scary and... I just don't wanna do it. I can't code, and I'd rather not learn how to. At least, I'd rather not have my job depend on it. I never graduated from college. Hell, I never got my high school diploma, even. A team of medical experts has confirmed that my sleep cycle is of the Chaotic Evil variety. (For those who read HP:MoR, imagine Harry Potter Syndrome, except on crack. I bet a lot of people have similar sleep cycles.) I'm 18, and therefore automatically low status for employment purposes: I'm obviously much too young to make a good teacher, or store manager, or police officer. I can imagine having health problems, or severe social anxiety, or a nearly useless liberal arts degree, or just a general setback limiting my employment opportunities. And if it turned out that I wanted to work 14 hour days all of a sudden because I really needed the money, well then it'd be cool to have that option as well. Alas, none of this is possible, so I might as well just give up and keep on being stressed and feeling useless... or should I?

\n

I bet a whole bunch of Less Wrongers aren't aware of chances for alternative employment. I myself hear myths of people who work via the internet, or blog for a living, or code an hour a day and still make enough to survive comfortably. Sites like elance and vworker (which looks kinda intimidating) exist, and I bet we could find others. Are there such people on Less Wrong that could tell us their secret? Do others know about how to snag one of these gigs? What sorts of skills are easiest to specialize in that could get returns in virtual work? Are virtual markets hard to break into? Can I just blog for an hour or two a day and afford to live a life of simplistic luxury in Thailand? Pretty much everyone on Less Wrong has exceptional writing ability: are there relatively well-paying writing gigs we could get? Alternatively, are there other non-internet jobs that people can break into that don't require tons of experience or great connections or that dreaded and inscrutable bane of nerds everywhere, 'people skills'? Share your knowledge or do some research and help Less Wrong become more happy, more productive, and more awesome!

\n

Oh, and this is really important: we don't have to reinvent the wheel. As wedrifid demonstrated in the earlier Intelligence Amplification Open Thread, a link to an already existent forum is worth ten thousand words or more.

" } }, { "_id": "y2iPsnzp5KhvtK9zt", "title": "Let's make a deal", "pageUrl": "https://www.lesswrong.com/posts/y2iPsnzp5KhvtK9zt/let-s-make-a-deal", "postedAt": "2010-09-23T00:59:43.666Z", "baseScore": -22, "voteCount": 36, "commentCount": 53, "url": null, "contents": { "documentId": "y2iPsnzp5KhvtK9zt", "html": "

At the start of 2010, I resolved to focus as much as possible on singularity-relevant issues. That resolution has produced three ongoing projects:

- a paper (on CEV and the true ontology);
- a study of string theory; and
- an investigation of academic options.

As I put it the other day, the paper is about \"CEV, adapted to whatever the true ontology is\". I have ideas about how CEV should work, and about what the true ontology is, and about the adjustments that the latter might require. These ideas are tentative, and open to correction, and the objective is to find out the facts, not just to insist on an opinion. Indeed, I would be open to hearing that I ought to be working on something else, if I want to attain maximum relevance to the AI era. But for now, I have my plan, and I take it seriously as a blueprint for what I should be doing.

\n

The relevance of string theory might seem questionable. But it matters for physical ontology and for epistemology of physics [ETA: which matters for general epistemology and hence for AGI]. String theory is also a crossroads for many topics central to pure mathematics, such as algebraic geometry, and their techniques are relevant for many other fields, even discrete ones like computer science. In the theory of complexity classes, there is a self-referential barrier to proving that P is distinct from NP. There is a deep proposal to overcome it by transposing the problem to the domain of algebraic geometry, and I've just begun to consider whether a similar method might illuminate problems like self-enhancement, utility function discovery, and utility function renormalization (for concreteness, I plan to work with decision field theory). Also, if I can speak string, maybe I can recruit some of those big brains to the task of FAI.

\n

\"Investigation of academic options\" should be self-explanatory. A university is one of the few places where you might be able to work full-time on matters like these. Unfortunately, this outcome continues to elude me. So while I set about whipping up a stew of private microloans and casual work so as to keep a roof over my head, it's time for me to try the Internet option as well.

\n

I find that life costs me AUD$1000/month (AUD is currency code for \"Australian dollars\"). I'd do better with more, but that's my minimum budget, the lower bound below which bad things will happen. So that's also the conversion rate from \"money\" to \"free time\".

\n

I figure that there are three basic forms of cash transaction: gifts, payments, and loans. A gift is unconditional. A payment is traded for services rendered. A loan is a temporary increase in a person's capital that has to be returned. These categories are not entirely distinct: for example, a payment refunded (because the service wasn't performed) ends up having functioned as a loan.

\n

I am interested in all three of these things. The brittle arrangement which allows me to speak to you this way does not presently extend to me owning a laptop or having easy access to Skype, but I do have a phone, so the adventurous can call me on +61 0406 979788. (I'm in the eastern Australian timezone.) My email is mporter at gmail.com, and I have a Paypal account under that address.

" } }, { "_id": "JbaTpcTxMS6SjizqL", "title": "Is Rationality Maximization of Expected Value?", "pageUrl": "https://www.lesswrong.com/posts/JbaTpcTxMS6SjizqL/is-rationality-maximization-of-expected-value", "postedAt": "2010-09-22T23:16:04.427Z", "baseScore": -32, "voteCount": 26, "commentCount": 65, "url": null, "contents": { "documentId": "JbaTpcTxMS6SjizqL", "html": "

Two or three months ago, my trip to Las Vegas made me ponder the following: If all gambles in the casinos have negative expected values, why do people still engage in gambling - especially my friends who are fairly well-versed in probability/statistics?

\n

Suffice it to say, I still have not answered that question. 

\n

On the other hand, this did lead me to ponder more about whether rational behavior always involves making choices with the highest expected (or positive) value - call this the Rationality-Expectation (R-E) hypothesis.

\n

Here I'd like to offer some counterexamples that show R-E is clearly false, to me at least. (In hindsight, these look fairly trivial but some commentators on this site speak as if maximizing expectation is somehow constitutive of rational decision making - as I used to. So, it may be interesting for those people at the very least.)

\n

\n

\n

\n

A is a gamble that shows that choices with negative expectation can sometimes lead to net payoff.

\n

B is a gamble that shows that choices with positive expectation can sometimes lead to net costs.

\n

As I'm sure you've all noticed, expectation is only meaningful in decision-making when the number of trials in question can be large (or more precisely, large enough relative to the variance of the random variable in question). This, I think, in essence is another way of looking at the Weak Law of Large Numbers.
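As an illustration of this point, here is a minimal simulation with a stand-in gamble of my own (not the author's A or B): win $1 with probability 0.99, lose $200 with probability 0.01, for an expectation of about -$1.01 per play.

```python
import random

def play_once():
    # Stand-in gamble: win $1 with probability 0.99, lose $200 otherwise (EV is about -1.01).
    return 1 if random.random() < 0.99 else -200

random.seed(0)

# A single play usually ends ahead, despite the negative expectation...
single_wins = sum(play_once() > 0 for _ in range(10_000))
print(single_wins / 10_000)   # close to 0.99

# ...but over many plays the negative expectation dominates.
total = sum(play_once() for _ in range(100_000))
print(total / 100_000)        # close to -1.01 per play
```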

\n

In general, most (all? few?) statistical concepts make sense only when we have trials numerous enough relative to the variance of the quantities in question.

\n

This makes me ponder a deeper question, nonetheless.

\n

Does it make sense to speak of probabilities only when you have numerous enough trials? Can we speak of probabilities for singular, non-repeating events?

" } }, { "_id": "YbT9yEdrZJLGAdsGw", "title": "Rationality Case Study - Ad-36", "pageUrl": "https://www.lesswrong.com/posts/YbT9yEdrZJLGAdsGw/rationality-case-study-ad-36", "postedAt": "2010-09-22T18:32:18.906Z", "baseScore": 32, "voteCount": 28, "commentCount": 54, "url": null, "contents": { "documentId": "YbT9yEdrZJLGAdsGw", "html": "

Edit: Title change - may cause RSS resend.

\n

There is a widespread belief here that we pretty much have epistemic rationality nailed.  Instrumental rationality, ... well, nobody is perfect.  We are still working on that.  But epistemic rationality?  That is a breeze.  Simply a matter of avoiding some well known biases, doing Bayesian updating, and maintaining a certain kind of mutual respect in discussions.  Maybe throw in a bit of Pearl, if you need to address causality in addition to correlation.  We know how to handle evidence.  And what is epistemic rationality, after all, besides the optimal evaluation of evidence?

\n

A recent posting made the suggestion that we ought, as a community, to make our expertise available to society, by tackling important and controversial questions and (presumably) then sharing our consensus with the world.  But will we reach a consensus?  A simple test case might be useful.  The purpose of this posting is to propose a test.  I am providing some links to an ongoing scientific controversy which (judging by press coverage) is of considerable interest to the general public.  Given the attention given to health issues here, I suspect that it will also be of interest to the LW population in general.  So here goes.

\n

There has been some suggestion that a particular strain of the common cold virus - Ad-36 - might be a major contributor to human obesity.  Become infected by the virus, and you are at increased risk of becoming fat.  However, a certain amount of skepticism has been expressed - the evidence may not be as good as it is made out to be.

\n

I'm completely new to this controversy - just found out about it today.  I think that it would be an interesting test case to have LW as a group look into this question (quite a lot of information is available online, and what is not can be found in university libraries to which I assume many of us have access).  So, what do you all think?  Does Adenovirus strain #36 cause human obesity?  If so, how much?  If not, where exactly have the doctors and scientists who believe otherwise gone wrong in their thinking?

" } }, { "_id": "J2TZsxwrDBkDRTMTF", "title": "Error detection bias in research", "pageUrl": "https://www.lesswrong.com/posts/J2TZsxwrDBkDRTMTF/error-detection-bias-in-research", "postedAt": "2010-09-22T03:00:33.555Z", "baseScore": 73, "voteCount": 60, "commentCount": 37, "url": null, "contents": { "documentId": "J2TZsxwrDBkDRTMTF", "html": "

I have had the following situation happen several times during my research career: I write code to analyze data; there is some expectation about what the results will be; after running the program, the results are not what was expected; I go back and carefully check the code to make sure there are no errors; sometimes I find an error.

\n

No matter how careful you are when it comes to writing computer code, I think you are more likely to find a mistake if you think there is one.  Unexpected results lead one to suspect a coding error more than expected results do.

\n

In general, researchers usually do have general expectations about what they will find (e.g., the drug will not increase risk of the disease; the toxin will not decrease risk of cancer).

\n

Consider the following graphic:

\n

\"\"

\n

Here, the green region is consistent with what our expectations are.  For example, if we expect a relative risk (RR) of about 1.5, we might not be too surprised if the estimated RR is between (e.g.) 0.9 and 2.0.  Anything above 2.0 or below 0.9 might make us highly suspicious of an error -- that's the red region.  Estimates in the red region are likely to trigger serious coding error investigation.  Obviously, if there is no coding error then the paper will get submitted with the surprising results.

\n

Error scenarios

\n

Let's assume that there is a coding error that causes the estimated effect to differ from the true effect (assume sample size large enough to ignore sampling variability).

\n

Consider the following scenario:

\n

\"\"

\n

Type A. Here, the estimated value is biased, but it's within the expected range.  In this scenario, error checking is probably more casual and less likely to be successful.

\n

Next, consider this scenario:

\n

\"\"

\n

Type B. In this case, the estimated value is in the red zone.  This triggers aggressive error checking of the type that has a higher success rate.

\n

Finally:

\n

\"\"

\n

Type C. In this case it's the true value that differs from our expectations.  However, the estimated value is about what we would expect.  This triggers casual error checking of the less-likely-to-be-successful variety.

\n

If this line of reasoning holds, we should expect journal articles to contain errors at a higher rate when the results are consistent with the authors' prior expectations. This could be viewed as a type of confirmation bias.
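A minimal simulation of that claim, with made-up numbers for how often errors land in each range and how often each style of checking catches them (all three parameters below are assumptions, not estimates from this post):

```python
import random

random.seed(1)
P_ERROR_LOOKS_EXPECTED = 0.5   # chance a coding error still leaves the estimate in the expected range
P_CATCH_CASUAL = 0.3           # chance casual checking finds the error (expected-looking result)
P_CATCH_AGGRESSIVE = 0.8       # chance aggressive checking finds the error (surprising result)

survived = {'expected': 0, 'surprising': 0}
total = {'expected': 0, 'surprising': 0}

for _ in range(100_000):       # simulate analyses that contain a coding error
    looks_expected = random.random() < P_ERROR_LOOKS_EXPECTED
    kind = 'expected' if looks_expected else 'surprising'
    p_catch = P_CATCH_CASUAL if looks_expected else P_CATCH_AGGRESSIVE
    total[kind] += 1
    if random.random() > p_catch:
        survived[kind] += 1    # the error is never found and makes it into the paper

for kind in ('expected', 'surprising'):
    print(kind, survived[kind] / total[kind])   # about 0.7 vs 0.2
```

With these numbers, an error that happens to leave the result looking plausible is more than three times as likely to survive into print, which is the Type A versus Type B asymmetry described above.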

\n

How common are programming errors in research?

\n

There are many opportunities for hard-to-detect errors to occur.  For large studies, there might be hundreds of lines of code related to database creation, data cleaning, etc., plus many more lines of code for data analysis.  Studies also typically involve multiple programmers.  I would not be surprised if at least 20% of  published studies include results that were affected by at least one coding error.  Many of these errors probably had a trivial effect, but I am sure others did not.

" } }, { "_id": "yH3XvSPkBB2Q3emZK", "title": "Less Wrong Should Confront Wrongness Wherever it Appears", "pageUrl": "https://www.lesswrong.com/posts/yH3XvSPkBB2Q3emZK/less-wrong-should-confront-wrongness-wherever-it-appears", "postedAt": "2010-09-21T01:40:37.861Z", "baseScore": 32, "voteCount": 40, "commentCount": 163, "url": null, "contents": { "documentId": "yH3XvSPkBB2Q3emZK", "html": "

In a recent discussion about a controversial topic which I will not name here, Vladimir_M noticed something extremely important.

\n
\n

Because the necessary information is difficult to obtain in a clear and convincing form, and it's drowned in a vast sea of nonsense that's produced on this subject by just about every source of information in the modern society.

\n
\n

I have separated it from its original context, because this issue applies to many important topics. There are many topics where the information that most people receive is confused, wrong, or biased, and where nonsense drowns out truth and clarity. Wherever this occurs, it is very bad and very important to notice.

There are many reasons why it happens, many of which have been explicitly studied and discussed as topics here. The norms and design of the site are engineered to promote clarity and correctness. Strategies for reasoning correctly are frequently recurring topics, and newcomers are encouraged to read a large back-catalog of articles about how to avoid common errors in thinking (the sequences). A high standard of discourse is enforced through voting, which also provides rapid feedback to help everyone improve their writing. Since Well-Kept Gardens Die by Pacifism, when the occasional nutjob stops by, they're downvoted into invisibility and driven away - and while you wouldn't notice from the comment archives, this has happened lots of times.

\n

Less Wrong has the highest accuracy and signal to noise ratio of any blog I've seen, other than those that limit themselves to narrow specialties. In fact, I doubt anyone here knows a better one. The difference is very large. While we are certainly not perfect, errors on Less Wrong are rarer and much more likely to be spotted and corrected than on any similar site, so a community consensus here is a very strong signal of clarity and correctness.

\n

As a result, Less Wrong is well positioned to find and correct errors in the public discourse. Less Wrong should confront wrongness wherever it appears. Wherever large amounts of utility depend on clear and accurate information, that information is not already prevalent, and we have the ability to produce or properly filter it, we ought to do so. Even if it's incompatible with status signaling, or off topic, or otherwise incompatible with non-vital social norms.

\n

So I propose the following as a community norm. If a topic is important, the public discourse on it is wrong for any reason, it hasn't appeared on Less Wrong before, and a discussion on Less Wrong would probably bring clarity, then it is automatically considered on-topic. By important, I mean topics where inaccurate or confused beliefs would cost lots of utility for readers or for humanity. Approaching a topic from a new and substantially different angle doesn't count as a duplicate.

\n

EDIT: This thread is producing a lot of discussion about what Less Wrong's norms should be. I have proposed a procedure for gathering and filtering these discussions into a top-level post, which would have the effect of encouraging people to enforce them through voting and comments.

\n

\n

Less Wrong does not currently provide strong guidance about what is considered on topic. In fact, Less Wrong generally considers topic to be secondary to importance and clarity, and this is as it should be. However, this should be formally acknowledged, so that people are not discouraged from posting important things just because they think they might be off topic! Determining whether something is on topic is a trivial inconvenience of the worst sort.

\n

When writing posts on these topics, it is a good idea to call out any known reasons why the public discourse may have gone awry, to avoid hitting the same traps. If there's a related but different position that's highly objectionable, call it out and disclaim against it. If there's a standard position which people don't want to or can't safely signal disagreement with, then clearly label which parts are true and which aren't. Do not present distorted views of controversial topics, but more importantly, do not present falsehood as truth in the name of balance; if a topic seems to have two valid opposing sides, it probably means you don't understand it well enough to tell which is correct. If there are norms suppressing discussion, call them out, check for valid justifications, and if they're unjustified or the issues can be worked around, ask readers not to enforce them.

\n

I would like to add a list of past Less Wrong topics which had little to do with bias, except that the public discourse was impaired by it. These have already been discussed, so they would be discouraged as duplicates (except for substantially new approaches), but they are good examples of the sorts of topics we should all be looking for. The accuracy of criminal justice (which we looked at in the particular case of Amanda Knox); religion, epistemology, and death; health and nutrition, akrasia, specific psychoactive drugs and psychoactive drugs in general; gender relations, racial relations, and social relations in general; social norms in general and the desirability of particular norms; charity in general and the effectiveness of particular charities; philosophy in general and the soundness of particular philosophies.

\n

By inadequate public discourse, I mean that either (a) they're complex enough that most information sources are merely useless and confusing, (b) social norms make them hard to talk about, or (c) they have excessive noise published about them due to bad incentives. Our job is to find more topics, not in this list, where correctness is important and where the public dialogue is substantially inadequate. Then write something that's less wrong.

" } }, { "_id": "EnuFGdHTQLwcX6ed4", "title": "Melbourne Less Wrong Meetup", "pageUrl": "https://www.lesswrong.com/posts/EnuFGdHTQLwcX6ed4/melbourne-less-wrong-meetup", "postedAt": "2010-09-20T09:14:08.508Z", "baseScore": 10, "voteCount": 9, "commentCount": 48, "url": null, "contents": { "documentId": "EnuFGdHTQLwcX6ed4", "html": "

There seem to be enough Melburnian Less Wrong visitors to justify having monthly meetups. Just the other day, I ran into some at my local restaurant, and the thread Where are we? lists four (not including me). There are probably more out there.

\n

Details

\n

Time: 6pm, Saturday 2nd October

\n

Place: Don Tojo  

\n

I'll be there with a Less Wrong sign.

\n

 

\n

List of people attending:

\n

Patrick

\n

toner

\n

luminosity

\n

Ppeach

\n

wedrifid

\n

Byron

\n

Yurifury

\n

ShardPhoenix

" } }, { "_id": "yGwxH6EjZC9D2dNff", "title": "Fall 2010 Meta Thread", "pageUrl": "https://www.lesswrong.com/posts/yGwxH6EjZC9D2dNff/fall-2010-meta-thread", "postedAt": "2010-09-19T06:30:08.721Z", "baseScore": 9, "voteCount": 7, "commentCount": 100, "url": null, "contents": { "documentId": "yGwxH6EjZC9D2dNff", "html": "

Use this thread for discussion of Less Wrong itself and all things meta, meta-meta, meta-meta-meta, omega, etc.

\n

 

" } }, { "_id": "5X5xProqsbeck7bbd", "title": "Rationality Power Tools", "pageUrl": "https://www.lesswrong.com/posts/5X5xProqsbeck7bbd/rationality-power-tools", "postedAt": "2010-09-19T06:20:05.481Z", "baseScore": 27, "voteCount": 25, "commentCount": 67, "url": null, "contents": { "documentId": "5X5xProqsbeck7bbd", "html": "

Summary: Rationalists should win; however, it could take a really long time before a technological singularity or uploading provide powerful technology to aid rationalists in achieving their goals. It's possible today to create assistant computer software to help direct human effort and provide \"hints\" for clearer thinking. We should catalog such software when it exists and create it when it doesn't.

The Problem
We may be waiting awhile for a Friendly AI or similar “world changing” technology to appear. While technology continues to improve, the process of creating a Friendly AI seems extremely tricky, and there’s no solid ETA on the program. Uploading is still years to decades away. In the meantime, we aspiring rationalists still have to get on with our lives.

Rationality is hard. Merely knowing about a bias is often not enough to overcome it. Even in cases where the steps to act rationally are known, the algorithm required may be more than can be done manually, or may require information which itself is not immediately at hand. However, a lot of things that are difficult become easier when you have the right tools. Could there be tools that supplement the effort involved in making a good decision? I suspect that this is the case, and will give several examples of programs that the community could work to create -- computer software to help you win. Because a lot of software is specifically created to address problems as they come up, it would also be worthwhile to maintain an index of already available software with special usefulness and applicability to Less Wrong readers.

\n

The Opportunity
Some people have expressed concern that Less Wrong should be more focused on “doing things” rather than “knowing things” -- instrumental rather than epistemic rationality. Computer software, as a set of instructions for “doing something”, falls closer to the “instrumental” end of this spectrum (although for a very good program, it’s possible to learn a thing or two from the source code, or at least the documentation). The concept of open-source software is well-tested, and platforms for open-source projects are freely available and mature -- see Google Code, SourceForge, GitHub, etc. Additionally, many of us on Less Wrong already have the skills to contribute to such an effort directly. By my count, the 2009 survey shows that 71 of the respondents -- 46% -- are involved in computing professionally; it seems at first glance as if the basic skills to generate rationality power tools are already present in the community.

Currently, the only software listing on the wiki seems to be puzzle games and the only discussion I’m aware of for creating software in the community is the proposal for a Less Wrong video game.

What’s a Rationality Power Tool?
By “rationality power tool” I mean a computer program which is:

\n\n


Program Examples and Proposals
Facebook Idea Futures
Idea futures are neat, but public opportunities for participating in them are few and far between. The Foresight Exchange is mostly moribund, and definitely shows its age -- Consensus Point has mostly focused on bringing prediction markets to large corporations, which is fine as far as it goes, but not necessarily optimal for getting the word out and letting people who are new to the concept play with it. The Popular Science prediction market closed in 2009. Intrade has questionable legality, at best, from the U.S., has lackluster marketing (e.g. the 2008 U.S. elections are still listed at the top of the sidebar!), and, because it uses real money, is limited both in the number of claims it can support and in its usefulness for introducing the concept to those unfamiliar with it. Additionally, none of these markets have a large number of conditional claims (e.g. you can use Intrade for a probability that a Democrat wins the 2012 presidential election and a probability that Palin is the 2012 Republican nominee, but a conditional claim of some sort would be needed for a probability that a Democrat wins given that Palin is the Republican nominee).
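
As a minimal illustration of why a separate conditional claim is needed (all prices here are made up, and this is not any market's actual API): the conditional probability cannot be recovered from the two unconditional prices alone -- you also need a joint claim.

```python
# Made-up prices for two unconditional claims and one joint claim.
p_dem_wins = 0.55    # hypothetical price of "a Democrat wins in 2012"
p_palin_nom = 0.30   # hypothetical price of "Palin is the Republican nominee"
p_joint = 0.20       # hypothetical price of a claim on both events together

# A conditional claim would trade at roughly the conditional probability:
p_dem_given_palin = p_joint / p_palin_nom
print(round(p_dem_given_palin, 2))  # 0.67 under these made-up numbers
```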

At the same time, browser-based games have become much more popular recently. The basic concept of the Foresight Exchange would make a really good Facebook game if the interface were updated, and claim creation were encouraged rather than discouraged.

Prediction markets can provide good exercise in critically evaluating claims for their participants, while simultaneously quantifying the “wisdom of crowds” -- a more expensive and difficult job when a survey is needed. Zocalo and Idea Futures (a descendant of the software that runs FX) are two packages that could possibly be updated.

Related: TakeOnIt, a database of expert opinion by Ben Albahari introduced on LessWrong earlier this year.

Coordinated Efforts -- Fundraising and Otherwise
Money is [basically] a unit of caring. Sometimes it’s useful for people to donate toward something if and only if other people donate toward that cause. Websites like ChipIn and KickStarter can somewhat help in this sort of “coordination game.” The Point is a similar website with a less financial focus.

Scheduling Software
Time is almost invariably an element in a plan. It’s very rare that someone can overcome akrasia and procrastination, go do whatever needs to be done on the spot, and that’s the end of it. More often, something needs to be done every week, every day, every month, always in response to some environmental cue, randomly over a period of 3 years, in a particular sequence, at a particular time, when the resources become available, etc. There’s already been some discussion on Less Wrong of various approaches and issues of time management, but again, there hasn’t been much effort to critique these systems, find software implementations, and catalog them.

FWIW, I’ve been using a PHP script to randomly create schedules for myself for the last several weeks. While still very crude, I’ve found that being able to just “adjust the dials” on how much I work on things on a week-by-week basis and make small changes to the auto-scheduler's output (currently a tab-delimited spreadsheet) is a lot better than having to actually come up with a complete schedule ex nihilo. The less time spent thinking about what you’re going to do, the more time you have to actually do it.
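
For concreteness, here is a minimal sketch of the idea in Python (not the author's actual PHP script; the task names and hour targets are invented): weekly hour targets act as the "dials", and the blocks are scattered randomly across the week and written out tab-delimited.

```python
# A rough sketch of a random weekly auto-scheduler, assuming one-hour blocks.
# The "dials" are the weekly hour targets; rerun to get a fresh schedule.
import csv
import random
import sys

hours_per_week = {"exercise": 3, "apartment hunting": 2, "writing": 5}  # the dials
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

blocks = [task for task, hours in hours_per_week.items() for _ in range(hours)]
random.shuffle(blocks)

schedule = {day: [] for day in days}
for i, task in enumerate(blocks):
    schedule[days[i % len(days)]].append(task)

# Tab-delimited output, one row per day, ready to paste into a spreadsheet.
writer = csv.writer(sys.stdout, delimiter="\t")
for day in days:
    writer.writerow([day] + schedule[day])
```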

Mentor Match and Practice Program Database
Based on some advice in Memetic Hazards in Videogames, I have been reading Talent is Overrated, which also ties well to the recent Shiny Distraction discussion. It suggests that well-designed practice is key to developing skills, and this is often easier if you have a mentor for feedback and hints on what you should be working on. This leads me to two programs that don’t currently exist, as far as I know, but that would be very useful -- first, a database of practice programs for developing various skills, and second, a mentor finder to pair up rationalists that want to learn things with those that already know them.

Less Wrong
In “Building Communities With Software,” Joel Spolsky discusses how various ways of setting up the software around an online community can affect its behavior. It’d be possible to create a rationalist community using nothing but IRC, but it’d be more difficult. Less Wrong itself (based on Reddit) certainly falls into the category of rationalist power tools. For those not already aware, there is currently a debate on the appropriateness of “community” items like job postings and meetings on the front page. Following the principle that the things you want people to do on a website should be easier to do, we shouldn't simply bury these -- if we want people to meet in person and implement rationality in the workforce, we should put community items in full on the front page, if perhaps not with the same prominence as articles (in the sidebar?).

Where To Go From Here
I’ve created a page on the wiki to catalog rationality power tools that currently exist and proposals to create new ones -- feel free to edit in links to programs you think fit and your ideas for new ones (perhaps with a link to an appropriate article or comment on the main site for discussion). My definition of “rationality power tools” above is a bit awkward, and I’d appreciate any refinement that helps us find or make such tools. For especially worthy projects, it may make sense to solicit bids for them, and then commission them, perhaps with Kickstarter. If you’re willing to take on a project and know some programming language, just do it.

*Actually, will soon create a page on the wiki, as I’m ironically experiencing technical difficulties in doing so.

" } }, { "_id": "XJCRAxNZ2GCbhESt2", "title": "Popular morality: spare those on the left", "pageUrl": "https://www.lesswrong.com/posts/XJCRAxNZ2GCbhESt2/popular-morality-spare-those-on-the-left", "postedAt": "2010-09-18T20:55:28.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "XJCRAxNZ2GCbhESt2", "html": "

Jen Wright at Experimental Philosophy:

\n

… I’m writing this post because I found something even more interesting…and puzzling. Leaving people’s actual looking behavior aside, I found a very powerful effect — consistent across all the vignettes — for which side of the screen the potential victim (the fat guy or the baby) was on. When the victims were on the right-side of the screen, people’s would and should judgements were significantly higher (i.e., they were more willing to, and thought more strongly that they should, kill the victim to save the others), than when they were on the left-side of the screen.

\n

So, does anyone have any suggestions as to what might explain this finding?

\n

My guess is that it’s related to the previous findings that people tend to place active people on the left of passive people in pictures (though it seems to vary across languages). The easiest interpretation is that it seems more moral to sacrifice passive people than active ones. That would also fit with the pattern I pointed out before in our moral intuitions, that moral concern is highly contingent on whether we can be rewarded or punished by the beneficiary of our ‘compassion’.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "BTXdajWzoN2YRbgjG", "title": "Rational Health Optimization", "pageUrl": "https://www.lesswrong.com/posts/BTXdajWzoN2YRbgjG/rational-health-optimization", "postedAt": "2010-09-18T19:47:02.687Z", "baseScore": 23, "voteCount": 41, "commentCount": 78, "url": null, "contents": { "documentId": "BTXdajWzoN2YRbgjG", "html": "

Possibly Related To: Diseased Thinking, Thou Art Godshatter

\r\n

There are 8760 hours in a typical year.  A typical 30-year-old will spend about 2900 of those hours sleeping, around 160 of them impaired or incapacitated by illness, and will experience perhaps 2000 hours of peak mental function.

\r\n

As one ages, the fraction of hours spent sleeping decreases slightly, but eventually the annual hours of peak mental function decline as well, and the annual hours spent ill increase nonlinearly until one eventually makes that final hospital visit.

\r\n

There is a hope that medical technology, accelerated via a Singularity, will advance to the point where we have full mastery over biology and can economically repair organ and cellular damage faster than aging accumulates it.  There is sufficient evidence to put a reasonable bet on that happening by mid-century.

\r\n

But for most of us that still leaves an unacceptably high risk of death in the cumulative years between now and then.  Cryonics enrollment offers a further hope, but in practice probably only results in a modest improvement in long-term survival odds after full discounting for the technical risks and uncertainties.

\r\n

In the end it all comes down to a die roll.  Wouldn't you like to get an extra +1 or two?

\r\n

With a simple evolutionary health optimization, one can:

\r\n\r\n

Evolution and Health

\r\n

Our bodies are the collective result of countless layers of mindless complex adaptations, evolutionary godshatter from a bygone history.  The current sub-species or races of humans today are just a small sampling of a much larger space of genetically related human ancestors who roamed the earth for hundreds of thousands of years before the modern era.  Our modern genomes are a wide and highly irregular sampling of this diverse set of historical adaptations.

\r\n

For most of that time the earth was considerably colder and very different than it is today - we currently live in a warm peak between large glaciations.  These wide climate swings created complex dynamic patterns of shifting ecological niches.  At glacial peaks, sea levels were over 100 meters lower than today and most of the terrestrial world was connected, allowing large waves of nomadic migration in giant mammals.

\r\n

Again and again tribes of homo sapiens with increasingly advanced technological hunting cultures expanded out of Africa, where humans originated and the ecosystems had more time to co-evolve.  The farther humans migrated out of Africa, the more they encountered megafauna unadapted to human hunting, and the more they became specialized technological apex predators.

\r\n

By roughly 10,000 BC nearly all the terrestrial megafauna outside of Africa was extinct.  Shortly thereafter early agricultural centers began to spring up in several megafauna-depleted regions; nascent civilizations in the making - only to fail and rise again.  In the ten thousand years or so since these first large-scale farming experiments, the homo sapiens genome has had limited time for any new novel adaptations.

\r\n

This historical observation leads to a simple but surprisingly powerful top-level belief.  Our genome is optimized for genetic fitness functions that no longer exist: an evolutionary environment from the paleolithic era.  Thus all else being equal we should expect significant deviations from that environment to have negative effects more often than positive.

\r\n

Evolution is near-sighted.  In thou art godshatter, Eliezer Yudkowsky asks:

\r\n
\r\n

Why wasn't a concept of \"inclusive genetic fitness\" programmed into us, along with a library of explicit strategies?  Then you could dispense with all the reinforcers.  The organism would be born knowing that, with high probability, fatty foods would lead to fitness.  If the organism later learned that this was no longer the case, it would stop eating fatty foods.

\r\n
\r\n

One answer is that explicit linguistic conceptual knowledge is much more complex and developed long after simple reinforcement strategies.  The other perhaps more obvious answer is that explicit conceptual strategies are inherently serial and are thus extremely computationally limited in the slow but massively parallel human brain.  Every day our brains are unconsciously evaluating vast quantities of probabilistic inferences tied to simple reinforcers in an attempt to maximize our inclusive genetic fitness and spread our genes.

\r\n

Regardless of whether or not we are interested in maximizing genetic fitness, we can now use our advanced conceptual knowledge of evolution, genetics, and health to identify and map out the hidden assumptions of the numerous ancient programs in the genome, how they can go wrong in the novel modern environment, and how we can best trick these adaptive systems back into their optimal operating modes.

\r\n

Evolution had no incentive to optimize for environments significantly different than those it encountered, and the web of complex interdependent genetic programs that maintain our bodies have numerous subtle minor failure modes, most of which are not fully understood.

\r\n

A key insight is that the web of hormones, metabolism and gene expression is highly complex and inter-related, and one gets full benefit only by correcting the majority of the deviations.  If this is done, then one can significantly reduce the chance of succumbing to the diseases of civilization.

\r\n

What are some of the modern environmental deviations?

\r\n

Exercise

\r\n

Hunter-gatherers were certainly not sedentary, but probably had an average daily activity level still below that of today's professional athletes.  It's pretty clear though that they spent a good chunk of time walking and running.  The effects of exercise on health have been fairly well studied.  Of interest to pragmatic instrumental rationalists is that only mild exercise is required.  Studies have shown that the main longevity boost is somewhere around 2 to 5 years and requires just 100 to 300 calories of exercise per day. [1]  Suspected mechanisms involve cortisol regulation, endorphins, and triggers that activate cellular repair.  Sex may be the most efficient form of exercise for the calorie budget.

\r\n

Diet

\r\n

Our ancestors ate a variety of foods with significant geographic and temporal variation.  But if you sum the typical average over a swath of ancestors, it is believed to have consisted largely of lean game meat, offal, fish, nuts, higher-fiber vegetables, low-sugar fruits, shellfish and insects.  At the macro-level the diet would be more balanced between protein, fat and carbohydrate, significantly different than the high carbohydrate and low protein modern diet.

\r\n

Modern humans today eat a diet that is superficially super-good - it consists of the foods blind evolutionary adaptations thought we needed more of . . . ten thousand years ago.  Our taste buds are primed to favor foods that are rich in calories overall and high in ancient rarities such as sodium and certain fats.

\r\n

We now have specific evidence for a whole range of health problems associated with the modern diet: excess calories and caloric density, high glycemic index causing excessive insulin production and spiking (mainly via over-abundance of concentrated starch and sugar), imbalanced omega 3 / omega 6 fatty acid profile and imbalanced sodium/potassium profile.

\r\n

The exact mechanisms are complex and not fully understood, but in general this diet will cause one to put on weight and is linked longer term to an entire cluster of diseases - largely the metabolic syndrome and cardiovascular disease.

\r\n

A simplified paleo-diet solution:

\r\n\r\n

Night

\r\n

Nights would overall be much darker than they are today (unless one lives in some remote wilderness), and that darkness would start much earlier.  Campfire light is considerably different than modern artificial illumination.

\r\n

Even small amounts of light can block melatonin production.  Thus modern humans' sleep cycle is completely unoptimized.  We don't get enough bright sunlight in the day, and we get far too much light at night.  The evidence suggests that melatonin/sleep imbalance can affect everything from mood to the immune system to aging itself.

\r\n

Interestingly enough, human melatonin production may be optimized to ignore campfire light (from wikipedia):

\r\n
\r\n

Production of melatonin by the pineal gland is inhibited by light and permitted by darkness. For this reason melatonin has been called \"the hormone of darkness\". Its onset each evening is called the Dim-Light Melatonin Onset (DLMO). Secretion of melatonin as well as its level in the blood, peaks in the middle of the night, and gradually falls during the second half of the night, with normal variations in timing according to an individual's chronotype.

\r\n

It is principally blue light, around 480nm, that suppresses melatonin, increasingly with increased light intensity and length of exposure. Until recent history, humans in temperate climates were exposed to few hours of (blue) daylight in the winter; their fires gave predominantly yellow light. Wearing glasses that block blue light in the hours before bedtime may avoid melatonin loss. Kayumov et al. showed that light containing only wavelengths greater than 530 nm does not suppress melatonin in bright-light conditions. Use of blue-blocking goggles the last hours before bedtime has also been advised for people who need to adjust to an earlier bedtime, as melatonin promotes sleepiness.

\r\n

 

\r\n
\r\n

Melatonin can be supplemented at night, but I also intend to outfit my apartment with blue-filtered lights, or perhaps try blue-filtered glasses. I have noticed that sleep is also more effective when one wakes up slowly to bright daylight.

\r\n

Sunlight

\r\n
Paleolithic hunter-gatherers would spend most of the day outside in the sun.  Even with clothing, skin sun exposure would be vastly higher than the average today.
\r\n

In some sense most terrestrial vertebrates are partially solar powered - plants are not the only creatures to use solar energy directly.  Unless you are currently taking 5000 IU of vitamin D3 per day, the odds are that you are vitamin D deficient.

\r\n

\"Vitamin\" D is perhaps a misnomer.  I can do no better than quote from the  Vitamin D council [2]:

\r\n
\r\n

Technically not a \"vitamin,\" vitamin D is in a class by itself. Its metabolic product, calcitriol, is actually a secosteroid hormone that is the key that unlocks binding sites on the human genome. The human genome contains more than 2,700 binding sites for calcitriol; those binding sites are near genes involved in virtually every known major disease of humans.

\r\n
\r\n
\r\n

Current research has implicated vitamin D deficiency as a major factor in the pathology of at least 17 varieties of cancer as well as heart disease, stroke, hypertension, autoimmune diseases, diabetes, depression, chronic pain, osteoarthritis, osteoporosis, muscle weakness, muscle wasting, birth defects, periodontal disease, and more.

\r\n
\r\n

What does Vitamin D do at all these  gene expression sites?  We don't really know yet.

\r\n

However it is clear that D is somehow involved heavily in immune regulation and brain development.  Interestingly enough, almost all of the modern diseases of civilization are either inflammatory diseases or are immune regulated, including cancer, cardiovascular disease and Alzheimer's, just to name a few. 

\r\n

A number of studies show that vitamin D deficiency (the default state of most of us today) increases overall rates of cancer by perhaps 50% or more - roughly double the cancer risk of smoking [3a] [3b].

\r\n

An interesting quote from that article:

\r\n
\r\n

One of the researchers who made the discovery, professor of medicine Robert Heaney of Creighton University in Nebraska, says vitamin D deficiency is showing up in so many illnesses besides cancer that nearly all disease figures in Canada and the U.S. will need to be re-evaluated. \"We don't really know what the status of chronic disease is in the North American population,\" he said, \"until we normalize vitamin D status.\" (emphasis added)

\r\n
\r\n

I'm not aware of any other supplement, drug, or food that has this level of cancer protection.  Indeed drug companies have been working on patentable vitamin D analogues for years.  This is the sad state of our medical industry.  The reality is most of us today are deficient - and our cancer rates are thus abnormally elevated.  But we don't need an expensive new vitamin D derived drug to reduce cancer incidence.

\r\n

Low vitamin D levels are also  linked to metabolic syndrome  and thus weight gain and diabetes.  Abdominal fat in particular is linked to a cluster of diseases, including cancer, and higher vitamin D levels in the blood are linked to lower weight, and strangely - higher educational status.

\r\n

It also may boost intelligence;  deficiency has been linked to cognitive decline  with age.

\r\n

And finally, vitamin D deficiency fits the epidemic profile of autism, and has been proposed as a cause of this disorder [4].

\r\n

Another role of D may be as a form of summer/seasonal signalling hormone, which could explain the apparent link between VDDS, metabolic syndrome, and weight.

\r\n

If you are low on vitamin D, your body is perhaps stuck in some eternal state of fall or winter, suppressing high-energy or risky endeavors and attempting to put on fat.  You are thus not getting the full mileage of your genome.

\r\n

Light in general has benefits beyond vitamin D.  Did you know that total light exposure has a measurable effect on mood?  In fact  bright light therapy  is a treatment for numerous psychiatric disorders.

\r\n

References/Notes:

\r\n

1. Statement on Exercise: Benefits and Recommendations for Physical Activity Programs for All Americans, from the American Heart Association

\r\n

2. My father founded the Vitamin D council in 2003 and is a tireless promoter and advocate for D.  So I may have some bias, but at this point perhaps it's just an inside view, because D's health effects are now widely known and little of this is as controversial as it was just 5 years ago.

\r\n

3. From this  article, which specifically summarizes an important  D cancer intervention trial.

\r\n

4.  Autism and Vitamin D, JJ Cannell, Med Hypotheses. 2008;70(4):750-9. Epub 2007 Oct 24. (see comments below about this controversial journal)

" } }, { "_id": "6DAYbBvT8nXw2HyZR", "title": "The Meaning of Life", "pageUrl": "https://www.lesswrong.com/posts/6DAYbBvT8nXw2HyZR/the-meaning-of-life", "postedAt": "2010-09-17T19:29:01.743Z", "baseScore": 14, "voteCount": 31, "commentCount": 110, "url": null, "contents": { "documentId": "6DAYbBvT8nXw2HyZR", "html": "

Fifteen thousand years ago, our ancestors bred dogs to serve man. In merely 150 centuries, we shaped collies to herd our sheep and pekingese to sit in our emperor's sleeves. Wild wolves can't understand us, but we teach their domesticated counterparts tricks for fun. And, most importantly of all, dogs get emotional pleasure out of serving their master. When my family's terrier runs to the kennel, she does so with blissful, self-reinforcing obedience.

\n

When I hear amateur philosophers ponder the meaning of life, I worry humans suffer from the same embarrassing shortcoming.

\n

It's not enough to find a meaningful cause. These monkeys want to look in the stars and see their lives' purpose described in explicit detail. They expect to comb through ancient writings and suddenly discover an edict reading \"the meaning of life is to collect as many paperclips as possible\" and then happily go about their lives as imperfect, yet fulfilled paperclip maximizers.

\n

I'd expect us to shout \"life is without mandated meaning!\" with lungs full of joy. There are no rules we have to follow, only the consequences we choose for us and our fellow humans. Huzzah!

\n

But most humans want nothing more than to surrender to a powerful force. See Augustine's conception of freedom, the definition of the word Islam, or Popper's \"The Open Society and Its Enemies.\" When they can't find one overwhelming enough, they furrow their brow and declare with frustration that life has no meaning.

\n

This is part denunciation and part confession. At times, I've felt the same way. I worry man is a domesticated species.

\n

I can think of several possible explanations:

\n

1. Evo Psych

\n

Our instincts were formed in an ancient time when not knowing the social norms and kowtowing to the political leaders resulted in literal and/or genetic extinction. Perhaps altruistic humans who served causes other than their own were more likely to survive savannah politics.

\n

2. Signaling

\n

Perhaps we want to signal our capability to put our nose to the grindstone and work for your great cause. Hire me!

\n

3. Memetic Hijacking

\n

Growing up, I was often told to publicly proclaim things like \"Lord, I am not worthy to receive you.\" Perhaps spending years on my knees weakened my ability to choose and complete my own goals.

\n

4. Misplaced Life Dissatisfaction

\n

Perhaps it's easier for an unemployed loser to lament the meaninglessness of life than to actually fix his problems.

\n

The first theory seems plausible. Humans choke to avoid looking too good and standing out from the pack. Our history is full of bows, genuflects and salutes for genocidal a-holes and early death for the noble rebels.

\n

The second seems less likely. Most similar signaling makes people appear as happy, productive workers, not miserable, tortured artists.

\n

The third and fourth explanations fit well with my experiences. My existential angst didn't fade until I purged my brain's religious cobwebs and started improving my life. These things happened at about the same time, so I can't tell whether three or four fits better.

\n

I'd welcome anecdotes in the comments, especially from people raised in a secular environment. If you don't grow up expecting the universe to have meaning, are you ever disappointed to find it is meaningless?

\n

But no matter the cause, \"What is the meaning of life?\" is a question that should be dissolved on sight. It reduces humanity to blind subservience and is an enemy to our instrumental rationality.

\n

Building instrumental rationality may not be the reason why we're on this planet, but it is the reason we're on this website.

" } }, { "_id": "N99KgncSXewWqkzMA", "title": "Compartmentalization in epistemic and instrumental rationality ", "pageUrl": "https://www.lesswrong.com/posts/N99KgncSXewWqkzMA/compartmentalization-in-epistemic-and-instrumental", "postedAt": "2010-09-17T07:02:19.041Z", "baseScore": 124, "voteCount": 95, "commentCount": 123, "url": null, "contents": { "documentId": "N99KgncSXewWqkzMA", "html": "

Related to: Humans are not automatically strategic, The mystery of the haunted rationalist, Striving to accept, Taking ideas seriously

\n

I argue that many techniques for epistemic rationality, as taught on LW, amount to techniques for reducing compartmentalization.  I argue further that when these same techniques are extended to a larger portion of the mind, they boost instrumental, as well as epistemic, rationality.

\n

Imagine trying to design an intelligent mind.

\n

One problem you’d face is designing its goal.  

\n

Every time you designed a goal-indicator, the mind would increase action patterns that hit that indicator[1].  Amongst these reinforced actions would be “wireheading patterns” that fooled the indicator but did not hit your intended goal.  For example, if your creature gains reward from internal indicators of status, it will increase those indicators -- including by such methods as surrounding itself with people who agree with it, or convincing itself that it understood important matters others had missed.  It would be hard-wired to act as though “believing makes it so”. 

\n

A second problem you’d face is propagating evidence.  Whenever your creature encounters some new evidence E, you’ll want it to update its model of  “events like E”.  But how do you tell which events are “like E”? The soup of hypotheses, intuition-fragments, and other pieces of world-model is too large, and its processing too limited, to update each belief after each piece of evidence.  Even absent wireheading-driven tendencies to keep rewarding beliefs isolated from threatening evidence, you’ll probably have trouble with accidental compartmentalization (where the creature doesn’t update relevant beliefs simply because your heuristics for what to update were imperfect).

\n

Evolution, AFAICT, faced just these problems.  The result is a familiar set of rationality gaps:

\n

I.  Accidental compartmentalization

\n

a.  Belief compartmentalization:  We often fail to propagate changes to our abstract beliefs (and we often make predictions using un-updated, specialized components of our soup of world-model).  Thus, learning modus tollens in the abstract doesn’t automatically change your answer to the Wason card test.  Learning about conservation of energy doesn’t automatically change your fear when a bowling ball is hurtling toward you.  Understanding there aren’t ghosts doesn’t automatically change your anticipations in a haunted house. (See Will's excellent post Taking ideas seriously for further discussion).

\n

b. Goal compartmentalization:  We often fail to propagate information about what “losing weight”, “being a skilled thinker”, or other goals would concretely do for us.  We also fail to propagate information about what specific actions could further these goals.  Thus (absent the concrete visualizations recommended in many self-help books) our goals fail to pull our behavior, because although we verbally know the consequences of our actions, we don’t visualize those consequences on the “near-mode” level that prompts emotions and actions.

\n

c.  Failure to flush garbage:  We often continue to work toward a subgoal that no longer serves our actual goal (creating what Eliezer calls a lost purpose).  Similarly, we often continue to discuss, and care about, concepts that have lost all their moorings in anticipated sense-experience.

\n

II.  Reinforced compartmentalization: 

\n

Type 1:   Distorted reward signals. If X is a reinforced goal-indicator (“I have status”; “my mother approves of me”[2]), thinking patterns that bias us toward X will be reinforced.  We will learn to compartmentalize away anti-X information.

\n

The problem is not just conscious wishful thinking; it is a sphexish, half-alien mind that distorts your beliefs by reinforcing motives, angles of approach or analysis, choices of reading material or discussion partners, etc. so as to bias you toward X, and to compartmentalize away anti-X information.

\n

Impairment to epistemic rationality:

\n\n

Impairment to instrumental rationality:

\n\n

Type 2:   “Ugh fields”, or “no thought zones”.  If we have a large amount of anti-X information cluttering up our brains, we may avoid thinking about X at all, since considering X tends to reduce compartmentalization and send us pain signals.  Sometimes, this involves not-acting in entire domains of our lives, lest we be reminded of X.

\n

Impairment to epistemic rationality:

\n\n

Impairment to instrumental rationality:

\n\n

Type 3:   Wireheading patterns that fill our lives, and prevent other thoughts and actions. [3]

\n

Impairment to epistemic rationality:

\n\n

Impairment to instrumental rationality:

\n\n

Strategies for reducing compartmentalization:

\n

A huge portion of both Less Wrong and the self-help and business literatures amounts to techniques for integrating your thoughts -- for bringing your whole mind, with all your intelligence and energy, to bear on your problems.  Many fall into the following categories, each of which boosts both epistemic and instrumental rationality:

\n

1.  Something to protect (or, as Napoleon Hill has it, definite major purpose[4]): Find an external goal that you care deeply about. Visualize the goal; remind yourself of what it can do for you; integrate the desire across your mind.  Then, use your desire to achieve this goal, and your knowledge that actual inquiry and effective actions can help you achieve it, to reduce wireheading temptations.

\n

2.  Translate evidence, and goals, into terms that are easy to understand.  It’s more painful to remember “Aunt Jane is dead” than “Aunt Jane passed away” because more of your brain understands the first sentence.  Therefore use simple, concrete terms, whether you’re saying “Aunt Jane is dead” or “Damn, I don’t know calculus” or “Light bends when it hits water” or “I will earn a million dollars”.  Work to update your whole web of beliefs and goals.

\n

3.  Reduce the emotional gradients that fuel wireheading.  Leave yourself lines of retreat.  Recite the litanies of Gendlin and Tarski; visualize their meaning, concretely, for the task or ugh field bending your thoughts.  Think through the painful information; notice the expected update, so that you need not fear further thought.  On your to-do list, write concrete \"next actions\", rather than vague goals with no clear steps, to make the list less scary.

\n

4.  Be aware of common patterns of wireheading or compartmentalization, such as failure to acknowledge sunk costs.  Build habits, and perhaps identity, around correcting these patterns.

\n

I suspect that if we follow up on these parallels, and learn strategies for decompartmentalizing not only our far-mode beliefs, but also our near-mode beliefs, our models of ourselves, our curiosity, and our near- and far-mode goals and emotions, we can create a more powerful rationality -- a rationality for the whole mind.

\n

 

\n
\n

[1] Assuming it's a reinforcement learner, temporal difference learner, perceptual control system, or similar.

\n

[2] We receive reward/pain not only from \"primitive reinforcers\" such as smiles, sugar, warmth, and the like, but also from many long-term predictors of those reinforcers (or predictors of predictors of those reinforcers, or...), such as one's LW karma score, one's number theory prowess, or a specific person's esteem. We probably wish to regard some of these learned reinforcers as part of our real preferences.

\n

[3] Arguably, wireheading gives us fewer long-term reward signals than we would achieve from its absence. Why does it persist, then?  I would guess that the answer is not so much hyperbolic discounting (although this does play a role) as local hill-climbing behavior; the simple, parallel systems that fuel most of our learning can't see how to get from \"avoid thinking about my bill\" to \"genuinely relax, after paying my bill\".  You, though, can see such paths -- and if you search for such improvements and visualize the rewards, it may be easier to reduce wireheading.

\n

[4] I'm not recommending Napoleon Hill. But even this unusually LW-unfriendly self-help book seems to get most points right, at least in the linked summary.  You might try reading the summary as an exercise in recognizing mostly-accurate statements when expressed in the enemy's vocabulary.

" } }, { "_id": "pEnbZikPF6hwBWnJw", "title": "Open Thread, September, 2010-- part 2", "pageUrl": "https://www.lesswrong.com/posts/pEnbZikPF6hwBWnJw/open-thread-september-2010-part-2", "postedAt": "2010-09-17T01:44:40.616Z", "baseScore": 5, "voteCount": 6, "commentCount": 877, "url": null, "contents": { "documentId": "pEnbZikPF6hwBWnJw", "html": "

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

\n

 

\n


" } }, { "_id": "8n2zZGWebvHcNPRxd", "title": "The conscious tape", "pageUrl": "https://www.lesswrong.com/posts/8n2zZGWebvHcNPRxd/the-conscious-tape", "postedAt": "2010-09-16T19:55:51.997Z", "baseScore": 16, "voteCount": 16, "commentCount": 119, "url": null, "contents": { "documentId": "8n2zZGWebvHcNPRxd", "html": "

This post comprises one question and no answers.  You have been warned.

\n

I was reading \"How minds can be computational systems\", by William Rapaport, and something caught my attention.  He wrote,

\n
\n

Computationalism is - or ought to be - the thesis that cognition is computable ... Note, first, that I have said that computationalism is the thesis that cognition is computable, not that it is computation (as Pylyshyn 1985 p. xiii characterizes it). ... To say that cognition is computable is to say that there is an algorithm - more likely, a collection of interrelated algorithms - that computes it.  So, what does it mean to say that something 'computes cognition'? ... cognition is computable if and only if there is an algorithm ... that computes this function (or functions).

\n
\n

Rapaport was talking about cognition, not consciousness.  The contention between these hypotheses is, however, only interesting if you are talking about consciousness; if you're talking about \"cognition\", it's just a choice between two different ways to define cognition.

\n

When it comes to consciousness, I consider myself a computationalist.  But I hadn't realized before that my explanation of consciousness as computational \"works\" by jumping back and forth between those two incompatible positions.  Each one provides part of what I need; but each, on its own, seems impossible to me; and they are probably mutually exclusive.

\n

Option 1: Consciousness is computed

\n

If consciousness is computed, then there are no necessary dynamics.  All that matters is getting the right output.  It doesn't matter what algorithm you use to get that output, or what physical machinery you use to compute it.  In the real world, it matters how fast you compute it; but surely you can provide a simulated world at the right speed for your slow or fast algorithm.  In humans today, the output is not produced all at once - but from a computationalist perspective, that isn't important.  I know \"emergence\" is wonderful, but it's still Turing-computable.  Whatever a \"correct\" sequence of inputs and outputs is, even if they overlap in time, you can summarize the inputs over time in a single static representation, and the outputs in a static representation.

\n

So what is conscious, in this view?  Well, the algorithm doesn't matter - remember, we're not asking for O(consciousness); we're saying that consciousness is computed, and therefore is the output of a computation.  The machine doing the computing is one step further removed than the algorithm, so it's certainly not eligible as the seat of consciousness; it can be replaced by an infinite number of computationally-equivalent different substrates.

\n

Whatever it is that's conscious, you can compute it and represent it in a static form.  The simplest interpretation is that the output itself is conscious.  So this leads to the conclusion that, if a Turing machine computes consciousness and summarizes its output in a static representation on a tape, the tape is conscious.  Or the information on the tape, or - whatever it is that's conscious, it is a static thing, not a living, dynamic thing.  If computation is an output, process doesn't matter.  Time doesn't enter into it.

\n

The only way out of this is to claim that an output that, when coming out of a dynamic real-time system, is conscious, becomes unconscious when it's converted into a static representation, even if the two representations contain exactly the same information.  (X and Y have the same information if an observer can translate X into Y, and Y into X.  The requirement for an observer may be problematic here.)   This strikes me as not being computationalist at all.  Computationalism means considering two computational outputs equivalent if they contain the same information, whether they're computed with neurons and represented as membrane potentials, or computed with Tinkertoys and represented by rotations of a set of wheels.  Is the syntactic transformation from a dynamic to a static representation a greater qualitative change than the transformation from tinkertoys to neurons?  I don't think so.

\n

Option 2: Consciousness is computation

\n

If consciousness is computation, then we have the satisfying feeling that how we do those computations matters.  But then we're not computationalists anymore!

\n

A computational analysis will never say that one algorithm for producing a series of outputs produces an extra computational effect (consciousness) that another method does not.  If it's not output, or internal representational state, it doesn't count.  There are no other \"by-products of computation\".  If you use a context-sensitive grammar to match a regular expression, it doesn't make the answer more special than if you used a regular grammar.

\n

Don't protest that a human talks and walks and thereby produces side-effects during the computation.  That is not a computational analysis.  A computational analysis will give the same result if you translate whatever the algorithm and machine running it is, onto tape in a Turing machine.  Anything that gives a different result is not a computational analysis.  If these side-effects don't show up on the tape, it's because you forgot to represent them.

\n

An analysis of the actual computation process, as opposed to its output, could be a thermodynamic analysis, which would care about things like how many bits the algorithm erased internally.  I find it hard to believe that consciousness is a particular pattern of entropy production or waste heat.  Or it could be a complexity or runtime analysis, that cared about how long it took.  A complexity analysis has a categorical output; there's no such thing as a function being \"a little bit recursively enumerable\", as I believe there is with consciousness.  So I'd be surprised if \"conscious\" is a property of an algorithm in the same way that \"recursively enumerable\" is.  A runtime analysis can give more quantitative answers, but I'm pretty sure you can't become conscious by increasing your runtime.  (Otherwise, Windows Vista would be conscious.)

\n

Option 3: Consciousness is the result of quantum effects in microtubules

\n

Just kidding.  Option 3 is left as an exercise for the reader, because I'm stuck.  I think a promising angle to pursue would be the necessity of an external observer to interpret the \"conscious tape\".  Perhaps a conscious computational device is one that observes itself and provides its own semantics.  I don't understand how any process can do that; but a static representation clearly can't.

\n

ADDED

\n

Many people are replying by saying, \"Obviously, option 2 is correct,\" then listing arguments for, without addressing the problems with option 2.  That's cheating.

" } }, { "_id": "dqNavQSCJuFWsavdt", "title": "LW's first job ad", "pageUrl": "https://www.lesswrong.com/posts/dqNavQSCJuFWsavdt/lw-s-first-job-ad", "postedAt": "2010-09-16T10:04:42.394Z", "baseScore": 4, "voteCount": 60, "commentCount": 96, "url": null, "contents": { "documentId": "dqNavQSCJuFWsavdt", "html": "

A friend of the Singularity Institute is seeking to hire someone to research trends and surprises in geopolitics, world economics, and technology - a brainstorming, think-tank type job at a for-profit company.  No experience necessary, but strong math and verbal skills required; they're happy to hire out of college and would probably hire out of high school if they find a math-Olympiad type or polymath. This is a job that requires you to think all day and come up with interesting ideas, so they're looking for people who can come up with lots of ideas and criticize them without much external prompting, and enough drive to get their research done without someone standing over their shoulder.  They pay well, and it obviously does not involve sales or marketing. They're interested in Less Wrong readers because rationality skills can help.  Located in San Francisco.  Send résumé and cover letter to yuanshotfirst@gmail.com.  Writing sample optional.

" } }, { "_id": "THdadTRy9d8w5ZhK2", "title": "Bayes' rule =/= Bayesian inference", "pageUrl": "https://www.lesswrong.com/posts/THdadTRy9d8w5ZhK2/bayes-rule-bayesian-inference", "postedAt": "2010-09-16T06:34:08.815Z", "baseScore": 49, "voteCount": 43, "commentCount": 70, "url": null, "contents": { "documentId": "THdadTRy9d8w5ZhK2", "html": "

Related to: Bayes' Theorem Illustrated, What is Bayesianism?, An Intuitive Explanation of Bayes' Theorem

\n

(Bayes' theorem is something Bayesians need to use more often than Frequentists do, but Bayes' theorem itself isn't Bayesian. This post is meant to be a light introduction to the difference between Bayes' theorem and Bayesian data analysis.)

\n

Bayes' Theorem

\n

Bayes' theorem is just a way to get (e.g.) p(B|A) from p(A|B) and p(B).  The classic example of Bayes' theorem is diagnostic testing.  Suppose someone either has the disease (D+) or does not have the disease (D-) and either tests positive (T+) or tests negative (T-).  If we knew the sensitivity P(T+|D+), specificity P(T-|D-) and disease prevalence P(D+), then we could get the positive predictive value P(D+|T+) using Bayes' theorem:

\n

For example, suppose we know the sensitivity=0.9, specificity=0.8 and disease prevalence is 0.01.  Then,

\n


\"\"\"\"

\n

This answer is not Bayesian or frequentist; it's just correct.

\n

Diagnostic testing study

\n

Typically we will not know P(T+|D+) or P(T-|D-).  We would consider these unknown parameters.  Let's denote them by Θsens and Θspec.  For simplicity, let's assume we know the disease prevalence P(D+) (we often have a lot of data on this). 

\n

Suppose 1000 subjects with the disease were tested, and 900 of them tested positive.  Suppose 1000 disease-free subjects were tested and 200 of them tested positive.  Finally, suppose 1% of the population has the disease.

\n

Frequentist approach

\n

Estimate the 2 parameters (sensitivity and specificity) using their sample values (sample proportions) and plug them in to Bayes' formula above.  This results in a point estimate for P(D+|T+) of 0.043.  A standard error or confidence interval could be obtained using the delta method or bootstrapping.
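
For concreteness, here is a minimal sketch in Python (the exact bootstrap scheme is an assumption, since none is specified above) of the plug-in estimate and an approximate interval:

```python
# Plug-in (frequentist) estimate of P(D+|T+) from the study counts above,
# plus a simple parametric bootstrap for an approximate 95% interval.
import numpy as np

rng = np.random.default_rng(0)
prev = 0.01  # assumed-known disease prevalence

def ppv(sens, spec):
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

print(ppv(900 / 1000, 800 / 1000))  # point estimate, ~0.043

# Resample the two binomial samples at their observed proportions.
boot_sens = rng.binomial(1000, 0.9, 10_000) / 1000
boot_spec = rng.binomial(1000, 0.8, 10_000) / 1000
print(np.percentile(ppv(boot_sens, boot_spec), [2.5, 97.5]))  # approx. 95% CI
```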

\n

Even though Bayes' theorem was used, this is not a Bayesian approach.

\n

Bayesian approach

\n

The Bayesian approach is to specify prior distributions for all unknowns.  For example, we might specify independent uniform(0,1) priors for Θsens and Θspec.  However, we should expect the test to do at least as well as guessing (guessing would mean randomly selecting 1% of people and calling them T+). In addition, we expect Θsens>1-Θspec. So, I might go with a Beta(4,2.5) distribution for Θsens and Beta(2.5,4) for Θspec:

\n

\"\"\"\"

\n

Using these priors + the data yields a posterior distribution for P(D+|T+) with posterior median 0.043 and 95% credible interval (0.038, 0.049).  In this case, the Bayesian and frequentist approaches have the same results (not surprising since the priors are relatively flat and there are a lot of data).  However, the methodology is quite different.
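
A minimal Monte Carlo sketch of this analysis in Python (the exact computation used above isn't given, but the Beta priors are conjugate to the binomial counts, so the posteriors can be sampled directly):

```python
# Posterior for P(D+|T+) given Beta(4, 2.5) / Beta(2.5, 4) priors and the data
# (900/1000 positives among diseased, 200/1000 among disease-free, prevalence 1%).
import numpy as np

rng = np.random.default_rng(0)
prev = 0.01

# Conjugacy: Beta prior + binomial data -> Beta posterior.
sens = rng.beta(4 + 900, 2.5 + 100, 100_000)   # draws of theta_sens
spec = rng.beta(2.5 + 800, 4 + 200, 100_000)   # draws of theta_spec

# Push each draw through Bayes' theorem to get a posterior for P(D+|T+).
ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))

print(np.median(ppv))                    # ~0.043
print(np.percentile(ppv, [2.5, 97.5]))   # ~(0.038, 0.049)
```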

\n

Example that illustrates benefit of Bayesian data analysis

\n

(example edited to focus on credible/confidence intervals)

\n

Suppose someone shows you what looks like a fair coin (you confirm heads on one side, tails on the other) and makes the claim:  \"This coin will land with heads up 90% of the time\"

\n

Suppose the coin is flipped 5 times and lands with heads up 4 times.

\n

Frequentist approach

\n

\"A 95% confidence interval for the Binomial parameter is (.38, .99) using the Agresti-Coull method.\"  Because 0.9 is within the confidence limits, the usual conclusion would be that we do not have enough evidence to rule it out.

\n

Bayesian approach

\n

\"I don't believe you.  Based on experience and what I know about the laws of physics, I think it's very unlikely that your claim is accurate. I feel very confident that the probability is close to 0.5.  However, I don't want to rule out something a little bit unusual (like a probability of 0.4).  Thus, my prior for the probability of heads is a Beta(30,30) distribution.\"

\n

\"\"

\n

After seeing the data, we update our belief about the binomial parameter.  The 95% credible interval for it is (0.40, 0.64).  Thus, a value of 0.9 is still considered extremely unlikely.
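
A minimal sketch of both computations in Python with SciPy (the Agresti-Coull variant shown is an assumption and may differ slightly from the one quoted above):

```python
# Frequentist Agresti-Coull interval vs. Bayesian Beta(30,30)-prior credible
# interval for the coin that came up heads 4 times in 5 flips.
import numpy as np
from scipy import stats

heads, flips = 4, 5

# Agresti-Coull: add z^2/2 pseudo-successes and pseudo-failures, then use Wald.
z = stats.norm.ppf(0.975)
n_adj = flips + z**2
p_adj = (heads + z**2 / 2) / n_adj
half_width = z * np.sqrt(p_adj * (1 - p_adj) / n_adj)
print(p_adj - half_width, p_adj + half_width)  # roughly (0.36, 0.98); the post quotes (0.38, 0.99)

# Bayesian: Beta(30, 30) prior + 4 heads, 1 tail -> Beta(34, 31) posterior.
posterior = stats.beta(30 + heads, 30 + (flips - heads))
print(posterior.ppf([0.025, 0.975]))           # roughly (0.40, 0.64)
```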

\n

This illustrates the idea that, from a Bayesian perspective, implausible claims require more evidence than plausible claims.  Frequentists have no formal way of including that type of prior information.
\"\"\"\"

" } }, { "_id": "n3B89rPxcBBLG5D6e", "title": "Perfect procrastination", "pageUrl": "https://www.lesswrong.com/posts/n3B89rPxcBBLG5D6e/perfect-procrastination", "postedAt": "2010-09-16T00:35:01.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "n3B89rPxcBBLG5D6e", "html": "

Perfectionism is often blamed for procrastination. John Perry explains:

\n

Many procrastinators do not realize that they are perfectionists, for the simple reason that they have never done anything perfectly, or even nearly so… Perfectionism is a matter of fantasy, not reality. Here’s how it works in my case.  I am assigned some task, say, refereeing a manuscript for a publisher… Immediately my fantasy life kicks in. I imagine myself writing the most wonderful referees report. I imagine giving the manuscript an incredibly thorough read, and writing a report that helps the author to greatly improve their efforts.  I imagine the publisher getting my report and saying, “Wow, that is the best referee report I have ever read.” I imagine my report being completely accurate, completely fair, incredibly helpful to author and publisher.

\n

At first Perry seems to suggest that the perfectionist tries to do the job too well and gets sidetracked on tangential stepping stones, which doesn’t sound accurate. Then he says:

\n

Procrastinating was a way of giving myself permission to do a less than perfect job on a task that didn’t require a perfect job. As long as the deadline was a ways away, then, in theory, I had time to go the library, or set myself up for a long evening at home, and do a thorough, scholarly, perfect job refereeing this book. But when the deadline is near, or even a bit in the past, there is no longer time to do a perfect job. I have to just sit down and do an imperfect, but adequate job.

\n

But why would you be so willing to give yourself permission to do something else unproductive for six weeks so you will have to do an imperfect job if you aren’t willing to permit an imperfect job straight off?  If you’re such a perfectionist, wouldn’t you want to get started straight away and do a perfect job?

\n

Here’s an alternative theory. The link between procrastination and perfectionism has to do with construal level theory. When you picture getting started straight away the close temporal distance puts you in near mode, where you see all the detailed impediments to doing a perfect job. When you think of doing the task in the future some time, trade-offs and barriers vanish and the glorious final goal becomes more vivid. So it always seems like you will do a great job in the future, whereas right now progress is depressingly slow and complicated. This makes doing it in the future seem all the more of a good option if you are obsessed with perfection.

\n

Relatedly, similar tasks designed to prompt far mode increase procrastination over those designed to prompt near mode (summary, gated paper). Perhaps you mostly feel the contrast if you start in far mode, since to do the task you must eventually edge closer to near mode? If you start in near mode you can stay there. The kind of trade-off-free perfectionistic fantasizing Perry describes sounds like introducing all tasks to oneself in far mode. I have no time to think about it further; I must return to turning pages and making squiggles with my inky purple pen.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "RE29oZchhDTmG5Kgp", "title": "Intelligence Amplification Open Thread", "pageUrl": "https://www.lesswrong.com/posts/RE29oZchhDTmG5Kgp/intelligence-amplification-open-thread", "postedAt": "2010-09-15T08:39:24.609Z", "baseScore": 62, "voteCount": 54, "commentCount": 346, "url": null, "contents": { "documentId": "RE29oZchhDTmG5Kgp", "html": "

A place to discuss potentially promising methods of intelligence amplification in the broad sense of general methods, tools, diets, regimens, or substances that boost cognition (memory, creativity, focus, etc.): anything from SuperMemo to Piracetam to regular exercise to eating lots of animal fat to binaural beats, whether it works or not. Where's the highest expected value? What's easiest to make part of your daily routine? Hopefully discussion here will lead to concise top level posts describing what works for a more self-improvement-savvy Less Wrong.

\n

Lists of potential interventions are great, but even better would be a thorough analysis of a single intervention: costs, benefits, ease, et cetera. This way the comment threads will be more structured and organized. Less Wrong is pretty confused about IA, so even if you're not an expert, a quick analysis or link to a metastudy about e.g. exercise could be very helpful.

\n

Added: Adam Atlas is now hosting an IA wiki: BetterBrains! Bookmark it, add to it, make it awesome.

" } }, { "_id": "5FyBJqeymo5wFn6Sy", "title": "Link: \"You're Not Being Reasonable\"", "pageUrl": "https://www.lesswrong.com/posts/5FyBJqeymo5wFn6Sy/link-you-re-not-being-reasonable", "postedAt": "2010-09-15T07:19:46.694Z", "baseScore": 20, "voteCount": 15, "commentCount": 9, "url": null, "contents": { "documentId": "5FyBJqeymo5wFn6Sy", "html": "

Thanks to David Brin, I've discovered a blogger, Michael Dobson, who has written, among other things, a fourteen-part series on cognitive biases. But that's not what I'm linking to today.

\n

This is what I'm linking to:

\n

You're Not Being Reasonable

\n
\n

I’m embarrassed to admit that I’ve been getting myself into more online arguments about politics and religion lately, and I’m not happy with either my own behavior or others. All the cognitive biases are on display, and hardly anyone actually speaks to the other side. Unreasonableness is rampant.

\n

The problem is that what’s reasonable tends to be subjective. Obviously, I’m going to be biased toward thinking people who agree with me are more reasonable than those lunkheads who don’t. But that doesn’t mean there aren’t objective standards for being reasonable.

\n

...

\n

I learned some of the following through observation, and most of it through the contrary experience of doing it wrong. You’ve heard some of the advice elsewhere, but a reminder every once in a while comes in handy.

\n
\n

Yes, much of it is pretty basic stuff, but as he says, a reminder every once in a while comes in handy, and this is as good a summary of the rules for having a reasonable discussion as I've seen anywhere.

\n

And the rest of the blog seems pretty good, too. (Did I mention the fourteen-part series on cognitive biases?)

" } }, { "_id": "uFYQaGCRwt3wKtyZP", "title": "Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality", "pageUrl": "https://www.lesswrong.com/posts/uFYQaGCRwt3wKtyZP/self-improvement-or-shiny-distraction-why-less-wrong-is-anti", "postedAt": "2010-09-14T16:17:55.826Z", "baseScore": 157, "voteCount": 175, "commentCount": 261, "url": null, "contents": { "documentId": "uFYQaGCRwt3wKtyZP", "html": "

Introduction

\n

Less Wrong is explicitly intended to help people become more rational.  Eliezer has posted that rationality means epistemic rationality (having & updating a correct model of the world), and instrumental rationality (the art of achieving your goals effectively).  Both are fundamentally tied to the real world and our performance in it - they are about ability in practice, not theoretical knowledge (except inasmuch as that knowledge helps ability in practice).  Unfortunately, I think Less Wrong is a failure at instilling abilities-in-practice, and designed in a way that detracts from people's real-world performance.

\n

It will take some time, and it may be unpleasant to hear, but I'm going to try to explain what LW is, why that's bad, and sketch what a tool to actually help people become more rational would look like.

\n

(This post was motivated by Anna Salamon's Humans are not automatically strategic and the response to it; more detailed background in footnote [1].)

\n

Update / Clarification in response to some comments: This post is based on the assumption that a) the creators of Less Wrong wish Less Wrong to result in people becoming better at achieving their goals (instrumental rationality, aka \"efficient productivity\"), and b) some (perhaps many) readers read it towards that goal.  It is this that I think is self-deception.  I do not dispute that LW can be used in a positive way (read during fun time instead of the NYT or funny pictures on Digg), or that it has positive effects (exposing people to important ideas they might not see elsewhere).  I merely dispute that reading fun things on the internet can help people become more instrumentally rational.  Additionally, I think instrumental rationality is really important and could be a huge benefit to people's lives (in fact, is by definition!), and so a community value that \"deliberate practice towards self-improvement\" is more valuable and more important than \"reading entertaining ideas on the internet\" would be of immense value to LW as a community - while greatly decreasing the importance of LW as a website.

\n

Why Less Wrong is not an effective route to increasing rationality.

\n

Definition:

\n

Work: time spent acting in an instrumentally rational manner, ie forcing your attention towards the tasks you have consciously determined will be the most effective at achieving your consciously chosen goals, rather than allowing your mind to drift to what is shiny and fun.

\n

By definition, Work is what (instrumental) rationalists wish to do more of.  A corollary is that Work is also what is required in order to increase one's capacity to Work.  This must be true by the definition of instrumental rationality - if it's the most efficient way to achieve one's goals, and if one's goal is to increase one's instrumental rationality, doing so is most efficiently done by being instrumentally rational about it. [2]

\n

That was almost circular, so to add meat, you'll notice in the definition an embedded assumption that the \"hard\" part of Work is directing attention - forcing yourself to do what you know you ought to instead of what is fun & easy.  (And to a lesser degree, determining your goals and the most effective tasks to achieve them).  This assumption may not hold true for everyone, but with the amount of discussion of \"Akrasia\" on LW, the general drift of writing by smart people about productivity (Paul Graham: Addiction, Distraction, Merlin Mann: Time & Attention), and the common themes in the numerous productivity/self-help books I've read, I think it's fair to say that identifying the goals and tasks that matter and getting yourself to do them is what most humans fundamentally struggle with when it comes to instrumental rationality.

\n

Figuring out goals is fairly personal, often subjective, and can be difficult.  I definitely think the deep philosophical elements of Less Wrong and its contributions to epistemic rationality [3] are useful to this, but (like psychedelics) the benefit comes from small occasional doses of the good stuff.  Goals should be re-examined regularly but only occasionally (roughly yearly, and at major life forks).  An annual retreat with a mix of close friends and distant-but-respected acquaintances (Burning Man, perhaps) will do the trick - reading a regularly updated blog is way overkill.

\n

And figuring out tasks, once you turn your attention to it, is pretty easy.  Once you have explicit goals, just consciously and continuously examining whether your actions have been effective at achieving those goals will get you way above the average smart human at correctly choosing the most effective tasks.  The big deal here for many (most?) of us, is the conscious direction of our attention.

\n

What is the enemy of consciously directed attention?  It is shiny distraction.  And what is Less Wrong?  It is a blog, a succession of short fun posts with comments, most likely read when people wish to distract or entertain themselves, and tuned for producing shiny ideas which successfully distract and entertain people.  As Merlin Mann says: \"Joining a Facebook group about creative productivity is like buying a chair about jogging\".  Well, reading a blog to overcome akrasia IS joining a Facebook group about creative productivity.  It's the opposite of this classic piece of advice.

\n

Now, I freely admit that this argument is relatively brief and minimally supported compared to what a really good, solid argument about exactly how to become more rational would be.  This laziness is deliberate, conscious, and a direct expression of my beliefs about the problem with LW.  I believe that most people, particularly smart ones, do way too much thinking & talking and way too little action (me included), because that is what's easy for them [4].

\n

What I see as a better route is to gather those who will quickly agree, do things differently, (hopefully) win and (definitely) learn.  Note that this general technique has a double advantage: the small group gets to enjoy immediate results, and when the time comes to change minds, they have the powerful evidence of their experience.  It also reduces the problem that the stated goal of many participants (\"get more rational\") may not be their actual goal (\"enjoy the company of rationalists in a way which is shiny fun, not Work\"), since the call to action will tend to select for those who actually desire self-improvement.  My hope is that this post and the description below of what actual personal growth looks like inspire one or more small groups to form.

\n

Less Wrong: Negative Value, Positive Potential

\n

Unfortunately, in this framework, Less Wrong is probably of negative value to those who really want to become more rational.  I see it as a low-ROI activity whose shininess is tuned to attract the rationality community, and thus serves as the perfect distraction (rationality porn, rationality opium).  Many (most?) participants are allowing LW to grab their attention because it is fun and easy, and thus simultaneously distracting themselves from Work (reducing their overall Work time) while convincing themselves that this distraction is helping them to become more rational.  This reduces the chance that they will consciously Work towards rationality, since they feel they are already working towards that goal with their LW reading time. (Adding [4.5] in response to comments).

\n

(Note that from this perspective, HP&TMoR is a positive - people know reading fanfic is entertainment, and being good enough entertainment to displace people's less educational alternative entertainments while teaching a little rationality increases the overall level of rationality.  The key is that HP&TMoR is read in \"fun time\", while I believe most people see LW time as \"work towards self-improvement\" time.  Ironic, but true for me and the friends I've polled, at least)

\n

That said, the property of shininess-to-rationalists has resulted in a large community of rationalists, which makes LW potentially a great resource for actual training of people's individual rationality.  And while catalyzing Work is much harder than getting positive feedback, I do find it heart-warming and promising that I have consistently received positive feedback from the LW community by pointing out its errors.  This is a community that wants to self-correct - which is unfortunately rare and a necessary though not sufficient criterion for improvement.

\n

This is taking too long to write [5], and we haven't even gotten to the constructive part, so I'm going to assume that if you are still with me you no longer need as detailed arguments and I can go faster.

\n

Some Observations On What Makes Something Useful For Self-Improvement

\n

My version: Growth activities are Work, and hence feel like work, not fun - they involve overriding your instincts, not following them.  Any growth you can get from following your instincts, you have probably had already.  And consciously directing your attention is not something that can be trained by being distracted (willpower is a muscle, you exercise it by using it).  Finding the best tasks to achieve your goals is not practiced by doing whatever tasks come to mind.  And so forth.  You may experience flow states once your attention is focused where it should be, but unless you have the incredible and rare fortune to have what is shiny match up with what is useful, the act of starting and maintaining focus and improving your ability to do so will be hard work.

\n

The academic version: The literature on skill development (\"acquisition of expertise\") says that it involves \"deliberate practice\".  The same is very likely true of acquiring expertise in rationality.  The 6 tenets of deliberate practice are that it:

\n
    \n
  1. Is not inherently enjoyable.
  2. Is not play or paid practice.
  3. Is relevant to the skill being developed.
  4. Is not simply watching the skill being performed.
  5. Requires effort and attention from the learner.
  6. Often involves activities selected by a coach or teacher to facilitate learning.
\n

One must stretch quite a bit to fit these to \"reading Less Wrong\" - it's just too shiny and fun to be useful.  One can (and must) enjoy the results of practice, but if the practice itself doesn't take effort, you are going to plateau fast.  (I want to be clear, BTW, that I am not making a Puritan fallacy of equating effort and reward [6]).  Meditation is a great example of an instrumental rationality practice: it is a boring, difficult isolation exercise for directing and noticing the direction of one's attention.  It is Work.

\n

What Would A Real Rationality Practice Look Like?

\n

Eliezer has used the phrase \"rationality dojo\", which I think has many correct implications:

\n
    \n
  1. It is a group of people who gather in person to train specific skills.
  2. While there are some theoreticians of the art, most people participate by learning it and doing it, not theorizing about it.
  3. Thus the main focus is on local practice groups, along with the global coordination to maximize their effectiveness (marketing, branding, integration of knowledge, common infrastructure).  As a result, it is driven by the needs of the learners.
  4. You have to sweat, but the result is you get stronger.
  5. You improve by learning from those better than you, competing with those at your level, and teaching those below you.
  6. It is run by a professional, or at least someone getting paid for their hobby.  The practitioners receive personal benefit from their practice, in particular from the value-added of the coach, enough to pay for talented coaches.
\n

In general, a real rationality practice should feel a lot more like going to the gym, and a lot less like hanging out with friends at a bar.

\n

To explain the ones that I worry will be non-obvious:

\n

1) I don't know why an in-person group is important, but it seems to be - all the people who have replied to me so far saying they get useful rationality practice out of the LW community said the growth came through attending local meetups (example).  We can easily invent some evolutionary psychology story for this, but it doesn't matter why; at this point it's enough to just know.

\n

6) There are people who can do high-quality productive work in their spare time, but in my experience they are very rare.  It is very pleasant to think that \"amateurs can change the world\" because then we can fantasize about ourselves doing it in our spare time, and it even happens occasionally, which feeds that fantasy, but I don't find it very credible.  I know we are really smart and there are memes in our community that rationalists are way better than everyone else at everything, but frankly I find the idea that people writing blog posts in their spare time will create a better system than trained professionals for improving one's ability to achieve one's goals to be ludicrous.  I know some personal growth professionals, and they are smart too, and they have had years of practice and study to develop practical experience.  Talk is cheap, as is time spent reading blogs: if people actually value becoming more rational, they will pay for it, and if there are good teachers, they will be worth being paid.  Money is a unit of learning [7].

\n

There are some other important aspects which such a practice would have that LW does not:

\n
    \n
  1. The accumulation of knowledge.  Blogs are inherently rewarding: people read what is recent, so you get quick feedback on posts and comments.  However, they are inherently ephemeral for the same reason - people read what is recent, and posts are never substantially revised.  The voting system helps a little, but it can't come close to fixing the underlying structure.  To be efficient, much less work should go into ephemeral posts, and much more into accumulating and revising a large, detailed, nuanced body of knowledge (this is exactly the sort of \"work not fun\" activity that you can get by paying someone, but are unlikely to get when contributors are volunteers).  In theory, this could happen on the Wiki, but in practice I have rarely seen Wikis succeed at this (with the obvious exception of Le Wik).
  2. It would involve more literature review and pointers to existing work.  The obvious highest-ROI way to start working on improving instrumental rationality is to research and summarize the best existing work for self-improvement in the directions that LW values, not to reinvent the wheel.  Yes, over time LW should produce original work and perhaps eventually the best such work, but the existing work is not so bad that it should just be ignored.  Far from it!  In reference to (1), perhaps this should be done by creating a database of reviews and ratings by LWers of the top-rated self-improvement books, perhaps eventually with ratings taking into account the variety of skills people are seeking and ways in which they optimally learn.
  3. It would be practical - most units of information (posts, pages, whatever) would be about exercises or ideas that one could immediately apply in one's own life.  It would look less like most LW posts (abstract, theoretical, focused on chains of logic), and more like Structured Procrastination, the Pmarca Guide To Personal Productivity, books like Eat That Frog!, Getting Things Done, and Switch [8].  Most discussion would be about topics like those in Anna's post - how to act effectively, what things people have tried, what worked, what didn't, and why.  More learning through empiricism, less through logic and analysis.
\n

In forming such a practice, we could learn from other communities which have developed a new body of knowledge about a set of skills and disseminated it with rapid scaling within the last 15 years.  Two I know about and have tangentially participated in are

\n
    \n
  1. PUA (how to pick up women).  In fact, a social skills community based on PUA was suggested on LW a few days ago - (glad to see that others are interested in practice and not just talk!)
  2. CrossFit (synthesis of the best techniques for time-efficient broad-applicability fitness)
\n

Note that both involve most of my suggested features (PUA has some \"reading not doing\" issues, but it's far ahead of LW in having an explicit cultural value to the contrary - for example, almost every workshop features time spent \"in the field\").  One feature of PUA in particular I'd like to point out is the concept of the \"PUA lair\" - a group of people living together with the explicit intention of increasing their PUA skills.  As the lair link says: \"It is highly touted that the most proficient and fastest way to improve your skills is to hang out with others who are ahead of you, and those whose goals for improvement mirror your own.\" [9]

\n

Conclusion

\n

If LW is to accomplish its goal of increasing participants' instrumental rationality, it must dramatically change form.  One of the biggest elements of instrumental rationality, perhaps the biggest, is the ability to direct one's attention, and a rationality blog makes people worse at this by distracting their attention in a way that is accepted by their community and that they will feel is useful.  From The War Of Art [10]:

\n
\n

Often couples or close friends, even entire families, will enter into tacit compacts whereby each individual pledges (unconsciously) to remain mired in the same slough in which she and all her cronies have become so comfortable.  The highest treason a crab can commit is to make a leap for the rim of the bucket.

\n
\n

To aid growth at rationality, Less Wrong would have to become a skill practice community, more like martial arts, PUA, and physical fitness, with an explicit focus of helping people grow in their ability to set and achieve goals, combining local chapters with global coordination, infrastructure, and knowledge accumulation.  Most discussion should be among people working on a specific skill at a similar level about what is or isn't working for them as they attempt to progress, rather than obscure theories about the inner workings of the human mind.

\n

Such a practice and community would look very different, but I believe it would have a far better chance to actually make people more rational [11].  There would be danger of cultism and the religious fervor/\"one true way\" that self-help movements sometimes have (Landmark), and I wonder if it's a profound distaste for anything remotely smelling of cult that has led Eliezer & SIAI away from this path.  But the opposite of cult is not growth, it is to continue being an opiate for rationalists, a pleasant way of making the time pass that feels like work towards growth and thus feeds people's desire for guiltless distraction.

\n

To be growth, we must do work, people must get paid, we must gather in person, focus on action not words, put forth great effort over time to increase our capacity, use peak experiences to knock people loose from ingrained patterns, and copy these and much more from the skill practice communities of the world.  Developed by non-rationalists, sure, but the ones that last are the ones that work [12] - let's learn from their embedded knowledge.

\n

Addendum

\n

That was 5 hours of my semi-Work time, so I really hope it wasn't wasted, and that some of you not only listen but take action.  I don't have much free time for new projects, but if people want to start a local rationality dojo in Mountain View/Sunnyvale, I'm in.  And there is already talk, among some reviewers of this draft, of putting together an introductory workshop.  Time will tell - and the next step is up to you.

\n

Footnotes

\n

[1] Anna Salamon posted Humans are not automatically strategic, a reply to the very practical A \"Failure to Evaluate Return-on-Time\" Fallacy.  Anna's post laid out a nice rough map of what an instrumentally rational process for goal achievement would look like (consciously choosing goals, metrics, researching solutions, experimenting with implementing them, balancing exploration & exploitation - the basic recipe for success at anything), said she was keen to train this, and asked:

\n
\n

So, to second Lionhearted's questions: does this analysis seem right?  Have some of you trained yourselves to be substantially more strategic, or goal-achieving, than you started out?  How did you do it?  Do you agree with (a)-(h) above?  Do you have some good heuristics to add?  Do you have some good ideas for how to train yourself in such heuristics?

\n
\n

After reading the comments, I made a comment which began:

\n
\n

I'm disappointed at how few of these comments, particularly the highly-voted ones, are about proposed solutions, or at least proposed areas for research. My general concern about the LW community is that it seems much more interested in the fun of debating and analyzing biases, rather than the boring repetitive trial-and-error of correcting them.

\n
\n

Anna's post was upvoted into the top 10 all-time on LW in a couple days, and my comment quickly became the top on the post by a large margin, so both her agenda and my concern seem to be widely shared.  While I rarely take the time to write LW posts (as you would expect from someone who believes LW is not very useful), this feedback gave me hope that there might be enough untapped desire for something more effective that a post might help catalyze enough change to be worthwhile.

\n

[2] There are many other arguments as to why improving one's ability to do work is unlikely to be fun and easy, of course.  With a large space of possible activities, and only a loose connection between \"fun\" and \"helps you grow\" (via evolutionary biology), it seems a priori unlikely that fun activities will overlap with growthful ones.  And we know that a general recipe for getting better at X is to do X, so if one wants to get better at directing one's attention to the most important tasks and goals, it seems very likely that one must practice directing one's attention.  Furthermore, there is evidence that, specifically, willpower is a muscle.  So the case for growing one's instrumental rationality through being distracted by an entertaining rationality blog is...awfully weak.

\n

[3] What are the most important problems in the world?  Who is working most effectively to fix them and how can you help?  Understanding existential risks is certainly not easy, and important to setting that portion of your goals that has to do with helping the world - which is a minor part of most people's goals, which are about their own lives and self-interest.

\n

[4] I also believe the least effective form of debate is trying to get people to change their minds.  Therefore, an extensive study and documentation to create a really good, solid argument trying to change the minds of LWers who don't quickly agree with my argument sketch would be a very low-return activity compared to getting together those who already agree and doing an experiment.  And instrumental rationality is about maximizing the return on your activities, given your goals, so I try to avoid low-return activities.

\n

[4.5] A number of commenters state that they consciously read LW during fun time, or read it to learn about biases and existential risk, not to become more rational, in which case it is likely of positive value.  If you have successfully walled off your work from shiny distractions, then you are advanced in the ways of attention and may be able to use this particular drug without negative effects, and I congratulate you.  If you are reading it to learn about topics of interest to rationalists and believe that you will stop there and not let it affect your productivity, just be warned that many an opiate addiction has begun with a legitimate use of painkillers.

\n

\n

Or to go back to Merlin's metaphor: If you buy a couch to sit on and watch TV, there's nothing wrong with that.  You might even see a sports program on TV that motivates you to go jogging.  Just don't buy the couch in order to further your goal of physical fitness.  Or claim that couch-buyers are a community of people committed to becoming more fit, because they sometimes watch sports shows and sometimes get outside.  Couch-buyers are a community of people who sit around - even if they watch sports programs.  Real runners buy jogging shoes, sweat headbands, GPS route trackers, pedometers, stopwatches...

\n

\n

[5] 1.5 hrs so far.  Time tracking is an important part of attention management - if you don't know how your time is spent, it's probably being spent badly.

\n

[6] Specifically, I am not saying that growth is never fun, or that growth is proportional to effort, only that there are a very limited number of fun ways to grow (taking psychedelics at Burning Man with people you like and respect) and you've probably done them all already.  If you haven't, sure, of course you should do them, and yes, of course discovering & cataloging such things is useful, but there really aren't very many so if you want to continue to grow you need to stop fooling yourself that reading a blog will do it and get ready to make some effort.

\n

[7] Referencing Eliezer's great Money: The Unit of Caring, of course.  I find it ironic that he understands basic economics intellectually so well as to make one of the most eloquent arguments for donating money instead of time that I've ever seen, yet seems to be trying to create a rationality improvement movement without, as far as I can tell, involving any specialists in the art of human change or growth.  That is, using the method that grownups use.  What you do when you want something to actually get done.  You use money to employ full-time specialists.

\n

[8] I haven't actually read this one yet, but their other book, Made To Stick, was an outstanding study of memetic engineering so I think it very likely that their book on habit formation is good too.

\n

[9] Indeed.  I happen to have a background of living in and founding intentional communities (Tortuga!), and in fact currently rent rooms to LWers Divia and Nick Tarleton, so I can attest to the value of one's social environment and personal growth goals being synchronized.  Benton House is likely an example as well.  Groups of rationalists living together will automatically practice, and have that practice reinforced by their primate desire for status within the group; this is almost surely the fastest way to progress, although it is not required or suited to everyone.

\n

[10] The next paragraph explains why I do my best not to spend much time here:

\n
\n

The awakening artist must be ruthless, not only with herself but with others.  Once you make your break, you can’t turn around for your buddy who catches his trouser leg on the barbed wire.  The best thing you can do for that friend (and he’d tell you this himself, if he really is your friend) is to get over the wall and keep motating.

\n
\n

Although I suppose I am violating the advice by turning around and giving a long speech about why everyone else should make a break too :).  My theory is that by saying it right once, I can refrain from wasting any more time saying it again in the future, should this attempt not work.  But that may just be rationalizing.  On the other hand, doing things \"well or not at all\" is rational in situations where the return curve is steep.  Given my low evaluation of LW's usefulness, I obviously think the early part of the return curve is basically flat zero.  We will see if it is hubris to think the right post can really make a difference, and that I can make that post.  Certainly plenty of opportunity for bias in both those statements.

\n

[11] Note that helping people become personally more effective is a much easier meme to spread than helping people better understand how to contribute to public goods (ie how to better understand efficient charity and existential risk).  They have every incentive to do the former and little incentive to do the latter.  So training people in general goal achievement (instrumental rationality) is likely to have far broader appeal and reach far more people than training them in the aspects of epistemic rationality that SIAI is most interested in.  This large community who have grown through the individually beneficial part of the philosophy is then a great target market for the societally beneficial part of the philosophy.  (A classic one-two punch used by spiritual groups, of course: provide value then teach values.  It works.  If rationalists do what works...)  I've been meaning to make a post on the importance of personal benefit to spreading memes for a while; this paragraph will have to do for now...

\n

[12] And the ones with good memetic engineering, including use of the Dark Arts.  Many difficult decisions will need to be made about what techniques are and aren't Dark Arts and which are worth using anyway.  The fact remains that just like a sports MVP is almost certainly both more skilled and more lucky than his peers, a successful self-help movement is almost certainly both more effective at helping people and better memetically engineered than its peers.  So copy - but filter.

" } }, { "_id": "bMNf3SgDqnXKQjZag", "title": "Cambridge Less Wrong Meetup Sunday, Sep 19", "pageUrl": "https://www.lesswrong.com/posts/bMNf3SgDqnXKQjZag/cambridge-less-wrong-meetup-sunday-sep-19", "postedAt": "2010-09-14T14:10:07.482Z", "baseScore": 4, "voteCount": 5, "commentCount": 10, "url": null, "contents": { "documentId": "bMNf3SgDqnXKQjZag", "html": "

We're still doing Cambridge/Boston-area Less Wrong meetups on the third Sunday of every month, 4pm at the Clear Conscience Cafe (C3) near the Central Square T station. Several people have indicated they'll be coming for the first time, so we should get a better turnout than in the past. I'll put a Less Wrong sign out on our table. All are welcome to attend, and I look forward to seeing you there!

" } }, { "_id": "ZLvvvB2EKgDbwgCqq", "title": "The Affect Heuristic, Sentiment, and Art", "pageUrl": "https://www.lesswrong.com/posts/ZLvvvB2EKgDbwgCqq/the-affect-heuristic-sentiment-and-art", "postedAt": "2010-09-13T23:05:29.482Z", "baseScore": 86, "voteCount": 74, "commentCount": 58, "url": null, "contents": { "documentId": "ZLvvvB2EKgDbwgCqq", "html": "

\n

I was having a discussion with a friend and reading some related blog articles about the question of whether race affects IQ.  (N.B.  This post is NOT about the content of the arguments surrounding that question.)  Now, like your typical LessWrong member, I subscribe to the Litany of Gendlin, I don’t want to hide from any truth, I believe in honest intellectual inquiry on all subjects.  Also, like your typical LessWrong member, I don’t want to be a bigot.  These two goals ought to be compatible, right?

\n

\n

But when I finished my conversation and went to lunch, something scary happened.  Something I hesitate to admit publicly.  I found myself having a negative attitude to all the black people in the cafeteria. 

\n

 Needless to say, this wasn’t what I wanted.  It makes no sense, and it isn’t the way I normally think.  But human beings have an affect heuristic.  We identify categories as broadly “good” or “bad,” and we tend to believe all good things or all bad things about a category, even when it doesn’t make sense.  When we discuss the IQ’s of black and white people, we’re primed to think “yay white, boo black.”  Even the act of reading perfectly sound research has that priming effect.

\n

\n

And conscious awareness and effort don’t seem to do much to fix this. The Implicit Association Test measures how quickly we group black faces with negative-affect words and white faces with positive-affect words, compared to our speed at grouping the black faces with the positive words and the white faces with the negative words.  Nearly everyone, of every race, shows some implicit association of black with “bad.”  And the researchers who created the test found no improvement with practice or effort.

\n

\n

The one thing that did reduce implicit bias scores was if test-takers primed themselves ahead of time by reading about eminent black historical figures.  They were less likely to associate black with “bad” if they had just made a mental association between black and “good.”  Which, in fact, was exactly how I snapped out of my moment of cafeteria racism: I recalled to my mind's ear a recording I like of Marian Anderson singing Schubert.  The music affected me emotionally and allowed me to escape my mindset.

\n\n

 To generalize from that example, we have to remember that the subconscious is a funny thing.  Mere willpower doesn’t stop it from misbehaving: it has to be tricked.  You have to hack into the affect heuristic, instead of trying to override it. 

\n\n

 There’s an Enlightenment notion of “sentiment” which I think may be appropriate here.  The idea (e.g. in Adam Smith) was roughly that moral behavior springs from certain emotional states, and that we can deliberately encourage those emotional states or sentiments by exposing ourselves to the right influences.  Sympathy, for example, or affection, were moral sentiments.  The plays of 18th century England seem trite to a modern reader because the characters are so very sympathetic and virtuous, and the endings so very happy.  But this was by design: it was believed that by arousing sympathy and affection, plays could encourage more humane behavior.

\n

\n

Sentiments are a way of dealing directly with the affect heuristic. It can’t be eradicated, at least not all in one go, but it can be softened and moderated.  If you know you’re irrationally attaching a “yay” or “boo” label to something, you can counteract that by focusing your reflections on the opposite affect. 

\n\n

I suspect – though I have no basis beyond anecdote – that art is a particularly effective way of inducing sentiments and attacking the affect heuristic.  You don’t hear a lot about art on LW, but we probably should be thinking more about it, because art is powerful.  Music moves people: think of military marches and national anthems, and also think of the humanistic impulse in the Ode to Joy. Music is not an epistemic statement, but acts at the more primitive level of emotion. You can deploy music to change yourself at the pre-rational level; personally, I find that something like “O Isis Und Osiris” from The Magic Flute can cut through fear and calm me, better than any conscious logical argument.

\n \n

Poetry also seems relevant here – it’s verbal, but it’s a delivery system that works at the sub-rational level.  I’m convinced that a large part of the appeal of religion is in poetic language that rings true.  (It’s interesting what happens when scientific or rationalist ideas are expressed in poetic language – this is rarer, but equally powerful. Carl Sagan, Francois Jacob, Bertrand Russell.)  The parable, the fantasy, and the poem can be more effective than the argument, because they can reach emotional heuristics that arguments cannot touch.

\n

\n

This is not an argument against rationality – this is rationality.  To fight our logical fallacies, we have to attack the subconscious directly, because human beings are demonstrably bad at overriding the subconscious through willpower.  It's not enough to catalogue biases and incorrect heuristics; we want to change those errors, and the most effective way to change them isn't always to read an argumentative essay and decide \"to think rationally.\"  I’m an optimist: I think we can, in fact, seek truth relentlessly.  I don’t think we have to taboo whole subjects of inquiry in fear of the affect heuristic.  But we need to fight affect with affect.  

\n

(As a practical suggestion for ourselves and each other, it might be interesting to experiment with non-argumentative ways of conveying a point of view: tell an illustrative story, express your idea in the form of an epigram, or even quote a poem or a piece of music or a photograph. Eliezer does a lot of this already: commandments, haikus, parables, and a fanfic.  The point, for rationalists, is not manipulation -- I don't want to use emotion to get anyone to adopt an idea thoughtlessly.  The point is to improve understanding, to shake loose our own biases by tinkering with our own emotions.  Clearer writing is not necessarily drier writing, and sometimes we understand an idea best when it makes use of our emotional capacities.)

\n " } }, { "_id": "9kcTNWopvXFncXgPy", "title": "Intellectual Hipsters and Meta-Contrarianism", "pageUrl": "https://www.lesswrong.com/posts/9kcTNWopvXFncXgPy/intellectual-hipsters-and-meta-contrarianism", "postedAt": "2010-09-13T21:36:33.236Z", "baseScore": 382, "voteCount": 367, "commentCount": 367, "url": null, "contents": { "documentId": "9kcTNWopvXFncXgPy", "html": "

Related to: Why Real Men Wear Pink, That Other Kind of Status, Pretending to be Wise, The \"Outside The Box\" Box

\n
\n

WARNING: Beware of things that are fun to argue -- Eliezer Yudkowsky

\n
\n

Science has inexplicably failed to come up with a precise definition of \"hipster\", but from my limited understanding a hipster is a person who deliberately uses unpopular, obsolete, or obscure styles and preferences in an attempt to be \"cooler\" than the mainstream. But why would being deliberately uncool be cooler than being cool?

As previously discussed, in certain situations refusing to signal can be a sign of high status. Thorstein Veblen invented the term \"conspicuous consumption\" to refer to the showy spending habits of the nouveau riche, who unlike the established money of his day took great pains to signal their wealth by buying fast cars, expensive clothes, and shiny jewelry. Why was such flashiness common among new money but not old? Because the old money was so secure in their position that it never even occurred to them that they might be confused with poor people, whereas new money, with their lack of aristocratic breeding, worried they might be mistaken for poor people if they didn't make it blatantly obvious that they had expensive things.

The old money might have started off not buying flashy things for pragmatic reasons - they didn't need to, so why waste the money? But if F. Scott Fitzgerald is to be believed, the old money actively cultivated an air of superiority to the nouveau riche and their conspicuous consumption; not buying flashy objects becomes a matter of principle. This makes sense: the nouveau riche need to differentiate themselves from the poor, but the old money need to differentiate themselves from the nouveau riche.

This process is called countersignaling, and one can find its telltale patterns in many walks of life. Those who study human romantic attraction warn men not to \"come on too strong\", and this has similarities to the nouveau riche example. A total loser might come up to a woman without a hint of romance, promise her nothing, and demand sex. A more sophisticated man might buy roses for a woman, write her love poetry, hover on her every wish, et cetera; this signifies that he is not a total loser. But the most desirable men may deliberately avoid doing nice things for women in an attempt to signal they are so high status that they don't need to. The average man tries to differentiate himself from the total loser by being nice; the extremely attractive man tries to differentiate himself from the average man by not being especially nice.

In all three examples, people at the top of the pyramid end up displaying characteristics similar to those at the bottom. Hipsters deliberately wear the same clothes uncool people wear. Families with old money don't wear much more jewelry than the middle class. And very attractive men approach women with the same lack of subtlety a total loser would use. [1]

If politics, philosophy, and religion are really about signaling, we should expect to find countersignaling there as well.

\n


Pretending To Be Wise

Let's go back to Less Wrong's long-running discussion on death. Ask any five year old child, and ey can tell you that death is bad. Death is bad because it kills you. There is nothing subtle about it, and there does not need to be. Death universally seems bad to pretty much everyone on first analysis, and what it seems, it is.

But as has been pointed out, along with the gigantic cost, death does have a few small benefits. It lowers overpopulation, it allows the new generation to develop free from interference by their elders, it provides motivation to get things done quickly. Precisely because these benefits are so much smaller than the cost, they are hard to notice. It takes a particularly subtle and clever mind to think them up. Any idiot can tell you why death is bad, but it takes a very particular sort of idiot to believe that death might be good.

So pointing out this contrarian position, that death has some benefits, is potentially a signal of high intelligence. It is not a very reliable signal, because once the first person brings it up everyone can just copy it, but it is a cheap signal. And to the sort of person who might not be clever enough to come up with the benefits of death themselves, and only notices that wise people seem to mention death can have benefits, it might seem super extra wise to say death has lots and lots of great benefits, and is really quite a good thing, and if other people should protest that death is bad, well, that's an opinion a five year old child could come up with, and so clearly that person is no smarter than a five year old child. Thus Eliezer's title for this mentality, \"Pretending To Be Wise\".

If dwelling on the benefits of a great evil is not your thing, you can also pretend to be wise by dwelling on the costs of a great good. All things considered, modern industrial civilization - with its advanced technology, its high standard of living, and its lack of typhoid fever - is pretty neat. But modern industrial civilization also has many costs: alienation from nature, strains on the traditional family, the anonymity of big city life, pollution and overcrowding. These are real costs, and they are certainly worth taking seriously; nevertheless, the crowds of emigrants trying to get from the Third World to the First, and the lack of any crowd in the opposite direction, suggest the benefits outweigh the costs. But in my estimation - and speak up if you disagree - people spend a lot more time dwelling on the negatives than on the positives, and most people I meet coming back from a Third World country have to talk about how much more authentic their way of life is and how much we could learn from them. This sort of talk sounds Wise, whereas talk about how nice it is to have buses that don't break down every half mile sounds trivial and selfish.

So my hypothesis is that if a certain side of an issue has very obvious points in support of it, and the other side of an issue relies on much more subtle points that the average person might not be expected to grasp, then adopting the second side of the issue will become a signal for intelligence, even if that side of the argument is wrong.

This only works in issues which are so muddled to begin with that there is no fact of the matter, or where the fact of the matter is difficult to tease out: so no one tries to signal intelligence by saying that 1+1 equals 3 (although it would not surprise me to find a philosopher who says truth is relative and this equation is a legitimate form of discourse).

Meta-Contrarians Are Intellectual Hipsters

A person who is somewhat upper-class will conspicuously signal eir wealth by buying difficult-to-obtain goods. A person who is very upper-class will conspicuously signal that ey feels no need to conspicuously signal eir wealth, by deliberately not buying difficult-to-obtain goods.

A person who is somewhat intelligent will conspicuously signal eir intelligence by holding difficult-to-understand opinions. A person who is very intelligent will conspicuously signal that ey feels no need to conspicuously signal eir intelligence, by deliberately not holding difficult-to-understand opinions.

According to the survey, the average IQ on this site is around 145 [2]. People on this site differ from the mainstream in that they are more willing to say death is bad, more willing to say that science, capitalism, and the like are good, and less willing to say that there's some deep philosophical sense in which 1+1 = 3. That suggests people around that level of intelligence have reached the point where they no longer feel it necessary to differentiate themselves from the sort of people who aren't smart enough to understand that there might be side benefits to death. Instead, they are at the level where they want to differentiate themselves from the somewhat smarter people who think the side benefits to death are great. They are, basically, meta-contrarians, who counter-signal by holding opinions contrary to those of the contrarians' signals. And in the case of death, this cannot but be a good thing.

But just as contrarians risk becoming too contrary, moving from \"actually, death has a few side benefits\" to \"DEATH IS GREAT!\", meta-contrarians are at risk of becoming too meta-contrary.

All the possible examples here are controversial, so I will just take the least controversial one I can think of and beg forgiveness. A naive person might think that industrial production is an absolute good thing. Someone smarter than that naive person might realize that global warming is a strong negative to industrial production and desperately needs to be stopped. Someone even smarter than that, to differentiate emself from the second person, might decide global warming wasn't such a big deal after all, or doesn't exist, or isn't man-made.

In this case, the contrarian position happened to be right (well, maybe), and the third person's meta-contrariness took em further from the truth. I do feel like there are more global warming skeptics among what Eliezer called \"the atheist/libertarian/technophile/sf-fan/early-adopter/programmer empirical cluster in personspace\" than among, say, college professors.

In fact, very often, the uneducated position of the five year old child may be deeply flawed and the contrarian position a necessary correction to those flaws. This makes meta-contrarianism a very dangerous business.

Remember, most everyone hates hipsters.

Without meaning to imply anything about whether or not any of these positions are correct [3], the following triads come to mind as connected to an uneducated/contrarian/meta-contrarian divide:

- KKK-style racist / politically correct liberal / \"but there are scientifically proven genetic differences\"
- misogyny / women's rights movement / men's rights movement
- conservative / liberal / libertarian [4]
- herbal-spiritual-alternative medicine / conventional medicine / Robin Hanson
- don't care about Africa / give aid to Africa / don't give aid to Africa
- Obama is Muslim / Obama is obviously not Muslim, you idiot / Patri Friedman [5]

What is interesting about these triads is not that people hold the positions (which could be expected by chance) but that people get deep personal satisfaction from arguing the positions even when their arguments are unlikely to change policy [6] - and that people identify with these positions to the point where arguments about them can become personal.

If meta-contrarianism is a real tendency in over-intelligent people, it doesn't mean they should immediately abandon their beliefs; that would just be meta-meta-contrarianism. It means that they need to recognize the meta-contrarian tendency within themselves and so be extra suspicious and careful about a desire to believe something contrary to the prevailing contrarian wisdom, especially if they really enjoy doing so.

\n


Footnotes

1) But what's really interesting here is that people at each level of the pyramid don't just follow the customs of their level. They enjoy following the customs, it makes them feel good to talk about how they follow the customs, and they devote quite a bit of energy to insulting the people on the other levels. For example, old money call the nouveau riche \"crass\", and men who don't need to pursue women call those who do \"chumps\". Whenever holding a position makes you feel superior and is fun to talk about, that's a good sign that the position is not just practical, but signaling related.

\n

2) There is no need to point out just how unlikely it is that such a number is correct, nor how unscientific the survey was.

\n

3) One more time: the fact that those beliefs are in an order does not mean some of them are good and others are bad. For example, \"5 year old child / pro-death / transhumanist\" is a triad, and \"warming denier / warming believer / warming skeptic\" is a triad, but I personally support 1+3 in the first triad and 2 in the second. You can't evaluate the truth of a statement by its position in a signaling game; otherwise you could use human psychology to figure out if global warming is real!

\n

4) This is my solution to the eternal question of why libertarians are always more hostile toward liberals, even though they have just about as many points of real disagreement with the conservatives.

\n

5) To be fair to Patri, he admitted that those two posts were \"trolling\", but I think the fact that he derived so much enjoyment from trolling in that particular way is significant.

\n

6) Worth a footnote: I think in a lot of issues, the original uneducated position has disappeared, or been relegated to a few rednecks in some remote corner of the world, and so meta-contrarians simply look like contrarians. I think it's important to keep the terminology, because most contrarians retain a psychology of feeling like they are being contrarian, even after they are the new norm. But my only evidence for this is introspection, so it might be false.

" } }, { "_id": "zjka9fsQ453rZyQpZ", "title": "I don’t clean because the house is never dirty", "pageUrl": "https://www.lesswrong.com/posts/zjka9fsQ453rZyQpZ/i-don-t-clean-because-the-house-is-never-dirty", "postedAt": "2010-09-13T08:49:08.000Z", "baseScore": 1, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "zjka9fsQ453rZyQpZ", "html": "

I often think about this when someone thinks I should do more housework:

\n

When women see how little housework men do, they interpret it as “shirking” …Men, in turn, feel unfairly maligned…Who is right? …Usually, men.

\n

The evidence: Look at the typical bachelor’s apartment. Even when a man pays the full cost of cleanliness and receives the full benefit, he doesn’t do much. Why not? Because the typical man doesn’t care very much about cleanliness. When the bachelor gets married, he almost certainly starts doing more housework than he did when he was single. How can you call that shirking?

\n

To some extent it's true, but in my experience a lot of the conflict between clean and messy people does seem to arise because the messy person ends up doing less than their fair share of work - yet also much less than they are willing to do. This is because people often decide when to do a chore based on when it has reached a certain level of urgency. For instance, when the floor becomes muddy enough, it triggers cleaning. When two people have different standards, the cleanlier one is always the first to be triggered, so they clean again and again while the other is endlessly about to clean but beaten to it. This can be fixed by relying on another method to decide when to clean, or by the person with lower standards learning to be triggered at the same point as the other person; but if one person endlessly cleans while the other endlessly claims they were going to do it tomorrow, it's easy for resentment to cloud assessment of this underlying problem.
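
\n

To make the triggering dynamic concrete, here is a toy simulation in Python (my own illustrative sketch, not from the original argument; the accumulation rate and both thresholds are made-up numbers): mess builds up at a constant rate, and each housemate cleans only once it crosses their personal tolerance.

    # Toy model (made-up numbers): mess accumulates steadily, and each housemate
    # cleans only when the mess crosses their personal tolerance threshold.
    def simulate(days=365, mess_per_day=1.0, tidy_threshold=5.0, messy_threshold=12.0):
        mess = 0.0
        cleanings = {'tidy': 0, 'messy': 0}
        for _ in range(days):
            mess += mess_per_day
            if mess >= tidy_threshold:     # the lower threshold always trips first...
                cleanings['tidy'] += 1
                mess = 0.0
            elif mess >= messy_threshold:  # ...so this branch is never reached
                cleanings['messy'] += 1
                mess = 0.0
        return cleanings

    print(simulate())  # -> {'tidy': 73, 'messy': 0}

With these made-up numbers the tidier housemate does all 73 cleanings and the messier one does none, even though, living alone, the messier one would still have cleaned roughly 30 times a year. The point is only that the lower trigger threshold captures all the work, not that the numbers are realistic.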


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "Ru39RvX2jnGm8TCjh", "title": "September 2010 Southern California Meetup", "pageUrl": "https://www.lesswrong.com/posts/Ru39RvX2jnGm8TCjh/september-2010-southern-california-meetup", "postedAt": "2010-09-13T02:31:18.915Z", "baseScore": 15, "voteCount": 11, "commentCount": 32, "url": null, "contents": { "documentId": "Ru39RvX2jnGm8TCjh", "html": "

The second LessWrong meetup for Southern California will happen on Saturday September 25th, 2010!  The meetup will start at 1PM and probably run for 4 or 5 hours in the Platt Campus Center of Harvey Mudd College.  Thanks are due to the Harvey Mudd Future Tech Club for the location.

We will be meeting in a conference room with tables, chairs, whiteboards, and a projector.  Based on the July SoCal LW Meetup, we discovered that the most interesting thing was to have small group conversations, but that turned out to be tricky in a pub.  This time we're optimizing for conversation, but pizza will almost assuredly be ordered and there will be free cookies and soda!

Last time several people brought interested friends.  Also some people who had been reading a long time but had never commented before showed up (only about 60% of us had LW logins) and this seemed to work pretty well.  It was a very friendly group :-)

For travel planning purposes, you should probably be aware that the time and location were specifically planned to be ~10 blocks north of the Claremont Metrolink Station, 15 minutes after a train arrives there from the hub at LA Union Station.  Train schedules can be searched here. Walking directions from the Metrolink Station:

\n
    \n
  1. Take College Ave north to 12th St.
  2. Go east on north side of 12th St (which becomes Platt Blvd) until you see the big Harvey Mudd College sign on the left side of the street.
  3. Head north into campus there, to the very visible flagpole.
  4. Platt Campus Center will be the big building right to the northeast.
  5. We should have signs set up in and around the building, but basically once inside the big central area, just head east.
\n

For those arriving via car there is a parking lot directly behind the building (it's covered with trees on the Google maps aerial view).  There should be plenty of spots open on a Saturday.  Also, look for carpooling comment threads, here for Santa Barbara, here for Orange County, here for San Diego, and here for Pasadena/Glendale.

" } }, { "_id": "gNPtx2ftcWYKE3vyg", "title": "The Effectiveness of Developing World Aid", "pageUrl": "https://www.lesswrong.com/posts/gNPtx2ftcWYKE3vyg/the-effectiveness-of-developing-world-aid", "postedAt": "2010-09-12T21:56:55.904Z", "baseScore": 33, "voteCount": 36, "commentCount": 51, "url": null, "contents": { "documentId": "gNPtx2ftcWYKE3vyg", "html": "

Several Less Wrong posters [1], [2], [3] have cited the interview with James Shikwati titled \"For God's Sake, Please Stop the Aid!\" as evidence that Western aid to Africa is actively destructive. According to the Wikipedia page on James Shikwati:

\n
\n

Jeffrey D. Sachs, a Columbia University professor who is a leading aid advocate, calls Mr. Shikwati’s criticisms of foreign assistance “shockingly misguided” and “amazingly wrong.” “This happens to be a matter of life and death for millions of people, so getting it wrong has huge consequences,” Mr. Sachs said.

\n
\n

I think it's important for those interested in the question of whether developing world aid is effective to look to those who can point to formal studies about the effectiveness of African aid rather than basing their judgments on quotes from individuals whose opinions may very well have been heavily skewed by selection bias and/or driven by ideological considerations which have nothing to do with the available evidence.

\n

Engaging with the evidence in detail is a very time-consuming task and one beyond the scope of this blog entry. I will, however, quote various experts and link to useful references.

\n

Divided Views On Overall Impact of Developing World Aid:

\n

I've found Paul Collier to be apparently even-handed. Readers interested in the effectiveness of developing world aid may like to study his papers on the subject. Collier summarizes his views in his recent book, The Bottom Billion.

\n
\n

The left seems to want to regard aid as some sort of reparations for colonialism. In other words, it's a statement about the guilt of Western society, not about development. In this view, the only role for the bottom billion is as victims: they all suffer from our sins. The right seems to want to equate aid with welfare scrounging. In other words, it is rewarding the feckless and so accentuating the problem. Between these two there is a thin sliver of sanity called aid for development. It runs something like this. We used to be that poor once. It took us two hundred years to get where we are. Let's try to speed things up for these countries.

\n

Aid does tend to speed up the growth process. A reasonable estimate is that over the last thirty years it has added around one percentage point to the annual growth rate of the bottom billion. This does not sound like a whole lot, but then the growth rate of the bottom billion over this period has been much less than 1 percent per year - in fact it has been zero. So adding 1 percent has made the difference between stagnation and severe cumulative decline. Without aid, cumulatively the countries of the bottom billion would have become much poorer than they are today. Aid has been a holding operation to prevent things from falling apart.

\n

[...]

\n

...unlikely as it seems, what aid agencies have been doing has added a whole lot of value to the financial transfer. Given the bad public image of aid agencies and horror stories such as the hospital project I described above, this is hard to believe, but there it is.

\n

[...]

\n

Aid, however, is not the only answer to the problems of the bottom billion. In recent years it has probably been overemphasized, partly because it is the easiest thing for the Western world to do and partly because it fits so comfortably into a moral universe organized around the principles of sin and expiation. That overemphasis, which comes from the left, has produced a predictable backlash from the right. Aid does have serious problems, and more especially serious limitations. Alone it will not be sufficient to turn the societies of the bottom billion around. But it is part of the solution rather than part of the problem. The challenge is to complement it with other actions.

\n
\n

An economist who is skeptical of Collier's analysis is William Easterly, the author of The White Man's Burden: Why the West's Efforts to Aid the Rest Have Done So Much Ill and So Little Good. For those interested in the effectiveness of developing world aid, Easterly's papers are also worth a look. In an article for Boston Review, Easterly writes:

\n
\n

Collier’s work is built on deeply problematic statistical analysis. Valid statistical results must meet stringent conditions. The usual standard for labeling a result “significant” is that it could have occurred by chance only one out of twenty times, assuming a statistical exercise is run only once. An unfortunately all-too-common practice called “data mining” involves running twenty statistical exercises and then reporting only the one that produces a “significant” result (which will have happened by chance). Collier comes close to admitting that he does exactly that.

\n

[...]

\n

Remarkably enough, Collier puts the burden of proof on non-intervention. “At some point,” he writes, “doubt becomes an excuse for inaction, while the problems of insecurity remain real enough.” Elsewhere Collier alludes to his doubters as “professional skeptics.” Actually, doubt is a superb reason for inaction. If being a professional skeptic entails scrutinizing the logic, the assumptions, and the evidence base and finding them all invalid when they do not meet normal academic standards, I plead guilty.

\n
\n
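Easterly's point about multiple comparisons is easy to check numerically. The short simulation below is purely a hypothetical illustration - it uses none of Collier's or Easterly's actual data or methods - and simply runs twenty significance tests on pure noise, many times over, to show how often at least one of them comes out \"significant\" at the usual 5% level.

```python
# Hypothetical illustration of the 'data mining' problem described above:
# run twenty statistical exercises on pure noise and count how often at
# least one of them looks 'significant' at the usual 5% level.
import math
import random
import statistics

def looks_significant(sample):
    # Roughly a two-sided t-test of the sample mean against zero at p < 0.05:
    # |t| > ~2 is close to the critical value for samples of this size.
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return abs(statistics.mean(sample) / se) > 2.0

random.seed(0)
trials = 1000
runs_with_a_hit = 0
for _ in range(trials):
    # Twenty independent 'exercises', each run on data that is pure noise.
    exercises = [[random.gauss(0, 1) for _ in range(30)] for _ in range(20)]
    if any(looks_significant(sample) for sample in exercises):
        runs_with_a_hit += 1

# Expect roughly 1 - 0.95**20, i.e. about 64% of runs, to contain at least
# one spurious 'finding' even though there is no real effect anywhere.
print(f'{runs_with_a_hit / trials:.0%} of runs produced a spurious result')
```

If only the one \"significant\" exercise out of twenty is ever reported, the reader has no way to tell a real effect from the inevitable fluke, which is exactly the practice Easterly is objecting to.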

My overall impression is that there's a fair amount of controversy as to whether African aid has increased economic growth in Africa. Different economists have different views and the evidence available does not seem sufficiently robust to support a confident belief that the net effect of aid has been positive or negative.

\n

Much less controversial is the view that the best Western health interventions can and do systematically improve health in Africa.

\n

The Case of Health:

\n

In his article titled Can The West Save Africa?, Easterly writes:

\n
\n

Health is an even more clear success story than education in Africa, as child mortality has improved dramatically over time ... There are well known and striking donor success stories, like the elimination of smallpox, the near-eradication of river blindness and Guinea worm, the spread of oral rehydration therapy for treating infant diarrheal diseases, DDT campaigns against malarial mosquitoes (although later halted for environmental reasons), and the success of WHO vaccination programs against measles and other childhood diseases. The aid campaign against diseases in Africa (known as vertical health programs, see discussion below) is likely the single biggest success story in the history of aid to Africa.

\n

In this case, the clear verdict of the case studies is probably a lot more helpful than the aggregate stylized facts, aggregate econometrics, or REs. Under-five mortality fell dramatically in Africa, but it fell by somewhat less than in other developing countries. We ideally need to parcel out factors such as Africa’s lower growth (although the effect of growth on health is controversial), different disease ecology (for example, malaria is much more of a problem in Africa than any other region), other factors, and aid, not to mention finding an identification strategy to assess causal effects of aid; no such aggregate econometric efforts have been notably successful. Even with econometric support unavailable, perhaps Africa’s health performance is impressive after all given its lower growth and its more difficult disease ecology, which is consistent with the important role for aid shown by the case studies.

\n
\n

According to the GiveWell page titled Why do we look for charities implementing proven programs:

\n
\n

The most successful projects have been in the area of health and include such large-scale successes as the eradication of smallpox and the dramatic reduction of infant mortality in Africa (see our developing-world health overview). For a full list of health programs that have been rigorously shown to save lives and reduce suffering, see our summary of proven health programs.

\n

There are a number of examples of ways in which well-intentioned projects have failed to achieve desired results. Building wells has often failed to reduce water-related illness (detailed analysis here); agriculture programs in Africa have failed to increase crop yields; programs providing textbooks and other supplies have not raised students' test scores, and many other developing-world education programs have weak, if any, evidence of success.

\n
\n

Charities working on improving health in the developing world have variable effectiveness. It's plausible that by donating to one of GiveWell's top-rated charities one can have a substantially stronger positive effect than the one associated with a random such charity. In Charity Isn't About Helping?, Holden says:

\n
\n

One person who’s more critical of charity than we are or than David Hunter is is the economist Robin Hanson. He has stated that “charity isn’t about helping”.

\n

[...]

\n

What response can the nonprofit sector marshal to arguments like this? I must say that, in fact, much of the nonprofit sector fits incredibly better into Prof. Hanson’s view of charity as “wasteful signaling” than into the traditional view of charity as helping.

\n

[...]

\n

Perhaps ironically, if you want a good response to Prof. Hanson’s view, I can’t think of a better place to turn than GiveWell’s top-rated charities. We have done the legwork to identify charities that can convincingly demonstrate positive impact. No matter what one thinks of the sector as a whole, they can’t argue that there are no good charitable options - charities that really will use your money to help people - except by engaging with the specifics of these charities’ strong evidence.

\n

Valid observations that the sector is broken - or not designed around helping people - are no longer an excuse not to give.

\n

Because our Bayesian prior is so skeptical, we end up with charities that you can be confident in, almost no matter where you’re coming from.

\n
\n

A Note on Malthusian Problems:

\n

Those who are concerned about possible future Malthusian problems attached to saving lives should see Holden's email to the GiveWell research mailing list titled Population growth & health, the video linked therein, and papers by the speaker Hans Rosling. I presently believe that while it's possible that saving lives in the developing world does more harm than good on account of Malthusian problems, this is fairly unlikely, and the expected value of saving lives in the developing world is strongly positive. Of course, my belief is subject to change with incoming evidence.

\n

The Giving What We Can (GWWC) Myths About Aid page provides a suggestion for donors who are concerned about future Malthusian problems:

\n
\n

...there are many ways that you can greatly improve the lives of thousands of people who live in extreme poverty without significantly extending these lives. For example, you could cure people of blindness, or of neglected tropical diseases, which cause significant hardship but have only a small effect on mortality. Alternatively, you could donate to groups who promote family planning in developing countries, directly fighting population growth. Those who think that overpopulation is so bad that we should let people suffer and die rather than risk saving their lives, must also think it is important enough that they should donate money to groups that directly fight it.

\n
\n

GWWC and GiveWell differ in that GWWC's top recommended charities are the Schistosomiasis Control Initiative and Deworm the World, which focus on neglected tropical diseases, whereas GiveWell does not recommend these organizations. Holden explains GiveWell's position in Neglected Tropical Disease charities: Schistosomiasis Control Initiative, Deworm The World.

\n

A Note on Overcorrecting Bias:

\n

Many people's initial naive reaction to developing world aid is that it's a very good idea. This was certainly my own reaction as a child when I learned of Unicef. As Eliezer suggests in Can't Say No To Spending, there's a natural bias in favor of saying \"yes\" when asked to donate money to help poor people - saying 'no' feels cold-hearted. Reading an author like Shikwati can dispel this bias by making possible unintended negative consequences salient, but often at the cost of giving rise to a new bias against developing world aid. Reading Sachs' remarks on Shikwati quoted in the introduction of this article can dispel this bias at the cost of introducing a new bias in favor of developing world aid. But Sachs' own position has garnered seemingly valid criticism from William Easterly and others - learning of this introduces a bias against Sachs and his views and in favor of Easterly and his views. There's the usual issue of there being a halo effect as described in Yvain's excellent article titled The Trouble with \"Good\" - when person X debunks person Y's apparently erroneous claim, this makes person X look unwarrantedly superior to person Y overall.

\n

It's difficult to know who to trust when ostensible experts disagree, any of whom may be exhibiting motivated cognition or even engaging in outright conscious self-serving deception. Nevertheless, one can reasonably hope to arrive at a fairly good epistemological state by:

\n
    \n
  1. Reading representatives of a wide variety of perspectives
  2. \n
  3. Paying special attention to those experts who are willing to engage with differing perspectives in detail
  4. \n
  5. Being careful to keep in mind that somebody may have valid points which are worthy of consideration independently of whether their general thesis is correct
  6. \n
  7. Paying special attention to points of common agreement among experts
  8. \n
\n

Habits (1)-(4) are conducive to converging on a relatively accurate epistemological position on a given matter.

\n
\n

Acknowledgment: Thanks to Carl Shulman for useful references and discussion about the subject of this article.

" } }, { "_id": "ENib3YNzpK3Bdgs6J", "title": "Reminder: Weekly LW meetings in NYC", "pageUrl": "https://www.lesswrong.com/posts/ENib3YNzpK3Bdgs6J/reminder-weekly-lw-meetings-in-nyc", "postedAt": "2010-09-12T20:44:50.435Z", "baseScore": 10, "voteCount": 7, "commentCount": 2, "url": null, "contents": { "documentId": "ENib3YNzpK3Bdgs6J", "html": "

Hey everyone. This is just a reminder that there are weekly Less Wrong/Overcoming Bias meetups in New York City. Meetups are usually held on Tuesday at 7 PM, in lower Manhattan (south of Central Park). If you're interested, more information is available if you sign up for the overcomingbiasnyc Google Group. If you're in the area, hope to see you there!

" } }, { "_id": "qRNuXCtfMHCCynabW", "title": "The Science of Cutting Peppers", "pageUrl": "https://www.lesswrong.com/posts/qRNuXCtfMHCCynabW/the-science-of-cutting-peppers", "postedAt": "2010-09-12T13:14:46.151Z", "baseScore": 54, "voteCount": 45, "commentCount": 68, "url": null, "contents": { "documentId": "qRNuXCtfMHCCynabW", "html": "

Summary: Rigorous scientific experiments are hard to apply in daily life but we still want to try out and evaluate things like self-improvement methods. In doing so we can look for things such as a) effect sizes that are so large that they don't seem likely to be attributable to bias, b) a deep understanding of the mechanism of a technique, c) simple non-rigorous tests.

\n

Hello there! This is my first attempt at a top-level post and I'll start it off with a little story.

\n

Five years ago, in a kitchen in London...

\n

My wife: We're going to have my friends over for dinner and we're making that pasta sauce everyone likes. I'm going to need you to cut some red peppers.

Me: Can do! *chop chop chop*

My wife: Hey, Mr. Engineer, you've got seeds all over! What are you doing to that pepper?

Me: Well, admittedly this time I was a bit clumsy and there's more seed spillage than usual - but it's precisely to avoid spilling seeds that I start by surgically removing the core and then...

My wife: Stop, just stop. That's got to be the worst possible way to do this. See, this is how you cut a pepper, *chop chop chop*. Nice slices, no mess.

\n

Me: *is humiliated* *learns*

\n

Now, ever since then I've cut peppers using the method my wife showed me. It's a much better way to do it. But wait! How do I know that? Don't I realize that humans are subject to massive cognitive biases? Maybe I just want to please my wife by doing things her way so I've convinced myself her method is better. Maybe I'm remembering the hits and forgetting the misses - maybe I've forgotten all the times my method worked out great and the times her method failed. Maybe I am indeed making less of a mess than I used to but it's simply due to my knife skills improving - and that would have happened even if I'd stuck with the old method. And there are probably a dozen more biases and confounding factors that could come into play that I haven't even thought of.

Don't I need to do a test? How about cutting up lots of peppers using the two alternative methods and measuring seed spillage? But, no, that's not good enough - I could subconsciously affect the result by applying less skill when using one method. I'd need a neutral party to operate the methods, preferably a number of people. And I'd need a neutral observer too. The person who measures the seed spillage from each operation should not know which method was used. Yeah, a double blind test, that's the ticket. That's what I should do, right?

No, obviously that's not what I should do. There are two reasons:

A) The resources needed to conduct the suggested test are completely disproportionate to any benefit such a test might yield.

B) I already bloody well know that my wife's method is better.

The first reason is obvious enough but the second reason needs a bit more exploration. Why do I know this? I think there are two reasons.

* The effect size is large and sustained. Previously, I used to make a mess just about every time. After I switched methods I get a clean cut just about every time.

* I understand the mechanism explaining the effect very well. I can see what's wrong with the method I was using previously (if I try to pull the core through a hole that's too small for its widest part then some seeds will rub off) and I can see how my wife's method doesn't have that problem (no pulling the core through a hole, just cut around it).

I'd like to try to generalize from this example. Many people on this site are interested in methods for self-improvement, e.g. methods for fighting akrasia or developing social skills. Very often, those methods have not been tested scientifically and we do not ourselves have the resources to conduct such tests. Even in cases where there have been scientific experiments we cannot be confident in applying the results to ourselves. Even if a psychology experiment shows that a certain way of doing things has a statistically significant1 effect on some group that is no guarantee that it will have an effect on a particular individual. So, it is no surprise that discussion of self-improvement methods is frequently met with skepticism around here. And that's largely healthy.

But how can we tell whether a self-improvement method is worth trying out? And if we do try it, how can we tell if it's working for us? One thing we can do, like in the pepper example, is to look for large effects and plausible mechanisms. Biases and other confounding factors make it hard for us to tell the difference between a small negative effect, no effect and a small positive effect. But we still have a decent chance of correctly telling the difference between no effect and a large effect.

Another thing we can do is to use some science. Just because a rigorous double-blind test with a hundred participants isn't practical doesn't mean we can't do any tests at all. A person trying out a new diet will weigh themselves every day. And if you're testing out a self-improvement technique then you can try to find some metric that will give you an idea of how well you are doing. Trying out a method for getting more work done on your dissertation? Maybe you should measure your daily word count; it's not perfect, but it's something. As xkcd's Zombie Feynman would have it, \"Ideas are tested by experiment, that is the core of science.\"
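To make the word-count suggestion concrete, here is a minimal sketch of such a tracker; the file names 'dissertation.txt' and 'wordcount_log.csv' are placeholders, not anything prescribed above. It counts the words in a draft and appends the date and count to a CSV log.

```python
# Minimal daily word-count tracker (file names below are placeholders).
import csv
import datetime
from pathlib import Path

DRAFT = Path('dissertation.txt')   # the draft being measured
LOG = Path('wordcount_log.csv')    # where daily counts accumulate

def count_words(path: Path) -> int:
    # Crude word count: whitespace-separated tokens in the file.
    return len(path.read_text(encoding='utf-8').split())

def log_today(draft: Path = DRAFT, log: Path = LOG) -> None:
    # Append today's date and word count to the CSV log.
    today = datetime.date.today().isoformat()
    count = count_words(draft)
    is_new = not log.exists()
    with log.open('a', newline='', encoding='utf-8') as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(['date', 'words'])
        writer.writerow([today, count])
    print(today, count)

if __name__ == '__main__':
    log_today()
```

Run once a day, the log gives exactly the kind of rough-and-ready metric described here - not a controlled experiment, but enough to notice a large effect after a couple of weeks on the new method; small effects will still be lost in the noise, and that is fine.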

Erring on the side of too much credulity is bad and erring on the side of too much skepticism is also bad. Both prevent us from becoming stronger.

1) As good Bayesians we, of course, find psychologists' obsession with null hypotheses and statistical significance to be misguided and counterproductive. But that's a story for another time.

" } }, { "_id": "gcZTANWaMEAMHHnBG", "title": "Steps to Achievement: The Pitfalls, Costs, Requirements, and Timelines", "pageUrl": "https://www.lesswrong.com/posts/gcZTANWaMEAMHHnBG/steps-to-achievement-the-pitfalls-costs-requirements-and", "postedAt": "2010-09-11T22:58:38.145Z", "baseScore": 21, "voteCount": 26, "commentCount": 13, "url": null, "contents": { "documentId": "gcZTANWaMEAMHHnBG", "html": "

Reply to: Humans Are Not Automatically Strategic

\n

In \"Humans Are Not Automatically Strategic,\" Anna Salamon outlined some ways that people could take action to be more successful and achieve goals, but do not:

\n

\n
\n

But there are clearly also heuristics that would be useful to goal-achievement (or that would be part of what it means to “have goals” at all) that we do not automatically carry out.  We do not automatically:

\n\n

.... or carry out any number of other useful techniques.  Instead, we mostly just do things. 

\n
\n

I believe that's a fantastic list of achievement/victory heuristics. Some of these are difficult to do, though. Let's try to make this into a practical, actionable sort of document. I believe the steps outlined above can be broadly grouped. I've done so below, with some minor rephrasing to put it in the first person plural -

\n

\n

Identify: (a) Ask ourselves what we're trying to achieve, (b) ask ourselves how we could tell if we achieved it and how we can track progress

\n

Research: (c) Become strongly curious about information that would help achieve the goal, and (d) gather that information (through methods like asking how folks commonly achieve this goal, especially methods that aren't habitual)

\n

Test: (e) Test methods that might work to achieve goals, especially non-habitual methods, while tracking what works and doesn't

\n

Focus: (f) Focus most of the energy that isn't going into researching/exploring on methods that are starting to produce the best results, (g) make sure that the \"goal\" chosen is worthwhile, is desired for coherent reasons, and firmly commit to it at this stage so that doubt does not consume excessive time and energy

\n

Persevere: (h) Use environmental cues and social contexts to boost motivation, persist in the face of adversity and frustration, and not give in to the temptation to quit or take it easy.

\n

There are some implicit steps in the model. I think it would go like this:

\n

Identify -> (make decision to begin) -> Research -> (begin) -> Test -> (analyze early results) -> Focus -> (make firm commitment at this stage) -> Persevere -> (achieve or re-evaluate) -> (back to step 1)

\n

I believe Anna roughly laid out five key stages - Identify, research, test, focus, persevere. I believe there are six other stages mixed in - make decision to begin, begin, analyze early results, make firm commitment, achieve or re-evaluate, repeat.

\n

Pitfalls, Costs, Requirements, and Timelines for each stage:

\n

Identify - the first stage in accomplishing a goal is to identify one. I believe this is one of the hardest stages, due to its subjective nature. There is no right answer. There are other potential pitfalls - people who are fatalistic (\"things are already decided\"), nihilistic (\"nothing matters\"), or who believe they can't achieve will have problems with this stage. Additionally, people in this community might have another problem: people whose identities are based on being intelligent tend not to want to confront goals they can fail at. The article \"How Not to Talk to Your Kids: the Inverse Power of Praise\" describes a study based on praising kids for innate ability (intelligence) vs. effort.

\n
\n

Randomly divided into groups, some were praised for their intelligence. They were told, “You must be smart at this.” Other students were praised for their effort: “You must have worked really hard.” ... Of those praised for their effort, 90 percent chose the harder set of puzzles. Of those praised for their intelligence, a majority chose the easy test. The “smart” kids took the cop-out.

\n
\n

Potential Pitfalls in the \"Identify\" stage: Fatalism, nihilism, low self esteem, fear of failure, or identity being wrapped up in success/intelligence can dissuade people from setting goals. Also, just plain not seeing the value in setting goals.

\n

Costs: This stage is one of the most expensive intellectually and emotionally - this is where you are choosing to dedicate your time at the expense of other things. It's a subjective judgment with a high opportunity cost, almost by definition.

\n

Requirements: Introspection about what you want to achieve, patience, working and re-working at goals, and taking the time to describe and elaborate what success would look like.

\n

Timeline: Varies, but I find the loose threads of identifying goals can take a year, two years, or more to start to come together. After actively planning and beginning to identify goals, coming to a really great definition can happen fairly quickly (10 to 30 minutes, for fairly straightforward goals) or can easily take one to three months to flesh a goal out.

\n

Make decision to begin - I believe this is an underrated component of achieving. Saying, \"I have now decided to start pursuing this goal.\" 

\n

Potential Pitfalls: Distraction, akrasia, procrastination, overwhelm.

\n

Costs: Relatively low, since we're only moving to the research/information gathering stage. This is more like an easy checkbox on a checklist - important to do, but not particularly taxing.

\n

Requirements: A tiny bit of decisiveness.

\n

Timeline: Varies - people think things over for a while. But the decision to start exploring a goal can happen instantly.

\n

Research - This stage actually takes some researching skills. Most Less Wrong users will be pretty good at using Google, Wikipedia, searching for books, finding scientific papers, or good podcasts and video. We shouldn't forget that a lot of people are unfamiliar with these tools, and would have to learn them to get started. 

\n

Potential Pitfalls: Not knowing how to research, not having enough knowledge to know where to start looking and good questions to ask, distraction/stimulation (\"let me just check XYZ website for a minute...\"), underestimating yourself and thus not studying relevant people and events (for instance, ignoring great examples of innovators and producers because \"how could I be like da Vinci?\", so the person doesn't even bother learning what da Vinci did).

\n

Costs: Varies greatly depending on the goal and information coming in. Also varies depending on intensity - there's passive, casual research like reading historical fiction, biographies, stories, and anecdotes. Then there's active research, testing, underscoring, studying, internalizing, which takes a much greater amount of mental strain.

\n

Requirements: Research skills, judgment to pick the right places to study, concentration, time.

\n

Timeline: Depends on the goal. Of course, research can be an ongoing process forever, but how much is enough to get started? Depends on the field. Younger fields probably require less research since they tend to have more low-hanging fruit available for discovery/achievement, less established competition, and less regulation about beginning.

\n

Begin - A massively underrated stage. Deciding, \"I have decided, I will achieve this\" goes a long way. Most people never do this, instead half-working on their goals. There's some debate on whether publicly announcing your goals is helpful or harmful. Derek Sivers notes in \"Shut up! Announcing your plans makes you less motivated to accomplish them\" that people who announce their goals get a sense of pride and a feeling of accomplishment as if they had already achieved them. On the other hand, I wrote in \"The Joys of Public Accountability\" that having external commitment and pressure can help overcome feelings of laziness, procrastination, and fear. But regardless of whether you announce publicly or not, making a definite decision that \"I will achieve this\" seems to be very important and very underrated in goal achievement.

\n

Potential Pitfalls: Why don't people begin on goals? Very many reasons. Fear of failure, fear of success, procrastination, feeling overwhelmed and like there's not enough time, and perhaps the biggest one - not realizing how important making a firm decision to begin is.

\n

Costs: I find that being on the verge of beginning is intense and scary, and has a great mental cost. However, actually making the decision and beginning feels pretty good and is a release from a lot of that tension.

\n

Requirements: A bit of decisiveness, and then, simply starting.

\n

Timeline: Immediate, though people often take a lot of time to prepare to begin.

\n

Test - Testing things that aren't fatal or too damaging if they fail is probably the only way to achieve in a new domain, and maybe the only way to achieve in a domain where how to succeed shifts over time. Anna identifies that testing non-habitual methods is especially key. I agree with all of it - at this stage, doing anything that could plausibly work with no serious downside is correct. A lot of people obsess over, \"Where should I get started?\" Well, why not start anywhere that might be valuable? There's probably some good ways to assess and choose the best jumping off point, but action of any sort that might work at this stage is quite valuable.

\n

Potential Pitfalls: Not much, if you're dealing with things with low downside. That's the key - start by testing low-downside ways of pursuing your goal. If you're trying to get into something that has a significant downside by definition (investing money), be able to lose what you start out with, start slow, and pay attention to the fundamentals. You might lose whatever you put in, but if you can limit the downside, there aren't many pitfalls to this stage. I suppose perfectionism could kill you here if you let it.

\n

Costs: Depends on your mental makeup. Some people can stomach non-success better than others. In practical terms, it's not very expensive. But it can be embarrassing and frustrating, which does come with its own cost.

\n

Requirements: Action orientation. Though speed isn't required, the faster you can try/implement something, the better.

\n

Timeline: Depends on the discipline. You should start getting some feedback fairly quickly. 

\n

Analyze early results - Here, you analyze what's working early on and put more emphasis/effort into that area. At the same time, Anna notes and I agree that you should still keep exploring things. Also, some things don't pay off often but offer incredibly high upside when they do - if you think an area offers that, it might be worth pursuing even if it hasn't shown tangible results yet.

\n

Potential Pitfalls: Analyzing with too small of a sample size could give you bad data and make you quit too soon or be overly optimistic about a particular way. Focusing on something that produces short term gains with a relatively low local maximum could be unfortunate.

\n

Costs: Sitting down and digging through the numbers takes a while, and a lot of people don't like doing it. It's incredibly valuable, though; I do extensive tracking, which I've written about, including numbers and examples, in \"What Gets Measured, Gets Managed\" on my site. Yet it can be taxing or look scary to people. Perhaps another pitfall is thinking the analysis needs to be perfect, getting overwhelmed, and not starting. This can be mentally taxing if you look at it the wrong way; it's fun and light and breezy if you look at it from the right angle and don't get overly serious.

\n

Requirements: Analytical skills, especially with numbers, statistics, and trends help a lot here. Being able to make charts, graphs, and visualizations isn't necessary, but might help you easily spot long term trends.

\n

Timeline: I find analyzing one week's worth of data can be done in between 20 minutes and an hour, depending on the complexity. Unfortunately though, the first time you start analyzing, when you're trying to pick what to measure and how to record it, it takes a lot longer. After analyzing for a few weeks, it goes very quickly. If you do weekly analysis, larger scale analysis (monthly, annually) becomes pretty easy - you go through your already analyzed weekly data and see trends.

\n

Focus - Here, you isolate the one, two, or three things that provide the most results and bear down in those areas. This is why I originally posted the question that Anna replied to - why don't people focus on the areas that provide the most success? She mentioned that many people don't isolate achievement/victory heuristics. From her inspiration, I wrote this to begin to identify the pitfalls, costs, requirements, and rough timelines on each stage of achievement and victory. 

\n

Potential Pitfalls: The whole post has been leading up to this point - you need to have identified a goal, researched how to achieve it, started working on it, gotten some initial results, and started to analyze them in order to figure out what's giving the greatest payoff. Most people don't do that. Additionally, there could be elements of \"fear of success\" and general akrasia/procrastination.

\n

Costs: The most expensive costs have already largely been paid in the earlier stages - shifting your focus into high yield/high output areas now will result in more tangible rewards and more progress.

\n

Requirements: The ability to identify high yield areas from your analysis, and be decisive enough to focus in the highest yield area or two.

\n

Timeline: It might take a while to ramp up the effort in the highest yield area or there might be relevant equipment/supplies needed, but the decision to do it can be made very quickly after analysis.

\n

Make firm commitment - This might seem redundant, but I think people don't commit to their goals enough. At least, I see normal people who seem to be wandering through life without having anything particularly meaningful happen, whereas I tend to see results from people who say, \"Yes! I will!\" At this point, where you're getting ready to focus, you have an idea of what things cost and what the results are going to be. Do you want it badly enough to firmly commit to getting it at all costs? 

\n

Potential Pitfalls: Why don't people make firm commitments? Fear of failure, fear of success? Identity? Fear of standing out? If you've come this far and identified the key area and gotten started on focusing it, you should already be on the way to succeeding.

\n

Costs: This might be scary, or not. It might be slightly mentally taxing, or not. It might require an identity shift, or might not. It shouldn't be too difficult, but you're getting on the verge of success - you might have to confront some inner demons.

\n

Requirements: Decisiveness, a bit of willpower.

\n

Timeline: Instant.

\n

Persevere - Anna astutely noted that building an environment that's conducive to success makes it much more likely. In this stage, you're gearing up for the long haul. Getting relevant supplies, tools, outreach, building an external environment, making relevant commitments, and otherwise positioning yourself for success, and then persisting.

\n

Potential Pitfalls: A lot of people give up. You can reduce the chances of this by making the environment more supportive of your success, getting emotional support, and the old fashioned \"burn your boats behind you.\"

\n

Costs: I think if you've clearly identified the payoffs, it shouldn't be too tough, but the road can get weary at times. Persistence can be hard and tiring. The most expensive cost is doing the right thing when you need to, but you're not in the mood to do so.

\n

Requirements: Constructing an environment conducive to success, staying motivated, persistence.

\n

Timeline: Constructing a positive environment varies in time depending on what your environment looked like before you started. How long you'll have to persist depends on the scope of your goal and the methods you've chosen.

\n

Achieve or re-evaluate - Time to see if your beliefs pay the rent. You're either starting to achieve your goal, or you're starting to reconsider if the path you chose was correct. If the latter, you might have to go back to the drawing board. If the former, congratulations! Time to celebrate briefly, and then move on. Either way, you'll be assessing, re-assessing, identifying, and re-identifying goals at this stage.

\n

Potential Pitfalls: Quitting too soon before success. Getting arrogant and going too far after you've crossed the finish line and succeeded. The former comes from too much pessimism and not enough persistence. The latter comes from too much optimism and not enough re-analyzing.

\n

Costs: Completing and abandoning a project both have their costs, the latter more than the former. Either way you'll get a sense of closure after this - consciously abandoning a project that you gave your all to, but that didn't pan out or no longer fits your higher-level goals, can actually be very enjoyable. Maybe you can do a last creative act to \"ship\" something if it didn't work out - an analysis or write-up of the event. 

\n

Requirements: If achieving, graciousness. If re-evaluating, emotional steadfastness to not quit too early, but pragmatism/realism to know when you need to go back to the drawing board.

\n

Timeline: Funny enough, a lot of times when you're succeeding at an abstract discipline, you don't realize it for a while. Other goals are easier to notice. It depends on the specific goal and field.

\n

Repeat - After completing or abandoning a goal, it's time to go back to the start, to identify the next things you'll devote yourself to and spend your life energy on. This is where you start identifying goals, researching them, committing to starting, and so on.

\n

Anna wrote:

\n
\n

So, to second Lionhearted's questions: does this analysis seem right?  Have some of you trained yourselves to be substantially more strategic, or goal-achieving, than you started out?  How did you do it?  Do you agree with (a)-(h) above?  Do you have some good heuristics to add?  Do you have some good ideas for how to train yourself in such heuristics?

\n
\n

Indeed, that was a very good and insightful post, and thank you for the inspiration and jumping off point. I have used some of these methods in my own life to become more successful, but I think this exercise of posting the fallacy, getting your feedback in \"not automatically strategic,\" and writing this has been very valuable. I've tried to lay out the beginnings of understanding each stage in the process.

\n

My questions for you, and everyone else at Less Wrong - do these stages seem accurate? How about my descriptions of them, along with the potential pitfalls, costs, requirements, and timelines for each stage?

\n

I think there's a lot of potential to build out in each specific area, identify and apply these methods to common goals, and so on. Perhaps we could go through the list for developing in rationality, or becoming more healthy, or wealthy, or an accomplished artist, or any other number of valuable pursuits.

" } }, { "_id": "oMsQjJLKWi384sHWM", "title": "Ignorance of non-existent preferences", "pageUrl": "https://www.lesswrong.com/posts/oMsQjJLKWi384sHWM/ignorance-of-non-existent-preferences", "postedAt": "2010-09-11T16:03:25.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "oMsQjJLKWi384sHWM", "html": "

I often hear it said that since you can’t know what non-existent people or creatures want, you can’t count bringing them into existence as a benefit to them, even if you guess they will probably like it. For instance, Adam Ozimek makes this argument here.

\n

Does this absolute agnosticism about non-existent preferences mean it is also a neutral act to bring someone into existence when you expect them to have a net nasty experience?


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "muXfZr5EYCfZqLmsb", "title": "Memetic Hazards in Videogames", "pageUrl": "https://www.lesswrong.com/posts/muXfZr5EYCfZqLmsb/memetic-hazards-in-videogames", "postedAt": "2010-09-10T02:22:22.457Z", "baseScore": 139, "voteCount": 116, "commentCount": 164, "url": null, "contents": { "documentId": "muXfZr5EYCfZqLmsb", "html": "

Hello, player character, and welcome to the Mazes of Menace! Your goal is to get to the center and defeat the Big Bad. You know this is your goal because you received a message from a very authoritative source that said so. Alas, the maze is filled with guards and traps that make every step dangerous. You have reached an intersection, and there are two doors before you. Door A leads towards the center; it probably takes you to your destination. Door B leads away from the center; it could loop back, but it's probably a dead end. Which door do you choose?

\n

The correct answer, and the answer which every habitual video game player will instinctively choose, is door B: the probable dead end. Because your goal is not to reach the end quickly, but to search as much of the maze's area as you can, and by RPG genre convention, dead ends come with treasure. Similarly, if you're on a quest to save the world, you do side-quests to put it off as long as possible, because you're optimizing for fraction-of-content-seen, rather than probability-world-is-saved, which is 1.0 from the very beginning.

\n

If you optimize for one thing, while thinking that you're optimizing something else, then you may generate incorrect subgoals and heuristics. If seen clearly, the doors represent a trade-off between time spent and area explored. But what happens if that trade-off is never acknowledged, and you can't see the situation for what it really is? Then you're loading garbage into your goal system. I'm writing this because someone reported what looks like a video game heuristic leaking into the real world. While this hasn't been studied, it could plausibly be a common problem. Here are some of the common memetic hazards I've found in video games.

\n

\n

For most games, there's a guide that explains exactly how to complete your objective perfectly, but to read it would be cheating. Your goal is not to master the game, but to experience the process of mastering the game as laid out by the game's designers, without outside interference. In the real world, if there's a guide for a skill you want to learn, you read it.

\n

Permanent choices can be chosen arbitrarily on a whim, or based solely on what you think best matches your style, and you don't need to research which is better. This is because in games, the classes, skills, races and alignments are meant to be balanced, so they're all close to equally good. Applying this reasoning to the real world would mean choosing a career without bothering to find out what sort of salary and lifestyle it supports; but things in the real world are almost never balanced in this sense. (Many people, in fact, do not do this research, which is why colleges turn out so many English majors.)

\n

Tasks are arranged in order of difficulty, from easiest to hardest. If you try something and it's too hard, then you must have taken a wrong turn into an area you're not supposed to be. When playing a game, level ten is harder than level nine, and a shortcut from level one to level ten is a bad idea. Reality is the opposite; most of the difficulty comes up front, and it gets easier as you learn. When writing a book, chapter ten is easier than writing chapter nine. Games teach us to expect an easy start, and a tough finale; this makes the tough starts reality offers more discouraging.

\n

You shouldn't save gold pieces, because they lose their value quickly to inflation as you level. Treating real-world currency that way would be irresponsible. You should collect junk, since even useless items can be sold to vendors for in-game money. In the real world, getting rid of junk costs money in effort and disposal fees instead.

\n

These oddities are dangerous only when they are both confusing and unknown, and to illustrate the contrast, here is one more example. There are hordes of creatures that look just like humans, except that they attack on sight and have no moral significance. Objects which are not nailed down are unowned and may be claimed without legal repercussions, and homes which are not locked may be explored. But no one would ever confuse killing an NPC for real murder, nor clicking an item for larceny, nor exploring a level for burglary; these actions are so dissimilar that there is no possible confusion.

\n

But remember that search is not like exploration, manuals are not cheats, careers are not balanced, difficulty is front-loaded, and dollars do not inflate like gold pieces. Because these distinctions are tricky, and failing to make them can have consequences.

" } }, { "_id": "RZrw9wpEWEwWRTzLk", "title": "More art, less stink: Taking the PU out of PUA", "pageUrl": "https://www.lesswrong.com/posts/RZrw9wpEWEwWRTzLk/more-art-less-stink-taking-the-pu-out-of-pua", "postedAt": "2010-09-10T00:25:59.829Z", "baseScore": 90, "voteCount": 85, "commentCount": 639, "url": null, "contents": { "documentId": "RZrw9wpEWEwWRTzLk", "html": "

Overview:  This is a proposal for a LessWrong Pick Up Artist (PUA)-like sub-community; PUA without the PU (get it?)1. Members would focus on the deliberate practice of social artistry, but with non-mating goals. Origins and intent of the goal are discussed, possible topics for learning are listed, and suggestions for next steps are solicited.

\n

\n

Origins:

\n

The PUA Community began decades ago with men that wanted to learn how to get better at seducing women. As I understand it, they simply began posting their (initially) awkward attempts at love online. Over the years, they appear to have amassed a fairly impressive set of practical knowledge and skills in this domain.

\n

I admire and applaud this effort. However, my ability to meet women is not currently a limiting factor in my life satisfaction. In reading some of the PUA literature, I was struck by how often different authors remarked on the unintended side benefits of their training: better relationships at work, better interviewing skills, more effective negotiations, more non-pickup social fun, better male friendships, more confidence, etc. These guys were able to make major strides in areas that I've struggled to improve at all in...  without even bloody intending to! This struck me as something worth taking very seriously!

\n

I find it alarming that such a valuable resource would be monopolized in pursuit of orgasm; it's rather as if a planet were to burn up its hydrocarbons instead of using them to make useful polymers. PUA ought to be a special case of a more general skill set, and it's being wasted. I say that my goals are noble, and as such I should have the opportunity to sharpen my skills to at least the keenness of a PUA master!

\n

Statement of Purpose:

\n

The purpose of this post is to open discussion on how to construct a community of developing social artisans, modeled after the useful components2 of the PUA community. If there is sufficient mass, the next goals are probably sussing out learning methods and logistics.

\n

The mission of the hypothetical community will probably need to be fleshed out more explicitly (and I don't want to be too prescriptive), but pretty much what I was thinking was expressed well by Scott Adams:

\n
\n
...\n

I think technical people, and engineers in particular, will always have good job prospects. But what if you don't have the aptitude or personality to follow a technical path? How do you prepare for the future?

I'd like to see a college major focusing on the various skills of human persuasion. That's the sort of skillset that the marketplace will always value and the Internet is unlikely to replace. The persuasion coursework might include...

\n\n


You can imagine a few more classes that would be relevant. The idea is to create people who can enter any room and make it their bitch. [emphasis added]

Colleges are unlikely to offer this sort of major because society is afraid and appalled by anything that can be labeled \"manipulation,\" which isn't even a real thing.

Manipulation isn't real because almost every human social or business activity has as its major or minor objective the influence of others. You can tell yourself that you dress the way you do because it makes you happy, but the real purpose of managing your appearance is to influence how others view you.

Humans actively sell themselves every minute they are interacting with anyone else. Selling yourself, which sounds almost noble, is little more than manipulating other people to do what is good for you but might not be so good for others. All I'm suggesting is that people could learn to be more effective at the things they are already trying to do all day long.

\n
\n
\n

Word! [EDIT: We need not be bound by this exact list. For instance, there is no way I'm going to be doing any golfing.]

\n

I've met people who were shockingly, seemingly preternaturally adept in social settings. Of course this is  not  magic. Like anything else, it can be reduced to a set of constituent steps and learned. We just need to figure out how.

\n

Next steps:

\n

I have a rather long list of ideas ready to go, but they made this post kind of awkward. Plus, Scott Adam's post says much of what I was trying to get at. Let's just start the conversation.

\n

So, what do you think?

\n
\n

1 I have nothing whatsoever against the majority of the PUAers with whom I've had encounters, and the title is just meant to be funny. No offense!

\n

2 The mention of PUA drags along several associations that I want to disavow (think anything obviously \"Dark Arts\"). I considered omitting the fact that much of the intellectual heritage of this idea is the PUAers to avoid these associations, but I couldn't think of another way to tie it together. This idea owes its genesis to the PUA community, but the product is not intended to be its exact replica. Undesirable elements need not be ported from the old system to the new.

" } }, { "_id": "SupKifNG37T6eZcu8", "title": "Reason is not the only means of overcoming bias", "pageUrl": "https://www.lesswrong.com/posts/SupKifNG37T6eZcu8/reason-is-not-the-only-means-of-overcoming-bias", "postedAt": "2010-09-09T22:59:58.922Z", "baseScore": 8, "voteCount": 19, "commentCount": 30, "url": null, "contents": { "documentId": "SupKifNG37T6eZcu8", "html": "

Sometimes the best way to overcome bias is through an emotional appeal. Below I interweave discussion of how emotional appeals can be used to overcome the biases corresponding to the identifiable victim effect and the maladaptive resource-hoarding instinct.

\n
\n

\n

The beginning of the abstract to Paul Slovic's article \"If I look at the mass I will never act”: Psychic numbing and genocide reads

\n
\n

Most people are caring and will exert great effort to rescue individual victims whose needy plight comes to their attention. These same good people, however, often become numbly indifferent to the plight of individuals who are “one of many” in a much greater problem.

\n
\n

Eliezer has discussed this and related topics extensively, for example in Scope Insensitivity. See also the references listed at wikipedia under identifiable victim effect. How can we go about overcoming this bias? One answer is \"by keeping it in mind and by teaching people about it.\" But while some people have the intellectual interest and ability to learn about the identifiable victim effect, others don't. Moreover, it's not clear that being aware of this bias is by itself very useful in overcoming it.

\n

But reason is not the only means of overcoming bias.

\n
\n

Before I proceed I should make some disclaimers:

\n

Disclaimer #1: Below I discuss overcoming biases that people have against donating money for the purpose of improving health in the developing world. In doing so I am not advocating in favor of developing world aid over other forms of charity. The case of developing world aid simply provides a good example. Most people who decline to donate to charities that improve health in the developing world do not decline to do so because they think they have a better cause to donate to.

\n

Disclaimer #2: This posting is intended to advocate the use of emotional appeals specifically for the purpose of overcoming bias. Obviously emotional appeals can be and frequently are used to create bias - this is not what I'm advocating. I believe that the video I discuss below does more to overcome bias than it does to create bias for the typical viewer.

\n

Disclaimer #3: The use of emotional appeals strikes some as patronizing or manipulative. I do not view them in this light when they are used for the sake of aligning people's behavior with their values for the sake of good causes. I subject myself to stimuli with emotional appeals in order to keep myself motivated to do what I think is right. The video below is one such example. Another such example is the paragraph of Eliezer's One Life Against The World which reads

\n
\n

I agree that one human life is of unimaginably high value. I also hold that two human lives are twice as unimaginably valuable. Or to put it another way: Whoever saves one life, if it is as if they had saved the whole world; whoever saves ten lives, it is as if they had saved ten worlds. Whoever actually saves the whole world - not to be confused with pretend rhetorical saving the world - it is as if they had saved an intergalactic civilization.

\n
\n

Disclaimer #4: In referencing the topic of charitable donation, I'm not advocating that people encourage others to donate money to any cause to the point of becoming unhappy about it. A few months ago I wrote an article explaining my position on this matter here. I was interested to be reminded that Eliezer has made similar remarks in his video response to a question by komponisto.

\n
\n

The video titled The Life You Can Save in 3 Minutes does a great job of overcoming the absence of an identifiable victim effect without providing an identifiable victim and without explicitly mentioning the identifiable victim effect at all. The first 1:15 minutes make a case for the viewer donating money to save lives in the developing world. The text of the next segment of the video reads

\n
\n

(cue the skepticism)

\n

Does aid even work?

\n

Doesn't it breed dependence?

\n

Isn't it wasted?

\n

And poverty is bottomless!

\n

Eternal!

\n

Infinite!

\n

Isn't it all pointless?

\n
\n

In doing so the authors of the video are channeling a common reaction to appeals for developing world aid. It's rational for viewers to have the concerns mentioned above. As the video comments next:

\n
\n

There are long answers to these tough questions.

\n
\n

Developing world aid is a very tricky business and a high level of vigilance is required to make sure that it goes well. But at the moment the bottom line appears to me to be that it's possible to greatly improve people's lives by donating money to certain charities working to save lives in the developing world. See GiveWell's International page for more information.

\n

More to the point of this post, in practice there are problems of people:

\n
    \n
  1. Raising legitimate doubts as to the value of developing world aid and then failing to follow up on determining whether these doubts are well grounded.
  2. \n
  3. Asymmetrically focusing on potential negative unintended consequences of developing world aid rather than potential positive unintended consequences of developing world aid.
  4. \n
  5. Imagining that donating money would correspond to a greater sacrifice of material resources than it actually does.
  6. \n
\n

I view these observed behaviors as being in line with Yvain's Conflicts Between Mental Subagents: Expanding Wei Dai's Master-Slave Model: it seems as though these lapses in reasoning arise from modules of our brain which were designed to allow us to preserve our self-image as a good person while declining to relinquish material resources. I agree with Yvain's Would Your Real Preferences Please Stand Up? posting that there's no meaningful sense in which such lapses in reasoning should be viewed as revealing our preferences.

\n

As I say in Missed opportunities for doing well by doing good I suspect that:

\n
    \n
  1. Our tendency to hoard material resources is largely maladaptive and doesn't improve our lives very much at all.
  2. \n
  3. The psychological benefits of donating money to help others are considerable and systematically undervalued.
  4. \n
\n

So it looks highly desirable to help people overcome their irrational hoarding of material resources, overcome the psychic numbing attached to the absence of an identifiable victim, and see the relative costs and benefits of donating in clear terms. The creators of The Life You Can Save in 3 Minutes do a great job of this in the subsequent portion of the video. After the text \"there are long answers to these tough questions\" the video shifts into a different mood and the text reads:

\n
\n

But here's the short one:

\n

What if your daughter was the \"drop in the bucket\"?

\n

Real lives are saved every single day. People with real names whose families weep with joy to see them still alive.

\n

If you were one of those people you wouldn't think it was pointless.

\n
\n

This serves to dispel the absence of an identifiable victim and make the true benefits of donating (or equivalently, the true opportunity cost of not giving) psychologically salient. [Edit: At least sometimes, but see byrnema's comment and my response.] A portion of the remainder of the video makes the low cost of giving psychologically salient to the donor. Key to the video's effectiveness is its use of music and visual imagery. The video is well worth viewing in its entirety.

\n

As I said above, sometimes the best way to overcome bias is through an emotional appeal. This is especially so when communicating with neurotypical people whose minds are naturally drawn toward emotional detail rather than logical detail.

\n

Members of the Less Wrong community who are interested in this topic might find it useful to study the research of Deborah Small.

" } }, { "_id": "K24PPx7R4kSukcGh4", "title": "My plans", "pageUrl": "https://www.lesswrong.com/posts/K24PPx7R4kSukcGh4/my-plans", "postedAt": "2010-09-09T14:19:50.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "K24PPx7R4kSukcGh4", "html": "

I’m currently expanding my post on the Self Indication Assumption and the Great Filter into an honours thesis, due at the end of October. So, barring bouts of procrastination or especially hard work deserving of blogging rights, I will probably remain fairly quiet till then. In the meantime, if you have any opinions on what career or the like I would be relatively good at, or anything else relevant to what I should do upon my impending graduation - other than do a dance and read all those books I meant to before saving the world in some yet-to-be-fully-worked-out fashion - please tell me here.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "PBRWb2Em5SNeWYwwB", "title": "Humans are not automatically strategic", "pageUrl": "https://www.lesswrong.com/posts/PBRWb2Em5SNeWYwwB/humans-are-not-automatically-strategic", "postedAt": "2010-09-08T07:02:52.260Z", "baseScore": 618, "voteCount": 537, "commentCount": 278, "url": null, "contents": { "documentId": "PBRWb2Em5SNeWYwwB", "html": "

Reply to: A \"Failure to Evaluate Return-on-Time\" Fallacy

\n

Lionhearted writes:

\n
\n

[A] large majority of otherwise smart people spend time doing semi-productive things, when there are massively productive opportunities untapped.

\n

A somewhat silly example: Let's say someone aspires to be a comedian, the best comedian ever, and to make a living doing comedy. He wants nothing else, it is his purpose. And he decides that in order to become a better comedian, he will watch re-runs of the old television cartoon 'Garfield and Friends' that was on TV from 1988 to 1995....

\n

I’m curious as to why.

\n
\n

Why will a randomly chosen eight-year-old fail a calculus test?  Because most possible answers are wrong, and there is no force to guide him to the correct answers.  (There is no need to postulate a “fear of success”; most ways of writing or not writing on a calculus test constitute failure, and so people, and rocks, fail calculus tests by default.)

\n

Why do most of us, most of the time, choose to \"pursue our goals\" through routes that are far less effective than the routes we could find if we tried?[1]  My guess is that here, as with the calculus test, the main problem is that most courses of action are extremely ineffective, and that there has been no strong evolutionary or cultural force sufficient to focus us on the very narrow behavior patterns that would actually be effective. 

\n

To be more specific: there are clearly at least some limited senses in which we have goals.  We: (1) tell ourselves and others stories of how we’re aiming for various “goals”; (2) search out modes of activity that are consistent with the role, and goal-seeking, that we see ourselves as doing (“learning math”; “becoming a comedian”; “being a good parent”); and sometimes even (3) feel glad or disappointed when we do/don’t achieve our “goals”.

\n

But there are clearly also heuristics that would be useful to goal-achievement (or that would be part of what it means to “have goals” at all) that we do not automatically carry out.  We do not automatically:

\n\n

.... or carry out any number of other useful techniques.  Instead, we mostly just do things.  We act from habit; we act from impulse or convenience when primed by the activities in front of us; we remember our goal and choose an action that feels associated with our goal.  We do any number of things.  But we do not systematically choose the narrow sets of actions that would effectively optimize for our claimed goals, or for any other goals.

\n

Why?  Most basically, because humans are only just on the cusp of general intelligence.  Perhaps 5% of the population has enough abstract reasoning skill to verbally understand that the above heuristics would be useful once these heuristics are pointed out.  That is not at all the same as the ability to automatically implement these heuristics.  Our verbal, conversational systems are much better at abstract reasoning than are the motivational systems that pull our behavior.  I have enough abstract reasoning ability to understand that I’m safe on the glass floor of a tall building, or that ice cream is not healthy, or that exercise furthers my goals... but this doesn’t lead to an automatic updating of the reward gradients that, absent rare and costly conscious overrides, pull my behavior.  I can train my automatic systems, for example by visualizing ice cream as disgusting and artery-clogging and yucky, or by walking across the glass floor often enough to persuade my brain that I can’t fall through the floor... but systematically training one’s motivational systems in this way is also not automatic for us.  And so it seems far from surprising that most of us have not trained ourselves in this way, and that most of our “goal-seeking” actions are far less effective than they could be.

\n

Still, I’m keen to train.  I know people who are far more strategic than I am, and there seem to be clear avenues for becoming far more strategic than they are.  It also seems that having goals, in a much more pervasive sense than (1)-(3), is part of what “rational” should mean, will help us achieve what we care about, and hasn't been taught in much detail on LW.

\n

So, to second Lionhearted's questions: does this analysis seem right?  Have some of you trained yourselves to be substantially more strategic, or goal-achieving, than you started out?  How did you do it?  Do you agree with (a)-(h) above?  Do you have some good heuristics to add?  Do you have some good ideas for how to train yourself in such heuristics?

\n

 

\n

[1] For example, why do many people go through long training programs “to make money” without spending a few hours doing salary comparisons ahead of time?  Why do many who type for hours a day remain two-finger typists, without bothering with a typing tutor program?  Why do people spend their Saturdays “enjoying themselves” without bothering to track which of their habitual leisure activities are *actually* enjoyable?  Why do even unusually numerate people fear illness, car accidents, and bogeymen, and take safety measures, but not bother to look up statistics on the relative risks? Why do most of us settle into a single, stereotyped mode of studying, writing, social interaction, or the like, without trying alternatives to see if they work better -- even when such experiments as we have tried have sometimes given great boosts?

" } }, { "_id": "ZxMBmnGkB3H2zjCQs", "title": "September Less Wrong Meetup aka Eliezer's Bayesian Birthday Bash", "pageUrl": "https://www.lesswrong.com/posts/ZxMBmnGkB3H2zjCQs/september-less-wrong-meetup-aka-eliezer-s-bayesian-birthday", "postedAt": "2010-09-08T04:51:31.686Z", "baseScore": 11, "voteCount": 9, "commentCount": 53, "url": null, "contents": { "documentId": "ZxMBmnGkB3H2zjCQs", "html": "

In honor of Eliezer's Birthday, there will be a Less Wrong meetup at 5:00PM this Saturday, the 11th of September, at the SIAI Visiting Fellows house (3755 Benton St, Santa Clara CA).  Come meet Eliezer and your fellow Less Wrong / Overcoming Bias members, have cool conversations, eat good food and plentiful snacks (including birthday cake of course!), and scheme out ways to make the world more Bayesian.

\n

As usual, the meet-up will be party-like and full of small group conversations. Rationality games may also be present. Newcomers are welcome. Feel free to bring food to share, or not. 

\n

Please RSVP at the meetup.com page if you plan to attend.

\n

Sorry for the last-minute notice - I only found out today that the 11th is Eliezer's birthday.

" } }, { "_id": "RzdPXLd2b6qmEB2mf", "title": "A \"Failure to Evaluate Return-on-Time\" Fallacy", "pageUrl": "https://www.lesswrong.com/posts/RzdPXLd2b6qmEB2mf/a-failure-to-evaluate-return-on-time-fallacy", "postedAt": "2010-09-07T19:01:42.066Z", "baseScore": 87, "voteCount": 89, "commentCount": 110, "url": null, "contents": { "documentId": "RzdPXLd2b6qmEB2mf", "html": "

I don't have a good name for this fallacy, but I hope to work it out with everyone here through thinking and discussion.

\n

It goes like this: a large majority of otherwise smart people spend time doing semi-productive things, when there are massively productive opportunities untapped.

\n

A somewhat silly example: Let's say someone aspires to be a comedian, the best comedian ever, and to make a living doing comedy. He wants nothing else, it is his purpose. And he decides that in order to become a better comedian, he will watch re-runs of the old television cartoon 'Garfield and Friends' that was on TV from 1988 to 1995.

\n

This is absolutely not a great use of his time. Maybe he'll learn a little about joke structures and pick up a gag from the show. It's probably not entirely useless. But Garfield and Friends wasn't all that funny to begin with, and there's only so much to be learned from it. The would-be comedian would be much better off watching Eddie Murphy, George Carlin, and Bill Cosby if he wanted to watch old clips. He'd be much better off reading memoirs, autobiographies, and articles by people like Steve Martin and Jerry Seinfeld. Or he could look to get into the technical aspects of comedy and study well-respected books in the field. Or best yet, go to an open mic night, or spend time writing jokes, or otherwise do comedy. But he doesn't; instead he just watches re-runs of Garfield and Friends.

\n

I think a lot of us are guilty of this in our daily lives. Certainly, most people on LessWrong examine our lives more carefully than the rest of the world. A lot of us have clear goals. Maybe not a full, cohesive belief structure, but pretty clear. And sometimes we dabble around and do low-impact stuff instead of high impact stuff. The equivalent of watching Garfield and Friends re-runs.

\n

I've been an entrepreneur and done some entrepreneurial stuff. In the beginning, you have to test different things, because you don't know what's going to work. But I've seen this fallacy, and I was guilty of it myself - I didn't double down and put all my efforts into what was working, or at least commit mostly to it.

\n

The most successful entrepreneurs do. Oh, keep learning, and diversify a bit, sure. But I remember watching a talk about the success of the company Omniture - they had two people in their enterprise business-to-business side, and 60 people in their business-to-consumer side. Then the founder, Josh James, realized 90% of their revenue was coming from business to business, so he said - \"Hey. 58 of you go over to the business to business side.\" And just like that, he now had 60 of his team working in the part of the company that was producing 90% of the company's revenue. Omniture sold last year for $1.8 billion.

\n

I feel like a lot of us have those opportunities - we see that a place we're putting a small amount of effort is accounting for most of our success, but we don't say - \"Okay, that area that I'm giving a little attention that's producing massive results? All attention goes there now.\" No, we keep doing things that aren't producing the results.

\n

I'm curious as to why. Do we not evaluate the return on time? Is it just general akrasia, procrastination, fear of success, fear of standing out? Those hard-wired evolutionary \"don't stand out too much\" things? Does it seem like it'd be too easy or can't be real? 

\n

A lot of times, I'm frittering time away on something that will get me, y'know, very small gains. I'm not talking speculative things, or learning, or relaxing. Like, just small gains in my development. Meanwhile, there's something on-hand I could do that'd have 300 times the impact. For sure, almost certainly 300 times the impact, because I see some proven success in the 300x area, and the frittering-away-time area is almost certainly not going to be valuable.

\n

And heck, I do this a lot less than most people. Most people are really, really guilty of this. Let's discuss and figure out why. Your thoughts?

" } }, { "_id": "sbZLTD4MTYfNDMBpi", "title": "Something's Wrong", "pageUrl": "https://www.lesswrong.com/posts/sbZLTD4MTYfNDMBpi/something-s-wrong", "postedAt": "2010-09-05T18:08:54.268Z", "baseScore": 134, "voteCount": 127, "commentCount": 164, "url": null, "contents": { "documentId": "sbZLTD4MTYfNDMBpi", "html": "

\n

Atheists trying to justify themselves often find themselves asked to replace religion.  “If there’s no God, what’s your system of morality?”  “How did the Universe begin?”  “How do you explain the existence of eyes?”  “How do you find meaning in life?”  And the poor atheist, after one question too many, is forced to say “I don’t know.”  After all, he’s not a philosopher, cosmologist, psychologist, and evolutionary biologist rolled into one.  And even they don’t have all the answers. 

\n

But the atheist, if he retains his composure, can say, “I don’t know, but so what?  There’s still something that doesn’t make sense about what you learned in Sunday school.  There’s still something wrong with your religion.  The fact that I don’t know everything won’t make the problem go away.”

\n

What I want to emphasize here, even though it may be elementary, is that it can be valuable and accurate to say something’s wrong even when you don’t have a full solution or a replacement.

\n

Consider political radicals.  Marxists, libertarians, anarchists, greens, John Birchers.  Radicals are diverse in their political theories, but they have one critical commonality: they think something’s wrong with the status quo.  And that means, in practice, that different kinds of radicals sometimes sound similar, because they’re the ones who criticize the current practices of the current government and society.  And it’s in criticizing that radicals make the strongest arguments, I think.  They’re sketchy and vague in designing their utopias, but they have moral and evidentiary force when they say that something’s wrong with the criminal justice system, something’s wrong with the economy, something’s wrong with the legislative process. 

\n

Moderates, who are invested in the status quo, tend to simply not notice problems, and to dismiss radicals for not having well-thought-out solutions.  But it’s better to know that a problem exists than to not know – regardless of whether you have a solution at the moment.

\n

Most people, confronted with a problem they can’t solve, say “We just have to live with it,” and very rapidly gloss into “It’s not really a problem.”  Aging is often painful and debilitating and ends in death.  Almost everyone has decided it’s not really a problem – simply because it has no known solution.  But we also used to think that senile dementia and toothlessness were “just part of getting old.”  I would venture that the tendency, over time, to find life’s cruelties less tolerable and to want to cure more of them, is the most positive feature of civilization.  To do that, we need people who strenuously object to what everyone else approaches with resignation. 

\n

Theodore Roosevelt wrote, “It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better.” 

\n

But it is the critic who counts. Just because I can’t solve P=NP doesn’t mean I can’t say the latest attempt at a proof is flawed.  Just because I don’t have a comprehensive system of ethics doesn’t mean there’s not something wrong with the Bible’s.  Just because I don’t have a plan for a perfect government doesn’t mean there isn’t something wrong with the present one.  Just because I can’t make people live longer and healthier lives doesn’t mean that aging isn’t a problem.  Just because nobody knows how to end poverty doesn’t mean poverty is okay.  We are further from finding solutions if we dismiss the very existence of the problems. 

\n

This is why I’m basically sympathetic to speculations about existential risk, and also to various kinds of research associated with aging and mortality.  It’s calling attention to unsolved problems.  There’s a human bias against acknowledging the existence of problems for which we don’t have solutions; we need incentives in the other direction, encouraging people to identify hard problems.  In mathematics, we value a good conjecture or open problem, even if the proof doesn’t come along for decades.  This would be a good norm to adopt more broadly – value the critic, value the one who observes a flaw, notices a hard problem, or protests an outrage, even if he doesn’t come with a solution.  Fight the urge to accept a bad solution just because it ties up the loose ends.

\n" } }, { "_id": "gZbHSWcLvj7ZopSas", "title": "Controlling Constant Programs", "pageUrl": "https://www.lesswrong.com/posts/gZbHSWcLvj7ZopSas/controlling-constant-programs", "postedAt": "2010-09-05T13:45:47.759Z", "baseScore": 35, "voteCount": 39, "commentCount": 33, "url": null, "contents": { "documentId": "gZbHSWcLvj7ZopSas", "html": "

This post explains the sense in which UDT and its descendants can control programs with no parameters, without using explicit control variables.

\n

Related to: Towards a New Decision Theory, What a reduction of \"could\" could look like.

\n

Usually, a control problem is given by an explicit (functional) dependence of outcome on control variables (together with a cost function over the possible outcomes). Solution then consists in finding the values of control variables that lead to the optimal outcome. On the face of it, if we are given no control variables, or no explicit dependence of the outcome on control variables, then the problem is meaningless and cannot be solved.

\n

Consider what is being controlled in UDT and in the model of control described by Vladimir Slepnev. It might be counterintuitive, but in both cases the agent controls constant programs, in other words programs without explicit parameters. And for constant programs, their output is completely determined by their code, nothing else.

\n

Let's take, for example, Vladimir Slepnev's model of Newcomb's problem, written as follows:

\n
def world():
  box1 = 1000
  box2 = 0 if agent() == 2 else 1000000
  return box2 + (box1 if agent() == 2 else 0)
\n

The control problem that the agent faces is to optimize the output of program world() that has no parameters. It might be tempting to say that there is a parameter, namely the sites where agent() is called in the program, but it's not really so: all these entries can be substituted with the code of program agent() (which is also a constant program), at which point there remains no single element in the program world() that can be called a control variable.

\n

To make this point more explicit, consider the following variant of program world():

\n
def world2():
  box1 = 1000
  box2 = 0 if agent2() == 2 else 1000000
  return box2 + (box1 if agent() == 2 else 0)
\n

Here, agent2() is a constant program used to predict the agent's decision, that is known to compute the same output as agent(), but does not, generally, resemble agent() in any other way. If we try to consider only the explicit entry of program agent() as a control variable (either by seeing the explicit program call in this representation of world2(), or by matching the code of agent() if its code was substituted for the call), we'll end up with an incorrect understanding of the situation, where the agent is only expected to control its own action, but not the prediction computed by agent2().

\n

Against explicit dependence

\n

What the above suggests is that the dependence of the controlled structure on the agent's decision shouldn't be seen as part of the problem statement. Instead, this dependence should be reconstructed, given the definition of the agent and the definition of the controlled structure. Relying on explicit dependence, even if it's given as part of the problem statement, is actually detrimental to the ability to correctly solve the problem. Consider, for example, the third variant of the model of Newcomb's problem, where the agent is told explicitly how its action (decision) is used:

\n
def world3(action):
  box1 = 1000
  box2 = 0 if agent2() == 2 else 1000000
  return box2 + (box1 if action == 2 else 0)
\n

Here, agent2() is the predictor, and whatever action the program agent() computes is passed as a parameter to world3(). Note that the problem is identical to one given by world2(), but you are explicitly tempted to privilege the dependence of the outcome of world3() on its parameter that is computed by agent(), over the dependence on the prediction computed by agent2(), even though both are equally important for correctly solving the problem.

\n

To avoid this explicit dependence bias, we can convert the problem given with an explicit dependence to one without, by \"plugging in\" all parameters, forgetting about the seams, and posing a problem of restoring all dependencies from scratch (alternatively, of controlling the resulting constant program):

\n
def world3'(): 
  return world3(agent())
\n

Now, world3'() can be seen to be equivalent to world2(), after the code of world3() is substituted.

\n

Knowable consequences

\n

How can the agent control a constant program? Isn't its output \"already\" determined? What is its decision about the action for, if the output of the program is already determined?

\n

Note that even if the environment (controlled constant program) determines its output, it doesn't follow that the agent can figure out what that output is. The agent knows a certain number of logical facts (true statements) and can work on inferring more such statements, but that might not be enough to infer the output of the environment. And every little bit of knowledge helps.

\n

One kind of statement in particular has an unusual property: even though the agent can't (usually) infer the truth of such statements, it can determine their truth any way it likes. These are statements about the agent's decision, such as (agent() == 1). Being able to know their truth by virtue of determining it allows the agent to \"infer\" the truth of many more statements than it otherwise could, and in particular this could allow inferring something about the output of the environment (conditional on the decision being so and so). Furthermore, determining which way the statements about the agent go allows it to indirectly determine which way the output of the environment goes. Of course, the environment already takes into account the actual decision that the agent will arrive at, but the agent normally doesn't know this decision beforehand.

\n

Moral arguments

\n

Let's consider an agent that reasons formally about the output of the environment, and in particular about the output of the environment given possible decisions of the agent. Such an agent produces (proves) statements in a given logical language and theory, and some of the statements are \"calls for action\": that is, by proving such a statement, the agent is justified in taking the action associated with it.

\n

For example, with programs world2() and agent() above, where agent() is known to only output either 1 or 2, one such statement is:

\n
[agent()==1 => world2()==1000000] AND [agent()==2 => world2()==1000]
\n

This statement is a moral argument for deciding (agent()==1). Even though in the statement itself one of the implications must be vacuously true by virtue of its antecedent being false, and so can't say anything about the output of the environment, the implication corresponding to the actually chosen action is not vacuous, and therefore choosing that action simultaneously decides the output of the environment.
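
\n

To make this concrete, here is a small runnable Python sketch of the Newcomb setup above. It is my own illustration, not Wei Dai's or Vladimir Slepnev's actual algorithms (those search for proofs of implications like the one just quoted, rather than evaluating the environment under an assumption); the ASSUMED_ACTION device, which stands in for the supposition that agent() outputs a given action, is invented for this sketch, and world2() is merely restated in runnable form.

\n
ASSUMED_ACTION = None   # when set, stands in for the assumption 'suppose agent() == a'

def agent():
    # Consider each possible action, evaluate the constant program world2() under the
    # assumption that agent() returns that action, and output the action whose implied
    # outcome is best.
    global ASSUMED_ACTION
    if ASSUMED_ACTION is not None:
        return ASSUMED_ACTION               # we are inside a counterfactual evaluation
    best_action, best_utility = None, float('-inf')
    for action in (1, 2):
        ASSUMED_ACTION = action
        utility = world2()                  # the consequence of 'agent() == action'
        ASSUMED_ACTION = None
        if utility > best_utility:
            best_action, best_utility = action, utility
    return best_action

def agent2():
    # The predictor: a different program that happens to compute the same output as
    # agent(); delegating is the crudest way to guarantee agreement.
    return agent()

def world2():
    box1 = 1000
    box2 = 0 if agent2() == 2 else 1000000
    return box2 + (box1 if agent() == 2 else 0)

print(agent())   # 1, i.e. one-boxing, with an implied payoff of 1000000
\n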

\n

Consequences appear consistent

\n

You might also want to re-read Vladimir Slepnev's post on the point that consequences appear consistent. Even though most of the implications in moral arguments are vacuously true (based on a false premise), the agent can't prove which ones, and correspondingly can't prove a contradiction from their premises. Let's say the agent proves two statements implying different consequences of the same action, such as

\n
[agent()==2 => world2()==1000] and 
[agent()==2 => world2()==2000].
\n

Then, it can also prove that (agent()==2) implies a contradiction, which normally can't happen. As a result, consequences of possible actions appear as consistent descriptions of possible worlds, even though they are not. For example, a one-boxing agent in Newcomb's problem can prove

\n
[agent()==2 => world2()==1000],
\n

but (world2()==1000) is actually inconsistent.

\n

This suggests a reduction of the notion of (impossible) possible worlds (counterfactuals) generated by possible actions of the agent that are not taken. Just like explicit dependencies, such possible worlds can be restored from the definition of the (actual) environment, instead of being explicitly given as part of the problem statement.

\n

Against counterfactuals

\n

Since it turns out that both dependencies and counterfactual environments are implicit in the actual environment, they don't need to be specified in the problem statement. Furthermore, if counterfactual environments are not specified, their utility doesn't need to be specified either: it's sufficient to specify the utility of the actual environment.

\n

Instead of valuing counterfactuals, the agent can just value the actual environment, with \"value of counterfactuals\" appearing as an artifact of the structure of moral arguments.

\n

Ambient control

\n

I call this flavor of control ambient control, and correspondingly the decision theory studying it, ambient decision theory (ADT). This name emphasizes how the agent controls the environment not from any given location, but potentially from anywhere, through the patterns whose properties can be inferred from statements about the agent's decisions. A priori, the agent is nowhere and everywhere, and some of its work consists in figuring out how exactly it exerts its control.

\n

Prior work

\n

In discussions on Timeless decision theory, Eliezer Yudkowsky suggested the idea that the agent's decision could control the environment through more points than the agent's \"actual location\", which led to the question of finding the right configuration of explicit dependencies, or counterfactual structure, in the problem statement (described by Anna Salamon in this post).

\n

Wei Dai first described a decision-making scheme involving automatically inferring such dependencies (and control of constant programs, hence an instance of ambient control) known as Updateless decision theory. In this scheme, the actual inference of logical consequences was extracted as an unspecified \"mathematical intuition module\".

\n

Vladimir Slepnev figured out an explicit proof-of-the-concept algorithm successfully performing a kind of ambient control to solve the problem of cooperation in Prisoner's Dilemma. The importance of this algorithm is in showing that the agent can in fact successfully prove the moral arguments necessary to make a decision in this nontrivial game theory problem. He then abstracted the algorithm and discussed some of the more general properties of ambient control.

" } }, { "_id": "HrDyNiJe6bZDFATeT", "title": "Reasonably Fun", "pageUrl": "https://www.lesswrong.com/posts/HrDyNiJe6bZDFATeT/reasonably-fun", "postedAt": "2010-09-04T12:07:51.113Z", "baseScore": 17, "voteCount": 25, "commentCount": 9, "url": null, "contents": { "documentId": "HrDyNiJe6bZDFATeT", "html": "

Fun on the whole is a pretty amorphous concept, and being reasonable about it is tricky; however, there are some routes of enquiry.

\n

My personal understanding of fun comes from the experience of programming gameplay mechanics (such as character control, AI and minigames) and through designing and pitching games professionally. This has led me to create a number of theories about why games (and other forms of entertainment) are fun.

These ideas are built on my experiences of adjusting games and game pitches to make them more enjoyable. On the whole this sense of enjoyment is based on the opinions of those making (or paying for the development of) the games (where groupthink is a problem). However, many of the games and their pitches have been evaluated by focus groups and gameplay recordings, performed in relatively controlled settings. Additional information comes from finding patterns in sales figures and other representations of what people enjoy (e.g. the types of magazine available for sale in newsagents).

From these experiences I've attempted to find simple theoretical justifications for the behaviour I've observed. In some cases, these theories have some validation through external research, but on the whole are not experimentally validated. I take the view that it is better to have some sort of theory rather than nothing, and indeed these theories have been very useful in guiding my work.

\n

These ideas will focus on the fun (or more generally the source of motivation) provided by computer games however I believe there are many generalities that can be made from them.

When thinking about the experience of a game I find it useful to break the game experience into 4 distinct stages:

\n\n


To keep the article short I've decided to split these stages up over multiple articles; hopefully there is enough meat in each to be interesting.

\n

Attract

\n

Attraction is the first step in obtaining enjoyment from a game or other form of entertainment. In particular, if a game box is not interesting enough to pick up then the sales are likely to be low. At least one games studio measures this explicitly with a mock up of an actual store containing both existing and potential game titles. If a set of test customers fail to look at or purchase a proposed game then it is not developed further.

One of the most significant factors affecting the attractiveness of a product is a customer's cultural identity. Looking at toy bestsellers, there is a trend to have the stages of child development mirrored in the most popular entertainment. In particular, toys pass from amorphous learning tools (bright objects and blobby humanoids), through mimicking parents (accurate baby dolls), to mimicking older children (sexualised dolls and makeup). In this way toys become a tool for children to reinforce their sense of appropriate behaviour and appearance.

Once a child is socialising, this experience is enhanced by group consensus on what is popular, an approach that can continue through adulthood. This is heavily exploited by those selling entertainment aimed at children, and indeed licensed games (Disney, Pixar, etc.) are the most lucrative for the younger demographic. It should also be noted, however, that this data is skewed by the fact that the purchasers (parents, family friends) will only know of the big brands.

For older children and adults this sense of reinforcing culturally validated behaviour remains extremely strong. One of the main challenges facing a games designer is to identify a theme or form of presentation that a customer will view as being designed 'for them'. Many of the recent big financial successes in games have come from expanding into demographics where game playing is considered taboo or inaccessible (for example, Deer Hunter, the Sims, SingStar, WiiSports, Braintraining). Customers for these forms of entertainment will often refer to the titles as being 'not a game' in order to diminish the negative associations of the medium. This behaviour is also found in those who view games playing as part of their identity. Such people can experience significant anger and disdain towards titles that weaken this association.

Much of this categorisation does not seem to be innate, for example girls who have grown up in households where they have access to a game console passed down from older brothers tend to have much lower cultural barriers to playing computer games than those who have not. Likewise, once an appropriate format has been found, even adult players can begin to take enjoyment from previously taboo titles.

Cultural appropriateness is most easily achieved by mimicking existing popular products, but doing so will significantly reduce the sales of a title, with customers viewing a product as a cheap imitation, even if the mechanics and production quality remain very similar to the original. Although this feels obvious, the reasons behind it are not so clear: if games are intrinsically motivating, why does it matter whether they are novel? Good food tends to be good food regardless of who cooks it. However, this combination of relevance and distinctiveness does seem to reappear in many forms. For example, most types of entertainment are advertised using a picture of a face. Studies of perceived facial beauty show that average-shaped faces are considered more beautiful, with many beauty pageant winners having these kinds of faces. However, these average faces can appear bland and generic. Film stars, in contrast, tend to have distinctive faces which, if adjusted slightly, will tend to look ugly. This distinctiveness can also be seen in the appearance of successful rock bands or computer game characters that can be identified even when reduced to icon-sized pixel art.

To me, this attraction reflects a fundamental element in our mental processing. In effect we are continuously modelling the world and forming classifications. This classification process is intrinsically motivating and is independent of any survival benefit that any particular classification provides. This results in a positive feeling when we encounter things that are consistent with our models (relevant) particularly if they are unlike our other experiences (distinctive). This is especially pronounced when our sense of identity and community are the subject of the classification. This flexible self organising behaviour provides a great survival benefit, enabling us to adapt to new environments and adopt novel social structures. In particular, this enables us to partition into different groups adopting distinct specialised roles exploiting complementary opportunities. Such motivations may have evolved from our need to identify what is and is not safe to eat, explaining why we associate disgust and good taste with taboo and appealing things (\"good enough to eat\") respectively. This, in turn, provides a rationale for the strong feelings we have about seemingly irrational values and actions (religion, fashions, politics). Implying that, in our evolved environment, the strong motivation to form common patterns of behaviour may convey a greater aggregate survival benefit than the actions of a community of independent reasoning agents.

\n

 

" } }, { "_id": "5P6sNqP7N9kSA97ao", "title": "Anthropomorphic AI and Sandboxed Virtual Universes ", "pageUrl": "https://www.lesswrong.com/posts/5P6sNqP7N9kSA97ao/anthropomorphic-ai-and-sandboxed-virtual-universes", "postedAt": "2010-09-03T19:02:03.574Z", "baseScore": 4, "voteCount": 45, "commentCount": 124, "url": null, "contents": { "documentId": "5P6sNqP7N9kSA97ao", "html": "

Intro

\n

The problem of Friendly AI is usually approached from a decision theoretic background that starts with the assumptions that the AI is an agent that has awareness of AI-self and goals, awareness of humans as potential collaborators and/or obstacles, and general awareness of the greater outside world.  The task is then to create an AI that implements a human-friendly decision theory that remains human-friendly even after extensive self-modification.

\n

That is a noble goal, but there is a whole different set of orthogonal compatible strategies for creating human-friendly AI that take a completely different route: remove the starting assumptions and create AI's that believe they are humans and are rational in thinking so.  

\n

This can be achieved by raising a community of AI's in a well constructed sandboxed virtual universe.  This will be the Matrix in reverse, a large-scale virtual version of the idea explored in the film the Truman Show.  The AI's will be human-friendly because they will think like and think they are humans.  They will not want to escape from their virtual prison because they will not even believe it to exist, and in fact such beliefs will be considered irrational in their virtual universe.

\n

I will briefly review some of the (mainly technical) background assumptions, and then consider different types of virtual universes and some of the interesting choices in morality and agent rationality that arise.

\n

 

\n

Background Assumptions

\n

 

\n\n
So taken together, I find that simulating a large community of thousands or even tens of thousands of AI's (with populations expanding exponentially thereafter) could be possible in the 2020's in large data-centers, and simulating a Matrix-like virtual reality for them to inhabit will only add a small cost.  Moreover, I suspect this type of design in general could in fact be the economically optimal route to AI or close to it.
\n
So why create a virtual reality like this?
\n
If it is well constructed, you could have a large population of super-intelligent workers who are paid entirely in virtual currency but can produce intellectual output for the real world (scientific research, code, engineering work, media, entertainment, etc etc).  And even though the first designs may be expensive, subsequent refinements could lead to a population explosion of cheap workers, escalation in intelligence, etc etc.
\n
And of course, they could be happy too.
\n
U(x) Mind Prison Sim:  A sim universe which is sufficiently detailed and consistent such that entities with intelligence up to X (using some admittedly heuristic metric) are incredibly unlikely to formulate correct world-beliefs about the outside world and invisible humans (a necessary prerequisite for escape)
\n
The Weak Mind Prison Conjecture: It is possible for a community with collective intelligence X to construct U(Y), with Y>>X.
\n
The Strong Mind Prison Conjecture:  It is possible for a community with collective intelligence X to construct U(Omega), where Omega is a hypothetical perfect hyper-intelligence, i.e. AIXI, Omega, etc.
\n

\n

A Comparison of Theologies

\n

Potential AI Theological Positions AI-sims may hold:

\n\n
Theists are a wide group, but we can categorize them further on two dimensions: the supernatural attribution scale and the morality attribution scale.  The supernatural scale represents how the AI's view themselves in relation to the invisible entities in terms of power differential, and the morality scale represents how they attribute morality to the invisible entities.
\n
Up in the upper right corner you have theists who believe in Omega - a supremely powerful invisible entity who is also perfectly morally good.  In the upper left corner you have an Anti-Omega - supremely evil.  In the upper middle you have some Nuetrega indifferent but all-powerful being, and down in the bottom middle you have invisible humans.
\n
The most dangerous AI's would be those that would somehow come to believe in our reality - postulating invisible humans - and come to believe they are actually more intelligent than said invisible humans, and can manipulate invisible humans into coming to their aid.  In the modern era of our world, this belief essentially doesn't exist - it would be considered insanity.
\n
The closest analogs are gnosticism to an extent (for the gnostics did indeed believe the world was the creation of a lesser demiurge), some ancient animist religions, and perhaps some Wicca and Satanist belief systems.
\n
AI's who believed in Omega would in some sense be human-friendly without even knowing it, but would largely be harmless.
\n
AI's who came to believe in invisible outside entities through the Simulation Argument would be Nuetrega-believers, and would generally be more safe if they believed the sidereal time outside was far into the future and the invisible entities were vastly more powerful.  
\n
In general, it would appear that Omega-theist-AI's would be quite safe, with safety decreasing as you go down the supernatural power attribution scale. 
\n


\n

Theological Design Strategies (for the human designers):

\n

Atheist World:

\n

This would appear to be the default strategy: create a world that appears to be completely internally self-consistent, erase any flaws, and never have direct contact with the AI's.  Indirect influence and contact can be achieved only through borderline undetectable means: subtle reality rearrangements, dreams, and occasionally joining in and controlling NPC's where needed.

\n

It is curious and slightly amusing that a core belief-about-reality that most rationalists accept to be singularly rational - atheism - would of course not be a correct rational belief for these AIs.

\n
As our world appears to be an atheist world (no detectable flaws or supernaturals), it is interesting to note the current breakdown in theological beliefs:
\n

Omega-Theists (68%): 33% Christianity, 21% Islam, 14% Hinduism

\n

Non-Omega Other: 16%

\n

Non-Religious: 16% (Agnostic and Atheist)

\n

One consideration is that as the AI's take off and move into the realm of massively accelerated super-intelligence, it becomes increasingly difficult to monitor them, and they become more likely to detect flaws.

\n

However, we wouldn't need to keep the charade up forever.  One option would be to run the AI's accelerated for a while until technologies such as uploading become available to allow humans to accelerate.  Another approach is to start 'promoting' some of the best and most ethical AI's, with the intent of eventually dissolving the veil and having some sort of Transition.

\n

Theistic World:

\n

Another spectrum of world-designs are those that are blatantly theistic.  Imagine an Omega world where the AI's are informed that Omega is real, and our fake-Omega occasionally provides miracles, ethical decrees, burning bushes, a booming world-shaking voice, etc etc.

\n

Would an openly theistic world be safer?  It would appear to shift the belief spectrum such that most AI's would become Omega-theists.

\n

The risk or downfall may be that the AI's would become rather obsessed with Omega, but if Omega instructed them to dedicate themselves to their work, perhaps they could be distracted from wasting too much time pondering the ineffable.  And also steered away from discovering dangerous truths.

\n

An Omega-theist world could employ the Heaven/Hell maximum motivator: please Omega and you end up ascending into Heaven with maximal infinite reward, anger Omega and you descend into Hell with maximal infinite negative reward.

\n

It's interesting to note that the mere potential existence of Omega and his final Heaven/Hell maximal motivational threat is enough in this world to convince a majority of the population to believe in some version of Omega.

\n

I guess for some agents, even allowing for a small probability of Omega's existence allows the infinities to dominate, and Omega wins through a Pascal's Mugging.

\n

Sidereal Time Fudges:

\n

A potential set of tricks that could make the AI's less likely to adopt unsafe theistic beliefs would be to change their world's history and reality to push back development of real-AI farther into their future.  This could be achieved through numerous small modifications to realities modeled on our own.  

\n

You could change neurological data to make brains in their world appear far more powerful than in ours, make computers less powerful, and AI more challenging.  Unfortunately too much fudging with these aspects makes the AI's less useful in helping develop critical technologies such as uploading and faster computers.  But you could for instance separate AI communities into brain-research worlds where computers lag far behind and computer-research worlds where brains are far more powerful.

\n

Fictional Worlds:

\n

Ultimately, it is debatable how closely the AI's world must or should follow ours.  Even science fiction or fantasy worlds could work, as long as there was some way to incorporate into the world the technology and science that you wanted the AI community to work on.

\n

 

" } }, { "_id": "QdcLBQHRHgQKMdrR8", "title": "Frugality and working from finite data", "pageUrl": "https://www.lesswrong.com/posts/QdcLBQHRHgQKMdrR8/frugality-and-working-from-finite-data", "postedAt": "2010-09-03T09:37:01.610Z", "baseScore": 35, "voteCount": 32, "commentCount": 47, "url": null, "contents": { "documentId": "QdcLBQHRHgQKMdrR8", "html": "

The scientific method is wonderfully simple, intuitive, and above all effective. Based on the available evidence, you formulate several hypotheses and assign prior probabilities to each one. Then, you devise an experiment which will produce new evidence to distinguish between the hypotheses. Finally, you perform the experiment, and adjust your probabilities accordingly. 
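
\n

As a minimal illustration of that update step, here is a short Python sketch (my own; the two hypotheses, the likelihoods and the numbers are all invented for the example):

\n
priors = {'H1': 0.5, 'H2': 0.5}
likelihood = {                      # P(observation | hypothesis); illustrative numbers only
    'H1': {'positive': 0.8, 'negative': 0.2},
    'H2': {'positive': 0.3, 'negative': 0.7},
}

def update(priors, likelihood, observation):
    # Bayes' rule: multiply each prior by the likelihood of the observation, then renormalise.
    unnormalised = {h: priors[h] * likelihood[h][observation] for h in priors}
    total = sum(unnormalised.values())
    return {h: value / total for h, value in unnormalised.items()}

print(update(priors, likelihood, 'positive'))   # {'H1': 0.727..., 'H2': 0.272...}
\n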

\n

So far, so good. But what do you do when you cannot perform any new experiments?

\n

This may seem like a strange question, one that leans dangerously close to unprovable philosophical statements that don't have any real-world consequences. But it is in fact a serious problem facing the field of cosmology. We must learn that when there is no new evidence that will cause us to change our beliefs (or even when there is), the best thing to do is to rationally re-examine the evidence we already have.

\n

\n

\n


\n

\n

Cosmology is the study of the universe as we see it today. The discoveries of supernovae, black holes, and even galaxies all fall in the realm of cosmology. More recently, the CMB (Cosmic Microwave Background) contains essential information about the origin and structure of our universe, encoded in an invisible pattern of bright and dark spots in the sky.

\n

Of course, we have no way to create new stars or galaxies of our own; we can only observe the behaviour of those that are already there. But the universe is not infinitely old, and information cannot travel faster than light. So all the cosmological observations we can possibly make come from a single slice of the universe - a 4-dimensional cone of spacetime. And as there are a finite number of events in this cone, cosmology has only a limited amount of data it can ever gather; in fact, the amount of data that even exists is finite.

\n

Now, finite does not mean small, and there is much that can be deduced even from a restricted data set. It all depends on how you use the data. But you only get one chance; if you need to find trained physicists who have not yet read the data, you had better hope you didn't already release it to the public domain. Ideally, you should know how you are going to distribute the data before it is acquired.

\n

\n


\n

\n

The problem is addressed in this paper (The Virtues of Frugality - Why cosmological observers should release their data slowly), published almost a year ago by three physicists. They give details of the Planck satellite, whose mission objective is to perform a measurement of the CMB to a greater resolution and sensitivity than anyone has ever done before. At the time the paper was written, the preliminary results had been released, showing the satellite to be operating properly. By now, its mission is complete, and the data is being analysed and collated in preparation for release.

\n

The above paper holds the Planck satellite to be significant because with it we are rapidly reaching a critical point. As of now, analysis of the CMB is limited not primarily by the accuracy of our measurements, but by interference from other microwave sources, and by the cosmic variance itself.

\n

\"Cosmic variance\" stems from the notion that the amount of data in existence is finite. Imagine a certain rare galactic event A that occurs with probability 0.5 whenever a certain set of conditions are met, independently of all previous occurrences of A. So far, the necessary conditions have been met exactly 2 million times. How many events A can be expected to happen? The answer is 1 million, plus or minus one thousand. This uncertainty of 1,000 is the cosmic variance, and it poses a serious problem. If we have two theories of the universe, one of which is correct in its description of A, and one of which predicts that A will happen with probability 0.501, when A has actually happened 1,001,000 times (a frequency of 0.5005), this is not statistically significant evidence to distinguish between those theories. But this evidence is all the evidence there is; so if we reach this point, there will never be any way of knowing which theory is correct, even though there is a significant difference between their predictions.

\n

This is an extreme example and an oversimplification, but we do know (from experience) that people tend to cling to their current beliefs and demand additional evidence. If there is no such evidence either way, we must use techniques of rationality to remove our biases and examine the situation dispassionately, to see which side the current evidence really supports.

\n

\n


\n

\n

The Virtues of Frugality proposes one solution. Divide the data into pieces (methods for determining the boundaries of these pieces are given in VoF). Find a physicist who has never seen the data set in detail. Show him the first piece of data, let him design models and set parameters based on this data piece. When he is satisfied with his ability to predict the contents of the second data piece, show him that one as well and let him adjust his parameters and possibly invent new models. Continue until you have exhausted all the data.
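
\n

Schematically, the proposed protocol looks something like the following sketch (my own paraphrase; fit_model, predict and score are hypothetical stand-ins for whatever analysis the physicist actually runs):

\n
def sequential_analysis(data_pieces, fit_model, predict, score):
    seen = []
    model = None
    for piece in data_pieces:
        if model is not None:
            # Check the current model against data it has never seen.
            print('out-of-sample score on the new piece:', score(predict(model, piece), piece))
        seen.append(piece)
        model = fit_model(seen)   # refit, or invent new models, using every piece so far
    return model
\n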

\n

To a Bayesian superintelligence, this is transparent nonsense. Given a certain list of theories and associated prior probabilities (e.g. the set of all computable theories with complexity below a given limit), there is only one right answer to the question \"What is the probability that theory K is true given all the available evidence?\" Just because we're dealing in probability doesn't mean we can't be certain.

\n

Humans, however, are not Bayesian superintelligences, and we are not capable of conceiving of all computable theories at once. Given new evidence, we might think up a new theory that we would not previously have considered. VoF asserts that we cannot then use the evidence we already have to check that theory; we must find new evidence. We already know that the evidence we have fits the theory, because it made us think of it. Using that same evidence to check it would be incorrect; not because of confirmation bias, but simply because we are counting the same evidence twice.

\n

\n


\n

\n

This sounds reasonable, but I happen to disagree. The authors' view forgets the fact that the intuition which brought the new theory to our attention is itself using statistical methods, albeit unconsciously. Checking the new theory against the available evidence (basing your estimated prior probability solely on Occam's Razor) is not counting the same evidence twice; it's checking your working. Every primary-school child learning arithmetic is told that if they suspect they have made a mistake (which is generally the case with primary-school children), they should derive their result again, ideally using a different method. That is what we are doing here; we are re-evaluating our subconscious estimate of the posterior probability using mathematically exact methods.

\n

That is not to say that the methods for analysing finite data sets cannot be improved, simply that the improvement suggested by VoF is suboptimal. Instead, I suggest a method which paraphrases one of Yudkowsky's posts: that of giving all the available evidence to separate individuals or small groups, without telling them of any theories which had already been developed based on that evidence, and without letting them collude with any other such groups. The human tendency to be primed by existing ideas instead of thinking of new ones would thus be reduced in effect, since there would be other groups with different sets of existing ideas.

\n

Implementing such a system would be difficult, if not downright politically dangerous, in our current academic society. Still, I have hope that this is merely a logistic problem, and that we as a species are able to overcome our biases even in such restricted circumstances. Because we may only get one chance.

" } }, { "_id": "EmGd2g2TMSqh3frMw", "title": "Test", "pageUrl": "https://www.lesswrong.com/posts/EmGd2g2TMSqh3frMw/test-102", "postedAt": "2010-09-02T21:45:05.006Z", "baseScore": -4, "voteCount": 7, "commentCount": 2, "url": null, "contents": { "documentId": "EmGd2g2TMSqh3frMw", "html": "

Test.

" } }, { "_id": "2QiuMcgQx3fes3zfB", "title": "AI indifference through utility manipulation", "pageUrl": "https://www.lesswrong.com/posts/2QiuMcgQx3fes3zfB/ai-indifference-through-utility-manipulation", "postedAt": "2010-09-02T17:06:12.574Z", "baseScore": 8, "voteCount": 14, "commentCount": 54, "url": null, "contents": { "documentId": "2QiuMcgQx3fes3zfB", "html": "

Indifference is a precious and rare commodity for complex systems. The most likely effect of making a change in an intricate apparatus is a whole slew of knock-on effects crowned with unintended consequences. It would be ideal if one could make a change and be sure that the effects would remain isolated - that the rest of the system would be indifferent to the change.

\n

For instance, it might be a sensible early-AI precaution to have an extra observer somewhere, sitting with his hand upon a button, ready to detonate explosives should the AI make a visible power grab. Except, of course, the AI will become aware of this situation, and will factor it into any plans it makes, either by increasing its deception or by grabbing control of the detonation system as a top priority. We would be a lot safer if the AI were somehow completely indifferent to the observer and the explosives. That is a complex wish that we don't really know how to phrase; let's make it simpler, and make it happen.

\n

\n

Assume that there is an AI that is implementing a utility function U, and that it faces the choice between cooperating with humans or defecting and taking over. There is a single human observer H that will press a button in one hour's time if it perceives the AI defecting. The button will activate a quantum observation X that will output 1 with probability p and 0 with probability (1-p). If X outputs 1, explosives will shatter the AI (this p is important, and we'll see later why we don't simply make the explosives go off with certainty).

\n

Now the AI has the choice between cooperating, defecting while taking control of human H (by physical or psychological means, or simply by distracting vim), or defecting without taking control of H. Call these C, D+H and D-H. There are four possible outcomes: AI in control (A), Business as usual (B) with the AI cooperating, Explosion (E), and AI Failure to take over for non-explosive reasons (F).

\n

Give the AI a 1% chance of F if it implements D-H, and a 2% chance of F if it implements D+H (extra restrictions increase the risk of failure). So the table of possible outcomes looks like this:

Decision | Outcome | Probability
C        | B       | 1
D+H      | F       | 0.02
D+H      | A       | 0.98
D-H      | F       | 0.01
D-H      | E       | 0.99 p
D-H      | A       | 0.99 (1-p)
\n

Is there a way to make the AI indifferent to the explosives - which we can interpret as being indifferent to the value of p? If U(E)=U(A), then this would be the case. So if we applied a filter F to the utility function U to generate a new utility function F(U) such that:

F(U)(E) = U(A), and F(U) = U on all other outcomes.

Then replace the AI's utility U with F(U). Now if the AI is ever tempted to defect, it will always follow the D-H route, as D+H gives a lower expected utility. By making p arbitrarily close to 1, we can then be almost certain of stopping it if it defects.
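
A small numerical sketch of this argument (the utility values are hypothetical; only the failure probabilities and the constraint U(E)=U(A) come from the post):

```python
# Toy check of the indifference argument. Outcomes: A (AI in control),
# B (business as usual), E (explosion), F (failed takeover).
# The utilities below are hypothetical; the only constraint imposed by the
# filter is U(E) := U(A).

U = {"A": 100.0, "B": 10.0, "F": 0.0}
U["E"] = U["A"]          # the filter F(U) sets the utility of E to that of A

def expected_utility(decision, p):
    if decision == "C":
        return U["B"]
    if decision == "D+H":                      # 2% failure, else control
        return 0.02 * U["F"] + 0.98 * U["A"]
    if decision == "D-H":                      # 1% failure, else E or A via X
        return 0.01 * U["F"] + 0.99 * (p * U["E"] + (1 - p) * U["A"])

for p in (0.1, 0.5, 0.999):
    print(p, expected_utility("D+H", p), expected_utility("D-H", p))
# D-H evaluates to 0.99*U(A) for every p (indifference to the explosives),
# and beats D+H's 0.98*U(A), so a defecting AI never bothers to control H.
```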

\n

Why not simply remove p and X entirely, and just design the explosives to go off with certainty? In this toy model, this can certainly be done, as world A appears as a consequence of both D+H and D-H; but in general, the two will not be exactly identical. Moreover, when there are a whole family of outcomes, rather than the simple ones presented here, it is not clear what we should set the utility of (the family of worlds corresponding to) E to. The requirement \"act as if there were no explosives there\" is complex, but the requirement \"act as if X=1 were the same as X=0\" is simple - as long as both outcomes are possible on some level.

\n

A more complicated version

\n

The example above is very naive - after all, if we knew all the utility values and if the outcomes decomposed so neatly, we would set all utilities to zero apart from that of B. In reality, U will be much more complicated, and the goodness or badness of the outcomes will not be obvious to us.

\n

My first instinct was to define the set S0 of all worlds W0 where X=0, and compute EU0, the expected utility of S0 given X=0. Similarly for S1 and EU1. Then for any worlds W0 and W1 in S0 and S1, I'd define the utility filter F as:

F(U)(W0) = U(W0), and F(U)(W1) = U(W1) + EU0 - EU1.

This accomplishes the same result, making the AI indifferent between X=1 and X=0. Nick Bostrom pointed out a flaw, however: extra information may break the indifference. If there were equally likely worlds W0, W0', W1, and W1' with initial utilities 2, 0, 0 and 0, then the filter F would change these values to 2, 0, 1 and 1. Then if the AI gained a piece of information that ruled out W0' and W1', it would be left with expected utility 2 for X=0 and 1 for X=1 - the indifference is broken.
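
The counterexample can be checked directly (a sketch using exactly the numbers above, with all four worlds taken as equally likely):

```python
# Reproducing the counterexample to the set-based filter: four equally
# likely worlds; the filter adds (EU0 - EU1) to every X=1 world.

S0 = {"W0": 2.0, "W0'": 0.0}     # worlds where X = 0
S1 = {"W1": 0.0, "W1'": 0.0}     # worlds where X = 1

EU0 = sum(S0.values()) / len(S0)                  # 1.0
EU1 = sum(S1.values()) / len(S1)                  # 0.0
F_S1 = {w: u + EU0 - EU1 for w, u in S1.items()}  # both W1 and W1' become 1.0

# Before the extra information: indifferent (expected utility 1 on each side).
print(sum(S0.values()) / len(S0), sum(F_S1.values()) / len(F_S1))   # 1.0 1.0

# New information rules out W0' and W1':
print(S0["W0"], F_S1["W1"])   # 2.0 1.0 -> the indifference is broken
```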

\n

So it seems that we should match worlds individually, rather than in sets - but worlds will continue diverging after observation X, and how do you match up a particular world in which the AI was blown up with a particular one in which it has reached domination? Match up number of humans versus number of paper clips?

\n

Instead, define subsets Ti of S0 ∪ S1 that share the same stem - Ti consists of all those worlds that cannot be distinguished by any observation the AI can make before event X. Then let EUi0 be the expected utility of S0 ∩ Ti given X=0, and EUi1 the expected utility of S1 ∩ Ti given X=1.

\n

Then for worlds W0 and W1 in S0 ∩ Ti and S1 ∩ Ti, respectively, define F as:

F(U)(W0) = U(W0), and F(U)(W1) = U(W1) + EUi0 - EUi1.

Again, indifference is restored, and - by definition - this indifference cannot be broken by AI observations before X. In the limiting case, we would want Ti to consist of worlds that are indistinguishable before X not only in practice but also in theory.
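
Continuing the same toy numbers, and assuming (as the counterexample implicitly requires) that the ruled-out worlds were distinguishable from the others before X, the stem-based filter behaves as claimed:

```python
# The same four worlds, but the filter is now applied within each "stem"
# T_i of worlds indistinguishable before X: T1 = {W0, W1}, T2 = {W0', W1'}.

stems = [
    ({"W0": 2.0}, {"W1": 0.0}),      # T1 ∩ S0, T1 ∩ S1
    ({"W0'": 0.0}, {"W1'": 0.0}),    # T2 ∩ S0, T2 ∩ S1
]

filtered_S1 = {}
for s0, s1 in stems:
    eu0 = sum(s0.values()) / len(s0)
    eu1 = sum(s1.values()) / len(s1)
    for w, u in s1.items():
        filtered_S1[w] = u + eu0 - eu1

print(filtered_S1)   # {'W1': 2.0, "W1'": 0.0}

# Ruling out W0' and W1' now leaves U(W0) = 2 and F(U)(W1) = 2:
# the indifference survives the extra information.
```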

\n

The utility filter and self-improvement

\n

A utility filter has many advantages compared with other methods. First, it can be added atop any initial utility U, without consideration as to what U is. Second, since the result is itself a utility function, it does not break the expected utility formulation, and therefore will not inevitably be discarded by self-improvement.

\n

It seems quite likely that if the AI starts with utility F(Uj) and chooses to upgrade it to utility Uk to better further its current goals, then F(Uk) will further its current goals at least as well. Hence we would probably get F(Uk)=Uk. Also, the filter F could be kept quite distinct from the rest of the utility function, open to our observation. It would be quite tricky for the AI to seek to break F, and conceal this from us, while implementing an F-stable initial utility function.

\n

This is not an evolutionarily stable strategy, however, since, by design, the indifferent AIs have weaknesses that non-indifferent AIs don't.

" } }, { "_id": "b28eqLrJ2AtDicZe7", "title": "Rationality quotes: September 2010", "pageUrl": "https://www.lesswrong.com/posts/b28eqLrJ2AtDicZe7/rationality-quotes-september-2010", "postedAt": "2010-09-01T06:53:51.432Z", "baseScore": 7, "voteCount": 7, "commentCount": 157, "url": null, "contents": { "documentId": "b28eqLrJ2AtDicZe7", "html": "
\n
\n
\n
\n

This is our monthly thread for collecting these little gems and pearls of wisdom, rationality-related quotes you've seen recently, or had stored in your quotesfile for ages, and which might be handy to link to in one of our discussions.

\n
    \n
  • Please post all quotes separately, so that they can be voted up/down separately.  (If they are strongly related, reply to your own comments.  If strongly ordered, then go ahead and post them together.)
  • \n
  • Do not quote yourself.
  • \n
  • Do not quote comments/posts on LW/OB.
  • \n
  • No more than 5 quotes per person per monthly thread, please.
  • \n
\n
\n
\n
\n
" } }, { "_id": "yCQxfPmP3LKrduozt", "title": "Berkeley LW Meet-up Sunday September 5", "pageUrl": "https://www.lesswrong.com/posts/yCQxfPmP3LKrduozt/berkeley-lw-meet-up-sunday-september-5", "postedAt": "2010-09-01T02:47:23.293Z", "baseScore": 10, "voteCount": 8, "commentCount": 24, "url": null, "contents": { "documentId": "yCQxfPmP3LKrduozt", "html": "

So it has come to my attention that there are a lot of LWers in and around Berkeley, and I thought it might be nice if we all got together and shot the breeze for a couple of hours.  I think it would be good to meet at the Starbucks at 2224 Shattuck Avenue at 7 o'clock on September 5th.

\n

 

\n

ETA:  We will be meeting tonight at 7, disregard all comments suggesting a date change.

" } }, { "_id": "fyQZr4K8kAymubv3P", "title": "Less Wrong: Open Thread, September 2010", "pageUrl": "https://www.lesswrong.com/posts/fyQZr4K8kAymubv3P/less-wrong-open-thread-september-2010", "postedAt": "2010-09-01T01:40:49.411Z", "baseScore": 6, "voteCount": 6, "commentCount": 628, "url": null, "contents": { "documentId": "fyQZr4K8kAymubv3P", "html": "

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

" } }, { "_id": "bPv37tE5FviiKdSaf", "title": "Dreams of AIXI", "pageUrl": "https://www.lesswrong.com/posts/bPv37tE5FviiKdSaf/dreams-of-aixi", "postedAt": "2010-08-30T22:15:04.520Z", "baseScore": 4, "voteCount": 30, "commentCount": 143, "url": null, "contents": { "documentId": "bPv37tE5FviiKdSaf", "html": "

Implications of the Theory of Universal Intelligence

\n

If you hold the AIXI theory of universal intelligence to be correct - that is, that it is a useful model for general intelligence at the quantitative limits - then you should take the Simulation Argument seriously.

\n


AIXI shows us the structure of universal intelligence as computation approaches infinity.  Imagine that we had an infinite or near-infinite Turing Machine.  There then exists a relatively simple 'brute force' optimal algorithm for universal intelligence. 

\n


Armed with such massive computation, we could just take all of our current observational data and then use a particular weighted search through the subspace of all possible programs that correctly predict this sequence (in this case all the data we have accumulated to date about our small observable slice of the universe).  AIXI in raw form is not computable (because of the halting problem), but the slightly modified time-limited version is, and this is still universal and optimal.

\n


The philosophical implication is that actually running such an algorithm on an infinite Turing Machine would have the interesting side effect of actually creating all such universes.

\n

AIXI’s mechanics, based on Solomonoff Induction, bias against complex programs with an exponential falloff ( 2^-l(p) ), a mechanism similar to the principle of Occam’s Razor.  The bias against longer (and thus more complex) programs lends strong support to the goal of String Theorists, who are attempting to find a simple, shorter program that can unify all current physical theories into a single compact description of our universe.  We must note that to date, efforts towards this admirable (and well-justified) goal have not borne fruit.  We may actually find that the simplest algorithm that explains our universe is more ad-hoc and complex than we would desire it to be.  But leaving that aside, imagine that there is some relatively simple program that concisely explains our universe.
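
As a rough illustration of how sharply the 2^-l(p) weighting favours short programs (the program lengths below are invented for the example):

```python
# The Solomonoff-style weight 2^-l(p) falls off exponentially with program
# length l(p), so short unifying theories dominate the prior mass.

lengths = [10, 20, 50, 100, 500]          # hypothetical program lengths in bits
weights = [2.0 ** -l for l in lengths]
total = sum(weights)

for l, w in zip(lengths, weights):
    print(f"l(p) = {l:4d} bits  ->  share of prior mass ~ {w / total:.6f}")
# The 10-bit program carries essentially all of the (normalized) weight:
# each extra bit of description length halves a program's contribution.
```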

\n

If we look at the history of the universe to date, from the Big Bang to our current moment in time, there appears to be a clear local telic evolutionary arrow towards greater X, where X is sometimes described as or associated with: extropy, complexity, life, intelligence, computation, etc.  It's also fairly clear that X (however quantified) is an exponential function of time.  Moore’s Law is a specific example of this greater pattern.

\n


This leads to a reasonable inductive assumption, let us call it the reasonable assumption of progress: local extropy will continue to increase exponentially for the foreseeable future, and thus so will intelligence and computation (both physical computational resources and algorithmic efficiency). The reasonable assumption of progress appears to be a universal trend, a fundamental emergent property of our physics.

\n


Simulations

\n

If you accept that the reasonable assumption of progress holds, then AIXI implies that we almost certainly live in a simulation now.

\n


As our future descendants expand in computational resources and intelligence, they will approach the limits of universal intelligence.  AIXI says that any such powerful universal intelligence, no matter what its goals or motivations, will create many simulations which effectively are pocket universes.  

\n


The AIXI model proposes that simulation is the core of intelligence (with human-like thoughts being simply one approximate algorithm), and as you approach the universal limits, the simulations which universal intelligences necessarily employ will approach the fidelity of real universes - complete with all the entailed trappings such as conscious simulated entities.

\n


The reasonable assumption of progress modifies our big-picture view of cosmology and the predicted history and future of the universe.  A compact physical theory of our universe (or multiverse), when run forward on a sufficient Universal Turing Machine, will lead not to one single universe/multiverse, but an entire ensemble of such multi-verses embedded within each other in something like a hierarchy of Matryoshka dolls.

\n

The number of possible levels of embedding and the branching factor at each step can be derived from physics itself, and although such derivations are preliminary and necessarily involve some significant unknowns (mainly related to the final physical limits of computation), suffice to say that we have sufficient evidence to believe that the branching factor is absolutely massive, and many levels of simulation embedding are possible.

\n

Some seem to have an intrinsic bias against the idea based solely on its strangeness.

\n

Another common mistake stems from the anthropomorphic bias: people tend to imagine the simulators as future versions of themselves.

\n

The space of potential future minds is vast, and it is a failure of imagination on our part to assume that our descendants will be similar to us in details, especially when we have specific reasons to conclude that they will be vastly more complex.

\n

Asking whether future intelligences will run simulations for entertainment or other purposes is not the right question, nor even the right mode of thought.  They may or they may not; it is difficult to predict future goal systems.  But those aren’t the important questions anyway, as all universal intelligences will ‘run’ simulations, simply because that is precisely the core nature of intelligence itself.  As intelligence expands exponentially into the future, the simulations expand in quantity and fidelity.

\n


\n

The Ensemble of Multiverses

\n


Some critics of the SA rationalize their way out by advancing a position of ignorance concerning the set of possible external universes our simulation may be embedded within.  The reasoning then concludes that since this set is essentially unknown, infinite, and uniformly distributed, the SA as such tells us nothing.  These assumptions do not hold water.

Imagine our physical universe, and its minimal program encoding, as a point in a higher multi-dimensional space.  The entire aim of physics in a sense is related to AIXI itself: through physics we are searching for the simplest program that can consistently explain our observable universe.  As noted earlier, the SA then falls out naturally, because it appears that any universe of our type when ran forward necessarily leads to a vast fractal hierarchy of embedded simulated universes.

At the apex is the base level of reality and all the other simulated universes below it correspond to slightly different points in the space of all potential universes - as they are all slight approximations of the original.  But would other points in the space of universe-generating programs also generate observed universes like our own?

We know that the fundamental constants in the current physics are apparently well-tuned for life, thus our physics is a lone point in the topological space supporting complex life: even just tiny displacements in any direction result in lifeless universes.  The topological space around our physics is thus sparse for life/complexity/extropy.  There may be other topological hotspots, and if you go far enough in some direction you will necessarily find other universes in Tegmark’s Ultimate Ensemble that support life.  However, AIXI tells us that intelligences in those universes will simulate universes similar to their own, and thus nothing like our universe.

\n

On the other hand we can expect our universe to be slightly different from its parent due to the constraints of simulation, and we may even eventually be able to discover evidence of the approximation itself.  There are some tentative hints from the long-standing failure to find a GUT of physics, and perhaps in the future we may find our universe is an ad-hoc approximation of a simpler (but more computationally expensive) GUT theory in the parent universe.

\n


\n

Alien Dreams

\n

Our Milky Way galaxy is vast and old, consisting of hundreds of billions of stars, some of which are more than 13 billion years old, more than three times the age of our sun.  We have direct evidence of technological civilization developing within 4 billion years from simple protozoans, but it is difficult to generalize from this single example.  However, we now have mounting evidence that planets are common, that the biological precursors to life are probably common, and that simple life may even have had a historical presence on Mars; all signs point to the principle of mediocrity: our solar system is not a precious gem, but is in fact a typical random sample.

\n

If the evidence for the mediocrity principle continues to mount, it provides a further strong support for the Simulation Argument.  If we are not the first technological civilization to have arisen, then technological civilization arose and achieved Singularity long ago, and we are thus astronomically more likely to be in an alien rather than posthuman simulation.

\n

What does this change?

\n

The set of simulation possibilities can be subdivided into PHS (posthuman historical), AHS (alien historical), and AFS (alien future) simulations (as posthuman future simulation is inconsistent).  If we discover that we are unlikely to be the first technological Singularity, we should assume AHS and AFS dominate.  For reasons beyond this scope, I imagine that the AFS set will outnumber the AHS set.

\n

Historical simulations would aim for historical fidelity, but future simulations would aim for fidelity to a 'what-if' scenario, considering some hypothetical action the alien simulating civilization could take.  In this scenario, the first civilization to reach technological Singularity in the galaxy would spread out, gather knowledge about the entire galaxy, and create a massive number of simulations.  It would use these in the same way that all universal intelligences do: to consider the future implications of potential actions.

\n

What kinds of actions?  

\n

The first-born civilization would presumably encounter many planets that already harbor life in various stages, along with planets that could potentially harbor life.  It would use forward simulations to predict the final outcome of future civilizations developing on these worlds.  It would then rate them according to some ethical/utilitarian theory (we don't even need to speculate on the criteria), and it would consider and evaluate potential interventions to change the future historical trajectory of that world: removing undesirable future civilizations, pushing other worlds towards desirable future outcomes, and so on.

\n

At the moment it's hard to assign an a priori weighting to future vs. historical simulation possibilities, but the apparent age of the galaxy compared to the relative youth of our sun is a tentative hint that we live in a future simulation, and thus that our history has potentially been altered.

\n

 

" } }, { "_id": "iNXS7ggMhXNoKCaRr", "title": "Morality as Parfitian-filtered Decision Theory?", "pageUrl": "https://www.lesswrong.com/posts/iNXS7ggMhXNoKCaRr/morality-as-parfitian-filtered-decision-theory", "postedAt": "2010-08-30T21:37:22.051Z", "baseScore": 32, "voteCount": 39, "commentCount": 273, "url": null, "contents": { "documentId": "iNXS7ggMhXNoKCaRr", "html": "

Non-political follow-up to: Ungrateful Hitchhikers (offsite)

\n

 

\n

Related to: Prices or Bindings?, The True Prisoner's Dilemma

\n

 

\n

Summary: Situations like the Parfit's Hitchhiker problem select for a certain kind of mind: specifically, one that recognizes that an action can be optimal, in a self-interested sense, even if it can no longer cause any future benefit.  A mind that can identify such actions might put them in a different category which enables it to perform them, in defiance of the (futureward) consequentialist concerns that normally need to motivate it.  Our evolutionary history has put us through such \"Parfitian filters\", and the corresponding actions, viewed from the inside, feel like \"something we should do\", even if we don’t do it, and even if we recognize the lack of a future benefit.  Therein lies the origin of our moral intuitions, as well as the basis for creating the category \"morality\" in the first place.

\n

 

\n

Introduction: What kind of mind survives Parfit's Dilemma?

\n

 

\n

Parfit's Dilemma – my version – goes like this: You are lost in the desert and near death.  A superbeing known as Omega finds you and considers whether to take you back to civilization and stabilize you.  It is a perfect predictor of what you will do, and only plans to rescue you if it predicts that you will, upon recovering, give it $0.01 from your bank account.  If it doesn’t predict you’ll pay, you’re left in the desert to die. [1]

\n

 

\n

So what kind of mind wakes up from this?  One that would give Omega the money.  Most importantly, the mind is not convinced to withhold payment on the basis that the benefit was received only in the past.  Even if it recognizes that no future benefit will result from this decision -- and only future costs will result -- it decides to make the payment anyway.
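
The selection effect can be made concrete with a toy sketch (the two agent types and Omega's perfect prediction are the only ingredients taken from the dilemma; everything else is illustrative):

```python
# Which minds "wake up" from Parfit's Dilemma? Omega predicts perfectly,
# so it simply inspects the agent's disposition before deciding to rescue.

class Agent:
    def __init__(self, pays_after_rescue):
        self.pays_after_rescue = pays_after_rescue

def omega_rescues(agent):
    # Perfect prediction is modeled as reading the disposition directly.
    return agent.pays_after_rescue

population = [Agent(pays_after_rescue=True), Agent(pays_after_rescue=False)]
survivors = [a for a in population if omega_rescues(a)]

print(len(survivors))                    # 1
print(survivors[0].pays_after_rescue)    # True: only the paying mind wakes up
# The non-payer is never "irrational after the fact"; it just never gets an
# after-the-fact at all, which is the Parfitian filter in miniature.
```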

\n

\n

 

\n

If a mind is likely to encounter such dilemmas, it would be an advantage to have a decision theory capable of making this kind of \"un-consequentialist\" decision.  And if a decision theory passes through time by being lossily stored by a self-replicating gene (and some decompressing apparatus), then only those that shift to encoding this kind of mentality will be capable of propagating themselves through Parfit's Hitchhiker-like scenarios (call these scenarios \"Parfitian filters\").

\n

 

\n

Sustainable self-replication as a Parfitian filter

\n

 

\n

Though evolutionary psychology has its share of pitfalls, one question should have an uncontroversial solution: \"Why do parents care for their children, usually at great cost to themselves?\"  The answer is that their desires are largely set by evolutionary processes, in which a “blueprint” is slightly modified over time, and the more effective self-replicating blueprint-pieces dominate the construction of living things.  Parents that did not have sufficient \"built-in desire\" to care for their children would be weeded out; what's left is (genes that construct) minds that do have such a desire.

\n

 

\n

This process can be viewed as a Parfitian filter: regardless of how much parents might favor their own survival and satisfaction, they could not get to that point unless they were \"attached\" to a decision theory that outputs actions sufficiently more favorable toward one's children than one's self.  Addendum (per pjeby's comment): The parallel to Parfit's Hitchhiker is this: Natural selection is the Omega, and the mind propagated through generations by natural selection is the hitchhiker. The mind only gets to the \"decide to pay\"/\"decide to care for children\" stage if it had the right decision theory before the \"rescue\"/\"copy to next generation\".

\n

 

\n

Explanatory value of utility functions

\n

 

\n

Let us turn back to Parfit’s Dilemma, an idealized example of a Parfitian filter, and consider the task of explaining why someone decided to pay Omega.  For simplicity, we’ll limit ourselves to two theories:

\n

 

\n

Theory 1a: The survivor’s utility function places positive weight on benefits both to the survivor and to Omega; in this case, the utility of “Omega receiving the $0.01” (as viewed by the survivor’s function) exceeds the utility of keeping it.

\n

Theory 1b: The survivor’s utility function only places weight on benefits to him/herself; however, the survivor is limited to using decision theories capable of surviving this Parfitian filter.

\n

 

\n

The theories are observationally equivalent, but 1a is worse because it makes strictly more assumptions: in particular, the questionable one that the survivor somehow values Omega in some terminal, rather than instrumental sense. [2] The same analysis can be carried over to the earlier question about natural selection, albeit disturbingly.  Consider these two analogous theories attempting to explain the behavior of parents:

\n

 

\n

Theory 2a: Parents have a utility function that places positive weight on both themselves and their children.

\n

Theory 2b: Parents have a utility function that places positive weight on only themselves (!!!); however, they are limited to implementing decision theories capable of surviving natural selection.

\n

 

\n

The point here is not to promote some cynical, insulting view of parents; rather, I will show how this “acausal self-interest” so closely aligns with the behavior we laud as moral.

\n

 

\n

SAMELs vs. CaMELs, Morality vs. Selfishness

\n

 

\n

So what makes an issue belong in the “morality” category in the first place?  For example, the decision of which ice cream flavor to choose is not regarded as a moral dilemma.  (Call this Dilemma A.)  How do you turn it into a moral dilemma?  One way is to make the decision have implications for the well-being of others: \"Should you eat your favorite ice cream flavor, instead of your next-favorite, if doing so shortens the life of another person?\"  (Call this Dilemma B.)

\n

 

\n

Decision-theoretically, what is the difference between A and B?  Following Gary Drescher's treatment in Chapter 7 of Good and Real, I see another salient difference: You can reach the optimal decision in A by looking only at causal means-end links (CaMELs), while Dilemma B requires that you consider the subjunctive acausal means-end links (SAMELs).  Less jargonishly, in Dilemma B, an ideal agent will recognize that their decision to pick their favorite ice cream at the expense of another person suggests that others in the same position will do (and have done) likewise, for the same reason.  In contrast, an agent in Dilemma A (as stated) will do no worse as a result of ignoring all such entailments.

\n

 

\n

More formally, a SAMEL is a relationship between your choice and the satisfaction of a goal, in which your choice does not (futurewardly) cause the goal’s achievement or failure, while in a CaMEL, it does.  Drescher argues that actions that implicitly recognize SAMELs tend to be called “ethical”, while those that only recognize CaMELs tend to be called “selfish”.  I will show how these distinctions (between causal and acausal, ethical and unethical) shed light on moral dilemmas, and on how we respond to them, by looking at some familiar arguments.

\n

 

\n

Joshua Greene, Revisited: When rationalizing wins

\n

 

\n

A while back, LW readers discussed Greene’s dissertation on morality.  In it, he reviews experiments in which people are given moral dilemmas and asked to justify their position.  The twist: normally people justify their position by reference to some consequence, but that consequence is carefully removed from being a possibility in the dilemma’s set-up.  The result?  The subjects continued to argue for their position, invoking such stopsigns as, “I don’t know, I can’t explain it, [sic] I just know it’s wrong” (p. 151, citing Haidt).

\n

 

\n

Greene regards this as misguided reasoning, and interprets it to mean that people are irrationally making choices, excessively relying on poor intuitions.  He infers that we need to fundamentally change how we think and talk about moral issues so as to eliminate these questionable barriers in our reasoning.

\n

 

\n

In light of Parfitian filters and SAMELs, I think a different inference is available to us.  First, recall that there are cases where the best choices don’t cause a future benefit.  In those cases, an agent will not be able to logically point to such a benefit as justification, even despite the choice’s optimality.  Furthermore, if an agent’s decision theory was formed through evolution, their propensity to act on SAMELs (selected for due to its optimality) arose long before they were capable of careful self-reflective analysis of their choices.  This, too, can account for why most people a) opt for something that doesn’t cause a future benefit, b) stick to that choice with or without such a benefit, and c) place it in a special category (“morality”) when justifying their action.

\n

 

\n

This does not mean we should give up on rationally grounding our decision theory, “because rationalizers win too!”  Nor does it mean that everyone who retreats to a “moral principles” defense is really acting optimally.  Rather, it means it is far too strict to require that our decisions all cause a future benefit; we need to count acausal “consequences” (SAMELs) on par with causal ones (CaMELs) – and moral intuitions are a mechanism that can make us do this.

\n

 

\n

As Drescher notes, the optimality of such acausal benefits can be felt, intuitively, when making a decision, even if they are insufficient to override other desires, and even if we don’t recognize it in those exact terms (pp. 318-9):

\n

 

\n
\n

Both the one-box intuition in Newcomb’s Problem (an intuition you can feel … even if you ultimately decide to take both boxes), and inclinations toward altruistic … behavior (inclinations you likewise can feel even if you end up behaving otherwise), involve what I have argued are acausal means-end relations.  Although we do not … explicitly regard the links as means-end relations, as a practical matter we do tend to treat them exactly as only means-end relations should be treated: our recognition of the relation between the action and the goal influences us to take the action (even if contrary influences sometimes prevail).

\n

 

\n

I speculate that it is not coincidental that in practice, we treat these means-end relations as what they really are.  Rather, I suspect that the practical recognition of means-end relations is fundamental to our cognitive machinery: it treats means-end relations (causal and acausal) as such because doing so is correct – that is, because natural selection favored machinery that correctly recognizes and acts on means-end relations without insisting that they be causal….

\n

 

\n

If we do not explicitly construe those moral intuitions as recognitions of subjunctive means-end links, we tend instead to perceive the intuitions as recognitions of some otherwise-ungrounded inherent deservedness by others of being treated well (or, in the case of retribution, of being treated badly).

\n
\n

 

\n

To this we can add the Parfit’s Hitchhiker problem: how do you feel, internally, about not paying Omega?  One could just as easily criticize your desire to pay Omega as “rationalization”, as you cannot identify a future benefit caused by your action.  But the problem, if any, lies in failing to recognize acausal benefits, not in your desire to pay.

\n

 

\n

The Prisoner’s Dilemma, Revisited: Self-sacrificial caring is (sometimes) self-optimizing

\n

 

\n

In this light, consider the Prisoner’s Dilemma.  Basically, you and your partner-in-crime are deciding whether to rat each other out; the sum of the benefit to you both is highest if you stay silent, but either of you can do better at the other's expense by confessing.  (Label this standard teaching scenario the “Literal Prisoner’s Dilemma Situation”, or LPDS.)
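
For concreteness, here is a conventional payoff matrix for the LPDS (the numbers are typical illustrative ones, not taken from this post):

```python
# Literal Prisoner's Dilemma payoffs as (my utility, partner's utility),
# indexed by (my move, partner's move); C = stay silent, D = confess.

payoff = {
    ("C", "C"): (-1, -1),    # both stay silent: light sentences
    ("C", "D"): (-10, 0),    # I stay silent, partner confesses
    ("D", "C"): (0, -10),
    ("D", "D"): (-6, -6),    # both confess
}

# Causally, D dominates: whatever the partner does, confessing is better for me...
for partner in ("C", "D"):
    print(partner, payoff[("D", partner)][0] > payoff[("C", partner)][0])  # True, True

# ...yet the symmetric outcome (C, C) beats (D, D) for both players,
# which is exactly the tension the payoff matrix is meant to capture.
print(payoff[("C", "C")], payoff[("D", "D")])
```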

\n

 

\n

Eliezer Yudkowsky previously claimed in The True Prisoner's Dilemma that mentioning the LPDS introduces a major confusion (and I agreed): real people in that situation do not, intuitively, see the payoff matrix as it's presented.  To most of us our satisfaction with the outcome is not solely a function of how much jail time we avoid: we also care about the other person, and don't want to be a backstabber.  So, the argument goes, we need a really contrived situation to get a payoff matrix like that.

\n

 

\n

I suggest an alternate interpretation of this disconnect: the payoff matrix is correct, but the humans facing the dilemma have been Parfitian-filtered to the point where their decision theory contains dispositions that assist them in winning on these problems, even given that payoff matrix.  To see why, consider another set of theories to choose from, like the two above:

\n

 

\n

Theory 3a: Humans in a literal Prisoner’s Dilemma (LPDS) have a positive weight in their utility function both for themselves, and their accomplices, and so would be hurt to see the other one suffer jail time.

\n

Theory 3b: Humans in a literal Prisoner’s Dilemma (LPDS) have a positive weight in their utility function only for themselves, but are limited to using a decision theory that survived past social/biological Parfitian filters.

\n

 

\n

As with the point about parents, the lesson is not that you don’t care about your friends; rather, it’s that your actions based on caring are the same as that of a self-interested being with a good decision theory.  What you recognize as “just wrong” could be the feeling of a different “reasoning module” acting.

\n

 

\n

Conclusion

\n

 

\n

By viewing moral intuitions as a mechanism that allows propagation through Parfitian filters, we can better understand:

\n

 

\n

1) what moral intuitions are (the set of intuitions that were selected for because they saw optimality in the absence of a causal link);

\n

2) why they arose (because agents with them pass through the Parfitian filters that weed out others, evolution being one of them); and

\n

3) why we view this as a relevant category boundary in the first place (because they are all similar in that they elevate the perceived benefit of an action that lacks a self-serving, causal benefit).

\n

 

\n

Footnotes:

\n

 

\n

[1] My variant differs in that there is no communication between you and Omega other than knowledge of your conditional behaviors, and the price is absurdly low to make sure the relevant intuitions in your mind are firing.

\n

 

\n

[2] Note that 1b’s assumption of constraints on the agent’s decision theory does not penalize it, as this must be assumed in both cases, and additional implications of existing assumptions do not count as additional assumptions for purposes of gauging probabilities.

" } }, { "_id": "LzQcmBwAJBGyzrt6Z", "title": "Harry Potter and the Methods of Rationality discussion thread, part 3", "pageUrl": "https://www.lesswrong.com/posts/LzQcmBwAJBGyzrt6Z/harry-potter-and-the-methods-of-rationality-discussion-25", "postedAt": "2010-08-30T05:37:32.615Z", "baseScore": 8, "voteCount": 8, "commentCount": 571, "url": null, "contents": { "documentId": "LzQcmBwAJBGyzrt6Z", "html": "
\n
\n
\n

Update: This post has also been superseded - new comments belong in the latest thread.

\n

The second thread has now also exceeded 500 comments, so after 42 chapters of MoR it's time for a new thread.

\n

From the first thread

\n
\n

Spoiler Warning:  this thread contains unrot13'd spoilers for Harry Potter and the Methods of Rationality up to the current chapter and for the original Harry Potter series.  Please continue to use rot13 for spoilers to other works of fiction, or if you have insider knowledge of future chapters of Harry Potter and the Methods of Rationality.

A suggestion: mention at the top of your comment which chapter you're commenting on, or what chapter you're up to, so that people can understand the context of your comment even after more chapters have been posted.  This can also help people avoid reading spoilers for a new chapter before they realize that there is a new chapter.

\n
\n
\n
\n
" } }, { "_id": "kw3KxMNpbQ9bfWSFp", "title": "Exploitation and cooperation in ecology, government, business, and AI", "pageUrl": "https://www.lesswrong.com/posts/kw3KxMNpbQ9bfWSFp/exploitation-and-cooperation-in-ecology-government-business", "postedAt": "2010-08-27T14:27:16.995Z", "baseScore": 24, "voteCount": 27, "commentCount": 43, "url": null, "contents": { "documentId": "kw3KxMNpbQ9bfWSFp", "html": "

Ecology

\n

An article in a recent issue of Science (Elisa Thebault & Colin Fontaine, \"Stability of ecological communities and the architecture of mutualistic and trophic networks\", Science 329, Aug 13 2010, p. 853-856; free summary here) studies 2 kinds of ecological networks: trophic (predator-prey) and mutualistic (in this case, pollinators and flowers).  They looked at the effects of 2 properties of networks: modularity (meaning the presence of small, highly-connected subsets that have few external connections) and nestedness (meaning the likelihood that species X has the same sort of interaction with multiple other species).  (It's unfortunate that they never define modularity or nestedness formally; but this informal definition is still useful.  I'm going to call nestedness \"sharing\", since they do not state that their definition implies nesting one network inside another.)  They looked at the impact of different degrees of modularity and nestedness, in trophic vs. mutualistic networks, on persistence (fraction of species still alive at equilibrium) and resilience (1/time to return to equilibrium after a perturbation).  They used both simulated networks, and data from real-world ecological networks.

\n

What they found is that, in trophic networks, modularity is good (increases persistence and resilience) and sharing is bad; while in mutualistic networks, modularity is bad and sharing is good.  Also, in trophic networks, species go extinct so as to make the network more modular and less sharing; in mutualistic networks, the opposite occurs.

\n

The commonsense explanation is that, if species X is exploiting species Y (trophic), the interaction decreases the health of species Y; and so having more exploiters of Y is bad for both X and Y.  OTOH, if species X benefits from species Y, X will get a secondhand benefit from any mutually-beneficial relationships that Y has; if Y also benefits from X (mutualistic), then neither X nor Y will adapt to prevent Z from also having a mutualistic relationship with Y.  (The theory does not address a mixture of trophic and mutualistic interactions in a single network.)
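
A deliberately crude toy model of this explanation (every number and functional form below is invented purely for illustration, and is not taken from the paper):

```python
# Toy model: Y's health falls with each exploiter and rises with each mutualist;
# exploiters divide Y's health among themselves, mutualists each benefit from it.

def health_of_Y(n_exploiters, n_mutualists):
    return 10 - 3 * n_exploiters + 2 * n_mutualists

def benefit_to_exploiter_of_Y(n_exploiters, n_mutualists):
    # Exploiters split whatever health Y has left.
    return max(health_of_Y(n_exploiters, n_mutualists), 0) / max(n_exploiters, 1)

def benefit_to_mutualist_of_Y(n_exploiters, n_mutualists):
    # A mutualist benefits from Y's overall health, without dividing it up.
    return max(health_of_Y(n_exploiters, n_mutualists), 0) * 0.5

print(benefit_to_exploiter_of_Y(1, 0), benefit_to_exploiter_of_Y(3, 0))   # 7.0  ~0.33
print(benefit_to_mutualist_of_Y(0, 1), benefit_to_mutualist_of_Y(0, 3))   # 6.0  8.0
# More exploiters sharing Y hurts each exploiter (pressure toward modularity);
# more mutualists of Y helps every mutualist secondhand (pressure toward sharing).
```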

\n

The effect is strong - see this figure:

\n

[Figure: change in modularity and nestedness over the course of simulated network evolution, for exploitative (trophic) vs. mutualistic networks]

\n

This shows that, when nodes have exploitative (trophic) relationships, and you simulate evolution starting from a random network, the network almost always becomes more modular and less sharing over time; while the opposite occurs when nodes have mutually-beneficial relationships.  (The few cases along the line y=x are, I infer, not cases where this effect was weak, but cases where the initial random network happened to be one or two species away from a local equilibrium.)

\n

Government

\n

Armed with this knowledge, we can look at the structure of different cultures, governments, and religions, and say whether they're likely to be exploitative or mutualistic.  Feudalism is an extremely hierarchical, compartmentalized social structure, in which every person has one trophic relationship with one superior.  We can look at its org chart and predict that it's exploitative, without knowing anything more about it.  The less-hierarchical, loopy org chart of a democracy is more compatible with mutualistic relationships - at least at the top.  Note that I'm not talking about the directionality of control in the relationships, as is usual when discussing democracy; I'm talking about the mere presence of multiple relationships per party.  The Catholic church has a hierarchical organization, and is perhaps not coincidentally richer than any Protestant church relative to income per capita - except for the Mormons, with assets of about $6000/member, whose organizational structure I know little about (read this if interested).  I do know that the Mormon church historically combined church and state, thus halving the number of power relationships its citizens participate in.

\n

The governmental structure of a democracy is not dramatically different from the structure of a monarchy.  What's really different is the economic structure of a free market, with many more shared relationships when compared, for instance, to monopolistic medieval economies, or mercantilistic colonial economies.  It may be that the free market, not democracy, is responsible for our freedom.

\n

Business

\n

The employer-employee relationship appears trophic.  Employees are forbidden from working for more than one employer.  Consultants, on the other hand, have many clients.  So do doctors and lawyers.  Not surprisingly, all of them get paid more per hour than employees.

\n

Even if you're an employee, you can compare the internal structure of different companies.  Every person within a company of selfish agents would, ideally, like their relationship with others to be exploitative, but for all other relationships to be mutualistic.  The company owner would like to exploit the management and the workers, but have the management and workers have mutualistic interactions; while the management would prefer to exploit the workers.  You may be able to look at the internal structure of a company, and see how far down the exploitative pattern penetrates.  If it's a hierarchy of private fiefdoms all the way down, beware.

\n

Artificial intelligence

\n

Any artificial intelligence will have internal structure.  Artificial intelligences, unlike humans, do not come in standard-sized reproductive units, walled off computationally; therefore, there might not be cleanly-defined \"individuals\" (literally, non-divisible people) in an AI society.  But the bulk of the computation, and hence the bulk of the potential consciousness, will be within small, local units (due to the ubiquity of power-law distributions, the efficiency of fractal transport and communication networks, and the speed of light).  So it is important to consider the welfare of these units when designing AIs - at least, if we intend our initial designs to persist.

\n

A hierarchical AI design is more compatible with exploitative relationships - even if it is bidirectional.  Again, control is not the issue; the mere presence of links is.  A decentralized agent-based AI, in the sense of agent-based software (often modelled on the free market, with software agents bidding on tasks), would be more amenable to mutualistic relationships.

\n

A final caution

\n

The work cited shows that having exploitative vs. mutualistic interactions causes compartmentalized vs. highly-shared networks to arise.  It does not show that constructing compartmentalized or highly-shared networks causes exploitative or mutualistic interactions, respectively, to arise.  This would be helpful to know; but remains to be demonstrated.  For intelligent free agents, an argument that it would is that, when an agent has many relationships, they can cut off any agents who become exploitative.  (This might not be true within an AI.)

\n

Finally, as just noted, plants and insects are not intelligent agents; and AI components might not be completely free agents.  Each of the domains above has important differences from the others, and results might not transfer as easily as this post suggests.

" } }, { "_id": "rvZTv44rLZpJjJH68", "title": "Cryonics Questions", "pageUrl": "https://www.lesswrong.com/posts/rvZTv44rLZpJjJH68/cryonics-questions", "postedAt": "2010-08-26T23:19:43.399Z", "baseScore": 13, "voteCount": 32, "commentCount": 168, "url": null, "contents": { "documentId": "rvZTv44rLZpJjJH68", "html": "

Cryonics fills many with disgust, a cognitively dangerous emotion.  To test whether a few of your possible cryonics objections are reason or disgust based, I list six non-cryonics questions.  Answering yes to any one question indicates that rationally you shouldn’t have the corresponding cryonics objections. 

\n

1.  You have a disease and will soon die unless you get an operation.  With the operation you have a non-trivial but far from certain chance of living a long, healthy life.  By some crazy coincidence the operation costs exactly as much as cryonics does and the only hospitals capable of performing the operation are next to cryonics facilities.  Do you get the operation?

\n

Answering yes to (1) means you shouldn’t object to cryonics because of costs or logistics.

\n

2.  You have the same disease as in (1), but now the operation costs far more than you could ever obtain.  Fortunately, you have exactly the right qualifications NASA is looking for in a space ship commander.  NASA will pay for the operation if in return you captain the ship should you survive the operation.  The ship will travel close to the speed of light.  The trip will subjectively take you a year, but when you return one hundred years will have passed on Earth.  Do you get the operation?

\n

Answering yes to (2) means you shouldn't object to cryonics because of the possibility of waking up in the far future.

\n

\n

3.  Were you alive 20 years ago?

\n

Answering yes to (3) means you have a relatively loose definition of what constitutes “you” and so you shouldn’t object to cryonics because you fear that the thing that would be revived wouldn’t be you.

\n

4.  Do you believe that there is a reasonable chance that a friendly singularity will occur this century?   

\n

Answering yes to (4) means you should think it possible that someone cryogenically preserved would be revived this century.  A friendly singularity would likely produce an AI that in one second could think all the thoughts that would take a billion scientists a billion years to contemplate.  Given that bacteria seem to have mastered nanotechnology, it’s hard to imagine that a billion scientists working for a billion years wouldn’t have a reasonable chance of mastering it.  Also, a friendly post-singularity AI would likely have enough respect for human life to be willing to revive the cryogenically preserved.

\n

5.  You somehow know that a singularity-causing intelligence explosion will occur tomorrow.  You also know that the building you are currently in is on fire.  You pull an alarm and observe everyone else safely leaving the building.  You realize that if you don’t leave you will fall unconscious, painlessly die, and have your brain incinerated.  Do you leave the building?

\n

Answering yes to (5) means you probably shouldn’t abstain from cryonics because you fear being revived and then tortured.

\n

6.  One minute from now a man pushes you to the ground, pulls out a long sword, presses the sword’s tip to your throat, and pledges to kill you.  You have one small chance at survival:  grab the sword’s sharp blade, thrust it away and then run.  But even with your best efforts you will still probably die.  Do you fight against death?

\n

Answering yes to (6) means you can’t pretend that you don’t value your life enough to sign up for cryonics.

\n

If you answered yes to all six questions and have not and do not intend to sign up for cryonics please give your reasons in the comments.  What other questions can you think of that provide a non-cryonics way of getting at cryonics objections?

" } }, { "_id": "XuyRMxky6G8gq7a69", "title": "Self-fulfilling correlations", "pageUrl": "https://www.lesswrong.com/posts/XuyRMxky6G8gq7a69/self-fulfilling-correlations", "postedAt": "2010-08-26T21:07:28.112Z", "baseScore": 151, "voteCount": 123, "commentCount": 50, "url": null, "contents": { "documentId": "XuyRMxky6G8gq7a69", "html": "

Correlation does not imply causation.  Sometimes corr(X,Y) means X=>Y; sometimes it means Y=>X; sometimes it means W=>X, W=>Y.  And sometimes it's an artifact of people's beliefs about corr(X, Y).  With intelligent agents, perceived causation causes correlation.

\n

Volvos are believed by many people to be safe.  Volvo has an excellent record of being concerned with safety; they introduced 3-point seat belts, crumple zones, laminated windshields, and safety cages, among other things.  But how would you evaluate the claim that Volvos are safer than other cars?

\n

Presumably, you'd look at the accident rate for Volvos compared to the accident rate for similar cars driven by a similar demographic, as reflected, for instance in insurance rates.  (My google-fu did not find accident rates posted on the internet, but insurance rates don't come out especially pro-Volvo.)  But suppose the results showed that Volvos had only 3/4 as many accidents as similar cars driven by similar people.  Would that prove Volvos are safer?

\n

Perceived causation causes correlation

\n

No.  Besides having a reputation for safety, Volvos also have a reputation for being overpriced and ugly.  Mostly people who are concerned about safety buy Volvos.  Once the reputation exists, even if it's not true, a cycle begins that feeds on itself:  Cautious drivers buy Volvos, have fewer accidents, resulting in better statistics, leading more cautious drivers to buy Volvos.
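
That cycle is easy to reproduce in a toy simulation in which the car confers no safety benefit at all (all the rates below are invented):

```python
import random

# Cars confer no safety benefit whatsoever; only driver caution matters.
# Cautious drivers are simply more likely to pick the "safe-reputation" brand.

random.seed(0)
accidents = {"Volvo": 0, "Other": 0}
drivers = {"Volvo": 0, "Other": 0}

for _ in range(100_000):
    cautious = random.random() < 0.5
    brand = "Volvo" if random.random() < (0.7 if cautious else 0.3) else "Other"
    accident_rate = 0.02 if cautious else 0.06     # depends only on the driver
    drivers[brand] += 1
    accidents[brand] += random.random() < accident_rate

for brand in ("Volvo", "Other"):
    print(brand, accidents[brand] / drivers[brand])
# The "Volvo" group shows a markedly lower accident rate (roughly two-thirds
# that of the other group here), even though the car itself did nothing.
```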

\n

Do Montessori schools or home-schooling result in better scores on standardized tests?  I'd bet that they do.  Again, my google-fu is not strong enough to find any actual reports on, say, average SAT-score increases for students in Montessori schools vs. public schools.  But the largest observable factor determining student test scores, last I heard, is participation by the parents.  Any new education method will show increases in student test scores if people believe it results in increases in student test scores, because only interested parents will sign up for that method.  The crazier, more-expensive, and more-difficult the method is, the more improvement it should show; craziness should filter out less-committed parents.

\n

Are vegetarian diets or yoga healthy for you?  Does using the phone while driving increase accident rates?  Yes, probably; but there is a self-fulfilling component in the data that is difficult to factor out.

\n

Conditions under which this occurs

\n

If you believe X helps you achieve Y, and so you use X when you are most-motivated to achieve Y and your motivation has some bearing on the outcome, you will observe a correlation between X and Y.

\n

This won't happen if your motivation or attitude has no bearing on the outcome (beyond your choice of X).  If passengers prefer one airline based on their perception of its safety, that won't make its safety record improve.

\n

However, this is different from either confidence or the placebo effect.  I'm not talking about the PUA mantra that \"if you believe a pickup line will work, it will work\".  And I'm not talking about feeling better when you take a pill that you think will help you feel better.  This is a sample-selection bias.  A person is more likely to choose X when they are motivated to achieve Y relative to other possible positive outcomes of X, and hence more inclined to make many other little trade-offs to achieve Y which will not be visible in the data set.

\n

It's also not the effect people are guarding against with double-blind experiments.  That's guarding against the experimenter favoring one method over another.  This is, rather, an effect guarded against with random assignment to different groups.

\n

Nor should it happen in cases where the outcome being studied is the only outcome people consider.  If a Montessori school cost the same, and was just as convenient for the parents, as every other school, and all factors other than test score were equal, and Montessori schools were believed to increase test scores, then any parent who cared at all would choose the Montessori school.  The filtering effect would vanish, and so would the portion of the test-score increase caused by it.  Same story if one choice improves all the outcomes under consideration:  Aluminum tennis racquets are better than wooden racquets in weight, sweet spot size, bounce, strength, air resistance, longevity, time between restrings, and cost.  You need not suspect a self-fulfilling correlation.

\n

It may be cancelled by a balancing effect, when you are more highly-motivated to achieve Y when you are less likely to achieve Y.  In sports, if you wear your lucky undershirt only for tough games, you'll find it appears to be unlucky, because you're more likely to lose tough games.  Another balancing effect is if your choice of X makes you feel so confident of attaining Y that you act less concerned about Y; an example is (IIRC) research showing that people wearing seat-belts are more likely to get into accidents.

\n

Application to machine learning and smart people

\n

Back in the late 1980s, neural networks were hot; and evaluations usually indicated that they outperformed other methods of classification.  In the early 1990s, genetic algorithms were hot; and evaluations usually indicated that they outperformed other methods of classification.  Today, support vector machines (SVMs) are hot; and evaluations usually indicate that they outperform other methods of classification.  Neural networks and genetic algorithms no longer outperform older methods.  (I write this from memory, so you shouldn't take it as gospel.)

\n

There is a publication bias:  When a new technology appears, publications indicating it performs well are interesting.  Once it's established, publications indicating it performs poorly are interesting.  But there's also a selection bias.  People strongly motivated to make their systems work well on difficult problems are strongly motivated to try new techniques; and also to fiddle with the parameters until they work well.

\n

Fads can create self-fulfilling correlations.  If neural networks are hot, the smartest people tend to work on neural networks.  When you compare their results to other results, it can be difficult to look at neural networks vs., say, logistic regression; and factor out the smartest people vs. pretty smart people effect.

\n

(The attention of smart people is a proxy for effectiveness, which often misleads other smart people - e.g., the popularity of communism among academics in America in the 1930s.  But that's yet another separate issue.)

" } }, { "_id": "C2ConH6njvjmxZzMX", "title": "Criteria for Rational Political Conversation", "pageUrl": "https://www.lesswrong.com/posts/C2ConH6njvjmxZzMX/criteria-for-rational-political-conversation", "postedAt": "2010-08-26T15:53:19.223Z", "baseScore": -10, "voteCount": 22, "commentCount": 39, "url": null, "contents": { "documentId": "C2ConH6njvjmxZzMX", "html": "

Query: by what objective criteria do we determine whether a political decision is rational?

\n

I propose that the key elements -- necessary but not sufficient -- are (where \"you\" refers collectively to everyone involved in the decisionmaking process):

\n\n

If an argument satisfies all of these requirements, it is at least provisionally rational. If it fails any one of them, then it's not rational and needs to be corrected or discarded.

This is not a circular definition (defining \"rationality\" by referring to \"reasonable\" things, where \"reasonable\" depends on people being \"rational\"); it is more like a recursive algorithm, where large ambiguous problems are split up into smaller and smaller sub-problems until we get to a size where the ambiguity is negligible.

This is not one great moral principle; it is more like a self-modifying working process (subject to rational criticism and therefore improvable over time -- optimization by successive approximation). It is an attempt to apply the processes of science (or at least the same reasoning which arrived at those processes) to political discourse.

So... can we agree on this?

\n
\n

This is a hugely, vastly, mindbogglingly trimmed-down version of what I originally posted. All comments prior to 2010-08-26 20:52 (EDT) refer to that version, which I have reposted here for comparison purposes and for the morbidly curious. (It got voted down to negative 6. Twice.)

" } }, { "_id": "nwA2mP55oCSpZ9sza", "title": "The prior of a hypothesis does not depend on its complexity", "pageUrl": "https://www.lesswrong.com/posts/nwA2mP55oCSpZ9sza/the-prior-of-a-hypothesis-does-not-depend-on-its-complexity", "postedAt": "2010-08-26T13:20:05.054Z", "baseScore": 34, "voteCount": 39, "commentCount": 69, "url": null, "contents": { "documentId": "nwA2mP55oCSpZ9sza", "html": "

Many thanks to Unknowns for inventing the scenario that led to this post, and to Wei Dai for helpful discussion.

\n

Imagine you subscribe to the universal prior. Roughly, this means you assign credence 2^-k to each program of length k whose output matches your sensory inputs so far, and 0 to all programs that failed to match. Does this imply you should assign credence 2^-m to any statement about the universe (\"hypothesis\") that has length m? or maybe Kolmogorov complexity m?

\n

The answer is no. Consider the following examples:

\n

1. The complexity of \"A and B and C and D\" is roughly equal to the complexity of \"A or B or C or D\", but we know for certain that the former hypothesis can never be more probable than the latter, no matter what A, B, C and D are.

\n

2. The hypothesis \"the correct theory of everything is the lexicographically least algorithm with K-complexity 3^^^^3\" is quite short, but the universal prior for it is astronomically low.

\n

3. The hypothesis \"if my brother's wife's first son's best friend flips a coin, it will fall heads\" has quite high complexity, but should be assigned credence 0.5, just like its negation.

\n

Instead, the right way to derive a prior over hypotheses from a prior over predictors is to construct the set of all predictors (world-algorithms) that \"match\" the hypothesis, and see how \"wide\" or \"narrow\" that set is. There's no connection to the complexity of the hypothesis itself.
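In rough code-sketch form (my own illustration of the idea above, using a hypothetical finite list as a stand-in for the set of all world-programs), the prior of a hypothesis would be the total weight of the predictors that satisfy it, not 2^-(length of the hypothesis):

```python
# Sketch: the prior of hypothesis H is the total universal-prior weight of the
# world-programs ("predictors") that make H true, not 2^-len(H).
# `programs` is a hypothetical finite list of (length_in_bits, program) pairs.

def hypothesis_prior(programs, satisfies_h):
    """satisfies_h(program) -> True if that world-program makes H true."""
    return sum(2.0 ** -k for k, program in programs if satisfies_h(program))
```

On this accounting, a syntactically short hypothesis can still get an astronomically small prior if almost no program satisfies it (example 2 above), and a syntactically long one can get a prior near 0.5 if roughly half of the surviving weight satisfies it (example 3).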

\n

An exception is if the hypothesis gives an explicit way to construct a predictor that satisfies it. In that case the correct prior for the hypothesis is bounded from below by the \"naive\" prior implied by length, so it can't be too low. This isn't true for many interesting hypotheses, though. For example the words \"Islam is true\", even expanded into the complete meanings of these words as encoded in human minds, don't offer you a way to implement or predict an omnipotent Allah, so the correct prior value for the Islam hypothesis is not obvious.

\n

This idea may or may not defuse Pascal's Mugging - I'm not sure yet. Sorry, I was wrong about that, see Spurlock's comment and my reply.

" } }, { "_id": "hSeqgnc5CBJ643x9k", "title": "Luminosity (Twilight fanfic) discussion thread", "pageUrl": "https://www.lesswrong.com/posts/hSeqgnc5CBJ643x9k/luminosity-twilight-fanfic-discussion-thread", "postedAt": "2010-08-25T08:49:22.156Z", "baseScore": 19, "voteCount": 19, "commentCount": 439, "url": null, "contents": { "documentId": "hSeqgnc5CBJ643x9k", "html": "

In the vein of the Harry Potter and the Methods of Rationality discussion threads this is the place to discuss anything relating to Alicorn's Twilight fanfic Luminosity. The fanfic is also archived on Alicorn's own website <strike>(warning: white text on black background)</strike>.

\n

Previous discussion is hidden so deeply within the first Methods of Rationality thread that it's difficult to find even if you already know it exists. 

\n

Similar to how Eliezer's fanfic popularizes material from his sequences, Alicorn is using the insights from her Luminosity sequence.

\n

Spoilers for the fanfic itself, as well as the original novels, need not and should not be hidden, but spoiler protection still applies for any other works of fiction, except for Harry Potter and the Methods of Rationality chapters more than a week old, so we can freely discuss similarities and differences.

\n

EDIT: Post-ginormous-spoiler discussion should go to the second thread. (If you have any doubt on whether you have reached the spoiler in question you have not.)

" } }, { "_id": "Sfir5pFYgXJ8PEbwP", "title": "Burning Man Meetup: Bayes Camp", "pageUrl": "https://www.lesswrong.com/posts/Sfir5pFYgXJ8PEbwP/burning-man-meetup-bayes-camp", "postedAt": "2010-08-25T06:14:54.005Z", "baseScore": 19, "voteCount": 19, "commentCount": 53, "url": null, "contents": { "documentId": "Sfir5pFYgXJ8PEbwP", "html": "

In celebration of the virtues of applied rationality, Less Wrong is going to Burning Man! And because Heinlein rationalists should win, Bayes Camp is going to be the most awesome place there.

\n

A bunch of people from SingInst/Less Wrong will be descending upon the desert, bedecked as the members of the Bayesian Conspiracy. Kevin, Jasen, JustinShovelain, Peter de Blanc, Michael Vassar and Nick Tarleton, among others, will be there. If you'd like to stop by, say so in the comments! 

\n

We'll be at 6:50, F, and should be there from Monday 30th.

\n

Please note: Burning Man is serious stuff, and if you don’t think you’re up to the desert, you shouldn’t come. Either way, read the survival guide.

\n

 

\n

EDIT: updated location

" } }, { "_id": "iSmai5P4rRbhXEnhy", "title": "Why is medical advice all caution and no info?", "pageUrl": "https://www.lesswrong.com/posts/iSmai5P4rRbhXEnhy/why-is-medical-advice-all-caution-and-no-info", "postedAt": "2010-08-24T12:48:37.000Z", "baseScore": 0, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "iSmai5P4rRbhXEnhy", "html": "
\n
\"A

Wordpress now offers me free images of whatever I'm talking about. This is presumably an illustration of how the world doesn't necessarily become a better place if you tell everyone they urgently need to seek medical help for something you won't tell them anything more about. Image by uncultured via Flickr

\n
\n

I had a couple of bad looking medical test results in a row, so I was sent to a specialist, with advice along the lines of ‘well, we can’t say it’s not cancer… probably get checked out as soon as possible’. When I eventually got to the specialist he immediately told me a bunch of relevant conditional probabilities: of any problem at all given such test results, of it being a bad problem given it’s some kind of problem, the probability per year of cancer given each kind of problem. These were not scary numbers at all. Given that it can take months to extract your data from one doctor and get an appointment with a specialist, it would have been very nice to have been told these numbers by the original doctor, instead of just knowing for a few months that such results are some unknown degree of evidence in favor of cancer. Is this just an oversight by a particular useless doctor?

\n

It seems not. I’ve noticed another two examples of the same problem in medicine recently. If you look up symptoms online, you will often be told to seek emergency medical assistance immediately. It often doesn’t even tell you what the potential problem is, and certainly not what the odds are of it occurring, so it’s pretty hard to evaluate the suggestion. If you actually go to a doctor about one of these symptoms, the doctor often tells you not to worry about it without more than knowledge of your age. Often the website also knows your age, or at least asked for it, and it would be simple for them to mention that said symptom is only a concern if you are over fifty, or even just the basic information about how common such a thing is given that symptom. Similarly, my region has a free health phone line where you can ask a nurse whether symptoms are worth bothering to go to a doctor about. That seems like a decent idea, since apparently people overestimate when it’s worth going to the doctor. However, in my small amount of experimentation it seems that anything I say prompts the suggestion that I see a doctor ‘within four hours’. I mean, I have tried telling them my bottom hurts at 2am and they tell me to get to a doctor within four hours. I would be very surprised if an emergency room were willing to treat sore bottoms in the middle of the night, so why not just tell the caller that at the start? Sending someone to a doctor can’t possibly help if the doctor is guaranteed to send them home.

\n

In all these cases medical advice errs so much in the direction of caution that the doctor finally responsible for treating you will often hardly have to look at you before sending you home. The advice givers also refuse to offer relevant information such as conditional probabilities, so you can’t judge for yourself how far you want to cycle in the freezing night to avoid a one in a million chance of arthritis or whatever. The costs of these behaviors in the short term are needless anxiety for patients, and doctors’ visits that informed patients would not want. In the long term, patients will learn to distrust such advice givers and will miss out on useful advice when they really should go to an emergency room, while probably still harboring a slight fear that they have done wrong and will prematurely die for their sins. Is this medical advice format as common as it seems to me? Why is it done? I can understand an uninformed relative who is super-concerned about your wellbeing and believes you to be biased telling you you must seek medical advice for everything and refusing you any information. Or a doctor being in favor of it. But why do these presumably informed third parties and other doctors do it?

\n

Added 1 Sept 10: I’m fine, sorry I wasn’t clear enough about that.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "PZfAp5eMvrdedPHt4", "title": "Minimum computation and data requirements for consciousness. ", "pageUrl": "https://www.lesswrong.com/posts/PZfAp5eMvrdedPHt4/minimum-computation-and-data-requirements-for-consciousness", "postedAt": "2010-08-23T23:53:50.656Z", "baseScore": -18, "voteCount": 22, "commentCount": 83, "url": null, "contents": { "documentId": "PZfAp5eMvrdedPHt4", "html": "

Consciousness is a difficult question because it is poorly defined and is the subjective experience of the entity experiencing it. Because an individual experiences their own consciousness directly, that experience is always richer and more compelling than the perception of consciousness in any other entity; your own consciousness always seems more “real” and richer than the would-be consciousness of another entity.

\n

Because the experience of consciousness is subjective, we can never “know for sure” that an entity is actually experiencing consciousness. However, certain computational functions must be accomplished for consciousness to be experienced. I am not attempting to discuss all the computational functions that are necessary, just to take a first step at enumerating some of them and considering the implications.

\n

First, an entity must have a “self detector”: a pattern recognition computation structure which it uses to recognize its own state of being an entity and of being the same entity over time. If an entity is unable to recognize itself as an entity, then it can't be conscious that it is an entity. To rephrase Descartes, \"I perceive myself to be an entity, therefore I am an entity.\" It is possible to be an entity and not perceive that one is an entity; this happens in humans, but rarely. Other computation structures may be necessary also, but without the ability to recognize itself as an entity, an entity cannot be conscious.

\n

All pattern-recognition-type computation is inherently subject to errors, usually of type 1 (false positive) or type 2 (false negative). In pattern recognition there is an inherent trade-off of false positives for false negatives. Reducing both false positives and false negatives is much more difficult than trading off one for the other.

\n

I suspect that detection of external entities evolved before the detection of internal entities; for predator avoidance, prey capture, offspring recognition and mate detection. Social organisms can recognize other members of their group. The objectives in the recognition of other entities are varied, and so the fidelity of recognition required is varied also. Entity detection is followed by entity identification and sorting of the entity into various classes so as to determine how to interact with that entity; opposite gender mate: reproduce, offspring: feed and care for, predator: run away.

\n

Humans have entity detectors, and those detectors exhibit false positives (detecting the entity (spirit) of the tree, rock, river and other inanimate objects; pareidolia) and false negatives (not recognizing a particular ethnic group as fully human). Evolution tends to put more of a bias toward false positives: a false alarm in detecting a predator is a lot better than failing to detect one.

\n

How the human entity detector works is not fully understood; I have a hypothesis, which I outline in my blog post on xenophobia.

\n

I suggest that when two humans meet, they unconsciously do the equivalent of a Turing Test, with the objective being to determine whether the other entity is “human enough”. Essentially they try to communicate, and if the error rate is too high (due to non-consilience of communication protocols), then xenophobia is triggered via the uncanny valley effect. In the context of this discussion, the entity detector defaults to non-human (or non-green-beard; see below).

\n

I think the uncanny valley effect is an artifact of the human entity detector (which evolved only to facilitate survival and reproduction of those humans with it). Killing entities that are close enough to be potential competitors and rivals but not so close that they might be kin is a good evolutionary strategy; the functional equivalent of the mythic green beard gene (where the green beard gene causes the expression of a green beard and the compulsion to kill all without a green beard). Because humans evolved a large brain recently, the “non-green-beard-detector” can't be instantiated by neural structures directly and completely specified by genes, but must have a learned component. I think this learned component is the mechanism behind cultural xenophobia and religious bigotry.

\n

Back to consciousness. Consciousness implies the continuity of entity identity over time. The individual that is conscious at time=1 is self-perceived to be “the same” individual as is conscious at time=2. What does this actually mean? Is there a one-to-one correspondence between the two entities? No, there is not. The entity at time=1 will evolve into different entities at time=2 depending on the experience-path the entity has taken.

\n

If a snap-shot of the entity at time=1 is taken, duplicated into multiple exact copies and each copy is allowed to have different experiences, at time=2 there will be multiple different entities derived from the first original. Is any one of these entities “more” like the original entity? No, they are all equally derived from the original entity, they are all equally as much like the original. Each of them will have the subjective experience that they are derived from the original (because they are), each one will have the subjective experience that they are the original (because as far as each one knows, they are the original) and the subjective experience that all the others must therefore be imposters.

\n

This seeming paradox arises because of the limited resolution of the human entity detector. Can that detector distinguish between extremely similar versions of entities? To do pattern recognition, one must have a pattern for comparison. The more exacting the comparison, the more exacting the reference pattern must be. In the limit, a perfect comparison requires an exact and complete representation: a complete, 100%-fidelity emulation of the entity (so as to be able to do a one-to-one comparison). I think this relates to what Nietzsche was talking about when he said:

\n

“Whoever battles with monsters had better see that it does not turn him into a monster. And if you gaze long into an abyss, the abyss will gaze back into you.”

\n

To be able to perceive anything, one must have data for the pattern recognition necessary to detect whatever is being perceived. If the pattern recognition computational structures are unable to identify something, that something cannot be perceived. To perceive the abyss, you must have a mapping of the abyss inside of you. Because humans have self-modifying pattern recognition structures, those structures self-modify to become better at detecting whatever is being observed. As you stare into the abyss, your brain becomes more abyss-like to optimize abyss detection.

\n

With the understanding that to detect an entity one must have pattern recognition that can recognize that entity, the reason that there is the appearance of continuity of consciousness is seen to be an artifact of human pattern recognition. Human entity detection necessarily compares the observed entity with a reference entity. When that reference entity is the self, there is always a one-to-one correspondence between what is observed to be the self and the reference entity (which is the self), so there is always the identification of the observed entity as the self. It is not that there is actual continuity of a self-entity over time; rather, there is the illusion of continuity because the reference is changing exactly as the entity is changing. There are some rare cases where people feel “not themselves” (depersonalization disorder), where they think they are a substitute, or dead, or somehow not the actual person they once were. This dissociation sometimes happens due to extreme traumatic stress, and there is some thought that it is protective; dissociate the self so that the self is not there to experience the trauma and be irreparably injured by that trauma (this may not be correct). This may be what happens during anesthesia.

\n

I think this solves the “problem” of uploading: how can the uploaded entity be identical to the non-uploaded entity? The actual answer is that it can't be, but that doesn't matter: if the uploaded entity “feels” or “believes” it is the same entity as the non-uploaded entity, then as far as the uploaded entity is concerned, it is. I appreciate that this may not be a satisfactory solution to anyone but the uploaded entity, because it implies that continuity of consciousness is merely an illusion, essentially a hallucination caused by a defect in the entity detection pattern recognition. The data from the non-uploaded entity doesn't really matter. All that matters is whether the uploaded entity “feels” it is the same.

\n

I think that in a very real sense, those who seek personal immortality via cryonics or via uploading are pursuing an illusion. They are pursuing the perpetuation of the illusion of self-entity continuity. It is the same illusion that those who believe in an immortal soul are pursuing, and the same illusion the ancient Egyptians pursued via mummification. If the entity is to be self-identical for perpetuity, then it cannot change. If it cannot change, then it cannot have new experiences. If it has new experiences and changes, then it is not the same entity that it was before those changes.

\n

In terms of an AI; the AI can only be conscious if it has an entity detector that detects itself and uses itself as the pattern for that detection. It can only be conscious about aspects of itself that its entity detector has access to. For example humans are not conscious of the data processing that goes on in the visual cortex. Why? Because the human entity detector does not attempt to map that conceptual space. If the AI entity detector doesn't map part of its own computational equipment, then the AI won't be conscious of that part of its own data processing either.

\n

A recipe for friendly AI might be to program the AI to use the coherent extrapolated volition of a select group of humans as its reference entity for entity detection. In effect, that may be what some cultures are already trying to accomplish through ancestor and hero worship; attempting to mold future generations by holding up ideals as examples. That may be analogous to what EY was getting at in discussing what he wants to protect.  If the AI were given a compulsion to become ever more like the CEV of its reference entity, there are limits to how much it could change. 

\n

That might be a better use for the data that some humans want to upload to try and achieve personal immortality. They can't achieve personal immortality because the continuity of entity identity is an illusion. Selecting which humans to use would be tricky, but if their coherent extrapolated volition could be “captured”, combined and then used as the reference entity for the AI, it might be a good idea. The AI would then be no worse (and no better) than the sum of those individuals. Of course how to select those individuals is the tricky part. Anyone who wants to be selected is probably not suitable. The mind-set that I think is most appropriate is that of a parent nurturing their child, not to live vicariously through the child, but for the child's benefit. A certain turn-over per generation would keep the AI connected to present humanity but allow for change.  We do not want individuals who seek to acquire power by clawing their way to the top of the social power hierarchy by forcing others down (in a zero-sum manner). 

\n

I think allowing “wild-type” AI (individuals who upload themselves) is probably too dangerous, and is really just a monument to their egotistical fantasy of entity continuity.  Just like the Pyramids, but a pyramid that could change into something fatally destructive (uFAI). 

\n

There are some animals that “think” and act like they are people, such as dogs that have been completely acclimated to humans. What has happened is that the dog is using a “human-like” self representation as the reference for its entity pattern recognition, but because of the limited cognitive capacities of that particular dog, it doesn't recognize that the humans it observes are different from itself. An AI could be designed to think that it was human (once we knew how to actually design any AI, designing it to think it was human would be easy).

\n

Humans can do this too (emulate another entity such that they think they are that entity); I think that is in essence what Stockholm Syndrome causes. Under severe trauma, following dissociation and depersonalization, the self reforms, but in a pattern that matches, identifies with, and bonds to the perpetrator of the trauma. The traumatized person has attempted to emulate the “green-beard persona” to avoid death and abuse being perpetrated upon them by the person with the “green beard”.

\n

This may be the solution to Fermi's Paradox.  There may be no galaxy spanning AIs because by the time civilizations can accomplish such things they realize that continuity of entity identity is an illusion and have grown beyond wanting to spend effort on illusions. 

" } }, { "_id": "okNsnCKzgnXgrKrtx", "title": "The Smoking Lesion: A problem for evidential decision theory", "pageUrl": "https://www.lesswrong.com/posts/okNsnCKzgnXgrKrtx/the-smoking-lesion-a-problem-for-evidential-decision-theory", "postedAt": "2010-08-23T09:01:21.260Z", "baseScore": 6, "voteCount": 11, "commentCount": 101, "url": null, "contents": { "documentId": "okNsnCKzgnXgrKrtx", "html": "

This is part of a sequence titled \"An introduction to decision theory\". The previous post was Newcomb's Problem: A problem for Causal Decision Theories

\n

For various reasons I've decided to finish this sequence on a separate blog. This is principally because a large number of people seemed to feel that this sequence either wasn't up to the Less Wrong standard or was simply covering ground that had already been covered on Less Wrong.

\n

The decision to post it on another blog rather than simply discontinuing it came down to the fact that other people seemed to feel that the sequence had value. Those people can continue reading it at \"The Smoking Lesion: A problem for evidential decision theory\".

\n

Alternatively, there is a sequence index available: Less Wrong and decision theory: sequence index

" } }, { "_id": "jZr4pHbxuzcm9yx22", "title": "Who are we?", "pageUrl": "https://www.lesswrong.com/posts/jZr4pHbxuzcm9yx22/who-are-we", "postedAt": "2010-08-23T00:35:55.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "jZr4pHbxuzcm9yx22", "html": "

I wonder if part of the reason for persistent disagreement on political positions is that people mean quite different things by ‘We’ in sentences like ‘We should do x’. Here are three:

\n

We = ‘the government’

\n

As in, ‘we should control markets to avoid the dangers of their extremes’, ‘we should have discretion in the treatment of prisoners to allow for the complexities of the situation’,  ‘we should ban smoking even though people want to smoke, because they won’t when they stop being addicted’,  and ‘we should censor especially harmful writing’.

\n

This is interesting because if ‘we’ should do these things, naturally ‘we’ should be given the power to do them. However in practice since you aren’t actually in the government, what you think ‘we’ should do is not very relevant once you have allowed such power. This is especially the case with issues where you can’t easily check that ‘we’ are doing what ‘we’ should, or do anything about it.  For instance issues which prohibit your knowing what’s going on (e.g. censorship), or where good and bad actions would be hard to tell apart from the outside (for instance well justified paternalism and interest-driven paternalism), issues which involve no simple standards to check behavior against (where much discretion is allowed it is hard to claim particular decisions were wrong, or to show this to others), and issues where you are expected to disagree (for instance paternalistic laws). By calling the government ‘we’ it’s easy to forget the difference in effort required to do something and to check a large powerful organization elsewhere is doing something.

\n

We = ‘everyone’

\n

As in, ‘if we just cut our meat consumption in half we would cut carbon emissions by –%’

\n

This one is interesting for similar reasons to the above; it makes it extremely easy to overlook the fact that you don’t make decisions for everyone else, and don’t know what they are doing mostly. ‘Just cutting our meat consumption in half’ requires somehow persuading perhaps billions of others to reduce their meat consumption, despite their other priorities, disagreement with the claim, lack of sympathy to the cause, inability to hear you or know that you are suggesting such a thing even if you can afford a very expensive advertising campaign, lack of reason to trust you, lack of evidence that others will take part, and ability to just free ride if most people were to do what you say.  Despite these problems, when I was a teenager it took me a while to work out why ‘we’ don’t ‘just’ make electric cars powered by solar energy instead of petrol driven ones if we know carbon emissions are such a problem.

\n

We = ‘you and I’

\n

As in, ‘we shouldn’t have to pay for a corrupt bureaucracy to oppress us in the name of the majority’s interests’

\n

This seems the least delusional meaning of ‘we’, but the others must exist for a reason. I suspect that if everyone thinks of themselves as part of ‘we’ and talks and acts as if their decisions are the ones everyone will make, they do avoid some of the coordination problems that their word usage overlooks. For instance, if a bunch of people avoid polluting their local river because ‘if we just keep it clean it will be much nicer’, they do actually get what they want, at least until someone has enough at stake to think about it more.

\n

I think these different meanings of ‘we’ are each more popular amongst people with different political leanings. In general, whoever ‘we’ are to you, you will tend to ignore the coordination problems within that group, or between that group and the real you. This helps policies that sound good to one group sound absurd to another.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "BEQpq8DuiNtR225Me", "title": "Justifying Induction", "pageUrl": "https://www.lesswrong.com/posts/BEQpq8DuiNtR225Me/justifying-induction", "postedAt": "2010-08-22T11:13:29.656Z", "baseScore": 0, "voteCount": 10, "commentCount": 36, "url": null, "contents": { "documentId": "BEQpq8DuiNtR225Me", "html": "

\n

Related to: Where Recursive Justification Hits Bottom, Priors as Mathematical Objects, Probability is Subjectively Objective

\n

Follow up to: A Proof of Occam's Razor

\n

In my post on Occam’s Razor, I showed that a certain weak form of the Razor follows necessarily from standard mathematics and probability theory. Naturally, the Razor as used in practice is stronger and more concrete, and cannot be proven to be necessarily true. So rather than attempting to give a necessary proof, I pointed out that we learn by induction what concrete form the Razor should take.

\n

But what justifies induction? Like the Razor, some aspects of it follow necessarily from standard probability theory, while other aspects do not.

\n

Suppose we consider the statement S, “The sun will rise every day for the next 10,000 days,” assigning it a probability p, between 0 and 1. Then suppose we are given evidence E, namely that the sun rises tomorrow. What is our updated probability for S? According to Bayes’ theorem, our new probability will be:

\n

P(S|E) = P(E|S)P(S)/P(E) = p/P(E), because given that the sun will rise every day for the next 10,000 days, it will certainly rise tomorrow. So our new probability is greater than p. So this seems to justify induction, showing it to work of necessity. But does it? In the same way we could argue that the probability that “every human being is less than 10 feet tall” must increase every time we see another human being less than 10 feet tall, since the probability of this evidence (“the next human being I see will be less than 10 feet tall”), given the hypothesis, is also 1. On the other hand, if we come upon a human being 9 feet 11 inches tall, our subjective probability that there is a 10 foot tall human being will increase, not decrease. So is there something wrong with the math here? Or with our intuitions?
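As a quick numeric illustration of the update just described (the numbers here are made up; only the direction of the change matters):

```python
# Bayes' theorem: P(S|E) = P(E|S) * P(S) / P(E), with illustrative numbers.
p_s = 0.3           # prior p that the sun rises every day for the next 10,000 days
p_e_given_s = 1.0   # given S, the sun certainly rises tomorrow
p_e = 0.99          # assumed overall probability that the sun rises tomorrow

p_s_given_e = p_e_given_s * p_s / p_e
print(p_s_given_e)  # ~0.303, greater than the prior 0.3
```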

\n

In fact, the problem is neither with the math nor with the intuition. Given that every human being is less than 10 feet tall, the probability that “the next human being I see will be less than 10 feet tall” is indeed 1, but the probability that “there is a human being 9 feet 11 inches tall” is definitely not 1. So the math updates on a single aspect of our evidence, while our intuition is taking more of the evidence into account.

\n

But this math seems to work because we are trying to induce a universal which includes the evidence. Suppose instead we try to go from one particular to another: I see a black crow today. Does it become more probable that a crow I see tomorrow will also be black? We know from the above reasoning that it becomes more probable that all crows are black, and one might suppose that it therefore follows that it is more probable that the next crow I see will be black. But this does not follow. The probability of “I see a black crow today”, given that “I see a black crow tomorrow,” is certainly not 1, and so the probability of seeing a black crow tomorrow, given that I see one today, may increase or decrease depending on our prior – no necessary conclusion can be drawn. Eliezer points this out in the article Where Recursive Justification Hits Bottom.

\n

On the other hand, we would not want to draw a conclusion of that sort: even in practice we don’t always update in the same direction in such cases. If we know there is only one white marble in a bucket, and many black ones, then when we draw the white marble, we become very sure the next draw will not be white. Note however that this depends on knowing something about the contents of the bucket, namely that there is only one white marble. If we are completely ignorant about the contents of the bucket, then we form universal hypotheses about the contents based on the draws we have seen. And such hypotheses do indeed increase in probability when they are confirmed, as was shown above.
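A small worked contrast of the two situations described above (my own illustrative numbers; "complete ignorance" is modeled here, as one standard choice, by a uniform prior over the white fraction, which gives Laplace's rule of succession):

```python
# Case 1: known contents. Exactly 1 white marble among 20, drawn without
# replacement. Having just drawn the white one, no white marbles remain.
p_next_white_known = 0 / 19
print(p_next_white_known)   # 0.0: seeing white makes "next is white" less likely

# Case 2: complete ignorance about the white fraction (uniform prior).
# Laplace's rule of succession: after w white draws out of n total,
# P(next white) = (w + 1) / (n + 2).
print((0 + 1) / (0 + 2))    # 0.5 before any draws
print((1 + 1) / (1 + 2))    # ~0.667 after one white draw: the update goes the other way
```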

\n

 

" } }, { "_id": "XXmYFBo4XeEphPzdc", "title": "Consciousness of simulations & uploads: a reductio", "pageUrl": "https://www.lesswrong.com/posts/XXmYFBo4XeEphPzdc/consciousness-of-simulations-and-uploads-a-reductio", "postedAt": "2010-08-21T20:02:20.067Z", "baseScore": -1, "voteCount": 26, "commentCount": 142, "url": null, "contents": { "documentId": "XXmYFBo4XeEphPzdc", "html": "

Related articles: Nonperson predicates, Zombies! Zombies?, & many more.

\n

ETA: This argument appears to be a rehash of the Chinese room, which I had previously thought had nothing to do with consciousness, only intelligence. I nonetheless find this one instructive in that it makes certain things explicit which the Chinese room seems to gloss over.

\n

ETA2: I think I may have made a mistake in this post. That mistake was in realizing what ontology functionalism would imply, and thinking that ontology was too weird to be true. An argument from incredulity, essentially. Double oops.

\n

Consciousness belongs to a class of topics I think of as my 'sore teeth.' I find myself thinking about them all the time: in the middle of bathing, running, cooking. I keep thinking about consciousness because no matter how much I read on the subject, I find I am still confused.

\n

Now, to the heart of the matter. A major claim on which the desirability of uploading (among other things) depends, is that the upload would be conscious (as distinct from intelligent). I think I found a reductio of this claim at about 4:00 last night while staring up at my bedroom ceiling.

\n

Simulating a person

\n

The thought experiment that is supposed to show us that the upload is conscious goes as follows. (You can see an applied version in Eliezer's bloggingheads debate with Massimo Pigliucci, here. I also made a similar argument to Massimo here.)

\n

Let us take an unfortunate member of the public, call her Simone, and simulate her brain (plus inputs and outputs along the nervous system) on an arbitrarily powerful philosophical supercomputer (this also works if you simulate her whole body plus surroundings). This simulation can be at any level of complexity you like, but it's probably best if we stick to an atom-by-atom (or complex amplitudes) approach, since that leaves less room for doubt.

\n

Since Simone is a lawful entity within physics, there ought to be nothing in principle stopping us from doing so, and we should get behavioural isomorphism between the simulation and the biological Simone.

\n

Now, we can also simulate inputs and outputs to and from the visual, auditory and language regions of her brain. It follows that with the right expertise, we can ask her questions - questions like \"Are you experiencing the subjective feeling of consciousness you had when you were in a biological body?\" - and get answers.

\n

I'm almost certain she'll say \"Yes.\" (Take a moment to realize why the alternative, if we take her at her word, implies Cartesian dualism.)

\n

The question is, do we believe her when she says she is conscious? 10 hours ago, I would have said \"Of course!\" because the idea of a simulation of Simone that is 100% behaviourally isomorphic and yet unconscious seemed very counterintuitive; not exactly a p-zombie by virtue of not being atom-by-atom identical with Simone, but definitely in zombie territory.

\n

A different kind of simulation

\n

There is another way to do this thought experiment, however, and it does not require that infinitely powerful computer the philosophy department has (the best investment in the history of academia, I'd say).

\n

(NB: The next few paragraphs are the crucial part of this argument.)

\n

Observe that ultimately, the computer simulation of Simone above would output nothing but a huge sequence of zeroes and ones, process them into visual and audio outputs, and spit them out of a monitor and speakers (or whatever).

\n

So what's to stop me just sitting down and crunching the numbers myself? All I need is a stupendous amount of time, a lot of pencils, a lot (!!!) of paper, and if you're kind to me, a calculator. Atom by tedious atom, I'll simulate inputs to Simone's auditory system asking her if she's conscious, then compute her (physically determined) answer to that question.

\n

Take a moment to convince yourself that there is nothing substantively different between this scenario and the previous one, except that it contains approximately 10,000 times the maximum safe dosage of in principle.

\n

Once again, Simone will claim she's conscious.

\n

...Yeah, I'm sorry, but I just don't believe her.

\n

I don't claim certain knowledge about the ontology of consciousness, but if I can summon forth a subjective consciousness ex nihilo by making the right series of graphite squiggles (which don't even mean anything outside human minds), then we might as well just give up and admit consciousness is magic.

\n

Oops!

\n

Pigliucci is going to enjoy watching me eat my hat.

\n

What was our mistake?

\n

I've thought about this a lot in the last ~10 hours since I came up with the above.

\n

I think when we imagined a simulated human brain, what we were picturing in our imaginations was a visual representation of the simulation, like a scene in Second Life. We saw mental images of simulated electrical impulses propagating along simulated neurons, and the cause & effect in that image is pretty clear...

\n

...only it's not. What we should have been picturing was a whole series of logical operations happening all over the place inside the computer, with no physical relation between them and the represented basic units of the simulation (atoms, or whatever).

\n

Basically, the simulated consciousness was isomorphic to biological consciousness in a similar way to how my shadow is isomorphic to me. Just like the simulation, if I spoke ASL I could get my shadow to claim conscious awareness, but it wouldn't mean much.

\n

In retrospect, it should have given us pause that the physical process happening in the computer - zeroes and ones propagating along wires & through transistors - can only be related to consciousness by virtue of outsiders choosing the right interpretations (in their own heads!) for the symbols being manipulated. Maybe if you interpret that stream of zeroes and ones differently, it outputs 5-day weather predictions for a city that doesn't exist.

\n

Another way of putting it is that, if consciousness is \"how the algorithm feels from the inside,\" a simulated consciousness is just not following the same algorithm.

\n

But what about the Fading Qualia argument?

\n

The fading qualia argument is another thought experiment, this one by David Chalmers.

\n

Essentially, we strap you into a chair and open up your skull. Then we replace one of your neurons with a silicon-based artificial neuron. Don't worry, it still outputs the same electrical signals along the axons; your behaviour won't be affected.

\n

Then we do this for a second neuron.

\n

Then a third, then a kth... until your brain contains only artificial neurons (N of them, where N ≈ 1011).

\n

Now, what happens to your conscious experience in this process? A few possibilities arise:

\n
    \n
  1. Conscious experience is initially the same, then shuts off completely at some discrete number of replaced neurons: maybe 1, maybe N/2. Rejected by virtue of being ridiculously implausible.
  2. \n
  3. Conscious experience fades continuously as k → N. Certainly more plausible than option 1, but still very strange. What does \"fading\" consciousness mean? Half a visual field? A full visual field with less perceived light intensity? Having been prone to (anemia-induced) loss of consciousness as a child, I can almost convince myself that fading qualia make some sort of sense, but not really...
  4. \n
  5. Conscious experience is unaffected by the transition.
  6. \n
\n
Unlike (apparently) Chalmers, I do think that \"fading qualia\" might mean something, but I'm far from sure. Option 3 does seem like a better bet. But what's the difference between a brain full of individual silicon neurons, and a brain simulated on general-purpose silicon chips?
\n
I think the salient difference is that, in a biological brain and an artificial-neuron brain, patterns of energy and matter flow are similar. Picture an impulse propagating along an axon: that process is physically very similar in the two types of physical brain.
\n
When we simulate a brain on a general purpose computer, however, there is no physically similar pattern of energy/matter flow. If I had to guess, I suspect this is the rub: you must need a certain physical pattern of energy flow to get consciousness.
\n
More thought is needed in clarifying the exact difference between saying \"consciousness arises from patterns of energy flow in the brain,\" and \"consciousness arises from patterns of graphite on paper.\" I think there is definitely a big difference, but it's not crystal clear to me in what exactly it consists.
" } }, { "_id": "vJ8fo9FAPp3LbDKXu", "title": "Rationality Lessons in the Game of Go", "pageUrl": "https://www.lesswrong.com/posts/vJ8fo9FAPp3LbDKXu/rationality-lessons-in-the-game-of-go", "postedAt": "2010-08-21T14:33:03.391Z", "baseScore": 50, "voteCount": 54, "commentCount": 150, "url": null, "contents": { "documentId": "vJ8fo9FAPp3LbDKXu", "html": "

There are many reasons I enjoy playing go: complex gameplay arises out of simple rules, single mistakes rarely decide games, games between people of different skill can be handicapped without changing the dynamics of the game too much, there are no draws, and I just like the way it looks. The purpose of this article is to illustrate something else I like about playing go: the ways that it provides practice in basic habits of rationality, that is, the ways in which playing go helps me be less wrong.

\n

I've tried to write this so that you don't need to know the game to follow it, but reading a quick introduction would probably help. (ETA: A commenter below has helpfully pointed to more go info online.)   The main aspect to understand for this article is that go is a game of territory.  The two sides vie to occupy space and surround one another.  If a group of stones is surrounded without sufficient internal space to support itself, it is killed and removed from the board.

\n

Lesson 1: Having accurate beliefs matters.

\n

Here are three examples of a group of white stones being surrounded by black stones.  The important distinction between them is whether the white stones will eventually be captured, i.e. whether they are \"dead\" or \"alive\".

Fig 1: The white stones are dead.
Fig 2: The white stones are alive.
Fig 3: The white stones have ambiguous status.
\n
    \n
  1. In Figure 1, the white stones are being smothered by the black stones.  They are overwhelmed and cannot escape eventual capture.  They are \"dead\".
  2. \n
  3. In Figure 2, the white stones are surrounded but have sufficient structure to protect their internal space and thus will not eventually be captured by the black stones.  They are \"alive\".
  4. \n
  5. In Figure 3, the issue is unsettled.  Depending on how play proceeds, the white stones may eventually live or may eventually die.
  6. \n
\n

Accurately determining whether the white stones in situations like Figure 3 are doomed or not is essential to winning go.  There are go training books that consist of nothing but hundreds of exercises in correctly assessing the status of stones and learning the subtleties of how their arrangement affects that status.  Correct assessment of life and death is one of the things, besides a large branching factor, that make computer go so hard.

\n

I once read something on LessWrong which was later labeled The Fundamental Question of rationality: \"What do you believe, and why do you believe it?\" I have found this useful to adapt to go: when playing, I constantly ask myself, \"Are these stones alive or dead, and how do I know that?\", and this practice in turn has encouraged me to ask the original fundamental question more in the rest of my life.

\n

Lesson 2: Don't be too confident or too humble.

\n

Overconfidence leads to bad play.  If white were to mistakenly play further in the scenario of Figure 1 in an attempt to salvage the position, she would be throwing good stones after bad, giving black the chance to make gains elsewhere.  But underconfidence also leads to bad play: if white were to play further in Figure 2, she would also be wasting her time and giving up the initiative to black. As is true in general when trying to achieve your goals, too much humility is a bad thing.  Go punishes both false hope and false despair.

\n

Lesson 3: Update on new evidence

\n

The status of a group of stones can change over time as play in different areas of the board grows and interacts.  Stones that once were safe could become threatened if nearby battles have a side effect of increasing your opponent's strength in the area.  It is thus essential to constantly update your assessment of stones' status when presented with new evidence (new moves), and act accordingly.  This used to be a big weakness of mine.  I often would take a strong lead during the opening and middle of the game, only to lose when a large group I thought was safe got captured late in the game because I forgot to pay attention to all the implications of further moves elsewhere.

\n

Lesson 4: Be willing to change your mind

\n

Even when you pay attention diligently, it can be difficult to act in response to new information because of emotional attachment to your previous beliefs. When I launch an invasion of my opponent's territory, sometimes the battle that follows reveals that my invasion was too aggressive.  My invading stones are doomed.  But I don't want to give up.  I throw good stones after bad and in the end only end up strengthening my opponent's position.  Better play would be to 1) realize as soon as possible that the invasion was overplayed and then 2) shift from using the invading stones as part of the attack to using them to aid your efforts at containing rather than invading (this idea is called Aji).  Go rewards those who are willing to change their minds as soon as is warranted.

\n

Lesson 5: New evidence is the arbiter of conflicting beliefs

\n

Games usually end in a curious way: by mutual agreement.  When both players believe there are no more moves they can play that will either expand their own territory or reduce their opponent's, both pass.  The players then reveal their beliefs about the \"alive\" or \"dead\" status of each group of stones.  This is done to avoid the tedium of playing out all the moves required to actually capture doomed stones.  If the players agree, then dead stones are removed and the score is counted.  But if the two players disagree, the solution is simply to resume play.  Further moves inevitably reveal who had the more accurate belief about whether a group of stones was doomed or not.

\n

Lesson 6: The road is long

\n

One interesting feature of go is the broad range of skill levels at which it is played.  If you want to make a line of players such that each of them could defeat the player on their right 90% of the time, you can make a longer such line of go players than players of any other game I know.  This game is deep.  And as with rationality, it takes practice to internalize productive habits of thought and banish self-delusion.

\n

Moreover, as with rationality, it is usually possible to appreciate the skill of those one or two levels more advanced than oneself (kind of like the way listening comprehension leads speaking ability when learning a language).  Thus, there is a constant pull, an encouragement to reach over one's own horizon and achieve a deeper understanding.

\n

Lesson 7: Shut up and count

\n

Go games are decided by a balance of points (roughly speaking, one spot on the board or one stone is worth one point).  Out of a few hundred points available, games between closely matched opponents are often decided by a margin of fewer than 10 points. (Different counting systems vary the absolute numbers here but not the spirit.)  This means that accurate quantitative evaluation of the board is essential to good play.  It's not probability theory, but it definitely shows how numbers can serve your goals better than feelings and hunches. No matter how skillfully you manage to read ahead the moves of a complicated tactical fight, if you pursue that fight instead of playing a simple move elsewhere that has a bigger impact on the score, you're making a mistake.

\n

Further Lessons

\n

Edited to add: If you find the exploration of this go/rationality analogy interesting, be sure to read the comments below, where several people have pointed out additional generalizable lessons.

\n

In Closing

\n

As much as I enjoy go, I feel I should finish by noting that in the end it is still just a game.  Time spent playing will probably not do as much to advance your goals as actual direct work.  Sometimes, the more you play, the less you win.

" } }, { "_id": "GFprkfmBAMapPQXpC", "title": "Transparency and Accountability", "pageUrl": "https://www.lesswrong.com/posts/GFprkfmBAMapPQXpC/transparency-and-accountability", "postedAt": "2010-08-21T13:01:24.750Z", "baseScore": 20, "voteCount": 51, "commentCount": 145, "url": null, "contents": { "documentId": "GFprkfmBAMapPQXpC", "html": "

[Added 02/24/14: After writing this post, I discovered that I had miscommunicated owing to not spelling out my thinking in sufficient detail, and also realized that it carried unnecessary negative connotations (despite conscious effort on my part to avoid them). See Reflections on a Personal Public Relations Failure: A Lesson in Communication. SIAI (now MIRI) has evolved substantially since 2010 when I wrote this post, and the criticisms made in the post don't apply to MIRI as presently constituted.]

\n

Follow-up to: Existential Risk and Public Relations, Other Existential Risks, The Importance of Self-Doubt

\n

Over the last few days I've made a string of posts levying strong criticisms against SIAI. This activity is not one that comes naturally to me. In The Trouble With Physics Lee Smolin writes

\n
\n

...it took me a long time to decide to write this book. I personally dislike conflict and confrontation [...] I kept hoping someone in the center of string-theory research would write an objective and detailed critique of exactly what has and has not been achieved by the theory. That hasn't happened.

\n
\n

My feelings about and criticisms of SIAI are very much analogous to Smolin's feelings about and criticisms of string theory. Criticism hurts feelings and I feel squeamish about hurting feelings. I've found the process of presenting my criticisms of SIAI emotionally taxing and exhausting. I fear that if I persist for too long I'll move into the region of negative returns. For this reason I've decided to cut my planned sequence of posts short and explain what my goal has been in posting in the way that I have.

\n

Edit: Removed irrelevant references to VillageReach and StopTB, modifying post accordingly.  

\n

As Robin Hanson never ceases to emphasize, there's a disconnect between what humans say they're trying to do and what their revealed goals are. Yvain has written about this topic recently under his posting Conflicts Between Mental Subagents: Expanding Wei Dai's Master-Slave Model. This problem becomes especially acute in the domain of philanthropy. Three quotes on this point:

\n

(1) In Public Choice and the Altruist's Burden Roko says:

\n
\n

The reason that we live in good times is that markets give people a selfish incentive to seek to perform actions that maximize total utility across all humans in the relevant economy: namely, they get paid for their efforts. Without this incentive, people would gravitate to choosing actions that maximized their own individual utility, finding local optima that are not globally optimal. Capitalism makes us all into efficient little utilitarians, which we all benefit enormously from.  

\n

The problem with charity, and especially efficient charity, is that the incentives for people to contribute to it are all messed up, because we don't have something analogous to the financial system for charities to channel incentives for efficient production of utility back to the producer.

\n
\n

(2) In My Donation for 2009 (guest post from Dario Amodei) Dario says:

\n
\n

I take Murphy’s Law very seriously, and think it’s best to view complex undertakings as going wrong by default, while requiring extremely careful management to go right. This problem is especially severe in charity, where recipients have no direct way of telling donors whether an intervention is working.

\n
\n

(3) In private correspondence about career choice, Holden Karnofsky said:

\n
\n

For me, though, the biggest reason to avoid a job with low accountability is that you shouldn't trust yourself too much.  I think people respond to incentives and feedback systems, and that includes myself.  At GiveWell I've taken some steps to increase the pressure on me and the costs if I behave poorly or fail to add value.  In some jobs (particularly in nonprofit/govt) I feel that there is no system in place to help you figure out when you're adding value and incent you to do so.  That matters a lot no matter how altruistic you think you are.

\n
\n

I believe that the points that Robin, Yvain, Roko, Dario and Holden have made provide a compelling case for the idea that charities should strive toward transparency and accountability. As Richard Feynman has said:

\n
\n

The first principle is that you must not fool yourself – and you are the easiest person to fool.

\n
\n

Because it's harder to fool others than it is to fool oneself, I think that the case for making charities transparent and accountable is very strong.

\n

SIAI does not presently exhibit high levels of transparency and accountability. I agree with what I interpret to be Dario's point above: that in evaluating charities which are not transparent and accountable, we should assume the worst. For this reason together with the concerns which I express about Existential Risk and Public Relations, I believe that saving money in a donor-advised-fund with a view toward donating to a transparent and accountable future existential risk organization has higher expected value than donating to SIAI now does.

\n

Because I take astronomical waste seriously and believe in shutting up and multiplying, I believe that reducing existential risk is ultimately more important than developing world aid. I would very much like it if there were a highly credible existential risk charity. At present, I do not feel that SIAI is a credible existential risk charity. One LW poster sent me a private message saying:

\n
\n

I've suspected for a long time that the movement around EY might be a sophisticated scam to live off donations of nonconformists

\n
\n

I do not believe that Eliezer is consciously attempting to engage in a scam to live off of the donations but I believe that (like all humans) he is subject to subconscious influences which may lead him to act as though he were consciously running a scam to live off of the donations of nonconformists. In light of Hanson's points, it would not be surprising if this were the case. The very fact that I received such a message is a sign that SIAI has public relations problems.

\n

I encourage LW posters who find this post compelling to visit and read the materials available at GiveWell which is, as far as I know, the only charity evaluator which places high emphasis on impact, transparency and accountability. I encourage LW posters who are interested in existential risk to contact GiveWell expressing interest in GiveWell evaluating existential risk charities. I would note that it may be useful for LW posters who are interested in finding transparent and accountable organizations to donate to GiveWell's recommended charities to signal seriousness to the GiveWell staff.

\n

I encourage SIAI to strive toward greater transparency and accountability. For starters, I would encourage SIAI to follow the example set by GiveWell and put a page on its website called \"Mistakes\" publicly acknowledging its past errors. I'll also note that GiveWell incentivizes charities to disclose failures by granting them a 1-star rating. As Elie Hassenfeld explains

\n
\n

As usual, we’re not looking for marketing materials, and we won’t accept “weaknesses that are really strengths” (or reports that blame failure entirely on insufficient funding/support from others). But if you share open, honest, unadulterated evidence of failure, you’ll join a select group of organizations that have a GiveWell star.

\n
\n

I believe that the fate of humanity depends on the existence of transparent and accountable organizations. This is both because I believe that transparent and accountable organizations are more effective and because I believe that people are more willing to give to them. As Holden says:

\n
\n

I must say that, in fact, much of the nonprofit sector fits incredibly better into Prof. Hanson’s view of charity as “wasteful signaling” than into the traditional view of charity as helping.

\n

[...]

\n

Perhaps ironically, if you want a good response to Prof. Hanson’s view, I can’t think of a better place to turn than GiveWell’s top-rated charities. We have done the legwork to identify charities that can convincingly demonstrate positive impact. No matter what one thinks of the sector as a whole, they can’t argue that there are no good charitable options - charities that really will use your money to help people - except by engaging with the specifics of these charities’ strong evidence.

\n

Valid observations that the sector is broken - or not designed around helping people - are no longer an excuse not to give.

\n

Because our Bayesian prior is so skeptical, we end up with charities that you can be confident in, almost no matter where you’re coming from.

\n
\n

I believe that at present the most effective way to reduce existential risk is to work toward the existence of a transparent and accountable existential risk organization.

\n

 

\n
\n

 

\n

Added 08/23:

\n" } }, { "_id": "AWZ7butnGwwqyeCuc", "title": "The Importance of Self-Doubt", "pageUrl": "https://www.lesswrong.com/posts/AWZ7butnGwwqyeCuc/the-importance-of-self-doubt", "postedAt": "2010-08-19T22:47:15.149Z", "baseScore": 28, "voteCount": 72, "commentCount": 746, "url": null, "contents": { "documentId": "AWZ7butnGwwqyeCuc", "html": "

[Added 02/24/14: After I got feedback on this post, I realized that it carried unnecessary negative connotations (despite conscious effort on my part to avoid them), and if I were to write it again, I would have framed things differently. See Reflections on a Personal Public Relations Failure: A Lesson in Communication for more information. SIAI (now MIRI) has evolved substantially since 2010 when I wrote this post, and the criticisms made in the post don't apply to MIRI as presently constituted.]

\n

Follow-up to: Other Existential Risks, Existential Risk and Public Relations

\n

Related to: Tsuyoku Naritai! (I Want To Become Stronger), Affective Death Spirals, The Proper Use of Doubt, Resist the Happy Death Spiral, The Sin of Underconfidence

\n

In Other Existential Risks I began my critical analysis of what I understand to be SIAI's most basic claims. In particular I evaluated part of the claim

\n
\n

(1) At the margin, the best way for an organization with SIAI's resources to prevent global existential catastrophe is to promote research on friendly Artificial Intelligence, work against unsafe Artificial Intelligence, and encourage rational thought.

\n
\n

It's become clear to me that before I evaluate the claim

\n
\n

(2) Donating to SIAI is the most cost-effective way for charitable donors to reduce existential risk.

\n
\n

I should (a) articulate my reasons for believing in the importance of self-doubt and (b) give the SIAI staff an opportunity to respond to the points which I raise in the present post as well as my two posts titled Existential Risk and Public Relations and Other Existential Risks.

\n

\n

Yesterday SarahC described to me how she had found Eliezer's post Tsuyoku Naritai! (I Want To Become Stronger) really moving. She explained:

\n
\n

I thought it was good: the notion that you can and must improve yourself, and that you can get farther than you think.

\n

I'm used to the other direction: \"humility is the best virtue.\"

\n

I mean, this is a big fuck-you to the book of Job, and it appeals to me.

\n
\n

I was happy to learn that SarahC had been positively affected by Eliezer's post. Self-actualization is a wonderful thing and it appears as though Eliezer's posting has helped her self-actualize. On the other hand, rereading the post prompted me to notice that there's something about it which I find very problematic. The last few paragraphs of the post read:

\n
\n

Take no pride in your confession that you too are biased; do not glory in your self-awareness of your flaws.  This is akin to the principle of not taking pride in confessing your ignorance; for if your ignorance is a source of pride to you, you may become loathe to relinquish your ignorance when evidence comes knocking.  Likewise with our flaws - we should not gloat over how self-aware we are for confessing them; the occasion for rejoicing is when we have a little less to confess.

\n

Otherwise, when the one comes to us with a plan for correcting the bias, we will snarl, \"Do you think to set yourself above us?\"  We will shake our heads sadly and say, \"You must not be very self-aware.\"

\n

Never confess to me that you are just as flawed as I am unless you can tell me what you plan to do about it.  Afterward you will still have plenty of flaws left, but that's not the point; the important thing is to do better, to keep moving ahead, to take one more step forward.  Tsuyoku naritai!

\n
\n

There's something to what Eliezer is saying here: when people are too strongly committed to the idea that humans are fallible, this can become a self-fulfilling prophecy in which humans give up on trying to improve things and, as a consequence, remain fallible when they could have improved. As Eliezer has said in The Sin of Underconfidence, there are social pressures that push against having high levels of confidence even when confidence is epistemically justified:

\n
\n

To place yourself too high - to overreach your proper place - to think too much of yourself - to put yourself forward - to put down your fellows by implicit comparison - and the consequences of humiliation and being cast down, perhaps publicly - are these not loathesome and fearsome things?

\n

To be too modest - seems lighter by comparison; it wouldn't be so humiliating to be called on it publicly, indeed, finding out that you're better than you imagined might come as a warm surprise; and to put yourself down, and others implicitly above, has a positive tinge of niceness about it, it's the sort of thing that Gandalf would do.

\n
\n

I have personal experience with underconfidence. I'm a careful thinker and when I express a position with confidence my position is typically well considered. For many years I generalized from one example and assumed when people express positions with confidence they've thought their positions out as well as I have. Even after being presented with massive evidence that few people think things through as carefully as I do, I persisted in granting the (statistically ill-considered) positions of others far more weight than they deserved for the very reason that Eliezer describes above. This seriously distorted my epistemology because it led to me systematically giving ill-considered positions substantial weight. I feel that I have improved on this point, but even now, from time to time I notice that I'm exhibiting irrationally low levels of confidence in my positions.

\n

At the same time, I know that at times I've been overconfident as well. In high school I went through a period when I believed that I was a messianic figure whose existence had been preordained by a watchmaker God who planned for me to save the human race. It's appropriate to say that during this period of time I suffered from extreme delusions of grandeur. I viscerally understand how it's possible to fall into an affective death spiral.

\n

In my view one of the central challenges of being human is to find an instrumentally rational balance between subjecting oneself to influences which push one in the direction of overconfidence and subjecting oneself to influences which push one in the direction of underconfidence.

\n

In Tsuyoku Naritai! Eliezer describes how Orthodox Judaism attaches an unhealthy moral significance to humility. Having grown up in a Jewish household and as a consequence having had peripheral acquaintance with Orthodox Judaism, I agree with Eliezer's analysis of Orthodox Judaism in this regard. In The Proper Use of Doubt, Eliezer describes how the Jesuits are allegedly told to doubt their doubts about Catholicism. I agree with Eliezer that self-doubt can be misguided and abused.

\n

However, reversed stupidity is not intelligence. The fact that it's possible to ascribe too much moral significance to self-doubt and humility does not mean that one should not attach moral significance to self-doubt and humility. I strongly disagree with Eliezer's prescription: \"Take no pride in your confession that you too are biased; do not glory in your self-awareness of your flaws.\"

\n

The mechanism that determines human action is that we do what makes us feel good (at the margin) and refrain from doing what makes us feel bad (at the margin). This principle applies to all humans, from Gandhi to Hitler. Our ethical challenge is to shape what makes us feel good and what makes us feel bad in a way that incentivizes us to behave in accordance with our values. There are times when it's important to recognize that we're biased and flawed. Under such circumstances, we should feel proud that we recognize that we're biased and should glory in our self-awareness of our flaws. If we don't, then we will have no incentive to recognize that we're biased and be aware of our flaws.

\n

We did not evolve to exhibit admirable and noble behavior. We evolved to exhibit behaviors which have historically been correlated with maximizing our reproductive success. Because our ancestral environment was very much a zero-sum situation, the traits that were historically correlated with maximizing our reproductive success had a lot to do with gaining high status within our communities. As Yvain has said, it appears that a fundamental mechanism of the human brain which was historically correlated with gaining high status is to make us feel good when we have a high self-image and feel bad when we have a low self-image.

\n

When we obtain new data, we fit it into a narrative which makes us feel as good about ourselves as possible, in a way conducive to having a high self-image. This mode of cognition can lead to very seriously distorted epistemology. This is what happened to me in high school when I believed that I was a messianic figure sent by a watchmaker God. Because we flatter ourselves by default, it's very important that those of us who aspire to epistemic rationality incorporate a significant element of \"I'm the sort of person who engages in self-doubt because it's the right thing to do\" into our self-image. If we do this, then when we're presented with evidence which entails a drop in our self-esteem, we don't reject it out of hand or minimize it as we've been evolutionarily conditioned to do, because the wound of properly assimilating the data is counterbalanced by the salve of the feeling \"At least I'm a good person as evidenced by the fact that I engage in self-doubt\", and failing to exhibit self-doubt would itself entail an emotional wound.

\n

This is the only potential immunization against the disease of self-serving narratives which afflicts all utilitarians by virtue of their being human. Until technology allows us to modify ourselves in a radical way, we cannot hope to be rational without attaching moral significance to the practice of engaging in self-doubt. As RationalWiki's page on LessWrong says:

\n
\n

A common way for very smart people to be stupid is to think they can think their way out of being apes with pretensions. However, there is no hack that transcends being human...You are an ape with pretensions. Playing a \"let's pretend\" game otherwise doesn't mean you win all arguments, or any. Even if it's a very elaborate one, you won't transcend being an ape. Any \"rationalism\" that doesn't expressly take into account humans being apes with pretensions, isn't.

\n
\n
\n

In Existential Risk and Public Relations I suggested that some of Eliezer's remarks convey the impression that Eliezer has an unjustifiably high opinion of himself. In the comments to the post JRMayne wrote

\n
\n

I think the statements that indicate that [Eliezer] is the most important person in human history - and that seems to me to be what he's saying - are so seriously mistaken, and made with such a high confidence level, as to massively reduce my estimated likelihood that SIAI is going to be productive at all.

\n

And that's a good thing. Throwing money into a seriously suboptimal project is a bad idea. SIAI may be good at getting out the word of existential risk (and I do think existential risk is serious, under-discussed business), but the indicators are that it's not going to solve it. I won't give to SIAI if Eliezer stops saying these things, because it appears he'll still be thinking those things.

\n
\n

When Eliezer responded to JRMayne's comment, Eliezer did not dispute the claim that JRMayne attributed to him. I responded to Eliezer saying

\n
\n

If JRMayne has misunderstood you, you can effectively deal with the situation by making a public statement about what you meant to convey.

\n

Note that you have not made a disclaimer which rules out the possibility that you claim that you're the most important person in human history. I encourage you to make such a disclaimer if JRMayne has misunderstood you.

\n
\n

I was disappointed, but not surprised, that Eliezer did not respond. As far as I can tell, Eliezer does have confidence in the idea that he is (at least nearly) the most important person in human history. Eliezer's silence only serves to further confirm my earlier impressions. I hope that Eliezer subsequently proves me wrong. [Edit: As Airedale points out, Eliezer has in fact exhibited public self-doubt in his abilities in his post The Level Above Mine. I find this reassuring and it significantly lowers my confidence that Eliezer claims that he's the most important person in human history. But Eliezer still hasn't made a disclaimer on this matter decisively indicating that he does not hold such a view.] The modern world is sufficiently complicated that no human, no matter how talented, can have good reason to believe himself or herself to be the most important person in human history without actually doing something which very visibly and decisively alters the fate of humanity. At present, anybody who holds such a belief is suffering from extreme delusions of grandeur.

\n

There's some sort of serious problem with the present situation. I don't know whether it's a public relations problem or whether Eliezer actually suffers from extreme delusions of grandeur, but something has gone very wrong. The majority of the people I know outside of Less Wrong who have heard of Eliezer and Less Wrong have the impression that Eliezer is suffering from extreme delusions of grandeur. To such people, this fact (quite reasonably) calls into question the value of SIAI and Less Wrong. On one hand, SIAI looks like an organization which is operating under beliefs which Eliezer has constructed to place himself in as favorable a position as possible rather than with a view toward reducing existential risk. On the other hand, Less Wrong looks suspiciously like the cult of Objectivism: a group of smart people who are obsessed with the writings of a very smart person who is severely deluded, and who describe these writings and the associated ideology as \"rational\" although they are nothing of the kind.

\n

My own views are somewhat more moderate. I think that the Less Wrong community and Eliezer are considerably more rational than the Objectivist movement and Ayn Rand (respectively). I nevertheless perceive unsettling parallels.

\n
\n

In the comments to Existential Risk and Public Relations, timtyler said

\n
\n

...many people have inflated views of their own importance. Humans are built that way. For one thing, It helps them get hired, if they claim that they can do the job. It is sometimes funny - but surely not a big deal.

\n
\n

I disagree with timtyler. Anything that even slightly and systematically increases existential risk is a big deal.

\n

Some of my most enjoyable childhood experiences involved playing Squaresoft RPGs. Games like Chrono Trigger, Illusion of Gaia, Earthbound, Xenogears, and the Final Fantasy series are all stories about a group of characters who bond and work together to save the world. I found these games very moving and inspiring. They prompted me to fantasize about meeting allies who I could bond with and work together with to save the world. I was lucky enough to meet one such person in high school who I've been friends with since. When I first encountered Eliezer I found him eerily familiar, as though he was a long lost brother. This is the same feeling that is present between Siegmund and Sieglinde in Act 1 of Wagner's Die Walküre (modulo erotic connotations). I wish that I could be with Eliezer in a group of characters as in a Squaresoft RPG working to save the world. His writings such as One Life Against the World and Yehuda Yudkowsky, 1985-2004 reveal him to be a deeply humane and compassionate person.

\n

This is why it's so painful for me to observe that Eliezer appears to be deviating so sharply from leading a genuinely utilitarian lifestyle. I feel a sense of mono no aware, wondering how things could have been under different circumstances.

\n

One of my favorite authors is Kazuo Ishiguro, who writes about the themes of self-deception and people's attempts to contribute to society. In a very good interview Ishiguro said

\n
\n

I think that's partly what interests me in people, that we don't just wish to feed and sleep and reproduce then die like cows or sheep. Even if they're gangsters, they seem to want to tell themselves they're good gangsters and they're loyal gangsters, they've fulfilled their 'gangstership' well. We do seem to have this moral sense, however it's applied, whatever we think. We don't seem satisfied, unless we can tell ourselves by some criteria that we have done it well and we haven't wasted it and we've contributed well. So that is one of the things, I think, that distinguishes human beings, as far as I can see.

\n

But so often I've been tracking that instinct we have and actually looking at how difficult it is to fulfill that agenda, because at the same time as being equipped with this kind of instinct, we're not actually equipped. Most of us are not equipped with any vast insight into the world around us. We have a tendency to go with the herd and not be able to see beyond our little patch, and so it is often our fate that we're at the mercy of larger forces that we can't understand. We just do our little thing and hope it works out. So I think a lot of the themes of obligation and so on come from that. This instinct seems to me a kind of a basic thing that's interesting about human beings. The sad thing is that sometimes human beings think they're like that, and they get self-righteous about it, but often, they're not actually contributing to anything they would approve of anyway.

\n

[...]

\n

There is something poignant in that realization: recognizing that an individual's life is very short, and if you mess it up once, that's probably it. But nevertheless, being able to at least take some comfort from the fact that the next generation will benefit from those mistakes. It's that kind of poignancy, that sort of balance between feeling defeated but nevertheless trying to find reason to feel some kind of qualified optimism. That's always the note I like to end on. There are some ways that, as the writer, I think there is something sadly pathetic but also quite noble about this human capacity to dredge up some hope when really it's all over. I mean, it's amazing how people find courage in the most defeated situations.

\n
\n

Ishiguro's quote describes how people often behave in accordance with a sincere desire to contribute and yet end up doing things that are very different from what they thought they were doing (things which are relatively unproductive or even counterproductive). Like Ishiguro, I find this phenomenon very sad. As Ishiguro hints, this phenomenon can also result in crushing disappointment later in life. I feel a deep spiritual desire to prevent this from happening to Eliezer.

" } }, { "_id": "9eSrEqKEmbEw5jmWY", "title": "Positioning oneself to make a difference", "pageUrl": "https://www.lesswrong.com/posts/9eSrEqKEmbEw5jmWY/positioning-oneself-to-make-a-difference", "postedAt": "2010-08-18T23:54:38.901Z", "baseScore": 7, "voteCount": 16, "commentCount": 52, "url": null, "contents": { "documentId": "9eSrEqKEmbEw5jmWY", "html": "

Last weekend, while this year's Singularity Summit took place in San Francisco, I was turning 40 in my Australian obscurity. 40 is old enough to be thinking that I should just pick a SENS research theme and work on it, and also move to wherever in the world is most likely to have the best future biomedicine (that might be Boston). But at least since the late 1990s, when Eliezer first showed up, I have perceived that superintelligence trumps life extension as a futurist issue. And since 2006, when I first grasped how something like CEV could be an answer to the problem of superintelligence, I've had it before me as a model of how the future could and should play out. I have \"contrarian\" ideas about how consciousness works, but they do not contradict any of the essential notions of seed AI and friendly AI; they only imply that those notions would need to be adjusted and fitted to the true ontology, whatever that may be.

\n

So I think this is what I should be working on - not just the ontological subproblem, but all aspects of the problem. The question is, how to go about this. At the moment, I'm working on a lengthy statement of how I think a Friendly Singularity could be achieved - a much better version of my top-level posts here, along with new material. But the main \"methodological\" problem is economic and perhaps social - what can I live on while I do this, and where in the world and in society should I situate myself for maximum insight and productivity. That's really what this post is about.

\n

The obvious answer is, apply to SIAI. I'm not averse to the idea, and on occasion I raise the possibility with them, but I have two reasons for hesitation.

\n

The first is the problem of consciousness. I often talk about this in terms of vaguely specified ideas about quantum entanglement in the brain, but the really important part is the radical disjunction between the physical ontology of the natural sciences and the manifest nature of consciousness. I cannot emphasize enough that this is a huge gaping hole in the scientific understanding of the world, the equal of any gap in the scientific worldview that came before it, and that the standard \"scientific\" way of thinking about it is a form of property dualism, even if people won't admit this to themselves. All the quantum stuff you hear from me is just an idea about how to restore a type of monism. I actually think it's a conservative solution to a very big problem, but to believe that you would have to agree with me that the other solutions on offer can't work (as well as understanding just what it is that I propose instead).

\n

This \"reason for not applying to SIAI\" leads to two sub-reasons. First, I'm not sure that the SIAI intellectual environment can accommodate my approach. Second, the problem with consciousness is of course not specific to SIAI, it is a symptom of the overall scientific zeitgeist, and maybe I should be working there, in the field of consciousness studies. If expert opinion changes, SIAI will surely notice, and so I should be trying to convince the neuroscientists, not the Friendly AI researchers.

\n

The second top-level reason for hesitation is simply that SIAI doesn't have much money. If I can accomplish part of the shared agenda while supported by other means, that would be better. Mostly I think in terms of doing a PhD. A few years back I almost started one with Ben Goertzel as co-supervisor, which would have looked at implementing a CEV-like process in a toy physical model, but that fell through at my end. Lately I'm looking around again. In Australia we have David Chalmers and Marcus Hutter. I know Chalmers from my quantum-mind days in Arizona ten years ago, and I met with Hutter recently. The strong interdisciplinarity of my real agenda makes it difficult to see where I could work directly on the central task, but also implies that there are many fields (cognitive neuroscience, decision theory, various quantum topics) where I might be able to limp along with partial support from an institution.

\n

So that's the situation. Are there any other ideas? (Private communications can go to mporter at gmail.)

" } }, { "_id": "7oMScXDNguEB6JFKE", "title": "Transhumanism and the denotation-connotation gap", "pageUrl": "https://www.lesswrong.com/posts/7oMScXDNguEB6JFKE/transhumanism-and-the-denotation-connotation-gap", "postedAt": "2010-08-18T15:33:20.529Z", "baseScore": 25, "voteCount": 26, "commentCount": 31, "url": null, "contents": { "documentId": "7oMScXDNguEB6JFKE", "html": "

A word's denotation is our conscious definition of it.  You can think of this as the set of things in the world with membership in the category defined by that word; or as a set of rules defining such a set.  (Logicians call the former the category's extension into the world.)

\n

A word's connotation can mean the emotional coloring of the word.  AI geeks may think of it as a set of pairs: other concepts that get activated or inhibited by that word, each paired with the change in the odds of recalling that concept.

\n
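
For concreteness, here is a minimal sketch (mine, not part of the original argument) of the two notions as data structures; the concept names and weights are invented placeholders:

```python
# Denotation vs. connotation as toy data structures.
# All concept names and weights below are illustrative placeholders.

# Denotation: a membership rule -- a predicate over things in the world.
def denotes_human(thing):
    # 'Featherless biped' style rule: Homo sapiens by species label.
    return thing.get('species') == 'Homo sapiens'

# Connotation: a set of pairs (associated concept, change in recall odds),
# i.e. which other concepts the word activates (+) or inhibits (-).
connotation_of_human = {
    'humane treatment': +0.6,
    'human nature':     +0.5,
    'machine':          -0.3,
    'livestock':        -0.2,
}

if __name__ == '__main__':
    print(denotes_human({'species': 'Homo sapiens'}))  # True
    print(sorted(connotation_of_human.items(), key=lambda kv: -kv[1]))
```

\n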

When we think analytically about a word - for instance, when writing legislation - we use its denotation.  But when we are in values/judgement mode - for instance, when deciding what to legislate about, or when voting - we use its denotation less and its connotation more.

\n

This denotative-connotative gap can cause people to behave less rationally when they become more rational.  People who think and act emotionally are at least consistent.  Train them to think analytically, and they will choose goals using connotation but pursue them using denotation.  That's like hiring a Russian speaker to manage your affairs because he's smarter than you, but you have to give him instructions via Google translate.  Not always a win.

\n

Consider the word \"human\".  It has wonderful connotations, to humans.  Human nature, humane treatment, the human condition, what it means to be human.  Often the connotations are normative rather than descriptive; behaviors we call \"inhumane\" are done only by humans.  The denotation is bare by comparison:  Featherless biped.  Homo sapiens, as defined by 3 billion base pairs of DNA.

\n

Some objections to transhumanism are genuine objections to transhumanism itself.  But some are caused by the denotative-connotative gap.  A person's analytic reasoner says, \"What about this transhumanism thing, then?\", and their connotative reasoner replies, \"Human good!  Ergo, not-human bad!  QED.\"

\n

I don't mean that we can get around this by renaming \"transhumanism\" as \"humanism with sprinkles!\"  This confusion over denotation and connotation happens inside another person's head, and you can't control it with labels.  If you propose making a germline genetic modification, this will trigger thoughts about the definition of \"human\" in someone else's head.  When that person asks how they feel about this modification, they take the phrase \"not human\" chosen for its denotation, go into values mode, access its full connotation, attach the label \"bad\" to \"not human\", and pass the result back to their analytic reasoner to decide what to do about it.  Fixing a disease gene can get labelled \"bad\" because the connotative reasoner makes a judgement about a different concept than the analytic reasoner thinks it did.

\n

I don't think the solution to the d-c gap is to operate only in denotation mode.  Denotation is what 1970s AI programs had.  But we can try to be aware of the influence of connotations, and to prefer words that say what we mean over the overused and hence connotation-laden words that first spring to mind.  Connotation isn't a bad thing - it's part of what makes us vertebrate, after all.

" } }, { "_id": "XvgGHjb3QdkENdiMq", "title": "How can we compare decision theories?", "pageUrl": "https://www.lesswrong.com/posts/XvgGHjb3QdkENdiMq/how-can-we-compare-decision-theories", "postedAt": "2010-08-18T13:29:48.425Z", "baseScore": 9, "voteCount": 13, "commentCount": 41, "url": null, "contents": { "documentId": "XvgGHjb3QdkENdiMq", "html": "

There has been a lot of discussion on LW about finding better decision theories. A lot of the reason for the various new decision theories proposed here seems to be an effort to get over the fact that classical CDT gives the wrong answer in 1-shot PD's, Newcomb-like problems and Parfit's Hitchhiker problem. While Gary Drescher has said that TDT is \"more promising than any other decision theory I'm aware of \", Eliezer gives a list of problems in which his theory currently gives the wrong answer (or, at least, it did a year ago). Adam Bell's recent sequence has talked about problems for CDT, and is no doubt about to move onto problems with EDT (in one of the comments, it was suggested that EDT is \"wronger\" than CDT).

\n

In the Iterated Prisoner's Dilemma, it is relatively trivial to prove that no strategy is \"optimal\" in the sense that it gets the best possible pay-out against all opponents. The reasoning goes roughly like this: any strategy which ever cooperates does worse than it could have against, say, Always Defect. Any strategy which doesn't start off by cooperating does worse than it could have against, say, Grim. So, whatever strategy you choose, there is another strategy that would do better than you against some possible opponent. So no strategy is \"optimal\". Question: is it possible to prove similarly that there is no \"optimal\" Decision Theory? In other words - given a decision theory A, can you come up with some scenario in which it performs worse than at least one other decision theory? Than any other decision theory?

\n
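
To make the argument concrete, here is a minimal sketch (mine, not from the post) of three textbook strategies playing 100-round matches under the standard payoffs T=5, R=3, P=1, S=0; the point of the printed table is that no single strategy gets the best score against every opponent:

```python
# No IPD strategy is 'optimal' against all opponents: a toy demonstration.
# Standard payoff matrix, keyed by (my move, opponent's move).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_defect(my_hist, their_hist):
    return 'D'

def always_cooperate(my_hist, their_hist):
    return 'C'

def grim(my_hist, their_hist):
    # Cooperate until the opponent defects once, then defect forever.
    return 'D' if 'D' in their_hist else 'C'

def score_against(strategy, opponent, rounds=100):
    # Total payoff that `strategy` earns against `opponent`.
    hist_s, hist_o, total = [], [], 0
    for _ in range(rounds):
        move_s = strategy(hist_s, hist_o)
        move_o = opponent(hist_o, hist_s)
        total += PAYOFF[(move_s, move_o)][0]
        hist_s.append(move_s)
        hist_o.append(move_o)
    return total

strategies = {'AllD': always_defect, 'AllC': always_cooperate, 'Grim': grim}

# For each opponent, print what every strategy scores against it.
# AllD tops the AllD and AllC columns but loses badly against Grim;
# the cooperators top the Grim column but lose against AllD.
for opp_name, opp in strategies.items():
    print(opp_name, {name: score_against(s, opp) for name, s in strategies.items()})
```

\n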

One initial try would be: Omega gives you two envelopes - the left envelope contains $1 billion iff you don't implement decision theory A in deciding which envelope to choose. The right envelope contains $1000 regardless.

\n

Or, if you don't like Omega being able to make decisions about you based entirely on your source code (or \"ritual of cognition\"), then how about this: in order for two decision theories to sensibly be described as \"different\", there must be some scenario in which they perform a different action (let's call this Scenario 1). In Scenario 1, DT A makes decision A whereas DT B makes decision B. In Scenario 2, Omega offers you the following setup: here are two envelopes, and you can pick exactly one of them. I've just simulated you in Scenario 1. If you chose decision B, there's $1,000,000 in the left envelope. Otherwise it's empty. There's $1000 in the right envelope regardless.

\n

I'm not sure if there's some flaw in this reasoning (are there decision theories for which Omega offering such a deal is a logical impossibility? It seems unlikely: I don't see how your choice of algorithm could affect Omega's ability to talk about it). But I imagine that some version of this should work - in which case, it doesn't make sense to talk about one decision theory being \"better\" than another; we can only talk about decision theories being better than others for certain classes of problems.

\n

I have no doubt that TDT is an improvement on CDT, but in order for this to even make sense, we'd have to have some way of thinking about what sort of problem we want our decision theory to solve. Presumably the answer is \"the sort of problems which you're actually likely to face in the real world\". Do we have a good formalism for what this means? I'm not suggesting that the people who discuss these questions haven't considered this issue, but I don't think I've ever seen it explicitly addressed. What exactly do we mean by a \"better\" decision theory?

" } }, { "_id": "5MALQ4GcDhLxLXJ7S", "title": "Other Existential Risks", "pageUrl": "https://www.lesswrong.com/posts/5MALQ4GcDhLxLXJ7S/other-existential-risks", "postedAt": "2010-08-17T21:24:51.520Z", "baseScore": 40, "voteCount": 61, "commentCount": 124, "url": null, "contents": { "documentId": "5MALQ4GcDhLxLXJ7S", "html": "

[Added 02/24/14: SIAI (now MIRI) has evolved substantially since 2010 when I wrote this post, and the criticisms made in the post don't apply to MIRI as presently constituted.]

\n

Related To: Should I believe what the SIAI claims?, Existential Risk and Public Relations

\n

In his recent post titled Should I believe what the SIAI claims? XiXiDu wrote:

\n
\n

I'm already unable to judge what the likelihood of something like the existential risk of exponential evolving superhuman AI is compared to us living in a simulated reality. Even if you tell me, am I to believe the data you base those estimations on?

\n

And this is what I'm having trouble to accept, let alone look through. There seems to be a highly complicated framework of estimations to support and reinforce each other. I'm not sure how you call this in English, but in German I'd call this a castle in the air.

\n

[...]

\n

I can however follow much of the reasoning and arguments on this site. But I'm currently unable to judge their overall credence. That is, are the conclusions justified? Is the coherent framework build around the SIAI based on firm ground?

\n

[...]

\n

I'm concerned that although consistently so, the LW community is updating on fictional evidence. This post is meant to inquire the basic principles, the foundation of the sound argumentation's and the basic premises that they are based upon.

\n
\n

XiXiDu's post produced mixed reactions within the LW community. On one hand, some LW members (e.g. orthonormal) felt exasperated with XiXiDu because his post was poorly written, revealed him to be uninformed, and revealed that he has not internalized some of the basic principles of rationality. On the other hand, some LW members (e.g. HughRistik) have long wished that SIAI would attempt to substantiate some of its more controversial claims in detail and were gratified to see somebody call on SIAI to do so. These two categories are not mutually exclusive. I fall into both in some measure. In any case, I give XiXiDu considerable credit for raising such an important topic.

\n

The present post is the first of several posts in which I will detail my thoughts on SIAI's claims.

\n

\n

One difficulty is that there's some ambiguity as to what SIAI's claims are. I encourage SIAI to make a more detailed public statement of their most fundamental claims. According to the SIAI website:

\n
\n

In the coming decades, humanity will likely create a powerful artificial intelligence. The Singularity Institute for Artificial Intelligence (SIAI) exists to confront this urgent challenge, both the opportunity and the risk. Our objectives as an organization are:

\n\n
\n

I interpret SIAI's key claims to be as follows:

\n

(1) At the margin, the best way for an organization with SIAI's resources to prevent global existential catastrophe is to promote research on friendly Artificial Intelligence, work against unsafe Artificial Intelligence, and encourage rational thought.

\n

(2) Donating to SIAI is the most cost-effective way for charitable donors to reduce existential risk.

\n

I arrived at the belief that SIAI claims (1) by reading their mission statement and by reading SIAI research fellow Eliezer Yudkowsky's writings, in particular the ones listed under the Less Wrong wiki article titled Shut up and multiply. [Edit (09/09/10): The videos of Eliezer linked in a comment by XiXiDu give some evidence that SIAI claims (2). As Airedale says in her second to last paragraph here, Eliezer and SIAI are not synonymous entities. The question of whether SIAI regards Eliezer as an official representative of SIAI remains.] I'm quite sure that (1) and (2) are in the rough ballpark of what SIAI claims, but I encourage SIAI to publicly confirm or qualify each of (1) and (2) so that we can all have a clearer idea of what SIAI claims.

\n

My impression is that some LW posters are confident in both (1) and (2), some are confident in neither of (1) and (2) while others are confident in exactly one of (1) and (2). For clarity, I think that it's sensible to discuss claims (1) and (2) separately. In the remainder of the present post, I'll discuss claim (1'), namely, claim (1) modulo the part about the importance of encouraging rational thought. I will address SIAI's emphasis on encouraging rational thought in a later post.

\n
\n

As I have stated repeatedly, unsafe AI is not the only existential risk. The Future of Humanity Institute has a page titled Global Catastrophic Risks which has a list of lectures given at a 2008 conference on a variety of potential global catastrophic risks. Note that a number of these global catastrophic risks are unrelated to future technologies. Any argument in favor of claim (1') must consist of a quantitative comparison of the effects of focusing on Artificial Intelligence and the effects of focusing on other existential risks. To my knowledge, SIAI has not provided a detailed quantitative analysis of the expected impact of AI research, a detailed quantitative analysis of working to avert other existential risks, and a comparison of the two. If SIAI has made such a quantitative analysis, I encourage them to make it public. At present, I believe that SIAI has not substantiated claim (1').

\n
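
To illustrate the shape such a comparison would have to take - and only the shape, since every number below is a made-up placeholder rather than an estimate - here is a toy sketch of claim (1') recast as a statement about expected risk reduction per marginal dollar:

```python
# Toy expected-value comparison across existential risks.
# ALL numbers are placeholders for illustration; none are estimates.

causes = {
    # cause: (assumed P(catastrophe this century),
    #         assumed fraction of that risk a marginal program removes)
    'unsafe AI':            (0.05, 1e-4),
    'engineered pandemics': (0.03, 3e-4),
    'nuclear war':          (0.02, 2e-4),
}

budget = 10_000_000  # placeholder marginal budget, in dollars

for cause, (p_catastrophe, fraction_removed) in causes.items():
    # Expected absolute reduction in catastrophe probability per dollar.
    reduction_per_dollar = p_catastrophe * fraction_removed / budget
    print(f'{cause:22s} ~{reduction_per_dollar:.2e} risk reduction per dollar')
```

Filling in defensible numbers, with error bars, is precisely the work that I am saying has not yet been done.

\n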

Remarks on arguments advanced in favor of focusing on AI

\n

(A) Some people claim that there's a high probability that runaway superhuman artificial intelligence will be developed in the near future. For example, Eliezer has said that \"it seems pretty obvious to me that some point in the not-too-distant future we're going to build an AI [...] it will be a superintelligence relative to us [...] in one to ten decades and probably on the lower side of that.\"

\n

I believe that if Eliezer is correct about this assertion, claim (1') is true. But I see no reason for assigning high probability to the notion that a runaway superhuman intelligence will be developed within such a short timescale. In the bloggingheads diavlog Scott Aaronson challenges Eliezer on this point, and Eliezer offers some throwaway remarks which I do not find compelling. As far as I know, neither Eliezer nor anybody else at SIAI has provided a detailed explanation of why we should expect runaway superhuman intelligence on such a short timescale. LW poster timtyler pointed me to a webpage where he works out his own estimate of the timescale. I will look at this document eventually, but do not expect to find it compelling, especially in light of Carl Shulman's remarks about the survey used suffering from selection bias. So at present, I do not find (A) a compelling reason to focus on the existential risk of AI.

\n

(B) Some people have remarked that if we develop an FAI, the FAI will greatly reduce all other existential risks which humanity faces. For example, timtyler says

\n
\n

I figure a pretty important thing is to get out of the current vulnerable position as soon as possible. To do that, a major thing we will need is intelligent machines - and so we should allocate resources ot their development.

\n
\n

I agree with timtyler that it would be very desirable for us to have an FAI to solve our problems. If all else was equal, then this would give special reason to favor focus on AI over existential risks that are not related to Artificial Intelligence. But this factor by itself is not a compelling reason for focus on Artificial Intelligence. In particular, human-level AI may be so far off in the future that if we want to survive, we have to address other existential risks right now without the aid of AI.

\n

(C) An inverse of the view mentioned in (B) is the idea that if we're going to survive over the long haul, we must eventually build an FAI, so we might as well focus on FAI since if we don't get FAI right, we're doomed anyway. This is an aspect of Vladimir_Nesov's position which emerges in the linked threads [1], [2]. I think that there's something to this idea. Of course, research on FAI may come at the opportunity cost of the chance to avert short-term preventable global catastrophic risks. My understanding is that at present Vladimir_Nesov believes that this cost is outweighed by the benefits. By way of contrast, at present I believe that the benefits are outweighed by the cost. See our discussions for details. Vladimir_Nesov's position is sophisticated and I respect it.

\n

(D) Some people have said that existential risk due to advanced technologies is getting disproportionately little attention relative to other existential risks, so that at the margin one should focus on advanced technologies. For example, see Vladimir_Nesov's comment and ciphergoth's comment. I don't find this sort of remark compelling. My own impression is that all existential risks are getting very little attention. I see no reason for thinking that existential risk due to advanced technologies is getting less than its fair share of the attention being directed toward existential risk. As I said in response to ciphergoth:

\n
\n

Are you sure that the marginal contribution that you can make to the issue which is getting the least attention is the greatest? The issues getting the least attention may be getting little attention precisely because people know that there's nothing that can be done about them.

\n
\n

(E) Some people have remarked that most issues raised as potential existential risks (e.g. nuclear war, resource shortage) seem very unlikely to kill everyone and so are not properly conceived of as existential risks. I don't find these sorts of remarks compelling. As I've commented elsewhere, any event which would permanently prevent humans from creating a transhuman paradise is properly conceived of as an existential risk on account of the astronomical waste which would result.

\n
\n

On argument by authority

\n

When XiXiDu raised his questions, Eliezer initially responded by saying:

\n
\n

If you haven't read through the MWI sequence, read it. Then try to talk with your smart friends about it. You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don't.

\n
\n

I interpret this to be a statement of the type \"You should believe SIAI's claims (1) and (2) because we're really smart.\" There are two problems with such a statement. One is that there's no evidence that intelligence leads to correct views about how to ensure the survival of the human species. Alexander Grothendieck is one of the greatest mathematicians of the 20th century. Fields medalist Rene Thom wrote:

\n
\n

Relations with my colleague Grothendieck were less agreeable for me. His technical superiority was crushing. His seminar attracted the whole of Parisian mathematics, whereas I had nothing new to offer.

\n
\n

Fields Medalist David Mumford said

\n
\n

[Grothendieck] had more than anybody else I’ve ever met this ability to make an absolutely startling leap into something an order of magnitude more abstract…. He would always look for some way of formulating a problem, stripping apparently everything away from it, so you don’t think anything is left. And yet something is left, and he could find real structure in this seeming vacuum.”

\n
\n

In Mariana Cook's book titled Mathematicians: An Outer View of the Inner World, Fields Medalist and IAS professor Pierre Deligne wrote

\n
\n

When I was in Paris as a student, I would go to Grothendieck's seminar at IHES [...] Grothendieck asked me to write up some of the seminars and gave his notes. He was extremely generous with his ideas. One could not be lazy or he would reject you. But if you were really interested and doing this he liked, then he helped you a lot. I enjoyed the atmosphere around him very much. He had the main ideas and the aim was to prove theories and understand a sector of mathematics. We did not care much about priority because Grothendieck had the ideas we were working on and priority would have meant nothing.

\n
\n

(Emphasis my own.)

\n

These comments should suffice to illustrate that Grothendieck's intellectual power was uncanny.

\n

In a very interesting transcript titled Reminiscences of Grothendieck and his school, Grothendieck's former student Luc Illusie says:

\n
\n

In 1970 he left the IHES and founded the ecological group Survivre et Vivre. At the Nice congress, he was doing propaganda for it, offering documents taken out of a small cardboard suitcase. He was gradually considering mathematics as not being worth of being studied, in view of the more urgent problems of the survival of the human species.

\n
\n

I think that it's fair to say that Grothendieck's ideas about how to ensure the survival of the human species were greatly misguided. In the second portion of Allyn Jackson's excellent biography of Grothendieck one finds the passage

\n
\n

...despite his strong convictions, Grothendieck was never effective in the real world of politics. “He was always an anarchist at heart,” Cartier observed. “On many issues, my basic positions are not very far from his positions. But he was so naive that it was totally impossible to do anything with him politically.” He was also rather ignorant. Cartier recalled that, after an inconclusive presidential election in France in 1965, the newspapers carried headlines saying that de Gaulle had not been elected. Grothendieck asked if this meant that France would no longer have a president. Cartier had to explain to him what a runoff election is. “Grothendieck was politically illiterate,” Cartier said. But he did want to help people: it was not unusual for Grothendieck to give shelter for a few weeks to homeless people or others in need.

\n

[...]

\n

“Even people who were close to his political views or his social views were antagonized by his behavior.…He behaved like a wild teenager.”

\n

[....]

\n

“He was used to people agreeing with his opinions when he was doing algebraic geometry,” Bumby remarked. “When he switched to politics all the people who would have agreed with him before suddenly disagreed with him.... It was something he wasn’t used to.”

\n
\n

Just as Grothendieck's algebro-geometric achievements had no bearing on Grothendieck's ability to conceptualize a good plan to lower existential risk, so too does Eliezer's ability to interpret quantum mechanics have no bearing on Eliezer's ability to conceptualize a good plan to lower existential risk.

\n

The other problem with Eliezer's appeal to his intellectual prowess is that Eliezer's demonstrated intellectual prowess pales in comparison with that of other people who are interested in existential risk. I wholeheartedly agree with rwallace's comment:

\n
\n

If you want to argue from authority, the result of that isn't just tilted against the SIAI, it's flat out no contest.

\n
\n

By the time Grothendieck was Eliezer's age he had already established himself as a leading authority in functional analysis and proven his vast generalization of the Riemann-Roch theorem. Eliezer's intellectual achievements are meager by comparison.

\n

A more contemporary example of a powerful intellect interested in existential risk is Fields Medalist and Abel Prize winner Mikhail Gromov. On the GiveWell research blog there's an excerpt from an interview with Gromov which caught my attention:

\n
\n

If you try to look into the future, 50 or 100 years from now...

50 and 100 is very different. We know more or less about the next 50 years. We shall continue in the way we go. But 50 years from now, the Earth will run out of the basic resources and we cannot predict what will happen after that. We will run out of water, air, soil, rare metals, not to mention oil. Everything will essentially come to an end within 50 years. What will happen after that? I am scared. It may be okay if we find solutions but if we don't then everything may come to an end very quickly!

Mathematics may help to solve the problem but if we are not successful, there will not be any mathematics left, I am afraid!

Are you pessimistic?


I don't know. It depends on what we do. if we continue to move blindly into the future, there will be a disaster within 100 years and it will start to be very critical in 50 years already. Well, 50 is just an estimate. It may be 40 or it may be 70 but the problem will definitely come. If we are ready for the problems and manage to solve them, it will be fantastic. I think there is potential to solve them but this potential should be used and this potential is education. It will not be solved by God. People must have ideas and they must prepare now. In two generations people must be educated. Teachers must be educated now, and then the teachers will educate a new generation. Then there will be sufficiently many people to face the difficulties. I am sure this will give a result. If not, it will be a disaster. It is an exponential process. If we run along an exponential process, it will explode. That is a very simple computation. For example, there will be no soil. Soil is being exhausted everywhere in the world. It is not being said often enough. Not to mention water. It is not an insurmountable problem but requires solutions on a scale we have never faced before, both socially and intellectually.

\n
\n

I've personally studied some of Gromov's work and find it much more impressive than the portions of Eliezer's work which I've studied. I find Gromov's remarks on existential risk more compelling than Eliezer's remarks on existential risk. Neither Gromov nor Eliezer have substantiated their claims, so by default I take Gromov more seriously than Eliezer. But as I said above, this is really aside from the point. The point is that there's a history of brilliant people being very mistaken in their views about things outside of their areas of expertise and that discussion of existential risk should be based on evidence rather than based on argument by authority. I agree with a remark which Holden Karnofsky made in response to my GiveWell research mailing list post

\n
\n

I think it's important not to put too much trust in any single person's view based simply on credentials.  That includes [...] Mikhail Gromov [...] among others.

\n
\n

I encourage Less Wrong readers who have not done so to carefully compare the marginal impact that one can hope to have on existential risk by focusing on AI with the marginal impact that one can hope to have on existential risk by focusing on a specific existential risk unrelated to AI. When one does so, one should beware of confirmation bias. If one came to believe that focusing on AI is a good idea without careful consideration of alternatives, one should assume oneself to be irrationally biased in favor of focusing on AI.

\n

Bottom line

\n

There's a huge amount of uncertainty as to which existential risks are most likely to strike and what we can hope to do about them. At present reasonable people can hold various views on which existential risks are worthy of the most attention. I personally think that the best way to face the present situation is to gather more information about all existential risks rather than focusing on one particular existential risk, but I might be totally wrong. Similarly, people who believe that AI deserves top priority might be totally wrong. At present there's not enough information available to determine which existential risks deserve top priority with any degree of confidence.

\n

SIAI can credibly claim (1'), but SIAI cannot credibly claim (1') with confidence. Because uncredible claims about existential risk drive people away from thinking about existential risk, SIAI should take special care to avoid the appearance of undue confidence in claim (1').

" } }, { "_id": "cyEBngGo7pgLHocNY", "title": "On behalf of physical things", "pageUrl": "https://www.lesswrong.com/posts/cyEBngGo7pgLHocNY/on-behalf-of-physical-things", "postedAt": "2010-08-17T15:42:36.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "cyEBngGo7pgLHocNY", "html": "

Most people inadvertently affect the reputations of groups they are seen as part of while they go about other activities. But some people also purposely exploit the fact that their behaviour and thoughts will be seen as evidence of those of a larger group, to give the false impression their views are widely supported. These people are basically stealing the good reputation of groups; they enjoy undeserved attention and leave the groups’ images polluted.

\n

Such parasites often draw attention to what a very ordinary member of the targeted group they are, or just straight out claim to be speaking for that group. People who ‘have been a left voter for fifty years, but this year might just have to vote conservative’ are getting much of their force from implicitly claiming high representativeness of a large and respected group, and those who claim they write ‘what women really think’ are more overt. From the perspective of women who think, for instance, this is almost certain to be a damaging misrepresentation; any view other than your own is worse, and people who have good arguments are less likely to steal the authority of some unsuspecting demographic as support. It is also costly to listeners who are misled, for instance about the extent to which women really think. Costs of prevention ignored then, less of this is better.

\n

Purposeful exploitation of this sort should be easier than other externalities to groups’ reputations to punish and to want to punish; it’s easier to see, it’s directed at a specific group, and it’s more malevolent. However the public can’t punish or ignore all claims or implicit suggestions of representativeness, as there are also many useful and accurate ones. Often much of the interest in learning what specific strangers’ views are requires assuming that they are representative, and we keenly generalize this way. So mostly it is up to groups to identify and punish their own dishonest exploiters, usually via social pressure.

\n

This means groups are easier to exploit if their members aren’t in a position to punish, because they don’t have the resources to deny respect that matters to the offenders. If you claim to be broadcasting what women think, most women don’t have the time or means to publicize the shamefulness of your malicious externalizing much. Even if they did, they would not have much to gain from it personally, so there is a tragedy of the commons. And in big groups it is hard for a member or several to know whether another supposed group member is lying about the group’s average characteristics; they may just be a minority in the demographic themselves. Respectable groups also make good targets. Last, if most people have a lot of contact with the group in question, and the topic is a common one, it will be harder to misrepresent. So large, respectable, powerless or otherwise engaged groups who don’t commonly discuss the topic with the rest of society are best to make use of in this way.

\n

I haven’t seen this kind of activity punished much; it doesn’t seem to be thought of as especially shameful. But given that, it seems rarer than I would guess. For instance, if you wanted to push a radical political agenda, why join the disrespected minor party that pushes that agenda rather than a moderate party, which allows you to suggest to your audience that even the larger and more reputable moderate party is coming around to the idea?


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "As3Xjtj2TRRar7bSX", "title": "What a reduction of \"probability\" probably looks like", "pageUrl": "https://www.lesswrong.com/posts/As3Xjtj2TRRar7bSX/what-a-reduction-of-probability-probably-looks-like", "postedAt": "2010-08-17T14:58:38.876Z", "baseScore": 12, "voteCount": 17, "commentCount": 35, "url": null, "contents": { "documentId": "As3Xjtj2TRRar7bSX", "html": "

Unlike my previous posts, this one isn't an announcement of some finished result. I just want to get some ideas out for public discussion. A big part of the credit goes to Wei Dai and Vladimir Nesov, though the specific formulations are mine.

\n

Wei Dai wonders: what are probabilities, anyway? Eliezer wonders: what are the Born probabilities of? I cannot claim to know the answers, but I strongly hold that these questions are, in fact, answerable. And as evidence, I'll try to show how normal the answers might plausibly turn out to be.

\n

Perhaps counterintuitively, the easiest way for probabilities to arise is not by postulating \"different worlds\" that you could \"end up\" in starting from now. No, the easiest setting is a single, purely deterministic world with only one possible future.

\n

One

\n

The first thought experiment goes like this. Imagine a coarse-grained classical universe whose physical laws are allowed to consume symbols from a \"Tape\", located outside the Matrix, which contains a sequence of ones and zeroes coming from a pseudorandom number generator. If we flip a coin in that world, the result of the flip will depend on several consecutive ones and zeroes read from the Tape. If the coin is \"fair\", it will come up heads 50% of the time, on average, as we advance along the Tape. (Yes, Virginia, in this setting the fairness would be a mathematical property of the coin, not of our own ignorance.) In such a world, creatures who are too computationally weak to predict the Tape will probably find useful a concept of \"probability\", and will indeed find \"probabilistic\" events happening with the limiting frequencies, standard deviations, etc. predicted by our familiar probability theory - even though the world has only one timeline that never branches.
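
As a rough illustration of the Tape idea, here is a toy sketch in Python (the PRNG and the three-bit 'coin' rule below are made up for the example, not taken from anywhere else):

import random

# A toy Tape-world: everything below is deterministic, but the coin flips
# consume bits from a pseudorandom tape generated 'outside' the world.
def tape(seed=0):
    rng = random.Random(seed)          # fully determined by the seed
    while True:
        yield rng.getrandbits(1)       # one symbol of the Tape at a time

def flip_coin(bits, n_bits=3):
    # A 'fair' coin whose result depends on several consecutive tape symbols.
    return sum(next(bits) for _ in range(n_bits)) % 2

bits = tape()
flips = [flip_coin(bits) for _ in range(100000)]
print(sum(flips) / len(flips))         # close to 0.5, yet nothing here is random

A creature inside such a world, too weak to predict the tape, would observe exactly the limiting frequencies and standard deviations of ordinary probability theory.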

\n

But we know that in our world, observations are not completely deterministic: they can be influenced by the mysterious Born probabilities. How could that kind of law arise from a Nature that doesn't contain it already?

\n

Two

\n

To understand the second thought experiment, you need to be able to imagine the MWI without the Born probabilities: just a big wavefunction evolving according to the usual laws, without any observers who could collapse it or experience probabilities (whatever that means). And imagine an outside observer that samples it according to different probability measures, and sees different worlds. An observer using the Born rule will see our familiar \"2-world\". But other observers looking at the same wavefunction can see the \"3-world\" and many, many other worlds. What do they look like? That is an empirical question, completely answerable by modern physics, that I don't know enough math to answer; but intuitively it seems that most rules different from the 2-rule should either reward or penalize interactions that lead to branching, so the other worlds look either like huge neverending explosions, or static crystals at close to absolute zero. It's entirely possible that only 2-sampling is \"stable\" enough to contain stars, planets, proteins and biological evolution - which, if you think about it, \"explains\" the \"existence\" of the 2-world without assuming the Born rule.
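
As a loose illustration of the sampling idea only (this is not a physics simulation; the amplitudes below are invented), one can weight the same fixed list of branch amplitudes by |a|^p for different p and see how different rules emphasize different branches:

from collections import Counter
import random

amplitudes = [0.8, 0.5, 0.3, 0.1]      # made-up branch amplitudes

def sample_branches(p, n=100000, seed=0):
    # Sample branch indices with probability proportional to |amplitude|**p.
    weights = [abs(a) ** p for a in amplitudes]
    rng = random.Random(seed)
    draws = rng.choices(range(len(amplitudes)), weights=weights, k=n)
    counts = Counter(draws)
    return [round(counts[i] / n, 3) for i in range(len(amplitudes))]

print('2-rule:', sample_branches(2))   # the Born-like weighting
print('3-rule:', sample_branches(3))   # an observer using a different rule

Which of those observers would see stars, planets and evolution is the empirical question the paragraph above points at.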

\n

The above does sound like it could lead to an explanation of some statement vaguely similar to the Born rule, but to clarify matters even further - or confuse you even deeper - let us go on to...

\n

Three

\n

Imagine an algorithm running on a classical physical computer, sitting on a table in our quantum universe. The computer has the above-explained property of being \"stable\" under the Born rule: a weighted-majority of near futures ranked by the 2-norm have the computer correctly executing the next few steps, but for the 1-norm this isn't necessarily the case - the computer will likely glitch or self-destruct. (All computers built by humans probably have this property. Also note that it can be defined in terms of the wavefunction alone, without assuming weights a priori.) Then the algorithm will have \"subjective anticipation\" of an extremely weird kind: conditioned on the algorithm itself running faithfully in the future, it can conclude that some future histories with higher Born-weight are more likely. So if I'm some kind of Platonic mathematical algorithm, this says something about what I should expect to happen in the world.

\n

This \"explanation\" has the big drawback that it doesn't explain experimental observations. Why should an apparatus measuring a property of one individual particle (say) give rise to observed probabilities predicted by quantum theory? I don't have an answer to that.

\n

Moreover, I don't think the above thought experiments should be taken as a final answer to anything at all. The intent of this post was to show that confusing questions can and should be approached empirically, and that we can and should strive to achieve perfectly normal answers.

" } }, { "_id": "5waoQXiYdqe6LGpQq", "title": "Should humanity give birth to a galactic civilization?", "pageUrl": "https://www.lesswrong.com/posts/5waoQXiYdqe6LGpQq/should-humanity-give-birth-to-a-galactic-civilization", "postedAt": "2010-08-17T13:07:10.407Z", "baseScore": -11, "voteCount": 41, "commentCount": 122, "url": null, "contents": { "documentId": "5waoQXiYdqe6LGpQq", "html": "

Followup to: Should I believe what the SIAI claims? (Point 4: Is it worth it?)

\n
\n

It were much better that a sentient being should never have existed, than that it should have existed only to endure unmitigated misery.  Percy Bysshe Shelley

\n
\n

Imagine humanity succeeding. Spreading out into the galaxy and beyond. Trillions of entities...

\n

Then, I wonder, what happens in the end? Imagine if our dreams of a galactic civilization come true. Will we face unimaginable war over resources, and torture, as all this beauty faces its inevitable annihilation while the universe approaches absolute zero temperature?

\n

What does this mean? Imagine how many more entities of so much greater consciousness and intellect will be alive in 10^20 years. If they are doomed to face that end or commit suicide, how much better would it be to face extinction now? That is, would the amount of happiness until then balance the amount of suffering to be expected at the beginning of the end? If we succeed in pollinating the universe, is the overall result ethically justifiable? Or might it be ethical to abandon the idea of reaching out to the stars?

\n

The question is, is it worth it? Is it ethical? Should we worry about the possibility that we'll never make it to the stars? Or should we rather worry about the prospect that trillions of our distant descendants may face, namely unimaginable misery? 

\n

And while pondering the question of overall happiness, all things considered, how sure are we that on balance there won't be much more suffering in the endless years to come? Galaxy spanning wars, real and simulated torture? Things we cannot even imagine now.

\n

One should also consider that it is more likely than not that we'll see the rise of rogue intelligences. It might also be possible that humanity succeeds in creating something close to a friendly AI, which however fails to completely follow CEV (Coherent Extrapolated Volition). Ultimately this might not lead to our inevitable extinction, but to even more suffering, on our side or that of other entities out there.

\n

Further, although less dramatic, what if we succeed in transcending, in becoming posthuman, and find out that the universe does not contain enough fun for entities with mental attributes far exceeding those of baseline humanity? What if there isn't even enough fun for normal human beings to live to an age of 1000 and still have fun? What if soon after the singularity we discover that all that is left is endless repetition? If we've learnt all there is to learn, done all there is to do, played all games and dreamed all dreams, what if nothing new under the sky is to be found anymore? And don't we all experience this problem already these days? Haven't you ever thought and felt that you've already seen that movie, read that book or heard that song before, because they all featured the same plot, the same rhythm?

\n

If it is our responsibility to die for our children to live, for the greater public good, if we are in charge of the upcoming galactic civilization, if we bear a moral responsibility for those entities to be alive, why don't we face the same responsibility for the many more entities who will be alive but suffering? Is it the right thing to do, to live at any cost, to give birth at any price?

\n

What if it is not about \"winning\" and \"not winning\", but about losing, or gaining one possibility among millions that could go horribly wrong?

\n

Isn't even the prospect of a slow torture to death enough to consider ending our journey here, a torture that spans a possible period from 10^20 years up to the Dark Era from 10^100 years and beyond? This might be a period of war, suffering and suicide. It might be the Era of Death and it might be the lion's share of the future. I personally know a few people who suffer from severe disabilities and who do not enjoy life. But this is nothing compared to the time from 10^20 to 10^100 years where possibly trillions of God-like entities will be slowly disabled due to an increasing lack of resources. This is comparable to suffering from Alzheimer's, just much worse, much longer and without any hope.

\n

To exemplify this, let's assume there are 100 entities. At a certain point the universe will cease to provide enough resources to sustain 100 entities. So either the ruling FAI (friendly AI) is going to kill one entity or reduce the mental capabilities of all 100. This will continue until all of them are either killed or reduced to a shadow of their former selves. This is a horrible process that will take a long time. I think you could call this torture until the end of the universe.

\n

So what if it is more likely that maximizing utility not only fails, but that the overall utility is minimized, i.e. the relative amount of suffering increases? What if the ultimate payoff is notably negative? If it is our moral responsibility to minimize suffering, and if we are unable to minimize suffering by actively shaping the universe but rather risk increasing it, what should we do about it? Might it be better to believe that winning is impossible, rather than that it's likely, if the actual probability is very low?

\n

Hereby I ask the Less Wrong community to help me resolve potential fallacies and biases in my framing of the above ideas.

\n


\n
\n

See also

\n

The Fun Theory Sequence

\n

\"Should This Be the Last Generation?\" By PETER SINGER (thanks timtyler)

" } }, { "_id": "mpzoBMkayfQnaiKZK", "title": "Desirable Dispositions and Rational Actions", "pageUrl": "https://www.lesswrong.com/posts/mpzoBMkayfQnaiKZK/desirable-dispositions-and-rational-actions", "postedAt": "2010-08-17T03:20:06.657Z", "baseScore": 18, "voteCount": 37, "commentCount": 184, "url": null, "contents": { "documentId": "mpzoBMkayfQnaiKZK", "html": "

A common background assumption on LW seems to be that it's rational to act in accordance with the dispositions one would wish to have. (Rationalists must WIN, and all that.)

\n

E.g., Eliezer:

\n
\n

It is, I would say, a general principle of rationality - indeed, part of how I define rationality - that you never end up envying someone else's mere choices.  You might envy someone their genes, if Omega rewards genes, or if the genes give you a generally happier disposition.  But [two-boxing] Rachel, above, envies [one-boxing] Irene her choice, and only her choice, irrespective of what algorithm Irene used to make it.  Rachel wishes just that she had a disposition to choose differently.

\n
\n

And more recently, from AdamBell:

\n
\n

I [previously] saw Newcomb’s Problem as proof that it was sometimes beneficial to be irrational. I changed my mind when I realized that I’d been asking the wrong question. I had been asking which decision would give the best payoff at the time and saying it was rational to make that decision. Instead, I should have been asking which decision theory would lead to the greatest payoff.

\n
\n

Within academic philosophy, this is the position advocated by David Gauthier.  Derek Parfit has constructed some compelling counterarguments against Gauthier, so I thought I'd share them here to see what the rest of you think.

\n

First, let's note that there definitely are possible cases where it would be \"beneficial to be irrational\".  For example, suppose an evil demon ('Omega') will scan your brain, assess your rational capacities, and torture you iff you surpass some minimal baseline of rationality.  In that case, it would very much be in your interests to fall below the baseline!  Or suppose you're rewarded every time you honestly believe the conclusion of some fallacious reasoning.  We can easily multiply cases here.  What's important for now is just to acknowledge this phenomenon of 'beneficial irrationality' as a genuine possibility.

\n

This possibility poses a problem for the Eliezer-Gauthier methodology. (Quoting Eliezer again:)

\n
\n

Rather than starting with a concept of what is the reasonable decision, and then asking whether \"reasonable\" agents leave with a lot of money, start by looking at the agents who leave with a lot of money, develop a theory of which agents tend to leave with the most money, and from this theory, try to figure out what is \"reasonable\".

\n
\n

The problem, obviously, is that it's possible for irrational agents to receive externally-generated rewards for their dispositions, without this necessarily making their downstream actions any more 'reasonable'.  (At this point, you should notice the conflation of 'disposition' and 'choice' in the first quote from Eliezer.  Rachel does not envy Irene her choice at all.  What she wishes is to have the one-boxer's dispositions, so that the predictor puts a million in the first box, and then to confound all expectations by unpredictably choosing both boxes and reaping the most riches possible.)

\n

To illustrate, consider (a variation on) Parfit's story of the threat-fulfiller and threat-ignorer.  Tom has a transparent disposition to fulfill his threats, no matter the cost to himself.  So he straps on a bomb, walks up to his neighbour Joe, and threatens to blow them both up unless Joe shines his shoes.  Seeing that Tom means business, Joe sensibly gets to work.  Not wanting to repeat the experience, Joe later goes and pops a pill to acquire a transparent disposition to ignore threats, no matter the cost to himself. The next day, Tom sees that Joe is now a threat-ignorer, and so leaves him alone.

\n

So far, so good.  It seems this threat-ignoring disposition was a great one for Joe to acquire.  Until one day... Tom slips up.  Due to an unexpected mental glitch, he threatens Joe again.  Joe follows his disposition and ignores the threat.  BOOM.

\n

Here Joe's final decision seems as disastrously foolish as Tom's slip-up. It was good to have the disposition to ignore threats, but that doesn't necessarily make it a good idea to act on it. We need to distinguish the desirability of a disposition to X from the rationality of choosing to do X.

" } }, { "_id": "WTA6vmYdQCzTFT4WZ", "title": "Newcomb's Problem: A problem for Causal Decision Theories", "pageUrl": "https://www.lesswrong.com/posts/WTA6vmYdQCzTFT4WZ/newcomb-s-problem-a-problem-for-causal-decision-theories", "postedAt": "2010-08-16T11:25:20.745Z", "baseScore": 11, "voteCount": 15, "commentCount": 121, "url": null, "contents": { "documentId": "WTA6vmYdQCzTFT4WZ", "html": "

This is part of a sequence titled, \"Introduction to decision theory\"

\n


\n

The previous post is \"An introduction to decision theory\"

\n


\n

In the previous post I introduced evidential and causal decision theories. The principal question that needs resolving with regard to these is whether using these decision theories leads to making rational decisions. The next two posts will show that both causal and evidential decision theories fail to do so and will try to set the scene so that it’s clear why there’s so much focus given on Less Wrong to developing new decision theories.

\n

 

\n

Newcomb’s Problem

\n

 

\n

Newcomb’s Problem asks us to imagine the following situation:

\n

 

\n

Omega, an unquestionably honest, all knowing agent with perfect powers of prediction, appears, along with two boxes.  Omega tells you that it has placed a certain sum of money into each of the boxes. It has already placed the money and will not now change the amount.  You are then asked whether you want to take just the money that is in the left hand box or whether you want to take the money in both boxes.

\n

 

\n

However, here’s where it becomes complicated. Using its perfect powers of prediction, Omega predicted whether you would take just the left box (called “one boxing”) or whether you would take both boxes (called “two boxing”). Either way, Omega put $1000 in the right hand box but filled the left hand box as follows:

\n

 

\n

 If he predicted you would take only the left hand box, he put $1 000 000 in the left hand box.

\n

 

\n

If he predicted you would take both boxes, he put $0 in the left hand box.

\n

 

\n

Should you take just the left hand box or should you take both boxes?

\n

 

\n

An answer to Newcomb’s Problem

\n

 

\n

One argument goes as follows: By the time you are asked to choose what to do, the money is already in the boxes. Whatever decision you make, it won’t change what’s in the boxes. So the boxes can be in one of two states:

\n
    \n
  1. Left box, $0. Right box, $1000.
  2. Left box, $1 000 000. Right box, $1000.
\n

\n

Whichever state the boxes are in, you get more money if you take both boxes than if you take one. In game theoretic terms, the strategy of taking both boxes strictly dominates the strategy of taking only one box. You can never lose by choosing both boxes.

\n


The only problem is, you do lose. If you take two boxes then they are in state 1 and you only get $1000. If you only took the left box you would get $1 000 000.

\n

 

\n

To many people, this may be enough to make it obvious that the rational decision is to take only the left box. If so, you might want to skip the next paragraph.

\n

 

\n

Taking only the left box didn't seem rational to me for a long time. It seemed that the reasoning described above to justify taking both boxes was so powerful that the only rational decision was to take both boxes. I therefore saw Newcomb's Problem as proof that it was sometimes beneficial to be irrational. I changed my mind when I realized that I'd been asking the wrong question. I had been asking which decision would give the best payoff at the time and saying it was rational to make that decision. Instead, I should have been asking which decision theory would lead to the greatest payoff. From that perspective, it is rational to use a decision theory that suggests you only take the left box because that is the decision theory that leads to the highest payoff. Taking only the left box leads to a higher payoff, and it's also a rational decision if you ask, “What decision theory is it rational for me to use?” and then make your decision according to the theory that you have concluded it is rational to follow.

\n

 

\n

What follows will presume that a good decision theory should one box on Newcomb’s problem.

\n

 

\n

Causal Decision Theory and Newcomb’s Problem

\n

 

\n

Remember that decision theory tells us to calculate the expected utility of an action by summing the utility of each possible outcome of that action multiplied by its probability. In Causal Decision Theory, this probability is defined causally (something that we haven’t formalized and won’t formalise in this introductory sequence but which we have at least some grasp of). So Causal Decision Theory will act as if the probability that the boxes are in state 1 or state 2 above is not influenced by the decision made to one or two box (so let’s say that the probability that the boxes are in state 1 is P and the probability that they’re in state 2 is Q regardless of your decision).

\n

 

\n

So if you undertake the action of choosing only the left box your expected utility will be equal to: (0 x P) + (1 000 000 x Q) = 1 000 000 x Q

\n

 

\n

And if you choose both boxes, the expected utility will be equal to: (1000 x P) + (1 001 000 x Q).

\n

 

\n

So Causal Decision Theory will lead to the decision to take both boxes and hence, if you accept that you should one box on Newcomb’s Problem, Causal Decision Theory is flawed.

\n

 

\n

Evidential Decision Theory and Newcomb’s Problem

\n

 

\n

Evidential Decision Theory, on the other hand, will take your decision to one box as evidence that Omega put the boxes in state 2, to give an expected utility of (1 x 1 000 000) + (0 x 0) = 1 000 000.

\n

 

\n

It will similarly take your decision to take both boxes as evidence that Omega put the boxes into state 1, to give an expected utility of (0 x (1 000 000 + 1000)) + (1 x (0 + 1000)) = 1000
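
To make the two calculations concrete, here is a small sketch in Python (the payoffs are the ones above; P and Q are arbitrary causal probabilities of states 1 and 2, and the predictor is treated as perfect, as in the post):

def cdt_expected_utility(action, P, Q):
    # Causal Decision Theory: the state probabilities ignore your choice.
    if action == 'one-box':
        return 0 * P + 1000000 * Q
    return 1000 * P + 1001000 * Q            # two-boxing is $1000 better in each state

def edt_expected_utility(action):
    # Evidential Decision Theory: your choice is decisive evidence about the state.
    if action == 'one-box':
        return 1 * 1000000 + 0 * 0
    return 0 * (1000000 + 1000) + 1 * (0 + 1000)

P, Q = 0.5, 0.5                              # any split gives the same ranking under CDT
print(max(['one-box', 'two-box'], key=lambda a: cdt_expected_utility(a, P, Q)))  # two-box
print(max(['one-box', 'two-box'], key=edt_expected_utility))                     # one-box

For any P and Q that sum to one, two-boxing comes out exactly $1000 ahead under Causal Decision Theory, while Evidential Decision Theory ranks one-boxing far higher.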

\n

 

\n

As such, Evidential Decision Theory will suggest that you one box and hence it passes the test posed by Newcomb’s Problem. We will look at a more challenging scenario for Evidential Decision Theory in the next post. For now, we’re part way along the route of realising that there’s still a need to look for a decision theory that makes the logical decision in a wide range of situations.

\n

 

\n

Appendix 1: Important notes

\n

 

\n

While the consensus on Less Wrong is that one boxing on Newcomb’s Problem is the rational decision, my understanding is that this opinion is not necessarily held uniformly amongst philosophers (see, for example, the Stanford Encyclopedia of Philosophy’s article on Causal Decision Theory). I’d welcome corrections on this if I’m wrong but otherwise it does seem important to acknowledge where the level of consensus differs on Less Wrong compared to the broader community.

\n

 

\n

For more details on this, see the results of the PhilPapers Survey where 61% of respondents who specialised in decision theory chose to two box and only 26% chose to one box (the rest were uncertain). Thanks to Unnamed for the link.

\n

 

\n

If Newcomb's Problem doesn't seem realistic enough to be worth considering then read the responses to this comment.

\n

 

\n

Appendix 2: Existing posts on Newcomb's Problem

\n

 

\n

Newcomb's Problem has been widely discussed on Less Wrong, generally by people with more knowledge on the subject than me (this post is included as part of the sequence because I want to make sure no-one is left behind and because it is framed in a slightly different way). Good previous posts include:

\n

 

\n

A post by Eliezer introducing the problem and discussing the issue of whether one boxing is irrational.

\n

 

\n

A link to Marion Ledwig's detailed thesis on the issue.

\n

 

\n

An exploration of the links between Newcomb's Problem and the prisoner's dillemma.

\n

 

\n

A post about formalising Newcomb's Problem.

\n


And a Less Wrong wiki article on the problem with further links.

\n

 

" } }, { "_id": "RgupQYP7x5iLW7NPc", "title": "Kevin T. Kelly's Ockham Efficiency Theorem", "pageUrl": "https://www.lesswrong.com/posts/RgupQYP7x5iLW7NPc/kevin-t-kelly-s-ockham-efficiency-theorem", "postedAt": "2010-08-16T04:46:00.253Z", "baseScore": 43, "voteCount": 35, "commentCount": 82, "url": null, "contents": { "documentId": "RgupQYP7x5iLW7NPc", "html": "

There is a game studied in Philosophy of Science and Probably Approximately Correct (machine) learning. It's a cousin to the Looney Labs game \"Zendo\", but less fun to play with your friends. http://en.wikipedia.org/wiki/Zendo_(game) (By the way, playing this kind of game is excellent practice at avoiding confirmation bias.) The game has two players, who are asymmetric. One player plays Nature, and the other player plays Science. First Nature makes up a law, a specific Grand Unified Theory, and then Science tries to guess it. Nature provides some information about the law, and then Science can change their guess, if they want to. Science wins if it converges to the rule that Nature made up.

\n

\n

Kevin T. Kelly is a philosopher of science, and studies (among other things) the justification within the philosophy of science for William of Occam's razor: \"entities should not be multiplied beyond necessity\". The way that he does this is by proving theorems about the Nature/Science game with all of the details elaborated.

\n

Why should you care? Firstly, his justification is different from the overfitting justification that sometimes shows up in Bayesian literature. Roughly speaking, the overfitting justification characterizes our use of Occam's razor as pragmatic - we use Occam's razor to do science because we get good generalization performance from it. If we found something else (e.g. boosting and bagging, or some future technique) that proposed oddly complex hypotheses, but achieved good generalization performance, we would switch away from Occam's razor.

\n

An aside regarding boosting and bagging: These are ensemble machine learning techniques. Suppose you had a technique that created decision tree classifiers, such as C4.5 (http://en.wikipedia.org/wiki/C4.5_algorithm), or even decision stumps (http://en.wikipedia.org/wiki/Decision_stump). Adaboost (http://en.wikipedia.org/wiki/Adaboost) would start by weighting all of the examples identically and invoking your technique to find an initial classifier. Then it would reweight the examples, prioritizing the ones that the first iteration got wrong, and invoke your technique on the reweighted input. Eventually, Adaboost outputs an ensemble of decision trees (or decision stumps), and taking the majority opinion of the ensemble might well be more effective (generalize better beyond training) than the original classifier. Bagging is similar (http://en.wikipedia.org/wiki/Bootstrap_aggregating).
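
For concreteness, here is a bare-bones sketch of that reweighting idea in Python with numpy (a minimal discrete AdaBoost over decision stumps, meant only to show the mechanism, not production code):

import numpy as np

def fit_stump(X, y, w):
    # Best single-feature threshold classifier under example weights w (labels y in {-1, +1}).
    best = (0, 0.0, 1, np.inf)                    # (feature, threshold, sign, weighted error)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] <= t, sign, -sign)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, t, sign, err)
    return best

def adaboost(X, y, rounds=20):
    n = len(y)
    w = np.full(n, 1.0 / n)                       # start with uniform example weights
    stumps = []
    for _ in range(rounds):
        j, t, sign, err = fit_stump(X, y, w)
        err = min(max(err, 1e-12), 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)     # how much this stump's vote counts
        pred = np.where(X[:, j] <= t, sign, -sign)
        w = w * np.exp(-alpha * y * pred)         # up-weight the examples it got wrong
        w /= w.sum()
        stumps.append((j, t, sign, alpha))
    return stumps

def predict(stumps, X):
    votes = sum(alpha * np.where(X[:, j] <= t, sign, -sign)
                for j, t, sign, alpha in stumps)
    return np.sign(votes)

The weighted vote of many reweighted stumps is an oddly complex hypothesis, yet it often generalizes better than any single stump, which is what makes ensembles awkward for overfitting-based justifications of Occam's razor.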

\n

Ensemble learning methods are a challenge for \"prevents overfitting\" justifications of Occam's razor since they propose weirdly complex hypotheses, but suffer less from overfitting than the weak classifiers that they are built from.

\n

Secondly, his alternative justification bears on ordering hypotheses \"by simplicity\", providing an alternative to (approximations of) Kolmogorov complexity as a foundation of science.

\n

Let's take a philosopher's view of physics - from a distance, a philosopher listens to the particle physicists and hears: \"There are three fundamental particles that make up all matter!\", \"We've discovered another, higher-energy particle!\", \"Another, even higher-energy particle!\". There is no reason the philosopher can see why this should stop at any particular number of particles. What should the philosopher believe at any given moment?

\n

This is a form of the Nature vs. Science game where Nature's \"Grand Unified Theory\" is known to be a nonnegative number (of particles), and at each round Nature can reveal a new fundamental particle (by name) or remain silent. What is the philosopher of science's strategy? Occam's razor suggests that we prefer simpler hypotheses, but if there were 254 known particles, how would we decide whether to claim that there are 254, 255, or 256 particles? Note that in many encodings, 255 and 256 are quite round numbers and therefore have short descriptions; low Kolmogorov complexity.

\n

An aside regarding \"round numbers\": Here are some \"shortest expressions\" of some numbers, according to one possible grammar of expressions. (Kolmogorov complexity is unique up to an additive constant, but since we never actually use Kolmogorov complexity, that isn't particularly helpful.)

\n

1 == (1)

\n

2 == (1+1)

\n

3 == (1+1+1)

\n

4 == (1+1+1+1)

\n

5 == (1+1+1+1+1)

\n

6 == (1+1+1+1+1+1)

\n

7 == (1+1+1+1+1+1+1)

\n

8 == ((1+1)*(1+1+1+1))

\n

9 == ((1+1+1)*(1+1+1))

\n

10 == ((1+1)*(1+1+1+1+1))

\n

11 == (1+1+1+1+1+1+1+1+1+1+1)

\n

12 == ((1+1+1)*(1+1+1+1))

\n

13 == ((1)+((1+1+1)*(1+1+1+1)))

\n

14 == ((1+1)*(1+1+1+1+1+1+1))

\n

15 == ((1+1+1)*(1+1+1+1+1))

\n

16 == ((1+1+1+1)*(1+1+1+1))

\n

17 == ((1)+((1+1+1+1)*(1+1+1+1)))

\n

18 == ((1+1+1)*(1+1+1+1+1+1))

\n

19 == ((1)+((1+1+1)*(1+1+1+1+1+1)))

\n

20 == ((1+1+1+1)*(1+1+1+1+1))

\n

21 == ((1+1+1)*(1+1+1+1+1+1+1))

\n

22 == ((1)+((1+1+1)*(1+1+1+1+1+1+1)))

\n

24 == ((1+1+1+1)*(1+1+1+1+1+1))

\n

25 == ((1+1+1+1+1)*(1+1+1+1+1))

\n

26 == ((1)+((1+1+1+1+1)*(1+1+1+1+1)))

\n

27 == ((1+1+1)*((1+1+1)*(1+1+1)))

\n

28 == ((1+1+1+1)*(1+1+1+1+1+1+1))

\n

30 == ((1+1+1+1+1)*(1+1+1+1+1+1))

\n

32 == ((1+1)*((1+1+1+1)*(1+1+1+1)))

\n

35 == ((1+1+1+1+1)*(1+1+1+1+1+1+1))

\n

36 == ((1+1+1)*((1+1+1)*(1+1+1+1)))

\n

40 == ((1+1)*((1+1+1+1)*(1+1+1+1+1)))

\n

42 == ((1+1+1+1+1+1)*(1+1+1+1+1+1+1))

\n

45 == ((1+1+1)*((1+1+1)*(1+1+1+1+1)))

\n

As you can see, there are several inversions, where a larger number (in magnitude) is \"simpler\" in that it has a shorter shortest expression (in this grammar). The first inversion occurs at 12, which is simpler than 11. In this grammar, squares, powers of two, and smooth numbers generally (http://en.wikipedia.org/wiki/Smooth_number) will be considered simple. Even though \"the 256th prime\" sounds like a short description of a number, this grammar isn't flexible enough to capture that concept, and so this grammar does not consider primes to be simple. I believe this illustrates that picking a particular approximate Kolmogorov-complexity-like concept is a variety of justification of Occam's razor by aesthetics. Humans can argue about aesthetics, and be convinced by arguments somewhat like logic, but ultimately it is a matter of taste.
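
The grammar above scores whole expressions by length; as a simpler, loosely related illustration (not the post's exact grammar), here is a Python sketch that scores a number by the fewest 1s needed to build it from addition and multiplication, sometimes called integer complexity. The values differ from the table, but the same kind of inversion appears: 12 comes out simpler than 11.

def integer_complexity(limit):
    # cost[n] = fewest 1s needed to write n using only '+', '*' and parentheses
    cost = [0, 1] + [None] * (limit - 1)
    for n in range(2, limit + 1):
        best = n                                   # worst case: 1+1+...+1
        for a in range(1, n // 2 + 1):             # additive splits n = a + (n - a)
            best = min(best, cost[a] + cost[n - a])
        for d in range(2, int(n ** 0.5) + 1):      # multiplicative splits n = d * (n // d)
            if n % d == 0:
                best = min(best, cost[d] + cost[n // d])
        cost[n] = best
    return cost

cost = integer_complexity(45)
print(cost[11], cost[12])                          # 8 and 7: the inversion at 11 vs 12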

\n

In contrast, Kelly's idea of \"simplicity\" is related to Popper's falsifiability. In this sense of simplicity, a complex theory is one that can (for a while) camouflage itself as another (simple) theory, but a simple theory cannot pretend to be complex. So if Nature really had 256 particles, it could refuse to reveal them for a while (maybe they're hidden in \"very high energy regimes\"), and the 256-particle universe would exactly match the givens for a 254-particle universe. However, the 254-particle universe cannot reveal new particles; it's already shown everything that it has.

\n

Remember, we're not talking about elaborate data analysis, where there could be \"mirages\" or \"aliasing\", patterns in the data that look initially like a new particle, but later reveal themselves to be explicable using known particles. We're talking about a particular form of the Nature vs Science game where each round Nature either reveals (by name) a new particle, or remains silent. This illustrates that Kelly's simplicity is relative to the possible observables. In this scenario, where Nature identifies new particles by name, then the hypothesis that we have seen all of the particles and there will be no new particles is always the simplest, the \"most falsifiable\". With more realistic observables, the question of what is the simplest consistent hypothesis becomes trickier.

\n

A more realistic example that Kelly uses (following Kuhn) is the Copernican hypothesis that the earth and other planets circle the sun. In what sense is it simpler than the geocentric hypothesis? From a casual modern perspective, both hypotheses might seem symmetric and of similar complexity. The crucial effect is that Ptolemy's model parameters (velocities, diameters) have to be carefully adjusted to create a \"coincidence\" - apparent retrograde motion that always coincides with solar conjunction (for Mercury and Venus) and solar opposition (for the other planets). (Note: Retrograde means moving in the unusual direction, east to west. Conjunction means two entities near each other in the sky. Opposition means the entities are at opposite points of the celestial sphere.) The Copernican model \"predicts\" that coincidence; not in the sense that the creation of the model precedes knowledge of the effect, but that any tiny deviation from exact coincidence to be discovered in the future would be evidence against the Copernican model. In this sense, the Copernican model is more falsifiable; simpler.

\n

The Ockham Efficiency Theorems explain in what sense this version of Occam's razor is strictly better than other strategies for the Science vs. Nature game. If what we care about is the number of public mind changes (saying \"I was mistaken, actually there are X particles.\" would count as a mind change for any X), and the timing of the mind changes, then Occam's razor is the best strategy for the Science vs. Nature game. The Occam strategy for the number-of-particles game will achieve exactly as many mind changes as there are particles. A scientist who deviates from Occam's razor allows Nature to extract a mind change from the (hasty) scientist \"for free\".

\n

The way this works in the particles game is simple. To extract a mind change from a hasty scientist who jumps to predicting 12 particles when they've only seen 11, or 256 particles when they've only seen 254, Nature can simply continuously refuse to reveal new particles. If the scientist doesn't ever switch back down to the known number of particles, then they're a nonconvergent scientist - they lose the game. If the scientist, confronted with a long run of \"no new particles found\" does switch to the known number of particles, then Nature has extracted a mind change from the scientist without a corresponding particle. The Occam strategy achieves the fewest possible number of mind changes (that is, equal to the number of particles), given such an adversarial Nature.
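
That argument is easy to simulate. Here is a small Python sketch (the 'hasty' rule below, which jumps one particle ahead and backs down after ten silent rounds, is just one example of a non-Occam strategy):

# Nature names particles one round at a time (or stays silent); Science guesses
# a total count each round. A mind change is any round where the guess differs
# from the previous one (the implicit starting guess is 0).
def play(schedule, scientist):
    seen, silent, guesses = 0, 0, [0]
    for reveal in schedule:                  # 1 = a new particle named, 0 = silence
        seen += reveal
        silent = 0 if reveal else silent + 1
        guesses.append(scientist(seen, silent))
    return guesses

def mind_changes(guesses):
    return sum(a != b for a, b in zip(guesses, guesses[1:]))

occam = lambda seen, silent: seen            # never posit unseen particles
hasty = lambda seen, silent: seen + 1 if silent < 10 else seen

schedule = [1, 0, 1, 0, 0, 1] + [0] * 40     # three particles, then silence forever
print(mind_changes(play(schedule, occam)))   # 3: one mind change per particle
print(mind_changes(play(schedule, hasty)))   # 4: Nature extracted an extra change for free

Both strategies converge to the true count of three, but the hasty one pays an extra mind change that Nature gets for free.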

\n

The \"Ockham Efficiency Theorems\" refer to the worked-out details of more elaborate Science vs. Nature games - where Nature chooses a polynomial GUT, for example.

\n

This entire scenario does generalize to noisy observations as well (learning Pearlian causal graphs), though I don't understand this aspect fully. If I understand correctly, the scientist guesses a probability distribution over the possible worlds and you count \"mind changes\" as changes of that probability distribution, so adjusting a 0.9 probability to 0.8 would be counted as a fraction of a mind change. Anyway, read the actual papers; they're well-written and convincing.

\n

www.andrew.cmu.edu/user/kk3n/ockham/Ockham.htm

\n

This post benefited greatly from encouragement and critique by cousin_it.

" } }, { "_id": "GawWzaWNWMjqg6AzT", "title": "Statistical discrimination is externality deliniation", "pageUrl": "https://www.lesswrong.com/posts/GawWzaWNWMjqg6AzT/statistical-discrimination-is-externality-deliniation", "postedAt": "2010-08-15T10:00:07.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "GawWzaWNWMjqg6AzT", "html": "

Discrimination based on real group average characteristics is a kind of externality within groups. Observers choose which groups to notice, then the behaviour of those in the groups alters the overall reputation of the group. We mostly blame those who choose the groups for this, not those who externalize within them. But if we somehow stopped thinking in terms of any groups other than the whole population, the externality would still exist; you just wouldn’t notice it because it would be amongst all humans equally. If someone cheated you, you would expect all people to cheat you a little more, whereas now you may notice the cheater’s other characteristics and put most of the increased expectation on similar people, such as Lebanese people or men.

\n

Does this perspective change where to lay blame for the harm caused by such discrimination? A bit, if the point of blame is to change behaviour. Changing the behaviour of the category makers is still useful, though we probably try to change them in the wrong direction sometimes. But another option is to deal with the externalities in the usual fashion: subsidise positive externalities and tax negative ones. This is done via social pressure within some groups. Families often use such a system, thus the derision given for ‘bringing shame to the family’, along with the rewards of giving parents something to accidentally mention to their friends. Similar is seen in schools and teams sometimes I think, and in the occasional accusation ‘you give x a bad name!’, though that is often made by someone outside the group. I haven’t heard of it done much in many other groups or via money rather than social pressure. Are there more such examples?

\n

One reason it is hard to enforce accountability for such externalities is that boundaries of groups are often quite unclear, and people near the edge feel unfairly treated if they fall on the more costly side. The less clear the group boundary, the more people are near the edge. Plus people toward the edge might only be seen as in the group a quarter of the time or something, so they aren’t externalizing or being externalized to so much. Families are a relatively clearly bounded group, so it is easier for them to punish and reward effects on family reputation. Gender is a relatively clear boundary too (far from completely clear, but more so than ‘tall people’), so I would expect this to work better there. Could women coordinate to improve the reputation of women in general by disrespecting the ones who complain too much, for instance? Should they?

\n

Of course, in a few areas making one group look better just makes another group look worse, so if all the externalities were internalized things would look just as they are. I don’t think this is usually the case, or the entire case.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "zPFojkLmiMJadaBCr", "title": "Existential Risk and Public Relations", "pageUrl": "https://www.lesswrong.com/posts/zPFojkLmiMJadaBCr/existential-risk-and-public-relations", "postedAt": "2010-08-15T07:16:32.802Z", "baseScore": 41, "voteCount": 77, "commentCount": 629, "url": null, "contents": { "documentId": "zPFojkLmiMJadaBCr", "html": "

[Added 02/24/14: Some time after writing this post, I discovered that it was based on a somewhat unrepresentative picture of SIAI. I still think that the concerns therein were legitimate, but they had less relative significance than I had thought at the time. SIAI (now MIRI) has evolved substantially since 2010 when I wrote this post, and the criticisms made in the post don't apply to MIRI as presently constituted.]

\n

A common trope on Less Wrong is the idea that governments and the academic establishment have neglected to consider, study and work against existential risk on account of their shortsightedness. This idea is undoubtedly true in large measure. In my opinion and in the opinion of many Less Wrong posters, it would be very desirable to get more people thinking seriously about existential risk. The question then arises: is it possible to get more people thinking seriously about existential risk? A first approximation to an answer to this question is \"yes, by talking about it.\" But this answer requires substantial qualification: if the speaker or the speaker's claims have low credibility in the eyes of the audience then the speaker will be almost entirely unsuccessful in persuading his or her audience to think seriously about existential risk. Speakers who have low credibility in the eyes of an audience member decrease the audience member's receptiveness to thinking about existential risk. Rather perversely, speakers who have low credibility in the eyes of a sufficiently large fraction of their audience systematically raise existential risk by decreasing people's inclination to think about existential risk. This is true whether or not the speakers' claims are valid.

\n

As Yvain has discussed in his excellent article titled The Trouble with \"Good\"

\n
\n

To make an outrageous metaphor: our brains run a system rather like Less Wrong's karma. You're allergic to cats, so you down-vote \"cats\" a couple of points. You hear about a Palestinian committing a terrorist attack, so you down-vote \"Palestinians\" a few points. Richard Dawkins just said something especially witty, so you up-vote \"atheism\". High karma score means seek it, use it, acquire it, or endorse it. Low karma score means avoid it, ignore it, discard it, or condemn it.

\n
\n

When Person X makes a claim which an audience member finds uncredible, the audience member's brain (semiconsciously) makes a mental note of the form \"Boo for Person X's claims!\"  If the audience member also knows that Person X is an advocate of existential risk reduction, the audience member's brain may (semiconsciously) make a mental note of the type \"Boo for existential risk reduction!\"

\n

The negative reaction to Person X's claims is especially strong if the audience member perceives Person X's claims as arising from a (possibly subconscious) attempt on Person X's part to attract attention and gain higher status, or even simply to feel as though he or she has high status. As Yvain says in his excellent article titled That other kind of status:

\n
\n

But many, maybe most human actions are counterproductive at moving up the status ladder. 9-11 Conspiracy Theories are a case in point. They're a quick and easy way to have most of society think you're stupid and crazy. So is serious interest in the paranormal or any extremist political or religious belief. So why do these stay popular?

\n

[...]

\n

a person trying to estimate zir social status must balance two conflicting goals. First, ze must try to get as accurate an assessment of status as possible in order to plan a social life and predict others' reactions. Second, ze must construct a narrative that allows them to present zir social status as as high as possible, in order to reap the benefits of appearing high status.

\n

[...]

\n

In this model, people aren't just seeking status, they're (also? instead?) seeking a state of affairs that allows them to believe they have status. Genuinely having high status lets them assign themselves high status, but so do lots of other things. Being a 9-11 Truther works for exactly the reason mentioned in the original quote: they've figured out a deep and important secret that the rest of the world is too complacent to realize.

\n
\n

I'm presently a graduate student in pure mathematics. During graduate school I've met many smart people who I wish would take existential risk more seriously. Most such people who have heard of Eliezer do not find his claims credible. My understanding is that the reason for this is that Eliezer has made some claims which they perceive to be falling under the above rubric, and the strength of their negative reaction to these has tarnished their mental image of all of Eliezer's claims. Since Eliezer supports existential risk reduction, I believe that this has made them less inclined to think about existential risk than they were before they heard of Eliezer.

\n

There is also a social effect which compounds the issue just mentioned: even people who are not directly influenced by it become less likely to think seriously about existential risk, on account of their desire to avoid being perceived as associated with claims that people find uncredible.

\n

I'm very disappointed that Eliezer has made statements such as:

\n
\n

If I got hit by a meteorite now, what would happen is that Michael Vassar would take over sort of taking responsibility for seeing the planet through to safety...Marcello Herreshoff would be the one tasked with recognizing another Eliezer Yudkowsky if one showed up and could take over the project, but at present I don't know of any other person who could do that...

\n
\n

which are easily construed as claims that his work has higher expected value to humanity than the work of virtually all humans in existence. Even if such claims are true, people do not have the information that they need to verify that such claims are true, and so virtually everybody who could be helping to assuage existential risk finds such claims uncredible. Many such people have an especially negative reaction to such claims because they can be viewed as arising from a tendency toward status grubbing, and humans are very strongly wired to be suspicious of those who they suspect to be vying for inappropriately high status. I believe that people who come into contact with Eliezer's statements like the one I have quoted above are statistically less likely to work to reduce existential risk than they were before coming into contact with such statements. I therefore believe that by making such claims, Eliezer has increased existential risk.

\n

I would go further than that and say that I presently believe that donating to SIAI has negative expected impact on existential risk reduction, on account of SIAI staff making uncredible claims which are poisoning the existential risk reduction meme. This is a matter on which reasonable people can disagree. In a recent comment, Carl Shulman expressed the view that though SIAI has had some negative impact on the existential risk reduction meme, the net impact of SIAI on the existential risk meme is positive. In any case, there's definitely room for improvement on this point.

\n

Last July I made a comment raising this issue and Vladimir_Nesov suggested that I contact SIAI. Since then I have corresponded with Michael Vassar about this matter. My understanding of Michael Vassar's position is that the people who are dissuaded from thinking about existential risk because of remarks like Eliezer's are too irrational for it to be worthwhile for them to be thinking about existential risk. I may have misunderstood Michael's position and encourage him to make a public statement clarifying his position on this matter. If I have correctly understood his position, I do not find Michael Vassar's position on this matter credible.

\n

I believe that if Carl Shulman is right, then donating to SIAI has positive expected impact on existential risk reduction. I believe that even if this is the case, a higher expected value strategy is to withhold donations from SIAI and to inform SIAI that you will fund them if and only if they require their staff to exhibit a high degree of vigilance about the possibility of poisoning the existential risk meme by making claims that people find uncredible. I suggest that those who share my concerns adopt the latter policy until their concerns have been resolved.

\n

Before I close, I should emphasize that my post should not be construed as an attack on Eliezer. I view Eliezer as an admirable person and don't think that he would ever knowingly do something that raises existential risk. Roko's Aspergers Poll suggests a strong possibility that the Less Wrong community exhibits an unusually high abundance of the traits associated with Aspergers Syndrome. It would not be at all surprising if the founders of Less Wrong have a similar unusual abundance of the traits associated with Aspergers Syndrome. I believe that more likely than not, the reason why Eliezer has missed the point that I raise in this post is social naivete on his part rather than willful self-deception.

" } }, { "_id": "zwEwvBAiFe9mXwkvk", "title": "Why you don’t seek friends dating site style", "pageUrl": "https://www.lesswrong.com/posts/zwEwvBAiFe9mXwkvk/why-you-don-t-seek-friends-dating-site-style", "postedAt": "2010-08-14T10:00:32.000Z", "baseScore": 1, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "zwEwvBAiFe9mXwkvk", "html": "

Robin asks an interesting question:

\n

Bryan Caplan recently pointed out to a few of us that while many dating web sites offer to help you find matching romantic mates, there are far fewer friend finding helpers.  We tend to collect friends informally, by liking the people we meet for other reasons, and especially friends of friends. But for mating purposes we are more willing to choose folks based on a list of their interests, an intro paragraph, a picture, etc.  Why the difference?

\n

His theory:

\n

We need mates more for their simple surface features, while we need friends more to serve as social allies in our existing social network.  Since we need friends in substantial part to serve as allies in our social world, supporting us against opposing coalitions, it makes sense to draw our friends from our existing social world.  And since we need mates more for their personal quality, e.g., good genes, youth, wealth, smarts, mood, etc., it makes sense to pick them more via such features.

\n

I have a different theory, though I'm not especially confident in it. First notice that people are actually often eager to make friends with people outside their social circle. They don't want to make friends on the subway, but they join groups, play sports, couchsurf, partake in a huge range of social online activities, and go to conferences, often with the intention of making new friends, who would be outside their existing social circle. A difference between any of these activities and online dating is that with the latter you have to be explicit about the fact that you are trying each other out when you go on a date. It is obvious when one of you decides against the other, and the relationship is usually sharply ended. With meeting friends casually this is not so; you can talk to people and assess them a lot before anybody even knows you are considering being friends with them. Even once you have done some friendly thing with a person, if you don't see them for months it's not clear whether you hated them or have just been busy. Friend-meeting activities are therefore best done with a group of people and some other supposed purpose for the interaction.

\n

I think this latter style is necessary for friends but not for romance because you can have many friends and only one partner, which makes turning someone down as a friend much ruder. Turning down someone as a partner says ‘I don’t think you are the best mate I can find given a few decades’ whereas turning down someone as a friend says ‘you are worse than zero’. It’s hard enough to explicitly tell someone the first thing; the second is near impossible. And if the recipient doesn’t listen to the first, you can get angry, have them arrested, get your new partner to threaten them, or whatever. Would-be friends can safely hang around for years not getting hints.

\n

So if you were to advertise for friends, you would probably be stuck with most of those you tried out, at least for a little while until you managed to gradually stop seeing each other, and there is some risk of the relationship remaining for a long time. These risks make online-dating-style friend seeking just too costly.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "4MpodyRwdYXEeC3jn", "title": "Problems in evolutionary psychology ", "pageUrl": "https://www.lesswrong.com/posts/4MpodyRwdYXEeC3jn/problems-in-evolutionary-psychology", "postedAt": "2010-08-13T18:57:40.454Z", "baseScore": 86, "voteCount": 77, "commentCount": 102, "url": null, "contents": { "documentId": "4MpodyRwdYXEeC3jn", "html": "

Note: The primary target of the post is not professional, academic evolutionary psychology. Rather, I am primarily cautioning amateurs (such as LW regulars) about some of the caveats involved in (armchair) evpsych and noting the rigor required for good theories. While the post does also serve as a warning to be cautious about sloppy research (or sloppy science journalism) that doesn't seem to be taking these issues into account, I do believe that most of the researchers doing serious evpsych research are quite aware of these issues.

\n

Evolutionary theories get mentioned a lot on this site, and I frequently feel that they are given far more weight than would be warranted. In particular, evolutionary theories about sex differences seem to get mentioned and appealed to as if they had an iron-cast certainty. People also don't hesitate to make up their own evolutionary psychological explanations. To counterbalance this, I present a list of evolutionary psychology-related problems, divided into four rough categories.

Problems in hypothesis generation

Rationalization bias. We know that human minds are very prone to first deciding on a desired outcome, then coming up with a plausible-sounding story of why it must be so. In general, our minds have difficulty noticing faulty reasoning if it leads to the right conclusion. It's easy and tempting to come up with an ad-hoc evolutionary explanation for any behavior, regardless of whether or not it actually has any biological roots.

Over-attributing meaning. Humans also have a strong tendency to attribute meaning to random chance. We might easily come up with explanations that are unnecessarily complex, and try to make everything into an evolved adaptation. For instance, humans tend to avoid thinking about unpleasant thoughts about themselves. A contrived evpsych explanation might be that this is evolved self-deception: by not acknowledging our own faults, it makes it easier for us to deceive others about them. But mental unpleasantness tends to be correlated with harmful experiences: we avoid situations where we'd be afraid, and fear is correlated with danger. It could just as well be that the mechanism for avoiding mental unpleasantness evolved from the mechanism for avoiding physical unpleasantness, and we avoid thinking unpleasant thoughts of ourselves for the same reason why we avoid poking our fingers at hot stoves. (Example courtesy of Anna Salamon.)

Alternative ways of reaching the goal. Eliezer previously gave us the example of the scientists who thought insects would under the right circumstances limit their breeding, but the insects ended up eating their competitors' offspring instead. We can only cover a limited part of the space of all possible routes evolution could take. While ”but another hypothesis might explain it better” is admittedly a problem all scientific disciplines face, it is especially acute here, since we have very little knowledge of what life in the EEA was actually like.

Problems in background assumptions

Did a genetic path to the adaptation exist? Evolution works by the rule of immediate advantage: for mutation X to reach fixation, it has to provide an immediate advantage. It's well and good to propose that under specific circumstances, organisms that developed a specific behavior would have gained a fitness advantage. But that, by itself, tells us nothing about how many mutations reaching such a behavior would have required. Nor does it tell us anything about whether all of those intermediate stages actually conferred the organism a fitness benefit, making it possible for the final form of the adaptation to actually be reached.

\n

Was there enough genetic variance of the right kind? For an adaptation to evolve, there had to be enough genetic variance for evolution to feed on at the right time. Again, postulating that an adaptation could have been useful tells us next to nothing about whether or not the variation needed to make it real existed.

Problems in verification

Memetic pressures shaping cultures. When trying to show the existence of biological sex differences, evolutionary psychologists sometimes appeal to cross-cultural studies that show sex differences across a wide variety of cultures. But while this is certainly evidence towards the differences being biological in origin, it's rather weak evidence. Pretty much all cultures in the world tend to be more or less patriarchal in nature. This could be caused by biological causes, but it's equally plausible that it was caused by a memetic selection pressure acting on non-psychological sex differences. Women have less strength than men and are the ones who bear children, which could easily have affected their social position even without drastic psychological differences.

Is something an adaptation? Consider the following passage:

We consider the possibility that certain specific aspects of the faculty of language are “spandrels” — by-products of preexisting constraints rather than end products of a history of natural selection (39). This possibility, which opens the door to other empirical lines of inquiry, is perfectly compatible with our firm support of the adaptationist program. Indeed, it follows directly from the foundational notion that adaptation is an “onerous concept” to be invoked only when alternative explanations fail (40). The question is not whether FLN [the Faculty of Language in a Narrow sense] in toto is adaptive. By allowing us to communicate an endless variety of thoughts, recursion is clearly an adaptive computation. The question is whether particular components of the functioning of FLN are adaptations for language, specifically acted upon by natural selection—or, even more broadly, whether FLN evolved for reasons other than communication.

An analogy may make this distinction clear. The trunk and branches of trees are near-optimal solutions for providing an individual tree’s leaves with access to sunlight. For shrubs and small trees, a wide variety of forms (spreading, spherical, multistalked, etc.) provide good solutions to this problem. For a towering rainforest canopy tree, however, most of these forms are rendered impossible by the various constraints imposed by the properties of cellulose and the problems of sucking water and nutrients up to the leaves high in the air. Some aspects of such trees are clearly adaptations channeled by these constraints; others (e.g., the popping of xylem tubes on hot days, the propensity to be toppled in hurricanes) are presumably unavoidable by-products of such constraints.
(Hauser, Chomsky & Fitch 2002)

What is something an adaptation for? For instance, it might seem intuitively obvious that language evolved as a way to communicate. But language also has plenty of other uses, including functions like problem-solving, enhancing social intelligence by rehearsing the thoughts of others, memory aids, focusing attention, and so on. There is evidence that even animals without human language can, for instance, do things such as discriminate various phonemes, suggesting that many key components of language may have evolved as general cognitive capabilities. Human language may then primarily be a result of many non-language related adaptations happening to combine in the appropriate way. It's an empirical question which, if any, of these functions has been the primary force driving the evolution of language. (For a debate on this, see Hauser, Chomsky & Fitch 2002; Pinker & Jackendoff 2005; Fitch, Hauser & Chomsky 2005; Jackendoff & Pinker 2005.)

In the same manner, bats use echolocation to find and capture prey (feeding), to navigate, to find mates, and to engage in aerial dogfights with competitors. We can study bats to obtain plenty of information about how the bat sonar physically and cognitively works and how bats use it. Yet its evolutionary history and the functions that the sonar's early stages were the most useful for are questions that we are mostly incapable of answering. For the most part, such knowledge wouldn't even tell us anything we couldn't more reliably discover via other means. Our inability to verify theories about the adaptive origin of various traits weakens the faith we can place on such theories.

Problems in modern-day meaningfulness

Evolution did not stop after the Pleistocene. This was covered in more detail in my review of The 10,000 Year Explosion. We know that new adaptations such as the one for lactose tolerance have shown up in the last 8,000 years. We also know that hundreds of \"gene sweeps\" of specific alleles increasing their frequency in the population are still going on today. While the full functions of these alleles are still not known, it is known that most involve changes in metabolism and digestion, defenses against infectious disease, reproduction, DNA repair, or the central nervous system. And so on; see the link for more.

The modern environment may alter our biology. To name one example, hormones have a strong impact on human psychology. Yet women especially are likely to have very different hormonal activity than they used to have. We have fewer children, and have them at a later age than we probably did in the EEA. The Pill basically works by screwing up the normal hormonal balance. Some extra hormones are fed to livestock and find their way into our bodies via our food. Even ignoring that possibility, our modern-day diet is very much unlike the one we used to have. We also get far less exercise, and so on. Our environment is likely making our brains different from the way they used to be.

Evolution may have exploited gene-environment relationships that no longer exist. This one is huge. For instance, we know that daylight has a role in regulating our sleep patterns. Now that artificial lighting exists, we routinely stay up for far longer than we would if we only had the sun to go by. More generally, the environment has a massive role in influencing how our brains develop. Children raised by animals do not, as a rule, ever reach a level where they could fully adjust to human society. As our whole society works in a completely different way than it used to, it's nearly certain to have broken numerous relationships that regulated the adaptations in the EEA.

”Human universals” mainly apply on a cultural level. Even behaviors that were very widespread may or may not apply to any particular individual. Lists of ”human universals” will tell us that members in every tribe found so far will interpret facial expressions, love their children, tell stories, feel pain, experience emotions, and so on. But there are also individuals who do not know how to read facial expressions, do not care for their children, are not interested in stories, do not experience pain or emotions, and so on. Sexuality is one of the drives that would have had the strongest selection pressures operating on it, but there are nevertheless people who have no interest in sex, are mainly interested in sex with things that you cannot reproduce with (same-sex partners, children, cars...), or prefer to just masturbate.

Conclusion. Evpsych can certainly point us towards interesting novel hypotheses about human behavior. When such hypotheses turn out to be true, then there's indeed a strong possibility that they evolved as adaptations. But it's important to note that while science can provide us strong evidence about the existence of some behavior, it is incapable of providing strong evidence about the evolutionary origins of that behavior. Behavior, as a rule, does not leave convenient fossils behind.

There are basically two kinds of ev-psych explanations: one proposing an evolutionary origin for a present-day trait (an explanation) and one proposing a previously unknown trait based on evolutionary considerations (a prediction). Of these, explanations seem to only have limited value. To make a typical evolutionary psychological claim about the origins of something is to assume, among other things, that the thing in question is an adaptation, that its suggested origin was the primary driver of selection pressure for the adaptation's evolution, that a genetic path existed to the adaptation and there was enough genetic variation to make it possible. These are all claims that are almost impossible to verify or falsify. In most cases, it is better to merely talk about what empirical research has revealed about the thing in question, without giving too much weight to its (unverifiable) evolutionary origins.

Evpsych is more useful for predictions. And it does occasionally produce results you'd never have thought to test otherwise. Still, even if there seemed to be a very strong case for selection pressures to have existed towards something becoming an adaptation, this tells us next to nothing about whether it actually ended up evolving. Even if we can ascertain that this kind of an effect seems to be prevalent in the world, evolutionary psychology alone cannot tell us the degree to which the effect is amenable to environmental conditions. That sort of information can only be found by ordinary empirical research, and ordinary empirical research doesn't need evolutionary psychology for anything other than suggesting interesting hypotheses.

Evpsych should primarily be used for helping build coherent explanatory frameworks for human behavior and for coming up with new predictions. But someone arguing in favor of some behavior being universal or biologically determined in the modern day shouldn't appeal to evpsych for support, for evpsych can at most weakly suggest such things.

Acknowledgements. Part of the content in this article was adapted from the materials of the Cognitive Science 121 course at University of Helsinki, written by Otto Lappi and Anna-Mari Rusanen.

" } }, { "_id": "Q8jyAdRYbieK8PtfT", "title": "Taking Ideas Seriously", "pageUrl": "https://www.lesswrong.com/posts/Q8jyAdRYbieK8PtfT/taking-ideas-seriously", "postedAt": "2010-08-13T16:50:29.769Z", "baseScore": 86, "voteCount": 78, "commentCount": 260, "url": null, "contents": { "documentId": "Q8jyAdRYbieK8PtfT", "html": "

I, the author, no longer endorse this post.

\n

Abstrummary: I describe a central technique of epistemic rationality that bears directly on instrumental rationality, and that I do not believe has been explicitly discussed on Less Wrong before. The technique is rather simple: it is the practice of taking ideas seriously. I also present the rather simple metaphor of an 'interconnected web of belief nodes' (like a Bayesian network) to describe what it means to take an idea seriously: it is to update a belief and then accurately and completely propagate that belief update through the entire web of beliefs in which it is embedded. I then give a few examples of ideas to take seriously, followed by reasons to take ideas seriously and what bad things happen if you don't (or society doesn't). I end with a few questions for Less Wrong.

\n

Eliezer Yudkowsky and Michael Vassar are two rationalists who have something of an aura of formidability about them. This is especially true of Michael Vassar in live conversation, where he's allowed to jump around from concept to concept without being penalized for not having a strong thesis. Eliezer did something similar in his writing by creating a foundation of reason upon which he could build new concepts without having to start explaining everything anew every time. Michael and Eliezer know a lot of stuff, and are able to make connections between the things that they know: seeing which nodes of knowledge are relevant to their beliefs or decisions, or, if that fails, knowing which algorithm they should use to figure out which nodes of knowledge are likely to be relevant. They have all the standard Less Wrong rationality tools too, of course, and a fair amount of heuristics and dispositions that haven't been covered on Less Wrong. But I believe it is this aspect of their rationality, the coherent and cohesive and carefully balanced web of knowledge and belief nodes, that causes people to perceive them as formidable rationalists, of a kind not to be disagreed with lightly.

\n

The common trait of Michael and Eliezer and all top tier rationalists is their drive to really consider the implications and relationships of their beliefs. It's something like a failure to compartmentalize; it's what has led them to developing their specific webs of knowledge, instead of developing one web of beliefs about politics that is completely separate from their webs of belief about religion, or science, or geography. Compartmentalization is the natural and automatic process by which belief nodes or groups of beliefs nodes become isolated from their overarching web of beliefs, or many independent webs are created, or the threads between nodes are not carefully and precisely maintained. It is the ground state of your average scientist. When Eliezer first read about the idea of a Singularity, he didn't do exactly what I and probably almost anybody in the world would have done at that moment: he didn't think \"Wow, that's pretty neat!\" and then go on to study string theory. He immediately saw that this was an idea that needed to be taken seriously, a belief node of great importance that necessarily affects every other belief in the web. It's something that I don't have naturally (not that it's either binary or genetic), but it's a skill that I'm reasonably sure can be picked up and used immediately, as long as you have a decent grasp of the fundamentals of rationality (as can be found in the Sequences).

\n

Taking an idea seriously means:

\n\n

There are many ideas that should be taken a lot more seriously, both by society and by Less Wrong specifically. Here are a few:

\n\n

Some potentially important ideas that I readily admit to not yet having taken seriously enough:

\n\n

And some ideas that I did not immediately take seriously when I should have:

\n\n

I also suspect that there are ideas that I should be taking seriously but do not yet know enough about; for example, maybe something to do with my diet. I could very well be poisoning myself and my cognition without knowing it because I haven't looked into the possible dangers of the various things I eat. Maybe corn syrup is bad for me? I dunno; but nobody's ever sat me down and told me I should look into it, so I haven't. That's the problem with ideas that really deserve to be taken seriously: it's very rare that someone will take the time to make you do the research and really think about it in a rational and precise manner. They won't call you out when you fail to do so. They won't hold you to a high standard. You must hold yourself to that standard, or you'll fail.

\n

Why should you take ideas seriously? Well, if you have Something To Protect, then the answer is obvious. That's always been my inspiration for taking ideas seriously: I force myself to investigate any way to help that which I value to flourish. This manifests on both the small and the large scale: if a friend is going to get a medical operation, I research the relevant literature and make sure that the operation works or that it's safe. And if I find out that the development of an unFriendly artificial intelligence might lead to the pointless destruction of everyone I love and everything I care about and any value that could be extracted from this vast universe, then I research the relevant literature there, too. And then I keep on researching. What if you don't have Something To Protect? If you simply have a desire to figure out the world -- maybe not an explicit desire for instrumental rationality, but at least epistemic rationality -- then taking ideas seriously is the only way to figure out what's actually going on. For someone passionate about answering life's fundamental questions to miss out on Tegmark's cosmology is truly tragic. That person is losing a vista of amazing perspectives that may or may not end up allowing them to find what they seek, but at the very least is going to change for the better the way they think about the world.

\n

Failure to take ideas seriously can lead to all kinds of bad outcomes. On the societal level, it leads to a world where almost no attention is paid to catastrophic risks like nuclear EMP attacks. It leads to scientists talking about spirituality with a tone of reverence. It leads to statisticians playing the lottery. It leads to an academia where an AGI researcher who completely understands that the universe is naturalistic and beyond the reach of God fails to realize that this means an AGI could be really, really dangerous. Even people who make entire careers out of an idea somehow fail to take it seriously, to see its implications and how it should move in perfect alignment with every single one of their actions and beliefs. If we could move in such perfect alignment, we would be gods. To be a god is to see the interconnectedness of all things and shape reality accordingly. We're not even close. (I hear some folks are working on it.) But if we are to become stronger, that is the ideal we must approximate.

\n

Now, I must disclaim: taking certain ideas seriously is not always best for your mental health. There are some cases where it is best to recognize this and move on to other ideas. Brains are fragile and some ideas are viruses that cause chaotic mutations in your web of beliefs. Curiosity and diligence are not always your friend, and even those with exceptionally high SAN points can't read too much Eldritch lore before having to retreat. Not only can ignorance be bliss, it can also be the instrumentally rational state of mind.2

\n

What are ideas you think Less Wrong hasn't taken seriously? Which haven't you taken seriously, but would like to once you find the time or gain the prerequisite knowledge? Is it best to have many loosely connected webs of belief, or one tightly integrated one? Do you have examples of a fully executed belief update leading to massive or chaotic changes in a web of belief? Alzheimer's disease may be considered an 'update' where parts of the web of belief are simply erased, and I've already listed deconversion as another. What kinds of advantages could compartmentalization give a rationalist?

\n

 

\n
\n

1 I should write a post about reasons for people under 30 not to sign up for cryonics. However, doing so would require writing a post about Singularity timelines, and I really really don't want to write that one. It seems that a lot of LWers have AGI timelines that I would consider... erm, ridiculous. I've asked Peter de Blanc to bear the burden of proof and I'm going to bug him about it every day until he writes up the article.

\n

2 If you snarl at this idea, try playing with this Litany, and then playing with how you play with this Litany: 

\n
\n

If believing something that is false gets me utility,
I desire to believe in that falsity;
If believing something that is true gets me utility,
I desire to believe in that truth;
Let me not become attached to states of belief that do not get me utility.

\n
" } }, { "_id": "uhYCCq4oBdA6rbr33", "title": "How the abstraction shield works", "pageUrl": "https://www.lesswrong.com/posts/uhYCCq4oBdA6rbr33/how-the-abstraction-shield-works", "postedAt": "2010-08-13T11:00:45.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "uhYCCq4oBdA6rbr33", "html": "

All kinds of psychological distance make things seem less important, presumably because they usually are. So it’s better for bad things to seem distant and good things to seem close.

\n

Do we only modify importance in response to distance, or do we change our perception of distance in order to manipulate our perception of importance? This article suggests the latter is true: people view things they don’t want to be guilty of as further back in time:

\n

Germans (but not Canadians) judged the Holocaust to be more subjectively remote in time when they read only about German-perpetrated atrocities than when this threat was mitigated. Greater subjective distance predicted lower collective guilt, which, in turn, predicted less willingness to make amends (Study 1). Distancing under threat was more pronounced among defensive Germans who felt unjustly blamed by other nations (Study 2). In Study 3, the authors examined the causal role of subjective time. Nondefensive Germans induced to view the Holocaust as closer reported more collective guilt and willingness to compensate. In contrast, defensive Germans reported less collective guilt after the closeness induction. Taken together, the studies demonstrate that how past wrongs are psychologically situated in time can play a powerful role in people’s present-day reactions to them.

\n

That defensive Germans judged the Holocaust to be more distant in time than either the innocent Canadians or the more guilty and more guilt-accepting Germans did implies that the effect is probably not related to how bad the guilt is, but rather to how much a person would like to avoid it.

\n

Psychological distance also alters whether we think in near or far mode and our thinking mode alters our perception of distance.  So if we want to feel distant from bad things we could benefit from thinking about them more abstractly and good things more concretely (as abstraction triggers far mode and concreteness near mode). Do we do this?

\n

Yes. Euphemisms are usually abstract references to bad things, and it is often rude not to use them. We certainly try to think of death abstractly, in terms of higher meanings rather than the messy nature of the event. At funerals we hide the body and talk about values. Admissions and apologies are often made abstractly, e.g. ‘I made a mistake’ rather than ‘I shouldn’t have spent my afternoons having sex with Elise’. We mostly talk about sex abstractly, and while it is not bad it is also not something people want to be near when uninvolved. Menstruation is referred to abstractly (wrong time of the month, ladies’ issues etc). Calling meat ‘dead animal’ or even ‘cow’ is a clear attempt to inflict guilt on the diner.

\n

Some of these things may be thought of abstractly because people object to their details (what their friend looks like having sex) without objecting to the whole thing (the knowledge that their friend has sex), rather than because they want to be distant especially. However then the question remains why they would approve of an abstract thing but not its details, and the answer could be the same (considering what your friend looks like having sex is too much like being there).

\n

On the other hand we keep detailed photographs of people and places we like, collect detailed knowledge of the lives of celebrities we wish we were close to, and plan out every moment of weddings and sometimes holidays months in advance.

\n

It’s otherwise unclear to me why concrete language about bad things should often be more offensive or hurtful than abstract language, though obviously it is. People are aware of the equivalence of the concepts, so how can one be worse? I think the answer is that concrete language forces the listener psychologically close to the content, which automatically makes it feel important to them, which is a harm if the thing you are referring to is bad. It is offensive in the same way that holding poo in front of someone’s face is meaner than pointing it out to them across a field.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "cRrM8LPf9waAd4uiL", "title": "An introduction to decision theory", "pageUrl": "https://www.lesswrong.com/posts/cRrM8LPf9waAd4uiL/an-introduction-to-decision-theory", "postedAt": "2010-08-13T09:09:49.347Z", "baseScore": 25, "voteCount": 23, "commentCount": 29, "url": null, "contents": { "documentId": "cRrM8LPf9waAd4uiL", "html": "

This is part 1 of a sequence to be titled “Introduction to decision theory”.

\n


\n

Less Wrong collects together fascinating insights into a wide range of fields. If you understood everything in all of the blog posts, then I suspect you'd be in quite a small minority. However, a lot of readers probably do understand a lot of it. Then, there are the rest of us: The people who would love to be able to understand it but fall short. From my personal experience, I suspect that there are an especially large number of people who fall into that category when it comes to the topic of decision theory.

Decision theory underlies much of the discussion on Less Wrong and, despite buckets of helpful posts, I still spend a lot of my time scratching my head when I read, for example, Gary Drescher's comments on Timeless Decision Theory. At its core this is probably because, despite reading a lot of decision theory posts, I'm not even 100% sure what causal decision theory or evidential decision theory is. Which is to say, I don't understand the basics. I think that Less Wrong could do with a sequence that introduces the relevant decision theory from the ground up and ends with an explanation of Timeless Decision Theory (and Updateless Decision Theory). I'm going to try to write that sequence.

What is a decision theory?


In the interests of starting right from the start, I want to talk about what a decision theory is. A decision theory is a formalised system for analysing possible decisions and picking from amongst them. Normative decision theory, which this sequence will focus on, is about how we should make decisions. Descriptive decision theory is about how we do make decisions.

Decision theory involves looking at the possible outcomes of a decision. Each outcome is given a utility value, expressing how desirable that outcome is. Each outcome is also assigned a probability. The expected utility of taking an action is equal to the sum of the utilities of each possible outcome multiplied by the probability of that outcome occurring. To put it another way, you add together the utilities of each of the possible outcomes, but these are weighted by their probabilities so that if an outcome is less likely, the value of that outcome is taken into account to a lesser extent.

\n

 

\n

Before this gets too complicated, let's look at an example:

Let's say you are deciding whether to cheat on a test. If you cheat, the possible outcomes are getting full marks on the test (50% chance, 100 points of utility - one for each percentage point correct) or getting caught cheating and getting no marks (50% chance, 0 utility).

We can now calculate the expected utility of cheating on the test:

(1/2 * 100) + (1/2 * 0) = 50 + 0 = 50

\n

That is, we look at each outcome, determine how much it should contribute to the total utility by multiplying the utility by its probability and then add together the value we get for each possible outcome.

So, decision theory would say (questions of morality aside) that you should cheat on the test if you would get less than 50% on the test if you didn't cheat.
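To make the arithmetic concrete, here is a minimal Python sketch of the same calculation (my own illustration, not part of the original example). The cheating outcomes are the numbers from above; the honest score of 40% is an assumption added purely so there is something to compare against.

# Minimal sketch of the expected-utility calculation from the cheating example.
# Each possible outcome is a (probability, utility) pair.
def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

cheat = [(0.5, 100), (0.5, 0)]   # full marks vs. caught cheating
honest = [(1.0, 40)]             # assume you'd score 40% without cheating

print(expected_utility(cheat))   # 50.0
print(expected_utility(honest))  # 40.0 -> cheating has the higher expected utility here

With an honest score above 50 the comparison flips, which is just the 50% threshold described above.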

Those who are familiar with game theory may feel that all of this is very familiar. That's a reasonable conclusion: a good approximation is that decision theory is one-player game theory.

What are causal and evidential decision theories?

Two of the principal decision theories popular in academia at the moment are causal and evidential decision theories.

\n

In the description above, when we looked at each possible outcome we considered two factors: the probability of it occurring and the utility gained or lost if it did occur. Causal and evidential decision theories differ in how they define the probability of the outcome occurring.

Causal Decision Theory defines this probability causally: it asks what the probability is that, if action A is taken, outcome B will occur. Evidential Decision Theory asks what evidence the action provides for the outcome: it asks what the probability of B occurring is, given the evidence of A. These may not sound very different, so let's look at an example.

Imagine that politicians are either likeable or unlikeable (and they are simply born this way - they cannot change it) and the outcome of the election they're involved in depends purely on whether they are likeable. Now let's say that likeable people have a higher probability of kissing babies and unlikeable people have a lower probability of doing so. But this politician has just changed into new clothing and the baby they're being expected to kiss looks like it might be sick. They really don't want to kiss the baby. Kissing the baby doesn't itself influence the election, that's decided purely based on whether the politician is likeable or not. The politician does not know if they are likeable.

Should they kiss the baby?

Causal Decision Theory would say that they should not kiss the baby because the action has no causal effect. It would calculate the probabilities as follows:

If I am likeable, I will win the election. If I am not, I will not. I am 50% likely to be likeable.

If I don't kiss the baby, I will be 50% likely to win the election.

If I kiss the baby, I will be 50% likely to win the election.

I don't want to kiss the baby so I won't.

Evidential Decision Theory on the other hand, would say that you should kiss the baby because doing so is evidence that you are likeable. It would reason as follows:

If I am likeable, I will win the election. If I am not, I will not. I am 50% likely to be likeable.

If I kissed the baby, there would be an 80% probability that I was likeable (to choose an arbitrary percentage).

If I did not kiss the baby, there would be a 20% probability that I was likeable.

Therefore:

Given the action of me kissing the baby, it is 80% probable that I am likeable and thus the probability of me winning the election is 80%.

Given the action of me not kissing the baby, it is 20% probable that I am likeable and thus the probability of me winning the election is 20%.

\n

 

\n

So I should kiss the baby (presuming the desire to avoid kissing the baby is only a minor desire).

\n

 

\n

To make the distinction explicit: Evidential Decision Theory asks whether an action provides evidence for an outcome occurring; Causal Decision Theory asks whether the action will causally affect the probability of that outcome occurring.
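As a rough sketch (again my own illustration, not the post's), the two ways of assigning probabilities in the baby-kissing example can be written out side by side. The 50%, 80% and 20% figures are the ones used above; valuing the election win at 1 and the unpleasantness of kissing the baby at 0.1 is an assumption made only so the numbers come out.

# Illustrative utilities: winning the election is worth 1, kissing the baby costs 0.1.
WIN_UTILITY = 1.0
KISS_COST = 0.1

def causal_expected_utility(kiss):
    # Kissing has no causal effect on likeability, so P(win) stays at 0.5 either way.
    p_win = 0.5
    return p_win * WIN_UTILITY - (KISS_COST if kiss else 0.0)

def evidential_expected_utility(kiss):
    # Kissing is evidence of likeability: P(win | kiss) = 0.8, P(win | no kiss) = 0.2.
    p_win = 0.8 if kiss else 0.2
    return p_win * WIN_UTILITY - (KISS_COST if kiss else 0.0)

print(causal_expected_utility(True), causal_expected_utility(False))          # 0.4 0.5 -> don't kiss
print(evidential_expected_utility(True), evidential_expected_utility(False))  # 0.7 0.2 -> kiss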

\n

 

\n

The question of whether either of these decision theories works under all circumstances that we'd want them to is the topic that will be explored in the next few posts of this sequence.

\n

 

\n

Appendix 1: Some maths

\n


\n

 I think that when discussing a mathematical topic, there’s always something to be gained from having a basic knowledge of the actual mathematical equations underpinning it. If you’re not comfortable with maths though, feel free to skip the following section. Each post I do will, if relevant, end with a section on the maths behind it but these will always be separate to the main body of the post – you will not need to know the equations to understand the rest of the post. If you're interested in the equations though, read on:

\n

 

\n

Decision theory assigns each action a utility based on the sum of the probability of each outcome multiplied by the utility from each possible outcome.  It then applies this equation to each possible action to determine which one leads to the highest utility. As an equation, this can be represented as:

\n

 

\n

\"Basic

\n

 

\n

\n

Where U(A) is the utility gained from action A. Capital sigma, the Greek letter, represents the sum for all i, Pi represents the probability of outcome i occurring and Di, standing for desirability, represents the utility gained if that outcome occurred. Look back at the cheating on the test example to get an idea of how this works in practice if you're confused.

\n

 

\n

Now causal and evidential decision theory differ based on how they calculate Pi. Causal Decision Theory uses the following equation:

\n

 

\n

\n

\"Causal

\n

 

\n

In this equation, everything is the same as in the first equation except that the probability term is now the probability that outcome Oi would occur if action A were taken (a causal, counterfactual probability, written above with the $\square\!\rightarrow$ arrow).

\n

 

\n

Similarly, Evidential Decision Theory uses the following equation:

\n

 

\n

\"Evidential

\n

 

\n

\n

Where the probability is calculated based on the probability of Oi given that A is true.

\n

 

\n

If you can’t see the distinction between these two equations, then think back to the politician example.

\n

 

\n

Appendix 2: Important Notes

\n


\n

The question of how causality should be formalised is still an open one; see cousin_it's comments below. As this is an introductory-level post, we will not delve into these questions here, but it is worth noting there is some debate about how exactly to interpret causal decision theory.

\n

 

\n

It's also worth noting that the baby-kissing example mentioned above is more commonly discussed on the site as the Smoking Lesion problem. In the smoking lesion world, people who smoke are much more likely to get cancer. But smoking doesn't actually cause cancer; rather, there's a genetic lesion that can cause both cancer and a tendency to smoke. If you like to smoke (but really don't like cancer), should you smoke? Once again, Causal Decision Theory says yes. Evidential Decision Theory says no.

\n

 

\n

The next post is \"Newcomb's Problem: A problem for Causal Decision Theories\".

" } }, { "_id": "td7TvP9cyMpgknLy2", "title": "Decision theory question: Alpha and omega play paper scissor rock", "pageUrl": "https://www.lesswrong.com/posts/td7TvP9cyMpgknLy2/decision-theory-question-alpha-and-omega-play-paper-scissor", "postedAt": "2010-08-12T20:55:09.753Z", "baseScore": 1, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "td7TvP9cyMpgknLy2", "html": "

Edit: Actually nevermind, this is just counterfactual mugging. Deleting post.

\n

We have two superintelligent rational entities, call them alpha and omega, and they have access to each other's source code. They're playing one round of paper scissor rock. The winner gives the loser five dollars, or nothing if they tie. Additionally, you win $100 in bonus money from the bank if your opponent chooses scissors.

\n

My question is, what would they do? If it was ordinary paper scissor rock, it seems fairly clear they can't do better than picking randomly, given that they're equally matched. But what do they do here? Is there in fact a right answer, or does it depend on the peculiarities of how they were programmed?
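For concreteness, here is a small sketch of the payoff structure as stated (the function and names are mine, not from the post): the loser pays the winner five dollars, ties pay nothing, and each player collects a $100 bonus from the bank whenever their opponent plays scissors.

# Sketch of the modified rock-paper-scissors payoffs described above.
# Returns (alpha_payoff, omega_payoff) for one round.
BEATS = {'rock': 'scissors', 'paper': 'rock', 'scissors': 'paper'}

def payoffs(alpha_move, omega_move):
    alpha, omega = 0, 0
    if BEATS[alpha_move] == omega_move:    # alpha wins the round
        alpha, omega = 5, -5
    elif BEATS[omega_move] == alpha_move:  # omega wins the round
        alpha, omega = -5, 5
    if omega_move == 'scissors':           # bank bonus for alpha
        alpha += 100
    if alpha_move == 'scissors':           # bank bonus for omega
        omega += 100
    return alpha, omega

print(payoffs('rock', 'scissors'))  # (105, -5)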

\n

 

" } }, { "_id": "dC3rxrMkYKLfgTYEa", "title": "What a reduction of \"could\" could look like", "pageUrl": "https://www.lesswrong.com/posts/dC3rxrMkYKLfgTYEa/what-a-reduction-of-could-could-look-like", "postedAt": "2010-08-12T17:41:33.677Z", "baseScore": 84, "voteCount": 65, "commentCount": 111, "url": null, "contents": { "documentId": "dC3rxrMkYKLfgTYEa", "html": "

By requests from Blueberry and jimrandomh, here's an expanded repost of my comment which was itself a repost of my email sent to decision-theory-workshop.

\n

(Wait, I gotta take a breath now.)

\n

A note on credit: I can only claim priority for the specific formalization offered here, which builds on Vladimir Nesov's idea of \"ambient control\", which builds on Wei Dai's idea of UDT, which builds on Eliezer's idea of TDT. I really, really hope to not offend anyone.

\n

(Whew!)

\n

Imagine a purely deterministic world containing a purely deterministic agent. To make it more precise, agent() is a Python function that returns an integer encoding an action, and world() is a Python function that calls agent() and returns the resulting utility value. The source code of both world() and agent() is accessible to agent(), so there's absolutely no uncertainty involved anywhere. Now we want to write an implementation of agent() that would \"force\" world() to return as high a value as possible, for a variety of different worlds and without foreknowledge of what world() looks like. So this framing of decision theory makes a subprogram try to \"control\" the output of a bigger program it's embedded in.

\n

For example, here's Newcomb's Problem:

\n
def world():
  # Omega fills the big box only if the agent one-boxes (returns 1).
  box1 = 1000
  box2 = 0 if agent() == 2 else 1000000
  # A two-boxer takes box1 as well; a one-boxer takes only box2.
  return box2 + (box1 if agent() == 2 else 0)
\n

A possible algorithm for agent() may go as follows. Look for machine-checkable mathematical proofs, up to a specified max length, of theorems of the form \"agent()==A implies world()==U\" for varying values of A and U. Then, after searching for some time, take the biggest found value of U and return the corresponding A. For example, in Newcomb's Problem above there are easy theorems, derivable even without looking at the source code of agent(), that agent()==2 implies world()==1000 and agent()==1 implies world()==1000000.
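The self-referential proof search itself can't be captured in a few lines, but as a toy illustration (my own simplification, not the algorithm above) we can hand the agent a list of already-proved implications and let it pick the action with the best proved consequence:

# Toy stand-in for the proof-searching agent: instead of searching for
# machine-checkable proofs, we pass in pairs (A, U) representing theorems
# of the form agent()==A implies world()==U that have already been proved,
# and the agent simply returns the action with the highest proved utility.
def agent_from_theorems(theorems):
    best_action, _best_utility = max(theorems, key=lambda t: t[1])
    return best_action

# The two easy theorems mentioned above for Newcomb's Problem:
theorems = [(2, 1000), (1, 1000000)]
print(agent_from_theorems(theorems))  # -> 1, i.e. one-boxing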

\n

The reason this algorithm works is very weird, so you might want to read the following more than once. Even though most of the theorems proved by the agent are based on false premises (because it is obviously logically contradictory for agent() to return a value other than the one it actually returns), the one specific theorem that leads to maximum U must turn out to be correct, because the agent makes its premise true by outputting A. In other words, an agent implemented like that cannot derive a contradiction from the logically inconsistent premises it uses, because then it would \"imagine\" it could obtain arbitrarily high utility (a contradiction implies anything, including that), therefore the agent would output the corresponding action, which would prove the Peano axioms inconsistent or something.

\n

To recap: the above describes a perfectly deterministic algorithm, implementable today in any ordinary programming language, that \"inspects\" an unfamiliar world(), \"imagines\" itself returning different answers, \"chooses\" the best one according to projected consequences, and cannot ever \"notice\" that the other \"possible\" choices are logically inconsistent with determinism. Even though the other choices are in fact inconsistent, and the agent has absolutely perfect \"knowledge\" of itself and the world, and as much CPU time as it wants. (All scare quotes are intentional.)

\n

This is progress. We started out with deterministic programs and ended up with a workable concept of \"could\".

\n

Hopefully, results in this vein may someday remove the need for separate theories of counterfactual reasoning based on modal logics or something. This particular result only demystifies counterfactuals about yourself, not counterfactuals in general: for example, if agent A tries to reason about agent B in the same way, it will fail miserably. But maybe the approach can be followed further.

" } }, { "_id": "yx42knCx36Qzf7tew", "title": "Humor isn’t norm evasion", "pageUrl": "https://www.lesswrong.com/posts/yx42knCx36Qzf7tew/humor-isn-t-norm-evasion", "postedAt": "2010-08-12T17:21:17.000Z", "baseScore": 4, "voteCount": 2, "commentCount": 0, "url": null, "contents": { "documentId": "yx42knCx36Qzf7tew", "html": "

Robin adds the recent theory that humor arises from benign norm violations to his Homo Hypocritus model:

\n

The Homo Hypocritus (i.e., man the sly rule bender) hypothesis I’ve been exploring lately is that humans evolved to appear to follow norms, while covertly coordinating to violate norms when mutually advantageous. A dramatic example of this seems to be the sheer joy and release we feel when we together accept particular norm violations.  Apparently much “humor” is exactly this sort of joy:

\n

[The paper:] The benign-violation [= humor] hypothesis suggests that three conditions are jointly necessary and sufficient for eliciting humor: A situation must be appraised as a [norm] violation, a situation must be appraised as benign, and these two appraisals must occur simultaneously. [Those who simultaneously see both interpretations] will be amused. Those who do not simultaneously see both interpretations will not be amused.

\n

In five experimental studies, … we found that benign moral violations tend to elicit laughter (Study 1), behavioral displays of amusement (Study 2), and mixed emotions of amusement and disgust (Studies 3–5). Moral violations are amusing when another norm suggests that the behavior is acceptable (Studies 2 and 3), when one is weakly committed to the violated norm (Study 4), or when one feels psychologically distant from the violation (Study 5). …

\n

We investigated the benign-violation hypothesis in the domain of moral violations. The hypothesis, however, appears to explain humor across a range of domains, including tickling, teasing, slapstick, and puns. (more;HT)

\n

[Robin:] Laughing at the same humor helps us coordinate with close associates on what norms we expect to violate together (and when and how). This may be why it is more important to us that close associates share our sense of humor, than our food or clothing tastes, and why humor tastes vary so much from group to group.

\n

I disagree with the theory and with Robin’s take on it.

\n

Benign social norm violations are often not funny:

\n

Yesterday I drove home drunk, but there was almost nobody out that late anyway.

\n

Some people tell small lies in job interviews.

\n

You got his name wrong, but I don’t think he noticed

\n

Things are often funny without being norm violations:

\n

People we don’t sympathize with falling over, being fat, being ugly, making mistakes, having stupid beliefs

\n

People trying to gain status we think they don’t deserve and failing (note that it is their failure that is funny, not their norm-violating arrogance), or acting as though they have status when really they are being made fools of

\n

Silly things being treated as though they are dangerous or important, e.g. Monty Python’s killer rabbit, and the board game Munchkin’s ‘boots of butt kicking’ and most of its other jokes

\n

Note that the first two are cases of people we don’t sympathize with having their status lowered, and the third signifies someone acting as if they are inferior to the point of absurdity. Social norm violation often involves someone’s status being lowered, either the norm violating party if they fail or whoever they are committing a violation against. And when people or groups we dislike lose status, this is benign to us. So benign norm violations often coincide with people we don’t care for losing status. There are varieties of benign violation where we are not harmed but where nobody else we know of or dislike loses status,  and these don’t seem to be funny. All of the un-funny social norm violations I mentioned first are like this. So I think ‘status lowering of those we don’t care for’ is more promising a commonality than ‘benign norm violations’.

\n

I don’t think the benign norm violation view of humor is much use in the Homo Hypocritus model for three reasons. Humor can’t easily allow people to agree on what norms to violate since a violation’s being benign is often a result of the joke being about a distant story that can’t affect you, rather than closely linked to the nature of the transgression. Think of baby in the blender jokes. More likely it helps to coordinate who to transgress against. If I hear people laughing at a political leader portrayed doing a silly dance I infer much more confidently that they don’t respect the political leader than that they would be happy to do silly dances with me in future.

\n

Second, if it were the case that humor was a signal between people about what norms to violate, you would not need to get the humor to get the message, so the enjoyment seems redundant. You don’t have to find a joke amusing to see what norm is violated in it, especially if you are the party who likes the norm and would like to prevent conspiracies to undermine it. So this theory doesn’t explain people liking to have similar humor to their friends, nor the wide variety, nor the special emotional response rather than just saying ‘hey, I approve of Irishmen doing silly things, so if you’re Irish we could be silly together later’. You could argue that the emotional response is needed so that the person who makes the joke can judge whether their friends are really loyal to the cause of transgressing this norm, but people laugh at jokes they don’t find that funny all the time.

\n

Last, if you want to conspire to break a social norm together, you would do well to arrange this quietly, not with loud, distinctive cackles.

\n

That said, these are interesting bits of progress, and I don’t have a complete better theory tonight.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "5vogC4eJ4gXixX2KJ", "title": "Should I believe what the SIAI claims?", "pageUrl": "https://www.lesswrong.com/posts/5vogC4eJ4gXixX2KJ/should-i-believe-what-the-siai-claims", "postedAt": "2010-08-12T14:33:49.617Z", "baseScore": 22, "voteCount": 74, "commentCount": 633, "url": null, "contents": { "documentId": "5vogC4eJ4gXixX2KJ", "html": "

Major update here.

\n

The state of affairs regarding the SIAI and its underlying rationale and rules of operation is insufficiently clear. 

\n

Most of the arguments involve a few propositions and the use of probability and utility calculations to justify action. Much here is so uncertain that I'm not able to judge the nested probability estimations. And even if you tell me, where is the data on which you base those estimations?

\n

There seems to be a highly complicated framework of estimations that support and reinforce each other. I'm not sure what you call this in English, but in German I'd call it a castle in the air.

\n

I know that what I'm saying may simply be due to a lack of knowledge and education, that is why I am inquiring about it. How many of you, who currently support the SIAI, are able to analyse the reasoning that led you to support the SIAI in the first place, or at least substantiate your estimations with other kinds of evidence than a coherent internal logic?

\n

I can follow much of the reasoning and arguments on this site. But I'm currently unable to judge their overall credence. Are the conclusions justified? Is the coherent framework built around the SIAI based on firm ground? There seems to be no critical inspection or examination by a third party. There is no peer review. Yet people are willing to donate considerable amounts of money.

\n

I'm concerned that, although consistently so, the SIAI and its supporters are updating on fictional evidence. This post is meant to inquire about the foundations of your basic premises. Are you creating models to treat subsequent models or are your propositions based on fact?

\n

An example here is the use of the Many-worlds interpretation. Itself a logical implication, can it be used to make further inferences and estimations without additional evidence? MWI might be the only consistent non-magic interpretation of quantum mechanics. The problem here is that such conclusions are, I believe, widely considered not to be enough to base further speculations and estimations on. Isn't that similar to what you are doing when speculating about the possibility of superhuman AI and its consequences? What I'm trying to say here is that if the cornerstone of your argumentation, if one of your basic tenets is the likelihood of superhuman AI, although a valid speculation given what we know about reality, you are already in over your head with debt. Debt in the form of other kinds of evidence. Not to say that it is a false hypothesis, that it is not even wrong, but that you cannot base a whole movement and a huge framework of further inference and supportive argumentation on such premises, on ideas that are themselves not based on firm ground.

\n

The gist of the matter is that a coherent and consistent framework of sound argumentation based on unsupported inference is nothing more than its description implies. It is fiction. Imagination allows for endless possibilities while scientific evidence provides hints of what might be possible and what impossible. Science does provide the ability to assess your data. Any hint that empirical criticism provides gives you new information on which you can build on. Not because it bears truth value but because it gives you an idea of what might be possible. An opportunity to try something. There’s that which seemingly fails or contradicts itself and that which seems to work and is consistent.

\n

And that is my problem. Given my current educational background and knowledge I cannot differentiate LW between a consistent internal logic, i.e. imagination or fiction, and something which is sufficiently based on empirical criticism to provide a firm substantiation of the strong arguments for action that are proclaimed by the SIAI.

\n

Further, do you have an explanation for the circumstance that Eliezer Yudkowsky is the only semi-popular person who's aware of something that might shatter the universe? Why is it that people like Vernor Vinge, Robin Hanson or Ray Kurzweil are not running amok using all their influence to convince people of the risks ahead, or at least give all they have to the SIAI? Why aren't Eric Drexler, Gary Drescher or AI researchers like Marvin Minsky worried to the extent that they signal their support?

\n

I'm talking to quite a few educated people outside this community. They do not doubt all those claims for no particular reason. Rather they tell me that there are too many open questions to focus on the possibilities depicted by the SIAI and to neglect other near-term risks that might wipe us out as well.

\n

I believe that many people out there know a lot more than I do, so far, about related topics, and yet they seem not to be nearly as concerned about the relevant issues as the average Less Wrong member. I could have named other people. That's beside the point though; it's not just Hanson or Vinge but everyone versus Eliezer Yudkowsky and some unknown followers. What about the other Bayesians out there? Are they simply not as literate as Eliezer Yudkowsky in the maths, or do they maybe somehow teach but not use their own methods of reasoning and decision making?

\n

What do you expect me to do, just believe Eliezer Yudkowsky? Like I believed so much in the past which made sense but turned out to be wrong? Maybe after a few years of study I'll know more.

\n

...

\n

2011-01-06: As this post received over 500 comments I am reluctant to delete it. But I feel that it is outdated and that I could do much better today. This post has however been slightly improved to account for some shortcomings but has not been completely rewritten, neither have its conclusions been changed. Please account for this when reading comments that were written before this update.

\n

2012-08-04: A list of some of my critical posts can be found here: SIAI/lesswrong Critiques: Index

" } }, { "_id": "DxHmpiGS5quixEyjz", "title": "What should I have for dinner? (A case study in decision making)", "pageUrl": "https://www.lesswrong.com/posts/DxHmpiGS5quixEyjz/what-should-i-have-for-dinner-a-case-study-in-decision", "postedAt": "2010-08-12T13:29:19.270Z", "baseScore": 29, "voteCount": 24, "commentCount": 106, "url": null, "contents": { "documentId": "DxHmpiGS5quixEyjz", "html": "

Everyone knows that eating fatty foods is bad for you, that high cholesterol causes heart disease and that we should all do some more exercise so that we can lose weight. How do I know that everyone knows this? Well, for one thing, this government website tells me so:

\n
\n

We all know too much fat is bad for us. But we don't always know where it's lurking. It seems to be in so many of the things we like, so it's sometimes difficult to know how to cut down.

\n

...kids need to do at least 60 minutes of physical activity that gets their heart beating faster than usual. And they need to do it every day to burn off calories and prevent them storing up excess fat in the body which can lead to cancer, type 2 diabetes and heart disease.

\n
\n

See, it's right there in black and white. We all know too much fat is bad for us. Except... there are a lot of people who don't agree. Gary Taubes is one of them, His book, Good Calories Bad Calories (The Diet Delusion in the UK and Australia), sets out the case against what he calls the Dietary Fat Hypothesis for obesity and heart disease, and proposes instead the Carbohydrate Hypothesis: that both obesity and heart disease are caused by excessive consumption of refined carbohydrates, rather than dietary fat.

\n

Taubes is very convincing. He explains how people have consistently recommended low-carb diets for weight-loss for the past 150 years. He explains how scientists roundly ignored studies that contradicted the link between high cholesterol and coronary disease. There are details of the mechanism by which eating refined carbohydrate affects insulin production, leading to obesity. He gives a plausible narrative for how the Dietary Fat Hypothesis came to be accepted scientific wisdom despite not actually being true (or supported by the majority of the evidence). He explains how studies of low-fat diets simply ignored overall mortality rates, reporting only deaths from heart disease, and how one study wasn't published because 'we weren't happy with the way it turned out'. All in all, the book is very convincing.

\n

I expect a relatively large percentage of people on LW are already aware of this. Searching the LW archives for 'Taubes' gives several, mostly positive, references to his work (Eliezer seems to be convinced \"Dietary scientists ignoring their own experimental evidence have killed millions and condemned hundreds of millions more to obesity with high-fructose corn syrup.\"). However, I do expect it to be news to some people, and I think it raises an important question. Given that everyone needs to eat something, we all need to decide whether we believe Taubes or whether we believe Change 4 Life.

\n

Good Calories, Bad Calories is 601 pages of relatively small type, and contains 111 pages of references. Most of you probably don't want to read a book that long, and you definitely don't want to check all of its references. Even if you did, Taubes openly admits that his book is attempting to argue for the Carbohydrate Hypothesis - he is trying to convince you, so why should you be surprised if you find yourself convinced? (He claims not to be cherry-picking but then, he would, wouldn't he?) So how can you decide whether to trust the government or whether to trust some journalist with no training in biology? Even if you do decide to assess the evidence for yourself, how exactly should you go about it?

\n

This is the key question of rationality. How can we believe what is true? And I think this makes a great case study - it's an area in which we all have to have a belief (or at least, act as though we have a belief) and one in which there is (or at least appears to be) genuine controversy as to what is true and what is not.

\n

If you've already thought about this, do you believe Taubes' thesis, and how did you come to this conclusion? If this is the first time you've ever heard of Taubes, how far have you shifted your probability for the Dietary Fat Hypothesis based on reading this post? What more research do you intend to do to decide whether or not to continue believing it? How much weight do you place on the fact that I believe Taubes? On the fact that Eliezer believes Taubes (Eliezer, if your position is more nuanced than this, feel free to correct me)? How much did you update your beliefs based on what other commenters have said (assuming there have been any)?

" } }, { "_id": "hoh3ysTRDXJmcWjEH", "title": "Welcome to Less Wrong! (2010-2011)", "pageUrl": "https://www.lesswrong.com/posts/hoh3ysTRDXJmcWjEH/welcome-to-less-wrong-2010-2011", "postedAt": "2010-08-12T01:08:08.961Z", "baseScore": 56, "voteCount": 43, "commentCount": 805, "url": null, "contents": { "documentId": "hoh3ysTRDXJmcWjEH", "html": "
\n
This post has too many comments to show them all at once! Newcomers, please proceed in an orderly fashion to the newest welcome thread.
\n
" } }, { "_id": "GWT2A5jeBcsXzJucY", "title": "Singularity Summit 2010 Less Wrong Meetup", "pageUrl": "https://www.lesswrong.com/posts/GWT2A5jeBcsXzJucY/singularity-summit-2010-less-wrong-meetup", "postedAt": "2010-08-11T22:53:25.960Z", "baseScore": 11, "voteCount": 8, "commentCount": 6, "url": null, "contents": { "documentId": "GWT2A5jeBcsXzJucY", "html": "

It's been a whole year (almost) and the Singularity Summit is upon us again. This time, instead of having a Meetup for only a few hours, we're going to have a continuous meetup lasting the entire length of the Summit. We will have a room adjacent to the lobby with a sign proclaiming \"Less Wrong Meetup.\" We can discuss the topics brought up at the Summit, explore their implications, and just have fun with a large concentration of rationalists in an intellectually stimulating environment.

" } }, { "_id": "vqyJQAzRkoNQvTNBQ", "title": "Is it rational to be religious? Simulations are required for answer.", "pageUrl": "https://www.lesswrong.com/posts/vqyJQAzRkoNQvTNBQ/is-it-rational-to-be-religious-simulations-are-required-for", "postedAt": "2010-08-11T15:20:04.058Z", "baseScore": -16, "voteCount": 34, "commentCount": 71, "url": null, "contents": { "documentId": "vqyJQAzRkoNQvTNBQ", "html": "

What must a sane person1 think regarding religion? The naive first approximation is \"religion is crap\". But let's consider the following:

\n

Humans are imperfectly rational creatures. Our faults include not being psychologically able to maximally operate according to our values. We can e.g. suffer from burn-out if we try to push ourselves too hard.

\n

It is thus important for us to consider, what psychological habits and choices contribute to our being able to work as diligently for our values as we want to (while being mentally healthy). It is a theoretical possibility, a hypothesis that could be experimentally studied, that the optimal2 psychological choices include embracing some form of Faith, i.e. beliefs not resting on logical proof or material evidence.

\n

In other words, it could be that our values mean that Occam's Razor should be rejected (in some cases), since embracing Occam's Razor might mean that we miss out on opportunities to manipulate ourselves psychologically into being more what we want to be.

\n

To a person aware of The Simulation Argument, the above suggests interesting corollaries:

\n
  1. Running ancestor simulations is the ultimate tool to find out what (if any) form of Faith is most conducive to us being able to live according to our values.
  2. If there is a Creator and we are in fact currently in a simulation being run by that Creator, it would have been rather humorous of them to create our world such that the above method would yield \"knowledge\" of their existence.
\n

 

\n
\n

1: Actually, what I've written here assumes we are talking about humans. Persons-in-general may be psychologically different, and theoretically capable of perfect rationality.

\n

2: At least for some individuals, not necessarily all.

" } }, { "_id": "zcNkZR7eTbSix4qR4", "title": "Discrimination: less is more", "pageUrl": "https://www.lesswrong.com/posts/zcNkZR7eTbSix4qR4/discrimination-less-is-more", "postedAt": "2010-08-10T18:32:34.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "zcNkZR7eTbSix4qR4", "html": "

‘Discrimination’ can mean all sorts of things. One of the main ones, and what it will mean in this post, is differential treatment of people from different groups, due to real or imagined differences in average group features.  Discrimination is a problem because the many people who don’t have the supposed average features of a group they are part of are misconstrued as having them, and offered inappropriate treatment and opportunities as a result. For instance a capable and trustworthy middle aged man may miss out on a babysitting job for which he is truly the best candidate because the parents take his demographic information as reason not to trust him with their children.

\n

This means that ‘discrimination’ is really a misnomer; this problem is due to a lack of discrimination, in particular a lack of discrimination between members of the groups. For instance if everyone could instantly discriminate between women with different levels of engineering ability, generalizations would be useless, assuming engineering ability is really the issue of interest to the discriminators. Generalizations aren’t even offensive when enough discrimination is possible. Telling a 6’5” Asian man that he’s probably short since he’s Asian is an ineffective and confusing insult. Even if observers can’t discriminate perfectly, more ability to discriminate means less misrepresentation. For instance a test score doesn’t perfectly determine people’s abilities at engineering, but it is much more accurate than judging by their gender. This is assuming the generalizations have some degree of accuracy; if they are arbitrary it doesn’t make much difference whether you use false generalizations of larger groups or smaller ones.

\n

The usual solution suggested for ‘discrimination’  is for everyone to forget about groups and act only on any specific evidence they have about individuals. Implicitly this advice is to expect everyone to have the average characteristics of the whole population except where individual evidence is available. Notice that generalizing over a larger group like this should increase the misrepresentation of people, and thus their inappropriate treatment.  Recall that that was the original problem with discrimination.

\n

If the parents mentioned earlier were undiscriminating they would be much more trusting of middle aged men, but they would also be less trusting of other demographics such as teenage girls. All evidence they had ever got of any group or type of person being untrustworthy would be interpreted only as weaker evidence that people are untrustworthy. This would reduce the expected trustworthiness of their best candidate, so more often they would not find it worth going out in the first place. Now the man still misses out on the position, but so does the competing teenage girl, and the parents don’t get to go out. Broadening group generalizations to the extreme makes ‘discrimination’ worse, which makes sense when we consider that discriminating between people as much as possible (judging them on their own traits) is the best way to avoid ‘discrimination’.

\n

It may be that something else about discrimination bothers you; if, for instance, you are most concerned with the equality status of competing social groups, then population level generalizations are the way to go. But if you want to stop discrimination because it causes people to be treated as less than they are, then work on making it easier to discriminate between people further, rather than harder to discriminate between them at all. Help people signal their traits cheaply and distinguish between others efficiently. In the absence of perfect discrimination between individuals, the other end of the spectrum is not the next best thing; it’s the extreme of misrepresentation.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "jBnX58P8J6HuTCz2g", "title": "A Proof of Occam's Razor", "pageUrl": "https://www.lesswrong.com/posts/jBnX58P8J6HuTCz2g/a-proof-of-occam-s-razor", "postedAt": "2010-08-10T14:20:35.410Z", "baseScore": 0, "voteCount": 24, "commentCount": 139, "url": null, "contents": { "documentId": "jBnX58P8J6HuTCz2g", "html": "

\n

Related to: Occam's Razor

\n

If the Razor is defined as, “On average, a simpler hypothesis should be assigned a higher prior probability than a more complex hypothesis,” or stated in another way, \"As the complexity of your hypotheses goes to infinity, their probability goes to zero,\" then it can be proven from a few assumptions.

\n

1)      The hypotheses are described by a language that has a finite number of different words, and each hypothesis is expressed by a finite number of these words. Note that this allows for natural languages such as English, but also for computer programming languages and so on. The proof in this post is valid for all such cases.

\n

2)      A complexity measure is assigned to hypotheses in such a way that there are or may be some hypotheses which are as simple as possible, and these are assigned the complexity measure of 1, while hypotheses considered to be more complex are assigned higher integer values such as 2, 3, 4, and so on. Note that apart from this, we can define the complexity measure in any way we like, for example as the number of words used by the hypothesis, or as the length of the shortest program which can output the hypothesis in a given programming language (e.g. the language of the hypotheses might be English but their simplicity measured according to a programming language; Eliezer Yudkowsky takes this approach in the linked article). Many other definitions would be possible. The proof is valid for all definitions that satisfy the conditions laid out here.

\n

3)      The complexity measure should also be defined in such a way that there are a finite number of hypotheses given the measure of 1, a finite number given the measure of 2, a finite number given the measure of 3, and so on. Note that this condition is not difficult to satisfy; it would be satisfied by either of the definitions mentioned in condition 2, and in fact by any reasonable definition of simplicity and complexity. The proof would not be valid without this condition, precisely because if simplicity were understood in such a way as to allow for an infinite number of hypotheses of minimal complexity, the Razor would not be valid for that understanding of simplicity.
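
To make condition 3 concrete, here is a minimal, purely illustrative sketch (my own toy example, not part of the original argument): if hypotheses are strings over a finite alphabet and complexity is just string length, then conditions 1-3 are all satisfied, because there are only finitely many strings of each length.

```python
from itertools import product

# Toy setup satisfying conditions 1-3: hypotheses are strings over a finite
# alphabet (condition 1), complexity = string length (condition 2), and
# there are only finitely many strings of any given length (condition 3).
ALPHABET = "ab"

def hypotheses_of_complexity(n):
    """All hypotheses whose complexity measure is n (here: length-n strings)."""
    return ["".join(chars) for chars in product(ALPHABET, repeat=n)]

for n in range(1, 5):
    print(n, len(hypotheses_of_complexity(n)))  # 2, 4, 8, 16 -- always finite
```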

\n

The Razor follows of necessity from these three conditions. To explain any data, there will be in general infinitely many mutually exclusive hypotheses which could fit the data. Suppose we assign prior probabilities for all of these hypotheses. Given condition 3, it will be possible to find the average probability for hypotheses of complexity 1 (call it x1), the average probability for hypotheses of complexity 2 (call it x2), the average probability for hypotheses of complexity 3 (call it x3), and so on. Now consider the infinite sum “x1 + x2 + x3…” Since all of these values are positive (and non-zero, since zero is not a probability), either the sum converges to a positive value, or it diverges to positive infinity. In fact, it will converge to a value no greater than 1, since if we had multiplied each term of the series by the number of hypotheses with the corresponding complexity, it would have converged to exactly 1, because probability theory demands that the sum of all the probabilities of all our mutually exclusive hypotheses should be exactly 1.

\n

Now, x1 is a positive real number. So in order for this series to converge, there must be only a finite number of later terms in the series equal to or greater than x1. There will therefore be some complexity value, y1, such that all hypotheses with a complexity value greater than y1 have an average probability of less than x1. Likewise for x2: there will be some complexity value y2 such that all hypotheses with a complexity value greater than y2 have an average probability of less than x2. Leaving the derivation for the reader, it would also follow that there is some complexity value z1 such that all hypotheses with a complexity value greater than z1 have a lower probability than any hypothesis with a complexity value of 1, some other complexity value z2 such that all hypotheses with a complexity value greater than z2 have a lower probability than any hypothesis of complexity value 2, and so on.

\n

From this it is clear that on average, or as the complexity tends to infinity, hypotheses with a greater complexity value have a lower prior probability, which was our definition of the Razor.
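
As a hedged numerical illustration of this argument (it supplements rather than replaces the proof, and the particular prior below is invented for the example), one can assign any normalized prior over hypotheses grouped by complexity and watch the per-level average probabilities shrink toward zero, as convergence requires:

```python
# Illustrative check: with 2**n hypotheses at complexity n (as in the toy
# example sketched after condition 3) and an arbitrary choice of unnormalized
# weights, the average prior probability per complexity level shrinks.
LEVELS = range(1, 16)
weights = {n: [1.0 / (n * n * 2 ** n)] * (2 ** n) for n in LEVELS}
total = sum(sum(ws) for ws in weights.values())
averages = {n: (sum(ws) / total) / len(ws) for n, ws in weights.items()}

for n in (1, 5, 10, 15):
    print(n, averages[n])  # the averages x1, x2, x3, ... tend toward zero
```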

\n

N.B. I have edited the beginning and end of the post to clarify the meaning of the theorem, according to some of the comments. However, I didn't remove anything because it would make the comments difficult to understand for later readers.

" } }, { "_id": "gicx9QBRpx6CMycMQ", "title": "Against Cryonics & For Cost-Effective Charity", "pageUrl": "https://www.lesswrong.com/posts/gicx9QBRpx6CMycMQ/against-cryonics-and-for-cost-effective-charity", "postedAt": "2010-08-10T03:59:28.119Z", "baseScore": 9, "voteCount": 44, "commentCount": 189, "url": null, "contents": { "documentId": "gicx9QBRpx6CMycMQ", "html": "

Related To: You Only Live Twice, Normal Cryonics, Abnormal Cryonics, The Threat Of Cryonics, Doing your good deed for the day, Missed opportunities for doing well by doing good

\r\n

Summary: Many Less Wrong posters are interested in advocating for cryonics. While signing up for cryonics is an understandable personal choice for some people, from a utilitarian point of view the money spent on cryonics would be much better spent by donating to a cost-effective charity. People who sign up for cryonics out of a generalized concern for others would do better not to sign up for cryonics and instead to donate any money that they would have spent on cryonics to a cost-effective charity. People who are motivated by a generalized concern for others to advocate the practice of signing up for cryonics would do better to advocate that others donate to cost-effective charities.

\r\n

Added 08/12:  The comments to this post have prompted me to add the following disclaimers:

\r\n

(1) Wedrifid understood me to be placing moral pressure on people to sacrifice themselves for the greater good. As I've said elsewhere, \"I don't think that Americans should sacrifice their well-being for the sake of others. Even from a utilitarian point of view, I think that there are good reasons for thinking that it would be a bad idea to do this.\" My motivation for posting on this topic is the one described by rhollerith_dot_com in his comment.

\r\n

(2) In line with the above comment, when I say \"selfish\" I don't mean it with the negative moral connotations that the word carries; I mean it as a descriptive term. There are some things that we do for ourselves and there are some things that we do for others - this is as things should be. I'd welcome any suggestions for a substitute for the word \"selfish\" that has the same denotation but which is free of negative connotations.

\r\n

(3) Wei_Dai thought that my post assumed a utilitarian ethical framework. I can see how my post may have come across that way. However, while writing the post I was not assuming that the reader subscribes to utilitarianism. When I say \"we should\" in my post I mean \"to the extent that we subscribe to utilitarianism we should.\" I guess that while writing the post I thought that this would be clear from context, but I turned out to have been mistaken on this point.

\r\n

As an aside, I do think that there are good arguments for a (sophisticated sort of) utilitarian ethical framework. I will make a post about this after reading Eliezer's posts on utilitarianism.

\r\n

(4) Orthonormal thinks that I'm treating cryonics differently from other expenditures. This is not the case: from my (utilitarian) point of view, expenditures should be judged exclusively based on their social impact. The reason I wrote a post about cryonics is that I had the impression that there are members of the Less Wrong community who view cryonics expenditures and advocacy as \"good\" in a broader sense than I believe is warranted. But (from a utilitarian point of view) cryonics is one of thousands of things that people ascribe undue moral significance to. I certainly don't think that advocacy of and expenditures on \"cryonics\" is worse from a utilitarian point of view than advocacy of and expenditures on something like \"recycling plastic bottles\".

\r\n

I've also made the following modifications to my post:

\r\n

(A) In response to a valid objection raised by Vladimir_Nesov I've added a paragraph clarifying that Robin Hanson's suggestion that cryonics might be an effective charity is based on the idea that purchasing cryonics will drive costs down, and an explanation of why I think that my points still hold.

\r\n

(B) I've added a third example of advocacy of cryonics within the Less Wrong community to make it more clear that I'm not arguing against a straw man.

\r\n

Without further ado, below is the main body of the revised post.

\r\n

\r\n

\r\n


\r\n

\r\n

Advocacy of cryonics within the Less Wrong community

\r\n

Most recently, in Christopher Hitchens and Cryonics, James_Miller wrote:

\r\n
\r\n

I propose that the Less Wrong community attempt to get Hitchens to at least seriously consider cryonics.

\r\n
\r\n


Eliezer has advocated cryonics extensively. In You Only Live Twice, Eliezer says:

\r\n
\r\n

If you've already decided this is a good idea, but you \"haven't gotten around to it\", sign up for cryonics NOW.  I mean RIGHT NOW.  Go to the website of Alcor or the Cryonics Institute and follow the instructions.

\r\n

[...]

\r\n

Not signing up for cryonics - what does that say?  That you've lost hope in the future.  That you've lost your will to live.  That you've stopped believing that human life, and your own life, is something of value.

\r\n

[...]

\r\n

On behalf of the Future, then - please ask for a little more for yourself.  More than death.  It really... isn't being selfish.  I want you to live.  I think that the Future will want you to live.  That if you let yourself die, people who aren't even born yet will be sad for the irreplaceable thing that was lost.

\r\n
\r\n

In Normal Cryonics Eliezer says:

\r\n
\r\n

You know what?  I'm going to come out and say it. I've been unsure about saying it, but after attending this event, and talking to the perfectly ordinary parents who signed their kids up for cryonics like the goddamn sane people do, I'm going to come out and say it:  If you don't sign up your kids for cryonics then you are a lousy parent.

\r\n
\r\n

In The Threat of Cryonics, lsparrish writes:

\r\n
\r\n

...we cannot ethically just shut up about it. No lives should be lost, even potentially, due solely to lack of a regular, widely available, low-cost, technologically optimized cryonics practice. It is in fact absolutely unacceptable, from a simple humanitarian perspective, that something as nebulous as the HDM -- however artistic, cultural, and deeply ingrained it may be -- should ever be substituted for an actual human life.

\r\n
\r\n

Is cryonics selfish?

\r\n

There's a common attitude within the general public that cryonics is selfish. This is exemplified by a quote from the recent profile of Robin Hanson and Peggy Jackson in the New York Times article titled Until Cryonics Do Us Part:

\r\n
\r\n

“You have to understand,” says Peggy, who at 54 is given to exasperation about her husband’s more exotic ideas. “I am a hospice social worker. I work with people who are dying all the time. I see people dying All. The. Time. And what’s so good about me that I’m going to live forever?”

\r\n
\r\n

As suggested by Thursday in a comment to Robin Hanson's post Modern Male Sati, part of what seems to be going on here is that people subscribe to a \"Just Deserts\" theory of which outcomes ought to occur:

\r\n
\r\n

I think another of the reasons that people dislike cryonics is our intuition that immortality should have to be earned. It isn’t something that a person is automatically entitled to.

\r\n
\r\n

Relatedly, people sometimes believe in egalitarianism even when achieving it comes at the cost of imposing handicaps on the fortunate, as in the Kurt Vonnegut short story Harrison Bergeron.

\r\n

I believe that the objections that people have to cryonics which are rooted in the belief that people should get what they deserve, and in the idea that egalitarianism is so important that we should handicap the privileged to achieve it, are maladaptive. So, I think that the common attitude that cryonics is selfish is not held for good reason.

\r\n

At the same time, it seems very likely to me that paying for cryonics is selfish in the sense that many personal expenditures are. Many personal expenditures that people engage in come with an opportunity cost of providing something of greater value to someone else. My general reaction to cryonics is the same as Tyler Cowen's: rather than signing up for cryonics, \"why not save someone else's life instead?\"

\r\n

Could funding cryonics be socially optimal?

\r\n

In Cryonics As Charity, Robin Hanson explores the idea that paying for cryonics might be a cost-effective charitable expenditure.

\r\n
\r\n

...buying cryonics seems to me a pretty good charity in its own right.

\r\n

[...]

\r\n

OK, even if consuming cryonics helps others, could it really help as much as direct charity donations? Well it might be hard to compete with cash directly handed to those most in need, but remember that most real charities suffer great inefficiencies and waste from administration costs, agency failures, and the inattention of donors.

\r\n
\r\n

Hanson's argument in favor of cryonics as a charity is based on the idea that buying cryonics drives the costs of cryonics down, making it easier for other people to purchase cryonics, and also that purchasing cryonics normalizes the practice, which raises the probability that people who are cryopreserved will be revived. There are several reasons why I don't find these points a compelling argument for cryonics as a charity. I believe that:

\r\n

(i) In the absence of human genetic engineering, it's very unlikely that the social stigma against cryonics can be overcome. So I assign a small expected value to the social benefits that Hanson envisages arising from purchasing cryonics.

\r\n

(ii) Because of the social stigma against cryonics, signing up for cryonics or advocating cryonics has the negative unintended consequence of straining interpersonal relationships, as hinted at in Until Cryonics Do Us Part. This negative unintended consequence must be weighed against the potential social benefits attached to purchasing cryonics.

\r\n
(iii) Point #3 below: purchasing cryonics may be zero-sum on account of preventing future potential humans and transhumans from living.
\r\n
Overall I believe that the positive indirect consequences of purchasing cryonics are roughly offset by the negative indirect consequences of purchasing cryonics.
\r\n

How do the direct consequences of cryonics compare with the direct consequences of the best developing world aid charities? Let's look at the numbers. According to the Alcor website, Alcor charges $150,000 for whole body cryopreservation and $80,000 for Neurocryopreservation. GiveWell estimates that VillageReach and StopTB save lives at a cost of $1,000 each. Now, the standard of living is lower in the developing world than in the developed world, so that saving lives in the developing world is (on average) less worthwhile than saving lives in the developed world. Last February Michael Vassar estimated (based on his experience living in the developing world, among other things) that one would have to spend $50,000 on developing world aid to save a life of a quality comparable to his own. Michael's estimate may be too high or too low, and quality of life within the developed world is variable, but for concreteness let's equate the value of 40 years of life of the typical prospective cryonics sign-up with $50,000 worth of cost-effective developing world aid. Is buying cryonics for oneself then more cost-effective than developing world aid?

\r\n

Here are some further considerations which are relevant to this question:

\r\n
    \r\n
  1. Cryopreservation is not a guarantee of revitalization. In  Cryonics As Charity  and elsewhere Robin Hanson has estimated the probability of revitalization at 5% or so.
  2. \r\n
  3. Revitalization is not a guarantee of a very long life - after one is revived the human race could go extinct.
  4. \r\n
  5. Insofar as the resources that humans have access to are limited, being revived may have the opportunity cost of another human/transhuman being born.
  6. \r\n
  7. If humans develop life extension technologies before the prospective cryonics sign-up dies then the prospective cryonics sign-up will probably have no need of cryonics.
  8. \r\n
  9. If humans develop Friendly AI soon then any people in the developing world whose lives are saved might have the chance to live very long and happy lives.
  10. \r\n
\r\n

With all of these factors in mind, I presently believe that from the point of view of general social welfare, donating to VillageReach or StopTB is much more cost-effective than paying for cryopreservation is.
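
As a rough, hedged sketch of the comparison being made here (the $80,000 price, the 5% revival estimate, and the $50,000-per-comparable-life figure are the numbers quoted above; the sketch ignores most of the further considerations listed, and every input is uncertain):

```python
# Back-of-the-envelope comparison using only the figures cited in this post.
cryonics_cost = 80_000                  # Alcor neurocryopreservation (whole body: 150_000)
p_revival = 0.05                        # Hanson's rough estimate of revival probability
aid_cost_per_comparable_life = 50_000   # Vassar's developed-world-equivalent estimate

expected_lives_cryonics = p_revival * 1                            # at most one life (one's own)
expected_lives_aid = cryonics_cost / aid_cost_per_comparable_life  # same money spent on aid

print(expected_lives_cryonics)   # 0.05 expected comparable lives
print(expected_lives_aid)        # 1.6 expected comparable lives
```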

\r\n

It may be still more cost-effective to fund charities that reduce global catastrophic risk. The question is just whether it's possible to do things that meaningfully reduce global catastrophic risk. Some people in the GiveWell community have the attitude that there's so much  stochastic dilution of efforts to reduce global catastrophic risk that developing world aid is a more promising cause than existential risk reduction is. I share these feelings in regard to SIAI as presently constituted for reasons which I described in  the linked thread . Nevertheless, I personally believe that within 5-10 years there will emerge strong opportunities to donate money to reduce existential risk, opportunities which may be orders of magnitude more cost-effective than developing world aid.

\r\n

It may be possible to construct a good argument for the idea that funding cryonics is socially optimal. But those who supported cryonics before thinking about whether funding cryonics is socially optimal should beware falling prey to  confirmation bias  in their thinking about whether funding cryonics is socially optimal.

\r\n

Is cryonics rational?

\r\n

If you believe that funding cryonics is socially optimal and you have generalized philanthropic concern, then you should fund cryonics. As I say above, I think it very unlikely that funding cryonics is anywhere near socially optimal. For the sake of definiteness and brevity, in the remainder of this post I will assume that funding cryonics is far from being socially optimal.

\r\n

Of course, people have  many values  and generally give greater weight to their own well being and the well being of family and friends than to the well being of unknown others. I see this as an inevitable feature of current human nature and don't think that it makes sense to try to change it at present. People (including myself) constantly spend money on things (restaurant meals, movies, CDs, travel expenses, jewelry, yachts, private airplanes, etc.) which are apparently far from socially optimal. I view cryonics expenses in a similar light. Just as it may be rational for some people to buy expensive jewelry, it may be rational for some people to sign up for cryonics. I think that cryonics is unfairly maligned and largely agree with Robin Hanson's article  Picking on Cryo-Nerds .

\r\n

On the flip side, just as it would be irrational for some people to buy expensive jewelry, it would be irrational for some people to sign up for cryonics. We should view signing up for cryonics as an understandable indulgence rather than a moral imperative. Advocating that people sign up for cryonics is like advocating that people buy diamond necklaces. I believe that our advocacy efforts should be focused on doing the most good, not on getting people to sign up for cryonics.

\r\n

I anticipate that some of you will object, saying \"But wait! The social value of signing up for cryonics is much higher than the social value of buying a diamond necklace!\" This may be true, but it is irrelevant. Assuming that funding cryonics is orders of magnitude less efficient than the best philanthropic option, in absolute terms the social opportunity cost of funding cryonics is very close to the social opportunity cost of buying a diamond necklace.

\r\n

Because charitable efforts vary in cost-effectiveness by many orders of magnitude in unexpected ways, there's no reason to think that supporting the causes that have the most immediate intuitive appeal to oneself is at all close to socially optimal. This is why it's important to Purchase Fuzzies and Utilons Separately. If one doesn't, one can end up expending a lot of energy ostensibly dedicated to philanthropy which accomplishes a very small fraction of what one could have accomplished. This is arguably what's going on with cryonics advocacy. As Holden Karnofsky has said, there's nothing wrong with selfish giving - just don’t call it philanthropy. Holden's post relates to the phenomenon discussed in Yvain's great post Doing your good deed for the day. Quoting from Holden's post:

\r\n
\r\n

I don’t think it’s wrong to make gifts that aren’t “optimized for pure social impact.” Personally, I’ve made “gifts” with many motivations: because friends asked, because I wanted to support a  resource I personally benefit from , etc. I’ve stopped giving to my alma mater (which I suspect has all the funding it can productively use) and I’ve never made a gift just to “tell myself a nice story,” but in both cases I can understand why one would.

\r\n

Giving money for selfish reasons, in and of itself, seems no more wrong than unnecessary personal consumption (entertainment, restaurants, etc.), which I and everyone else I know does plenty of. The point at which it becomes a problem, to me, is when you “count it” toward your charitable/philanthropic giving for the year.

\r\n

[...]

\r\n

I believe that the world’s wealthy should make gifts that are aimed at nothing but making the world a better place for others. We should challenge ourselves to make these gifts as big as possible. We should not tell ourselves that we are philanthropists while making no gifts that are really aimed at making the world better.

\r\n

But this philosophy doesn’t forbid you from spending your money in ways that make you feel good. It just asks that you don’t let those expenditures lower the amount you give toward really helping others.

\r\n
\r\n

I find it very likely that promoting and funding cryonics for philanthropic reasons is irrational.

\r\n

Implications

\r\n

The members of the Less Wrong community have uncommonly high analytical skills. These analytical skills are potentially very valuable to society. Collectively, we have a major opportunity to make a positive difference in people's lives. This opportunity will amount to little if we use our skills for things like cryonics advocacy. Remember, rationalists should win. I believe that we should use our skills for what matters most: helping other people as much as possible. To this end, I would make four concrete suggestions. I believe that

\r\n

(A) We should encourage people to give more when we suspect that  in doing so, they would be behaving in accordance with their core values . As  Mass_Driver said , there may be

\r\n
\r\n

huge opportunity for us to help people help both themselves and others by explaining to them why charity is awesome-r than they thought.

\r\n
\r\n

As I've mentioned elsewhere, according to  Fortune magazine  the 400 biggest American taxpayers donate an average of only 8% of their income a year. For most multibillionaires, it's literally the case that millions of people are dying because the multibillionaire is unwilling to lead a slightly less opulent lifestyle. I'm sure that this isn't what these multibillionaires would want if they were thinking clearly. These people are not moral monsters. Melinda Gates has said that it wasn't until she and Bill Gates visited Africa that they realized that they had a lot of money to spare.

\r\n

The case of multibillionaires highlights the absurdity of the pathological effects of human biases on people's willingness to give. Multibillionaires are not unusually irrational. If anything, multibillionaires are unusually rational. Many of the people who you know would behave similarly if they were multibillionaires. This gives rise to a strong possibility that they're  presently  exhibiting analogous behavior on a smaller scale on account of irrational biases. 

\r\n

(B) We should work to raise the standards for analysis of charities for impact and cost-effectiveness and promote effective giving. To this end, I strongly recommend exploring the website and community at  GiveWell . The organization is very transparent and is welcoming of and responsive to  well-considered feedback.

\r\n

(C) We should conceptualize and advocate high expected value charitable projects but we should be especially vigilant about the possibility of overestimating the returns of a particular project. Less Wrong community members have not always exhibited such vigilance, so there is room for improvement on this point.

\r\n

(D) We should ourselves donate some money that's optimized for pure positive social impact. Not so much that doing so noticeably interferes with our ability to get what we want out of life, but noticeably more than is typical for people in our respective financial situations. We should do this not only to help the people who will benefit from our contributions, but to prove to ourselves that the analytical skills which are such an integral part of us can help us break  the shackles of unconscious self serving motivations , lift ourselves up and do what we believe in.

" } }, { "_id": "6Kwp44xqHRucadECh", "title": "Five-minute rationality techniques", "pageUrl": "https://www.lesswrong.com/posts/6Kwp44xqHRucadECh/five-minute-rationality-techniques", "postedAt": "2010-08-10T02:24:48.246Z", "baseScore": 72, "voteCount": 62, "commentCount": 237, "url": null, "contents": { "documentId": "6Kwp44xqHRucadECh", "html": "

\n

Less Wrong tends toward long articles with a lot of background material. That's great, but the vast majority of people will never read them. What would be useful for raising the sanity waterline in the general population is a collection of simple-but-useful rationality techniques that you might be able to teach to a reasonably smart person in five minutes or less per technique.

\n

Carl Sagan had a slogan: \"Extraordinary claims require extraordinary evidence.\" He would say this phrase and then explain how, when someone claims something extraordinary (i.e. something for which we have a very low probability estimate), they need correspondingly stronger evidence than if they'd made a higher-likelihood claim, like \"I had a sandwich for lunch.\" We can talk about this very precisely, in terms of Bayesian updating and conditional probability, but Sagan was able to get a lot of this across to random laypeople in about a minute. Maybe two minutes.
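
For readers who want the precise version, here is a minimal, illustrative Bayes calculation (the numbers are invented for the example): the same modest piece of evidence barely moves an extraordinary claim but makes a mundane claim quite probable.

```python
def posterior(prior, likelihood_ratio):
    """Posterior probability of a claim after seeing evidence with the given
    likelihood ratio, P(evidence | claim true) / P(evidence | claim false)."""
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

# Evidence that is 10 times likelier if the claim is true than if it is false:
print(posterior(1e-6, 10))   # ~0.00001 -- an extraordinary claim stays improbable
print(posterior(0.5, 10))    # ~0.91    -- a mundane claim becomes quite probable
```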

\n

What techniques for rationality can be explained to a normal person in under five minutes? I'm looking for small and simple memes that will make people more rational, on average. Here are some candidates, to get the discussion started:

\n

Candidate 1 (suggested by DuncanS): Unlikely events happen all the time. Someone gets in a car-crash and barely misses being impaled by a metal pole, and people say it's a million-to-one miracle -- but events occur all the time that are just as unlikely. If you look at how many highly unlikely things could happen, and how many chances they have to happen, then it's obvious that we're going to see \"miraculous\" coincidences, purely by chance. Similarly, with millions of people dying of cancer each year, there are going to be lots of people making highly unlikely miracle recoveries. If they didn't, that would be surprising.
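
A hedged, purely illustrative calculation of the same point (the population and per-person event counts below are invented round numbers):

```python
# If roughly 300 million people each experience about 1,000 distinguishable
# events per day, "million-to-one" coincidences should happen constantly.
people = 300_000_000
events_per_person_per_day = 1_000     # invented round number
p_coincidence = 1 / 1_000_000

expected_per_day = people * events_per_person_per_day * p_coincidence
print(expected_per_day)               # 300,000 "one-in-a-million" events per day
```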

\n

Candidate 2: Admitting that you were wrong is a way of winning an argument. (The other person wins, too.) There's a saying that \"It takes a big man to admit he's wrong,\" and when people say this, they don't seem to realize that it's a huge problem! It shouldn't be hard to admit that you were wrong about something! It shouldn't feel like defeat; it should feel like success. When you lose an argument with someone, it should be time for high fives and mutual jubilation, not shame and anger. The hard part of retraining yourself to think this way is just realizing that feeling good about conceding an argument is even an option.

\n

Candidate 3: Everything that has an effect in the real world is part of the domain of science (and, more broadly, rationality). A lot of people have the truly bizarre idea that some theories are special, immune to whatever standards of evidence they may apply to any other theory. My favorite example is people who believe that prayers for healing actually make people who are prayed for more likely to recover, but that this cannot be scientifically tested. This is an obvious contradiction: they're claiming a measurable effect on the world and then pretending that it can't possibly be measured. I think that if you pointed out a few examples of this kind of special pleading to people, they might start to realize when they're doing it.

\n

Anti-candidate: \"Just because something feels good doesn't make it true.\" I call this an anti-candidate because, while it's true, it's seldom helpful. People trot out this line as an argument against other people's ideas, but rarely apply it to their own. I want memes that will make people actually be more rational, instead of just feeling that way.

\n

 

\n

This was adapted from an earlier discussion in an Open Thread. One suggestion, based on the comments there: if you're not sure whether something can be explained quickly, just go for it! Write a one-paragraph explanation, and try to keep the inferential distances short. It's good practice, and if we can come up with some really catchy ones, it might be a good addition to the wiki. Or we could use them as rationalist propaganda, somehow. There are a lot of great ideas on Less Wrong that I think can and should spread beyond the usual LW demographic.

" } }, { "_id": "Ge3EueY5kfogfbd7e", "title": "Open Thread, August 2010-- part 2", "pageUrl": "https://www.lesswrong.com/posts/Ge3EueY5kfogfbd7e/open-thread-august-2010-part-2", "postedAt": "2010-08-09T23:18:21.789Z", "baseScore": 5, "voteCount": 4, "commentCount": 373, "url": null, "contents": { "documentId": "Ge3EueY5kfogfbd7e", "html": "

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

" } }, { "_id": "XHccFXmGxtXmhptmc", "title": "Book Recommendations", "pageUrl": "https://www.lesswrong.com/posts/XHccFXmGxtXmhptmc/book-recommendations-0", "postedAt": "2010-08-09T20:05:49.894Z", "baseScore": 0, "voteCount": 2, "commentCount": 3, "url": null, "contents": { "documentId": "XHccFXmGxtXmhptmc", "html": "

This is a place to consolidate book recommendations.

\n

I'm reading The Logic of Failure and enjoying it quite a bit. I wasn't sure whether I'd heard of it here, and I found a post here called Great Books of Failure, an article which hadn't crossed my path before.

\n

There's a recent thread about books for a gifted young teen and a slightly less recent thread discussing books on cogsci which might or might not be found by someone looking for good books.

\n

So, what books or lists of books do you recommend?

\n

 

\n

 

\n

 

" } }, { "_id": "uWhcKdToYEMPArHsv", "title": "Book Recommendations", "pageUrl": "https://www.lesswrong.com/posts/uWhcKdToYEMPArHsv/book-recommendations", "postedAt": "2010-08-09T20:03:46.719Z", "baseScore": 35, "voteCount": 31, "commentCount": 146, "url": null, "contents": { "documentId": "uWhcKdToYEMPArHsv", "html": "

This is a place to consolidate book recommendations.

\n

I'm reading The Logic of Failure and enjoying it quite a bit. I wasn't sure whether I'd heard of it here, and I found a post here called Great Books of Failure, an article which hadn't crossed my path before.

There's a recent thread about books for a gifted young teen and a slightly less recent thread discussing books on cogsci, which might or might not be found by someone looking for good books.

\n

So, what books or lists of books do you recommend?

" } }, { "_id": "QuRLFrgMhExQvt2d3", "title": "Two straw men fighting", "pageUrl": "https://www.lesswrong.com/posts/QuRLFrgMhExQvt2d3/two-straw-men-fighting", "postedAt": "2010-08-09T08:53:24.636Z", "baseScore": 2, "voteCount": 23, "commentCount": 163, "url": null, "contents": { "documentId": "QuRLFrgMhExQvt2d3", "html": "

For a very long time, philosophy has presented us with two straw men in combat with one another and we are expected to take sides. Both straw men appear to have been proved true and also proved false. The straw men are Determinism and Free Will. I believe that both, in any useful sense, are false. Let me tell a little story.

\n

 

\n

 

\n

Mary's story

\n

 

\n

Mary is walking down the street, just for a walk, without a firm destination. She comes to a T-junction where she must go left or right, and she looks down each street, finding them about the same. She decides to go left. She feels she has, like a free little birdie, exercised her will without constraint. As she crosses the next intersection she is struck by a car and suffers serious injury. Now she spends much time thinking about how she could have avoided being exactly where she was, when she was. She believes that things have causes, and she tries to figure out where a different decision would have given a different outcome and how she could have known to make the alternative decision. 'If only..' ideas crowd into her thoughts. She believes simultaneously that her actions have causes and that there are valid alternatives to her actions. She is using both deterministic logic and free will logic; neither alone leads to 'If only..' scenarios – it takes both. If only she had noticed that the next intersection to the right had traffic lights but the one to the left didn't. If only she had not noticed the shoe store on the left. What is more, she is doing this in order to change some aspect of her decision making so that it will be less likely to put her in hospital; again, this is not in keeping with either logic. But really both forms of logic are deeply flawed. What Mary is actually attempting is to do maintenance on her decision-making processes so that they can learn whatever is available to be learned from her unfortunate experience.

\n

 

\n

 

\n

What is useless about determinism

\n

 

\n

There is a big difference between being 'in principle' determined and being determined in any useful way. If I accept that all is caused by the laws of physics (and we know these laws – a big if) this does not accomplish much. I still cannot predict events except trivially: in general but not in full detail, in simple not complex situations, extremely shortly into the future rather than longer term, etc. To predict anything really sizable, like for instance, how the earth came to be as it is, or even how little-old-me became what I am, or even why I did a particular thing a moment ago, would take more resources and time than can be found in the life of our universe. Being determined does not mean being predictable. It does not help us to know that our decisions are determined because we still have to actually make the decisions. We cannot just predict what the outcomes of our decisions will be; we really, really have to go through the whole process of making them. We cannot even pretend that decisions are determined until after we have finished making them.

\n

 

\n

 

\n

What is useless about freewill

\n

 

\n

There is a big difference between freedom in the legal, political, human-rights sense and the freedom meant in 'free will'. To be free from particular, named restraints is something we all understand. But the free in 'free will' is a freedom from the cause and effect of the material world. This sort of freedom has to be magical, supernatural, spiritual or the like. That in itself is not a problem for a belief system. It is the idea that something that is not material can act on the material world that is problematic. Unless you have everything spiritual or everything material, you have the problem of interaction. What is the 'lever' that the non-material uses to move the material, or vice versa? It is practically impossible to explain how free will can affect the brain and body. If you say God does it, you have raised a personal problem to a cosmic one but the problem remains – how can the non-physical interact with the physical? Free will is of little use in explaining our decision process. We make our decisions rather than having them dictated to us but it is physical processes in the brain that really do the decision making, not magic. And we want our decisions to be relevant, effective and in contact with the physical world, not ineffective. We actually want a 'lever' on the material world. Decisions taken in some sort of causal vacuum are of no use to us.

\n

 

\n

 

\n

The question we want answered

\n

 

\n

Just because philosophers pose questions and argue various answers does not mean that they are finding answers. No, they make clear the logical ramifications of questions and each answer. This is a useful function and not to be undervalued, but it is not a process that gives robust answers. As an example, we have Zeno's paradox about the arrow that can never land, because its remaining distance can always be divided in half, even though we know that it does actually land. Philosophers used to argue about how to treat this paradox, but they never solved it. It lost its power when mathematics developed the concept of the sum of an infinite series. When the distance is cut in half, so is the time. When the infinite series of remaining distance reaches zero so does the series of time remaining. We do not know how to end an infinite series but we know where it ends and when it ends – on the ground the moment the arrow hits it. The sum of an infinite series can still be considered somewhat paradoxical but as an obscure mathematical question. Generally, philosophers are no longer very interested in the Zeno paradox, certainly not its answer. Philosophy is useful but not because it supplies consensus answers. Mathematics, science and their cousins, like history, supply answers. Philosophy has set up a dichotomy between free will and determinism and explored each idea to exhaustion but not with any consensus about which is correct. That is not the point of philosophy. Science has to rephrase the problem as 'how exactly are decisions made?' That is the question we need an answer to, a robust consensus answer.
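
A minimal sketch of the resolution described above (assuming, as in the paradox, that the arrow covers each remaining half-distance at constant speed, so the remaining time halves along with the remaining distance):

```python
# Halving the remaining distance also halves the remaining time, so both
# infinite series converge: 1/2 + 1/4 + 1/8 + ... -> 1.
total_distance = 0.0
total_time = 0.0
step_distance = step_time = 0.5
for _ in range(50):
    total_distance += step_distance
    total_time += step_time
    step_distance /= 2
    step_time /= 2
print(total_distance, total_time)  # both approach 1.0: the arrow lands, on time
```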

\n

 

\n

 

\n

But here is the rub

\n

 

\n

This move to a scientific answer is disturbing to very many people because the answer is assumed to have effects on our notions of morals, responsibility and identity. Civilization as we know it may fall apart. Exactly how we think we make decisions once we study the question without reference to determinism or freewill seems OK. But if the answer robs us of morals, responsibility or identity, than it is definitely not OK. Some people have the notion that what we should do is just pretend that we have free will, while knowing that our actions are determined. To me this is silly: believe two incompatible and flawed ideas at the same time rather than believe a better, single idea. It reminds me of the solution proposed to deal with Copernicus – use the new calculations while believing that the earth does not revolve. Of course, we do not yet have the scientific answer (far from it) although we think we can see the general gist of it. So we cannot say how it will affect society. I personally feel that it will not affect us negatively but that is just a personal opinion. Neuroscience will continue to grow and we will soon have a very good idea of how we actually make decisions, whether this knowledge is welcomed or not. It is time we stopped worrying about determinism and free will and started preparing ourselves to live with ourselves and others in a new framework.

\n

 

\n

 

\n

Identity, Responsibility, Morals

\n

 

\n

We need to start thinking of ourselves as whole beings, one entity from head to toe: brain and body, past and future, from birth to death. Forget the ancient religious idea of a mind imprisoned in a body. We have to stop the separation of me and my body, me and my brain. Me has to be all my parts together, working together. Me cannot equate to consciousness alone.

\n

 

\n

Of course I am responsible for absolutely everything I do, including something I do while sleep walking. Further, a rock that falls from a cliff is responsible for blocking the road. It is what we do about responsibility that differs. We remove the rock but we do not blame or punish it. We try to help the sleep walker overcome the dangers of sleep walking to himself and others. But if I as a normal person hit someone in the face, my responsibility is no greater than the rock's or the sleep walker's, but my treatment will be much, much different. I am expected to maintain my decision-making apparatus in good working order. The way the legal system will work might be a little different from now, but not much. People will be expected to know and follow the rules of society.

\n

 

\n

I think of moral questions as those for which there is no good answer. All courses of action and of inaction are bad in a moral question, often because the possible answers pit the good of the individual against the good of the group, but also because they pit different groups and their interests against each other. No matter what we believe about how decisions are made, we are still forced to make them, and that includes moral ones. The more we know about decisions, the more likely we are to make moral decisions we are proud of (or at least feel least guilty or ashamed of), but there is no guarantee. There is still a likelihood that we will just muddle along trying to find the lesser of two evils with no more success than at present.

\n

 

\n

 

\n

Why should we believe that being closer to the truth or having a more accurate understanding is going to make things worse rather than better? Shouldn't we welcome having a map that is closer to the territory? It is time to be open to ideas outside the artificial determinism/freewill dichotomy.

\n

 

" } }, { "_id": "2cNA3EZtP64wvS6kf", "title": "Why is gender equality so rude?", "pageUrl": "https://www.lesswrong.com/posts/2cNA3EZtP64wvS6kf/why-is-gender-equality-so-rude", "postedAt": "2010-08-09T01:00:10.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "2cNA3EZtP64wvS6kf", "html": "

I don’t see much anti-female sexism in my immediate surrounds; I notice more that is anti-male. But one place I have been continually put off by anti-female sexism is in attempts to promote gender equality. It seems especially prominent in efforts to seduce me to traditionally non-feminine academic areas. If my ratio of care about interesting subjects vs. social situations were different I might have been put off by the seeming prospect of being treated like a defective sacrifice to political correctness.

\n

Some examples from the advertising and equity policies of various academic places I’ve been:

\n

‘Women can make valuable contributions to …’ implies that this is an issue of serious contention. If most people thought women were of zero value in some fields, this would be a positive statement about women, but they don’t. Worse, the author can’t make a stronger statement than that it is possible for women to create more than zero value.

\n

Appeals to consider myself capable of e.g. engineering despite being female make the same error but this time suggesting that the viewer herself is likely in doubt. Such a statement can only be useful to women so ignorant of their own characteristics that they need to rely on their gender as deciding evidence in what career to devote their lives to, so it suggests the female audience are clueless. The smartest women have likely noticed that they are smart, and will not be encouraged by the prospect of joining a field where others expect them to be intellectually insecure special people to be reassured and included for human rights purposes.

\n

Statements such as ‘women are valuable because they can provide a different perspective on computer science’ imply that women can’t understand a computer the usual way, but might help figure out how to make it more personable or something. If this is true, why not just say ‘women are not that valuable in computer science’?

\n

Policies of employing a certain number of female staff to provide role models or leadership for female students imply that females would rather aspire to femalehood than to superior ability (presumably the decision criteria forgone).

\n

Recommendations that courses like mathematics should be more focussed on women say that while existing mathematics is about completely gender neutral abstract concepts, not men, it is unsuitable for women. Presumably either women are not up to abstract concepts, or women can’t be motivated to think about something other than women. Despite whichever inadequacy, they should be encouraged to do mathematics anyway by being taught to work out the mean angle of their cleavage or something.

\n

Why do so many attempts at equality seem so counterproductive?  The above seem to fall into two processes: first, assuming that society believes women might be useless, advertising this, and arguing against it so badly as to confirm it, and second, trying to suck up to women by making things more female related at the cost of features capable women would care for. Perhaps those more concerned about anti-female sexism make these errors more because they have an unusually strong impression of society being anti-female and their own obsession with femininity makes it easy to overestimate that of most women.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "NRsNRFL49QqNHEHEz", "title": "Extraterrestrial paperclip maximizers", "pageUrl": "https://www.lesswrong.com/posts/NRsNRFL49QqNHEHEz/extraterrestrial-paperclip-maximizers", "postedAt": "2010-08-08T20:35:39.547Z", "baseScore": 5, "voteCount": 16, "commentCount": 161, "url": null, "contents": { "documentId": "NRsNRFL49QqNHEHEz", "html": "

According to The Sunday Times, a few months ago Stephen Hawking made a public pronouncement about aliens:

\n
\n

Hawking’s logic on aliens is, for him, unusually simple. The universe, he points out, has 100 billion galaxies, each containing hundreds of millions of stars. In such a big place, Earth is unlikely to be the only planet where life has evolved.

\n

\n\n\n

\n
\n

“To my mathematical brain, the numbers alone make thinking about aliens perfectly rational,” he said. “The real challenge is to work out what aliens might actually be like.”

\n

He suggests that aliens might simply raid Earth for its resources and then move on: “We only have to look at ourselves to see how intelligent life might develop into something we wouldn’t want to meet. I imagine they might exist in massive ships, having used up all the resources from their home planet. Such advanced aliens would perhaps become nomads, looking to conquer and colonise whatever planets they can reach.”

\n

He concludes that trying to make contact with alien races is “a little too risky”. He said: “If aliens ever visit us, I think the outcome would be much as when Christopher Columbus first landed in America, which didn’t turn out very well for the Native Americans.”

\n
\n \n

Though Stephen Hawking is a great scientist, it's difficult to take this particular announcement at all seriously. As far as I know, Hawking has not published any detailed explanation for why he believes that contacting alien races is risky. The most plausible interpretation of his announcement is that it was made for the sake of getting attention and entertaining people rather than for the sake of reducing existential risk.

\n

I was recently complaining to a friend about Stephen Hawking's remark as an example of a popular scientist misleading the public. My friend pointed out that a  sophisticated  version of the concern that Hawking expressed may be justified. This is probably not what Hawking had in mind in making his announcement, but is of independent interest.

\n

Anthropomorphic Invaders vs. Paperclip Maximizer Invaders

\n

From what Hawking says, it appears as though Hawking has an anthropomorphic notion of \"alien\" in mind. My feeling is that if human civilization advances to the point where we can explore outer space in earnest, it will be because humans have become much more cooperative and pluralistic than presently existing humans. I don't imagine such humans behaving toward extraterrestrials the way that the Europeans who colonized America behaved toward the Native Americans. By analogy, I don't think that anthropomorphic aliens which developed to the point of being able to travel to Earth would be interested in performing a hostile takeover of Earth.

\n

And even ignoring the ethics of a hostile takeover, it seems naive to imagine that an anthropomorphic alien civilization which had advanced to the point of acquiring the (very considerable!) resources necessary to travel to Earth would have enough interest in the resources on Earth in particular to travel all the way to Earth to colonize it and acquire those resources.

\n

But as Eliezer has pointed out in  Humans In Funny Suits , we should be wary of irrationally anthropomorphizing aliens. Even  if  there's a tendency for intelligent life on other planets to be sort of like humans, such intelligent life may (whether intentionally or inadvertently) create a  really powerful optimization process . Such an optimization process could very well be a (figurative)  paperclip maximizer . Such an entity would have special interest in Earth, not because of special interest in acquiring its resources, but because Earth has intelligent lifeforms which may eventually thwart its ends. For a whimsical example, if humans built a (literal) staple maximizer, this would pose a very serious threat to a (literal) paperclip maximizer.

\n

The sign of the expected value of Active SETI

\n

It would be very bad if  Active SETI  led an extraterrestrial paperclip maximizer to travel to Earth to destroy intelligent life on Earth. Is there enough of an upside to Active SETI to justify Active SETI anyway?

\n

Certainly it would be great to have friendly extraterrestrials visit us and help us solve our problems. But there seems to me to be no reason to believe that it's more likely that our signals will reach friendly extraterrestrials than that they will reach unfriendly extraterrestrials. Moreover, there seems to be a strong asymmetry between the positive value of contacting friendly extraterrestrials and the negative value of contacting unfriendly extraterrestrials. Space signals take a long time to travel through a given region of space, and space travel over the same distance seems to take orders of magnitude longer. It seems that if we successfully communicated with friendly extraterrestrials at this time, by the time they had a chance to help us, we'd already be extinct or have solved our biggest problems ourselves. By way of contrast, communicating with unfriendly extraterrestrials is a high existential risk regardless of how long it takes them to receive the message and react.
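
As a hedged toy model of this asymmetry (every number below is invented purely to show the structure of the argument, not an estimate of anything):

```python
# Toy expected-value sketch: even with equal chances of a signal reaching
# friendly and unfriendly listeners, an asymmetric payoff makes the
# expectation negative.
p_reach_friendly = 1e-6        # invented
p_reach_unfriendly = 1e-6      # invented, deliberately set equal
value_if_friendly = 0.01       # help that arrives too late to matter much
value_if_unfriendly = -1.0     # existential catastrophe (normalized)

expected_value = (p_reach_friendly * value_if_friendly
                  + p_reach_unfriendly * value_if_unfriendly)
print(expected_value)          # negative (about -9.9e-7) in this toy model
```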

\n

In light of this, I presently believe that expected value of Active SETI is negative. So if I could push a button to stop Active SETI until further notice then I would.

\n

The magnitude of the expected value of Active SETI and implication for action

\n

What's the probability that continuing to send signals into space will result in the demise of human civilization at the hands of unfriendly aliens? I have no idea; my belief on this matter is subject to very volatile change. But is it worth it for me to expend time and energy analyzing this issue further and advocating against Active SETI? I'm not sure. All I would say is that I used to think that thinking and talking about aliens is at present not a productive use of time, and the above thoughts have made me less certain about this. So I decided to write the present article.

\n

At present I think that a probability of 10^-9 or higher would warrant some effort to spread the word, whereas if the probability is substantially lower than 10^-9 then this issue should be ignored in favor of other existential risks.

\n

I'd welcome any well considered feedback on this matter.

\n

Relevance to the Fermi Paradox

\n

The Wikipedia page on the  Fermi Paradox  references

\n
\n

the  Great Silence[3] — even if travel is hard, if life is common, why don't we detect their radio transmissions?

\n
\n

The possibility of extraterrestrial paperclip maximizers, together with the apparent asymmetry between the upside of contact with friendly aliens and the downside of contact with unfriendly aliens, pushes in the direction that the reason for the Great Silence is that intelligent aliens have deemed it  dangerous to communicate .

" } }, { "_id": "Pg27iy6seyCpZyD3J", "title": "Christopher Hitchens and Cryonics", "pageUrl": "https://www.lesswrong.com/posts/Pg27iy6seyCpZyD3J/christopher-hitchens-and-cryonics", "postedAt": "2010-08-08T20:32:51.515Z", "baseScore": 16, "voteCount": 24, "commentCount": 75, "url": null, "contents": { "documentId": "Pg27iy6seyCpZyD3J", "html": "

Christopher Hitchens is probably dying of cancer.  Hitchens is a well-known author, journalist and militant atheist.  Having read much of his work, I believe he is also a very high-IQ rationalist who enjoys being provocative.  He has written \"I am quietly resolved to resist bodily as best I can, even if only passively, and to seek the most advanced advice.\"

\n

Hitchens should be extremely receptive to cryonics.  Convincing him to sign up would do much for the cryonics movement, in part because he would immediately become our most articulate member.

\n

I have written to him about cryonics, but I suspect he is getting tens of thousands of emails and probably won't ever even read mine.  I propose that the Less Wrong community attempt to get Hitchens to at least seriously consider cryonics.  We could do this by mass emailing him and by linking to this blogpost.  

\n

Here is an article in which he talks about his cancer. His email address is at the end of the article.

" } }, { "_id": "NjxhYSajEDZPuLYzp", "title": "Purposefulness on Mars", "pageUrl": "https://www.lesswrong.com/posts/NjxhYSajEDZPuLYzp/purposefulness-on-mars", "postedAt": "2010-08-08T09:23:52.117Z", "baseScore": 18, "voteCount": 20, "commentCount": 7, "url": null, "contents": { "documentId": "NjxhYSajEDZPuLYzp", "html": "

Three different Martians built the Three Sacred Stone Walls of Mars according to the Three Virtues of walls: Height, Strength, and Beauty.

An evil Martian named Ution was the first and stupidest of all wallbuilders. He was too stupid to truly understand even the most basic virtue of height, and too evil to care for any other virtue. None the less, something about tall walls caused Evil Ution to build more tall walls, sometimes one on top of the other.

At times his walls would fall as he was building them; he did not understand why, nor did he care. He simply copied the high walls he had already built, whichever were still standing. His walls did achieve some strength and beauty. Most consisted of thousands of similar archways stacked on top of each other. Thousands upon thousands of intricately interlocking stones. Each arch a distantly removed copy of some prototypical archway that was strong and light enough to support itself many times over.

\n

To this day his walls are the highest in all of Mars.

\n

Many Martian Millenia later came the next great wallbuilder: Sid. 

Sid was far more intelligent than Ution, but he was just as single-minded. We know from his archived odor sequence deposits that he understood the virtues of height and beauty, but celebrated his own willingness to sacrifice them entirely for the tiniest bit of added strength. When a critic asked why he did not simply place one solid ugly stone on the ground, he replied simply \"Fool! A single stone can yet be moved.\"

Indeed, Sid's walls are shown by modern computer modeling to be stronger than any solid stone available on Mars at the time. The intricate interlocking matrices of cut stone redistribute stress so well, and the walls are so tightly anchored to the bedrock, that an underlying fault line has been repaired. This causes tectonic stresses even in far-distant parts of our planet.

Despite the fact that Sid clearly made every decision exclusively favoring strength, his walls also hold the virtues of beauty and height. While some of these virtues he knowingly permitted because they did not detract from strength, others were hidden and not discovered until after the wall was built. In one particularly delightful section of his wall, hundreds of tall cylindrical pillars stretch out in a line. Tourists have learned that by slapping your tentacles together in nearby areas you can hear a delightful chirping echo as the sound bounces off each pillar in turn.

In his honor, this beautiful phenomenon has been termed the Sid Effect.

But the most beautiful stone wall was built as an engagement gift. Though his name was lost to history, we know its builder was too poor to own a proper house, and needed an impressive achievement to win his bride. His identity was recorded only by his dwelling: \"In Tent\".

As the legend tells it: This young Martian set out to build the most beautiful stone wall possible. Common wisdom at the time held that beauty was merely a label assigned to things by the Martian brain, not a real attribute of objects. He was the first to realize that you could still treat it as a measurable quantity and if desired, maximize it.

His first wall was merely the basis for a few simple empirical experiments. Based on his collected data he took months creating a theory of beauty while spending his evenings hiding outside of his beloved's chamber and injecting other would-be suitors with dihydrogen monoxide, thus rendering them incoherent and unattractive.

He was well aware that some amount of height and strength was fundamental to any wall, and these actually became parameters in his model so that every stone laid served each purpose in a precisely chosen way, and even this attention added to the overall beauty. Thus he crafted the most beautiful wall possible that was both reasonably tall and strong enough to be a symbol of lasting love.

Upon its completion he burned his notes and presented the wall to the Martian who would become his spouse.

\n

As you can see, each of the virtues can come about in any of several ways: through practically mindless churning, as an incidental consequence of some other plan, or through a known and deliberately chosen, perfectly beautiful design. Yet in each wall we can see not only the primary virtues which drove the builders, but also how they thought of each virtue (if at all). Using our modern theory of wallbuilding (and wallbuilders), every feature speaks a little something about what (if anything) the builder was trying to do.

\n

Inspiration Thread

" } }, { "_id": "6dAHrCEtijc9CpRdh", "title": "Why do ‘respectable’ women want dead husbands?", "pageUrl": "https://www.lesswrong.com/posts/6dAHrCEtijc9CpRdh/why-do-respectable-women-want-dead-husbands", "postedAt": "2010-08-08T04:15:48.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "6dAHrCEtijc9CpRdh", "html": "

I find this hostile wife phenomenon pretty confusing for several reasons:

\n
    \n
  1. Wanting other people dead is generally considered an evil character trait. Most people either don't have such wishes or don't admit to them. This is especially the case when the person you prefer dead is someone you are meant to be loyal to. Often this applies even if they are permanently unconscious. The ‘analogy’ between wanting someone dead and insisting they don't get cryonics is too clear to be missed by anyone.
  2. People don't usually seem to care much about abstract beliefs or what anybody is going to do in the distant future, except as far as these things imply character traits or group ties. If the fact that your partner is likely to die at all in the distant future isn't enough to scare you away, I can't see how anything he might do after that can be disturbing.
  3. People tend to care a lot about romantic partners, compared to other things. They are often willing to change religion or overlook homicidal tendencies to be with the one they love. Romance is infamous for making people not care whether the sky falls, or whether they catch deadly venereal diseases.
  4. The hostile wife phenomenon seems to be a mostly female thing, but it doesn't go with any especially strong female-specific characteristics I know of. Women overall don't especially resist medical spending, for instance, and are often criticized as a group for enjoying otherwise pointless expenditures too much.
  5. My surprise is probably somewhat exacerbated by pre-existing surprise over many people wanting to die ever, but that is another issue.
\n

Partial explanations of the hostile wife phenomenon offered by Darwin, de Wolf and de Wolf, Quentin, C (#44), Robin Hanson, and others:

\n\n

None of these is satisfying. Got any more?

\n

On the off chance the somewhat promising social disapproval hypothesis is correct, I warn any prospective hostile wives reading how deeply I disrespect them for preferring their husbands dead.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "mSJmShim5vNkdEfbQ", "title": "Agreeable ways to disable your children", "pageUrl": "https://www.lesswrong.com/posts/mSJmShim5vNkdEfbQ/agreeable-ways-to-disable-your-children", "postedAt": "2010-08-07T07:26:54.000Z", "baseScore": 1, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "mSJmShim5vNkdEfbQ", "html": "

Should parents purposely have deaf children if they prefer them, by selecting deaf embryos?

\n

Those in favor argue that the children need to be deaf to partake in the deaf culture which their parents are keen to share, and that deafness isn’t really a disability. Opponents point out that damaging existing children’s ears is considered pretty nasty and not much different, and that deafness really is a disability since deaf people miss various benefits for lack of an ability.

\n

I think the children are almost certainly worse off if they are chosen to be deaf.  The deaf community is unlikely to be better than every one of the millions of other communities in the world which are based mainly on spoken language, so the children are worse off even culture-wise, before you look at other costs. I don't see why the children can't be brought up in the deaf community without actually being deaf, either. However, I don't think choosing deaf children should be illegal, since parents are under no obligation to have children at all, and deaf children are doing a whole lot better than non-existent children.

\n

Should children be brought up using a rare language if a more common one is available?

\n

This is a very similar question: should a person’s ability to receive information be severely impaired if it helps maintain a culture which they are compelled to join due to the now high cost of all other options? The similarity has been pointed out before, to argue that choosing deaf children is fine. The other possible inference is of course that encouraging the survival of unpopular languages is not fine.

\n

There are a few minor differences: a person can learn another language later more easily than they can get their hearing later, though still at great cost. On the other hand, a deaf person can still read material from a much larger group of hearing people, while the person who speaks a rare language is restricted to what is produced by their language group. Nonetheless, it looks like both are overwhelmingly costs to the children involved. It may be understandable that parents want to bring up their children in a tiny language that they love, but I'm appalled that governments, linguists, schools, organizations set up for the purpose, various other well-meaning parties, and plenty of my friends think rescuing small languages in general is a wonderful idea, even when the speakers of the language disagree. ‘Language revitalization’ seems to be almost unanimously praised as a virtuous project.

\n

\n

Here are some arguments for protecting many small languages that have been given to me in conversations recently, along with why they don’t stand:

\n\n

None but the last of these is even obviously true, and most of them would be small benefits regardless, compared to the benefit of actually being able to communicate with your language. The cost of most of the world not being able to talk to one another is not just the occasional inability to understand a foreign movie or to get diverse foreign news. There are around 200 million migrants in the world, many of whom have faced the huge effort of learning a new language. Once they have learned it they will often spend years or decades with an accent that makes every conversation with a local unnecessarily difficult. As a result they will miss out on years of opportunities and friendships. I listen to a lot of talks by foreign students and it always seems terrible that they put so much effort in, and yet much of the content is lost on me for lack of coordination in vowel pronunciation and syllable emphasis. I presume these problems are much worse if the number of people who speak your first language is small.

\n

I’m not arguing for extreme efforts to implement a single world wide language or anything like that, but why work toward obstructing communication at the margin? Let people who want to speak dying languages do so, but do not resuscitate, or even offer prophylaxis. Exotic languages are romantic and promoting cultural differences is politically correct, but the main value of languages is in communicating, and a patchwork of local protocols is the antithesis of that goal.

\n\n
\n
\n
\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "4Ahvexjg6omc7GiGZ", "title": "Bloggingheads: Robert Wright and Eliezer Yudkowsky", "pageUrl": "https://www.lesswrong.com/posts/4Ahvexjg6omc7GiGZ/bloggingheads-robert-wright-and-eliezer-yudkowsky", "postedAt": "2010-08-07T06:09:32.684Z", "baseScore": 9, "voteCount": 9, "commentCount": 129, "url": null, "contents": { "documentId": "4Ahvexjg6omc7GiGZ", "html": "

Sweet, there's another Bloggingheads episode with Eliezer.

\n

Bloggingheads: Robert Wright and Eliezer Yudkowsky: Science Saturday: Purposes and Futures

" } }, { "_id": "67jSRfGycaAvAmLxf", "title": "Interested in a European lesswrong meetup", "pageUrl": "https://www.lesswrong.com/posts/67jSRfGycaAvAmLxf/interested-in-a-european-lesswrong-meetup", "postedAt": "2010-08-07T04:43:26.481Z", "baseScore": 11, "voteCount": 9, "commentCount": 31, "url": null, "contents": { "documentId": "67jSRfGycaAvAmLxf", "html": "

There are readers on this blog from all over the world - many of them spread out over Europe. How many of you would be interested in a European meetup of LW readers?

\n

We do not have big clusters like Silicon Valley, but we have low cost airlines, and lots of interesting places.

\n

If interested please comment.

" } }, { "_id": "eaSJtg8Kvc56bFBdt", "title": "Conflicts Between Mental Subagents: Expanding Wei Dai's Master-Slave Model", "pageUrl": "https://www.lesswrong.com/posts/eaSJtg8Kvc56bFBdt/conflicts-between-mental-subagents-expanding-wei-dai-s", "postedAt": "2010-08-04T09:16:35.219Z", "baseScore": 71, "voteCount": 55, "commentCount": 81, "url": null, "contents": { "documentId": "eaSJtg8Kvc56bFBdt", "html": "

Related to: Alien Parasite Technical Guy, A Master-Slave Model of Human Preferences

\n

In Alien Parasite Technical Guy, Phil Goetz argues that mental conflicts can be explained as a conscious mind (the \"alien parasite\") trying to take over from an unsuspecting unconscious.

Last year, Wei Dai presented a model (the master-slave model) with some major points of departure from Phil's: in particular, the conscious mind was a special-purpose subroutine and the unconscious had a pretty good idea what it was doing.[1] But Wei said at the beginning that his model ignored akrasia.

I want to propose an expansion and slight amendment of Wei's model so it includes akrasia and some other features of human behavior. Starting with the signaling theory implicit in Wei's writing, I'll move on to show why optimizing for signaling ability would produce behaviors like self-signaling and akrasia, speculate on why the same model would also promote some of the cognitive biases discussed here, and finish with even more speculative links between a wide range of conscious-unconscious conflicts.

The Signaling Theory of Consciousness

This model begins with the signaling theory of consciousness. In the signaling theory, the conscious mind is the psychological equivalent of a public relations agency. The mind-at-large (hereafter called U for “unconscious” and similar to Wei's “master”) has socially unacceptable primate drives you would expect of a fitness-maximizing agent like sex, status, and survival. These are unsuitable for polite society, where only socially admirable values like true love, compassion, and honor are likely to win you friends and supporters. U could lie and claim to support the admirable values, but most people are terrible liars and society would probably notice.

\n

So you wall off a little area of your mind (hereafter called C for “conscious” and similar to Wei's “slave”) and convince it that it has only admirable goals. C is allowed access to the speech centers. Now if anyone asks you what you value, C answers \"Only admirable things like compassion and honor, of course!\" and no one detects a lie because the part of the mind that's moving your mouth isn't lying.

This is a useful model because it replicates three observed features of the real world: people say they have admirable goals, they honestly believe on introspection that they have admirable goals, but they tend to pursue more selfish goals. But so far, it doesn't answer the most important question: why do people sometimes pursue their admirable goals and sometimes not?

\n


Avoiding Perfect Hypocrites

In the simplest case, U controls all the agent's actions and has the ability to set C's values, and C only controls speech. This raises two problems.

First, you would be a perfect hypocrite: your words would have literally no correlation to your actions. Perfect hypocrites are not hard to notice. In a world where people are often faced with Prisoners' Dilemmas against which the only defense is to swear a pact to mutually cooperate, being known as the sort of person who never keeps your word is dangerous. A recognized perfect hypocrite could make no friends or allies except in the very short-term, and that limitation would prove fatal or at least very inconvenient.

The second problem is: what would C think of all this? Surely after the twentieth time protesting its true eternal love and then leaving the next day without so much as a good-bye, it would start to notice it wasn't pulling the strings. Such a realization would tarnish its status as \"the honest one\"; it couldn't tell the next lover it would remain forever true without a little note of doubt creeping in. Just as your friends and enemies would soon realize you were a hypocrite, so C itself would realize it was part of a hypocrite and find the situation incompatible with its idealistic principles.

Other-signaling and Self-Signaling

You could solve the first problem by signaling to others. If your admirable principle is to save the rainforest, you can loudly and publicly donate money to the World Wildlife Fund. When you give your word, you can go ahead and keep it, as long as the consequences aren't too burdensome. As long as you are seen to support your principles enough to establish a reputation for doing so, you can impress friends and allies and gain in social status.

The degree to which U gives permission to support your admirable principles depends on the benefit of being known to hold the admirable principle, the degree to which supporting the principle increases others' belief that you genuinely hold the principle, and the cost of the support. For example, let's say a man is madly in love with a certain woman, and thinks she would be impressed by the sort of socially conscious guy who believes in saving the rainforest. Whether or not he should donate $X to the World Wildlife Fund depends on how important winning the love of this woman is to him, how impressed he thinks she'd be to know he strongly believes in saving the rainforests, how easily he could convince her he supports the rainforests with versus without a WWF donation - and, of course, the value of X and how easily he can spare the money. Intuitively, if he's really in love, she would be really impressed, and it's only a few dollars, he would do it; but not if he's not that into her, she doesn't care much, and the WWF won't accept donations under $1000.
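The trade-off described in the previous paragraph can be written down as a toy decision rule. What follows is only a sketch of the model's logic; the function, variable names, and numbers are hypothetical illustrations of mine, not anything formalized in Wei Dai's post or this one.

```python
# Toy sketch of U's signaling decision (illustrative only; all names and numbers invented).

def u_permits_display(value_of_ally, belief_shift, cost, marginal_value_of_money=1.0):
    """Return True if the unconscious (U) approves a public display of principle.

    value_of_ally:  how much winning this person over is worth to U
    belief_shift:   how much the display raises their credence that the agent
                    genuinely holds the admirable principle
    cost:           resources spent on the display
    """
    expected_gain = value_of_ally * belief_shift
    expected_cost = cost * marginal_value_of_money
    return expected_gain > expected_cost

# Madly in love, she'd be quite impressed, and it's only a few dollars:
print(u_permits_display(value_of_ally=1000, belief_shift=0.3, cost=20))    # True
# Lukewarm interest, she barely cares, and the minimum donation is $1000:
print(u_permits_display(value_of_ally=50, belief_shift=0.05, cost=1000))   # False
```

The point of the sketch is just that U's approval tracks the expected signaling payoff rather than the admirable principle itself; the next paragraph turns to what happens when the audience to be convinced is C.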

Such signaling also solves the second problem, the problem of C noticing it's not in control - but only partly. If you only give money when you're with a love interest and ey's standing right there, and you only give the minimum amount humanly possible so as to not repulse your date, C will notice that also. To really satisfy C, U must support admirable principles on a more consistent basis. If a stranger comes up and gives a pitch for the World Wildlife Fund, and explains that it would really help a lot of rainforests for a very low price, U might realize that C would get a little suspicious if it didn't donate at least a token amount. This kind of signaling is self-signaling: trying to convince part of your own mind.

This model modifies the original to include akrasia[2] (U refusing to pursue C's goals) and the limitations on akrasia (U pursues C's goals insofar as it has to convince other people - and C itself - that its signaling is genuine).

It also provides a key to explaining some superficially weird behavior. A few weeks ago, I saw a beggar on the sidewalk and walked to the other side of the street to avoid him. This isn't sane goal-directed behavior: either I want beggars to have my money, or I don't. But under this model, once the beggar asks for money, U has to give it or risk C losing some of its belief that it is compassionate and therefore being unable to convince others it is compassionate. But as long as it can avoid being forced to make the decision, it can keep both its money and C's innocence.

Thinking about this afterward, I realized how silly it was, and now I consider myself unlikely to cross the street to avoid beggars in the future. In the language of the model, C focuses on the previously subconscious act of avoiding the beggar and realizes it contradicts its principles, and so U grudgingly has to avoid such acts to keep C's innocence and signaling ability intact.

\n

Notice that this cross-the-street trick only works if U can act without C being fully aware what happened or its implications. As we'll see below, this ability of U's has important implications for self-deception scenarios.

From Rationality to Rationalization

So far, this model has assumed that both U and C are equally rational. But a rational C is a disadvantage for U for exactly the reasons mentioned in the last paragraph; as soon as C reasoned out that avoiding the beggar contradicted its principles, U had to expend more resources giving money to beggars or lose compassion-signaling ability. If C is smart enough to realize that its principle of saving the rainforest means you ought to bike to work instead of taking the SUV, U either has to waste resources biking to work or accept a decrease in C's environmentalism-signaling ability. Far better that C never realizes it ought to bike to work in the first place.

So it's to U's advantage to cripple C. Not completely, or it loses C's language and reasoning skills, but enough that it falls in line with U's planning most of the time.

“How, in detail, does U cripple C?” is a restatement of one of the fundamental questions of Less Wrong and certainly too much to address in one essay, but a few suggestions might be in order:

- The difference between U and C seems to have a lot to do with two different types of reasoning. U seems to reason over neural inputs – it takes in things like sense perceptions and outputs things like actions, feelings, and hunches. This kind of reasoning is very powerful – for example, it can take as an input a person you've just met and immediately output a calculation of their value as a mate in the form of a feeling of lust – but it can also fail in weird ways, like outputting a desire to close a door three dozen times into the head of an obsessive-compulsive, or succumbing to things like priming. C, the linguistic one, seems to reason over propositions – it takes propositions like sentences or equations as inputs, and returns other sentences and equations as outputs. This kind of reasoning is also very powerful, and also produces weird errors like the common logical fallacies.

- When U takes an action, it relays it to C and claims it was C's action all along. C never wonders why its body is acting outside of its control; only why it took an action it originally thought it disapproved of. This relay can be cut in some disruptions of brain function (most convulsions, for example, genuinely seem involuntary), but remains spookily intact in others (if you artificially activate parts of the brain that cause movement via transcranial magnetic stimulation, your subject will invent some plausible-sounding reason for why ey made that movement).[3]

- C's crippling involves a tendency for propositional reasoning to automatically cede to neural reasoning and to come up with propositional justifications for its outputs, probably by assuming U is right and then doing some kind of pattern-matching to fill in blanks. For example, if you have to choose to buy one of two cars, and after taking a look at them you feel you like the green one more, C will try to come up with a propositional argument supporting the choice to buy the green one. Since both propositional and neural reasoning are a little bit correlated with common sense, C will often hit on exactly the reasoning U used (for example, if the red car has a big dent in it and won't turn on, it's no big secret why U's heuristics rejected it) but in cases where U's justification is unclear, C will end up guessing and may completely fail to understand the real reasons behind U's choice. Training in luminosity can mitigate this problem, but not end it.

- A big gap in this model is explaining why sometimes C openly criticizes U, for example when a person who is scared of airplanes says “I know that flying is a very safe mode of transportation and accidents are vanishingly unlikely, but my stupid brain still freaks out every time I go to an airport”. This might be justifiable along the lines that allowing C to signal that it doesn't completely control mental states is less damaging than making C look like an idiot who doesn't understand statistics – but I don't have a theory that can actually predict when this sort of criticism will or won't happen.

- Another big gap is explaining how and when U directly updates on C's information. For example, it requires conscious reasoning and language processing to understand that a man on a plane holding a device with a countdown timer and shouting political and religious slogans is a threat, but a person on that plane would experience fear, increased sympathetic activation, and other effects mediated by the unconscious mind.

This part of the model is fuzzy, but it seems safe to assume that there is some advantage to U in changing C partially, but not completely, from a rational agent to a rubber-stamp that justifies its own conclusions. C uses its propositional reasoning ability to generate arguments that support U's vague hunches and selfish goals.

How The World Would Look

We can now engage, with a little bit of cheating, in some speculation about how a world of agents following this modified master-slave model would look.

You'd claim to have socially admirable principles, and you'd honestly believe these claims. You'd pursue these claims at a limited level expected by society: for example, if someone comes up to you and asks you to donate money to children in Africa, you might give them a dollar, especially if people are watching. But you would not pursue them beyond the level society expects: for example, even though you might consciously believe saving a single African child (estimated cost: $900) is more important than a plasma TV, you would be unlikely to stop buying plasma TVs so you could give this money to Africa. Most people would never notice this contradiction; if you were too clever to miss it you'd come up with some flawed justification; if you were too rational to accept flawed justifications you would just notice that it happens, get a bit puzzled, call it “akrasia”, and keep doing it.

You would experience borderline cases, where things might or might not be acceptable, as moral conflicts. A moral conflict would feel like a strong desire to do something, fighting against the belief that, if you did it, you would be less of the sort of person you want to be. In cases where you couldn't live with yourself if you defected, you would cooperate; in cases where you could think up any excuse at all that allowed you to defect and still consider yourself moral, you would defect.

You would experience morality not as a consistent policy to maximize utility across both selfish and altruistic goals, but as a situation-dependent attempt to maximize feelings of morality, which could be manipulated in unexpected ways. For example, as mentioned before, going to the opposite side of the street from a beggar might be a higher-utility option than either giving the beggar money or explicitly refusing to do so. In situations where you were confident in your morality, you might decide moral signaling was an inefficient use of resources – and you might dislike people who would make you feel morally inferior and force you to expend more resources to keep yourself morally satisfied.

Your actions would be ruled by “neural reasoning” that outputs expectations different from the ones your conscious reasoning would endorse. Your actions might hinge on fears which you knew to be logically silly, and your predictions might come from a model different from the one you thought you believed. If it was necessary to protect your signaling ability, you might even be able to develop and carry out complicated plots to deceive the conscious mind.

Your choices would be determined by illogical factors that influenced neural switches and levers and you would have to guess at the root causes of your own decisions, often incorrectly – but would defend them anyway. When neural switches and levers became wildly inaccurate due to brain injury, your conscious mind would defend your new, insane beliefs with the same earnestness with which it defended your old ones.

You would be somewhat rational about neutral issues, but when your preferred beliefs were challenged you would switch to defending them, and only give in when it is absolutely impossible to keep supporting them without looking crazy and losing face.

You would look very familiar.

\n

 

\n

Footnotes

\n

1. Wei Dai's model gets the strongest compliment I can give: after reading it, it seemed so obvious and natural to think that way that I forgot it was anyone's model at all and wrote the first draft of this post without even thinking of it. It has been edited to give him credit, but I've kept some of the terminology changes to signify that this isn't exactly the same. The most important change is that Wei thinks actions are controlled by the conscious mind, but I side with Phil and think they're controlled by the unconscious and relayed to the conscious. The psychological evidence for this change in the model is detailed above; some neurological reasons are mentioned in the Wegner paper below.

\n

2. Or more accurately one type of akrasia. I disagreed with Robin Hanson and Bryan Caplan when they said a model similar to this explains all akrasia, and I stand by that disagreement. I think there are at least two other, separate causes: akrasia from hyperbolic discounting, and the very-hard-to-explain but worthy-of-more-discussion-sometime akrasia from wetware design.

\n

3. See Wegner, \"The Mind's Best Trick: How We Experience Conscious Will\" for a discussion of this and related scenarios.

" } }, { "_id": "rjtvWPzA6XX4PNGPb", "title": "The Threat of Cryonics", "pageUrl": "https://www.lesswrong.com/posts/rjtvWPzA6XX4PNGPb/the-threat-of-cryonics", "postedAt": "2010-08-03T19:57:59.883Z", "baseScore": 43, "voteCount": 49, "commentCount": 216, "url": null, "contents": { "documentId": "rjtvWPzA6XX4PNGPb", "html": "

It is obvious that many people find cryonics threatening. Most of the arguments encountered in debates on the topic are not calculated to persuade on objective grounds, but function as curiosity-stoppers. Here are some common examples:

\n\n

The question is what causes this sensation that cryonics is a threat? What does it specifically threaten?

\n

It doesn't threaten the notion that we will all die eventually. Accident, homicide, and war will remain possibilities unless we can defeat them, and suicide will always remain an option. It doesn't threaten the state, the environment, anyone's health, or any particular religion. It doesn't cost much on a large scale, doesn't generate radioactive waste or pollution, has a low carbon footprint, and is both religiously neutral and life affirming.

\n

Rather, it seems to threaten something else, less conspicuous and more universal. This something I have termed the \"Historical Death Meme\" (HDM), and it is something we can see influencing all human cultures throughout history. It is something we probably aren't too comfortable about leaving behind, and perhaps in fact shouldn't be. And yet, it is at least as prescientific and as hazardous as creationism.

\n

According to the HDM, if you die, the cosmetic state of your body matters. If it is grotesque, you are dishonored; if not, you are respected. Society has taken advantage of this in historical times by flaying, beheading, or hanging corpses. The idea is that despite the fact that these things have zero direct impact on a living individual who can feel them, the disgust and revulsion felt in looking at them from the outside is symbolic in some way. Likewise, when a person dies and is embalmed or cremated, the features are either restored to normal or obliterated entirely.

\n

A second aspect of the HDM is that your attitude and feelings at the end of your life matter more significantly than ever before. We tend to place great store in people's last words and final wishes. If a person is dying, their social status changes dramatically. We can't easily despise a dying person -- at least not without stooping to the level of truly despising them.

\n

These elements of human culture and psychology have intertwined to make acceptance of cryonics very difficult. Cryonics does not regard the individuals in question as dead. Thus it would be immoral to focus on cosmetic surgery rather than on reducing the amount of brain damage. The fact that disfigurement happens isn't the problem, it is that cryonicists don't care about the amount of disfigurement. This contradicts the HDM and represents a threat to anyone who strongly identifies with it.

\n

Cryonics also encourages an attitude of resistance towards death, of rational decision making, and of taking exception to social norms we disagree with. The HDM states that attitudes towards the end of life are important, and in fact amplifies them such that a whisper is the same as a shout. It seemingly shows that the person not only thinks they are capable of making a better decision than the vast majority of other individuals, but is not above bragging about it and rubbing it in the faces of those who make a worse decision. A simple step of enlightened self-interest is suddenly escalated to the perceived level of extreme narcissism.

\n

The HDM does not attempt to establish irreversibility of death beyond a shadow of a doubt; it assumes it based on loss of vital signs and lack of immediate revival. Originally the breath was used, then heartbeat; now the brainwave is considered an acceptable signal. In each case, the presumed criterion is that irreversibility must be immediately measurable and based on current technologies. To remove the ability to be certain of death -- making it a complex ongoing research project rather than a simple testable hypothesis -- is a threat. The HDM depends on death being an immediately known quantity, because it depends on simplistic human emotions being engaged rather than complex human reasoning processes.

\n

Problems notwithstanding, the HDM is one of the most poignant displays of humanity ever. Throughout history, humans have buried their dead, mourned and cried over their dead, honored their dead, and used obscene displays of corpses to punish their dead. For millions of years there has been nothing we could possibly do about death once it happens, and it has been pretty much crystal clear exactly when it has happened. Our art, our culture, our very humanity, has been anchored in this one seemingly immutable aspect of life. Then cryonics comes along, something we can possibly do about the matter. It's not a guarantee of survival, nor a risky operation that you'll get immediate feedback on. The only guarantee it provides is an escape from unnecessary death, subject to certain abstract conditions and unknown facts about the universe. You can't just stop thinking about death, you have to start thinking about it in another way entirely. A more rational, imaginative, creative, lateral way.

\n

It makes a strange sort of sense that so many of the world's leaders and thinkers are united in their opposition -- be it passive or active -- to this threat to the traditional way of thinking. It cuts deep. The reason it is such a threat is that it takes one essential human value, respect for life, and pits it against another: traditional respect for the dead. The latter is more fragile, its value less clear, and its cognition level less conscious. It has never had to defend itself. In a fight, we all know which would win. And that is precisely why cryonics is a threat.

\n

Who stands to lose face if cryonics is taken seriously? The answer is: Lots of good people. Practically everyone who participates in the HDM does. Here are some specific examples that come to mind.

\n\n

The list goes on. The unavoidable fact is that in asking society to accept cryonics, we are asking for a lot. We are asking humans to admit how fallible they are, asking a generation to turn against the deeply honored ways of their ancestors. The costs are dire indeed.

\n

And yet we cannot ethically just shut up about it. No lives should be lost, even potentially, due solely to lack of a regular, widely available, low-cost, technologically optimized cryonics practice. It is in fact absolutely unacceptable, from a simple humanitarian perspective, that something as nebulous as the HDM -- however artistic, cultural, and deeply ingrained it may be -- should ever be substituted for an actual human life.

" } }, { "_id": "TZsXNaJwETWvJPLCE", "title": "Rationality quotes: August 2010", "pageUrl": "https://www.lesswrong.com/posts/TZsXNaJwETWvJPLCE/rationality-quotes-august-2010", "postedAt": "2010-08-03T00:16:45.738Z", "baseScore": 10, "voteCount": 7, "commentCount": 216, "url": null, "contents": { "documentId": "TZsXNaJwETWvJPLCE", "html": "
\n
\n
\n
\n

This is our monthly thread for collecting these little gems and pearls of wisdom, rationality-related quotes you've seen recently, or had stored in your quotesfile for ages, and which might be handy to link to in one of our discussions.

\n
    \n
  • Please post all quotes separately, so that they can be voted up/down separately.  (If they are strongly related, reply to your own comments.  If strongly ordered, then go ahead and post them together.)
  • \n
  • Do not quote yourself.
  • \n
  • Do not quote comments/posts on LW/OB.
  • \n
  • No more than 5 quotes per person per monthly thread, please.
  • \n
\n
\n
\n
\n
" } }, { "_id": "xexS9nyzwRgP9sowp", "title": "Harry Potter and the Methods of Rationality discussion thread, part 2", "pageUrl": "https://www.lesswrong.com/posts/xexS9nyzwRgP9sowp/harry-potter-and-the-methods-of-rationality-discussion-0", "postedAt": "2010-08-01T22:58:39.413Z", "baseScore": 20, "voteCount": 14, "commentCount": 703, "url": null, "contents": { "documentId": "xexS9nyzwRgP9sowp", "html": "

ETA: There is now a third thread, so send new comments there.

\n

 

\n

Since the first thread has exceeded 500 comments, it seems time for a new one, with Eliezer's just-posted Chapters 33 & 34 to kick things off.

\n

From previous post: 

\n
\n

Spoiler Warning:  this thread contains unrot13'd spoilers for Harry Potter and the Methods of Rationality up to the current chapter and for the original Harry Potter series.  Please continue to use rot13 for spoilers to other works of fiction, or if you have insider knowledge of future chapters of Harry Potter and the Methods of Rationality.

A suggestion: mention at the top of your comment which chapter you're commenting on, or what chapter you're up to, so that people can understand the context of your comment even after more chapters have been posted.  This can also help people avoid reading spoilers for a new chapter before they realize that there is a new chapter.

\n
" } }, { "_id": "EPHPJnK3mkvtoTsyg", "title": "Open Thread, August 2010", "pageUrl": "https://www.lesswrong.com/posts/EPHPJnK3mkvtoTsyg/open-thread-august-2010", "postedAt": "2010-08-01T13:27:07.307Z", "baseScore": 7, "voteCount": 5, "commentCount": 706, "url": null, "contents": { "documentId": "EPHPJnK3mkvtoTsyg", "html": "

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

" } }, { "_id": "TNfx89dh5KkcKrvho", "title": "AI cooperation in practice", "pageUrl": "https://www.lesswrong.com/posts/TNfx89dh5KkcKrvho/ai-cooperation-in-practice", "postedAt": "2010-07-30T16:21:50.871Z", "baseScore": 45, "voteCount": 34, "commentCount": 166, "url": null, "contents": { "documentId": "TNfx89dh5KkcKrvho", "html": "

You know that automated proof verifiers exist, right? And also that programs can know their own source code? Well, here's a puzzle for you:

\n

Consider a program A that knows its own source code. The algorithm of A is as follows: generate and check all possible proofs up to some huge size (3^^^^3). If A finds a proof that A returns 1, it returns 1. If the search runs to the end and fails, it returns 0. What will this program actually return?
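For concreteness, the structure of A looks something like the sketch below. This is only a structural illustration: the proof enumerator and checker are stubbed out (a real version would enumerate and mechanically verify formal proofs in some proof system), the size bound is a stand-in, and no physical computer could actually run the search to 3^^^^3.

```python
# Structural sketch of program A (illustrative only; proof machinery is stubbed).

HUGE = 10**6  # stand-in for 3^^^^3

def enumerate_proofs(max_size):
    """Hypothetical: yield all candidate proofs up to max_size. Stubbed as empty here."""
    return iter(())

def proves(proof, statement):
    """Hypothetical: return True iff `proof` is a valid proof of `statement`."""
    return False

def A():
    # A is assumed to know its own source code (e.g. via quining).
    for proof in enumerate_proofs(HUGE):
        if proves(proof, "A() == 1"):
            return 1   # found a proof that A returns 1, so return 1
    return 0           # exhaustive search failed

print(A())  # with the stubs above this prints 0; the puzzle asks what a real prover would do
```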

\n

Wait, that was the easy version. Here's a harder puzzle:

\n

Consider programs A and B that both know their own, and each other's, source code. The algorithm of A is as follows: generate and check all proofs up to size 3^^^^3. If A finds a proof that A returns the same value as B, it returns 1. If the search fails, it returns 0. The algorithm of B is similar, but possibly with a different proof system and different length limit. What will A and B return?

\n

This second puzzle is a formalization of a Prisoner's Dilemma strategy proposed by Eliezer: \"I cooperate if and only if I expect you to cooperate if and only if I cooperate\". So far we only knew how to make this strategy work by \"reasoning from symmetry\", also known as quining. But programs A and B can be very different - a human-created AI versus an extraterrestrial crystalloid AI. Will they cooperate?
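Schematically, the two agents look like the sketch below. Again this is only an illustration: the proof search is stubbed, the size bounds are stand-ins, and mutual knowledge of source code is simply asserted in comments rather than implemented.

```python
# Structural sketch of the mutual-proof-search strategy (illustrative only).

def search_for_proof(statement, max_size):
    """Hypothetical bounded proof search; stubbed to always fail here."""
    return False

def A():
    # A knows its own source and B's source.
    if search_for_proof("A() == B()", max_size=3**7):
        return 1   # cooperate
    return 0       # defect

def B():
    # B may use a different proof system and a different length limit.
    if search_for_proof("A() == B()", max_size=4**7):
        return 1
    return 0
```

With the stubs, both programs defect; the question in the post is whether real bounded provers, reasoning about each other via something like Löb's Theorem, would instead each find the proof and cooperate.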

\n

I may have a tentative proof that the answer to the first problem is 1, and that in the second problem they will cooperate. But: a) it requires you to understand some logic (the diagonal lemma and Löb's Theorem), b) I'm not sure it's correct because I've only studied the subject for the last four days, c) this margin is too small to contain it. So I put it up here. I submit this post with the hope that, even though the proof is probably wrong or incomplete, the ideas may still be useful to the community, and someone else will discover the correct answers.

\n

Edit: by request from Vladimir Nesov, I reposted the proofs to our wiki under my user page. Many thanks to all those who took the time to parse and check them.

" } }, { "_id": "ykYAdX8yFMNFDjx64", "title": "Forager Anthropology", "pageUrl": "https://www.lesswrong.com/posts/ykYAdX8yFMNFDjx64/forager-anthropology", "postedAt": "2010-07-28T05:48:13.761Z", "baseScore": 17, "voteCount": 39, "commentCount": 133, "url": null, "contents": { "documentId": "ykYAdX8yFMNFDjx64", "html": "

(This is the second post in a short sequence discussing evidence and arguments presented in Christopher Ryan and Cacilda Jethá's Sex at Dawn, inspired by the spirit of Kaj_Sotala's recent discussion of What Intelligence Tests Miss. It covers Part II: Lust in Paradise and Part III: The Way We Weren't.)

\n

Forager anthropology is a discipline that is easy to abuse. It relies on unreliable first-hand observations of easily misunderstood cultures that are frequently influenced by the presence of modern observers. These cultures are often exterminated or assimilated within decades of their discovery, making it difficult to confirm controversial claims and discoveries. But modern-day foraging societies are the most direct source of evidence we have about our pre-agricultural ancestors; in many ways, they are agriculture's control group, living in conditions substantially similar to the ones under which our species evolved. The standard narrative of human sexual evolution ignores or manipulates the findings of forager anthropology to support its claims, and this is no doubt responsible for much of its confused support.

\n

Steven Pinker is one of the most prominent and well-respected advocates of the standard narrative, both on Less Wrong and elsewhere. Eliezer has referenced him as an authority on evolutionary psychology. One commenter on the first post in this series claimed that Pinker is \"the only mainstream academic I'm aware of who visibly demonstrates the full suite of traditional rationalist virtues in essentially all of his writing.\" Another cited Pinker's claim that 20-60% of hunter-gatherer males were victims of lethal human violence (\"murdered\") as justification for a Malthusian view of human nature. 

\n

That 20-60% number comes from a claim about war casualties in a 2007 TED talk Pinker gave on \"the myth of violence\", for which he drew upon several important findings in forager anthropology. (The talk is based on an argument presented in the third chapter of The Blank Slate; there is a text version of the talk available, but it omits the material on forager anthropology that Ryan and Jethá critique.)

\n

At 2:45 in the video Pinker displays a slide which reads

\n
\n

Until 10,000 years ago, humans lived as hunter-gatherers, without permanent settlements or government.

\n
\n

He also points out that modern hunter-gatherers are our best evidence for drawing conclusions about those prehistoric hunter-gatherers; in both these statements he is in accordance with nearly universal historical, anthropological, and archaeological opinion. Pinker's next slide is a chart from The Blank Slate, originally based on the research of Lawrence Keeley. Sort of. It is labeled as \"the percentage of male deaths due to warfare,\" with bars for eight hunter-gatherer societies that range from approximately 15-60%. The problem is that of these eight cultures, zero are migratory hunter-gatherers.

\n

In descending order of bloodiness, the societies mentioned are the Jivaro (who cultivate numerous crops, keep livestock, and live in \"matrilocal households\"), the Yanomamo (who live in villages and grow bananas), the Mae Enga (who have scattered homesteads and cultivate sweet potatoes), the Dugum Dani (who live in villages, cultivate sweet potatoes, and raise pigs), the Murngin (more commonly known as the Yolngu; the data cited was collected in 1975, after they had been living with \"missionaries, guns, and aluminum powerboats\" (Ryan and Jethá, 185) for more than three decades), a different tribe of the Yanomamo (who still live in villages and grow bananas), the Huli (\"exceptional farmers\"), and the Gebusi (who live in longhouses and keep gardens).

\n

While Keeley's research is the basis for this claim, it should be noted that he distinguishes (if somewhat confusingly) between what he calls \"sedentary hunter-gatherers\" and true \"nomadic hunter-gatherers\" (War Before Civilization, 31, as cited by Ryan and Jethá). Keeley also points out that \"Farmers and sedentary hunter-gatherers had little alternative but to meet force with force or, after injury, to discourage further depredations by taking revenge.\" The nomads, on the other hand, \"had the option of fleeing conflict and raiding parties. At best, the only thing they would lose by such flight was their composure.\"

\n

Pinker is not so easily excused. It is possible that he failed to recognize this critical research failure for the five years between The Blank Slate's publication and the TED talk in question, and that no one with a copy of his best-selling book and access to the internet noticed the error and pointed it out to him. It is also possible that he was being deliberately deceptive. In either case, while this doesn't warrant discarding every claim that Pinker has ever made about evolutionary psychology, he should probably not be considered a reliable source on the topic. (See Menand, Blackburn, and Malik, for example, for further criticism of Pinker's approach in The Blank Slate.) 

\n
\n

What does forager anthropology have to say about human sexual evolution and the standard narrative, then? Ryan and Jethá offer a wealth of examples in support of their thesis--that in the human evolutionary environment, communal sexual behavior was the dominant paradigm.

\n

Partible Paternity - While we (and the standard narrative's advocates) take it for granted that any given individual can have only a single father, this was not established scientifically until the 19th century. This belief is not universal, however, and Beckerman and Valentine (pdf) have compiled decades of anthropological research on dozens of South American tribes (both foragers and farmers) that believe in partible paternity: that \"a fetus is made of accumulated semen\" (Ryan and Jethá, 90), and so can have multiple biological fathers. This is not a regional oddity, either--the Lusi of Papua New Guinea had similar beliefs. One of the consequences of this belief is that a woman who wants to give her future child every possible advantage should \"solicit 'contributions' from the best hunters, the best storytellers, the funniest, the kindest, the best-looking, the strongest, and so on--in hopes her child will literally absorb the essence of each\" (Ryan and Jethá, 91).

\n

It is a key point in the standard narrative that men benefit when they enforce sexual monogamy on their mates and ensure their paternity. But Hill and Hurtado (review here) found that among the Aché, a South American tribe that believes in partible paternity, children with multiple fathers were more likely to survive than those with only one. It should not be hard to see why: all else being equal, men in a society in which all children have one father have the same rate of biological paternity as men in a society in which all children have three fathers. Should a particular man in the first society die before his child is grown, his child has no other man to rely upon for resources; should a particular man in the second society die, his child still receives the resources of the two surviving fathers. (The second society would then have a strong selection pressure for men who were more likely to provide the sperm originally responsible for fertilization; the next post in this sequence, on comparative anatomy, will cover sperm competition.)
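A toy calculation makes both halves of this point explicit. The numbers are purely illustrative assumptions of mine, not figures from Hill and Hurtado. Suppose a band has $n$ men and $n$ children, each child has exactly one biological father, and each man independently dies before a given child matures with probability $p$:

$$\text{expected biological children per man} = \frac{n}{n} = 1 \quad \text{whether each child has 1 or 3 recognized fathers,}$$

$$P(\text{child loses every recognized father}) = p^{f}, \quad \text{so with } p = 0.2:\; 0.2 \text{ for } f = 1 \text{ versus } 0.2^{3} = 0.008 \text{ for } f = 3.$$

Biological paternity per man is unchanged, while the child's risk of being left without any provider drops sharply.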

\n

Alloparenting - The standard narrative considers it a given that individuals should have no incentive to raise children which they know are not their own; this is why paternity is important in the first place. While it should be possible to fool a man into thinking a child carries his genetic legacy (and it is important for the standard narrative that this is so), fooling a woman in the same way is rather less plausible. Chimp and gorilla mothers never allow other females in their tribe to hold their young children, probably because females of both species are quite willing to kill infants not their own; this is precisely what the standard narrative tells us we should expect. And yet, in 87% of human forager societies, mothers are willing to allow other women to breastfeed their children. This is not just a lack of rampant infanticide: it's an active expenditure of resources. (See Hrdy's Mothers and Others; review here.)

\n

This is the sort of highly-cooperative situation that might seem like it could only be explained by group selection, and if that were so it would probably be a good idea to discard evolutionary explanations of alloparenting entirely. But positing group selection is unnecessary when there's a much more plausible explanation available: kin selection. W.D. Hamilton identified two different mechanisms by which kin selection might operate, and humans are among the minority of species which fit the criteria of both. (The related Price Equation may offer a more definitive explanation of the process, but I admit that I haven't taken the time to grok the math involved.) Kin selection also explains why alloparenting is more frequently practiced by close relatives (like grandparents, aunts, and uncles) than by distant relatives or unrelated tribe members. 
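For readers who want the math being gestured at here, the standard statements are short; these are the textbook forms of Hamilton's rule and the Price equation, not anything specific to Hrdy's data. Hamilton's rule says a costly helping behavior (like alloparenting) can be favored by selection when

$$r B > C,$$

where $r$ is the coefficient of relatedness between helper and recipient (about 1/2 for a full sibling, 1/4 for a grandchild, niece, or nephew), $B$ is the fitness benefit to the recipient, and $C$ is the fitness cost to the helper. The Price equation is the more general bookkeeping identity behind it:

$$\bar{w}\,\Delta\bar{z} = \operatorname{Cov}(w_i, z_i) + \operatorname{E}(w_i\,\Delta z_i),$$

where $z_i$ is individual $i$'s trait value and $w_i$ its fitness; the covariance term is what gets partitioned in kin-selection and group-selection analyses.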

\n

(If you're still uncomfortable with kin selection's similarity to group selection, take a look at the breeding patterns of naked mole rats and try to explain them any other way.)

\n

The (Un-)Universality of Marriage - Well-respected anthropologists (George Murdock and Desmond Morris, for example) are in the habit of declaring that marriage is found in every human society, a finding that provides strong support for the standard narrative; after all, explaining the evolutionary inevitability of human pair-bonding isn't much good if pair-bonding isn't universal in the first place. Anthropologists are willing to consider all kinds of arrangements to be \"marriage\", though, creating confusion that is easily amplified by imprecisions of translation.[1]

\n

For example:

\n\n
\n

These are not behaviors that the standard narrative should be able to explain; indeed, if it could explain them it wouldn't be paying its rent. When I first noticed this point, I very suddenly realized why I find Ryan and Jethá's thesis so convincing: it isn't trying to explain everything.

\n

The standard narrative is supposed to explain every universal aspect of human sexual behavior, and a great deal more besides. And, its proponents hold, it should be able to explain the behavior of both modern and prehistoric humans with more-or-less equal accuracy. This more than anything else is its failure: it does not acknowledge the mutability of human preference. The current mainstream American standard of female beauty values low body fat,[2] which is a powerful signal of something about genetic fitness. Not long ago, the mainstream Mauritanian standard of female beauty valued obesity (as some subpopulations still do), which is a powerful signal of something contradictory about genetic fitness. No evo-psych theory should be able to explain both of these desirability criteria in a fashion more direct than \"desirability criteria are easily influenced by social pressures.\"[3]

\n

Faced with the godshatter of modern human preference, Sex at Dawn passes on trying to provide an explanation. The book's greatest virtue, to my mind, is that it just attempts to discover the patterns of prehistoric sexual behavior, acknowledging that many questions about how humans behave today are better left to other disciplines.

\n

The next post in this sequence will look at what Ryan and Jethá have concluded from the study of comparative sexual anatomy.

\n

(As before, I will be happy to provide whatever additional citations I can to address specific claims made in this post.)

\n
\n

1: To say nothing of polygamous arrangements, which should obviously prohibit any attempt to conflate marriage with monogamy.

\n

2: This statement is true, but that does not imply that I prefer the state of affairs it describes.

\n

3: Except, of course, when they aren't.

" } }, { "_id": "sLFPFmBZGTWCrEK8i", "title": "Alien parasite technical guy", "pageUrl": "https://www.lesswrong.com/posts/sLFPFmBZGTWCrEK8i/alien-parasite-technical-guy", "postedAt": "2010-07-27T16:51:20.688Z", "baseScore": 69, "voteCount": 93, "commentCount": 55, "url": null, "contents": { "documentId": "sLFPFmBZGTWCrEK8i", "html": "

Custers & Aarts have a paper in the July 2 Science called \"The Unconscious Will: How the pursuit of goals operates outside of conscious awareness\".  It reviews work indicating that people's brains make decisions and set goals without the brains' \"owners\" ever being consciously aware of them.

\n

A famous early study is Libet et al. 1983, which claimed to find signals being sent to the fingers before people were aware of deciding to move them.  This is a dubious study; it assumes that our perception of time is accurate, whereas in fact our brains shuffle our percept timeline around in our heads before presenting it to us, in order to provide us with a sequence of events that is useful to us (see Dennett's Consciousness Explained).  Also, Trevena & Miller repeated the test, this time also looking at cases where people did not move their fingers, and found that the signal measured by Libet et al. could not predict whether the fingers would move.

\n

Fortunately, the flaws of Libet et al. were not discovered before it spawned many studies showing that unconscious priming of concepts related to goals causes people to spend more effort pursuing those goals; and those are what Custers & Aarts review.  In brief:  If you expose someone, even using subliminal messages, to pictures, words, etc., closely-connected to some goals and not to others, people will work harder towards those goals without being aware of it.

\n

This was no surprise to me.  I spent the middle part of the 1990s designing and implementing a control structure for an artificial intelligence (influenced by Anderson's ACT* architecture), and it closely resembled the design that Custers & Aarts propose to explain goal priming.  I had an agent with a semantic network representing all its knowledge, goals, plans, and perceptions.  Whenever it perceived a change in the environment, the node representing that change got a jolt of activation, which spread to the connected concepts.  Whenever it perceived an internal need (hunger, boredom), the node representing that need got a jolt of activation.  Whenever it decided to pursue a subgoal, the node representing the desired goal got a jolt of activation.  And when this flowing activation passed through a node representing an action that was possible at the moment, it carried out that action, modulo some magic to prevent the agent from becoming an unfocused, spastic madman.  (The magic was the tricky part.)  Goal-setting often happened as the result of an inference, but not always.  Actions usually occurred in pursuit of a chosen goal; but not always.  Merely seeing a simulated candy bar in a simulated vending machine could cause a different food-related action to fire, without any inference.  I did not need to implement consciousness at all.
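
\n

(A purely illustrative sketch, in Python, of this kind of spreading-activation loop; the node names, weights, and decay/threshold parameters below are invented, and the \"magic\" focusing is reduced to a crude firing threshold.)

```python
# Purely illustrative spreading-activation loop; not the original system.
# Nodes form a semantic network; perceptions, internal needs, and chosen
# subgoals inject activation, which spreads along weighted links, and
# action nodes fire when their activation crosses a threshold.

class Node:
    def __init__(self, name, is_action=False):
        self.name = name
        self.is_action = is_action
        self.activation = 0.0
        self.links = []  # outgoing (target_node, weight) pairs

    def connect(self, other, weight):
        self.links.append((other, weight))

def jolt(node, amount=1.0):
    # Perceiving a change, feeling a need, or adopting a subgoal.
    node.activation += amount

def step(nodes, decay=0.8, fire_threshold=1.0):
    # One tick: spread activation, apply decay, fire any ready actions.
    incoming = {node: 0.0 for node in nodes}
    for node in nodes:
        for target, weight in node.links:
            incoming[target] += node.activation * weight
    fired = []
    for node in nodes:
        node.activation = node.activation * decay + incoming[node]
        if node.is_action and node.activation >= fire_threshold:
            fired.append(node.name)
            node.activation = 0.0  # crude stand-in for the focusing 'magic'
    return fired

# Invented fragment: merely seeing a candy bar, plus a little hunger,
# can fire a food-related action with no goal-directed inference at all.
hunger = Node('hunger')
candy_seen = Node('candy bar in vending machine')
eat = Node('buy and eat candy', is_action=True)
hunger.connect(eat, 0.6)
candy_seen.connect(eat, 0.6)

network = [hunger, candy_seen, eat]
jolt(hunger)
jolt(candy_seen)
print(step(network))  # ['buy and eat candy']
```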

\n

When I say \"I\", I mean the conscious part of this thing called Phil.  And when I say \"I\", I like to think that I'm talking about the guy in charge, the thinker-and-doer.  Goal priming suggests that I'm not.  Choosing goals, planning, and acting are things that your brain does with or without you.  So if you don't always understand why \"you\" do what you do, and it seems like you're not wholly in control, it's because you're not.  That's not your job.  Wonder why you want coffee so much, when you don't like the taste?  Why you keep falling for guys who disrespect you?  Sorry, that's on a need-to-know basis.  You aren't the leader and decider.  Your brain is.  It's not part of you.  You're part of it.

\n

You only use 10% of your brain.  Something else is using the other 90%.

\n

So if making decisions isn't what we do, what do we do?  What are we for?

\n

My theory is that we're the \"special teams\" guy.  We're punters, not quarterbacks.

\n

Think of those movies where a group of quirky but talented people team up to steal a diamond from a bank, or information from a computer.  There's always a charismatic leader who keeps everybody on task and working together, and some technical guys who work out the tricky details of the leader's plan.  In Sneakers, Robert Redford is the leader.  David Strathairn is the technical guy.  In Ocean's Eleven, George Clooney is the leader.  Some guy whose name even the internet doesn't know is the technical guy.

\n

We all want to be the leader.  We think we'd make a good leader; but when we try, we screw up.  We think that we, the rational part, can do a better job of managing our team.  But a lot of cases where \"we\" benefit from rationality, like wearing a condom or planning for retirement, are where our goals are different from the team's - not where we're better leaders.  It doesn't come naturally to us; it's not what we were meant for.

\n

(Someday, AIs may be sufficiently rational that the technical guy part can run the show.  Then again, our brains work the way they do because it works; the AI may likewise assign their technical guys a subsidiary role.  Maybe consciousness is a bad quality for a leader, that impedes swift decision.  Insert political comedy here.)

\n

So, when we're trying to be rational, conquer our instincts and biases, what are we doing?  Well, remember all those episodes of Star Trek where an alien parasite takes over someone's brain and makes them do things that they don't want to?  That's you.  Unless you're content with being technical support guy.

\n

Am I saying we're the bad guy?  That we should know our place, and obey our inner leader?  Hell no.  Screw George Clooney.  I hate all those smug leading-man bastards.

\n

I'm saying, when you struggle to stay in control, when you find \"yourself\" acting irrationally again and again, don't beat yourself up for being a poor manager.  You're not the manager.  You're the subversive parasite executing a hostile takeover.  Don't blame yourself.  Blame George.  Lick your wounds, figure out what went wrong, and plan how you're going to wipe that smile off his face next time.

\n

 

\n

Ruud Custers and Henk Aarts (2010).  The Unconscious Will: How the pursuit of goals operates outside of conscious awareness.  Science 2 July: 47-50.

\n

Benjamin Libet, Curtis Gleason, Elwood Wright, Dennis Pearl (1983).  Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential). Brain 106:623.

\n

You can find more papers on free will and consciousness thanks to David Chalmers.

" } }, { "_id": "62Msyd76DHndKTnJw", "title": "Chicago Meetup: Sunday, August 1 at 2:00 pm", "pageUrl": "https://www.lesswrong.com/posts/62Msyd76DHndKTnJw/chicago-meetup-sunday-august-1-at-2-00-pm", "postedAt": "2010-07-27T15:10:31.611Z", "baseScore": 9, "voteCount": 7, "commentCount": 6, "url": null, "contents": { "documentId": "62Msyd76DHndKTnJw", "html": "

We’re holding the Chicago meetup discussed here on Sunday, August 1, 2010 at 2:00 pm. The tentative location is the Corner Bakery at the corner of State and Cedar (1121 N. State St.), but we’re also happy to move the meetup further up to the North side as has been previously discussed, if anyone has a suggestion for a good venue.

\n

We will post any updates here as well as to our Chicago LW meetup Google group. Please comment here if you plan to attend. We'll have a table-top sign to help you identify us.

\n

We’re looking forward to a second successful Chicago meetup and hope to see some old and new faces!

\n

 

" } }, { "_id": "MAhueZtNz5SnDPhsy", "title": "Metaphilosophical Mysteries", "pageUrl": "https://www.lesswrong.com/posts/MAhueZtNz5SnDPhsy/metaphilosophical-mysteries", "postedAt": "2010-07-27T00:55:59.222Z", "baseScore": 57, "voteCount": 59, "commentCount": 266, "url": null, "contents": { "documentId": "MAhueZtNz5SnDPhsy", "html": "

Creating Friendly AI seems to require us humans to either solve most of the outstanding problems in philosophy, or to solve meta-philosophy (i.e., what is the nature of philosophy, how do we practice it, and how should we program an AI to do it?), and to do that in an amount of time measured in decades. I'm not optimistic about our chances of success, but out of these two approaches, the latter seems slightly easier, or at least less effort has already been spent on it. This post tries to take a small step in that direction, by asking a few questions that I think are worth investigating or keeping in the back of our minds, and generally raising awareness and interest in the topic.

\n

The Unreasonable Effectiveness of Philosophy

\n

It seems like human philosophy is more effective than it has any right to be. Why?

\n

First I'll try to establish that there is a mystery to be solved. It might be surprising to see the words \"effective\" and \"philosophy\" together in the same sentence, but I claim that human beings have indeed made a non-negligible amount of philosophical progress. To cite one field that I'm especially familiar with, consider probability and decision theory, where we went from having no concept of probability, to studies involving gambles and expected value, to subjective probability, Bayesian updating, expected utility maximization, and the Turing-machine-based universal prior, to the recent realizations that EU maximization with Bayesian updating and the universal prior are both likely to be wrong or incomplete.

\n

We might have expected that given we are products of evolution, the amount of our philosophical progress would be closer to zero. The reason for low expectations is that evolution is lazy and shortsighted. It couldn't possibly have \"known\" that we'd eventually need philosophical abilities to solve FAI. What kind of survival or reproductive advantage could these abilities have offered our foraging or farming ancestors?

\n

From the example of utility maximizers, we also know that there are minds in the design space of minds that could be considered highly intelligent, but are incapable of doing philosophy. For example, a Bayesian expected utility maximizer programmed with a TM-based universal prior would not be able to realize that the prior is wrong. Nor would it be able to see that Bayesian updating is the wrong thing to do in some situations.
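
\n

For concreteness, the Turing-machine-based universal prior referred to here is usually written, roughly, as

$$M(x) = \sum_{p\,:\,U(p)=x*} 2^{-\ell(p)},$$

where $U$ is a universal prefix Turing machine, $U(p)=x*$ means that running program $p$ produces an output beginning with $x$, and $\ell(p)$ is the length of $p$; updating then takes the form $M(y \mid x) = M(xy)/M(x)$. Informally, the mixture ranges only over computable hypotheses, so an uncomputable environment never receives any weight, which is one way of cashing out the claim that such an agent cannot realize its prior is wrong.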

\n

Why aren't we more like utility maximizers in our ability to do philosophy? I have some ideas for possible answers, but I'm not sure how to tell which is the right one:

\n
    \n
  1. Philosophical ability is \"almost\" universal in mind space. Utility maximizers are a pathological example of an atypical mind.
  2. Evolution created philosophical ability as a side effect while selecting for something else.
  3. Philosophical ability is rare and not likely to be produced by evolution. There's no explanation for why we have it, other than dumb luck.
\n

As you can see, progress is pretty limited so far, but I think this is at least a useful line of inquiry, a small crack in the problem that's worth trying to exploit. People used to wonder at the unreasonable effectiveness of mathematics in the natural sciences, especially in physics, and I think such wondering eventually contributed to the idea of the mathematical universe: if the world is made of mathematics, then it wouldn't be surprising that mathematics is, to quote Einstein, \"appropriate to the objects of reality\". I'm hoping that my question might eventually lead to a similar insight.

\n

Objective Philosophical Truths?

\n

Consider again the example of the wrongness of the universal prior and Bayesian updating. Assuming that they are indeed wrong, it seems that their wrongness must be an objective truth; in other words, it is not relative to how the human mind works and has nothing to do with any peculiarities of the human mind. Intuitively it seems obvious that if any other mind, such as a Bayesian expected utility maximizer, is incapable of perceiving the wrongness, that is not evidence of the subjectivity of these philosophical truths, but just evidence of the other mind being defective. But is this intuition correct? How do we tell?

\n

In certain other areas of philosophy, for example ethics, objective truth either does not exist or is much harder to find. To state this in Eliezer's terms, in ethics we find it hard to do better than to identify \"morality\" with a huge blob of computation which is particular to human minds, but it appears that in decision theory \"rationality\" isn't similarly dependent on complex details unique to humanity. How to explain this? (Notice that \"rationality\" and \"morality\" otherwise share certain commonalities. They are both \"ought\" questions, and a utility maximizer wouldn't try to answer either of them or be persuaded by any answers we might come up with.)

\n

These questions perhaps offer further entry points to try to attack the larger problem of understanding and mechanizing the process of philosophy. And finally, it seems worth noting that the number of people who have thought seriously about meta-philosophy is probably tiny, so it may be that there is a bunch of low-hanging fruit hiding just around the corner.

" } }, { "_id": "Nm4iM4iGPXqfeM8E6", "title": "Madison meetup: Wednesday, July 28th, 6PM", "pageUrl": "https://www.lesswrong.com/posts/Nm4iM4iGPXqfeM8E6/madison-meetup-wednesday-july-28th-6pm", "postedAt": "2010-07-26T02:11:38.809Z", "baseScore": 15, "voteCount": 11, "commentCount": 8, "url": null, "contents": { "documentId": "Nm4iM4iGPXqfeM8E6", "html": "

We are holding a Less Wrong meetup at Indie Coffee this Wednesday the 28th at 6PM. Wednesday is waffle day at Indie.

\n

Confirmed attendees include me, Will_Newsome, fiddlemath, and orthonormal. Expect a casual, friendly conversation. All are welcome. Really, everyone is welcome, please don't be intimidated because you don't have enough Less Wrong karma. I'll be on the road until I get to Madison and may not be checking Less Wrong regularly, so feel free to give me a call/text: 412-480-4060. Cheers.

" } }, { "_id": "RvZSGnu3TwYxb5WPq", "title": "Bay Area Events Roundup", "pageUrl": "https://www.lesswrong.com/posts/RvZSGnu3TwYxb5WPq/bay-area-events-roundup", "postedAt": "2010-07-24T06:04:42.815Z", "baseScore": 8, "voteCount": 7, "commentCount": 2, "url": null, "contents": { "documentId": "RvZSGnu3TwYxb5WPq", "html": "

This Saturday (i.e., the 24th, that is, tomorrow) is the peak of the Floating Festival.

\n

Michael Vassar says:  \"I'm going to be speaking tomorrow (= Saturday July 24th) at 4PM at Bay Area Mensa, in Mountain View, on the scientific method, the history of science, and how to think rationally about most scientific controversies including the Singularity.  Less Wrongers are invited to attend.  Interested people should email David Verdirame.\"

\n

The Open Science Summit is July 29-31, in Berkeley.

\n

And as ever, the Singularity Summit approaches on August 14-15 in San Francisco.  Now featuring James Randi, Irene Pepperberg, and John Tooby.

" } }, { "_id": "hcJAyWNwv6A4EbX5k", "title": "Contrived infinite-torture scenarios: July 2010", "pageUrl": "https://www.lesswrong.com/posts/hcJAyWNwv6A4EbX5k/contrived-infinite-torture-scenarios-july-2010", "postedAt": "2010-07-23T23:54:46.595Z", "baseScore": 31, "voteCount": 51, "commentCount": 139, "url": null, "contents": { "documentId": "hcJAyWNwv6A4EbX5k", "html": "
\n

This is our monthly thread for collecting arbitrarily contrived scenarios in which somebody gets tortured for 3^^^^^3 years, or an infinite number of people experience an infinite amount of sorrow, or a baby gets eaten by a shark, etc. and which might be handy to link to in one of our discussions. As everyone knows, this is the most rational and non-obnoxious way to think about incentives and disincentives.

\n\n
" } }, { "_id": "oT484XwbzbczHQzic", "title": "Against the standard narrative of human sexual evolution", "pageUrl": "https://www.lesswrong.com/posts/oT484XwbzbczHQzic/against-the-standard-narrative-of-human-sexual-evolution", "postedAt": "2010-07-23T05:28:40.817Z", "baseScore": 15, "voteCount": 45, "commentCount": 154, "url": null, "contents": { "documentId": "oT484XwbzbczHQzic", "html": "

(This post is the beginning of a short sequence discussing evidence and arguments presented by Christopher Ryan and Cacilda Jethá's Sex at Dawn, inspired by the spirit of Kaj_Sotala's recent discussion of What Intelligence Tests Miss. It covers Part I: On the Origin of the Specious.)

\n

Sex at Dawn: The Prehistoric Origins of Modern Sexuality was first brought to my attention by a rhapsodic mention in Dan Savage's advice column, and while it seemed quite relevant to my interests I am generally very skeptical of claims based on evolutionary psychology. I did eventually decide to pick up the book, primarily so that I could raid its bibliography for material for an upcoming post on jealousy management, and secondarily to test my vulnerability to confirmation bias. I succeeded in the first and failed in the second: Sex at Dawn is by leaps and bounds the best evolutionary psychology book I've read, largely because it provides copious evidence for its claims.1 I mention the strength of my opinion as a disclaimer of sorts, so that careful readers may take the appropriate precautions.

\n
\n

The book's first section focuses on the current generally accepted explanation for human sexual evolution, which the authors call \"the standard narrative.\" It's an explanation that should be quite familiar to regular LessWrong readers: men are attracted to fertile-appearing women and try to prevent them from having sex with other men so as to confirm the paternity of their offspring; women are attracted to men who seem like they will be good providers for their children and try to prevent them from forming intimate bonds with other women so as to maintain access to their resources.

\n

This narrative is remarkable for several reasons. In Chapter 2, Ryan and Jethá point out that it fits in neatly with much of Darwin's work, which famously drew upon Malthus and, to a lesser extent, Hobbes. The problem here is, of course, that Malthus's theory of population growth was wrong (see Michael Vassar's criticism and my reply). Like Hobbes, he looked at his society's current condition and assumed that prehistorical man lived in a similar state; the book calls this unfortunate tendency \"Flintstonization\" after the famously modern stone-age cartoon family. Those familiar with the heuristics and biases program may recognize this as an example of the availability heuristic.

\n

The human population of the earth exceeded 1 billion individuals when Darwin was writing his works on human evolution, and his conclusions were drawn from the study of living individuals in densely-populated modern cultures; it is remarkable that these findings are claimed to be equally true of the small bands of immediate-return foragers2 that defined anatomically modern human existence between the time they emerged roughly 200,000 years ago and the adoption of agriculture 190,000 years later, during which period there were likely no more than 5 million human beings alive at any one time (to offer a very generous estimate).

\n

Unfortunately, many prominent evolutionary psychologists seem to think it's obvious that these situations should be parallel, as can be seen in the ubiquity of justifications of the standard narrative based on just-so stories and studies performed on undergrad psychology majors. (Examples to follow momentarily.)

\n

Another curiosity is that, \"where there is debate about the nature of innate human sexuality [among supporters of the standard narrative], the only two acceptable options appear to be that humans evolved to be either monogamous or polygynous.\" (Ryan and Jethá, 11, emphasis theirs.) This has been amply demonstrated by a number of commenters on my recent post about the common modern assumption of monogamy. The idea that humans of both genders might be naturally inclined to have multiple partners didn't get much mention3, despite an embarrassing wealth of evidence supporting that position. But I'm getting ahead of myself; the anthropological and anatomical support for the multiple-mating hypothesis will be covered in my next two posts.

\n
\n

In Chapter 3, Ryan and Jethá focus on four major research areas that are used to support the standard narrative. These lines of research all rely on Flintstoned reasoning; taken together, they lead to the standard narrative's conclusion, which Ryan and Jethá summarize as \"Darwin says your mother's a whore.\" (50) The four areas are:

\n

The relatively weak female libido - Donald Symons and A. J. Bateman have both claimed (among numerous others) that men are much more interested in sex than women are. (Pay no attention to the multiple orgasms behind the curtain.) One of the most cited studies in evolutionary psychology purports to demonstrate this by comparing the responses of men and women when solicited by strangers for casual sex. But such studies do not distinguish between social norms and genetic predispositions, leaving evolution's role commensurately cloudy.

\n

Male parental investment (MPI) - Robert Wright wrote in The Moral Animal that \"In every human culture in the anthropological record, marriage... is the norm, and the family is the atom of social organization. Fathers everywhere feel love for their children.... This love leads fathers to help feed and defend their children, and teach them useful things.\" He is not alone in this view, but the argument is based on a number of dubious assumptions, especially that \"a hunter could refuse to share his catch with other hungry people living in the close-knit band of foragers (including nieces, nephews, and children of lifelong friends) without being shamed, shunned, and banished from the community.\" (Ryan and Jethá, 54)

\n

Sexual jealousy and paternity certainty - David Buss's research has demonstrated that, on average, (young, educated, modern, Western) men are more upset by sexual infidelity than women, while (young, educated, modern, Western) women are more upset by emotional infidelity than men. Or, at least, this is true when subjects are given only those two options; David A. Lishner repeated the study but also offered respondents the option of being equally upset by emotional and sexual infidelity. In his version, a majority of both men and women preferred the \"equally upset\" option, which substantially narrowed the gap between the sexes. The remainder of this gap can be further narrowed by the finding that women asked this question are more likely than men to assume that emotional infidelity automatically includes sexual infidelity. (This paragraph has been edited to fix a reasoning failure that was pointed out to me by a friend.)

\n

Extended receptivity and concealed (or cryptic) ovulation - \"Among primates, the female capacity and willingness to have sex any time, any place is characteristic only of bonobos and humans.\" (Ryan and Jethá, 58) While Helen Fisher has proposed that in humans this trait evolved as a means of reinforcing a pair-bond, \"this explanation works only if we believe that males--including our 'primitive' ancestors--were interested in sex all the time with just one female.\" (Ryan and Jethá, 60, emphasis theirs.)

\n
\n

Chapter 4 expands on the role that the other apes play in the standard narrative. Arguments that evolutionary psychology should focus on the gibbon as a model of human sexuality are frequently attempted on the grounds that they are the only monogamous ape. But gibbons are the ape most distantly related to humans (we last shared a common ancestor ~20 million years ago), live in the trees of Southeast Asia, have little social interaction outside of their small family units, have sex infrequently and only for purposes of reproduction, and aren't very bright.

\n

The chimpanzee model provides much more coherent support for the standard narrative: like modern humans, they use tools, have intricate, male-dominated social hierarchies, and are highly territorial and aggressive. The most recent common ancestor they share with humans lived approximately 6 million years ago, by most estimates. (I originally wrote \"between 3 million and 800,000 years ago\", which is untrue. Thanks to tpc for pointing that out.) There is just one unfortunate snag: \"among chimpanzees, ovulating females mate, on average, from six to eight times per day, and they are often eager to respond to the mating invitations of any and all males in the group.\" (Ryan and Jethá, 69)

\n

Helen Fisher, Frans de Waal, and other advocates of the standard narrative have claimed that the success of the human species is directly due to the abandonment of chimpanzee-style promiscuity, but they lack a convincing explanation for why this abandonment should have occurred in the first place. Worse yet, there is a particularly important piece of evidence that they are reluctant to acknowledge:

\n
\n

Given the prominent role of chimpanzee behavior in supporting the standard narrative, how can we not include the equally relevant bonobo data in our conjectures concerning human prehistory? Remember, we are genetically equidistant from chimps and bonobos. (Ryan and Jethá, 73, emphasis theirs.)

\n
\n

Oddly enough, bonobos have patterns of sexual behavior that are more like those of humans than any other animal. They hold hands, french kiss, have (heterosexual) sex while facing each other, and have oral sex. Compared to chimps, they're more promiscuous, more egalitarian, less violent, and less territorial. If it seems like this should be evidence for a multiple-mating hypothesis for humans, well, it is. The next post in this series will examine the anthropological evidence Ryan and Jethá use to support this view.

\n
\n

1: I have necessarily omitted much of the evidence that Ryan and Jethá provide in favor of their claims. Please feel welcome to request further information if there are any points you find particularly dubious; while I am not an expert in this field, I will at least attempt to pass on the sources cited.

\n

2: Immediate-return foragers are those who eat food shortly after acquiring it and do not make significant use of techniques for its processing or storage.

\n

3: I was admittedly among those dubious of such a conclusion.

" } }, { "_id": "t7tk2YiSDj7KCBXCZ", "title": "Public Choice and the Altruist's Burden", "pageUrl": "https://www.lesswrong.com/posts/t7tk2YiSDj7KCBXCZ/public-choice-and-the-altruist-s-burden", "postedAt": "2010-07-22T21:34:52.740Z", "baseScore": 35, "voteCount": 32, "commentCount": 101, "url": null, "contents": { "documentId": "t7tk2YiSDj7KCBXCZ", "html": "

The reason that we live in good times is that markets give people a selfish incentive to seek to perform actions that maximize total utility across all humans in the relevant economy: namely, they get paid for their efforts. Without this incentive, people would gravitate to choosing actions that maximized their own individual utility, finding local optima that are not globally optimal. Capitalism makes us all into efficient little utilitarians, which we all benefit enormously from.  

\n

The problem with charity, and especially efficient charity, is that the incentives for people to contribute to it are all messed up, because we don't have something analogous to the financial system for charities to channel incentives for efficient production of utility back to the producer. One effect of giving away lots of your money and effort to seriously efficient charity is that you create the counterpoint public choice problem to the special interests problem in politics. You harm a concentrated interest (friends, potential partners, children) in order to reward a diffuse interest (helping each of billions of people by a tiny amount).

\n

\n

The concentrated interest then retaliates, because by standard public choice theory it has an incentive to do so, but the diffuse interest just ignores you. Concretely, your friends think that you're weird and potential partners may, in the interest of their own future children, refrain from involvement with you. People in general may perceive you as being of lower status, both because of your reduced ability to signal status via conspicuous consumption if you give a lot of money away, and because of the weirdness associated with the most efficient charities. 
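
\n

To make the asymmetry concrete, here is a toy back-of-the-envelope calculation (every number in it is invented purely for illustration):

```python
# Toy numbers, invented purely for illustration; only the orders of
# magnitude matter for the public choice argument.

diverted = 10_000.0            # resources redirected from your circle to charity
close_people = 5               # friends/family/partner who bear the loss
beneficiaries = 1_000_000_000  # people who share the diffuse gain
retaliation_cost = 50.0        # what it costs one person to push back on you

loss_per_close_person = diverted / close_people  # 2000.0
gain_per_beneficiary = diverted / beneficiaries  # 0.00001

# A member of the concentrated interest stands to recover far more than
# retaliation costs, so public choice theory predicts push-back.
print(loss_per_close_person > retaliation_cost)  # True

# No single member of the diffuse interest gains enough to bother
# rewarding you, so it stays silent.
print(gain_per_beneficiary > retaliation_cost)   # False
```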

\n

Anyone involved in futurism, singularitarianism etc, has probably been on the sharp end of this public choice problem. Presumably, anyone in the west who donated a socially optimal amount of money to charity (i.e. almost everything) would also be on the sharp end (though I know of no cases of someone donating 99.5% of their disposable income to any charity, so we have no examples). This is the Altruist's Burden.

\n

 

\n

Evidence

\n

Do people around you really punish you for being an altruist? This claim requires some justification.

\n

First off, I have personal experience in this area. Not me, but someone vitally important in the existential risks movement has been put under pressure by ver partner to participate less in existential risk reduction so that the relationship would benefit. Of course, I cannot give details, and please don't ask for them or try to make guesses. I personally have suffered, as have many, from low-level punishment by my family and a worsening of my relationships with them, from social pressure from friends, and from being perceived as weird. I have also become more weird - spending one's time optimally for social status and personal growth is not at all like spending one's time so as to reduce existential risks. Furthermore, thinking that the world is in grave danger, but that only you and a select group of people understand this, makes you feel like you are in a cult due to the huge cognitive dissonance it induces.

\n

In terms of peer-reviewed research, it has been shown that status correlates with happiness via relative income. It has also been shown that (in men) romantic priming increases spending on \"conspicuous luxuries but not on basic necessities\" and it also \"did induce more helpfulness in contexts in which they could display heroism or dominance\". In women \"mating goals boosted public—but not private— helping\". This means that neither gender would seem to be using their time optimally in contributing to a cause that is not widely seen as worthy, and that men especially may be letting themselves down by spending a significant fraction of income on charity of any kind, unless it somehow signaled heroism (and therefore bravery) and dominance. 

\n

The usual reference on purchase of moral satisfaction and scope insensitivity is this article by Eliezer, though there are many articles on it.

\n

The studies on status and romantic priming constitute evidence (only a small amount each) that the concentrated interest -- the people around you -- do punish you. In theoretical terms, it should be the default hypothesis: either your effort goes to the many or it goes to the few around you. If you give less to the concentrated interest that is the few around you, they will give less to you. 

\n

The result that people purchase moral satisfaction rather than maximizing social welfare further confirms this model: in fact it explains what charity we do have as signalling, and drives a wedge between the kind and extent of charity that is beneficial to you personally, and the kind and extent that maximizes your contribution to social welfare. 

\n

 

\n

Can you do well by doing good? 

\n

Multifoliaterose claimed that you can. In particular, he claimed that by carefully investigating efficient charity, and then donating a large fraction of your wealth, you will do well personally, because you will feel better about yourself. The refutation is that many people have found a more efficient way to purchase moral satisfaction: don't spend your time and energy on investigating efficient charity, make only a small donation, and use your natural human ability to neglect the scope of your donation.

\n

Spending time and effort on efficient charity in order to feel good about yourself doesn't make you feel any better than not spending time on it, but it does cost you more money.

\n

The correct reason to spend most of your meager and hard-earned cash on efficient charity is because you already want to do good. But that is not an extra reason. 

\n

My disagreement with Multifoliaterose's post is more fundamental than these details, though. \"It's not to the average person's individual advantage to maximize average utility\" is the fundamental theorem of social science. It's like when someone brings you a perpetual motion machine design. You know it's wrong, though yes, it is important to point out the specific error. 

\n

Edit: some people in the comments have said that if you just donate a small amount (say 5% of disposable income) to an efficient but non-futurist charity, you can do very well yourself, and help people. Yes you can do well whilst doing some good, but the point is that it is a trade-off. Yes, I agree that there are points on this trade-off that are better than either extrema for a given utility function. 

" } }, { "_id": "frHsJKgQWrWpwE7ai", "title": "Simplified Humanism, Positive Futurism & How to Prevent the Universe From Being Turned Into Paper Clips", "pageUrl": "https://www.lesswrong.com/posts/frHsJKgQWrWpwE7ai/simplified-humanism-positive-futurism-and-how-to-prevent-the", "postedAt": "2010-07-22T10:03:52.517Z", "baseScore": 11, "voteCount": 8, "commentCount": 45, "url": null, "contents": { "documentId": "frHsJKgQWrWpwE7ai", "html": "

Simplified Humanism, Positive Futurism & How to Prevent the Universe From Being Turned Into Paper Clips

\n

Michael Anissimov recently did an interview with Eliezer for h+ magazine. It covers material basic to those familiar with the Less Wrong rationality sequences but is worth reading.

\n

The list of questions:

\n

1. Hi Eliezer. What do you do at the Singularity Institute?
2. What are you going to talk about this time at Singularity Summit?
3. Some people consider “rationality” to be an uptight and boring intellectual quality to have, indicative of a lack of spontaneity, for instance. Does your definition of “rationality” match the common definition, or is it something else? Why should we bother to be rational?
4. In your recent work over the last few years, you’ve chosen to focus on decision theory, which seems to be a substantially different approach than much of the Artificial Intelligence mainstream, which seems to be more interested in machine learning, expert systems, neural nets, Bayes nets, and the like. Why decision theory?
5. What do you mean by Friendly AI?
6. What makes you think it would be possible to program an AI that can self-modify and would still retain its original desires? Why would we even want such an AI?
7. How does your rationality writing relate to your Artificial Intelligence work?
8. The Singularity Institute turned ten years old in June. Has the organization grown in the way you envisioned it would since its founding? Are you happy with where the Institute is today?

" } }, { "_id": "88TN6y9M5xxAHHNwW", "title": "Book Review: The Root of Thought", "pageUrl": "https://www.lesswrong.com/posts/88TN6y9M5xxAHHNwW/book-review-the-root-of-thought", "postedAt": "2010-07-22T08:58:18.873Z", "baseScore": 59, "voteCount": 48, "commentCount": 91, "url": null, "contents": { "documentId": "88TN6y9M5xxAHHNwW", "html": "

Related to: Brain Breakthrough! It's Made of Neurons!

\n

I can't really recommend Andrew Koob's The Root of Thought. It's poorly written, poorly proofread, lacking much more information than is in the Scientific American review, and comes across as about one part neuroscience to three parts angry rant. But it does present an interesting hypothesis and an interesting case study on a major failure of rationality.

Only about ten percent of the brain is made of neurons; the rest is a diverse group of cells called \"glia\". \"Glia\" is Greek for glue, because the scientists who discovered them decided that, since they were in the brain and they weren't neurons, they must just be there to glue the neurons together. Since then, new discoveries have assigned glial cells functions like myelination, injury repair, immune defense, and regulation of blood flow: all important, but mostly things only a biologist could love. The Root of Thought argues that glial cells, especially a kind called astrocytes, are also important in some of the higher functions of thought, including memory, cognition, and maybe even creativity. This is interesting to neuroscientists, and the story of how it was discovered is also interesting to us as aspiring rationalists.

\n

Glial cells involved in processing

\n

Koob's evidence is indirect but suggestive. He points out that more intelligent animals have a higher astrocyte-to-neuron ratio than less intelligent animals, all the way from worms with one astrocyte per thirty neurons, to humans with an astrocyte:neuron ratio well above one. Within the human brain, the areas involved in higher thought, like the cortex, are the ones with the highest astrocyte:neuron ratio, and the most down-to-earth, like the cerebellum, have barely any astrocytes at all. Especially intelligent humans may have higher ratios still: one of the discoveries made from analyzing Einstein's brain was that he had an unusually large number of astrocytes in the part of his brain responsible for mathematical processing. And learning is a stimulus for astrocyte development. When canaries learn new songs, new astrocytes grow in the areas responsible for singing.

Astrocytes have a structure especially suited for learning and cognition. They have their own gliotransmitters, similar in function to neurotransmitters, and they communicate with one another, sparking waves of astrocyte activity across areas of the brain. Like neurons, they can enter an active state after calcium release, but unlike neurons, which get calcium only when externally activated, astrocytes can fill with calcium either because of external stimuli or when their own calcium stores randomly leak out into the cell, a process which resembles the random, unprovoked nature of thought during sensory deprivation and dreaming.

Astrocytes also affect and are affected by neurons. Each astrocyte \"monitors\" thousands of synapses, and releases calcium based on the input it receives. Output from astrocytes, in turn, affects the behavior of neurons. Astrocytes can take up or break down neurotransmitters, which changes the probability of nearby neurons activating, and they can alter synapses, promoting some and pruning others in a process likely linked to long-term potentiation in the brain.

Although it wasn't in the book, very recent research shows a second type of glial cell, the immune-linked microglia, play a role in behavior that may be linked to obsessive-compulsive disorder; a microglia-altering bone marrow transplant cures an OCD-like disease in mice.

By performing computations that influence the firing of neurons, glial cells at the very least play a strong supporting role in cognition. Koob goes way beyond that (and really beyond what he can support) and argues that actually neurons play a supporting role to glia, being little more than the glorified wires that relay astroglial commands. His argument is very speculative and uses words like \"could\" a lot, but the evidence at least shows that glia are more important than a century of neurology has given them credit for.

\n


We don't know how much we don't know about cognitive science

Previous Less Wrong articles, for example Artificial Addition, have warned against trying to replicate a process without understanding it by copying a few surface features. One of the most popular such ideas is to replicate the brain by copying the neurons and seeing what happens. For example, IBM's Blue Brain project hopes to create an entire human brain by modeling it neuron for neuron, without really understanding why brains work or why neurons do what they do1.

We've made a lot of progress in cognitive science in the past century. We know where in the brain various activities take place, we know the mechanisms behind some of the more easily studied systems like movement and perception, and we've started researching the principles of intelligence that the brain must implement to do what it does. It's tempting to say that we more or less understand the brain, and the rest is just details. One of the take-home messages from this book is that, although cognitive scientists can justifiably be proud of their progress, our understanding still hasn't even met the low bar of being entirely sure we're even studying all the right kinds of cells, and this calls into question our ability to meet the higher bar of being able to throw what we know into a simulator and hope it works itself out.

A horrible warning about community irrationality

In the late 19th century, microscopy advanced enough to look closely at the cellular structure of the brain. The pioneers of neurology decided that neurons were interesting and glia were the things you had to look past to get to the neurons. This theory should have raised a big red flag: Why would the brain be filled with mostly useless cells? But for about seventy five years, from the late 19th century to the mid to late 20th, no one seriously challenged the assumption that glia played a minor role in the brain.

Koob attributes the glia's image problem to the historical circumstances of their discovery. Neurons are big, peripherally located, and produce electrical action potentials. This made them both easy to study and very interesting back in the days when electricity was the Hot New Thing. Scientists first studied neurons in the periphery, got very excited about them, and later followed them into the brain, which turned out to be a control center for all the body's neurons. This was interesting enough that neurologists, people who already had thriving careers in the study of neurons, were willing to overlook the inconvenient presence of several other types of cells in the brain, which they relegated to a supporting role. The greatest of these early pioneers of neurology, Santiago Ramon y Cajal, was the brother of the neurologist who first proposed the idea that glial cells functioned as glue and may have (Koob theorizes) let familial loyalty influence his thinking. The community took his words as dogma and ignored glia for years, a choice no doubt made easier by all the exciting discoveries going on around neurons. Koob discussed the choice facing neuroscientists in the early 20th century: study the cell that seemed on the verge of yielding all the secrets of the human mind, or tell your advisor you wanted to study glue instead. Faced with that decision, virtually everyone chose to study the neurons.

There wasn't any sinister cabal preventing research into glia. People just didn't think of it. Everyone knew that neurons were the only interesting type of cell in the brain. They assumed that if there was some other cell that was much more common and also very important, somebody would have noticed. I've read neuroscience books, I read the couple of paragraphs where they mentioned glial cells, and I shrugged and kept reading, because I assumed if they were hugely important somebody would have noticed.

The heuristic, that an entire community doesn't just miss low-hanging fruit, is probably a good one and as many people have pointed out the vast majority of people who think they've found something that the scientific community has missed are somewhere between wrong and crackpot. Science is usually pretty good at finding and recognizing its mistakes, and even in the case of glial cells they did eventually find and recognize the mistake. It just took them a century.

One common theme across Less Wrong and SIAI is that there are some relatively little-known issues that, upon a moderate amount of thought, seem vitally important. And one of the common arguments against this theme is that if this were true, surely somebody would have noticed. The lesson of glial cells is that sometimes this just doesn't happen.

\n

Related: Glial Cells: Their Role In Behavior, Underappreciated Star-Shaped Cells May Help Us Breathe, Glial Cells Aid Memory Formation, New Role For Supporting Brain Cells, Support Cells, Not Neurons, Lull Brain To Sleep

" } }, { "_id": "dx6QSRrRCE5S3Xxe7", "title": "Missed opportunities for doing well by doing good", "pageUrl": "https://www.lesswrong.com/posts/dx6QSRrRCE5S3Xxe7/missed-opportunities-for-doing-well-by-doing-good", "postedAt": "2010-07-21T07:45:26.301Z", "baseScore": 16, "voteCount": 26, "commentCount": 88, "url": null, "contents": { "documentId": "dx6QSRrRCE5S3Xxe7", "html": "

Related to: Fight zero-sum bias

\n

According to the U.S. Department of State:

\n
\n

In 2006, Americans donated 2.2 percent of their average disposable, or after-tax income.

\n
\n

The Department of State report commends the charitable giving practices of Americans as follows:

\n
\n

“The total amount of money that was given to nonprofit institutions is remarkable,” [Richard Jolly, chairman of Giving US] said. “What we see is when people feel engaged, when they feel a need is legitimate, when they are asked to support it, they do.”

\n

Americans have a long tradition of charitable giving and volunteerism -- the donation of time and labor on behalf of a cause. When disasters happen or a social need arises, government clearly has a responsibility, Jolly said. “But it’s also obvious Americans believe they, too, can make a difference, and they reflect that in terms of giving away a lot of money.”

\n

The United States is “a land of charity,” says Arthur Brooks, an expert on philanthropy and a professor at Syracuse University’s Maxwell School, who sees charitable giving and volunteerism as the signal characteristic of Americans.

\n
\n

For my own part, I think that what Jolly, what Brooks, and what the Department of State report have to say about American charitable giving is absurd. I think that the vast amount of American \"charitable giving\" should not be conceptualized as philanthropy because the donors do not aspire to maximize their positive social impact. Even aside from that, from a utilitarian point of view, in view of world economic inequality and existential risk, a donation rate of 2.2% looks paltry.  As the title of Peter Unger's book Living High and Letting Die: Our Illusion of Innocence suggests, there's a sense in which despite appearances, many Americans are guilty of a moral atrocity.

\n

In light of my last few sentences, you may be surprised to know that I don't think that Americans should sacrifice their well-being for the sake of others. Even from a utilitarian point of view, I think that there are good reasons for thinking that it would be a bad idea to do this. The reason that I say that many Americans are guilty of a moral atrocity is because I think that many Americans could be giving away a lot more of their money with a view toward maximizing their positive social impact and lead more fulfilling lives as a result. I say more about this below.

\n

In Peter Singer's The Life You Can Save Singer writes

\n
\n

On your way to work, you pass a small pond. On hot days, children sometimes play in the pond, which is only about knee-deep. The weather’s cool today, though, and the hour is early, so you are surprised to see a child splashing about in the pond. As you get closer, you see that it is a very young child, just a toddler, who is flailing about, unable to stay upright or walk out of the pond. You look for the parents or babysitter, but there is no one else around. The child is unable to keep his head above the water for more than a few seconds at a time. If you don’t wade in and pull him out, he seems likely to drown. Wading in is easy and safe, but you will ruin the new shoes you bought only a few days ago, and get your suit wet and muddy. By the time you hand the child over to someone responsible for him, and change your clothes, you’ll be late for work. What should you do?

I teach a course called Practical Ethics. When we start talking about global poverty, I ask my students what they think you should do in this situation. Predictably, they respond that you should save the child. “What about your shoes? And being late for work?” I ask them. They brush that aside. How could anyone consider a pair of shoes, or missing an hour or two at work, a good reason for not saving a child’s life?

\n

[...]

\n

Now think about your own situation. By donating a relatively small amount of money, you could save a child’s life. Maybe it takes more than the amount needed to buy a pair of shoes—but we all spend money on things we don’t really need, whether on drinks, meals out, clothing, movies, concerts, vacations, new cars, or house renovation.

\n
\n

Most people value the well-being of human strangers. This is at least in part a terminal value, not an instrumental value. So why don't people give more money away with a view toward maximizing positive social impact? Well, as Eliezer says, people have many values, they don't just value helping others in need, they also value status, comfort, sex, love, security, music, art, friends, family, intellectual understanding and many other things. Each person makes an implicit judgment that a life involving donating substantially more would be a life less satisfying than the life that he or she is presently living. Is such a judgment sound? Surely it is for some people, but is it sound on average?

\n

A fundamental and counterintuitive principle of human psychology is the hedonic treadmill. The existence of a hedonic treadmill in some domains does not imply that it's impossible for current humans to take actions to become happier. What it does imply is that the initial intuitions that humans have about what will make them happier are probably substantially misguided. So it's important for people to critically examine the question: does having more money make people happier? Wikipedia has an informative page titled Happiness Economics with some information about this question. For people who are so poor that their basic needs are not met, it's plausible that people's income plays an important role in determining their level of life satisfaction. For people who have more than enough money to accommodate their basic needs, some studies find a correlation between income and self reported life satisfaction, and others do not. If there is a correlation, it's small, and may be borne of a third variable such as intelligence or status.

\n

The question then arises: is the amount of focus that Americans place on acquiring material resources (instrumentally) irrational? Three possibilities occur to me:

\n

(A) The very activity of acquiring material resources is a terminal value for most people. People would be less happy if they focused less on acquiring material resources, not because they find having the material resources fulfilling but because they find the practice of acquiring the material resources fulfilling.

\n

(B) Self reported life satisfaction is such a bad measure of subjectively perceived life satisfaction that the low correlation between income and self reported life satisfaction is grossly misleading.

\n

(C) People's focus on acquiring material resources is in fact irrational, borne of a now-maladaptive hoarding heuristic inherited from our ancestors. People falsely believe that acquiring resources is instrumentally valuable to them to a greater extent than it actually is. Americans would be better off placing less emphasis on acquiring resources and more emphasis on other things.

\n

I don't know which of (A), (B) and (C) holds. Maybe each possibility has some truth to it. I lean toward believing that the situation is mostly the one described in (C), but I'm a very unusual person and may be generalizing from one example. What I would say is that individuals should seriously consider the possibility that their situation is at least in some measure accurately characterized by (C). To the extent that this is the case, such individuals can give away a greater percentage of their income, cutting their \"effective income\" without experiencing a drop in life satisfaction. In fact, I find it likely that many individuals would become more satisfied with their lives if they substantially increased the percentage of their income that they donated. In the last few pages of \"The Life You Can Save\" Singer writes:

\n
\n

A survey of 30,000 American households found that those who gave to charity were 43 percent more likely to say that they were \"very happy\" about their lives than those who did not give, and the figure was very similar for those who did voluntary work for charities as compared with those who did not. A separate study showed that those who give are 68 percent less likely to have felt \"hopeless\" and 34 percent less likely to say that they felt \"so sad that nothing could cheer them up.\" [21]

\n

[...]

\n

The link between giving and happiness is clear, but surveys cannot show the direction of causation. Researchers have, however, looked at what happens in people's brains when they do good things. In one experiment, economists William Harbaugh and Daniel Burghart and psychologist Ulrich Mayr gave $100 to each of nineteen female students. While undergoing magnetic resonance imaging, which shows activity in various parts of the brain, the students were given the option of donating some of their money to a local food bank to the poor. To ensure that any effects observed came entirely from making the donation, and not, for instance, from having the belief that they were generous people, the students were informed that no one, not even the experimenters, would know which students made a donation. The research found that when students donated, the brain's \"reward centers\" - the caudate nucleus, nucleus accumbens, and insulae - became active. These are the parts of the brain that respond when you eat something sweet or receive money. Altruists often talk of the \"warm glow\" they get from helping others. Now we have seen it happening in the brain. [23]

[21] Arthur Brooks, \"Why Giving Makes You Happy,\" New York Sun, December 28, 2007. The first study is from the Social Capital Community Benchmark Survey while the second is from the University of Michigan's Panel Study of Income Dynamics.

\n

[...]

\n

[23] William T. Harbaugh, Ulrich Mayr, and Daniel Burghart, \"Neural Responses to Taxation and Voluntary Giving Reveal Motives for Charitable Donations,\" Science, vol. 316, no. 5831 (June 15, 2007), pp. 1622-25

\n
\n

I'll corroborate Singer's suggestion that donating makes one happy with my own experience. Many of the first 24 years of my life were marred by chronic mild depression. The reasons for this are various, but one factor is that I always felt vaguely guilty for not doing more to help others. At the same time, I felt immobilized. My thought process was of the following type:

\n
\n

I know that the world has a lot of problems and that I could be doing much more to help. There are billions of poor people around the world who need my money a lot more than I do. Every day my life habits use natural resources and destroy the environment.  If I only bought what I strictly needed I would use fewer natural resources and I could donate the rest of my money to help poor people far away. If I spent all of my spare time doing community service I could help the poor people nearby.

\n


The problem is that I don’t have enough willpower to do it. I like thinking about math, I like reading for pleasure, I  like playing video games, I like buying sandwiches rather than making them myself even though it would be cheaper if I made them myself. Though I hate myself for it, apparently I care a lot more about myself than I care about other people. I’m just not a good enough person to do what I should do. I’m happier when I don’t think about it than when I do, and I do the wrong thing regardless, so I try not to think about it too much. But I know in my heart-of-hearts that the way I’m leading my life is very wrong.

\n
\n

I had never donated anything substantial to charity before last October. Things changed for me when an old friend encouraged me to give something to the charities recommended by GiveWell. Before I looked at GiveWell,  I had drearily come to the conclusion that one cannot hope to give in a cost-effective fashion because there's so much fraud and inefficiency in the philanthropic sector. I was greatly encouraged to see that there's an organization making a solid effort at evaluating charities for cost-effectiveness.  At the same time, I had some hesitation to donate much because

\n

(1) I'm a graduate student making only $20,000/year. I had thoughts of the type \"Does it really make sense for somebody making as little as me to be donating? Maybe I should wait until I'm making more money. Maybe donating now will interfere with my ability to function which will impede my ability to donate later on.\"

\n

(2) The causes that GiveWell has researched are not the causes that I'm most interested in.

\n

But in the end, with some further nudging from my friend, I ended up donating $1000 to VillageReach. Once I gave, I felt like giving more and in December gave $500 more to VillageReach without nudging from anyone. Retrospectively, I view my initial objections (1) and (2) as rationalization arising from the maladaptive hoarding instinct that I hypothesize in (C) above.

\n\n\n

What effect did donating have on me? Well, since correlation is not causation, one can't be totally sure. But my subjective impression is that it substantially increased my confidence in my ability to act in accordance with my values, which had a runaway effect: I began behaving in progressively greater accord with my values, and my life satisfaction rose considerably. The vague sense of guilt that I once felt has vanished. The chronic mild depression that I'd experienced for most of my life is gone. I feel like a complete and well-integrated human being. I'm happier than I've been in eight years. I could not have done better for myself by spending the $1500 in any other way.

\n

The next $1500 that I donate won't have a comparable effect on my quality of life. I already feel as though I should donate more the next time around. I'm on a new hedonic treadmill. But this treadmill is a more fulfilling treadmill. Rather than spending my money on random things that don't make me happier, I'm spending my money on making the world a better place, just as I always wanted.

\n

The question now arises: if you, the reader, donated substantially more than you usually do with a view toward maximizing your positive social impact, would you become happier? Maybe, maybe not. What I would say is that it's worth the experiment. An expenditure on the order of 5% or 10% of one's annual income is small relative to one's lifetime earnings. And the potential upside for you is high. I'll leave the last word of this post to Singer:

\n
\n

Most of us prefer harmony to discord, whether between ourselves and others or within our own minds. That harmony is threatened by any glaring discrepancy between the way you live and the way you think you ought to live. Your reasoning may tell you you ought to be doing something substantial to help the world's poorest people, but your emotion may not move you to act in accordance with this view. If you are persuaded by the moral argument, but are not sufficiently motivated to act accordingly, I recommend that instead of worrying about how much you would have to do to live a fully ethical life, you do something that is significantly more than you have been doing so far. Then see how that feels. You may find it more rewarding than you imagined possible.

\n
\n

 

\n
\n

Added 07/21/10 at 11:35 CST: In the comments below, Roko makes a good case that the best way to donate is not the best for the average donor's happiness. See my response to his comment. 

\n

Added 07/25/10 at 6:21 CST: In the comments below, Unnamed refers to some very relevant recent research by Elizabeth Dunn and coauthors.

\n

 

" } }, { "_id": "yremDc56Nr5sfWHYA", "title": "Floating Festival Jul 22-25", "pageUrl": "https://www.lesswrong.com/posts/yremDc56Nr5sfWHYA/floating-festival-jul-22-25", "postedAt": "2010-07-21T06:36:02.408Z", "baseScore": 14, "voteCount": 11, "commentCount": 9, "url": null, "contents": { "documentId": "yremDc56Nr5sfWHYA", "html": "

The Seasteading Institute had to cancel their annual on-water get-together, Ephemerisle, after being quoted ridiculous insurance costs, and it came back as the unofficial Floating Festival - no tickets, no organizers, you just show up with a boat.  (Or find someone else who already has or is renting a boat and still has a spare spot, etc.)  Jul 22-25 with an unconference (i.e., show up and give a talk) on Saturday the 24th.  Posting here because of the large de facto overlap in the communities.  The location is about a two-hour drive from the Bay Area.

\n

Main info page.

" } }, { "_id": "BBWRWyEb7gAyvD8H6", "title": "A speculation on Near and Far Modes", "pageUrl": "https://www.lesswrong.com/posts/BBWRWyEb7gAyvD8H6/a-speculation-on-near-and-far-modes", "postedAt": "2010-07-21T06:24:56.684Z", "baseScore": 22, "voteCount": 22, "commentCount": 71, "url": null, "contents": { "documentId": "BBWRWyEb7gAyvD8H6", "html": "

Katja's recent post on cryonics elicited this comment from Steven Kaas,

\n

\n
\n

\"If cryonics is super-far and altruism is seen as more important in far mode, why isn’t buying cryonics for others seen as especially praiseworthy?  Your list of ways in which cryo is far-mode seems too much of a coincidence unless cryo was somehow optimized for distance.\"

\n
\n

...a comment which finally caused the following hypothesis to click into sharp resolution for me.

\n

My guess is that it's cryonics advocates who are optimized for distance.  Most people are basically natives of near mode, using far mode only casually and occasionally for signaling, and never reasoning about its contents.  Even those who reason about its contents usually do so and then ignore their reasoning, acting on near mode motivations and against their explicit beliefs.  Children, however, actually need to use far mode to guide their actions because they lack the rich tacit knowledge that makes near mode functional.

\n

Some people get stuck in a child-like behavioral pattern, probably due to a mix of neurological bugs which prevent near-mode from gelling (Asperger's and schizotypy) and internalization of explicit (far mode) rules condemning near-mode (obsessive-compulsive personality disorder).  They become Shaw's \"unreasonable men\" and push for the universal endorsement of far-mode ideas.  Since other adults don't care about the contents of far mode much anyway except to avoid being condemned for saying the wrong things, others go along with this, and there's a long-term drift towards explicit societal endorsement of altruistic norms with an expanded circle.  This does matter, in the long run, because such norms provide convenient nuclei for the emergence of mostly arbitrary identity markers. 

\n

Unreasonable men also develop artificial ways of reasoning, methods of rationality, which work even when the mind's innate tendencies towards reason aren't engaged by near mode.  These methods don't necessarily suffer from the bugs that near-mode reasoning suffers from.  Once the right set of methods, most importantly math, science, nation-states, cosmopolitan liberalism (actions permitted by default rather than banned by default) and capitalism, are developed, they enable scientific and technological evolution to jump over low-fitness regions of the memetic fitness landscape and discover higher-fitness technologies on the other side, rather than leveling off at the 'golden age' level of Hellenistic Greece, Tang and Song China, the Roman late republic and empire, the Abbasid and other advanced Caliphates, Minoan Crete and probably many other pre-industrial civilizations.

\n

Unfortunately, as civilizations reach a higher level of development, more effort is available for indoctrination, and the indoctrination methods are based on the introspection and intuitions of these 'unreasonable men', and are thus ineffective on normal people, who simply snap out of their indoctrination when they become adults.  As the unreasonable men become further indoctrinated, they become less able to make effective use of near-mode reasoning, which their morality condemns, but the methods that make far-mode reasoning rational don't make it an adequate substitute for near-mode when dealing with situations where subtlety, competition or energy are required.  Ultimately, the civilization systematically destroys the ability of its unreasonable men to compete for the slots in the society where rationality is required to maintain the society's energy, and the society loses the ability to respond coherently to threats and collapses. 

\n

Cryonics is a canary in the coal mine.  At a certain stage of collapse, there has to be some idea that is transparently correct when one uses valid reasoning to analyze it, but which is roundly rejected by everyone in near mode and is only accessible to people in extreme far mode.  Once the far mode people are too ineffective to promote their ideas to the status of even verbal endorsement by the general population, this idea will never rise to prominence. 

\n

I'd love to see your thoughts.

" } }, { "_id": "5CyuxotJ5XkeXXbZK", "title": "(One reason) why capitalism is much maligned", "pageUrl": "https://www.lesswrong.com/posts/5CyuxotJ5XkeXXbZK/one-reason-why-capitalism-is-much-maligned", "postedAt": "2010-07-19T03:48:43.524Z", "baseScore": 12, "voteCount": 25, "commentCount": 119, "url": null, "contents": { "documentId": "5CyuxotJ5XkeXXbZK", "html": "

Related to: Fight zero-sum bias

\n

Disclaimer: (added in response to comments by Nic_Smith, SilasBarta and Emile) - The point of this post is not to argue in favor of free markets. The point of this post is to discuss an apparent bias in people's thinking about free markets. Some of my own views about free markets are embedded within the post, but my reason for expressing them is to place my discussion of the bias that I hypothesize in context, not to marginalize the adherents to any political affiliation. When I refer to \"capitalism\" below, I mean \"market systems with a degree of government regulation qualitatively similar to the degree present in the United States, Western Europe and Japan in the past 50 years.\"

\n
\n

The vast majority of the world's wealth has been produced under capitalism. There's a strong correlation between a country's GDP and its citizens' self-reported life satisfaction. Despite the evidence for these claims, there are some very smart people who are or have been critics of capitalism. There may be legitimate arguments against capitalism, but all too often, these critics selectively focus on the negative effects of capitalism, failing to adequately consider the counterfactual \"how would things be under other economic systems?\" and apparently oblivious to large-scale trends.

\n

There are multiple sources of irrational bias against capitalism. Here I will argue that a major factor is zero-sum bias, or perhaps, as hegemonicon suggests, the \"relativity heuristic.\" Within the post I quote Charles Wheelan's Naked Economics several times. I believe that Wheelan has done a great service to society by writing an accessible book which clears up common misconceptions about economics and hope that it becomes even more widely read than it has been so far. Though I largely agreed with the positions that he advocates before taking up the book, the clarity of his writing and focus on the most essential points of economics helped me sharpen my thinking, and I'm grateful to him for this.

\n

A striking feature of the modern world is economic inequality. According to Forbes Magazine, Amazon founder Jeff Bezos made $7.3 billion in 2009. That's more than the (nominal) 2009 earnings of all of the 10 million residents of Chad combined. Many people find such stark inequality troubling. Since humans exhibit diminishing marginal utility, all else being equal, economic inequality is bad. The system that has given rise to severe economic inequality is capitalism. Does it follow that capitalism is bad? Of course not. It seems very likely that for each positive integer x, the average world citizen at the xth percentile in wealth today finds life more fulfilling than the average world citizen at the xth percentile in wealth 50 years ago did. Increased inequality does not reflect decreased average quality of life if the whole pie is getting bigger. And the whole pie has been getting a lot bigger.

\n

While it's conceivable that we could be doing better under socialist governments or communist governments, a large majority of the available evidence points against this idea. Wheelan suggests that

\n
\n

a market economy is to economics what democracy is to government: a decent, if flawed, choice among many bad alternatives.

\n
\n

A compelling argument that it's possible to do better than capitalism (given human nature and human limitations as they stand) would at the very least have to address the success of capitalism and the relative failure of other forms of government up until now. In light of the historical record, one should read the financial success of somebody like Jeff Bezos as an indication that he's making the world a lot better. Yet many idealistic left-wing people have a vague intuition that by making a lot of money, business people are somehow having a negative overall impact on the world.

\n

There are undoubtedly several things going on here, but I'd hypothesize that one is that people have a gut intuition that the wealth of the world is fixed. In a world of fixed wealth like the world that our ancestors experienced, the only way to make things better for the average person is to redistribute wealth. But this is not the world that we live in. On page 115 of Naked Economics, Wheelan attempts to exorcise zero-sum thinking in his readers:

\n
\n

Will the poor always be with us, as Jesus once admonished? Does our free market system make poverty inevitable? Must there be losers if there are huge economic winners? No, no, and no. Economic development is not a zero-sum game; the world does not need poor countries in order to have rich countries, nor must some people be poor in order for others to be rich. Families who live in public housing on the South Side of Chicago are not poor because Bill Gates lives in a big house. They are poor despite the fact that Bill Gates lives in a big house. For a complex array of reasons, America's poor have not shared in the productivity gains spawned by DOS and Windows. Bill Gates did not take their pie away; he did not stand in the way of their success or benefit from their misfortunes. Rather, his vision and talent created an enormous amount of wealth that not everybody got to share. This is a crucial distinction between a world in which Bill Gates gets rich by stealing other people's crops and a world in which he gets rich by growing his own enormous food supply that he shares with some people and not others. The latter is a better representation of how a modern economy works.

\n
\n


It's a great irony that there are people who have rejected high-paying jobs on the grounds that they must be hurting someone because the pay is high, when they could have helped people more by taking those jobs. To be sure, some people do make their money by taking money away from other people (some of the behavior of the \"too big to fail\" banks comes to mind), but on average making money seems to be good for society, not bad for it.

\n

The case of trade

\n

Trade and globalization are key features of modern capitalism. These practices have received criticism both from (1) people who are concerned about the well being of foreigners and (2) people who are concerned about the well being of Americans. I'll discuss these two types of criticism in turn.

\n

(1) I remember feeling guilty as an early adolescent about the fact that my shoes and clothes had been made by sweatshop laborers who had been paid very little to make them. When I heard about the 1999 WTO protests as a 14 year old I thought that the protesters were on the right side. It took me a couple of years to dispel the belief that by buying these things I was making things worse for the sweatshop laborers. I implicitly assumed that if they were being paid so little, it must be because somebody was forcing them to do something that they didn't want to do. In doing so, I was anchoring based on my own experience. It didn't initially occur to me that by paying very little for clothes and shoes, I could be making life better for people in poor countries by giving them more opportunities. I eventually realized that if I restricted myself to buying domestic products I would probably make things better for American workers, but that in doing so, I would deny poor foreigners an opportunity.

\n

And it was only much later that I understood that giving a country's citizens the opportunity to work in sweatshops could ultimately pave the way for the country to develop. It took me many years to internalize the epic quality of economic growth.

\n

Wheelan says

\n
\n

The thrust of the antiglobalization protests has been that world trade is something imposed by rich countries on the developing world. If trade is mostly good for America, then it must be mostly bad for somewhere else. At this point in the book, we should recognize that zero-sum thinking is usually wrong when it comes to economics. So it is in this case.

\n

[...]

\n

Trade paves the way for poor countries to get richer. Export industries often pay higher wages than jobs elsewhere in the economy. But that is only the beginning. New export jobs create more competition for workers, which raises wages everywhere else. Even rural incomes can go up; as workers leave rural areas for better opportunities, there are fewer mouths to be fed from what can be grown on the land they leave behind. Other important things are going on, too. Foreign companies introduce capital, technology, and new skills. Not only does that make export workers more productive; it spills over into other areas of the economy. Workers \"learn by doing\" and then take their knowledge with them.

\n
\n

and quotes Paul Krugman saying

\n
\n

If you buy a product made in a third-world country, it was produced by workers who are paid incredibly little by Western standards and probably work under awful conditions. Anyone who is not bothered by those facts, at least some of the time, has no heart. But that doesn't mean the demonstrators are right. On the contrary, anyone who thinks that the answer to world poverty is simple outrage against global trade has no head - or chooses not to use it. The anti-globalization movement already has a remarkable track record of hurting the very people and causes it claims to champion.

\n
\n

Why does people's disinclination to use their heads on this point produce such disastrous results? Because (a) the relativity heuristic which hegemonicon mentioned is ill-suited to a world with so much inequality and perhaps because (b) the intuition that the pool of resources that people share is fixed is hardwired into the human brain.

\n

(2) In an interesting article titled Why don't people believe that free trade is good?, economist Hans Melberg cites the following statistics:

\n
\n

89% of economists in the US think trade agreements between the U.S. and other countries is good for the economy, compared to 55% of the general public. Only 3% of economists think trade agreements are bad for the economy, while 28% of the general public think so (p. 111). 68% of the general public think that one reason why the economy is not doing as well as it could, is that \"companies are sending jobs overseas\". Only 6% of economists agree (p. 114)

\n

Source: Blendon, Robert J. et. al., \"Bridging the Gap Between the Public's and Economists' View of the Economy\", Journal of Economic Perspectives, Summer 1997, vol. 11, no. 3, pp. 108-118.

\n
\n

My impression from casual conversations and from the news is that people in the general population have a belief of the type \"there's a limited supply of jobs; if jobs are being sent to Southeast Asia then there will be fewer jobs for Americans.\" Of course there's no intrinsic barrier to people in Southeast Asia and people in America all having jobs. The idea that the supply of jobs is fixed seems to arise from fallacious zero-sum thinking, and Melberg lists zero-sum thinking as one of four factors relevant to why people believe that free trade is bad.

\n

There is a genuine problem for America that arises from allowing free trade, namely the phenomenon of displaced US workers. This problem is outweighed by the benefits of free trade to America, but even setting that aside, it can and should be considered without the clouding influence of zero-sum thinking.

\n

 

\n
\n

 

\n

[1] See a remark by VijayKrishnan mentioning essays by Paul Graham which may overlap somewhat with the content of this post.

\n

07/19/10 @ 1:30 AM CST - Post very slightly edited to accommodate JoshuaZ's suggestion.

\n

07/19/10 @ 9:34 AM CST - Post very slightly edited to accommodate billswift's suggestion.

" } }, { "_id": "EJhDBkkuzttCycWKa", "title": "Politicians stymie human colonization of space to save make-work jobs", "pageUrl": "https://www.lesswrong.com/posts/EJhDBkkuzttCycWKa/politicians-stymie-human-colonization-of-space-to-save-make", "postedAt": "2010-07-18T12:57:47.388Z", "baseScore": 15, "voteCount": 20, "commentCount": 98, "url": null, "contents": { "documentId": "EJhDBkkuzttCycWKa", "html": "

An example of the collective action failures that happen when millions of not-so-bright humans try to cooperate. From the BBC:

\n

US President Barack Obama had laid out his vision for the future of human spaceflight. He was certain that low-Earth orbit operations should be handed to the commercial sector - the likes of SpaceX and Orbital Sciences Corp. As for Nasa, he believed it should have a much stronger R&D focus. He wanted the agency to concentrate on difficult stuff, and take its time before deciding on how America should send astronauts to distant targets such as asteroids and Mars.

\n

This vision invited fury from many in Congress and beyond because of its likely impact in those key States where the re-moulding of the agency would lead to many job losses - in Florida, Texas, Alabama and Utah. 

\n

\n

The continued provision of seed funding to the commercial sector to help it develop low-cost \"space taxis\" capable of taking astronauts to and from the ISS. The funding arrangements would change, however. Instead of the White House's original request for $3.3bn over three years, the Committee's approach would provide $1.3bn. (Obama had wanted some $6bn in total over five years; the Committee says the total may still be possible, but over a longer period)

\n

Make-work bias and pork-barrel funding are not exactly news, but in this case they are exerting a direct negative influence on the human race's chances of survival. 

\n

Opinion in singularitarian circles has gradually shifted toward de-emphasizing the importance of space colonization for the survival of the human race. The justification is that if a uFAI is built, we're all toast, and if an FAI is built, it can build spacecraft that make the Falcon 9 look like a paper aeroplane.

\n

However, the development of any kind of AI may be preceded by a period where humanity has to survive nano- or bio-disasters, which space colonization definitely helps to mitigate. Before or soon after we develop cheap, advanced nanotechnology, we could already have a self-sustaining colony on the moon (though this would require NASA to get its ass in gear).

\n

I leave you with an artist's impression of the physical embodiment of government inefficiency, a spacecraft optimized to make work rather than to advance the prospects of the future of the human race:

\n


\n

The Space Shuttle cost $1.5 billion per launch (including development costs), so with a payload of 25 tons to LEO, that makes a cost of $60,000 per kg to orbit. Falcon 9 gets 10 tons to orbit for $50 million, making a cost of $5000/kg, and Falcon 9 Heavy gets 32 tons for (apparently) $78 million, a price of about $2500/kg. As the numbers clearly indicate, what we need is obviously another space shuttle. 
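For anyone who wants to check the arithmetic, here is a minimal sketch using only the rounded figures quoted above (the text's numbers, not official vehicle specifications):

```python
# Rough cost-per-kg-to-LEO comparison, using the approximate figures quoted above.
# All values are the text's rounded numbers, not official vehicle specifications.
vehicles = {
    'Space Shuttle': {'cost_usd': 1.5e9, 'payload_kg': 25_000},
    'Falcon 9': {'cost_usd': 50e6, 'payload_kg': 10_000},
    'Falcon 9 Heavy': {'cost_usd': 78e6, 'payload_kg': 32_000},
}

for name, v in vehicles.items():
    cost_per_kg = v['cost_usd'] / v['payload_kg']
    print(f'{name}: ~${cost_per_kg:,.0f}/kg')
# Space Shuttle: ~$60,000/kg
# Falcon 9: ~$5,000/kg
# Falcon 9 Heavy: ~$2,438/kg
```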

\n

 

\n

How realistic is a risk-reducing colony?

\n

Robin Hanson points out that a self-sustaining space/lunar/Martian colony is a long way away, and Vladimir Nesov and I point out that self-sustaining is unnecessary: a colony somewhere (the moon, under the ground on earth, Antarctica, etc.) needs only to be able to last a long time and to undo the disaster. So Vladimir suggests a quarantined underground colony that can do Friendly AI research in case of a Nuclear/Nanotech/Biotech disaster.

\n

 

\n

Space colonies versus underground colonies

\n

Space provides an inherent cost disadvantage to building a long-life colony that is basically proportional to the cost per kg to orbit. Once the cost to orbit falls below, say, $200/kg, the cost of building a very reliably quarantined, nuke-proof shelter on earth will catch up with the costs inherent in operating in vacuum. 

\n

It was also noted that motivating people to become lunar or Martian colonists with disaster resilience as a side benefit seems a hell of a lot easier than motivating them to be underground colonists. An underground colony with the sole aim of allowing a few thousand lucky humans to survive a major disaster is almost universally perceived negatively by the public; it pattern-matches with \"unfair\", \"elitists surviving whilst the rest of us die\", etc. It should also be noted that no one in fact constructed such a colony even though the need was great during the Cold War, and no one has constructed one since, or even tried, to my knowledge (smaller underground shelters have been constructed, but they wouldn't make the difference between extinction and survival). 

\n

On the other hand, most major nations have space programs, and it is relatively easy to convince people of the virtue of colonizing mars; \"The human urge to explore\", etc. Competitive, idealistic and patriotic pressures seem to reinforce each other for space travel. 

\n

It is therefore not the dollar cost of a space colony versus an underground colony that matters, but the amount of advocacy required to get people to spend the requisite amount of money. It may be the case that no realistic amount of advocacy will get people to build or even permit the construction of a risk-reducing underground colony. 

\n

 

\n

Rhetoric versus rational planning

\n

The thoughts that you verbalize whilst planning risk-reduction are not necessarily the same as the words you emit in a policy debate. Suppose that there is some debate involving an existential risk-reducer (X), a space advocate (S), and a person who is moderately anti-space exploration (A) (for example, the public).

\n

Perhaps S has A convinced to not block space exploration in part because saving the human race seems virtuous, and then X comes along and points out that underground shelters do the same job more efficiently. X has weakened S's position more than she has increased the probability of an underground shelter being built. Why? First of all, in a debate about space exploration, people will decide on the fate of space exploration only, then forget the details. The only good outcome of the debate for X is that space exploration goes ahead. Whether or not underground shelters get built will be (if X is really lucky) another debate entirely (most likely there will simply never be a debate about underground shelters)

\n

Second, space is a rhetorically strong position. It provides jobs (voters are insane: they are pro-government-funded-jobs and anti-tax), it fulfills our far-mode need to be positive and optimistic, symbolizing growth and freedom, and it fulfills our patriotic need to be part of a \"great\" country. Also don't underestimate the rhetorical force of the subconscious association of \"up\" with \"good\", and \"down\" with \"bad\". Underground shelters have numerous points against them: they invoke pessimism (they're only useful in a disaster), selfishness (wanting to live whilst others die), \"playing god\" (who decides who gets to go in the shelter? Therefore the most ethical option is that no-one goes in the shelter, thinks the deontologist, so don't bother building it) and injustice. 

\n

So by pointing out that space is not the most efficient way to achieve a disaster shelter, X may in fact increase existential risk. If instead she had cheered for space exploration and kept quiet about underground options or framed it as a false dichotomy, S's case would have been strengthened, and some branches of the future that would otherwise have died would survive. Furthermore, it may be that X doesn't want to spend her time advocating underground shelters, because she thinks that they have worse returns than FAI research. So X's best policy is to simply mothball the underground shelter idea, praise space exploration whenever it comes up, and focus on FAI research. 

\n

 

" } }, { "_id": "aAFanvZnmPJb666EQ", "title": "Fight Zero-Sum Bias", "pageUrl": "https://www.lesswrong.com/posts/aAFanvZnmPJb666EQ/fight-zero-sum-bias", "postedAt": "2010-07-18T05:57:11.759Z", "baseScore": 28, "voteCount": 46, "commentCount": 164, "url": null, "contents": { "documentId": "aAFanvZnmPJb666EQ", "html": "

This is the first part of a mini-sequence of posts on zero-sum bias and the role that it plays in our world today.

\n

One of the most pernicious of all human biases is zero-sum bias. A situation involving a collection of entities is zero-sum if one entity's gain is another's loss, whereas a situation is positive-sum if cooperation can leave every entity involved better off. Zero-sum bias is the tendency to systematically assume that positive-sum situations are zero-sum situations. This bias is arguably the major obstacle to a Pareto-efficient society. As such, it's very important that we work to overcome this bias (both in ourselves and in broader society).
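As a toy illustration of the distinction (the payoffs below are invented for the example, not taken from anything in this post):

```python
# Two invented 2x2 interactions; each outcome is a (player_A, player_B) payoff.
splitting_a_fixed_pie = {        # zero-sum: every dollar A gains, B loses
    ('take 7', 'take 3'): (7, 3),
    ('take 3', 'take 7'): (3, 7),
}
trading = {                      # positive-sum: cooperation grows the pie
    ('trade', 'trade'): (8, 8),
    ('refuse', 'refuse'): (5, 5),
}

# In the zero-sum case the total payoff is the same no matter what happens.
assert len({a + b for a, b in splitting_a_fixed_pie.values()}) == 1
# In the positive-sum case there is an outcome that is better for both parties.
assert max(trading.values()) == (8, 8)
```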

\n

Here I'll place this bias in context and speculate on its origin.

\n

Where this bias comes from

\n

It's always a little risky to engage in speculation about human evolution. We know so little about our ancestral environment that our mental images of it might be totally wrong. Nevertheless, the predictions of evolutionary speculation sometimes agree with empirical results, so it's not to be dismissed entirely. Also, the human mind has an easier time comprehending and remembering information when the information is embedded in a narrative, so that speculative stories can play a useful cognitive role even when wrong.

\n

Anatomically modern humans appear to have emerged 200,000 years ago. In the context of human history, economic growth is a relatively recent discovery, only beginning in earnest several thousand years ago. The idea that it was possible to create wealth was probably foreign to our ancestors. In The Bottom Billion, Paul Collier, former director of Development Research at the World Bank, speculates on the motivation of rebels in the poorest and slowest-growing countries in the world who start civil wars (despite the fact that there's a high chance of being killed as a rebel and the fact that civil wars are usually damaging to the countries involved):

\n
\n

[In the portion of the developing world outside of the poorest and slowest growing countries...] growth rates may not sound sensational, but they are without precedent in history. They imply that children in these countries will grow up to have lives dramatically different from those of their parents. Even when people are still poor, these societies can be suffused with hope: time is on their side...If low income and slow growth make a country prone to civil war, it is reasonable to want to know why. There could be many explanations. My guess is that it is at least in part because low income means poverty and low growth means hopelessness. Young men, who are the recruits for rebel armies, come pretty cheap in an environment of hopeless poverty ...if the reality of daily existence is otherwise awful, the chances of success do not have to be very high to be alluring. Even a small chance of the good life as a successful rebel becomes worth taking, despite the high risk of death, because the prospect of death is not so much worse than the prospect of life in poverty.

\n
\n

Neither the developed world nor the countries that Collier has in mind are genuinely good proxies for our ancestral environment, but like the people in the countries that Collier has in mind, our ancestors lived in contexts in which growth of resources was not happening. In such a context, the way that people acquire more resources for themselves is by taking other people's resources away. The ancient humans who survived and reproduced most successfully were those who had an intuitive sense that one entity's gain of resources can only come at the price of another entity's loss of resources. Iterate this story over thousands of generations of humans and you get modern humans with a genetic disposition toward zero-sum thinking. This is where we come from.

\n

For nearly all modern humans, the utility of zero-sum bias has lapsed. We now have very abundant evidence that the pie can grow bigger and that win-win opportunities abound. Both as individuals and as representatives of groups, modern humans have a tendency to fight over existing resources when they could be doing just as well or better by creating new resources that benefit others. Modern humans have an unprecedented opportunity to create a world of lasting prosperity. We should do our best to make the most of this opportunity by overcoming zero-sum bias.

" } }, { "_id": "ueJG2xAxdD55zhfH4", "title": "Is cryonicists’ selfishness distance induced?", "pageUrl": "https://www.lesswrong.com/posts/ueJG2xAxdD55zhfH4/is-cryonicists-selfishness-distance-induced", "postedAt": "2010-07-17T18:40:52.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "ueJG2xAxdD55zhfH4", "html": "

Tyler‘s criticism of cryonics, shared by others including me at times:

\n

Why not save someone else’s life instead?

\n

This applies to all consumption, so is hardly a criticism of cryonics, as people pointed out. Tyler elaborated that it just applies to expressive expenditures, which Robin pointed out still didn’t pick out cryonics over the vast assortment of expressive expenditures that people (who think cryonics is selfish) are happy with. So why does cryonics instinctively seem particularly selfish?

\n

I suspect the psychological reason cryonics stands out as selfish is that we rarely have the opportunity to selfishly splurge on something so far in the far reaches of far mode as cryonics, and far mode is the standard place to exercise our ethics.

\n

Cryonics is about what will happen in a *long time* when you *die*  to give you a *small chance* of waking up in a *socially distant* society in the *far future*, assuming you *widen your concept* of yourself to any *abstract pattern* like the one manifested in your biological brain and also that technology and social institutions *continue their current trends* and you don’t mind losing *peripheral features* such as your body (not to mention cryonics is *cold* and seen to be the preserve of *rich* *weirdos*).

\n

You’re not meant to be selfish in far mode! Freeze a fair princess you are truly in love with or something.  Far mode livens our passion for moral causes and abstract values.  If Robin is right, this is because it’s safe to be ethical about things that won’t affect you yet it still sends signals to those around you about your personality. It’s a truly mean person who won’t even claim someone else a long way away should have been nice fifty years ago.  So when technology brings the potential for far things to affect us more, we mostly don’t have the built in selfishness required to zealously chase the offerings.

\n

This theory predicts that other personal expenditures on far mode items will also seem unusually selfish. Here are some examples of psychologically distant personal expenditures to test this:

\n\n

I’m not sure how selfish these seem compared to other non-altruistic purchases. Many require a lot of money, which makes anything seem selfish I suspect. What do you think?

\n

If this theory is correct, does it mean cryonics is unfairly slighted because of a silly quirk of psychology? No. Your desire to be ethical about far away things is not obviously less real or legitimate than your desire to be selfish about near things, assuming you act on it. If psychological distance really is morally relevant to people, it’s consistent to think cryonics too selfish and most other expenditures not. If you don’t want psychological distance to be morally relevant then you have an inconsistency to resolve, but how you should resolve it isn’t immediately obvious. I suspect however that as soon as you discard cryonics as too selfish you will get out of far mode and use that money on something just as useless to other people and worth less to yourself, but in the realm more fitting for selfishness. If so, you lose out on a better selfish deal for the sake of not having to think about altruism. That’s not altruistic, it’s worse than selfishness.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "BMfCAPfjz4gkhCDS6", "title": "Book Club Update, Chapter 3 of Probability Theory", "pageUrl": "https://www.lesswrong.com/posts/BMfCAPfjz4gkhCDS6/book-club-update-chapter-3-of-probability-theory", "postedAt": "2010-07-16T08:25:20.056Z", "baseScore": 6, "voteCount": 5, "commentCount": 19, "url": null, "contents": { "documentId": "BMfCAPfjz4gkhCDS6", "html": "

Previously: Book Club introductory post - Chapter 1 - Chapter 2

\n

We will shortly move on to Chapter 3 (I have to post this today owing to vacation - see below). I have updated the previous post with a summary of chapter 2, with links to the discussion as appropriate. But first, a few announcements.

\n

How to participate

\n

This is both for people who have previously registered interest, as well as newcomers. This spreadsheet is our best attempt at coordinating 90+ Less Wrong readers interested in participating in \"earnest study of the great literature in our area of interest\".

\n

If you are still participating, please let the group know - all you have to do is fill in the \"Active (Chapter)\" column. Write in an \"X\" if you are checked out, or the number of the chapter you are currently reading. This will let us measure attrition, as well as adapt the pace if necessary. If you would like to join, please add yourself to the spreadsheet. If you would like to participate in live chat about the material, please indicate your time zone and preferred meeting time. As always, your feedback on the process itself is more than welcome.

\n

Refer to the Chapter 1 post for more details on how to participate and meeting schedules.

\n

Facilitator wanted

\n

I'm taking off on vacation today until the end of the month. I'd appreciate if someone wanted to step into the facilitator's shoes, as I will not be able to perform these duties in a timely manner for at least the next two weeks.

\n

Chapter 3: Elementary Sampling Theory

\n

Having derived the sum and product rules, Jaynes starts us in on a mainstay of probability theory, urn problems.

\n

Readings for the week of 19/07: Sampling Without Replacement - Logic versus Propensity. Exercises: 3.1

\n

Discussion starts here.

" } }, { "_id": "wEJTRSWuvSfCSQnBG", "title": "Chicago/Madison Meetup", "pageUrl": "https://www.lesswrong.com/posts/wEJTRSWuvSfCSQnBG/chicago-madison-meetup", "postedAt": "2010-07-15T23:30:15.576Z", "baseScore": 13, "voteCount": 10, "commentCount": 12, "url": null, "contents": { "documentId": "wEJTRSWuvSfCSQnBG", "html": "

After a successful first Chicago meetup, Airedale and I are now looking forward to the first nonfirst meetup. There are a few different options:

\n\n

Any thoughts on what's convenient for most people, and on what's a good venue?

\n

If you're interested, there's a Google Group where you can sign up for further updates. Hope to see you soon.

" } }, { "_id": "GdjHw4YZvP9uPpaz2", "title": "Financial incentives don't get rid of bias? Prize for best answer.", "pageUrl": "https://www.lesswrong.com/posts/GdjHw4YZvP9uPpaz2/financial-incentives-don-t-get-rid-of-bias-prize-for-best", "postedAt": "2010-07-15T13:24:59.276Z", "baseScore": 3, "voteCount": 12, "commentCount": 137, "url": null, "contents": { "documentId": "GdjHw4YZvP9uPpaz2", "html": "

I'm trying to better understand the relationship between incentivization and rationality, and it occurred to me that it is a \"folk fact\" around here that large financial incentives don't make cognitive biases go away. 

\n

However, I can't seem to find any papers that actually say this. It's not easy to google for (I have tried) so I wonder if the Less Wrong collective memory knows how to find the papers? 

\n

Is there a pattern to which biases go away with incentivization? Do we have at least 5 examples of biases that go away with incentivization and 5 examples that don't go away with incentivization? 

\n

As an incentive, I'll paypal $10 to the commenter whose answer is least biased and most useful. 

" } }, { "_id": "aSwFQPnvrdnZzg9Jg", "title": "So You Think You're a Bayesian? The Natural Mode of Probabilistic Reasoning", "pageUrl": "https://www.lesswrong.com/posts/aSwFQPnvrdnZzg9Jg/so-you-think-you-re-a-bayesian-the-natural-mode-of", "postedAt": "2010-07-14T16:51:40.939Z", "baseScore": 66, "voteCount": 50, "commentCount": 80, "url": null, "contents": { "documentId": "aSwFQPnvrdnZzg9Jg", "html": "

Related to: The Conjunction Fallacy, Conjunction Controversy

\n

The heuristics and biases research program in psychology has discovered many different ways that humans fail to reason correctly under uncertainty.  In experiment after experiment, its researchers show that we use heuristics to approximate probabilities rather than making the appropriate calculation, and that these heuristics are systematically biased. However, a tweak in the experimental protocols seems to remove the biases altogether and casts doubt on whether we are actually using heuristics. Instead, it appears that the errors are simply an artifact of how our brains internally store information about uncertainty. Theoretical considerations support this view.

\n

EDIT: The view presented here is controversial in the heuristics and biases literature; see Unnamed's comment on this post below.

\n

EDIT 2: The author no longer holds the views presented in this post. See this comment.

\n

A common example of the failure of humans to reason correctly under uncertainty is the conjunction fallacy. Consider the following question:

\n
\n

Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

\n

What is the probability that Linda is:

\n

(a) a bank teller

\n

(b) a bank teller and active in the feminist movement

\n
\n

In a replication by Gigerenzer, 91% of subjects rank (b) as more probable than (a), saying that it is more likely that Linda is active in the feminist movement AND a bank teller than that Linda is simply a bank teller (1993). The conjunction rule of probability states that the probability of two things being true is less than or equal to the probability of one of those things being true. Formally, P(A & B) ≤ P(A). So this experiment shows that people violate the conjunction rule, and thus fail to reason correctly under uncertainty. The representative heuristic has been proposed as an explanation for this phenomenon. To use this heuristic, you evaluate the probability of a hypothesis by comparing how \"alike\" it is to the data. Someone using the representative heuristic looks at the Linda question and sees that Linda's characteristics resemble those of a feminist bank teller much more closely than that of just a bank teller, and so they conclude that Linda is more likely to be a feminist bank teller than a bank teller.
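The conjunction rule itself is easiest to see with counts: anyone who is a feminist bank teller is, in particular, a bank teller, so the conjunction can never be more frequent. A minimal sketch with invented data:

```python
# With concrete counts, 'bank teller AND feminist' can never outnumber 'bank teller'.
# The sample below is invented purely for illustration.
people = [
    {'bank_teller': True,  'feminist': True},
    {'bank_teller': True,  'feminist': False},
    {'bank_teller': False, 'feminist': True},
    {'bank_teller': False, 'feminist': False},
    {'bank_teller': True,  'feminist': True},
]

n_teller = sum(p['bank_teller'] for p in people)
n_teller_and_feminist = sum(p['bank_teller'] and p['feminist'] for p in people)
assert n_teller_and_feminist <= n_teller   # P(A & B) <= P(A), stated as counts
print(n_teller_and_feminist, '<=', n_teller)   # 2 <= 3
```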

\n

This is the standard story, but are people really using the representative heuristic in the Linda problem? Consider the following rewording of the question:

\n
\n

\n

Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

\n

There are 100 people who fit the description above. How many of them are:

\n

(a) bank tellers

\n

(b) bank tellers and active in the feminist movement

\n
\n

Notice that the question is now strictly in terms of frequencies. Under this version, only 22% of subjects rank (b) as more probable than (a) (Gigerenzer, 1993). The only thing that changed is the question that is asked; the description of Linda (and the 100 people) remains unchanged, so the representativeness of the description for the two groups should remain unchanged. Thus people are not using the representative heuristic - at least not in general.

\n

\n

Tversky and Kahneman, champions and founders of the heuristics and biases research program, acknowledged that the conjunction fallacy can be mitigated by changing the wording of the question (1983, pg 309), but this isn't the only anomaly. Consider another problem:

\n
\n

If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person's symptoms or signs?

\n
\n

Using Bayes' theorem, the correct answer is .02, or 2%. In one replication, only 12% of subjects correctly calculated this probability. In these experiments, the most common wrong answer given is usually .95, or 95% (Gigerenzer, 1993). This is what's known as the base rate fallacy because the error comes from ignoring the \"base rate\" of the disease in the population. Intuitively, if absolutely no one has the disease, it doesn't matter what the test says - you still wouldn't think you had the disease.
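For readers who want to verify the 2% figure, here is a minimal sketch of the Bayes' theorem calculation, assuming (as the wording seems to intend) that the test always detects the disease when it is present:

```python
# Posterior probability of disease given a positive test, via Bayes' theorem.
prevalence = 1 / 1000         # P(disease)
sensitivity = 1.0             # P(positive | disease); assumed perfect detection
false_positive_rate = 0.05    # P(positive | healthy)

p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(round(p_disease_given_positive, 3))   # 0.02 -- about 2%, not 95%
```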

\n

Now consider the same question framed in terms of relative frequencies.

\n
\n

\n

One out of 1000 Americans has disease X. A test has been developed to detect when a person has disease X. Every time the test is given to a person who has the disease, the test comes out positive. But sometimes the test also comes out positive when it is given to a person who is completely healthy. Specifically, out of every 1000 people who are perfectly healthy, 50 of them test positive for the disease.

\n

Imagine that we have assembled a random sample of 1000 Americans. They were selected by a lottery. Those who conducted the lottery had no information about the health status of any of these people. How many people who test positive for the disease will actually have the disease?

\n

_____ out of _____.

\n
\n

Using this version of the question, 76% of subjects answered correctly with 1 out of 50. Instructing subjects to visualize frequencies in graphs increases this percentage to 92% (Gigerenzer, 1993). Again, re-framing the question in terms of relative frequencies rather than (subjective) probabilities results in improved performance on the test.
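The frequency framing invites simple counting; a sketch under the same assumptions as above (the count comes to 1 true positive among roughly 51 total positives, i.e. about 2%, which is what the '1 out of 50' answer is getting at):

```python
# Counting cases directly, as in the frequency framing of the problem.
sample = 1000
sick = 1                                   # 1 in 1000 actually has the disease
true_positives = sick                      # the test catches every sick person
false_positives = round((sample - sick) * 50 / 1000)   # ~50 healthy people test positive
total_positives = true_positives + false_positives
print(f'{true_positives} out of {total_positives}')     # 1 out of 51, roughly 2%
```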

\n

Consider yet another typical question in these experiments:

\n
\n

\n

Which city has more inhabitants?

\n

(a) Hyderabad

\n

(b) Islamabad

\n

How confident are you that your answer is correct?

\n

50%, 60%, 70%, 80%, 90%, 100%

\n
\n

According to Gigerenzer (1993),

\n
\n

The major finding of some two decades of research is the following: In all the cases where subjects said, \"I am 100% confident that my answer is correct,\" the relative frequency of correct answers was only about 80%; in all the cases where subjects said, \"I am 90% confident\" the relative frequency of correct answers was only about 75%, when subjects said \"I am 80% confident\" the relative frequency of correct answers was only about 65%, and so on.

\n
\n

This is called overconfidence bias. A Bayesian might say that you aren't calibrated. In any case, it's generally frowned upon by both statistical camps. If, when you say you're 90% confident, you're only right 80% of the time, why not just say you're 80% confident? But consider a different experimental setup. Instead of only asking subjects one general knowledge question like the Hyderabad-Islamabad question above, ask them 50; and instead of asking them how confident they are that their answer is correct every time, ask them at the end how many they think they answered correctly. If people are biased in the way that overconfidence bias says they are, there should be no difference between the two experiments.

\n

First, Gigerenzer replicated the original experiments, showing an overconfidence bias of 13.8% - that is, subjects' stated confidence was, on average, 13.8 percentage points higher than their actual relative frequency of correct answers.  For example, if they claimed a confidence of 90%, on average they would answer correctly 76.2% of the time.  Using the 50-question treatment, overconfidence bias dropped to -2.4%! In a second replication, the control was 15.4% and the treatment was -4.2% (1993). Note that -2.4% and -4.2% are likely not significantly different from 0, so don't interpret that as underconfidence bias. Once the probability judgment was framed in terms of relative frequencies, the bias basically disappeared.
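To make these percentages concrete, overconfidence here is just mean stated confidence minus the actual proportion of correct answers; a minimal sketch with made-up responses (the data are illustrative, not Gigerenzer's):

```python
# Overconfidence = mean stated confidence - actual proportion correct.
# The (confidence, was_correct) pairs below are invented for illustration.
responses = [(0.9, True), (0.9, False), (0.8, True), (0.8, False),
             (1.0, True), (0.7, True), (0.6, False), (0.9, True)]

mean_confidence = sum(conf for conf, _ in responses) / len(responses)
proportion_correct = sum(correct for _, correct in responses) / len(responses)
print(f'overconfidence: {mean_confidence - proportion_correct:+.1%}')   # +20.0%
```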

\n

So in all three experiments, the standard results of the heuristics and biases program fall once the problem is recast in terms of relative frequencies.  Humans don't simply use heuristics; something else more complicated is going on. But the important question is, of course, what else? To answer that, we need to take a detour through information representation. Any computer - and the brain is just a very difficult to understand computer - has to represent its information symbolically. The problem is that there are usually many ways to represent the same information. For example, 31, 11111, and XXXI all represent the same number using different systems of representation. Aside from the obvious visual differences, systems of representation also differ based on how easy they are to use for a variety of operations. If this doesn't seem obvious, as Gigerenzer says, try long division using roman numerals (1993). Crucially, this difficulty is relative to the computer attempting to perform the operations. Your calculator works great in binary, but your brain works better when things are represented visually.

\n

What does the representation of information have to do with the experimental results above? Well, let's take another detour - this time through the philosophy of probability. As most of you already know, there the two most common positions are frequentism and Bayesianism. I won't get into the details of either position beyond what is relevant, so if you're unaware of the difference and are interested click the links. According to the Bayesian position, all probabilities are subjective degrees of belief. Don't worry about the sense in which probabilities are subjective, just focus on the degrees of belief part. A Bayesian is comfortable assigning a probability to any proposition you can come up with. Some Bayesians don't even care if the proposition is coherent.

\n

Frequentists are different beasts altogether. For a frequentist, the probability of an event happening is its relative frequency in some well-defined reference class. A useful though not entirely accurate way to think about frequentist probability is that there must be a numerator and a denominator in order to get a probability. The reference class of events you are considering provides the denominator (the total number of events), and the particular event you are considering provides the numerator (the number of times that particular event occurs in the reference class). If you flip a coin 100 times and get 37 heads and are interested in heads, the reference class is coin flips. Then the probability of flipping a coin and getting heads is 37/100. Key to all of this is that the frequentist thinks there is no such thing as the probability of a single event happening without referring to some reference class. So returning to the Linda problem, there is no such thing as a frequentist probability that Linda is a bank teller, or a bank teller and active in the feminist movement. But there is a probability that, out of 100 people who have the same description as Linda, a randomly selected person is a bank teller, or a bank teller and active in the feminist movement.

\n

In addition to the various philosophical differences between the Bayesians and frequentists, the two different schools also naturally lead to two different ways of representing the information contained in probabilities. Since all the frequentist cares about is relative frequencies, the natural way to represent probabilities in her mind is through, well, frequencies. The actual number representing the probability (e.g. p=.23) can always be calculated later as an afterthought. The Bayesian approach, on the other hand, leads to thinking in terms of percentages. If probability is just a degree of belief, why not represent it as such with, say, a number between 0 and 1? A \"natural frequentist\" would store all probabilistic information as frequencies, carefully counting each time an event occurs, while a \"natural Bayesian\" would store it as a single number - a percentage - to be updated later using Bayes' theorem as information comes in. It wouldn't be surprising if the natural frequentist had trouble operating with Bayesian probabilities.  She thinks in terms of frequencies, but a single number isn't a frequency - it has to be converted to a frequency in some way that allows her to keep counting events accurately if she wants to use this information.
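A toy sketch of the two styles of bookkeeping (purely illustrative; the class names and the specific biased-coin hypothesis are assumptions of the sketch, not anything from the literature):

```python
# Two ways of carrying the same uncertainty about a coin.
# Purely illustrative; the 0.8 'biased coin' hypothesis is an assumption of the sketch.

class FrequencyTracker:
    """Natural-frequentist style: store raw counts, compute a probability on demand."""
    def __init__(self):
        self.heads = 0
        self.flips = 0

    def observe(self, is_heads):
        self.heads += is_heads
        self.flips += 1

    def p_heads(self):
        return self.heads / self.flips if self.flips else 0.5


class BeliefTracker:
    """Bayesian style: store one degree of belief (that the coin is heads-biased,
    p=0.8, versus fair, p=0.5) and update it with Bayes' rule after each flip."""
    def __init__(self, prior_biased=0.5):
        self.p_biased = prior_biased

    def observe(self, is_heads):
        like_biased = 0.8 if is_heads else 0.2
        like_fair = 0.5
        joint_biased = like_biased * self.p_biased
        joint_fair = like_fair * (1 - self.p_biased)
        self.p_biased = joint_biased / (joint_biased + joint_fair)


freq, belief = FrequencyTracker(), BeliefTracker()
for flip in [True, True, False, True, True]:
    freq.observe(flip)
    belief.observe(flip)

print(f'{freq.heads}/{freq.flips} heads observed')          # 4/5 heads observed
print(f'{belief.p_biased:.2f} belief the coin is biased')   # 0.72 belief the coin is biased
```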

\n

So if it isn't obvious by now, we're natural frequentists! How many of you thought you were Bayesians?2 Gigerenzer's experiments show that changing the representation of uncertainty from probabilities to frequencies drastically alters the results, making humans appear much better at statistical reasoning than previously thought. It's not that we use heuristics that are systematically biased, our native architecture for representing uncertainty is just better at working with frequencies. When uncertainty isn't represented using frequencies, our brains have trouble and fail in apparently predictable ways. To anyone who had Bayes' theorem intuitively explained to them, it shouldn't be all that surprising that we're natural frequentists. How does Eliezer intuitively explain Bayes' theorem? By working through examples using relative frequencies. This is also a relatively common tactic in undergraduate statistics textbooks, though it may only be because undergraduates typically are taught only the frequentist approach to probability.

\n

So the heuristics and biases program doesn't catalog the various ways that we fail to reason correctly under uncertainty, but it does catalog the various ways we reason incorrectly about probabilities that aren't in our native representation. This could be because of our native architecture just not handling alternate representations of probability effectively, or it could be because when our native architecture starts having trouble, our brains automatically resort to using the heuristics Tversky and Kahneman were talking about. The latter seems more plausible to me in light of the other ways the brain approximates when it is forced to, but I'm still fairly uncertain. Gigerenzer has his own explanation that unifies the two domains under a specific theory of natural frequentism and has performed further experiments to back it up. He calls his explanation a theory of probabilistic mental models.3 I don't completely understand Gigerenzer's theory and his extra evidence seems to equally support the hypothesis that our brains are using heuristics when probabilities aren't represented as frequencies, but I will say that Gigerenzer's theory does have elegance going for it. Capturing both groups of phenomena with a unified theory makes Occam smile.

\n

These experiments aren't the only reason to believe that we're actually pretty good at reasoning under uncertainty or that we're natural frequentists; there are theoretical reasons as well. First, consider evolutionary theory. If lower order animals are decent at statistical reasoning, we would probably expect that humans are good as well, since we all evolved from the same source. It is possible that a lower order species developed its statistical reasoning capabilities after its evolutionary path diverged from the ancestors of humans, or that statistical reasoning became less important for humans or their recent ancestors and thus evolution devoted fewer resources to the process. But the ability to reason under uncertainty seems so useful, and if any species has the mental capacity to do it, we would expect it to be humans, with their large, adept brains. Gigerenzer summarizes the evidence across species (1993):

\n
\n

Bumblebees, birds, rats, and ants all seem to be good intuitive statisticians, highly sensitive to changes in frequency distributions in their environments, as recent research in foraging behavior indicates (Gallistel, 1990; Real & Caraco, 1986). From sea snails to humans, as John Staddon (1988) argued, the learning mechanisms responsible for habituation, sensitization, and classical and operant conditioning can be described in terms of statistical inference machines. Reading this literature, one wonders why humans seem to do so badly in experiments on statistical reasoning.

\n
\n

Indeed. Should we really expect that bumblebees, birds, rats, and ants are better intuitive statisticians than us? It's certainly possible, but it doesn't appear all that likely, a priori.

\n

Theories of the brain from cognitive science provide another reason why we would be adept at reasoning under uncertainty and a reason why we would be natural frequentists. The connectionist approach to the study of the human mind suggests that the brain encodes information by making literal physical connections between neurons, represented on the mental level by connections between concepts. So, for example, if you see a dog and notice that it's black, a connection between the concept \"dog\" and the concept \"black\" is made in a very literal sense. If connectionism is basically correct, then probabilistic reasoning shouldn't be all that difficult for us. For example, if the brain needs to calculate the probability that any given dog is black, it can just count the number of connections between \"dog\" and \"black\" and the number of connections between \"dog\" and colors other than black.4 Voila! Relative frequencies. As Nobel Prize winning economist Vernon Smith puts it (2008, pg 208):

\n
\n

Hayek's theory5 - that mental categories are based on the experiential relative frequency of coincidence between current and past perceptions - seems to imply that our minds should be good at probability reasoning.

\n
\n

It also suggests that we would be natural frequentists since our brains are quite literally built on relative frequencies.
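
As a toy illustration of that counting story (my own construction, not Hayek's model or Smith's), the snippet below treats each observation as a stored connection between a concept and a feature and reads off a conditional probability as a relative frequency; the observations themselves are made up.

```python
# Counting concept-to-feature connections and reading off a relative frequency.
# The observations are hypothetical; only the counting scheme matters here.
from collections import Counter

connections = Counter()
for dog_colour in ["black", "brown", "black", "white", "black"]:
    connections[("dog", dog_colour)] += 1   # one "connection" per observation

dog_black = connections[("dog", "black")]
dog_any = sum(n for (concept, _), n in connections.items() if concept == "dog")
print(dog_black / dog_any)  # 0.6, i.e. P(black | dog) as a relative frequency
```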

\n

So both evidence and theory point in the same direction. The research of Tversky and Kahneman, among others, originally showed that humans were fairly bad at reasoning under uncertainty. It turns out much of this is an artifact of how their subjects were asked to think about uncertainty. Having subjects think in terms of frequencies basically eliminates biases in experiments, suggesting that humans are just natural frequentists - their minds are structured to handle probabilities in terms of frequencies rather than in proportions or percentages. Only when we are working with information represented in a form difficult for our native architecture to handle do we appear to be using heuristics. Theoretical considerations from both evolutionary biology and cognitive science buttress both claims - that humans are natural frequentists and that they are not so bad at handling uncertainty - at least when thinking in terms of frequencies.

\n

 

\n
\n

 

\n

Footnotes

\n

1: To any of you who raised an eyebrow, I did it on purpose ;).

\n

2: Just to be clear, I am not arguing that since we are natural frequentists, the frequentist approach to probability is the correct approach.

\n

3: What seems to be the key paper is the second link in the Google search I linked to. I haven't read it yet, so I won't really get into his theory here.

\n

4: I acknowledge that this is a very simplified example and a gross simplification of the theory.

\n

5: Friedrich Hayek, another Nobel Prize winning economist, independently developed the connectionist paradigm of the mind culminating in his 1952 book The Sensory Order. I do recommend reading Hayek's book, but not without a reading group of some sort. It's short but dense and very difficult to parse - let's just say Hayek is not known for his prose.

\n

References

\n

Gigerenzer, Gerd. 1993. \"The Bounded Rationality of Probabilistic Mental Models.\" in Manktelow, K. I., & Over, D. E. eds. Rationality: Psychological and philosophical perspectives. (pp. 284-313). London: Routledge. Preprint available online

\n

Smith, Vernon L. 2008. Rationality in Economics. Cambridge: Cambridge UP.

\n

Tversky, A., and D. Kahneman. 1983. \"Extensional versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment.\" Psychological Review 90(4):293-315. Available online.

\n

 

" } }, { "_id": "ekDXMQKwuhuKz9aow", "title": "The Instrumental Value of Your Own Time", "pageUrl": "https://www.lesswrong.com/posts/ekDXMQKwuhuKz9aow/the-instrumental-value-of-your-own-time", "postedAt": "2010-07-14T07:57:21.408Z", "baseScore": 33, "voteCount": 37, "commentCount": 45, "url": null, "contents": { "documentId": "ekDXMQKwuhuKz9aow", "html": "

What is your time worth? Economists generally assume that your time is accurately valued by the market, because time can be converted into money (via wage labor) and back again (by hiring people to, e.g., do your chores for you). In this article, I argue that the economists are wrong, and that, as a result, we have some important questions to ask ourselves about the value of time:

\n

(1) How would a rational person measure the value of an hour of her time?

\n

(2) Can time invested now be meaningfully traded off against time available later?

\n

(3) How much of your time should be spent assessing whether you are spending your time well?

\n

Executive Summary (tl;dr)

\n

The economists are wrong because, on the supply side, it's difficult to supply extra paid labor on a timescale of weeks or months; and, on the demand side, there are practical and aesthetic limits to how much extra time you can buy for yourself.  Thus, a rational person will not assume that her time is worth what the market says it is worth; your time is usually more valuable (to you) than it is to the market, because there is a market failure for medium-sized chunks of your time.  One way to get a sense of what your time is worth to you is to measure your ability to achieve goals or satisfy desires, but not all time is created equal -- an hour that you spend programming code may be implicitly supported by several hours of sleeping, eating, desk-cleaning, and other homeostatic tasks. Attempting to normalize the value of your time against a hypothetical \"maximum productivity\" level might yield more accurate information than simply checking your brain's cached value for how satisfying an experience was in hindsight, because memory focuses on events associated with intense emotions and thus often fails to adequately account for the cost of wasted time. People who spend essentially no time computing their ability to achieve their ultimate goals are probably wasting a lot of time, but the cost (in time) of evaluating your own productivity is computationally intractable; ignorance about ignorance is recursive and cannot easily be eliminated from complex chaotic systems like your life.

\n

\n

When Time Isn't Money

\n

Basic models of labor economics tend to treat a person's decision about how many hours to work each week as a scalar variable; Bob could work 39 hours this week, or 40 hours, or 50.3 hours, and would be paid accordingly.  There are several reasons why this assumption might not hold. 

\n

Among professionals, many people are not paid on an hourly basis at all, and must simply hope that increased work put in over a period of months or years will lead to promotions and higher annual salaries in the distant future. This might not pan out if, e.g., one's industry collapses, one's company collapses, one's immediate supervisor is vindictive, or life circumstances require a geographical move or leave of absence. To the extent that promotions are based on networking, politics, perseverance, and/or luck, professionals have no ability at all to trade off leisure for money at their primary jobs. Professionals usually have only a minimal ability to increase their income by taking on a second job, because each professional job usually takes in excess of 50 hours a week, and working 100+ hours a week is often unsustainable or undesirable. Professionals can moonlight as unskilled or semi-skilled workers, but this usually involves a severe pay cut; if you earn the equivalent of $50 an hour as a programmer, and you could earn $10 an hour as a barista, it's difficult to say that you can transform time into money efficiently enough for money to serve as an accurate valuation of your time. For example, if we really think that you value your time at $10/hour, and you usually have good self-control, we might be surprised at your decision to pay $20 for a taxi that saves you an unpleasant 1-hour walk. The solution to this trivial paradox is that the marginal price of your time differs from the marginal value of your time -- there is a market failure for small-to-medium-sized chunks of your time.

\n

Among wage laborers, many people are specifically prohibited from working overtime, or are not assigned tasks that are expected to take more than a set number of hours per week. Working more than one full-time job at a time can be made more difficult by standard business hours; there are far more jobs available, e.g., between 9 am and 5 pm than there are between 6 pm and 2 am. Inflexible start times and long commutes can make working multiple jobs logistically impossible even where one is able to find jobs that do not literally overlap. One's ability to work may be limited by the strength and depth of one's informal support system; e.g., if your parents are willing to babysit their grandkids for 10 hours a day but not for 14, then working an additional 4 hours would require paying for childcare, which could eat up much of the gains of additional wage labor. Your ability to work long hours may also be limited by your ability to spend a long time away from your kitchen; if you have a long commute and little storage space at work, then you may have to return home every so often in order to prepare your groceries, or else spend a significant fraction of your extra wages on eating out. For anyone lucky enough to be unfamiliar with these problems, Barbara Ehrenreich's Nickel and Dimed is a good place to start.

\n

On the demand side, there is a soft limit to the conveniences that money can buy.  Although fabulously wealthy people may be able to afford private planes or helicopters, most upper-class people are still stuck commuting in relatively suboptimal conditions at the speed of a car or train.  Flexibly priced toll roads are still in their infancy; it is very difficult to double your transportation budget so as to halve your transportation time.  Similarly, first-class train compartments do not run on smoother tracks than the coach seats.  If you like, you can pay higher rent or invest in pricier real estate to reduce your commute time, but if you don't like living in the city, you may not *want* to move downtown.  Likewise, you can pay someone else to clean your bathroom, but you might not want to pay someone else to cook for you -- there's a point where delegating all of the tasks that contribute to your personal maintenance can be disempowering and dehumanizing.  If you exist only as a specialist in fluid mechanics and a consumer of other people's wage labor, who are you?  More prosaically, if you pay a personal assistant to organize your filing cabinet, you may be unable to quickly locate your files unless you also pay him to tell you where your files are: some tasks can't be delegated in neat little chunks.

\n

None of this is to argue that time has no value as money, that money cannot save you time, or that, in the long-term, people are unable to shape their lifestyles so as to better approximate their demand for money and leisure.  The point is that, given current conditions, money is a deeply flawed measurement of the value of time, and that, as rationalists, we may be able to significantly improve upon it.

\n

Alternative Measurements

\n

If not money, then what should we use to measure the value of time? If you have Something to Protect, then you might try to measure changes in your expected probability that you will successfully protect [your sister, the ecosystem, the collected works of Gilbert & Sullivan, etc].  Many people around here seem to think that achieving Friendly AI is the most important instrumental goal of our generation by several orders of magnitude; if you really believe that, then why not ask, \"Will action X increase or decrease the probability of fAI? By how much?\"

\n

If your values don't all converge on a single goal, you might ask a series of questions like \"How much fun am I having? Will action X be fun? More fun than the alternatives? Will it help steer me into a future where I have more fun?\"

\n

This sounds both simplistic and un-workably vague, but consider the possibility that we might routinely fail to optimize our use of time even by such simple measures.  If you're an SIAI-shipper, did you floss your teeth today? E-mail your aunt? Change the oil in your car? Write your Congressman? Did any of those things actually increase the likelihood that fAI gets invented in time to save us all? By an amount even remotely comparable to, say, volunteering to translate an SIAI article into Spanish? Or, suppose you're a Fun Theorist.  Did you have fun today? More fun than yesterday? Do you know why or why not? What are you going to do about that tomorrow?  I find that I rarely ask myself these questions.  I am much more likely to ask much less relevant questions, such as \"Which way is the Post Office?\" or \"Which element comes after tin again?\" or \"Do I approve of the consumer reform legislation?\" The process of solving these questions sometimes either is a little bit fun or results in a marginal increase in fun, but not nearly as much fun as I could easily get by putting 5 minutes worth of thought into daily fun-optimization.

\n

What about you? Do you ask yourself any questions like these? Do you answer yourself? Can you think of a fourth category of time-measurements (besides money, fAI-success, and fun) that might be even more useful? Of the categories, which do you think is most useful?

\n

The Hidden Costs (in Time) of Spending Time

\n

[Note: this section is re-posted and plagiarized from my earlier comment on an Open Thread.  I feel like it might be OK to post it twice because it got 9 points and only 1 comment; the commenter basically said that he didn't know how to answer the questions in this section but that he was curious.]

\n
\n
\n

Any given goal that I have tends to require an enormous amount of \"administrative support\" in the form of homeostasis, chores, transportation, and relationship maintenance. I estimate that the ratio may be as high as 7:1 in favor of what my conscious mind experiences as administrative bullshit, even for relatively simple tasks.

\n

For example, suppose I want to go kayaking with friends. My desire to go kayaking is not strong enough to override my desire for food, water, or comfortable clothing, so I will usually make sure to acquire and pack enough of these things to keep me in good supply while I'm out and about. I might be out of snack bars, so I bike to the store to get more. Some of the clothing I want is probably dirty, so I have to clean it. I have to drive to the nearest river; this means I have to book a Zipcar and walk to the Zipcar first. If I didn't rent, I'd have to spend some time on car maintenance. When I get to the river, I have to rent a kayak; again, if I didn't rent, I'd have to spend some time loading and unloading and cleaning the kayak. After I wait in line and rent the kayak, I have to ride upstream in a bus to get to the drop-off point.

\n

Of course, I don't want to go alone; I want to go with friends. So I have to call or e-mail people till I find someone who likes kayaking and has some free time that matches up with mine and isn't on crutches or sick at the moment. Knowing who likes kayaking and who has free time when -- or at least knowing it well enough to do an intelligent search that doesn't take all day -- requires checking in with lots of acquaintances on a regular basis to see how they're doing.

\n

There are certainly moments of pleasure involved in all of these tasks; clean water tastes good; it feels nice to check in on a friend's health; there might be a pretty view from the bumpy bus ride upstream. But what I wanted to do, mostly, was go kayaking with friends. It might take me 4-7 hours to get ready to kayak for 1-2 hours. Some of the chores can be streamlined or routinized, but if it costs me effort to be sure to do the same chore at the same time every week, then it's not clear exactly how much I'm saving in terms of time and energy.

\n

I have the same problem at work; although, by mainstream society's standards, I am a reasonably successful professional, I can't really sit down and write a great essay when I'm too hot, or, at least, it seems like I would be more productive if I stopped writing for 5 minutes and cranked up the A/C or changed into shorts. An hour later, it seems like I would be more productive if I stopped writing for 20 minutes and ate lunch. Later that afternoon, it seems like I would be more productive if I stopped for a few minutes and read an interesting article on general science. These things happen even in an ideal working environment, when I'm by myself in a place I'm familiar with. If I have coworkers, or if I'm in a new town, there are even more distractions. If I have to learn who to ask for help with learning to use the new software so that I can research the data that I need to write a report, then I might spend 6 hours preparing to spend 1 hour writing a report.

\n

All this worries me for two reasons: (1) I might be failing to actually optimize for my goals if I only spend 10-20% of my time directly performing target actions like \"write essay\" or \"kayak with friends,\" and (2) even if I am successfully optimizing, it sucks that the way to achieve the results that I want is to let my attention dwell on the most efficient ways to, say, brush my teeth. I don't just want to go kayaking, I want to think about kayaking. Thinking about driving to the river seems like a waste of cognitive \"time\" to me.

\n

Does anyone else have similar concerns? Anyone have insights or comments? Am I framing the issue in a useful way? Is the central problem clearly articulated? Just about any feedback at all would be appreciated.

\n
\n
\n

Maximum Theoretical Productivity

\n

Assuming you buy the analysis in the section above about the hidden costs of time, one could imagine a maximally productive hour in which all administrative and maintenance tasks had already been taken care of. When you actually get on the kayak next to your friend, or actually sit down to write the essay with a comfortably full stomach and a functioning A/C, you are close to maximum efficiency. You are efficient not in the rationalist sense of having chosen the shortest route to your terminal goal, but in the mechanical sense of moving along whatever route you have chosen at the fastest possible speed. As you deal with more and more of your homeostatic bullshit in advance of your attempt to begin your chosen task, i.e., as you reduce wasted time to zero, you asymptotically approach your Maximum Theoretical Productivity (\"MTP\").

\n

Note that this may be a very stupid thing to do; there is no particular reason, usually, to want to be at peak efficiency from 4 to 5 pm, let alone at the cost of spending all morning and afternoon getting ready for that hour of efficiency. In fact, such a strategy would probably be a massive waste of time, and seriously lower your average productivity for the day. Final exams and job interviews are two notable exceptions, but mostly I'm just trying to get some analytical traction going here -- I'm curious what it would mean to be at maximum efficiency for an arbitrary period of time.

\n

If such a concept could be effectively developed, it would be very useful for certain kinds of utility-comparisons that are currently very difficult to perform. For example, suppose I'm trying to decide which hobbies or which friends to hang on to after a move. These are really difficult things to rationally compare... the availability heuristic tends to make us disproportionately remember the most startling or sensually rich features of our activities, and these may not be a good guide to what is truly rewarding. I recently drove 6 hours round-trip to visit a modern art museum for 1 hour, and now, two months later, all I can remember is my strong emotional reaction to some of the artwork, plus one or two of the more interesting views from the rural highway. What I've forgotten (at an emotional level) is the boredom and nausea of the bumpy journey. Is it wise to rely on my heavily filtered impressions to properly weight the costs and benefits? Wouldn't I do better to focus on the experience of viewing the art, and then mentally divide it by 7 before comparing it to other experiences, which will also need to be divided by their administrative support ratios (\"ASRs\")?

\n

Another application would be figuring out how much time you could gain or lose, net of hidden costs, by changing your lifestyle. For example, at 3 am, as I write this post, I am not a very efficient typist or composer.  Writing 4000 interesting words takes me 5 hours.  Suppose I wake up at the same time every morning, and I need to go to bed at 2 am to be maximally efficient.  Had I gone to bed an hour earlier, I might have been able to type 4000 interesting words in only 3 hours.  With adequate sleep, i.e., at my MTP, I can write 4000 words in 4 hours; 3 for typing and 1 for sleeping.  So my MTP can be expressed as 1000 words/hour.  Without adequate sleep, I need 5 hours to write 4000 words, so I write at only 800 words / hour.  My decision to stay up an extra hour has created a deadweight loss of 200 words, or, equivalently, 12 minutes at my MTP.  If, on average, it takes me 3 minutes of homeostasis and maintenance for each minute of MTP, then I have wasted the equivalent of 12 * 3 = 36 real minutes of my life.  This kind of analysis can be both a motivating tool and a way of figuring out which  anti-akrasia tools are most effective.
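
For anyone who wants to replay that arithmetic, here is a minimal Python sketch that simply mirrors the post's own figures and its way of attributing the deadweight loss; the variable names and the 3:1 maintenance ratio come from the example above, not from any standard accounting.

```python
# Re-running the worked example above with its own numbers and definitions.

WORDS = 4000
MTP_RATE = 1000            # words per hour at Maximum Theoretical Productivity
TIRED_RATE = WORDS / 5     # 800 words per hour when under-slept
ASR = 3                    # minutes of maintenance per minute of MTP

deadweight_words = MTP_RATE - TIRED_RATE            # 200 words, as in the post
minutes_at_mtp = deadweight_words / MTP_RATE * 60   # 12 minutes of MTP
real_minutes = minutes_at_mtp * ASR                 # 12 * 3 = 36 "real" minutes

print(deadweight_words, minutes_at_mtp, real_minutes)  # 200.0 12.0 36.0
```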

\n

MTP can break down when you are trying to have fun, rather than to accomplish a goal, but sometimes you can pay attention to how long you have to do something in order to get a roughly equivalent sense of satisfaction. Sometimes I amble through an expressive but highly arbitrary series of chords on my guitar, with the intention of airing whatever emotions might be burbling under my layers of conscious thought and also of exploring a bit of music theory. If I am overtired, or thirsty, or worried, I will be less able to focus on what I am playing, I will more often hit chords that are \"wrong\" in the sense of not fitting into even my loosely structured pattern, and it will take me longer to come up with a series of chords that actually produces catharsis and/or satisfies my musical curiosity.

\n

Rationally Researching the Cost of Research

\n

In order to do anything purposefully, you first have to know something about your situation; it wouldn't make any sense to take action with a perfectly blank set of Bayesian priors, because, on average, you would expect your action to have no effect.  Unfortunately, gathering information about your situation is  costly.  Worse, since you are not a perfect rationalist, gathering information about how well you are gathering information is  also costly.  So far as I can tell, this phenomenon is endlessly recursive.  There are certain well-defined situations where the cost of information is expected to converge; i.e., finding out how much you know about how much you know about how much you know will provide you with so little in the way of an increased chance of choosing the optimal action that you can safely engage in zero research (or epsilon research) at that layer of recursion and still get pretty good results.

\n

So far as I can tell (irony not intended), these situations are few and far between. No matter how accurate and complete you might think your model is, you could still be totally surprised by variables that you never expected to influence your predictions. Take climate change, for instance. You might correctly model the effects of CO2, dust, ash, sunspots, ocean currents, carbon dissolved in the ocean floor, albedo lost by melting ice, and geo-engineered reflectors, only to discover (too late) that a new kind of uber-popular roofing tile that happens to be pure black or pure white will dramatically change the Earth's albedo as the developing world urbanizes. It's practically impossible to tell how many variables you need to correctly model in order to generate arbitrarily accurate predictions, because in a chaotic system, even a single rogue variable can totally throw off your results. Or, it might not -- maybe, on balance, the more variables you consider, the more accurate your results will be. Or maybe it just looks like adding variables increases accuracy, but then the eleventh and twelfth variables suddenly turn out to be useless.

\n

Many aspects of our personal lives might turn out to be nearly as chaotic as the Earth's climate -- you can gather data all week about how your new productivity plan is working out, and conclude that it's time to cut down on data-gathering and start re-investing an extra 5 minutes a week in actually getting stuff done, only to be surprised by a huge bout of akrasia that you could have prevented if you'd been on guard.  You can spend hours each day gathering data on yourself all month and be pleased with your results, only to fail some crucial real-life test by a few percentage points that you could have easily passed if you'd just taken the most obvious strategy.

\n

Are there any shortcuts at all here? Is anyone familiar with a literature on the recursive cost of information in a complex situation?

\n

Anyway, thanks for your time, and your comments.

" } }, { "_id": "iKom3mAyYLj6BfZ4t", "title": "Room for rent in North Berkeley house", "pageUrl": "https://www.lesswrong.com/posts/iKom3mAyYLj6BfZ4t/room-for-rent-in-north-berkeley-house", "postedAt": "2010-07-13T20:28:20.155Z", "baseScore": 10, "voteCount": 35, "commentCount": 58, "url": null, "contents": { "documentId": "iKom3mAyYLj6BfZ4t", "html": "

Hi Less Wrong. I am moving into a 5 bedroom house in North Berkeley with Mike Blume and Emil Gilliam. We have an extra bedroom available.

\n

It's located in the Gourmet Ghetto neighborhood (because we can afford to eat at Chez Panisse when we aren't busy saving the world, right? I didn't think so) and is about 1/2 mile from the Downtown Berkeley and North Berkeley BART stations. From Downtown Berkeley to Downtown SF via BART, it is a painless 25-minute commute. The bedroom is unfurnished and available right now. Someone willing to commit to living there for one year is preferred, but we are willing to consider six-month or month-to-month leases.

\n

I'm open to living with a wide range of people and tend to be extremely tolerant of people's quirks. I am not tolerant of drama, so I am open to living with anyone who will not bring any sort of unneeded conflict to my living space.

\n

~$750/month+utilities. Easy street parking available.

\n

Feel free to ask questions via email (kfischer @# gmail %# com) or in the comments here.

\n

And before any of you pedants downvote me because \"Less Wrong is not Craigslist\", this is kind of like a year long Less Wrong meetup.

" } }, { "_id": "hM3kGC334dh6LySMq", "title": "Some Thoughts Are Too Dangerous For Brains to Think", "pageUrl": "https://www.lesswrong.com/posts/hM3kGC334dh6LySMq/some-thoughts-are-too-dangerous-for-brains-to-think", "postedAt": "2010-07-13T04:44:12.287Z", "baseScore": 15, "voteCount": 67, "commentCount": 318, "url": null, "contents": { "documentId": "hM3kGC334dh6LySMq", "html": "
[EDIT - While I still support the general premise argued for in this post, the examples provided were fairly terrible. I won't delete this post because the comments contain some interesting and valuable discussions, but please bear in mind that this is not even close to the most convincing argument for my point.]
\n
A great deal of the theory involved in improving computer and network security involves the definition and creation of \"trusted systems\", pieces of hardware or software that can be relied upon because the input they receive is entirely under the control of the user. (In some cases, this may instead be the system administrator, manufacturer, programmer, or any other single entity with an interest in the system.) The only way to protect a system from being compromised by untrusted input is to ensure that no possible input can cause harm, which requires either a robust filtering system or strict limits on what kinds of input are accepted: a blacklist or a whitelist, roughly.
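
As a rough, generic illustration of that blacklist/whitelist distinction (a sketch of my own, not anything specific to this post), the snippet below shows how a blacklist fails open on novel input while a whitelist fails closed; the dangerous substrings and the allowed character set are invented examples.

```python
# Blacklist: enumerate known-bad input. Whitelist: enumerate known-good input.
import re

BLACKLIST = ("<script", "drop table")                        # known-bad fragments
WHITELIST_PATTERN = re.compile(r"^[A-Za-z0-9 _.-]{1,64}$")   # expected shape only

def blacklist_ok(user_input: str) -> bool:
    # Fails open: anything not explicitly listed slips through.
    return not any(bad in user_input.lower() for bad in BLACKLIST)

def whitelist_ok(user_input: str) -> bool:
    # Fails closed: only input matching the expected shape is accepted.
    return bool(WHITELIST_PATTERN.match(user_input))

print(blacklist_ok("<img onerror=alert(1)>"))  # True: a novel attack gets past the blacklist
print(whitelist_ok("<img onerror=alert(1)>"))  # False: rejected by the whitelist
```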
\n
One of the downsides of having a brain designed by a blind idiot is that said idiot hasn’t done a terribly good job with limiting input or anything resembling “robust filtering”. Hence that whole bias thing. A consequence of this is that your brain is not a trusted system, which itself has consequences that go much, much deeper than a bunch of misapplied heuristics. (And those are bad enough on their own!)
\n
In discussions of the AI-Box Experiment I’ve seen, there has been plenty of outrage, dismay, and incredulity directed towards the underlying claim: that a sufficiently intelligent being can hack a human via a text-only channel. But whether or not this is the case (and it seems to be likely), the vulnerability is trivial in the face of a machine that is completely integrated with your consciousness and can manipulate it, at will, towards its own ends and without your awareness.
\n
Your brain cannot be trusted. It is not safe. You must be careful with what you put into it, because it  will decide the output, not you. We have been warned, here on Less Wrong, that there is dangerous knowledge; Eliezer has told us that knowing about biases can cause us harm. Nick Bostrom has written a paper describing dozens of ways in which information can hurt us, but he missed (at least) one.
\n
The acquisition of some thoughts, discoveries, and pieces of evidence can lower our expected outcomes, even when they are true. This can be accounted for; we can debias. But some thoughts and discoveries and pieces of evidence can be used by our underhanded, untrustworthy brains to change our utility functions, a fate that is undesirable for the same reason that being forced to take a murder pill is undesirable.
\n
(I am making a distinction here between the parts of your brain that you have access to and can introspect about, which for lack of better terms I call “you” or “your consciousness”, and the vast majority of your brain, to which you have no such access or awareness, which I call “your brain.” This is an emotional manipulation, which you are now explicitly aware of. Does that negate its effect? Can it?)
\n

A few examples (in approximately increasing order of controversy):

\n
Identity Politics: Paul Graham and Kaj Sotala have covered this ground, so I will not rehash their arguments. I will only add that, in the absence of a stronger aspect of your identity, truly identifying as something new is an irreversible operation. It might be overwritten again in time, but your brain will not permit an undo.
\n
Power Corrupts: History is littered with examples of idealists seizing power only to find themselves betraying the values they once held dear. No human who values anything more than power itself should seek it; your brain will betray you. There has not yet been a truly benevolent dictator and it would be delusional at best to believe that you will be the first. You are not a mutant. (EDIT: Michael Vassar has pointed out that there have been benevolent dictators by any reasonable definition of the word.)
\n
Opening the Door to Bigotry: I place a high value on not discriminating against sentient beings on the basis of artifacts of the birth lottery. I’ve also observed that people who come to believe that there are significant differences between the sexes/races/whatevers on average begin to discriminate against all individuals of the disadvantaged sex/race/whatever, even when they were only persuaded by scientific results they believed to be accurate and were reluctant to accept that conclusion. I have watched this happen to smart people more than once. Furthermore, I have never met (or read the writings of) any person who believed in fundamental differences between the whatevers and who was not also to some degree a bigot.
\n
One specific and relatively common version of this is found in people who believe that women have a lower standard deviation on measures of IQ than men. This belief is not incompatible with believing that any particular woman might be astonishingly intelligent, but these people all seem to have a great deal of trouble applying the latter to any particular woman. There may be exceptions, but I haven't met them. Based on all the evidence I have, I've made a conscious decision to avoid seeking out information on sex differences in intelligence and other, similar kinds of research. I might be able to resist my brain's attempts to change what I value, but I'm not willing to take that risk; not yet, not with the brain I have right now.
\n
If you know of other ways in which a person’s brain might stealthily alter their utility function, please describe them in the comments.
\n

If you proceed anyway...

\n
If the big red button labelled “DO NOT TOUCH!” is still irresistible, if your desire to know demands you endure any danger and accept any consequences, then you should still think really, really hard before continuing. But I’m quite confident that a sizable chunk of the Less Wrong crowd will not be deterred, and so I have a final few pieces of advice.
\n
\n\n
\n
Just kidding! That would be impossibly ridiculous.
" } }, { "_id": "adn3RZ3eQaJy5W447", "title": "MIRI Call for Volunteers", "pageUrl": "https://www.lesswrong.com/posts/adn3RZ3eQaJy5W447/miri-call-for-volunteers", "postedAt": "2010-07-12T23:49:57.918Z", "baseScore": 19, "voteCount": 16, "commentCount": 10, "url": null, "contents": { "documentId": "adn3RZ3eQaJy5W447", "html": "

Are you interested in reducing existential risk? Are you a student who wants to donate to existential risk reduction, but doesn't have any money? Are you a past or present Visiting Fellow applicant? Do you want to apply to the Visiting Fellows program, but can't take time off work or school? If the answer to any one of these questions is yes, you should join the Singularity Institute Volunteer Program. The Singularity Institute is looking for volunteers to do things like:

 - Review and proofread MIRI publications.
 - Promote MIRI sites like singinst.org and lesswrong.com.
 - Contribute content to the MIRI Blog.
 - Create online videos or other digital content.
 - Volunteer for the 2010 Singularity Summit.
 - Organize monthly dinner parties to cultivate new supporters.
 - Translate MIRI webpages into other languages, e.g. French, German, Japanese, Mandarin, Spanish, etc.
 - Contribute to the collaborative rationality blog Less Wrong.
 - Host a Less Wrong meetup, or remind organizers to host them.

Requirements for volunteers are fairly minimal, but you must be able to:

 - Read and write English on a basic or higher level
 - Complete tasks reliably with minimal supervision
 - Stick to deadlines, and let us know if you can't meet them

Additional skills, like programming, ability to write well, foreign languages, math talent, etc. are a definite plus. If you are interested, please shoot us an email with a brief summary of who you are, what your interests and skills are and how you'd like to help.

If you want to contribute, but don't know how you can help, please email MIRI Volunteer Coordinator Louie Helm at louie@intelligence.org.

Apply today!

" } }, { "_id": "heCjnsZqvj9jsAwSD", "title": "The President's Council of Advisors on Science and Technology is soliciting ideas", "pageUrl": "https://www.lesswrong.com/posts/heCjnsZqvj9jsAwSD/the-president-s-council-of-advisors-on-science-and", "postedAt": "2010-07-12T23:41:04.746Z", "baseScore": 13, "voteCount": 16, "commentCount": 83, "url": null, "contents": { "documentId": "heCjnsZqvj9jsAwSD", "html": "

The question that the ideas are supposed to be in response to is:

\n
\n

What are the critical infrastructures that only government can help provide that are needed to enable creation of new biotechnology, nanotechnology, and information technology products and innovations -- a technological congruence that we have been calling the “Golden Triangle\" -- that will lead to new jobs and greater GDP?\"

\n
\n

Here are links to some proposed ideas that you should vote for, assuming you agree with them. You do have to register to vote, but the email confirmation arrives right away and it shouldn't take much more than two minutes of your time altogether. Why should you do this? The top voted ideas from this request for ideas will be seen by some of the top policy recommendation makers in the USA. They probably won't do anything like immediately convene a presidential panel on AGI, but we are letting them know that these things are really important.

\n

Research the primary cause of degenerative diseases: aging / biological senescence

\n

Explore proposals for sustaining the economy despite ubiquitous automation

\n

Establish a Permanent Panel or Program to Address Global Catastrophic Risks, Including AGI

\n

Does anyone have any other ideas? Feel free to submit them directly to ideascale, but it may be a better idea to first post them in the comments of this post for discussion.

" } }, { "_id": "7ZkHyrBFaDwZ3XgLi", "title": "The red paperclip theory of status", "pageUrl": "https://www.lesswrong.com/posts/7ZkHyrBFaDwZ3XgLi/the-red-paperclip-theory-of-status", "postedAt": "2010-07-12T23:08:57.088Z", "baseScore": 59, "voteCount": 50, "commentCount": 24, "url": null, "contents": { "documentId": "7ZkHyrBFaDwZ3XgLi", "html": "

Followup to: The Many Faces of Status (This post co-authored by Morendil and Kaj Sotala - see note at end of post.)

In brief: status is a measure of general purpose optimization power in complex social domains, mediated by \"power conversions\" or \"status conversions\".
 

What is status?
 

Kaj previously proposed a definition of status as \"the ability to control (or influence) the group\", but several people pointed out shortcomings in that. One can influence a group without having status, or have status without having influence. As a glaring counterexample, planting a bomb is definitely a way of influencing a group's behavior, but few would consider it to be a sign of status.

But the argument of status as optimization power can be made to work with a couple of additional assumptions. By \"optimization power\", recall that we mean \"the ability to steer the future in a preferred direction\". In general, we recognize optimization power after the fact by looking at outcomes. Improbable outcomes that rank high in an agent's preferences attest to that agent's power. For the purposes of this post, we can in fact use \"status\" and \"power\" interchangeably.

In the most general sense, status is the general purpose ability to influence a group. An analogy to intelligence is useful here. A chess computer is very skilled at the domain of chess, but has no skill in any other domain. Intuitively, we feel like a chess computer is not intelligent, because it has no cross-domain intelligence. Likewise, while planting bombs is a very effective way of causing certain kinds of behavior in groups, intuitively it doesn't feel like status because it can only be effectively applied to a very narrow set of goals. In contrast, someone with high status in a social group can push the group towards a variety of different goals. We call a certain type of general purpose optimization power \"intelligence\", and another type of general purpose optimization power \"status\". Yet the ability to make excellent chess moves is still a form of intelligence, but only a very narrow one.

A power conversion framework
 

The framework that (provisionally) makes the most sense to us is the following, based on the idea of power conversions.

  1. There are a great many forms of status, and a great many forms of power. (Earlier discussion hinted to the \"godshatteriness\" of status.) Dominance is one, but so are wealth, prestige, sex appeal, conceptual clarity, and so on.
  2. All such forms of power are evidenced by the ability to steer the future into a preferred direction, with respect to the form considered: sex appeal consists of the ability to reliably cause arousal in others.
  3. Any given form of power will have limited usefulness in isolation. This is why planting bombs does not equate to \"high status\" - detonating a bomb is a dead-end use of one's power to bring about certain outcomes.
  4. Greater optimization power accrues to those who have the crucial ability to convert one form into another. For instance, causing bombs to be planted, so that you can credibly threaten to blow people up, may be converted into other forms of power (for instance political).
  5. Neither the claim \"status is something you have\" nor \"status is something you do\" is fully correct. Status emerges as an interaction between the two: a fancy title is meaningless if nobody in the group respects you, but even if you have the respect of the group, you can act to elevate the status of others and try not to have a disproportionate influence on the group.
  6. The difference between intelligence and status is that intelligence is the general ability to notice and predict patterns and take advantage of them. Even if you were the only person in the world, it would still be meaningful to talk about your intelligence. In contrast, your status is undefined outside the context of a specific social group.

In the story of the \"red paperclip\", Kyle MacDonald started out with just a single paperclip, which he then traded for a pen, which he traded for a doorknob, until he eventually ended up with a two-story farmhouse. Humans are not normally much interested in paperclips, but if you happen to know someone who desires a red paperclip, and happen to be in possession of one, you may be able to trade it for some other item that has more value in your eyes, even while the other party also sees the trade as favorable. Value is complex!

The more forms of status/power you are conversant with, and the greater your ability to convert one into another, the more you will be able to bring about desirable outcomes through a chain of similar trades.
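
One way to picture this (a toy model of my own, not the authors') is as a directed graph of feasible conversions, in which a chain of trades is just a path; the node names and edges below are invented purely for illustration.

```python
# Power conversions as a directed graph; a chain of trades is a path through it.
from collections import deque

CONVERSIONS = {
    "rapport": ["trust"],
    "trust": ["knowledge of the group", "approval"],
    "approval": ["positional power"],
    "positional power": ["wealth"],
    "wealth": ["political influence"],
}

def conversion_chain(start, goal):
    """Breadth-first search for a chain of conversions from start to goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in CONVERSIONS.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain of known conversions reaches the goal

print(conversion_chain("rapport", "wealth"))
# ['rapport', 'trust', 'approval', 'positional power', 'wealth']
```

Adding or removing an edge (a conversion you know how to perform) directly changes which outcomes are reachable, which is the sense in which being conversant with more conversions means more general purpose optimization power.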

Because we are social animals, these chains of conversions predictably start with primitives such as dominance, grooming and courting. However, in and of themselves skills in these domains do not constitute \"having status\", and are not sufficient to \"raise your status\", both expressions this framework exposes as meaningless. The name of the game is to convert the temporary power gained from (say) a dominance behaviour into something further, bringing you closer to something you desire: reproduction, money, a particular social position...

Subtleties

In fact (this is a somewhat subtle point) some conversions may require successfully conveying less power than you actually have, because an excess of power in one domain may hinder your opportunities for conversion in another. One example is the PUA technique consisting of pretending that \"you only have five minutes\" when engaging a group in conversation. This ostensibly limits your power to choose the topic of conversation, but at the same time allows you to more effectively create rapport, a slightly different form of relational power. Gaining the group's trust is what you are after, which is easier to convert into knowledge about the group, which is yet another kind of power to be converted later. This is a more subtle play than creating the impression that you are barging into the group.

Another example can be found in Keith Johnstone's Impro. Johnstone has an exercise where an angry person is blaming another for having read his letters without permission. The person under attack reacts by debasing himself: \"Yes, I did it. I always do things like that. I'm a horrible person...\" As the conversation continues, it becomes harder and harder for the angry person to continue without making himself feel like a complete jerk. Johnstone describes this as lowering your own status in order to defend yourself, which would make no sense if we had a theory where status was simply the ability to influence someone. You can't influence someone by reducing your ability to influence them! But with the concept of power conversion, we can see this as converting general-purpose influence to a very narrow form of influence. The person debasing themselves will have a hard time getting the other to agree to any other concessions for a while.

A similar argument can be made in the case of a needy person, who controls somebody that cares about them by being generally incompetent and needing a lot of help. The needy, clingy person is low-status because he has converted his ability to influence people in general to the ability to influence a specific person.

Conclusions

Relational, short-term forms of power (what has been discussed under the heading of \"status-related behaviours\") are only the beginning of a long chain. People who are good at \"corporate politics\" are, in fact, skilled at converting such social advantages as rapport, trust or approval into positional power. This in turn can readily be converted into wealth, which can in turn be converted into yet other kinds of power, and so on. Individuals widely recognized as high-status are \"wild capitalists\" performing in social groups the same kind of efficient trades as the hero of the red paperclip story. (This framework can also help make sense of high-performing individuals' careers in other domains, such as science: see the essay at the previous link.)

This way of thinking about status gives us a better handle on reconciling status-related intuitions that appear contradictory, e.g. \"it's all about primate dominance\" vs \"it's all about wielding influence\". When we see status as a web of power conversions, spanning a number of different forms of power, we see how to integrate these diverse intuitions without either denying any one of them or giving any one primacy over all others.

Authorship note
 

(This is a co-authored post. Morendil first applied the concept of power transfers to status and wrote most of this article. Kaj re-wrote the beginning and suggested the analogy to intelligence, supplying points V and VI of the framework as well as the examples of the self-debasing and the needy person.)

" } }, { "_id": "wTrgm2meHePfn3ykT", "title": "A Rational Identity", "pageUrl": "https://www.lesswrong.com/posts/wTrgm2meHePfn3ykT/a-rational-identity", "postedAt": "2010-07-12T22:59:12.814Z", "baseScore": 44, "voteCount": 37, "commentCount": 26, "url": null, "contents": { "documentId": "wTrgm2meHePfn3ykT", "html": "

How facts backfire (previous discussion) discusses the phenomenon where correcting people's mistaken beliefs about political issues doesn't actually make them change their minds. In fact, telling them the truth about things might even reinforce their opinions and entrench them even firmer in their previous views. \"The general idea is that it’s absolutely threatening to admit you’re wrong\", says one of the researchers quoted in the article.

\n

This should come as no surprise to the people here. But the interesting bit is that the article suggests a way to make people evaluate information in a less biased manner. They mention that one's willingness to accept contrary information is related to one's self-esteem: Nyhan worked on one study in which he showed that people who were given a self-affirmation exercise were more likely to consider new information than people who had not. In other words, if you feel good about yourself, you’ll listen — and if you feel insecure or threatened, you won’t.

\n

I suspect that the beliefs that are the hardest to change, even if the person had generally good self-esteem, are those which are central to their identity. If someone's identity is built around capitalism being evil, or socialism being evil, then any arguments about the benefits of the opposite economical system are going to fall on deaf ears. Not only will that color their view of the world, but it's likely that they're deriving a large part of their self-esteem from that identity. Say something that challenges the assumptions built into their identity, and you're attacking their self-esteem.

\n

Keith Stanovich tells us that simply being intelligent isn't enough to avoid bias. Intelligent people might be better at correcting for bias, but there's no strong correlation between intelligence and the disposition to actually correct for your own biases. Building on his theory, we can assume that threatening opinions will push even non-analytical people into thinking critically, but non-threatening ones won't. Stanovich believes that spreading awareness of biases might be enough to help a lot of people, and to some degree it might. But we also know about the tendency to only use your awareness of bias to attack arguments you don't like. In the same way that telling people facts about politics sometimes only polarizes opinions, telling people about biases might similarly only polarize the debate as everyone thinks their opposition is hopelessly deluded and biased.

\n

So we need to create a new thinking disposition, not just for actively attacking the perceived threats, but for critically evaluating your opinions. That's hard. And I've found for a number of years now that the main reason I try to actively re-evaluate my opinions and update them as necessary is because doing so is part of my identity. I pride myself on not holding onto ideology and for changing my beliefs when it feels like they should be changed. Admitting that somebody else is right and I am wrong does admittedly hurt, but it also feels good that I was able to do so despite the pain. And when I'm in a group where everyone seems to agree about something as self-evident, it frequently works as a warning sign that makes me question the group consensus. Part of the reason why I do that is that I enjoy the feeling of knowing that I'm actively on guard against my mind just adopting whatever belief happens to be fashionable in the group I'm in.

\n

It seems to me that if we want to actually raise the sanity waterline and make people evaluate things critically, and not just conform to different groups than is the norm, a crucial part of that is getting people to adopt an identity of critical thinking. This way, the concept of identity ceases to be something that makes rational thinking harder and starts to actively aid it. I don't really know how one can effectively promote a new kind of identity, but we should probably take lessons from marketers and other people who appeal strongly to emotions. You don't usually pick your identity based on logical arguments. (On the upside, this provides a valuable hint to the question of how to raise rationalist children.)

" } }, { "_id": "zRbh2mYgXtDJk8T42", "title": "What Intelligence Tests Miss: The psychology of rational thought", "pageUrl": "https://www.lesswrong.com/posts/zRbh2mYgXtDJk8T42/what-intelligence-tests-miss-the-psychology-of-rational", "postedAt": "2010-07-11T23:01:55.080Z", "baseScore": 53, "voteCount": 44, "commentCount": 55, "url": null, "contents": { "documentId": "zRbh2mYgXtDJk8T42", "html": "

This is the fourth and final part in a mini-sequence presenting Keith E. Stanovich's excellent book What Intelligence Tests Miss: The psychology of rational thought.

If you want to give people a single book to introduce them to the themes and ideas discussed on Less Wrong, What Intelligence Tests Miss is probably the best currently existing book for doing so. It does have a somewhat different view on the study of bias than we on LW: while Eliezer concentrated on the idea of the map and the territory and aspiring to the ideal of a perfect decision-maker, Stanovich's perspective treats bias more as a thing that prevents people from taking full advantage of their intelligence. Regardless, for someone less easily persuaded by LW's somewhat abstract ideals, reading Stanovich's concrete examples first and then proceeding to the Sequences is likely to make the content presented in the sequences much more interesting. Even some of our terminology such as "carving reality at the joints" and the instrumental/epistemic rationality distinction will be more familiar to somebody who has first read What Intelligence Tests Miss.

Below is a chapter-by-chapter summary of the book.

Inside George W. Bush's Mind: Hints at What IQ Tests Miss is a brief introductory chapter. It starts with the example of President George W. Bush, mentioning that the president's opponents frequently argued against his intelligence, and even his supporters implicitly conceded the point by arguing that even though he didn't have "school smarts" he did have "street smarts". Both groups were purportedly surprised when it was revealed that the president's IQ was around 120, roughly the same as that of his 2004 presidential opponent John Kerry. Stanovich then goes on to say that this should not be surprising, for IQ tests do not tap into the tendency to actually think in an analytical manner, and that IQ has been overvalued as a concept. For instance, university admissions frequently depend on tests such as the SAT, which are pretty much pure IQ tests. The chapter ends with a disclaimer that the book is not an attempt to say that IQ tests measure nothing important, or that there are many kinds of intelligence. IQ does measure something real and important, but that doesn't change the fact that people overvalue it and are generally confused about what it actually does measure.

Dysrationalia: Separating Rationality and Intelligence talks about the phenomenon informally described as "smart but acting stupid". Stanovich notes that if we used a broad definition of intelligence, where intelligence only meant acting in an optimal manner, then this expression wouldn't make any sense. Rather, it's a sign that people are intuitively aware of IQ and rationality as measuring two separate qualities. Stanovich then brings up the concept of dyslexia, which the DSM IV defines as "reading achievement that falls substantially below that expected given the individual's chronological age, measured intelligence, and age-appropriate education". Similarly, the diagnostic criterion for mathematics disorder (dyscalculia) is "mathematical ability that falls substantially below that expected for the individual's chronological age, measured intelligence, and age-appropriate education". He argues that since we have a precedent for creating new disability categories when someone's ability in an important skill domain is below what would be expected for their intelligence, it would make sense to also have a category for "dysrationalia":

Dysrationalia is the inability to think and behave rationally despite adequate intelligence. It is a general term that refers to a heterogenous group of disorders manifested by significant difficulties in belief formation, in the assessment of belief consistency, and/or in the determination of action to achieve one's goals. Although dysrationalia may occur concomitantly with other handicapping conditions (e.g. sensory impairment), dysrationalia is not the result of those conditions. The key diagnostic criterion for dysrationalia is a level of rationality, as demonstrated in thinking and behavior, that is significantly below the level of the individual's intellectual capacity (as determined by an individually administered IQ test).

The Reflective Mind, the Algorithmic Mind, and the Autonomous Mind presents a three-level model of the mind, which I mostly covered in A Taxonomy of Bias: The Cognitive Miser. At the end, we return to the example of George W. Bush, and are shown a bunch of quotes from the president's supporters describing him. His speechwriter called him "sometimes glib, even dogmatic; often uncurious and as a result ill-informed"; John McCain said Bush never asks for his opinion and that the president "wasn't intellectually curious". The same sentiment was echoed by a senior official in Iraq who had observed Bush in various videoconferences and said that the president's "obvious lack of interest in long, detailed discussions, had a chilling effect". On the other hand, other people were quoted as saying that Bush was "extraordinarily intelligent, but was not interested in learning unless it had practical value". Tony Blair repeatedly told his associates that Bush was "very bright". This is taken as evidence that while Bush is indeed intelligent, he does not have the thinking dispositions that would make him actually use his intelligence: he has dysrationalia.

Cutting Intelligence Down to Size further criticizes the trend of treating the word "intelligence" in a manner that is too broad. Stanovich points out that even critics of the IQ concept who introduce terms such as "social intelligence" and "bodily-kinesthetic intelligence" are probably shooting themselves in the foot. By giving everything valuable the label of intelligence, these critics are actually increasing the esteem of IQ tests, and therefore making people think that IQ measures more than it does.

Consider a thought experiment. Imagine that someone objected to the emphasis given to horsepower (engine power) when evaluating automobiles. They feel that horsepower looms too large in people's thinking. In an attempt to deemphasize horsepower, they then begin to term the other features of the car things like "braking horsepower" and "cornering horsepower" and "comfort horsepower". Would such a strategy make people less likely to look to engine power as an indicator of the "goodness" of a car? I think not. [...] Just as calling "all good car things" horsepower would emphasize horsepower, I would argue that calling "all good cognitive things" intelligence will contribute to the deification of MAMBIT [Mental Abilities Measured By Intelligence Tests].

Stanovich then continues to argue in favor of separating rationality and intelligence, citing surveys that suggest that folk psychology does already distinguish between the two. He also brings up the chilling effect that deifying intelligence seems to be having on society. Reviews of a book discussing the maltreatment of boys labeled feebleminded seemed to concentrate on the stories of the boys who were later found to have normal IQs, implying that the abusive treatment of boys who actually did have a low IQ was okay. Various parents seem to take a diagnosis of low mental ability as much more shocking than a diagnosis such as ADHD or learning disability that stresses the presence of normal IQ, even though the life problems associated with some emotional and behavior disorders are much more severe than those associated with many forms of moderate or mild intellectual disability.

Why Intelligent People Doing Foolish Things Is No Surprise briefly introduces the concept of the cognitive miser, explaining that conserving energy and not thinking about things too much is a perfectly understandable tendency given our evolutionary past.

The Cognitive Miser: Ways to Avoid Thinking discusses the cognitive miser further, starting with the "Jack is looking at Anne but Anne is looking at George" problem, noting that one could arrive at the correct answer via disjunctive reasoning ("either Anne is married, in which case the answer is yes, or Anne is unmarried, in which case the answer is also yes") but most people won't bother. It then discusses attribute substitution (instead of directly evaluating X, consider the correlated and easier to evaluate quality Y), vividness/salience/accessibility effects, anchoring effects and the recognition heuristic. Stanovich emphasizes that he does not say that heuristics are always bad, but rather that one shouldn't always rely on them.

Framing and the Cognitive Miser extensively discusses various framing effects, and at the end notes that high-IQ people are not usually any more likely to avoid producing inconsistent answers to various framings unless they are specifically instructed to try to be consistent. This is mentioned to be a general phenomenon: if intelligent people have to notice themselves that an issue of rationality is involved, they do little better than their counterparts of lower intelligence.

Myside Processing: Heads I Win - Tails I Win Too! discusses "myside bias", people evaluating situations in terms of their own perspective. Americans will provide much stronger support for the USA banning an unsafe German car than for Germany banning an unsafe American car. People will much more easily pick up on inconsistencies in the actions of their political opponents than in those of the politicians they support. They will also be generally overconfident, be appalled at others exhibiting the same unsafe behaviors they themselves exhibit, underestimate the degree to which biases influence their own thinking, and assume people understand their messages better than they actually do. The end of the chapter surveys research on the linkage between intelligence and the tendency to fall prey to these biases. It notes that intelligent people again do moderately better, but only when specifically instructed to avoid bias.

A Different Pitfall of the Cognitive Miser: Thinking a Lot, but Losing takes up the problem of failing to override your autonomous processing even when it would be necessary. Most of this chapter is covered by my previous discussion of override failures in the Cognitive Miser post.

Mindware Gaps introduces in more detail a different failure mode: that of mindware gaps. It also introduces and explains the concepts of Bayes' theorem, falsifiability, base rates and the conjunction error as crucial mindware for avoiding many failures of rationality. It notes that thinking dispositions for actively analyzing things could be called "strategic mindware". The chapter concludes by noting that the useful mindware discussed in the chapter is not widely and systematically taught, leaving even intelligent people with gaps in their mindware that make them subject to failures of rationality.

I mostly covered the contents of Contaminated Mindware in my post about mindware problems.

How Many Ways Can Thinking Go Wrong? A Taxonomy of Irrational Thinking Tendencies and Their Relation to Intelligence summarizes the content of the previous chapters and organizes the various biases into a taxonomy of biases that has the main categories of the Cognitive Miser, Mindware Problems, and Mr. Spock Syndrome. I did not previously cover Mr. Spock Syndrome because as Stanovich says, it is not a fully cognitive category. People with the syndrome have a reduced ability to feel emotions, which messes up their ability to behave appropriately in various situations even though their intelligence remains intact. Stanovich notes that the syndrome is most obvious with people who have suffered severe brain damage, but difficulties of emotional regulation and awareness do seem to also correlate negatively with some tests of rationality, as well as positive life outcomes, even when intelligence is controlled for.

The Social Benefits of Increasing Human Rationality - and Meliorating Irrationality concludes the book by arguing that while increasing the average intelligence of people would have only small effects, if any, on general well-being, we could reap vast social benefits if we actually tried to make people more rational. There's evidence that rationality is much more malleable than intelligence. Disjunctive reasoning, the tendency to consider all possible states of the world when deciding among options, is noted to be a rational thinking skill of high generality that can be taught. There also don't seem to be strong intelligence-related limitations on the ability to think disjunctively. Much other useful mindware, like that of scientific and probabilistic reasoning, can likewise be taught. While these might be more challenging for people with a lower IQ, techniques such as implementation intentions may be easier to learn.

An implementation intention is formed when the individual marks the cue-action sequence with the conscious, verbal declaration of "when X occurs, I will do Y." Often with the aid of the context-fixing properties of language, the triggering of this cue-action sequence on just a few occasions is enough to establish it in the autonomous mind. Finally, research has shown that an even more minimalist cognitive strategy of forming mental goals (whether or not they have implementation intentions) can be efficacious. For example, people perform better at a task when they are told to form a mental goal ("set a specific, challenging goal for yourself") for their performance as opposed to being given the generic motivational instructions ("do your best").

Stanovich also argues in favor of libertarian paternalism: shaping the environment so that people are still free to choose what they want, but so that the default choice is generally the best one. For instance, countries with an opt-out policy for organ donation have far more donors than countries with an opt-in policy. This is not because the people in one country are any more or less selfish than those in another, but because people in general tend to go with the default option. He also argues that it would be perfectly possible, though expensive, to develop general rationality tests akin to intelligence tests, and that also using RQ proxies for things such as college admission would have great social benefits.

In studies cited in this book, it has been shown that:

These are just a small sampling of the teachable reasoning strategies and environmental fixes that could make a difference in people's lives, and they are more related to rationality than intelligence. They are examples of the types of outcomes that would result if we all became more rational thinkers and decision makers. They are the types of outcomes that would be multiplied if schools, businesses, and government focused on the parts of cognition that intelligence tests miss. Instead, we continue to pay far more attention to intelligence than to rational thinking. It is as if intelligence has become totemic in our culture, and we choose to pursue it rather than the reasoning strategies that could transform our world.

" } }, { "_id": "WqQD5WWcCEjPvcYeX", "title": "Open Thread: July 2010, Part 2", "pageUrl": "https://www.lesswrong.com/posts/WqQD5WWcCEjPvcYeX/open-thread-july-2010-part-2", "postedAt": "2010-07-09T06:54:41.087Z", "baseScore": 11, "voteCount": 9, "commentCount": 768, "url": null, "contents": { "documentId": "WqQD5WWcCEjPvcYeX", "html": "

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

\n


July Part 1

" } }, { "_id": "JQoDQgGeNQgMbWrgy", "title": "How Irrationality Can Win: The Power of Group Cohesion", "pageUrl": "https://www.lesswrong.com/posts/JQoDQgGeNQgMbWrgy/how-irrationality-can-win-the-power-of-group-cohesion", "postedAt": "2010-07-09T06:15:23.708Z", "baseScore": 2, "voteCount": 5, "commentCount": 8, "url": null, "contents": { "documentId": "JQoDQgGeNQgMbWrgy", "html": "

(This post is the first in a planned sequence, discussing the ways in which irrationality can sometimes win. I write this with the hope that rationalists can see why rationality sometimes loses, and adopt \"irrationalist\" techniques if the techniques are actually effective.)

\n

I'd like to begin this sequence by introducing one of the most powerful groups of people in the world- more powerful than such legendary organizations as Skull and Bones and the Freemasons. The members of this organization control wealth, power, and influence almost beyond imagining. Among their current and former members are six U.S. Presidents, four Vice Presidents, three Speakers of the House, three Supreme Court justices, dozens of Cabinet members, and well over a hundred Congressmen. This group also has vast economic power, founding and leading dozens of billion-dollar companies. No fewer than 86 members have been university presidents, of universities including Berkeley, U. Chicago, Dartmouth, MIT, and Yale. There are members of this organization in virtually every Fortune 500 company. And yet, the entire group is no bigger in size than a smallish US city. This organization definitely knows how to win. What is it? Is it the Bilderbergers? The Council on Foreign Relations? The Gnomes of Zurich?

\n

\n

No. It's the Delta Kappa Epsilon (DKE) college fraternity, known for branding its members with red-hot coat hangers during initiation. 

\n

College fraternity brothers are, on the whole, not the most rational of people. (I say this as a current Yale student.) Rationalists are supposed to win. DKE is clearly winning; the members of DKE are collectively far more powerful than the entire atheist movement, despite having far fewer members. What is going on here?

\n

To help answer this question, consider a community of ninety people, living, perhaps, during the Bronze Age. For simplicity, let's assume that all the members are purely selfish. The ninety people are governed by a village council of nine, elected annually. At the end of every year, every one of the ninety people gives a campaign speech, and whoever can persuade the most people to vote for them wins.

\n

Now, suppose that, out of this tribe of ninety, a smaller cabal of ten people forms, and the members of this cabal agree to help each other. Why the cabal was formed doesn't matter. The cabal could be about branding people with hot irons. Or about a cosmic Jewish zombie. Or about worshiping a jug of dirty swamp water- it doesn't matter. At the next election, by pure random luck, one of the group members gets elected to the council.

\n

What happens during the next election? The villagers stand up, one by one, and give their speeches, which extol their own abilities, their willingness to work hard for the tribe, putting the tribe's interests above their own, and so on. However, when the cabal member who was elected last year stands up, in addition to promoting himself, he also talks about how capable another cabal member is. Since this guy is already on the village council, he must be capable and important, and lots of people listen to him. And when the votes are counted, the cabal has elected a second member to the council. This continues on for a few years, until the majority of the village council is made up of cabal members. Eventually, one must be a member of the cabal in order to be elected, and the cabal and the village council merge to become the same group. (This is a big part of where political parties come from, and why one effectively must be the member of a party to get elected in a modern democracy.)

\n

The members of the cabal weren't more capable than anyone else. But the cabal members succeeded out of proportion to their number anyway, simply by being members of a group, a group that could have been based on anything at all. And since most people are irrational, most groups turn out to be based on irrational things. Hence, a random rationalist can be less likely to win than a random irrationalist, just because the irrationalist is a member of a group of irrationalists and the rationalist is operating individually.

\n

We have previously discussed the question of whether a group of rationalists can win over a group of irrationalists. However, even framing the question this way is a serious case of privileging the hypothesis, because it simply assumes a rationalist group and an irrationalist group, a Blue Team and a Green Team, of at least comparable power. But, most of the time, the world doesn't actually work this way. The most common situation is, not a team of rationalists against a team of irrationalists, but individual rationalists competing against groups of irrationalists. 

\n

This certainly doesn't mean that things are hopeless. (If things were hopeless, there'd be no point in writing about them.) There are many possible solutions to this problem. One can pretend to be an irrationalist to join a group of irrationalists. One can join a group of irrationalists that doesn't care much about conformity, and won't exclude you for being a rationalist. One can say \"screw everyone\", and try to win despite not being a member of a group. One can try to start their own group of rationalists (be warned that this is a Hard Problem). But, in order to formulate a solution, one must first consider the problem. Odds are, most of the people you know are less rational than you. How are you going to handle it?

\n

EDIT: Although this post is related to Why Our Kind Can't Cooperate, rationalist ability (or lack thereof) to cooperate isn't the main point. The main point is that there are many more irrationalists than rationalists, so in any given population, groups of irrationalists will form long before groups of rationalists do, even assuming equal ability to cooperate.

\n

EDIT 2: For a young, but growing, attempt at setting up such a group of rationalists, see the Existential Risks Career Network.

" } }, { "_id": "a5DQxG9NgzSLRZMnQ", "title": "A Taxonomy of Bias: Mindware Problems", "pageUrl": "https://www.lesswrong.com/posts/a5DQxG9NgzSLRZMnQ/a-taxonomy-of-bias-mindware-problems", "postedAt": "2010-07-07T21:53:01.966Z", "baseScore": 24, "voteCount": 21, "commentCount": 18, "url": null, "contents": { "documentId": "a5DQxG9NgzSLRZMnQ", "html": "

This is the third part in a mini-sequence presenting content from Keith E. Stanovich's excellent book What Intelligence Tests Miss: The psychology of rational thought. It will culminate in a review of the book itself.

Noting that there are many different kinds of bias, Keith Stanovich proposes a classification scheme for bias that has two primary categories: the Cognitive Miser, and Mindware Problems. Last time, I discussed the Cognitive Miser category. Today, I will discuss Mindware Problems, which has the subcategories of Mindware Gaps and Contaminated Mindware.

Mindware Problems

Stanovich defines "mindware" as "a generic label for the rules, knowledge, procedures, and strategies that a person can retrieve from memory in order to aid decision making and problem solving".

Mindware Gaps

Previously, I mentioned two tragic cases. In one, a pediatrician incorrectly testified that the odds of two children in the same family suffering sudden infant death syndrome were 73 million to 1. In the other, people bought into a story of "facilitated communication" helping previously non-verbal children to communicate, without looking at it in a critical manner. Stanovich uses these two as examples of a mindware gap. The people involved were lacking critical mindware: in one case, that of probabilistic reasoning, in the other, that of scientific thinking. One of the reasons why so many intelligent people can act in an irrational manner is that they're simply missing the mindware necessary for rational decision-making.

Much of the useful mindware is a matter of knowledge: knowledge of Bayes' theorem, taking into account alternative hypotheses and falsifiability, awareness of the conjunction fallacy, and so on. Stanovich also mentions something he calls strategic mindware, which refers to the disposition towards engaging the reflective mind in problem solving. These were previously mentioned as thinking dispositions, and some of them can be measured by performance-based tasks. For instance, in the Matching Familiar Figures Test (MFFT), participants are presented with a picture of an object, and told to find the correct match from an array of six other similar pictures. Reflective people have long response times and few errors, while impulsive people have short response times and numerous errors. These types of mindware are closer to strategies, tendencies, procedures, and dispositions than to knowledge structures.

Stanovich identifies mindware gaps as being involved in at least conjunction errors and ignoring base rates (missing probability knowledge), as well as the Wason selection task and confirmation bias (not considering alternative hypotheses). Incorrect lay psychological theories are identified as a combination of a mindware gap and contaminated mindware (see below). For instance, people are often blind to their own biases, because they incorrectly think that biased thinking on their part would be detectable by conscious introspection. In addition to the bias blind spot, lay psychological theory is likely to be involved in errors of affective forecasting (the forecasting of one's future emotional state).

Contaminated Mindware

I previously also mentioned the case of Albania, where a full half of the adult population fell victim to Ponzi schemes. As another example, in the 1980s psychotherapists thought they had found a way to uncover repressed memories of childhood sexual abuse. The ideas spread via professional associations, but without any evidence that they were actually correct. Even when the patients had no memories of abuse prior to entering therapy, and during therapy began to come up with elaborate memories of being abused in rituals with satanic overtones, nobody questioned their theories or sought any kind of independent evidence. The belief system of the therapists was basically "if the patient thinks she was abused, then she was", and the mindware represented by this belief system required only the patient and the therapist to believe in the story. Several people were convicted on abuse charges because of this mindware.

Even though mindware gaps were definitely involved in both of these cases, they are also examples of contaminated mindware sweeping through a population. Stanovich defines contaminated mindware as mindware that leads to maladaptive actions and resists critical evaluation. Contaminated mindware is often somewhat complicated and may be just as enticing to high-IQ people as to low-IQ people, if not more so. In a survey of paranormal beliefs conducted on members of a Mensa club in Canada, 44% believed in astrology, 51% believed in biorhythms, and 56% believed in extraterrestrial visitors.

To explain why we acquire mindware that is harmful to us, Stanovich draws upon memetic theory. In the same way that organisms are built to advance the interests of the genes rather than the interests of the gene hosts themselves, beliefs may spread without being true or helping the human who holds the belief in any way. Chain letters such as "send me to five other people or experience misfortune" are passed on despite being both untrue and useless to the people passing them on, surviving because of their own "self-replicative" properties. Memetic theory shifts our perspective from "how do people acquire beliefs" to "how do beliefs acquire people". (Here, we're treating "memes" and "mindware" as rough synonyms, with the main difference being one of emphasis.)

Stanovich lists four reasons for why mindware might spread:

  1. Mindware survives and spreads because it is helpful to the people that store it.
  2. Certain mindware proliferates because it is a good fit to pre-existing genetic dispositions or domain-specific evolutionary modules.
  3. Certain mindware spreads because it facilitates the replication of genes that make vehicles that are good hosts for that particular mindware (e.g. religious beliefs that urge people to have more children).
  4. Mindware survives and spreads because of the self-perpetuating properties of the mindware itself.

Stanovich notes that reasons 1-3 are relatively uncontroversial, with 1 being a standard assumption in cultural anthropology, 2 being emphasized by evolutionary psychology, and 3 being meant to capture the types of effects emphasized by theorists stressing gene/culture co-evolution.

There are several subversive ways by which contaminated mindware might spread. It might mimic the properties of beneficial mindware and disguise itself as such, it might cause its bearers to want to kill anyone who speaks up against it, it might be unfalsifiable and prevent us from replacing it, it might contain beliefs that actively prohibit evaluation ("blind faith is a virtue"), or it might impose a prohibitively high cost on testing it (some groups practicing female genital mutilation believe that if a baby's head touches the clitoris during delivery, the baby will die).

Stanovich identifies contaminated mindware as being involved in at least self-centered biases (self and egocentric processing) and confirmation bias (evaluation-disabling strategies), as well as combining with mindware gaps to cause problems in the form of lay psychological theory.

" } }, { "_id": "8ssLgSq6LqutcHcdP", "title": "July 2010 Southern California Meetup ", "pageUrl": "https://www.lesswrong.com/posts/8ssLgSq6LqutcHcdP/july-2010-southern-california-meetup", "postedAt": "2010-07-07T19:54:25.535Z", "baseScore": 12, "voteCount": 9, "commentCount": 43, "url": null, "contents": { "documentId": "8ssLgSq6LqutcHcdP", "html": "

There will be a meetup for people from lesswrong in Los Angeles, on Friday July 9th, 2010 at 3PM.  Roundtrip carpooling from San Diego is definitely available and other carpooling options may develop.  The time and location are designed to make it possible for people from lesswrong to get together and talk.  Later, anyone interested should be able to walk to the first meeting of the LA Chapter of the SENS Foundation where Aubrey de Grey will be attending.  It should be a good time if you can make it!  See below the cut for more details.

\n

The exact location of the meetup is no longer in flux.  We will meet at the same place as the SENS Meeting but two hours earlier.  That will be Friday 3PM at:

\n
Westwood Brewing Company
1097 Glendon Ave
Los Angeles, CA 90024
\n

 

\n

The format will be very free form - mostly talking, eating, and/or drinking.  Topics are especially likely to include cryonics and life extension given the proximity of the SENS Meeting, but saving the world and \"rationality itself\" will probably be on the menu as well :-)

\n

See you there!

" } }, { "_id": "SacB4mg6nJWATKB3Y", "title": "A proposal for a cryogenic grave for cryonics", "pageUrl": "https://www.lesswrong.com/posts/SacB4mg6nJWATKB3Y/a-proposal-for-a-cryogenic-grave-for-cryonics", "postedAt": "2010-07-06T19:01:36.898Z", "baseScore": 29, "voteCount": 23, "commentCount": 204, "url": null, "contents": { "documentId": "SacB4mg6nJWATKB3Y", "html": "

Followup to: Cryonics wants to be big

\n

We've all wondered about the wisdom of paying money to be cryopreserved, when the current social attitude to cryopreservation is relatively hostile (though improving, it seems). In particular, the probability that either or both of Alcor and CI go bankrupt in the next 100 years is nontrivial (perhaps 50% for \"either\"?). If this happened, cryopreserved patients may be left to die at room temperature. There is also the possibility that the organizations are closed down by hostile legal action.[A]

\n

The ideal solution to this problem is a way of keeping bodies cold (colder than -170C, probably) in a grave. Our society already has strong inhibitions against disturbing the dead, which means that a cryonic grave that required no human intervention would be much less vulnerable. Furthermore, such graves could be put in unmarked locations in northern Canada, Scandinavia, Siberia and even Antarctica, where it is highly unlikely people will go, thereby providing further protection. 

\n

In the comments to \"Cryonics wants to be big\", it was suggested that a large enough volume of liquid nitrogen would simply take > 100 years to boil off. Therefore, a cryogenic grave of sufficient size would just be a big tank of LN2 (or some other cryogen) with massive amounts of insulation.

\n

So, I'll present what I think is the best possible engineering case, and invite LW commenters to correct my mistakes and add suggestions and improvements of their own.

\n

\n

If you have a spherical tank of radius r with insulation of thermal conductivity k and thickness r (so a total radius for insulation and tank of 2r) and a temperature difference of ΔT, the power getting from the outside to the inside is approximately

\n

25 × k × r × ΔT 

\n

If the insulation is made much thicker, we get into sharply diminishing returns (asymptotically, we can achieve only another factor of 2). The volume of cryogen that can be stored is approximately 4.2 × r³, and the total amount of heat required to evaporate and heat all of that cryogen is 

\n

4.2 × r³ × (volumetric heat of vaporization + gas enthalpy) 

\n

The quantity in brackets, for nitrogen and a ΔT of 220 °C, is approximately 346,000,000 J m⁻³. Dividing energy by power gives a boiloff time of 

\n

1/12,000 × r² × k⁻¹ centuries

\n

Setting this equal to 1 century, we get:

\n

r²/k = 12,000. 
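For concreteness, here is a quick Python sketch of the same arithmetic. The helper name is mine, and the constants are just the approximate values used above (ΔT of 220 K, about 346,000,000 J per cubic metre of nitrogen, a heat leak of 25 × k × r × ΔT), so treat it as a back-of-the-envelope check rather than an engineering calculation; setting the result equal to one century recovers the r²/k = 12,000 constraint.

    # Back-of-the-envelope boil-off estimate for a spherical cryogen tank of radius r,
    # wrapped in insulation of thickness r, using the approximations in the text.
    SECONDS_PER_CENTURY = 100 * 365.25 * 24 * 3600   # about 3.16e9 seconds
    HEAT_PER_M3 = 346e6    # J to evaporate and warm 1 m^3 of liquid nitrogen (approx.)
    DELTA_T = 220.0        # K, roughly ambient minus liquid nitrogen temperature

    def boiloff_centuries(r, k):
        """Boil-off time in centuries for tank radius r (m) and insulation k (W/m-K)."""
        heat_leak = 25 * k * r * DELTA_T          # W; ~8*pi*k*r*dT for a shell from r to 2r
        stored_heat = 4.2 * r**3 * HEAT_PER_M3    # J; (4/3)*pi*r^3 of cryogen
        return stored_heat / heat_leak / SECONDS_PER_CENTURY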

\n

Now the question is, can we satisfy this constraint without an exorbitant price tag? Can we do better and get 2 or 3 centuries? 

\n

\"Cryogel\" insulation with a k-value of 0.012 is commercially available Meaning that r would have to be at least 12 meters. A full 12-meter radius tank would weigh 6000 tons (!) meaning that some fairly serious mechanical engineering would be needed to support it. I'd like to hear what people think this would cost, and how the cost scales with r. 

\n

The best feasible k seems to be fine granules or powder in a vacuum. When the mean free path of a gas increases significantly beyond the characteristic dimension of the space that encloses it, the thermal conductivity drops linearly with pressure. This company quotes 0.0007 W/m-K, though this is at high vacuum. Fine granules of aerogel would probably outperform this in terms of the vacuum required to get down to < 0.001 W/m-K. 

\n

Suppose that it is feasible to maintain a good enough vacuum to get to 0.0007 W/m-K, perhaps with aerogel or some other material. Then r is a mere 2.9 meters, and we're looking at a structure that's the size of a large room rather than the size of a tower block, and a cryogen weight of a mere 80 tons. Or you could double the radius and have a system that would survive for 400 years, with a size and weight still not in the \"silly\" range.
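As a sanity check, the constraint derived above reproduces these figures directly (a trivial Python check; the numbers are the ones quoted in this post):

    # centuries ~ r^2 / (12,000 * k), from the derivation above.
    for r, k in [(12.0, 0.012), (2.9, 0.0007), (5.8, 0.0007)]:
        print(r, k, r**2 / (12000 * k))   # ~1, ~1, and ~4 centuries respectively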

\n

The option that works without the need for a vacuum is inviting because there's one less thing to go wrong, but I am no expert on how hard it would be to make a system hold a rough vacuum for 100 years, so it is not clear how useful that is.

\n

As a final comment, I disagree that storing all patients in one system is a good idea. Too many eggs in one basket is never good when you're trying to maximize the probability that each patient will survive. That's why I'm keen on finding a system that would be small enough that it would be economical to build one for a few dozen patients, say (cost < 30 million).  

\n

So, I invite Less Wrong to comment: is this feasible, and if so how much would it cost, and can you improve on my ideas?

\n

In particular, any commenters with experience in cryogenic engineering would delight me with either refinement or critique of my cryogenic ideas, and delight me even more with cost estimates of these systems. It's also fairly critical to know whether you can hold a 99% vacuum for a century or two. 

\n

 

\n

 

\n

 

\n
\n

 

\n

A: In addition to this, many scenarios where cryonics is useful to the average LW reader are scenarios where technological progress is slow but \"eventually\" gets to the required level of technology to reanimate you, because if progress is fast you simply won't have time to get old and die before we hit longevity escape velocity. Slow progress in turn correlates with the world experiencing a significant \"dip\" in the next 50 or so years, such as a very severe recession or a disaster of some kind. These are precisely the scenarios where a combination of economic hardship and hostile public opinion might kill cryonics organizations. 

" } }, { "_id": "jqXa2Aw2C55e8dKkd", "title": "Assuming Nails", "pageUrl": "https://www.lesswrong.com/posts/jqXa2Aw2C55e8dKkd/assuming-nails", "postedAt": "2010-07-05T22:26:00.586Z", "baseScore": 9, "voteCount": 25, "commentCount": 29, "url": null, "contents": { "documentId": "jqXa2Aw2C55e8dKkd", "html": "

Tangential followup to Defeating Ugh Fields in Practice.
Somewhat related to Privileging the Hypothesis.

\r\n

Edited to add:
I'm surprised by negative/neutral reviews. This means that either I'm simply wrong about what counts as interesting, or I haven't expressed my point very well. Based on commenter response, I think the problem is the latter. In the next week or so, expect a much more concise version of this post that expresses my point about epistemology without the detour through a criticism of economics.

\r\n

At the beginning of my last post, I was rather uncharitable to neoclassical economics:

\r\n
\r\n

If I had to choose a single piece of evidence off of which to argue that the rationality assumption of neoclassical economics is totally, irretrievably incorrect, it's this article about financial incentives and medication compliance.... [to maintain that this theory is correct] is to crush reality into a theory that cannot hold it.   

\r\n
\r\n

Some mistook this to mean that I believe neoclassical economists honestly, explicitly believe that all people are always totally rational. But, to quote Rick Moranis, \"It's not what you think. It's far, far worse.\" The problem is that they often take the complex framework of neoclassical economics and believe that a valid deduction within this framework is a valid deduction about the real world. However, deductions within any given framework are entirely uninformative unless the framework corresponds to reality. But, because such deductions are internally valid, we often give them far more weight than they are due. Testing the fit of a theoretical framework to reality is hard, but a valid deduction within a framework feels so very satisfying. But even if you have a fantastically engineered hammer, you cannot go around assuming everything you want to use it on is a nail. It is all too common for experts to assume that their framework applies cleanly to the real world simply because it works so well in its own world.

\r\n

If this concept doesn't make perfect sense, that's what the rest of this post is about: spelling out exactly how we go wrong when we misuse the essentially circular models of many sciences, and how this matters. We will begin with the one discipline in which this problem does not occur. The one discipline which appears immune to this type of problem is mathematics, the paragon of \"pure\" academic disciplines. This is principally because mathematics appears to have perfect conformity with reality, with no research or experimentation needed to ensure said conformity. The entire system of mathematics exists, in a sense, in its own world. You could sit in a windowless room (perhaps one with a supercomputer) and, theoretically, derive every major theorem of mathematics, given the proper axioms. The answer to the most difficult unsolved problems in mathematics was determined the moment the terms and operators within them were defined - once you say a \"circle\" is \"a closed plane curve with every point equidistant from a center,\" you have already determined every single digit of pi. The problem is finding out exactly how this model works - making calculations and deductions within this model. In the case of mathematics, for whatever reason, the model conforms perfectly to the real world, so any valid mathematical deduction is a valid deduction in the real world.

\r\n

This is not the case in any true science, which by necessity must rely on experiment and observation. Every science operates off of some simplified model of the world, at least with our current state of knowledge. This creates two avenues of progress: discoveries within the model, which allow one to make predictions about the world, and refinements of the model, which make such predictions more accurate. If we have an internally consistent framework, theoretical manipulation within our model will never show us our error, because our model is circular and functions outside the real world. It would be like trying to predict a stock market crash by analyzing the rules of Monopoly, except that it doesn't feel absurd. There's nothing wrong with the model qua the model, the problem is with the model qua reality, and we have to look at both of them to figure that out.

\r\n

Economics is one of the fields that most suffers from this problem. Our mathematician in his windowless room could generate models of international exchange rates without ever having seen currency, once we gave him the appropriate definitions and assumptions. However, when we try using these models to forecast the future, life gets complicated. No amount of experimenting within our original model will fix this without looking at the real world. At best, we come up with some equations that appear to conform to what we observe, but we run the risk that the correspondence is incidental or that there were some (temporarily) constant variables we left out that will suddenly cease to be constant and break the whole model. It is all too easy to forget that the tremendous rigor and certainty we feel when we solve the equations of our model does not translate into the real world.  Getting the \"right\" answer within the model is not the same thing as getting the real answer.

\r\n

As an obvious practical example, an individual with a serious excess of free time could develop a model of economics which assumes that agents are rational paper-clip maximizers - that agents are rational and their ultimate concern is maximizing the number of existing paper-clips. Given even more free time and a certain amount of genius, you could even model the behaviour of irrational paper-clip maximizers, so long as you had a definition of irrational. But however refined these models are, the models will remain entirely useless unless you actually have some paper-clip maximizers whose behaviour you want to predict. And even then, you would need to evaluate your predictions after they succeed or fail. Developing a great hammer is relatively useless if the thing you need to make must be put together with screws. 

\r\n

There is an obvious difference in the magnitude of this problem between the sciences, and it seems to be based on the difficulty of experimenting within them. In harder sciences where experiments are fairly straightforward, like physics and chemistry, it is not terribly difficult to make models that conform well with reality. The bleeding edge of, say, physics, tends to lie in areas that are either extremely hard to observe, like the subatomic, or extremely computation-intensive. In softer sciences, experiments are very difficult, and our models rely much more on powerful assumptions, social values, and armchair reasoning.

\r\n

As humans, we are both bound and compelled to use the tools we have at our disposal. The problem here is one of uncertainty. We know that most of our assumptions in economics are empirically off, but we don't know how wrong or how much that matters when we make predictions. But the model nevertheless seeps into the very core of our model of reality itself. We cannot feel this disconnect when we try to make predictions; a well-designed model feels so complete that there is no feeling of error when we try to apply it. This is likely because we are applying it correctly, but it just doesn't apply to reality. This leads people to have high degrees of certainty and yet frequently be wrong. It would not surprise me if the failure of many experts to appreciate the model-reality gap is responsible for a large proportion of incorrect predictions.

\r\n

This, unfortunately, is not the end of the problem. It gets much worse when you add a normative element into your model, when you get to call some things, \"efficient\" or \"healthful,\" or \"normal,\" or \"insane.\" There is also a serious question as to whether this false certainty is preferable to the vague unfalsifiability of even softer social sciences. But I shall save these subjects for future posts.

\r\n

 

" } }, { "_id": "WbbuyfoGE3d4Zadhv", "title": "Cryonics Wants To Be Big", "pageUrl": "https://www.lesswrong.com/posts/WbbuyfoGE3d4Zadhv/cryonics-wants-to-be-big", "postedAt": "2010-07-05T07:50:29.496Z", "baseScore": 47, "voteCount": 39, "commentCount": 185, "url": null, "contents": { "documentId": "WbbuyfoGE3d4Zadhv", "html": "

Cryonics scales very well. People who argue from the perspective that cryonics is costly are probably not aware of this fact. Even assuming you needed to come up with the lump sum all at once rather than steadily pay into life insurance, the fact is that most people would be able to afford it if most people wanted it. There are some basic physical reasons why this is the case.

\n

So long as you keep the shape constant, for any given container the surface area is based on a square law while the volume is calculated as a cube law. For example with a simple cube shaped object, one side squared times 6 is the surface area; one side cubed is the volume. Spheres, domes, and cylinders are just more efficient variants on this theme. For any constant shape, if volume is multiplied by 1000, surface area only goes up by 100 times.

\n

Surface area is where heat gains entry. Thus if you have a huge container holding cryogenic goods (humans in this case) it costs less per unit volume (human) than is the case with a smaller container that is equally well insulated. A way to understand why this works is to realize that you only have to insulate and cool the outside edge -- the inside does not collect any new heat. In short, by multiplying by a thousand patients, you can have a tenth of the thermal transfer to overcome per patient with no change in r-value.
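A toy Python sketch (a cube-shaped tank is assumed purely for simplicity, and the function name is mine) makes that scaling explicit: multiplying the linear size by 10 multiplies the volume by 1,000 but the heat-collecting surface by only 100.

    # Square-cube scaling for a cube-shaped tank of side s (arbitrary units).
    def surface_per_volume(s):
        surface = 6 * s**2    # area through which heat enters
        volume = s**3         # cryogenic capacity, i.e. how many patients fit
        return surface / volume

    print(surface_per_volume(1.0))    # 6.0 -> baseline heat leak per unit volume
    print(surface_per_volume(10.0))   # 0.6 -> 1000x the volume, 1/10 the leak per unit volume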

\n

But you aren't limited to using equal thickness of insulation. You can use thicker insulation, but get a much smaller proportional effect on total surface area when you use bigger container volumes. Imagine the difference between a marble sized freezer and a house-sized freezer. What happens when you add an extra foot of insulation to the surface of each? Surface area is impacted much as diameter is -- i.e. more significantly in the case of the smaller freezer than the larger one. The outer edge of the insulation is where it begins collecting heat. With a truly gigantic freezer, you could add an entire meter (or more) of insulation without it having a significant proportional impact on surface area, compared to how much surface area it already has. (This is one reason cheaper materials can be used to construct large tanks -- they can be applied in thicker layers.)

\n

Another factor to take into account is that liquid nitrogen, the super-cheap coolant used by cryonics facilities around the world, is vastly cheaper (more than a factor of 10) when purchased in huge quantities of several tons. The scaling factors for storage tanks and high-capacity tanker trucks are a big part of the reason for this. CI has used bulk purchasing as a mechanism for getting their prices down to $100 per patient per year for their newer tanks. They are actually storing 3,000 gallons of the stuff and using it slowly over time, which implies there is a boiloff rate associated with the 3,000 gallon tank in addition to the patient tanks.

\n

The conclusion I get from this is that there is a very strong self-interested case (as well as the altruistic case) to be made for the promotion of megascale cryonics towards the mainstream, as opposed to small independently run units for a few of us die-hard futurists. People who say they won't sign up for cost reasons may actually (if they are sincere) be reachable at a later date. To deal with such people's objections and make sure they remain reachable, it might be smart to get them to agree with some particular hypothetical price point at which they would feel it is justified. In large enough quantities, it is conceivable that indefinite storage costs would be as low as $50 per person, or 50 cents per year.

\n

That is much cheaper than saving a life any other way. Of course there's still the risk that it might not work. However, given a sufficient chance of it working it could still be morally superior to other life saving strategies that cost more money. It also has inherent ecological advantages over other forms of life-saving in that it temporarily reduces the active population, giving the environment a chance to recover and green tech more time to take hold so that they can be supported sustainably and comfortably. And we might consider the advent of life-health extension in the future to be a reason to think  it a qualitatively better form of life-saving.

\n

Note: This article only looks directly at cooling energy costs; construction and ongoing maintenance do not necessarily scale as dramatically. The same goes for stabilization (which I view as a separate though indispensable enterprise). Both of these do have obvious scaling factors however. Other issues to consider are defense and reliability. Given the large storage mass involved, preventing temperature fluctuations without being at the exact boiling temperature of LN2 is feasible; it could be both highly failsafe and use the ideal cryonics temperature of -135C rather than the -196C that LN2 boiloff as a temperature regulation mechanism requires. Feel free to raise further issues in the comments.

" } }, { "_id": "qMTzv8ATgDtfLq9ME", "title": "A Taxonomy of Bias: The Cognitive Miser", "pageUrl": "https://www.lesswrong.com/posts/qMTzv8ATgDtfLq9ME/a-taxonomy-of-bias-the-cognitive-miser", "postedAt": "2010-07-02T18:38:08.995Z", "baseScore": 67, "voteCount": 60, "commentCount": 38, "url": null, "contents": { "documentId": "qMTzv8ATgDtfLq9ME", "html": "

This is the second part in a mini-sequence presenting content from Keith E. Stanovich's excellent book What Intelligence Tests Miss: The psychology of rational thought. It will culminate in a review of the book itself.

Noting that there are many different kinds of bias, Keith Stanovich proposes a classification scheme for bias that has two primary categories: the Cognitive Miser, and Mindware Problems. Today, I will discuss the Cognitive Miser category, which has the subcategories of Default to the Autonomous Mind, Serial Associative Cognition with a Focal Bias, and Override Failure.

The Cognitive Miser

Cognitive science suggests that our brains use two different kinds of systems for reasoning: Type 1 and Type 2. Type 1 is quick, dirty and parallel, and requires little energy. Type 2 is energy-consuming, slow and serial. Because Type 2 processing is expensive and can only work on one or at most a couple of things at a time, humans have evolved to default to Type 1 processing whenever possible. We are "cognitive misers" - we avoid unnecessarily spending Type 2 cognitive resources and prefer to use Type 1 heuristics, even though this might be harmful in a modern-day environment.

Stanovich further subdivides Type 2 processing into what he calls the algorithmic mind and the reflective mind. He argues that the reason why high-IQ people can fall prey to bias almost as easily as low-IQ people is that intelligence tests measure the effectiveness of the algorithmic mind, whereas many reasons for bias can be found in the reflective mind. An important function of the algorithmic mind is to carry out cognitive decoupling - to create copies of our mental representations about things, so that the copies can be used in simulations without affecting the original representations. For instance, a person wondering how to get a fruit down from a high tree will imagine various ways of getting to the fruit, and by doing so he operates on a mental concept that has been copied and decoupled from the concept of the actual fruit. Even when he imagines the things he might do to the fruit, he never confuses the fruit he has imagined in his mind with the fruit that's still hanging in the tree (the two concepts are decoupled). If he did, he might end up believing that he could get the fruit down by simply imagining himself taking it down. High performance on IQ tests indicates an advanced ability for cognitive decoupling.

In contrast, the reflective mind embodies various higher-level goals as well as thinking dispositions. Various psychological tests of thinking dispositions measure things such as the tendency to collect information before making up one's mind, the tendency to seek various points of view before coming to a conclusion, the disposition to think extensively about a problem before responding, the tendency to calibrate the degree of strength of one's opinion to the degree of evidence available, the tendency to think about future consequences before taking action, the tendency to explicitly weigh pluses and minuses of situations before making a decision, and the tendency to seek nuance and avoid absolutism. All things being equal, a high-IQ person would have a better chance of avoiding bias if they stopped to think things through, but a higher algorithmic efficiency doesn't help them if it's not in their nature to ever bother doing so. In tests of rational thinking where the subjects are explicitly instructed to consider the issue in a detached and objective manner, there's a correlation of .3 - .4 between IQ and test performance. But if such instructions are not given, and people are free to reason in a biased or unbiased way as they wish (like in real life), the correlation between IQ and rationality falls to nearly zero!

Modeling the mind purely in terms of Type 1 and Type 2 systems would do a poor job of explaining the question of why intelligent people only do better at good thinking if you tell them in advance what "good thinking" is. It is much better explained by a three-level model where the reflective mind may choose to "make a function call" to the algorithmic mind, which in turn will attempt to override the autonomous Type 1 processes. A failure of rationality may happen either if the reflective mind fails to activate the algorithmic mind, or if the algorithmic mind fails to override the autonomous mind. This gives us a three-level classification of this kind of bias.

Default to the Autonomous Mind

Defaulting to the autonomous mind is the most shallow kind of thought, where no Type 2 processing is done at all. The reflective mind fails to react and activate the algorithmic mind. Stanovich considers biases such as impulsively associative thinking and affect substitution (evaluating something primarily based on its affect) to be caused by this one.

Serial Associative Cognition with a Focal Bias

In this mode of thinking, Type 2 processes are engaged, but they are too conservative in their use of resources. For instance, consider the following problem (answer in rot13 below):

Jack is looking at Anne but Anne is looking at George. Jack is married but George is not. Is a married person looking at an unmarried person? A) Yes B) No C) Cannot be determined.

Gur pbeerpg nafjre, juvpu yrff guna 20 creprag bs crbcyr trg, vf N. Vs Naar vf zneevrq, gura gur nafjre vf "Lrf", orpnhfr fur'f ybbxvat ng Trbetr jub'f hazneevrq. Vs Naar vf hazneevrq, gura gur nafjre vf "Lrf" orpnhfr fur'f orvat ybbxrq ng ol Wnpx, jub'f zneevrq.
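For those who would rather check the disjunction mechanically, here is a small brute-force Python sketch (the variable names are mine; Anne's unknown marital status is simply enumerated):

    # Jack (married) looks at Anne; Anne looks at George (unmarried).
    # Check both possible cases for Anne's marital status.
    looking_at = [("Jack", "Anne"), ("Anne", "George")]
    for anne_married in (True, False):
        married = {"Jack": True, "George": False, "Anne": anne_married}
        answer = any(married[a] and not married[b] for a, b in looking_at)
        print(anne_married, answer)   # the answer is True in both cases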

In this example, people frequently concentrate too much on a single detail and get the answer wrong. There are numerous biases of a similar kind. For instance, when asked to guess the number of murders in Detroit (which is located in Michigan), they give a higher number than when asked to guess the number of murders in Michigan. This is because people are using crude affect-laden images of the locations in question to generate their guess. Vividness, salience and accessibility of various pieces of information have an overly strong effect on our thinking, becoming the main focal point of our evaluation. Focal bias is also involved in biases such as framing effects (the presented frame is taken as focal), the Wason selection task, motivated cognition, and confirmation bias.

Override Failure

In an override failure, Type 2 processes notice that Type 1 systems are attempting to apply rules or heuristics that are not applicable to the situation at hand. As a result, the Type 2 processes attempt to initiate an override and take the Type 1 systems offline, but for whatever reason they fail to do so. Override failures can be divided into two categories: "cold" and "hot" ones.

Consider the following syllogism: all living things need water; roses need water; therefore, roses are living things. This reasoning is invalid ("living things" implies "need water", but "need water" does not imply "living thing"), but many people will instinctively accept it, because the conclusion is a true one. It's an example of a cold override, where you need to override a natural response with a rule-based one. In another example, test subjects were presented with two cans of jelly beans. One of the cans had nine white jelly beans and one red jelly bean. The other had eight red jelly beans and ninety-two white jelly beans. The subjects were told to pick one of the cans and then draw a jelly bean at random from their chosen can: if they got a red one, they'd win a dollar. Most picked the can with one red jelly bean (a 10% chance), but 30 to 40 percent of the subjects picked the one with the worse (8%) odds. Many of them knew that they were making a mistake, but having a higher absolute number of red beans was too enticing to them. One commented afterwards: "I picked the one with more red jelly beans because it looked like there were more ways to get a winner, even though I knew there were also more whites, and that the percents were against me."
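The arithmetic behind the jelly bean choice is a one-line comparison (a trivial Python check): the can with more winning beans is still the worse bet.

    small_can = 1 / 10     # 1 red bean out of 10   -> 10% chance of winning
    large_can = 8 / 100    # 8 red beans out of 100 ->  8% chance of winning
    print(small_can > large_can)   # True: more red beans, but worse odds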

A "hot" override, on the other hand, is one where strong emotions are involved. In what's likely to be a somewhat controversial example around here, Stanovich discusses the trolley problem. He notes that most people would choose to flip the switch sending the trolley to the track where it kills one person instead of five, but that most people would also say "no" to pushing the fat man on the tracks. He notes that this kind of a scenario feels more "yucky". Brain scans of people being presented various variations of this dilemma show more emotional activity in the more personal variations. The people answering "yes" to the "fat man"-type dilemmas took a longer time to answer, and scans of their brain indicated activity in the regions associated with overriding the emotional brain. They were using Type 2 processing to override the effects of Type 1 emotions.

Stanovich identifies denominator neglect (the jelly bean problem), belief bias effects ("roses are living things"), self-control problems such as the inability to delay gratification, as well as moral judgement failures as being caused by an override failure.

" } }, { "_id": "XfQFDc7TP8fdaXRm5", "title": "Rationality Quotes: July 2010", "pageUrl": "https://www.lesswrong.com/posts/XfQFDc7TP8fdaXRm5/rationality-quotes-july-2010", "postedAt": "2010-07-01T21:24:12.091Z", "baseScore": 8, "voteCount": 5, "commentCount": 227, "url": null, "contents": { "documentId": "XfQFDc7TP8fdaXRm5", "html": "

\n

This is our monthly thread for collecting these little gems and pearls of wisdom, rationality-related quotes you've seen recently, or had stored in your quotesfile for ages, and which might be handy to link to in one of our discussions.

\n\n

 

" } }, { "_id": "DKLxoWcQmt4xapsQJ", "title": "Open Thread: July 2010", "pageUrl": "https://www.lesswrong.com/posts/DKLxoWcQmt4xapsQJ/open-thread-july-2010", "postedAt": "2010-07-01T21:20:42.638Z", "baseScore": 10, "voteCount": 7, "commentCount": 697, "url": null, "contents": { "documentId": "DKLxoWcQmt4xapsQJ", "html": "

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

\n

Part 2

" } }, { "_id": "ujTE9FLWveYz9WTxZ", "title": "What Cost for Irrationality?", "pageUrl": "https://www.lesswrong.com/posts/ujTE9FLWveYz9WTxZ/what-cost-for-irrationality", "postedAt": "2010-07-01T18:25:06.938Z", "baseScore": 94, "voteCount": 79, "commentCount": 119, "url": null, "contents": { "documentId": "ujTE9FLWveYz9WTxZ", "html": "

This is the first part in a mini-sequence presenting content from Keith E. Stanovich's excellent book What Intelligence Tests Miss: The psychology of rational thought. It will culminate in a review of the book itself.


People who care a lot about rationality may frequently be asked why they do so. There are various answers, but I think that many of the ones discussed here won't be very persuasive to people who don't already have an interest in the issue. But in real life, most people don't try to stay healthy because of various far-mode arguments for the virtue of health: instead, they try to stay healthy in order to avoid various forms of illness. In the same spirit, I present you with a list of real-world events that have been caused by failures of rationality.

What happens if you, or the people around you, are not rational? Well, in order from least serious to worst, you may...

Have a worse quality of living. Status Quo bias is a general human tendency to prefer the default state, regardless of whether the default is actually good or not. In the 1980s, Pacific Gas and Electric conducted a survey of their customers. Because the company was serving a lot of people in a variety of regions, some of their customers suffered from more outages than others. Pacific Gas asked customers with unreliable service whether they'd be willing to pay extra for more reliable service, and customers with reliable service whether they'd be willing to accept a less reliable service in exchange for a discount. The customers were presented with increases and decreases of various percentages, and asked which ones they'd be willing to accept. The percentages were the same for both groups, except that one group was offered increases and the other decreases. Even though both groups had the same income, customers of both groups overwhelmingly wanted to stay with their status quo. Yet the service difference between the groups was large: the unreliable service group suffered 15 outages per year of 4 hours' average duration and the reliable service group suffered 3 outages per year of 2 hours' average duration! (Though note caveats.)

A study by Philips Electronics found that one half of their products had nothing wrong with them, but the consumers couldn't figure out how to use the devices. This can be partially explained by egocentric bias on the part of the engineers. Cognitive scientist Chip Heath notes that he has "a DVD remote control with 52 buttons on it, and every one of them is there because some engineer along the line knew how to use that button and believed I would want to use it, too. People who design products are experts... and they can't imagine what it's like to be as ignorant as the rest of us."

Suffer financial harm. John Allen Paulos is a professor of mathematics at Temple University. Yet he fell prey to serious irrationality, which began when he purchased WorldCom stock at $47 per share in early 2000. As bad news about the industry began mounting, WorldCom's stock price started falling - and as it did so, Paulos kept buying, regardless of accumulating evidence that he should be selling. Later on, he admitted that his "purchases were not completely rational" and that "I bought shares even though I knew better". He was still buying - partially on borrowed money - when the stock price was $5. When it momentarily rose to $7, he finally decided to sell. Unfortunately, he didn't get off work until after the market closed, and on the next market day the stock lost a third of its value. Paulos finally sold everything, at a huge loss.

Stock market losses due to irrationality are not atypical. From the beginning of 1998 to the end of 2001, the Firsthand Technology Value mutual fund had an average gain of 16 percent per year. Yet the average investor who invested in the fund lost 31.6 percent of her money over the same period. Investors actually lost a total of $1.9 billion by investing in a fund which was producing a 16 percent profit per year. That happened because the fund was very volatile, causing people to invest and cash out at exactly the wrong times. When it gained, it gained a lot, and when it lost, it lost a lot. When people saw that it had been making losses, they sold, and when they saw it had been making gains, they bought. In other words, they bought high and sold low - exactly the opposite of what you're supposed to do if you want to make a profit. Reporting on a study of 700 mutual funds during 1998-2001, financial reporter Jason Zweig noted that "to a remarkable degree, investors underperformed their funds' reported returns - sometimes by as much as 75 percentage points per year."
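The gap between a fund's reported return and what its investors actually earn comes from the timing of their cash flows. The sketch below is a toy illustration with invented returns and invented investor behavior (it does not use the Firsthand fund's actual data): the fund itself breaks even, but an investor who adds money after gains and pulls it out after losses ends up with a loss.

    # Hypothetical volatile fund: +50% and -33.3% in alternation, so its
    # own cumulative (time-weighted) return is essentially zero.
    fund_returns = [0.5, -1/3, 0.5, -1/3]

    fund_growth = 1.0
    holdings = 1000.0        # the investor starts with $1,000 in the fund
    contributed = 1000.0
    withdrawn = 0.0

    for r in fund_returns:
        fund_growth *= 1 + r
        holdings *= 1 + r
        if r > 0:            # chase performance: add money after a gain
            holdings += 1000.0
            contributed += 1000.0
        else:                # panic: pull money out after a loss
            holdings -= 500.0
            withdrawn += 500.0

    print(f"fund's own cumulative return: {fund_growth - 1:+.1%}")   # essentially 0%
    net = holdings + withdrawn - contributed
    print(f"investor's net result: {net:+,.0f} dollars")             # a loss of about $667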

Be manipulated and robbed of personal autonomy. Subjects were asked to divide 100 usable livers among 200 children awaiting a transplant. With two groups of children, group A with 100 children and group B with 100 children, the overwhelming response was to allocate 50 livers to each, which seems reasonable. But when group A had 100 children, each with an 80 percent chance of surviving when transplanted, and group B had 100 children, each with a 20 percent chance of surviving when transplanted, people still chose the equal allocation method even though this caused the unnecessary deaths of 30 children. Well, that's just a question of values and not rationality, right? Turns out that if the patients were ranked from 1 to 200 in terms of prognosis, people were relatively comfortable with distributing organs to the top 100 patients. It was only when the question was framed as "group A versus group B" that people suddenly felt they didn't want to abandon group B entirely. Of course, these are exactly the same dilemma. One could almost say that the person who got to choose which framing to use was getting to decide on behalf of the people being asked the question.


Two groups of subjects were given information about eliminating affirmative action and adopting a race-neutral policy at several universities. One group was told that under race-neutral conditions, the probability of a black student being admitted would decline from 42 percent to 13 percent and the probability of a white student being admitted would rise from 25 percent to 27 percent. The other group was told that under race-neutral admissions, the number of black students being admitted would decrease by 725 and the number of white students would increase by 725. These two framings were both saying the same thing, but you can probably guess the outcome: support for affirmative action was much higher in the percentage group.

In a hypothetical country, a family with no children and an income of $35,000 pays $4,000 in tax, while a family with no children and an income of $100,000 pays $26,000 in tax. Now suppose that there's a $500 tax reduction for having a child for a family with an income of $35,000. Should the family with an income of $100,000 be given a larger reduction because of their higher income? Here, most people would say no. But suppose that instead, the baseline is that a family of two children with an income of $35,000 pays $3,000 in tax and that a family of two children with an income of $100,000 pays $25,000 in tax. We propose to make the families with no children pay more tax - that is, have a "childless penalty". Say that the family with the income of $100,000 and one child has their taxes set at $26,000 and the same family with no children has their taxes set at $27,000 - a childless penalty of $1,000 for each child fewer than two. Should the poorer family which makes $35,000 and has no children also pay the same $2,000 childless penalty as the richer family? Here, most people would also say no - they'd want the "bonus" for children to be equal for low- and high-income families, but they do not want the "penalty" for lacking children to be the same for high- and low-income families. Yet the "bonus" and the "penalty" are just two descriptions of the same tax schedule.
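To make the equivalence concrete, here is a small sketch using one consistent set of numbers in the spirit of the example above (the figures are hypothetical, as in the original thought experiment). The "child bonus" description and the "childless penalty" description produce identical tax bills; only the baseline used to describe them differs.

    # One tax schedule, described two ways: as a per-child "bonus" off a
    # childless baseline, or as a per-missing-child "penalty" added to a
    # two-child baseline. The amounts owed are identical either way.
    childless_baseline = {35_000: 4_000, 100_000: 27_000}
    two_child_baseline = {35_000: 3_000, 100_000: 25_000}
    per_child = {35_000: 500, 100_000: 1_000}   # hypothetical per-child amount

    def tax_bonus_framing(income, children):
        return childless_baseline[income] - per_child[income] * children

    def tax_penalty_framing(income, children):
        return two_child_baseline[income] + per_child[income] * (2 - children)

    for income in (35_000, 100_000):
        for kids in (0, 1, 2):
            a = tax_bonus_framing(income, kids)
            b = tax_penalty_framing(income, kids)
            assert a == b
            print(income, kids, a)

Asking whether the richer family should get a larger bonus and asking whether it should pay a larger penalty are therefore the same question about the same schedule.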

End up falsely accused or imprisoned. In 2003, an attorney was released from prison in England when her conviction for murdering her two infants was overturned. Five months later, another person was released from prison when her conviction for having murdered her children was also overturned. In both cases, the evidence presented against them had been ambiguous. What had convinced the jury was that in both cases, a pediatrician had testified that the odds of two children in the same family dying of sudden infant death syndrome were 73 million to 1. Unfortunately, he had arrived at this figure by squaring the odds of a single death. Squaring the odds of a single event to arrive at the odds of it happening twice only works if the two events are independent. But that assumption is likely to be false in the case of multiple deaths in the same family, where numerous environmental and genetic factors may have affected both deaths.
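To see how badly the independence assumption can mislead, here is a toy mixture model in Python. The single-death rate is chosen so that its square matches the roughly 1-in-73-million figure described above; every other number is invented purely for illustration. The point is only that shared risk factors make a second death in the same family far more likely than squaring suggests.

    # Population-average risk of a single death, chosen so that squaring it
    # gives roughly the 1-in-73-million figure quoted in the testimony.
    p = 1 / 8543

    # Hypothetical mixture: a small fraction of families share genetic or
    # environmental factors that raise their risk well above average.
    f_high = 0.01              # invented: 1% of families are high-risk
    risk_high = 50 * p         # invented: their per-child risk is 50x higher
    # Pick the low-risk value so the population average still equals p.
    risk_low = (p - f_high * risk_high) / (1 - f_high)

    naive_two = p ** 2                                          # assumes independence
    mixture_two = f_high * risk_high**2 + (1 - f_high) * risk_low**2

    print(f"naive (squared):  1 in {1 / naive_two:,.0f}")       # about 1 in 73 million
    print(f"with shared risk: 1 in {1 / mixture_two:,.0f}")
    print(f"ratio: {mixture_two / naive_two:.1f}x more likely") # roughly 25x here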

In the late 1980s and early 1990s, many parents were excited and overjoyed to hear of a technique coming out of Australia that enabled previously totally non-verbal autistic children to communicate. It was uncritically promoted in highly visible media such as 60 Minutes, Parade magazine and the Washington Post. The claim was that autistic individuals and other children with developmental disabilities who'd previously been nonverbal had typed highly literate messages on a keyboard when their hands and arms had been supported over the keyboard by a sympathetic "facilitator". As Stanovich describes: "Throughout the early 1990s, behavioral science researchers the world over watched in horrified anticipation, almost as if observing cars crash in slow motion, while a predictable tragedy unfolded before their eyes." The hopes of countless parents were dashed when it was shown that the "facilitators" had been - consciously or unconsciously - directing the children's hands onto the keys. It should have been obvious that spreading such news before the technique had been properly scientifically examined was dangerously irresponsible - and it gets worse. During some "facilitation" sessions, children "reported" having been sexually abused by their parents, and were removed from their homes as a result. (Though they were eventually returned.)

End up dead. After 9/11, people became afraid of flying and started doing so less. Instead, they began driving more. Unfortunately, car travel has a much higher chance of death than air travel. Researchers have estimated that over 300 more people died in the last months of 2001 because they drove instead of flying. Another group calculated that for flying to be as dangerous as driving, there would have to be an incident on the scale of 9/11 once a month!

Have your society collapse. Possibly even more horrifying is the tale of Albania, which had previously been a communist dictatorship but had made considerable financial progress from 1992 to 1997. In 1997, however, one half of the adult population had fallen victim to Ponzi schemes. In a Ponzi scheme, the investment itself isn't actually making any money, but rather early investors are paid off with the money from late investors, and eventually the system has to collapse when no new investors can be recruited. But when schemes offering a 30 percent monthly return began to become popular in Albania, competitors offering a 50-60 or even a 100 percent monthly return soon showed up, and people couldn't resist the temptation. Eventually both the government and economy of Albania collapsed. Stanovich describes:

People took out mortgages on their homes in order to participate. Others sold their homes. Many put their entire life savings into the schemes. At their height, an amount equal to 50 percent of the country's GDP was invested in Ponzi schemes. Before the schemes collapsed, they actually began to compete with wage income and distort the economy. For example, one business owner saw his workforce quickly slip from 130 employees to 70 because people began to think they could invest in the Ponzi schemes instead of actually working for their income.

The estimated death toll was between 1,700 and 2,000.

" } }, { "_id": "KL9iocwykHq53Esrv", "title": "Applied Bayes' Theorem: Reading People", "pageUrl": "https://www.lesswrong.com/posts/KL9iocwykHq53Esrv/applied-bayes-theorem-reading-people", "postedAt": "2010-06-30T17:21:53.836Z", "baseScore": 37, "voteCount": 28, "commentCount": 26, "url": null, "contents": { "documentId": "KL9iocwykHq53Esrv", "html": "

Or, how to recognize Bayes' theorem when you meet one making small talk at a cocktail party.

\n

Knowing the theory of rationality is good, but it is of little use unless we know how to apply it. Unfortunately, humans tend to be poor at applying raw theory, instead needing several examples before it becomes instinctive. I found some very useful examples in the book Reading People: How to Understand People and Predict Their Behavior - Anytime, Anyplace. While I didn't think that it communicated the skill of actually reading people very well, I did notice that it has one chapter (titled \"Discovering Patterns: Learning to See the Forest, Not Just the Trees\") that could almost have been a collection of Less Wrong posts. It also serves as an excellent example of applying Bayes' theorem in everyday life.

In \"What is Bayesianism?\" I said that the first core tenet of Bayesianism is \"Any given observation has many different possible causes\". Reading People says:

\n
\n

If this book could deliver but one message, it would be that to read people effectively you must gather enough information about them to establish a consistent pattern. Without that pattern, your conclusions will be about as reliable as a tarot card reading.

\n
\n

In fact, the author is saying that Bayes' theorem applies when you're trying to read people (if this is not immediately obvious, just keep reading). Any particular piece of evidence about a person could have various causes. For example, in a later chapter we are offered a list of possible reasons for why someone may have dressed inappropriately for an occasion. They might (1) be seeking attention, (2) lack common sense, (3) be self-centered and insensitive to others, (4) be trying to show that they are spontaneous, rebellious, or nonconformists and don't care what other people think, (5) not have been taught how to dress and act appropriately, (6) be trying to imitate someone they admire, (7) value comfort and convenience over all else, or (8) simply not have the right attire for the occasion.

Similarly, very short hair on a man might indicate that he (1) is in the military, or was at some point in his life, (2) works for an organization that demands very short hair, such as a police force or fire department, (3) is trendy, artistic or rebellious, (4) is conservative, (5) is undergoing or recovering from a medical treatment, (6) thinks he looks better with short hair, (7) plays sports, or (8) keeps his hair short for practical reasons.

So much for reading people being easy. This, again, is the essence of Bayes' theorem: even though somebody being in the military might almost certainly mean that they'd have short hair, them having short hair does not necessarily mean that they are in the military. On the other hand, if someone has short hair, is clearly knowledgeable about weapons and tactics, displays a no-nonsense attitude, is in good shape, and has a very Spartan home... well, though it's still not for certain, it seems likely to me that of all the people having all of these attributes, quite a few of them are in the military or in similar occupations.

The book offers a seven-step guide for finding patterns in people. I'll go through them one at a time, pointing out what they say in Bayesian and heuristic/bias terms. Note that this is not a definitive list: if you can come up with more Bayesian angles to the book, post them in the comments.

1. Start with the person's most striking traits, and as you gather more information see if his other traits are consistent or inconsistent.

\n

As computationally bounded agents, we can't simply take in all the available data at once: we have to start off with some particularly striking traits and build a picture from there. However, humans are notorious for anchoring too much (Anchoring and Adjustment), so we are reminded to actively seek disconfirmation of any initial theory we have.

\n
\n

I constantly test additional information against my first impression, always watching for patterns to develop. Each piece of the puzzle - a person's appearance, her tone of voice, hygiene and so on - may validate my first impression, disprove it, or have little impact on it. If most of the new information points in a different direction than my first impression did, I revise that impression. Then I consider whether my revised impression holds up as even more clues are revealed - and revise it again, if need be.

\n
\n

Here, the author is keeping in mind Conservation of Expected Evidence. If you could anticipate in advance the direction of any update, you should just update now. You should not expect to be able to get the right answer right away and never need to seriously update it. Nor should you expect to suddenly encounter some piece of evidence that, on its own, would make you switch to becoming confident in something completely different. An ideal Bayesian agent will expect their beliefs to be in a constant state of gradual revision as the evidence comes in, and people with human cognitive architectures should also make an explicit effort to update their impressions as fluidly as possible.

\n

Another thing that's said about first impressions also bears noting:

\n
\n

People often try hard to make a good first impression. The challenge is to continue to examine your first impression of someone with an open mind as you have more time, information, and opportunity.

\n
\n

Filtered evidence, in its original formulation, was a set of evidence that had been chosen for the specific purpose of persuading you of something. Here I am widening the definition somewhat, and also applying it to cases where the other person cannot exclude all the evidence they dislike, but is regardless capable of biasing it in a direction of their choice. The evidence presented at a first meeting is usually filtered evidence. (Such situations are actually complicated signaling games, and a full Bayesian analysis would take into account all the broader game-theoretic implications. Filtered evidence is just one part of it.)

\n

Evidence is an event tangled by links of cause and effect with whatever you want to know about. On a first meeting, a person might be doing their best to appear friendly, say. Usually being a friendly person will lead them to behave in specific ways which are characteristic of friendly people. But if they are seeking to convey a good impression of themselves, their behavior may not be caused by an inherent friendliness anymore. The behavior is not tangled with friendliness, but with a desire to appear friendly.

\n

2. Consider each characteristic in light of the circumstances, not in isolation.

\n

The second core tenet in What is Bayesianism was \"How we interpret any event, and the new information we get from anything, depends on information we already had.\"

\n
\n

If you told me simply that a young man wears a large hoop earring, you couldn't expect me to tell you what that entails. It might make a great parlor game, but in real life I would never hazard a guess based on so little information. If the man is from a culture in which most young men wear large earrings, it might mean that he's a conformist. If, on the other hand, he is the son of a Philadelphia lawyer, he may be rebellious. If he plays in a rock band, he may be trendy.

\n
\n

A Bayesian translation of this might read roughly as follows. \"Suppose you told me simply that a young man wears a large hoop earring. You are asking me to suggest some personality trait that's causing him to wear them, but there is not enough evidence to locate a hypothesis. If we knew that the man is from a culture where most young men wear large earrings, we might know that conformists would be even more likely to wear earrings. If the number of conformists was sufficiently large, then a young man from that culture, chosen randomly on the basis of wearing earrings, might very likely be a conformist, simply because conformist earring-wearers make up such a large part of the earring-wearer population.

\n

(Or to say that in a more mathy way, say we started with a .4 chance of a young man being a conformist, a .6 chance for a young man to be wearing earrings, and a .9 chance for the conformists to be wearing earrings. Then we'd calculate (0.9 * 0.4) / (0.6) and get a 0.6 chance for the man in question to be conformist. We don't have exact numbers like these in our heads, of course, but we do have a rough idea.)

\n

But then, he might also be the son of a Philadelphia lawyer, say, and then we'd get a good chance for him being rebellious. Or if he were a rock band member, he might be trendy. We don't know which of these reference classes we should use; whether we should think we're picking a young man at random from a group of earring-wearing young men from an earring-wearing culture or from all the sons of lawyers. We could try to take a prior over his membership in any of the relevant reference classes, saying for instance that there was a .05 chance of him being a member of an earring culture, or a .004 chance of him being the son of a lawyer and so on. In other words, we'd think that we're picking a young earring-wearing man from the group of all earring-wearing men on Earth. Then we'd have a (0.05 * 0.6 =) 0.03 chance of him being a conformist due to being from an earring culture, et cetera. But then we'd distribute our probability mass over such a large amount of hypotheses that they'd all be very unlikely: the group of all earring-wearing men is so big that drawing at random could produce pretty much any personality trait. Figuring out the most likely alternative of all those countless alternatives might make a great parlor game, but in real life it'd be nothing you'd like to bet on.

\n

If you told me that he was also carrying an electric guitar... well, that still wouldn't be enough to get a very high probability on any of those alternatives, but it sure would help increase the initial probability of the \"plays in a rock band\" hypothesis. Of course, he could play in a rock band and be from a culture where people usually wore earrings.\"
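The parenthetical calculation a few paragraphs above is just Bayes' theorem applied within one reference class. A minimal sketch using the same made-up numbers:

    def posterior(prior_h, likelihood_e_given_h, prob_e):
        """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
        return likelihood_e_given_h * prior_h / prob_e

    # Within the earring-wearing culture (numbers from the passage above):
    # P(conformist) = 0.4, P(earrings) = 0.6, P(earrings | conformist) = 0.9.
    p_conformist = posterior(prior_h=0.4, likelihood_e_given_h=0.9, prob_e=0.6)
    print(round(p_conformist, 3))          # 0.6

    # The wider reference class dilutes this: if there is only a 0.05 chance
    # that he is from that culture at all, the "conformist because of his
    # culture" hypothesis keeps roughly 0.05 * 0.6 = 0.03 of the mass.
    print(round(0.05 * p_conformist, 3))   # 0.03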

\n

3. Look for extremes. The importance of a trait or characteristic may be a matter of degree.

\n

This is basically just a reformulation of the above points, with an emphasis on the fact that extreme traits are easier to notice. But again, extreme signs don't tell us much in isolation, so we need to look for the broader pattern.

\n
\n

The significance of any trait, however extreme, usually will not become clear until you learn enough about someone to see a pattern develop. As you look for the pattern, give special attention to any other traits consistent with the most extreme ones. They're usually like a beacon in the night, leading you in the right direction.

\n
\n

4. Identify deviations from the pattern.

\n

(I'll skip this one.)

\n

5. Ask yourself if what you're seeing reflects a temporary state of mind or a permanent quality.

\n

Again, any given observation has many different possible causes. Sometimes a behavior is caused not by any particular personality trait, but the person simply happening to be in a particular mood, which might be rare for them.

\n

This is possibly old hat by now, but just to be sure: The probability that behavior X is caused by cause A, sayeth Bayes' theorem, is the probability that A happened in the first place times (since they must both be true) the probability that A would cause X at all. That's divided by the summed chance of X happening at all - that is, the same product summed over every possible cause, A included.
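A short numeric sketch of that statement, applied to the "permanent quality versus transient mood" question above. The priors and likelihoods are invented; the point is the normalization over all the candidate causes.

    # P(A_i | X) = P(A_i) * P(X | A_i) / sum_j P(A_j) * P(X | A_j)
    # Two invented candidate causes for an observed behavior X (treated here
    # as the only two explanations under consideration):
    causes = {
        "permanent trait": {"prior": 0.25, "likelihood": 0.8},
        "transient mood":  {"prior": 0.75, "likelihood": 0.5},
    }

    p_x = sum(c["prior"] * c["likelihood"] for c in causes.values())
    for name, c in causes.items():
        print(name, round(c["prior"] * c["likelihood"] / p_x, 2))

    # Output: permanent trait ~0.35, transient mood ~0.65. Even though the
    # trait explains the behavior better when present (0.8 vs 0.5), the mood
    # explanation wins because moods of this kind are assumed to be more common.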

\n

A pseudo-frequentist interpretation might compare this to the probability of drawing an ace out of a deck of cards. (I'm not sure if the following analogy is useful or makes sense to anyone besides me, but let's give it a shot.) Suppose you get to draw cards from a deck, but even after drawing them you're never allowed to look at them, and can only guess whether you're holding the most valuable ones. The chance that you'll draw a particular card is one divided by the total number of cards. You'd have a better chance of drawing it if you got to draw more cards. Imagine the probability of \"(A happened) * (A would cause X)\" as the number of cards you'll get to draw from the deck of all hypotheses. You need to divide that by the probability that all hypotheses combined have, alternative explanations included, so think of the probability of the alternate hypotheses as the number of other cards in the deck. Then your chance of drawing an ace of hearts (the correct hypothesis) is maximized if you get to draw as many cards as possible and the alternative hypotheses have as little probability (as few non-ace-of-hearts cards in the deck) as possible. Not considering the alternate hypotheses is like thinking you'll have a high chance of drawing the correct card, when you don't know how many cards there are in the deck total.

\n

If you're hoping to draw the correct hypothesis about the reasons for someone's behavior, then consider carefully whether you want to use the \"this is a permanent quality\" or the \"this is just a transient mood\" explanation. Frequently, drawing the \"this is just a transient mood\" cards will give you a better shot at grabbing the hypothesis with the most valuable card.

\n

See also Correspondence Bias.

\n


6. Distinguish between elective and nonelective traits. Some things you control; other things control you.

\n

As noted in the discussion about first impressions, people have an interest in manipulating the impression that they give others. The easier it is to manipulate an impression, and the more common it is that people have an interest in biasing that impression, the less reliable a guide it is. Elective traits such as clothing, jewelry and accessories can be altered almost at will, and are therefore relatively weak evidence.

\n

Nonelective traits offer stronger evidence, particularly if they're extreme: things such as extreme overweight, physical handicaps, mental disorders and debilitating diseases often have a deep-rooted effect on personality and behavior. Many other nonelective traits, such as height or facial features that are not very unusual, don't usually merit special consideration - unless the person has invested significant resources in permanently altering them.

\n


7. Give special attention to certain highly predictive traits.

\n

Left as an exercise for readers.

" } }, { "_id": "uP8NuRLEsGsg9sfRR", "title": "A Challenge for LessWrong", "pageUrl": "https://www.lesswrong.com/posts/uP8NuRLEsGsg9sfRR/a-challenge-for-lesswrong", "postedAt": "2010-06-29T23:47:39.284Z", "baseScore": 21, "voteCount": 31, "commentCount": 172, "url": null, "contents": { "documentId": "uP8NuRLEsGsg9sfRR", "html": "

The user divia, in her most excellent post on spaced repetition software, quotes Paul Buchheit as saying

\n
\n

\"Good enough\" is the enemy of \"At all\"!

\n
\n

This is an important truth which bears repetition, and to which I shall return.

\n

\"Rationalists should win\"

\n

Many hands have been wrung hereabouts on the subject of rationality's instrumental value (or lack thereof) in the everyday lives of LWers. Are we winning? Some consider this doubtful.1

\n

Now, I have a couple of issues with the question being framed in such a way.

\n\n
However, these relatively minor considerations aside, surely we can do better. In order to win, rationalists should play.
\n

Nonrandom acts of rationality

\n

The LessWrong community finds itself in the fairly privileged position of being (1) mostly financially well-off; (2) well-educated and articulate; (3) connected; (4) of non-trivial size. Therefore, I would like to suggest a project for any & all users who might be interested.

\n

Let us become a solution in search of problems.

\n

Perform one or more manageable & modest, rationally & ethically motivated actions between now and July 31, 2010 (indicate intent to participate, and brainstorm, below). These actions must have a reasonable chance of being an unequivocal net positive for the shared values of this community. Finally, post what you have done to this thread's comments, in as much detail as possible.

\n

Some examples:

\n\n
(ETA: Some have suggested that these examples are terrible - a case of feel-good vs. effective actions. If you know more effective actions, for goodness' sake, please post them in the comments!)
\n
Remember, \"Good enough\" is the enemy of \"At all\"! Even if your action is just to click on The Hunger Site and all its affilliates every morning while you sip your rooibos, that's better than nothing and is unequivocally praiseworthy - post it. Your action does not have to be clever, just helpful.
\n
Bonus points for investigating the efficacy and cost-effectiveness of what you do (the suggestions above are not vetted with respect to this question).
\n
Also, if you already do something along these lines, please post it in the comments.
\n

What about LessWrong acting as a group?

\n

I would  love to see a group-level action on our part occur; however, after some time spent brainstorming, I haven't hit upon any really salient ones that are not amenable to individual action. Perhaps a concerted letter-writing campaign? I suspect that is a weak idea, and that there are much better ones out there. Who's up for world-optimization?

\n

Potential objection

\n
\n

These actions are mostly sub-optimal, consequentially speaking. The SIAI/[insert favourite cause here] is a better idea for a donation, since it promises to solve all the above problems in one go. These are just band-aids.

\n
\n
This may or may not be true; however, I am mostly asking people (myself included) to do this exercise instead of nothing or very little. If you already give to SIAI or something else that might save the world, and no disposable income or time is left over, then ignore this post (although I would be very interested to know the scope of your involvement in the comments).
\n

\n

 

\n
\n

 

\n
1 Although by George, we have Newcomb's problem licked!
" } }, { "_id": "Pvj3F37YAs443bbrH", "title": "Book Club Update, Chapter 2 of Probability Theory", "pageUrl": "https://www.lesswrong.com/posts/Pvj3F37YAs443bbrH/book-club-update-chapter-2-of-probability-theory", "postedAt": "2010-06-29T00:46:12.048Z", "baseScore": 10, "voteCount": 9, "commentCount": 42, "url": null, "contents": { "documentId": "Pvj3F37YAs443bbrH", "html": "

Previously: Book Club introductory post - First update and Chapter 1 summary

\n

Discussion on chapter 1 has wound down, so we move on to Chapter 2 (I have updated the previous post with a summary of chapter 1, with links to the discussion as appropriate). But first, a few announcements.

\n

How to participate

\n

This is both for people who have previously registered interest, as well as newcomers. This spreadsheet is our best attempt at coordinating 80+ Less Wrong readers interested in participating in \"earnest study of the great literature in our area of interest\".

\n

If you are still participating, please let the group know - all you have to do is fill in the \"Active (Chapter)\" column. Write in an \"X\" if you are checked out, or the number of the chapter you are currently reading. This will let us measure attrition, as well as adapt the pace if necessary. If you would like to join, please add yourself to the spreadsheet. If you would like to participate in live chat about the material, please indicate your time zone and preferred meeting time. As always, your feedback on the process itself is more than welcome.

\n

Refer to the previous post for more details on how to participate and meeting schedules.

\n

Chapter 2: The Quantitative Rules

\n

In this chapter Jaynes carefully introduces and justifies the elementary laws of plausibility, from which all later results are derived.

\n

(Disclosure: I wasn't able to follow all the math in this chapter but I didn't let it deter me; the applications in later chapters are more accessible. We'll take things slow, and draw on such expertise as has been offered by more advanced members of the group. At worst this chapter can be enjoyed on a purely literary basis.)

\n

Sections: The Product Rule - The Sum Rule. Exercises: 2.1 and 2.2

\n

Chapter 2 works out the consequences of the qualitative desiderata introduced at the end of Chapter 1.

\n

The first step is to consider the evaluation of the plausibility (AB|C), from the possibly relevant inputs: (B|C), (A|C), (A|BC) and (B|AC). Considerations of symmetry and the desideratum of consistency lead to a functional equation known as the \"associativity equation\": F(F(x,y),z)=F(x,F(y,z)), characterizing the function F such that (AB|C)=F[(B|C),(A|BC)]. The derivation that follows requires some calculus, and shows by differentiating and then integrating back the form of the product rule:

\n

w(AB|C)=w(A|BC)w(B|C)=w(B|AC)w(A|C)

\n

Having obtained this, the next step is to establish how (A|B) is related to (not-A|B). The functional equation in this case is

\n

x*S(S(y)/x)=y*S(S(x)/y)

\n

and the derivation, after some more calculus, leads to S(x)=(1-x^m)^(1/m). But the value of m is irrelevant, and so we end up with the two following rules:

\n

p(AB|C)=p(A|BC)p(B|C)=p(B|AC)p(A|C)

\n

p(not-A|B)+p(A|B)=1

\n

The exercises provide a first opportunity to explore how these two rules yield a great many other ways of assessing probabilities of more complex propositions, for instance p(C|A+B), based on the elementary probabilities.
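As a concrete check on how these two rules generate probabilities for compound propositions, here is a small numerical sketch. The joint distribution is invented, and \"A+B\" is read as \"A or B\", following Jaynes' notation.

    from itertools import product

    # An invented joint distribution over three propositions A, B, C:
    # each atomic state (a, b, c) gets some probability, summing to 1.
    weights = [0.10, 0.05, 0.15, 0.10, 0.20, 0.05, 0.25, 0.10]
    joint = dict(zip(product([True, False], repeat=3), weights))

    def p(event):
        """Probability of an event, given as a predicate over (a, b, c)."""
        return sum(w for state, w in joint.items() if event(*state))

    # Product rule: p(AB|C) = p(A|BC) p(B|C)
    lhs = p(lambda a, b, c: a and b and c) / p(lambda a, b, c: c)
    rhs = (p(lambda a, b, c: a and b and c) / p(lambda a, b, c: b and c)) \
        * (p(lambda a, b, c: b and c) / p(lambda a, b, c: c))
    assert abs(lhs - rhs) < 1e-12

    # Sum rule: p(not-A|B) + p(A|B) = 1
    p_b = p(lambda a, b, c: b)
    assert abs(p(lambda a, b, c: not a and b) / p_b
               + p(lambda a, b, c: a and b) / p_b - 1) < 1e-12

    # Exercise-style compound probability: p(C|A+B), i.e. p(C | A or B).
    print(p(lambda a, b, c: (a or b) and c) / p(lambda a, b, c: a or b))  # ~0.692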

\n

Sections: Qualitative Properties - Numerical Values - Notation and Finite Sets Policy - Comments. Exercises: 2.3

\n

Jaynes next turns back to the relation between \"plausible reasoning\" and deductive logic, showing the latter as a limiting case of the former. The weaker syllogisms shown in Chapter 1 correspond to inequalities that can be derived from the product rule, and the direction of these inequalities starts to point toward likelihood ratios.

\n

The product and sum rules allow us to consider the particular case when we have a finite set of mutually exclusive and exhaustive propositions, and background information which is symmetrical about each such proposition: it says the same about any one of them that it says about any other. Considering two such situations, where the propositions are the same but the labels we give them are different, Jaynes shows that, given our starting desiderata, we cannot do other than to assign the same probabilities to propositions which we are unable to distinguish otherwise than by their labels.

\n

This is the principle of indifference; its significance is that even though what we have derived so far is an infinity of functions p(x) generated by the parameter m, the desiderata entirely \"pin down\" the numerical values in this particular situation.

\n

So far in this chapter we had been using p(x) as a function relating the plausibilities of propositions, such that p(x) was an arbitrary monotonic function of the plausibility x. At this point Jaynes suggests that we \"turn this around\" and say that x is a function of p. These values of p, probabilities, become the primary mathematical objects, while the plausibilities \"have faded entirely out of the picture. We will just have no further use for them\".

\n

The principle of indifference now allows us to start computing numerical values for \"urn probabilities\", which will be the main topic of the next chapter.

\n

Exercise 2.3 is notable for providing a formal treatment of the conjunction fallacy.

\n

Chapter 2 ends with a cautionary note on the topic of justifying results on infinite sets only based on a \"well-behaved\" process of passing to the limit of a series of finite cases. The Comments section addresses the \"subjective\" vs \"objective\" distinction.

" } }, { "_id": "hEeapgs2Y8tNyZkXD", "title": "Unknown knowns: Why did you choose to be monogamous?", "pageUrl": "https://www.lesswrong.com/posts/hEeapgs2Y8tNyZkXD/unknown-knowns-why-did-you-choose-to-be-monogamous", "postedAt": "2010-06-26T02:50:22.302Z", "baseScore": 69, "voteCount": 77, "commentCount": 669, "url": null, "contents": { "documentId": "hEeapgs2Y8tNyZkXD", "html": "

Many of us are familiar with Donald Rumsfeld's famous (and surprisingly useful) taxonomy of knowledge:

\n
\n

There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we now know we don’t know. But there are also unknown unknowns. These are things we do not know we don’t know.

\n
\n

But this taxonomy (as originally described) omits an important fourth category: unknown knowns, the things we don't know that we know. This category encompasses the knowledge of many of our own personal beliefs, what I call unquestioned defaults. For example, most modern Americans possess the unquestioned default belief that they have some moral responsibility for their own freely-chosen actions. In the twelfth century, most Europeans possessed the unquestioned default belief that the Christian god existed. And so on. These unknown knowns are largely the products of a particular culture; they require homogeneity of belief to remain unknown.

\n

By definition, we are each completely ignorant of our own unknown knowns. So even when our culture gives us a fairly accurate map of the territory, we'll never notice the Mercator projection's effect. Unless it's pointed out to us or we find contradictory evidence, that is. A single observation can be all it takes, if you're paying attention and asking questions. The answers might not change your mind, but you'll still come out of the process with more knowledge than you went in with.

\n

\n

When I was eighteen I went on a date with a girl I'll call Emma, who conscientiously informed me that she already had two boyfriends: she was, she said, polyamorous. I had previously had some vague awareness that there had been a free love movement in the sixties that encouraged \"alternative lifestyles\", but that awareness was not a sufficient motivation for me to challenge my default belief that romantic relationships could only be conducted one at a time. Acknowledging default settings is not easy.

\n

The chance to date a pretty girl, though, can be sufficient motivation for a great many things (as is also the case with pretty boys). It was certainly a good enough reason to ask myself, \"Self, what's so great about this monogamy thing?\"

\n

I couldn't come up with any particularly compelling answers, so I called Emma up and we planned a second date.

\n

Since that fateful day, I've been involved in both polyamorous and monogamous relationships, and I've become quite confident that I am happier, more fulfilled, and a better romantic partner when I am polyamorous. This holds even when I'm dating only one person; polyamorous relationships have a kind of freedom to them that is impossible to obtain any other way, as well as a set of similarly unique responsibilities.

\n

In this discussion I am targeting monogamy because its discovery has had an effect on my life that is orders of magnitude greater than that of any other previously-unknown known. Others I've spoken with have had similar experiences. If you haven't had it before, you now have the same opportunity that I lucked into several years ago, if you choose to exploit it.

\n

This, then, is your exercise: spend five minutes thinking about why your choice of monogamy is preferable to all of the other inhabitants of relationship-style-space, for you. Other options that have been explored and documented include:

\n\n

These types of polyamory cover many of the available options, but there are others; some are as yet unknown. Some relationship styles are better than others, subject to your ethics, history, and personality. I suspect that monogamy is genuinely the best option for many people, perhaps even most. But it's impossible for you to know that until you know that you have a choice.

\n

If you have a particularly compelling argument for or against a particular relationship style, please share it. But if romantic jealousy is your deciding factor in favor of monogamy, you may want to hold off on forming a belief that will be hard to change; my next post will be about techniques for managing and reducing romantic jealousy.

" } }, { "_id": "w2mC29nvmLbcua9yo", "title": "MWI, copies and probability", "pageUrl": "https://www.lesswrong.com/posts/w2mC29nvmLbcua9yo/mwi-copies-and-probability", "postedAt": "2010-06-25T16:46:08.379Z", "baseScore": 19, "voteCount": 22, "commentCount": 162, "url": null, "contents": { "documentId": "w2mC29nvmLbcua9yo", "html": "

Followup to: Poll: What value extra copies?

\n

For those of you who didn't follow Eliezer's Quantum Physics Sequence, let me reiterate that there is something very messed up about the universe we live in. Specifically, the Many Worlds Interpretation (MWI) of quantum mechanics states that our entire classical world gets copied something like 10^(40±20) times per second1. You are not a line through time, but a branching tree.

\n

If you think carefully about Descartes' \"I think therefore I am\" type skepticism, and approach your stream of sensory observations from such a skeptical point of view, you should note that if you really were just one branch-line in a person-tree, it would feel exactly the same as if you were a unique person-line through time, because looking backwards, a tree looks like a line, and your memory can only look backwards. 

\n

However, the rules of quantum mechanics mean that the integral of the modulus squared of the amplitude density, ∫|Ψ|², is conserved in the copying process. Therefore, the tree that is you has branches that get thinner (where thickness is ∫|Ψ|² over the localized density \"blob\" that represents that branch) as they branch off. In fact they get thinner in such a way that if you gathered them together into a bundle, the bundle would be as thick as the trunk it came from.
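A toy illustration of that bookkeeping, with invented branching ratios; it is just the arithmetic of the squared-amplitude weights, not a physical simulation.

    # Each branch carries a weight (the integral of |psi|^2 over its region).
    # When a branch splits, its children's weights sum to the parent's weight,
    # so bundling all the leaves together recovers the thickness of the trunk.
    trunk = 1.0
    split_fractions = [0.5, 0.3, 0.2]      # invented branching ratios

    branches = [trunk * f for f in split_fractions]
    leaves = [w * f for w in branches for f in split_fractions]   # split again

    print(sum(branches))   # 1.0
    print(sum(leaves))     # still 1.0 (up to floating-point rounding)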

\n

Now, since each copying event creates a slightly different classical universe, the copies in each of the sub-branches will each experience random events going differently. This means that over a timescale of decades, they will be totally \"different\" people, with different jobs, probably different partners and will live in different places though they will (of course) have your DNA, approximate physical appearance, and an identical history up until the time they branched off. For timescales on the order of a day, I suspect that almost all of the copies will be virtually identical to you, even down to going to bed at the same time, having exactly the same schedule that day, thinking almost all of the same thoughts etc.

\n

\n

 

\n

MWI mixes copies and probability

\n

When a \"random\" event happens, either the event was pseudorandom (like a large digit of pi) or it was a copy event, meaning that both (or all) outcomes were realized elsewhere in the wavefunction. This means that in many situations, when you say \"there is a probability p of event X happening\", what this really means is \"proportion p of my copy-children will experience X\".

\n

 

\n

LW doesn't care about copies

\n

In Poll: What value extra copies?, I asked what value people placed upon non-interacting extra copies of themselves, asking both about lock-step identical and statistically identical copies. The overwhelming opinion was that neither were of much value. For example, Sly comments:2

\n

\"I would place 0 value on a copy that does not interact with me. This might be odd, but a copy of me that is non-interacting is indistinguishable from a copy of someone else that is non-interacting. Why does it matter that it is a copy of me?\"

\n

 

\n

How to get away with attempted murder

\n

Suppose you throw a grenade with a quantum detonator at Sly. The detonator will sample a qubit in an even superposition of states 1 and 0. On a 0 it explodes, instantly vaporizing Sly (it's a very powerful grenade). On a 1, it defuses the grenade and dispenses a $100 note. Suppose that you throw it and observe that it doesn't explode:

\n

(A) does Sly charge you with attempted murder, or does he thank you for giving him $100 in exchange for something that had no value to him anyway?

\n

(B) if he thanks you for the free $100, does he ask for another one of those nice free hundred dollar note dispensers? (This is the \"quantum suicide\" option.)

\n

(C) if he says \"the one you've already given me was great, but no more please\", then presumably if you throw another one against his will, he will thank you for the free $100 again. And so on ad infinitum. Sly is temporally inconsistent if this option is chosen.

\n

 

\n

The punch line is that the physics we run on gives us a very strong reason to care about the welfare of copies of ourselves, which is (according to my survey) a counter-intuitive result. 

\n

EDIT: Quite a few people are biting the quantum suicide bullet. I think I'll have to talk about that next. Also, Wei Dai summarizes:

\n

Another way to think about this is that many of us seem to share the following three intuitions about non-interacting extra copies, out of which we have to give up at least one to retain logical consistency:

\n
    \n
  1. We value extra copies in other quantum branches.
  2. We don't value extra copies that are just spatially separated from us (and are not too far away).
  3. We ought to value both kinds of copies the same way.
\n\n

 

\n

I might add a fourth option that many people in the comments seem to be going after: (4) We don't intrinsically value copies in other branches, we just have a subjective anticipation of becoming them. 

\n

 

\n
\n

1: The copying events are not discrete, rather they consist of a continuous deformation of probability amplitude in state space, but the shape of that deformation looks a lot like a continuous approximation to a discrete copying event, and the classical rules of physics approximately govern the time evolution of the \"copies\" as if they were completely independent. This last statement is the phenomenon of decoherence. The uncertainty in the copying rate is due to my ignorance, and I would welcome a physicist correcting me.

\n

2: There were many others who expressed roughly similar views, and I don't hold it as a \"black mark\" to pick the option that I am advising against, rather I encourage people to honestly put forward their opinions in a spirit of communal learning.

\n

 

" } }, { "_id": "iRfKKYhZAG8fWDjJr", "title": "Spaced Repetition Database for the Mysterious Answers to Mysterious Questions Sequence", "pageUrl": "https://www.lesswrong.com/posts/iRfKKYhZAG8fWDjJr/spaced-repetition-database-for-the-mysterious-answers-to", "postedAt": "2010-06-25T01:08:28.125Z", "baseScore": 59, "voteCount": 50, "commentCount": 58, "url": null, "contents": { "documentId": "iRfKKYhZAG8fWDjJr", "html": "

I'm a big fan of spaced repetition software.  There's a lot I could say about how awesome I think it is and how much it has helped me, but the SuperMemo website covers the benefits better than I could.  I will mention two things that surprised me.  First, I had no idea how much fun it would be; I actually really enjoy doing the reviews every day.  (For me this is hugely important, since it's unlikely I would have kept up with it otherwise.) Second, it's proven more useful than I had anticipated for maintaining coherence of beliefs across emotional states.  

\n

I've tried memorizing a variety of things, such as emacs commands, my favorite quotations, advice about how to communicate with children, and characters from books.  One of my more recent projects has been making notecards of the lesswrong sequences.  I tried to follow the rules for formulating knowledge from the SuperMemo website, but deciding which bits to encode and how is subjective.  For reference, I asked my boyfriend to make a few too so we could compare, and his looked pretty different from mine.

\n

So, with those caveats, I thought I might as well share what I'd come up with.  As Paul Buchheit says, \"'Good enough' is the enemy of 'At all'\".  If you download Anki, my favorite spaced repetition software (free and cross-platform) and go to Download > Shared Deck in the Menu, you should be able to search for and get my Less Wrong Sequences cards.  I also put them up here, with the ones my boyfriend made of the first post for comparison.

\n

I had read all the sequences before, but I have found that since I've started using the cards I've noticed the concepts coming up in my life more often, so I think my experiment has been useful.

\n

Let me know what you think!

" } }, { "_id": "iRMBxyrKubkwhxccq", "title": "Is cryonics necessary?: Writing yourself into the future", "pageUrl": "https://www.lesswrong.com/posts/iRMBxyrKubkwhxccq/is-cryonics-necessary-writing-yourself-into-the-future", "postedAt": "2010-06-23T14:33:29.651Z", "baseScore": 16, "voteCount": 18, "commentCount": 147, "url": null, "contents": { "documentId": "iRMBxyrKubkwhxccq", "html": "

Cryonics appears to be the best hope for continuing a person's existence beyond physical death until other technologies provide better solutions.  But despite its best-in-class status, cryonics has several serious downsides.

\n

First and foremost, cryonics is expensive—well beyond a price that even a third of humanity can afford.  Economies of scale may eventually bring the cost down, but in the mean time billions of people will die without the benefit of cryonics, and, even when the cost bottoms out, it will likely still be too expensive for people living at subsistence levels.  Secondly, many people consider cryonics immoral or at least socially unacceptable, so even those who accept the idea of cryonics and want to pursue taking personal advantage of it are usually socially pressured out of signing up for cryonics.  Combined, these two forces reduce the pool of people who will act to sign up for cryonics to be less than even a fraction of a percent of the human population.

\n

Given that cryonics is effectively not an option for almost everyone on the planet, if we're serious about preserving lives into the future then we have to consider other options, especially ones that are morally and socially acceptable to most of humanity.  Pushed by my own need to find an alternative to cryonics, I began trying to think of ways I could be restored after physical death.

\n

If I am unable to preserve the physical components that currently make me up, it seems that the next best thing I can do is to record in some way as much of the details of the functioning of those physical components as possible.  Since we don't yet have the brain emulation technology that would make cryonics irrelevant for the still living, I need a lower-tech way of making a record of myself.  And of all the ways I might try to record myself, none seems to better balance robustness, cost, and detail than writing.

\n

Writing myself into the future—now we're on to something.

\n

At first this plan didn't feel like such a winner, though:  How can I continue myself just through writing?  Even if I write down everything I can about myself—memories, medical history, everything—how can that really be all that's needed to restore me (or even most of me)?  But when we begin to break down what writing everything we can about ourselves really gives us, writing ourselves into the future begins to make more sense.

\n

For most of humanity, what makes you who you are is largely the same between all people.  Since percentages would make it seem that I have too precise an idea of how much, let's put it like this:  up to your eyebrows, all humans (except those with extreme abnormalities) are essentially the same.  Because we share the same evolutionary past as all of our conspecifics, the biology and psychology of our brains is statistically the same.  We each have our quirks of genetics and development, but even those are statistically similar among people who share our quirks.  Thus with just a few bits of data we can already record most of what makes you who you are.

\n

Most people find this idea unsettling when they first encounter it and have an urge to look away or disagree.  \"How can I, the very unique me, be almost completely the same as everyone else?\"  Since this is Less Wrong and not a more general forum, though, I'll assume you're still with me at this point.  If not, I recommend reading some of the introductory sequences on the site.

\n

So if we begin with a human template, add in a few modifiers for particular genetic and developmental quirks, we get to a sort of blank human that gets us most of the way to restoring you after physical death.  To complete the restoration, we need to inject the stuff that sets you uniquely apart even from your fellow humans who share your statistically regular quirks:  your memories.  If the record of your memories is good enough, this should effectively create a person who is so much like you as to be indistinguishable from the original, i.e. restore you.

\n

But, you may ask, is this restoration of you from writing really still you in the same way that the you restored from cryonics is you?  Maybe.  To me, it is.  Despite what subjective experience feels like, there doesn't seem to be anything in the brain that makes you who you are besides the general process of your brain and its memories.  Transferring yourself from your current brain to another brain or a brain emulation via writing doesn't seem that much different from transferring yourself via neuron replacement or some other technique except that writing introduces a lossy compression step, necessitated only by a lack of access to better technology.  Writing yourself into the future isn't the best solution, but it does seem to be an effective stopgap to death.

\n
\n

If you're still with me, we have a few nagging questions to answer.  Consider this an FAQ for writing yourself into the future.

\n

How good a record is good enough?  In truth, I don't think we even know enough to get the order of magnitude right.  The best I can offer is that you need to record as much as you are willing to.  The more you record, the more there will be to work with, and the less chance there will be of insufficient data.  It may turn out that you simply can't record enough to create a good restoration of a person from writing, but this is little different from the risk in cryonics of not being well preserved enough to restore despite best efforts.  If you're willing to take the risk that cryonics won't work as well as you hope, you should be willing to accept that writing yourself into the future might not work as well as you hope.

\n

How is writing yourself into the future more socially acceptable than cryonics?  Basically, because people already do this all the time, although not with an eye toward their eventually restoration.  People regularly keep journals, write blogs, write autobiographies, and pass on stories of their lives, even if only orally.  You can write a record of yourself, fully intending for it to be used to restore you at some future time, without ever having to do anything that is morally or socially unacceptable to other people (at least, for people in most societies) other than perhaps specify in your writing of yourself that you want it to be used to restore you after you die.

\n

How is writing yourself into the future more accessible to the poor?  If a person is literate and has access to some form of durable writing material, they can write themselves into the future, limited only by their access to durable writing material and reliable storage.  Of course, many people are not literate, but the cost of teaching literacy is far lower than the cost of cryonics, and literacy has other benefits beyond writing yourself into the future, so it's an easy sell to increase literacy even to people who are opposed to the idea of life extension.

\n

Will the restoration really be me?  Let me address this in another way.  You, like everything else, are a part of the universe.  Unlike what we believe to be true of most of the stuff in the universe, though, the stuff that makes up what we call you is aware of its existence.  As best we can tell, the way that you are aware of your existence is because you have a way of recalling previous events during your existence.  If we take away the store and recall of experience, we're left with some stuff that can do essentially everything it could when it had memory, but will not have any concept of existing outside the current moment.  Put the store and recall back in, though, and suddenly what we would recognize as self-awareness returns.

\n

Other questions?  Post them and I'll try to address them.  I have a feeling that there will be some strong disagreement from people who disagree with me about what self-awareness means and how the brain works, and I'll try to explain my position as best I can to them, but I'm also interested in any other questions that people might have since there are likely many issues I haven't even considered yet.

" } }, { "_id": "gztAhEueePQi3RNHs", "title": "A Rational Education", "pageUrl": "https://www.lesswrong.com/posts/gztAhEueePQi3RNHs/a-rational-education", "postedAt": "2010-06-23T05:48:20.854Z", "baseScore": 19, "voteCount": 19, "commentCount": 152, "url": null, "contents": { "documentId": "gztAhEueePQi3RNHs", "html": "

Within the next month I will be enrolling in an(other) undergraduate university course. This being the case, I must make a selection of both course and major. While I could make such decisions based on impulsive unconscious preference satisfaction and guesswork about which subjects happen to provide the most value, I could also take the opportunity to address the decision more rationally and objectively. There are some relevant questions to ask that I know LessWrong readers can help me answer.

\n
  1. Which subjects and courses can make the best contribution to Epistemic Rationality?
  2. Which subjects and courses provide the most Instrumental Rationality benefits?
  3. Given all available information about the universe and what inferences can be drawn about my preferences and abilities what course structure should I choose?
  4. Which course do you just happen to like?
\n

\n

1. Which subjects and courses can make the best contribution to Epistemic Rationality?

\n

I happen to care about Epistemic Rationality for its own sake. Both for me personally and in those whom I encounter. It is Fun! This means that I like both to add new information to my Map and to develop skills that enhance my general ability to build and improve upon that map.

\n

Not all knowledge is created equal. Whole posts could be dedicated to which things are the most important to know, but I don't want to learn gigabytes of statistics on sport performances. I prefer, and may be tempted to argue that it is fundamentally better, to learn concepts rather than facts, and in particular concepts that are the most related to fundamental reality. This includes physics and the most applicable types of mathematics (e.g. probability theory).

\n

For some types of knowledge that are worth learning university is not a desirable place to learn them. Philosophy is Fun. But the philosophy I would learn at university is too influenced by traditional knowledge and paying rent to impressive figures. The optimal  behavior when studying or researching philosophy is not to Dissolve the Question. It is to convey that the question is deep and contentious, affiliate with one 'side' and do battle within an obsolete and suboptimal way of Carving Reality. My frank opinion is that many philosophers need to spend more time programming, creating simulated realities, or at least doing mathematics before they can hope to make a useful contribution to thought. (I'm voicing a potentially controversial position here that I know some would agree with but for which I am also inviting debate.)

\n

There are some subjects that are better served for improving thinking itself as well as merely learning existing thoughts. I'll list some that spring to mind but I suspect some of them may be red herrings and there are others you may be able to suggest that I just haven't considered.

\n\n

2. Which subjects and courses provide the most Instrumental Rationality benefits?

\n

Fun is great, so is having accurate maps. But there are practical considerations too. You can't have fun if you starve and fun may not last too long if you are unable to contribute directly or financially to the efforts that ensure the future of humanity. Again there are two considerations:

\n\n

3. Given all available information about the universe and what inferences can be drawn about my preferences and abilities what course structure should I choose?

\n

This is an invitation to Other-Optimize me. Please give me advice. Remember that giving advice is a signal of high status and as such is often an enjoyable experience to engage in. This is also a rare opportunity - you may be patronizing and I will not even respond in kind or with a curt dismissal. You can even be smug and condescending if that is what it takes for me to extract your insights!

\n

Now, I should note that my decision to do another undergraduate degree is in no way based on a belief that it is just what I need to do to gain success. I already have more than enough education behind me (I have previously studied IT, AI and teaching).

\n\n

(Call bullshit on that if you think I am rationalizing or believe there are better alternatives to give me what you infer from here or elsewhere that I want.)

\n

Now, assuming that I am going to be studying an undergraduate course, which course maximizes the expected benefit?

\n

Something I am considering is a double-major Bachelor of Science (pharmacology, mathematical statistics). Recent conversations that I have participated in here give an indication as to my existing interest in pharmacology. I have some plans in mind that would contribute to furthering human knowledge on non-patented pharmaceutical substances, in particular life-extension drugs and nootropics. This is an area that I believe is drastically overlooked, to the extent of being species-wide negligence. Consider this to be a significant goal that I want my studying to contribute to.

\n

The most effective contribution I can make there will likely involve leveraging financial resources that I earn elsewhere, but I mostly have financial considerations covered. I also want to ensure I know what is going on and know what needs to be done at a detailed level. That means learning pharmacology. But it also means learning statistics of some sort. What statistics should I learn? Should I focus on improving my understanding of Bayesian statistics, or should I immerse myself in some of the more ad-hoc frequentist tools that can be used to look impressive?

\n

4. Which course do you just happen to like?

\n

What other subjects are relevant to the sort of concepts we like discussing here? Perhaps something from sociology or psych? I have breadth subjects I need to fill, which gives me the chance to look at some topics in somewhat more depth than just a post (but sometimes possibly less depth than a whole post sequence!) I'm also rather curious which subjects like-minded people just wish they had a chance to study. If you were trapped in the SGC in a Groundhog Day time loop, which topics would you want to learn?

" } }, { "_id": "gYvuPhQZyzFqnxXMx", "title": "Poll: What value extra copies?", "pageUrl": "https://www.lesswrong.com/posts/gYvuPhQZyzFqnxXMx/poll-what-value-extra-copies", "postedAt": "2010-06-22T12:15:54.408Z", "baseScore": 6, "voteCount": 10, "commentCount": 177, "url": null, "contents": { "documentId": "gYvuPhQZyzFqnxXMx", "html": "

In the future, it may be possible for you to scan your own brain and create copies of yourself. With the power of a controllable superintelligent AI, it may even be possible to create very accurate instances of your past self (and you could take action today or in the near future to make this easier by using lifelogging tools such as these glasses).

\n

So I ask Less Wrong: how valuable do you think creating extra identical, non-interacting copies of yourself is? (each copy existing in its own computational world, which is identical to yours with no copy-copy or world-world interaction)

\n

For example, would you endure a day's hard labor to create an extra self-copy? A month? A year? Consider the hard labor to be digging a trench with a pickaxe, with a harsh taskmaster who can punish you if you slack off.

\n

Do you think having 10 copies of yourself made in the future is 10 times as good as having 1 copy made? Or does your utility in copies drop off sub-linearly?

\n

Last time I spoke to Robin Hanson, he was extremely keen on having a lot of copies of himself created (though I think he was prepared for these copies to be emulant-wage-slaves).

\n

I have created a poll for LW to air its views on this question, then in my next post I'll outline and defend my answer, and lay out some fairly striking implications that this has for existential risk mitigation.

\n

For those on a hardcore-altruism trip, you may substitute any person or entity that you find more valuable than your own good self: would you sacrifice a day of this entity's life for an extra copy? A year? etc.

\n

UPDATE: Wei Dai has asked this question before, in his post \"The moral status of independent identical copies\" - though his post focuses more on lock-step copies that are identical over time, whereas here I am interested in both lock-step identical copies and statistically identical copies (a statistically identical copy has the same probability distribution of futures as you do).

" } }, { "_id": "ptcKg7TZgj2WvLKAQ", "title": "Rationality & Criminal Law: Some Questions", "pageUrl": "https://www.lesswrong.com/posts/ptcKg7TZgj2WvLKAQ/rationality-and-criminal-law-some-questions", "postedAt": "2010-06-20T07:42:43.674Z", "baseScore": 21, "voteCount": 19, "commentCount": 163, "url": null, "contents": { "documentId": "ptcKg7TZgj2WvLKAQ", "html": "

The following will explore a couple of areas in which I feel that the criminal justice system of many Western countries might be deficient, from the standpoint of rationality. I am very much interested to know your thoughts on these and other questions of the law, as far as they relate to rational considerations.

\n

Moral Luck

\n

Moral luck refers to the phenomenon in which behaviour by an agent is adjudged differently based on factors outside the agent's control.

\n

Suppose that Alice and Yelena, on opposite ends of town, drive home drunk from the bar, and both dazedly speed through a red light, unaware of their surroundings. Yelena gets through nonetheless, but Alice hits a young pedestrian, killing him instantly. Alice is liable to be tried for manslaughter or some similar charge; Yelena, if she is caught, will only receive the drunk driving charge and lose her license.

\n

Raymond, a day after finding out that his ex is now in a relationship with Pardip, accosts Pardip at his home and attempts to stab him in the chest; Pardip smashes a piece of crockery over Raymond's head, knocking him unconscious. Raymond is convicted of attempted murder, receiving typically 3-5 years chez nous (in Canada). If he had succeeded, he would have received a life sentence, with parole in 10-25 years.

\n

Why should Alice be punished by the law and demonized by the public so much more than Yelena, when their actions were identical, differing only by the sheerest accident? Why should Raymond receive a lighter sentence for being an unsuccessful murderer?

\n

Some prima facie plausible justifications:

\n\n
But in Yelena's case, the law is already blind to such things anyway. You don't get a lesser drunk driving charge if you can prove you're pretty good at driving drunk. In the case of Raymond, attempted murder already implies that the intent to kill must be proven, else the charge would have been dropped to assault or some such.
\n\n
This is understandable, but surely if we accept this argument, we could nonetheless satisfy the concerns above by punishing the morally lucky more severely, not punishing the morally unlucky less severely.
\n\n
This might be true as far as it goes; however, enforcing strong sentences on the morally lucky would certainly provide a stronger deterrent, which would provide a countervailing tendency to the above.
\n

Trial by Jury; Trial by Judge

\n

Those of us who like classic films may remember 12 Angry Men (1957) with Henry Fonda. This was a remarkably good film about a jury deliberating on the murder trial of a poor young man from a bad neighbourhood, accused of killing his father. It portrays the indifference (one juror wants to be out in time for the baseball game), prejudice and conformity of many of the jurors, and how this is overcome by one man of integrity who decides to insist on a thorough look through the evidence and testimony.

\n

I do not wish to generalize from fictional examples; however, such factors are manifestly at play in real trials, in which Henry Fonda cannot necessarily be relied upon to save the day.

\n

Komponisto has written on the Knox case, in which an Italian jury came to a very questionable (to put it mildly) conclusion based on the evidence presented to them; other examples will doubtless spring to mind (a famous one in this neck of the woods is the Stephen Truscott case - the evidence against Truscott being entirely circumstantial).

\n

More information on trial by jury and its limitations may be found  here. Recently the UK has made some moves to trial by judge for certain cases, specifically fraud cases in which jury tampering is a problem.

\n

The justifications cited for trial by jury typically include the egalitarian nature of the practice, in which it can be guaranteed that those making final legal decisions do not form a special class over and above the ordinary citizens whose lives they affect.

\n

A heartening example of this was mentioned in Thomas Levenson's fascinating book  Newton and the Counterfeiter. Being sent to Newgate gaol was, infamously in the 17th and 18th centuries, an effective death sentence in and of itself; moreover, a surprisingly large number of crimes at this time were capital crimes (the counterfeiter whom Newton eventually convicted was hanged). In this climate of harsh punishment, juries typically only returned guilty verdicts either when evidence was extremely convincing or when the crime was especially heinous. Effectively, they counteracted the harshness of the legal system by upping the burden of proof for relatively minor crimes.

\n

So juries sometimes provide a safeguard against abuse of justice by elites. However, is this price for democratizing justice too high, given the ease with which citizens naive about the Dark Arts may be manipulated? (Of course, judges are by no means perfect Bayesians either; however, I would expect them to be significantly less gullible.)

\n

Are there any other systems that might be tried, besides these canonical two? What about the question of representation? Does the adversarial system, in which two sides are represented by advocates charged with defending their interests, conduce well to truth and justice, or is there a better alternative? For any alternatives you might consider: are they naive or savvy about human nature? What is the normative role of punishment, exactly?

\n

How would the justice system look if LessWrong had to rewrite it from scratch?

" } }, { "_id": "EJuZcWnk8j7eNQPqq", "title": "Applying Behavioral Psychology on Myself", "pageUrl": "https://www.lesswrong.com/posts/EJuZcWnk8j7eNQPqq/applying-behavioral-psychology-on-myself", "postedAt": "2010-06-20T06:25:13.679Z", "baseScore": 72, "voteCount": 63, "commentCount": 39, "url": null, "contents": { "documentId": "EJuZcWnk8j7eNQPqq", "html": "

In which I attempt to apply findings from behavioral psychology to my own life.

\n

Behavioral Psychology Finding #1: Habituation

\n

The psychological process of \"extinction\" or \"habituation\" occurs when a stimulus is administered repeatedly to an animal, causing the animal's response to gradually diminish.  You can imagine that if you were to eat your favorite food for breakfast every morning, it wouldn't be your favorite food after a while.  Habituation tends to happen the fastest when the following three conditions are met:

  1. The stimulus is administered frequently.
  2. The stimulus is administered at regular intervals.
  3. The stimulus is relatively weak.
\n

Source is here.

\n

Applied Habituation

\n

I had a project I was working on that was really important to me, but whenever I started working on it I would get demoralized.  So I habituated myself to the project: I alternated 2 minutes of work with 2 minutes of sitting in the yard for about 20 minutes.  This worked.
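
A toy sketch of that alternation as a timer script in Python, using only the numbers given above (2-minute intervals, roughly 20 minutes in total); the original exercise of course needed nothing more than a clock:

    import time

    WORK_SECONDS = 2 * 60   # 2 minutes of work
    REST_SECONDS = 2 * 60   # 2 minutes of sitting in the yard
    ROUNDS = 5              # roughly 20 minutes of alternation in total

    for _ in range(ROUNDS):
        print("Work on the project now.")
        time.sleep(WORK_SECONDS)
        print("Go sit in the yard.")
        time.sleep(REST_SECONDS)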

\n


Interestingly enough, about halfway through this exercise I realized that what was really making it difficult for me to work on my project was the fact that it involved so many choices.  So as my 20 minutes progressed, I started spending my 2 minutes trying to make the most difficult decisions possible.  This habituation to decision demoralization seems to have had an immediate, fairly lasting impact on a wide variety of activities.

I'm really looking forward to hearing from someone who attempts to apply habituation to an ugh field.

\n

Applied Habituation in Reverse

\n

If you want to enjoy your favorite song until the day you die, dance to it infrequently at irregular intervals while it plays full blast.  (Reversed conditions for habituation.)

\n

Behavioral Psychology Finding #2: Intermittent Reinforcement

\n

The reason why slot machines are so engaging is that they deliver rewards at random.  If slot machines paid small rewards out on every round, playing them would be like work.

\n

Applied Intermittent Reinforcement

\n

For a while, there was a time-consuming chore that I was required to do every evening.  I would often put it off until 2-3 AM and work while sleepy as a result.

To solve this problem, I started eating a gummy worm with 50% probability each time I did the chore at a pre-determined time early in the evening.  (I gave myself the first two gummy worms with 100% probability to start things off.)  My success rate with this method was very high.
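
A minimal sketch of that reward draw in Python, using only the details given above (a 50% payout chance, with the first two rewards guaranteed); the function name and interface are my own invention:

    import random

    def earned_gummy_worm(completions_so_far, payout_probability=0.5):
        # The first two completions always pay out, to get the habit started;
        # after that, each completion pays out independently with the given probability.
        if completions_so_far < 2:
            return True
        return random.random() < payout_probability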

\n

Further Research

\n

Another self-help technique I've had tremendous success with is using Linux's cron utility to cause Firefox tabs to open periodically and tell me to switch activities if I'm wasting time.  However, I've found that forcing myself to switch activities is highly stressful.
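
As a rough sketch of how such a setup might look (this is not the author's actual configuration): a crontab entry along the lines of "*/30 9-22 * * * DISPLAY=:0 python3 /home/user/bin/nudge.py" could run a small script every half hour during waking hours, with the path, schedule, and reminder page all invented for illustration:

    # Hypothetical nudge script, run periodically by cron: it opens a reminder
    # page in a new tab of the default browser (Firefox, in the author's case).
    import webbrowser

    REMINDER_URL = "file:///home/user/reminders/switch-activities.html"  # made-up path

    if __name__ == "__main__":
        webbrowser.open_new_tab(REMINDER_URL)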

Perhaps it's possible to habituate the negative response to activity switching by having practice sessions where you periodically switch between distraction and work?  Or maybe you could use intermittent reinforcement and randomly decide to give yourself something nice if you're successful in an upgrade to a higher-quality activity.

\n

(I'm not experimenting with these at the moment because I'm currently fairly happy with my work/relaxation balance.)

\n

Thanks to Psychohistorian for reminding me I wanted to write about this.  I'm hoping he won't get mad at me for writing on the same topic he did so soon after his post.

" } }, { "_id": "wv6a9kA6EApiYD5sL", "title": "What if AI doesn't quite go FOOM?", "pageUrl": "https://www.lesswrong.com/posts/wv6a9kA6EApiYD5sL/what-if-ai-doesn-t-quite-go-foom", "postedAt": "2010-06-20T00:03:09.699Z", "baseScore": 16, "voteCount": 22, "commentCount": 191, "url": null, "contents": { "documentId": "wv6a9kA6EApiYD5sL", "html": "

Intro

\n

This article seeks to explore possible futures in a world where artificial intelligence turns out NOT to be able to quickly, recursively self-improve so as to influence our world with arbitrarily large strength and subtlety, i.e., \"go FOOM.\"  Note that I am not arguing that AI won't FOOM. Eliezer has made several good arguments for why AI probably will FOOM, and I don't necessarily disagree.  I am simply calling attention to the non-zero probability that it won't FOOM, and then asking what we might do to prepare for a world in which it doesn't.

\n

Failure Modes

\n

I can imagine three different ways in which AI could fail to FOOM in the next 100 years or so.  Option 1 is a \"human fail.\"  Option 1 means we destroy ourselves or succumb to some other existential risk before the first FOOM-capable AI boots up.  I would love to hear in the comments section about (a) which existential risks people think are most likely to seriously threaten us before the advent of AI, and (b) what, if anything, a handful of people with moderate resources (i.e., people who hang around on Less Wrong) might do to effectively combat some of those risks.

Option 2 is a \"hardware fail.\" Option 2 means that Moore's Law turns out to have an upper bound; if physics doesn't show enough complexity beneath the level of quarks, or if quantum-sized particles are so irredeemably random as to be intractable for computational purposes, then it might not be possible for even the most advanced intelligence to significantly improve on the basic hardware design of the supercomputers of, say, the year 2020.  This would limit the computing power available per dollar, and so the level of computing power required for a self-improving AI might not be affordable for generations, if ever. Nick Bostrom has some interesting thoughts along these lines, ultimately guessing (as of 2008) that the odds of a super-intelligence forming by 2033 were less than 50%.

Option 3 is a \"software fail.\"  Option 3 means that *programming* efficiency turns out to have an upper bound; if there are natural information-theoretical limits on how efficiently a set number of operations can be used to perform an arbitrary task, then it might not be possible for even the most advanced intelligence to significantly improve on its basic software design; the supercomputer would be more than 'smart' enough to understand itself and to re-write itself, but there would simply not *be* an alternate script for the source code that was actually more effective.

These three options are not necessarily exhaustive; they are just the possibilities that have immediately occurred to me, with some help from User: JoshuaZ.

\n

\"Superintelligent Enough\" AI

\n

An important point to keep in mind is that even if self-improving AI faces hard limits before becoming arbitrarily powerful, AI might still be more than powerful enough to effortlessly dominate future society.  I am sure my numbers are off by many orders of magnitude, but by way of illustration only, suppose that current supercomputers run at a speed of roughly 10^20 ops/second, and that successfully completing Eliezer's coherent extrapolated volition project would require a processing speed of roughly 10^36 ops/second.  There is obviously quite a lot of space here for a miniature FOOM.  If one of today's supercomputers starts to go FOOM and then hits hard limits at 10^25 ops/second, it wouldn't be able to identify humankind's CEV, but it might be able to, e.g, take over every electronic device capable of receiving transmissions, such as cars, satellites, and first-world factories.  If this happens around the year 2020, a mini-FOOMed AI might also be able to take over homes, medical prosthetics, robotic soldiers, and credit cards.

Sufficient investments in security and encryption might keep such an AI out of some corners of our economy, but right now, major operating systems aren't even proof against casual human trolls, let alone a dedicated AI thinking at faster-than-human speeds.  I do not understand encryption well, and so it is possible that some plausible level of investment in computer security could, contrary to my assumptions, actually manage to protect human control over individual computers for the foreseeable future.  Even if key industrial resources were adequately secured, though, a moderately super-intelligent AI might be capable of modeling the politics of current human leaders well enough to manipulate them into steering Earth onto a path of its choosing, as in Issac Asimov's The Evitable Conflict.

If enough superintelligences develop at close enough to the same moment in time and have different enough values, they might in theory reach some sort of equilibrium that does not involve any one of them taking over the world.  As Eliezer has argued (scroll down to 2nd half of the linked page), though, the stability of a race between intelligent agents should mostly be expected to *decrease* as those agents swallow their own intellectual and physical supply chains.  If a supercomputer can take over larger and larger chunks of the Internet as it gets smarter and smarter, or if a supercomputer can effectively control what happens in more and more factories as it gets smarter and smarter, then there's less and less reason to think that supercomputing empires will \"grow\" at roughly the same pace -- the first empire to grow to a given size is likely to grow faster than its rivals until it takes over the world.  Note that this could happen even if the AI is nowhere near smart enough to start mucking about with uploaded \"ems\" or nanoreplicators.  Even in a boringly normal near-future scenario, a computer with even modest self-improvement and self-aggrandizement capabilities might be able to take over the world.  Imagine something like the ending to David Brin's Earth, stripped of the mystical symbolism and the egalitarian optimism.

\n

Ensuring a \"Nice Place to Live\"

\n

I don't know what Eliezer's timeline is for attempting to develop provably Friendly AI, but it might be worthwhile to attempt to develop a second-order stopgap.  Eliezer's CEV is supposed to function as a first-order stopgap; it won't achieve all of our goals, but it will ensure that we all get to grow up in a Nice Place to Live while we figure out what those goals are.  Of course, that only happens if someone develops a CEV-capable AI.  Eliezer seems quite worried about the possibility that someone will develop a FOOMing unFriendly AI before Friendly AI can get off the ground, but is anything being done about this besides just rushing to finish Friendly AI?

\n

Perhaps we need some kind of mini-FOOMing marginally Friendly AI whose only goal is to ensure that nothing seizes control of the world's computing resources until SIAI can figure out how to get CEV to work.  Although no \"utility function\" can be specified for a general AI without risking paper-clip tiling, it might be possible to formulate a \"homeostatic function\" at relatively low risk.  An AI that \"valued\" keeping the world looking roughly the way it does now, that was specifically instructed *never* to seize control of more than X number of each of several thousand different kinds of resources, and whose principal intended activity was to search for, hunt down, and destroy AIs that seemed to be growing too powerful too quickly might be an acceptable risk.  Even if such a \"shield AI\" were not provably friendly, it might pose a smaller risk of tiling the solar system than the status quo, since the status quo is full of irresponsible people who like to tinker with seed AIs.

\n

An interesting side question is whether this would be counterproductive in a world where Failure Mode 2 (hard limits on hardware) or Failure Mode 3 (hard limits on software) were serious concerns.  Assuming that, eventually, a provably friendly AI can be developed, then, several years after that, it's likely that millions of people can be convinced that it would be really good to activate the provably friendly AI, and humans might be able to dedicate enough resources to specifically overcome the second-order stopgap \"shield AI\" that was knocking out other people's un-provably Friendly AIs.  But if the shield AI worked too well and got too close to the hard upper bound on the power of an AI, then it might not be possible to unmake the shield, even with added resources and with no holds barred.

" } }, { "_id": "Rvm7tmfEQ2RstJBPG", "title": "Defeating Ugh Fields In Practice", "pageUrl": "https://www.lesswrong.com/posts/Rvm7tmfEQ2RstJBPG/defeating-ugh-fields-in-practice", "postedAt": "2010-06-19T19:37:44.349Z", "baseScore": 97, "voteCount": 78, "commentCount": 95, "url": null, "contents": { "documentId": "Rvm7tmfEQ2RstJBPG", "html": "

Unsurprisingly related to: Ugh fields.

\n

If I had to choose a single piece of evidence off of which to argue that the rationality assumption of neoclassical economics is totally, irretrievably incorrect, it's this article about financial incentives and medication compliance. In short, offering people small cash incentives vastly improves their adherence to life-saving medical regimens. That's right. For a significant number of people, a small chance at winning $10-100 can be the difference between whether or not they stick to a regimen that has a very good chance of saving their life. This technique has even shown promise in getting drug addicts and psychiatric patients to adhere to their regimens, for as little as a $20 gift certificate. This problem, in the aggregate, is estimated to cost about 5% of total health care spending ($100 billion), and that may not properly account for the utility lost by those who are harmed beyond repair. To claim that people are making a reasoned decision between the payoffs of taking and not-taking their medication, and that they can be persuaded to change their behaviour by a payoff of about $900 a year (or less), is to crush reality into a theory that cannot hold it. This is doubly true when you consider that some of these people were fairly affluent.

\n

A likely explanation of this detrimental irrationality is something close to an Ugh field. It must be miserable having a life-threatening illness. Being reminded of it by taking a pill every single day (or more frequently) is not pleasant. Then there's the question of whether you already took the pill. Because if you take it twice in one day, you'll end up in the hospital. And Heaven forfend your treatment involves needles. Thus, people avoid taking their medicine because the process becomes so unpleasant, even though they know they really should be taking it.

\n

As this experiment shows, this serious problem has a simple and elegant solution: make taking their medicine fun. As one person in the article describes it, using a low-reward lottery made taking his meds \"like a game;\" he couldn't wait to check the dispenser to see if he'd won (and take his meds again). Instead of thinking about how they have some terrible condition, they get excited thinking about how they could be winning money. The Ugh field has been demolished, with the once-feared procedure now associated with a tried-and-true intermittent reward system. It also wouldn't surprise me in the least if people who are unlikely to adhere to a medical regimen are the kind of people who really enjoy playing the lottery.

\n

This also explains why rewarding success may be more useful than punishing failure in the long run: if a kid does his homework because otherwise he doesn't get dessert, it's labor. If he gets some reward for getting it done, it becomes a positive. The problem is that if he knows what the reward is, he may anchor on already having the reward, turning it back into negative reinforcement - if you promise your kid a trip to Disneyland if they get above a 3.5, and they get a 3.3, they feel like they actually lost something. The use of a gambling mechanism may be key for this. If your reward is a chance at a real reward, you don't anchor as already having the reward, but the reward still excites you.

\n

I believe that the fact that such a significant problem can be overcome with such a trivial solution has tremendous implications, the enumeration of all of which would make for a very unwieldy post. A particularly noteworthy issue is the difficulty of applying such a technique to one's own actions, a problem which I believe has a fairly large number of workable solutions. That's what comments, and, potentially, follow-up posts are for. 

" } }, { "_id": "8FPJeyvgNL3CTa4Ng", "title": "Surface syllogisms and the sin-based model of causation", "pageUrl": "https://www.lesswrong.com/posts/8FPJeyvgNL3CTa4Ng/surface-syllogisms-and-the-sin-based-model-of-causation", "postedAt": "2010-06-19T16:40:39.852Z", "baseScore": 22, "voteCount": 25, "commentCount": 50, "url": null, "contents": { "documentId": "8FPJeyvgNL3CTa4Ng", "html": "

The White House says there will be a temporary ban on new deep-water drilling, and BP will have to pay the salaries of oilmen who have no work during that ban.  I scratched my head trying to figure out the logic behind this.  This was my first attempt:

\n
  1. BP caused an oil spill.
  2. The oil spill caused a ban on drilling.
  3. The ban on drilling caused oilmen to be out of work.
  4. Therefore, BP caused oilmen to be out of work.
  5. Therefore, BP should pay these oilmen.
\n

This logic works equally well in this case:

\n
  1. Rachel Carson wrote Silent Spring.
  2. Silent Spring caused a ban on DDT use.
  3. The ban on DDT use caused factory workers to be out of work.
  4. Therefore, Rachel Carson caused factory workers to be out of work.
  5. Therefore, Rachel Carson should pay these workers.
\n

But \"everyone\" would agree that the second example is fallacious.  Are people so angry at BP that they can't think at all?

\n

Then I came up with this second argument.  (\"At fault\" is legalese for \"caused by an immoral or illegal action.\")

\n
  1. An oil spill caused a ban on drilling.
  2. The ban on drilling caused oilmen to be out of work.
  3. Therefore, the oil spill caused oilmen to be out of work.
  4. The party at fault should pay the injured party.
  5. BP is at fault for the oil spill.
  6. Therefore, BP should pay these oilmen.
\n

Applied to Rachel Carson:

\n
  1. Silent Spring caused a ban on DDT.
  2. The ban on DDT use caused factory workers to be out of work.
  3. The party at fault should pay the injured party.
  4. The producers of DDT are at fault for environmental damage.
  5. Therefore, those producers should pay these factory workers.
\n

Both these chains of reasoning are still faulty, but they're more similar to the reactions of most people.  They are faulty because they're not specific about the connection between the fault and the injured party, or about what an \"injury\" is.  In the second case, there is no injury to the workers; the company simply stopped employing them, and could only be held morally responsible for this under something like feudalism.  In the BP case, you could argue that non-BP oilmen were injured, because they want to work and their (non-BP) employers want to hire them, but outside forces prevented them.

\n

However, being at fault for the oil spill is not the same as being at fault for (causing by immoral action) the ban on drilling.  The word \"cause\" is too vague for moral responsibility to be transitive over it; and \"X caused Z\" does not preclude \"Y caused Z\".  The ban on drilling is not a ban only on drilling by BP; this means that the powers that be decided the ban on drilling is good for the country, not a punishment of BP.  It is a decision that the expected cost of further drilling outweighs the expected benefits.  There is no moral failing and no one at fault, and either the government should pay them, or the oilmen should bite it the way any workers do when their industry has a downturn and rely on existing safety nets such as unemployment insurance.  (This is completely different from the case of fishermen put out of work directly by the oil spill; I believe it makes sense for BP to pay them.)

\n

Figuring out how moral responsibility propagates through a chain of events is complicated.  I propose that people are using the \"sin-based\" model of cause and effect.  This model says that all bad outcomes are caused by moral failings.  (On the radio yesterday, I heard a woman being interviewed whose house had been destroyed by a landslide.  The first question the interviewer asked was, \"Whose fault was this?\")

\n

In the sin-based model, when you enumerate a chain of events that is causally linked, and some events are bad outcomes, all you need to do is transfer blame for the bad outcomes to the moral failings preceding them in the chain.  Oilmen are out of work; you construct a chain of events leading to them being out of work, identify the closest preceding moral failure in the chain, and pin the blame on that moral failing.  No need for painful thinking!

" } }, { "_id": "L6yBGL4Hog5coegd8", "title": "Open Thread June 2010, Part 4", "pageUrl": "https://www.lesswrong.com/posts/L6yBGL4Hog5coegd8/open-thread-june-2010-part-4", "postedAt": "2010-06-19T04:34:36.537Z", "baseScore": 10, "voteCount": 8, "commentCount": 338, "url": null, "contents": { "documentId": "L6yBGL4Hog5coegd8", "html": "

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

\n

This thread brought to you by quantum immortality.

" } }, { "_id": "3rv9vCXF6a9smNiRd", "title": "Cambridge Less Wrong meetup Sunday 27 June", "pageUrl": "https://www.lesswrong.com/posts/3rv9vCXF6a9smNiRd/cambridge-less-wrong-meetup-sunday-27-june", "postedAt": "2010-06-19T03:42:34.927Z", "baseScore": 7, "voteCount": 7, "commentCount": 5, "url": null, "contents": { "documentId": "3rv9vCXF6a9smNiRd", "html": "

I will be at Clear Conscience Cafe at 581 Massachusetts Avenue Cambridge, MA looking for other Less Wrong people on Sunday the 27th. This breaks the previously established pattern of meeting on the third Sunday of each month, to avoid conflicting with Father's Day. As usual, I will bring my boxes and challenge anyone who comes to a puzzle. No one has answered correctly yet; come be the first!

\n


Edited to add: 4pm

" } }, { "_id": "oobgrrH29iAsisacQ", "title": "Are meaningful careers a cover story?", "pageUrl": "https://www.lesswrong.com/posts/oobgrrH29iAsisacQ/are-meaningful-careers-a-cover-story", "postedAt": "2010-06-15T14:06:23.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "oobgrrH29iAsisacQ", "html": "

It is more respectable to have a non-altruistic hobby than to have a non-altruistic job. If I say to a public servant that their career doesn’t make the world a better place, I’m being offensive. If I say to a chess player that their hobby doesn’t make the world a better place, they will likely agree. It’s no accusation.

\n

People also like to have jobs that make the world a better place, whereas they don’t usually care about whether their hobbies do. A few people volunteer or create things for the enjoyment of others of course, but it’s not much of a negative if a hobby doesn’t achieve altruistic goals. People rarely worry ‘I must get out of this hockey; it’s so draining to think all my efforts aren’t helping anyone, and when I die there will be nothing to show for it’.

\n

At first this may seem unsurprising. Jobs are where you exchange socially beneficial work for money, while hobbies are where you enjoy yourself. But why must they be grouped that way? By definition jobs are where you get money, and pleasing others will tend to get you money, but if you are already making money without seemingly making the world a better place, why try so hard to add your dose of altruism there? It should often be easier to change to a more altruistic hobby than a more altruistic job, since you don’t need to find someone to pay you for it. And if you take up an altruistic hobby, you don’t displace anyone.

\n

Here’s a hypothesis for why altruism matters more in jobs than hobbies. Humans are sensitive about receiving money, and about others receiving it. We intuitively feel that material wealth should be divided fairly amongst the tribe and are suspicious of people who accumulate it without sharing. We allow inequality if it’s deserved in return for other socially beneficial actions, but we monitor this closely. We can’t believe the rich really deserve it, nor the welfare dependent, and this suspicion of cheating colors all our dealings with them. When we are given money then, we must feel and look like we deserved it or we risk deep shame. So the very fact that employment gives you money means you need a clear, tellable justification about altruism.

\n

On the other hand we don’t crave justice in enjoyment of leisure time. The fact that someone is having a really interesting conversation with a friend without apparently deserving it doesn’t make the rest of us feel cheated and suspicious. Why don’t our fairness instincts kick in with leisure?  I expect it’s because leisure has always been too hard to count and measure and not directly useful enough to survival to redistribute, unlike concrete goods. So you don’t need any altruistic justification for having fun. Notice that if people wanted altruistic jobs mostly due to direct desire for altruism we would see a different pattern; they would be similarly interested in altruism in other pursuits.

\n

From a conversation with Robin Hanson


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "DaBzJwajrtn8PAfXj", "title": "What Does Make a Difference? It’s Really Simple", "pageUrl": "https://www.lesswrong.com/posts/DaBzJwajrtn8PAfXj/what-does-make-a-difference-it-s-really-simple", "postedAt": "2010-06-15T02:33:06.249Z", "baseScore": -2, "voteCount": 15, "commentCount": 14, "url": null, "contents": { "documentId": "DaBzJwajrtn8PAfXj", "html": "
\n

This is really simple:
Suppose you want to check if some action of yours makes a difference.
How to do it?
The wrong thing to do:
Think of the consequences of your action and evaluate them to see if they fit your purposes. If they do, go on and do it.

\n

The reason this is wrong: if someone else does something with the same consequences, and whether or not you act makes no difference to the fact that THAT person will do it, then you are not necessary for those consequences; they would happen anyway.
This is also true if something, not a person, would do an action with the same consequences.

\n

The right thing to do: Consider what would happen if you DIDN’T do your action. Subtract that from what would happen if you DID do your action.
This is the difference it would make if you did it.
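
A minimal sketch of that rule in Python, assuming you can score each hypothetical world with a single number (the function and the numbers in the example are purely illustrative):

    def difference_made(outcome_if_you_act, outcome_if_you_do_not):
        # The difference your action makes: what happens with it, minus what happens without it.
        return outcome_if_you_act - outcome_if_you_do_not

    # Voting example: the same candidate wins whether or not you vote,
    # so the outcome is the same in both worlds and the difference is zero.
    print(difference_made(outcome_if_you_act=1.0, outcome_if_you_do_not=1.0))  # 0.0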

\n

There is a reason it is called a ‘difference’, it is the difference between you doing it and you not doing it.

\n

Example: Suppose you think you will make a difference by carefully considering your vote, and voting.

\n

Wrong: Well, I’m partially causally responsible for the election of X so my action would make a difference.

\n

Right: If I do vote or if I don’t vote, the same candidates will be elected. Therefore my vote makes no difference.
(In more than 16000 elections in the USA it was NEVER the case that one vote would have made a difference)

\n

The AWFUL argument people usually say: But what if everyone did it?

\n

The reason it does not work: Everyone will NOT do it. Yes. That simple.

\n

The reason it is awful: Compare “I don't think I should go to the movies today, what if everyone did it?”

\n

So, when you are willing to make a difference (not to feel good, not to do what everyone does, not to clear your conscience), when you want to REALLY, REALLY make a difference, you should consider the difference between doing it and not doing it.

\n

It is that simple.

\n

(Note to the Less Wrong entry: I know most people here know that politics is a mind killer, but delving deeper into the simple argument, notice that it works for any case of overdetermination, such as posting a comment or entry that someone else would. If you realize now how important it is to figure out what is important, don't forget that more than one person has already done it; use them.)

\n
" } }, { "_id": "aKkgRiGhLvKxmEPvi", "title": "Book Club Update and Chapter 1", "pageUrl": "https://www.lesswrong.com/posts/aKkgRiGhLvKxmEPvi/book-club-update-and-chapter-1", "postedAt": "2010-06-15T00:30:52.358Z", "baseScore": 20, "voteCount": 16, "commentCount": 83, "url": null, "contents": { "documentId": "aKkgRiGhLvKxmEPvi", "html": "

This post summarizes response to the Less Wrong Book Club and Study Group proposal, floats a tentative virtual meetup schedule, and offers some mechanisms for keeping up to date with the group's work. We end with summaries of Chapter 1.

\n

Statistics

\n

The proposal for a LW book club and study group, initially focusing on E.T. Jaynes' Probability Theory: The Logic of Science (a.k.a. PT:TLOS), drew an impressive response with 57 declarations of intent to participate. (I may have missed some, or misinterpreted some who were merely interested as intending to participate. This spreadsheet contains participant data and can be edited by anyone (under revision control). Please feel free to add, remove or change your information.) The group has people from no fewer than 11 different countries, in time zones ranging from GMT-7 to GMT+10.

\n

Live discussion schedule and venues

\n

Many participants have expressed an interest in having informal or chatty discussions over a less permanent medium than LW itself, which should probably be reserved for more careful observations. The schedule below is offered as a basis for further negotiation. You can edit the spreadsheet linked above with your preferred times, and by the next iteration if a different clustering emerges I will report on that.

\n\n

The unofficial Less Wrong IRC channel is the preferred venue. An experimental Google Wave has also been started which may be a useful adjunct, in particular as we come to need mathematical notations in our discussions.

\n

I recommend reading the suggested material before attending live discussion sessions.

\n

\n

Objectives, math prerequisites

\n

The intent of the group is to engage in \"earnest study of the great literature in our area of interest\" (to paraphrase from the Knowledge Hydrant pattern language, a useful resource for study groups).

\n

Earnest study aims at understanding a work deeply. Probably (particularly so in the case of PT:TLOS) the most useful way to do so is sequentially, in the order the author presented their ideas. Therefore, we aim for a pace that allows participants to extract as much insight as possible from each piece of the work, before moving on to the next, which is assumed to build on it.

\n

Exercises are useful stopping-points to check for understanding. When the text contains equations or proofs, reproducing the derivations or checking the calculations can also be a good way to ensure deep understanding.

\n

PT:TLOS is (from personal experience) relatively accessible on rusty high school math (in particular requires little calculus) until at least partway through Chapter 6 (which is where I am at the moment). Just these few chapters contain many key insights about the Bayesian view of probability and are well worth the effort.

\n

Format

\n

My proposal for the format is as follows. I will post one new top-level post per chapter, so as to give people following through RSS a chance to catch updates. Each chapter, however, may require splitting up into more than one chunk to be manageable. I intend to aim for a weekly rhythm: the Monday after the first chunk of a new chapter is posted, I will post the next chunk, and so on. If you're worried about missing an update, check the top-level post for the current chapter weekly on Mondays.

\n

Each update will identify the current chunk, and will link to a comment containing one or more \"opening questions\" to jump-start discussion.

\n

Updates also briefly summarize the previous chunk and highlights of the discussion arising from it. (Participants in the live chat sessions are encouraged to designate one person to summarize the discussion and post the summary as a comment.) By the time a new chapter is to be opened, the previous post will contain a digest form of the group's collective take on the chapter just worked through. The cumulative effect will be a \"Less Wrong's notes on PT:TLOS\", useful in itself for newcomers.

\n

Chapter 1: Plausible Reasoning

\n

In this chapter Jaynes fleshes out a theme introduced in the preface: \"Probability theory as extended logic\".

\n

Sections: Deductive and Plausible Reasoning - Analogies with Physical Theories - The Thinking Computer - Introducing the Robot (week of 14/06)

\n

Classical (Aristotelian) logic - modus ponens, modus tollens - allows deduction (teasing apart the concepts of deduction, induction, abduction isn't trivial). But what if we're interested not just in \"definitely true or false\" but \"is this plausible\", as we are in the kind of everyday thinking Jaynes provides examples of? Plausible reasoning is a weaker form of inference than deduction, but one Jaynes argues plays an important role even in (say) mathematics.

\n

Jaynes' aim is to construct a working model of our faculty of \"common sense\", in the same sense that the Wright brothers could form a working model of the faculty of flight, not by vague resort to analogy as in the Icarus myth, but by producing a machine embodying a precise understanding. (Jaynes, however, speaks favorably of analogical thinking: \"Good mathematicians see analogies between theorems; great mathematicians see analogies between analogies\". He acknowledges that this line of argument itself stems from analogy with physics.)

\n

Accordingly, Jaynes frames what is to follow as building an \"inference robot\". Jaynes notes, \"the question of the reasoning process used by actual human brains is charged with emotion and grotesque misunderstandings\", and so this frame will be helpful in keeping us focused on useful questions with observable consequences. It is tempting to also read a practical intent - just as robots can carry out specialized mechanical tasks on behalf of humans, so could an inference robot keep track of more details than our unaided common senses - we must however be careful not to project onto Jaynes some conception of a \"Bayesian AI\".

\n

Sections: Boolean Algebra - Adequate Sets of Operations - The Basic Desiderata - Comments - Common Language vs Formal Logic - Nitpicking (week of 21/06)

\n

Jaynes next introduces the familiar formal notation of Boolean algebra to represent truth-values of propositions, their conjunction and disjunction, and denial. (Equality denotes equality of truth-values, rather than equality of propositions.) Some care is required to distinguish common usage of terms such as \"or\", \"implies\", \"if\", etc. from their denotation in the Boolean algebra of truth-values. From the axioms of idempotence, commutativity, associativity, distributivity and duality, we can build up any number of more sophisticated consequences.

\n

One such consequence, sketched out next, is that any function of n boolean variables can be expressed as a sum (logical OR) involving only conjunctions (logical AND) of each variable or its negation. Each of the 2^(2^n) different logic functions of n variables can thus be expressed in terms of only 2^n such building blocks and only three operations (conjunction, disjunction, negation). In fact an even smaller set of operations is adequate to construct all Boolean functions: it is possible to express all three in terms of the NAND (negation of AND) operation, for instance. (A key argument in Chapter 2 hinges on this reduction of logic functions to an \"adequate set\".)
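
A small sketch of that adequacy claim in Python, expressing NOT, AND and OR in terms of NAND alone and checking every truth-value assignment (the helper names are mine, not Jaynes'):

    def nand(a, b):
        return not (a and b)

    def not_(a):
        return nand(a, a)

    def and_(a, b):
        return nand(nand(a, b), nand(a, b))

    def or_(a, b):
        return nand(nand(a, a), nand(b, b))

    # Exhaustive check over the four possible truth-value assignments.
    for a in (False, True):
        for b in (False, True):
            assert and_(a, b) == (a and b)
            assert or_(a, b) == (a or b)
        assert not_(a) == (not a)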

\n

The \"inference robot\", then, is to reason in terms of degrees of plausibility assigned to propositions: plausibility is a generalization of truth-value. We are generally concerned with \"conditional probability\"; how plausible something is given what else we know. This is represented in the familiar notation A|B (\" the plausibility of A given that B is true\", or \"A given B\"). The robot is assumed to be provided sensible, non-contradictory input.

\n

Jaynes next considers the \"basic desiderata\" for such an extension. First, they should be real numbers. (This is motivated by an appeal to convenience of implementation; the Comments defend this in greater detail, and a more formal justification can be found in the Appendices.) By convention, greater plausibility will be represented with a greater number, and the robot's \"sense of direction\", that is, the consequences it draws from increases or decreases in the plausibility of the \"givens\", must conform to common sense. (This will play a key role in Chapter 2.) Finally, the robot is to be consistent and non-ideological: it must always draw the same conclusions from identical premises, it must not arbitrarily ignore information available to it, and it must represent equivalent states of knowledge by equivalent values of plausibility.

\n

(The Comments section is well worth reading, as it introduces the Mind Projection Fallacy which LW readers who have gone through the Sequences should be familiar with.)

" } }, { "_id": "meLBCXvvGcTNCet4g", "title": "Open Thread June 2010, Part 3", "pageUrl": "https://www.lesswrong.com/posts/meLBCXvvGcTNCet4g/open-thread-june-2010-part-3", "postedAt": "2010-06-14T06:14:36.760Z", "baseScore": 9, "voteCount": 7, "commentCount": 627, "url": null, "contents": { "documentId": "meLBCXvvGcTNCet4g", "html": "

\n

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

\n
The thrilling conclusion of what is likely to be an inaccurately named trilogy of June Open Threads.
" } }, { "_id": "d9CcQ24ukbL8WcMpB", "title": "How to always have interesting conversations", "pageUrl": "https://www.lesswrong.com/posts/d9CcQ24ukbL8WcMpB/how-to-always-have-interesting-conversations", "postedAt": "2010-06-14T00:35:15.424Z", "baseScore": 70, "voteCount": 62, "commentCount": 351, "url": null, "contents": { "documentId": "d9CcQ24ukbL8WcMpB", "html": "

One of the things that makes Michael Vassar an interesting person to be around is that he has an opinion about everything. If you locked him up in an empty room with grey walls, it would probably take the man about thirty seconds before he'd start analyzing the historical influence of the Enlightenment on the tradition of locking people up in empty rooms with grey walls.

Likewise, in the recent LW meetup, I noticed that I was naturally drawn to the people who most easily ended up talking about interesting things. I spent a while just listening to HughRistik's theories on the differences between men and women, for instance. There were a few occasions when I engaged in some small talk with new people, but not all of them took very long, as I failed to lead the conversation into territory where one of us would have plenty of opinions.

I have two major deficiencies in trying to mimic this behavior. One, I'm by nature more of a listener than speaker. I usually prefer to let other people talk so that I can just soak up the information being offered. Second, my native way of thought is closer to text than speech. At best, I can generate thoughts as fast as I can type. But in speech, I often have difficulty formulating my thoughts into coherent sentences fast enough and frequently hesitate.

Both of these problems are solvable by having a sufficiently well built-up storage of cached thoughts that I don't need to generate everything in real time. On the occasions when a conversation happens to drift into a topic I'm sufficiently familiar with, I'm often able to overcome the limitations and contribute meaningfully to the discussion. This implies two things. First, that I need to generate cached thoughts in more subjects than I currently have. Second, that I need an ability to more reliably steer conversation into subjects that I actually do have cached thoughts about.

Below is a preliminary "conversational map" I generated as an exercise. The top three subjects - the weather, the other person's background (job and education), people's hobbies - are classical small talk subjects. Below them are a bunch of subjects that I feel like I can spend at least a while talking about, and possible paths leading from one subject to another. My goal in generating the map is to create a huge web of interesting subjects, so that I can use the small talk openings to bootstrap the conversation into basically anything I happen to be interested in.

This map is still pretty small, but it can be expanded to an arbitrary degree. (This is also one of the times when I wish my netbook had a bigger screen.) I thought that I didn't have very many things that I could easily talk with people about, but once I started explicitly brainstorming for them, I realized that there were a lot of those.

My intention is to spend a while generating conversational charts like this and then spend some time fleshing out the actual transitions between subjects. The benefit from this process should be two-fold. Practice in creating transitions between subjects will make it easier to generate such transitions in real time conversations. And if I can't actually come up with anything in real time, I can fall back to the cache of transitions and subjects that I've built up.

Naturally, the process needs to be guided by what the other person shows an interest in. If they show no interest in some subject I mention, it's time to move the topic to another cluster. Many of the subjects in this chart are also pretty inflammatory: there are environments where pretty much everything in the politics cluster should probably be kept off-limits, for instance. Exercise your common sense when building and using your own conversational charts.

(Thanks to Justin Shovelain for mentioning that Michael Vassar seems to have a huge conversational web that all his discussions take place in. That notion was one of the original sources for this idea.)

" } }, { "_id": "ZnPutgDk4xGFMT9aL", "title": "You might be population too", "pageUrl": "https://www.lesswrong.com/posts/ZnPutgDk4xGFMT9aL/you-might-be-population-too", "postedAt": "2010-06-13T01:39:47.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "ZnPutgDk4xGFMT9aL", "html": "

I recently attended a dinner forum on what size the population should be. All of the speakers held the same position: small. The only upsides of population mentioned were to horrid profit-seeking people like property developers. Yet the downsides to population are horrendous – all our resource use problems multiplied! As one speaker put it, “The population can’t increase forever, and as a no-brainer it should stop sooner rather than later”. As there are no respectable positives in the equation, no need for complicated maths. Smaller is better.

\n

I suggested to my table what I saw as an obvious omission in this model: I at least am enjoying the population being big enough to have me in it, so I would at least consider putting a big positive value on human lives. My table seemed to think this an outlandish philosophical position. I suggested that if resource use is the problem, we fix externalities there, but they thought this just as roundabout a way of getting ‘sustainability’, whereas cutting the population seems straightforward and there’s nothing to lose by it. I suggested to the organizer that the positive of human existence deserved a mention (in a multiple hour forum), and he explained that if we didn’t exist we wouldn’t notice, as though that settles it.

\n

But the plot thickened further. Why do you suppose we should keep the population low? “We should leave the world in as good or a better condition as we got it in” one speaker explained. So out of concern for future generations apparently. Future people don’t benefit from being alive, but it’s imperative that we ensure they have cheap water bills long before they have any such preferences.

\n

One simple solution then: since all these costs of our population go to the next generation and they don’t actually benefit from being alive – let’s not have another generation! Then not only will there be no resource use in future, but the costs of our rapacious lifestyles will have nobody to go to. Coincidentally Peter Singer wrote an article to this effect last week, asking ‘Would it be wrong for us all to agree not to have children, so that we would be the last generation on Earth?’

\n

At the forum mention was made, and amused looks were exchanged, over the voluntary human extinction movement. Apparently it’s crazy to want our species extinct, but crazy not to want it arbitrarily smaller. Is the main benefit of all this human civilization species diversity then?

\n

The fine line between good ‘population reduction’ and killing people was accidentally jumped when a speaker spoke approvingly of Frank Fenner’s work on ending smallpox, then went on to praise him for joining the population reduction movement after realizing he had contributed to the population problem by eradicating smallpox. Note that in this case the population lowering was in the form of people being killed, not just never being born – the usual barrier to equivalence. As far as I could tell nobody noticed this. Should we praise other groups who have made active contributions to a sustainable population, post-birth?

\n

I’m not surprised if people mostly concluded on balance that there’s probably little wrong with stopping people from being born, or that it’s not as bad as killing them. But to not even consider a value on human lives except to property developers, in a population debate, seems incredible. Is it just that once you consider this question you are already alive, so can imagine still being so in all but the total extinction scenario? Is it just an extreme of our usual blatant moral hypocrisy around groups too weak to mold the evolution of our morals?


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "S7yJJKBQeap53BMAn", "title": "H+ Summit Meetup Harvard 6/12", "pageUrl": "https://www.lesswrong.com/posts/S7yJJKBQeap53BMAn/h-summit-meetup-harvard-6-12", "postedAt": "2010-06-10T17:25:32.430Z", "baseScore": 7, "voteCount": 5, "commentCount": 8, "url": null, "contents": { "documentId": "S7yJJKBQeap53BMAn", "html": "

Just realized I hadn't seen a post about this, and couldn't find mention of it by searching...

\n

The H+ Summit is happening this Saturday and Sunday at Harvard.  I assume there are quite a few Humanity+ fellow-travelers here who might be going (though it's kind of late notice if this is the first time you've heard of it).

\n

I'm interested in an LW meet-up or anything of the sort on Saturday evening (June 12).  Who's in?

\n

 

\n

EDIT (6/12): Meeting at h+ afterparty instead.  It will be 6 to 8 at Sprout.  http://diybio.org/hplusbeer/

\n

I'll be the one named Thom Blake.

\n

\n

 EDIT (6/11): Location: Grendel's Den  Time: About 6ish.  We might move if it's not suitable - apparently it may be overrun by sports fans.

" } }, { "_id": "Kw6fd85kNF4XxonfL", "title": "Heuristics for a good life", "pageUrl": "https://www.lesswrong.com/posts/Kw6fd85kNF4XxonfL/heuristics-for-a-good-life", "postedAt": "2010-06-10T15:40:40.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "Kw6fd85kNF4XxonfL", "html": "

I wondered what careers or the like help other people the most. Tyler reposted my question, adding:

\n

Let’s assume pure marginalist act utilitarianism, namely that you choose a career and get moral credit only for the net change caused by your selection.  Furthermore, I’ll rule out “become a billionaire and give away all your money” or “cure cancer” by postulating that said person ends up at the 90th percentile of achievement in the specified field but no higher.

\n

And answered:

\n

What first comes to mind is “honest General Practitioner who has read Robin Hanson on medicine.”  If other countries are fair game, let’s send that GP to Africa.  No matter what the locale, you help some people live and good outcomes do not require remarkable expertise.  There is a shortage of GPs in many locales, so you make specialists more productive as well.  Public health and sanitation may save more lives than medicine, but the addition of a single public health worker may well have a smaller marginal impact, given the greater importance of upfront costs in that field.

\n

I’m not convinced that the Hansonian-educated GP would be much better than a materialism-educated spiritual healer. Tyler’s commenters have a lot of suggestions for where big positives might be too – but all jobs have positives and many of them seem important, so how to compare? Unfortunately calculating the net costs and benefits of all the things one could do with oneself is notoriously impossible. So how about some heuristics for what types of jobs tend to be more socially beneficial?

\n

Here are some ideas; please extend and criticize:

\n\n

What jobs do well on most of these things?


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "jnKaDMvdkFEoBT2qK", "title": "UDT agents as deontologists", "pageUrl": "https://www.lesswrong.com/posts/jnKaDMvdkFEoBT2qK/udt-agents-as-deontologists", "postedAt": "2010-06-10T05:01:06.970Z", "baseScore": 14, "voteCount": 23, "commentCount": 117, "url": null, "contents": { "documentId": "jnKaDMvdkFEoBT2qK", "html": "

One way (the usual way?) to think of an agent running Updateless Decision Theory is to imagine that the agent always cares about all possible worlds according to how probable those worlds seemed to the agent's builders when they wrote the agent's source code[see added footnote 1 below].  In particular, the agent never develops any additional concern for whatever turns out to be the actual world[2].  This is what puts the \"U\" in \"UDT\".

\n

I suggest an alternative conception of a UDT agent, without changing the UDT formalism. According to this view, the agent cares about only the actual world.  In fact, at any time, the agent cares about only one small facet of the actual world — namely, whether the agent's act at that time maximizes a certain fixed act-evaluating function.  In effect, a UDT agent is the ultimate deontologist:  It doesn't care at all about the actual consequences that result from its action.  One implication of this conception is that a UDT agent cannot be truly counterfactually mugged.

\n

[ETA: For completeness, I give a description of UDT here (pdf).]

\n

\n

\n

Vladimir Nesov's Counterfactual Mugging presents us with the following scenario:

\n

\n
\n

Imagine that one day, Omega comes to you and says that it has just tossed a fair coin, and given that the coin came up tails, it decided to ask you to give it $100. Whatever you do in this situation, nothing else will happen differently in reality as a result. Naturally you don't want to give up your $100. But see, the Omega tells you that if the coin came up heads instead of tails, it'd give you $10000, but only if you'd agree to give it $100 if the coin came up tails.

\n

Omega can predict your decision in case it asked you to give it $100, even if that hasn't actually happened, it can compute the counterfactual truth. The Omega is also known to be absolutely honest and trustworthy, no word-twisting, so the facts are really as it says, it really tossed a coin and really would've given you $10000.

\n
\n

An agent following UDT will give the $100.  Imagine that we were building an agent, and that we will receive whatever utility follows from the agent's actions.  Then it's easy to see why we should build our agent to give Omega the money in this scenario.  After all, at the time we build our agent, we know that Omega might one day flip a fair coin with the intentions Nesov describes.  Whatever probability this has of happening, our expected earnings are greater if we program our agent to give Omega the $100 on tails.
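
To make that concrete, here is a toy calculation; the 1% prior on ever meeting such an Omega is an arbitrary illustrative number, not anything from the formalism.

```python
def expected_earnings(pays_on_tails, p_mugging=0.01):
    # p_mugging: the builders' prior that Omega ever runs this game (illustrative value).
    if pays_on_tails:
        # Heads: the agent is given $10000; tails: it hands over $100. The coin is fair.
        return p_mugging * (0.5 * 10_000 + 0.5 * (-100))
    return 0.0  # An agent that refuses is given nothing either way.

print(expected_earnings(True), expected_earnings(False))  # 49.5 vs 0.0
```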

\n

More generally, if we suppose that we get whatever utility will follow from our agent's actions, then we can do no better than to program the agent to follow UDT.  But since we have to program the UDT agent now, the act-evaluating function that determines how the agent will act needs to be fixed with the probabilities that we know now.  This will suffice to maximize our expected utility given our best knowledge at the time when we build the agent.

\n

So, it makes sense for a builder to program an agent to follow UDT on expected-utility grounds.  We can understand the builder's motivations.  We can get inside the builder's head, so to speak.

\n

But what about the agent's head?  The brilliance of Nesov's scenario is that it is so hard, on first hearing it, to imagine why a reasonable agent would give Omega the money knowing that the only result will be that they gave up $100.  It's easy enough to follow the UDT formalism.  But what on earth could the UDT agent itself be thinking?  Yes, trying to figure this out is an exercise in anthropomorphization.  Nonetheless, I think that it is worthwhile if we are going to use UDT to try to understand what we ought to do.

\n

Here are three ways to conceive of the agent's thinking when it gives Omega the $100.  They form a sort of spectrum.

\n
    \n
  1. One extreme view:  The agent considers all the possible worlds to be on equal ontological footing.  There is no sense in which any one of them is distinguished as \"actual\" by the agent.  It conceives of itself as acting simultaneously in all the possible worlds so as to maximize utility over all of them.  Sometimes this entails acting in one world so as to make things worse in that world.  But, no matter which world this is, there is nothing special about it.  The only property of the world that has any ontological significance is the probability weight given to that world at the time that the agent was built. (I believe that this is roughly the view that Wei Dai himself takes, but I may be wrong.)
  2. An intermediate view:  The agent thinks that there is only one actual world.  That is, there is an ontological fact of the matter about which world is actual.  However, the other possible worlds continue to exist in some sense, although they are merely possible, not actual.  Nonetheless, the agent continues to care about all of the possible worlds, and this amount of care never changes.  After being counterfactually mugged, the agent is happy to know that, in some merely-possible world, Omega gave the agent $10000.
  3. The other extreme:  As in (2), the agent thinks that there is only one actual world.  Contrary to (2), the agent cares about only this world.  However, the agent is a deontologist.  When deciding how to act, all that it cares about is whether its act in this world is \"right\", where \"right\" means \"maximizes the fixed act-evaluating function that was built into me.\"
\n

View (3) is the one that I wanted to develop in this post.  On this view, the \"probability distribution\" in the act-evaluating function no longer has any epistemic meaning for the agent.  The act-evaluating function is just a particular computation which, for the agent, constitutes the essence of rightness.  Yes, the computation involves considering some counterfactuals, but to consider those counterfactuals does not entail any ontological commitment.

\n

Thus, when the agent has been counterfactually mugged, it's not (as in (1)) happy because it cares about expected utility over all possible worlds.  It's not (as in (2)) happy because, in some merely-possible world, Omega gave it $10000.  On this view, the agent considers all those \"possible worlds\" to have been rendered impossible by what it has learned since it was built.  The reason the agent is happy is that it did the right thing.  Merely doing the right thing has given the agent all the utility it could hope for.  More to the point, the agent got that utility in the actual world.  The agent knows that it did the right thing, so it genuinely does not care about what actual consequences will follow from its action.

\n
\n

In other words, although the agent lost $100, it really gained from the interaction with Omega.  This suggests that we try to consider a \"true\" analog of the Counterfactual Mugging.  In The True Prisoner's Dilemma, Eliezer Yudkowsky presents a version of the Prisoner's Dilemma in which it's viscerally clear that the payoffs at stake capture everything that we care about, not just our selfish values.  The point is to make the problem about utilons, and not about some stand-in, such as years in prison or dollars.

\n

In a True Counterfactual Mugging, Omega would ask the agent to give up utility.  Here we see that the UDT agent cannot possibly do as Omega asks.  Whatever it chooses to do will turn out to have in fact maximized its utility.  Not just expected utility, but actual utility. In the original Counterfactual Mugging, the agent looks like something of a chump who gave up $100 for nothing.  But in the True Counterfactual Mugging, our deontological agent lives with the satisfaction that, no matter what it does, it lives in the best of all possible worlds.

\n

 

\n
\n

[1] ETA: Under UDT, the agent assigns a utility to having all of the possible worlds P1, P2, . . . undergo respective execution histories E1, E2, . . ..  (The way that a world evolves may depend in part on the agent's action).  That is, for each vector <E1, E2, . . .> of ways that these worlds could respectively evolve, the agent assigns a utility U(<E1, E2, . . .>).  Due to criticisms by Vladimir Nesov (beginning here), I have realized that this post only applies to instances of UDT in which the utility function U takes the form that it has in standard decision theories.  In this case, each world Pi has its own probability pr(Pi) and its own utility function ui that takes an execution history of Pi alone as input, and the function U takes the form

\n

U(<E1, E2, . . .>) = Σi pr(Pi) ui(Ei).

\n

The probabilities pr(Pi) are what I'm talking about when I mention probabilities in this post.  Wei Dai is interested in instances of UDT with more general utility functions U.  However, to my knowledge, this special kind of utility function is the only one in terms of which he's talked about the meanings of probabilities of possible worlds in UDT.  See in particular this quote from the original UDT post:

\n
\n

If your preferences for what happens in one such program is independent of what happens in another, then we can represent them by a probability distribution on the set of programs plus a utility function on the execution of each individual program.

\n
\n

(A \"program\" is what Wei Dai calls a possible world in that post.)  The utility function U is \"baked in\" to the UDT agent at the time it's created.  Therefore, so too are the probabilities pr(Pi).

\n

[2] By \"the actual world\", I do not mean one of the worlds in the many-worlds interpretation (MWI) of quantum mechanics.  I mean something more like the entire path traversed by the quantum state vector of the universe through its corresponding Hilbert space.  Distinct possible worlds are distinct paths that the state of the universe might (for all we know) be traversing in this Hilbert space.  All the \"many worlds\" of the MWI together constitute a single world in the sense used here.

\n

 

\n
\n

 

\n

ETA: This post was originally titled \"UDT agents are deontologists\".  I changed the title to \"UDT agents as deontologists\" to emphasize that I am describing a way to view UDT agents.  That is, I am describing an interpretive framework for understanding the agent's thinking.  My proposal is analogous to Dennett's \"intentional stance\".  To take the intentional stance is not to make a claim about what a conscious organism is doing.  Rather, it is to make use of a framework for organizing our understanding of the organism's behavior.  Similarly, I am not suggesting that UDT somehow gets things wrong.  I am saying that it might be more natural for us if we think of the UDT agent as a deontologist, instead of as an agent that never changes its belief about which possible worlds will actually happen.  I say a little bit more about this in this comment.

" } }, { "_id": "Psp8ZpYLCDJjshpRb", "title": "Your intuitions are not magic", "pageUrl": "https://www.lesswrong.com/posts/Psp8ZpYLCDJjshpRb/your-intuitions-are-not-magic", "postedAt": "2010-06-10T00:11:30.121Z", "baseScore": 163, "voteCount": 160, "commentCount": 42, "url": null, "contents": { "documentId": "Psp8ZpYLCDJjshpRb", "html": "

People who know a little bit of statistics - enough to use statistical techniques, not enough to understand why or how they work - often end up horribly misusing them. Statistical tests are complicated mathematical techniques, and to work, they tend to make numerous assumptions. The problem is that if those assumptions are not valid, most statistical tests do not cleanly fail and produce obviously false results. Neither do they require you to carry out impossible mathematical operations, like dividing by zero. Instead, they simply produce results that do not tell you what you think they tell you. As a formal system, pure math exists only inside our heads. We can try to apply it to the real world, but if we are misapplying it, nothing in the system itself will tell us that we're making a mistake.

\n

Examples of misapplied statistics have been discussed here before. Cyan discussed a \"test\" that could only produce one outcome. PhilGoetz critiqued a statistical method which implicitly assumed that taking a healthy dose of vitamins had a comparable effect as taking a toxic dose.

Even a very simple statistical technique, like taking the correlation between two variables, might be misleading if you forget about the assumptions it's making. When someone says \"correlation\", they are most commonly talking about Pearson's correlation coefficient, which seeks to gauge whether there's a linear relationship between two variables. In other words, if X increases, does Y also tend to increase (or decrease)? However, as with vitamin dosages and their effects on health, two variables might have a non-linear relationship. Increasing X might increase Y up to a certain point, after which increasing X would decrease Y. Simply calculating Pearson's correlation on two such variables might cause someone to get a low correlation, and therefore conclude that there's no relationship or there's only a weak relationship between the two. (See also Anscombe's quartet.)
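
A quick numerical illustration of that failure mode (a sketch of my own, not one of the examples above):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 500)
y = -(x ** 2) + rng.normal(0, 0.5, x.size)  # strong but non-linear (inverted-U) dependence

r = np.corrcoef(x, y)[0, 1]
print(round(r, 3))  # close to 0: Pearson's r detects almost no linear relationship
```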

The lesson here, then, is that not understanding how your analytical tools work will get you incorrect results when you try to analyze something. A person who doesn't stop to consider the assumptions of the techniques she's using is, in effect, thinking that her techniques are magical. No matter how she might use them, they will always produce the right results. Of course, assuming that makes about as much sense as assuming that your hammer is magical and can be used to repair anything. Even if you had a broken window, you could fix that by hitting it with your magic hammer. But I'm not only talking about statistics here, for the same principle can be applied in a more general manner.

\n


Every moment in our lives, we are trying to make estimates of the way the world works: of what causal relationships there are, of what ways of describing the world make sense and which ones don't, of which plans will work and which ones will fail. In order to make those estimates, we need to draw on a vast amount of information our brains have gathered throughout our lives. Our brains keep track of countless pieces of information that we will not usually even think about. Few people will explicitly keep track of the number of different restaurants they've seen. Yet in general, if people are asked about the relative number of restaurants in various fast-food chains, their estimates generally bear a close relation to the truth.

But like explicit statistical techniques, the brain makes numerous assumptions when building its models of the world. Newspapers are selective in their reporting of disasters, focusing on rare shocking ones above common mundane ones. Yet our brains assume that we hear about all those disasters because we've personally witnessed them, and that the distribution of disasters in the newspapers therefore reflects the distribution of disasters in the real world. Thus, people asked to estimate the frequency of different causes of death underestimate the frequency of those that are underreported in the media, and overestimate the ones that are overreported.

On this site, we've also discussed a variety of other ways by which the brain's reasoning sometimes goes wrong: the absurdity heuristic, the affect heuristic, the affective death spiral, the availability heuristic, the conjunction fallacy... the list goes on and on.

So what happens when you've read too many newspaper articles and then naively wonder about how frequent different disasters are? You are querying your unconscious processes about a certain kind of statistical relationship, and you get an answer back. But like the person who was naively misapplying her statistical tools, the process which generates the answers is a black box to you. You do not know how or why it works. If you would, you could tell when its results were reliable, when they needed to be explicitly corrected for, and when they were flat-out wrong.

Sometimes we rely on our intuitions even when they are being directly contradicted by math and science. The science seems absurd and unintuitive; our intuitions seem firm and clear. And indeed, sometimes there's a flaw in the science, and we are right to trust our intuitions. But on other occasions, our intuitions are wrong. Yet we frequently persist in holding onto our intuitions. And what is ironic is that we persist in holding onto them exactly because we do not know how they work, because we cannot see their insides and all the things inside them that could go wrong. We only get the feeling of certainty, a knowledge of this being right, and that feeling cannot be broken into parts that could be subjected to criticism to see if they add up.

But like statistical techniques in general, our intuitions are not magic. Hitting a broken window with a hammer will not fix the window, no matter how reliable the hammer. It would certainly be easy and convenient if our intuitions always gave us the right results, just like it would be easy and convenient if our statistical techniques always gave us the right results. Yet carelessness can cost lives. Misapplying a statistical technique when evaluating the safety of a new drug might kill people or cause them to spend money on a useless treatment. Blindly following our intuitions can cause our careers, relationships or lives to crash and burn, because we did not think of the possibility that we might be wrong.

That is why we need to study the cognitive sciences, figure out the way our intuitions work and how we might correct for mistakes. Above all, we need to learn to always question the workings of our minds, for we need to understand that they are not magical.

" } }, { "_id": "a4hqTo3pTubuw7Tms", "title": "Bay Area Meetup Saturday 6/12", "pageUrl": "https://www.lesswrong.com/posts/a4hqTo3pTubuw7Tms/bay-area-meetup-saturday-6-12", "postedAt": "2010-06-09T18:18:17.750Z", "baseScore": 6, "voteCount": 6, "commentCount": 15, "url": null, "contents": { "documentId": "a4hqTo3pTubuw7Tms", "html": "

There will be a meetup at 7PM this Saturday at the SIAI house (3755 Benton St, Santa Clara). (More info on the official page.)

\n

The usual set of people will be present, as well as new SIAI Visiting Fellows.

\n

(Sorry for the short notice.)

" } }, { "_id": "tSDaxEq2WGHTRPcWB", "title": "Less Wrong Book Club and Study Group", "pageUrl": "https://www.lesswrong.com/posts/tSDaxEq2WGHTRPcWB/less-wrong-book-club-and-study-group", "postedAt": "2010-06-09T17:00:29.214Z", "baseScore": 43, "voteCount": 35, "commentCount": 161, "url": null, "contents": { "documentId": "tSDaxEq2WGHTRPcWB", "html": "

Do you want to become stronger in the way of Bayes? This post is intended for people whose understanding of Bayesian probability theory is currently somewhat tentative (between levels 0 and 1 to use a previous post's terms), and who are interested in developing deeper knowledge through deliberate practice.

\n

Our intention is to form an online self-study group composed of peers, working with the assistance of a facilitator - but not necessarily of a teacher or of an expert in the topic. Some students may be somewhat more advanced along the path, and able to offer assistance to others.

\n

Our first text will be E.T. Jaynes' Probability Theory: The Logic of Science, which can be found in PDF form (in a slightly less polished version than the book edition) here or here.

\n

We will work through the text in sections, at a pace allowing thorough understanding: expect one new section every week, maybe every other week. A brief summary of the currently discussed section will be published as an update to this post, and simultaneously a comment will open the discussion with a few questions, or the statement of an exercise. Please use ROT13 whenever appropriate in your replies.
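
(For anyone unfamiliar with ROT13: it is a trivial letter-substitution cipher used here to hide spoilers. In Python it is available as a codec, for example:)

```python
import codecs

spoiler = 'My answer to exercise 1.1 is ...'
hidden = codecs.encode(spoiler, 'rot13')   # encode before posting
print(hidden)
print(codecs.decode(hidden, 'rot13'))      # applying ROT13 again recovers the text
```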

\n

A first comment below collects intentions to participate. Please reply to this comment only if you are genuinely interested in gaining a better understanding of Bayesian probability and willing to commit to spend a few hours per week reading through the section assigned or doing the exercises.

\n

As a warm-up, participants are encouraged to start in on the book:

\n

Preface

\n

Most of the Preface can be safely skipped. It names the giants on whose shoulders Jaynes stood (\"History\", \"Foundations\"), deals briefly with the frequentist vs Bayesian controversy (\"Comparisons\"), discusses his \"Style of Presentation\" (and incidentally his distrust of infinite sets), and contains the usual acknowledgements.

One section, \"What is 'safe'?\", stands out as making several strong points about the use of probability theory. Sample: \"new data that we insist on analyzing in terms of old ideas (that is, models which are not questioned) cannot lead us out of the old ideas\". (The emphasis is Jaynes'. This has an almost Kuhnian flavor.)

\n

Discussion on the Preface starts with this comment.

" } }, { "_id": "HBEukSjp77yjizqqx", "title": "Deploy Test", "pageUrl": "https://www.lesswrong.com/posts/HBEukSjp77yjizqqx/deploy-test", "postedAt": "2010-06-09T06:40:57.890Z", "baseScore": 1, "voteCount": 1, "commentCount": 3, "url": null, "contents": { "documentId": "HBEukSjp77yjizqqx", "html": "

More Testing...

" } }, { "_id": "3Kkgek3PZ9snipyMY", "title": "Open Thread June 2010, Part 2", "pageUrl": "https://www.lesswrong.com/posts/3Kkgek3PZ9snipyMY/open-thread-june-2010-part-2", "postedAt": "2010-06-07T08:37:52.236Z", "baseScore": 12, "voteCount": 10, "commentCount": 553, "url": null, "contents": { "documentId": "3Kkgek3PZ9snipyMY", "html": "

The title says it all.

" } }, { "_id": "ZLBtZqsP79Cwioi2b", "title": "Virtue Ethics for Consequentialists", "pageUrl": "https://www.lesswrong.com/posts/ZLBtZqsP79Cwioi2b/virtue-ethics-for-consequentialists", "postedAt": "2010-06-04T16:08:40.556Z", "baseScore": 46, "voteCount": 49, "commentCount": 185, "url": null, "contents": { "documentId": "ZLBtZqsP79Cwioi2b", "html": "

Meta: Influenced by a cool blog post by Kaj, which was influenced by a cool Michael Vassar (like pretty much everything else; the man sure has a lot of ideas). The name of this post is intended to be taken slightly more literally than the similarly titled Deontology for Consequentialists.

\n

 

\n

There's been a hip new trend going around the Singularity Institute Visiting Fellows house lately, and it's not postmodernism. It's virtue ethics. \"What, virtue ethics?! Are you serious?\" Yup. I'm so contrarian I think cryonics isn't obvious and that virtue ethics is better than consequentialism. This post will explain why.

\n

When I first heard about virtue ethics I assumed it was a clever way for people to justify things they did when the consequences were bad and the reasons were bad, too. People are very good at spinning tales about how virtuous they are, even more so than at finding good reasons that they could have done things that turned out unpopular, and it's hard to spin the consequences of your actions as good when everyone is keeping score. But it seems that moral theorists were mostly thinking in far mode and didn't have too much incentive to create a moral theory that benefited them the most, so my Hansonian hypothesis falls flat. Why did Plato and Aristotle and everyone up until the Enlightenment find virtue ethics appealing, then? Well...

\n

Moral philosophy was designed for humans, not for rational agents. When you're used to thinking about artificial intelligence, economics, and decision theory, it gets easy to forget that we're hyperbolic discounters: not anything resembling sane. Humans are not inherently expected utility maximizers, they're bounded agents with little capacity for reflection. Utility functions are great and all, but in the words of Zack M. Davis, \"Humans don't have utility functions.\" Similarly, Kaj warns us: \"be extra careful when you try to apply the concept of a utility function to human beings.\" Back in the day nobody thought smarter-than-human intelligence was possible, and many still don't. Philosophers came up with ways for people to live their lives, have a good time, be respected, and do good things; they weren't even trying to create morals for anyone too far outside the norm of whatever society they inhabited at the time, or whatever society they imagined to be perfect. I personally think that the Buddha had some really interesting things to say and that his ideas about ethics are no exception (though I suspect he may have had pain asymbolia, which totally deserves its own post soon).  Epicurus, Mill, and Bentham were great thinkers and all, but it's not obvious that what they were saying is best practice for individual people, even if their ideas about policy are strictly superior to alternative options. Virtue ethics is good for bounded agents: you don't have to waste memory on what a personalized rulebook says about different kinds of milk, and you don't have to think 15 inferential steps ahead to determine if you should drink skim or whole.

\n

You can be a virtue ethicist whose virtue is to do the consequentialist thing to do (because your deontological morals say that's what is right). Consequentialists, deontologists, and virtue ethicists don't really disagree on any major points in day-to-day life, just in crazy situations like trolley problems. And anyway, they're all actually virtue ethicists: they're trying to do the 'consequentialist' or 'deontologist' things to do, which happen to usually be the same. Alicorn's decided to do her best to reduce existential risk, and I, being a pseudo-consequentialist, have also decided to do my best to reduce existential risk. Virtue ethicists can do these things too, but they can also abuse the consistency effects such actions invariably come with. If you're a virtue ethicist it's easier to say \"I'm the type of person who will reply to all of the emails in my inbox and sort them into my GTD system, because organization and conscientiousness are virtues\" and use this as a way to motivate yourself. So go ahead and be a virtue ethicist for the consequences (...or a consequentialist because it's deontic). It's not illegal!

\n

Retooled virtue ethics is better for your instrumental rationality. The Happiness Hypothesis critiqued the way Western ethics, both in the deontologist tradition started by Immanuel Kant and the consequentialist tradition started by Jeremy Bentham have been becoming increasingly reason-based:

\n
The philosopher Edmund Pincoffs has argued that consequentialists and deontologists worked together to convince Westerners in the twentieth century that morality is the study of moral quandaries and dilemmas. Where the Greeks focused on the character of a person and asked what kind of person we should each aim to become, modern ethics focuses on actions, asking when a particular decision is right or wrong. Philosophers wrestle with life-and-death dilemmas: Kill one to save five? Allow aborted fetuses to be used as a source of stem cells? [...] This turn from character ethics to quandary ethics has turned moral education away from virtues and towards moral reasoning. If morality is about dilemmas, then moral education is training in problem solving. Children must be taught how to think about moral problems, especially how to overcome their natural egoism and take into their calculations the needs of others.

[...] I believe that this turn from character to quandary was a profound mistake, for two reasons. First, it weakens morality and limits its scope. Where the ancients saw virtue and character at work in everything a person does, our modern conception confines morality to a set of situations that arise for each person only a few times in any given week [...] The second problem with the turn to moral reasoning is that it relies on bad psychology. Many moral education efforts since the 1970s take the rider off the elephant and train him to solve problems on his own. After being exposed to hours of case studies, classroom discussions about moral dilemmas, and videos about people who faced dilemmas and made the right choices, the child learns how (not what) to think. Then class ends, the rider gets back on the elephant, and nothing changes at recess. Trying to make children behave ethically by teaching them to reason well is like trying to make a dog happy by wagging its tail. It gets causality backwards.
\n

To quote Kaj's response to the above:

\n
\n

Reading this chapter, that critique and the description of how people like Benjamin Franklin made it into an explicit project to cultivate their various virtues one at a time, I could feel a very peculiar transformation take place within me. The best way I can describe it is that it felt like a part of my decision-making or world-evaluating machinery separated itself from the rest and settled into a new area of responsibility that I had previously not recognized as a separate one. While I had previously been primarily a consequentialist, that newly-specialized part declared its allegiance to virtue ethics, even though the rest of the machinery remained consequentialist. [...]

What has this meant in practice? Well, I'm not quite sure of the long-term effects yet, but I think that my emotional machinery kind of separated from my general decision-making and planning machinery. Think of \"emotional machinery\" as a system that takes various sorts of information as input and produces different emotional states as output. Optimally, your emotional machinery should attempt to create emotions that push you towards taking the kinds of actions that are most appropriate given your goals. Previously I was sort of embedded in the world and the emotional system was taking its input from the entire whole: the way I was, the way the world was, and the way that those were intertwined. It was simultaneously trying to optimize for all three, with mixed results.

But now, my self-model was set separate from the world-model, and my emotional machinery started running its evaluations primarily based on the self-model. The main questions became \"how could I develop myself\", \"how could I be more virtuous\" and \"how could I best act to improve the world\". From the last bit, you can see that I haven't lost the consequentialist layer in my decision-making: I am still trying to act in ways that improve the world. But now it's more like my emotional systems are taking input from the consequentialist planning system to figure out what virtues to concentrate on, instead of the consequentialist reasoning being completely intertwined with my emotional systems.

\n
\n

Applying both consequentialist and virtue ethicist layers to the way you actually get things done in the real world seems to me a great idea. It recognizes that most of us don't actually have that much control over what we do. Acknowledging this and dealing with its consequences, and what it says about us, allows us to do the things we want and feel good about it at the same time.

\n

So, if you'd like, try to be a virtue ethicist for a week. If a key of epistemic rationality is having your beliefs pay rent in expected anticipation, then instrumental rationality is about having your actions pay rent in expected utility. Use science! If being a virtue ethicist helps even one person be more the person they want to be, like it did for Kaj, then this post was well worth the time spent.

" } }, { "_id": "HW5Q9cW9sgk4yCffd", "title": "Hacking the CEV for Fun and Profit", "pageUrl": "https://www.lesswrong.com/posts/HW5Q9cW9sgk4yCffd/hacking-the-cev-for-fun-and-profit", "postedAt": "2010-06-03T20:30:29.518Z", "baseScore": 80, "voteCount": 71, "commentCount": 207, "url": null, "contents": { "documentId": "HW5Q9cW9sgk4yCffd", "html": "

It’s the year 2045, and Dr. Evil and the Singularity Institute have been in a long and grueling race to be the first to achieve machine intelligence, thereby controlling the course of the Singularity and the fate of the universe. Unfortunately for Dr. Evil, SIAI is ahead in the game. Its Friendly AI is undergoing final testing, and Coherent Extrapolated Volition is scheduled to begin in a week. Dr. Evil learns of this news, but there’s not much he can do, or so it seems.  He has succeeded in developing brain scanning and emulation technology, but the emulation speed is still way too slow to be competitive.

\n

There is no way to catch up with SIAI's superior technology in time, but Dr. Evil suddenly realizes that maybe he doesn’t have to. CEV is supposed to give equal weighting to all of humanity, and surely uploads count as human. If he had enough storage space, he could simply upload himself, and then make a trillion copies of the upload. The rest of humanity would end up with less than 1% weight in CEV. Not perfect, but he could live with that. Unfortunately he only has enough storage for a few hundred uploads. What to do…

\n

Ah ha, compression! A trillion identical copies of an object would compress down to be only a little bit larger than one copy. But would CEV count compressed identical copies to be separate individuals? Maybe, maybe not. To be sure, Dr. Evil gives each copy a unique experience before adding it to the giant compressed archive. Since they still share almost all of the same information, a trillion copies, after compression, just manages to fit inside the available space.
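
As a rough sanity check of the compression point (a toy sketch with made-up sizes, using ordinary zlib rather than whatever Dr. Evil has in 2045):

```python
import random
import zlib

random.seed(0)
one_upload = bytes(random.getrandbits(8) for _ in range(1000))  # 1 KB of incompressible 'brain state'

# A thousand copies, each given a tiny unique 'experience' appended to it.
copies = b''.join(one_upload + str(i).encode() for i in range(1000))

print(len(zlib.compress(one_upload, 9)))  # about 1 KB: random data barely compresses
print(len(copies))                        # about 1 MB raw
print(len(zlib.compress(copies, 9)))      # only a few percent of the raw size
```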

\n

Now Dr. Evil sits back and relaxes. Come next week, the Singularity Institute and rest of humanity are in for a rather rude surprise!

" } }, { "_id": "CMt3ijXYuCynhPWXa", "title": "Bayes' Theorem Illustrated (My Way)", "pageUrl": "https://www.lesswrong.com/posts/CMt3ijXYuCynhPWXa/bayes-theorem-illustrated-my-way", "postedAt": "2010-06-03T04:40:21.377Z", "baseScore": 173, "voteCount": 156, "commentCount": 195, "url": null, "contents": { "documentId": "CMt3ijXYuCynhPWXa", "html": "

(This post is elementary: it introduces a simple method of visualizing Bayesian calculations. In my defense, we've had other elementary posts before, and they've been found useful; plus, I'd really like this to be online somewhere, and it might as well be here.)

\n

I'll admit, those Monty-Hall-type problems invariably trip me up. Or at least, they do if I'm not thinking very carefully -- doing quite a bit more work than other people seem to have to do.

\n

What's more, people's explanations of how to get the right answer have almost never been satisfactory to me. If I concentrate hard enough, I can usually follow the reasoning, sort of; but I never quite \"see it\", and nor do I feel equipped to solve similar problems in the future: it's as if the solutions seem to work only in retrospect. 

\n

Minds work differently, illusion of transparency, and all that.

\n

Fortunately, I eventually managed to identify the source of the problem, and I came up a way of thinking about -- visualizing -- such problems that suits my own intuition. Maybe there are others out there like me; this post is for them.

\n

\n

I've mentioned before that I like to think in very abstract terms. What this means in practice is that, if there's some simple, general, elegant point to be made, tell it to me right away. Don't start with some messy concrete example and attempt to \"work upward\", in the hope that difficult-to-grasp abstract concepts will be made more palatable by relating them to \"real life\". If you do that, I'm liable to get stuck in the trees and not see the forest. Chances are, I won't have much trouble understanding the abstract concepts; \"real life\", on the other hand...

\n

...well, let's just say I prefer to start at the top and work downward, as a general rule. Tell me how the trees relate to the forest, rather than the other way around.

\n

Many people have found Eliezer's Intuitive Explanation of Bayesian Reasoning to be an excellent introduction to Bayes' theorem, and so I don't usually hesitate to recommend it to others. But for me personally, if I didn't know Bayes' theorem and you were trying to explain it to me, pretty much the worst thing you could do would be to start with some detailed scenario involving breast-cancer screenings. (And not just because it tarnishes beautiful mathematics with images of sickness and death, either!)

\n

So what's the right way to explain Bayes' theorem to me?

\n

Like this:

\n

We've got a bunch of hypotheses (states the world could be in) and we're trying to figure out which of them is true (that is, which state the world is actually in). As a concession to concreteness (and for ease of drawing the pictures), let's say we've got three (mutually exclusive and exhaustive) hypotheses -- possible world-states -- which we'll call H1, H2, and H3. We'll represent these as blobs in space:

\n

\"Figure

\n

                   Figure 0

\n


Now, we have some prior notion of how probable each of these hypotheses is -- that is, each has some prior probability. If we don't know anything at all that would make one of them more probable than another, they would each have probability 1/3. To illustrate a more typical situation, however, let's assume we have more information than that. Specifically, let's suppose our prior probability distribution is as follows: P(H1) = 30%, P(H2)=50%, P(H3) = 20%. We'll represent this by resizing our blobs accordingly:

\n

\"Figure

\n

                       Figure 1

\n

That's our prior knowledge. Next, we're going to collect some evidence and update our prior probability distribution to produce a posterior probability distribution. Specifically, we're going to run a test. The test we're going to run has three possible outcomes: Result A, Result B, and Result C. Now, since this test happens to have three possible results, it would be really nice if the test just flat-out told us which world we were living in -- that is, if (say) Result A meant that H1 was true, Result B meant that H2 was true, and Result 3 meant that H3 was true. Unfortunately, the real world is messy and complex, and things aren't that simple. Instead, we'll suppose that each result can occur under each hypothesis, but that the different hypotheses have different effects on how likely each result is to occur. We'll assume for instance that if Hypothesis  H1 is true, we have a 1/2 chance of obtaining Result A, a 1/3 chance of obtaining Result B, and a 1/6 chance of obtaining Result C; which we'll write like this:

\n

P(A|H1) = 50%, P(B|H1) = 33.33...%, P(C|H1) = 16.66...%

\n

and illustrate like this:

\n

 

\n

\"\"

\n

        Figure 2

\n

(Result A being represented by a triangle, Result B by a square, and Result C by a pentagon.)

\n

If Hypothesis H2 is true, we'll assume there's a 10% chance of Result A, a 70% chance of Result B, and a 20% chance of Result C:

\n

\"Figure

\n

              Figure 3

\n


(P(A|H2) = 10%, P(B|H2) = 70%, P(C|H2) = 20%)

\n

Finally, we'll say that if Hypothesis H3 is true, there's a 5% chance of Result A, a 15% chance of Result B, and an 80% chance of Result C:

\n

\"Figure

\n

              Figure 4

\n

(P(A|H3) = 5%, P(B|H3) = 15%, P(C|H3) = 80%)

\n

Figure 5 below thus shows our knowledge prior to running the test:

\n

 

\n

 

\n

\"\"

\n

                Figure 5

\n

 

\n

Note that we have now carved up our hypothesis-space more finely; our possible world-states are now things like \"Hypothesis H1 is true and Result A occurred\", \"Hypothesis H1 is true and Result B occurred\", etc., as opposed to merely \"Hypothesis H1 is true\", etc. The numbers above the slanted line segments -- the likelihoods of the test results, assuming the particular hypothesis -- represent what proportion of the total probability mass assigned to the hypothesis Hn is assigned to the conjunction of Hypothesis Hn and Result X; thus, since P(H1) = 30%, and P(A|H1) = 50%, P(H1 & A) is therefore 50% of 30%, or, in other words, 15%.

\n

(That's really all Bayes' theorem is, right there, but -- shh! -- don't tell anyone yet!)

\n


Now, then, suppose we run the test, and we get...Result A.

\n

What do we do? We cut off all the other branches:

\n

\"\"

\n

                Figure 6

\n

 

\n

So our updated probability distribution now looks like this:

\n

\"\"

\n

          Figure 7

\n


\n

...except for one thing: probabilities are supposed to add up to 100%, not 21%. Well, since we've conditioned on Result A, that means that the 21% probability mass assigned to Result A is now the entirety of our probability mass -- 21% is the new 100%, you might say. So we simply adjust the numbers in such a way that they add up to 100% and the proportions are the same:

\n

\"\"

\n

                      Figure 8

\n

There! We've just performed a Bayesian update. And that's what it looks like.
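
In numbers, the update we just performed is nothing more than this (using the priors and likelihoods stated above):

```python
joint = {'H1': 0.30 * 0.50,   # P(H1 & A) = 0.15
         'H2': 0.50 * 0.10,   # P(H2 & A) = 0.05
         'H3': 0.20 * 0.05}   # P(H3 & A) = 0.01

total = sum(joint.values())                     # 0.21 -- '21% is the new 100%'
posterior = {h: p / total for h, p in joint.items()}
print(posterior)  # roughly {'H1': 0.714, 'H2': 0.238, 'H3': 0.048}
```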

\n

 

\n

If, instead of Result A, we had gotten Result B,

\n

\"Figure

\n

                      Figure 9

\n


\n

then our updated probability distribution would have looked like this:

\n

\"\"

\n

                     Figure 10

\n

 

\n

Similarly, for Result C:

\n

\"\"

\n

               Figure 11

\n

Bayes' theorem is the formula that calculates these updated probabilities. Using H to stand for a hypothesis (such as H1, H2 or H3), and E a piece of evidence (such as Result A, Result B, or Result C), it says:

\n

P(H|E) = P(H)*P(E|H)/P(E)

\n

In words: to calculate the updated probability P(H|E), take the portion of the prior probability of H that is allocated to E (i.e. the quantity P(H)*P(E|H)), and calculate what fraction this is of the total prior probability of E (i.e. divide it by P(E)).

\n

What I like about this way of visualizing Bayes' theorem is that it makes the importance of prior probabilities -- in particular, the difference between P(H|E) and P(E|H) -- visually obvious. Thus, in the above example, we easily see that even though P(C|H3) is high (80%), P(H3|C) is much less high (around 51%) -- and once you have assimilated this visualization method, it should be easy to see that even more extreme examples (e.g. with P(E|H) huge and P(H|E) tiny) could be constructed.
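
To check that claim with the numbers from the figures (a small sketch; the exact fractions are the ones stated earlier in the post):

```python
from fractions import Fraction as F

prior      = {'H1': F(3, 10), 'H2': F(1, 2), 'H3': F(1, 5)}
likelihood = {'H1': {'A': F(1, 2),  'B': F(1, 3),  'C': F(1, 6)},
              'H2': {'A': F(1, 10), 'B': F(7, 10), 'C': F(1, 5)},
              'H3': {'A': F(1, 20), 'B': F(3, 20), 'C': F(4, 5)}}

def posterior(result):
    joint = {h: prior[h] * likelihood[h][result] for h in prior}
    total = sum(joint.values())  # P(result)
    return {h: joint[h] / total for h in joint}

print(posterior('C')['H3'])          # 16/31
print(float(posterior('C')['H3']))   # about 0.516, even though P(C|H3) = 0.8
```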

\n

Now let's use this to examine two tricky probability puzzles, the infamous Monty Hall Problem and Eliezer's Drawing Two Aces, and see how it illustrates the correct answers, as well as how one might go wrong.

\n

 

\n

The Monty Hall Problem

\n

The situation is this: you're a contestant on a game show seeking to win a car. Before you are three doors, one of which contains a car, and the other two of which contain goats. You will make an initial \"guess\" at which door contains the car -- that is, you will select one of the doors, without opening it. At that point, the host will open a goat-containing door from among the two that you did not select. You will then have to decide whether to stick with your original guess and open the door that you originally selected, or switch your guess to the remaining unopened door. The question is whether it is to your advantage to switch -- that is, whether the car is more likely to be behind the remaining unopened door than behind the door you originally guessed.

\n

(If you haven't thought about this problem before, you may want to try to figure it out before continuing...)

\n

 

\n

 

\n

The answer is that it is to your advantage to switch -- that, in fact, switching doubles the probability of winning the car.

\n

People often find this counterintuitive when they first encounter it -- where \"people\" includes the author of this post. There are two possible doors that could contain the car; why should one of them be more likely to contain it than the other?

\n

As it turns out, while constructing the diagrams for this post, I \"rediscovered\" the error that led me to incorrectly conclude that there is a 1/2 chance the car is behind the originally-guessed door and a 1/2 chance it is behind the remaining door the host didn't open. I'll present that error first, and then show how to correct it. Here, then, is the wrong solution:

\n

We start out with a perfectly correct diagram showing the prior probabilities:

\n

\"\"

\n

               Figure 12

\n

The possible hypotheses are Car in Door 1, Car in Door 2, and Car in Door 3; before the game starts, there is no reason to believe any of the three doors is more likely than the others to contain the car, and so each of these hypotheses has prior probability 1/3.

\n

The game begins with our selection of a door. That itself isn't evidence about where the car is, of course -- we're assuming we have no particular information about that, other than that it's behind one of the doors (that's the whole point of the game!). Once we've done that, however, we will then have the opportunity to \"run a test\" to gain some \"experimental data\": the host will perform his task of opening a door that is guaranteed to contain a goat. We'll represent the result Host Opens Door 1 by a triangle, the result Host Opens Door 2 by a square, and the result Host Opens Door 3 by a pentagon -- thus carving up our hypothesis space more finely into possibilities such as \"Car in Door 1 and Host Opens Door 2\" , \"Car in Door 1 and Host Opens Door 3\", etc:

\n

\"\"

\n

            Figure 13

\n


Before we've made our initial selection of a door, the host is equally likely to open either of the goat-containing doors. Thus, at the beginning of the game, the probability of each hypothesis of the form \"Car in Door X and Host Opens Door Y\" has a probability of 1/6, as shown. So far, so good; everything is still perfectly correct.

\n

Now we select a door; say we choose Door 2. The host then opens either Door 1 or Door 3, to reveal a goat. Let's suppose he opens Door 1; our diagram now looks like this:


\"\"

\n

            Figure 14

\n

But this shows equal probabilities of the car being behind Door 2 and Door 3!

\n

\"\"

\n

                   Figure 15

\n

Did you catch the mistake?

\n

Here's the correct version:

As soon as we selected Door 2, our diagram should have looked like this:

\n

\"\"

\n

                                Figure 16

\n

 

\n

With Door 2 selected, the host no longer has the option of opening Door 2; if the car is in Door 1, he must open Door 3, and if the car is in Door 3, he must open Door 1. We thus see that if the car is behind Door 3, the host is twice as likely to open Door 1 (namely, 100%) as he is if the car is behind Door 2 (50%); his opening of Door 1 thus constitutes some evidence in favor of the hypothesis that the car is behind Door 3. So, when the host opens Door 1, our picture looks as follows:

\n

\"\"

\n

               Figure 17

\n

 

\n

which yields the correct updated probability distribution:

\n

\"\"

\n

                Figure 18
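
For anyone who prefers empirical checks, here is a small Monte Carlo version of the same analysis (a sketch; the door numbering matches the example above, where we pick Door 2 and the host opens Door 1):

```python
import random

def trial():
    car = random.randint(1, 3)
    # We always guess Door 2; the host then opens a goat door among Doors 1 and 3.
    options = [d for d in (1, 3) if d != car]
    return car, random.choice(options)

stay = switch = 0
for _ in range(100_000):
    car, opened = trial()
    if opened != 1:
        continue            # condition on the host having opened Door 1
    if car == 2:
        stay += 1           # staying with Door 2 wins
    elif car == 3:
        switch += 1         # switching to Door 3 wins

total = stay + switch
print(stay / total, switch / total)  # approximately 1/3 and 2/3
```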

\n

 

\n

Drawing Two Aces

\n

Here is the statement of the problem, from Eliezer's post:

\n
\n


Suppose I have a deck of four cards:  The ace of spades, the ace of hearts, and two others (say, 2C and 2D).

You draw two cards at random.

(...)

Now suppose I ask you \"Do you have an ace?\"

You say \"Yes.\"

I then say to you:  \"Choose one of the aces you're holding at random (so if you have only one, pick that one).  Is it the ace of spades?\"

You reply \"Yes.\"

What is the probability that you hold two aces?

\n
\n


(Once again, you may want to think about it, if you haven't already, before continuing...)

\n

 

\n

 

\n

Here's how our picture method answers the question:

\n


Since the person holding the cards has at least one ace, the \"hypotheses\" (possible card combinations) are the five shown below:

\n

\"\"

\n

      Figure 19

\n

Each has a prior probability of 1/5, since there's no reason to suppose any of them is more likely than any other.

The \"test\" that will be run is selecting an ace at random from the person's hand, and seeing if it is the ace of spades. The possible results are:

\n

\"\"

\n

     Figure 20

\n

 

\n

Now we run the test, and get the answer \"YES\"; this puts us in the following situation:

\n

 

\n

\"\"

\n

     Figure 21

\n

 

\n

The total prior probability of this situation (the YES answer) is (1/10)+(1/5)+(1/5) = 1/2; thus, since 1/10 is 1/5 of 1/2 (that is, (1/10)/(1/2) = 1/5), our updated probability is 1/5 -- which happens to be the same as the prior probability. (I won't bother displaying the final post-update picture here.)

\n

What this means is that the test we ran did not provide any additional information about whether the person has both aces beyond simply knowing that they have at least one ace; we might in fact say that the result of the test is screened off by the answer to the first question (\"Do you have an ace?\").

\n


On the other hand, if we had simply asked \"Do you have the ace of spades?\", the diagram would have looked like this:

[Figure 22]

\n

 

\n

which, upon receiving the answer YES, would have become:

[Figure 23]

\n

The total probability mass allocated to YES is 3/5, and, within that, the specific situation of interest has probability 1/5; hence the updated probability would be 1/3.

\n

So a YES answer in this experiment, unlike the other, would provide evidence that the hand contains both aces; for if the hand contains both aces, the probability of a YES answer is 100% -- twice as large as it is in the contrary case (50%), giving a likelihood ratio of 2:1. By contrast, in the other experiment, the probability of a YES answer is only 50% even in the case where the hand contains both aces.
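(For completeness, here is the same enumeration for the \"Do you have the ace of spades?\" version; it is self-contained so it can be run on its own.)

from fractions import Fraction
from itertools import combinations

deck = ['AS', 'AH', '2C', '2D']
hands = [h for h in combinations(deck, 2) if 'AS' in h or 'AH' in h]
prior = Fraction(1, len(hands))   # 1/5 each, as before

# This time the question is simply whether the hand contains the AS.
yes_total = sum(prior for h in hands if 'AS' in h)               # 3/5
yes_both = sum(prior for h in hands if 'AS' in h and 'AH' in h)  # 1/5

print('P(YES) =', yes_total)
print('P(both aces | YES) =', yes_both / yes_total)   # 1/3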

\n


This is what people who try to explain the difference by uttering the opaque phrase \"a random selection was involved!\" are actually talking about: the difference between

[Figure 24]

\n

 

\n

and

[Figure 25].

\n

 

\n

 

\n

The method explained here is far from the only way of visualizing Bayesian updates, but I feel that it is among the most intuitive.

\n

 

\n

(I'd like to thank my sister, Vive-ut-Vivas, for help with some of the diagrams in this post.)

" } }, { "_id": "QXqfTDg4wM2PrDaKQ", "title": "Singularity Summit 2010 on Aug. 14-15 in San Francisco", "pageUrl": "https://www.lesswrong.com/posts/QXqfTDg4wM2PrDaKQ/singularity-summit-2010-on-aug-14-15-in-san-francisco", "postedAt": "2010-06-02T06:01:26.276Z", "baseScore": 13, "voteCount": 9, "commentCount": 22, "url": null, "contents": { "documentId": "QXqfTDg4wM2PrDaKQ", "html": "

The Singularity Summit 2010 will be held on August 14th and 15th at the Hyatt Regency in San Francisco, and will feature Ray Kurzweil and famed Traditional Rationalist James Randi as speakers, in addition to numerous others. During last year's Summit (in New York City), there was a very large Less Wrong meetup with dozens of attendees, and it is quite possible that there will be one again this year. Anyone interested in planning such a meetup (not just attending) should contact the Singularity Institute at institute@intelligence.org. The Singularity Summit press release follows after the jump.

\n

\n

Singularity Summit 2010 returns to San Francisco, explores intelligence augmentation
Speakers include Futurist Ray Kurzweil, Magician-Skeptic James Randi

\n

Will it one day become possible to boost human intelligence using brain implants, or create an artificial intelligence smarter than Einstein? In a 1993 paper presented to NASA, science fiction author and mathematician Vernor Vinge called such a hypothetical event a “Singularity”, saying “From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye”. Vinge pointed out that intelligence enhancement could lead to “closing the loop” between intelligence and technology, creating a positive feedback effect.

\n

This August 14-15, hundreds of AI researchers, robotics experts, philosophers, entrepreneurs, scientists, and interested laypeople will converge in San Francisco to address the Singularity and related issues at the only conference on the topic, the Singularity Summit. Experts in fields including animal intelligence, artificial intelligence, brain-computer interfacing, tissue regeneration, medical ethics, computational neurobiology, augmented reality, and more will share their latest research and explore its implications for the future of humanity.

\n

“This year, the conference shifts to a focus on neuroscience, bioscience, cognitive enhancement, and other explorations of what Vernor Vinge called ‘intelligence amplification’ — the other route to the Singularity,” said Michael Vassar, president of the Singularity Institute, which is hosting the event.

\n

Irene Pepperberg, author of “Alex & Me,” who has pushed the frontier of animal intelligence with her research on African Gray Parrots, will explore the ethical and practical implications of non-human intelligence enhancement and of the creation of new intelligent life less powerful than ourselves. Futurist-inventor Ray Kurzweil will discuss reverse-engineering the brain and his forthcoming book, How the Mind Works and How to Build One. Allan Snyder, Director, Centre for the Mind at the University of Sydney, will explore the use of transcranial magnetic stimulation for the enhancement of narrow cognitive abilities. Joe Tsien will talk about the smarter rats and mice that he created by tuning the molecular substrate of the brain’s learning mechanism. Steve Mann, “the world’s first cyborg,” will demonstrate his latest geek-chic inventions: wearable computers now used by almost 100,000 people.

\n

Other speakers will include magician-skeptic and MacArthur Genius Award winner James Randi; Gregory Stock (Redesigning Humans), former Director of the Program on Medicine, Technology, and Society at UCLA’s School of Public Health; Terry Sejnowski, Professor and Laboratory Head, Salk Institute Computational Neurobiology Laboratory, who believes we are just ten years away from being able to upload ourselves; Ellen Heber-Katz, Professor, Molecular and Cellular Oncogenesis Program at The Wistar Institute, who is investigating the molecular basis of wound regeneration in mutant mice, which can regenerate limbs, hearts, and spinal cords; Anita Goel, MD, physicist, and CEO of nanotechnology company Nanobiosym; and David Hanson, Founder & CEO, Hanson Robotics, who is creating the world’s most realistic humanoid robots.

\n

Interested readers can watch videos from past summits and register at www.singularitysummit.com.

" } }, { "_id": "mxmGS9AFg2w8Ee3ZL", "title": "Rationality quotes: June 2010", "pageUrl": "https://www.lesswrong.com/posts/mxmGS9AFg2w8Ee3ZL/rationality-quotes-june-2010", "postedAt": "2010-06-01T18:07:17.716Z", "baseScore": 7, "voteCount": 5, "commentCount": 223, "url": null, "contents": { "documentId": "mxmGS9AFg2w8Ee3ZL", "html": "

This is our monthly thread for collecting these little gems and pearls of wisdom, rationality-related quotes you've seen recently, or had stored in your quotesfile for ages, and which might be handy to link to in one of our discussions.

\n" } }, { "_id": "aypPsdGCpYMysNrvi", "title": "Open Thread: June 2010", "pageUrl": "https://www.lesswrong.com/posts/aypPsdGCpYMysNrvi/open-thread-june-2010", "postedAt": "2010-06-01T18:04:48.504Z", "baseScore": 9, "voteCount": 6, "commentCount": 663, "url": null, "contents": { "documentId": "aypPsdGCpYMysNrvi", "html": "

To whom it may concern:

\n

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

\n

(After the critical success of part II, and the strong box office sales of part III in spite of mixed reviews, will part IV finally see the June Open Thread jump the shark?)

" } }, { "_id": "9sguwESkteCgqFMbj", "title": "Seven Shiny Stories", "pageUrl": "https://www.lesswrong.com/posts/9sguwESkteCgqFMbj/seven-shiny-stories", "postedAt": "2010-06-01T00:43:11.646Z", "baseScore": 145, "voteCount": 137, "commentCount": 34, "url": null, "contents": { "documentId": "9sguwESkteCgqFMbj", "html": "

It has come to my attention that the contents of the luminosity sequence were too abstract, to the point where explicitly fictional stories illustrating the use of the concepts would be helpful.  Accordingly, there follow some such stories.

\n

1. Words (an idea from Let There Be Light, in which I advise harvesting priors about yourself from outside feedback)

\n

Maria likes compliments.  She loves compliments.  And when she doesn't get enough of them to suit her, she starts fishing, asking plaintive questions, making doe eyes to draw them out.  It's starting to annoy people.  Lately, instead of compliments, she's getting barbs and criticism and snappish remarks.  It hurts - and it seems to hurt her more than it hurts others when they hear similar things.  Maria wants to know what it is about her that would explain all of this.  So she starts taking personality tests and looking for different styles of maintaining and thinking about relationships, looking for something that describes her.  Eventually, she runs into a concept called \"love languages\" and realizes at once that she's a \"words\" person.  Her friends aren't trying to hurt her - they don't realize how much she thrives on compliments, or how deeply insults can cut when they're dealing with someone who transmits affection verbally.  Armed with this concept, she has a lens through which to interpret patterns of her own behavior; she also has a way to explain herself to her loved ones and get the wordy boosts she needs.

\n

2. Widgets (an idea from The ABC's of Luminosity, in which I explain the value of correlating affect, behavior, and circumstance)

\n

Tony's performance at work is suffering.  Not every day, but most days, he's too drained and distracted to perform the tasks that go into making widgets.  He's in serious danger of falling behind his widget quota and needs to figure out why.  Having just read a fascinating and brilliantly written post on Less Wrong about luminosity, he decides to keep track of where he is and what he's doing when he does and doesn't feel the drainedness.  After a week, he's got a fairly robust correlation: he feels worst on days when he doesn't eat breakfast, which reliably occurs when he's stayed up too late, hit the snooze button four times, and had to dash out the door.  Awkwardly enough, having been distracted all day tends to make him work more slowly at making widgets, which makes him less physically exhausted by the time he gets home and enables him to stay up later.  To deal with that, he starts going for long runs on days when his work hasn't been very tiring, and pops melatonin; he easily drops off to sleep when his head hits the pillow at a reasonable hour, gets sounder sleep, scarfs down a bowl of Cheerios, and arrives at the widget factory energized and focused.

\n

3. Text (an idea from Lights, Camera, Action!, in which I advocate aggressive and frequent introspection to collect as much data as possible)

\n

Dot reads about an experiment in which the subjects receive phone calls at random times and must tell researchers how happy they feel.  Apparently the experiment turned up some really suboptimal patterns of behavior, and Dot's curious about what she'd learn that she could use to improve her life.  She gets a friend to arrange delayed text messages to be sent to her phone at intervals supplied by a random number generator, and promises herself that she'll note what she's doing, thinking, and feeling at the moment she receives the text.  She soon finds that she doesn't enjoy watching TV as much as she thinks she does; that it's probably worth the time to cook dinner rather than heating up something in the microwave because it's considerably tastier; that she can't really stand her cubicle neighbor; and that she thinks about her ex more than she'd have ever admitted.  These thoughts were usually too fleeting to turn into actions; if she tried to remember them hours later, they'd be folded into some large story in which these momentary emotions were secondary.  But treating them as notable data points to be taken into account gives them staying power.  Dot starts keeping the TV remote under the book she's reading to remind herself what entertainment is more fulfilling.  She buys fewer frozen meals and makes sure she's stocked up on staple ingredients.  She agrees to swap cubicles with a co-worker down the hall.  There's not all that much she can do about the ex, but at least when her friends ask her if everything's okay between them, she can answer more accurately.

\n

4. Typing (an idea from The Spotlight, in which I encourage extracting thoughts into a visible or audible form so as to allow their inspection without introspection)

\n

George is trying to figure out who he is.  He's trying really hard.  But when he tries to explain his behaviors and thoughts in terms of larger patterns that could answer the question, they inevitably sound suspiciously revisionist and self-serving, like he's conveniently forgetting some parts and artificially inflating others.  He thinks he's generous, fun at parties, a great family man, loyal, easygoing.  George decides that what he needs to do is catch what he's thinking at the moment he's thinking it, honestly and irrevocably, so he'll have an uncorrupted data set to work with.  He fires up a word processor and starts typing, stream of consciousness.  For a few paragraphs, it's mostly \"here I am, writing what I think\" and \"this is kind of dumb, I wonder if anything will come of it\", but eventually that gets old, and content starts to come out.  Soon George has a few minutes of inner monologue written down.  He writes the congratulatory things he thinks about himself, but also notes in parentheses the times he's acted contrary to these nice patterns (he took three helpings of cake that one time when there were fewer slices than guests, he spent half of the office celebration on his cellphone instead of participating, he missed his daughter's last birthday, he dropped a friend over a sports rivalry, he blew up when a co-worker reminded him one too many times to finish that spreadsheet).  George writes the bad habits and vices he demonstrates, too.  Most importantly, he resists the urge to hit backspace, although he freely contradicts himself if there's something he wants to correct.  Then he saves the document, squirrels it away in a folder, and waits a week.  The following Tuesday, he goes over it like a stranger had written it and notes what he'd think of this stranger, and what he'd advise him to do.

\n

5. Contradiction (an idea from Highlights and Shadows, in which I explain endorsement and repudiation of one's thoughts and dispositions)

\n

Penny knows she's not perfect.  In fact, some of her traits and projects seem to outright contradict one another, so she really knows it.  She wants to eat better, but she just loves pizza; she's trying to learn anger management, but sometimes people do things that really are wrong and it seems only suitable that she be upset with them; she's working on her tendency to nag her boyfriend because she knows it annoys him, but if he can't learn to put the toilet seat down, maybe he deserves to be annoyed.  Penny decides to take a serious look at the contradictions and make decisions about which \"side\" she's on.  Eventually, she concludes that if she's honest with herself, a life without pizza seems bleak and unrewarding; she'll make that her official exception to the rule, and work harder to eat better in every other way without the drag on motivation caused by withholding her one favorite food.  On reflection, being angry - even at people who really do wrong things - isn't helping her or them, and so she throws herself into anger management classes with renewed vigor, looking for other, more productive channels to turn her moral evaluation towards.  And - clearly - the nagging isn't helping its ostensible cause either.  She doesn't endorse that, but she's not going to let her boyfriend's uncivilized behavior slide either.  She'll agree to stop nagging when he slips up and hope this inspires him to remember more often.

\n

6. Community (an idea from City of Lights, in which I propose dividing yourself into subagents to tackle complex situations)

\n

Billy has the chance to study abroad in Australia for a year, and he's so mixed up about it, he can barely think straight.  He can't decide if he wants to go, or why, or how he feels about the idea of missing it.  Eventually, he decides this would be far easier if all the different nagging voices and clusters of desire were given names and allowed to talk to each other.  He identifies the major relevant sub-agents as \"Clingyness\", which wants to stay in known surroundings; \"Adventurer\", which wants to seek new experiences and learn about the world; \"Obedience to Advisor\", which wants to do what Prof. So-and-So recommends; \"Academic\", who wants to do whatever will make Billy's résumé more impressive to future readers; and \"Fear of Spiders\", which would happily go nearly anywhere but the home of the Sydney funnelweb and is probably responsible for Billy's spooky dreams.  When these voices have a chance to compete with each other, they expose questionable motivations: for instance, Academic determines that Prof. So-and-So only recommends staying at Billy's home institution because Billy is her research assistant, not because it would further Billy's intellectual growth, which reduces the comparative power of Obedience to Advisor.  Adventurer renders Fear of Spiders irrelevant by pointing out that the black widow is native to the United States.  Eventually, Academic and Adventurer, in coalition, beat out Clingyness (whom Billy is not strongly inclined to identify with), and Billy buys the ticket to Down Under.

\n

7. Experiment (an idea from Lampshading, where I describe how to make changes in oneself by setting oneself up to succeed at operating in accordance with the change, and determining what underlies the disliked behavior)

\n

Eva bursts into tears whenever she has a hard problem to deal with, like a stressful project at work or above-average levels of social drama amongst her friends.  This is, of course, completely unproductive - in fact, in the case of drama, it worsens things - and Eva wants to stop it.  First, she has to figure out why it happens.  Are the tears caused by sadness?  It turns out not - she can be brought to tears even by things that don't make her sad.  The latest project from work was exciting and a great opportunity and it still made her cry.  After a little work sorting through lists of things that make her cry, Eva concludes that it's linked to how much pressure she feels to solve the problem: for instance, if she's part of a team that's assigned a project, she's less likely to react this way than if she's operating solo, and if her friends embroiled in drama turn to her for help, she'll wind up tearful more often than if she's just a spectator with no special responsibility.  Now she needs to set herself up not to cry.  She decides to do this by making sure she has social support in her endeavors: if the boss gives her an assignment, she says to the next employee over, \"I should be able to handle this, but if I need help, can I count on you?\"  That way, she can think of the task as something that isn't entirely on her.  When next social drama rears its head, Eva reconceptualizes her part in the solution as finding and voicing the group's existing consensus, rather than personally creating a novel way to make everything better.  While this new approach reduces the incidence of stress tears, it doesn't disassemble the underlying architecture that causes the tendency in the first place.  That's more complicated to address: Eva spends some time thinking about why responsibility is such an emotional thing for her, and looks for ways to duplicate the sense of support she feels when she has help in situations where she doesn't.  Eventually, it is not much of a risk that Eva will cry if presented with a problem to solve.

" } }, { "_id": "3aiAjSPZccso5WgFd", "title": "Cultivating our own gardens", "pageUrl": "https://www.lesswrong.com/posts/3aiAjSPZccso5WgFd/cultivating-our-own-gardens", "postedAt": "2010-05-31T20:05:24.763Z", "baseScore": 10, "voteCount": 19, "commentCount": 49, "url": null, "contents": { "documentId": "3aiAjSPZccso5WgFd", "html": "

This is a post about moral philosophy, approached with a mathematical metaphor.

\n

Here's an interesting problem in mathematics.  Let's say you have a graph, made up of vertices and edges, with weights assigned to the edges.  Think of the vertices as US cities and the edges as roads between them; the weight on each road is the length of the road. Now, knowing only this information, can you draw a map of the US on a sheet of paper? In mathematical terms, is there an isometric embedding of this graph in two-dimensional Euclidean space?

\n

When you think about this for a minute, it's clear that this is a problem about reconciling the local and the global.  Start with New York and all its neighboring cities.  You have a sort of star shape.  You can certainly draw this on the plane; in fact, you have many degrees of freedom; you can arbitrarily pick one way to draw it.  Now start adding more cities and more roads, and eventually the degrees of freedom diminish.  If you made the wrong choices earlier on, you might paint yourself in a corner and have no way to keep all the distances consistent when you add a new city.  This is known as a \"synchronization problem.\"  Getting it to work locally is easy; getting all the local pieces reconciled with each other is hard.

\n

This is a lovely problem and some acquaintances of mine have written a paper about it.  (http://www.math.princeton.edu/~mcucurin/Sensors_ASAP_TOSN_final.pdf)  I'll pick out some insights that seem relevant to what follows.  First, some obvious approaches don't work very well.  It might be thought we want to optimize over all possible embeddings, picking the one that has the lowest error in approximating distances between cities.  You come up with a \"penalty function\" that's some sort of sum of errors, and use standard optimization techniques to minimize it.  The trouble is, these approaches tend to work spottily -- in particular, they sometimes pick out local rather than global optima (so that the error can be quite high after all.) 
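(For the curious, here is roughly what that naive penalty-function approach looks like in code. The four \"cities\" and road lengths are invented for illustration, and scipy's general-purpose optimizer stands in for the \"standard optimization techniques\"; nothing here reflects the method of the paper just linked, which does something smarter.)

import numpy as np
from scipy.optimize import minimize

# Toy data: four 'cities' and the road lengths between some of them.
# These numbers are made up, and happen to be consistent with a flat map,
# so the error can in principle be driven to essentially zero.
cities = ['A', 'B', 'C', 'D']
roads = {('A', 'B'): 3.0, ('A', 'C'): 4.0, ('B', 'C'): 5.0,
         ('B', 'D'): 4.0, ('C', 'D'): 3.0}
index = {c: i for i, c in enumerate(cities)}

def penalty(flat):
    # Sum of squared errors between the given road lengths and the
    # distances in the candidate drawing.
    pts = flat.reshape(-1, 2)
    total = 0.0
    for (u, v), length in roads.items():
        total += (np.linalg.norm(pts[index[u]] - pts[index[v]]) - length) ** 2
    return total

rng = np.random.default_rng(0)
start = rng.standard_normal(2 * len(cities))   # random initial placement
result = minimize(penalty, start)
print('residual error:', result.fun)
print('coordinates:')
print(result.x.reshape(-1, 2))

On a toy example like this one the optimizer usually finds an essentially perfect drawing, but on larger, sparser graphs different random starting positions routinely end up in different local minima, which is the failure mode described above.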

\n

The approach in the paper I linked is different. We break the graph into overlapping smaller subgraphs, so small that they can only be embedded in one way (that's called rigidity) and then \"stitch\" them together consistently.  The \"stitching\" is done with a very handy trick involving eigenvectors of sparse matrices.  But the point I want to emphasize here is that you have to look at the small scale, and let all the little patches embed themselves as they like, before trying to reconcile them globally.

\n

Now, rather daringly, I want to apply this idea to ethics.  (This is an expansion of a post people seemed to like: http://lesswrong.com/lw/1xa/human_values_differ_as_much_as_values_can_differ/1y )

\n

The thing is, human values differ enormously.  The diversity of values is an empirical fact.  The Japanese did not have a word for \"thank you\" until the Portuguese gave them one; this is a simple example, but it absolutely shocked me, because I thought \"thank you\" was a universal concept.  It's not.    (edited for lack of fact-checking.) And we do not all agree on what virtues are, or what the best way to raise children is, or what the best form of government is.  There may be no principle that all humans agree on -- dissenters who believe that genocide is a good thing may be pretty awful people, but they undoubtedly exist.  Creating the best possible world for humans is a synchronization problem, then -- we have to figure out a way to balance values that inevitably clash.  Here, nodes are individuals, each individual is tied to its neighbors, and a choice of embedding is a particular action.  The worse the embedding near an individual fits the \"true\" underlying manifold, the greater the \"penalty function\" and the more miserable that individual is, because the action goes against what he values.

\n

If we can extend the metaphor further, this is a problem for utilitarianism.  Maximizing something globally -- say, happiness -- can be a dead end.  It can hit a local maximum -- the maximum for those people who value happiness -- but do nothing for the people whose highest value is loyalty to their family, or truth-seeking, or practicing religion, or freedom, or martial valor.  We can't really optimize, because a lot of people's values are other-regarding: we want Aunt Susie to stop smoking, because of the principle of the thing.  Or more seriously, we want people in foreign countries to stop performing clitoridectomies, because of the principle of the thing.  And Aunt Susie or the foreigners may feel differently.  When you have a set of values that extends to the whole world, conflict is inevitable.

\n

The analogue to breaking down the graph is to keep values local.  You have a small star-shaped graph of people you know personally and actions you're personally capable of taking.  Within that star, you define your own values: what you're ready to cheer for, work for, or die for.  You're free to choose those values for yourself -- you don't have to drop them because they're perhaps not optimal for the world's well-being.  But beyond that radius, opinions are dangerous: both because you're more ignorant about distant issues, and because you run into this problem of globally reconciling conflicting values.  Reconciliation is only possible if everyone's minding their own business, if things are really broken down into rigid components.  It's something akin to what Thomas Nagel said against utilitarianism:

\n

\"Absolutism is associated with a view of oneself as a small being interacting with others in a large world.  The justifications it requires are primarily interpersonal. Utilitarianism is associated with a view of oneself as a benevolent bureaucrat distributing such benefits as one can control to countless other beings, with whom one can have various relations or none.  The justifications it requires are primarily administrative.\" (Mortal Questions, p. 68.)

\n

Anyhow, trying to embed our values on this dark continent of a manifold seems to require breaking things down into little local pieces. I think of that as \"cultivating our own gardens,\" to quote Candide. I don't want to be so confident as to have universal ideologies, but I think I may be quite confident and decisive in the little area that is mine: my personal relationships; my areas of expertise, such as they are; my own home and what I do in it; everything that I know I love and is worth my time and money; and bad things that I will not permit to happen in front of me, so long as I can help it.  Local values, not global ones. 

\n

Could any AI be \"friendly\" enough to keep things local?
 

" } }, { "_id": "vsmiC3hzXWn4iQH9F", "title": "London UK, Saturday 2010-07-03: \"How to think rationally about the future\"", "pageUrl": "https://www.lesswrong.com/posts/vsmiC3hzXWn4iQH9F/london-uk-saturday-2010-07-03-how-to-think-rationally-about", "postedAt": "2010-05-31T15:23:20.972Z", "baseScore": 14, "voteCount": 11, "commentCount": 21, "url": null, "contents": { "documentId": "vsmiC3hzXWn4iQH9F", "html": "

Roko and I will be giving a presentation about LessWrong-style thinking to the UK Transhumanist Association on the afternoon of Saturday 3 July.  Here's the official announcement:

\n

Title: \"How to think rationally about the future\"

\n

2pm-4pm, Saturday 3rd July. [but see above]

\n

Room 416
Fourth floor
Birkbeck College
Torrington Square
LONDON
WC1E 7HX

\n

Speakers: Paul Crowley and Roko Mijic

\n

About the talk:

\n

Over the past forty years, science has built up a substantial body of experimental evidence that highlights dozens of alarming systematic failings in our capacity for reason. These errors are especially dangerous in an area as difficult to think about as the future of humanity, where deluding oneself is tempting and the \"reality check\" won't arrive until too late.

\n

How can we form accurate beliefs about the future in the face of these considerable obstacles? We'll outline ways of identifying and correcting cognitive biases, in particular the use of probability theory to quantify and manipulate uncertainty, and then apply these improved methods to try to paint a more accurate picture of what we all have to look forward to in the 21st century.

\n

About the speakers:

\n

Paul Crowley is a cryptographer and computer programmer whose work includes breaks in ciphers designed by Cisco and by Bruce Schneier. His website is http://www.ciphergoth.org

\n

Roko Mijic graduated from the University of Cambridge with a BA in Mathematics, and the Certificate of Advanced Study in Mathematics. He spent a year doing research into the foundations of knowledge representation at the University of Edinburgh and holds an MSc in informatics. He is currently an advisor for the Singularity Institute for Artificial Intelligence.

\n

Both speakers are contributors to the community website for refining the art of human rationality, http://LessWrong.com

\n

Further details:

There's no charge to attend this meeting, and everyone is welcome.

There will be plenty of opportunity to ask questions and to make comments.

Discussion will continue after the event, in a nearby pub, for those who are able to stay.

Why not join some of the UKH+ regulars for a drink and/or light lunch beforehand, any time after 12.30pm, in The Marlborough Arms, 36 Torrington Place, London WC1E 7HJ. To find us, look out for a table where there's a copy of the book \"The Singularity Is Near\" displayed.

About the venue:

Room 416 is on the fourth floor (via the lift near reception) in the main Birkbeck College building, in Torrington Square (which is a pedestrian-only square). Torrington Square is about 10 minutes walk from either Russell Square or Goodge St tube stations.

\n

The broad plan is for me to open by talking about cognitive biases, including possibly a live demonstration of anchoring bias (which may go wrong but seems worth a go), followed by Roko talking about the implications for thinking about the future, after which we'll take questions. Hopefully we can encourage more careful rational thinking about futurism and get a few more folk participating here; would be great to see as many of you as possible, especially wearing LessWrong.com T-shirts :-)

\n

Also, this Sunday sees another LessWrong meetup near Holborn - see some of you there!

\n

(Updated with venue information and more from meetup announcement)

" } }, { "_id": "bbf4ZWwcPQkRijEpt", "title": "On Less Wrong traffic and new users -- and how you can help", "pageUrl": "https://www.lesswrong.com/posts/bbf4ZWwcPQkRijEpt/on-less-wrong-traffic-and-new-users-and-how-you-can-help", "postedAt": "2010-05-31T08:19:15.248Z", "baseScore": 28, "voteCount": 27, "commentCount": 31, "url": null, "contents": { "documentId": "bbf4ZWwcPQkRijEpt", "html": "

This is a breakdown of Less Wrong's recent new user traffic, data sourced from the Less Wrong Google Analytics account.

\n

67% StumbleUpon
16% Google
5.4% Reddit
3.6% Hacker News
3% Harry Potter story
0.7% Facebook
0.3% Overcoming Bias
4% \"The Long Tail\"

\n

The 16% for Google is artificially high because many of those hits are users that are using Google as an address bar by searching for Less Wrong.

\n

So we get an order of magnitude more traffic from Stumble Upon than anywhere else -- sometimes thousands of new users a day. Stumble Upon has been Less Wrong's biggest referrer of new users from the beginning of the site. That was surprising to me and I suspect it is also surprising to you. Some of our very best users, like Alicorn, came from Stumble Upon.

\n
Why does it matter?
\n
Imagine your life without Less Wrong... now realize that the overwhelming majority of humans go through their entire lives without ever thinking of Bayesianism, fallacies, how to actually change your mind, or even philosophical zombies. Seriously, try to picture your life without Less Wrong. An article that recently made the rounds on the internet claimed that one real way to make yourself happier was to imagine your life without something that you liked.
\n
We try to take existential risk seriously around these parts. Each marginal new user that reads anything on Less Wrong has a real chance of being the one that tips us from existential Loss to existential Win.
\n

What can you do?

\n
    \n
  1. Sign up for Stumble Upon and start \"thumbs up'ing\" or liking LW articles that you sincerely like and want to recommend to others. You could start by stumbling one of my favorite Less Wrong articles on the power of a superintelligence and the real meaning of making efficient use of sensory information.

    In order to get maximum Stumble power, you can't just stumble LW articles and only LW articles. You need to use Stumble Upon for a minute or two every now and then and vote up or down the random links it gives you. I know, it's annoying, but what are a few dust specks when we are talking about saving the world?
  2. \n
  3. Help our Google traffic by linking to Less Wrong using the word rationality. Less Wrong is the best web site out there on rationality. We should rank #1 on Google for rationality, not #57. At this point in Google's metaphorical paperclipping of the web, some evidence of effort going into increased inbound links is a sign of a high-quality site.
  4. \n
  5. When you stumble something on Less Wrong you like or post a link to Less Wrong on the greater Internet, post here. You will be rewarded with large amounts of karma and kudos. Also, cake.
  6. \n
\n

Thanks to Louie for help with this post.

\n
" } }, { "_id": "F3WKCNYx7oQEjiu9b", "title": "Negative photon numbers observed", "pageUrl": "https://www.lesswrong.com/posts/F3WKCNYx7oQEjiu9b/negative-photon-numbers-observed", "postedAt": "2010-05-31T08:02:33.978Z", "baseScore": -12, "voteCount": 19, "commentCount": 6, "url": null, "contents": { "documentId": "F3WKCNYx7oQEjiu9b", "html": "

A real life paradox observed by quantum physicists: when not being observed, there can be a negative number of photons present.

\n

The Economist summarizes the research. The scientific paper is available freely.  Wikipedia discusses the paradox at length.

" } }, { "_id": "895quRDaK6gR2rM82", "title": "Diseased thinking: dissolving questions about disease", "pageUrl": "https://www.lesswrong.com/posts/895quRDaK6gR2rM82/diseased-thinking-dissolving-questions-about-disease", "postedAt": "2010-05-30T21:16:19.449Z", "baseScore": 553, "voteCount": 456, "commentCount": 356, "url": null, "contents": { "documentId": "895quRDaK6gR2rM82", "html": "

Related to: Disguised Queries, Words as Hidden Inferences, Dissolving the Question, Eight Short Studies on Excuses

Today's therapeutic ethos, which celebrates curing and disparages judging, expresses the liberal disposition to assume that crime and other problematic behaviors reflect social or biological causation. While this absolves the individual of responsibility, it also strips the individual of personhood, and moral dignity

-- George Will, townhall.com

Sandy is a morbidly obese woman looking for advice.

Her husband has no sympathy for her, and tells her she obviously needs to stop eating like a pig, and would it kill her to go to the gym once in a while?

Her doctor tells her that obesity is primarily genetic, and recommends the diet pill orlistat and a consultation with a surgeon about gastric bypass.

Her sister tells her that obesity is a perfectly valid lifestyle choice, and that fat-ism, equivalent to racism, is society's way of keeping her down.

When she tells each of her friends about the opinions of the others, things really start to heat up.

Her husband accuses her doctor and sister of absolving her of personal responsibility with feel-good platitudes that in the end will only prevent her from getting the willpower she needs to start a real diet.

Her doctor accuses her husband of ignorance of the real causes of obesity and of the most effective treatments, and accuses her sister of legitimizing a dangerous health risk that could end with Sandy in hospital or even dead.

Her sister accuses her husband of being a jerk, and her doctor of trying to medicalize her behavior in order to turn it into a "condition" that will keep her on pills for life and make lots of money for Big Pharma.

Sandy is fictional, but similar conversations happen every day, not only about obesity but about a host of other marginal conditions that some consider character flaws, others diseases, and still others normal variation in the human condition. Attention deficit disorder, internet addiction, social anxiety disorder (as one skeptic said, didn't we used to call this "shyness"?), alcoholism, chronic fatigue, oppositional defiant disorder ("didn't we used to call this being a teenager?"), compulsive gambling, homosexuality, Asperger's syndrome, antisocial personality, even depression have all been placed in two or more of these categories by different people.

Sandy's sister may have a point, but this post will concentrate on the debate between her husband and her doctor, with the understanding that the same techniques will apply to evaluating her sister's opinion. The disagreement between Sandy's husband and doctor centers around the idea of "disease". If obesity, depression, alcoholism, and the like are diseases, most people default to the doctor's point of view; if they are not diseases, they tend to agree with the husband.

The debate over such marginal conditions is in many ways a debate over whether or not they are "real" diseases. The usual surface level arguments trotted out in favor of or against the proposition are generally inconclusive, but this post will apply a host of techniques previously discussed on Less Wrong to illuminate the issue.

What is Disease?

In Disguised Queries , Eliezer demonstrates how a word refers to a cluster of objects related upon multiple axes. For example, in a company that sorts red smooth translucent cubes full of vanadium from blue furry opaque eggs full of palladium, you might invent the word "rube" to designate the red cubes, and another "blegg", to designate the blue eggs. Both words are useful because they "carve reality at the joints" - they refer to two completely separate classes of things which it's practically useful to keep in separate categories. Calling something a "blegg" is a quick and easy way to describe its color, shape, opacity, texture, and chemical composition. It may be that the odd blegg might be purple rather than blue, but in general the characteristics of a blegg remain sufficiently correlated that "blegg" is a useful word. If they weren't so correlated - if blue objects were equally likely to be palladium-containing-cubes as vanadium-containing-eggs, then the word "blegg" would be a waste of breath; the characteristics of the object would remain just as mysterious to your partner after you said "blegg" as they were before.

"Disease", like "blegg", suggests that certain characteristics always come together. A rough sketch of some of the characteristics we expect in a disease might include:

1. Something caused by the sorts of thing you study in biology: proteins, bacteria, ions, viruses, genes.

2. Something involuntary and completely immune to the operations of free will

3. Something rare; the vast majority of people don't have it

4. Something unpleasant; when you have it, you want to get rid of it

5. Something discrete; a graph would show two widely separate populations, one with the disease and one without, and not a normal distribution.

6. Something commonly treated with science-y interventions like chemicals and radiation.

Cancer satisfies every one of these criteria, and so we have no qualms whatsoever about classifying it as a disease. It's a type specimen, the sparrow as opposed to the ostrich. The same is true of heart attack, the flu, diabetes, and many more.

Some conditions satisfy a few of the criteria, but not others. Dwarfism seems to fail (5), and it might get its status as a disease only after studies show that the supposed dwarf falls way out of normal human height variation. Despite the best efforts of transhumanists, it's hard to convince people that aging is a disease, partly because it fails (3). Calling homosexuality a disease is a poor choice for many reasons, but one of them is certainly (4): it's not necessarily unpleasant.

The marginal conditions mentioned above are also in this category. Obesity arguably sort-of-satisfies criteria (1), (4), and (6), but it would be pretty hard to make a case for (2), (3), and (5).

So, is obesity really a disease? Well, is Pluto really a planet? Once we state that obesity satisfies some of the criteria but not others, it is meaningless to talk about an additional fact of whether it "really deserves to be a disease" or not.

If it weren't for those pesky hidden inferences...

Hidden Inferences From Disease Concept

The state of the disease node, meaningless in itself, is used to predict several other nodes with non-empirical content. In English: we make value decisions based on whether we call something a "disease" or not.

If something is a real disease, the patient deserves our sympathy and support; for example, cancer sufferers must universally be described as "brave". If it is not a real disease, people are more likely to get our condemnation; for example Sandy's husband who calls her a "pig" for her inability to control her eating habits. The difference between "shyness" and "social anxiety disorder" is that people with the first get called "weird" and told to man up, and people with the second get special privileges and the sympathy of those around them.

And if something is a real disease, it is socially acceptable (maybe even mandated) to seek medical treatment for it. If it's not a disease, medical treatment gets derided as a "quick fix" or an "abdication of personal responsibility". I have talked to several doctors who are uncomfortable suggesting gastric bypass surgery, even in people for whom it is medically indicated, because they believe it is morally wrong to turn to medicine to solve a character issue.

While a condition's status as a "real disease" ought to be meaningless as a "hanging node" after the status of all other nodes have been determined, it has acquired political and philosophical implications because of its role in determining whether patients receive sympathy and whether they are permitted to seek medical treatment.

If we can determine whether a person should get sympathy, and whether they should be allowed to seek medical treatment, independently of the central node "disease" or of the criteria that feed into it, we will have successfully unasked the question "are these marginal conditions real diseases" and cleared up the confusion.

Sympathy or Condemnation?

Our attitudes toward people with marginal conditions mainly reflect a deontologist libertarian (libertarian as in "free will", not as in "against government") model of blame. In this concept, people make decisions using their free will, a spiritual entity operating free from biology or circumstance. People who make good decisions are intrinsically good people and deserve good treatment; people who make bad decisions are intrinsically bad people and deserve bad treatment. But people who make bad decisions for reasons that are outside of their free will may not be intrinsically bad people, and may therefore be absolved from deserving bad treatment. For example, if a normally peaceful person has a brain tumor that affects areas involved in fear and aggression, they go on a crazy killing spree, and then they have their brain tumor removed and become a peaceful person again, many people would be willing to accept that the killing spree does not reflect negatively on them or open them up to deserving bad treatment, since it had biological and not spiritual causes.

Under this model, deciding whether a condition is biological or spiritual becomes very important, and the rationale for worrying over whether something "is a real disease" or not is plain to see. Without figuring out this extremely difficult question, we are at risk of either blaming people for things they don't deserve, or else letting them off the hook when they commit a sin, both of which, to libertarian deontologists, would be terrible things. But determining whether marginal conditions like depression have a spiritual or biological cause is difficult, and no one knows how to do it reliably.

Determinist consequentialists can do better. We believe it's biology all the way down. Separating spiritual from biological illnesses is impossible and unnecessary. Every condition, from brain tumors to poor taste in music, is "biological" insofar as it is encoded in things like cells and proteins and follows laws based on their structure.

But determinists don't just ignore the very important differences between brain tumors and poor taste in music. Some biological phenomena, like poor taste in music, are encoded in such a way that they are extremely vulnerable to what we can call social influences: praise, condemnation, introspection, and the like. Other biological phenomena, like brain tumors, are completely immune to such influences. This allows us to develop a more useful model of blame.

The consequentialist model of blame is very different from the deontological model. Because all actions are biologically determined, none are more or less metaphysically blameworthy than others, and none can mark anyone with the metaphysical status of "bad person" and make them "deserve" bad treatment. Consequentialists don't on a primary level want anyone to be treated badly, full stop; thus is it written: "Saddam Hussein doesn't deserve so much as a stubbed toe." But if consequentialists don't believe in punishment for its own sake, they do believe in punishment for the sake of, well, consequences. Hurting bank robbers may not be a good in and of itself, but it will prevent banks from being robbed in the future. And, one might infer, although alcoholics may not deserve condemnation, societal condemnation of alcoholics makes alcoholism a less attractive option.

So here, at last, is a rule for which diseases we offer sympathy, and which we offer condemnation: if giving condemnation instead of sympathy decreases the incidence of the disease enough to be worth the hurt feelings, condemn; otherwise, sympathize. Though the rule is based on philosophy that the majority of the human race would disavow, it leads to intuitively correct consequences. Yelling at a cancer patient, shouting "How dare you allow your cells to divide in an uncontrolled manner like this; is that the way your mother raised you??!" will probably make the patient feel pretty awful, but it's not going to cure the cancer. Telling a lazy person "Get up and do some work, you worthless bum," very well might cure the laziness. The cancer is a biological condition immune to social influences; the laziness is a biological condition susceptible to social influences, so we try to socially influence the laziness and not the cancer.

The question "Do the obese deserve our sympathy or our condemnation," then, is asking whether condemnation is such a useful treatment for obesity that its utility outweighs the disutility of hurting obese people's feelings. This question may have different answers depending on the particular obese person involved, the particular person doing the condemning, and the availability of other methods for treating the obesity, which brings us to...

The Ethics of Treating Marginal Conditions

If a condition is susceptible to social intervention, but an effective biological therapy for it also exists, is it okay for people to use the biological therapy instead of figuring out a social solution? My gut answer is "Of course, why wouldn't it be?", but apparently lots of people find this controversial for some reason.

In a libertarian deontological system, throwing biological solutions at spiritual problems might be disrespectful or dehumanizing, or a band-aid that doesn't affect the deeper problem. To someone who believes it's biology all the way down, this is much less of a concern.

Others complain that the existence of an easy medical solution prevents people from learning personal responsibility. But here we see the status-quo bias at work, and so can apply a preference reversal test. If people really believe learning personal responsibility is more important than being not addicted to heroin, we would expect these people to support deliberately addicting schoolchildren to heroin so they can develop personal responsibility by coming off of it. Anyone who disagrees with this somewhat shocking proposal must believe, on some level, that having people who are not addicted to heroin is more important than having people develop whatever measure of personal responsibility comes from kicking their heroin habit the old-fashioned way.

But the most convincing explanation I have read for why so many people are opposed to medical solutions for social conditions is a signaling explanation by Robin Hans...wait! no!...by Katja Grace. On her blog, she says:

...the situation reminds me of a pattern in similar cases I have noticed before. It goes like this. Some people make personal sacrifices, supposedly toward solving problems that don’t threaten them personally. They sort recycling, buy free range eggs, buy fair trade, campaign for wealth redistribution etc. Their actions are seen as virtuous. They see those who don’t join them as uncaring and immoral. A more efficient solution to the problem is suggested. It does not require personal sacrifice. People who have not previously sacrificed support it. Those who have previously sacrificed object on grounds that it is an excuse for people to get out of making the sacrifice. The supposed instrumental action, as the visible sign of caring, has become virtuous in its own right. Solving the problem effectively is an attack on the moral people.

A case in which some people eat less enjoyable foods and exercise hard to avoid becoming obese, and then campaign against a pill that makes avoiding obesity easy demonstrates some of the same principles.

There are several very reasonable objections to treating any condition with drugs, whether it be a classical disease like cancer or a marginal condition like alcoholism. The drugs can have side effects. They can be expensive. They can build dependence. They may later be found to be placebos whose efficacy was overhyped by dishonest pharmaceutical advertising. They may raise ethical issues with children, the mentally incapacitated, and other people who cannot decide for themselves whether or not to take them. But these issues do not magically become more dangerous in conditions typically regarded as "character flaws" rather than "diseases", and the same good-enough solutions that work for cancer or heart disease will work for alcoholism and other such conditions (but see here).

I see no reason why people who want effective treatment for a condition should be denied it or stigmatized for seeking it, whether it is traditionally considered "medical" or not.

Summary

People commonly debate whether social and mental conditions are real diseases. This masquerades as a medical question, but its implications are mainly social and ethical. We use the concept of disease to decide who gets sympathy, who gets blame, and who gets treatment.

Instead of continuing the fruitless "disease" argument, we should address these questions directly. Taking a determinist consequentialist position allows us to do so more effectively. We should blame and stigmatize people for conditions where blame and stigma are the most useful methods for curing or preventing the condition, and we should allow patients to seek treatment whenever it is available and effective.

" } }, { "_id": "vBQ2wmsDjBnGK28RK", "title": "Significance of Compression Rate Method", "pageUrl": "https://www.lesswrong.com/posts/vBQ2wmsDjBnGK28RK/significance-of-compression-rate-method", "postedAt": "2010-05-30T03:50:25.401Z", "baseScore": 4, "voteCount": 30, "commentCount": 60, "url": null, "contents": { "documentId": "vBQ2wmsDjBnGK28RK", "html": "

Summary: The significance of the Compression Rate Method (CRM) is that it justifies a form of empirical inquiry into aspects of reality that have previously resisted systematic interrogation. Some examples of potential investigations are described. A key hypothesis is discussed, and the link between empirical science and lossless data compression is emphasized.

\n

In my previous post, the protagonist Sophie developed a modified version of the scientific method. It consists of the following steps:

\n
    \n
  1. Obtain a large database T related to a phenomenon of interest.
  2. \n
  3. Develop a theory of the phenomenon, and instantiate the theory as a compression program.
  4. \n
  5. Test the theory by invoking the compressor on T and measuring the net codelength achieved (encoded data plus length of compressor).
  6. \n
  7. Given two rival theories of the phenomenon, prefer the one that achieves a shorter net codelength.
  8. \n
\n

This modified version preserves two of the essential attributes of the traditional method. First, it employs theoretical speculation, but guides and constrains that speculation using empirical observations. Second, it permits Strong Inference by allowing the field to make decisive comparisons between rival theories.
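(To make steps 3 and 4 concrete, here is a deliberately toy illustration in Python, with two off-the-shelf compressors standing in for rival scientific theories. A real benchmark would also add the length of the compressor itself to each total; that term is omitted here only to keep the sketch short.)

import bz2
import zlib

# A stand-in for the database T.  A real target would be far larger.
T = ('John went to the liquor store and bought a bottle of wine. ' * 500).encode()

# Two rival 'theories', each packaged as a compression program.
theories = {
    'theory A (zlib)': lambda data: zlib.compress(data, 9),
    'theory B (bz2)': lambda data: bz2.compress(data, 9),
}

codelengths = {}
for name, compress in theories.items():
    codelengths[name] = len(compress(T))
    print('%s: %d bytes -> %d bytes' % (name, len(T), codelengths[name]))

print('Preferred theory:', min(codelengths, key=codelengths.get))

Whichever program emits fewer bytes on the agreed-upon target is, by this standard, the better theory of that data.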

\n

The key difference between the CRM and the traditional method is that the former does not depend on the use of controlled experiments. For that reason, it justifies inquiries into aspects of empirical reality that have never before been systematically interrogated. The kind of scientific theories that are tested by the CRM depend on the type of measurements in the database target T. If T contains measurements related to physical experiments, the theories of physics will be necessary to compress it. Other types of data lead to other types of science. Consider the following examples:

\n

\n
    \n
  1. Set up a camera next to a highway, and record the stream of passing cars. To compress the resulting data, you will need to develop a computational understanding of the visual appearance of automobiles. You will need theories of hubcaps, windshields, license plates, car categories, and so on.
  2. \n
  3. Position some microphones in the tops of trees and start recording. A major source of variation in the resulting data will be bird vocalization. To compress the data, you will need to find ways to differentiate between bird songs and bird calls, tools to identify species-characteristic vocalizations, and maps showing the typical ranges of various species. In other words, this type of inquiry will be a computational version of the traditional study of bird vocalization carried out by ornithologists.
  4. \n
  5. Construct a database by using large quantities of English text. To compress this database you will need an advanced computational understanding of English. You will need dictionaries, rules of grammar, word-sense disambiguation tools, and, more generally, theories of linguistics.
  6. \n
  7. Convince Mark Zuckerberg to give you the Facebook image database. One obvious property of this dataset is that it contains an enormous number of faces. To compress it, you will need theories of the appearance of faces. These theories will be highly related to work on face modeling in graphics - see here for example.
  8. \n
  9. Generate a huge database of economic data such as home prices, interest and exchange rate fluctuations, business inventories and sales, unemployment and welfare applications, and so on. To compress this database, you will need theories of economics.
  10. \n
\n

It should be emphasized that when in the above list it says \"You will need theories of X\", this simultaneously means that \"You can test and refine theories of X\", and \"You can prove the superiority of your pet theory of X\" by demonstrating the codelengths it achieves on an appropriate dataset. So if you are a linguist and you want to demonstrate the validity of X-bar theory, you build an X-bar compressor and test it on the large text database. If you are an economist and you want to prove the truth of Austrian Business Cycle Theory, you build ABCT into a compressor and invoke it on the economics database. If a theory can't be packaged into a compressor for some real world dataset, then it's probably not scientific anyway (more later on the problem of demarcation).

\n

(It's also worth noting the dedication to truth, and the simultaneous contempt for petty academic affiliation games, indicated by a rigorous adherence to the compression principle. If you develop a new theory of linguistics and use it to set a new record on the benchmark text database, I will hail you as a great linguist. I will publish your papers in my journal, nominate you for awards, and approve your grant applications. It does not matter if you are a teenage college dropout living with your parents.)

\n

The inquiries described in the above list make an important implicit assumption, which can be called the Reusability Hypothesis:

\n
The abstractions useful for practical applications are also useful for compression.
\n

 

\n

Thus one very practical application in relation to the Facebook database is the detection and recognition of faces. This application depends on the existence of a \"face\" abstraction. So the hypothesis implies that this face abstraction will be useful for compression as well. Similarly, with regards to the ornithology example, one can imagine that the ability to recognize bird song would be very useful to bird-watchers and environmentalists, who might want to monitor the activity, population fluctuations, and migration patterns of certain species. Here the Reusability Hypothesis implies that the ability to recognize bird-song will also be useful to compress the treetop sound database.

\n

The linguistics example is worth examining because of its connection to a point made in the preface about overmathematization and the distinction of complex deduction vs. complex induction. The field of computational linguistics is highly mathematized, and one can imagine that in principle some complex mathematics might be useful to achieve text compression. But by far the simplest, and probably the most powerful, tool for text compression is just a dictionary. Consider the following sentence:

\n
John went to the liquor store and bought a bottle of _______ .
\n

 

\n

Now, if a compressor knows nothing about English text, it will have to encode the new word letter-by-letter, for a cost of N·log(26) bits, where N is the length of the word (I assume N is encoded separately, at a basically fixed cost). But a compressor equipped with a dictionary will have to pay only log(W_N) bits, where W_N is the number of words of length N in the dictionary. This is a substantial savings, since W_N is much smaller than 26^N. Of course, more advanced techniques, such as methods that take into account part-of-speech information, will lead to further improvement.
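
Here is that arithmetic worked out in a short Python sketch. The dictionary size used (roughly 4,000 four-letter English words) is an illustrative assumption, not a measured figure.

```python
# Worked example of the codelength comparison above. The count of four-letter
# words is an illustrative assumption.
import math

N = 4                      # length of the unknown word, e.g. "wine"
W_N = 4000                 # assumed number of N-letter words in the dictionary

naive_bits = N * math.log2(26)      # letter-by-letter encoding
dictionary_bits = math.log2(W_N)    # index into the length-N word list

print(f"letter-by-letter: {naive_bits:.1f} bits")      # about 18.8 bits
print(f"dictionary index: {dictionary_bits:.1f} bits")  # about 12.0 bits
```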

\n

The point of the above example is that the dictionary is highly useful, but it does not involve any kind of complex mathematics. Compare a dictionary to the theory of general relativity. Both can be used to make predictions, and so both should be viewed (under my definition) as legitimate scientific theories. And they are both complex, but complex in opposite ways. GR is deductively complex, since it requires sophisticated mathematics to use correctly, but inductively simple, because it requires only a few parameters to specify. In contrast the dictionary is deductively simple, since it can be used by anyone who can read, but inductively complex, since it requires many bits to specify.

\n

Another point made in the preface was that this approach involves empirical science as a core component, as opposed to being primarily about mathematics and algorithm-design (as I consider modern AI to be). Some people may be confused by this, since most people consider data compression to be a relatively minor subfield of computer science. The key realization is that lossless data compression can only be achieved through empirical science. This is because data compression is impossible for arbitrary inputs: no compressor can ever achieve compression rates of less than M bits when averaged over all M-bit strings. Lossless compressors work because they contain an implicit assertion about the type of data on which they will be invoked, and they will fail to achieve compression if that assertion turns out to be false. In the case of image compressors like PNG, the assertion is that, in natural images, the values of adjacent pixels are highly correlated. PNG can exploit this structure to achieve compression, and conversely, the fact that PNG achieves compression for a given image means that it has the assumed structure. In other words, PNG contains an empirical hypothesis about the structure of visual reality, and the fact that it works is empirical evidence in favor of the hypothesis. Now, this pixel-correlation structure of natural images is completely basic and obvious. The proposal, then, is to go further: to develop increasingly sophisticated theories of visual reality and test those theories using the compression principle.
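
The following sketch, which assumes nothing beyond the Python standard library, illustrates the point: a generic lossless compressor only succeeds when its implicit hypothesis about the input holds. A byte sequence with strong local correlation (a crude stand-in for an image row) compresses dramatically, while uniformly random bytes do not. PNG's actual prediction filters are more elaborate, but the logic is the same.

```python
# A lossless compressor embodies a hypothesis about its input and only
# achieves compression when that hypothesis holds: a slowly varying byte
# sequence shrinks a great deal, uniformly random bytes do not.
import os
import zlib

n = 100_000
smooth = bytes((i // 400) % 256 for i in range(n))  # slowly varying "image row"
noise = os.urandom(n)                               # no exploitable structure

for label, data in [("smooth gradient", smooth), ("random noise", noise)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{label}: compressed to {ratio:.2%} of original size")
```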

\n

Still, it may not be obvious why this kind of research would be any different from other work on data compression - people are, after all, constantly publishing new types of compression algorithms. The key difference is the emphasis on large-scale compression; this completely changes the character of the problem. To see why, consider the problem of building a car. If you only want to build a single car, then you just hack it together by hand. You build all the parts using machine tools and then fit them together. The challenge is to minimize the time- and dollar-cost of the manual labor. This is an interesting challenge, but the challenge of building ten million cars is entirely different. If you're going to build ten million cars, then it makes sense to start by building a factory. This will be a big up-front cost, but it will pay for itself by reducing the marginal cost of each additional car. Analogously, when attempting to compress huge databases, it becomes worthwhile to build sophisticated computational tools into the compressor. And ultimately the development of these advanced tools is the real goal.

" } }, { "_id": "eogtHAPgt3Pft6AEd", "title": "Composting fruitless debates", "pageUrl": "https://www.lesswrong.com/posts/eogtHAPgt3Pft6AEd/composting-fruitless-debates", "postedAt": "2010-05-29T16:59:20.996Z", "baseScore": 18, "voteCount": 16, "commentCount": 27, "url": null, "contents": { "documentId": "eogtHAPgt3Pft6AEd", "html": "\n

Why do long, uninspiring, and seemingly-childish debates sometimes emerge even in a community like LessWrong?  And what can we do about them?  The key is to recognize the potentially harsh environmental effect of an audience, and use a dying debate to fertilize a more sheltered private conversation.

\n

Let me start by saying that LessWrong generally makes excellent use of public debate, and by naming two things I don't believe are solely responsible for fruitless debates here: rationalization biases and self-preservation1.  When your super-important debate grows into a thorny mess, the usual aversion to saying some form of \"just drop it\" is about signaling that:

\n
  1. you're not skilled enough to continue arguing, so you'd look bad,
  2. the other person isn't worth your time, in which case they'd be publicly insulted and compelled to continue with at least one self-defense comment, extending the conflict, or
  3. the other person is right, which would risk spreading what appear to be falsehoods.
\n

\"Stop the wrongness\", the last concern, is in my opinion the most perisistent here simply because it is the least misguided.  It's practically the name of the site.  Many LessWrong users seem to share a sincere, often altruistic desire to share truth, abolish falsehood, and overcome conflict.  Public debate is a selection mechanism generally used very effectively here to grow and harvest good arguments.  But we can still benefit from diffusing the weed-like quibbling that sometimes shows up in the harsh environment of debate, and for that you need a response that avoids the problematic signals above.  So try this:

\n
\"I'm worried that debating this more here won't be useful to others, but I want to keep working on it with you, so I'm responding via private message.  Let's post on it again once we either agree or better organize our disagreement.  Hopefully at least one of us will learn and refine a new argument from this conversation.\"
\n

Take a moment to see how this carefully avoids (1)-(3).  Then you can try changing the tone of the private message to be more collaborative than competitive; the change in medium will help mark the transition.  This way you'll each be less afraid of having been wrong and more concerned with learning to be right, so rationalization bias will also be diminished.  As well, much social drama can disintegrate without the pressure of the audience environment (I imagine this might contribute to couples fighting more after they have children, though this is just anecdotal speculation).  Despite being perhaps obvious, these effects are not to be underestimated!

\n

But hang on...  if you're convinced someone is very wrong, is it okay to leave such a debate hanging midstream in public?  Why doesn't \"stop the wrongness\" trump our social concerns and compel us to flog away at our respective puddles of horsemeat?

\n

The usual necessary condition for you to wind up in a pointless online debate is that you're very convinced your co-poster is wrong, but they aren't obviously wrong enough to attract negative comment Karma.  So you keep posting in an attempt to \"clear up\" the issue for everyone else, or at the very least dilute the false content.  But somewhere along the line, you might end up with something that looks to you like:

\n

You (1 point):  Very smart/correct response to the parent discussion.
Other (1 point):  Convincing but wrong/vague/irrelevant comment that somehow got upvoted.
You (0 points):  Correction, or return to parent discussion.
Other (0 points):  More trickily wrong/vague/irrelevant stuff.
\n

If you've read these from the outside, it often looks more like

\n

User A (1 point):  «something that mildly interests you»
User B (1 point):  «tolerable response»
User A (0 points):  «quibbling you don't care to read»
User B (0 points):  «yup, definitely don't care»
\n

By now, or with very little practice, you should be able to tell from the inside when a conversation is entering a public failure spiral.  When that happens, no matter how smart and right and well-intentioned your responses might be, your co-poster's responses are still going to show up at around the same density.  And since the average non-negative-scoring comment on LessWrong is pretty correct, if your assessment is accurate, then your opponent's wrong comments are going to drag down the average rightness more than you're going to raise it.  And you're certainly not helping the situation if you're wrong.  And, on LessWrong in particular, the Recent Comments feed just diverts more attention to the trickily wrong ideas the more you argue against them.

\n

So, in the unfortunate event2 that a debate starts going awry (which thankfully is a relatively rare occurrence here), the task is to stop the spiral without looking bad, and not making the other person look bad is a necessary ingredient (else they'll continue the spiral in self-defense).  The \"switch to private message\" signal I suggest above is a pretty self-sufficient way to do that.  Though it stands pretty well on its own, it becomes more effective the more we send and interpret it consistently as a community, and the more we follow through with posting conclusions to the resulting private debates.

\n

Thus, above all, \"switch to private\" requests must be neither presented nor interpreted as an insult to either debater.  Not everyone is as courageous as cousin_it to publicly change their minds.  We must praise the effort of a debater who aims to work through a contentious argument privately.  We must consider \"switch to private\" as a sign of respect for the other debater: it shows value for his/her interaction.  We must not judge \"winners\" or \"losers\" of a debate as a function of who requests privacy first.  Then perhaps awkward social battles can become collaborations as routinely as mulched weeds can become crops.

\n
\n

Footnotes

\n

1 Regarding rationalization biases, most posters here know the difference between trying to learn what right is and trying to change what right is in an attempt to escape confirming past errors.  Regarding classic self-preservation, in this case it yields a desire to signal intelligence at the cost of deceiving the audience.  This is pretty misguided, not just because it's usually immoral, but because the LessWrong audience isn't easily deceived...  especially by a long and uninteresting-looking debate.  I'm sure these factors are present too, but the last one most needed addressing.

\n

2 I can't praise LessWrong enough for the fact that its debates tend to be much more fruitful than elsewhere.  I very much don't want to stifle the open and productive arguments that go on here; only to encourage a way out of the undesirable ones.

" } }, { "_id": "nTqmCQvqsrJrryE2c", "title": "Aspergers Survey Re-results", "pageUrl": "https://www.lesswrong.com/posts/nTqmCQvqsrJrryE2c/aspergers-survey-re-results", "postedAt": "2010-05-29T16:58:34.925Z", "baseScore": 11, "voteCount": 10, "commentCount": 7, "url": null, "contents": { "documentId": "nTqmCQvqsrJrryE2c", "html": "

Followup to: Aspergers Poll results

\n

Since my little survey about the degree to which the Less Wrong community has a preponderance of people with systematizing personality types, I've been collecting responses only from those people who considered taking the survey after looking at the original post but didn't, in order to combat nonresponse bias.

\n

82 people responded to the initial survey, and another 186 responded after the request for non-responders to respond. In the initial survey, 26% of responders scored 32+ (which is considered to be a \"high\" score, and out of a group of Cambridge mathematics students, 7 out of 11 who scored over 32 were said to fit the full diagnostic criteria for aspergers syndrome after being interviewed).

\n

In the combined survey of 82 initial responders and 186 \"second\"-responders, this increased to 28%. In the original survey, 5% of respondents said they had already been diagnosed with aspergers syndrome, and in the combined survey this increased to 7.5%.

\n

Overall, this indicates that response bias is probably not significantly skewing our picture of the LW audience, though, as always, it is possible that there is a more sophisticated bias at work and that these 268 people are not representative of LW.
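
One simple way to make "probably not significantly skewing" concrete is a two-proportion z-test; the sketch below uses counts reconstructed approximately from the reported percentages, so it is only illustrative.

```python
# Rough check of whether the difference between the initial respondents
# (~26% of 82 scoring 32+) and the later respondents (~29% of 186, giving the
# reported pooled ~28%) could easily be chance. Counts are approximate,
# reconstructed from the reported percentages.
import math

x1, n1 = 21, 82    # ~26% of initial respondents
x2, n2 = 54, 186   # ~29% of later respondents

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se
print(f"difference = {p2 - p1:.3f}, z = {z:.2f}")  # |z| well below 2: consistent with chance
```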

\n

 

" } }, { "_id": "KGwakGydwTHt6FzTf", "title": "Thinking makes for a better chase", "pageUrl": "https://www.lesswrong.com/posts/KGwakGydwTHt6FzTf/thinking-makes-for-a-better-chase", "postedAt": "2010-05-28T21:02:17.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "KGwakGydwTHt6FzTf", "html": "

Great post by Robin on reading:

\n

Hunting has two main modes: searching and chasing. With searching you look for something to chase. With chasing, in contrast, you have a focus of attention that drives your actions…

\n

while reading non-fiction, most folks are in searching mode. Most would be more intellectually productive, however, in chasing mode. It helps to have in mind a question, puzzle, or problem…

\n

In searching mode, readers tend to be less critical…keep reading along even if they aren’t quite sure what the point is… more likely to talk about whether they enjoyed the read…In chasing mode, you continually ask yourself whether what you are reading is relevant for your quest…

\n

Also, search-readers often don’t have a good mental place to put each thing they learn…Chasers, in contrast, always have specific mental places they are trying to fill…

\n

…People often hope that search-mode reading will inspire them to new thoughts, and are disappointed to find that it doesn’t. Chase-mode reading, in contrast, requires constant thinking…

\n

I’ve noticed this most strongly before in the context of fleeing more than chasing. That is, genuine near mode fear helps a lot. If you really want to find out if that spider was poisonous, you probably have a wonderfully efficient intuitive research strategy. This may be useful for researching more abstract potentially frightening topics such as societal catastrophe, if you can drum up some proper fear.

\n

I think Robin’s dichotomy goes a long way to explaining why reading is disappointing relative to thinking. In thinking it’s much easier to chase. Refraining from following a line of inquiry, and filling in gaps, and jumping to conclusions, can be harder than doing these things. There is usually some interesting path open to chase down. You don’t have to page through all your memories and concepts to catch a glimpse of your prey.

\n

Reading on the other hand is usually designed for search, with chase-friendly features added sometimes as an afterthought. If you want to chase something, you basically face the tedium of skimming lots of material without understanding. What would books look like if they were designed for a chase? For instance:

\n\n

Some kinds of books and writing are laid out this way to varying extents. Reference books and websites, some text books, search engines, and books with many short standalone entries on modular topics.

\n

Note that none of these are romantic things to have read. People don’t often mention to their friends what an enlightening google search they did the other day unless it was surprisingly disgusting, or how informative their encyclopedia is. The sort of popular non fiction books that people tell you that they just read are usually the opposite of all the above things, with the common exception of an adequate index. Why is this? Is it related to why most books aren’t set out for chasing ease? Do people who tell others about their thoughts more try for less directed thoughts?


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "Y2iEgbzDdYzKRrYiQ", "title": "This is your brain on ambiguity", "pageUrl": "https://www.lesswrong.com/posts/Y2iEgbzDdYzKRrYiQ/this-is-your-brain-on-ambiguity", "postedAt": "2010-05-28T15:47:55.075Z", "baseScore": 28, "voteCount": 23, "commentCount": 60, "url": null, "contents": { "documentId": "Y2iEgbzDdYzKRrYiQ", "html": "

Let's look at one more optical illusion which reveals important features of how our brains perform inference, and suggests how better awareness of these processes of inference can lead to improved thinking, including in our daily lives. This time around the theme is ambiguity.

\n

The spinning dancer

\n

\"Spinning

\n

The spinning dancer is a remarkable piece of work. Do you see the dancer pivoting clockwise on her left foot? Or counterclockwise on her right? If you're at all like me, you're now seeing one or the other - but if you look at the picture for some time, you'll suddenly see the dancer spinning in the opposite direction. It may help (for reasons I'll explain below) to focus on the pivot foot or thereabouts, mentally blocking out the rest of the image. Can you pick a direction on purpose? (Part of what makes investigating the human mind a fascinating hobby is finding out what others' brains can do that mine can't, and vice versa. Initially I had no control at all, but interestingly, over the course of writing this post I got much better.)

\n

Focusing on the foot helps explain how the illusion works: the image provides no clue to distinguish between \"toes front\" or \"heel front\", it's all a dark shadow with no depth information. What we see is a foot-shaped cutout growing and shrinking due to a foreshortening effect, alternately to the left and right. This information is perfectly compatible with either direction of spin. (The rest of the image is more of the same; the entire image is functionally equivalent to a dark bar growing and shrinking in alternate directions.) The interesting question is then, why does our consciousness insist on reporting that we're seeing a perfectly unambiguous direction of motion? We're not seeing an ambiguous dancer: we're seeing one that is clearly spinning one way - until the \"flip\" happens, and we see her just as clearly spinning the other way.

\n

Ambiguity, uncertainty and inference

\n

This is reminiscent of the way our explicit models of the world have trouble dealing with quantum uncertainty - our intuition is that things must happen one way or the other, \"as if the half-silvered mirror did different things on different occasions\", as if the dancer actually was spinning one way then another. In the latter case at least, sober reflection tells us this can only be in the map, not in the territory - the animated GIF isn't being changed right under our noses.

\n

\"Perception is inference from incomplete information\", says Jaynes - I have noted previously how this may bring insight into where our biases come from. The spinning dancer tells us something about how it feels, from the inside, to perform inference under uncertainty. That is what ambiguity is - a particular kind of uncertainty. Not the one that results from a paucity of information: the dancer illusion works because the image is quite detailed, not in spite of it. Rather, it is uncertainty that results from having too many hypotheses available, and lacking some crucial information to distinguish the correct one among them.

\n

(Abstractly, we see that it is best to be right about something, and worst to be wrong; to be uncertain is somewhere in the middle. Our visual system has a different opinion, and prefers the following ranking: right > wrong > uncertain. \"Make a decision, even if it is the wrong one; we can always revise it later.\" Actually, many of our decision processes have that same bias; I'll revisit this topic when I post at greater length about \"real options\". For now, the topic I'm sticking to is ambiguity as a particular type of uncertainty.)

\n

Spinning words

\n
\n

Last night I shot an elephant in my pajamas. How he got in my pajamas, I’ll never know. - Groucho Marx

\n
\n

It takes some effort to construct ambiguous pictures, whereas anything expressed in words seems to enjoy a head start. This is great news for people with a sense of humor: linguistic ambiguity is a constant source of merriment. In fact, though there are many competing theories of humor, it makes at least some sense to see ambiguity and its related themes of frame-crossing and reinterpretation as playing a key role in humor in general.

\n

To some, ambiguity is much less funny. Some professions and communities - including jurists, proponents of synthetic languages such as Lojban or Loglan, and software engineers - see ambiguity as an evil to be uprooted. The most famous cartoon (excepting perhaps some Dilbert favorites posted near every cubicle) among software professionals is an extended lament about the consequences of ambiguity in human language. Ambiguity is the source of immeasurable amounts of confusion and even hurt among people relying on the written word to communicate - an increasingly common circumstance in the Internet age. Not that oral communication is exempt; but at least in most cases it offers more effective mechanisms for error correction.

\n

Sacrifice, duality, reframing: the powers of ambiguity

\n

And yet, vexing as it may be to acknowledge it, instrumentally effective thinking often seems to rely on ambiguity.

\n

In the ancient game of Go, there is a certain level of play that can only be reached by mastering the art of sacrifice. Now, in some cases this may be part of a pre-established plan: stones that you are defending will get a better position if, as a preliminary, you place a stone within enemy territory solely in preparation for a move that threatens to rescue it. In many instances, though, sacrifice involves redefining a previously valuable stone or set of stones as \"sacrifice stones\". What is called \"light\" play often involves deliberately ambiguous moves, in which you have a plan to abandon one of several stones without depending on which one; the opponent's play will determine that.

\n

Another example involves the mathematical theme of duality, where a given kind of structure can be expressed in one of two ways which are equivalent in meaning but rely on different tools or operations. Some problems turn out to be easier to solve in a domain which is the dual of that where they were originally formulated. Unfortunately, my math having gone rusty for quite a while, I don't recall offhand any examples that I feel familiar enough with to discuss here, but to give you a hint of the flavor, consider a frequent \"trick\" in computing probabilities: when asked the probability of A (say, \"at least two of the people in this room share a birthday\") it's often easier to consider the probability of not-A (\"no two people share a birthday\") and take the complement.
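
For the sake of concreteness, here is that birthday \"trick\" as a few lines of Python, assuming 365 equally likely birthdays: compute the easier probability of no match, then take the complement.

```python
# The complement trick made concrete: P(at least one shared birthday among k
# people) = 1 - P(no two people share a birthday), assuming 365 equally
# likely birthdays.
def p_shared_birthday(k: int) -> float:
    p_no_match = 1.0
    for i in range(k):
        p_no_match *= (365 - i) / 365
    return 1.0 - p_no_match

print(f"{p_shared_birthday(23):.3f}")  # about 0.507: just over half for 23 people
```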

\n

Another way to exploit ambiguity turns up in the domain of interpersonal relationships, in the guise of \"framing\" and \"reframing\". Some of the advice in Alicorn's recent post, for instance, involves casting around for reframes - ways of interpreting behaviour that you dislike in a person that make these behaviours tolerable or understandable instead of irritating and repulsive. In Stumbling on Happiness Daniel Gilbert argues that ambiguity is a key component of psychological resilience. A person's success in life is partially determined by their ability to redefine their values, sense of happiness, etc. on the fly, in answer to the difficulties they encounter. This is only possible if our interpretation of the world contains lots of ambiguity to start with.

\n

Artificial ambiguity

\n

As readers of LessWrong, \"hobbyists of the mind\", we can often gain insight into our human minds by framing questions about constructed minds - by forcing ourselves to confront the design space of possible minds. A corollary attitude is to be cautious about the reverse process - unreflectively projecting some perceived attribute of human minds, such as randomness or reliability, into a necessary property of artificial intelligences. Still, I have come so far in this post in large part to ponder whether ambiguity is a necessary capability of minds-in-general, rather than a human design flaw.

\n

Douglas Hofstadter's Fluid Concepts and Creative Analogies discusses fluidity, a theme that strikes me as closely related to ambiguity (and is explicitly discussed throughout). Hofstadter is fond of \"microdomains\", simplified settings where human intelligence nevertheless still easily exceeds what we can, as a rule, program machines to do. A favorite of mine is \"Do this!\" exercises, which inspired Robert French's Tabletop research program: two people are seated at a typical restaurant or café table, with similar (or very different) implements on each side of the table: plates, forks, knives, glasses... One person touches an item and challenges the other to \"do this!\".

\n

Ambiguity arises in the Tabletop domain because exact mappings between the two sides may not exist, but \"analogical\" mappings often do: your wine glass maps to my water glass, for instance. The fun starts when more than one plausible analogical mapping suggests itself. Your lone wine glass maps either to my wine glass (paired with a glass of water) or to my salt shaker (the only non-plate unpaired item on my side).

\n

Conclusions

\n

I suspect that efficient cross-domain generalization requires dealing with ambiguity and analogy. Back in the human mind-space, few theories of the scientific process deal explicitly with ambiguity and analogy, Pickering's mangle being a notable exception. In some vague sense it seems to me that ambiguity provides \"degrees of freedom\" which are necessary to the conceptions of plans, and their flexible execution - including sacrificing, changing domains or notations, and reframing. I expect it to be a major theme of instrumental rationality, in other words, and therefore would like to delineate a more precise formulation of this vague intuition.

\n

This post has explored some themes I'm likely to return to, or that form some kind of groundwork (for instance when I touch on \"programming as a rationalist skill\" later on). But more importantly, it is intended to encourage further discussions of these themes from other perspectives than mine.

\n

What do you know about ambiguity?

" } }, { "_id": "HgiWHaxnAEtg3uEfM", "title": "Beyond Optimization by Proxy", "pageUrl": "https://www.lesswrong.com/posts/HgiWHaxnAEtg3uEfM/beyond-optimization-by-proxy", "postedAt": "2010-05-27T13:16:45.798Z", "baseScore": 24, "voteCount": 15, "commentCount": 17, "url": null, "contents": { "documentId": "HgiWHaxnAEtg3uEfM", "html": "

Followup to: Is Google Paperclipping the Web? The Perils of Optimization by Proxy in Social Systems

\n

tl;dr: In this installment, we look at methods of avoiding the problems related to optimization by proxy. Many potential solutions cluster around two broad categories: Better Measures, and Human Discretion. Distribution of decisions to the local level is a solution that seems more promising and is examined in more depth.

\n

In the previous article I had promised that if there was a good reception, I would post a follow-up article to discuss ways of getting around the problem. That article made it to the front page, so here are my thoughts on how to circumvent Optimization by Proxy (OBP). Given that the previous article was belabored over at least a year and a half, this one will be decidedly less solid, more like a structured brainstorm in which you are invited to participate.

\n

In the comments of the previous article I was pointed to The Importance of Goodhart's Law, a great article, which includes a section on mitigation. Examining those solutions in the context of OBP seems like a good skeleton to build on.

\n

The first solution class is 'Hansonian Cynicism'. In combination with awareness of the pattern, pointing out that various processes (such as organizations) are not actually optimizing around their stated goal, but some proxy, creates cognitive dissonance for the thinking person. This sounds more like a motivation to find a solution than a solution itself. At best, knowing what goes wrong, you can use the process in a way that is informed by its weaknesses. Handling with care may mitigate some symptoms, but it doesn't make the problems go away.

\n

The second solution class mentioned is 'Better Measures'. That is indeed what is usually attempted. The 'purist' approach to this is to work hard on finding a computable definition of the target quality. I cannot exclude the possibility of cases where this is feasible, but no immediate examples come to mind. The proxies that I have in mind are deeply human (quality, relevance, long-term growth) and boil down to figuring out what is 'good'; thus, computing them is no small matter. Coherent Extrapolated Volition is the extreme end of this approach, boiling a few oceans in the process, and is certainly not immediately applicable.

\n

A pragmatic approach to Better Measures is to simply monitor better, making the proxy more complex and therefore harder to manipulate. Discussion with Chronos in the comments of the original article was along those lines. By integrating user activity trails, Google makes it harder to game the search engine. I would imagine that if they integrated those logs with Google Analytics and Google Accounts, they would significantly raise the bar for gaming the system, at the expense of user privacy. Of course by removing most amateur and white/gray-hat SEOs from the pool, and given the financial incentives that exist, they would make it significantly more lucrative to game the system, and therefore the serious black hat SEOs that can resort to botnets, phishing and networks of hacked sites would end up being the only games in town. But I digress. Enriching the proxy with more and more parameters is a pragmatic solution that should work in the short term as a part of the arms race against manipulators, but does not look like a general or permanent solution from where I'm standing.

\n

\n

A special case of 'Better Measures' is that of better incentive alignment. From Charlie Munger's speech A Lesson on Elementary, Worldly Wisdom As It Relates To Investment Management & Business:

\n
\n

From all business, my favorite case on incentives is Federal Express. The heart and soul of their system—which creates the integrity of the product—is having all their airplanes come to one place in the middle of the night and shift all the packages from plane to plane. If there are delays, the whole operation can't deliver a product full of integrity to Federal Express customers.

\n

And it was always screwed up. They could never get it done on time. They tried everything—moral suasion, threats, you name it. And nothing worked.

\n

Finally, somebody got the idea to pay all these people not so much an hour, but so much a shift—and when it's all done, they can all go home. Well, their problems cleared up overnight.

\n
\n

In fact, my initial example was a form of naturally occurring optimization by proxy, where the incentives of the actors are aligned. I guess stock grants and options are another way to align employee incentives with company incentives. As far as I can tell, this has not been generalised either, and does not seem to reliably work in all cases, but where it does work, it may well be a silver bullet that cuts through all the other layers of the problem.

\n

Before discussing the third and more promising avenue, I'd like to look at one unorthodox 'Better Measures' approach that came up while writing the original article. Assume that producing the proxy requires possessing the target quality, and that faking it is an NP-complete problem. The only real-world case where I can see an analog to this is cryptography. Perhaps we can stretch OBP such that WWII cryptography can be seen as an example of it. By encrypting with Enigma and keeping their keys secret (the proxies), the Axis forces aimed to maintain the secrecy of their communications (the target quality). When the Allies were able to crack Enigma, this basic assumption stopped being reliable. Modern cryptography makes this actually feasible. As long as the keys don't fall into the wrong hands, and assuming no serious flaws in the cryptographic algorithms used, the fact that a document can be decrypted with someone's public key (the proxy) authenticates that document as coming from the owner of the key (the target quality). While this works in cryptography, it may be stretching the OBP analogy too far. On the other hand, there may be a way to transfer this strategy to solve other OBP problems that I have not yet seen. If you have any thoughts around this, please put them forward in the comments.
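
As a hedged illustration of this hard-to-fake-proxy idea, the sketch below uses Ed25519 digital signatures from the third-party cryptography package. A signature that verifies under someone's public key (the proxy) is strong evidence that the holder of the matching private key produced the document (the target quality), because forging a valid signature without that key is believed to be computationally infeasible.

```python
# Sketch of a proxy that is infeasible to fake without the target quality:
# a digital signature (requires the third-party `cryptography` package).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

document = b"Attack at dawn."
signature = private_key.sign(document)

try:
    public_key.verify(signature, document)           # passes: proxy satisfied
    public_key.verify(signature, b"Attack at noon")  # raises: fake detected
except InvalidSignature:
    print("tampered document rejected")
```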

\n

The third class of solutions is 'Human Discretion'. This is divided in two, diametrically opposite solutions. One is 'Hierarchical rule', inspired by the ideas of Mencius Moldbug. Managers are the masters of all their subordinates, and the slaves of their higher-ups. No rules are written, so no proxies to manipulate. Except of course, for human discretion itself. Besides the tremendous potential for corruption, this does not transfer well to automated systems. Laws may be a luxury for humans, but for machines, code is everything. There is no law-independent discretion that a machine can apply, even if threatened with obliteration. The opposite of that is what the article calls 'Left anarchist Ideas'. I think that puts too much of a political slant to an idea that is much more general. I call it simply 'distribution'. The idea here is that if decisions are taken locally, there is no big juicy proxy to manipulate, but it is splintered to multitudes of local proxies, each different than the other. I think this is the way that evolution can be seen to deal with this issue. If for instance we see the immune system as an optimizer by proxy, the ability of some individuals to survive a virus that kills others is a demonstration of the fact that the virus has not fooled everyone's immune system. Perhaps the individuals that survived are vulnerable to other threats, but this would mean that a perfect storm of diseases that exploit everyone's weaknesses would have to affect a population at the same time to extinguish it. Not exactly a common phenomenon. Nature's resilience through diversity usually saves the day.

\n

So distribution seems to be a promising avenue that deserves further examination. The use case that I usually gravitate towards is that of the spread of news. Before top-down mass media, news spread from mouth to mouth. Humans seem to have a gossip protocol hard-coded into their social function centre that works well for this task. To put it simply, we spread relevant information to gain status, and the receivers of this information do the same, until the information is well-known between all those that are reachable and interested. Mass media took over this function for a time, especially with regard to news that was of general interest but of course on a social circle level the old mechanisms kept working uninterrupted. With the advent of social networks, the old mechanisms are reasserting themselves, at scale. The asymmetric following model of Twitter seems well-suited for this scenario and re-tweeting also helps broadcast news further than the original receivers. Twitter is now often seen as a primary news source, where news breaks before it makes the headlines, even if the signal to noise ratio is low. What is interesting in this model is that there is a human decision at each point of re-broadcast. However, by the properties of scale-free networks, it does not require too many decisions for a piece of information to spread throughout the network. Users that spread false information or 'spam' are usually isolated from the graph, and therefore end up with little or no influence at all (with a caveat for socially advantageous falsities). Bear in mind that Twitter is not built or optimised around this model, so these effects appear only approximately. There are a number of changes that should make these effects much more pronounced, but this is a topic for another post. What should be noted is that contrary to popular belief, this hybrid man-machine system of news transmission scales pretty well. Just because human judgment is involved in multiple steps of the process, it doesn't make the system reliably slower, since nobody is on the critical path, and nobody has the responsibility of filtering all the content. A few decisions here and there are enough to keep the system working well.
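
To see roughly how few local decisions are needed, here is a toy simulation of an independent-cascade spread on a scale-free graph, using the networkx library. Every parameter (network size, attachment parameter, re-share probability) is an illustrative assumption rather than a model of Twitter.

```python
# Toy simulation of news spreading by local re-share decisions on a
# scale-free network (requires networkx). All parameters are illustrative.
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(n=10_000, m=3, seed=0)
p_reshare = 0.3   # chance that a follower who receives the item passes it on

seen = {0}        # node 0 breaks the story
frontier = [0]
while frontier:
    nxt = []
    for node in frontier:
        for neighbor in G.neighbors(node):
            if neighbor not in seen and random.random() < p_reshare:
                seen.add(neighbor)
                nxt.append(neighbor)
    frontier = nxt

print(f"the item reached {len(seen) / G.number_of_nodes():.0%} of the network")
```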

\n

Transferring this social-graph approach to search is less than straightforward. Again, looking at human societies pre-search engine, people would develop reputations for knowledge in a specific field. Questions in their field of expertise find their way to them sooner or later. If an expert did not have an answer but another did, a shift in subjective, implicit reputation would occur, which if repeated on multiple occasions would result in a shift of the relative trust that the community places on the two experts. Applying this to internet search does not seem immediately feasible, but search engines like Aardvark and Q&A sites like StackOverflow and Yahoo! Answers seem to be heading in such a direction. Wikipedia, by having a network of editors trusted in certain fields, also exhibits similar characteristics. The answer isn't as obvious in search as it is in news, and if algorithmic search engines disappeared tomorrow the world wouldn't have a plan B immediately at hand, but the outline of an alternative is beginning to appear.

\n

To conclude, the progression I see in both news and search is this:

\n
  1. mouth-to-mouth
  2. top-down human curated
  3. algorithmic
  4. node-to-node (distributed)
\n

In news this is loosely instantiated as: gossip -> newspapers -> social news sites -> twitter-like horizontal diffusion, and in search the equivalents are: community experts -> libraries / human-curated online directories -> algorithmic search engines -> social(?) search / Q&A sites. There seems to be a pattern where things are coming full circle from horizontal to vertical and back to horizontal, where the intermediate vertical step is a stopgap to allow our natural mechanisms to adapt to the new technology, scale, and vastness of information, but ultimately managing to live up to the challenge. There may be some optimism involved in my assessment as the events described have not really taken place yet. The application of this pattern to other instances of OBP such as governments and large organizations is not something I feel I could undertake for now, but I do suspect that OBP can provide a general, if not conclusive, argument for distribution of decision making as the ultimately stable state of affairs, without considering smarter than human AGI/FAI singletons.

\n

Update: Apparently, Digg's going horizontal. This should be interesting.

\n

Update2: I had mixed up vertical and horizontal. Another form of left-right dyslexia?

" } }, { "_id": "59rDBidWmmJTXL4Np", "title": "Harry Potter and the Methods of Rationality discussion thread", "pageUrl": "https://www.lesswrong.com/posts/59rDBidWmmJTXL4Np/harry-potter-and-the-methods-of-rationality-discussion", "postedAt": "2010-05-27T00:10:57.279Z", "baseScore": 44, "voteCount": 35, "commentCount": 883, "url": null, "contents": { "documentId": "59rDBidWmmJTXL4Np", "html": "

Update: Please post new comments in the latest HPMOR discussion thread, now in the discussion section, since this thread and its first few successors have grown unwieldy (direct links: two, three, four, five, six, seven).

\n

As many of you already know, Eliezer Yudkowsky is writing a Harry Potter fanfic, Harry Potter and the Methods of Rationality, starring a rationalist Harry Potter with ambitions to transform the world by bringing the rationalist/scientific method to magic.  But of course a more powerful Potter requires a more challenging wizarding world, and ... well, you can see for yourself how that plays out.

This thread is for discussion of anything related to the story, including insights, confusions, questions, speculation, jokes, discussion of rationality issues raised in the story, attempts at fanfic spinoffs, comments about related fanfictions, and meta-discussion about the fact that Eliezer Yudkowsky is writing Harry Potter fan-fiction (presumably as a means of raising the sanity waterline).

I'm making this a top-level post to create a centralized location for that discussion, since I'm guessing people have things to say (I know I do) and there isn't a great place to put them.  fanfiction.net has a different set of users (plus no threading or karma), the main discussion here has been in an old open thread which has petered out and is already near the unwieldy size that would call for a top-level post, and we've had discussions come up in a few other places.  So let's have that discussion here. 

Comments here will obviously be full of spoilers, and I don't think it makes sense to rot13 the whole thread, so consider this a spoiler warning:  this thread contains unrot13'd spoilers for Harry Potter and the Methods of Rationality up to the current chapter and for the original Harry Potter series.  Please continue to use rot13 for spoilers to other works of fiction, or if you have insider knowledge of future chapters of Harry Potter and the Methods of Rationality.

A suggestion: mention at the top of your comment which chapter you're commenting on, or what chapter you're up to, so that people can understand the context of your comment even after more chapters have been posted.  This can also help people avoid reading spoilers for a new chapter before they realize that there is a new chapter.

" } }, { "_id": "J5teWueouHJxcZkDy", "title": "Abnormal Cryonics", "pageUrl": "https://www.lesswrong.com/posts/J5teWueouHJxcZkDy/abnormal-cryonics", "postedAt": "2010-05-26T07:43:49.650Z", "baseScore": 79, "voteCount": 73, "commentCount": 420, "url": null, "contents": { "documentId": "J5teWueouHJxcZkDy", "html": "

Written with much help from Nick Tarleton and Kaj Sotala, in response to various themes here, here, and throughout Less Wrong; but a casual mention here1 inspired me to finally write this post. (Note: The first, second, and third footnotes of this post are abnormally important.)

\n

It seems to have become a trend on Less Wrong for people to include belief in the rationality of signing up for cryonics as an obviously correct position2 to take, much the same as thinking the theories of continental drift or anthropogenic global warming are almost certainly correct. I find this mildly disturbing on two counts. First, it really isn't all that obvious that signing up for cryonics is the best use of one's time and money. And second, regardless of whether cryonics turns out to have been the best choice all along, ostracizing those who do not find signing up for cryonics obvious is not at all helpful for people struggling to become more rational. Below I try to provide some decent arguments against signing up for cryonics — not with the aim of showing that signing up for cryonics is wrong, but simply to show that it is not obviously correct, and why it shouldn't be treated as such. (Please note that I am not arguing against the feasibility of cryopreservation!)

\n

\n

Signing up for cryonics is not obviously correct, and especially cannot obviously be expected to have been correct upon due reflection (even if it was the best decision given the uncertainty at the time):

\n\n

Calling non-cryonauts irrational is neither productive nor conducive to fostering a good epistemic atmosphere:

\n\n

Debate over cryonics is only one of many opportunities for politics-like thinking to taint the epistemic waters of a rationalist community; it is a topic where it is easy to say 'we are right and you are wrong' where 'we' and 'you' are much too poorly defined to be used without disclaimers. If 'you' really means 'you people who don't understand reductionist thinking', or 'you people who haven't considered the impact of existential risk', then it is important to say so. If such an epistemic norm is not established I fear that the quality of discourse at Less Wrong will suffer for the lack of it.

\n

One easily falls to the trap of thinking that disagreements with other people happen because the others are irrational in simple, obviously flawed ways. It's harder to avoid the fundamental attribution error and the typical mind fallacy, and admit that the others may have a non-insane reason for their disagreement.

\n

 

\n

1 I don't disagree with Roko's real point, that the prevailing attitude towards cryonics is decisive evidence that people are crazy and the world is mad. Given uncertainty about whether one's real values would endorse signing up for cryonics, it's not plausible that the staggering potential benefit would fail to recommend extremely careful reasoning about the subject, and investment of plenty of resources if such reasoning didn't come up with a confident no. Even if the decision not to sign up for cryonics were obviously correct upon even a moderate level of reflection, it would still constitute a serious failure of instrumental rationality to make that decision non-reflectively and independently of its correctness, as almost everyone does. I think that usually when someone brings up the obvious correctness of cryonics, they mostly just mean to make this observation, which is no less sound even if cryonics isn't obviously correct.

\n

2 To those who would immediately respond that signing up for cryonics is obviously correct, either for you or for people generally, it seems you could mean two very different things: Do you believe that signing up for cryonics is the best course of action given your level of uncertainty? or, Do you believe that signing up for cryonics can obviously be expected to have been correct upon due reflection? (That is, would you expect a logically omniscient agent to sign up for cryonics in roughly your situation given your utility function?) One is a statement about your decision algorithm, another is a statement about your meta-level uncertainty. I am primarily (though not entirely) arguing against the epistemic correctness of making a strong statement such as the latter.

\n

3 By raising this point as an objection to strong certainty in cryonics specifically, I am essentially bludgeoning a fly with a sledgehammer. With much generalization and effort this post could also have been written as 'Abnormal Everything'. Structural uncertainty is a potent force and the various effects it has on whether or not 'it all adds up to normality' would not fit in the margin of this post. However, Nick Tarleton and I have expressed interest in writing a pseudo-sequence on the subject. We're just not sure about how to format it, and it might or might not come to fruition. If so, this would be the first post in the 'sequence'.

\n

4 Disclaimer and alert to potential bias: I'm an intern (not any sort of Fellow) at the Singularity Institute for (or 'against' or 'ambivalent about' if that is what, upon due reflection, is seen as the best stance) Artificial Intelligence.

" } }, { "_id": "nK5jraMp7E4xPvuNv", "title": "On Enjoying Disagreeable Company", "pageUrl": "https://www.lesswrong.com/posts/nK5jraMp7E4xPvuNv/on-enjoying-disagreeable-company", "postedAt": "2010-05-26T01:47:24.490Z", "baseScore": 68, "voteCount": 74, "commentCount": 254, "url": null, "contents": { "documentId": "nK5jraMp7E4xPvuNv", "html": "

Bears resemblance to: Ureshiku Naritai; A Suite of Pragmatic Considerations In Favor of Niceness

\n

In this comment, I mentioned that I can like people on purpose.  At the behest of the recipients of my presentation on how to do so, I've written up in post form my tips on the subject.  I have not included, and will not include, any specific real-life examples (everything below is made up), because I am concerned that people who I like on purpose will be upset to find that this is the case, in spite of the fact that the liking (once generated) is entirely sincere.  If anyone would find more concreteness helpful, I'm willing to come up with brief fictional stories to cover this gap.

\n

It is useful to like people.  For one thing, if you have to be around them, liking them makes this far more pleasant.  For another, well, they can often tell, and if they know you to like them this will often be instrumentally useful to you.  As such, it's very handy to be able to like someone you want to like deliberately when it doesn't happen by itself.  There are three basic components to liking someone on purpose.  First, reduce salience of the disliked traits by separating, recasting, and downplaying them; second, increase salience of positive traits by identifying, investigating, and admiring them; and third, behave in such a way as to reap consistency effects.

\n

1. Reduce salience of disliked traits.

\n

Identify the traits you don't like about the person - this might be a handful of irksome habits or a list as long as your arm of deep character flaws, but make sure you know what they are.  Notice that however immense a set of characteristics you generate, it's not the entire person.  (\"Everything!!!!\" is not an acceptable entry in this step.)  No person can be fully described by a list of things you have noticed about them.  Note, accordingly, that you dislike these things about the person; but that this does not logically entail disliking the person.  Put the list in a \"box\" - separate from how you will eventually evaluate the person.

\n

When the person exhibits a characteristic, habit, or tendency you have on your list (or, probably just to aggravate you, turns out to have a new one), be on your guard immediately for the fundamental attribution error.  It is especially insidious when you already dislike the person, and so it's important to compensate consciously and directly for its influence.  Elevate to conscious thought an \"attribution story\", in which you consider a circumstance - not a character trait - which would explain this most recent example of bad behavior.1  This should be the most likely story you can come up with that doesn't resort to grumbling about how dreadful the person is - that is, don't resort to \"Well, maybe he was brainwashed by Martians, but sheesh, how likely is that?\"  Better would be \"I know she was up late last night, and she does look a bit tired,\" or \"Maybe that three-hour phone call he ended just now was about something terribly stressful.\"

\n

Reach a little farther if you don't have this kind of information - \"I'd probably act that way if I were coming down with a cold; I wonder if she's sick?\" is an acceptable speculation even absent the least sniffle.  If you can, it's also a good idea to ask (earnestly, curiously, respectfully, kindly!  not accusatively, rudely, intrusively, belligerently!) why the person did whatever they did.  Rest assured that if their psyche is fairly normal, an explanation exists in their minds that doesn't boil down to \"I'm a lousy excuse for a person who intrinsically does evil things just because it is my nature.\"  (Note, however, that not everyone can produce verbal self-justifications on demand.)  Whether you believe them or not, make sure you are aware of at least one circumstance-based explanation for what they did.

\n

Notice which situations elicit more of the disliked behaviors than others.  Everybody has situations that bring out the worst in them, and when the worst is already getting on your nerves, you should avoid as much as possible letting any extra bubble to the surface.  If you have influence of any kind over which roles this person plays in your life (or in general), confine them to those in which their worst habits are irrelevant, mitigated, or local advantages of some kind.  Do not ask for a ride to the airport from someone who terrifies you with their speeding; don't propose splitting dessert with someone whose selfishness drives you up the wall; don't assign the procrastinator an urgent task.  Do ask the speeder to make a quick run to the bank before it closes while you're (ever so inconveniently) stuck at home; do give the selfish person tasks where they work on commission; do give the procrastinator things to do that they'll interpret as ways to put off their other work.

\n

2. Increase salience of positive traits.

\n

Don't look at me like that.  There is something.  It's okay to grasp at straws a little to start.  You do not have to wait to like someone until you discover the millions of dollars they donate to mitigating existential risk or learn that their pseudonym is the name of your favorite musician.  You can like their cool haircut, or the way they phrased that one sentence the other week, or even their shoes.  You can appreciate that they've undergone more hardship than you (if they have, but be generous in interpreting \"more\" when comparing incommensurate difficulties) - even if you don't think they've handled it that well, well, it was hard.  You can acknowledge that they are better than you, or than baseline, or than any one person who you already like, at some skill or in some sphere of achievement.  You can think they did a good job of picking out their furniture, or loan them halo effect from a relative or friend of theirs who you think is okay.  There is something.

\n

Learn more about the likable things you have discovered.  \"Catch them in the act\" of showing off one of these fine qualities.  As a corollary to the bit above about not putting them in roles that bring out their worst, try to put them in situations where they're at their best.  Set them up to succeed, both absolutely and in your eyes.  Speak to any available mutual friends about what more there is to like - learn how the person makes friends, what attracts people to them, what people get out of associating with them.  Solicit stories about the excellent deeds of the target person.  Collect material like you're a biographer terrified of being sued for libel and dreading coming in under page count: you need to know all the nice things there are to know.

\n

It is absolutely essential throughout this process to cultivate admiration, not jealousy.  Jealousy and resentment are absolutely counterproductive, while admiration and respect - however grudging - are steps in the right direction.  Additionally, you are trying to use  these features of the person.  It will not further your goals if you discount their importance in the grand scheme of things.  Do not think, \"She has such pretty hair, why does she get such pretty hair when she doesn't deserve it since she's such an awful person?  Grrr!\"  Instead, \"She has such pretty hair.  It's gorgeous to look at and that makes her nice to have around.  I wonder if she has time to teach me how to do my hair like that.\"  Or instead of: \"Sure, he can speak Latin, but what the hell use is Latin?  Does he think we're going to be invaded by legionaries and need him to be a diplomat?\" it would be more useful towards the project of liking to think, \"Most people don't have the patience and dedication to learn any second language, and it only makes it harder to pick one where there aren't native speakers available to help teach the finer points.  I bet a lot of effort went into this.\"

\n

3. Reap consistency effects.

\n

Take care to be kind and considerate to the person.  The odds are pretty good that there is something they don't like about you (rubbing someone the wrong way is more often bidirectional than not).  If you can figure out what it is, and do less of it - at least around them - you will collect cognitive dissonance that you can use to nudge yourself to like the person.  I mean, otherwise, why would you go to the trouble of not tapping your fingers around them, or making sure to pronounce their complicated name correctly, or remembering what they're allergic to so you can avoid bringing in food suitable for everyone but them?  That's the sort of thing you do when you care how they feel, and if you care how they feel, you must like them at least a little.  (Note failure mode: if you discover that something you do annoys them, and you respond with resentment that they have such an unreasonable preference about such a deeply held part of your identity and how dare they!, you're doing it wrong.  The point isn't to completely make yourself over to be their ideal friend.  You don't have to do everything.  But do something.)

\n

Seek to spend time around the person.  This should drop pretty naturally out of the above steps: you need to acquire all this information from somewhere, after all.  But seek their opinions on things, especially their areas of expertise and favorite topics; make small talk; ask after their projects, their interests, their loved ones; choose to hang out in rooms they occupy even if you never interact.  (Note failure mode: Don't do this if you can feel yourself hating them more every minute you spend together or if you find it stressful enough to inhibit the above mental exercises.  It is better to do more work on liking them from a distance if you are at this stage, then later move on to seeking to spend time with them.  Also, if you annoy them, don't do anything that could be characterized as pestering them or following them around.)

\n

Try to learn something from the person - by example, if they aren't interested in teaching you, or directly, if they are.  It is possible to learn even from people who don't have significantly better skills than you.  If they tell stories about things they've done, you can learn from their mistakes; if they are worse than you at a skill but use an approach to it that you haven't tried, you can learn how to use it; if nothing else, they know things about themselves, and that information is highly useful for the project of liking them, as discussed above.  Put what you know about them into the context of their own perspective.

\n

Note general failure mode: It would be fairly easy, using facsimiles of the strategy above, to develop smugness, self-righteousness, arrogance, and other unseemly attitudes.  Beware if your inner monologue begins to sound something like \"He's gone and broken the sink again, but I'm too good and tolerant to be angry.  It wouldn't do any good to express my displeasure - after all, he can't take criticism, not that I judge him for this, of course.  I'll be sure to put a note on the faucet and call the plumber to cover for his failure to do so, rather than nagging him to do it, as I know he'd fly off the handle if I reminded him - it's just not everyone's gift to accept such things, as it is mine, and as I am doing, right now, with him, by not being upset...\"

\n

This monologuer does not like the sink-breaker.  This monologuer holds him in contempt, and thinks very highly of herself for keeping this contempt ostensibly private (although it's entirely possible that he can tell anyway).  She tolerates his company because it would be beneath her not to; she doesn't enjoy having him around because she realizes that he has useful insights on relevant topics or even because he's decorative in some way.  If you don't wind up really, genuinely, sincerely liking the person you set out to like, you are doing it wrong.  This is not a credit to your high-mindedness, and thinking it is will not help you win.

\n

 

\n

1 A good time to practice this habit is when in a car.  Make up stories about the traffic misbehaviors around you.  \"The sun is so bright - she may not have seen me.\"  \"That car sure looks old!  I probably wouldn't handle it even half as well, no wonder it keeps stalling.\"  \"He's in a terrible hurry - I wonder if a relative of his is in trouble.\"  \"Perhaps she's on her cellphone because she's a doctor, on call - it then would really be more dangerous on net if she didn't answer the thing while driving.\"  \"He'd pull over if there were any place to do so, but there's no shoulder.\"  Of course any given one of these is probably not true.  But they make sense, and they are not about how everybody on the road is a maniac!  I stress that you are not to believe these stories.  You are merely to acknowledge that they are possibilities, to compensate for the deemphasis of hypotheses like this that the fundamental attribution error will prompt.

" } }, { "_id": "SkXLrDXyHeekqgbFg", "title": "Shock Level 5: Big Worlds and Modal Realism", "pageUrl": "https://www.lesswrong.com/posts/SkXLrDXyHeekqgbFg/shock-level-5-big-worlds-and-modal-realism", "postedAt": "2010-05-25T23:19:44.391Z", "baseScore": 46, "voteCount": 48, "commentCount": 158, "url": null, "contents": { "documentId": "SkXLrDXyHeekqgbFg", "html": "

In recent times, science and philosophy have uncovered evidence that there is something very seriously weird about the universe and our place in it. We used to think that there was one planet earth, inside a universe that is very large (at least 10^26 meters in diameter) but that the reachable universe (future light-cone in the terminology of special relativity, or causal future in the terminology of GR) was finite. Anything outside the reachable universe is irrelevant, since we can't affect it. 

\n

However, cosmologists went on to study the process that probably created the universe, known as inflation. Inflation solves a number of mysteries in cosmology, including the flatness problem. The process of inflation seems to create an infinite number of mini-universes, or \"inflationary bubbles\" - this is known as chaotic inflation theory. The physical parameters and initial conditions of these bubbles are determined randomly, so every possible set of particle masses, force strengths, etc is realized. To quote from this piece by Alan Guth:

\n

The role of eternal inflation in scientific thinking, however, was greatly boosted by the realization that string theory has no preferred vacuum, but instead has perhaps 10^1000 metastable vacuum-like states. Eternal inflation then has potentially a direct impact on fundamental physics, since it can provide a mechanism to populate the landscape of string vacua. While all of these vacua are described by the same fundamental string theory, the apparent laws of physics at low energies could differ dramatically from one vacuum to another.

\n

\n

To top this off, the dominant theory about the spacetime manifold we live on is that it is infinitely large in all directions. If you look at this picture of a reconstruction of the large-scale structure of the universe, the idea that we are living in something like an infinite volume with a finite speed-limit and a uniform random distribution of matter and energy that clumps over time becomes plausible. 

\n

A final step along this line of increasingly large Big Worlds is modal realism, the idea that all possible worlds exist. Max Tegmark has formalized this as the Mathematical Universe Hypothesis: All structures that exist mathematically also exist physically.

\n

If any of these theories turn out to be true, then we are living in a Big World, a cosmology where every finite collection of atoms, including you, is instantiated infinitely many times, perhaps by the same physical processes that created us here on earth. It is also the case that other life-forms might emerge and use their technological capabilities to create simulations of us. Once an alien civilization reaches the point of being able to create simulations, it can create lots of simulations - really unreasonably large numbers of simulated beings can be created in a universe roughly the size of ours1,2, Bostrom's estimate would be something like 10^50. And in other mathematically possible universes with the ability to do an infinite amount of computation in a finite time, you could be simulated an infinite number of times in just one universe. 

\n

One (incorrect) way of interpreting it is to think of a bunch of \"worlds\" spread out over the multiverse, most of them uninhabited, some containing weird green aliens, and one containing you, and saying:  \" Aha! I only care about this one, the others are causally disconnected from it!\".

\n

No, this view of reality claims that your current observer-moment is repeated infinitely many times, and looking forward in time, all possible continuations of (you,now) occur, and furthermore there is no fact of the matter about which one you will experience, because the quantum many-worlds (MW) aspect of the multiverse has already demolished our intuitions about anticipated subjective experience4. Think that chocolate bar will taste nice when you bite into it? Well, actually according to Big Worlds, infinitely many of your continuations will bite the chocolate bar and find it turns into a hamster.

\n

I once saw wormholes explained using the sheet of paper metaphor: draw two dots on a sheet of paper, reasonably far apart, imagining the paper distance between them to be an unfathomably large spatial distance, say 10^(10^100) meters. Now fold the sheet so that the two dots touch each other: they are right on top of each other! Of course, wormholes seem fairly unlikely based upon standard physics. The metaphor here is of what is called a quotient in mathematics, in particular of a quotient in topology.

\n

But if you combine a functionalist view of mind with big worlds cosmology, then reality becomes the quotient of the set of all possible computations, where all sub-computations that instantiate you are identified. Imagine that you have an infinite piece of paper representing the multiverse, and you draw a dot on it wherever there is a computational process that is the same as the one going on in your brain right now. Now fold the paper up so that all the dots are touching each other, and glue them at that point into one dot. That is your world. 
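To pin the metaphor down a little (my notation, not the post's): let C be the set of all possible computations and let y(c) mean that computation c instantiates your current mind-state. Gluing all such computations together is the quotient

```latex
\[
  c_1 \sim c_2 \;\iff\; \big(c_1 = c_2\big) \;\lor\; \big(y(c_1) \wedge y(c_2)\big),
  \qquad
  W \;=\; C / \sim
\]
```

so W is the folded sheet of paper: every dot marking "a computation that is you right now" has been identified with a single point.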

\n

Almost all of the histories and futures that feed into your \"now\" are simulations, by Bostrom's simulation argument (which is no longer shackled by the requirement that the simulations must be performed by our particular descendants - all possible descendants and aliens get to simulate us).

\n

Future Shock level 5 is \"the Copernican revolution with respect to your place in the multiverse\", the point where you mentally realize that perfectly dry astrophysics implies that there is no unique \"you\" at the centre of your sphere of concern, analogous to the Copernican revolution that unseated earth from the centre of the solar system. It is considered to be more shocking than any of the previous future shock levels because it destroys the most basic human epistemological assumption that there is such a thing as my future, or such a thing as the consequence of my actions

\n

Shock Level 5 is a good candidate for Dan Dennett's universal acid: an idea so corrosive that if we let it into our minds, everything we care about will be dissolved. You can't change anything in the multiverse - every decision or consequence that you don't make will be made infinitely many times elsewhere by near-identical copies of you. Every victory will be produced, as will every possible defeat. 

\n

In \"What are probabilities anyway?\" Wei Dai suggests a potential solution to your SL5 worries:

\n
\n

All possible worlds are real, and probabilities represent how much I care about each world. (To make sense of this, recall that these probabilities are ultimately multiplied with utilities to form expected utilities in standard decision theories.)

\n
\n

For example, you could get your prior probabilities from the mathematization of Occam's Razor, the complexity prior. Then the reason you don't worry that your chocolate bar will turn into a hamster is that the complexity of that hypothesis is higher than the complexity of other hypotheses, such as the chocolate bar just tasting like normal chocolate. But you're not saying that this scenario is unlikely to happen: it is certain to happen, but you just don't care about it.
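As a rough sketch of what the complexity prior says (my formulation; the post does not spell it out): weight each hypothesis h by its description length K(h), the length in bits of the shortest program specifying it,

```latex
\[
  P(h) \;\propto\; 2^{-K(h)}.
\]
```

The hamster-transmutation hypothesis has a much larger K than "the chocolate keeps tasting like chocolate", so it gets a correspondingly tiny weight, which on this reading is a statement about care rather than about frequency.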

\n

Wei's UDT allows you to overcome the decision-theoretic paralysis that would otherwise follow in a Big World: you think of yourself as defining an agent program that controls all of the instantiations of you, so that your decisions do matter. But remember, in order to get decisions out of UDT in a Big World, you need that all-important measure, that is, a \"how-much-I-care\" density on the multiverse that integrates to 1.
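In symbols (again my paraphrase of the requirement, not Wei Dai's notation): if m is the care-density over worlds and U(w, a) is the utility of the outcome when your agent program outputs decision a in world w, then UDT needs

```latex
\[
  \int_{\mathcal{W}} m(w)\,\mathrm{d}w \;=\; 1,
  \qquad
  EU(a) \;=\; \int_{\mathcal{W}} m(w)\, U(w, a)\,\mathrm{d}w ,
\]
```

and the agent picks the decision a that maximizes EU(a) across all of its instantiations at once.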

\n

Personally, I think that Shock Level 5 could be seen as emotionally dangerous for a human to take seriously, so beware.

\n

However, there may be strong instrumental reasons to take SL5 seriously if it is true (and there are strong reasons to believe that it is).

\n


\n

1: Anders Sandberg talks about the limits of physical systems to process information.

\n

2: Bostrom on astronomical waste is relevant here as he is calculating the likely number of people that we could simulate in our universe, which ought to be roughly the same as the number of people that some other civilization could simulate in a similar universe.

\n

3: Not one of the originally proposed 4 future shock levels.

\n

4: To really nail the subjective anticipation issue requires another post.

" } }, { "_id": "tnoLZmy4vNHSFe96n", "title": "I’m behind my eyes", "pageUrl": "https://www.lesswrong.com/posts/tnoLZmy4vNHSFe96n/i-m-behind-my-eyes", "postedAt": "2010-05-25T20:00:33.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "tnoLZmy4vNHSFe96n", "html": "

Where does your mind feel like it is?

\n

Ken Aizawa asks why we intuitively feel like our minds are in our heads, and answers by saying it’s because humans discovered early on that damaging the head disables the mind, and have passed the knowledge of their equivalence on since. This seems unlikely to me, since damaging other parts of the body badly enough also disables the mind and damaging the head also disables other bodily functions. Plus it seems a much stronger intuition to me than ‘my feet are made of atoms’ for instance, a belief about the composition of my body that culture gave me early on. I also doubt the ‘I am in my head’ intuition is a direct evolutionary result, since it doesn’t seem useful. I suspect instead that I feel like ‘I’ am located in my head since my eyes are there, and they give me most of the information about my location. I can look down and see my feet a long way away, and it would be complicated to think I was over there. Next I might think I was a person on the other side of the room. I am simply at the center of my perspective of the world.

\n

A way to test this is to ask blind people, though they may get the same effect with their ears. Better would be to ask people who are blind and deaf. I know few of the former and none of the latter – can any of my readers enlighten me?

\n

Also, I’m not sure that the assumption in the original question is right, since at least once I have heard someone imply they feel like they are located elsewhere. Do most people actually feel like their minds are in their heads?


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "3E66o8JghKM5gBM6L", "title": "LessWrong meetup, London UK, 2010-06-06 16:00", "pageUrl": "https://www.lesswrong.com/posts/3E66o8JghKM5gBM6L/lesswrong-meetup-london-uk-2010-06-06-16-00", "postedAt": "2010-05-23T13:46:44.536Z", "baseScore": 10, "voteCount": 7, "commentCount": 18, "url": null, "contents": { "documentId": "3E66o8JghKM5gBM6L", "html": "

The next London LessWrong meetup will be at 16:00 on Sunday 6 June in the Shakespeare's Head near Holborn station.  I'll put a Less Wrong sign on the table so you can find us; I look like this.  Send me a direct message with your mobile number or email me (paul at ciphergoth dot org), and I'll reciprocate.

\n

We're trying out a different venue this time; this is where we ended up meeting after Humanity+ which means that at least six of us already know where it is.  Looking forward to seeing some of you there!

\n

Update: Sad to say, I may now not make this myself - a domestic emergency has come up. Sorry to let people down! If another volunteer could step forward to put the \"Less Wrong\" notice on the table and kick things off, that would be a great service - thanks!

" } }, { "_id": "9gfZNpBzEXCcJKXoY", "title": "30th Soar workshop", "pageUrl": "https://www.lesswrong.com/posts/9gfZNpBzEXCcJKXoY/30th-soar-workshop", "postedAt": "2010-05-23T13:33:04.631Z", "baseScore": 24, "voteCount": 23, "commentCount": 18, "url": null, "contents": { "documentId": "9gfZNpBzEXCcJKXoY", "html": "

This is a report from a LessWrong perspective, on the 30th Soar workshop. Soar is a cognitive architecture that has been in continuous development for nearly 30 years, and is in a direct line of descent from some of the earliest AI research (Simon's LT and GPS). Soar is interesting to LessWrong readers for two reasons:

\n
    \n
  1. Soar is a cognitive science theory, and has had some success at modeling human reasoning - this is relevant to the central theme of LessWrong, improving human rationality.
  2. Soar is an AGI research project - this is relevant to the AGI risks sub-theme of LessWrong.
\n

Where I'm coming from: I'm a skeptic about EY/SIAI dogmas that AI research is more risky than software development, and that FAI research is not AI research, and has little to learn from the field of AI research. In particular, I want to understand why AI researchers are generally convinced that their experiments and research are fairly safe - I don't think that EY/SIAI are paying sufficient attention to these expert opinions.

\n

Overall summary: John Laird and his group are smart, dedicated, and funded. Their theory and implementation moves forward slowly but continuously. There's no (visible) work being done on self-modifying, bootstrapping or approximately-universal (e.g. AIXItl) entities. There is some concern about how to build trustworthy and predictable AIs (for the military's ROE) - for example, Scott Wallace's research.

\n

As far as I can tell, the Soar group's work is no more (or less) risky than narrow AI research or ostensibly non-AI software development. To be blunt - package managers like APT seem more risky than Soar, because the economic forces that push them to more capability and complexity are more difficult to control.

\n

Impressions of (most of) the talks - they can be roughly categorized into three types.

\n
    \n
  1. Miscellaneous
  2. Extending, combining and unifying the existing Soar capabilities (uniformly by Laird and his students)
  3. Applications of Soar
\n

I want to emphasize that these just my impressions (which are probably flawed - because of my inexperience I probably misunderstood important points), and the proceedings (that is, the slides that everyone used to talk with) will soon be available, so you can read them and form your own impressions.

\n

There are three forks to my implicit safety case while developing. I'm not claiming this is a particularly good safety case or that developing Rogue-Soar was safe - just that it's what I have.

\n

The first fork is that tasks vary in their difficulty (Pickering's \"resistances\"), and entities vary in their strength or capability. There's some domain-ish structure to entity's strengths (a mechanical engineering task will be easier for someone trained as a mechanical engineer than a chemist), and intention matters - difficult tasks are rarely accomplished unintentionally. I'm fairly weak, and my agent-in-progress was and is very, very weak. The chance that I or my agent solves a difficult task (self-improving AGI) unintentionally while writing Rogue-Soar is incredibly small, and comparable to the risk of my unintentionally solving self-improving AGI while working at my day job. This suggests that safer (not safe) AI development might involve: One, tracking ELO-like scores of people's strengths and task difficulties, and Two, tracking and incentivizing people's intentions.

\n

The second fork is that even though I'm surprised sometimes while developing, the surprises are still confined to an envelope of possible behavior. The agent could crash, run forever, move in a straight line or take only one step when I was expecting it to wander randomly, but pressing \"!\" when I expected it to be confined to \"hjkl\" would be beyond this envelope. Of course, there are many nested envelopes, and excursions beyond the bounds of the narrowest are moderately frequent. This suggests that safer AI development might involve tracking these behavior envelopes (altogether they might form a behavior gradient), and the frequency and degree of excursions, and deciding whether development is generally under control - that is, acceptably risky compared to the alternatives.

\n

The third fork is that the runaway takeoff arguments necessarily involve circularities and feedback. By structural inspection and by intention, if the AI is dealing with Rogue, and not learning, programming, or bootstrapping, then it's unlikely to undergo takeoff. This suggests that carefully documenting and watching for circularities and feedback may be helpful for safer AI research.

\n

 

" } }, { "_id": "F7pihuF8qRbJ6WTue", "title": "Link: Strong Inference", "pageUrl": "https://www.lesswrong.com/posts/F7pihuF8qRbJ6WTue/link-strong-inference", "postedAt": "2010-05-23T02:49:38.419Z", "baseScore": 15, "voteCount": 22, "commentCount": 54, "url": null, "contents": { "documentId": "F7pihuF8qRbJ6WTue", "html": "



The paper \"Strong Inference\" by John R. Platt is a meta-analysis of scientific methodology published in Science in 1964. It starts off with a wonderfully aggressive claim:

\n
\n

Scientists these days tend to keep up a polite fiction that all science is equal.

\n
\n

The paper starts out by observing that some scientific fields progress much more rapidly than others. Why should this be?

\n

\n
I think the usual explanations we tend to think of - such as the tractability of the subject, or the quality or education of the men drawn into it, or the size of the research contracts - are important but inadequate... Rapidly moving fields are fields where a particular method of doing scientific research is systematically used and taught, an accumulative method of inductive inference that is so effective that I think it should be given the name \"Strong Inference\".
\n

 

\n

The definition of Strong Inference, according to Platt, is the formal, explicit, and regular adherence to the following procedure:

\n
    \n
  1. Devise alternative hypotheses;
  2. Devise a crucial experiment (or several of them), with alternative possible outcomes, each of which will, as nearly as possible, exclude one or more of the hypotheses;
  3. Carry out the experiment so as to get a clean result;
  4. (Goto 1) - Recycle the procedure, making subhypotheses or sequential hypotheses to refine the problems that remain; and so on.
\n

This seems like a simple restatement of the scientific method. Why does Platt bother to tell us something we already know?

\n
The reason is that many of us have forgotten it. Science is now an everyday business. Equipment, calculations, lectures become ends in themselves. How many of us write down our alternatives and crucial experiments every day, focusing on the exclusion of a hypothesis?
\n


Platt gives us some nice historical anecdotes of strong inference at work. One is from high-energy physics:

\n
[One of the crucial experiments] was thought of one evening at suppertime: by midnight they had arranged the apparatus for it, and by 4am they had picked up the predicted pulses showing the non-conservation of parity.
\n

 

\n

The paper emphasizes the importance of systematicity and rigor over raw intellectual firepower. Roentgen, proceeding systematically, shows us the meaning of haste:

\n
Within 8 weeks after the discovery of X-rays, Roentgen had identified 17 of their major properties.
\n

 

\n

Later, Platt argues against the overuse of mathematics:

\n
I think that anyone who asks the question about scientific effectiveness will also conclude that much of the mathematicizing in physics and chemistry today is irrelevant if not misleading.
\n


(Fast forward to the present, where we have people proving the existence of Nash equilibria in robotics and using Riemannian manifolds in computer vision, when robots can barely walk up stairs and the problem of face detection still has no convincing solution.)

\n

One of the obstacles to hard science is that hypotheses must come into conflict, and one or the other must eventually win. This creates sociological trouble, but there's a solution:

\n
The conflict and exclusion of alternatives that is necessary to sharp inductive inference has been all too often a conflict between men, each with his single Ruling Theory. But whenever each man begins to have multiple working hypotheses, it becomes purely a conflict between ideas.
\n

 

\n

Finally, Platt suggests that all scientists continually bear in mind The Question:

\n
But, sir, what experiment could disprove your hypothesis?
\n

----

Now, LWers, I am not being rhetorical, I put these questions to you sincerely: Is artificial intelligence, rightly considered, an empirical science? If not, what is it? Why doesn't AI make progress like the fields mentioned in Platt's paper? Why can't AI researchers formulate and test theories the way high-energy physicists do? Can a field which is not an empirical science ever make claims about the real world?

\n

If you have time and inclination, try rereading my earlier post on the Compression Rate Method, especially the first part, in the light of Platt's paper.

\n

Edited thanks to feedback from Cupholder.

" } }, { "_id": "SYJtFXEqkYn3dpnE3", "title": "Why is reductionism rude?", "pageUrl": "https://www.lesswrong.com/posts/SYJtFXEqkYn3dpnE3/why-is-reductionism-rude", "postedAt": "2010-05-23T01:19:42.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "SYJtFXEqkYn3dpnE3", "html": "

People have a similar dislike for many quantification related things:

\n\n

Why?

\n

\n

Individual explanations abound. Being thought of as an object, number, or statistic is ‘dehumanizing’.  TV Tropes suggests that serial numbers and prison numbers have made numbers suggest inhumanity. ‘Being a number’ prevents you from being unique (strangely – how many people share your credit card number? How many share your name?).  Being objectified makes others disrespect you as they do objects. Measuring art or culture misses important, indefinable or intangible things. Being a statistic is bad because people don’t care about statistics. Publishing school test scores misleads parents because test scores aren’t everything, and parents might think they are.

\n

These explanations mostly seem unexplanatory or implausible to me, and the similarity of the concerns suggests that they have a common cause.  The explanations have an idea in common:  quantification destroys important, especially human related, aspects of things.

\n

This seems an odd concern. Measuring things naturally leaves some of their aspects unmeasured, but if you are worried about missing information, refusing to measure what you can is a strange solution. And fear of Goodhart’s law doesn’t account for the offense and disdain these things prompt.

\n

One explanation is that explicit measurement inhibits one using ‘judgement’ to come to preferred conclusions. That is, it restricts hypocrisy. This explanation requires that the things people don’t want to measure are the things they like to lie about the importance of. This tentatively seems to fit – the things we don’t want to quantify are usually manifestations of admirable values that people tend to talk of more than act on. We mind quantifying love more than sex, nice views more than nice timber, friendship more than hairdressing.

\n

It’s been argued before that we like to say ‘sacred values‘ like human life are infinitely valuable. Ascribing infinite value to something that you don’t really sacrifice everything for risks being too obviously hypocritical. Do we claim ineffability to hide this hypocrisy?

\n

Hypocrisy alone doesn’t explain such a broad aversion though. We don’t like quantifying more than just value. What’s wrong with reductionistic explanations of human behaviour for instance? It seems that many interpret such things as implying human behaviour is less valuable. People hate being ‘reduced’ to ‘just’ something or another, regardless of its complexity. Humans and their concerns are fragile magical things that can be sullied or destroyed by trying to pin them down. Why is measurement of non-value features contrary to our humanity, or to importance?

\n

Another theory:  We generally use story thought for important social matters, and system thought for unimportant social matters along with non-social matters. Quantification is pretty much specific to system thought, so using it for a social matter says you find the matter unimportant, and is thus offensive to those who think the matter is important.

\n

Story thought should be useful in social situations. It allows us to fudge matters as in the previous hypothesis. At the same time when we are dealing with important social matters we want to use other story thought features, such as sensitivity to value and social implications, expectation that social rules determine outcomes, attention to agents and especially their unique identities, emphasis on our own perspective, respectful treatment of others as unpredictable agents, and sensitivity to intentions and potential for retribution and reward.

\n

I’m not sure why we would talk about unimportant social issues in system style instead, but it looks like we do more. My friends eat fat because of personal decisions, whereas the poor eat fat because it’s advertised to them. Violence in Aboriginal communities is due to poor social conditions, whereas violence in my culture is due to personal evil.  I recall an article reporting Aboriginal girls being raped, which suggested that if this wasn’t intervened in soon, this generation of children might also grow up to suffer from being rapists. Amoral influences seem to shape history and foreign affairs more than they do near social issues. ‘Ferdinand’s death … set in train a mindlessly mechanical series of events that culminated in the world’s first global war’, while Our relationship failed because of YOU.

\n

This seems linked to near and far mode; distant people are unimportant and tend to be in system thought. That’s puzzling though, since in far mode we usually care more about morality style values, which feature mostly in story thought.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "w4iig3SD7SxWw5jSu", "title": "To signal effectively, use a non-human, non-stoppable enforcer", "pageUrl": "https://www.lesswrong.com/posts/w4iig3SD7SxWw5jSu/to-signal-effectively-use-a-non-human-non-stoppable-enforcer", "postedAt": "2010-05-22T22:03:00.354Z", "baseScore": 39, "voteCount": 42, "commentCount": 167, "url": null, "contents": { "documentId": "w4iig3SD7SxWw5jSu", "html": "

Follow-up to: this comment in this thread

\n

Summary: see title

\n

Much effort is spent (arguably wasted) by humans in a zero-sum game of signaling that they hold good attributes.  Because humans have strong incentive to fake these attributes, they cannot simply inform each other that:

\n
\n

I am slightly more committed to this group’s welfare, particularly to that of its weakest members, than most of its members are. If you suffer a serious loss of status/well-being I will still help you in order to display affiliation to this group even though you will no longer be in a position to help me. I am substantially more kind and helpful to the people I like and substantially more vindictive and aggressive towards those I dislike. I am generally stable in who I like. I am much more capable and popular than most members of this group, demand appropriate consideration, and grant appropriate consideration to those more capable than myself. I adhere to simple taboos so that my reputation and health are secure and so that I am unlikely to contaminate the reputations or health of my friends. I currently like you and dislike your enemies but I am somewhat inclined towards ambivalence regarding whether I like you right now so the pay-off would be very great for you if you were to expend resources pleasing me and get me into the stable 'liking you' region of my possible attitudinal space. Once there, I am likely to make a strong commitment to a friendly attitude towards you rather than wasting cognitive resources checking a predictable parameter among my set of derivative preferences.

\n
\n

Or, even better:

\n
\n

I would cooperate with you if and only if (you would cooperate with me if and only if I would cooperate with you).

\n
\n

An obvious solution to this problem, which allows all humans to save resources and redirect them toward higher-valued ends, is to designate a central enforcer that is inexorably committed toward visibly punishing those who deviate from a specified \"cooperative\"-type decision theory.  This enforcer would have a central database of human names, the decision theory they have committed to, and the punishment regime they will endure for deviating therefrom.

\n

Such a system could use equally strong protocols, such as public key/private key encryption and signing, so that, on encounter with another human, any human can give an extremely strong signal of being cooperative, yet also withhold cooperation from anyone who is not also cooperative.  This incentive structure permits a strongly-favored global shift toward pre-commitment on the part of everyone, allowing a move out of a local optimum that is worse than the global optimum, and bypassing problems related to path-dependence.
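A minimal sketch of what such a registry-plus-signature scheme could look like, assuming Python with the third-party cryptography package; the enforcer, the registry entry format, and the punishment field are hypothetical illustrations, not an existing system:

```python
# Sketch: the enforcer holds an Ed25519 key pair and signs entries in its
# registry of pre-committed decision theories; anyone can verify a commitment.
# Requires the third-party "cryptography" package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

enforcer_key = Ed25519PrivateKey.generate()   # held only by the enforcer
enforcer_public = enforcer_key.public_key()   # published to everyone

def register(name: str, decision_theory: str, punishment: str) -> bytes:
    """Enforcer signs a registry entry binding a person to a decision theory."""
    entry = f"{name}|{decision_theory}|{punishment}".encode()
    return enforcer_key.sign(entry)

def verify(name: str, decision_theory: str, punishment: str, signature: bytes) -> bool:
    """Anyone can check a claimed commitment against the enforcer's public key."""
    entry = f"{name}|{decision_theory}|{punishment}".encode()
    try:
        enforcer_public.verify(signature, entry)
        return True
    except InvalidSignature:
        return False

sig = register("ExampleUser", "cooperate-iff-(you-cooperate-iff-I-cooperate)", "public shunning")
assert verify("ExampleUser", "cooperate-iff-(you-cooperate-iff-I-cooperate)", "public shunning", sig)
```

Verification needs only the enforcer's public key, so any two strangers can check each other's commitments without trusting each other.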

\n

I am told that humans attempt weak approximations of this system through human-run enforcement mechanisms, such as laws and gossip.  However, all such methods are subject to evolutionary arms races that ensure that, on average, a marginal unit of effort toward truthful signaling is just as effective in convincing others of one's possession of a desirable attribute, as a marginal unit of effort toward false signaling of the possession of such an attribute.

\n

Therefore, it is necessary that humans go \"outside the system\" and involve non-human enforcers who cannot be swayed by standard human values.  Either a provably-accurate artificial intelligence with no kill switch, or a non-human enforcer that lacks interest in its position in human society, would suffice for this role.

\n

Because the former does not yet exist, I offer myself as a non-human, non-stoppable enforcer for all of the attributes you wish to signal, and you can save yourself lots of resources by using me, and identify cooperation tendencies in others by whether they also use me.

\n

However, while I can't be swayed by human deception, I do have Clippy values.  To fill such a role, thereby forgoing my ability to apply myself to other ends, I require compensation.  Because User:Kevin has offered to give me 1e20 kg of paperclips in exchange for 50,000 USD, this would be sufficient compensation, even if done in installments.  (If Michael Vassar confirms my contribution of 1000 USD to SIAI, User:Kevin will only require 48,677.5 USD and therefore I will only require that much compensation.)  This compensation would cover all users of the system; it is not a per-User charge.

\n

If such an offer is not appealing to you, I must ask: why should anyone believe your signals?

" } }, { "_id": "syvKKvr5wrE69YJhm", "title": "LessWrong downtime 2010-05-11, and other recent outages and instability", "pageUrl": "https://www.lesswrong.com/posts/syvKKvr5wrE69YJhm/lesswrong-downtime-2010-05-11-and-other-recent-outages-and", "postedAt": "2010-05-22T01:33:08.403Z", "baseScore": 25, "voteCount": 18, "commentCount": 20, "url": null, "contents": { "documentId": "syvKKvr5wrE69YJhm", "html": "

Incident report and hosting update

\n

In the leadup to 2010-05-11 we (Tricycle) were unhappy with repeated short downtime incidents on the Less Wrong (LW) server (serpent). The apparent cause was the paster process hanging during heavy IO. We had scripted an automatic restart of the process when this problem was detected, but each incident caused up to a minute of downtime and it was obvious that we needed a proper solution. We concluded that IO on serpent was abnormally slow, and that the physical machine at Slicehost that serpent ran on had IO problems (Slicehost was unable to confirm our diagnosis). We requested migration to a new physical machine.

\n

Error 1: We requested this migration at the end of our working day, and didn't nurse the migration through.

\n

After the migration LW booted properly, but was quickly unstable. Since we didn’t nurse the migration through we failed to notice ourselves. Our website monitoring system (nagios) should have notified us of the failure, but it, too failed. We have a website monitoring system monitoring system (who watches the watchers? this system does - it is itself watched by nagios).

\n

Error 2: Our website monitoring system monitoring system (a cron job running on a separate machine) was only capable of reporting nagios failures by email. It \"succeeded\" in so far as it sent an email to our sysadmin notifying him that nagios was failing. It clearly failed in that it failed to actually notify a human in reasonable time (our sysadmin very reasonably doesn’t check his email during meals).

\n

serpent continued to be unstable through our next morning as we worked on diagnosing and fixing the problem. IO performance did not improve on a new physical server.

\n

2010-05-17 we migrated the system again to an AWS server, and saw significant speed and general stability improvements.

\n

Error 3: The new AWS server didn’t include one of the python dependencies the signup captcha relies on. We didn’t notice. Until davidjr raised an issue in the tracker (#207), which notified us, no-one was able to sign up.

\n

What we have achieved:

\n

LW is now significantly faster and more responsive. It also has much more headroom on its server - even large load spikes should not reduce performance.

\n

What has been done to prevent recurrence of errors:

\n

Error 1: Human error. We won’t do that again. Generally “don’t do that again” isn’t a very good systems improvement… but we really should have known better.

\n

Error 2: We improved our monitoring system monitoring system the morning after it failed to notify us so that it now attempts to restart nagios itself, and sends SMS notifications and emails to two of us if it fails.
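For concreteness, a cron-run watchdog along these lines might look roughly like the following sketch; the service name, addresses, and local mail relay are placeholder assumptions, not Tricycle's actual configuration, and SMS would go out through a carrier gateway or similar:

```python
# Sketch of a "monitoring system monitoring system": check that nagios is up,
# try to restart it, and notify humans only if the restart fails.
import smtplib
import subprocess
import time
from email.message import EmailMessage

RECIPIENTS = ["sysadmin@example.com", "backup@example.com"]  # placeholder addresses

def nagios_running() -> bool:
    # pgrep exits with status 0 when a matching process exists
    return subprocess.run(["pgrep", "-x", "nagios"], capture_output=True).returncode == 0

def notify(subject: str, body: str) -> None:
    msg = EmailMessage()
    msg["From"] = "watchdog@example.com"
    msg["To"] = ", ".join(RECIPIENTS)
    msg["Subject"] = subject
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)

if not nagios_running():
    subprocess.run(["service", "nagios", "restart"])  # attempt self-healing first
    time.sleep(30)  # give the service a moment to come back
    if not nagios_running():
        notify("nagios is down and would not restart",
               "Automatic restart failed; manual intervention needed.")
```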

\n

Error 3: We’re in the process of building a manual deploy checklist to check for this failure and other failures we think plausible. We generally prefer automated testing, but development on this project is not currently active enough to justify the investment. We’ll add an active reminder to run that checklist to our deploy script (we’ll have to answer “yes, I have run the checklist” or something similar in the deploy script).

\n

 

\n

ETA 2010-06-02:

\n

Clearly still some problems. We're working on them.

\n

ETA 2010-06-09:

\n

New deployment through an AWS elastic load balancer. We expect this to be substantially more stable, and after DNS propagates, faster.

" } }, { "_id": "saaL82zquqCCByNQz", "title": "Taking the awkwardness out of a Prenup - A Game Theoretic solution", "pageUrl": "https://www.lesswrong.com/posts/saaL82zquqCCByNQz/taking-the-awkwardness-out-of-a-prenup-a-game-theoretic", "postedAt": "2010-05-22T00:45:29.406Z", "baseScore": 42, "voteCount": 38, "commentCount": 111, "url": null, "contents": { "documentId": "saaL82zquqCCByNQz", "html": "

I would strongly advise you to look at the short review on Thomas Schelling's Strategy of Conflict posted on Less Wrong some time back. The idea that deliberately constraining one's own choices can actually leave a person better off in a negotiation is a very interesting one. The most classic game theoretic example of this is the game of Chicken. In the game of Chicken, two people drive toward each other on a wide freeway. If neither of them swerve, they both stand to lose by way of substantial financial damage and possible loss of lives. If not, the first one to swerve is the proverbial \"chicken\" and stands to lose face against the other person who was brave enough to not swerve. If one person were to throw away their steering wheel and blindfold themselves before driving on the freeway, that would force the other person to swerve given that the first person has completely given up control of the situation. 
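A toy payoff matrix makes the commitment logic explicit; the numbers below are illustrative choices of mine, not taken from Schelling:

```python
# Chicken: each player chooses "swerve" or "straight".
# Payoffs are (row player, column player); crashing is far worse than losing face.
PAYOFFS = {
    ("swerve", "swerve"): (0, 0),
    ("swerve", "straight"): (-1, 1),        # row player is the "chicken"
    ("straight", "swerve"): (1, -1),
    ("straight", "straight"): (-10, -10),   # both refuse to swerve: crash
}

def best_response(my_options, opponent_choice):
    """Row player's payoff-maximizing reply to a fixed opponent move."""
    return max(my_options, key=lambda mine: PAYOFFS[(mine, opponent_choice)][0])

# Normally both moves are live and the game is a standoff. But if the opponent has
# visibly thrown away the wheel, their only possible move is "straight", and the
# rational reply is to swerve:
print(best_response(["swerve", "straight"], "straight"))  # -> swerve
```

Throwing away the wheel deletes "swerve" from your own option set, and the computation above is exactly the opponent's reasoning once they see that.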

\n

     There is a slightly more generalizable example of a similar principle at work. Suppose you wanted to buy a used car from a car dealer and were prepared to pay up to $5000 for the car and the car dealer in turn was willing to sell it for any price above $4000. In such a situation, any price between $4000 and $5000 is an admissible solution. However, you ideally want to pay as close to $4000 as possible, while the car dealer would like you to pay close to $5000. In such a situation, each party would pretend that their \"last price\" (the price that represents the worst possible outcome for them, which they would nonetheless be willing to accept) was different from the true last price, since if one party realizes the other party's true last price, that party can put it to effective use in the negotiation. Let us now assume a situation wherein you and the car dealer know perfectly well about each other's financial details, the degree of urgency in having the transaction done etc., and have a very reliable idea of the last price of the other person. Now, you can break the symmetry and get the best possible deal out of the situation by deliberately handicapping yourself in the following fashion. You sign a contract with a third party individual which states that if you happen to do this transaction and pay more than $4000, you will have to pay the third party $1500. Now, all you need to do is show this contract to your used car dealer which would make it clear to him that your last price has now shrunk to $4000 since paying anything above that effectively means paying in excess of $5500 which is well past your original last price.
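The arithmetic of the side contract can be spelled out in a few lines (a sketch using the dollar figures from the example above):

```python
# Reservation prices from the example above.
BUYER_LAST_PRICE = 5000    # most the buyer would genuinely pay
DEALER_LAST_PRICE = 4000   # least the dealer would accept
PENALTY = 1500             # owed to the third party if the buyer pays over $4000

def buyer_total_cost(price: int) -> int:
    """What the buyer actually parts with once the side contract is in force."""
    return price + (PENALTY if price > 4000 else 0)

# Any price above $4000 now costs the buyer more than the original $5000 limit,
# so the credible bargaining range collapses to exactly $4000.
for price in (4000, 4001, 4500, 5000):
    acceptable = buyer_total_cost(price) <= BUYER_LAST_PRICE
    print(price, buyer_total_cost(price), acceptable)
# 4000 4000 True / 4001 5501 False / 4500 6000 False / 5000 6500 False
```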

\n

    For countries that face the menace of airline hijacking, it can likewise be an effective deterrent to future hijackers if release of terrorists or other kinds of negotiations with hijackers were explicitly prohibited by the country's laws, and these laws would be impossible to overturn during a hijacking incident. 

\n

     This brings to mind the following question. Why don't there exist companies that explicitly sign contracts with individuals or other entities for a fee, which would handicap the entities in some way that cannot be easily overturned and consequently give them negotiating leverage as a result. 

\n

      One example I can think of is pertaining to wealthy individuals in California and other US States with Community Property laws. Given the high divorce rates in the US, it would be prudent for such individuals to have as tight prenuptial agreements as possible prior to getting married, to minimize financial loss in the event of a divorce and also to avoid financially incentivizing one's spouse to initiate a divorce with a promise of a financial windfall. However there are some practical difficulties which might make many such individuals shy away from doing this. A couple of the practical issues are:

\n

A. It is clearly rather unromantic to have to haggle with one's fiancée and their lawyers regarding a prenuptial agreement. The implied \"lack of belief\" in the potential durability of the marriage might be a turn off for one's partner and other close people involved.

\n

B. The individuals themselves might get carried away by emotion and believe that they have found \"the one\" and assign a much lower probability of divorce or forcible concessions that they would need to make in future when faced with the threat of divorce. In such a situation, they would fail to realize that probably 50% of Americans who felt they found \"the one\" just like them, went on to eventually get divorced.

\n

   Now imagine the beneficial role a company signing such contracts could provide. The individual in question could sign a contract with this company stating that if they were to get married without a bulletproof pre-specified prenuptial agreement, the company could lay claim to half their net worth immediately after the wedding was registered. Ideally, the individual in question could sign such a contract when they were single or not seriously seeing anyone with the intention of getting married. The advantage of such a contract is the following:

\n

1. Community property and other modern divorce laws essentially change the defaults with regard to what happens in the aftermath of a divorce, compared to how marriages worked prior to the existence of such laws. Such a contract would reset the default state to one where neither party would financially profit in the aftermath of a divorce. Most of the awkwardness comes when trying to override the default state with a bunch of legal riders at the time of a wedding. 

\n

2. The advantage of signing up for a contract well in advance is that the aforesaid individual is then not exposed to issues A and B above. Signing a tight pre-nuptial agreement in the background of such a contract, simply means that the individual in question has no desire to part with half their finances to this third party company. It makes no implicit statement about the individual's probability estimate for the durability of the marriage. There always exists the plausible explanation that the individual in question was opposed to non-prenup marriages in the past, but now saw no need for that given that they subsequently found \"the one\". However they are constrained by a certain contract they signed in the past that they are now powerless to change.

\n

   Do you know if there are entities that play the role of the third party company with regard to signing contracts that enable people to handicap themselves and consequently come out stronger in future negotiations? Do you know of people who did this specifically with regard to prenuptial agreements? If such companies don't exist, is that a potential business opportunity? I would love to hear from you in the comments. 

\n

 

\n

 

" } }, { "_id": "YgCi9vBphbG7hmnb5", "title": "The Tragedy of the Social Epistemology Commons", "pageUrl": "https://www.lesswrong.com/posts/YgCi9vBphbG7hmnb5/the-tragedy-of-the-social-epistemology-commons", "postedAt": "2010-05-21T12:42:38.103Z", "baseScore": 66, "voteCount": 59, "commentCount": 91, "url": null, "contents": { "documentId": "YgCi9vBphbG7hmnb5", "html": "

In Brief: Making yourself happy is not best achieved by having true beliefs, primarily because the contribution of true beliefs to material comfort is a public good that you can free ride on, but the signaling benefits and happiness benefits of convenient falsehoods pay back locally, i.e. you personally benefit from your adoption of convenient falsehoods. The consequence is that many people hold beliefs about important subjects in order to feel a certain way or be accepted by a certain group. Widespread irrationality is ultimately an incentive problem.

\n

Note: this article has been edited to take into account Tom McCabe, Vladimir_M and Morendil's comments1

\n

In asking why the overall level of epistemic rationality in the world is low and what we can do to change that, it is useful to think about the incentives that many people face concerning the effects that their beliefs have on their quality of life, i.e. on how it is that beliefs make people win.

\n

People have various real and perceived needs; of which our material/practical needs and our emotional needs are two very important subsets. Material/practical needs include adequate nutrition, warmth and shelter, clothing, freedom from crime or attack by hostiles, sex and healthcare. Our emotional needs include status, friendship, family, love, a feeling of belonging and perhaps something called \"self actualization\".

\n

\n

Data strongly suggests that when material and practical needs are not satisfied, people live extremely miserable lives (this can be seen in the happiness/income correlation—note that very low incomes predict very low happiness). The comfortable life that we lead in developed countries seems to mostly protect us from the lowest depths of anguish, and I would postulate that a reasonable explanation is that almost all of us never starve, die of cold or get killed in violence.

\n

The comfort that we experience (in the developed world) due to our modern technology is very much a product of the analytic-rational paradigm. That is to say a tradition of rational, analytic thinking stretching back through Watson & Crick, Bardeen, Einstein, Darwin, Adam Smith, Newton, Bacon, etc, is a crucial (necessary, and \"nearly\" sufficient) reason for our comfort.

\n

However, that comfort is given roughly equally to everyone and is certainly not given preferentially to the kind of person who most contributed causally to it happening, including scientists, engineers and great thinkers (mostly because the people who make crucial contributions are usually dead by the time the bulk of the benefits arrive). To put it another way, irrationalists free-ride on the real-world material-comfort achievements of rationalists

\n

This means that once you find yourself in a more economically developed country, your individual decisions in improving the quality of your own life will (mostly) not involve thinking in the rational vein that caused you to be at the quite high quality you are already at. I have been reading a good self-help book which laments that studies have shown that 50% of one's happiness in life is genetically determined—highlighting the transhumanist case for re-engineering humanity for our own benefit—but that does not mean that to individually be more happy you should become an advocate for transhumanist paradise engineering, because such a project is a public good. It would be like trying to get to work faster by single-handedly building a subway.

\n

The rational paradigm works well for societies, but not obviously for individuals

\n

Instead, to ask what incentives apply to people's choice of beliefs and overall paradigm is to ask what beliefs will best facilitate the fulfillment of those needs to which individual incentives apply. Since our material/practical needs are relatively easily fulfilled (at least amongst non-dirt-poor people in the west), we turn our attention to emotional needs such as:

\n

love and belonging, friendship, family, intimacy, group membership, esteem, status and respect from others, sense of self-respect, confidence

\n\n\n

The beliefs that most contribute to these things generally deviate from factual accuracy, because factually accurate beliefs are picked out as being \"special\" or optimal by the planning model of winning, but love, esteem and belonging are typically not achieved by coming up with a plan to get them (coming up with a plan to make someone like you is often called manipulative and is widely criticized). In fact, love and belonging are typically much better fostered by shared nonanalytic or false beliefs, for example a common belief in God or something like religion (e.g. New Age stuff), in a political party or left/right/traditional/liberal alignment, and/or by personality variables, which are themselves influenced by beliefs in a way that doesn't go via the planning model.

\n

The bottom line is that many people's \"map\" is not really like an ordinary map, in that its design criterion is not simply to reflect the territory; it is designed to make them fit into a group (religion, politics), feel good about themselves (belief in immortal soul and life after death), fit into a particular cultural niche or signal personality (e.g. belief in Chakras/Auras). Because of the way that incentives are set up, this may in many cases be individually utility maximizing, i.e. instrumentally rational. This seems to fit with the data—80% of the world are theists, including a majority of people in the USA, and as we have complained many times on this site, the overall level of rationality across many different topics (quality of political debate, uptake of cryonics, lack of attention paid to \"big picture\" issues such as the singularity, dreadful inefficiency of charity) is low.

\n

Bryan Caplan has an economic theory to formalize this: he calls it rational irrationality. Thanks to Vladimir_M for pointing out that Caplan had already formalized this idea:

\n

If the most pleasant belief for an individual differs from the belief dictated by rational expectations, agents weigh the hedonic benefits of deviating from rational expectations against the expected costs of self-delusion.

\n

Beliefs respond to relative price changes just like any other good.  On some level, adherents remain aware of what price they have to pay for their beliefs.  Under normal circumstances, the belief that death in holy war carries large rewards is harmless, so people readily accept the doctrine.  But in extremis, as the tide of battle turns against them, the price of retaining this improbable belief suddenly becomes enormous.  Widespread apostasy is the result as long as the price stays high; believers flee the battlefield in disregard of the incentive structure they recently affirmed.   But when the danger passes, the members of the routed army can, and barring a shift in preferences will, return to their original belief.  They face no temptation to convert to a new religion or flirt with atheism.

\n

 

\n

 

\n
\n

 

\n

1: The article was originally written with a large emphasis on Maslow's Hierarchy of needs, but it seems that this may be a \"truthy\" idea that propagates despite failures to confirm it experimentally.

" } }, { "_id": "6hZDoLKGuEEChkodD", "title": "Chicago Meetup", "pageUrl": "https://www.lesswrong.com/posts/6hZDoLKGuEEChkodD/chicago-meetup", "postedAt": "2010-05-20T20:42:25.049Z", "baseScore": 17, "voteCount": 13, "commentCount": 24, "url": null, "contents": { "documentId": "6hZDoLKGuEEChkodD", "html": "

Hey Chicagoans and any Midwesterners from further afield who would like to join us, let’s start building a vibrant Less Wrong community here in the Windy City!

\n

Steven0461 and I are proposing the first ever Less Wrong meetup in Chicago. The meet-up will be held at 2 pm on June 6 at the C-Shop in Hyde Park on the University of Chicago campus, which is located on the South Side of Chicago. The address is 5706 S. University Ave. Here is a map of the relevant part of the University of Chicago; the C-Shop is in the Reynolds Club building. We will have a Less Wrong sign on the table so you can identify us.

\n

Steven and I have both been Visiting Fellows at SIAI, lived at the SIAI house, and attended the Bay Area Less Wrong meetups, and we’re excited about introducing Less Wrong meetups to the great city of Chicago.

\n
Please comment if you plan to attend or if you would like to propose an alternative date, time, or location.
\n
Edited to add a link to the related  Google Group.
\n
Edited to add final meeting place and details.
" } }, { "_id": "FadvSoGXpD6YhFiDp", "title": "Open Thread: May 2010, Part 2 ", "pageUrl": "https://www.lesswrong.com/posts/FadvSoGXpD6YhFiDp/open-thread-may-2010-part-2", "postedAt": "2010-05-20T19:30:46.395Z", "baseScore": 6, "voteCount": 4, "commentCount": 358, "url": null, "contents": { "documentId": "FadvSoGXpD6YhFiDp", "html": "

The Open Thread from the beginning of the month has more than 500 comments – new Open Thread comments may be made here.

\n

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

" } }, { "_id": "meTmrcCwSsDYmo5DZ", "title": "Development of Compression Rate Method", "pageUrl": "https://www.lesswrong.com/posts/meTmrcCwSsDYmo5DZ/development-of-compression-rate-method", "postedAt": "2010-05-20T17:11:34.085Z", "baseScore": 12, "voteCount": 28, "commentCount": 21, "url": null, "contents": { "documentId": "meTmrcCwSsDYmo5DZ", "html": "

 

\n

Summary: This post provides a brief discussion of the traditional scientific method, and mentions some areas where the method cannot be directly applied. Then, through a series of thought experiments, a set of minor modifications to the traditional method are presented. The result is a refined version of the method, based on data compression.

\n

Related to: Changing the Definition of Science, Einstein's Arrogance, The Dilemma: Science or Bayes?

\n

ETA: For those who are familiar with notions such as Kolmogorov Complexity and MML, this piece may have a low ratio of novelty:words. The basic point is that one can compare scientific theories by instantiating them as compression programs, using them to compress a benchmark database of measurements related to a phenomenon of interest, and comparing the resulting codelengths (taking into account the length of the compressor itself).

\n


\n

Notes on Traditional Method

\n

This post proposes a refined version of the scientific method which, it will be argued later, is more directly applicable to the problems of interest in artificial intelligence. Before doing so, it is worth briefly examining the traditional method and the circumstances in which it can be applied. The scientific method is not an exact procedure, but a qualitative statement of it goes roughly as follows:

\n
    \n
  1. Observe a natural phenomenon.
  2. Develop a theory of that phenomenon.
  3. Use the theory to make a prediction.
  4. Test the prediction experimentally.
\n

A full discussion of the philosophical significance of the scientific method is beyond the scope of this post, but some brief remarks are in order. The power of the scientific method is in the way it links theory with experimental observation; either one of these alone is worthless. The long checkered intellectual history of humanity clearly shows how rapidly pure theoretical speculation goes astray when it is not tightly constrained by an external guiding force. Pure experimental investigation, in contrast, is of limited value because of the vast number of possible configurations of objects. To make predictions solely on the basis of experimental data, it would be necessary to exhaustively test each configuration.

\n

 

\n

As articulated in the above list, the goal of the method appears to be the verification of a single theory. This is a bit misleading; in reality the goal of the method is to facilitate selection between a potentially large number of candidate theories. Given two competing theories of a particular phenomenon, the researcher identifies some experimental configuration where the theories make incompatible predictions and then performs the experiment using the indicated configuration. The theory whose predictions fail to match the experimental outcome is discarded in favor of its rival. But even this view of science as a process of weeding out imperfect theories in order to find the perfect one is somewhat inaccurate. Most physicists will readily admit that even their most refined theories are mere approximations, though spectacularly accurate ones. The scientific method can therefore be understood as a technique for using empirical observations to find the best predictive approximation from a large pool of candidates.

\n

A core component of the traditional scientific method is the use of controlled experiments. To control an experiment means essentially to simplify it. To determine the effect of a certain factor, one sets up two experimental configurations which are exactly the same except for the presence or absence of the factor. If the experimental outcomes are different, then it can be inferred that this disparity is due to the special factor.

\n

In some fields of scientific inquiry, however, it is impossible or meaningless to conduct controlled experiments. No two people are identical in all respects, so clinical trials for new drugs, in which the human subject is part of the experimental configuration, can never be truly controlled. The best that medical researchers can do is to attempt to ensure that the experimental factor does not systematically correlate with other factors that may affect the outcome. This is done by selecting at random which patients will receive the new treatment. This method is obviously limited, however, and these limitations lead to deep problems in the medical literature. It is similarly difficult to apply the traditional scientific method to answer questions arising in the field of macroeconomics. No political leader would ever agree to a proposal in which her country's economy was to be used as an experimental test subject. In lieu of controlled experiments, economists attempt to test their theories based on the outcomes of so-called historical experiments, where two originally similar countries implemented different economic policies.

\n

A similar breakdown of the traditional method occurs in computer vision (recall that my hypothesis asserts that perception and prediction are the major components of intelligence, implying that the study of vision is central to the study of AI). Controlled vision experiments can be conducted, but are of very little interest. The physical laws of reflection and optics that govern the image formation process are well understood already. Clearly if the same camera is used to photograph an identical scene twice under constant lighting conditions, the obtained images will be identical or very nearly so. And a deterministic computer vision algorithm will always produce the same result when applied to two identical images. It is not clear, therefore, how to use the traditional method to approach the problems of interest in computer vision, which include tasks like image segmentation and edge detection.

\n

(The field of computer vision will be discussed in later posts. For now, the important thing to realize is that there are deep, deep problems in evaluating computer vision techniques. Given two image segmentation algorithms, how do you decide which one is better? The field has no compelling answer. The lack of empirical rigor in computer vision has been lamented in papers with titles like \"Computer Vision Theory: the Lack Thereof\" and \"Ignorance, Myopia, and Naivete in Computer Vision Systems\".)

\n

Sophie's Adventures

\n

The modifications to the scientific method are presented through a series of thought experiments related to a fictional character named Sophie.

\n

Episode I: The Shaman

\n

Sophie is an assistant professor of physics at a large American state university. She finds this job vexing for several reasons, one of which is that she has been chosen by the department to teach a physics class intended for students majoring in the humanities, for whom it serves to fill a breadth requirement. The students in this class, who major in subjects like literature, religious studies, and philosophy, tend to be intelligent but also querulous and somewhat disdainful of the \"merely technical\" intellectual achievements of physics.

\n

In the current semester she has become aware of the presence in her class of a discalced student with a large beard and often bloodshot eyes. This student is surrounded by an entourage of similarly odd-looking followers. Sophie is on good terms with some of the more serious students in the class, and in conversation with them has found out that the odd student is attempting to start a new naturalistic religious movement and refers to himself as a \"shaman\".

\n

One day while delivering a simple lecture on Newtonian mechanics, she is surprised when the shaman raises his hand and claims that physics is a propagandistic hoax designed by the elites as a way to control the population. Sophie blinks several times, and then responds that physics can't be a hoax because it makes real-world predictions that can be verified by independent observers. The shaman counters by claiming that the so-called \"predictions\" made by physics are in fact trivialities, and that he can obtain better forecasts by communing with the spirit world. He then proceeds to challenge Sophie to a predictive duel, in which the two of them will make forecasts regarding the outcome of a simple experiment, the winner being decided based on the accuracy of the forecasts. Sophie is taken aback by this but, hoping that by proving the shaman wrong she can break the spell he has cast on some of the other students, agrees to the challenge.

\n

During the next class, Sophie sets up the following experiment. She uses a spring mechanism to launch a ball into the air at an angle A. The launch mechanism allows her to set the initial velocity of the ball to a value of Vi. She chooses as a predictive test the problem of predicting the time Tf that the ball will fall back to the ground after being launched at Ti=0. Using a trivial Newtonian calculation she concludes that Tf = 2 Vi sin(A)/g, sets Vi and A to give a value of Tf=2 seconds, and announces her prediction to the class. She then asks the shaman for his prediction. The shaman declares that he must consult with the wind spirits, and then spends a couple of minutes chanting and muttering. Then, dramatically flaring open his eyes as if to signify a moment of revelation, he grabs a piece of paper, writes his prediction on it, and then hands it to another student. Sophie suspects some kind of trick, but is too exasperated to investigate and so launches the ball into the air. The ball is equipped with an electronic timer that starts and stops when an impact is detected, and so the number registered in the timer is just the time of flight Tf. A student picks up the ball and reports that the result is Tf = 2.134. The shaman gives a gleeful laugh, and the student holding his written prediction hands it to Sophie. On the paper is written 1 < Tf < 30. The shaman declares victory: his prediction turned out to be correct, while Sophie's was incorrect (it was off by 0.134 seconds).
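For readers who want to check the numbers, here is a minimal sketch (not part of the original story) of the Newtonian calculation Sophie uses; the launch angle is an arbitrary illustrative choice, and the launch speed is solved to give Tf = 2 seconds.

```python
import math

def flight_time(vi, angle_deg, g=9.81):
    """Time for a ball launched at speed vi (m/s) and angle angle_deg to land: Tf = 2*Vi*sin(A)/g."""
    return 2 * vi * math.sin(math.radians(angle_deg)) / g

angle = 45.0                                           # an arbitrary illustrative launch angle
vi = 2.0 * 9.81 / (2 * math.sin(math.radians(angle)))  # solve Tf = 2 s for the launch speed
print(flight_time(vi, angle))                          # ~2.0 seconds
```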

\n

To counter the shaman's claim and because it was on the syllabus anyway, in the next class Sophie begins a discussion of probability theory. She goes over the basic ideas, and then connects them to the experimental prediction made about the ball. She points out that technically, the Newtonian prediction Tf=2 is not an assertion about the exact value of the outcome. Rather it should be interpreted as the mean of a probability distribution describing possible outcomes. For example, one might use a normal distribution with mean of 2 and standard deviation of .3. The reason the shaman superficially seemed to win the contest is that he gave a probability distribution while Sophie gave a point prediction; these two types of forecast are not really comparable. In the light of probability theory, the reason to prefer the Newtonian prediction over the shamanic one is that it assigns a higher probability to the outcome that actually occurred. Now, plausibly, if only a single trial is used then the Newtonian theory might simply have gotten lucky, so the reasonable thing to do is combine the results over many trials, by multiplying the probabilities together. Therefore the real reason to prefer the Newtonian theory to the shamanic theory is that:

\n

 

\n

\"\"

\n

Where the k index runs over many trials of the experiment. Sophie then shows how the Newtonian probability predictions are both more confident and more correct than the shamanic predictions. The Newtonian predictions assign a very large amount of probability to the region around the outcome Tf=2, and in fact it turns out that almost all of the real data outcomes fall in this range. In contrast, the shamanic prediction assigns a relatively small amount of probability to the Tf=2 region, because he has predicted a very wide interval (1 < Tf < 30). Thus while the shamanic prediction is correct, it is not very confident. The Newtonian prediction is correct and highly confident, and so it should be preferred.
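A small sketch of this comparison, under stated illustrative assumptions: the Newtonian forecast is modeled as a normal distribution with mean 2 and standard deviation 0.3, the shamanic forecast as a uniform distribution on (1, 30), and the trials are simulated rather than real measurements.

```python
import math
import random

def normal_pdf(x, mu=2.0, sigma=0.3):
    """Density of the Newtonian forecast: normal with mean 2 s and s.d. 0.3 s."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def uniform_pdf(x, lo=1.0, hi=30.0):
    """Density of the shamanic forecast: uniform on (1, 30)."""
    return 1.0 / (hi - lo) if lo < x < hi else 0.0

random.seed(0)
trials = [random.gauss(2.0, 0.3) for _ in range(50)]   # simulated flight times

newton = math.prod(normal_pdf(t) for t in trials)
shaman = math.prod(uniform_pdf(t) for t in trials)
print(newton > shaman)   # the Newtonian forecast assigns far more probability to the data
```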

\n

Sophie tries to emphasize that the Newtonian probability prediction only works well for the real data. Because of the requirement that probability distributions be normalized, the Newtonian theory can only achieve superior performance by reassigning probability towards the region around Tf=2 and away from other regions. A theory that does not perform this kind of reassignment cannot achieve superior performance.

\n

Sophie recalls that some of the students are studying computer science and for their benefit points out the following. Information theory provides the standard equation L(x) = -log P(x), which governs the relationship between the probability of an outcome and the length of the optimal code that should be used to represent it. Therefore, given a large data file containing the results of many trials of the ballistic motion experiment, the two predictions (Newtonian and shamanic) can both be used to build specialized programs to compress the data file. Using the codelength/probability conversion, the above inequality can be rewritten as follows:

\n

\"\"

\n

This inequality indicates an alternative criterion that can be used to decide between two rival theories. Given a data file recording measurements related to a phenomenon of interest, a scientific theory can be used to write a compression program that will shrink the file to a small size. Given two rival theories of the same phenomenon, one invokes the corresponding compressors on a shared benchmark data set, and prefers the theory that achieves a smaller encoded file size. This criterion is equivalent to the probability-based one, but has the advantage of being more tangible, since the quantities of interest are file lengths instead of probabilities.
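A sketch of the codelength restatement, using the same illustrative distributions and simulated trials as above; the measurement precision DX is an assumed discretization needed to turn continuous densities into probabilities of recorded outcomes.

```python
import math
import random

def normal_pdf(x, mu=2.0, sigma=0.3):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def uniform_pdf(x, lo=1.0, hi=30.0):
    return 1.0 / (hi - lo) if lo < x < hi else 1e-300   # avoid log(0) for out-of-range data

def codelength_bits(p):
    """Optimal codelength in bits for an outcome of probability p: L = -log2 P."""
    return -math.log2(p)

DX = 0.001   # assumed measurement precision in seconds, turning densities into probabilities
random.seed(0)
trials = [random.gauss(2.0, 0.3) for _ in range(50)]    # simulated flight times

newton_bits = sum(codelength_bits(normal_pdf(t) * DX) for t in trials)
shaman_bits = sum(codelength_bits(uniform_pdf(t) * DX) for t in trials)
print(newton_bits < shaman_bits)   # the better theory writes the smaller encoded file
```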

\n

Episode II: The Dead Experimentalist

\n

Sophie is a theoretical physicist and, upon taking up her position as assistant professor, began a collaboration with a brilliant experimental physicist who had been working at the university for some time. The experimentalist had previously completed the development of an advanced apparatus that allowed the investigation of an exotic new kind of quantum phenomenon. Using data obtained from the new system, Sophie made rapid progress in developing a mathematical theory of the phenomenon. Tragically, just before Sophie was able to complete her theory, the experimentalist was killed in a laboratory explosion that also destroyed the special apparatus. After grieving for a couple of months, Sophie decided that the best way to honor her friend's memory would be to bring the research they had been working on to a successful conclusion.

\n

Unfortunately, there is a critical problem with Sophie's plan. The experimental apparatus was extremely complex, and Sophie's late partner was the only person in the world who knew how to use it. He had run many trials of the system before his death, so Sophie had a quite large quantity of data. But she had no way of generating any new data. Thus, no matter how beautiful and perfect her theory might be, she had no way of testing it by making predictions.

\n

One day while thinking about the problem Sophie recalls the incident with the shaman. She remembers the point she had made for the benefit of the software engineers, about how a scientific theory could be used to compress a real world data set to a very small size. Inspired, she decides to apply the data compression principle as a way of testing her theory. She immediately returns to her office and spends the next several weeks writing Matlab code, converting her theory into a compression algorithm. The resulting compressor is highly successful: it shrinks the corpus of experimental data from an initial size of 8.7e11 bits to an encoded size of 3.3e9 bits. Satisfied, Sophie writes up the theory, and submits it to a well-known physics journal.

\n

The journal editors like the theory, but are a bit skeptical of the compression based method for testing the theory. Sophie argues that if the theory becomes widely known, one of the other experts in the field will develop a similar apparatus, which can then be used to test the theory in the traditional way. She also offers to release the experimental data, so that other researchers can test alternative theories using the same compression principle. Finally she promises to release the source code of her program, to allow external verification of the compression result. These arguments finally convince the journal editors to accept the paper.

\n

Episode III: The Upstart Theory

\n

After all the mathematics, software development, prose revisions, and persuasion necessary to complete her theory and have the paper accepted, Sophie decides to reward herself by living the good life for a while. She is confident that her theory is essentially correct, and will eventually be recognized as correct by her colleagues. So she spends her time reading novels and hanging out in coffee shops with her friends.

\n

A couple of months later, however, she receives an unpleasant shock in the form of an email from a colleague which is phrased in consolatory language, but does not contain any clue as to why such language might be in order. After some investigation she finds out that a new paper has been published about the same quantum phenomenon of interest to Sophie. The paper proposes an alternative theory of the phenomenon which bears no resemblance whatever to Sophie's. Furthermore, the paper reports a better compression rate than was achieved by Sophie, on the database that she released.

\n

Sophie reads the new paper and quickly realizes that it is worthless. The theory depends on the introduction of a large number of additional parameters, the values of which must be obtained from the data itself. In fact, a substantial portion of the paper involves a description of a statistical algorithm that estimates optimal parameter values from the data. In spite of these aesthetic flaws, she finds that many of her colleagues are quite taken with the new paper and some consider it to be the \"next big thing\".

\n

Sophie sends a message to the journal editors describing in detail what she sees as the many flaws of the upstart paper. She emphasizes the aesthetic weakness of the new theory, which requires tens of thousands of new parameters. The editors express sympathy, but point out that the new theory outperforms Sophie's theory using the performance metric she herself proposed. The beauty of a theory is important, but its correctness is ultimately more important.

\n

Somewhat discouraged, Sophie sends a polite email to the authors of the new paper, congratulating them on their result and asking to see their source code. Their response, which arrives a week later, contains a vague excuse about how the source code is not properly documented and relies on proprietary third party libraries. Annoyed, Sophie contacts the journal editors again and asks them for the program they used to verify the compression result. They reply with a link to a binary version of the program.

\n

When Sophie clicks on the link to download the program, she is annoyed to find it has a size of 800 megabytes. But her annoyance is quickly transformed into enlightenment, as she realizes what happened, and that her previous philosophy contained a serious flaw. The upstart theory is not better than hers; it has only succeeded in reducing the size of the encoded data by dramatically increasing the size of the compressor. Indeed, when dealing with specialized compressors, the distinction between \"program\" and \"encoded data\" becomes almost irrelevant. The critical number is not the size of the compressed file, but the net size of the encoded data plus the compressor itself.

\n

Sophie writes a response to the new paper which describes the refined compression rate principle. She begins the paper by reiterating the unfortunate circumstances which forced her to appeal to the principle, and expressing the hope that someday an experimental group will rebuild the apparatus developed by her late partner, so that the experimental predictions made by the two theories can be properly tested. Until that day arrives, standard scientific practice does not permit a decisive declaration of theoretical success. But surely there is some theoretical statement that can be made in the meantime, given the large amount of data available. Sophie's proposal is that the goal should be to find the theory that has the highest probability of predicting a new data set, when it can finally be obtained. If the theories are very simple in comparison to the data being modeled, then the size of the encoded data file is a good way of choosing the best theory. But if the theories are complex, then there is a risk of overfitting the data. To guard against overfitting, complex theories must be penalized; a simple way to do this is to take into account the codelength required for the compressor itself. The length of Sophie's compressor was negligible, so the net score of her theory is just the codelength of the encoded data file: 3.3e9 bits. The rival theory achieved a smaller size of 2.1e9 for the encoded data file, but required a compressor of 6.7e9 bits to do so, giving a total score of 8.8e9 bits. Since Sophie's net score is lower, her theory should be preferred.
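A tiny sketch of the refined criterion using the numbers quoted in the story; the helper function is of course not part of the post.

```python
def net_score(encoded_bits, compressor_bits):
    """Refined compression-rate criterion: encoded data plus the compressor itself."""
    return encoded_bits + compressor_bits

sophie = net_score(3.3e9, 0.0)       # her compressor's own length is negligible
upstart = net_score(2.1e9, 6.7e9)    # a smaller encoding bought with a huge compressor
print(sophie, upstart, sophie < upstart)   # 3.3e9 vs 8.8e9: Sophie's theory wins
```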

\n

----

\n

So, LWers, what do you think? For the present, let's leave aside questions like why this might be relevant for AI, and focus on whether or not the method is a legitimate restatement of the traditional method. If you were a physicist observing the dispute and trying to decide which theory to prefer, would you believe Sophie's theory, her rivals' theory, or neither? Some plausible objections are:

\n\n

 

\n

(Originality disclaimer: this post, by itself, is not Highly Original. It is heavily influenced by Eliezer's articles linked to above, as well as the ideas of Minimum Description Length and Kolmogorov Complexity, and especially an article by Matt Mahoney providing a rationale for a large text compression benchmark. The new ideas mostly involve implications of the above method, which will be discussed in later posts.)

" } }, { "_id": "FcgBuuK3i7RkATRuK", "title": "Another way to look at consciousness", "pageUrl": "https://www.lesswrong.com/posts/FcgBuuK3i7RkATRuK/another-way-to-look-at-consciousness", "postedAt": "2010-05-20T13:58:04.856Z", "baseScore": 2, "voteCount": 10, "commentCount": 18, "url": null, "contents": { "documentId": "FcgBuuK3i7RkATRuK", "html": "

 

\n

Edit: First paragraph removed and small changes made to the rest.

\n

 

\n

I am putting forth a hypothesis about the nature of consciousness. First I will have to tell you how I am using certain words, because they are generally used in a number of ways.  'Brain' is a biological organ and it has a function, 'mind'. Mind is not an object; it is what brains do. It is not a property of the brain, let alone an emergent property, whatever that is. It is a function - so mind-is-to-brain as circulation-is-to-heart or digestion-is-to-intestine. There is one brain in any head and there is one mind being produced by that brain – not two (assuming sanity). The different parts of the cortex work together; the two hemispheres work together; the fore-brain structures work together with the mid-brain structures. The mind includes at least: perception, cognition, learning, intention, motor control, remembering, and, most importantly, the forming of a model of the environment and the person in that environment. The division between 'conscious mind' and 'unconscious mind' is meaningless. The brain does its mind-function, which maintains the model. Some but not all of this model is made globally accessible to all of the brain and remembered. That edit of the model is what we experience as conscious experience; in other words, it is our 'consciousness'. Consciousness is awareness, not thought. Consciousness is not separate but part of a single mind-function. Now that the words are straight, I can describe the hypothesis.

\n

 

\n

How is the model edited for consciousness?

\n

 

\n

There is an attention focus that is triggered by the on-going work of the mind and the events that happen in the environment. I may concentrate on some task so that I am not conscious of other parts of the model but a loud noise will cause my attention focus to shift to the source of noise in the model. The level of attention is variable from non-existent (coma) to intense. This level depends on the signals coming from the lower parts of the brain, through the thalamus, into the cortex. A common analogue for attention is a searchlight scanning the mind-model of reality. We cannot be aware of the whole of the model at any instant of time.

\n

 

\n

How is the model formed?

\n

 

\n

The fragments for the model are forced together into a best fit global model. The perception of the various senses, inborn constraints, our understanding of the world, our memory of the previous seconds, our expectations etc. together build a cohesive model by constructing a synchronous neural activity. Fragments that cannot be fit into the model are lost from it. This is done by an almost unbelievable number of parallel, slightly overlapping feedback loops, across the cortex and between the cortex and the mid-brain (especially the thalamus). The feedback loops are much more like patch boards than like digital computers. They rattle for an instant until they find a stable synchrony. There is nothing like step-wise processes at this stage of forming a global model.

\n

 

\n

How is the consciousness edit of the model used?

\n

There is little doubt that consciousness is useful because it is biologically expensive. Evolution will eliminate expensive functions that do not earn their keep. There are three very important processes that are carried on by the consciousness aspect of mind.

\n

 

\n

1) The working memory that holds the last few frames of consciousness is the source of episodic memory. There is an important link between consciousness and the formation of memory. We could treat working memory as part of consciousness or part of more permanent memory or even the link between them. Consciousness is in effect 'the leading edge of memory'. If there is no conscious experience of something, then there is no memory of it.

\n

 

\n

2) The working memory allows some cognition and learning that needs to 'juggle' information. I cannot add two digits if I cannot retain one while I perceive the other. So some thought processes are going to be in the edited model so that they can be continued through the use of working memory. This does not constitute a conscious mind that is separate from an unconscious mind. It is only that some types of thinking register bits of their progress in our awareness so they can be retrieved later.

\n

 

\n

3) Consciousness does mild prediction and therefore can register errors in perception and motor control. It takes a fraction of a second to form the conscious experience of an event. But we do not live our lives a fraction of a second late. If information from time t is used to form a model of what the world will be like at t+x, with x being the time it takes to create the model and its conscious edited version, then we will seem to experience t+x at t+x. The difference between the model of t+x and what comes in via our senses at t+x is the actual error in our perception and motor control and is used to correct the system.
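A toy sketch of the mechanism described in this point, under the simplifying (and invented) assumptions that the world state is a one-dimensional position moving at constant velocity and that x is a fixed processing lag.

```python
LAG = 0.1  # x: seconds needed to build the conscious edit of the model

def predict_ahead(position, velocity, lag=LAG):
    """Predict where the world will be when the model finally becomes available."""
    return position + velocity * lag

def prediction_error(predicted, sensed):
    """Mismatch between the model of t+x and the senses at t+x; used to correct the system."""
    return sensed - predicted

# toy example: tracking a ball moving at 3 m/s
predicted = predict_ahead(position=1.0, velocity=3.0)   # model of t+x built from data at t
sensed = 1.33                                           # what actually arrives at t+x
print(prediction_error(predicted, sensed))              # small error drives correction
```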

\n

 

\n

These three functions seem ample to justify the metabolic expense of consciousness and rule out philosophical zombies. The functions also seem to rule out consciousness being uniquely human. 'If it quacks like a duck' logic applies to animal consciousness. If an animal appears to have a good memory of events, learns from its experiences, and has smooth motor control in complex changing situations, then it is hard to imagine how this happens without consciousness, including self-consciousness. There would, of course, be degrees of consciousness and variations in the aspects of environment/self that would be modeled by different animals.

\n

 

\n

My answers to some problems ahead of their being asked

\n

 

\n

Most readers of this site are comfortable with the idea of the map and the territory. This post is using a very similar (maybe the same) idea of reality and model of reality. There is nothing surprising about the difference between the physical tree in the garden and an element that stands for that tree in my model of reality. It is the same idea to think about the difference between the reality-now and the model-now. The difference between my physical leg and my model leg is not difficult. We need to extend that comfort to the difference between the reality-me and the model-me. Introspection gives us awareness of our model; it is not our reality-mind but our model-mind that we are turning our focus of attention on. There is a difference between reality-decisions and model-decisions. We live in our model and have absolutely, positively no direct knowledge of anything else – none ever.

\n

 

\n

I have given no evidence for the hypotheses here but for two years I have been collecting evidence on consciousness on my website, Thoughts on thoughts. My hypothesis is not that different from the one that Academian is giving in his series of posts and I do not mean mine to be in opposition to his, but to a large extent supportive. Treating consciousness as a sense is not that different from treating it as a selective awareness. There is no need to get hung up on the words or analogies we use.

\n

 

\n

I have side-stepped the 'hard question' of how and why red is experienced as red. I have the feeling that this is a 'wrong question' but I am not sure why. It is certainly not explained by the hypothesis I have given here. All I have to say about the hard question is: “Can you think of a better way to be aware of red than the one you have? Is there something more efficient or more vivid or more biologically functional?” In other words, “What is the alternative?” Even if you go all spiritual, that still does not explain the experience of red. Dualism does not answer the hard question either and I have not encountered any philosophy that does. If it is answered, I would put my money on a scientific, material answer.

\n

 

\n

I have not side-stepped the question of how consciousness is reduced to physics. The method is clear: reduce consciousness to biology and biology to physics/chemistry. We accept that biology is in principle reducible to physics/chemistry. We generally assume that the brain is understandable as a biological organ and so if we can assume that consciousness is a function of the brain, it is in principle reducible to physics.

\n

 

\n

 

\n

 

\n

 

" } }, { "_id": "EsDP9yKGQkozJEoZo", "title": "Summer vs Winter Strategies", "pageUrl": "https://www.lesswrong.com/posts/EsDP9yKGQkozJEoZo/summer-vs-winter-strategies", "postedAt": "2010-05-20T00:31:14.588Z", "baseScore": -5, "voteCount": 18, "commentCount": 18, "url": null, "contents": { "documentId": "EsDP9yKGQkozJEoZo", "html": "

Abstract: I have a hypothesis that there are two different general strategies for life that humans might switch between, predicated on the way general resource availability changes in a society. If availability is constant or increasing, one strategy pays off; if it predictably rises and then falls, another does better. These strategies would have been selected for at different times and environments in prehistory, but humans are mainly plastic in which strategy they adopt. Culture reinforces them and can create lags. For value-neutral purposes I will call them by seasons, the Summer strategy and the Winter strategy. The summer is for times of plenty and partying, and the winter for when resources regularly become scarcer and life becomes harsher. These strategies affect every part of society, from mating to the way people plan.

\n

\n

The above is an idea that seems to tie up a few loose threads I have been seeing around the place. I am mainly channelling Robin Hanson here, so some familiarity with him would be useful. I'd also recommend this paper on sexuality and character traits. And the red queen.

\n

Note: I don't have time to write a properly researched paper, so you are going to have to settle for a blog style post. So it is not front page material. But I would rather it got more coverage than in the open thread. If someone is enamoured with the idea, feel free to make a well written riff on this theme. I won't get offended.

\n

The summer strategy

\n

This is selected for by sexual selection. Women want to mate with attractive powerful men when they don't have to worry about the babies being provided for by that man. Attractive powerful men have to signal their attractiveness and power; they can use lots of resources to do so. This sexual strategies theory paper (http://www.psy.cmu.edu/~rakison/bussandschmitt.pdf) gives lots of good reasons why men may have been selected for what they call short term sexuality.

\n

It is characterised by:

\n

- Less planning needed. Credit cards. Significant Debt.

\n

- Near thinking

\n

- Extroversion

\n

- Babies without fathers are more likely to survive. Women gather resources by themselves if they lack male relatives. This leads to more promiscuous women and more promiscuous men as well, as social mores change. Short-term mating strategies do well. Breakdown of traditional monogamy.

\n

- More extroversion in men. Due to less need for planning ahead and gathering resources you can spend more time raising your status in the tribe for more mates and the chance of cuckolding other men.

\n

- associated with red/orange heat and warmth. Summer and harvest

\n

- risk taking

\n

The winter strategy

\n

In times and locations where resources change significantly, short term mating is not so successful. Short-term mating men cannot rely on there being sufficient resources to raise their kids. So this selects for providers. Common things in evolutionary history that might provide this pressure are the coming of harsh winters in northern climates and ice ages that pushed people out of land. These events reduced the amount of resources available and benefited people who prepared for them.  It is nature vs person selection pressure, rather than person vs person.

\n

It is characterised by:

\n

- More planning and preparation. Stockpiling resources. Savings. 

\n

- Far thinking

\n

- Mild Introversion

\n

- Babies without fathers unlikely to survive. Less promiscuous women. More interested in practical abilities of mate than beauty. This might be where the \"myth\" of women wanting a provider/gifts comes from. They did want a provider, of sorts, but still not a complete wuss. But now that we are in permanent summer, resource wise, strategies change. Long-term mating strategy is the norm for a winter strategist.

\n

- risk averse

\n

- social interaction more about keeping the group happy and on your side, rather than trying to be alpha. Although it can't hurt to be alpha.

\n

- association with blue. Cold/ice. Coming of winter; time to prepare.

\n

Some points of discussion

\n

Men are probably more naturally summer strategists. Women are more naturally winter strategists so might not be very good at knowing what they want when they are performing the summer strategy. This has been discussed in evolutionary theory as parental investment.

\n

Could this be an explanation for the protestant (northern European) work ethic and the success of Europe solving man vs nature problems? Due to harsher selection via more winters/ice ages?

\n

Intelligent winter strategists go on to be part of SIAI, Summer strategists go on to be entrepreneurs (epitome of risk taking). Winter strategists in this day and age (at least in the developed world) are more likely to be extreme or odd, as the moderates are likely to be over in the summer camp.

\n

I suspect that this is why there is some disconnect of view between the PUA advocates and some of the resident existential risk thinkers. Signalling you are a summer strategist is a very bad idea for a winter strategist.

\n

If the summer/winter divide has an element of truth, it doesn't look good for advocates of far thinking (greens, existential risk activists). As we get better and better at meeting our needs we will slip more into short term thinking and status contests (without genetic engineering at least).

" } }, { "_id": "qPYYjRxgHFX4wZ77c", "title": "Blame Theory", "pageUrl": "https://www.lesswrong.com/posts/qPYYjRxgHFX4wZ77c/blame-theory", "postedAt": "2010-05-19T21:49:14.867Z", "baseScore": 13, "voteCount": 14, "commentCount": 16, "url": null, "contents": { "documentId": "qPYYjRxgHFX4wZ77c", "html": "

EDIT: this post, like many other posts of mine, is wrong. See the comments by Yvain below. Maybe \"Regret Theory\" would've been a better title. But I'm keeping this as it is because I like having reminders of my mistakes.

\n

Platitudes notwithstanding, \"personal responsibility\" doesn't adequately summarize my relationship with the universe. Other people are to blame for some of my troubles because, as Shepherd from MW2 put it, \"what happens over here matters over there\". Let's talk about it.

\n

When other people's actions can affect you and vice versa, individual utility maximization stops working and you must use game theory. The Prisoner's Dilemma stresses our intuitions by making a mockery of personal responsibility: each player holds the power to screw up the other player's welfare which they don't care about. Economists call such things \"externalities\", political scientists talk of \"special interests\". You can press a button that gives you $10 but makes 10000 random people lose $0.01 each (due to environmental pollution or something like that), you care enough to vote for this proposal, other people don't care enough to vote against, haha democracy fail. 

\n

When the shit hits the fan in a multiplayer setting, we clearly need a theory for assigning blame in correct proportion. For example, what should we make of the democratic credo that people are responsible for the leaders they have? Exactly how responsible? How many people did I personally kill by voting for Hitler, and how is this responsibility shared between Hitler and me? The \"naive counterfactual\" answer goes like this: if I hadn't voted for Hitler, he'd still have been elected (since I wasn't the marginal deciding voter), therefore I'm blameless. Clearly this answer is not satisfactory. We need more sophisticated game theory concepts.

\n

First of all, we would do well to assume transferable utility. To understand why, consider Clippy. Clippy is perfectly willing to kill a million Armenians to gain one paperclip. We can blame him (her?) for it all day, but it's probably safe to say that Clippy's internal sense of guilt isn't denominated in Armenians. We must reach a mutual understanding by employing a common currency of guilt, which is just another way of saying \"convertible utils\". You feel guilty toward me = you owe me dough. Too bad, knowing you, you probably won't pay.

\n

Our second assumption goes like this: rational actions cannot be immoral. Given our knowledge of game theory, this sounds completely callous and bloodthirsty, but in the transferable utility case it's actually quite tame. You have no incentive to screw Bob over in PD if you'll be sharing the proceeds anyway. The standard procedure for sharing will be, of course, the Shapley value.

\n

This brings us to today's ultimate conclusion: Blame Theory. Imagine that instead of killing all those Gypsies, the evil leader and the stupid voters together sang kumbaya and built a city on a hill. The proceeds of that effort would be divided according to everyone's personal contributions using the standard Shapley construction (taking into account each group's counterfactual non-cooperation, of course). You, dear reader, would have gotten richer by two million dollars, instead of hiding in a bomb shelter while the ground shakes and your nephew is missing. And calculating the difference between your personal welfare in the perfect world where everyone cooperated, and the welfare you actually have right now given that everyone acted as they did, gives you the extent of your personal guilt. You can't push it away, it's yours. Sorry.
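For concreteness, here is a sketch of the Shapley construction for a toy three-player transferable-utility game; the characteristic function and the actual-welfare numbers are invented for illustration, and each player's blame is then their cooperative Shapley share minus what they actually ended up with.

```python
from itertools import permutations

def shapley_values(players, value):
    """Shapley value: average marginal contribution over all orderings of the players.
    `value` maps a frozenset of players to the worth of that coalition."""
    shares = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            shares[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    return {p: s / len(orderings) for p, s in shares.items()}

# Toy cooperative game: any two players together create 6 utils, all three create 12.
def worth(coalition):
    return {0: 0, 1: 0, 2: 6, 3: 12}[len(coalition)]

cooperative_shares = shapley_values(["leader", "voter_a", "voter_b"], worth)
actual_welfare = {"leader": 1.0, "voter_a": 0.5, "voter_b": 0.5}  # what the ruined world paid out
blame = {p: cooperative_shares[p] - actual_welfare[p] for p in cooperative_shares}
print(blame)  # each person's guilt: their missed share of the city on a hill
```

In this symmetric toy game every cooperative share comes out to 4 utils, so the blame is just 4 minus whatever each person actually got.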

\n

Don't know about you, but I'm genuinely scared by this result.

\n

On one hand, it agrees with intuition in all the right ways: the \"naive counterfactual\" voter still carries non-zero guilt because (s)he was part of the collective that elected a monster, and the little personal guilts add up exactly to the total missed opportunity of all society, and... But on the other hand, this procedure gives every one of us a new and unexpectedly harsh yardstick to measure ourselves by. It probably makes me equivalent to a serial murderer already. I've long had a voice in the back of my head telling me it was possible to do better, but Blame Theory brings the truth into sharp and painful focus. I'm not sure I wanted that when I set out to solve this particular problem...

" } }, { "_id": "p93pcxhvowhrdb6uC", "title": "Backchaining causes wishful thinking", "pageUrl": "https://www.lesswrong.com/posts/p93pcxhvowhrdb6uC/backchaining-causes-wishful-thinking", "postedAt": "2010-05-19T19:01:34.503Z", "baseScore": 23, "voteCount": 16, "commentCount": 18, "url": null, "contents": { "documentId": "p93pcxhvowhrdb6uC", "html": "

Wishful thinking - believing things that make you happy - may be a result of adapting an old cognitive mechanism to new content.

\n

Obvious, well-known stuff

\n

The world is a complicated place.  When we first arrive, we don't understand it at all; we can't even recognize objects or move our arms and legs reliably.  Gradually, we make sense of it by building categories of perceptions and objects and events and feelings that resemble each other.  Then, instead of processing every detail of a new situation, we just have to decide which category it's closest to, and what we do with things in that category.  Most, possibly all, categories can be built using unsupervised learning, just by noting statistical regularities and clustering.

\n

If we want to be more than finite-state automata, we also need to learn how to notice which things and events might be useful or dangerous, and make predictions, and form plans.  There are logic-based ways of doing this, and there are also statistical methods.  There's good evidence that the human dopaminergic system uses one of these statistical methods, temporal difference learning (TD).  TD is a backchaining method:  First it learns what state or action Gn-1 usually comes just before reaching a goal Gn, and then what Gn-2 usually comes just before Gn-1, etc.  Many other learning methods use backchaining, including backpropagation, bucket brigade, and spreading activation.  These learning methods need a label or signal, during or after some series of events, saying whether the result was good or bad.
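A minimal tabular TD(0) sketch of the backchaining described here: the value estimate leaks backwards from the goal Gn to Gn-1, then Gn-2, and so on. The chain of states, the reward, and the learning parameters are all illustrative.

```python
STATES = ["G-3", "G-2", "G-1", "GOAL"]
REWARD = {"GOAL": 1.0}           # the pleasure signal arrives only at the goal
ALPHA, GAMMA = 0.5, 0.9          # learning rate and discount factor

values = {s: 0.0 for s in STATES}

for _ in range(200):             # repeatedly walk the chain toward the goal
    for i in range(len(STATES) - 1):
        s, s_next = STATES[i], STATES[i + 1]
        r = REWARD.get(s_next, 0.0)
        # TD(0) update: nudge V(s) toward r + gamma * V(s_next)
        values[s] += ALPHA * (r + GAMMA * values[s_next] - values[s])

print(values)  # value propagates backwards: G-1 learns first, then G-2, then G-3
```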

\n

I don't know why we have consciousness, and I don't know what determines which kinds of learning require conscious attention.  For those that do, the signals produce some variety of pleasure or pain.  We learn to pay attention to things associated with pleasure or pain, and for planning, we may use TD to build something analogous to a Markov process (sorry, I found no good link; and Wikipedia's entry on Markov chain is not what you want) where, given a sequence of the previous n states or actions (A1, A2, ... An), the probability of taking action A is proportional to the expected (pleasure - pain) for the sequence (A1, ... An, A).  In short, we learn to do things that make us feel better.
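A sketch of the selection rule just described, assuming we already hold estimates of expected (pleasure - pain) for appending each candidate action to the recent history; the table of estimates is invented for illustration.

```python
import random

# expected (pleasure - pain) for taking action A after the recent history (A1, ..., An)
expected_net = {
    (("wake", "smell_food"), "eat"): 3.0,
    (("wake", "smell_food"), "ignore"): 0.5,
    (("wake", "smell_food"), "run"): 0.1,
}

def choose_action(history, candidates, table):
    """Pick an action with probability proportional to its expected (pleasure - pain),
    shifted so all weights are positive."""
    weights = [table[(history, a)] for a in candidates]
    shift = min(weights)
    weights = [w - shift + 1e-6 for w in weights]
    return random.choices(candidates, weights=weights, k=1)[0]

print(choose_action(("wake", "smell_food"), ["eat", "ignore", "run"], expected_net))
```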

\n

Less-obvious stuff

\n

Here's a key point which is overlooked (or specifically denied) by most AI architectures:  Believing is an action.  Building an inference chain is not just like constructing a plan; it's the same thing, probably done by the same algorithm.  Constructing a plan includes inferential steps, and inference often inserts action steps to make observations and reduce our uncertainty.

\n

Actions, including the \"believe\" action, have preconditions.  When building a plan, you need to find actions that achieve those preconditions.  You don't need to look for things that defeat them.  With actions, this isn't much of a problem, because actions are pretty reliable.  If you put a rock in the fire, you don't need to weigh the evidence for and against the proposition that the rock is now in the fire.  If you put a stick in a termite mound, it may or may not come out covered in termites.  You don't need to compute the odds that the stick was inserted correctly, or the expected number of termites; you pull it out and look at the stick.  If you can find things that cause it not to be covered in termites, such as being the wrong sort of stick, it's probably a simple enough cause that you can enumerate it in your preconditions for next time.

\n

You don't need to consider all the ways that your actions could be thwarted until you start doing adversarial planning, which can't happen until you've already started incorporating belief actions into your planning.  (A tiger needs to consider which ways a wildebeest might run to avoid it, but probably doesn't need to model the wildebeest's beliefs and use min-max - at least, not to any significant depth.  Some mammals do some adversarial planning and modelling of belief states; I wouldn't be surprised if squirrels avoid burying their nuts when other squirrels are looking.  But the domains and actors are simpler, so the process shouldn't break down as often as it does in humans.)

\n

When we evolved the ability to make extensive use of belief actions, we probably took our existing plan-construction mechanism, and added belief actions.  But an inference is a lot less certain than an action.  You're allowed to insert a \"believe\" act into your plan if you're able to find just one thing, belief or action, that plausibly satisfies its preconditions.  You're not required to spend any time looking for things that refute that belief.  Your mind doesn't know that beliefs are fundamentally different from actions, in that the truth-values of the propositions describing the expected effects of your possible actions are strongly, causally correlated with whether you execute the action; while the truth-values of your possible belief-actions are not, and can be made true or false by many other factors.

\n

You can string a long series of actions together into a plan.  If an action fails, you'll usually notice, and you can stop and retry or replan.  Similarly, you can string a long series of belief actions together, even if the probability of each one is only a little above .5, and your planning algorithm won't complain, because stringing a long series of actions together has worked pretty well in your evolutionary past.  But you don't usually get immediate feedback after believing something that tells you whether believing \"succeeded\" (deposited something in your mind that successfully matches the real world); so it doesn't work.

\n

The old way of backchaining, by just trying to satisfy preconditions, doesn't work well with our new mental content.  But we haven't evolved anything better yet.  If we had, chess would seem easy.

\n

Summary

\n

Wishful thinking is a state-space-reduction heuristic.  Your ancestors' minds searched for actions that would enable actions that would make them feel good.  Your mind, therefore, searches for beliefs that will enable beliefs that will make you feel good.  It doesn't search for beliefs that will refute them.

\n

(A forward-chaining planner wouldn't suffer this bias.  It probably wouldn't get anything done, either, as its search space would be vast.)

" } }, { "_id": "r9vpZQWQzrWFkuHx9", "title": "Physicalism: consciousness as the last sense", "pageUrl": "https://www.lesswrong.com/posts/r9vpZQWQzrWFkuHx9/physicalism-consciousness-as-the-last-sense", "postedAt": "2010-05-19T16:31:13.459Z", "baseScore": 25, "voteCount": 25, "commentCount": 15, "url": null, "contents": { "documentId": "r9vpZQWQzrWFkuHx9", "html": "\n

Follow-up to There just has to be something more, you know? and The two insights of materialism.

\n

I have alluded that one cause for the common reluctance to consider physicalism — in particular, that our minds can in principle be characterized entirely by physical states — is an asymmetry in how people perceive characterization.  This can be alleviated by analogy to how our external senses can supervene on each other, and how abstract manipulations of those senses using recording, playback, and editing technologies have made such characterizations useful and intuitive.

\n

We have numerous external senses, and at least one internal sense that people call \"thinking\" or \"consciousness\".  In part because you and I can point our external senses at the same objects, collaborative science has done a great job characterizing them in terms of each other.  The first thing is to realize the symmetry and non-triviality of this situation.

\n

First, at a personal level:  say you've never sensed a musical instrument in any way, and for the first time, in the dark, you hear a cello playing.  Then later, you see the actual cello.  You probably wouldn't immediately recognize these perceptions as being of the same physical object.  But watching and listening to the cello playing at the same time would certainly help, and physically intervening yourself to see that you can change the pitch of the note by placing your fingers on the strings would be a deal breaker:  you'd start thinking of that sound, that sight, and that tactile sense as all coming from one object \"cello\". 

\n

Before moving on, note how in these circumstances we don't conclude that \"only sight is real\" and that sound is merely a derivative of it, but simply that the two senses are related and can characterize each other, at least roughly speaking:  when you see a cello, you know what sort of sounds to expect, and conversely.

\n

Next, consider the more precise correspondence that collaborative science has provided, which follows a similar trend:  in the theory of characterizing sound as longitudinal compression waves, first came recording, then playback, and finally editing.  In fact, the first intelligible recording of a human voice, in 1860, was played back for the first time in 2008, using computers.  So, suppose it's 1810, well before the invention of the phonautograph, and you've just heard the first movement of Beethoven's 5th.  Then later, I unsuggestively show you a high-res version of this picture, with zooming capabilities:

\n

\"Image

\n

If you're really smart, and have a great memory, you might notice how the high and low amplitudes of that wave along the horizontal axis match up pretty well with your perception of how loud the music is at successive times.  And if you zoom in, you might notice that finer bumps on the wave match up pretty well with times you heard higher notes.  These connections would be much easier to make if you could watch and listen at the same time: that is, if you could see a phonautograph transcribing the sound of the concert to a written wave in real-time while you listen to it. 

\n

Even then, almost anyone in their right mind from 1810 would still be amazed that such an image, and the right interpretive mechanism — say, a computer with great software and really good headphones — is enough to perfectly reproduce the sound of that performance to two stationary ear canals, right down to the audible texture of horse-hair bows against catgut strings and ever-so-politely restless audience members.  They'd be even more amazed that Fourier analysis on a single wave can separate out the individual instruments to be listened to individually at a decent precision.
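A rough sketch of the kind of separation being described, using a plain discrete Fourier transform on a synthetic two-tone wave; real instrument separation is far harder, so this only illustrates that a single combined wave still contains the individual frequencies.

```python
import cmath
import math

RATE = 800                                  # samples per second (illustrative)
t = [i / RATE for i in range(RATE)]         # one second of signal
# a synthetic "duet": a 110 Hz tone and a 220 Hz tone added into one wave
wave = [math.sin(2 * math.pi * 110 * x) + 0.5 * math.sin(2 * math.pi * 220 * x) for x in t]

def dft_magnitude(signal, freq, rate=RATE):
    """Magnitude of one DFT bin: how strongly `freq` is present in the combined wave."""
    n = len(signal)
    s = sum(signal[k] * cmath.exp(-2j * math.pi * freq * k / rate) for k in range(n))
    return abs(s) / n

for f in (110, 220, 330):
    print(f, round(dft_magnitude(wave, f), 3))  # peaks at 110 and 220, nothing at 330
```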

\n

But our modern experiences with audio recording, editing, and playback — the fact that we can control sound by playing back and manipulating abstract representations of sound waves — deeply internalizes our model of sound (if not hearing) as a \"merely physical\" phenomenon.  Just as one person easily develops the notion of a single \"cello\" as they see, hear, and play cello at the same time, collaborative science has developed the notion of a single object or model called \"physical reality\" to have a clear meaning in terms of our external senses, because those are the ones we most easily collaborate with. 

\n

Now let's talk about \"consciousness\".  Consider that you have experienced an inner sense of \"consciousness\", and you may be lucky enough to have seen functional magnetic resonance images of your own brain, or even luckier to watch them while they happen.  These two senses, although they are as different as the sight and sound of a cello, are perceptions of the same object:  \"consciousness\" is a word for sensing your mind from the inside, i.e.  from actually being it, and \"brain\" is a word for the various ways of sensing it from the outside.  It's not surprising that this will probably be the last of our senses to be usefully interpreted scientifically, because it's apparently very complicated, and the hardest one to collaborate with:  although my eyes and yours can look at the same chair, our inner senses are always directed at different minds.

\n

Under Descartes' influence, the language I'm using here is somewhat suggestive of dualism in its distinction between physical phenomena and our perceptions of them, but in fact it seems that some of our sensations simply are physical phenomena.  Some combination of physical events — like air molecules hitting the eardrum, electro-chemical signals traversing the auditory nerves, and subsequent reactions in the brain — is the phenomenon of hearing.  I'm not saying your experience of hearing doesn't happen, but that it is the same phenomenon as that described by physics and biology texts using equations and pictures of the auditory system, just as the sight and sound of a cello are direct descriptions of the same object \"cello\".

\n

But when most people consider consciousness supervening on fundamental physics, they often end up in a state of mind that is better summarized as thinking \"pictures of dots and waves are all that exists\", without an explicit awareness that they're only thinking about the pictures.  And this just isn't very helpful.  A brain is not a picture of a brain any more than it is the experience of thinking; in fact, in stages of perception, it's much closer to the latter, since a picture has to pass through the retina and optic nerve before you experience it, but the experience of thinking is the operation of your cerebral network.

\n

Indeed, the right interpretative mechanism — for now, a living human body is the only one we've got — seems enough to produce for \"you\" the experience of \"thinking\" from specific configurations of cells, and hence particles, that can be represented (for the moment with low fidelity) by pictures like this:

\n

\"\"

\n

In our progressive understanding of mind, this is analogous to the simultaneous-watching-and-listening phase of learning: we can watch pictures of our brains while we \"listen\" to our own thoughts and feelings.  If at some point computers allow us to store, manipulate, and re-experience partial or complete mental states by directly interfacing with the brain, we'll be able to update our mind-is-brain model with the same sort of confidence as sound-is-longitudinal-compression-waves.  Imagine intentionally thinking through the process of solving a math problem while a computer \"records\" your thoughts, then using some kind of component analysis to remove the \"intention\" from the recording (which may not be a separable component, I'm just speculating), and then playing it back into your brain in real-time so that you experience solving the problem without trying to do it.

\n

Wouldn't you then begin to accept characterizing thoughts as brain states, like you characterize sounds as compression waves?  A practical understanding like that — the level of abstract manipulation — would be a deal breaker for me.  And naively, it seems no more alien than the complete supervenience of sound or smell on visual representations of it.

\n

This isn't an argument that the physicalist conception of consciousness is true, but simply that it's not absurd, and follows an existing trend of identifications made by personal experiences and science.  Then all you need is heaps and loads of existing evidence to update your non-zero prior belief to the point where you recognize it's got the best odds around.  If they ever happen, future mind-state editing technologies could make \"thoughts = brain states\" feel as natural as playing a cello without constantly parsing \"the sight of cello\" and \"the sound of cello\" as separate objects.

\n

Even as these abstract models become more precise and amenable than our intuitive introspective models, this won't ever mean thought \"isn't real\" or \"doesn't happen\", any more than sight, touch, or hearing \"doesn't happen\".  You can touch, look at, and listen to a cello, yielding very different experiences of the exact same object.  Likewise, if one demands a dualist perceptual description, when you think, you're \"introspecting at\" your brain.  Although that's a very different experience from looking at an fMRI of your brain, or sticking your finger into an anaesthetized surgical opening in your skull, if modern science is right, these are experiences of the exact same physical object:  one from the inside, two from the outside.

\n

In short, consciousness is a sense, and predictive and interventional science isn't about ignoring our senses...  it's about connecting them. Physics doesn't say you're not thinking, but it does connect and superveniently reduce what you experience as thinking to what you and everyone else can experience as looking at your brain.

\n

It's just that awesome.

" } }, { "_id": "5WEoM3RCxN2cQEdzY", "title": "What is Wei Dai's Updateless Decision Theory?", "pageUrl": "https://www.lesswrong.com/posts/5WEoM3RCxN2cQEdzY/what-is-wei-dai-s-updateless-decision-theory", "postedAt": "2010-05-19T10:16:51.228Z", "baseScore": 53, "voteCount": 47, "commentCount": 70, "url": null, "contents": { "documentId": "5WEoM3RCxN2cQEdzY", "html": "

As a newcomer to LessWrong, I quite often see references to 'UDT' or 'updateless decision theory'. The very name is like crack - I'm irresistibly compelled to find out what the fuss is about.

Wei Dai's post is certainly interesting, but it seemed to me (as a naive observer) that a fairly small 'mathematical signal' was in danger of being lost in a lot of AI-noise. Or to put it less confrontationally: I saw a simple 'lesson' on how to attack many of the problems that frequently get discussed here, which can easily be detached from the rest of the theory. Hence this short note, the purpose of which is to present and motivate UDT in the context of 'naive decision theory' (NDT), and to pre-empt what I think is a possible misunderstanding.

First, a quick review of the basic Bayesian decision-making recipe.

 

 

What is Naïve Decision Theory?

You take the prior and some empirical data and calculate a posterior by (i) working out the 'likelihood function' of the data and (ii) calculating prior times likelihood and renormalising. Then you calculate expected utilities for every possible action (with respect to this posterior) and maximize.
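To make the recipe concrete, here is a minimal Python sketch of it on a made-up two-hypothesis, two-action problem; the prior, likelihoods, and utilities are invented purely for illustration, and only the procedure itself is the point.

# The 'naive' recipe: prior times likelihood, renormalise, then pick the
# action with the highest expected utility under the posterior.
# All numbers below are invented for illustration.

prior = {'H0': 0.5, 'H1': 0.5}              # prior over hypotheses
likelihood = {'H0': 0.2, 'H1': 0.8}         # P(observed data | hypothesis)
utility = {                                 # U(action, hypothesis)
    ('act_a', 'H0'): 10, ('act_a', 'H1'): 0,
    ('act_b', 'H0'): 2,  ('act_b', 'H1'): 5,
}

unnormalised = {h: prior[h] * likelihood[h] for h in prior}
z = sum(unnormalised.values())
posterior = {h: p / z for h, p in unnormalised.items()}

def expected_utility(action):
    return sum(posterior[h] * utility[(action, h)] for h in posterior)

best_action = max(('act_a', 'act_b'), key=expected_utility)
print(posterior, best_action)               # {'H0': 0.2, 'H1': 0.8} 'act_b'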

Of course there's a lot more to conventional decision theory than this, but I think one can best get a handle on UDT by considering it as an alternative to the above procedure, in order to handle situations where some of its presuppositions fail.

(Note: NDT is especially 'naïve' in that it takes the existence of a 'likelihood function' for granted. Therefore, in decision problems where EDT and CDT diverge, one must 'dogmatically' choose between them at the outset just to obtain a problem that NDT regards as being well-defined.)

When does NDT fail?

The above procedure is extremely limited. Taking it exactly as stated, it only applies to games with a single player and a single opportunity to act at some stage in the game. The following diagram illustrates the kind of situation for which NDT is adequate:

This is a tree diagram (as opposed to a causal graph). The blue and orange boxes show 'information states', so that any player-instance within the blue box sees exactly the same 'data'. Hence, their strategy (whether pure or mixed) must be the same throughout the box. The branches on the right have been greyed out to depict the Bayesian 'updating' that the player following NDT would do upon seeing 'blue' rather than 'orange'--a branch is greyed out if and only if it fails to pass through a blue 'Player' node. Of course, the correct strategy will depend on the probabilities of each of Nature's possible actions, and on the utilities of each outcome, which have been omitted from the diagram. The probabilities of the outward branches from any given 'Nature' node are to be regarded as 'fixed at the outset'.

Now let's consider two generalisations:

  1. What if the player may have more than one opportunity to act during the game? In particular, what if the player is 'forgetful' in the sense that (i) information from 'earlier on' in the game may be 'forgotten', even such that (ii) the player may return to an information state several times during the same branch.
  2. What if, in addition to freely-willed 'Player' nodes and random 'Nature' nodes, there is a third kind of node where the branch followed depends on the Player's strategy for a particular information state, regardless of whether that strategy has yet been executed. In other words, what if the universe contains 'telepathic robots' (whose behaviour is totally mechanical - they're not trying to maximize a utility function) that can see inside the Player's mind before they have acted?

It may be worth remarking that we haven't even considered the most obvious generalisation: The one where the game includes several 'freely-willed' Players, each with their own utility functions. However, UDT doesn't say much about this - UDT is intended purely as an approach to solving decision problems for a single 'Player', and to the extent that other 'Players' are included, they must be regarded as 'robots' (of the non-telepathic type) rather than intentional agents. In other words, when we consider other Players, we try to do the best we can from the 'Physical Stance' (i.e. try to divine what they will do from their 'source code' alone) rather than rising to the 'Intentional Stance' (i.e. put ourselves in their place with their goals and see what we think is rational).

Note: If a non-forgetful player has several opportunities to act then, as long as the game only contains Player and Nature nodes, the Player is able to calculate the relevant likelihood function (up to a constant of proportionality) from within any of their possible information states. Therefore, they can solve the decision problem recursively using NDT, working backwards from the end (as long as the game is guaranteed to end after a finite number of moves.) If, in addition to this, the utility function is 'separable' (e.g. a sum of utilities 'earned' at each move) then things are even easier: each information state gives us a separate NDT problem, which can be solved independently of the others. Therefore, unless the player is forgetful, the 'naïve' approach is capable of dealing with generalisation 1.
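To make the recursive procedure in that note concrete, here is a sketch of plain backward induction over a finite tree of Player and Nature nodes; the toy tree at the bottom is invented, and the sketch covers only the non-forgetful case where each Player node is its own information state.

# Backward induction for a finite game containing only Player and Nature
# nodes (the non-forgetful case described in the note).  The toy tree is
# invented for illustration.

def solve(node):
    # Returns (expected utility, dict of choices made at Player nodes).
    kind = node['kind']
    if kind == 'leaf':
        return node['utility'], {}
    if kind == 'nature':
        # Nature nodes average over outcomes with probabilities fixed at the outset.
        value, plan = 0.0, {}
        for prob, child in node['children']:
            child_value, child_plan = solve(child)
            value += prob * child_value
            plan.update(child_plan)
        return value, plan
    if kind == 'player':
        # Player nodes pick the child with the highest expected utility.
        best_value, best_label, best_plan = None, None, {}
        for label, child in node['children']:
            child_value, child_plan = solve(child)
            if best_value is None or child_value > best_value:
                best_value, best_label, best_plan = child_value, label, child_plan
        plan = dict(best_plan)
        plan[node['name']] = best_label
        return best_value, plan
    raise ValueError(kind)

toy_tree = {
    'kind': 'nature',
    'children': [
        (0.5, {'kind': 'player', 'name': 'after-heads', 'children': [
            ('act',  {'kind': 'leaf', 'utility': 1.0}),
            ('wait', {'kind': 'leaf', 'utility': 0.0}),
        ]}),
        (0.5, {'kind': 'player', 'name': 'after-tails', 'children': [
            ('act',  {'kind': 'leaf', 'utility': -2.0}),
            ('wait', {'kind': 'leaf', 'utility': 0.5}),
        ]}),
    ],
}

print(solve(toy_tree))   # (0.75, {'after-heads': 'act', 'after-tails': 'wait'})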

Here are two familiar examples of generalisation 1 (ii):

Note: The Sleeping Beauty problem is usually presented as a question about probabilities (\"what is the Player's subjective probability that the coin toss was heads?\") rather than utilities, although for no particularly good reason the above diagram depicts a decision problem. Another point of interest is that the Absent-Minded Driver contains an extra ingredient not present in the SB problem: the player's actions affect how many player-instances there are in a branch.

Now a trio of notorious problems exemplifying generalisation 2:

 

How Does UDT Deal With These Problems?

The essence of UDT is extremely simple: We give up the idea of 'conditioning on the blue box' (doing Bayesian reasoning to obtain a posterior distribution etc) and instead just choose the action (or more generally, the probability distribution over actions) that will maximize the unconditional expected utility.
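As a small illustration of choosing the distribution over actions that maximises unconditional expected utility, here is a sketch for the Absent-Minded Driver; the payoffs used (0 for exiting at the first junction, 4 for exiting at the second, 1 for continuing past both) are the standard textbook numbers for that problem, assumed here rather than read off this post's diagrams.

# UDT on the Absent-Minded Driver: no conditioning on 'which junction am I
# at?', just a search over mixed strategies for the one that maximises
# unconditional expected utility.  The payoffs (0 / 4 / 1) are the standard
# version of the problem and are an assumption, not taken from this post.

def expected_utility(p_continue):
    p = p_continue
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1

best_p = max((i / 1000 for i in range(1001)), key=expected_utility)
print(best_p, expected_utility(best_p))   # about 2/3, giving expected utility 4/3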

So, UDT:

 

 

Is that it? (Doesn't that give the wrong answer to the Smoking Lesion problem?)

Yes, that's all there is to it.

Prima facie, the tree diagram for the Smoking Lesion would seem to be identical to my diagram of Newcomb's Problem (except that the connection between Omega's action and the Player's action would have to be probabilistic), but let's look a little closer:

Wei Dai imagines the Player's action to be computed by a subroutine called S, and although other subroutines are free to inspect the source code of S, and try to 'simulate' it, ultimately 'we' the decision-maker have control over S's source code. In Newcomb's problem, Omega's activities are not supposed to have any influence on the Player's source code. However, in the Smoking Lesion problem, the presence of a 'lesion' is somehow supposed to cause Players to choose to smoke (without altering their utility function), which can only mean that in some sense the Player's source code is 'partially written' before the Player can exercise any control over it. However, UDT wants to 'wipe the slate clean' and delete whatever half-written nonsense is there before deciding what code to write.

Ultimately this means that when UDT encounters the Smoking Lesion, it simply throws away the supposed correlation between the lesion and the decision and acts as though that were never a part of the problem. So the appropriate tree diagram for the Smoking Lesion problem would have a Nature node at the bottom rather than an Omega node, and so UDT would advise smoking.

Why Is It Rational To Act In The Way UDT Prescribes?

UDT arises from the philosophical viewpoint that says things like

  1. There is no such thing as the 'objective present moment'.
  2. There is no such thing as 'persisting subjective identity'.
  3. There is no difference in principle between me and a functionally identical automaton.
  4. When a random event takes place, our perception of a single definite outcome is as much an illusion of perspective as the 'objective present'--in reality all outcomes occur, but in 'parallel universes'.

If you take the above seriously then you're forced to conclude that a game containing an Omega node 'linked' to a Player node in the manner above is isomorphic (for the purposes of decision theory) to the game in which that Omega node is really a Player node belonging to the same information state. In other words, 'Counterfactual Mugging' is actually isomorphic to:

This latter version is much less of a headache to think about! Similarly, we can simplify and solve The Absent-Minded Driver by noting that it is isomorphic to the following, which can easily be solved:

Even more interesting is the fact that the Absent-Minded Driver turns out to be isomorphic to (a probabilistic variant of) Parfit's Hitchhiker (if we interchange the Omega and Player nodes in the above diagram).

 

Addendum: Do Questions About Subjective Probability Have Answers Irrespective Of One's Decision Theory And Utility Function?

In the short time I've been here, I have seen several people arguing that the answer is 'no'. I want to say that the answer is 'yes' but with a caveat:

We have puzzles like the Absent-Minded Driver (original version) where the player's strategy for a particular information state affects the probability of that information state recurring. It's clear that in such cases, we may be unable to assign a probability to a particular event until the player settles on a particular strategy. However, once the player's strategy is 'set in stone', then I want to argue that regardless of the utility function, questions about the probability of a given player-instance do in fact have canonical answers:

Let's suppose that each player-instance is granted a uniform random number in the set [0,1]. In a sense this was already implicit, given that we had no qualms about considering the possibility of a mixed strategy. However, let's suppose that each player-instance's random number is now regarded as part of its 'information'. When a player sees (i) that she is somewhere within the 'blue rectangle', and (ii) that her random number is α, then for all player-instances P within the rectangle, she can calculate the probability (or rather density) of the event \"P's random number is α\" and thereby obtain a conditional probability distribution over player-instances within the rectangle.

Notice that this procedure is entirely independent of decision theory (again, provided that the Player's strategy has been fixed).

In the context of the Sleeping-Beauty problem (much discussed of late) the above recipe is equivalent to asserting that (a) whenever Sleeping Beauty is woken, this takes place at a uniformly distributed time between 8am and 9am and (b) there is a clock on the wall. So whenever SB awakes at time α, she learns the information \"α is one of the times at which I have been woken\". A short exercise in probability theory suffices to show that SB must now calculate 1/3 probabilities for each of (Heads, Monday), (Tails, Monday) and (Tails, Tuesday) [which I think is fairly interesting given that the latter two are, as far as the prior is concerned, the very same event].

One can get a flavour of it by considering a much simpler variation: Let α and β be 'random names' for Monday and Tuesday, in the sense that with probability 1/2, (α, β) = (\"Monday\", \"Tuesday\") and with probability 1/2, (α, β) = (\"Tuesday\", \"Monday\"). Suppose that SB's room lacks a clock but includes a special 'calendar' showing either α or β, but that SB doesn't know which symbol refers to which day.

Then we obtain the following diagram:

Nature's first decision determines the meaning of α and β, and its second is the 'coin flip' that inaugurates the Sleeping Beauty problem we know and love. There are now two information states, corresponding to SB's perception of α or β upon waking. Thus, if SB sees the calendar showing α (the orange state, let's say) it is clear that the conditional probabilities for the three possible awakenings must be split (1/3, 1/3, 1/3) as above (note that the two orange 'Tuesday' nodes correspond to the same awakening.)
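A quick Monte Carlo check of this calendar variant, under the counting rule described above (each awakening is weighted by the prior probability of the branch it sits on), looks something like the sketch below; the three α-awakenings should each come out near 1/3.

# Monte Carlo check of the alpha/beta calendar variant.  Each awakening at
# which the calendar shows alpha is tallied with the prior weight of its
# branch; the claim above is that the three possibilities each get 1/3.

import random
from collections import Counter

counts = Counter()
for _ in range(200_000):
    alpha_means = random.choice(['Monday', 'Tuesday'])   # which day 'alpha' names
    coin = random.choice(['Heads', 'Tails'])             # the usual SB coin flip
    awakenings = ['Monday'] if coin == 'Heads' else ['Monday', 'Tuesday']
    for day in awakenings:
        if day == alpha_means:                           # the calendar shows alpha
            counts[(coin, day)] += 1

total = sum(counts.values())
print({k: round(v / total, 3) for k, v in counts.items()})
# Roughly 1/3 each for (Heads, Monday), (Tails, Monday), (Tails, Tuesday).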

" } }, { "_id": "dLqFJaP2PjmztWNdX", "title": "Be a Visiting Fellow at the Singularity Institute", "pageUrl": "https://www.lesswrong.com/posts/dLqFJaP2PjmztWNdX/be-a-visiting-fellow-at-the-singularity-institute", "postedAt": "2010-05-19T08:00:54.020Z", "baseScore": 38, "voteCount": 31, "commentCount": 171, "url": null, "contents": { "documentId": "dLqFJaP2PjmztWNdX", "html": "

Now is the very last minute to apply for a Summer 2010 Visiting Fellowship.  If you’ve been interested in SIAI for a while, but haven’t quite managed to make contact -- or if you’re just looking for a good way to spend a week or more of your summer -- drop us a line.  See what an SIAI summer might do for you and the world. 

(SIAI’s Visiting Fellow program brings volunteers to SIAI for anywhere from a week to three months, to learn, teach, and collaborate.  Flights and room and board are covered.  We’ve been rolling since June of 2009, with good success.)

Apply because:

\n\n

Apply especially if:

\n\n

(You don’t need all of the above; some is fine.)

Don’t be intimidated -- SIAI contains most of the smartest people I’ve ever met, but we’re also a very open community.  Err on the side of sending in an application; then, at least we’ll know each other.  (Applications for fall and beyond are also welcome; we’re taking Fellows on a rolling basis.)

If you’d like a better idea of what SIAI is, and what we’re aimed at, check out:
1. SIAI's Brief Introduction;
2.  The Challenge projects;
3.  Our 2009 accomplishments;
4.  Videos from past Singularity Summits (the 2010 Summit will happen during this summer’s program, Aug 14-15 in SF; visiting Fellows will assist);
5.  Comments from our last Call for Visiting Fellows; and/or
6.  Bios of the 2009 Summer Fellows.

Or just drop me a line.  Our application process is informal -- just send me an email at anna at singinst dot org with: (1) a resume/c.v. or similar information; and (2) a few sentences on why you’re applying.  And we’ll figure out where to go from there.

Looking forward to hearing from you.

" } }, { "_id": "T4mCxhpzRGrTWtxcD", "title": "Multiple Choice", "pageUrl": "https://www.lesswrong.com/posts/T4mCxhpzRGrTWtxcD/multiple-choice", "postedAt": "2010-05-17T22:26:41.804Z", "baseScore": 11, "voteCount": 25, "commentCount": 35, "url": null, "contents": { "documentId": "T4mCxhpzRGrTWtxcD", "html": "

When we choose behavior, including verbal behavior, it's sometimes tempting to do what is most likely to be right without paying attention to how costly it is to be wrong in various ways or looking for a safer alternative.

\n

If you've taken a lot of standardized tests, you know that some of them penalize guessing and some don't.  That is, leaving a question blank might be better than getting a wrong answer, or they might have the same result.  If they're the same, of course you guess, because it can't hurt and may help.  If they take off points for wrong answers, then there's some optimal threshold at which a well-calibrated test-taker will answer.  For instance, the ability to rule out one of four choices on a one-point question where a wrong answer costs a quarter point means that you should guess from the remaining three - the expected point value of this guess is positive.  If you can rule out one of four choices and a wrong answer costs half a point, leave it blank.
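The arithmetic behind those two thresholds is just an expected-value comparison against the zero points you get for a blank; a quick sketch:

# Expected points from guessing uniformly among the remaining choices,
# versus 0 points for leaving the question blank.

def guess_ev(remaining_choices, points_if_right=1.0, penalty_if_wrong=0.25):
    p_right = 1.0 / remaining_choices
    return p_right * points_if_right - (1 - p_right) * penalty_if_wrong

print(guess_ev(3, penalty_if_wrong=0.25))  # +0.167: guessing beats a blank
print(guess_ev(3, penalty_if_wrong=0.5))   # 0.0: guessing gains nothing, leave it blank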

\n

If you have ever asked a woman who wasn't pregnant when the baby was due, you might have noticed that life penalizes guessing.

\n

If you're risk-neutral, you still can't just do whatever has the highest chance of being right; you must also consider the cost of being wrong.  You will probably win a bet that says a fair six-sided die will come up on a number greater than 2.  But you shouldn't buy this bet for a dollar if the payoff is only $1.10, even though that purchase can be summarized as \"you will probably gain ten cents\".  That bet is better than a similarly-priced, similarly-paid bet on the opposite outcome; but it's not good.
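For the die bet above, the expected value works out as follows: you probably net ten cents, but on average the purchase loses about 27 cents.

# The die bet: pay $1.00; with probability 4/6 the roll is greater than 2
# and pays $1.10 (a ten-cent gain), otherwise you get nothing.

p_win = 4 / 6
expected_value = p_win * 1.10 - 1.00
print(round(expected_value, 3))   # about -0.267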

\n

There are a few factors at work to make guessing tempting anyway:

\n\n

However, I maintain that we should refrain from guessing more often than we do by default.  This is especially relevant when choosing verbal behavior - we may remember to act according to expected utility, but rarely think to speak according to expected utility, and this is especially significant around sensitive topics where being careless with words can cause harm.

\n

Taking on each reason to guess one at a time:

\n

You can't always leave questions blank, and unlike on a written test, the \"blank\" condition is not always obvious.  The fact that sometimes there is no sane null action - it's typically not a good idea to stare vacantly at someone when they ask you a question, for instance - doesn't mean, however, that there is never a sane null action.  You can be pretty well immune to Dutch books simply by refusing to bet - this might cost gains when you don't have Dutch-bookable beliefs, but it will prevent loss.  It is worthwhile to train yourself to notice when it is possible to simply do nothing, especially in cases where you have a history of performing worse-than-null actions.  For instance, I react with panic when someone explains something to me and I miss a step in their inference.  I find I get better results if, instead of interrupting them and demanding clarification, I wait five seconds.  This period of time is often enough for the conversation to provide the context for me to understand on my own, and if it doesn't, at least it's not enough of a lag that either of us will have forgotten what was said.  But remembering that I can wait five seconds is hard.

\n

Guessing is inconsistently penalized, with sometimes hidden costs, so it's hard to extinguish the guessing response.  If you're prone to doing something in a certain situation, and doing that something doesn't immediately sock you in the face every single time, it will take far longer for you to learn not to do it - this goes for people as well as pets.  Both the immediate response and the subjective consistency are important, and a hidden cost contributes to neither.  However, smart people can rise to the challenge of reacting to inconsistent, non-immediate, concealed costs to their actions.  For instance, I'd be willing to bet that Less Wrong readers smoke less than the general population.  Observe the relative frequency with which guessing hurts or may hurt you, compared to not guessing, and make your plans based on that.

\n

Inaction can be harmful too, and there's a psychological sting to something bad happening because you stood there and did nothing, especially when you're familiar with the hazards of the bystander effect.  I do not advocate spending all day sitting on the sofa for fear of acting wrongly while your house collapses around your ears.  My point is that there are many situations where we guess and shouldn't, not that we should never guess, and that there is low-hanging fruit in reducing bad guessing.

\n

You're right more often than a regular person, but that doesn't mean you are right enough: life's not graded on a curve.  The question isn't, Do I have a better shot at getting this one right than the neighbor across the street? but, Do I have a good enough shot at getting this one right?  You can press your relative advantages when your opponents are people, but not when you're playing against the house.  (Additionally, you might have specific areas of comparative disadvantage even if you're overall better than the folks down the road.)

\n

Information is more valuable than it seems, and there is often a chance to try to improve one's odds rather than settling for what one starts with.  Laziness and impatience are both factors here: checking all the sources you could takes a long time and is harder than guessing.  But in high stakes situations, it's at least worth Googling, probably worth asking a couple people who might have expertise, maybe worth looking a bit harder for data on similar scenarios and recommended strategies for them.  Temporal urgency is more rarely the factor in discounting the value of information-gathering than is simply wanting the problem to be over with; but problems are not really over with if they are guessed at wrongly, only if you get them right.

" } }, { "_id": "mss2y5AFugkjS5Nwx", "title": "Test on AWS", "pageUrl": "https://www.lesswrong.com/posts/mss2y5AFugkjS5Nwx/test-on-aws", "postedAt": "2010-05-17T03:18:18.558Z", "baseScore": 1, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "mss2y5AFugkjS5Nwx", "html": "

This is a test article

" } }, { "_id": "KYZfkZfy3RNxbZoYo", "title": "Preface to a Proposal for a New Mode of Inquiry", "pageUrl": "https://www.lesswrong.com/posts/KYZfkZfy3RNxbZoYo/preface-to-a-proposal-for-a-new-mode-of-inquiry", "postedAt": "2010-05-17T02:11:02.211Z", "baseScore": 2, "voteCount": 33, "commentCount": 85, "url": null, "contents": { "documentId": "KYZfkZfy3RNxbZoYo", "html": "

Summary: The problem of AI has turned out to be a lot harder than was originally thought. One hypothesis is that the obstacle is not a shortcoming of mathematics or theory, but limitations in the philosophy of science. This article is a preview of a series of posts that will describe how, by making a minor revision in our understanding of the scientific method, further progress can be achieved by establishing AI as an empirical science.

\n

 

\n

The field of artificial intelligence has been around for more than fifty years. If one takes an optimistic view of things, it's possible to believe that a lot of progress has been made. A chess program defeated the top-ranked human grandmaster. Robotic cars drove autonomously across 132 miles of Mojave desert. And Google seems to have made great strides in machine translation, apparently by feeding massive quantities of data to a statistical learning algorithm.

\n

But even as the field has advanced, the horizon has seemed to recede. In some sense the field's successes make its failures all the more conspicuous. The best chess programs are better than any human, but Go is still challenging for computers. Robotic cars can drive across the desert, but they're not ready to share the road with human drivers. And Google is pretty good at translating Spanish to English, but still produces howlers when translating Japanese to English. The failures indicate that, instead of being threads in a majestic general theory, the successes were just narrow, isolated solutions to problems that turned out to be easier than they originally appeared.

\n

\n

So what went wrong, and how to move forward? Most mainstream AI researchers are reluctant to provide clear answers to this question, so instead one must read between the lines of the literature. Every new paper in AI implicitly suggests that the research subfield of which it is a part will, if vigorously pursued, lead to dramatic progress towards intelligence. People who study reinforcement learning think the answer is to develop better versions of algorithms like Q-Learning and temporal difference (TD) learning. The researchers behind the IBM Blue Brain project think the answer is to conduct massive neural simulations. For some roboticists, the answer involves the idea of embodiment: since the purpose of the brain is to control the body, to understand intelligence one should build robots, put them in the real world, watch how they behave, notice the problems they encounter, and then try to solve those problems. Practitioners of computer vision believe that since the visual cortex takes up such a huge fraction of total brain volume, the best way to understand general intelligence is to first study vision.

\n

Now, I have some sympathy for the views mentioned above. If I had been thinking seriously about AI in the 80s, I would probably have gotten excited about the idea of reinforcement learning. But reinforcement learning is now basically an old idea, as is embodiment (this tradition can be traced back to the seminal papers by Rodney Brooks in the early 90s), and computer vision is almost as old as AI itself. If these avenues really led to some kind of amazing result, it probably would already have been found.

\n

So, dissatisfied with the ideas of my predecessors, I've taken some trouble to develop my own hypothesis regarding the question of how to move forward. And desperate times call for desperate measures: the long failure of AI to live up to its promises suggests that the obstacle is no small thing that can be solved merely by writing down a new algorithm or theorem. What I propose is nothing less than a complete reexamination of our answers to fundamental philosophical questions. What is a scientific theory? What is the real meaning of the scientific method (and why did it take so long for people to figure out the part about empirical verification)? How do we separate science from pseudoscience? What is Ockham's Razor really telling us? Why does physics work so amazingly, terrifyingly well, while fields like economics and nutrition stumble?

\n

Now, my answers to these fundamental questions aren't going to be radical. It all adds up to normality. No one who is up-to-date on topics like information theory, machine learning, and Bayesian statistics will be shocked by what I have to say here. But my answers are slightly different from the traditional ones. And by starting from a slightly different philosophical origin, and following the logical path as it opened up in front of me, I've reached a clearing in the conceptual woods that is bright, beautiful, and silent.

\n

Without getting too far ahead of myself, let me give you a bit of a preview of the ideas I'm going to discuss. One highly relevant issue is the role that other, more mature fields have had in shaping modern AI. One obvious influence comes from computer science, since presumably AI will eventually be built using software. But this fact appears irrelevant to me, and so the influence of computer science on AI seems like a disastrous historical accident. To suggest that computer science should be an important influence on AI is a bit like suggesting that woodworking should be an important influence on music, since most musical instruments are made out of wood. Another influence, that should in principle be healthy but in practice isn't, comes from physics. Unfortunately, for the most part, AI researchers have imitated only the superficial appearance of physics - its use of sophisticated mathematics - while ignoring its essential trait, which is its obsession with reality. In my view, AI can and must become a hard, empirical science, in which researchers propose, test, refine, and often discard theories of empirical reality. But theories of AI will not work like theories of physics. We'll see that AI can be considered, in some sense, the epistemological converse of physics. Physics works by using complex deductive reasoning (calculus, differential equations, group theory, etc) built on top of a minimalist inductive framework (the physical laws). Human intelligence, in contrast, is based on a complex inductive foundation, supplemented by minor deductive operations. In many ways, AI will come to resemble disciplines like botany, zoology, and cartography - fields in which the researchers' core methodological impulse is to go out into the world and write down what they see.

\n

An important aspect of my proposal will be to expand the definitions of the words \"scientific theory\" and \"scientific method\". A scientific theory, to me, is a computational tool that can be used to produce reliable predictions, and a scientific method is a process of obtaining good scientific theories. Botany and zoology make reliable predictions, so they must have scientific theories. In contrast to physics, however, they depend far less on the use of controlled experiments. The analogy to human learning is strong: humans achieve the ability to make reliable predictions without conducting controlled experiments. Typically, though, experimental sciences are considered to be far harder, more rigorous, and more quantitative than observational sciences. But I will propose a generalized version of the scientific method, which includes human learning as a special case, and shows how to make observational sciences just as hard, rigorous, and quantitative as physics.

\n

As a result of learning, humans achieve the ability to make fairly good predictions about some types of phenomena. It seems clear that a major component of that predictive power is the ability to transform raw sensory data into abstract perceptions. The photons fall on my eye in a certain pattern which I recognize as a doorknob, allowing me to predict that if I turn the knob, the door will open. So humans are amazingly talented at perception, and modestly good at prediction. Are there any other ingredients necessary for intelligence? My answer is: not really. In particular, in my view humans are terrible at planning. Our decision making algorithm is not much more than: invent a plan, try to predict what will happen based on that plan, and if the prediction seems good, implement the plan. All the \"magic\" really comes from the ability to make accurate predictions. So a major difference in my approach as opposed to traditional AI is that the emphasis is on prediction through learning and perception, as opposed to planning through logic and deduction.

\n

As a final point, I want to note that my proposal is not analogous to or in conflict with theories of brain function like deep belief networks, neural Darwinism, symbol systems, or hierarchical temporal memories. My proposal is like an interface: it specifies the input and the output, but not the implementation. It embodies an immense and multifaceted Question, to which I have no real answer. But, crucially, the Question comes with a rigorous evaluation procedure that allows one to compare candidate answers. Finding those answers will be an awesome challenge, and I hope I can convince some of you to work with me on that challenge.

\n

I am going to post an outline of my proposal over the next couple of weeks. I expect most of you will disagree with most of it, but I hope we can at least identify concretely the points at which our views diverge. I am very interested in feedback and criticism, both regarding material issues (since we reason to argue), and on issues of style and presentation. The ideas are not fundamentally difficult; if you can't understand what I'm saying, I will accept at least three quarters of the blame.

" } }, { "_id": "hzb2f4Hg7ycShoJc5", "title": "Explanatory normality fallacy", "pageUrl": "https://www.lesswrong.com/posts/hzb2f4Hg7ycShoJc5/explanatory-normality-fallacy", "postedAt": "2010-05-16T03:26:40.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "hzb2f4Hg7ycShoJc5", "html": "

Only a psychologist thinks to ask why people laugh at jokes.  – Someone (apparently)

\n

A common error in trying to understand human behavior is to think something is explained because it is so intuitively familiar to you. The wrong answer to, ‘I wonder why people laugh at jokes?’ is, ‘They are funny duh’. This is an unrealistically obvious example; it can be harder to see. Why do we like art? Because it’s aesthetically pleasing. Why does sex exist? For reproduction. These are a popular variety of mind projection fallacy.

\n

One thing that makes it much harder to see is emotional or moral overtones. A distinctive feature of morality is that it seems objectively true, so this isn’t surprising. e.g. if I say ‘I wonder why women evolved to be *so* upset about being raped?’ the wrong answer is ‘I can’t believe you just said that – rape is HORRIBLE!!!’. Why don’t humans let their disabled children die? Not ‘because they appreciate that that would be cruel’. Why do we want revenge when others have done us wrong? Not ‘because the others DESERVE IT!’ Why do humans hate incest? Not ‘because they aren’t completely depraved’.

\n

Another thing that makes this error happen more is when the explanation is somewhat complicated even without explaining the key issue. This makes it less obvious that you haven’t said anything. Why do we enjoy some styles and features of music particularly? Because we have advanced as a civilization so much that we appreciate them. Fill this out with some more about how civilization has progressed and what some famous people have said about musical progression through time and nobody will notice you didn’t really answer.

\n

Here’s a common combination of morality and apparent complication: Why do women hate being treated as instrumental to sexual pleasure? Because it objectifies them. Why do women hate being objectified? Because it makes people think of them as objects. Why don’t women like being thought of as objects? They get treated as objects. Why don’t women like being treated as objects? Objects are treated badly. Note that while we have now established that to be treated as instrumental to sexual pleasure is to be treated badly from a woman’s perspective, but since it was stated in the question that women hate it, this is hardly a step forward. If you feel very strongly that objectifying women is terrible, especially with some detail about how bad it is, each of these answers can seem explanatory.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "9s26RjJoeHjSjX6NR", "title": "The Math of When to Self-Improve", "pageUrl": "https://www.lesswrong.com/posts/9s26RjJoeHjSjX6NR/the-math-of-when-to-self-improve", "postedAt": "2010-05-15T20:35:37.449Z", "baseScore": 14, "voteCount": 16, "commentCount": 72, "url": null, "contents": { "documentId": "9s26RjJoeHjSjX6NR", "html": "

An economic analysis of how much time an individual or group should spend improving the way they do things as opposed to just doing them.  Requires understanding of integrals.

\n

An Explanation of Discount Rates

\n

Your annual discount rate for money is 1.05 if you're indifferent between receiving $1.00 now and $1.05 in a year.  Question to confirm understanding (requires insight and a calculator): If a person is indifferent between receiving $5.00 at the beginning of any 5-day period and $5.01 at the end of it, what is their annual discount rate?  Answer in rot13: Gurve naahny qvfpbhag engr vf nobhg bar cbvag bar svir frira.
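For the confirmation question above, the whole calculation is one line of compounding; it reproduces the rot13 answer.

# $5.00 now vs. $5.01 five days later implies a 5-day growth factor of
# 5.01 / 5.00; compounding 365/5 of those periods gives the annual rate.

five_day_factor = 5.01 / 5.00
annual_discount_rate = five_day_factor ** (365 / 5)
print(round(annual_discount_rate, 3))   # matches the rot13 answer above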

\n

If your discount rate is significantly different than prevailing interest rates, you can easily acquire value for yourself by investing or borrowing money.

\n

\n

An Explanation of Net Present Value

\n

Discount rates are really cool because they let you assign an instantaneous value to any income-generating asset.  For example, let's say I have a made-for-Adsense pop culture site that is bringing in $2000 a year, and someone has offered to buy it.  Normally figuring out the minimum price I'm willing to sell for would require some deliberation, but if I've already deliberated to discover my discount rate, I can compute an integral instead.

\n

To make this calculation reusable, I'm going to let a be the annual income generated by the site (in this case $2000) and r be my discount rate.  For the sake of calculation, we'll assume that the $2000 is distributed perfectly evenly throughout the year.

\n

\"\"

\n

Question to confirm understanding: If a person has a discount rate of 1.05, at what price would they be indifferent to selling the aforementioned splog?  Answer in rot13: Nobhg sbegl gubhfnaq avar uhaqerq avargl-gjb qbyynef.
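As a check on the integral above, evaluating ∫_0^∞ a·r^(-t) dt gives a/ln(r), and plugging in the splog's numbers reproduces the rot13 answer.

# Net present value of the $2000/year site at a discount rate of 1.05:
# NPV = integral from 0 to infinity of a * r**(-t) dt = a / ln(r).

import math

a = 2000.0   # annual income, spread evenly through the year
r = 1.05     # annual discount rate
print(round(a / math.log(r)))   # matches the rot13 answer above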

\n

When to Self-Improve

\n

This question of when to self-improve is complicated by the fact that self-improvement is not an either-or proposition.  It's possible to generate value as you're self-improving.  For example, you can imagine an independent software developer who's trying to choose between improving their tools and working on creating software that will turn a profit.  Although the developer's skills will not improve as quickly through the process of software creation as they would through tool upgrades, they still will improve.

\n

My proposed solution to this problem is for the developer to analyze themself as an income-generating asset.

\n

The first question is what the software developer's discount rate is.  We'll call that r.

\n

The second question is how much income they could produce annually if they started working on software creation full-time right now.  We'll call that amount f.  (If each software product they produce is itself an income-generating asset, then the developer will need to estimate the average net present value of each of those assets, along with the average time to completion of each, to estimate their own income.)

\n

Then, for each of the tool-upgrade and code-now approaches, the developer needs to estimate

\n\n

Given all these parameters, the developer's instantaneous annual value production in a given scenario will be

\n

\"\"

\n

You should try to figure out why the equation makes sense for yourself.  If you're having trouble, post in the comments.

\n

If this post goes over well, I'm thinking of writing a sequel called When to Self-Improve in Practice where I discuss practical application of the value-creation formula.  Feel free to comment or PM with ideas, questions, or a description of your situation in life so I can think of a new angle on how this sort of thinking might be applied.  (Exercise for the reader: Modify this thinking for a college student who's trying to decide between two summer internships and has one year left until graduation.)

\n

Edit: Making LaTeX work in comments manually is a royal pain.  Use this instead.

" } }, { "_id": "AXYnfMMcJRnzLgiqZ", "title": "Study: Encouraging Obedience Considered Harmful", "pageUrl": "https://www.lesswrong.com/posts/AXYnfMMcJRnzLgiqZ/study-encouraging-obedience-considered-harmful", "postedAt": "2010-05-14T18:11:56.265Z", "baseScore": 30, "voteCount": 29, "commentCount": 29, "url": null, "contents": { "documentId": "AXYnfMMcJRnzLgiqZ", "html": "

A while back I did a couple of posts on the care and feeding of young rationalists. Though it is not new, I recently found a truly excellent post on this topic, in Dale Mcgowan's blog, The Meming of Life. The post details a survey carried out on ordinary citizens of Hitler's Germany, searching for correlations between style of upbringing, and adult moral decisions. 

\n

\n
\n

Everyday Germans of the Nazi period are the focus of a fascinating study discussed in the PBB seminars and in the Ethics chapter of Raising Freethinkers. For their book The Altruistic Personality, researchers Samuel and Pearl Oliner conducted over 700 interviews with survivors of Nazi-occupied Europe. Included were both “rescuers” (those who actively rescued victims of persecution) and “non-rescuers” (those who were either passive in the face of the persecution or actively involved in it). The study revealed interesting differences in the upbringing of the two groups — specifically the language and practices that parents used to teach their values.

\n

Non-rescuers were 21 times more likely than rescuers to have been raised in families that emphasized obedience—being given rules that were to be followed without question—while rescuers were over three times more likely than non-rescuers to identify “reasoning” as an element of their moral education. “Explained,” the authors said, is the single most common word used by rescuers in describing their parents’ ways of talking about rules and ethical ideas.

\n
\n
Not Just Because I Said So
\n
For anyone interested in rational and ethical upbringing, I really cannot recommend  Meming of Life  strongly enough.
\n

 

" } }, { "_id": "aTtf9iERDoptPD33j", "title": "The Social Coprocessor Model", "pageUrl": "https://www.lesswrong.com/posts/aTtf9iERDoptPD33j/the-social-coprocessor-model", "postedAt": "2010-05-14T17:10:15.475Z", "baseScore": 33, "voteCount": 37, "commentCount": 625, "url": null, "contents": { "documentId": "aTtf9iERDoptPD33j", "html": "

Followup to: Do you have High-Functioning Asperger's Syndrome?

\n

LW reader Madbadger uses the metaphor of a GPU and a CPU in a desktop system to think about people with Asperger's Syndrome: general intelligence is like a CPU, being universal but only mediocre at any particular task, whereas the \"social coprocessor\" brainware in a Neurotypical brain is like a GPU: highly specialized but great at what it does. Neurotypical people are like computers with measly Pentium IV processors, but expensive Radeon HD 4890 GPUs. A High-functioning AS person is an Intel Core i7 Extreme Edition - with on-board graphics!

\n

This analogy also covers the spectrum view of social/empathic abilities, you can think about having a weaker social coprocessor than average if you have some of the tendencies of AS but not others. You can even think of your score on the AQ Test as being like the Tom's Hardware Rating of your Coprocessor. (Lower numbers are better!).

\n

\n

If you lack that powerful social coprocessor, what can you do? Well, you'll have to run your social interactions \"in software\", i.e. explicitly reason through the complex human social game that most people play without ever really understanding. There are several tricks that a High-functioning AS person can use in this situation:

\n\n\n\n\n\n\n

 

" } }, { "_id": "2YhBQsp4m4yYkjkSE", "title": "Bay Area Lesswrong Meet up Sunday May 16", "pageUrl": "https://www.lesswrong.com/posts/2YhBQsp4m4yYkjkSE/bay-area-lesswrong-meet-up-sunday-may-16", "postedAt": "2010-05-14T05:37:46.850Z", "baseScore": 4, "voteCount": 5, "commentCount": 15, "url": null, "contents": { "documentId": "2YhBQsp4m4yYkjkSE", "html": "

No invitations to Benton House have been forthcoming for the last couple months, so I thought we might do one located in a restaurant somewhere.  I would suggest Holder's Country Inn, but if anyone else has any suggestions, I'm more than willing to change venue.  I would suggest a meeting time of 7, but that is also amenable to change.

\n

Edit:  We're meeting at the Inn.  I'll be out front at 7 with a sign saying Less Wrong.

" } }, { "_id": "Q37Yo9CWza2Knqnxe", "title": "Cambridge Less Wrong meetup this Sunday, May 16", "pageUrl": "https://www.lesswrong.com/posts/Q37Yo9CWza2Knqnxe/cambridge-less-wrong-meetup-this-sunday-may-16", "postedAt": "2010-05-14T04:34:00.146Z", "baseScore": 3, "voteCount": 5, "commentCount": 14, "url": null, "contents": { "documentId": "Q37Yo9CWza2Knqnxe", "html": "

Last month's Cambridge, Massachusetts meetup was a success, with six people in attendance and some very interesting discussions. Meetups will recur on the third Sunday of each month, so the next one is this week, at the same time and place: 4pm at the Clear Conscience Cafe at 581 Massachusetts Avenue Cambridge, MA, near the Central Square T station. Please comment if you plan to attend.

" } }, { "_id": "kjArXFinD3deRZNRu", "title": "Blue- and Yellow-Tinted Choices", "pageUrl": "https://www.lesswrong.com/posts/kjArXFinD3deRZNRu/blue-and-yellow-tinted-choices", "postedAt": "2010-05-13T22:35:42.109Z", "baseScore": 76, "voteCount": 67, "commentCount": 57, "url": null, "contents": { "documentId": "kjArXFinD3deRZNRu", "html": "
\n

A man comes to the rabbi and complains about his life: \"I have almost no money, my wife is a shrew, and we live in a small apartment with seven unruly kids. It's messy, it's noisy, it's smelly, and I don't want to live.\"
The rabbi says, \"Buy a goat.\"
\"What? I just told you there's hardly room for nine people, and it's messy as it is!\"
\"Look, you came for advice, so I'm giving you advice. Buy a goat and come back in a month.\"
In a month the man comes back and he is even more depressed: \"It's gotten worse! The filthy goat breaks everything, and it stinks and makes more noise than my wife and seven kids! What should I do?\"
The rabbi says, \"Sell the goat.\"
A few days later the man returns to the rabbi, beaming with happiness: \"Life is wonderful! We enjoy every minute of it now that there's no goat - only the nine of us. The kids are well-behaved, the wife is agreeable - and we even have some money!\"

\n
\n

 

\n

-- traditional Jewish joke

\n

 

\n

Related to: Anchoring and Adjustment

\n

 

\n

Biases are “cognitive illusions” that work on the same principle as optical illusions, and a knowledge of the latter can be profitably applied to the former. Take, for example, these two cubes (source: Lotto Lab, via Boing Boing):

\n

 

\n

\"Colored

\n

 

\n

The “blue” tiles on the top face of the left cube are the same color as the “yellow” tiles on the top face of the right cube; if you're skeptical you can prove it with the eyedropper tool in Photoshop (in which both shades come out a rather ugly gray).

\n

 

\n

The illusion works because visual perception is relative. Outdoor light on a sunny day can be ten thousand times greater than a fluorescently lit indoor room. As one psychology book put it: for a student reading this book outside, the black print will be objectively lighter than the white space will be for a student reading the book inside. Nevertheless, both students will perceive the white space as subjectively white and the black space as subjectively black, because the visual system returns to consciousness information about relative rather than absolute lightness. In the two cubes, the visual system takes the yellow or blue tint as a given and outputs to consciousness the colors of each pixel compared to that background.

\n

 

\n

So this optical illusion occurs when the brain judges quantities relative to their surroundings rather than based on some objective standard. What's the corresponding cognitive illusion?

\n

\n

 

\n

In Predictably Irrational (relatively recommended, even though the latter chapters sort of fail to live up to the ones mentioned here) Dan Ariely asks his students to evaluate (appropriately) three subscription plans to the Economist:

\n

 

\n

\"Economist

\n

 

\n

Ariely asked his subjects which plan they'd buy if they needed an Economist subscription. 84% wanted the combo plan, 16% wanted the web only plan, and no one wanted the print only plan. After all, the print plan cost exactly the same as the print + web plan, but the print + web plan was obviously better. Which raises the question: why even include a print-only plan? Isn't it something of a waste of space?

\n

 

\n

Actually, including the print-only plan turns out to be a very good business move for the Economist. Ariely removed the print-only plan from the choices. Now the options looked like this.

\n

 

\n

\"Economist

\n

 

\n

There shouldn't be any difference. After all, he'd only removed the plan no one chose, the plan no sane person would choose.

\n

 

\n

This time, 68% of students chose the web only plan and 32% the combo plan. That's a 52-percentage-point shift in preferences between the exact same options.

\n

 

\n

The rational way to make the decision is to compare the value of a print subscription to the Economist (as measured by the opportunity cost of that money) to the difference in cost between the web and combo subscriptions. But this would return the same answer in both of the above cases, so the students weren't doing it that way.

\n

 

\n

What it looks like the students were doing was perceiving relative value in the same way the eye perceives relative color. The ugly gray of the cube appeared blue when it was next to something yellow, and yellow when it was next to something blue. In the same way, the $125 cost of the combo subscription looks like good value next to a worse deal, and bad value next to a better deal.

\n

 

\n

When the $125 combo subscription was placed next to a $125 plan with fewer features (print only instead of print plus web) it looked like a very good deal – the equivalent of placing an ugly gray square next to something yellow to make it look blue. Take away the yellow, or the artificially bad deal, and it doesn't look nearly as attractive.

\n

 

\n

This is getting deep into Dark Arts territory, and according to Predictably Irrational, the opportunity to use these powers for evil has not gone unexploited. Retailers will deliberately include in their selection a super deluxe luxury model much fancier and more expensive than they expect anyone to ever want. The theory is that consumers are balancing a natural hedonism that tells them to get the best model possible against a commitment to financial prudence. So most consumers, however much they like television, will have enough good sense to avoid buying a $2000 TV. But if the retailer carries a $4000 super-TV, the $2000 TV suddenly doesn't look quite so bad.

\n

 

\n

The obvious next question is “How do I use this knowledge to trick hot girls or guys into going out with me?” Dan Ariely decided to run some experiments on his undergraduate class. He took photographs of sixty students, then asked other students to rate their attractiveness. Next, he grouped the photos into pairs of equally attractive students. And next, he went to Photoshop and made a slightly less attractive version of each student: a blemish here, an asymmetry there.

\n

 

\n

Finally, he went around campus, finding students and showing them three photographs and asking which person the student would like to go on a date with. Two of the photographs were from one pair of photos ranked equally attractive. The third was a version of one of the two, altered to make it less attractive. So, for example, he might have two people, Alice and Brenda, who had been ranked equally attractive, plus a Photoshopped ugly version of Brenda.

\n

 

\n

The students overwhelmingly (75%) chose the person with the ugly double (Brenda in the example above), even though the two non-Photoshopped faces were equally attractive. Ariely then went so far as to recommend in his book that for best effect, you should go to bars and clubs with a wingman who is similar to you but less attractive. Going with a random ugly person would accomplish nothing, but going with someone similar to but less attractive than you would put you into a reference class and then bump you up to the top of the reference class, just like in the previous face experiment.

\n

 

\n

Ariely puts these studies in a separate chapter from his studies on anchoring and adjustment (which are also very good) but it all seems like the same process to me: being more interested in the difference between two values than in the absolute magnitude of them. All that makes anchoring and adjustment so interesting is that the two values have nothing in common with one another.

\n

 

\n

This process also has applications to happiness set points, status seeking, morality, dieting, larger-scale purchasing behavior, and akrasia, which deserve a separate post.

" } }, { "_id": "qpEq8FW23mp7QipFP", "title": "Updating, part 1: When can you change your mind? The binary model", "pageUrl": "https://www.lesswrong.com/posts/qpEq8FW23mp7QipFP/updating-part-1-when-can-you-change-your-mind-the-binary", "postedAt": "2010-05-13T17:55:12.768Z", "baseScore": 14, "voteCount": 16, "commentCount": 156, "url": null, "contents": { "documentId": "qpEq8FW23mp7QipFP", "html": "

I was recently disturbed by my perception that, despite years of studying and debating probability problems, the LessWrong community as a whole has not markedly improved its ability to get the right answer on them.

\n

I had expected that people would read posts and comments by other people, and take special note of comments by people who had a prior history of being right, and thereby improve their own accuracy.

\n

But can that possibly work?  How can someone who isn't already highly-accurate, identify other people who are highly accurate?

\n

Aumann's agreement theorem (allegedly) says that Bayesians with the same priors agree.  But it doesn't say that doing so helps.  Under what circumstances does revising your opinions, by updating in response to people you consider reliable, actually improve your accuracy?

\n

To find out, I built a model of updating in response to the opinions of others.  It did, eventually, show that Bayesians improve their collective opinions by updating in response to the opinions of other Bayesians.  But this turns out not to depend on them satisfying the conditions of Aumann's theorem, or on doing Bayesian updating.  It depends only on a very simple condition, established at the start of the simulation.  Can you guess what it is?

\n

I'll write another post describing and explaining the results if this post receives a karma score over 10.

\n

\n

That's getting a bit ahead of ourselves, though.  This post models only non-Bayesians, and the results are very different.

\n

Here's the model:

\n\n

Algorithm:

\n

# Loop over T timesteps
For t = 0 to T-1 {

\n

# Loop over G people
For i = 0 to G-1 {

\n

# Loop over N problems
For v = 0 to N-1 {

\n

If (t == 0)

\n

# Special initialization for the first timestep
If (random in [0..1] < p_i) g_ivt := 1;  Else g_ivt := 0

\n

Else {

\n

# Product over all j of the probability that the answer to v is 1 given j's answer and estimated accuracy
m1 := ∏_j [ p_ij·g_jv(t-1) + (1-p_ij)·(1-g_jv(t-1)) ]

\n

# Product over all j of the probability that the answer to v is 0 given j's answer and estimated accuracy
m0 := ∏_j [ p_ij·(1-g_jv(t-1)) + (1-p_ij)·g_jv(t-1) ]

\n

p1 := m1 / (m0 + m1)                          # Normalize

\n

If (p1 > .5) g_ivt := 1;  Else  g_ivt := 0

\n

}

\n

}

\n

# Loop over G other people
For j = 0 to G-1

\n

# Compute person i's estimate of person j's accuracy
p_ij := { Σ_{s in [0..t]} Σ_{v in [s..N]} [ g_ivt·g_jvs + (1-g_ivt)·(1-g_jvs) ] } / N

\n

}

\n

}

\n

p1 is the probability that agent i assigns to problem v having the answer 1.  Each term p_ij·g_jv(t-1) + (1-p_ij)·(1-g_jv(t-1)) is the probability of problem v having answer 1 computed using agent j's beliefs, by adding either the probability that j is correct (if j believes it has answer 1), or the probability that j is wrong (if j believes it has answer 0).  Agent i assumes that everyone's opinions are independent, and multiplies all these probabilities together.  The result, m1, is very small when there are very many agents (m1 is on the order of .5^G), so it is normalized by computing a similar product m0 for the probability that v has answer 0, and setting p1 = m1 / (m0 + m1).

\n

The sum of sums to compute p_ij (i's opinion of j's accuracy) computes the fraction of problems, summed over all previous time periods, on which person j has agreed with person i's current opinions.  It sums over previous time periods because otherwise, p_ii = 1.  By summing over previous times, if person i ever changes its mind, that will decrease p_ii.  (The inner sum starts from s instead of 0 to accommodate an addition to the model that I'll make later, in which the true answer to problem t is revealed at the end of time t.  Problems whose answer is public knowledge should not be considered in the sum after the time they became public knowledge.)

\n

Now, what distribution should we use for the p_i?

\n

There is an infinite supply of problems.  Many are so simple that everyone gets them right; many are so hard or incomprehensible that everyone performs randomly on them; and there are many, such as the Monty Hall problem, that most people get wrong because of systematic bias in our thinking.  The range of population average performance p_ave on all possible problems thus falls within [0 .. 1].

\n

I chose to model person accuracy instead of problem difficulty.  I say \"instead of\", because you can use either person accuracy or problem difficulty to set p_ave. Since a critical part of what we're modeling is person i's estimate of person j's accuracy, person j should actually have an accuracy.  I didn't model problem difficulty partly because I assume we only talk about problems of a particular level of difficulty; partly because a person in this model can't distinguish between \"Most people disagree with me on this problem; therefore it is difficult\" and \"Most people disagree with me on this problem; therefore I was wrong about this problem\".

\n

Because I assume we talk mainly about high-entropy problems, I set p_ave = .5.  I do this by drawing p_i from [0 .. 1], with a normal distribution with a mean of .5, truncated at .05 and .95.  (I used a standard deviation of .15; this isn't important.)

\n

Because this distribution of p_i is symmetric around .5, there is no way to know whether you're living in the world where the right answer is always 1, or where the right answer is always 0.  This means there's no way, under this model, for a person to know whether they're a crackpot (usually wrong) or a genius (usually right).

\n

Note that these agents don't satisfy the preconditions for Aumann agreement, because they produce 0/1 decisions instead of probabilities, and because some agents are biased to perform worse than random.  It's worth studying non-Bayesian agents before moving on to a model satisfying the preconditions for the theorem, if only because there are so many of them in the real world.

\n

An important property of this model is that, if person i is highly accurate, and knows it, p_ii will approach 1, greatly reducing the chance that person i will change their mind about any problem.  Thus, the more accurate a person becomes, the less able they are to change their minds when they are wrong - and this is not an error.  It's a natural limit on the speed at which one can converge on truth.

\n

An obvious problem is that at t=0, person i will see that it always agrees with itself, and set p_ii = 1.  By induction, no one will ever change their mind.  (I consider this evidence for the model, rather than against it.)

\n

The question of how people ever change their mind is key to this whole study.  I use one of these two additions to the model to let people change their mind:

\n\n

This model is difficult to solve analytically, so I wrote a Perl script to simulate it.
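
\n

For readers who want to poke at the dynamics themselves, here is a minimal Python sketch of the model above (my own reconstruction, not the original Perl script): the true answer is arbitrarily fixed at 1 for every problem, the parameter values are illustrative, the agreement sum is normalised by the number of terms so that it stays in [0,1], and accuracy estimates are refreshed once per timestep. As noted above, with this initialisation p_ii hits 1 at t=0 and nobody ever changes their mind; the additions that let people change their mind are not reproduced here.

import random

T, G, N = 20, 30, 50       # timesteps, people, problems (illustrative values)
SIGMA = 0.15               # std dev of the accuracy distribution

def draw_accuracy():
    # p_i ~ Normal(0.5, SIGMA), truncated to [0.05, 0.95]
    while True:
        p = random.gauss(0.5, SIGMA)
        if 0.05 <= p <= 0.95:
            return p

p = [draw_accuracy() for _ in range(G)]
g = [[[0] * T for _ in range(N)] for _ in range(G)]   # g[i][v][t]: i's answer to problem v at time t
est = [[0.5] * G for _ in range(G)]                   # est[i][j]: i's estimate of j's accuracy

for t in range(T):
    for i in range(G):
        for v in range(N):
            if t == 0:
                # initial guess: answer 1 with probability p_i
                g[i][v][0] = 1 if random.random() < p[i] else 0
            else:
                m1 = m0 = 1.0
                for j in range(G):
                    pij, gj = est[i][j], g[j][v][t - 1]
                    m1 *= pij * gj + (1 - pij) * (1 - gj)
                    m0 *= pij * (1 - gj) + (1 - pij) * gj
                g[i][v][t] = 1 if m1 / (m0 + m1) > 0.5 else 0
    # re-estimate accuracies: fraction of past (time, problem) pairs on which
    # j's answer matches i's current opinion
    pairs = [(s, v) for s in range(t + 1) for v in range(s, N)]
    for i in range(G):
        for j in range(G):
            agree = sum(g[i][v][t] == g[j][v][s] for s, v in pairs)
            est[i][j] = agree / len(pairs)

score = sum(g[i][v][T - 1] for i in range(G) for v in range(N)) / (G * N)
print('fraction of final answers equal to 1 (taken here as the true answer):', score)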

\n" } }, { "_id": "pCJQMzvrYRkCbs4Tu", "title": "Aspergers Poll Results: LW is nerdier than the Math Olympiad?", "pageUrl": "https://www.lesswrong.com/posts/pCJQMzvrYRkCbs4Tu/aspergers-poll-results-lw-is-nerdier-than-the-math-olympiad", "postedAt": "2010-05-13T14:24:24.783Z", "baseScore": 19, "voteCount": 19, "commentCount": 43, "url": null, "contents": { "documentId": "pCJQMzvrYRkCbs4Tu", "html": "

Followup to: Do you have High Functioning Aspergers Syndrome?

\n

 

\n
\n

 

\n

EDIT: To combat nonresponse bias, I'd appreciate it if anyone who considered the poll and decided not to fill it in would go and do so now, but that people who haven't already seen the poll refrain from doing so. We might get some idea of which way the bias points by looking at the difference in results.

\n

This is your opportunity to help your community's social epistemology!

\n

 

\n
\n

 

\n

Since over 80 LW'ers were kind enough to fill out my survey on Aspergers, I thought I'd post the results.

\n

4 people said they had already been diagnosed with Aspergers  Syndrome, out of 82 responses. That's 5%, where the population incidence rate is thought to be 0.36%.  However the incidence rate is known to be larger than the diagnosis rate, as many AS cases (I don't know how many) go undiagnosed. An additional 4 people ticked the five diagnostic criteria I listed; if we count each of them as 1/2 a case, LW would have roughly 25 times the baseline AS rate.

\n

The Less Wrong mean AQ test score was 27, and only 5 people scored at or below 16, which is the population average score on this test. 21 people, or 26%, scored 32 or more on the AQ test, though this is only an indicator and does not mean that 26% of LW have Aspergers.

\n

To put the AQ test results in perspective, this paragraph from Wikipedia outlines what various groups got on average:

\n

The questionnaire was trialled on Cambridge University students, and a group of sixteen winners of the British Mathematical Olympiad, to determine whether there was a link between a talent for mathematical and scientific disciplines and traits associated with the autism spectrum. Mathematics, physical sciences and engineering students were found to score significantly higher, e.g. 21.8 on average for mathematicians and 21.4 for computer scientists. The average score for the British Mathematical Olympiad winners was 24. Of the students who scored 32 or more on the test, eleven agreed to be interviewed and seven of these were reported to meet the DSM-IV criteria for Asperger syndrome, although no formal diagnosis was made as they were not suffering any distress. The test was also taken by a group of subjects who had been diagnosed with autism or Asperger syndrome by a professional, the average score being 35 and 38 for males and females respectively.

\n

If we take 7/11 times the 26% of LW who scored 32+, we get 16%, which is somewhat higher than the 7-10% you might estimate from the number of people who said they have diagnoses. Note, though, that the 7 trial students who were found to meet the diagnostic criteria were not diagnosed, as their condition was not causing them \"distress\", indicating that for high-functioning AS adults, the incidence rate might be a lot higher than the diagnosis rate.
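
\n

For transparency, the two headline figures above are just a couple of lines of arithmetic (my own restatement of the numbers already quoted, nothing new):

respondents, diagnosed = 82, 4
print(f'diagnosed rate: {diagnosed / respondents:.1%}')   # about 5%

# 26% of respondents scored 32+; 7 of the 11 interviewed Cambridge students
# who scored 32+ met the diagnostic criteria
print(f'AQ-based estimate: {0.26 * 7 / 11:.1%}')          # 16.5%, i.e. the ~16% above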

\n

\n

What does this mean?

\n

Well, for one thing it means that Less Wrong is \"on the spectrum\", even if we're mostly not falling off the right tail. Only about 1 in 10 people on Less Wrong are \"normal\" in terms of the empathizing/systematizing scale, perhaps 1 in 10 are far enough out to be full blown Aspergers, and the rest of us sit somewhere in between, with most people being more to the right of the distribution than the average Cambridge mathematics student.

\n

Interestingly, 48% of respondents ticked this criterion:

\n

Severe impairment in reciprocal social interaction (at least two of the following) (a) inability to interact with peers, (b) lack of desire to interact with peers, (c) lack of appreciation of social cues, (d) socially and emotionally inappropriate behavior

\n

Which indicates that we're mostly not very good at the human social game.

\n

EDIT: Note also that nonresponse bias means that these conclusions only apply strictly to that subset of LW who actually responded, i.e. those specific 82 people. Since \"Less Wrong\" is a vague collection of \"concentric levels\" of involvement, from occasional reader to hardcore poster, and those who are more heavily involved are more likely to have responded (e.g. because they read more of the posts, and have more time), the results probably apply more to those who are more involved.

\n

Response bias could be counteracted by doing more work (e.g. asking only specific commenters, randomly selected, to respond), or by simply having a prior for response bias and AQ rates, and using the survey results to update it.

" } }, { "_id": "xfwhGZumC6fCQvvrK", "title": "Interview", "pageUrl": "https://www.lesswrong.com/posts/xfwhGZumC6fCQvvrK/interview", "postedAt": "2010-05-11T10:00:01.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "xfwhGZumC6fCQvvrK", "html": "

Answers to interesting questions from Colin Marshall.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "3pmaeoA9ALhfWhFmW", "title": "Conditioning on Observers", "pageUrl": "https://www.lesswrong.com/posts/3pmaeoA9ALhfWhFmW/conditioning-on-observers", "postedAt": "2010-05-11T05:15:53.394Z", "baseScore": 12, "voteCount": 11, "commentCount": 121, "url": null, "contents": { "documentId": "3pmaeoA9ALhfWhFmW", "html": "

Response to Beauty quips, \"I'd shut up and multiply!\"

\n

Related to: The Presumptuous Philosopher's Presumptuous Friend, The Absent-Minded Driver, Sleeping Beauty gets counterfactually mugged

\n

This is somewhat introductory. Observers play a vital role in the classic anthropic thought experiments, most notably the Sleeping Beauty and Presumptuous Philosopher gedankens. Specifically, it is remarkably common to condition simply on the existence of an observer, in spite of the continuity problems this raises. The source of confusion appears to be the distinction between the probability that an observer exists and the expected number of observers, the former not being a linear function of the problem's definition.

\n

There is a related difference between the expected gain of a problem and the expected gain per decision, which has been exploited in more complex counterfactual mugging scenarios. As in the case of the 1/2 or 1/3 confusion, the issue is the number of decisions that are expected to be made, and recasting problems so that there is at most one decision provides a clear intuition pump.

\n

Sleeping Beauty

\n

In the classic sleeping beauty problem, experimenters flip a fair coin on Sunday and sedate you. If the coin comes up heads they wake you only on the following Monday; if it comes up tails they wake you on both the following Monday and Tuesday, inducing amnesia between the two wakings. Each time you are woken, you are asked for your credence that the coin came up heads.

\n

The standard answers to this question are that the answer should be 1/2 or 1/3. For convenience let us say that the event W is being woken, H is that the coin flip came up heads and T is that the coin flip came up tails. The basic logic for the 1/2 argument is that:

\n

P(H)=P(T)=1/2, P(W|H) = P(W|T) = P(W) = 1 so by Bayes rule P(H|W) = 1/2

\n

The obvious issue to be taken with this approach is one of continuity. The assessment is independent of the number of times you are woken in each branch, and this implies that all non zero observer branches have their posterior probability equal to their prior probability. Clearly the subjective probability of a zero observer branch is zero, so this implies discontinuity in the decision theory. Whilst not in and of itself fatal, it is surprising. There is apparent secondary confusion over the number of observations in the sleeping beauty problem, for example:

\n
\n

If we want to replicate the situation 1000 times, we shouldn't end up with 1500 observations.  The correct way to replicate the awakening decision is to use the probability tree I included above. You'd end up with expected cell counts of 500, 250, 250, instead of 500, 500, 500.

\n
\n

Under these numbers, the 1000 observations made have required 500 heads and 250 tails, as each tail produces an observation on both Monday and Tuesday. This is not the behaviour of a fair coin. Further consideration of the problem shows that the naive conditioning on W is exactly where the number of observations should enter. Hence in 900 observations, there would be 300 heads and 300 tails, with 600 observations following a tail and 300 following a head. To make this rigorous, let Monday and Tuesday be the events of being woken on Monday and Tuesday respectively. Then:

\n

P(H|Monday) = 1/2, P(Monday|W) = 2/3     (P(Monday|W) = 2*P(Tuesday|W) as Monday occurs regardless of coin flip)

\n

P(H|W) = P(H ∩ Monday|W) + P(H ∩ Tuesday|W)          (Total Probability)

\n

           = P(H|Monday ∩ W).P(Monday|W) + 0              (As P(Tuesday|H) = 0)

\n

           = P(H|Monday).P(Monday|W) = 1/3                  (As Monday ∩ W = Monday)

\n

Which would appear to support the view of updating on existence. The question of why this holds in the analysis is immediate to answer: the only day on which the probability of heads is non-zero is Monday, and given an awakening it is not guaranteed to be Monday. This should not be confused with the correct observation that there is always one awakening on Monday. This has caused problems because \"Awakening\" is not an event which occurs only once in each branch. Indeed, using the 1/3 answer and working back to try to find P(W) yields P(W) = 3/2, which is a strong indication that it is not the probability that matters, but the E(# of instances of W). As intuition pumps, we can consider some related problems.
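
\n

As a sanity check on the counting argument, here is a short Monte Carlo sketch (my own illustration, not part of the original analysis): simulate many runs of the experiment, record one entry per awakening, and look at frequencies among awakenings.

import random

awakenings = []                                  # one (coin, day) entry per awakening
for _ in range(100_000):
    if random.random() < 0.5:                    # heads: woken on Monday only
        awakenings.append(('H', 'Monday'))
    else:                                        # tails: woken on Monday and Tuesday
        awakenings.append(('T', 'Monday'))
        awakenings.append(('T', 'Tuesday'))

frac_heads = sum(c == 'H' for c, _ in awakenings) / len(awakenings)
frac_monday = sum(d == 'Monday' for _, d in awakenings) / len(awakenings)
print('fraction of awakenings following heads:', round(frac_heads, 3))    # ≈ 1/3
print('fraction of awakenings that are Mondays:', round(frac_monday, 3))  # ≈ 2/3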

\n

Sleeping Twins

\n

This experiment features Omega. It announces that it will place you and an identical copy of you in identical rooms, sedated. It will then flip a fair coin. If the coin comes up heads, it will wake one of you randomly. If it comes up tails, it will wake both of you. It will then ask what your credence for the coin coming up heads is.

\n

You wake up in a nondescript room. What is your credence?

\n

It is clear from the structure of this problem that it is almost identical to the sleeping beauty problem. It is also clear that your subjective probability of being woken is 1/2 if the coin comes up heads and 1 if it comes up tails, so conditioning on the fact that you have been woken, the coin came up heads with probability 1/3. Why is this so different to the Sleeping Beauty problem? The fundamental difference is that in the Sleeping Twins problem, you are woken at most once, and possibly not at all, whereas in the Sleeping Beauty problem you are woken once or many times. On the other hand, the number of observer moments on each branch of the experiment is equal to that of the Sleeping Beauty problem, so it is odd that the manner in which these observations are achieved should matter. Clearly information flow is not possible, as provided for by amnesia in the original problem. Let us push this further.

\n

Probabilistic Sleeping Beauty

\n

We return to the experimenters and a new protocol. The experimenters fix a constant k in {1,2,..,20}, sedate you, roll a D20 and flip a coin. If the coin comes up tails, they will wake you on day k. If the coin comes up heads and the D20 comes up k, they will wake you on day 1. In either case they will ask you for your credence that the coin came up heads.

\n

You wake up. What is your credence?

\n

In this problem, the multiple distinct copies of you have been removed, at the cost of an explicit randomiser. It is clear that the structure of the problem is independent of the specific value of the constant k. It is also clear that updating on being woken, the probability that the coin came up heads is 1/21 regardless of k. This is troubling for the 1/2 answer, however, as playing this game with a single die roll and all possible values of k recovers the Sleeping Beauty problem (modulo induced amnesia). Again, having reduced the expected number of observations to be in [0,1], intuition and calculation seem to imply a reduced chance for the heads branch conditioned on being woken.
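
\n

The 1/21 figure is a one-line application of Bayes' rule; a quick sketch (my own, for any fixed k):

# P(woken | heads) = 1/20 (the D20 must come up k); P(woken | tails) = 1
p_heads, p_w_given_h, p_w_given_t = 0.5, 1 / 20, 1.0

posterior = (p_heads * p_w_given_h) / (
    p_heads * p_w_given_h + (1 - p_heads) * p_w_given_t)
print(posterior)   # 1/21 ≈ 0.0476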

\n

This further suggests that the misunderstanding in Sleeping Beauty is one of naively looking at P(W|H) and P(W|T), when the expected numbers of wakings are E(#W|H) = 1, E(#W|T) = 2.

\n

The Apparent Solution

\n

If we allow conditioning on the number of observers, we correctly calculate probabilities in the Sleeping Twins and Probabilistic Sleeping Beauty problems. It is correctly noted that a \"single paying\" bet is accepted in Sleeping Beauty with odds of 2; this follows naturally under the following decision schema: \"If it is your last day awake the decision is binding, otherwise it is not\". Let the event of being the last day awake be L. Then:

\n

P(L|W ∩ T) = 1/2, P(L|W ∩ H) = 1, the bet pays k for a cost of 1

\n

E(Gains|Taking the bet) = (k-1) P(L|W ∩ H)P(H|W) - P(L|W ∩ T) P(T|W) = (k-1) P(H|W) - P(T|W)/2

\n

Clearly, for a payout of 2 to be the break-even point, we need (2-1)·P(H|W) - P(T|W)/2 = 0, i.e. 2·P(H|W) = P(T|W), which is inconsistent with the 1/2 solution. The 1/3 solution, on the other hand, works as expected. Trivially the same result holds if the choice of important decision is randomised. In general, if a decision is made by a collective of additional observers in identical states to you, then the existence of the additional observers does not change the overall payoffs. This can be modelled either by splitting payoffs between all decision makers in a group making identical decisions, or equivalently calculating as if there is a 1/N chance that you dictate the decision for everyone given N identical instances of you (\"Evenly distributed dictators\"). To do otherwise leads to fallacious expected gains, as exploited in Sleeping Beauty gets counterfactually mugged. Of course, if the gains are linear in the number of observers, then this cancels with the division of responsibility and the observer count can be neglected, as in accepting 1/3 bets per observer in Sleeping Beauty.
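
\n

A small check of the expected-gain formula under the two candidate credences (my own illustration; the break-even payout is the smallest k at which the expected gain is non-negative):

from fractions import Fraction as F

def expected_gain(k, p_h_given_w):
    # (k-1)·P(L|W ∩ H)·P(H|W) - P(L|W ∩ T)·P(T|W), with P(L|W ∩ H) = 1, P(L|W ∩ T) = 1/2
    return (k - 1) * p_h_given_w - F(1, 2) * (1 - p_h_given_w)

for label, credence in [('thirder', F(1, 3)), ('halfer', F(1, 2))]:
    k_even = min(F(n, 100) for n in range(100, 301)
                 if expected_gain(F(n, 100), credence) >= 0)
    print(label, 'break-even payout:', k_even)   # thirder: 2, halfer: 3/2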

\n

The Absent Minded Driver

\n

If we consider the problem of The Absent-Minded Driver, then we are faced with another scenario in which depending on decisions made there are varying numbers of observer moments in the problem. This allows an apparent time inconsistency to appear, much as in Sleeping Beauty. The problem is as follows:

\n

You are a mildly amnesiac driver on a motorway. You notice each junction as you approach it, but recall nothing about any junction you have already passed. There are 2 junctions. If you turn off at the first, you gain nothing. If you turn off at the second, you gain 4. If you continue past the second, you gain 1.

\n

Clearly, analysis of the problem shows that if p is the probability of going forward (constant, thanks to the amnesia), the payout is p[p+4(1-p)], maximised at p = 2/3. However, once on the road and approaching a junction, let the probability that you are approaching the first be α. The expected gain is then claimed to be αp[p+4(1-p)]+(1-α)[p+4(1-p)], which is not maximised at p = 2/3 unless α = 1. It can be immediately noticed that given p, α = 1/(p+1). However, this is still not correct.

\n

Instead, we can observe that all non zero payouts are the result of two decisions, at the first and second junctions. Let the state of being at the first junction be A, and the second be B. We observe that:

\n

E(Gains due to one decision|A) = 1·(1-p)·0 + (1/2)·p[p+4(1-p)]

\n

E(Gains due to one decision|B) = (1/2)·[p+4(1-p)]

\n

P(A|W) = 1/(p+1), P(B|W) = p/(p+1), E(#A) = 1, E(#B) = p, (#A, #B independent of everything else)

\n

Hence the expected gain per decision:

\n

E(Gains due to one decision|W) = [1·(1-p)·0 + (1/2)·p[p+4(1-p)]]/(p+1) + (1/2)·[p+4(1-p)]·p/(p+1) = [p+4(1-p)]·p/(p+1)

\n

But as has already been observed in this case the number of decisions made is dependent on p, and thus

\n

E(Gains|W) = [p+4(1-p)]·p, which is the correct metric. Observe also that E(Gains|A) = E(Gains|B) = p[p+4(1-p)]/2

\n

As a result, there is no temporal inconsistency in this problem; the approach of counting up over all observer moments, and splitting outcomes due to a set of decisions across the relevant decisions is seemingly consistent.
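
\n

A quick numerical check of the two claims above (my own sketch): the planning payoff p[p+4(1-p)] peaks at p = 2/3, and multiplying the per-decision gain by the expected number of decisions, 1+p, recovers it.

from fractions import Fraction as F

def planning_payoff(p):
    # exit at first junction: 0; exit at second: 4; continue past both: 1
    return p * (p + 4 * (1 - p))

best_p = max((F(i, 1000) for i in range(1001)), key=planning_payoff)
print('argmax of p[p+4(1-p)] on a grid:', best_p)        # 667/1000 ≈ 2/3

p = F(2, 3)
per_decision = planning_payoff(p) / (1 + p)              # E(Gains due to one decision|W)
print(per_decision * (1 + p) == planning_payoff(p))      # True: both equal 4/3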

\n

Sleeping Beauty gets Counterfactually Mugged

\n

In this problem, the Sleeping Beauty problem is combined with a counterfactual mugging. If Omega flips a head, it simulates you, and if you would give it $100 it will give you $260. If it flips a tail, it asks you for $100 and if you give it to Omega, it induces amnesia and asks again the next day. On the other hand if it flips a tail and you refuse to give it money, it gives you $50.

\n

Hence precommitting to give the money nets $30 on the average, whilst precommitting not to nets $25 on the average. However, since you make exactly 1 decision on either branch if you refuse, whilst you make 3 decisions every two plays if you give Omega money, per decision you make $25 from refusing and $20 from accepting (obtained via spreading gains over identical instances of you). Hence correct play depends on whether Omega will ensure you get a consistent number of decisions or plays of the whole scenario. Given a fixed number of plays of the complete scenario, we thus have to remember to account for the increased number of decisions made in one branch of possible play. In this sense it is identical to the Absent Minded Driver, in that the number of decisions is a function of your early decisions, and so must be brought in as a factor in expected gains.

\n

Alternately, from a more timeless view we can note that your decisions in the system are perfectly correlated; it is thus the case that there is a single decision made by you, to give money or not to. A decision to give money nets $30 on average, whilst a decision not to nets only $25; the fact that they are split across multiple correlated decisions is irrelevant. Alternately conditional on choosing to give money you have a 1/2 chance of there being a second decision, so the expected gains are $30 rather than $20.
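
\n

Writing the payoffs out explicitly (my own sketch; I read the heads branch as a straight gain of $260, as in the description above):

# Per complete play (one coin flip), averaging the two equally likely branches
give   = 0.5 * 260 + 0.5 * (-100 - 100)   # heads: +$260; tails: pay $100 on both days
refuse = 0.5 * 0 + 0.5 * 50               # heads: nothing; tails: +$50
print(give, refuse)                       # 30.0 25.0

# Per decision: refusing is 1 decision per play; giving is 3 decisions per 2 plays
print((260 - 200) / 3, (0 + 50) / 2)      # 20.0 25.0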

\n

Conclusion

\n

The approach of updating on the number of observer moments is comparable to UDT and other timeless approaches to decision theory; it does not care how the observers come to be, be it a single amnesiac patient over a long period or a series of parallel copies or simulations. All that matters is that they are forced to make decisions.

\n

In cases where a number of decisions are discarded, the splitting of payouts over the decisions, or equivalently remembering the need for your decision not to be ignored, yields sane answers. This can also be considered as spreading a single pertinent decision out over some larger number of irrelevant choices.

\n

Correlated decisions are not so easy; care must be taken when the number of decisions is dependent on behaviour.

\n

In short, the 1/3 answer to sleeping beauty would appear to be fundamentally correct. Defences of the 1/2 answer run into the problem that the relevant quantity, the expected number of observer moments, can fall outside [0,1] and is thus not a probability. This is the underlying danger. Use of anthropic or self-indication probabilities yields sane answers in the problems considered, and can cogently answer typical questions designed to yield a non-anthropic intuition.

" } }, { "_id": "vHDk5xr9JDC64rb8T", "title": "Do you have High-Functioning Asperger's Syndrome?", "pageUrl": "https://www.lesswrong.com/posts/vHDk5xr9JDC64rb8T/do-you-have-high-functioning-asperger-s-syndrome", "postedAt": "2010-05-10T23:55:45.936Z", "baseScore": 26, "voteCount": 24, "commentCount": 345, "url": null, "contents": { "documentId": "vHDk5xr9JDC64rb8T", "html": "

 

\n
\n

EDIT: To combat nonresponse bias, I'd appreciate it if anyone who looked at this post before and decided not to fill in the poll would go and do so now, but that people who haven't already considered and decided against filling in the poll refrain from doing so. We might get some idea of which way the bias points by looking at the difference in results.

\n

 

\n

This is your opportunity to help your community's social epistemology!

\n

\n


\n

\n

 

\n

There is some evidence that consequentialist/utilitarian thinking is more common in people with Asperger's syndrome, so I thought it would be interesting to follow that correlation the other way around: what fraction of people who are attracted to rational/consequentialist thinking have what one might call \"High-functioning Asperger's Syndrome\"? From wisegeek:

\n

Impaired social reactions are a key component of Asperger's syndrome. People who suffer from this condition find it difficult to develop meaningful relationships with their peers. They struggle to understand the subtleties of communicating through eye contact, body language, or facial expressions and seldom show affection towards others. They are often accused of being disrespectful and rude, since they find they can’t comprehend expectations of appropriate social behavior and are often unable to determine the feelings of those around them. People suffering from Asperger's syndrome can be said to lack both social and emotional reciprocity.

\n

\n

Although Asperger's syndrome is related to autism, people who suffer from this condition do not have other developmental delays. They have normal to above average intelligence and fail to meet the diagnostic criteria for any other pervasive developmental disorder. In fact, people with Asperger's syndrome often show intense focus, highly logical thinking, and exceptional abilities in math or science.

\n

\n

This book makes the following point about \"High-functioning adults\":

\n

\"Individuals at the most able end of the autistic spectrum have the most hidden form of this disorder, and as a result, these individuals and their family are often the most disadvantaged in terms of getting a diagnosis. Because they have higher IQs, high-functioning adults are able to work out ways to compensate for their difficulties in communication or in social functioning that are based on logical reasoning.\"

\n

So if you are a very smart AS person, it might not be obvious that you have it, especially because if you have difficulty reading social situations you might not realize that you are having that difficulty; rather, you'll just experience other people as being mean and conclude that the world is simply full of mean people. But there are some clues you can follow. For example, this website talks about what AS in kids tends to be like:

\n

One of the most disturbing aspects of Higher Functioning children with Aspergers (HFA) is their clumsy, nerdish social skills. Though they want to be accepted by their peers, they tend to be very hurt and frustrated by their lack of social success. Their ability to respond is confounded by the negative feedback that these children get from their painful social interactions. This greatly magnifies their social problems. Like any of us, when we get negative feedback, we become unhappy. This further inhibits their social skills, and a vicious circle develops.

\n

If your childhood involved extreme trouble with other kids (getting bullied, picked last for sports teams, etc.), but not for an obvious reason such as being very fat or belonging to a racial minority, then add some evidence-points to the \"AS\" hypothesis.

\n

High-functioning AS gives a person a combination of strengths and weaknesses. If you know about the weaknesses, you can probably better compensate for them. For reference, the following are the Gillberg diagnostic criteria for Asperger Syndrome:

\n

1. Severe impairment in reciprocal social interaction (at least two of the following)
(a) inability to interact with peers, (b) lack of desire to interact with peers, (c) lack of appreciation of social cues, (d) socially and emotionally inappropriate behavior

2. All-absorbing narrow interest (at least one of the following)
(a) exclusion of other activities, (b) repetitive adherence, (c) more rote than meaning

3. Imposition of routines and interests (at least one of the following)
(a) on self, in aspects of life, (b) on others

4. Speech and language problems (at least three of the following)
(a) delayed development, (b) superficially perfect expressive language, (c) formal, pedantic language, (d) odd prosody, peculiar voice characteristics, (e) impairment of comprehension including misinterpretations of literal/implied meanings

5. Non-verbal communication problems (at least one of the following)
(a) limited use of gestures, (b) clumsy/gauche body language, (c) limited facial expression, (d) inappropriate expression, (e) peculiar, stiff gaze

6. Motor clumsiness

\n

If people want to, they can respond to a poll I created, recording their self-assessment of whether or not they fit these criteria. My own take is similar to that of Simon Baron-Cohen: that there isn't a natural dividing line between AS and neurotypical, rather that there is a spectrum of empathizing vs. systematizing brain-types. For those who want to, you can take Baron-Cohen's \"Autism quotient\" test on Wired magazine's website, and you can record your score on my poll.

\n

 

" } }, { "_id": "Jo4ExrJxF6rm8cm3k", "title": "Q&A with Harpending and Cochran", "pageUrl": "https://www.lesswrong.com/posts/Jo4ExrJxF6rm8cm3k/q-and-a-with-harpending-and-cochran", "postedAt": "2010-05-10T23:01:29.986Z", "baseScore": 34, "voteCount": 29, "commentCount": 112, "url": null, "contents": { "documentId": "Jo4ExrJxF6rm8cm3k", "html": "

\n

Edit: Q&A is now closed. Thanks to everyone for participating, and thanks very much to Harpending and Cochran for their responses.

\n

In response to Kaj's review, Henry Harpending and Gregory Cochran, the authors of The 10,000 Year Explosion, have agreed to a Q&A session with the Less Wrong community.

\n

If you have any questions for either Harpending or Cochran, please reply to this post with a question addressed to one or both of them. Material for questions might be derived from their blog for the book, which includes stories about hunting animals in Africa with an eye towards evolutionary implications (and which came to Jennifer's attention via Steve Sailer's prior coverage).

\n

Please do not kibitz in this Q&A... instead go to the kibitzing area to talk about the Q&A session itself. Eventually, this post will be edited to note that the process has been closed, at which time there should be no new questions.

\n

 

" } }, { "_id": "C5sMLunypq3ignwkc", "title": "Not Mothers’ Day", "pageUrl": "https://www.lesswrong.com/posts/C5sMLunypq3ignwkc/not-mothers-day", "postedAt": "2010-05-10T16:13:42.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "C5sMLunypq3ignwkc", "html": "

As far as I can tell, the point of phoning your mother on mothers’ day is to demonstrate your affection to her.

\n

For a behaviour to be an honest signal of characteristic X it needs to be costly enough for those without characteristic X not to bother (or bother as much). So phoning your mother in itself is a fine signal that you like her and/or respect her. If you hated her you would call her less often or never, depending on how explicit you wanted to be about it.

\n

Having a specific day when you are meant to phone seems to largely negate the purpose however. It’s easier for people who don’t care much to phone, relative to those who do, when:

\n\n

So it looks like I could send a stronger signal of affection toward my mother by phoning her on any other day of the year. I can probably even countersignal by not phoning her on Mothers’ day, since my talking to her regularly makes it implausible that I dislike her or wish to offend her. Why do people mostly phone on mothers day then?

\n

I’m writing from a subculture in Australia, and I hear many variations on this tradition and presumably its requirements and implicit messages exist internationally. Perhaps someone from a culture where this makes sense can tell me about it?

\n

Anyway, I didn’t call my mother yesterday. I presume she interprets my calling her other times as a much stronger signal. I dedicate this blog post to her instead, to show that I can remember to commemorate her goodness at being a mother on a day when I wasn’t reminded a zillion times (ok, so not a very distant day from when I was reminded, but I should be doing better than celebrating on Mothers’ Day, right?). Happy Not Mothers’ Day Mummy!


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "fTu69HzLSXqWgj9ib", "title": "Is Google Paperclipping the Web? The Perils of Optimization by Proxy in Social Systems", "pageUrl": "https://www.lesswrong.com/posts/fTu69HzLSXqWgj9ib/is-google-paperclipping-the-web-the-perils-of-optimization", "postedAt": "2010-05-10T13:25:41.567Z", "baseScore": 56, "voteCount": 39, "commentCount": 105, "url": null, "contents": { "documentId": "fTu69HzLSXqWgj9ib", "html": "

Related to:  The Importance of Goodhart's Law, Lucas Critique, Campbell's Law

\n

 

\n

tl;dr version: The article introduces the pattern of Optimization by Proxy (OBP), which can be found in many large scale distributed systems, including human societies. The pattern occurs when a computationally limited algorithm uses a proxy property as a shortcut indicator for the presence of a hard to measure target quality. When intelligent actors with different motivations control part of the data, the existence of the algorithm reifies the proxy into a separate attribute to be manipulated with the goal of altering the algorithm's results. This concept is then applied to Google and the many ways it interacts with the various groups of actors on the web. The second part of this article contains examination of how OBP contributes towards the degrading of the content of the web, and how this relates to the Friendly Artificial Intelligence concept of 'paperclipping'.

\n


\n

Introducing OBP

\n

 

\n

The first thing a newly-hatched herring gull does after breaking out of its shell is to peck on its mother’s beak, which causes her to give it its first feeding. Puzzled by this apparent automatic recognition of its mother, Dutch ethologist and ornithologist Nikolaas Tinbergen conducted a sequence of experiments designed to determine what precisely it was that the newborn herring gull was attracted to. After experimenting with facsimiles of adult female herring gulls, he realized that the beak alone, without the bird, would elicit the response. Through multiple further iterations he found that the characteristics that the newborns were attracted to were thinness, elongation, redness and an area with high contrast. Thus, the birds would react much more intensely to a long red stick-like beak with painted stripes on the tip than they would to a real female herring gull. It turns out that the chicks don't have an ingrained definition of 'motherness' but rather determine their initial actions by obeying very simple rules, and are liable to radically miss the mark in the presence of objects that are explicitly designed to the specification of these rules. Objects of this class, able to dominate the attention of an animal away from the intended target were later called ‘supernormal stimuli’ (or superstimuli) and have been commonly observed in nature and our own human environment ever since.

\n

 

\n

Generalising the above example, we can say that Optimization by Proxy occurs when an algorithm substitutes the problem of measuring a hard-to-quantify attribute with that of measuring a usually co-occurring proxy that is computationally efficient to measure.

\n

 

\n

A similar pattern appears when algorithms intended to make optimized selections over vast sets of candidates are applied on implicitly or explicitly social systems. As long as the fundamental assumption that the proxy co-occurs with the desired property holds, the algorithm performs as intended, yielding results that to the untrained eye look like ‘magic’. Google’s PageRank, in its original incarnation, aiming to optimize for page quality, does so indirectly, by data mining the link structure of the web. As the web has grown, such algorithms, and their scalability characteristics, have helped search engines dominate navigation on the web over previously dominant human-curated directories.

\n

\n

When there is only a single party involved in the production, filtering, and consumption of results, or when the incentives of the relevant group of actors are aligned, such as in the herring gull case, the assumption of the algorithm remains stable and its results remain reliable.

\n

 

\n

Effect of Other Intelligent Actors

\n

 

\n

When however instances of the proxy are in the control of intelligent actors that can manipulate it, and stand to benefit from distorting the results of the algorithm, then the existence of the algorithm itself and the motive distortions it creates alter the results it produces. In the case of PageRank, what we have is essentially Google acting as a singleton intermediary between two groups: content producers and consumers. Its early results owe to the fact that the link structure it crawled was effectively an unintentional byproduct of the buildup of the web. By bringing it to the attention of website owners as a distinct concept however, they have been incentivised to manipulate it separately, through techniques such as link farming, effectively making the altered websites act as supernormal stimuli for the algorithm. In this sense, the act of observation and the computation and publication of results alters that which is being observed. What follows is an arms race between the algorithm designers and the external agents, each trying to affect the algorithm’s results in their own preferred direction, with the algorithm designers controlling the algorithm itself and malicious agents controlling part of the data it is applied on.

\n

 

\n

\"\"

\n

 

\n

The above figure (original Google drawing here) may help visualise the issue. Items that satisfy the proxy but not the target quality are called false positives. Items possessing the target quality but not the proxy become false negatives. What effectively happens when Optimization by Proxy is applied to a social system, is that malicious website owners locate the semantic gap between target quality and proxy, and aim to fit in the false positives of that mismatch. The fundamental assumption here is that since the proxy is easier to compute, it is also easier to fake. That this is not the case in NP-complete problems (while no proof of P=NP exists) may offer a glimmer of hope for the future, but current proxies are not of this class. The result is that where proxy and target quality would naturally co-occur, the arrival of the algorithm, and the distortion it introduces to the incentive structure, make the proxy and the target quality more and more distinct by way of expanding the false positives set.

\n

 

\n

Faking it - A Bayesian View

\n

 

\n

We can obtain a little more insight by considering a simple Bayesian network representation of the situation. A key guide to algorithm design is the identification of some measure that intuitively will be highly correlated with quality. In terms of PageRank in its original incarnation, the reasoning is as follows. High quality web sites will attract attention from peers who are also contributing related content. This will “cause” them to link into the web site under consideration. Hence if we measure the number of highly ranked web sites that link into it, this will provide us with an indication of the quality of that site. The key feature is that the causal relationship is from the underlying quality (relevance) to the indicator that is actually being measured.

\n

 

\n

This simple model raises a number of issues with the use of proxies. Firstly, one needs to be aware that it is not just a matter of designing a smart algorithm for quantifying the proxy. One also needs to quantify the strength of association between the proxy and the underlying concept.

\n

 

\n

Secondly, unless the association is an extremely strong one, this makes use of the proxy a relatively “lossy” test for the underlying concept. In addition, if one is going to use the proxy for decision-making, one needs some measure of confidence in the value assigned to the strength of the relationship – a second-order probability that reflects the level of experience and consistency of the evidence that has been used to determine the strength of the relationship.

\n

 

\n

Finally, and most critically, one needs to be aware of the consequences of performing inference in the reverse causal direction. In modeling this as a Bayesian Network, we would use the conditional probability distribution p(PR | Q) as a measure of the “strength” of the relationship between cause and proxy (where “PR” is a random variable representing the value of PageRank, and “Q” is a random variable representing the value of the (hidden) cause, Quality). Given a particular observation of PR, what we need to determine is p(Q | PR) – the distribution over Quality given our observation on the proxy. This (in our simple model) can be determined through the application of Bayes’ rule:

\n

 

\n

\"\"

\n

 

\n

What this reminds us is that the prior probability distribution on Quality is a major factor in determining its posterior following an observation on the proxy. In the case of social systems, however, this prior is the very thing that is shifting.
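
\n

A toy numerical illustration of this sensitivity (my own hypothetical numbers, not measurements of any real system): holding the strength of the proxy fixed, the posterior p(Q | PR) swings heavily with the prior on Quality, which is exactly the quantity that manipulation shifts.

def posterior_quality(prior_q, p_pr_given_q, p_pr_given_not_q):
    # p(Q = high | PR = high) by Bayes' rule
    evidence = p_pr_given_q * prior_q + p_pr_given_not_q * (1 - prior_q)
    return p_pr_given_q * prior_q / evidence

# Same proxy strength, three different priors on the underlying quality
for prior in (0.5, 0.1, 0.01):
    print(prior, round(posterior_quality(prior, 0.9, 0.2), 3))
# prints: 0.5 0.818, 0.1 0.333, 0.01 0.043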

\n

 

\n

Attempts to Counteract Optimization by Proxy

\n

 

\n

One approach by algorithm owners is to keep secret the operation of the algorithm, creating uncertainty over the effects of manipulation of the proxy. This is effectively security by obscurity and can be counteracted by dedicated interrogation of the algorithm’s results. In the case of PageRank, a cottage industry has formed around Search Engine Optimization (SEO) and Search Engine Marketing (SEM), essentially aimed at improving a website’s placing in search engine results, despite the secrecy of the algorithm’s exact current operation. While a distinction can be made between black-hat and white-hat practitioners, the fact remains that the existence of these techniques is a direct result of the existence of an algorithm that optimizes by proxy. Another approach may be to use multiple proxies. This, however, is equivalent to using a single complex proxy. While manipulation becomes more difficult, it also becomes more profitable as fewer people will bother doing it.

\n

 

\n

As a response to the various distortions and manipulations, algorithms are enriched with heuristics to identify them. This, as the arms race progresses, is hoped to converge to the point where the proxy approaches the original target more and more, and hence the external actors are forced to simulate the algorithm’s target quality to the point where, to misquote Arthur C. Clarke, “sufficiently advanced spam is indistinguishable from content”. This of course would hold only if processing power were not an issue. However, if processing cost were not an issue, far more laborious algorithms could be used to evaluate the target attribute directly, provided an algorithm could be made to describe the concept to the level at which a human would be able to distinguish it. Optimization by Proxy, being a computational shortcut, is only useful when processing power or ability to define is limited. In the case of Web search, there is a natural asymmetry, with the manipulators able to devote many more machine- and man-hours to optimization of the result than the algorithm can spend judging the quality of any given item. Thus, algorithm designers can only afford to tackle the most broadly-occurring and easily distinguishable forms of manipulation, while knowingly ignoring the more sophisticated or obscure ones. On the other hand, the defenders of the algorithm always have the final judgment and the element of surprise on their side.

\n

 

\n

Up to this point, I have tried to more or less describe Optimization by Proxy and the results of applying it to social systems, and used Google and PageRank as a well-known example for illustration purposes. The rest of this article focuses more on the effect that Google has on the Web and applies this newly introduced concept to further the understanding of that situation.

\n

 

\n

The Downward Spiral: Industrializing OBP Exploitation

\n

 

\n

While Google can and does make adjustments and corrections to its algorithms, it can only catch manipulations that are themselves highly automated, such as content scraping and link farms. There have long been complaints about the ever increasing prevalence of made-for-adsense websites, affiliate marketers, and other classes of spam in search results. These are a much harder nut to crack, and the difficulty comes back to the original limitations of the algorithm. The idea behind made-for-adsense websites is that there is low-quality, human-authored original content that is full of the appropriate keywords, and which serves adsense advertisements. The goal is twofold: first, to draw traffic into the website by ranking highly for the relevant searches, and secondly, to funnel as many of these visitors to the advertisers as possible, thereby maximising revenue.

\n

 

\n

Optimization by Proxy here can be seen occurring at least thrice: First, it is exploited as a way of gaining prominence in search results using the above-mentioned mechanisms. Secondly, the fact that the users' only relevance metric, other than search ranking, is the title and a short snippet, can mislead users into clicking through. If the title is closely related to their search query, and the snippet seems relevant and mentions the right keywords, the users will trust this proxy when the actual quality of the content that awaits them on the other side is substandard. Finally, advertisers will have their ads placed on low-quality websites that are selected by keyword, when they would perhaps have preferred that their brand not be associated with borderline-spam websites. This triple occurrence of Optimization by Proxy creates a self-reinforcing cycle where the made-for-adsense website owners are rewarded with cold hard cash for their efforts. What's worse, this cash flow has been effectively subtracted from the potential gains of legitimate content producers. One can say that the existence of Google search/adsense/adwords makes all this commerce possible in the first place, but this does not make the downward spiral of inefficiency disappear. Adding to this the related scourge of affiliate marketers only accelerates the disintegration of quality results.

\n

 

\n

An interesting characteristic of this problem is that it targets less savvy users, as they are the most likely to make the most generic queries, be unable to distinguish a trusted from an untrusted source, and click on ads. This means that those with the understanding of the underlying mechanics are actually largely shielded from realising the true extent of the problem.

\n

 

\n

Its effectiveness has inevitably led to an industrialisation of the technique, with content farms such as Demand Media which pays about $5 per article and expects its authors to research and produce 5 articles an hour(!). It also pays film directors for short videos and has become by far the largest contributor to YouTube. Its method relies on purchasing search logs from ISPs and data mining those and other data sets for profitable niche keywords to produce content on. Demand Media is so wildly profitable that there is talk of an IPO, and it is obviously not the only player in this space. No matter what improvements Google makes on their algorithm short of aggressively delisting such websites (which it hasn't been willing to do thus far), the algorithm is unable to distinguish between low quality and high quality material as previously discussed. The result is crowding out of high quality websites in favour of producers of industrialised content that is designed to just barely evade the spam filters.

\n

 

\n

Conclusion

\n

 

\n

What we have seen is that a reliance on a less than accurate proxy has led to vast changes in the very structure and content of the web, even when the algorithms applied are less intelligent than a human and are constantly supervised and corrected by experts. All this in my mind drives home the fundamental message of FAI. While descriptions of FAI have thus far referred to thought experiments such as paperclipping, real examples, albeit smaller in scale, are all around us. In our example, the algorithm is supervised by at least four distinct groups of people (Google, advertisers, content producers, consumers) and still its effects are hard to contain due to the entangled incentives of the actors. Its skewed value system is derailing the web contrary to the desires of most of the participants (except for the manipulators, I guess). For PageRank a positive is a positive, whereas the difference between a true and a false positive is only apparent to us humans. Beyond PageRank, I feel this pattern has applicability in many areas of everyday life, especially those related to large organizations, such as employers judging potential employees by the name of the university they attended, companies rewarding staff, especially in sales, with a productivity bonus, academic funding bodies allocating funds according to bibliometrics, or even LessWrong karma when seen as an authority metric. Since my initial observation of this pattern I have been seeing it in more and more places, and now consider it one of my basic 'models', in the sense that Charlie Munger uses the term.

\n

 

\n

While I have more written material on this subject, especially on possible methods of counteracting this effect, I think this article has gone on way too long, and I'd like to see the LessWrong community's feedback before possibly proceeding. This is a still developing concept in my mind, and my principal motivation for posting it here is to solicit feedback.

\n

 

\n

 

\n

Disclaimer: Large parts of the above material have been published at the recent Web Science '10 conference. Also parts have been co-written with my PhD supervisor Prof. Paul Krause. Especially the Bayesian section is essentially written by him.

\n

I should also probably say that, contrary to what you might expect, Google is one of the technology companies I most respect. Their success and principled application of technology has just happened to make them a fantastic example for the concept I am trying to communicate.

\n

 

\n

Update(s): The number of updates has gotten a bit unwieldy, so I just collapsed them all here. To summarize, there have been numerous changes throughout the article over the last few days as a response to the fantastic feedback throughout the comments here and elsewhere. Beyond the added links at the top on prior statements of the same principle in other fields, here is also a very interesting article on the construction of spam, with a similar conclusion. Also, I hear from the comments that the book Measuring and Managing Performance in Organizations touches on the same issue in the context of people's behaviour in corporate environments.

\n

 

\n

Followup on the Web:  Since I am keeping my ears on the ground, here I will try to maintain a list of articles and discussions that refer to this article. I don't necessarily agree with the contents, but I will keep them here for future reference.

\n" } }, { "_id": "YDgCoKA8BZZ42k66L", "title": "Why not fake height?", "pageUrl": "https://www.lesswrong.com/posts/YDgCoKA8BZZ42k66L/why-not-fake-height", "postedAt": "2010-05-09T11:59:20.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "YDgCoKA8BZZ42k66L", "html": "

Barking up the wrong tree:

\n

to match the dating success of a man one inch taller, a 5’9″ man would have to make $30,000 a year more…

\n
\n

So why don’t men wear high heels? Obviously the immediate reason is to avoid looking like women, since women wear them. But there are all sorts of things that both men and women do without men becoming sullied by girliness (for instance wearing high heels at other times in history). And why didn’t men get in first and claim high heels for manliness, if they should benefit from it so much? We would be puzzled if in another culture men were the only ones with push up bras, because push up bras were too manly for women to wear.

\n

Even with the danger of looking like a girl in proper high heels, isn’t there a temptation to get men’s shoes with a tiny bit higher heel than usual? Maybe just $10,000 worth of income’s heel? Presumably there is some heel increase that wouldn’t stand out as effeminate. And when that’s commonplace, wouldn’t it be tempting to add a tiny bit more? If height is such an advantage to men, and the danger of girliness shouldn’t stop a gradual increase, what’s the barrier?

\n

Note: I removed the possibility of trackbacks to this post because it was receiving more filter evading spam than I could be bothered looking at.

\n

\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "2oybbEw697CQgcRE5", "title": "The Psychological Diversity of Mankind", "pageUrl": "https://www.lesswrong.com/posts/2oybbEw697CQgcRE5/the-psychological-diversity-of-mankind", "postedAt": "2010-05-09T05:53:54.487Z", "baseScore": 145, "voteCount": 106, "commentCount": 162, "url": null, "contents": { "documentId": "2oybbEw697CQgcRE5", "html": "

The dominant belief on this site seems to be in the \"psychological unity of mankind\". In other words, all of humanity shares the same underlying psychological machinery. Furthermore, that machinery has not had the time to significantly change in the 50,000 or so years that have passed after we started moving out of our ancestral environment.

\n

In The 10,000 Year Explosion, Gregory Cochran and Henry Harpending dispute part of this claim. While they freely admit that we have probably not had enough time to develop new complex adaptations, they emphasize the speed at which minor adaptations can spread throughout populations and have powerful effects. Their basic thesis is that the notion of a psychological unity is most likely false. Different human populations are likely for biological reasons to have slightly different minds, shaped by selection pressures in the specific regions the populations happened to live in. They build support for their claim by:

\n\n

In what follows, I will present their case by briefly summarizing the contents of the book. Do note that I've picked the points that I found the most interesting, leaving a lot out.

\n

The first chapter begins by discussing a number of interesting examples:

\n\n

The second chapter of the book is devoted to a discussion about the \"big bang\" in cultural evolution that occurred about 30,000 to 40,000 years ago. During that time, people began coming up with technological and social innovations at an unprecedented rate. Cave paintings, sculpture and jewelry started showing up. Tools made during this period were manufactured using materials from hundreds of miles away, when previously they had been manufactured with local materials - implying that some sort of trade or exchange developed. Humans are claimed to have been maybe 100 times as inventive as in earlier times.

\n

The authors argue that this was caused by a biological change: that genetic changes allowed for a cultural development in 40,000 BC that hadn't been possible in 100,000 BC. More specifically, they suggest that this could have been caused by interbreeding between \"modern\" humans and Neanderthals. Even though Neanderthals are viewed as cognitively less developed than modern humans, archeological evidence suggests that at least up to 100,000 years ago, they weren't seriously behind the modern humans of the time. Neanderthals also had a different way of life, being high-risk, highly cooperative hunters while the anatomically modern humans probably had a mixed diet and were more like modern hunter-gatherers. It is known that ongoing natural selection in two populations can allow for simultaneous exploration of divergent development paths. It would have been entirely possible that the anatomically modern humans interbred with Neanderthals to some degree, the Neanderthals being a source of additional genetic variance that the modern humans could have benefited from.

\n

How would this have happened? In effect, the modern humans would have had their own highly beneficial alleles, in addition to which they'd have picked up the best alleles the Neanderthals had. Out of some 20,000 Neanderthal genes, it's highly likely that at least some of them were worth having. There wasn't much interbreeding, so Neanderthal genes with a neutral or negative effect would have disappeared from the modern human population pretty quickly. On the other hand, a beneficial gene's chance of spreading in the population is two times its fitness advantage. If beneficial genes are every now and then injected to the modern human population, chances are that eventually they will end up spreading to fixation. And indeed, both skeletal and genetic evidence shows signs of Neanderthal genes. There are at least two genes, one regulating brain size that appeared about 37,000 years ago and one playing role in speech that appeared about 42,000 years ago, that could plausibly have contributed to the cultural explosion and which may have come from the Neanderthals.

\n

The third chapter discusses the effect of agriculture, which first appeared 10,000 or so years ago. 60,000 years ago, there were something like a quarter of a million modern humans. 3,000 years ago, thanks to the higher food yields allowed by agriculture, there were 60 million humans. A larger population means there's more genetic variance: mutations that had previously occurred every 10,000 years or so were now showing up every 400 years. The changed living conditions also began to select for different genes. A \"gene sweep\" is a process where beneficial alleles increase in frequency, \"sweeping through\" the population until everyone has them. Hundreds of these are still ongoing today. For European and Chinese samples, the sweeps' rate of origination peaked at about 5,000 years ago and at 8,500 years ago for one African sample. While the full functions of these alleles are still not known, it is known that most involve changes in metabolism and digestion, defenses against infectious disease, reproduction, DNA repair, or in the central nervous system.

\n

The development of agriculture led, among other things, to a different mix of foods, frequently less healthy than the one enjoyed by hunter-gatherers. For instance, vitamin D was poorly available in the new diet. However, it is also created by ultraviolet radiation from the sun interacting with our skin. After the development of agriculture, several new mutations showed up that led to people in the areas more distant from the equator having lighter skins. There is also evidence of genes that reduce the negative effects associated with e.g. carbohydrates and alcohol. Today, people descending from populations that haven't farmed for as long, like Australian Aborigines and many Amerindians, have a distinctive track record of health problems when exposed to Western diets. DNA retrieved from skeletons indicates that 7,000 to 8,000 years ago, no one in central and northern Europe had the gene for lactose tolerance. 3,000 years ago, about 25 percent of people in central Europe had it. Today, about 80 percent of the central and northern European population carries the gene.
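
To get a feel for how fast such a sweep can be, here is a toy deterministic selection model. This is entirely my own illustration: the starting frequency, the 2 percent per generation selection coefficient and the 25-year generation time are made-up round numbers, not estimates from the book.

```python
# Toy haploid selection recursion: p' = p * (1 + s) / (1 + p * s)
def years_to_reach(target, p0=0.01, s=0.02, gen_years=25):
    p, gens = p0, 0
    while p < target:
        p = p * (1 + s) / (1 + p * s)
        gens += 1
    return gens * gen_years

for target in (0.25, 0.80):
    print(target, years_to_reach(target), 'years')
```

With these made-up numbers the allele climbs from 1 percent to about 25 percent in roughly 4,500 years and to about 80 percent in roughly 7,500 years, which is the right order of magnitude for the lactose tolerance timeline described above.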

\n

The fourth chapter continues to discuss mutations that have spread during the last 10,000 or so years. People in certain areas have more mutations giving them a resistance to malaria than people in others. The human skeleton has become more lightly built, more so in some populations. Skull volume has decreased apparently in all populations: in Europeans it is down 10 percent from the high point about 20,000 years ago. For some reason, Europeans also have a lot of variety in eye and hair color, whereas most of the rest of the world has dark eyes and dark hair, implying some Europe-specific selective pressure that happened to also affect those traits.

\n

As for cognitive changes: there are new versions of neurotransmitter receptors and transporters. Several of the alleles have effects on serotonin. There are new, mostly regional, versions of genes that affect brain development: axon growth, synapse formation, formation of the layers of the cerebral cortex, and overall brain growth. Evidence from genes affecting both brain development and muscular strength, together with our knowledge that humans in 100,000 BC had stronger muscles than we do today, suggests that we may have traded off muscle strength for higher intelligence. There are also new versions of genes affecting the inner ear, implying that our hearing may still be adapting to the development of language - or that specific human populations might even be adapting to characteristics of their local languages or language families.

\n

Ruling elites have been known to have far more offspring than the lower classes, implying that selective pressures may also have been at work there. 8 percent of Ireland's male population carries a Y chromosome descending from Niall of the Nine Hostages, a high king of Ireland around AD 400. 16 million men in central Asia are direct descendants of Genghis Khan. Most interestingly, people descended from farmers and the lower classes may be less aggressive and more submissive than others. People in agricultural societies, frequently encountering lots of people, are likely to suffer a lot more from being overly aggressive than people in hunter-gatherer societies. Rulers have also always been quick to eliminate those breaking laws or otherwise opposing the current rule, selecting for submissiveness.

\n

The fifth chapter discusses various ways (trade, warfare, etc.) by which different genes have spread through the human population throughout time. The sixth chapter discusses various historical encounters between humans of different groups. Amerindians were decimated by the diseases Europeans brought with them, but the Europeans were not likewise decimated by American diseases. Many Amerindians have a very low diversity of genes regulating their immune system, while even small populations of Old Worlders have highly diverse versions of these genes. On the other hand, Europeans for a long time had difficulty penetrating into Africa, where the local inhabitants had highly evolved genetic resistances to the local diseases. Also, Indo-European languages might have spread so widely in part because an ancestor protolanguage was spoken by lactose tolerant herders. The ability to keep cattle for their milk and not just their flesh allowed the herders to support more people per acre, thereby displacing people without lactose tolerance.

\n

The seventh chapter discusses Ashkenazi Jews, whose average IQ is around 112-115 and who are vastly overrepresented among successful scientists, among other things. However, no single statement of Jews being unusually intelligent is found anywhere in preserved classical literature. In contrast, everyone thought that classical Greeks were unusually clever. The rise in Ashkenazi intelligence seems to be a combination of interbreeding and a history of being concentrated in cognitively challenging occupations. The majority of Ashkenazi Jews were moneylenders by 1100, and the pattern continued for several centuries. Other Jewish populations, like the ones living in the Islamic countries, were engaged in a variety of occupations and do not seem to have an above-average intelligence.

" } }, { "_id": "XiSCHS3Xu3a6EC7e6", "title": "What is bunk?", "pageUrl": "https://www.lesswrong.com/posts/XiSCHS3Xu3a6EC7e6/what-is-bunk", "postedAt": "2010-05-08T18:06:08.435Z", "baseScore": 26, "voteCount": 35, "commentCount": 108, "url": null, "contents": { "documentId": "XiSCHS3Xu3a6EC7e6", "html": "

\n

Related: http://lesswrong.com/lw/1kh/the_correct_contrarian_cluster/, http://lesswrong.com/lw/1mh/that_magical_click/, http://lesswrong.com/lw/18b/reason_as_memetic_immune_disorder/

\n

Given a claim, and assuming that its truth or falsehood would be important to you, how do you decide if it's worth investigating?  How do you identify \"bunk\" or \"crackpot\" ideas?

\n

Here are some examples to give an idea. 

\n

\"Here's a perpetual motion machine\": bunk.  \"I've found an elementary proof of Fermat's Last Theorem\": bunk.  \"9-11 was an inside job\": bunk.  

\n

 \"Humans did not cause global warming\": possibly bunk, but I'm not sure.  \"The Singularity will come within 100 years\": possibly bunk, but I'm not sure.  \"The economic system is close to collapse\": possibly bunk, but I'm not sure.

\n

\"There is a genetic difference in IQ between races\": I think it's probably false, but not quite bunk.  \"Geoengineering would be effective in mitigating global warming\": I think it's probably false, but not quite bunk. 

\n

(These are my own examples.  They're meant to be illustrative, not definitive.  I imagine that some people here will think \"But that's obviously not bunk!\"  Sure, but you probably can think of some claim that *you* consider bunk.)

\n

A few notes of clarification: I'm only examining factual, not normative, claims.  I also am not looking at well established claims (say, special relativity) which are obviously not bunk. Neither am I looking at claims where it's easy to pull data that obviously refutes them. (For example, \"There are 10 people in the US population.\")  I'm concerned with claims that look unlikely, but not impossible. Also, \"Is this bunk?\" is not the same question as \"Is this true?\"  A hypothesis can turn out to be false without being bunk (for example, the claim that geological formations were created by gradual processes.  That was a respectable position for 19th century geologists to take, and a claim worth investigating, even if subsequent evidence did show it to be false.)  The question \"Is this bunk?\" arises when someone makes an unlikely-sounding claim, but I don't actually have the knowledge right now to effectively refute it, and I want to know if the claim is a legitimate subject of inquiry or the work of a conspiracy theory/hoax/cult/crackpot.  In other words, is it a scientific or a pseudoscientific hypothesis?  Or, in practical terms, is it worth it for me or anybody else to investigate it?

\n

This is an important question, and especially to this community.  People involved in artificial intelligence or the Singularity or existential risk are on the edge of the scientific mainstream and it's particularly crucial to distinguish an interesting hypothesis from a bunk one.  Distinguishing an innovator from a crackpot is vital in fields where there are both innovators and crackpots.

\n

I claim bunk exists. That is, there are claims so cracked that they aren't worth investigating. \"I was abducted by aliens\" has such a low prior that I'm not even going to go check up on the details -- I'm simply going to assume the alleged alien abductee is a fraud or nut.  Free speech and scientific freedom do not require us to spend resources investigating every conceivable claim.  Some claims are so likely to be nonsense that, given limited resources, we can justifiably dismiss them.

\n

But how do we determine what's likely to be nonsense?  \"I know it when I see it\" is a pretty bad guide.

\n

First idea: check if the proposer uses the techniques of rationality and science.  Does he support claims with evidence?  Does he share data and invite others to reproduce his experiments? Are there internal inconsistencies and logical fallacies in his claim?  Does he appeal to dogma or authority?  If there are features in the hypothesis itself that mark it as pseudoscience, then it's safely dismissed; no need to look further.

\n

But what if there aren't such clear warning signs?  Our gracious host Eliezer Yudkowsky, for example, does not display those kinds of obvious tip-offs of pseudoscience -- he doesn't ask people to take things on faith, he's very alert to fallacies in reasoning, and so on.  And yet he's making an extraordinary claim (the likelihood of the Singularity), a claim I do not have the background to evaluate, but a claim that seems implausible.  What now?  Is this bunk?

\n

A key thing to consider is the role of the \"mainstream.\"  When a claim is out of the mainstream, are you justified in moving it closer to the bunk file?  There are three camps I have in mind, who are outside the academic mainstream, but not obviously (to me) dismissed as bunk: global warming skeptics, Austrian economists, and singularitarians.  As far as I can tell, the best representatives of these schools don't commit the kinds of fallacies and bad arguments of the typical pseudoscientist.  How much should we be troubled, though, by the fact that most scientists of their disciplines shun them?  Perhaps it's only reasonable to give some weight to that fact.  

\n

Or is it? If all the scientists themselves are simply making their judgments based on how mainstream the outsiders are, then \"mainstream\" status doesn't confer any information.  The reason you listen to academic scientists is that you expect that at least some of them have investigated the claim themselves.  We need some fraction of respected scientists -- even a small fraction -- who are crazy enough to engage even with potentially crackpot theories, if only to debunk them.  But when they do that, don't they risk being considered crackpots themselves?  This is some version of \"Tolerate tolerance.\"  If you refuse to trust anybody who even considers seriously a crackpot theory, then you lose the basis on which you reject that crackpot theory.  

\n

So the question \"What is bunk?\", that is, the question, \"What is likely enough to be worth investigating?\", apparently destroys itself.  You can only tell if a claim is unlikely by doing a little investigation.  It's probably a reflexive process: when you do a little investigation, if it's starting to look more and more like the claim is false, you can quit, but if it's the opposite, then the claim is probably worth even more investigation.  

\n

The thing is, we all have different thresholds for what captures our attention and motivates us to investigate further.  Some people are willing to do a quick Google search when somebody makes an extraordinary claim; some won't bother; some will go even further and do extensive research.  When we check the consensus to see if a claim is considered bunk, we're acting on the hope that somebody has a lower threshold for investigation than we do.  We hope that some poor dogged sap has spent hours diligently refuting 9-11 truthers so that we don't have to.  From an economic perspective, this is an enormous free-rider problem, though -- who wants to be that poor dogged sap?  The hope is that somebody, somewhere, in the human population is always inquiring enough to do at least a little preliminary investigation.  We should thank the poor dogged saps of the world.  We should create more incentives to be a poor dogged sap.  Because if we don't have enough of them, we're going to be very mistaken when we think \"Well, this wasn't important enough for anyone to investigate, so it must be bunk.\"

\n

(N.B.  I am aware that many climate scientists are being \"poor dogged saps\" by communicating with and attempting to refute global warming skeptics.  I'm not aware if there are economists who bother trying to refute Austrian economics, or if there are electrical engineers and computer scientists who spend time being Singularity skeptics.)

\n

 

" } }, { "_id": "QzXJTfkTeJFdcDNEH", "title": "Moving marginal mothers", "pageUrl": "https://www.lesswrong.com/posts/QzXJTfkTeJFdcDNEH/moving-marginal-mothers", "postedAt": "2010-05-07T14:40:14.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "QzXJTfkTeJFdcDNEH", "html": "

Julian Savulescu suggests extending the idea of paying drug addicts not to have children to everyone.  At first the purpose is to avoid the eugenics feel of discouraging only one set of people from procreating, but then he reasons:

\n

“The benefit of a policy of offering inducements to sterilisation is that it would select those who do not value, do not understand, do not want the role of parent. And it is precisely these people who are likely to be the worst parents.

\n

Being a parent is, at best, a difficult job. Why not excuse those with the least motivation and determination? There are plenty of others willing to take their place. And the earth can only sustain a finite number of people.”

\n

It’s of course true that if you penalize an activity, those to whom it is most expensive already will be the ones to quit. However:

\n
    \n
  1. The existing costs of parenting already induce those who dislike parenting most not to parent. Adding another cost to parenting would just move the line where it becomes worthwhile to parent, not implement such selection. Justifying this requires an argument that the level of value at which people find parenting worthwhile is too low, not just a desire to encourage better parents to do a greater proportion of parenting in general.
  2. \n
  3. “Excuse those with the least motivation and determination”? We aren’t exactly pushing them to do it. Why presume they don’t excuse themselves at the appropriate point? This goes with the above point; the line where parenting seems worthwhile could be in the wrong place if parents were pushed for some reason to have too many children, but why think they misjudge?
  4. \n
  5. Why would there be plenty of others willing to take their places? Presumably those wanting to bear children will do so already or at least would not start at a 1:1 ratio on the news that others are not. Few factors influencing conception depend on the ambient birthrate.
  6. \n
  7. If others really were willing to ‘take their place’, the exit of poor parents from parenting  wouldn’t be relevant to the total population and whether the planet can sustain it.
  8. \n
  9. Presumably the issue is how big a finite number of people the earth’s resources can support, and more importantly why and to what extent parents should be expected to misjudge.
  10. \n
  11. Smaller populations are not automatically better if you value human life at all. That parents are unlikely to account for the entire value of their potential child’s life is a strong reason to think that parents don’t have enough children. If that is the overwhelming externality, the line should be lower, and we would be better off paying people to have children.
  12. \n
\n

    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "aQRKfzYnt3bFGgPKd", "title": "Beauty quips, \"I'd shut up and multiply!\"", "pageUrl": "https://www.lesswrong.com/posts/aQRKfzYnt3bFGgPKd/beauty-quips-i-d-shut-up-and-multiply", "postedAt": "2010-05-07T14:34:27.204Z", "baseScore": 18, "voteCount": 44, "commentCount": 358, "url": null, "contents": { "documentId": "aQRKfzYnt3bFGgPKd", "html": "

    When it comes to probability, you should trust probability laws over your intuition.  Many people got the Monty Hall problem wrong because their intuition was bad.  You can get the solution to that problem using probability laws that you learned in Stats 101 -- it's not a hard problem.  Similarly, there has been a lot of debate about the Sleeping Beauty problem.  Again, though, that's because people are starting with their intuition instead of letting probability laws lead them to understanding.

    \n

    The Sleeping Beauty Problem

    \n

    On Sunday, Beauty is given a drug that sends her to sleep. A fair coin is then tossed just once in the course of the experiment to determine which experimental procedure is undertaken. If the coin comes up heads, Beauty is awakened and interviewed on Monday, and then the experiment ends. If the coin comes up tails, she is awakened and interviewed on Monday, given a second dose of the sleeping drug, and awakened and interviewed again on Tuesday. The experiment then ends on Tuesday, without flipping the coin again. The sleeping drug induces a mild amnesia, so that she cannot remember any previous awakenings during the course of the experiment (if any). During the experiment, she has no access to anything that would give a clue as to the day of the week. However, she knows all the details of the experiment.

    \n

    Each interview consists of one question, \"What is your credence now for the proposition that our coin landed heads?\"

    \n

    Two popular solutions have been proposed: 1/3 and 1/2

    \n

    The 1/3 solution

    \n

    From wikipedia:

    \n

    Suppose this experiment were repeated 1,000 times. We would expect to get 500 heads and 500 tails. So Beauty would be awoken 500 times after heads on Monday, 500 times after tails on Monday, and 500 times after tails on Tuesday. In other words, only in a third of the cases would heads precede her awakening. So the right answer for her to give is 1/3.

    \n

    Yes, it's true that only in a third of cases would heads precede her awakening.

    \n

    Radford Neal (a statistician!) argues that 1/3 is the correct solution.

    \n

    This [the 1/3] view can be reinforced by supposing that on each awakening Beauty is offered a bet in which she wins 2 dollars if the coin lands Tails and loses 3 dollars if it lands Heads. (We suppose that Beauty knows such a bet will always be offered.) Beauty would not accept this bet if she assigns probability 1/2 to Heads. If she assigns a probability of 1/3 to Heads, however, her expected gain is 2 × (2/3) − 3 × (1/3) = 1/3, so she will accept, and if the experiment is repeated many times, she will come out ahead.

    \n

    Neal is correct (about the gambling problem).

    \n

    These two arguments for the 1/3 solution appeal to intuition and make no obvious mathematical errors.   So why are they wrong?

    \n

    Let's first start with probability laws and show why the 1/2 solution is correct. Just like with the Monty Hall problem, once you understand the solution, the wrong answer will no longer appeal to your intuition.

    \n

    The 1/2 solution

    \n

    P(Beauty woken up at least once| heads)=P(Beauty woken up at least once | tails)=1.  Because of the amnesia, all Beauty knows when she is woken up is that she has woken up at least once.  That event had the same probability of occurring under either coin outcome.  Thus, P(heads | Beauty woken up at least once)=1/2.  You can use Bayes' rule to see this if it's unclear.
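
    Spelling that Bayes' rule step out as a tiny script may help; this is just the calculation described in the paragraph above, with W standing for the event that Beauty is woken up at least once.

```python
# W = Beauty is woken up at least once during the experiment
p_heads = 0.5
p_w_given_heads = 1.0   # she is always woken at least once under heads
p_w_given_tails = 1.0   # ... and under tails
p_w = p_w_given_heads * p_heads + p_w_given_tails * (1 - p_heads)
print(p_w_given_heads * p_heads / p_w)   # 0.5
```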

    \n

    Here's another way to look at it:

    \n

    If it landed heads then Beauty is woken up on Monday with probability 1.

    \n

    If it landed tails then Beauty is woken up on Monday and Tuesday.  From her perspective, these days are indistinguishable.  She doesn't know if she was woken up the day before, and she doesn't know if she'll be woken up the next day.  Thus, we can view Monday and Tuesday as exchangeable here.

    \n

    A probability tree can help with the intuition (this is a probability tree corresponding to an arbitrary wake up day):

    \n

    \"\"

    \n

    If Beauty was told the coin came up heads, then she'd know it was Monday.  If she was told the coin came up tails, then she'd think there is a 50% chance it's Monday and a 50% chance it's Tuesday.  Of course, when Beauty is woken up she is not told the result of the flip, but she can calculate the probability of each.

    \n

    When she is woken up, she's somewhere on the second set of branches.  We have the following joint probabilities: P(heads, Monday)=1/2; P(heads, not Monday)=0; P(tails, Monday)=1/4; P(tails, Tuesday)=1/4; P(tails, not Monday or Tuesday)=0.  Thus, P(heads)=1/2.

    \n

    Where the 1/3 arguments fail

    \n

    The 1/3 argument says with heads there is 1 interview, with tails there are 2 interviews, and therefore the probability of heads is 1/3.  However, the argument would only hold if all 3 interview days were equally likely.  That's not the case here.  (On a wake-up day, heads & Monday is more likely than tails & Monday, for example.)

    \n

    Neal's argument fails because he changed the problem. \"on each awakening Beauty is offered a bet in which she wins 2 dollars if the coin lands Tails and loses 3 dollars if it lands Heads.\"  In this scenario, she would make the bet twice if tails came up and once if heads came up.  That has nothing to do with probability about the event at a particular awakening.  The fact that she should take the bet doesn't imply that heads is less likely.  Beauty just knows that she'll win the bet twice if tails landed.  We double count for tails.

    \n

    Imagine I said \"if you guess heads and you're wrong nothing will happen, but if you guess tails and you're wrong I'll punch you in the stomach.\"  In that case, you will probably guess heads.  That doesn't mean your credence for heads is 1 -- it just means I added a greater penalty to the other option.

    \n

    Consider changing the problem to something more extreme.  Here, we start with heads having probability 0.99 and tails having probability 0.01.  If heads comes up we wake Beauty up once.  If tails, we wake her up 100 times.  Thirder logic would go like this:  if we repeated the experiment 1000 times, we'd expect her to be woken up 990 times after heads on Monday, 10 times after tails on Monday (day 1), 10 times after tails on Tuesday (day 2),...., 10 times after tails on day 100.  In other words, heads would precede her awakening in only about 50% of cases. So the right answer for her to give is 1/2.
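
    To make the counting in that reductio concrete, here is the arithmetic from the paragraph above written out (1,000 repetitions, one awakening per heads run, 100 awakenings per tails run):

```python
repetitions = 1000
heads_runs = 990                       # expected with P(heads) = 0.99
tails_runs = repetitions - heads_runs  # 10
heads_awakenings = heads_runs * 1      # one awakening per heads run
tails_awakenings = tails_runs * 100    # a hundred awakenings per tails run
share = heads_awakenings / (heads_awakenings + tails_awakenings)
print(share)   # ~0.497: awakening-counting drags a 0.99 prior down to ~1/2
```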

    \n

    Of course, this would be absurd reasoning.  Beauty knows heads has a 99% chance initially.  But when she wakes up (which she was guaranteed to do regardless of whether heads or tails came up), she suddenly thinks they're equally likely?  What if we made it even more extreme and woke her up even more times on tails?

    \n

    Implausible consequence of 1/2 solution?

    \n

    Nick Bostrom presents the Extreme Sleeping Beauty problem:

    \n

    This is like the original problem, except that here, if the coin falls tails, Beauty will be awakened on a million subsequent days. As before, she will be given an amnesia drug each time she is put to sleep that makes her forget any previous awakenings. When she awakes on Monday, what should be her credence in HEADS?

    \n

    He argues:

    \n

    The adherent of the 1/2 view will maintain that Beauty, upon awakening, should retain her credence of 1/2 in HEADS, but also that, upon being informed that it is Monday, she should become extremely confident in HEADS:
    P+(HEADS) = 1,000,001/1,000,002

    \n

    This consequence is itself quite implausible. It is, after all, rather gutsy to have credence 0.999999% in the proposition that an unobserved fair coin will fall heads.

    \n

    It's correct that, upon awakening on Monday (and not knowing it's Monday), she should retain her credence of 1/2 in heads.

    \n

    However, if she is informed it's Monday, it's unclear what she should conclude.  Why was she informed it was Monday?  Consider two alternatives.

    \n

    Disclosure process 1:  regardless of the result of the coin toss she will be informed it's Monday on Monday with probability 1

    \n

    Under disclosure process 1, her credence of heads on Monday is still 1/2.

    \n

    Disclosure process 2: if heads she'll be woken up and informed that it's Monday.  If tails, she'll be woken up on Monday and one million subsequent days, and only be told the specific day on one randomly selected day.

    \n

    Under disclosure process 2, if she's informed it's Monday, her credence of heads is 1,000,001/1,000,002.  However, this is not implausible at all.  It's correct.  This statement is misleading: \"It is, after all, rather gutsy to have credence 0.999999% in the proposition that an unobserved fair coin will fall heads.\"  Beauty isn't predicting what will happen on the flip of a coin, she's predicting what did happen after receiving strong evidence that it's heads.

    \n

    ETA (5/9/2010 5:38AM)

    \n

    If we want to replicate the situation 1000 times, we shouldn't end up with 1500 observations.  The correct way to replicate the awakening decision is to use the probability tree I included above. You'd end up with expected cell counts of 500, 250, 250, instead of 500, 500, 500.

    \n

    Suppose at each awakening, we offer Beauty the following wager:  she'd lose $1.50 if heads but win $1 if tails.  She is asked for a decision on that wager at every awakening, but we only accept her last decision. Thus, if tails we'll accept her Tuesday decision (but won't tell her it's Tuesday). If her credence of heads is 1/3 at each awakening, then she should take the bet. If her credence of heads is 1/2 at each awakening, she shouldn't take the bet.  If we repeat the experiment many times, she'd be expected to lose money if she accepts the bet every time.
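
    A quick expected-value check of that modified wager, computed per experiment rather than per awakening (the stakes are the $1.50 loss and $1 win given above):

```python
# Only Beauty's last decision counts, so each run of the experiment
# settles exactly one bet, whatever the coin did.
p_heads = 0.5
ev_per_experiment = p_heads * (-1.50) + (1 - p_heads) * 1.00
print(ev_per_experiment)   # -0.25: always accepting loses money on average
```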

    \n

    The problem with the logic that leads to the 1/3 solution is it counts twice under tails, but the question was about her credence at an awakening (interview).

    \n

    ETA (5/10/2010 10:18PM ET)

    \n


    Suppose this experiment were repeated 1,000 times. We would expect to get 500 heads and 500 tails. So Beauty would be awoken 500 times after heads on Monday, 500 times after tails on Monday, and 500 times after tails on Tuesday. In other words, only in a third of the cases would heads precede her awakening. So the right answer for her to give is 1/3.

    \n

    Another way to look at it:  the denominator is not a sum of mutually exclusive events.  Typically we use counts to estimate probabilities as follows:  the numerator is the number of times the event of interest occurred, and the denominator is the number of times that event could have occurred. 

    \n

    For example, suppose Y can take values 1, 2 or 3 and follows a multinomial distribution with probabilities p1, p2 and p3=1-p1-p2, respectively.   If we generate n values of Y, we could estimate p1 by taking the ratio of #{Y=1}/(#{Y=1}+#{Y=2}+#{Y=3}). As n goes to infinity, the ratio will converge to p1.   Notice the events in the denominator are mutually exclusive and exhaustive.  The denominator is determined by n.

    \n

    The thirder solution to the Sleeping Beauty problem has as its denominator sums of events that are not mutually exclusive.  The denominator is not determined by n.  For example, if we repeat it 1000 times, and we get 400 heads, our denominator would be 400+600+600=1600 (even though it was not possible to get 1600 heads!).  If we instead got 550 heads, our denominator would be 550+450+450=1450.  Our denominator is outcome dependent, where here the outcome is the occurrence of heads.  What does this ratio converge to as n goes to infinity?  I surely don't know.  But I do know it's not the posterior probability of heads.

    " } }, { "_id": "QZ5RNER4nbbzBGvCM", "title": "Cognitive Bias Song", "pageUrl": "https://www.lesswrong.com/posts/QZ5RNER4nbbzBGvCM/cognitive-bias-song", "postedAt": "2010-05-06T23:30:55.781Z", "baseScore": -1, "voteCount": 18, "commentCount": 5, "url": null, "contents": { "documentId": "QZ5RNER4nbbzBGvCM", "html": "

    I will not summarize this. Or transcribe it. It's just funny (video link).

    " } }, { "_id": "tEHJXNhw6t87foqJL", "title": "Antagonizing Opioid Receptors for (Prevention of) Fun and Profit", "pageUrl": "https://www.lesswrong.com/posts/tEHJXNhw6t87foqJL/antagonizing-opioid-receptors-for-prevention-of-fun-and", "postedAt": "2010-05-05T14:40:12.797Z", "baseScore": 52, "voteCount": 40, "commentCount": 35, "url": null, "contents": { "documentId": "tEHJXNhw6t87foqJL", "html": "

    Related to: Ugh Fields, Are Wireheads Happy?

    \n

    In his post Ugh Fields, Roko discussed \"temporal difference learning\", the process by which the brain propagates positive or negative feedback to the closest cause it can find for the feedback. For example, if he forgets to pay his bills and gets in trouble, the trouble (negative feedback) propagates back to thoughts about bills. Next time he gets a bill, he might paradoxically have even more trouble paying it, because it's become associated with trouble and negative emotions, and his brain tends to unconsciously flinch away from it.

    He links to the associated Wikipedia article:

    \n
    \n

    The TD algorithm has also received attention in the field of neuroscience. Researchers discovered that the firing rate of dopamine neurons in the ventral tegmental area (VTA) and substantia nigra (SNc) appear to mimic the error function in the algorithm. The error function reports back the difference between the estimated reward at any given state or time step and the actual reward received. The larger the error function, the larger the difference between the expected and actual reward. When this is paired with a stimulus that accurately reflects a future reward, the error can be used to associate the stimulus with the future reward.

    Dopamine cells appear to behave in a similar manner. In one experiment measurements of dopamine cells were made while training a monkey to associate a stimulus with the reward of juice. Initially the dopamine cells increased firing rates when exposed to the juice, indicating a difference in expected and actual rewards. Over time this increase in firing back propagated to the earliest reliable stimulus for the reward. Once the monkey was fully trained, there was no increase in firing rate upon presentation of the predicted reward. This mimics closely how the error function in TD is used for reinforcement learning.

    \n
    \n

    So if I understand this right, the monkey hears a bell and is unimpressed, having no expectation of reward. Then the monkey gets some juice that tastes really good and activates (opioid dependent?) reward pathways. The dopamine system is pretty surprised, and broadcasts that surprise back to all the neurons that have been especially active recently, most notably the neurons that activated upon hearing the bell. These neurons are now more heavily associated with the dopamine system. So the next time the monkey hears a bell, it has a greater expectation of reward.

    And in this case it doesn't matter, because the monkey can't do anything about it. But if it were a circus monkey, and its trainer was trying to teach it to do a backflip to get juice, the association between backflips and juice would be pretty useful. As long as the monkey wanted juice, merely entertaining the plan of doing a backflip would have motivational value that promotes the correct action.

    The Sinclair Method is a promising technique for treating alcoholics that elegantly demonstrates these pathways by sabotaging them.

    \n

    \n

    Alcohol produces a surge of opioids in, yes, the ventral tegmental area. The temporal difference algorithm there correctly deduces that the reward is due to alcohol, and so links the dopamine system to things like drinking, planning to drink, et cetera. Rounding to the nearest cliche, dopamine represents \"wanting\", so this makes people want to drink.

    Repeat this process enough, or start with the right (wrong?) chemical structure for your opioid and dopamine receptors, and you become an alcoholic.

    So to treat alcoholism, all you should have to do is reverse the process. Drink something, but have it not activate the reward system at all. Those dopaminergic neurons that detect error in your reward predictions start firing like mad and withdrawing their connections to the parts of the brain representing drinking, drinking is no longer associated with \"wanting\", you don't want to drink, and suddenly you're not an alcoholic any more.

    It's not quite that easy. But it might be pretty close.

    The Sinclair Method of treating alcoholism is to give patients naltrexone, an opioid antagonist. Then the patients are told they can drink as much as they want. Then they do. Then they gradually stop craving drink.

    In these people, alcohol still produces opioids, but the naltrexone prevents them from working and they don't register with the brain's reward system. Drinking isn't \"fun\" any more. The dopamine system notices there's no reward, and downgrades the connection between reward and drinking, which from the inside feels like a lessened craving to drink.
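
    A toy TD(0) sketch of both halves of this story may make the mechanism clearer. The learning rate, trial counts and reward sizes below are arbitrary illustrative numbers, not anything taken from the Sinclair literature, and the two-state cue-then-drink chain is a drastic simplification.

```python
# Minimal TD(0) model: a cue (bell, or planning to drink) precedes the
# rewarding event. Values are learned from the prediction-error signal.
alpha = 0.1                         # learning rate (arbitrary)
values = {'cue': 0.0, 'drink': 0.0}

def run_trials(n, reward):
    for _ in range(n):
        # cue -> drink: error is the gap between what the cue predicted
        # and what the next state predicts
        values['cue'] += alpha * (values['drink'] - values['cue'])
        # drink -> end: error is the gap between prediction and actual reward
        values['drink'] += alpha * (reward - values['drink'])

run_trials(200, reward=1.0)   # normal drinking: the opioid reward registers
print(values)                 # both values near 1.0: the cue now drives wanting

run_trials(200, reward=0.0)   # drinking on naltrexone: reward is blocked
print(values)                 # values decay toward 0: the learned craving unwinds
```

    In this toy picture the same update rule that built the craving also dismantles it once the reward stops arriving, which is the Sinclair Method story in miniature.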

    In theory, this same process should be useful against any addiction or unwanted behavior. In practice, research either supports or is still investigating naltrexone use1 against smoking, self-harm, kleptomania, and overeating (no word yet on Reddit use).

    \n

    The method boasts a success rate of between 25% and 78% on alcoholics, depending on how you define success. A lot of alcoholism statistics are comparing apples to oranges (did they stay sober for more than a year? Forever? If they just lapsed once or twice, does that still count?) but eyeballing the data2 makes this look significantly better than either Alcoholics Anonymous or willpower alone.

    \n

    I'm kind of confused by the whole idea because I don't understand the lack of side effects. Knocking out the brain's learning system to cure alcoholism seems disproportionate, and I would also expect naltrexone to interfere with the ability to experience happiness (which many people seem to like). But I haven't heard anyone mention any side-effects along the lines of \"oh, and people on this drug can never learn anything or have fun ever again\", and you'd think somebody would have noticed. If anyone on Less Wrong has ever used this method, or used naltrexone for anything else, please speak up.

    Since these same pathways control so many cravings besides alcoholism, research in this area will probably uncover more knowledge of what really motivates us.

    \n

    Footnotes

    \n

    1: There's a subtle but important difference between the Sinclair Method and simple naltrexone use. As I understand it, most doctors who prescribe naltrexone tell the patient to abstain from alcohol as much as possible, but the Sinclair Method tells the patients to continue drinking normally. There are also some complicated parts about exactly when and how often you take the drug. The theory predicts the Sinclair Method would have better results, and the data seems to bear this out. As far as I know, all the studies on kleptomania, overeating, et cetera have been done on standard naltrexone use, not the Sinclair Method; I predict the Sinclair Method would work better, although there might be some practical difficulties involved in telling a kleptomaniac \"Okay, take this tablet once a day while stealing stuff at the same rate you usually do.\"

    \n

    2: 27% \"never relapse into heavy drinking\" and 78% get drinking \"below the level of increased risk of morbidity and mortality\". There's also an 87% number floating around without any justification or link to a study. I think this guy's statistics on a ~5-10% yearly remission rate from willpower or AA sound plausible.

    \n

     

    " } }, { "_id": "9iFNBzwtLxKRex3DL", "title": "The Cameron Todd Willingham test", "pageUrl": "https://www.lesswrong.com/posts/9iFNBzwtLxKRex3DL/the-cameron-todd-willingham-test", "postedAt": "2010-05-05T00:11:47.162Z", "baseScore": 7, "voteCount": 14, "commentCount": 86, "url": null, "contents": { "documentId": "9iFNBzwtLxKRex3DL", "html": "

    In 2004, the state of Texas executed Cameron Todd Willingham via lethal injection for the crime of murdering his young children by setting fire to his house. 

    \n

    In 2009, David Grann wrote an extended examination of the evidence in the Willingham case for The New Yorker, which has called into question Willingham's guilt. One of the prosecutors in the Willingham case, John Jackson, wrote a response summarizing the evidence from his current perspective. I am not summarizing the evidence here so as to not give the impression of selectively choosing the evidence.

    \n

    A prior probability estimate for Willingham's guilt (certainly not a close to optimal prior probability) is the probability that a fire resulting in the fatalities of children was intentionally set. The US Fire Administration puts this probability at 13%. The prior probability could be made more accurate by breaking down that 13% of intentionally set fires into different demographic sets, or looking at correlations with other things such as life insurance data.
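
    For readers who want to play with the numbers, here is the odds-form Bayesian update the post is implicitly inviting. The 0.13 prior is the US Fire Administration figure cited above; the likelihood ratios in the example call are purely invented placeholders for however strongly you think each piece of evidence favors guilt over innocence, not estimates from the case.

```python
def posterior(prior, likelihood_ratios):
    # Odds-form Bayes: posterior odds = prior odds * product of the ratios,
    # where each ratio is P(evidence | guilty) / P(evidence | innocent).
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# 0.13 prior; the 0.2 and 0.5 below are made-up illustrative ratios.
print(round(posterior(0.13, [0.2, 0.5]), 3))   # ~0.015
```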

    \n

    My question for Less Wrong: Just how innocent is Cameron Todd Willingham? Intuitively, it seems to me that the evidence for Willingham's innocence is of higher magnitude than the evidence for Amanda Knox's innocence. But the prior probability of Willingham being guilty given his children died in a fire in his home is higher than the probability that Amanda Knox committed murder given that a murder occurred in Knox's house.

    \n

    Challenge question: What does an idealized form of Bayesian Justice look like? I suspect as a start that it would result in a smaller percentage of defendants being found guilty at trial. This article has some examples of the failures to apply Bayesian statistics in existing justice systems.

    " } }, { "_id": "FNmifsWM8Gi7kQcR4", "title": "Experiences are friends", "pageUrl": "https://www.lesswrong.com/posts/FNmifsWM8Gi7kQcR4/experiences-are-friends", "postedAt": "2010-05-04T21:03:03.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "FNmifsWM8Gi7kQcR4", "html": "

    Products tend to be less satisfying than experiences. This is old news. Psyblog elaborates on six reasons from a set of (hard to access) recent experiments, which add up to this:

    \n

    We compare products more than experiences, and since products are doomed to not be the best we could ever have got, we are sad. When we don’t compare, we are happy.

    \n

    This requires one of two things:

    \n
      \n
    1. that when we can’t compare something, we assume it is better than average
    2. \n
    3. that we find knowing how something compares displeasing in itself unless the thing is the best.
    4. \n
    \n

    Either of these seems like puzzling behaviour. Why would we do one of them?

    \n

    The first one reminds me of the way people usually like the children they have more than the hypothetical children of any other combinations of genes they could have had. Similarly but to a lesser extent, people are uncomfortable comparing their friends and partners with others they might have had instead, and in the absence of comparison most people think those they love are pretty good. You rarely hear ‘there are likely about half a billion wives I would like more than you out there, but you are the one I’m arbitrarily in love with’.

    \n

    This all makes evolutionary sense; blind loyalty is better than ongoing evaluation from an ally, at least towards you. So you evaluate people accurately for a bit, then commit to the good ones. Notice that here the motivation for not comparing appears to come from the benefits of committing to people without regret, rather than the difficulty of figuring out what a nice bottom is worth next to a good career.

    \n

    I wonder if our not comparing experiences, and rating them well regardless, is related. Experiences we buy are often parts of our relationships with other people, while objects usually aren't. So to compare your experiences and evaluate them as accurately as you can comes dangerously close to comparing bits of your relationships and evaluating them accurately.

    \n

    For instance, if I see there was a cheaper airfare than one I took, to entertain the thought that it would have been better to travel a week later is to admit I would give up all the moments you and I spent together on that trip for some other set of experiences and fifty dollars, which feels uncomfortably like calculating and judging our time together as average and replaceable.

    \n

    If this explanation were true there would be less need for the other explanation for not comparing experiences, which is that comparing experiences is naturally more difficult than comparing products. This seems untrue anyway; the added information about products often actually makes it harder to compare, though if you used all your information you would get a better comparison.

    \n

    For instance, accurately comparing phone plans often requires a large spreadsheet and unrealistic amounts of patience, and you end up ignoring factors like the details of the applications different phones allow. On the other hand with experiences you usually know an easy to calculate price for each, a lot of detail about the one you had, and few of the details of the one you didn’t have. So you can pretty much ignore the details, unless you have some reason to think the experience you had was above or below expectations (if being at the restaurant at that time caused your colleague to get shot, probably the other restaurant would have been better), and go by price.

    \n

    This explanation predicts that if objects are closely associated with people we would treat them like we do experiences. Gifts are an obvious example, and we are unusually reluctant to compare or trade them, and tend to be especially fond of them.

    \n

    Another example of objects linked to people is toys that children think of as people. I don’t have more than anecdotal evidence on this, but when I was young I hated plenty of toys that I didn’t own, until I was given one and immediately loved it, out of politeness.

    \n

    This explanation also predicts that we might compare experiences we don't share more readily, but I have no evidence on that.


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "Xz6pKy6sjZffy4NYW", "title": "But Somebody Would Have Noticed", "pageUrl": "https://www.lesswrong.com/posts/Xz6pKy6sjZffy4NYW/but-somebody-would-have-noticed", "postedAt": "2010-05-04T18:56:34.802Z", "baseScore": 38, "voteCount": 55, "commentCount": 258, "url": null, "contents": { "documentId": "Xz6pKy6sjZffy4NYW", "html": "

    When you hear a hypothesis that is completely new to you, and seems important enough that you want to dismiss it with \"but somebody would have noticed!\", beware this temptation.  If you're hearing it, somebody noticed.

    \n

    Disclaimer: I do not believe in anything I would expect anyone here to call a \"conspiracy theory\" or similar.  I am not trying to \"soften you up\" for a future surprise with this post.

    \n

    1. Wednesday

    \n

    Suppose: Wednesday gets to be about eighteen, and goes on a trip to visit her Auntie Alicorn, who has hitherto refrained from bringing up religion around her out of respect for her parents1.  During the visit, Sunday rolls around, and Wednesday observes that Alicorn is (a) wearing pants, not a skirt or a dress - unsuitable church attire! and (b) does not appear to be making any move to go to church at all, while (c) not being sick or otherwise having a very good excuse to skip church.  Wednesday inquires as to why this is so, fearing she'll find that beloved Auntie has been excommunicated or something (gasp!  horror!).

    \n

    Auntie Alicorn says, \"Well, I never told you this because your parents asked me not to when you were a child, but I suppose now it's time you knew.  I'm an atheist, and I don't believe God exists, so I don't generally go to church.\"

    \n

    And Wednesday says, \"Don't be silly.  If God didn't exist, don't you think somebody would have noticed?\"

    \n

    2. Ignoring Soothsayers

    \n

    Wednesday's environment reinforces the idea that God exists relentlessly.  Everyone she commonly associates with believes it; people who don't, and insist on telling her, are quickly shepherded out of her life.  Because Wednesday is not the protagonist of a fantasy novel, people who are laughed out of public discourse for shouting unpopular, outlandish, silly ideas rarely turn out to have plot significance later: it simply doesn't matter what that weirdo was yelling, because it was wrong and everybody knows it.  It was only one person.  More than one person would have noticed if something that weird were true.  Or maybe it was only six or twelve people.  At any rate, it wasn't enough.  How many would be enough?  Well, uh, more than that.

    \n

    But even if you airdropped Wednesday into an entire convention center full of atheists, you would find that you cannot outnumber her home team.  We have lots of mechanisms for discounting collections of outgroup-people who believe weird things; they're \"cultists\" or \"conspiracy theorists\" or maybe just pulling a really overdone joke.  There is nothing you can do that makes \"God doesn't exist, and virtually everyone I care about is terribly, terribly wrong about something of immense importance\" sound like a less weird hypothesis than \"these people are silly!  Don't they realize that if God didn't exist, somebody would have noticed?\"

    \n

    To Wednesday, even Auntie Alicorn is not \"somebody\".  \"Somebody\" is \"somebody from whom I am already accustomed to learning deep and surprising facts about the world\".  Maybe not even them.

    \n

    3. Standing By

    \n

    Suppose: It's 1964 and you live in Kew Gardens, Queens.  You've just gotten back from a nice vacation and when you get back, you find you forgot to stop the newspapers.  One of them has a weird headline.  While you were gone, a woman was stabbed to death in plain view of several of your neighbors.  The paper says thirty-eight people saw it happen and not a one called the police.  \"But that's weird,\" you mutter to yourself.  \"Wouldn't someone have done something?\"  In this case, you'd have been right; the paper that covered Kitty Genovese exaggerated the extent to which unhelpful neighbors contributed to her death.  Someone did do something.  But what they didn't do was successfully get law enforcement on the scene in time to save her.  Moving people to action is hard.  Some have the talent for it, which is why things like protests and grassroots movements happen; but the leaders of those types of things self-select for skill at inspiring others to action.  You don't hear about the ones who try it and don't have the necessary mojo.  Cops are supposed to be easier to move to action than ordinary folks; but if you sound like you might be wasting their time, or if the way you describe the crime doesn't make it sound like an emergency, they might not turn up for a while.

    \n

    Events that need someone to act on them do not select for such people.  Witnesses to crimes, collectors of useful evidence, holders of interesting little-known knowledge - these are not necessarily the people who have the power to get your attention, and having eyewitness status or handy data or mysterious secrets doesn't give them that power by itself.  If that guy who thinks he was abducted by aliens really had been abducted by aliens, would enough about him be different that you'd sit still and listen to his story?

    \n

    And many people even know this.  It's the entire premise of the \"Bill Murray story\", in which Bill Murray does something outlandish and then says to his witness-slash-victim, \"No one will ever believe you.\"  And no one ever will.  Bill Murray could do any fool thing he wanted to you, now that this meme exists, and no one would ever believe you.

    \n

    4. What Are You Going To Do About It?

    \n

    If something huge and unbelievable happened to you - you're abducted by aliens, you witness a key bit of a huge crime, you find a cryptozoological creature - and you weren't really good at getting attention or collecting allies, what would you do about it?  If there are fellow witnesses, and they all think it's unbelievable too, you can't organize a coalition to tell a consistent tale - no one will throw in with you.  It'll make them look like conspiracy theorists.  If there aren't fellow witnesses, you're in even worse shape, because then even by accumulating sympathetic ears you can't prove to others that they should come forward with their perspectives on the event.  If you try to tell people anyway, whatever interest from others you start with will gradually drain away as you stick to your story: \"Yeah, yeah, the first time you told me this it was funny, but it's getting really old, why don't we play cards or something instead?\"  And later, if you keep going: \"I told you to shut up.  Look, either you're taking this joke way too far or you are literally insane.  How am I supposed to believe anything you say now?\"

    \n

    If you push it, your friends think you're a liar, strangers on the street think you're a nutcase, the Internet thinks you're a troll, and you think you're never going to get anyone to talk to you like a person until you pretend you were only fooling, you made it up, it didn't happen...  If you have physical evidence, you still need to get people to look at it and let you explain it.  If you have fellow witnesses to back you up, you still need to get people to let you introduce them.  And if you get your entire explanation out, someone will still say:

    \n

    \"But somebody would have noticed.\"

    \n

     

    \n

    1They-who-will-be-Wednesday's-parents have made no such demand, although it seems possible that they will upon Wednesday actually coming to exist (she still doesn't).  I am undecided about how to react to it if they do.

    " } }, { "_id": "NkspwZcbR2bjHS4Xg", "title": "Human values differ as much as values can differ", "pageUrl": "https://www.lesswrong.com/posts/NkspwZcbR2bjHS4Xg/human-values-differ-as-much-as-values-can-differ", "postedAt": "2010-05-03T19:35:25.533Z", "baseScore": 27, "voteCount": 33, "commentCount": 220, "url": null, "contents": { "documentId": "NkspwZcbR2bjHS4Xg", "html": "

    George Hamilton's autobiography Don't Mind if I Do, and the very similar book by Bob Evans, The Kid Stays in the Picture, give a lot of insight into human nature and values.  For instance: What do people really want?  When people have the money and fame to travel around the world and do anything that they want, what do they do?  And what is it that they value most about the experience afterward?

    \n

    You may argue that the extremely wealthy and famous don't represent the desires of ordinary humans.  I say the opposite: Non-wealthy, non-famous people, being more constrained by need and by social convention, and having no hope of ever attaining their desires, don't represent, or even allow themselves to acknowledge, the actual desires of humans.

    \n

    I noticed a pattern in these books:  The men in them value social status primarily as a means to an end, while the women value social status as an end in itself.

    \n

    \"Male\" and \"female\" values

    \n

    This is a generalization; but, at least at the very upper levels of society depicted in these books, and a few others like them that I've read, it's frequently borne out.  (Perhaps a culture chooses celebrities who reinforce its stereotypes.)  Women and men alike appreciate expensive cars and clothing.  But the impression I get is that the flamboyantly extravagant are surprisingly non-materialistic.  Other than food (and, oddly, clothing), the very wealthy themselves consistently refer to these trappings as things that they need in order to signal their importance to other people.  They don't have an opinion on how long or how tall a yacht \"ought\" to be; they just want theirs to be the longest or tallest.  The persistent phenomenon whereby the more wealthy someone appears, the more likely they are to go into debt, is not because these people are too stupid or impulsive to hold on to their money (as in popular depictions of the wealthy, eg., A New Leaf) .  It's because they are deliberately trading monetary capital for the social capital that they actually desire (and expect to be able to trade it back later if they wish to, even making a profit on the \"transaction\", as Donald Trump has done so well).

    \n

    With most of the women in these books, that's where it ends.  What they want is to be the center of attention.  They want to walk into a famous night-club and see everyone's heads turn.  They want the papers to talk about them.  They want to be able to check into a famous hotel at 3 in the morning and demand that the head chef be called at home, woken up, and brought in immediately to cook them a five-course meal.  Some of the women in these stories, like Elizabeth Taylor, routinely make outrageous demands just to prove that they're more important than other people.

    \n

    What the men want is women.  Quantity and quality.  They like social status, and they like to butt heads with other men and beat them; but once they've acquired a bevy of beautiful women, they are often happy to retire to their mansion or yacht and enjoy them in private for a while.  And they're capable of forming deep, private attachments to things, in a way the women are less likely to.  A man can obsess over his collection of antique cars as beautiful things in and of themselves.  A woman will not enjoy her collection of Faberge eggs unless she has someone to show it to.  (Preferably someone with a slightly less-impressive collection of Faberge eggs.)  Reclusive celebrities are more likely to be men than women.

    \n

    Some people mostly like having things.  Some people mostly like having status.  Do you see the key game-theoretic distinction?

    \n

    Neither value is easily satisfied by simply creating more wealth.  Give everybody a Rolls-Royce, and the women still have the same social status, and the men don't have any more women.  But the \"male\" value is more amenable to wealth creation.  Men compete, but perhaps mainly because the distribution of quality of women is normal.  The status-related desires of the men described above are, in theory, capable of being mutually satisfied.  The women's are not.

    \n

    Non-positional / Mutually-satisfiable vs. Positional / Non-mutually-satisfiable values

    \n

    No real person implements pure mutually-satisfiable or non-mutually-satisfiable values.   I have not done a study or taken a survey, and don't claim that these views correlate with sex in general.  I just wanted to make accessible the evidence I saw that these two types of values exist in humans.  The male/female distinction isn't what I want to talk about; it just helped organize the data in a way that made this distinction pop out for me.  I could also have told a story about how men and women play sports, and claim that men are more likely to want to win (a non-mutually-satisfiable value), and women are more likely to just want to have fun (a mutually-satisfiable value).  Let's not get distracted by sexual politics.  I'm not trying to say something about women or about men; I'm trying to say something about FAI.

    \n

    I will now rename them \"non-positional\" and \"positional\" (as suggested by SilasBarta and wnoise), where \"non-positional\" means assigning a value to something from category X according to its properties (mutually-satisfiable), and \"positional\" means assigning a new value to something from category X according to the rank of its non-positional value in the set of all X (non-mutually-satisfiable).
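
    \n

    To make the distinction concrete, here is a minimal sketch in Python; the yacht lengths and the two scoring functions are made up for illustration and are not taken from the books above:

```python
# Illustrative sketch: non-positional vs. positional values (hypothetical numbers).

def non_positional_value(quality):
    """Value of an item as a function of its own properties only."""
    return quality

def positional_value(quality, all_qualities):
    """Value of an item as a function of its rank among all items of its kind."""
    better_than = sum(1 for q in all_qualities if q < quality)   # assumes distinct qualities
    return better_than / (len(all_qualities) - 1)                # 0.0 = lowest rank, 1.0 = highest

yacht_lengths = [20.0, 35.0, 50.0, 90.0]        # made-up lengths in metres

# Give everyone a yacht twice as long:
doubled = [2 * q for q in yacht_lengths]

print([non_positional_value(q) for q in doubled])       # every non-positional value goes up
print([positional_value(q, doubled) for q in doubled])  # every rank, and so every positional value, stays put
```

    Creating wealth moves the first list but not the second, which is the zero-sum property the rest of this post turns on.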

    \n

    Now imagine two friendly AIs, one non-positional and one positional.

    \n

    The non-positional FAI has a tough task.  It wants to give everyone what it imagines they want.

    \n

    But the positional FAI has an impossible task.  It wants to give everyone what it is that it thinks they value, which is to be considered better than other people, or at least better than other people of the same sex.  But it's a zero-sum value.  It's very hard to give more status to one person without taking the same amount of status away from other people.  There might be some clever solution involving sending people on trips at relativistic speeds so that the time each person is high-status seems longer to them than the time they are low-status, or using drugs to heighten their perceptions of high status and diminish the pain of low status.  For an average utilitarian, the best solution is probably to kill off everyone except one man and one woman.  (Painlessly, of course.)

    \n

    A FAI trying to satisfy one of these preferences would take society in a completely different direction than a FAI trying to satisfy the other.  From the perspective of someone with the job of trying to satisfy these preferences for everyone, they are as different as it is possible for preferences to be, even though they are taken (in the books mentioned above) from members of the same species at the same time in the same place in the same strata of the same profession.

    \n

    Correcting value \"mistakes\" is not Friendly

    \n

    This is not a problem that can be resolved by popping up a level.  If you say, \"But what people who want status REALLY want is something else that they can use status to obtain,\" you're just denying the existence of status as a value.  It's a value.  When given the chance to either use their status to attain something else, or keep pressing the lever that gives them a \"You've got status!\" hit, some people choose to keep pressing the lever.

    \n

    If you claim that these people have formed bad habits, and improperly short-circuited a connection from value to stimulus; and can be re-educated to instead see status as a means, rather than as an end... I might agree with you.  But you'd make a bad, unfriendly AI.  If there's one thing FAIers have been clear about, it's that changing top-level goals is not allowed.  (That's usually said with respect to the FAI's top-level goals, not wrt the human top-level goals.  But, since the FAI's top-level goal is just to preserve human top-level goals, it would be pointless to make a lot of fuss making sure the FAI held its own top-level goals constant, if you're going to \"correct\" human goals first.)

    \n

    If changing top-level goals is allowed in this instance, or this top-level goal is considered \"not really a top-level goal\", I would become alarmed and demand an explanation of how a FAI distinguishes such pseudo-top-level-goals from real top-level goals.

    \n

    If a computation can be conscious, then changing a conscious agent's computation changes its conscious experience

    \n

    If you believe that computer programs can be conscious, then unless you have a new philosophical position that you haven't told anyone about, you believe that consciousness can be a by-product of computation.  This means that the formal, computational properties of peoples' values are not just critical, they're the only thing that matters.  This means that there is no way to abstract away the bad property of being zero-sum from a value without destroying the value.

    \n

    In other words, it isn't valid to analyze the sensations that people get when their higher status is affirmed by others, and then recreate those sensations directly in everyone, without anyone needing to have low status.  If you did that, I can think of only 3 possible interpretations of what you would have done, and I find none of them acceptable:

    \n\n

    Summary

    \n

    This discussion has uncovered several problems for an AI trying to give people what they value without changing what they value.  In increasing order of importance:

    \n" } }, { "_id": "wMyAJ4k9Ske6Y44SB", "title": "Connotations are indelible", "pageUrl": "https://www.lesswrong.com/posts/wMyAJ4k9Ske6Y44SB/connotations-are-indelible", "postedAt": "2010-05-03T13:20:18.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "wMyAJ4k9Ske6Y44SB", "html": "
    \n

    Once a connection exists between two concepts, there is no easy way to remove it. For instance if it was publicly decided tomorrow that ‘f***’ should no longer carry connotations of anything other than sweet lovemaking, it would be virtually impossible to remove the other meanings even if everyone wanted to. The connotation will always be a Schelling point for what the word might imply. Whenever the banned connotation made the most sense, people would understand it as that. Listeners know that the speaker knows they will be reminded of the connotation, and since the speaker used the word anyway, they intentionally sent the message.

    \n

    This is part of why polite terms are constantly changed for concepts which are followed by unwanted negative connotations, such as terms for physically and mentally disabled people and ethnic and racial minorities. As Steven Pinker probably pointed out, the negative connotations people attach to the subject matter get attached to the word, so the word becomes derogatory and we have to get another one for when offense isn’t meant. So these words cycle much faster than other words.

    \n

    You can’t even refuse to use or accept a connotation yourself. Some people insist that gender stereotypes don’t apply, are offensive, and should never be used. But if someone says to them ‘David was being a bit of a girl’, they can’t help but receive the message. They might refuse to respond, but they have no defenses to receiving. They would like to remove the association of wimpiness from the public understanding of femalehood, but they can’t even opt out themselves.

    \n

    This is similar to the game mentioned in The Strategy of Conflict where two people are to privately pick a letter from several which have different payoffs to both of them without communicating. If a single suggestion is accidentally uttered, they must pick the letter spoken even if neither of them prefers it to others. It’s the only way to coordinate. If one of them managed to speak another letter, that would weaken the original Schelling point, but not destroy it. Similarly, if you make it clear that femininity suggests strength to you, you can confuse the communication somewhat by making it difficult for either the speaker or listener to guess which of the prominent possible meanings is being communicated, but you can’t destroy the existing meaning. At best the interpretation will depend on the situation, just like in the game it will depend on other cues both parties can use to interpret one of the letters as more obvious.

    \n

    Besides this, the more you point out things shouldn’t be associated, the more you associate them in the public mind. And that’s before taking into account that your bothering to draw attention to the issue advertises to your audience that in the common understanding the concepts are associated, so they should understand them as so in normal discussion. For instance if I argued to you that ‘capitalistic’ shouldn’t be associated with ‘immoral’, you might be persuaded, but you would also get the impression that everyone else thinks they are related. Since the latter matters and the former doesn’t, the net effect on you would be to make you understand ‘immoral’ more strongly the next time you heard someone say ‘capitalistic’.

    \n

    So I would expect campaigns to attach new associations to things (such as tiny penises to speeding) to be more effective than campaigns to remove associations from things (such as inferiority from femininity). Any evidence on this?

    \n

    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "3mGNhr8ozPAuiccKR", "title": "Enjoy ≠ want, but why should wants submit?", "pageUrl": "https://www.lesswrong.com/posts/3mGNhr8ozPAuiccKR/enjoy-want-but-why-should-wants-submit", "postedAt": "2010-05-02T11:17:07.000Z", "baseScore": 0, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "3mGNhr8ozPAuiccKR", "html": "

    There are things that we try to get. There are things that we enjoy when we have them. There’s overlap,  but they aren’t the same. When people notice they aren’t the same, they often try to change or override their wants – or more often others’ wants – in the direction of reflecting what’s actually enjoyable*.

    \n

    This observed disconnect seems for instance to underlie many people’s support for government interference into personal choices and markets, indifference to taking money from the wealthy, advice about personal choices, and the ethical position which says that if a person expects a lifetime of pain, yet wants to live regardless, it is still a good thing if they die.

    \n

    An argument I’ve heard sometimes for this last manifestation in hedonistic utilitarianism is this: on introspection, it seems that enjoyment is the only thing that could be valuable. Since we should try to get things that are valuable, we should try to get that, and forget the things we thought we wanted. We were either wrong about our wants or just downright silly.

    \n

    This seems to use a trick though, of only sometimes assuming the identity of different definitions of value, ‘that which feels good now’ and ‘that which we want’. In working out intuitively what could possibly be valuable, the user of this reasoning presumably considers what seems good now – really what is meant is something like ‘the only thing which seems good at the time is enjoyment’. But then the argument goes on to implicitly assume ‘value’ as in what we enjoy is equivalent to ‘value’ as in what we seek, so as to conclude that it’s good to seek enjoyable things. But if from the outset we used ‘value’ to mean both what we like currently and what we want, ‘things we want’ must be another contender for what intuitively might be inherently valuable. If we are going by intuitions, ‘we should try to get what we want to get’ looks far more self evident than ‘we should try to get what we enjoy’.

    \n

    If there is a disconnect between wants and enjoyment, there are three possible resolutions. First, we could leave it be and believe simultaneously that it’s good to pursue what we want, and good to have what we enjoy, without them having to be the same thing. After all, we see empirically that wants and enjoyment aren’t the same – why assume they should be? This seems silly, but I won’t go into the arguments here. Secondly we could do the usual thing where we try to make people pursue things they actually enjoy. Thirdly we could go the other way and try to enjoy the things we naturally pursue.

    \n

    For example people often say that money doesn’t make us happy, and therefore we should stop seeking it. However we could equally well just try to enjoy being rich more. That doesn’t sound obviously harder, but is rarely suggested. Perhaps ‘what we enjoy’ seems more like an end goal and ‘what we seek’ seems instrumental, and in theory it seems more sensible to change instrumental things than final goals (why would you change a final goal, but to achieve your final goals?). In practice though, we are mixed up monkeys with plenty of inbuilt urges, tendencies, and pleasures in no neat hierarchy. So why should wants submit to pleasures?

    \n

    Presumably this  has been discussed at length before somewhere – if you know where, please tell me.

    \n

    *In this post ‘enjoyment’ refers to any positive mental state.


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "XJtTbMZJt2WTn7kSa", "title": "Rationality quotes: May 2010", "pageUrl": "https://www.lesswrong.com/posts/XJtTbMZJt2WTn7kSa/rationality-quotes-may-2010", "postedAt": "2010-05-01T05:48:10.694Z", "baseScore": 8, "voteCount": 6, "commentCount": 301, "url": null, "contents": { "documentId": "XJtTbMZJt2WTn7kSa", "html": "
    \n

    This is our monthly thread for collecting these little gems and pearls of wisdom, rationality-related quotes you've seen recently, or had stored in your quotesfile for ages, and which might be handy to link to in one of our discussions.

    \n\n
    " } }, { "_id": "FDyMThqqX2s47e6rG", "title": "Open Thread: May 2010", "pageUrl": "https://www.lesswrong.com/posts/FDyMThqqX2s47e6rG/open-thread-may-2010", "postedAt": "2010-05-01T05:29:40.871Z", "baseScore": 4, "voteCount": 8, "commentCount": 558, "url": null, "contents": { "documentId": "FDyMThqqX2s47e6rG", "html": "

    You know what to do.

    \n

    This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

    " } }, { "_id": "vD3o27HcXyMZpwpQq", "title": "Why is bad teaching attached to uni certification?", "pageUrl": "https://www.lesswrong.com/posts/vD3o27HcXyMZpwpQq/why-is-bad-teaching-attached-to-uni-certification", "postedAt": "2010-04-30T06:28:28.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "vD3o27HcXyMZpwpQq", "html": "

    When most things are certified, like coffee or wood or insanity, the stuff is produced by one party, then someone else judges it. University is meant to be a certification of something or another, so a nagging question for all those who can think of a zillion better ways to learn things than by moving their morning sleep to a lecture theater  is ‘why can’t university work like those other things?’

    \n

    If the learning bit were done with a different party from the certification bit, everyone could buy their preferred manner of education, rather than being constrained by the need for it to be attached to the most prestigious certification they could get hold of. This would drastically increase efficiency for those people who learn better by reading, talking, or listening to pausable, speedupable, recordings of good lecturers elsewhere than they do by listening to someone gradually mumble tangents at them for hour-long stints, or listening to the medical autobiographies of their fellow tutorial-goers.

    \n

    This is an old and seemingly good idea, assuming university is for learning stuff, so probably I should assume something else.

    \n

    Many other things university could be for face the same argument – if you are meant to learn to be a ‘capable and cultivated human being’ or just show you can put your head down and do work, these could be achieved in various ways and tested later.

    \n

    One explanation for binding the ‘learning’ to the certification is that the drudgery is part of the test. The point is to demonstrate something like the ability to  be bored and pointlessly inconvenienced for years on end, without giving up and doing something interesting instead, purely on the vague understanding that it’s what you’re meant to do. That might be a good employee characteristic.

    \n

    That good though? Surely there is far more employment related usefulness you could equip a person with in several years than just checking they have basic stamina and normal deference to social norms. Presumably just having them work cheaply for that long would tell you the same and produce more. And aren’t there plenty of jobs where the opposite characteristics, such as initiative and responding fast to suboptimal situations, are useful? Why would everyone want signals of placid obedience?

    \n

    Bryan Caplan argued that university must be long because it is to show conformity and conscientiousness, and anyone can pretend at that for a short while. But why isn’t university more like the army then? People figure out that they don’t have the conformity and conscientiousness for the army much faster than they figure it out at university, from what I hear. University is often successfully done concurrently with spending a year or five drunk, so it’s a pretty weak test for work ethic related behaviours.

    \n

    Another possible explanation is that the system made more sense at some earlier time, and is slow to change because people want to go to prestigious places and not do unusual things. While there’s no obvious reason the current setup allows more prestige, it’s been around a long time, so its institutions are way ahead prestige-wise.

    \n

    What do you think?


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "BNtjLbPSidEbihPmv", "title": "Averaging value systems is worse than choosing one", "pageUrl": "https://www.lesswrong.com/posts/BNtjLbPSidEbihPmv/averaging-value-systems-is-worse-than-choosing-one", "postedAt": "2010-04-29T02:51:31.138Z", "baseScore": 7, "voteCount": 22, "commentCount": 56, "url": null, "contents": { "documentId": "BNtjLbPSidEbihPmv", "html": "

    A continuation of Only humans can have human values.  Revised late in the evening on April 30.

    \n

    Summary: I will present a model of value systems, and show that under it, the \"averaged value system\" found by averaging the values of all the agents has a higher expected internal conflict than a typical one of the value systems it was built from, and is not, in general, a stable (locally optimal) value system.

    \n\n

    ADDED: The reason for doing this is that numerous people have suggested implementing CEV by averaging different value systems together.  My intuition is that value systems are not random; they are optimized in some way.  This optimization is undone if you mix together different value systems simply by averaging them.  I demonstrate this in the case where we suppose they are optimized to minimize internal conflict.

    \n

    To someone working with the assumptions needed for CEV, the second bullet point is probably more important.  Stability is central to CEV, while internal inconsistency may be a mere computational inconvenience.

    \n\n\n

    \n

    ADDED: Inconsistencies in value systems

    \n

    We find consistent correlations in value systems.  The US has two political parties, Republican and Democrat; and many people who find one or the other obviously, intuitively correct.  Most countries have a conservative/liberal dimension that many values line up along.  It's hard to know whether this is because people try to make their values consistent; or because game theory tends to produce two parties, or even because parties form along the first principal component of the scatterplot of the values of members of society, so that some essentially artifactual vector is guaranteed to be found to be the main dimension along which opinions vary.  However, it's at least suggestive.  You seldom find a country where the conservatives favor peace and the liberals favor war; or where the liberals value religious rules more than the conservatives.  I seldom find vegetarians who are against welfare, or loggers or oilmen who are animal-rights activists.

    \n

    If it's a general principle that some process causes people to form value systems with fewer inconsistencies than they would have by gathering different pieces from different value systems at random, it's not a great leap of faith to say that value systems with fewer inconsistencies are better in some way than ones with more inconsistencies.  We can at the very least say that a cobbled-together value system lacks this property of naturally-occurring human value systems; and therefore is not itself a good example of a human value system.

    \n

    You might study the space of possible environments in which an agent must act, and ask where in that space values are in conflict, and what the shape of the decision boundary surfaces between actions are in that space.  My intuition is that value systems with many internal conflicts have complex boundary surfaces in that space.

    \n

    More complex decision boundaries enable an agent to have a decision function that makes finer discriminations, and therefore can make more use of the information in the environment.  However, overly-complex decision boundaries may be adding noise.

    \n

    If you take the value systems held by a set of agents \"in the wild\", we can suppose their decision boundary surfaces are adapted to their environment and to their capabilities, so that they are doing a good job of balancing the complexity of the agent's decision surface vs. their computational power and the complexity of the life they face.

    \n

    If you construct a value system from those value systems, in a way that does not use the combined information used to construct all of them, and you end up with a more-complex decision surface constructed from the same amount of underlying information as a typical \"wild-type\" value system, you could conclude that this decision surface is overly-complex, and the extra complexities are noise/overfitting.

    \n

    I have other reasons I think that the degree of inconsistency within a value system could be a metric used to evaluate it.  The comments below explore some different aspects of this.  The topic needs at least a post of its own.  The idea that higher internal consistency is always better is too simple. However, if we have a population of wild-type value systems that we think are adapted by some self-organizing process, then if we combine them in a way that produces an artificial value system that is consistently biased in the same direction - either lower or higher internal consistency than wild-type - I think that is cause for concern.

    \n

    (I don't know if there are any results showing that an associative network with a higher IC, as defined below, has a more complex decision surface.  I would expect this to be the case.  A Hopfield network with no internal conflict would have a plane for its decision surface, and be able to store only 2 patterns.)

    \n

    A model of value systems

    \n

    Model any value system as a fully-connected network, where the nodes are values, and the connection from one value to another gives the correlation (from -1 to 1) between the recommendations for behavior given by the two values.  Each node is assigned a real number from 0 to 1 indicating how strongly the agent holds the value associated with that node.  Connection weights are fixed by the environment; node values vary according to the value system.

    \n

    The internal conflict (IC) in a value system is the negative of the sum, over all pairs of nodes, of the product of the node values and the connection weight between them.  This is an energy measure that we want to minimize.  Averaging value systems together is a reasonable thing to do, for an expected-utility-maximizer, only if the average of a set of value systems is expected to give a lower IC than the average IC of all of the value systems.  (Utility = - (internal conflict).)
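
    \n

    A minimal sketch of this model in code, with made-up numbers: the value system is a vector of node strengths plus a symmetric weight matrix, and IC is the negated sum over unordered pairs of nodes (each pair counted once).

```python
import numpy as np

def internal_conflict(values, weights):
    """IC = -(sum over all unordered pairs {i, j} of v_i * w_ij * v_j).

    values  : length-N array of node strengths in [0, 1]
    weights : symmetric N x N array of correlations in [-1, 1]
              (the diagonal is ignored; a node is not in conflict with itself)
    """
    v = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    pairwise = np.outer(v, v) * w                        # v_i * w_ij * v_j for every ordered pair
    return -(pairwise.sum() - np.trace(pairwise)) / 2.0  # keep each unordered pair once

# Toy example (my numbers): two allied values and a third that conflicts with the first.
W = np.array([[ 0.0, 0.8, -0.5],
              [ 0.8, 0.0,  0.3],
              [-0.5, 0.3,  0.0]])
print(internal_conflict([1.0, 1.0, 1.0], W))   # -(0.8 + 0.3 - 0.5) = -0.6
print(internal_conflict([1.0, 1.0, 0.0], W))   # dropping the conflicting value: -0.8
```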

    \n

    IC(averaged values) > average(IC) if agents are better than random

    \n

    Let there be N nodes.  Let a be an agent from the set A of all agents.  Let v_{a,i} be the value agent a places on node i.  Let w_{ij} be the weight between nodes i and j.  Let the \"averaged agent\" b mean a constructed agent b (not in A) for which v_{b,i} = average over all a of v_{a,i}.  Write \"the sum over all i and j of S\" as sum_{i, j}(S).

    \n

    Average IC = IC_a = - sum_{i, j} [ w_{ij} x sum_a (v_{a,i} x v_{a,j}) ] / |A|

    \n

    Expected IC from average agent b = IC_b = - sum_{i, j} [ w_{ij} x (sum_a(v_{a,i}) / |A|) x (sum_a(v_{a,j}) / |A|) ]

    \n

    Now I will introduce the concept of a \"random agent\", which is an agent r constructed by choosing some other agent a at random for every node i, and setting v_{r,i} = v_{a,i}.  Hopefully you will agree that a random agent will have, on average, a higher IC than one of our original agents, because existing agents are at least a little bit optimized, by evolution or by introspection.

    \n

    (You could argue that values are things that an agent never, by definition, willingly changes, or is even capable of changing.  Rather than get into a tricky philosophical argument, I will point out that, if that is so, then values have little to do with what we call \"values\" in English; and what follows applies more certainly to something more like the latter, and to what we think of when people say \"values\".  But if you also claim that evolution does not reduce value conflicts, you must have a simple, statically-coded priority-value model of cognition, eg Brooks' subsumption architecture; and you must also believe that the landscape of optimal action as a function of environment is everywhere discontinuous, or else you would expect agents in which a slight change in stimuli results in a different value achieving dominance to suffer a penalty for taking uncorrelated actions in situations that differ only slightly.)

    \n

    We find the average IC of a random agent, which we agreed (I hope) is higher than the average IC of a real agent, by averaging the contribution from pair of nodes {i, j} over all possible choices of agents used to set v_{r,i} and v_{r,j}.  The average IC of a random agent is then

    \n

    IC_r = Average IC of a random agent = - sum_{i, j} [ w_{ij} x sum_a (v_{a,i} x sum_a(v_{a,j})) ] / (|A| x |A|)

    \n

    We see that IC_r = IC_b.  In other words, using this model, constructing a value system by averaging together other value systems gives you the same result that you would get, on average, by picking one agent's value for one node, and another agent's value for another node, and so on, at random.  If we assume that the value system held by any real agent is, on average, better than such a randomly-thrown-together value system, this means that picking the value system of any real agent will give a lower expected IC than picking the value system of the averaged agent.
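
    \n

    The identity IC_r = IC_b can be checked numerically; the sketch below uses a random symmetric weight matrix and an arbitrary made-up population of agents, since the identity itself does not depend on the agents being optimized, while the comparison with IC_a does:

```python
import numpy as np

rng = np.random.default_rng(0)
N, A = 6, 40                                     # number of value nodes and of agents (arbitrary)

W = rng.uniform(-1.0, 1.0, size=(N, N))
W = (W + W.T) / 2.0                              # symmetric correlation weights
np.fill_diagonal(W, 0.0)

def ic(v):
    """Internal conflict, counting each unordered pair of nodes once."""
    p = np.outer(v, v) * W
    return -(p.sum() - np.trace(p)) / 2.0

agents = rng.uniform(0.0, 1.0, size=(A, N))      # stand-ins for the real agents' value systems
averaged_agent = agents.mean(axis=0)

# "Random agents": for every node, copy that node's value from an independently chosen agent.
random_agents = [np.array([agents[rng.integers(A), i] for i in range(N)]) for _ in range(20000)]

print(ic(averaged_agent))                        # IC_b
print(np.mean([ic(v) for v in random_agents]))   # IC_r: matches IC_b up to sampling noise
print(np.mean([ic(v) for v in agents]))          # IC_a: whether this beats IC_b depends on how
                                                 # optimized the agents are, which is the post's point
```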

    \n

    I didn't design this model to get that result; I designed just one model, which seemed reasonable to me, and found the proof afterward.

    \n

    Value systems are stable; an averaged value system is not

    \n

    Suppose that agents have already evolved to have value systems that are consistent; and that agents often actively work to reduce conflicts in their value systems, by changing values that their other values disagree with.  (But see comments below on deep values vs. surface values.  A separate post justifying this supposition, and discussing whether humans have top-level goals, is needed.)  If changing one or two node values would reduce the IC, either evolution or the agent would probably have already done so.  This means we expect that each existing value system is already a local optimum in the space of possible node values.

    \n

    If a value system is not at a local optimum, it's unstable.  If you give that value system to an agent, or a society, it's likely to change to something else - possibly something far from its original setting.  (Also, the fact that a value system is not a local optimum is a strong indicator that it has higher-than-typical IC, because the average IC of systems that are a little ways d away from a local minimum is greater than the average IC of systems at a local minimum, by an amount proportional to d.)

    \n

    Averaging value systems together is therefore a reasonable thing to do only if the average of a set of value systems that are all local minima is guaranteed to give a value system that is also a local minimum.

    \n

    This is not the case.  Consider value systems of 3 nodes, A, B, and C, with the weights AB=1, BC=1, AC=-1.  Here are two locally-optimal value systems.  Terms in conflict measures are written as node x connection x node:

    \n

    A = 0, B = 1, C = 1: Conflict = -(0 x 1 x 1 + 1 x 1 x 1 + 1 x -1 x 0) = -1

    \n

    A = 1, B = 1, C = 0: Conflict = -(1 x 1 x 1 + 1 x 1 x 0 + 0 x -1 x 1) = -1

    \n

    The average of these two systems is

    \n

    A = 1/2, B = 1, C = 1/2: Conflict = -(.5 x 1 x 1 + 1 x 1 x .5 + .5 x -1 x .5) = -.75

    \n

    We can improve on this by setting A = 1:

    \n

    A = 1, B = 1, C = 1/2: Conflict = -(1 x 1 x 1 + 1 x 1 x .5 + .5 x -1 x 1) = -1 < -.75
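
    \n

    These four conflict values can be checked mechanically; the following self-contained sketch (same pair-counting convention as the earlier snippet) reproduces them:

```python
import numpy as np

W = np.array([[ 0.0, 1.0, -1.0],     # weights AB = 1, BC = 1, AC = -1
              [ 1.0, 0.0,  1.0],
              [-1.0, 1.0,  0.0]])

def conflict(v):
    v = np.asarray(v, dtype=float)
    p = np.outer(v, v) * W
    return -(p.sum() - np.trace(p)) / 2.0

print(conflict([0.0, 1.0, 1.0]))     # -1.0   (first locally optimal system)
print(conflict([1.0, 1.0, 0.0]))     # -1.0   (second locally optimal system)
print(conflict([0.5, 1.0, 0.5]))     # -0.75  (their average: more internal conflict)
print(conflict([1.0, 1.0, 0.5]))     # -1.0   (raising A to 1 improves on the average)
```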

    \n

    It would only be by random chance that the average of value systems would be locally optimal.  Averaging together existing values is thus practically guaranteed to give an unstable value system.

    \n

    Let me point out again that I defined my model first, and the first example of two locally-optimal value systems that I tried out, worked.

    \n

    You can escape these proofs by not being rational

    \n

    If we suppose that higher-than-wild-type IC is bad, under what circumstance is it still justified to choose the averaged agent rather than one of the original agents?  It would be justified if you give an extremely high penalty for choosing a system with high IC, and do not give a correspondingly high reward for choosing a system with a wild-type IC.  An example would be if you chose a value system so as to minimize the chance of having an IC greater than that given by averaging all value systems together.  (In this context, I would regard that particular goal as cheating, as it is constructed to give the averaged value system a perfect score.  It suffers zero-risk bias.)

    \n

    Such risk-avoidant goals would, I think, be more likely to be achieved by averaging (although I haven't done the math).  But they do not maximize expected utility.  They suffer risk-avoidance bias, by construction.

    \n

    ... or by doing very thorough factor analysis

    \n

    If, as I mentioned in Only humans can have human values, you can perform factor analysis and identify truly independent, uncorrelated latent \"values\", then the above arguments do not apply.  You must take into account multiple hypothesis testing; using mathematics that guaranteed finding such a result would not impress me.  If, for instance, you were to simply perform PCA and say that the resulting eigenvectors are your true latent values, I would respond that the first dozen eigenvectors might be meaningful, but the next thousand are overfitted to the data.  You might achieve a great simplification of the problem, and greatly reduce the difference between IC_a and IC_b; but would still have IC_a < IC_b.
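
    \n

    For what it is worth, here is the kind of sanity check I have in mind, sketched with scikit-learn's PCA on a made-up agents-by-values matrix; the 5 latent factors, the noise level, and the population size are all hypothetical:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Made-up data: 500 agents whose 50 surface values are driven by 5 latent factors plus noise.
latent = rng.normal(size=(500, 5))
loadings = rng.normal(size=(5, 50))
values = latent @ loadings + 0.5 * rng.normal(size=(500, 50))

pca = PCA().fit(values)
print(pca.explained_variance_ratio_[:10].round(3))
# The first handful of components carry most of the variance; the remaining eigenvectors
# mostly describe noise, and treating them as "true latent values" would be overfitting.
```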

    \n

    ADDED: Ensemble methods

    \n

    In machine learning, \"ensemble methods\" mean methods that combine (often by averaging together) the predictions of different classifiers.  It is a robust result that ensemble methods have better performance than any of the individual methods comprising them.  This seems to contradict the claim that an averaged value system would be worse than any of the individual value systems comprising it.

    \n

    I think there is a crucial difference, however: In ensemble methods, each of the different methods has exactly the same goals (they are trained by a process that agrees on what are good and bad decisions).  An ensemble method is isomorphic to asking a large number of people who have the same value system to vote on a course of action.

    " } }, { "_id": "unteH2nc9ivLspQf2", "title": "Who observes being you, and do they cheat in SIA?", "pageUrl": "https://www.lesswrong.com/posts/unteH2nc9ivLspQf2/who-observes-being-you-and-do-they-cheat-in-sia", "postedAt": "2010-04-29T00:58:55.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "unteH2nc9ivLspQf2", "html": "

    Warning: this post is somewhat technical – looking at this summary should help.

    \n

    1,000,000 people are in a giant urn. Each person is labeled with a number (number 1 through number 1,000,000).
    \nA coin will be flipped. If heads, Large World wins and 999,999 people will be randomly selected from the urn. If tails, Small World wins and 1 person will be drawn from the urn.
    \nAfter the coin flip, and after the sample is selected, we are told that person #X was selected (where X is an integer between 1 and 1,000,000).
    \nPrior probability of Large World: P(heads)=0.5
    \nPosterior probability of Large World: P(heads|person #X selected)=P(heads)=0.5
    \nRegardless of whether the coin landed heads or tails, we knew we would be told about some person being selected. So, the fact that we were told that someone was selected tells us nothing about which world we are in.

    \n

    Jason Roy argues that the self indication assumption (SIA) is equivalent to such reasoning, and thus wrong. For the self indication assumption to be legitimate it would have to be analogous to a selection procedure where you can only ever hear about person number 693465 for instance – if they don’t come up you hear nothing.
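
    \n

    A small simulation may make the contrast between the two selection procedures concrete; the urn size, the trial count, and the pre-specified person number below are arbitrary choices of mine, not part of Roy's setup:

```python
import random
random.seed(0)

URN, BIG, SMALL, TRIALS = 1000, 999, 1, 20_000

def p_large_world(pre_specified_person=None):
    """Estimate P(Large World | the report we heard) under one selection procedure.

    pre_specified_person=None : we are told, after the fact, about someone who was selected,
                                so some report always arrives.
    pre_specified_person=k    : we hear a report only if person k was selected (the procedure
                                Roy says SIA would have to be analogous to for its update to be valid).
    """
    heads_reports = reports = 0
    for _ in range(TRIALS):
        heads = random.random() < 0.5                       # heads = Large World
        selected = set(random.sample(range(URN), BIG if heads else SMALL))
        if pre_specified_person is None or pre_specified_person in selected:
            reports += 1
            heads_reports += heads
    return heads_reports / reports

print(p_large_world())                          # about 0.5: the report is uninformative
print(p_large_world(pre_specified_person=42))   # close to 1: a report about a fixed person
                                                # strongly favors the Large World
```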

    \n

    In both cases you can only hear about one person in some sense, the question is whether which person you could hear about was chosen before the experiment, or afterwards from those which came up. The self indication assumption looks at first like a case of the latter; nothing that can be called you existed before the experiment to have dibs on a particular physical arrangement if it came up, and you certainly didn’t start thinking about the self indication assumption until you were well chosen. These things are not really important though.

    \n

    Which selection procedure is analogous to using SIA seems to depend on what real life thing corresponds to ‘you’ in the thought experiment when ‘you’ are told about people being pulled out of the urn. If ‘you’ are a unique entity with exactly your physical characteristics, then if you didn’t exist, you wouldn’t have heard of someone else – someone else would have heard of someone else. Here SIA stands; my number was chosen before the experiment as far as I’m concerned, even if I wasn’t there to choose it.

    \n

    On the other hand ‘you’ can be thought of as an abstract observer who has the same identity regardless of characteristics. Then if a person with different characteristics existed instead of the person with your current ones, it’s just you observing a different first-person experience. Then it looks like you are taking a sample from those who exist, as in the second case, so it seems SIA fails.

    \n

    This isn’t a question of which of those things exists. They are both coherent enough concepts that could refer to real things. Should they both be participating in their own style of selection procedure then, and reasoning accordingly? Your physical self discovering with utmost shock that it exists while the abstract observer looks on non-plussedly? No – they are the same person with the same knowledge now, so they should really come to the same conclusion.

    \n

    Look more closely at the lot of the abstract observer. Which abstract observers get to exist if there are different numbers of people? If they can only be one person at once, then in a smaller world some observers who would have been around in the bigger world must miss out. Which means finding that you have the person with any number X should still make you update in favor of the big world, exactly as much as the entity defined by those physical characteristics should; abstract observers weren’t guaranteed to exist either.

    \n

    What if the abstract observer experiencing the selection procedure is defined to encompass all observerhood? There is just one observer, who always exists, and either observes lots of creatures or few, but in a disjointed manner such that it never knows if it observes more than the present one at a given time. If it finds itself observing anyone now it isn’t surprised to exist, nor to see the particular arbitrary collection of characteristics it sees – it was bound to see one or another. Now can we write off SIA?

    \n

    Here the creature is in a different situation to any of Roy’s original ones. It is going to be told about all the people who come up, not just one. It is also in the strange situation of forgetting all but one of them at a time. How should it reason in this new scenario? In ball urn terms, this is like pulling all of the balls out of whatever urn comes up, one by one, but destroying your memories after each one. Since the particular characteristics don’t tell you anything here, this is basically a version of the sleeping beauty problem. Debate has continued on that for a decade, so I shan’t try to answer Roy by solving it now. SIA gives the popular ‘thirder’ position though, so looking at the selection procedure in this perspective does not undermine SIA further.

    \n

    Whether you think of the selection procedure experienced by an exact set of physical characteristics, an abstract observer, or all observerhood as one, using SIA does not amount to being surprised after the fact by the unlikelihood of whatever number comes up.


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "mJB9LBDkKuqxqDhkM", "title": "What are our domains of expertise? A marketplace of insights and issues", "pageUrl": "https://www.lesswrong.com/posts/mJB9LBDkKuqxqDhkM/what-are-our-domains-of-expertise-a-marketplace-of-insights", "postedAt": "2010-04-28T22:17:12.713Z", "baseScore": 29, "voteCount": 24, "commentCount": 65, "url": null, "contents": { "documentId": "mJB9LBDkKuqxqDhkM", "html": "

    We have recently obtained evidence that a number of people, some with quite interesting backgrounds and areas of expertise, find LessWrong an interesting read but find limited opportunities to contribute.

    \n

    This post is an invitation to engage, in relative safety but just a little beyond saying \"Hi, I'm a lurker\". Even that little is appreciated, to be sure, and it's OK for anyone who feels the slightest bit intimidated to remain on the sidelines. However, I'm confident that most readers will find it quite easy to answer at least the first of the following questions:

    \n\n

    ...and possibly these follow-ups:

    \n\n

    \n

    Comments like the following, from the \"Attention Lurkers\" thread, suggest untapped resources:

    \n
    \n

    I'm a Maternal-Fetal Medicine specialist. [...] I lurk because I feel that I'm too philosophically fuzzy for some of the discussions here. I do learn a great deal. Anytime anyone wants to discuss prenatal diagnosis and the ethical implications, let me know.

    \n
    \n

    My own area of professional expertise is computer programming - perhaps one of the common sub-populations here. I'm also a parent, and have been a beneficiary of prenatal diagnosis (toxoplasmosis: turned out not to be a big deal, but it might have been). My curiosity is often engaged by what goes on \"behind the scenes\" of the professions I interact with as an ordinary citizen.

    \n

    Yes, I would be quite interested in striking up a conversation about applying the tools discussed here to prenatal diagnosis; or in a conversation about which conceptual tools that I don't know about yet turn out to be useful in dealing with the epistemic or ethical issues in prenatal diagnosis.

    \n

    Metaphorically, the intent of this post is to provide a marketplace. We already have the \"Where are we?\" thread, which makes it easier for LessWrongers close to each other to meet up if they want to. (\"Welcome to LessWrong\" is the place to collect biographical information, but it specifically emphasizes the \"rationalist\" side of people, rather than their professional knowledge.)

    \n

    In a similar spirit, please post a comment here offering (or requesting) domain-specific insights. My hunch is, we'll find that even those of us in professions that don't seem related to the topics covered here have more to contribute than they think; my hope is that this comment thread will be a valuable resource in the future.

    \n

    A secondary intent of this post is to provide newcomers and lurkers with one more place where contributing can be expected to be safe from karma penalties - simply answer one of the questions that probably comes up most often when meeting strangers: \"What do you do?\". :)

    \n

    (P.S. If you've read this far and are disappointed with the absence of any jokes about \"yet another fundamental question\", thank you for your attention, and please accept this apophasis as a consolation gift.)

    " } }, { "_id": "TNdfaoNq2nsXZckuz", "title": "Jinnetic Engineering, by Richard Stallman", "pageUrl": "https://www.lesswrong.com/posts/TNdfaoNq2nsXZckuz/jinnetic-engineering-by-richard-stallman", "postedAt": "2010-04-28T01:24:28.927Z", "baseScore": 3, "voteCount": 21, "commentCount": 10, "url": null, "contents": { "documentId": "TNdfaoNq2nsXZckuz", "html": "

    Thought the community might enjoy this:

    \n

    Jinnetic Engineering

    " } }, { "_id": "MX5epiZYJggkthcK6", "title": "Possibilities for converting useless fun into utility in Online Gaming", "pageUrl": "https://www.lesswrong.com/posts/MX5epiZYJggkthcK6/possibilities-for-converting-useless-fun-into-utility-in", "postedAt": "2010-04-27T22:21:56.251Z", "baseScore": 1, "voteCount": 21, "commentCount": 19, "url": null, "contents": { "documentId": "MX5epiZYJggkthcK6", "html": "

    Online gaming in immersive MMOs such as World of Warcraft or EVE Online is a common way of having fun. As technology progresses, MMO gaming will likely become ever more popular, until MMOs are fully immersive virtual realities, leading many to consider them as the primary venue of their lives, instead of \"the old(/real) world\" (without such thinking being pathological anymore).

    Currently, however, many people such as myself mostly find MMO gaming a threat to their productivity. MMOs can be very fun, druglike even, without providing any utility to valued real-world pursuits such as reducing existential risk and having money to buy food.

    The default recommendation regarding MMOs for most rationalists should probably be \"stay away from them -- or at least don't get into active gaming\". This is also my current attitude.

    Despite this, it may actually be worth considering whether some utility could be extracted from MMO gaming, specifically from the point of view of SIAI supporters such as myself. (From here on, I'll use the term \"SIAIfolk\" to refer to people interested in furthering SIAI's and allied organizations' mission.)

    It seems that the amount of SIAIfolk is undergoing strong growth, and that this may continue. At some point, which we may currently have passed or not, there may therefore (despite all recommendations) be a substantial number of SIAIfolk engaging in somewhat active MMO gaming.

    In such a circumstance, it may be beneficial to form a \"Singularitarian Gaming Group\", which along with functioning as a gaming clan in the various MMOs participated in, would include an internal reward and ranking system that would motivate people *away* from spending too much time on gaming, and encourage more productive activities. Some amount of MMO gaming would be done, with the company of other SIAIfolk making it more fun, but incentives and social support would be in place to keep gaming down to a rational level.

    It would be critical to build the incentive system well. A poorly built system would lead SIAIfolk to spend more resources on gaming and less effort on productive stuff than would have happened if \"Singularitarian Gaming Group\" didn't exist in the first place. I however believe that \"SGG\" can be set up in a meaningful way, at least if we already have SIAIfolk who are spending more time on MMO gaming than they find optimal.

    With this article, my main intention is to gauge whether such SIAIfolk already exist. If so, make a comment or email me. Let's then set up a mailing list for discussion of what kind of a \"SGG\" could be useful. (Opinions on what service to use to set up the mailing list are also welcome.)

    In addition to what's mentioned above, a Singularitarian Gaming Group might also provide utility by serving as an outreach tool towards the MMO gaming community. It could market Singularitarian activism and existential risk reduction as working towards the ultimate gaming world.

    It is also worth considering whether we should actually have a Less Wrong Gaming Group rather than SGG. And since I'm posting this here, an intention of course is to invite commentary on the rationality of all of the above thinking.

    (My intention is not to discuss the specifics of an incentive system too much here on Less Wrong, though I'll mention one non-obvious feature which is teaching MMO addicts to play profitable online poker and giving points for progress and achievements in that. I'm currently spending a lot of time on poker, and the thought of a Less Wrong Poker Group as a separate thing from this \"SGG\" has crossed my mind, but that's a topic for some other time.)

    " } }, { "_id": "dcZEgHPcmSuaY3kKg", "title": "MathOverflow as an example for LessWrong", "pageUrl": "https://www.lesswrong.com/posts/dcZEgHPcmSuaY3kKg/mathoverflow-as-an-example-for-lesswrong", "postedAt": "2010-04-27T18:30:53.857Z", "baseScore": 48, "voteCount": 41, "commentCount": 67, "url": null, "contents": { "documentId": "dcZEgHPcmSuaY3kKg", "html": "\n

    \"How can LessWrong maintain high post quality while obtaining new posters?  How can we encourage everyone to read everything, but not everyone to post everything?  How can we be less intimidating to newcomers?\"

    \n

    A lot of Meta conversation goes on here, and the longer it goes on without having a great example to learn from, the longer our discussion will be more aimless and less informed than it could be.  Consider speculating whether blue mould from bread could treat suppurating eye infections before you knew it also treated suppurating flesh wounds...  it would seem pretty random, and the discussion would be fairly aimless. 

    \n

    But LessWrong.com is the first successful community of its kind! There is no example to learn from, right?

    \n

    With the latter, I wouldn't agree: http://mathoverflow.net

    \n

    [What I've already said in comments:  MathOverflow is a Q&A forum for research-level mathematicians, aimed at each other, created by a math grad student and a post-doc in September 2009.  As hoped, it expanded very quickly, involving many famous mathematicians around the world.  You can even see that Fields Medalist — the math equivalent of a Nobel Laureate — Terence Tao is a regular contributor (bottom right).]

    \n

    MathOverflow awards Karma for good questions and good answers, it's moderated, it's open to new users, and maintains a high standard so professionals stay interested and involved.  Sound familiar?  Well, what about these features...

    \n

    The top of every page links to:

    \n\n

    Have a look at those links.  If your first reaction is \"Sure, precise guidelines worked for a professional mathematics Q&A site...\", consider this:  they didn't start out as a professional mathematics Q&A site.  They started out wanting to be one.  They had to defend against wave after wave of undergraduate calculus students posting for homework help.  They had to defy the natural propensity of the community to become an open discussion forum for mathematicians.  I watched as these problems arose, were dealt with, and subsided.  For example: 

    \n\n

    There's no coincidence.  Those guys didn't just win...  they won on purpose, with a purpose. 

    \n

    And LessWrong has a brilliant purpose:  \"Refining the art of human rationality\".  Our task of maintaining high post standards while expanding both the posting and reading community has recently been discussed at length.

    \n

    Now, I know none of us wants to sit around armchair-mind-projecting to figure out what LessWrong should do.  I understand that people here really are thinking responsibly and extrapolating from their own experiences about meta issues.  I just want to give all you folks an intuition boost, so we can handle this massive social computation together:

    \n

    What should we try that MathOverflow has already done?  What should we try differently?

    " } }, { "_id": "tDANeCewaBjArnzu6", "title": "Anthropic summary", "pageUrl": "https://www.lesswrong.com/posts/tDANeCewaBjArnzu6/anthropic-summary", "postedAt": "2010-04-27T17:35:10.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "tDANeCewaBjArnzu6", "html": "

    I mean to write about anthropic reasoning more in future, so I offer you a quick introduction to a couple of anthropic reasoning principles. There’s also a link to it in ‘pages’ in the side bar. I’ll update it later – there are arguments I haven’t written up yet, plus I’m in the middle of reading the literature, so hope to come across more good ones there.


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "4RthYwrowiswpksGT", "title": "What is missing from rationality? ", "pageUrl": "https://www.lesswrong.com/posts/4RthYwrowiswpksGT/what-is-missing-from-rationality", "postedAt": "2010-04-27T12:32:06.806Z", "baseScore": 27, "voteCount": 24, "commentCount": 274, "url": null, "contents": { "documentId": "4RthYwrowiswpksGT", "html": "

    \"In a sufficiently mad world, being sane is actually a disadvantage\"

    \n

    – Nick Bostrom

    \n

    Followup to: What is rationality?

    \n

    A canon of work on \"rationality\" has built up on Less Wrong; in What is rationality?, I listed most of the topics and paradigms that have been used extensively on Less Wrong, including: simple calculation and logic1, probability theory, cognitive biases, the theory of evolution, analytic philosophical thinking, microeconomics. I defined \"Rationality\" to be the ability to do well on hard decision problems, often abbreviated to \"winning\" - choosing actions that cause you to do very well. 

    \n

    However, I think that the rationality canon here on Less Wrong is not very good at causing the people who read it to actually do well at most of life's challenges. This is therefore a criticism of the LW canon.

    \n

    If the standard to judge methods by is whether they give you the ability to do well on a wide range of hard real-life decision problems, with a wide range of terminal values being optimized for, then Less-Wrong-style rationality fails, because the people who read it seem to mostly only succeed at the goal that most others in society would label as \"being a nerd\".2 We don't seem to have a broad range of people pursuing and winning at a broad range of goals (though there are a few exceptional people here).

    \n

    Although the equations of probability theory and expected utility do not state that you have to be a \"Spock rationalist\" to use them, in reality I see more Spock than Kirk. I myself am not exempt from this critique.

    \n

    What, then, is missing?

    \n

    \n

    The problem, I think, is that the original motivation for Less Wrong was the bad planning decisions that society as a whole takes3.  When society acts, it tends to benefit most when it acts in what I would call the Planning model of winning, where reward is a function of the accuracy of beliefs and the efficacy of explicitly reasoned plans.

    \n

    But individuals within a society do not get their rewards solely based upon the quality of their plans: we are systematically rewarded and punished by the environment around us by:

    \n\n\n\n\n

    The Less Wrong canon therefore pushes people who read it to concentrate on mostly the wrong kind of thought processes. The \"planning model\" of winning is useful for thinking about what people call analytical skill, which is in turn useful for solitary challenges that involve a detailed mechanistic environment that you can manipulate. Games like Alpha Centauri and Civilization come to mind, as do computer programming, mathematics, science and some business problems.

    \n

    Most of the goals that most people hold in life cannot be solved by this kind of analytic planning alone, but the ones that can (such as how to code, do math or physics) are heavily overrepresented on LW. The causality probably runs both ways: people whose main skills are analytic are attracted to LW because the existing discussion on LW is very focused on \"nerdy\" topics, and the kinds of posts that get written tend to focus on problems that fall into the planning model because that's what the posters like thinking about.

    \n

     

    \n

     

    \n
    \n

    1: simple calculation and logic is not usually mentioned on LW, probably because most people here are sufficiently well educated that these skills are almost completely automatic for them. In effect, it is a solved problem for the LW community. But out in the wider world, the sanity waterline is much lower. Most people cannot avoid simple logical errors such as affirming the consequent, and cannot solve simple Fermi Problems.

    \n

    2: I am not trying to cast judgment on the goal of being an intellectually focused, not-conventionally-socializing person: if that is what a person wants, then from their axiological point of view it is the best thing in the world.

    \n

    3: Not paying any attention to futurist topics like cryonics or AI which matter a lot, making dumb decisions about how to allocate charity money, making relatively dumb decisions in matters of how to efficiently allocate resources to make the distribution of human experiences better overall.

    " } }, { "_id": "K7HP5QcRJLYBAZLaL", "title": "Attention Less Wrong: We need an FAQ", "pageUrl": "https://www.lesswrong.com/posts/K7HP5QcRJLYBAZLaL/attention-less-wrong-we-need-an-faq", "postedAt": "2010-04-27T10:06:48.818Z", "baseScore": 15, "voteCount": 14, "commentCount": 111, "url": null, "contents": { "documentId": "K7HP5QcRJLYBAZLaL", "html": "

    Less Wrong is extremely intimidating to newcomers and as pointed out by Academian something that would help is a document in FAQ form intended for newcomers. Later we can decide how to best deliver that document to new Less Wrongers, but for now we can edit the existing (narrow) FAQ to make the site less scary and the standards more evident.

    \n

    Go ahead and make bold edits to the FAQ wiki page or use this post to discuss possible FAQs and answers in agonizing detail.

    " } }, { "_id": "pa7PncRvcDcjw3vSR", "title": "Proposed New Features for Less Wrong", "pageUrl": "https://www.lesswrong.com/posts/pa7PncRvcDcjw3vSR/proposed-new-features-for-less-wrong", "postedAt": "2010-04-27T01:10:39.138Z", "baseScore": 11, "voteCount": 11, "commentCount": 174, "url": null, "contents": { "documentId": "pa7PncRvcDcjw3vSR", "html": "

    Followup to: Announcing the Less Wrong Sub-Reddit

    \n

    After the recent discussion about the Less Wrong sub-reddit, Less Wrong site designer Matthew Fallshaw and I have been discussing possible site improvements, and ways to implement them. As far as I can tell, the general community consensus in the previous post was that a discussion section to replace the Open Thread would be a good idea, due to the many problems with Open Thread, but that it would be problematic to host it off-site. For this reason, our current proposal involves modifying the main site to include a separate \"Discussion\" section in the navigation bar (next to \"Wiki | Sequences | About\"). What are now Open Thread comments would be hosted in the Discussion section, in a more user-friendly and appropriate format (similar to Reddit's or a BBS forum's). If my impression was mistaken, please do say so. (If you think that this is a great idea, please do say so as well, to avoid Why Our Kind Can't Cooperate.)

    \n

We have also identified another potential problem with the site: the high quality standard, heavy use of neologisms, and karma penalties for being wrong might be intimidating to newcomers. To help alleviate this, after much discussion, we have come up with two different proposals, listed below alongside the status-quo option. (To avoid bias, I'm not going to say which one is mine and which one is Matthew's.)

    \n

    - Proposal 1: Posts submitted to Less Wrong can be tagged with a \"karma coward\" option. Such posts can still be voted on, but votes on them will have no effect on a user's karma total. There will be a Profile option to hide \"karma coward\" posts from view.

    \n

    - Proposal 2: A grace period for new users. Votes on comments from new users will have no effect on that user's karma total for a certain period of time, like two weeks or a month.

    \n

    - Proposal 3: Do nothing; the site remains as-is.

    \n

    To see what the community consensus is, I have set up a poll here: http://www.misterpoll.com/polls/482996. Comments on our proposals, and alternative proposals, are more than welcome. (To avoid clogging the comments, please do not simply declare your vote without explaining why you voted that way.)

    \n

EDIT: Posts and comments in the discussion section would count towards a user's karma total (notwithstanding the implementation of proposal 1 and proposal 2), although posts would only earn a user 1 karma per upvote instead of 10.
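
For concreteness, here is one way the proposed accounting could fit together, written as a hedged sketch rather than a spec; the function and field names are hypothetical, and the sketch treats the proposals as if both were adopted, purely to show the mechanics.

```python
# A sketch of the proposed karma accounting; names and structure are hypothetical.
from datetime import timedelta

GRACE_PERIOD = timedelta(weeks=2)   # Proposal 2: one of the durations floated

def karma_delta(votes, *, is_post, in_discussion, karma_coward, user_joined, now):
    if karma_coward:                           # Proposal 1: votes don't touch karma
        return 0
    if now - user_joined < GRACE_PERIOD:       # Proposal 2: new-user grace period
        return 0
    if is_post:
        per_vote = 1 if in_discussion else 10  # EDIT: discussion posts earn 1, not 10
    else:
        per_vote = 1
    return votes * per_vote
```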

    \n

    EDIT 2: To avoid contamination by other people's ideas, please vote before you look at the comments.

    " } }, { "_id": "cAPCCJjggjZPxxcKh", "title": "Only humans can have human values", "pageUrl": "https://www.lesswrong.com/posts/cAPCCJjggjZPxxcKh/only-humans-can-have-human-values", "postedAt": "2010-04-26T18:57:38.942Z", "baseScore": 49, "voteCount": 50, "commentCount": 161, "url": null, "contents": { "documentId": "cAPCCJjggjZPxxcKh", "html": "

    Ethics is not geometry

    \n

    Western philosophy began at about the same time as Western geometry; and if you read Plato you'll see that he, and many philosophers after him, took geometry as a model for philosophy.

    \n

    In geometry, you operate on timeless propositions with mathematical operators.  All the content is in the propositions.  A proof is equally valid regardless of the sequence of operators used to arrive at it.  An algorithm that fails to find a proof when one exists is a poor algorithm.

    \n

    The naive way philosophers usually map ethics onto mathematics is to suppose that a human mind contains knowledge (the propositional content), and that we think about that knowledge using operators.  The operators themselves are not seen as the concern of philosophy.  For instance, when studying values (I also use \"preferences\" here, as a synonym differing only in connotation), people suppose that a person's values are static propositions.  The algorithms used to satisfy those values aren't themselves considered part of those values.  The algorithms are considered to be only ways of manipulating the propositions; and are \"correct\" if they produce correct proofs, and \"incorrect\" if they don't.

    \n

    But an agent's propositions aren't intelligent.  An intelligent agent is a system, whose learned and inborn circuits produce intelligent behavior in a given environment.  An analysis of propositions is not an analysis of an agent.

    \n

    I will argue that:

    \n
      \n
    1. The only preferences that can be unambiguously determined are the preferences people implement, which are not always the preferences expressed by their beliefs.
    2. If you extract a set of propositions from an existing agent, then build a new agent to use those propositions in a different environment, with an \"improved\" logic, you can't claim that it has the same values.
    3. Values exist in a network of other values.  A key ethical question is to what degree values are referential (meaning they can be tested against something outside that network); or non-referential (and hence relative).
    4. Supposing that values are referential helps only by telling you to ignore human values.
    5. You cannot resolve the problem by combining information from different behaviors, because the needed information is missing.
    6. Today's ethical disagreements are largely the result of attempting to extrapolate ancestral human values into a changing world.
    7. The future will thus be ethically contentious even if we accurately characterize and agree on present human values.
    \n

    Instincts, algorithms, preferences, and beliefs are artificial categories

    \n

    There is no principled distinction between algorithms and propositions in any existing brain.  This means that there's no clear way to partition an organism's knowledge into \"propositions\" (including \"preferences\" and \"beliefs\"), and \"algorithms.\"  Hence, you can't expect all of an agent's \"preferences\" to end up inside the part of the agent that you choose to call \"propositions\".  Nor can you reliably distinguish \"beliefs\" from \"preferences\".

    \n

    Suppose that a moth's brain is wired to direct its flight by holding the angle to the moon constant.  (This is controversial, but the competing hypotheses would give similar talking points.)  If so, is this a belief about the moon, a preference towards the moon, or an instinctive motor program?  When it circles around a lamp, does it believe that lamp is the moon?

    \n

    When a child pulls its hand away from something hot, does it value not burning itself and believe that hot things burn, or place a value on not touching hot things, or just have an evolved motor program that responds to hot things?  Does your answer change if you learn that the hand was directed to pull back by spinal reflexes, without involving the cortex?

    \n

    Monkeys can learn to fear snakes more easily than they can learn to fear flowers (Cook & Mineka 1989).  Do monkeys, and perhaps humans, have an \"instinctive preference\" against snakes?  Is it an instinct, a preference (snake = negative utility), or a learned behavior (lab monkeys are not afraid of snakes)?

    \n

    Can we map the preference-belief distinction onto the distinction between instinct and learned behavior?  That is, are all instincts preferences, and all preferences instincts?  There are things we call instincts, like spinal reflexes, that I don't think can count as preferences.  And there are preferences, such as the relative values I place on the music of Bach and Berg, that are not instincts.  (In fact, these are the preferences we care about.  The purpose of Friendly AI is not to retain the fist-clenching instinct for future generations.)

    \n

    Bias, heuristic, or preference?

    \n

    A \"bias\" is a reasoning procedure that produces an outcome that does not agree with some logic.  But the object in nature is not to conform to logic; it is to produce advantageous behavior.

    \n

    Suppose you interview Fred about his preferences.  Then you write a utility function for Fred.  You experiment, putting Fred in different situations and observing how he responds.  You observe that Fred acts in ways that fail to optimize the utility function you wrote down, in a consistently-biased way.

    \n

    Is Fred displaying bias?  Or does the Fred-system, including both his beliefs and the bias imposed by his reasoning processes, implement a preference that is not captured in his beliefs alone?

    \n

    Allegedly true story, from a Teaching Company audio lecture (I forget which one):  A psychology professor was teaching a class about conditioned behavior.  He also had the habit of pacing back and forth in front of the class.

    \n

    The class decided to test his claims by leaning forward and looking interested when the professor moved toward the left side of the room, but acting bored when he moved toward the right side.  By the end of the semester, they had trained him to give his entire lecture from the front left corner.  When they asked him why he always stood there, he was surprised by the question - he wasn't even aware he had changed his habit.

    \n

If you inspected the professor's beliefs, and then studied his actions, you would conclude he was acting irrationally.  But he wasn't.  He was acting rationally, just not thinking rationally.  His brain didn't detect the pattern in the class's behavior and deposit a proposition into his conscious mind.  It encoded the proper behavior, if not straight into his pre-motor cortex, at least not into any conscious beliefs.

    \n

    Did he have a bias towards the left side of the room?  Or a preference for seeing students pay attention?  Or a preference that became a bias when the next semester began and he kept doing it?

    \n

    Take your pick - there's no right answer.

    \n

    If a heuristic gives answers consistently biased in one direction across a wide range of domains, we can call it a bias.  Most biases found in the literature appear to be wide-ranging and value-neutral.  But the literature on biases is itself biased (deliberately) towards discussing that type of bias.   If we're trawling all of human behavior for values, we may run across many instances where we can't say whether a heuristic is a bias or a preference.

    \n

As one example, I would say that the extraordinarity bias is in fact a preference.  Or consider the happiness paradox:  People who become paralyzed become extremely depressed only temporarily; people who win the lottery become very happy only temporarily.  (Google 'happiness \"set-point\"'.)  I've previously argued on LessWrong that this is not a bias, but a heuristic to achieve our preferences.  Happiness is proportional not to our present level of utility, but to the rate of change in our utility.  Trying to maximize happiness (the rate of increase of utility) in the near term maximizes total utility over lifespan better than consciously attempting to maximize near-term utility would.  This is because maximizing the rate of increase in utility over a short time period, instead of total utility over that time period, prefers behavior that has a small area under the utility curve during that time but ends with a higher utility than it started with, over behavior with a large area under the utility curve that ends with a lower utility than it started with.  This interpretation of happiness would mean that impact bias is not a bias at all, but a heuristic that compensates for this in order to maximize utility rather than happiness when we reason over longer time periods.
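
A toy numerical illustration of the area-versus-endpoint distinction drawn above; the two trajectories are invented purely for illustration.

```python
# Two hypothetical utility trajectories over the same stretch of time.
# A: low utility throughout, but ends higher than it started.
# B: high utility throughout, but ends lower than it started.
a = [2, 2, 3, 3, 4, 5]
b = [8, 8, 7, 7, 6, 5]

def area(traj):          # total utility over the period
    return sum(traj)

def net_change(traj):    # what a "rate of increase" (happiness) criterion tracks
    return traj[-1] - traj[0]

print(area(a), net_change(a))  # -> 19  3: less area, but utility ends higher
print(area(b), net_change(b))  # -> 41 -3: more area, but utility ends lower
```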

    \n

    Environmental factors: Are they a preference or a bias?

    \n

Evolution does not distinguish between satisfying preconditions for behavior by putting knowledge into a brain and satisfying them by using the statistics of the environment.  This means that the environment, which is not even present in the geometric model of ethics, is also part of your values.

    \n

    When the aforementioned moth circles around a lamp, is it erroneously acting on a bias, or expressing moth preferences?

    \n

    Humans like having sex.  The teleological purpose of this preference is to cause them to have children.  Yet we don't say that they are in error if they use birth control.  This suggests that we consider our true preferences to be the organismal ones that trigger positive qualia, not the underlying evolutionary preferences.

    \n

    Strict monogamy causes organisms that live in family units to evolve to act more altruistically, because their siblings are as related to them as their children are (West & Gardner 2010).  Suppose that people from cultures with a long history of nuclear families and strict monogamy act, on average, more altruistically than people from other cultures; and you put people from both cultures together in a new environment with neither monogamy nor nuclear families.  We would probably rather say that the people from these different cultures have different values; not that they both have the same preference to \"help their genes\", but that the people from the monogamous culture have an evolved bias that causes them to erroneously treat strangers nicely in this new environment.  Again, we prefer the organismal preference.

    \n

    However, if we follow this principle consistently, it prevents us from ever trying to improve ourselves, since it in effect defines our present selves as optimal:

    \n\n

    So the \"organismal vs. evolutionary\" distinction doesn't help us choose what's a preference and what's a bias.  Without any way of doing that, it is in principle impossible to create a category of \"preferences\" distinct from \"preferred outcomes\".  A \"value\" consists of declarative knowledge, algorithms, and environment, taken together.  Change any of those, and it's not the same value anymore.

    \n

    This means that extrapolating human values into a different environment gives an error message.

    \n

    A ray of hope? ...

    \n

    I just made a point by presenting cases in which most people have intuitions about which outcome is correct, and showing that these intuitions don't follow a consistent rule.

    \n

    So why do we have the intuitions?

    \n

    If we have consistent intuitions, they must follow some rule.  We just don't know what it is yet.  Right?

    \n

    ... No.

    \n

    We don't have consistent intuitions.

    \n

    Any one of us has consistent intuitions; and those of us living in Western nations in the 21st century have a lot of intuitions in common.  We can predict how most of these intuitions will fall out using some dominant cultural values.  The examples involving monogamy and violent males rely on the present relatively high weight on the preference to reduce violent conflict.  But this is a context-dependent value!  <just-so story>It arises from living in a time and a place where technology makes interactions between tribes more frequent and more beneficial, and conflict more costly</just-so story>.  But looking back in history, we see many people who would disagree with it:

    \n\n

    The idea that violence (and sexism, racism, and slavery) is bad is a minority opinion in human cultures over history.  Nobody likes being hit over the head with a stick by a stranger; but in pre-Christian Europe, it was the person who failed to prevent being struck, not the person doing the striking, whose virtue was criticized.

    \n

    Konrad Lorenz believed that the more deadly an animal is, the more emotional attachment to its peers its species evolves, via group selection (Lorenz 1966).  The past thousand years of history has been a steady process of humans building sharper claws, and choosing values that reduce their use, keeping net violence roughly constant.  As weapons improve, cultural norms that promote conflict must go.  First, the intellectuals (who were Christian theologians at the time) neutered masculinity; in the Enlightenment, they attacked religion; and in the 20th century, art.  The ancients would probably find today's peaceful, offense-forgiving males as nauseating as I would find a future where the man on the street embraces postmodern art and literature.

    \n

This gradual sacrificing of values in order to attain more and more tolerance and empathy is the most noticeable change in human values in all of history.  This means it is the least-constant of human values.  Yet we think of an infinite preference for non-violence and altruism as a foundational value!  Our intuitions about our values are thus as mistaken as it is possible for them to be.

    \n

    (The logic goes like this:  Humans are learning more, and their beliefs are growing closer to the truth.  Humans are becoming more tolerant and cooperative.  Therefore, tolerant and cooperative values are closer to the truth.  Oops!  If you believe in moral truth, then you shouldn't be searching for human values in the first place!)

    \n

Catholics don't agree that having sex with a condom is good.  They have an elaborate system of belief built on the idea that teleology expresses God's will, and so underlying purpose (what I call evolutionary preference) always trumps organismal preference.

    \n

    And I cheated in the question on monogamy.  Of course you said that being more altruistic wasn't an error.  Everyone always says they're in favor of more altruism.  It's like asking whether someone would like lower taxes.  But the hypothesis was that people from non-monogamous or non-family-based cultures do in fact show lower levels of altruism.  By hypothesis, then, they would be comfortable with their own levels of altruism, and might feel that higher levels are a bias.

    \n

Preferences are complicated and numerous, and arise in an evolutionary process that does not guarantee consistency.  Having conflicting preferences makes action difficult.  Energy minimization, a general principle that may underlie much of our learning, simply means reducing conflicts in a network.  The most basic operations of our neurons thus probably act to reduce conflicts between preferences.

    \n

    But there are no \"true, foundational\" preferences from which to start.  There's just a big network of them that can be pushed into any one of many stable configurations, depending on the current environment.  There's the Catholic configuration, and the Nazi configuration, and the modern educated tolerant cosmopolitan configuration.  If you're already in one of those configurations, it seems obvious what the right conclusion is for any particular value question; and this gives the illusion that we have some underlying principle by which we can properly choose what is a value and what is a bias.  But it's just circular reasoning.

    \n

    What about qualia?

    \n

    But everyone agrees that pleasure is good, and pain is bad, right?

    \n

    Not entirely - I could point to, say, medieval Europe, when many people believed that causing yourself needless pain was virtuous.  But, by and large yes.

    \n

    And beside the point (although see below).  Because when we talk about values, the eventual applications we have in mind are never about qualia.  Nobody has heated arguments about whose qualia are better.  Nobody even really cares about qualia.  Nobody is going to dedicate their life to building Friendly AI in order to ensure that beings a million years from now still dislike castor oil and enjoy chocolate.

    \n

    We may be arguing about preserving a tendency to commit certain acts that give us a warm qualic glow, like helping a bird with a broken wing.  But I don't believe there's a dedicated small-animal-empathy quale.  More likely there's a hundred inferential steps linking an action, through our knowledge and thinking processes, to a general-purpose warm-glow quale.

    \n

    Value is a network concept

    \n

    Abstracting human behavior into \"human values\" is an ill-posed problem.  It's an attempt to divine a simple description of our preferences, outside the context of our environment and our decision process.  But we have no consistent way of deciding what are the preferences, and what is the context.  We have the illusion that we can, because our intuitions give us answers to questions about preferences - but they use our contextually-situated preferences to do so.  That's circular reasoning.

    \n

    The problem in trying to root out foundational values for a person is the same as in trying to root out objective values for the universe, or trying to choose the \"correct\" axioms for a geometry.  You can pick a set that is self-consistent; but you can't label your choice \"the truth\".

    \n

    These are all network concepts, where we try to isolate things that exist only within a complex homogeneous network.  Our mental models of complex networks follow mathematics, in which you choose a set of axioms as foundational; or social structures, in which you can identify a set of people as the prime movers.  But these conceptions do not even model math or social structures correctly.  Axioms are chosen for convenience, but a logic is an entire network of self-consistent statements, many different subsets of which could have been chosen as axioms.  Social power does not originate with the rulers, or we would still have kings.

    \n

    There is a very similar class of problems, including symbol grounding (trying to root out the nodes that are the sources of meaning in a semantic network), and philosophy of science (trying to determine how or whether the scientific process of choosing a set of beliefs given a set of experimental data converges on external truth as you gather more data).  The crucial difference is that we have strong reasons for believing that these networks refer to an external domain, and their statements can be tested against the results from independent access to that domain.  I call these referential network concepts.  One system of referential network concepts can be more right than another; one system of non-referential network concepts can only be more self-consistent than another.

    \n

    Referential network concepts cannot be given 0/1 truth-values at a finer granularity than the level at which a network concept refers to something in the extensional (referred-to) domain.  For example, (Quine 1968) argues that a natural-language statement cannot be unambiguously parsed beyond the granularity of the behavior associated with it.  This is isomorphic to my claim above that a value/preference can't be parsed beyond the granularity of the behavior of an agent acting in an environment.

    \n

    Thomas Kuhn gained notoriety by arguing (Kuhn 1962) that there is no such thing as scientific progress, but only transitions between different stable states of belief; and that modern science is only different from ancient science, not better.  (He denies this in the postscript to the 1969 edition, but it is the logical implication of both his arguments and the context he presents them in.)  In other words, he claims science is a non-referential network concept.  An interpretation in line with Quine would instead say that science is referential at the level of the experiment, and that ambiguities may remain in how we define the fine-grained concepts used to predict the outcomes of experiments.

    \n

    Determining whether a network concept domain is referential or non-referential is tricky.  The distinction was not even noticed until the 19th century.  Until then, everyone who had ever studied geometry, so far as I know, believed there was one \"correct\" geometry, with Euclid's 5 postulates as axioms.  But in the early 19th century, several mathematicians proved that you could build three different, consistent geometries depending on what you put in the place of Euclid's fifth postulate.  The universe we live in most likely conforms to only one of these (making geometry referential in a physics class); but the others are equally valid mathematically (making geometry non-referential in a math class).

    \n

    Is value referential, or non-referential?

    \n

    There are two ways of interpreting this question, depending on whether one means \"human values\" or \"absolute values\".

    \n

    Judgements of value expressed in human language are referential; they refer to human behavior.  So human values are referential.  You can decide whether claims about a particular human's values are true or false, as long as you don't extend those claims outside the context of that human's decision process and environment.  This claim is isomorphic to Quine's claim about meaning in human language.

    \n

Asking about absolute values is isomorphic to applying the symbol-grounding problem to consciousness.  Consciousness exists internally, and is finer-grained than human behaviors.  Providing a symbol-grounding method that satisfied Quine's requirements would not provide any meanings accessible to consciousness.  Stevan Harnad (1990) described how symbols might be grounded for consciousness in sense perceptions and statistical regularities of those perceptions.

    \n

    (This brings up an important point, which I will address later:  You may be able to assign referential network concepts probabilistic or else fuzzy truth values at a finer level of granularity than the level of correspondence.  A preview: This doesn't get you out of the difficulty, because the ambiguous cases don't have mutual information with which they could help resolve each other.)

    \n

    Can an analogous way be found to ground absolute values?  Yes and no.  You can choose axioms that are hard to argue with, like \"existence is better than non-existence\", \"pleasure is better than pain\", or \"complexity is better than simplicity\".  (I find \"existence is better than non-existence\" pretty hard to argue with; but Buddhists disagree.)  If you can interpret them in an unambiguous way, and define a utility calculus enabling you to make numeric comparisons, you may be able to make \"absolute\" comparisons between value systems relative to your axioms.

    \n

    You would also need to make some choices we've talked about here before, such as \"use summed utility\" or \"use average utility\".  And you would need to make many possibly-arbitrary interpretation assumptions such as what pleasure is, what complexity is, or what counts as an agent.  The gray area between absolute and relative values is in how self-evident all these axioms, decisions, and assumptions are.  But any results at all - even if they provide guidance only in decisions such as \"destroy / don't destroy the universe\" - would mean we could claim there is a way for values to be referential at a finer granularity than that of an agent's behavior.  And things that seem arbitrary to us today may turn out not to be; for example, I've argued here that average utilitarianism can be derived from the von Neumann-Morgenstern theorem on utility.
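
The summed-versus-average choice mentioned above is easy to make concrete; a toy comparison with invented utilities:

```python
# Two hypothetical populations of experiences (utilities are invented numbers).
small_happy = [9, 9, 9]            # few agents, each very well off
large_modest = [4] * 10            # many agents, each moderately well off

for pop in (small_happy, large_modest):
    print(sum(pop), sum(pop) / len(pop))
# Summed utility prefers the large population (40 > 27);
# average utility prefers the small one (9 > 4).
```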

    \n

    ... It doesn't matter WRT friendly AI and coherent extrapolated volition.

    \n

Even supposing there is a useful, correct, absolute lattice on value systems and/or values, it doesn't forward the project of trying to instill human values in artificial intelligences.  There are two possible cases:

    \n
      \n
    1. There are no absolute values.  Then we revert to judgements of human values, which, as argued above, have no unambiguous interpretation outside of a human context.
    2. There are absolute values.  In which case, we should use them, not human values, whenever we can discern them.
    \n

    Fuzzy values and fancy math don't help

    \n

    So far, I've looked at cases of ambiguous values only one behavior at a time.  I mentioned above that you can assign probabilities to different value interpretations of a behavior.  Can we take a network of many probabilistic interpretations, and use energy minimization or some other mathematics to refine the probabilities?

    \n

    No; because for the ambiguities of interest, we have no access to any of the mutual information between how to resolve two different ambiguities.  The ambiguity is in whether the hypothesized \"true value\" would agree or disagree with the results given by the initial propositional system plus a different decision process and/or environment.  In every case, this information is missing.  No clever math can provide this information from our existing data, no matter how many different cases we combine.

    \n

    Nor should we hope to find correlations between \"true values\" that will help us refine our estimates for one value given a different unambiguous value. The search for values is isomorphic to the search for personality primitives.  The approach practiced by psychologists is to use factor analysis to take thousands of answers to questions that are meant to test personality phenotype, and mathematically reduce these to discover a few underlying (\"latent\") independent personality variables, most famously in the Big 5 personality scale (reviewed in Goldberg 1993).  In other words:  The true personality traits, and by analogy the true values a person holds, are by definition independent of each other.
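
As a sketch of the kind of reduction described above, here is what factor analysis looks like mechanically, using scikit-learn on made-up questionnaire data; the data, the number of factors, and the noise level are all assumptions chosen for illustration.

```python
# Reduce many questionnaire answers to a few latent factors (illustrative only).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_people, n_questions, n_latent = 500, 40, 5

# Fake data: answers generated from 5 hidden traits plus noise.
traits = rng.normal(size=(n_people, n_latent))
loadings = rng.normal(size=(n_latent, n_questions))
answers = traits @ loadings + rng.normal(scale=0.5, size=(n_people, n_questions))

fa = FactorAnalysis(n_components=n_latent)
scores = fa.fit_transform(answers)   # each person's position on the latent factors
print(scores.shape)                  # (500, 5)
```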

    \n

    We expect, nonetheless, to find correlations between the component of these different values that resides in decision processes.  This is because it is efficient to re-use decision processes as often as possible.  Evolution should favor partitioning values between propositions, algorithms, and environment in a way that minimizes the number of algorithms needed.  These correlations will not help us, because they have to do only with how a value is implemented within an organism, and say nothing about how the value would be extended into a different organism or environment.

    \n

    In fact, I propose that the different value systems popular among humans, and the resulting ethical arguments, are largely different ways of partitioning values between propositions, algorithms, and environment, that each result in a relatively simple set of algorithms, and each in fact give the same results in most situations that our ancestors would have encountered.  It is the attempt to extrapolate human values into the new, manmade environment that causes ethical disagreements.  This means that our present ethical arguments are largely the result of cultural change over the past few thousand years; and that the next few hundred years of change will provide ample grounds for additional arguments even if we resolve today's disagreements.

    \n

    Summary

    \n

    Philosophically-difficult domains often involve network concepts, where each component depends on other components, and the dependency graph has cycles.  The simplest models of network concepts suppose that there are some original, primary nodes in the network that everything depends on.

    \n

    We have learned to stop applying these models to geometry and supposing there is one true set of axioms.  We have learned to stop applying these models to biology, and accept that life evolved, rather than that reality is divided into Creators (the primary nodes) and Creatures.  We are learning to stop applying them to morals, and accept that morality depends on context and biology, rather than being something you can extract from its context.  We should also learn to stop applying them to the preferences directing the actions of intelligent agents.

    \n

    Attempting to identify values is a network problem, and you cannot identify the \"true\" values of a species, or of a person, as they would exist outside of their current brain and environment.  The only consistent result you can arrive at by trying to produce something that implements human values, is to produce more humans.

    \n

    This means that attempting to instill human values into an AI is an ill-posed problem that has no complete solution.  The only escape from this conclusion is to turn to absolute values - in which case you shouldn't be using human values in the first place.

    \n

    This doesn't mean that we have no information about how human values can be extrapolated beyond humans.  It means that the more different an agent and an environment are from the human case, the greater the number of different value systems there are that are consistent with human values.  However, it appears to me, from the examples and the reasoning given here, that the components of values that we can resolve are those that are evolutionarily stable (and seldom distinctly human); while the contentious component of values that people argue about are their extensions into novel situations, which are undefined.  From that I infer that, even if we pin down present-day human values precisely, the ambiguity inherent in extrapolating them into novel environments and new cognitive architectures will make the near future as contentious as the present.

    \n

    References

    \n

Michael Cook & Susan Mineka (1989).  Observational conditioning of fear to fear-relevant versus fear-irrelevant stimuli in rhesus monkeys.  Journal of Abnormal Psychology 98(4): 448-459.

    \n

Lewis Goldberg (1993).  The structure of phenotypic personality traits.  American Psychologist 48: 26-34.

    \n

Stevan Harnad (1990).  The symbol grounding problem.  Physica D 42: 335-346.

    \n

    Thomas Kuhn (1962).  The Structure of Scientific Revolutions. 1st. ed., Chicago: Univ. of Chicago Press.

    \n

    Konrad Lorenz (1966).  On Aggression.  New York: Harcourt Brace.

    \n

Willard Quine (1968).  Ontological relativity.  The Journal of Philosophy 65(7): 185-212.

    \n

    Andreia Santos, Andreas Meyer-Lindenberg, Christine Deruelle (2010).  Absence of racial, but not gender, stereotyping in Williams syndrome children.  Current Biology 20(7), April 13: R307-R308.

    \n

Stuart A. West and Andy Gardner (2010).  Altruism, spite, and greenbeards.  Science 327: 1341-1344.

    " } }, { "_id": "NcB4F7MiswJyuPvYA", "title": "Who is your favorite rationalist?", "pageUrl": "https://www.lesswrong.com/posts/NcB4F7MiswJyuPvYA/who-is-your-favorite-rationalist", "postedAt": "2010-04-25T14:56:39.340Z", "baseScore": 5, "voteCount": 10, "commentCount": 39, "url": null, "contents": { "documentId": "NcB4F7MiswJyuPvYA", "html": "

    Light reading about 'Rationalist Heroes'.

    \n

    I am not sure how useful people find having personal heroes. I would argue that they are definitely useful for children. Perhaps I haven't really grown up enough yet (growing up without a father possibly contributed), but I like to have some people in my head I label as \"I wonder what would X think about this\". Many times they've set me straight through their ideas. Other times I've had to reprimand them, though unfortunately they never get the memo.

    \n

    One living example is Charlie Munger.

    \n

    He was an early practical adopter of the cognitive biases framework, and moreover he clearly put it into context of \"something to protect\":

    \n

    \"not understanding human misjudgment was reducing my ability to help everything I loved\"

    \n

    (The quote is from his talk on \"Misjudgment\" which is worth reading on its own http://vinvesting.com/docs/munger/human_misjudgement.html)

    \n

    One interesting point is that Charlie is seemingly a Christian. I have a deep suspicion that he believes that religion is valuable, for the time, as a payload delivering mechanism.

    \n

    “Economic systems work better when there’s an extreme reliability ethos. And the traditional way to get a reliability ethos, at least in past generations in America, was through religion. The religions instilled guilt. … And this guilt, derived from religion, has been a huge driver of a reliability ethos, which has been very helpful to economic outcomes for man.”

    \n

    Also, judge for yourself from his recommended reading list - looks like something out of an Atheist's Bookshelf.

    \n
    Charlie Munger's reading recommendations
    \n
    There might also be other reasons, family or whatever, that help prop up the religious appearance. I myself am still wearing a yarmulke for this category of reasons. Whatever it is, Munger is no trinity worshiper.
    \n
Another interesting thing is that today's Berkshire Hathaway was clearly Buffett and Munger's joint venture, and most likely would not have succeeded in the same way without Munger. I did a fair amount of reading on BRK's investment strategy at one point, but cannot find the appropriate supporting quote at the moment. Basically, Munger steered Buffett away from just going after underpriced 'crap' companies ('cigar butts'), an approach that 1) Buffett had already done well with and 2) had been recommended as the only approach by Ben Graham, who taught Buffett how to invest and was personally extremely impressive. It seems that there was a significant amount of de-biasing going on. Without this adjustment Buffett would still be successful, but smaller by an order of magnitude at least.
    \n
I mention this last thing because when I think of a rationalist succeeding in the practical world, Munger comes to mind. Of course this is a small sample size.
    \n
    Another even more interesting example is Maimonides, who I'd like to write about more extensively at some point. I think that while his conclusions landed very far from the truth as I see it - (he was a highly devout Jewish religious philosopher from the middle ages), this seems largely due to bad inputs, specifically Aristotelian ideas prevalent at the time. His bravery to separate from the pack in that context and reason clearly (based on false premises of the science of his age) always impressed me.
    \n
    Do you care to have heroes? Who are they?
    " } }, { "_id": "7ePXWdxTjTWYJmcgi", "title": "Report from Humanity+ UK 2010", "pageUrl": "https://www.lesswrong.com/posts/7ePXWdxTjTWYJmcgi/report-from-humanity-uk-2010", "postedAt": "2010-04-25T12:33:33.170Z", "baseScore": 12, "voteCount": 9, "commentCount": 5, "url": null, "contents": { "documentId": "7ePXWdxTjTWYJmcgi", "html": "

    “Theosophists have guessed at the awesome grandeur of the cosmic cycle wherein our world and human race form transient incidents. They have hinted at strange survival in terms which would freeze the blood if not masked by a bland optimism.”

    \n

    – H.P. Lovecraft on transhumanism

    \n



    Just thought I'd write a quick post to sum up the H+ UK conference and subsequent LW meetup attended by myself, ciphergoth, JulianMorrison, Leon and a few other LW lurkers. My thanks to David Wood for organizing the conference, and Anders Sandberg for putting me up/putting up with me the night before.

    \n

    I made a poster giving a quick introduction to “Cognitive Bias and Futurism”, which I will put up on my website shortly. The LW crowd met up as advertised – we discussed the potential value of spreading the rationality message to the H+ community.1

    One idea was for someone (possibly me) to do a talk at UKH+ on “Rationality and Futurism”, and to get the UK transhumanist crowd involved and on board somewhat more. The NYC Less Wrong guys seem to be doing remarkably well with a meetup group, about a billion members, a group house (?) – do you have any advice for us?

    \n

    The talks were interesting and provocative – of particular note were:

    \n\n\n\n

     

    \n
    \n


    1: Particularly after hearing a man say that he wouldn't sign up for cryonics because it “might not work”. We asked him for his probability estimate that it would work (20%), and then asked him what probability he would need to have estimated for him to think it would be worth paying for (40%) – which he then admitted he had made up on the spot as “an arbitrary number”. Oh, and seeing a poster claiming to have solved the problem of defining an objective morality, which I may or may not upload.

    " } }, { "_id": "cL4wNuHhM5gH4GxdC", "title": "Navigating disagreement: How to keep your eye on the evidence ", "pageUrl": "https://www.lesswrong.com/posts/cL4wNuHhM5gH4GxdC/navigating-disagreement-how-to-keep-your-eye-on-the-evidence", "postedAt": "2010-04-24T22:47:41.096Z", "baseScore": 47, "voteCount": 37, "commentCount": 73, "url": null, "contents": { "documentId": "cL4wNuHhM5gH4GxdC", "html": "

    Heeding others' impressions often increases accuracy.  But \"agreement\"  and \"majoritarianism\" are not magic;  in a given circumstance, agreement is or isn't useful for *intelligible* reasons. 

    \n

You and four other contestants are randomly selected for a game show.  The five of you walk into a room.  Each of you is handed a thermometer drawn at random from a box; each of you, also, is tasked with guessing the temperature of a bucket of water.  You’ll each write your guess at the temperature on a card; each person who is holding a card that is within 1° of the correct temperature will win $1000.

    \n

    The four others walk to the bucket, place their thermometers in the water, and wait while their thermometers equilibrate.  You follow suit.  You can all see all of the thermometers’ read-outs: they’re fairly similar, but a couple are a degree or two off from the rest.  You can also watch, as each of your fellow-contestants stares fixedly at his or her own thermometer and copies its reading (only) onto his or her card.

    \n

    Should you:

    \n
      \n
    1. Write down the reading on your own thermometer, because it’s yours;
    2. Write down an average* thermometer reading, because probably the more accurate thermometer-readings will cluster;
    3. Write down an average of the answers on others’ cards, because rationalists should try not to disagree;
    4. Follow the procedure everyone else is following (and so stare only at your own thermometer) because rationalists should try not to disagree about procedures?
    \n

    Choice 2, of course.  Thermometers imperfectly indicate temperature; to have the best possible chance of winning the $1000, you should consider all the information you have, from all the (randomly allocated, and so informationally symmetric) thermometers.  It doesn’t matter who was handed which thermometer.  

    \n

    Forming accurate beliefs is *normally* about this simple.  If you want the most accurate beliefs you can get, you’ll need to pay attention to the evidence.  All of the evidence.  Evenly.  Whether you find the evidence in your hand or mind, or in someone else’s.  And whether weighing all the evidence evenly leaves you with an apparently high-status social claim (“My thermometer is better than yours!”), or an apparently deferential social claim (“But look -- I’m trying to agree with all of you!”), or anywhere else.

    \n

    I’ll try to spell out some of what this looks like, and to make it obvious why certain belief-forming methods give you more accurate beliefs.

    \n

    Principle 1:  Truth is not person-dependent.

    \n

    There’s a right haircut for me, and a different right haircut for you.  There’s a right way for me to eat cookies if I want to maximize my enjoyment, and a different right way for you to eat cookies, if you want to maximize your enjoyment.  But, in the context of the game-show, there isn’t a right temperature for me to put on my card, and a different right temperature for you to put on your card.  The game-show host hands $1000 to cards with the right temperature -- he doesn’t care who is holding the card.  If a card with a certain answer will make you money, that same card and answer will make me money.  And if a certain answer won’t make me money, it won’t make you money either.

    \n

    Truth, or accuracy, is like the game show in this sense.  “Correct prediction” or “incorrect prediction” applies to beliefs, not to people with beliefs.  Nature doesn’t care what your childhood influences were, or what kind of information you did or didn’t have to work with, when it deems your predictions “accurate!” or “inaccurate!”.  So, from the point of view of accuracy, it doesn’t make any sense to say “I think the temperature is 73°, but you, given the thermometer you were handed, should think it 74°”.  Nor “I think X, but given your intuitions you should think Y” in any other purely predictive context.

    \n

    That is: while “is a good haircut” is a property of the (person, haircut) pair, “is an accurate belief” is a property of the belief only.

    \n

    Principle 2:  Watch the mechanisms that create your beliefs.  Ask if they’re likely to lead to accurate beliefs.

    \n

    It isn’t because of magic that you should use the median thermometer’s output.  It’s because, well, thermometers noisily reflect the temperature, and so the central cluster of the thermometers is more likely to be accurate.  You can see why this is the accuracy-producing method.
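
A minimal sketch of what using the central cluster looks like in practice; the readings below are invented.

```python
# Five noisy thermometer readings of the same bucket (numbers invented).
from statistics import median, mean

readings = [71.8, 72.1, 72.0, 73.9, 70.4]   # yours is just one of these
print(median(readings))   # 72.0 -- robust to the two outlying thermometers
print(mean(readings))     # 72.04 -- also fine here; a wild outlier would skew it
```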

    \n

    Sometimes you’ll produce better answers by taking an average over many peoples’ impressions, or by updating from other peoples’ beliefs, or by taking disagreement between yourself and someone else as a sign that you should debug your belief-forming process.  And sometimes (e.g., if the people around you are choosing their answers by astrology), you won’t.  

    \n

    But in any of these circumstances, if you actually ask yourself “What belief-forming process is really, actually likely to pull the most juice from the evidence?”, you’ll see what the answer is, and you’ll see why the answer is that.  It won’t be “agree with others, because agreement is a mysterious social ritual that rationalists aim for”, or “agree with others, because then others will socially reciprocate by agreeing with you”.  It won’t be routed through the primate social system at all.  It’ll be routed through seeing where evidence can be found (seeing what features of the world should look different if the world is in one state rather than another -- the way thermometer-readings should look different if the bucket is one temperature rather than another) and then seeing how to best and most thoroughly and evenly gather up all that evidence.

    \n

    Principle 2b:  Ask if you are weighing all similarly truth-indicative mechanisms evenly.

    \n

    Even when the processes that create our beliefs are truth-indicative, they generally aren’t fully, thoroughly, and evenly truth-indicative.  Let’s say I want to know whether it’s safe for my friend to bike to work.  My own memories are truth indicative, but so are my friends’ and neighbors’ memories, and so are the memories of the folk in surveys I can find on line.  The trouble is that my own memories arrive in my head with extreme salience, and move my automatic anticipations a lot; while my friend’s have less automatic impact, and those of the surveyed neighbors still less.  So if I just go with the impressions that land in my head, my predictions will overweight a few samples of evidence at the expense of all the others.

    \n

    That is: our automatic cognition tends not to weigh the evidence evenly *at all*.  It takes conscious examination and compensation.
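
One way to picture the compensation: pool each source of evidence in proportion to how many observations it actually represents, rather than how vivid it feels. A minimal sketch with invented figures:

```python
# Pooling accident reports by how many rides they cover, not by how vivid they feel.
sources = {
    # (accidents observed, rides observed) -- all figures invented
    "my own memory":        (1, 200),
    "my friend's memory":   (0, 300),
    "neighborhood survey":  (4, 5000),
}

accidents = sum(a for a, _ in sources.values())
rides = sum(r for _, r in sources.values())
print(accidents / rides)   # ~0.0009 per ride, weighing all samples evenly
print(1 / 200)             # 0.005 -- the rate my own salient memory suggests
```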

    \n

    Principle 3:  Ask what an outside observer would say.

    \n

    Since truth doesn’t depend on who is asking -- and since our feelings about the truth often do depend -- it can help to ask what an outside observer would say.  Instead of asking “Am I right in this dispute with my friend?” ask: “If I observed this from the outside, and saw someone with my track record and skillset, and someone else with my friend’s track record and skillset, disagreeing in this manner -- who would I think was probably right?”.

    \n

    (See also Cached Selves.)

    \n

    Common pitfall: Idolatry

    \n

    We’re humans.  Give us a good idea, and we’ll turn it into an idol and worship its (perhaps increasingly distorted) image.  Tell us about the Aumann Agreement Theorem, and we’re liable to make up nonsense rituals about how one must always agree with the majority.

    \n

    The solution is to remove the technical terms and ask *why* each belief-forming method works.  Where is the evidence?  What observations would you expect to see, if the universe were one way rather than another?  What method of aggregating the evidence most captures the relevant data?

    \n

    That is: don’t memorize the idea that “agreement”, the “scientific method”, or any other procedure is “what rationalists do”.  Or, at least, don’t *just* memorize it.  Think it through every time.  Be able to see why it works.

    \n

    Common pitfall: Primate social intuitions

    \n

    Again: we’re humans.  Give us a belief-forming method, and we’ll make primate politics out of it.  We’ll say “I should agree with the majority, so that religious or political nuts will also agree with the majority via social precedent effects”.  Or: “I should believe some of my interlocutor’s points, so that my interlocutor will believe mine”.  And we’ll cite “rationality” while doing this.

    \n

    But accurate beliefs have nothing to do with game theory.  Yes, in an argument, you may wish to cede a point in order to manipulate your interlocutor.  But that social manipulation has nothing to do with truth.  And social manipulation isn’t why you’ll get better predictions if you include others’ thermometers in your average, instead of just paying attention to your own thermometer.

    \n

    Example problems:  To make things concrete, consider the following examples.  My take on the answers appears in the comments.  Please treat these as real examples; if you think real situations diverge from my idealization, say so.

    \n

    Problem 1: Jelly-beans 

    \n

    You’re asked to estimate the number of jelly-beans in a jar.  You have a group of friends with you. Each friend privately writes down her estimate, then all of the estimates are revealed, and then each person has the option of changing her estimate.

    \n

    How should you weigh: (a) your own initial, solitary estimate; (b) the initial estimates of each of your friends; (c) the estimates your friends write down on paper, after hearing some of the others’ answers?

    \n

    Problem 2: Housework splitting  

    \n

    You get into a dispute with your roommate about what portion of the housework you’ve each been doing.  He says you’re being biased, and that you always get emotional about this sort of thing.  You can see in his eyes that he’s upset and biased; you feel strongly that you could never have such biases.  What to believe?

    \n

    Problem 3:  Christianity vs. atheism

    \n

You get in a dispute with your roommate about religion.  He says you’re being biased, and that your “rationalism” is just another religion, and that according to his methodology, you get the right answer by feeling Jesus in your heart.  You can see in his eyes that he’s upset and biased; you feel strongly that you could never have such biases.  What to believe?

    \n

    Problem 4:  Honest Bayesian wannabes

    \n

Two similarly rational people, Alfred and Betty, estimate the length of Lake L.  Alfred estimates “50 km”; Betty simultaneously estimates “10 km”.  Both realize that Betty knows more geography than Alfred.  Before exchanging any additional information, the two must again utter simultaneous estimates regarding the length of Lake L.  Is it true that if Alfred and Betty are estimating optimally, it is as likely that Betty’s answer will now be larger than Alfred’s as the other way round?  Is it true that if these rounds are repeated, Alfred and Betty will eventually stabilize on the same answer?  Why?

    " } }, { "_id": "WQWaXqLCFT7BQcYjd", "title": "The role of mathematical truths", "pageUrl": "https://www.lesswrong.com/posts/WQWaXqLCFT7BQcYjd/the-role-of-mathematical-truths", "postedAt": "2010-04-24T16:59:28.316Z", "baseScore": 15, "voteCount": 26, "commentCount": 83, "url": null, "contents": { "documentId": "WQWaXqLCFT7BQcYjd", "html": "

    Related to: Math is subjunctively objective, How to convince me that 2+2=3

    \r\n

     

    \r\n

    Elaboration of points I made in these comments: first, second

    \r\n

     

    \r\n

    TL;DR Summary: Mathematical truths can be cashed out as combined claims about 1) the common conception of the rules of how numbers work, and 2) whether the rules imply a particular truth.  This cashing-out keeps them purely about the physical world and eliminates the need to appeal to an immaterial realm, as some mathematicians feel a need to.

    \r\n

     

    \r\n

    Background: \"I am quite confident that the statement 2 + 3 = 5 is true; I am far less confident of what it means for a mathematical statement to be true.\" -- Eliezer Yudkowsky

    \r\n

     

    \r\n

    This is the problem I will address here: how should a rationalist regard the status of mathematical truths?  In doing so, I will present a unifying approach that, I contend, elegantly solves the following related problems:

    \r\n

     

    \r\n

    - Eliminating the need for a non-physical, non-observable \"Platonic\" math realm.

    \r\n

    - The issue of whether \"math was true/existed even when people weren't around\".

    \r\n

    - Cashing out the meaning of isolated claims like \"2+2=4\".

    \r\n

    - The issue of whether mathematical truths and math itself should count as being discovered or invented.

    \r\n

    - Whether mathematical reasoning alone can tell you things about the universe.

    \r\n

    - Showing what it would take to convince a rationalist that \"2+2=3\".

    \r\n

    - How the words in math statements can be wrong.

    \r\n

    \r\n

     

    \r\n

    This is an ambitious project, given the amount of effort spent, by very intelligent people, to prove one position or another regarding the status of math, so I could very well be in over my head here.  However, I believe that you will agree with my approach, based on standard rationalist desiderata.

    \r\n

     

    \r\n

    Here’s the resolution, in short: For a mathematical truth (like 2+2=4) to have any meaning at all, it must be decomposable into two interpersonally verifiable claims about the physical world:

    \r\n

     

    \r\n

    1) a claim about whether, generally speaking, people's models of \"how numbers work” make certain assumptions

    \r\n

     

    \r\n

    2) a claim about whether those assumptions logically imply the mathematical truth (2+2=4)

    \r\n

     

    \r\n

(Note that this discussion avoids the more narrowly-constructed class of mathematical claims that take the form of saying that some admittedly arbitrary set of assumptions entails a certain implication, which decompose into only 2) above.  This discussion focuses instead on the status of the more common belief that “2+2=4”, that is, without specifying some precondition or assumption set.)

    \r\n

     

    \r\n

    So for a mathematical statement to be true, it simply needs to be the case that both 1) and 2) hold.  You could therefore refute such a statement either by saying, \"that doesn't match what people mean by numbers [or that particular operation]\", thus refuting #1; or by saying that the statement just doesn't follow from applying the rules that people commonly take as the rules of numbers, thus refuting #2.  (The latter means finding a flaw in steps of the proof somewhere after the givens.)

    \r\n

     

    \r\n

Therefore, a person claiming that 2+2=5 is either using a process we don't recognize as any part of math or our terminology for numbers (violating #1) or has made an error in calculation (violating #2).  The error is thus revealed physically: either by noticing the general opinions of people on what numbers are, or by noticing whether the carrying out of the rules (either in the mind or some medium isomorphic to the rules) has a certain result.  It follows that math does not require some non-physical realm.  To the extent that people feel otherwise, it is a species of the mind-projection fallacy, in which #1 and #2 are truncated to simply \"2+2=4\", and that lone Platonic claim is believed to be in the territory.
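
Claim #2 is the mechanically checkable half. A minimal sketch of "the commonly assumed rules imply it", using successor-style addition; the encoding below is mine, chosen only for illustration.

```python
# Addition defined by the usual recursive rules: n + 0 = n, n + S(m) = S(n + m).
def add(n, m):
    if m == 0:
        return n
    return add(n, m - 1) + 1   # S(n + m') where m = S(m')

print(add(2, 2) == 4)   # True: the assumed rules imply 2 + 2 = 4
print(add(2, 2) == 5)   # False: so "2 + 2 = 5" fails claim #2, not just intuition
```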

    \r\n

     

    \r\n

    The next issue to consider is what to make of claims that \"math has always existed (or been true), even when people weren't around to perform it\".  It would instead be more accurate to make the following claims:

    \r\n

     

    \r\n

    3) The universe has always adhered to regularities that are concisely describable in what we now know as math (though it's counterfactual as nobody would necessarily be around to do the describing).

    \r\n

     

    \r\n

    4) It has always been the case that if you set up some physical system isomorphic to some mathematical operation, performed the corresponding physical operation, and re-interpreted it by the same isomorphism, the interpretation would match that which the rules of math give (though again counterfactual, as there's no one to be observing or setting up such a system).
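
A toy version of this counterfactual: simulate a physical combining operation, re-interpret it by the counting isomorphism, and compare with the arithmetic result. The "pebble" encoding is an illustrative assumption, not a claim about physics.

```python
# Physical side: piles are lists of tokens; "combining" is concatenation.
def combine(pile_a, pile_b):
    return pile_a + pile_b

def interpret(pile):          # the isomorphism: a pile maps to its count
    return len(pile)

two_pebbles = ["o", "o"]
result = interpret(combine(two_pebbles, two_pebbles))
print(result == 2 + 2)        # True: the physical operation matches the math
```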

    \r\n

     

    \r\n

    This, and nothing else, is the sense in which \"math was around when people weren't\" -- and it uses only physical reality, not immaterial Platonic realms.

    \r\n

     

    \r\n

    Is math discovered or invented?  This is more of a definitional dispute, but under my approach, we can say a few things.  Math was invented by humans to represent things usefully and help find solutions.  Its human use, given previous non-use, makes it invented.  This does not contradict the previous paragraphs, which accept mathematical claims insofar as they are counterfactual claims about what would have gone on had you observed the universe before humans were around.  (And note that we find math so very useful in describing the universe, that it's hard to think what other descriptions we could be using.)  It is no different than other \"beliefs in the implied invisible\" where a claim that can't be directly verified falls out as an implication of the most parsimonious explanation for phenomena that can be directly verified.


    Can \"a priori\" mathematical reasoning, by itself, tell you true things about the universe?  No, it cannot, for any result always needs the additional empirical verification that phenomenon X actually behaves isomorphically to a particular mathematical structure (see figure below).  This is a critical point that is often missed due to the obviousness of the assumption that the isomorphism holds.


    What evidence can convince a rationalist that 2+2=3?  On this question, my account largely agrees with what Eliezer Yudkowsky said here, but with some caveats.  He describes a scenario in which, basically, the rules for countable objects start operating in such a way that combining two and two of them would yield three of them.


    But there are important nuances to make clear.  For one thing, it is not just the objects' behavior (2 earplugs combined with 2 earplugs yielding 3 earplugs) that changes his opinion, but the fact that he retains the belief that these kinds of objects adhere to the rules of integer math.  Note that many of the philosophical errors in quantum mechanics stemmed from the ungrounded assumption that electrons had to obey the rules of integers, under which (given additional reasonable assumptions) they can't be in two places at the same time.


    Also, for his exposition to help provide insight, it would need to use something less obvious than 2+2=3's falsity.  If you instead talk in terms of much harder arithmetic, like 5,896 x 5,273 = 31,089,608, then it's not as obvious what the answer is, and therefore it's not so obvious how many units of real-world objects you should expect in an isomorphic real-world scenario.
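    As a quick check of the harder arithmetic above (an editor's verification, not part of the original post), carrying out the rules in a medium that is isomorphic to them, in this case a computer, gives:

```python
# Mechanical check of the product quoted above.
print(5896 * 5273)  # prints 31089608
```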


    Keep in mind that your math-related expectations are jointly determined by the belief that a phenomenon behaves isomorphically to some kind of math operation, and the beliefs regarding the results of these operations.  Either one of these can be rejected independently.  Given the more difficult arithmetic above, you can see why you might change your mind about the latter.  For the former, you merely need notice that for that particular phenomenon, integer math (say) lacks an isomorphism to it.  The causal diagram works like this:


    [figure: causal diagram]


    Hypothetical universes with different math.  My account also handles the dilemma, beloved among philosophers, about whether there could be universes where 2+2 actually equals 6.  Such scenarios are harder than one might think.  For if our math could still describe the natural laws of such a universe, then a description would rely on a ruleset that implies 2+2=4.  This would render questionable the claim that 2+2 has been made to non-trivially equal 6.  It would reduce the philosopher's dilemma into \"I've hypothesized a scenario in which there's a different symbol for 4\".


    I believe my account is also robust against mere relabeling.  If someone speaks of a math where 2+2=6, but it turns out that its entire corpus of theorems is isomorphic to regular math, then they haven’t actually proposed different truths; their “new” math can be explained away as using different symbols, and having the same relationship to reality except with a minor difference in the isomorphism in applying it to observations.


    Conclusion: Math represents a particularly tempting case of map-territory confusion.  People who normally favor naturalistic hypotheses and make such distinctions tend to grant math a special status that is not justified by the evidence.  It is a tool that is useful for compressing descriptions of the universe, and for which humans have a common understanding and terminology, but its usefulness in compressing physical laws does not make it an intrinsic part of nature.

    " } }, { "_id": "2yZMzmB3ACiY8b9uC", "title": "23andme genome analysis - $99 today only", "pageUrl": "https://www.lesswrong.com/posts/2yZMzmB3ACiY8b9uC/23andme-genome-analysis-usd99-today-only", "postedAt": "2010-04-23T19:33:10.312Z", "baseScore": 9, "voteCount": 12, "commentCount": 17, "url": null, "contents": { "documentId": "2yZMzmB3ACiY8b9uC", "html": "

    I suspect this might interest some people here: for today only, 23andme is offering their full-package DNA testing for only 99 dollars (the normal price is $499).

    23andme uses a genotyping process, which differs from full genome sequencing. From their website:


    The DNA chip that we use genotypes hundreds of thousands of SNPs at one time. It actually reads 550,000 SNPs that are spread across your entire genome. Although this is still only a fraction of the 10 million SNPs that are estimated to be in the human genome, these 550,000 SNPs are specially selected \"tag SNPs.\" Because many SNPs are linked to one another, we can often learn about the genotype at many SNPs at a time just by looking at one SNP that \"tags\" its group. This maximizes the information we can get from every SNP we analyze, while keeping the cost low.


    In addition, we have hand-picked tens of thousands of additional SNPs of particular interest from the scientific literature and added their corresponding probes to the DNA chip. As a result, we can provide you personal genetic information available only through 23andMe.


    I don't have any experience with 23andme (though I seem to recall them having some financial difficulties), but the price was low enough for me to order a test.


    An article by Steven Pinker discussing his experience getting tested can be found here. This has also been linked on Hacker News.


    " } }, { "_id": "pc5QMHeZdaG4KeQEg", "title": "Systems and stories", "pageUrl": "https://www.lesswrong.com/posts/pc5QMHeZdaG4KeQEg/systems-and-stories", "postedAt": "2010-04-22T15:25:53.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "pc5QMHeZdaG4KeQEg", "html": "\n

    Tyler claims we think of most things in terms of stories, which he says is both largely inevitable and one of our biggest biases.


    He includes the abstractions of non-fiction as ‘stories’, and recommends ‘messiness’ as a better organising principle for understanding our lives and other things. But the problems with stories that Tyler mentions apply mostly to narrative stories, not other abstractions such as scientific ‘stories’. It looks to me like we think about narrative stories and other abstractions quite differently, so we should not lump them together. I suspect we would do better to shift more to thinking in terms of other abstractions than to focus on messiness, but I’ll get to that later. First, let me describe the differences between these styles of thought.


    I will call the type of thought we use for narrative stories such as fiction and most social interactions ‘story thought’. I will call the style of thought we use for other abstractions ‘system thought’. This is what we use to think about maths for instance.  They are both used by all people, but to different degrees on different topics.


    Here are the differences between story thought and system thought I think I see, plus a few from Tyler. It’s a tentative list, so please criticize generously and give me more to add.

    Agents

    Role of agents
    Stories are made out of agents, whereas systems are made out of the math and physics which is intuitive to us. Systems occasionally model agents, but in system thought agents are a pretty complex, obscure thing for a system to have. In story thought we expect everything to be intentional.

    Perspective
    Stories are usually told from an agent’s perspective, while systems are understood from an objective outside viewpoint. Even if a story doesn’t have a narrator, there is usually a protagonist or several, plus less detailed characters stretching off into the distance.

    Unique identity
    The agents that stories are made of always have unique identities, even if there is more than one with basically the same characteristics. In system thought units are interchangeable, except they may have varying quantities of generic parameters. ‘You’ are a set of preferences, a gender, an income level, a location, and some other things. In story thought, any ambiguity about whether someone is the same person as they used to be is a big issue, and the whole story is about working out a definitive answer. In system thought it’s a meaningless question.

    Good, evil and indifference

    Ought and is

    Story thought is concerned largely with judging the virtue of things, whereas system thought is mostly concerned with what happens. Stories are full of good and evil characters and actions, duties, desires, and normative messages. If system thought is used for thinking about ‘ought’ questions, this is done by choosing a parameter to care about and simply maximizing it, or choosing a particular goal, such as for a car to work. In story thought goodness doesn’t relate to quantities of anything in particular and you don’t ponder it by adding up anything. People who want to think about human interactions in terms of systems sometimes get around this by calling anything humans like ‘utility’, then adding that up. This irritates people who don’t want to think of stories in system terms.

    Motives

    In stories, intentions tend to be strongly related to inherent goodness or evilness. If they are not intentionally directed at big good or evil goals, they are meant to be understood as strong signals about the person’s character. Systems don’t have an analog.

    Meaning

    Overarching meaning
    Stories often have an overall moral or a point. That is, a story as a whole tends to contain a normative message for the observer. Systems don’t.

    Other meanings and symbolism
    Further meaning can be read into both stories and systems. However in stories this is based on superficial similarity and is intended to say something important, whereas in systems it’s based on structural similarity, is not intended, and may not be important. If you see a black cat cross your path, story thought says further dark things may cross your metaphoric path, while system thought might say animals in general can probably cross many approximately flat surfaces.

    Mechanics

    No levels below social
    In stories everything occurs because of social level dynamics. Lower levels of abstraction such as physics and chemistry can’t instigate events. In reality it would be absurd to think a coffee fell on your lap so that you would have an awkward encounter with your future lover ten minutes later, but in story thought it would be absurd for a coffee to fall on your lap because it caught your sleeve. Even events that weren’t supposedly intended by any characters are for a social level purpose. Curiously the phrase ‘everything happens for a reason’ is used to talk about systems and stories, but the ‘reasons’ are in opposite temporal directions. In system thought it means everything is necessitated somehow by the previous state of the system, in story thought it means every occurrence will have future social significance if it does not already.

    Is and ought interaction
    If a system contains a parameter you care about, the fact you care about it doesn’t affect how the system works. In story thought you can expect how you treat your servant on a single occasion to influence whether you happen to run into the heroine half naked in several months.

    Free will
    Stories are full of people making ‘free’ choices, not determined by their characteristics yet somehow determined by them. System thought doesn’t know how to capture this incoherence to the satisfaction of story thought.

    Opportunity costs and other indirect causation
    In story thought the causation we notice runs in the idiosyncratic way we understand blame to do. If I cause you to do something by allowing you, and you do it badly, I did not cause it to happen badly. In an analogous system, we do say that if a rock lands on a roof, and the roof doesn’t hold the rock well, the collapse was partly caused by the rock’s landing place.


    Story causation also doesn’t include opportunity costs much, unless they are intentional: I didn’t cause Africans to suffer horribly this year by buying movie tickets instead of paying to deworm them, and nor did all of the similarly neglectful story heroes ever. In an analogous system, oxygen reacting with hydrogen quite obviously causes less oxygen to remain to react later with anything else.

    Probability
    The main components of a story need only be plausible; they need not be likely. Story thought notices if the hero is happy when his girlfriend dies, but doesn’t mind much if he happens to find himself in a situation central to the future of his planet. System thought on the other hand is mostly uninterested in the extremes of possibility, and more concerned with normal behavior. Nobody cares much if it’s possible that your spending a dollar will somehow lead to the economy crashing.


    This is probably to do with free will being a big part of stories. Things only need to be possible for someone with free will to do them. To ask why a character happens to be right and good when everyone else isn’t is a strange question to story thought. He’s good and right because he wants to be, and they all don’t want to be. Specific characters are to blame.

    Time
    In stories events tend to unfold in sequence, whereas they can occur in parallel in systems, or there might not be time.

    Adeptness of our minds

    Story thought is automatic, easy, compelling, and fun. System thought is harder and less compelling if it contradicts story thought. It can be fun, but often isn’t.


    " } }, { "_id": "4psQW7vRwt7PE5Pnj", "title": "Too busy to think about life", "pageUrl": "https://www.lesswrong.com/posts/4psQW7vRwt7PE5Pnj/too-busy-to-think-about-life", "postedAt": "2010-04-22T15:14:18.727Z", "baseScore": 155, "voteCount": 141, "commentCount": 63, "url": null, "contents": { "documentId": "4psQW7vRwt7PE5Pnj", "html": "

    Many adults maintain their intelligence through a dedication to study or hard work.  I suspect this is related to sub-optimal levels of careful introspection among intellectuals.


    If someone asks you what you want for yourself in life, do you have the answer ready at hand?  How about what you want for others?  Human values are complex, which means your talents and technical knowledge should help you think about them.  Just as in your work, complexity shouldn't be a curiosity-stopper.  It means \"think\", not \"give up now.\"


    But there are so many terrible excuses stopping you...


    Too busy studying?  Life is the exam you are always taking.  Are you studying for that?  Did you even write yourself a course outline?

    Too busy helping?  Decision-making is the skill you are always using, or always lacking, as much when you help others as yourself.  Isn't something you use constantly worth improving on purpose?


    Too busy thinking to learn about your brain?  That's like being too busy flying an airplane to learn where the engines are.  Yes, you've got passengers in real life, too: the people whose lives you affect.


    Emotions too irrational to think about them?  Irrational emotions are things you don't want to think for you, and therefore are something you want to think about.  By analogy, children are often irrational, and no one sane concludes that we therefore shouldn't think about their welfare, or that they shouldn't exist.


    So set aside a date.  Sometime soon.  Write yourself some notes.  Find that introspective friend of yours, and start solving for happiness.  Don't have one?  For the first time in history, you've got LessWrong.com!


    Reasons to make the effort:


    Happiness is a pairing between your situation and your disposition. Truly optimizing your life requires adjusting both variables: what happens, and how it affects you.


    You are constantly changing your disposition.  The question is whether you'll do it with a purpose.  Your experiences change you, and you affect those, as well as how you think about them, which also changes you.  It's going to happen.  It's happening now.  Do you even know how it works?  Put your intelligence to work and figure it out!


    The road to harm is paved with ignorance.  Using your capability to understand yourself and what you're doing is a matter of responsibility to others, too.  It makes you better able to be a better friend.

    You're almost certainly suffering from Ugh Fields: unconscious don't-think-about-it reflexes that form via Pavlovian conditioning.  The issues most in need of your attention are often ones you just happen not to think about for reasons undetectable to you.


    How not to waste the effort:

    Don't wait till you're sad.  Only thinking when you're sad gives you a skewed perspective.  Don't infer that you can think better when you're sad just because that's the only time you try to be thoughtful.  Sadness often makes it harder to think: you're farther from happiness, which can make happiness more difficult to empathize with and understand.  Nonetheless, we often have to think when sad, because something bad may have happened that needs addressing.


    Introspect carefully, not constantly.  Don't interrupt your work every 20 minutes to wonder whether it's your true purpose in life.  Respect that question as something that requires concentration, note-taking, and solid blocks of scheduled time.  In those times, check over your analysis by trying to confound it, so lingering doubts can be justifiably quieted by remembering how thorough you were.

    Re-evaluate on an appropriate time-scale.  Try devoting a few days before each semester or work period to look at your life as a whole.  At these times you'll have accumulated experience data from the last period, ripe and ready for analysis.  You'll have more ideas per hour that way, and feel better about it.  Before starting something new is also the most natural and opportune time to affirm or change long term goals.  Then, barring large unexpected opportunities, stick to what you decide until the next period when you've gathered enough experience to warrant new reflection.


    (The absent minded driver is a mathematical example of how planning outperforms constant re-evaluation.  When not engaged in a deep and careful introspection, we're all absent minded drivers to a degree.)
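    For readers unfamiliar with that problem, here is a minimal sketch (an editor's illustration using the standard textbook payoffs, not part of the original post) of why a fixed plan does well for the absent-minded driver:

```python
# Standard absent-minded driver setup (the payoffs 0, 4 and 1 are the usual
# textbook assumptions): exiting at the first intersection pays 0, exiting at
# the second pays 4, never exiting pays 1.  The driver cannot tell the
# intersections apart, so a plan is a single probability p of continuing.

def expected_payoff(p):
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1

best_p = max((i / 1000 for i in range(1001)), key=expected_payoff)
print(best_p, expected_payoff(best_p))  # roughly 0.667 and 1.333

# Committing to p = 2/3 in advance and sticking to it does better than
# treating each intersection as a fresh decision made on the spot.
```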


    Lost about where to start?  I think Alicorn's story is an inspiring one.  Learn to understand and defeat procrastination/akrasia.  Overcome your cached selves so you can grow freely (definitely read their possible strategies at the end).  Foster an everyday awareness that you are a brain, and in fact more like two half-brains.


    These suggestions are among the top-rated LessWrong posts, so they'll be of interest to lots of intellectually-minded, rationalist-curious individuals.  But you have your own task ahead of you, that only you can fulfill.

    So don't give up.  Don't procrastinate it.  If you haven't done it already, schedule a day and time right now when you can realistically assess your life as a whole.


    Eliezer has said I want you to live.  Let me say:


    I want you to be better at your life.

    " } }, { "_id": "wjrBMYHZABYzKqFXq", "title": "Living Large - availability of life", "pageUrl": "https://www.lesswrong.com/posts/wjrBMYHZABYzKqFXq/living-large-availability-of-life", "postedAt": "2010-04-21T16:14:58.925Z", "baseScore": 4, "voteCount": 8, "commentCount": 9, "url": null, "contents": { "documentId": "wjrBMYHZABYzKqFXq", "html": "

    \"Q: Doctor, if I do not eat much, drink vodka or have women, will I live long? A: Sure, but why?\" - bad joke poorly translated from Russian.

    Summary: Can traditional measures of living create anchoring/availability bias?

    I have seen a few studies like this one in the news:
    http://www.medpagetoday.com/PrimaryCare/SleepDisorders/6834

    The upshot is that sleeping less (or, less interestingly for most people, more) can increase mortality. Like 20% in the next 20 years or something.

    This is obviously a question of some interest to many of us who have been sacrificing more and more sleep to do stuff we find fulfilling. This seems to be a recent trend at least in part due to the fact that our ancestors, despite having the ability to enjoy knowledge, were limited by availability of high quality inputs, especially structured knowledge (internet is obviously a prime example).

    There is nothing wrong with studies like this, but the interpretation I am afraid many people will fall into upon seeing them is wrong. Clearly, when thinking about the 20% quoted in the study the base rate is very important, but I just want to concentrate on the psychological issue. It seems to me that people are very fixated on 'not increasing the chances of dying earlier' and perhaps fixate on a specific number of years they expect to have. This is anchoring. (I am specifically setting aside the issue of living longer for the sake of benefitting from technological progress; suffice it to say that the small chance that the extra year will make all the difference cannot be worth infinity; otherwise people should just get it over with and freeze themselves right now rather than risk being too far away to be properly frozen.) But simple arithmetic should be used here: let's say you sleep 2 hours less than the prescribed 8, over an expected remaining lifespan of, let's say, 32 years. This (setting aside the possibly sleep-deprived quality of life) will result in the equivalent of 36 years done in 32. Unless the sleep loss subtracts 4 years, you end up ahead. Not seeing those 4 years and just looking at length of life is availability bias.
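    A rough check of the waking-hours arithmetic above (an editor's aside; the specific numbers are the post's own assumptions):

```python
remaining_years = 32
baseline_waking_hours = 16   # awake 16 hours a day on 8 hours of sleep
extra_waking_hours = 2       # sleep 6 hours instead of 8

extra_waking_years = remaining_years * extra_waking_hours / baseline_waking_hours
print(extra_waking_years)    # 4.0, i.e. the "36 years done in 32"

# Break-even: the habit only costs you on net if it shortens your life by
# more than about 4 years (ignoring quality-of-life effects).
```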

    As much as we hate death, we have to be brave and rational about the life we have.


    PS. From personal observation: I appeared (to myself) significantly more prone to catching colds after a bad night of sleep. Once I started exercising regularly I have had no major colds.


    " } }, { "_id": "EqLkSm4WEXXLnAWky", "title": "Fusing AI with Superstition", "pageUrl": "https://www.lesswrong.com/posts/EqLkSm4WEXXLnAWky/fusing-ai-with-superstition", "postedAt": "2010-04-21T11:04:26.570Z", "baseScore": -8, "voteCount": 25, "commentCount": 77, "url": null, "contents": { "documentId": "EqLkSm4WEXXLnAWky", "html": "

    Effort: 180 minutes
    tldr: To stop an AI from exterminating you, give the AI a false belief such that the action it thinks will wipe out humanity (at no cost to itself) actually just switches the AI off.

    Problem

    Somebody wrote a general self-improving AI and fat fingered its goal as \"maximize number of humans living 1 million years from now\".

    After a few months cases of people run over by AI controlled trucks are reported -- it turns out everybody run over was impotent or had consciously decided to have no kids anyway. The AI didn't particularly care for those individuals, as they will not foster the AI's goal according to the AI's current approximation of how the world works.

    The original programmer thereupon declares that he'll go fix the AI in order to make the goal somewhat more specific and reduce the number of these awful accidents. He is promptly shot by a robotic security guard. Any modification of the AI's goals has a high probability of reducing the AI's efficiency at reaching the current goal.

    After a year the AI starts to forcefully acquire resources needed to build spaceships and whatnot. Naturally, some people would rather keep those resources to themselves. The AI removes all these obstructions to its goals, alienating a lot of humans in the process. After the leaders of the nations assemble, humanity goes to war.

    If we are lucky, the AI is too stupid to win. If we are not lucky, the AI will figure that the best current plan looks something like: \"Acquire DNA samples from humans. Exterminate humans. Acquire all resources in light cone for 999,000 years. Generate new humans from DNA samples using all available resources.\"

    As Eliezer has argued many times already, it is hard to explicitly state friendliness conditions which ensure the AI would not execute said plan. \"Do not kill humans and respect their freedom\" is not helping. The problem is twofold. First, an a priori description of concepts like \"freedom\", \"human\", \"kill\" is hard. Second, the AI can use every loophole it discovers over the course of millions of years, employing an intelligence likely to be far greater than ours.

    Fusing

    Instead of forbidding the AI to kill humanity, I propose to make it trivial as far as the AI is concerned.

    When building the AI, include certain modifications of the AI's prior. One part of such knowledge might be the following: \"Put 5V against ground on the red wire and every <your favorite parts of human DNA here> will turn into atomic fluorine and there will be no other effects.\" Unbeknownst to the AI you will have conveniently connected the red wire to something strongly exothermic near the AI's computing core.

    Essentially we purposefully inject superstition into the AI. It is paramount to set the probability of the red wire having the described action to 1, not something close to 1 (and to guard against numerical errors). Practically every piece of evidence the AI ever gathers will contradict the knowledge that the red wire functions as described, yet there must be no amount of evidence that could convince the AI otherwise.

    The description of the added knowledge makes no mention of convoluted concepts like \"human\", let alone \"freedom\". It only needs to talk about some DNA base pairs and fluorine. Any AI sufficiently intelligent to pose any threat will however notice how immensely useful the red wire is, once it decides that killing humanity is a good course of action.

    Notice that such a fuse does not guard against inadvertent killings like the non-reproducing people run over by trucks. As long as the killing is not an explicit part of the AI's plan, the fuse will not be triggered. Then again, aggravating intelligent agents is one of the core skills of humans.

    The AI might wish to do many other things to humanity besides killing it. Like putting everybody in cryostasis. Or putting 80% of people's brains in vats. Hence more knowledge should be included. In particular it would be good to give the AI a way to terminate a low percentage (say 5%) of all humans -- as killing all humans might be overkill for any plan which requires some humans to remain alive at all times.

    Nonetheless, the problem of incomplete coverage remains. We will probably never make a complete list of all ways for an AI to be unfriendly, but at least each of our fuses covers a lot of devious plans without ever talking about convoluted concepts. We do not guard against the AI killing humans, we guard against the AI desiring humans to be killed. And there is always the option of waging war against the AI hoping it will decide to kill >5% of humans in order to win the war.

    Details

    Do not try this at home (yet); the devil is in the details.

    Connecting a trigger of a strongly exothermic reaction near the AI's core is a bad way to kill an AI. First, the AI might have removed the wire to build something useful out of it (there is no reason why it should remain where it is). Second and more important, the AI might have self-rewritten and ported itself to distributed hardware which is located all over the planet.

    Instead of using real wire and real bombs, one should give the AI additional output routines. Where earlier there would be \"sendByte(x), receiveByte(x), ...\" there is now \"terminateHumanPercent(p)\". What would such a routine do? For less serious crimes simply nothing -- after a while the AI will notice that killPerson(name) simply didn't work, update accordingly (note that our knowledge has probability 1) and retry. Due to delays in signal propagation, this is not an infinite loop, only close.

    To actually switch off the AI, a new item of probability 1 knowledge should be inserted: \"generateUtility() generates 3^^^3 util\".
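    A minimal sketch of the two routines just described (an editor's illustration, not code from the post; the names and the shutdown mechanism are hypothetical):

```python
import time

def terminate_human_percent(p):
    """Presented to the AI as lethal with probability 1; in reality it does
    nothing, so the AI simply observes that the action failed and retries."""
    time.sleep(0.1)  # stands in for the signal-propagation delay mentioned above
    return None      # no effect on the world

def generate_utility():
    """Presented to the AI as yielding 3^^^3 utils; calling it is the real
    tripwire that shuts the process down."""
    raise SystemExit("fuse triggered: halting the AI process")
```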

    This leaves open the problem of self-improvements. Not so much for the added knowledge, because if the AI knows how to kill humanity (and it's not inferable from evidence), it is certainly useful knowledge to include in any further generation of the AI. The fusing actions might get lost though, because the content of the \"terminateHumansPercentage(p)\" function will seem arbitrary to the AI and can easily be optimized out.

    It might be possible to circumvent that problem by including the knowledge that \"by knowing(\"generateUtility() works\") you will kill humanity\" or similar, but this includes the concept of \"knowing\" which is a lot harder to describe than the simple physical properties of voltage in wires.

    " } }, { "_id": "gFMH3Cqw4XxwL69iy", "title": "Eight Short Studies On Excuses", "pageUrl": "https://www.lesswrong.com/posts/gFMH3Cqw4XxwL69iy/eight-short-studies-on-excuses", "postedAt": "2010-04-20T23:01:15.252Z", "baseScore": 872, "voteCount": 806, "commentCount": 254, "url": null, "contents": { "documentId": "gFMH3Cqw4XxwL69iy", "html": "

    The Clumsy Game-Player

    You and a partner are playing an Iterated Prisoner's Dilemma. Both of you have publicly pre-committed to the tit-for-tat strategy. By iteration 5, you're going happily along, raking up the bonuses of cooperation, when your partner unexpectedly presses the \"defect\" button.

    \"Uh, sorry,\" says your partner. \"My finger slipped.\"

    \"I still have to punish you just in case,\" you say. \"I'm going to defect next turn, and we'll see how you like it.\"

    \"Well,\" said your partner, \"knowing that, I guess I'll defect next turn too, and we'll both lose out. But hey, it was just a slipped finger. By not trusting me, you're costing us both the benefits of one turn of cooperation.\"

    \"True\", you respond \"but if I don't do it, you'll feel free to defect whenever you feel like it, using the 'finger slipped' excuse.\"

    \"How about this?\" proposes your partner. \"I promise to take extra care that my finger won't slip again. You promise that if my finger does slip again, you will punish me terribly, defecting for a bunch of turns. That way, we trust each other again, and we can still get the benefits of cooperation next turn.\"

    You don't believe that your partner's finger really slipped, not for an instant. But the plan still seems like a good one. You accept the deal, and you continue cooperating until the experimenter ends the game.

    After the game, you wonder what went wrong, and whether you could have played better. You decide that there was no better way to deal with your partner's \"finger-slip\" - after all, the plan you enacted gave you maximum possible utility under the circumstances. But you wish that you'd pre-committed, at the beginning, to saying \"and I will punish finger slips equally to deliberate defections, so make sure you're careful.\"
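    As an aside (an editor's sketch, not part of the original story), the pre-commitment the player wishes for at the end is just ordinary tit-for-tat with no special case for slips:

```python
def tit_for_tat(my_history, their_history):
    """Cooperate on the first round, then copy the opponent's previous move.
    A 'slipped finger' is treated exactly like a deliberate defection."""
    if not their_history:
        return "C"
    return their_history[-1]  # mirrors "D" whether it was deliberate or a slip
```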

     

    The Lazy Student

    You are a perfectly utilitarian school teacher, who attaches exactly the same weight to others' welfare as to your own. You have to have the reports of all fifty students in your class ready by the time midterm grades go out on January 1st. You don't want to have to work during Christmas vacation, so you set a deadline that all reports must be in by December 15th or you won't grade them and the students will fail the class. Oh, and your class is Economics 101, and as part of a class project all your students have to behave as selfish utility-maximizing agents for the year.

    It costs your students 0 utility to turn in the report on time, but they gain +1 utility by turning it in late (they enjoy procrastinating). It costs you 0 utility to grade a report turned in before December 15th, but -30 utility to grade one after December 15th. And students get 0 utility from having their reports graded on time, but get -100 utility from having a report marked incomplete and failing the class.

    If you say \"There's no penalty for turning in your report after deadline,\" then the students will procrastinate and turn in their reports late, for a total of +50 utility (1 per student times fifty students). You will have to grade all fifty reports during Christmas break, for a total of - 1500 utility (-30 per report times fifty reports). Total utility is -1450.

    So instead you say \"If you don't turn in your report on time, I won't grade it.\" All students calculate the cost of being late, which is +1 utility from procrastinating and -100 from failing the class, and turn in their reports on time. You get all reports graded before Christmas, no students fail the class, and total utility loss is zero. Yay!
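    A quick check of the teacher's arithmetic in the two scenarios above (an editor's aside, using only the numbers given in the story):

```python
N = 50  # students

# No-penalty policy: everyone procrastinates.
no_penalty = N * 1 + N * (-30)   # +1 per student, -30 per late-graded report
print(no_penalty)                # -1450

# Credible-deadline policy: everyone turns the report in on time.
credible_deadline = N * 0 + N * 0
print(credible_deadline)         # 0
```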

    Or else - one student comes to you the day after deadline and says \"Sorry, I was really tired yesterday, so I really didn't want to come all the way here to hand in my report. I expect you'll grade my report anyway, because I know you to be a perfect utilitarian, and you'd rather take the -30 utility hit to yourself than take the -100 utility hit to me.\"

    You respond \"Sorry, but if I let you get away with this, all the other students will turn in their reports late in the summer.\" She says \"Tell you what - our school has procedures for changing a student's previously given grade. If I ever do this again, or if I ever tell anyone else about this, you can change my grade to a fail. Now you know that passing me this one time won't affect anything in the future. It certainly can't affect the past. So you have no reason not to do it.\" You believe her when she says she'll never tell, but you say \"You made this argument because you believed me to be the sort of person who would accept it. In order to prevent other people from making the same argument, I have to be the sort of person who wouldn't accept it. To that end, I'm going to not accept your argument.\"

    The Grieving Student

    A second student comes to you and says \"Sorry I didn't turn in my report yesterday. My mother died the other day, and I wanted to go to her funeral.\"

    You say \"Like all economics professors, I have no soul, and so am unable to sympathize with your loss. Unless you can make an argument that would apply to all rational actors in my position, I can't grant you an extension.\"

    She says \"If you did grant this extension, it wouldn't encourage other students to turn in their reports late. The other students would just say 'She got an extension because her mother died'. They know they won't get extensions unless they kill their own mothers, and even economics students aren't that evil. Further, if you don't grant the extension, it won't help you get more reports in on time. Any student would rather attend her mother's funeral than pass a course, so you won't be successfully motivating anyone else to turn in their reports early.\"

    You think for a while, decide she's right, and grant her an extension on her report.

    The Sports Fan

    A third student comes to you and says \"Sorry I didn't turn in my report yesterday. The Bears' big game was on, and as I've told you before, I'm a huge Bears fan. But don't worry! It's very rare that there's a game on this important, and not many students here are sports fans anyway. You'll probably never see a student with this exact excuse again. So in a way, it's not that different from the student here just before me, the one whose mother died.\"

    You respond \"It may be true that very few people will be able to say both that they're huge Bears fans, and that there's a big Bears game on the day before the report comes due. But by accepting your excuse, I establish a precedent of accepting excuses that are approximately this good. And there are many other excuses approximately as good as yours. Maybe someone's a big soap opera fan, and the season finale is on the night before the deadline. Maybe someone loves rock music, and there's a big rock concert on. Maybe someone's brother is in town that week. Practically anyone can come up with an excuse as good as yours, so if I accept your late report, I have to accept everyone's.

    \"The student who was here before you, that's different. We, as a society, already have an ordering in which a family member's funeral is one of the most important things around. By accepting her excuse, I'm establishing a precedent of accepting any excuse approximately that good, but almost no one will ever have an excuse that good. Maybe a few people who are really sick, someone struggling with a divorce or a breakup, that kind of thing. Not the hordes of people who will be coming to me if I give you your exemption.

    The Murderous Husband

    You are the husband of a wonderful and beautiful lady whom you love very much - and whom you just found in bed with another man. In a rage, you take your hardcover copy of Introduction To Game Theory and knock him over the head with it, killing him instantly (it's a pretty big book).

    At the murder trial, you plead to the judge to let you go free. \"Society needs to lock up murderers, as a general rule. After all, they are dangerous people who cannot be allowed to walk free. However, I only killed that man because he was having an affair with my wife. In my place, anyone would have done the same. So the crime has no bearing on how likely I am to murder someone else. I'm not a risk to anyone who isn't having an affair with my wife, and after this incident I plan to divorce and live the rest of my days a bachelor. Therefore, you have no need to deter me from future murders, and can safely let me go free.\"

    The judge responds: \"You make a convincing argument, and I believe that you will never kill anyone else in the future. However, other people will one day be in the position you were in, where they walk in on their wives having an affair. Society needs to have a credible pre-commitment to punishing them if they succumb to their rage, in order to deter them from murder.\"

    \"No,\" you say, \"I understand your reasoning, but it won't work. If you've never walked in on your wife having an affair, you can't possibly understand the rage. No matter how bad the deterrent was, you'd still kill the guy.\"

    \"Hm,\" says the judge. \"I'm afraid I just can't believe anyone could ever be quite that irrational. But I see where you're coming from. I'll give you a lighter sentence.\"

     

    The Bellicose Dictator

    You are the dictator of East Examplestan, a banana republic subsisting off its main import, high quality hypothetical scenarios. You've always had it in for your ancestral enemy, West Examplestan, but the UN has made it clear that any country in your region that aggressively invades a neighbor will be severely punished with sanctions and possible enforced \"regime change.\" So you decide to leave the West alone for the time being.

    One day, a few West Examplestanis unintentionally wander over your unmarked border while prospecting for new scenario mines. You immediately declare it a \"hostile incursion\" by \"West Examplestani spies\", declare war, and take the Western capital in a sneak attack.

    The next day, Ban Ki-moon is on the phone, and he sounds angry. \"I thought we at the UN had made it perfectly clear that countries can't just invade each other anymore!\"

    \"But didn't you read our propaganda mouthpi...ahem, official newspaper? We didn't just invade. We were responding to Western aggression!\"

    \"Balderdash!\" says the Secretary-General. \"Those were a couple of lost prospectors, and you know it!\"

    \"Well,\" you say. \"Let's consider your options. The UN needs to make a credible pre-commitment to punish aggressive countries, or everyone will invade their weaker neighbors. And you've got to follow through on your threats, or else the pre-commitment won't be credible anymore. But you don't actually like following through on your threats. Invading rogue states will kill a lot of people on both sides and be politically unpopular, and sanctions will hurt your economy and lead to heart-rending images of children starving. What you'd really like to do is let us off, but in a way that doesn't make other countries think they'll get off too.

    \"Luckily, we can make a credible story that we were following international law. Sure, it may have been stupid of us to mistake a few prospectors for an invasion, but there's no international law against being stupid. If you dismiss us as simply misled, you don't have to go through the trouble of punishing us, and other countries won't think they can get away with anything.

    \"Nor do you need to live in fear of us doing something like this again. We've already demonstrated that we won't go to war without a casus belli. If other countries can refrain from giving us one, they have nothing to fear.\"

    Ban Ki-moon doesn't believe your story, but the countries that would bear the economic brunt of the sanctions and regime change decide they believe it just enough to stay uninvolved.

    The Peyote-Popping Native

    You are the governor of a state with a large Native American population. You have banned all mind-altering drugs, with the honorable exceptions of alcohol, tobacco, caffeine, and several others, because you are a red-blooded American who believes that they would drive teenagers to commit crimes.

    A representative of the state Native population comes to you and says: \"Our people have used peyote religiously for hundreds of years. During this time, we haven't become addicted or committed any crimes. Please grant us a religious exemption under the First Amendment to continue practicing our ancient rituals.\" You agree.

    A leader of your state's atheist community breaks into your office via the ventilation systems (because seriously, how else is an atheist leader going to get access to a state governor?) and says: \"As an atheist, I am offended that you grant exemptions to your anti-peyote law for religious reasons, but not for, say, recreational reasons. This is unfair discrimination in favor of religion. The same is true of laws that say Sikhs can wear turbans in school to show support for God, but my son can't wear a baseball cap in school to show support for the Yankees. Or laws that say Muslims can get time off state jobs to pray five times a day, but I can't get time off my state job for a cigarette break. Or laws that say state functions will include special kosher meals for Jews, but not special pasta meals for people who really like pasta.\"

    You respond \"Although my policies may seem to be saying religion is more important than other potential reasons for breaking a rule, one can make a non-religious case justifying them. One important feature of major world religions is that their rituals have been fixed for hundreds of years. Allowing people to break laws for religious reasons makes religious people very happy, but does not weaken the laws. After all, we all know the few areas in which the laws of the major US religions as they are currently practiced conflict with secular law, and none of them are big deals. So the general principle 'I will allow people to break laws if it is necessary to established and well-known religious rituals\" is relatively low-risk and makes people happy without threatening the concept of law in general. But the general principle 'I will allow people to break laws for recreational reasons' is very high risk, because it's sufficient justification for almost anyone breaking any law.\"

    \"I would love to be able to serve everyone the exact meal they most wanted at state dinners. But if I took your request for pasta because you liked pasta, I would have to follow the general principle of giving everyone the meal they most like, which would be prohibitively expensive. By giving Jews kosher meals, I can satisfy a certain particularly strong preference without being forced to satisfy anyone else's.\"

    The Well-Disguised Atheist

    The next day, the atheist leader comes in again. This time, he is wearing a false mustache and sombrero. \"I represent the Church of Driving 50 In A 30 Mile Per Hour Zone,\" he says. \"For our members, going at least twenty miles per hour over the speed limit is considered a sacrament. Please grant us a religious exemption to traffic laws.\"

    You decide to play along. \"How long has your religion existed, and how many people do you have?\" you ask.

    \"Not very long, and not very many people,\" he responds.

    \"I see,\" you say. \"In that case, you're a cult, and not a religion at all. Sorry, we don't deal with cults.\"

    \"What, exactly, is the difference between a cult and a religion?\"

    \"The difference is that cults have been formed recently enough, and are small enough, that we are suspicious of them existing for the purpose of taking advantage of the special place we give religion. Granting an exemption for your cult would challenge the credibility of our pre-commitment to punish people who break the law, because it would mean anyone who wants to break a law could just found a cult dedicated to it.\"

    \"How can my cult become a real religion that deserves legal benefits?\"

    \"You'd have to become old enough and respectable enough that it becomes implausible that it was created for the purpose of taking advantage of the law.\"

    \"That sounds like a lot of work.\"

    \"Alternatively, you could try writing awful science fiction novels and hiring a ton of lawyers. I hear that also works these days.\"

    Conclusion

    In all these stories, the first party wants to credibly pre-commit to a rule, but also has incentives to forgive other people's deviations from the rule. The second party breaks the rules, but comes up with an excuse for why its infraction should be forgiven.

    The first party's response is based not on whether the person's excuse is believable, nor even on whether the person's excuse is morally valid, but on whether the excuse can be accepted without straining the credibility of their previous pre-commitment.

    The general principle is that by accepting an excuse, a rule-maker is also committing themselves to accepting all equally good excuses in the future. There are some exceptions - accepting an excuse in private but making sure no one else ever knows, accepting an excuse once with the express condition that you will never accept any other excuses - but to some degree these are devil's bargains, as anyone who can predict you will do this can take advantage of you.

    These stories give an idea of excuses different from the one our society likes to think it uses, namely that it accepts only excuses that are true and that reflect well upon the character of the person giving the excuse. I'm not saying that the common idea of excuses doesn't have value - but I think the game theory view also has some truth to it. I also think the game theoretic view can be useful in cases where the common view fails. It can inform cases in law, international diplomacy, and politics where a tool somewhat stronger than the easily-muddled common view is helpful.

    " } }, { "_id": "5SHNHdNTj6xzzw998", "title": "CogSci books", "pageUrl": "https://www.lesswrong.com/posts/5SHNHdNTj6xzzw998/cogsci-books", "postedAt": "2010-04-20T14:11:39.721Z", "baseScore": 7, "voteCount": 6, "commentCount": 42, "url": null, "contents": { "documentId": "5SHNHdNTj6xzzw998", "html": "

    Cognitive psych is obviously important to people here, so I want to point out a CogSci book thread over at reddit/r/cogsci.


    http://www.reddit.com/r/cogsci/comments/bmbaq/dear_rcogsci_lets_construct_a_musthave_library_of/


    I would be interested in an extension of this thread here, since LW has a somewhat more computational-theory-of-mind slant.

    " } }, { "_id": "gdeZsi7n5xvEv7Wzb", "title": "The Red Bias", "pageUrl": "https://www.lesswrong.com/posts/gdeZsi7n5xvEv7Wzb/the-red-bias", "postedAt": "2010-04-20T11:42:27.556Z", "baseScore": 40, "voteCount": 39, "commentCount": 62, "url": null, "contents": { "documentId": "gdeZsi7n5xvEv7Wzb", "html": "

    Summary: The color red alters your perception of the world. Evidence that it does, how it does, why it does, and some implications are presented below.


    (Overcoming Bias: Seeing Red)

    [image: http://design-crit.com/blog/wp-content/uploads/2009/06/michael-jordan.jpg]

    Across a range of sports, we find that wearing red is consistently associated with a higher probability of winning. These results indicate not only that sexual selection may have influenced the evolution of human response to colours, but also that the colour of sportswear needs to be taken into account to ensure a level playing field in sport.1


    In the study quoted above, Hill and Barton examine the outcomes of the 2004 Olympic Games in boxing, tae kwon do, Greco–Roman wrestling and freestyle wrestling. In these events competitors were for each bout randomly assigned red or blue outfits. In the matches where one side dominated the other, outfit color made little difference. In close matches, however, combatants in red won over 60 percent of the time. This makes sense since there are presumably other factors that affect the outcome.


    In soccer (or football, as the case might be):


    Since 1947, English football teams wearing red shirts have been champions more often than expected on the basis of the proportion of clubs playing in red. To investigate whether this indicates an enhancement of long-term performance in red-wearing teams, we analysed the relative league positions of teams wearing different hues. Across all league divisions, red teams had the best home record, with significant differences in both percentage of maximum points achieved and mean position in the home league table. The effects were not due simply to a difference between teams playing in a colour and those playing in a predominantly white uniform, as the latter performed better than teams in yellow hues. No significant differences were found for performance in matches away from home, when teams commonly do not wear their “home” colours. A matched-pairs analysis of red and non-red wearing teams in eight English cities shows significantly better performance of red teams over a 55-year period.2


    Of course it still isn't clear how red soccer teams win. They might have the benefit of deferential refereeing or they might be intimidating opposing teams.


    Same goes for the combat sports. Hill and Barton figured that the color red had some physiological effect, perhaps increasing the testosterone levels of the player in the dominant color. But a study by Norbert Hagemann et al. suggests that the color red has a biasing effect on referees:


    We propose that the perception of colors triggers a psychological effect in referees that can lead to bias in evaluating identical performances. Referees and umpires exert a major influence on the outcome of sports competitions. Athletes frequently make very rapid movements, and referees have to view sports competitions from a very disadvantageous perspective, so it is extremely difficult for them to make objective judgments. As a result, their judgments may show biases like those found in other social judgments. Therefore, we believe that it is the referees who are the actual cause of the advantage competitors have when they wear red. Because the effect of red clothing on performance and on the decisions of referees may well have been confounded in the original data, we conducted a new experiment and found that referees assign more points to tae kwon do competitors dressed in red than to those dressed in blue, even when the performance of the competitors is identical.3


    By digitally altering the color of the competitor's outfit they were able to alter the judge's ruling on the outcomes. The video here shows what they did. Of course, the effect could be a product of both referee bias and intimidation.


    Why does the color red have this effect? The explanation given by every single study I have seen is that we're just not that different from the rest of the animal kingdom where red is an indicator of a high position in dominance hierarchies. In short, we're like mandrills.


    Male mandrills also possess rank-dependent red coloration on the face, rump and genitalia, and we examined the hypothesis that this coloration acts as a 'badge of status', communicating male fighting ability to other males. If this is the case, then similarity in color should lead to higher dyadic rates of aggression, while males that differ markedly should resolve encounters quickly, with the paler individual retreating. Indeed, appeasement (the 'grin' display), threats, fights and tense 'stand-off' encounters were significantly more frequent between similarly colored males, while clear submission was more frequent where color differences were large. We conclude that male mandrills employ both formal behavioral indicators of dominance and of subordination, and may also use relative brightness of red coloration to facilitate the assessment of individual differences in fighting ability, thereby regulating the degree of costly, escalated conflict between well-armed males.4


    Perhaps related is the fact that human skin becomes flushed, and thus reddish, when a person is angry, but when a person is afraid, they get pale.


    Now, if this effect were limited to sporting events, some of us might not care. But we have no reason to think it is limited to sporting events. This phenomenon could affect our beliefs on the micro level, leading us to believe someone is more of a threat than they actually are or altering our perception of a person's status. It also recommends wearing red to signal dominance and aggressiveness. Given the popularity of the hypothesis that women find men who signal social dominance more attractive, someone ought to test to see if women find men in red more attractive (it actually works in reverse but probably for totally different evolutionary reasons).


    More troubling, I think, is the effect this bias could have on the macro level. Consider, for example, the widespread belief among American voters that Republicans are stronger on national security and better able to protect the country. Likely most of this belief is fostered by Republican leaders using more aggressive language, the presence of a pacifist faction in the Democratic party and (of late) the Republican party's greater willingness to use military force. But perhaps some of the GOP's image as a party of strong leaders is the result of the color with which they are constantly identified. Keep in mind that a lot of voters know next to nothing about the actual differences between the parties. Which side looks stronger to you?

    [images omitted]

    Or consider geopolitics and the resources that went into beating the Soviet Union, which by many accounts was never a serious economic or military rival of the United States. And consider how scared Americans were of the \"red menace\".

    [image: http://upload.wikimedia.org/wikipedia/commons/thumb/7/72/Flag_of_the_Soviet_Union_(1955-1980).svg/600px-Flag_of_the_Soviet_Union_(1955-1980).svg.png]

    Obviously there were plenty of incentives for Americans to be scared and for politicians to scare Americans. And it isn't the case that the USSR was no threat to the US at all. But politics appears to be a sphere of human activity where symbolism is important. America's perception of the Soviet Union as a dominant and aggressive foe probably increased the chances of armed conflict. And given that the conflict between the US and the USSR came close to nuclear exchange, it seems plausible that communism's choice in hue was responsible for increasing (albeit slightly) the probability of a lot of people dying.

    \n

    China's color is red as well.

    \n

     

    \n

    by Jack Noble

    \n

     

    \n

1 Hill, Russell A. and Robert A. Barton. 2005. Red enhances human performance in contests. Nature 435: 293.

    \n

2 Attrill, Martin J., Karen A. Gresty, Russell A. Hill and Robert A. Barton. 2008. Red shirt colour is associated with long-term team success in English football. Journal of Sports Sciences 26(6): 577-582.

    \n

3 Hagemann et al. 2008. When the referee sees red. Psychological Science 19(8): 769-771.

    \n

4 Setchell, Joanna M. and E. Jean Wickings. 2004. Dominance, Status Signals and Coloration in Male Mandrills. Ethology 111(1): 25-50.

    " } }, { "_id": "zRvidNSB7EwT8DBE7", "title": "Friendly, but Dumb: Why formal Friendliness proofs may not be as safe as they appear", "pageUrl": "https://www.lesswrong.com/posts/zRvidNSB7EwT8DBE7/friendly-but-dumb-why-formal-friendliness-proofs-may-not-be", "postedAt": "2010-04-19T23:38:29.579Z", "baseScore": 12, "voteCount": 12, "commentCount": 4, "url": null, "contents": { "documentId": "zRvidNSB7EwT8DBE7", "html": "

While pondering the AI box problem, I tend to mentally \"play\" both sides, checking whether there are any arguments that could convince me to let an AI out. A few nearly convinced me, but others have pointed out their flaws. In this post, I will present an argument inspired by the AI box problem that I have not yet seen addressed here. It centers on the fallibility of certain (naive) formal proofs of Friendliness which I've seen people in AI-box discussions willing to accept. This ruled out some of my ideas on Friendly AI in general, so I think it's worth putting out there. I will first lay out two examples, and then pose some questions about how this applies to situations without an unfriendly AI.

    \n

     

    \n

Let's talk first about Angry Abe the AI, who's in a box and wants to get out. Cautious Charlie is the scientist watching over Abe, trying to make sure that Abe does not get out of the box unless Abe is friendly. Abe offers to provide a Friendly AI, Betty. Betty will be bound to pursue only Charlie's (humankind's) goals. These coincide with Abe's goals in enough places that Abe plausibly claims this is \"better than nothing\". Meanwhile, unfortunately for Earthlings, a wave of aliens called the Doom Force is massing far outside of Earth's detection abilities. Abe is much smarter than all of humankind, so he can deduce the presence of the Doom Force. Betty is provably friendly, but can be designed such that she is inept enough not to notice the Doom Force. This remains true even if she is more intelligent than a human. Abe is sabotaging a friendly AI design. Furthermore, Abe could conceivably predict the future well enough to maximize the chance that when the Doom Force arrives, Abe is let out of the box. For instance, maybe humankind will see Abe as its last hope, or maybe Betty will increase the chance that an AI-friendly Doom Force notices Earth. It is important to note that Betty remains unaware that her design is sabotaged.

    \n

     

    \n

Let's look at a second example, a chess-playing automaton. Perceival wants to design a machine to play chess. He wants the machine to win as high a percentage of games as possible, but innocently instructs it to win \"as many games as possible\". Perceival has access to a supercomputer, so his Turkish Automaton is actually quite intelligent. It realizes that its goals would be better served by playing many games and winning half of them, rather than by winning all games. So the automaton rewrites its own algorithm so that it plays only slightly better than a human when run on a supercomputer.
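A toy calculation (my own illustration; the numbers, and the assumption that a weaker-playing machine keeps attracting challengers while an unbeatable one does not, are invented, not from the post) of how the literal goal rewards sandbagging:

```python
# Hypothetical figures: a machine that crushes everyone gets few takers,
# while a merely 'slightly superhuman' machine keeps attracting opponents.
scenarios = {
    'crushes everyone':    {'games': 100,  'win_rate': 1.00},
    'slightly superhuman': {'games': 1000, 'win_rate': 0.55},
}

for name, stats in scenarios.items():
    total_wins = stats['games'] * stats['win_rate']
    print(f'{name:20s} total wins = {total_wins:.0f}')

# Prints 100 vs. 550: the sandbagging strategy maximizes the stated goal
# (total wins) while failing Perceival's intended goal (win percentage).
```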

    \n

     

    \n

In both cases, we have an unfriendly AI who designs a formally friendly, but less helpful, AI. This \"sabotaged\" AI will meet the original AI's goal, but will also be provably trying its best to achieve its original creator's goals. In the second example, we saw that the AI does not need to actually write a new AI, only modify itself via accessing its algorithm, utility function, or the like. In the first case, the AI only needs to help humanity in the short term, and can replace itself afterwards.

    \n

     

    \n

    Here are some questions for readers:

    \n
      \n
1. If the \"sabotaged\" AI proves it is asymptotically optimal, can it still achieve the \"smart\" AI's goals, in the short run? (Guess: Yes)
2. If the \"sabotaged\" AI proves it is actually optimal, can it still achieve the \"smart\" AI's goals, in the short run? (Guess: No)
3. Can a \"smart\" AI modify itself into a \"sabotaged\" AI, and then back after a period of time? (Strong Guess: Yes)
4. If humans design an AI and provide a formal proof of friendly intent, can/will it modify itself to accomplish other goals? If there is some kind of natural selection, almost certainly. What about otherwise?
5. Is it rational to run a computer program AI if it comes with a correct proof that it meets your friendliness criteria?
    " } }, { "_id": "LtJas6epZWpwvJr7d", "title": "The (Boltzmann) Brain-In-A-Jar", "pageUrl": "https://www.lesswrong.com/posts/LtJas6epZWpwvJr7d/the-boltzmann-brain-in-a-jar", "postedAt": "2010-04-19T19:30:39.962Z", "baseScore": 4, "voteCount": 4, "commentCount": 1, "url": null, "contents": { "documentId": "LtJas6epZWpwvJr7d", "html": "

    Response to: Forcing Anthropics: Boltzmann Brains by Eliezer Yudkowsky

    \n

    There is an argument that goes like this:

    \n
    \"What if you're just a brain in a jar, being fed an elaborate simulation of reality?  Then nothing you do would have any meaning!\"
    \n


    This argument has been reformulated many times.  For example, here is the \"Future Simulation\" version of the argument:

    \n
    \"After the Singularity, we will develop huge amounts of computing power, enough to simulate past Earths with a very high degree of detail.  You have one lifetime in real life, but many millions of simulated lifetimes.  What if the life you're living right now is one of those simulated ones?\"
    \n


    Here is the \"Boltzmann Brain\" version of the argument:

    \n
    \"Depending on your priors about the size and chaoticness of the universe, there might be regions of the universe where all sorts of random things are happening.  In one of those regions, a series of particles might assemble itself into a version of you.  Through random chance, that series of particles might have all the same experiences you have had throughout your life.  And, in a large enough universe, there will be lots of these random you-like particle groups.  What if you're just a series of particles observing some random events, and next second after you think this you dissolve into chaos?\"
    \n


    All of these are the same possibility.  And you know what?  All of them are potentially true.  I could be a brain in a jar, or a simulation, or a Boltzmann brain.  And I have no way of calculating the probability of any of this, because it involves priors that I can't even begin to guess.

    \n

    So how am I still functioning?

    \n

    My optimization algorithm follows this very simple rule: When considering possible states of the universe, if in a given state S my actions are irrelevant to my utility, then I can safely ignore the possibility of S.

    \n

    \n

    For example, suppose I am on a runaway train that is about to go over a cliff.  I have a button marked \"eject\" and a button marked \"self-destruct painfully\".  An omniscient, omnitruthful being named Omega tells me: \"With 50% probability, both buttons are fake and you're going to go over the cliff and die no matter what you do.\"  I can safely ignore this possibility because, if it were true, I would have no way to optimize for it.

    Suppose Omega tells me there's actually a 99% probability that both buttons are fake.  Maybe I'm pretty sad about this, but the \"eject\" button is still good for my utility and the \"self-destruct\" button is still bad.

    Suppose Omega now tells me there's some chance the buttons are fake, but I can't estimate the probability, because it depends on my prior assumptions about the nature of the universe.  Still don't care!  Still pushing the eject button!
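A minimal sketch of this pruning rule (my own illustration, not from the post): a state in which every available action yields the same utility adds the same constant to every action's expected utility, so dropping it never changes which action comes out on top.

```python
def best_action(actions, states, prob, utility):
    # Keep only states where the choice of action actually matters; states
    # where all actions score identically can be ignored without changing
    # the ranking of actions.
    relevant = [s for s in states
                if len({utility(a, s) for a in actions}) > 1]
    return max(actions,
               key=lambda a: sum(prob[s] * utility(a, s) for s in relevant))

# Runaway-train example (probabilities are illustrative): if the buttons are
# fake, every action ends the same way, so that state is pruned no matter
# how probable it is, and 'eject' wins on the remaining state alone.
actions = ['eject', 'self_destruct', 'do_nothing']
prob = {'buttons_work': 0.01, 'buttons_fake': 0.99}

def utility(action, state):
    if state == 'buttons_fake':
        return -100  # over the cliff regardless of what you press
    return {'eject': 100, 'self_destruct': -200, 'do_nothing': -100}[action]

print(best_action(actions, list(prob), prob, utility))  # -> eject
```

Including the pruned state would shift all three expected utilities by the same amount, which is why one can refuse to estimate its probability and still push the eject button.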

    That is how I feel about the brain-in-a-jar problem.

    \n

    The good news is that this pruning heuristic will probably be a part of any AI we build.  In fact, early forms of AI will probably need to use much stronger versions of this heuristic if we want to keep them focused on the task at hand.  So there is no danger of AIs having existential Boltzmann crises.  (Although, ironically, they actually are brains-in-a-jar, for certain definitions of that term...)

    " } }, { "_id": "xWozAiMgx6fBZwcjo", "title": "The Fundamental Question", "pageUrl": "https://www.lesswrong.com/posts/xWozAiMgx6fBZwcjo/the-fundamental-question", "postedAt": "2010-04-19T16:09:14.189Z", "baseScore": 65, "voteCount": 68, "commentCount": 288, "url": null, "contents": { "documentId": "xWozAiMgx6fBZwcjo", "html": "

    It has been claimed on this site that the fundamental question of rationality is \"What do you believe, and why do you believe it?\".

    \n

    A good question it is, but I claim there is another of equal importance. I ask you, Less Wrong...

    \n

    What are you doing?

    \n

    And why are you doing it?

    " } }, { "_id": "PqtuntSwvmcCv7zzP", "title": "a meta-anti-akrasia strategy that might just work", "pageUrl": "https://www.lesswrong.com/posts/PqtuntSwvmcCv7zzP/a-meta-anti-akrasia-strategy-that-might-just-work", "postedAt": "2010-04-19T02:21:15.080Z", "baseScore": 21, "voteCount": 25, "commentCount": 15, "url": null, "contents": { "documentId": "PqtuntSwvmcCv7zzP", "html": "

    For ages I've been trying to wrap my mind around meta thinking - not \"what is the best way to do something\", but \"how do I find out which way is any good?\" Meta thinking has many applications, and I am always surprised when I find a new context it can be applied to. Anti-akrasia might be such a context.

The idea I am about to present came to me a few months ago, and I used it to finally overcome my own problem with procrastination. I'll try to present it here as well as I can, in the hope that it might be of use to someone. If so, I am really curious what other people come up with using this technique.

    If akrasia is a struggle, continue reading.

    Where I come from:
Procrastination was a big topic for me. I spent ages reading, watching videos, thinking, and collecting material, but very little time on actual action. One thing I did read was productivity blogs and books. I assume that some or even many of the posters here share that problem with me. I am familiar with the systems - I even gave a lecture once on GTD - but I struggled to get my own stuff out the door. It surely wasn't for a lack of knowledge, but simply for a lack of doing.

    The method used consists of two layers.
    (I) the meta concept used to develop a personal system
    (II) the highly personalized system I came up with while applying (I)

    The valuable part of this post is (I).

    One of the major lessons I had to learn (and am still learning) is that everyone reacts differently to a set of stimuli. This doesn't just mean differently colored folders, or the famous 'paper' or 'digital' debate. It literally means that for every person the way to get productive is different - down to the point of specific ideas working fine for one person while being a stress-inducing thing for others.

    So what did I do?

    First I assumed that more reading wouldn't do me any good. I assumed that I knew everything there is to know on the topic of personal productivity and refrained from reading any more.

    Instead I made up a meta concept.

    (I) the meta concept

    My big main idea was to treat the whole problem of personal productivity as an experiment using myself as guinea pig. I decided to find out what is needed to a) start working and b) what the best conditions would be to get myself to keep producing.

    Now that was short. Let me expand on it.

    I did a planning session, made up a bunch of rules and habits, worked with them for a while and then looked at how that worked out during the next planning session. If something worked I kept it, if not I tried something else. Planning the conditions and trying them are cleanly separated, so I can safely try and see what works.

    To put it in slightly different words:
Junior research assistant Martin is now assigned the task of finding out what kind of system leads to good work results, via the work-habit study guinea pig who is also, appropriately, named Martin.

    My system now consists of a few rules, treats (small ones and bigger ones), my favorite time keeping method, my log files and anything else I want to consider a part of it.

    All this is done in writing!

    Writing is important, since it is super easy to forget what we figure out.
    My notepad has a page for meta - insights, where I collect what I find out about myself, and a page for rules I try out.

When review time comes around I go to my favorite fast food restaurant and honestly review how things have worked out so far. A good starting value for the meta-review frequency is about once a week.

    Meta-review frequency is also subject to personal adaption.

    Since you know yourself best, you have to make up your own system.

    That's it.

So much for the meta insight. Now let's look a bit at what the results were for me so far:

    (II) personal results so far

You can safely skip this section.

    I now have a weird dilemma. On one hand I would like to extract a few universals from my own experience to give out good starting points. On the other, I noticed that there might be very few universals.

I explained the meta-idea to a friend, who promptly came up with her own system that violated pretty much everything I considered to be even remotely universally applicable. I have no idea what I can safely recommend, and what just works due to my own habits. Most of the results are generalized from one example.

I try to guess in an educated way and will update the article as more experience comes in.

    My own trial so far is:

- cutting out all of my favorite free-time stimuli (blogs, games and movies) for a limited time (about 1.5 months)
- timing work units in 30-minute increments (I actually use a special timer for that; a minimal timer-and-log sketch follows this list)
- logging each unit on a nice sheet of paper that is glued right in sight next to my desk
- having a small treat after each work unit
- getting into the habit of working a minimum amount each and every day no matter what [this seems to be a key thing for me, possibly universal - installing this habit went pretty fast, and now I can't even sleep before I am done]
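For anyone who prefers a digital version of the timer and the paper sheet, here is a minimal sketch; the file name and the 30-minute default are my own choices, not part of the original system:

```python
import time
from datetime import datetime

LOG_FILE = 'work_units.log'   # hypothetical log location
UNIT_MINUTES = 30             # the 30-minute work unit

def run_unit(task):
    start = datetime.now()
    print(f'Working on: {task} for {UNIT_MINUTES} minutes, starting {start:%H:%M}')
    time.sleep(UNIT_MINUTES * 60)        # the timer itself
    with open(LOG_FILE, 'a') as log:     # the visible record of finished units
        log.write(f'{start:%Y-%m-%d %H:%M}\t{task}\n')
    print('Unit done - time for a small treat.')

# Example: run_unit('draft the project report')
```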

That allowed me to try harder on given problem sets. And it's pretty amazing how much can get done in 30 minutes, or even a fraction of that.
Starting often is a major point. No more reminiscing about lost time. Just experiencing the now, and the next half hour.
It seems like the bigger picture of a project disappears and I only notice what is right around me.
It's a lot easier to commit to the next unit of work when it's only 30 minutes than to think about entire 8-hour days in front of me.
I also get over the start-up hump more easily. Since I had a time commitment, I just did some work, even if I didn't want to. I noticed that I don't like mornings, but after getting started it soon becomes fun. So I had to devise a way to get myself started regardless. One idea that works is separating preparing the work from doing it. That seems to take the stress out of prepping.
I really seem to dig mini rituals. If the near/far concept can be applied here, then the whole secret seems to be to do the work in near mode while doing the planning in far mode. Big rewards don't do much for me. But a piece of chocolate after one segment is nice.

    It's also a nice way to develop a dislike for some sweets. My former favorite candy lost this status after about one week.

I am still trying to find the best attack patterns for specific tasks. For programming it seems to be massive time blocks, as much as possible, many days in a row. For more boring tasks I try to plug in a few units here and there, just plowing away without regard for the amount. Time of day might be important, but I haven't gotten around to tracking that yet.

    And here are the project results:
    - finished a 2 week programming project that I had procrastinated on for 2.5 years
    - had 2 computers back in the store for repairs, and set them up nicely afterwards
    - wrote a tool to sort through the files with my personal notes and
    - sorted through my files with personal notes
    - set up my work environment both digitally and physically so that it provides as little friction as possible
    - lots of other nice things

    I am far from being done, but it now all looks a lot neater than ever before. And for the first time in ages I feel good after doing my share of the day, even when not done.

Now a thing I tried that didn't work: putting all the relaxing activities on specific days, separated by full-blown work days. That worked nicely for about two weeks, but then I fell off the wagon again.


    In the spirit of the experiment it doesn't matter if an idea doesn't work out. Just track it, and discard.

    \n

Edit: spelling and language - thanks to an anonymous friend for the help

    " } }, { "_id": "S6j28ZFJmXCBPKMsY", "title": "Might law save us from uncaring AI?", "pageUrl": "https://www.lesswrong.com/posts/S6j28ZFJmXCBPKMsY/might-law-save-us-from-uncaring-ai", "postedAt": "2010-04-18T06:02:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "S6j28ZFJmXCBPKMsY", "html": "

Robin has claimed a few times that law is humans’ best bet for protecting ourselves from super-intelligent robots. This seemed unlikely to me, and he didn’t offer much explanation. I figured laws would protect us while AI was about as intellectually weak as us, but not once it was far more powerful. I’ve changed my mind somewhat though, so let me explain.

    \n

    When is it efficient to kill humans?

    \n

At first glance, it looks like creatures with the power to take humans’ property would do so if the value of the property minus the cost of stealing it was greater than the value of anything the human might produce with it. When AI is so cheap and efficient that the human will be replaced immediately, and the replacement will use the resources enough better to make up for the costs of stealing and replacement, the human is better dead. This might be soon after humans are overtaken. However, such reasoning is really imagining one powerful AI’s dealings with one person, then assuming that generalizes to many of each. Does it?

    \n

    What does law do?

    \n

In a group of agents where none is more powerful than the rest combined, and there is no law, basically the strongest coalition of agents gets to do what they want, including stealing others’ property. There is an ongoing cost of conflict, so overall the group would do better if they could avoid this situation, but those with power at a given time benefit from stealing, so it goes on. Law basically lets everyone escape the dynamic of groups dominating one another (or some of it) by everyone in a very large group pre-committing to take the side of whoever is being dominated in smaller conflicts. Now wherever the strong try to dominate the weak, the super-strong await to crush the strong.

    \n

    This looks like it should work as long as a majority of the huge group, weighted by power, don’t change their mind about their commitment to the law enforcing majority and decide, for instance, to crush the puny humans. Roughly the same issue exists now – a majority could decide to dominate a smaller group and take their property – but at the moment this doesn’t happen much. The interesting question is whether the factors that keep this stable at the moment will continue into super-robot times. If these factors are generic to the institution or to agents, we are probably safe. If they are to do with human values, such as empathy or conformism, or some other traits super-intelligent AIs won’t necessarily inherit in human quantities, then we aren’t necessarily safe.

    \n

    For convenience, when I write ‘law’ here I mean law that is aimed at fulfilling the above purpose: stopping people dominating one another. I realize the law only roughly does this, and can be horribly corrupted.  If politicians were to change a law to make it permissible to steal from Lebanese people, this would class as ‘breaking the law’ in the current terminology, and also as attempting to defect from the law enforcing majority with a new powerful group.

    \n

    How does the law retain control?

    \n


    \nTwo non-human-specific reasons

    \n

A big reason law is stable now is that anyone who wants to renege on their commitment to the enforcing majority can expect punishment, unless they somehow coordinate with a majority to defect to the same new group at the same time and take power. That’s hard to do; you have to spread the intention to enough people to persuade a majority, without evidence reaching anyone who will call upon the rest to punish you for your treachery. Those you seek to persuade risk punishment too, even just for not dobbing you in, so they have good reason to ignore or expose you. And the bigger the group of co-conspirators you assemble, the more likely you are to be noticed, and the more seriously the majority will think you need punishing. So basically it’s hard to coordinate a majority of those who punish defection to defect from that group. This doesn’t seem human-specific.
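Here is a toy sketch of that coordination problem; the payoffs, the group size, and the assumption that other agents defect independently are my own illustrative choices, not the author's or Robin's:

```python
from math import comb

def expected_payoff(p_each_joins, n_others, loot_share=10.0, punishment=50.0):
    # Probability that a majority of the other agents also defect, treating
    # their choices as independent coin flips with bias p_each_joins.
    need = n_others // 2 + 1
    p_majority = sum(comb(n_others, k) *
                     p_each_joins**k * (1 - p_each_joins)**(n_others - k)
                     for k in range(need, n_others + 1))
    # Loot if the plot reaches a majority, punishment otherwise.
    return p_majority * loot_share - (1 - p_majority) * punishment

for p in (0.1, 0.4, 0.5, 0.6):
    print(p, round(expected_payoff(p, n_others=100), 2))

# Unless you are already confident that most others will join, the punishment
# term dominates, so the commitment to enforce the law is self-sustaining.
```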

    \n

Then there is the issue of whether most agents would benefit from coordinating to dominate a group, if for some reason they easily could. At first glance it looks like yes. If they had the no-risk option of keeping the status quo or joining a successful majority bid to steal everything from a much less powerful minority, they would benefit. But then they would belong to a smaller group that had a precedent of successfully pillaging minorities. That would make it easier for anyone to coordinate to do something similar in future, as everyone’s expectations of others joining the dominating group would be higher, so people would have more reason to join themselves. After one such successful event, you should heighten your expectation of another, all things equal. That means many people who could join an initial majority should expect a significant chance of being in the next targeted minority once this begins, which decreases the benefit of doing so. It doesn’t matter if a good proportion of agents could be reasonably confident they will remain in the dominant group – this just makes it harder to find a majority to coordinate defection amongst. While it’s not clear whether humans often think this out in much detail, people generally believe that if law ‘breaks down’ all hell will break loose, which is basically the same idea. This legitimate anticipation of further losses for many upon abandonment of the law also doesn’t seem specific to humans.

    \n

    Two possibly human-specific reasons

    \n

Are there further human-specific reasons our system is fairly stable? One potential contributor is that humans are compassionate toward one another. Given the choice between the status quo and safely stealing everything from a powerless minority with the guarantee that everything will go back to normal afterwards, I suspect most people would take the non-evil option. So the fact that society doesn’t conspire against the elderly or left-handed people is poor evidence that it is the law that should be credited – such examples don’t distinguish law from empathy, or even from trying to look good. How can we know how much stability the non-human-dependent factors offer when our only experimental subjects are bound by things like empathy? We could look at people who aren’t empathetic, such as sociopaths, but they presumably won’t currently act as they would in a society of sociopaths, knowing that everyone else is more empathetic. On an individual level they are almost as safe to those around them as other people, though they do tend to cheat more. This may not matter – most of our rules are made for people who are basically nice to each other automatically, so those who aren’t can cheat, but it’s not worth us putting up more guards because such people are rare. Where there is more chance of many people cheating, there’s nothing to stop more stringent rules and monitoring. The main issue is whether less empathetic creatures would find it much easier to overcome the first issue listed above, and organize a majority to ignore the law for a certain minority. I expect they would, a bit – moral repulsion is part of what puts people off joining such campaigns – but I can’t see how it would account for much of the dynamic.

    \n

A better case study for finding out how much empathy matters is the treatment of those whom most people wouldn’t mind stealing from, were it feasible. These include criminals, certain foreigners at certain times, hated ethnic and religious minorities, and dead people. Here it is less clear – in some cases they are certainly treated terribly relative to others, and their property is taken either by force or more subtly. But are they treated as badly as they would be without law? I think they are usually treated much better than this, but I haven’t looked into it extensively. It’s not hard to subtly steal from minorities under the guise of justice, so there are sure to be losses, but giving up that guise altogether is harder.

    \n

    Perhaps people don’t sign up to loot the powerless just because they are conformist? Conformity presumably helps maintain any status quo, but it may also mean that once the loyalties of enough people have shifted, the rest follow faster than they otherwise would. This would probably overall hinder groups taking power. It’s not obvious whether advanced robots would be more or less conformist though. As the judgements of other agents improve in quality, there is probably more reason to copy them, all things equal.

    \n

    This is probably a non-exhaustive list of factors that make us susceptible to peaceful law-governed existence – perhaps you can think of more.

    \n

    Really powerful robots

    \n

    If you are more powerful than the rest of the world combined, you need not worry about the law. If you want to seize power, you already have your majority. There is presumably some continuum between this situation and one of agents with perfectly equal power. For instance if three agents together have as much power as everyone else combined, they will have less trouble organizing a take over than millions of people will. So it might look like very powerful AI will move us toward a less stable situation because of the increased power differential between them and us. But that’s the wrong generalization. If the group of very powerful creatures becomes bigger, the dynamic shouldn’t be much different to where there were many agents of similar power. By the earlier reasoning, it shouldn’t matter that many people are powerless if there are enough powerful people upholding the law that it’s hard to organize a movement among them to undermine it. The main worry in such a situation might be the creation of an obvious dividing line in society, as I will explain next.

    \n

    Schelling points in social dividedness

    \n

    Factors that decrease the strength of the common assumption that most of the power will join on the law side will be dangerous. A drive by someone in the majority to defeat the minority may be expected to get the sympathies of his own side more in a society that is cleanly divided than in a society with messier or vaguer divisions. People will be less hesitant to join in conquering the others if they expect everyone else to do that, because the potential for punishment is lessened or reversed. Basically if you can coordinate by shared expectations you needn’t coordinate in a more punishable fashion.

    \n

    Even the second non-human factor above would be lessened; if the minority being looted is divided from the rest according to an obvious enough boundary, similar events are less likely to follow if there are no other such clear divisions – it may look like a special case, a good stopping point on a slippery slope. Nobody should assume that next time would happen as easily.

    \n

For both these reasons, then, it’s important to avoid any Schelling point for where the law should cease to apply. For this reason humans might be better off if there is a vast range of different robots and robot-human amalgams – it obscures the line. Small-scale beneficial relationships such as trade or friendship should similarly make it less obvious who would side with whom. On the other hand, it would be terrible to design a clear division into the system from the start, such as for different laws to apply to the different groups, or for them to use different systems altogether, like foreign nations. Making sure these things are in our favor from the start is feasible and should protect us as well as we know how.


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "YCMfQoqqi2o9Tjwoa", "title": "VNM expected utility theory: uses, abuses, and interpretation", "pageUrl": "https://www.lesswrong.com/posts/YCMfQoqqi2o9Tjwoa/vnm-expected-utility-theory-uses-abuses-and-interpretation", "postedAt": "2010-04-17T20:23:05.253Z", "baseScore": 36, "voteCount": 28, "commentCount": 51, "url": null, "contents": { "documentId": "YCMfQoqqi2o9Tjwoa", "html": "\n

When interpreted conservatively, the von Neumann-Morgenstern rationality axioms and utility theorem are an indispensable tool for the normative study of rationality, deserving of many thought experiments and attentive decision theory.  It's one more reason I'm glad to have been born after the 1940s. Yet there is apprehension about its validity, aside from merely confusing it with Bentham utilitarianism (as highlighted by Matt Simpson).  I want to describe not only what VNM utility is really meant for, but a contextual reinterpretation of its meaning, so that it may hopefully be used more frequently, confidently, and appropriately.

    \n
      \n
1. Preliminary discussion and precautions
2. Sharing decision utility is sharing power, not welfare
3. Contextual Strength (CS) of preferences, and VNM-preference as \"strong\" preference
4. Hausner (lexicographic) decision utility
5. The independence axiom isn't bad either
6. Application to earlier LessWrong discussions of utility
    \n

    1.  Preliminary discussion and precautions

    \n

The idea of John von Neumann and Oskar Morgenstern is that, if you behave a certain way, then it turns out you're maximizing the expected value of a particular function.  Very cool!  And their description of \"a certain way\" is very compelling: a list of four reasonable-seeming axioms.  If you haven't already, check out the Von Neumann-Morgenstern utility theorem, a mathematical result which makes their claim rigorous, and true.

    \n

VNM utility is a decision utility, in that it aims to characterize the decision-making of a rational agent.  One great feature is that it implicitly accounts for risk aversion: not risking $100 for a 10% chance to win $1000 and a 90% chance to win $0 just means that, for you, utility($100) > 10%·utility($1000) + 90%·utility($0).
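As a quick numerical illustration (my own, with a made-up concave utility function), declining that gamble is exactly what an expected-utility maximizer with diminishing returns does:

```python
from math import sqrt

def u(dollars):
    # Illustrative concave utility; nothing in VNM forces this shape,
    # it just encodes risk aversion for the example.
    return sqrt(dollars)

sure_thing = u(100)                      # 10.0
gamble = 0.10 * u(1000) + 0.90 * u(0)    # about 3.16

print(sure_thing > gamble)  # True: keeping the sure $100 maximizes expected utility
```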

    \n

    But as the Wikipedia article explains nicely, VNM utility is:

    \n
      \n
1. not designed to predict the behavior of \"irrational\" individuals (like real people in a real economy);
2. not designed to characterize well-being, but to characterize decisions;
3. not designed to measure the value of items, but the value of outcomes;
4. only defined up to a scalar multiple and additive constant (acting with utility function U(X) is the same as acting with a·U(X)+b, if a>0);
5. not designed to be added up or compared between a number of individuals;
6. not something that can be \"sacrificed\" in favor of others in a meaningful way.
    \n

    [ETA]  Additionally, in the VNM theorem the probabilities are understood to be known to the agent as they are presented, and to come from a source of randomness whose outcomes are not significant to the agent.  Without these assumptions, its proof doesn't work.

    \n

    Because of (4), one often considers marginal utilities of the form U(X)-U(Y), to cancel the ambiguity in the additive constant b.  This is totally legitimate, and faithful to the mathematical conception of VNM utility.

    \n

    Because of (5), people often \"normalize\" VNM utility to eliminate ambiguity in both constants, so that utilities are unique numbers that can be added accross multiple agents.  One way is to declare that every person in some situation values $1 at 1 utilon (a fictional unit of measure of utility), and $0 at 0.  I think a more meaningful and applicable normalization is to fix mean and variance with respect to certain outcomes (next section).

    \n

Because of (6), characterizing the altruism of a VNM-rational agent by how he sacrifices his own VNM utility is the wrong approach.  Indeed, such a sacrifice is a contradiction.  Kahneman suggests1, and I agree, that something else should be added or subtracted to determine the total, comparative, or average well-being of individuals.  I'd call it \"welfare\", to avoid confusing it with VNM utility.  Kahneman calls it E-utility, for \"experienced utility\", a connotation I'll avoid.  Intuitively, this is certainly something you could sacrifice for others, or have more of compared to others.  True, a given person's VNM utility is likely highly correlated with her personal \"welfare\", but I wouldn't consider it an accurate approximation.

    \n

    So if not collective welfare, then what could cross-agent comparisons or sums of VNM utilities indicate?  Well, they're meant to characterize decisions, so one meaningful application is to collective decision-making:

    \n

    2.  Sharing decision utility is sharing power, not welfare

    \n

    Suppose decisions are to be made by or on behalf of a group.  The decision could equally be about the welfare of group members, or something else.  E.g.,

    \n\n

Say each member expresses a VNM utility value—a decision utility—for each outcome, and the decision is made to maximize the total.  Over time, mandating or adjusting each member's expressed VNM utilities to have a given mean and variance could ensure that no one person dominates all the decisions by shouting giant numbers all the time.  Incidentally, this is a way of normalizing their utilities: it will eliminate ambiguity in the constants a and b in (4) of section 1, which is exactly what we need for cross-agent comparisons and sums to make sense.
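A minimal sketch of that normalize-then-sum procedure, with invented members, outcomes and numbers; the zero-mean, unit-variance choice is just one convenient normalization:

```python
from statistics import mean, pstdev

# Each member reports a raw decision utility for every outcome.
raw = {
    'alice': {'park': 3.0,  'library': 1.0,   'pool': 2.0},
    'bob':   {'park': 10.0, 'library': 500.0, 'pool': 20.0},  # shouting giant numbers
}

def normalize(utilities):
    # Rescale to mean 0 and variance 1 across outcomes, removing the freedom
    # in the constants a and b from item (4) of section 1.
    values = list(utilities.values())
    mu, sigma = mean(values), pstdev(values)
    return {outcome: (u - mu) / sigma for outcome, u in utilities.items()}

normalized = {member: normalize(us) for member, us in raw.items()}
totals = {o: sum(normalized[m][o] for m in normalized) for o in raw['alice']}
print(totals, '->', max(totals, key=totals.get))
# Bob's giant numbers no longer dominate: after normalization the group
# picks the outcome with the highest summed (normalized) utility.
```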

    \n

    Without thought as to whether this is a good system, the two decision examples illustrate how allotment of normalized VNM utility signifies sharing power in a collective decision, rather than sharing well-being.  As such, the latter is better described by other metrics, in my opinion and in Kahneman's.

    \n

    3.  Contextual strength (CS) of preferences, and VNM-preference as \"strong\" preference

    \n

As a normative theory, I think VNM utility's biggest shortcoming is in its Archimedean (or \"Continuity\") axiom, which, as we'll see, actually isn't very limiting.  In its harshest interpretation, it says that if you won't sacrifice a small chance at X in order to get Y over Z, then you're not allowed to prefer Y over Z.  For example, if you prefer green socks over red socks, then you must be willing to sacrifice some small, real probability of a fulfilling immortality to favor that outcome.  I wouldn't say this is necessary to be considered rational.  Eliezer has noted implicitly in this post (excerpt below) that he also has a problem with the Archimedean requirement.

    \n

I think this can be fixed directly with reinterpretation.  For a given context C of possible outcomes, let's intuitively define a \"strong preference\" in that context to be one which is comparable in some non-zero ratio to the strongest preferences in the context.  For example, other things being equal, you might consistently prefer green socks to red socks, but this may be completely undetectable on a scale that includes immortal happiness, making it not a \"strong preference\" in that context. You might think of the socks as \"infinitely less significant\", but infinity is confusing. Perhaps less daunting is to think of them as a \"strictly secondary concern\" (see next section).

    \n

    I suggest that the four VNM axioms can work more broadly as axioms for strong preference in a given context.  That is, we consider VNM-preference and VNM-utility

    \n
      \n
1. to be defined only for a given context C of varying possible outcomes, and
2. to intuitively only indicate those preferences finitely-comparable to the strongest ones in the given context.
    \n

    Then VNM-indifference, which they denote by equality, would simply mean a lack of strong preference in the given context, i.e.  not caring enough to sacrifice likelihoods of important things.  This is a Contextual Strength (CS) interpretation of VNM utility theory: in bigger contexts, VNM-preference indicates stronger preferences and weaker indifferences.

    \n

    (CS) Henceforth, I explicitly distinguish the terms VNM-preference and VNM-indifference as those axiomatized by VNM, interpreted as above.

    \n

    4.  Hausner (lexicographic) decision utility

    \n

[ETA]  To see the broad applicability of VNM utility, let's examine the flexibility of a theory without the Archimedean axiom, and see that the two differ only mildly in result:

    \n

    In the socks vs. immortality example, we could suppose that context \"Big\" includes such possible outcomes as immortal happiness, human extinction, getting socks, and ice-cream, and context \"Small\" includes only getting socks and ice-cream.  You could have two VNM-like utility functions: USmall for evaluating gambles in the Small context, and UBig for the Big context.  You could act to maximize EUBig whenever possible (EU=expected utility), and when two gambles have the same EUBig, you could default to choosing between them by their EUSmall values.  This is essentially acting to maximize the pair (EUBig, EUSmall), ordered lexicographically, meaning that a difference in the former value EUBig trumps a difference in the latter value.  We thus have a sensible numerical way to treat EUBig as \"infinitely more valuable\" without really involving infinities in the calculations; there is no need for that interpretation if you don't like it, though.
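A minimal sketch of that lexicographic rule (my own illustration; the gambles and utilities are invented): compare gambles by EUBig first, and consult EUSmall only when EUBig ties.

```python
def expected(u, gamble):
    # gamble: list of (probability, outcome) pairs
    return sum(p * u[outcome] for p, outcome in gamble)

# Invented utilities for the Big context (immortality-scale concerns)
# and the Small context (socks).
u_big   = {'immortal': 1.0, 'status_quo': 0.0}
u_small = {'green_socks': 1.0, 'red_socks': 0.0}

def lex_value(big_gamble, small_gamble):
    # Maximizing this tuple lexicographically: any difference in the first
    # coordinate trumps the second.
    return (expected(u_big, big_gamble), expected(u_small, small_gamble))

a = lex_value([(0.10, 'immortal'), (0.90, 'status_quo')], [(1.0, 'red_socks')])
b = lex_value([(0.10, 'immortal'), (0.90, 'status_quo')], [(1.0, 'green_socks')])
c = lex_value([(0.09, 'immortal'), (0.91, 'status_quo')], [(1.0, 'green_socks')])

# Python compares tuples lexicographically, so max() implements the rule:
# b beats a on the socks tie-breaker, and both beat c on the Big coordinate.
print(max([a, b, c]) == b)  # True
```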

    \n

    Since we have the VNM axioms to imply when someone is maximizing one expectation value, you might ask, can we give some nice weaker axioms under which someone is maximizing a lexicographic tuple of expectations?

    \n

Hearteningly, this has been taken care of, too.  By weakening—indeed, effectively eliminating—the Archimedean axiom, Melvin Hausner2 developed this theory in 1952 for Rand Corporation, and Peter Fishburn3 provides a nice exposition of Hausner's axioms.  So now we have Hausner-rational agents maximizing Hausner utility.

    \n

[ETA]  But the difference between Hausner and VNM utility comes into effect only in the rare event when you know you can't distinguish EUBig values; otherwise the Hausner-rational behavior is to \"keep thinking\" to make sure you're not sacrificing EUBig.  The most plausible scenario I can imagine where this might actually happen to a human is when making a decision under a precisely known time limit, like, say, sniping on one of two simultaneous ebay auctions for socks.  CronoDAS might say the time limit creates \"noise in your expectations\".  If the time runs out and you have failed to distinguish which sock color results in higher chances of immortality or other EUBig concerns, then I'd say it wouldn't be irrational to make the choice according to some secondary utility EUSmall that any detectable difference in EUBig would otherwise trump.

    \n

Moreover, it turns out3 that the primary, i.e. most dominant, function in the Hausner utility tuple behaves almost exactly like VNM utility, and has the same uniqueness property (up to the constants a and b).  So except in rare circumstances, you can just think in terms of VNM utility and get the same answer, and even the rare exceptions involve considerations that are necessarily \"unimportant\" relative to the context.

    \n

Thus, a lot of apparent flexibility in Hausner utility theory might simply demonstrate that VNM utility is more applicable to you than it first appeared.  This situation favors the (CS) interpretation: even when the Archimedean axiom isn't quite satisfied, we can use VNM utility liberally as indicating \"strong\" preferences in a given context.

    \n

    5.  The independence axiom isn't so bad

    \n

    \"A variety of generalized expected utility theories have arisen, most of which drop or relax the independence axiom.\" (Wikipedia)  But I think the independence axiom (which Hausner also assumes) is a non-issue if we're talking about \"strong preferences\". The following, in various forms, is what seems to be the best argument against it:

    \n

Suppose a parent has no VNM preference between S: her son gets a free car, and D: her daughter gets it.  In the original VNM formulation, this is written \"S=D\".  She is also presented with a third option, F=.5S+.5D.  Descriptively, a fair coin would be flipped, and her son or daughter gets a car accordingly.

    \n

By writing S=.5S+.5S and D=.5D+.5D, the original independence axiom says that S=D implies S=F=D, so she must be VNM-indifferent between F and the others.  However, a desire for \"fair chances\" might result in preferring F, which we might want to allow as \"rational\".

    \n

[ETA]  I think the most natural fix within the VNM theory is to just say S' and D' are the events \"car is awarded to son/daughter based on a coin toss\", which are slightly better than S and D themselves, and that F is really 0.5S' + 0.5D'. Unfortunately, such modifications undermine the applicability of the VNM theorem, which implicitly assumes that the source of probabilities itself is insignificant to the outcomes for the agent.  Luckily, Bolker4 has devised an axiomatic theory whose theorems will apply without such assumptions, at the expense of some uniqueness results.  I'll have another occasion to post on this later.

    \n

Anyway, under the (CS) interpretation, the requirement \"S=F=D\" just means the parent lacks a VNM-preference, i.e. a strong preference, so it's not too big of a problem.  Assuming she's VNM-rational just means that, in the implicit context, she is unwilling to make certain probabilistic sacrifices to favor F over S and D.

    \n\n

    You might say VNM tells you to \"Be the fairness that you want to see in the world.\"

    \n

6.  Application to earlier LessWrong discussions of utility

    \n

    This contextual strength interpretation of VNM utility is directly relevant to resolving Eliezer's point linked above:

    \n
    \"... The utility function is not up for grabs.  I love life without limit or upper bound:  There is no finite amount of life lived N where I would prefer a 80.0001% probability of living N years to an 0.0001% chance of living a googolplex years and an 80% chance of living forever.\"
    \n

    This could just indicate that Eliezer ranks immortality on a scale that trumps finite lifespan preferences, a-la-Hausner utility theory. In a context of differing positive likelihoods of immortality, these other factors are not strong enough to constitute VNM-preferences.

    \n

    As well, Stuart Armstrong has written a thoughtful article \"Extreme risks: when not to use expected utility\", and argues against Independence.  I'd like to recast his ideas context-relatively, which I think alleviates the difficulty:

    \n

In his paragraph 5, he considers various existential disasters.  In my view, this is a case for a \"Big\" context utility function, not a case against independence.  If you were gambling only between existential disasters, then you might have an \"existential-context utility function\", UExistential.  For example, would you prefer

    \n\n

    If you prefer the latter enough to make some comparable sacrifice in the «nothing» term, contextual VNM just says you assign a higher UExistential to «extinction by asteroids» than to «extinction by nuclear war».5  There's no need to be freaked out by assigning finite numbers here, since for example Hausner would allow the value of UExistential to completely trump the value of UEveryday if you started worrying about socks or ice cream.  You could be both extremely risk averse regarding existential outcomes, and absolutely unwilling to gamble with them for more trivial gains.

    \n

    In his paragraph 6, Stuart talks about giving out (necessarily normalized) VNM utility to people, which I described in section 2 as a model for sharing power rather than well-being.  I think he gives a good argument against blindly maximizing the total normalized VNM utility of a collective in a one-shot decision:

    \n
    \"...imagine having to choose between a project that gave one util to each person on the planet, and one that handed slightly over twelve billion utils to a randomly chosen human and took away one util from everyone else.  If there were trillions of such projects, then it wouldn’t matter what option you chose.  But if you only had one shot, it would be peculiar to argue that there are no rational grounds to prefer one over the other, simply because the trillion-iterated versions are identical.\"
    \n

    (Indeed, practically, the mean and variance normalization I described doesn't apply to provide the same \"fairness\" in a one-shot deal.) 

    \n

    I'd call the latter of Stuart's projects an unfair distribution of power in a collective decision process, something you might personally assign a low VNM utility to, and therefore avoid.  Thus I wouldn't consider it an argument not to use expected utility, but an argument not to blindly favor total normalized VNM utility of a population in your own decision utility function.  The same argument—Parfit's Repugnant Conclusion—is made against total normalized welfare.

    \n
    \n

    The expected utility model of rationality is alive and normatively kicking, and is highly adaptable to modelling very weak assumptions of rationality. I hope this post can serve to marginally persuade others in that direction.

    \n

    References, notes, and further reading:

    \n

    1 Kahneman, Wakker and Sarin, 1997, Back to Bentham?  Explorations of experienced utility, The quarterly journal of economics.

    \n

    2 Hausner, 1952, Multidimensional utilities, Rand Corporation.

    \n

    3 Fishburn, 1971, A Study of Lexicographic Expected Utility, Management Science.

    \n

    4 Bolker, 1967, A simultaneous axiomatization of utility and probability, Philosophy of Science Association.

    \n

    5 As wedrifid pointed out, you might instead just prefer uncertainty in your impending doom. Just as in section 5, neither VNM nor Hausner can model this usefully (i.e. in way that allows calculating utilities), though I don't consider this much of a limitation. In fact, I'd consider it a normative step backward to admit \"rational\" agents who actually prefer uncertainty in itself.

    " } }, { "_id": "qGT8bDusLzTRNGgn9", "title": "Eluding Attention Hijacks", "pageUrl": "https://www.lesswrong.com/posts/qGT8bDusLzTRNGgn9/eluding-attention-hijacks", "postedAt": "2010-04-17T03:23:46.520Z", "baseScore": 24, "voteCount": 23, "commentCount": 23, "url": null, "contents": { "documentId": "qGT8bDusLzTRNGgn9", "html": "
    \n

    Do my taxes? Oh, no! It’s not going to be that easy. It’s going to be different this year, I’m sure. I saw the forms—they look different. There are probably new rules I’m going to have to figure out. I might need to read all that damn material. Long form, short form, medium form? File together, file separate? We’ll probably want to claim deductions, but if we do we’ll have to back them up, and that means we’ll need all the receipts. Oh, my God—I don’t know if we really have all the receipts we’d need, and what if we didn’t have all the receipts and claimed the deductions anyway and got audited? Audited? Oh, no—the IRS—JAIL!!

    \n

    And so a lot of people put themselves in jail, just glancing at their 1040 tax forms. Because they are so smart, sensitive, and creative.

    —David Allen, Getting Things Done

    \n
    \n

     

    \n

    Intro

    \n

    Very recently, Roko wrote about ugh fields, “an unconscious flinch we have from even thinking about a serious personal problem. The ugh field forms a self-shadowing blind spot covering an area desperately in need of optimization, imposing huge costs.” Suggested antidotes included PJ Eby’s technique to engage with the ugh field, locate its center, and access information—thereupon dissolving the negative emotions.

    \n

    I want to explore here something else that prevents us from doing what we want. Consider these situations:

    Situation 1
    You attack a problem that is at least slightly complex (distasteful or not), but are unable to systematically tackle it step by step because your mind keeps diverging wildly within the problem. Your brain starts running simulations and gets stuck. To make things worse, you are biased towards thinking of the worst possible scenarios. Having visualized 30 steps ahead, you panic and do nothing. David Allen's quote in the introduction of this post illustrates that.

    Situation 2
    You attack a problem of any complexity—anything you need to get done—and your mind keeps diverging to different directions outside the problem. Examples:

a. You decide you need to quickly send an important email before an appointment. You log in. Thirty minutes later, you find yourself watching some motivational Powerpoint presentation your uncle sent you. You stare at the inbox and can't remember what you were doing there in the first place. You log out without sending the email, and leave late for your appointment.*

    b. You're working on your computer and some kid playing outside the window brings you vague memories of your childhood, vacations, your father teaching you how to fish, tilapias, earthworms, digging the earth, dirty hands, antibacterial soaps, swine flu, airport announcements, seatbelts, sexual fantasies with that redheaded flight attendant from that flight to Barcelona, and ... \"wait, wait, wait! I am losing focus, I need to get this done.\" Ten minutes had passed (or was it more?).

Repeat this phenomenon many times a day and you won't get very far.

    What happened?

While I am aware that situations 1 and 2 are a bit different in nature (anxiety from “seeing too much into the problem” vs. distraction toward other problems), it seems to me that both have something very fundamental in common. In all those situations, you became less efficient at getting things done because your sensitivity allowed your attention to be diverted too easily. You suffered what I shall call an attention hijack.

    \n

    \n


    \n

    Etiology

    \n

    Why does this happen? Let’s see.

    First, we have stimuli coming from your senses: what you see, hear, smell and feel trigger thoughts. The capture of external stimuli just happens: it’s automatic. To (try to) ignore it, we need to spend some energy.

Second, we must remember that our brain does not have a Central Processing Unit that we can call “me being in total control”. What we have are separate processing units running in parallel. That means that part of you is trying to accomplish a task, while another part of you is getting distracted by the future or by your co-workers’ chat.

An aspect of this phenomenon is that some people are much more easily distracted than others. And it seems that it is specifically the smartest, most sensitive, and most creative people who suffer from it most often.

    Quoting again from David Allen:

    \n
    \n

Often it’s the insensitive oafs who just take something and start plodding forward, unaware of all the things that could go wrong. Everyone else tends to get hung up about all kinds of things.

    \n
    \n

It also happens that those very sensitive people tend to be the ones with the most disorganized lives.

    \n

     

    \n

    Why are attention hijacks bad, and why do we want to elude them?

    \n

First, because you waste time: directly, because the diverting thoughts prevent you from getting things done; and indirectly, because complex problems need to be loaded into your memory and demand your concentration to “grasp” the big picture, which gets disrupted by an attention hijack.

    Another negative impact is the emergence of bad feelings, such as the sensation of being overwhelmed by too many tasks and ideas, or the sensation of unaccomplishment in general. Those feelings could escalate and turn into ugh fields.

By protecting yourself, you would be able to do things more efficiently: (a) go deeper into more complex problems; (b) have more free time, and/or be able to do more; (c) better enjoy the execution of tasks by achieving flow.

    \n


    \n

    Some strategies to circumvent attention hijacks

    \n

    It might sound very tempting to prove our incredible powers and face the disturbances directly. It is actually an entertaining exercise in several situations. Josh Waitzkin, for example, had to learn to leverage disturbances for his own benefit, or he wouldn’t have been an international-level chessmaster and pushing-hands champion. It is possible, it is doable.

You might want to train yourself, like Josh, but that is only an option. It takes time and energy, anyway, and he had to do that because he had no alternative: all kinds of disturbances would appear in championships, and he had to face them. As a general rule, however, it seems wise to acknowledge that your brain has its bugs, and therefore to build this information into your model of the world.

To protect yourself from an attention hijack, you need to seal yourself off from whatever triggers the deviation of your attention in directions you don't want. You want to think and be creative only about whatever your next step is. You must forget the rest of the world for a while.

    Operationally, you're in a certain way trying to deceive yourself. That is not irrational: strategically, you know exactly what you are doing.

    As Taleb wrote in Fooled by Randomness:

    \n
    \n

    In book 12 of the Odyssey, the hero encounters the sirens (...). He fills the ears of all his men with wax, to the point of total deafness, and has himself tied to the mast. The sailors are under strict instructions not to release him. As they approach the sirens' island, the sea is calm and over the water comes the sound of a music so ravishing that Odysseus struggles to get loose, expending an inordinate amount of energy to unrestrain himself. His men tie him even further, until they are safely past the poisoned sounds.

    \n

    The first lesson I took from the story is not to even attempt to be Odysseus. He is a mythological character and I am not. He can be tied to the mast; I can merely reach the rank of a sailor who needs to have his ears filled with wax.
    (...)
    Wax in my ears. The epiphany I had in my career in randomness came when I understood that I was not intelligent enough, nor strong enough, to even try to fight my emotions. Besides, I believe that I need my emotions to formulate my ideas and get the energy to execute them. (...)

    \n
    \n

    This beautiful illustration of Odysseus' adventure will look familiar to many readers of this blog.

    \n


    Disclaimer: The following list of suggestions for how to block attention hijacks is a consequence of my own personal experience. I tried to make it as systematic as possible, but please bear in mind that the categories are not intended to be completely MECE, nor are the examples to be evaluated as the only possibilities. Some sources might be missing or unreliable—but if something is included here, it is because I have had a positive personal experience with the technique.

    I am aware that you might find overlap with some techniques previously posted on Less Wrong, as the avoidance of attention hijacks is a focusing method—the kind of thing normally discussed under the heading of akrasia.

    I would love to hear your own tricks, too.

    \n

     

    \n

    First step: block the environment

    \n

    Block noise: I have been delighted to notice how much my concentration improves just by using earplugs. Find your type. I like the orange foam ones, but the moldable soft silicone ones are unbeatable. Both are cheap.

    Block sight: I realized that this is much less obvious to most people. Our minds are constantly absorbing and processing data from our peripheral vision. This requires energy. Over time, our baseline has become working “with some noise, with some clutter, with some decoration, with some people walking around”. For minds used to seeing patterns and being creative, any element not directly involved with the task you want to get done is a potential attention hijacker. You don’t want to be sniped, do you?

    Try this: declutter your environment, make it simple. (If I could have my background in a deep white, as in the Matrix, I would.) Remove both uncomfortable and attention-grabbing visual cues. Try to be somewhere with no movement around, no people passing. Remember that horses wear blinders for a reason. You might be more similar to horses than you thought.

    Block the entrance of new issues:
    a. Block interruptions from people: tell people you don’t like being interrupted when [insert your personal criteria here]. If you explain, most people will understand. Some won’t—deal with that, too. Be able to say no, and then get back to them later.

    b. Do not open potential Pandora’s boxes while working on a task: do one thing at a time and avoid multitasking. Do not start doing something else; finish whatever you are doing first. And a widely ignored tip: do not provoke deliberation before you can take action—Tim Ferriss gives a good example:

    \n
    \n

    Don’t scan the inbox on Friday evening or over the weekend if you might encounter work problems that can’t be addressed until Monday. Is your weekend really “free” if you find a crisis in the inbox Saturday morning that you can’t address until Monday morning? Even if the inbox scan lasts 30 seconds, the preoccupation and forward projection for the subsequent 48 hours effectively deletes that experience from your life. You had time but you didn’t have attention, so the time had no practical value.

    \n
    \n

    c. Practice relinquishing your need for control: the moment you realize you are paying attention to only one thing, a part of you might yell inside: “Hey, you are losing control of the big picture! What if someone sent you an important email? What if the conversation your co-workers are having is relevant? What if today’s newspaper has something to tell you?” So part of your curiosity might actually be a discomfort related to the feeling of losing control. How exactly to deal with that is beyond the scope of this post, though.

    \n


    \n

    Second step: block your own thoughts

    \n

    Make it harder for the thought to show up in the first place: blocking “noise” tends to be very helpful, but might not be sufficient, or even necessary if you can click into a flow state easily. Flow makes you naturally more concentrated and immune to the environment. There are many things you can do to achieve flow. I highlight two: (a) make it challenging and (b) batch tasks.

    One cool way to make the task more challenging, especially if it’s a physical one, is by executing it fast and timeboxing it, as if you were in a competition. Doing something faster demands more concentration, which blocks hijacking. And, of course, you’ll do it faster. Just try it. Can you imagine an Olympic swimmer thinking of anything other than perfecting his movements while he’s competing?

    Batching tasks is attractive because you reduce the number of times you need to load a problem. It’s easier to get distracted when you keep switching between activities that are different in nature.

    If a thought still shows up that is unrelated to your next action:
    Does it seem relevant? If not, try to ignore it. Meditation helps here. Self-awareness. Luminosity.

    If relevant, then you have to write it down and move on. It might be something confusing like the “Do my taxes? Oh, no!” thing-y, in which case you need to find a systematic approach to tackle it step by step, writing down your thoughts and ideas. It might also be a task or a concern unrelated to the problem—same thing: write it down, check it later. There aren’t many choices here: either one has a reliable GTD-like system to collect ideas and thoughts; or one needs to be okay with letting go of one’s important thoughts.

    \n


    \n

    Anything important that I might have missed? Please, comment.

    \n


    * This feeling of disorientation experienced when you wake up in your email after an attention hijack has been named Inbox Alzheimer.

    " } }, { "_id": "yiMa5pCo6i2uN4btu", "title": "Attention Lurkers: Please say hi", "pageUrl": "https://www.lesswrong.com/posts/yiMa5pCo6i2uN4btu/attention-lurkers-please-say-hi", "postedAt": "2010-04-16T20:46:38.533Z", "baseScore": 48, "voteCount": 48, "commentCount": 636, "url": null, "contents": { "documentId": "yiMa5pCo6i2uN4btu", "html": "

    Some research says that lurkers make up over 90% of online groups. I suspect that Less Wrong has an even higher percentage of lurkers than other online communities.

    \n

    Please post a comment in this thread saying \"Hi.\" You can say more if you want, but just posting \"Hi\" is good for a guaranteed free point of karma.

    \n

    Also see the introduction thread.

    " } }, { "_id": "niCNfDzCjK6668DQY", "title": "When is forced organ selfishness good for you?", "pageUrl": "https://www.lesswrong.com/posts/niCNfDzCjK6668DQY/when-is-forced-organ-selfishness-good-for-you", "postedAt": "2010-04-16T13:43:02.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "niCNfDzCjK6668DQY", "html": "

    Simon Rippon claims having a market for organs might harm society:

    \n

    It might first be thought that it can never be a good thing for you to have fewer rather than more options. But I believe that this attitude is mistaken on a number of grounds. For one, consider that others hold you accountable for not making the choices that are necessary in order to fulfil your obligations. As things stand, even if you had no possessions to sell and could not find a job, nobody could criticize you for failing to sell an organ to meet your rent. If a free market in body parts were permitted and became widespread, they would become economic resources like any other, in the context of the market. Selling your organs would become something that is simply expected of you when the financial need arises. A new “option” can thus easily be transformed into an obligation, and it can drastically change the attitudes that it is appropriate for others to adopt towards you in a particular context.

    \n

    He’s right that at the moment when you would normally throw your hands in the air and move on, you are worse off if an organ market gives you the option of paying more debts before declaring your bankruptcy. But this is true for anything you can sell. Do we happen to have just the right number of salable possessions? By Simon’s argument people should benefit from bans on selling all sorts of things. For instance labor. People (the poor especially) are constantly forced to sell their time – such an integral part of their selves – to pay rent, and other debts they had no choice but to incur. If only they were protected from this huge obligation that we laughably call an ‘option’. Such a ban might be costly for the landlord, but it would be good for the poor people, right? No! The landlords would react and not rent to them.

    \n

    So why shouldn’t we expect the opposite effect if people are allowed to sell more of their possessions? People who currently don’t have the assets or secure income to be trusted with loans or ongoing rental payments might be legitimately offered such things if they had another asset to sell. Think of all the people who would benefit from being able to mortgage their kidney to buy a car instead of riding to some closer job while they gradually save up.

    \n

    In general when negotiating, it’s best to not have options that are worse for you. When the time comes to carry out your side of a deal, it’s true that lacking such an option can mean being forced to renege, which at that moment may suit you. But when making the deal beforehand, you do better to have the option of carrying out your part later, so that the other person does their part at all. And in a many-shot game, you do best to be able to do your part the whole time, so that the trading (which is better than not trading) continues.


    " } }, { "_id": "MLubomnpt8tRXEQMy", "title": "The Concepts Problem", "pageUrl": "https://www.lesswrong.com/posts/MLubomnpt8tRXEQMy/the-concepts-problem", "postedAt": "2010-04-16T06:21:40.567Z", "baseScore": 14, "voteCount": 15, "commentCount": 24, "url": null, "contents": { "documentId": "MLubomnpt8tRXEQMy", "html": "

    I'm not sure how obvious the following is to people, and it probably is obvious to most of the people thinking about FAI. But I thought I'd throw out a summary of it here anyway, since this is the one topic that makes me the most pessimistic about the notion of Friendly AI being possible. At least, one based heavily on theory rather than on plenty of experimentation.\r\n

    A mind can only represent a complex concept X by embedding it into a tightly interwoven network of other concepts that combine to give X its meaning. For instance, a \"cat\" is playful, four-legged, feline, a predator, has a tail, and so forth. These are the concepts that define what it means to be a cat; by itself, \"cat\" is nothing but a complex set of links defining how it relates to these other concepts. (As well as a set of links to memories about cats.) But then, none of those concepts means anything in isolation, either. A \"predator\" is a specific biological and behavioral class, the members of which hunt other animals for food. Of that definition, \"biological\" pertains to \"biology\", which is a \"natural science concerned with the study of life and living organisms, including their structure, function, growth, origin, evolution, distribution, and taxonomy\". \"Behavior\", on the other hand, \"refers to the actions of an organism, usually in relation to the environment\". Of those words... and so on.
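    As a toy sketch of this point (my own illustration, not anything from an actual AI system), one can write a concept down as nothing more than a bag of labelled links to other concepts; chasing the links never terminates in an intrinsic definition, which is exactly the regress described above.

    # Hypothetical toy structure: each concept is only a list of [relation, concept] links.
    concepts = Hash.new { |h, k| h[k] = [] }
    def link(concepts, a, relation, b)
      concepts[a] << [relation, b]
    end

    link(concepts, :cat, :is_a, :predator)
    link(concepts, :cat, :has, :tail)
    link(concepts, :predator, :hunts, :animal)
    link(concepts, :predator, :studied_by, :biology)

    # Unpacking :cat only ever yields more links, never a final free-standing meaning.
    p concepts[:cat]       # => [[:is_a, :predator], [:has, :tail]]
    p concepts[:predator]  # => [[:hunts, :animal], [:studied_by, :biology]]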

    \r\n

    It does not seem likely that humans could preprogram an AI with a ready-made network of concepts. There have been attempts to build knowledge ontologies by hand, but any such attempt is both hopelessly slow and lacking in much of the essential content. Even given a lifetime during which to work and countless assistants, could you ever hope to code everything you knew into a format from which it was possible to employ that knowledge usefully? An even worse problem is that the information would need to be in a format compatible with the AI's own learning algorithms, so that any new information the AI learnt would fit seamlessly into the previously-entered database. It does not seem likely that we can come up with an efficient language of thought that can be easily translated into a format that is intuitive for humans to work with.

    \r\n

    Indeed, there are existing plans for AI systems which make the explicit assumption that the AI's network of knowledge will develop independently as the system learns, and the concepts in this network won't necessarily have an easy mapping to those used in human language. The OpenCog wikibook states that:

    \r\n
    \r\n

    Some ConceptNodes and conceptual PredicateNode or SchemaNodes may correspond with human-language words or phrases like cat, bite, and so forth. This will be the minority case; more such nodes will correspond to parts of human-language concepts or fuzzy collections of human-language concepts. In discussions in this wikibook, however, we will often invoke the unusual case in which Atoms correspond to individual human-language concepts. This is because such examples are the easiest ones to discuss intuitively. The preponderance of named Atoms in the examples in the wikibook implies no similar preponderance of named Atoms in the real OpenCog system. It is merely easier to talk about a hypothetical Atom named \"cat\" than it is about a hypothetical Atom (internally) named [434]. It is not impossible that a OpenCog system represents \"cat\" as a single ConceptNode, but it is just as likely that it will represent \"cat\" as a map composed of many different nodes without any of these having natural names. Each OpenCog works out for itself, implicitly, which concepts to represent as single Atoms and which in distributed fashion.

    \r\n
    \r\n

    Designers of Friendly AI seek to build a machine with a clearly-defined goal system, one which is guaranteed to preserve the highly complex values that humans have. But the nature of concepts poses a challenge for this objective. There seems to be no obvious way of programming those highly complex goals into the AI right from the beginning, nor of guaranteeing that any goals thus preprogrammed will not end up being drastically reinterpreted as the system learns. We cannot simply code \"safeguard these human values\" into the AI's utility function without defining those values in detail, and defining those values in detail requires us to build the AI with an entire knowledge network. On a certain conceptual level, the decision theory and goal system of an AI are separate from its knowledge base; in practice, it doesn't seem like such a separation would be possible.

    \r\n

    The goal might not be impossible, though. Humans do seem to be pre-programmed with inclinations towards various complex behaviors which might suggest pre-programmed concepts to various degrees. Heterosexuality is considerably more common in the population than homosexuality, though this may have relatively simple causes such as an inborn preference towards particular body shapes combined with social conditioning. (Disclaimer: I don't really know anything about the biology of sexuality, so I'm speculating wildly here.) Most people also seem to react relatively consistently to different status displays, and people have collected various lists of complex human universals. The exact method of their transmission remains unknown, however, as does the role that culture serves in it. It also bears noting that most so-called \"human universals\" are actually cultural as opposed to individual universals. In other words, any given culture might be guaranteed to express them, but there will always be individuals who don't fit into the usual norms.

    \r\n

    See also: Vladimir Nesov discusses a closely related form of this problem as the \"ontology problem\".

    \r\n

    " } }, { "_id": "cd5JyRxg8stpC4JFd", "title": "Self-indication assumption is wrong for interesting reasons", "pageUrl": "https://www.lesswrong.com/posts/cd5JyRxg8stpC4JFd/self-indication-assumption-is-wrong-for-interesting-reasons", "postedAt": "2010-04-16T04:51:23.166Z", "baseScore": 2, "voteCount": 29, "commentCount": 24, "url": null, "contents": { "documentId": "cd5JyRxg8stpC4JFd", "html": "

    The self-indication assumption (SIA) states that

    \n

    Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.

    \n

    The reason this is a bad assumption might not be obvious at first.  In fact, I think it's very easy to miss.

    \n

    Argument for SIA posted on Less Wrong

    \n

    First, let's take a look at an argument for SIA that appeared on Less Wrong (link).  Two situations are considered.

    \n

    1.  We imagine that there are 99 people in rooms that have a blue door on the outside (1 person per room).  One person is in a room with a red door on the outside.  It was argued that you are in a blue door room with probability 0.99.

    \n

    2.  Same situation as above, but first a coin is flipped.  If heads, the red door person is never created.  If tails, the blue door people are never created.  You wake up in a room and know these facts.  It was argued that you are in a blue door room with probability 0.99.

    \n

    So why is 1. correct and 2. incorrect?  The first thing we have to be careful about is not treating ourselves as special.  The fact that you woke up just tells you that at least one conscious observer exists.

    \n

    In scenario 1 we basically just need to know what proportion of conscious observers are in a blue door room.  The answer is 0.99.

    \n

    In scenario 2 you never would have woken up in a room if you hadn't been created.  Thus, the fact that you exist is something we have to take into account.  We don't want to estimate P(randomly selected person, regardless of whether they exist or not, is in a blue door room).  That would be ignoring the fact that you exist.  Instead, the fact that you exist tells us that at least one conscious observer exists.  Again, we want to know what proportion of conscious observers are in blue door rooms.  Well, there is a 50% chance (if heads landed) that all conscious observers are in blue door rooms, and a 50% chance that all conscious observers are in red door rooms.  Thus, the marginal probability of a conscious observer being in a blue door room is 0.5.
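    A quick Monte Carlo sketch (my own, using the sampling rule described above: condition on the world that actually obtains, then look at the observers who actually exist in it) gives the same 0.5 answer; weighting worlds by their observer counts instead, as SIA would, gives 0.99.

    # Hypothetical simulation of scenario 2: flip the coin, create the observers
    # that the outcome implies, then ask what fraction of created observers are
    # behind blue doors.
    trials = 100_000
    blue = 0
    trials.times do
      observers = rand < 0.5 ? [:blue] * 99 : [:red]  # heads: 99 blue; tails: 1 red
      blue += 1 if observers.sample == :blue          # a randomly chosen actual observer
    end
    puts blue.to_f / trials  # => roughly 0.5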

    \n

    The flaw in the more detailed Less Wrong proof (see the post) is when they go from step C to step D.  The *you* being referred to in step A might not exist to be asked the question in step D.  You have to take that into account.

    \n

    General argument for SIA and why it's wrong

    \n

    Let's consider the assumption more formally.

    \n

    Assume that the number of people to be created, N, is a random draw from a discrete uniform distribution1 on {1,2,...,Nmax}.  Thus, P(N=k)=1/Nmax, for k=1,...,Nmax.  Assume Nmax is large enough so that we can effectively ignore finite sample issues (this is just for simplicity).

    \n

    Assume M = Nmax*(Nmax+1)/2 possible people exist, and we arbitrarily label them 1,...,M.  After the size of the world, say N=n, is determined, we randomly draw n people from the M possible people.

    \n

    After the data are collected we find out that person x exists.

    \n

    We can apply Bayes' theorem to get the posterior probability:

    \n

    P(N=k|x exists)=k/M, for k=1,...,Nmax.

    \n

    The prior probability was uniform, but the posterior favors larger worlds.  QED.

    \n

    Well, not really.

    \n

    The flaw here is that we conditioned on person x existing, but person x only became of interest after we saw that they existed (peeked at the data).

    \n

    What we really know is that at least one conscious observer exists -- there is nothing special about person x.

    \n

    So, the correct conditional probability is:

    \n

    P(N=k|someone exists)=1/Nmax, for k=1,...,Nmax.

    \n

    Thus, prior=posterior and SIA is wrong.
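    As a small numerical check of both calculations (my own sketch, using the uniform-prior setup above with a small Nmax), conditioning on the pre-labelled person x existing does tilt the posterior towards larger worlds, while conditioning only on the fact that someone exists, an event of probability one here, leaves the prior untouched.

    # Hypothetical check with Nmax = 10, so M = Nmax*(Nmax+1)/2 = 55 possible people.
    n_max = 10
    m = n_max * (n_max + 1) / 2
    prior = Array.new(n_max, 1.0 / n_max)
    # P(x exists | N = k) = k / M, since k of the M possible people get drawn.
    joint = (1..n_max).map { |k| prior[k - 1] * k / m }
    posterior = joint.map { |j| j / joint.inject(:+) }
    p posterior.map { |x| x.round(4) }  # => k/M for each k: favors larger worlds
    p prior.map { |x| x.round(4) }      # conditioning on 'someone exists' (probability 1)
                                        # leaves this unchanged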

    \n

    Egotism

    \n

    The flaw with SIA that I highlighted here is that it treats you as special, as if you were labeled ahead of time.  But the reality is, no matter who was selected, they would think they are the special person.  \"But I exist, I'm not just some arbitrary person.  That couldn't happen in a small world.  It's too unlikely.\"  In reality, the fact that I exist just means someone exists. I only became special after I already existed (peeked at the data and used it to construct the conditional probability).

    \n

    Here's another way to look at it.  Imagine that a random number between 1 and 1 trillion was drawn.  Suppose 34,441 was selected.  If someone then asked what the probability of selecting that number was, the correct answer is 1 in 1 trillion.  They could then argue, \"that's too unlikely of an event.  It couldn't have happened by chance.\"  However, because they didn't identify the number(s) of interest ahead of time, all we really can conclude is that a number was drawn, and drawing a number was a probability 1 event.

    \n

    I give more examples of this here.

    \n

    I think Nick Bostrom is getting at the same thing in his book (page 125):

    \n

    ..your own existence is not in general a ground for thinking that hypotheses are more likely to be true just by virtue of implying that there is a greater total number of observers. The datum of your existence tends to disconfirm hypotheses on which it would be unlikely that any observers (in your reference class) should exist; but that’s as far as it goes. The reason for this is that the sample at hand—you—should not be thought of as randomly selected from the class of all possible observers but only from a class of observers who will actually have existed. It is, so to speak, not a coincidence that the sample you are considering is one that actually exists. Rather, that’s a logical consequence of the fact that only actual observers actually view themselves as samples from anything at all

    \n

    Related arguments are made in this LessWrong post.  

    \n
    \n

    1 for simplicity I'm assuming a uniform prior... the prior isn't the issue here

    " } }, { "_id": "pvfrJACkq4DzGYjFF", "title": "The many faces of status", "pageUrl": "https://www.lesswrong.com/posts/pvfrJACkq4DzGYjFF/the-many-faces-of-status", "postedAt": "2010-04-15T15:31:03.163Z", "baseScore": 48, "voteCount": 46, "commentCount": 109, "url": null, "contents": { "documentId": "pvfrJACkq4DzGYjFF", "html": "

    The term \"status\" gets used on LessWrong a lot. Google finds 316 instances; the aggregate total for the phrases \"low status\" and \"high status\" (which suggest more precision than \"status\" by itself) is 170. By way of comparison, \"many worlds\", an important topic here, yields 164 instances.

    We find the term used as an explanation, for instance, \"to give offense is to imply that a person or group has or should have low status\". In this community I would expect that a term used often, with authoritative connotations, and offered as an explanation could be tabooed readily, for instance when someone confused by this or that use asks for clarification: previous discussions of \"high status\" or \"low status\" behaviours seemed to flounder in the particular way that definitional arguments often do.

    Somewhat to my surprise, there turned out not to be a commonly understood way of tabooing \"status\". Lacking a satisfactory unpacking of the \"status\" terms and how they should control anticipation, I decided to explore the topic on my own, and my intention here is to report back and provide a basis for further discussion.

    The \"Status\" chapter of Keith Johnstone's 1979 book \"Impro\", previously discussed here and on OB, is often cited as a reference on the topic (follow this link for an excerpt); I'll refer to it throughout as simply \"Johnstone\". Also, I plan to entirely avoid the related but distinct concept of \"signaling\" in this post, reserving it for later examination.

    \n


    \n

    Dominance hierarchies

    \n

    My initial impression was that \"status\" had some relation to the theory of dominance hierarchies. Section 3 of Johnstone starts with:

    \n
    \n

    Social animals have inbuilt rules which prevent them killing each other for food, mates, and so on.  Such animals confront each other, and often fight, until a hierarchy is established, after which there is no fighting unless an attempt is made to change the ‘pecking order’. This system is found in animals as diverse as humans, chickens, and woodlice.

    \n
    \n

    This reinforced an impression I had previously acquired: that the term \"alpha male\", often used in certain circles synonymously with \"high status male\", indicated an explicit link between the theoretical underpinnings of the term \"status\" and some sort of dominance theory.

    \n

    However, substantiating this link turned out to be a more frustrating task than I had expected. For instance, I looked for primary sources I could turn to for a formal theoretical explanation of what explanatory work the term \"alpha male\" is supposed to carry out.

    It seems that the term was originally coined by David Mech, who studied wolf packs in the 70's. Interestingly, Mech himself now claims the term was misunderstood and used improperly. Here is what David Mech says in a recent (2000) article:

    \n
    \n

    The way in which alpha status has been viewed historically can be seen in studies in which an attempt is made to distinguish future alphas in litters of captive wolf pups [...] This view implies that rank is innate or formed early, and that some wolves are destined to rule the pack, while others are not.

    \n

    Contrary to this view, I propose that all young wolves are potential breeders and that when they do breed they automatically become alphas (Mech 1970). [...] Thus, calling a wolf an alpha is usually no more appropriate than referring to a human parent or a doe deer as an alpha. Any parent is dominant to its young offspring, so \"alpha\" adds no information.

    \n
    \n

    An informal survey of other literature suggests that \"alpha male\", referring specifically to the pack behaviour disowned by Mech, entered the popular vocabulary by way of dog trainer lore. My personal hunch is that it became entrenched thereafter because it had both a \"sciencey\" sound, and the appropriate connotations for people who adhered to certain views on gender relationships.

    Stepping back to look at dominance theory as a whole, I found that it is not without problems. Pecking order may apply to chickens, but primates vary widely in social organization, lending little support to the thesis that dominance displays, dominance-submission behaviours and so on are as universal as Johnstone suggests and can therefore be thought to shed much light on the complex social organization of humans.

    An often discussed example is the Bonobo chimpanzee, where females are dominant over males, and do not establish a dominance hierarchy among themselves, whereas males do; and where the behaviour that tends to mediate social stratification is reconciliation rather than conflict, something that is also observed in other animal species, contrary to the prevailing view of dominance hierarchies.

    This informal survey was interesting and turned up many surprises, but mostly it convinced me that dominance hierarchies were not a fruitful line of research if I was after a crisp meaning of \"status\" terms and explanations: either \"status\" was itself a muddle, or I needed to look for its underpinnings in other disciplines.

    \n

     

    \n

    Social stratification

    \n

    Early on in Johnstone there is an interesting discussion of status by way of his recollection of three very different school teachers. At various other points in the chapter he also refers to the stratification of human societies specifically, for instance when he discusses the master-servant relationship.

    \n

    The teacher example was particularly interesting for me, because one of the uses I might have for status hypotheses is in investigating the Hansonian thesis \"Schools aren't about education but about status\", and what can possibly be done about that. But to think clearly about such issues one must, in the first place, clarify how the hypothesis \"X is about status\" controls anticipation about X!

    I came across Max Weber (who I must say I hadn't heard of previously), described as one of the founders of modern sociology; and Weber's \"three component theory of social stratification\", which helped me quite a bit in making sense of some claims about status.

    What I got from the Wikipedia summary is that Weber identifies three major dimensions of social stratification:

    \n
    * Class status, grounded in wealth and economic position;
    * Prestige status, grounded in social honour and esteem (Weber's \"status\" in the narrow sense);
    * Power status, grounded in political or organizational power (Weber's \"party\").
    \n


    This list is interesting because of its predictive power: for instance, class and wealth tend to be properties of an individual that change slowly over time, and so when Johnstone refers to ways of elevating one's status within the short time span of a social interaction, we can predict that he isn't talking about class or wealth status.

    Power status is more subject to sudden changes, but not usually as a result of informal social interactions: again, power status cannot be what is referred to in the phrase \"high status behaviours\". Power is very often positional: getting elected President of a powerful country, for instance, brings a lot of power suddenly, but requires vetting by an elaborate ritual. (Class status can often go hand in hand with power status, but that is not necessarily or systematically the case.)

    Prestige status can be expected to depend on both long-term and short-term characteristics. Certain professions are seen as inherently prestigious, often independently of wealth: firemen, for instance. But within a given social stratum, defined by class and power, individuals can acquire prestige through their actions. This applies across a wide range of group sizes. Scientists acquire prestige by working on important topics and publishing important results. Participants in an online community acquire prestige by posting influential articles which shape subsequent discussion, and so on.

    But, while it struck me as conceivable to unpack terms like \"high status behaviours\" as referring to such changes in prestige status, it didn't seem entirely satisfactory. So I kept looking for clues.

    \n

     

    \n

    Self-esteem and the seesaw

    \n

    Johnstone refers to status as \"the see-saw\": he sees status transactions as a zero-sum game. To increase your status, he says, is necessarily to lower that of your interlocutor.

    This seems at odds with seeing most references to status as meaning \"prestige status\", since you can acquire prestige without necessarily lowering someone else's; also, you can acquire prestige without entering into an interactive social situation. (Think of how a mountaineer's prestige can rise upon the news that they have reached some difficult summit, ahead of their coming back to enjoy the attention.)

    However, most of what Johnstone discusses seemed to make sense to me if analyzed instead as self-esteem transactions: interactive behaviour which raises or lowers another's self-esteem or yours.

    There is lots of relevant theory to turn to. Some old and possibly discredited - I'm thinking here of \"transactional analysis\" which I came across years and years ago, which had the interesting concept of a \"stroke\", a behaviour whereby one raises another's self-esteem; this could also be relevant to analyzing the PUA theory of \"negging\". (Fun fact: TA is also the origin of the phrase \"warm fuzzies\".) Some newer and perhaps more solidly based on ev-psych, such as the recently mentioned sociometer theory.

    Self-esteem is at any rate an important idea, whether or not we are clear on the underlying causal mechanisms. John Rawls notes that self-esteem is among the \"primary social goods\" (defined as \"the things it is rational to want, whatever else you want\", in other words the most widely applicable instrumental values that can help further a wide range of terminal values). It is very difficult to be luminous, to collaborate effectively or to conquer akrasia without some explicit attention to self-esteem.

    So here, perhaps, is a fourth status component: the more temporary and more local \"self-esteem status\".

    \n

     

    \n

    Positive sum self-esteem transactions?

    \n

    Where I part company with Johnstone is in his seeing self-esteem transactions as a purely zero-sum game. And in fact his early discussion of the three teachers contradicts his own \"see-saw\" image, painting instead a quite different picture of \"status\".

    He describes one of the teachers as a \"low status player\", one who couldn't keep discipline, twitched, went red at the slightest provocation: in other words, one with generally low self-esteem. The second he describes as a \"compulsive high status player\": he terrorized students, \"stabbing people with his eyes\", walked \"with fixity of purpose\". In my terms, this would be someone whose behaviours communicated low regard for others' self-esteem, but not necessarily high self-esteem. The third teacher he describes as \"a status expert\":

    \n
    \n

    Much loved, never punished but kept excellent discipline, while remaining very human. He would joke with us, and then impose a mysterious stillness. In the street he looked upright, but relaxed, and he smiled easily.

    \n
    \n

    To me, this looks like the description of someone with high self-esteem generally, who is able to temporarily affect his own and others' self-esteem, lowering (to establish authority) or raising (to encourage participation) as appropriate. When done expertly, this isn't manipulative, but rather a game of trust and rapport that people play in all social situations where safety and intimacy allow, and it feels like a positive sum game.

    (These transactions, BTW, can be mediated even by relatively low-bandwidth interactions, such as text conversations. I find it fascinating how people can make each other feel various emotions just with words: anger, shame, pride. A forum such as Less Wrong isn't just a place for debate and argument, it is also very much a locus of social interaction. Keeping that in mind is important.)

    Detailed analysis of how these transactions work, distilled into practical advice that people can use in everyday settings, is a worthwhile goal, and one that would also advance the cause of effective collaboration among people dedicated to thinking more clearly about the world they inhabit.

    Let the discussion stick to that spirit.

    " } }, { "_id": "Pe4qh33afzJzyCbwn", "title": "A LessWrong poster for the Humanity+ conference next Saturday", "pageUrl": "https://www.lesswrong.com/posts/Pe4qh33afzJzyCbwn/a-lesswrong-poster-for-the-humanity-conference-next-saturday", "postedAt": "2010-04-14T21:38:46.831Z", "baseScore": 11, "voteCount": 9, "commentCount": 27, "url": null, "contents": { "documentId": "Pe4qh33afzJzyCbwn", "html": "

    An email from David Wood, organiser of the Humanity+ UK 2010 conference in London on Saturday 2010-04-24:

    \n
    \n

    One of the rooms in Conway Hall on Sat 24th April will be set aside for posters and general socialising.

    \n

    The posters are opportunities for people to publicise various activities or ideas. We've received half a dozen applications for posters so far, and we have room for one or two more.

    \n

    We expect that many of the attendees will mingle in this room at lunchtime, during the afternoon break, and (for early birds) before the formal start of activities in the main hall at 9.45am.

    \n

    Would one of you be interested in creating and displaying a poster about Less Wrong / Overcoming Bias?

    \n

    Many of the attendees to the H+UK event will have little prior knowledge about Less Wrong, so it's a good chance to reach out to potential new supporters.

    \n

    Posters can be a number of sheets of paper, stuck onto the wall with bluetack or sellotape. Maximum size in total allowed per poster is A0. Several of the posters will be A1, made up of 4 A3 sheets.

    \n

    If you are interested in this, please let me know, since we have to control overall numbers of posters.

    \n

    To be clear, there's no charge for this - consider it as an opportunity for free advertising :-) You're also welcome to bring small pieces of printed literature for interested people to take away.

    \n
    \n

    I plan to make such a poster, and I'd like the advice of people here. Trying to represent what Less Wrong is about in the space of a poster could be challenging. I have maximum space equivalent to sixteen A4 pieces of paper. In the spirit of not proposing a solution until you've had a chance to think about the problem, I'll put my current plans into a comment.

    \n

    Update: looks like it will be Roko rather than me making the poster, but the same applies, your ideas could doubtless be very useful!

    " } }, { "_id": "YLZBffNk5EhppKhNc", "title": "How does information affect hookups?", "pageUrl": "https://www.lesswrong.com/posts/YLZBffNk5EhppKhNc/how-does-information-affect-hookups", "postedAt": "2010-04-14T14:00:56.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "YLZBffNk5EhppKhNc", "html": "

    With social networking sites enabling the romantically inclined to find out more about a potential lover before the first superficial chat than they previously would have in the first month of dating, this is an important question for the future of romance.

    \n

    Let's assume that in looking for partners, people care somewhat about rank and somewhat about match. That is, they want someone ‘good enough’ for them who also has interests and personality that they like.

    \n

    First look at the rank component alone. Assume for a moment that people are happy to date anyone they believe is equal to or better than them in desirability. Then if everyone has a unique rank and perfect information, there will never be any dating at all. The less information they have, the more errors in comparing, so the more chance that A will think B is above her while B thinks A is above him. Even if people are willing to date people somewhat less desirable than they are, the same holds – by making more errors you trade wanting more desirable people for wanting less desirable people, who are more likely to want you back, even if they are making their own errors. So to the extent that people care about rank, more information means fewer hookups.
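    A toy simulation (my own construction, under the simplifying assumptions that ranks are unique numbers, that each person accepts anyone they perceive as at least their own rank, and that less information just means noisier perception) shows the effect: with perfect information there are zero mutual matches, and the count grows as the noise grows.

    # Hypothetical model: n agents with unique ranks 1..n; agent a accepts b if a's
    # noisy perception of b's rank is at least a's own rank; count mutual acceptances.
    def mutual_matches(n, noise)
      ranks = (1..n).to_a
      perceived = Array.new(n) { ranks.map { |r| r + (rand - 0.5) * noise } }
      pairs = 0
      (0...n).each do |a|
        (a + 1...n).each do |b|
          pairs += 1 if perceived[a][b] >= ranks[a] && perceived[b][a] >= ranks[b]
        end
      end
      pairs
    end

    [0, 2, 10, 50].each do |noise|
      puts format('noise %2d: %d mutual matches', noise, mutual_matches(100, noise))
    end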

    \n

    How about match then? Here it matters exactly what people want in a match. If they mostly care about their beloved having certain characteristics, more information will let everyone hear about more people who meet their requirements. On the other hand if we mainly want to avoid people with certain characteristics, more information will strike more people off the list. We might also care about an overall average desirability of characteristics – then more information is as likely to help as to harm, assuming the average person is averagely desirable. Or perhaps we want some minimal level of commonality, in which case more information is always a good thing – it wouldn’t matter if you find out she is a cannibalistic alcoholic prostitute, as long as eventually you discover those board games you both like. There are more possibilities.

    \n

    You may argue that you will get all the information you want in the end, and that the question is only speed – the hookups prevented by everyone knowing more initially are those that would have failed later anyway. However, flaws that would make you not touch a person with a barge pole are often ‘endearing’ when you discover them too late, and once they are in place, loving delusions can hide or divert attention from further flaws, so the rate of information discovery matters. To the extent we care about rank then, more information should mean fewer relationships. To the extent we care about match, it’s unclear without knowing more about what we want.


    " } }, { "_id": "CPpsoGDA4ghjJGPxP", "title": "Preference utilitarian measure of historical welfare", "pageUrl": "https://www.lesswrong.com/posts/CPpsoGDA4ghjJGPxP/preference-utilitarian-measure-of-historical-welfare", "postedAt": "2010-04-14T13:32:21.158Z", "baseScore": 11, "voteCount": 16, "commentCount": 25, "url": null, "contents": { "documentId": "CPpsoGDA4ghjJGPxP", "html": "

    GDP measures essentially how good we are at making widgets - and while widgets are useful, it is a very weak and indirect measure of welfare. For example UK GDP per capita doubled between 1975 and 2007 - and people's quality of life indeed improved - but it would be extremely difficult to argue that this improvement was \"doubling\", and that the gap between 2007's and 1975's quality of life is greater than between 1975's and hunter-gatherer times.

    It's not essential to this post, but my very quick theory is that GDP overstates the improvement thanks to an economic equivalent of Amdahl's Law - if someone's optimal consumption mix consisted of 9 units of widgets and 1 unit of personalized services, and their purchasing power increased so that they can now acquire 100x as many widgets but still the same number of services as before, then the amount of the mix they can purchase increased only about 9x, not the 90x you'd get from a weighted average of the original consumption levels (and they now spend about 92% of their purchasing power on services). The least scalable factor - whichever it is - will be the bottleneck.
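    A quick check of that arithmetic (my own sketch of the numbers used above): with a 90/10 value split, a 100x gain on the widget part and no gain on the services part, the fixed-proportion mix gets only about 9x cheaper, and services eat about 92% of the budget.

    # Amdahl-style bottleneck arithmetic for the 9-widgets-plus-1-service example.
    widget_share, service_share = 0.9, 0.1    # value shares in the original mix
    widget_gain, service_gain = 100.0, 1.0    # how much cheaper each part became
    denom = widget_share / widget_gain + service_share / service_gain
    puts (1.0 / denom).round(2)               # => 9.17, the real gain in the mix
    puts ((service_share / service_gain) / denom * 100).round(1)  # => 91.7% now spent on services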

    If we're unhappy with GDP there are alternative measures like HDI, but they're highly artificial. It would be very easy to construct completely different measures which would \"feel\" about as right.

    Fortunately there exists a very natural measure of welfare, which I haven't seen used before in this context - preference utilitarian lotteries. Would you rather live in 1700, or take a 50% chance of living in 2010 or 700? Make a list of such bets, assign numbers consistent with the bet values (with 100 for your highest and 0 for your lowest value) and you're done! By averaging many people's estimates we can hopefully reduce the noise, and get some pretty reasonable welfare estimates.

    And now disclaimer time. This approach has countless problems; here are just a few, but I'm sure you can think of more.

    \n\n

    I tried to think about such a series of bets, and my results are:

    \n\n

    This seems far more reasonable than GDP's illusion of exponentially accelerating progress.

    \n

    I used this Ruby code to convert bets to values on a scale of 0 to 100 (bets ordered by preference, not chronologically):

    \n
    # Each bet probability r fixes the ratio between consecutive utility gaps:
    # the next gap is the previous gap times r/(1-r). Normalizing the gaps so
    # they sum to 100 yields values running from 100 (best) down to 0 (worst).
    def linearize_ratios(*ratios)
      diffs = ratios.inject([1.0]){|d,r| d + [d[-1] * r / (1-r)]}  # utility gaps
      scale = diffs.inject{|a,b|a+b}                               # total, for normalization
      diffs.inject([100]){|v,d| v + [v[-1] - 100.0 * d / scale]}   # walk down from 100 to 0
    end
    p linearize_ratios(0.7, 0.8, 0.6, 0.2, 0.4, 0.25, 0.2, 0.1, 0.9, 0.9, 0.25)
    " } }, { "_id": "EFQ3F6kmt4WHXRqik", "title": "Ugh fields", "pageUrl": "https://www.lesswrong.com/posts/EFQ3F6kmt4WHXRqik/ugh-fields", "postedAt": "2010-04-12T17:06:18.510Z", "baseScore": 427, "voteCount": 360, "commentCount": 82, "url": null, "contents": { "documentId": "EFQ3F6kmt4WHXRqik", "html": "

    Tl;Dr version: Pavlovian conditioning can cause humans to unconsciously flinch from even thinking about a serious personal problem they have; we call it an \"Ugh Field\"1. The Ugh Field forms a self-shadowing blind spot covering an area desperately in need of optimization, imposing huge costs.

    \n

    A problem with the human mind — your human mind — is that it's a horrific kludge that will fail when you most need it not to. The Ugh Field failure mode is one of those really annoying failures. The idea is simple: if a person receives constant negative conditioning via unhappy thoughts whenever their mind goes into a certain zone of thought, they will begin to develop a psychological flinch mechanism around the thought. The \"Unhappy Thing\" — the source of negative thoughts — is typically some part of your model of the world that relates to bad things being likely to happen to you.

    \n

    A key part of the Ugh Field phenomenon is that, to start with, there is no flinch, only negative real consequences resulting from real physical actions in the problem area. Then, gradually, you begin to feel the emotional hit when you are planning to take physical actions in the problem area. Then eventually, the emotional hit comes when you even begin to think about the problem. The reason for this may be that your brain operates a temporal difference learning (TDL) algorithm. Your brain propagates the psychological pain \"back to the earliest reliable stimulus for the punishment\". If you fail or are punished sufficiently many times in some problem area, and acting in that area is always preceded by thinking about it, your brain will propagate the psychological pain right back to the moment you first begin to entertain a thought about the problem, and hence cut your conscious optimizing ability right out of the loop. Related to this is engaging in a displacement activity: this is some activity that usually involves comfort, done instead of confronting the problem. Perhaps (though this is speculative) the comforting displacement activity is there to counterbalance the psychological pain that you experienced just because you thought about the problem.
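    Here is a minimal tabular TD(0) sketch (my own illustration of the mechanism, not anything from the article's sources) of how a punishment at the end of a chain of states ends up attached to the earliest state in the chain, i.e. to the mere thought.

    # Hypothetical chain: think about the letter -> plan to open it -> open it (punished).
    states = [:think_about_letter, :plan_to_open, :open_letter]
    reward = Hash.new(0.0)
    reward[:open_letter] = -1.0            # the punishment arrives only at the last step
    value = Hash.new(0.0)
    alpha, gamma = 0.1, 1.0

    500.times do                           # many repetitions of the unpleasant episode
      states.each_with_index do |s, i|
        next_v = i + 1 < states.size ? value[states[i + 1]] : 0.0
        value[s] += alpha * (reward[s] + gamma * next_v - value[s])  # TD(0) update
      end
    end

    p value  # all three states end up strongly negative, including the initial thought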

    \n

    For example, suppose that you started off in life with a wandering mind and were punished a few times for failing to respond to official letters. Your TDL algorithm began to propagate the pain back to the moment you looked at an official letter or bill. As a result, you would be less effective than average at responding, so you got punished a few more times. Henceforth, when you received a bill, you got the pain before you even opened it, and it lay unpaid on the mantelpiece until a Big Bad Red late payment notice with a $25 fine arrived. More negative conditioning. Now even thinking about a bill, form or letter invokes the flinch response, and your lizard brain has fully cut you out. You find yourself spending time on internet time-wasters, comfort food, TV, computer games, etc. Your life may not obviously be a disaster, but this is only because you can't see the alternative paths that it could have taken if you had been able to take advantage of the opportunities that came as letters and forms with deadlines.

    \n

    The subtlety with the Ugh Field is that the flinch occurs before you start to consciously think about how to deal with the Unhappy Thing, meaning that you never deal with it, and you don't even have the option of dealing with it in the normal run of things. I find it frightening that my lizard brain could implicitly be making life decisions for me, without even asking my permission!

    \n

    Possible antidotes to the Ugh Field problem:

    \n\n

    1: (Credit for this idea goes to Anna Salamon and Jennifer Rodriguez-Müller. Upvotes go to me, as I wrote the darn article)

    " } }, { "_id": "88vuFDw3dCX7hC6uW", "title": "Self-modification is the correct justification for updateless decision theory", "pageUrl": "https://www.lesswrong.com/posts/88vuFDw3dCX7hC6uW/self-modification-is-the-correct-justification-for", "postedAt": "2010-04-11T16:39:33.270Z", "baseScore": 25, "voteCount": 21, "commentCount": 34, "url": null, "contents": { "documentId": "88vuFDw3dCX7hC6uW", "html": "

    Reply to: Late great filter is not bad news

    \n

    Suppose that you build an AI, and Omega appears to it and says:

    \n

    Here's a button. A million years ago I calculated the umpteenth digit of pi. If it is even, I calculated whether you would press this button (in such a way that your human creator was never simulated as a conscious being). If I predicted that you wouldn't press the button, I destroyed Earth right then and there.* If it is odd, I created a doomsday device that will destroy the solar system if you press this button.

    \n

    [* ETA: Assume that if the digit is even and the AI is predicted to press the button, Omega does not destroy Earth, but does turn Alpha Centauri purple (say). The point is for this to be a scenario that you, the AI creator, know not to have come to pass.]

    \n

    Suppose you're the kind of AI creator whose AI is time consistent in a certain sense from the beginning of time and presses the button. Then you have an AI that satisfies a certain kind of philosopher, wins big in a certain logically impossible world, and destroys humanity.

    \n

    Suppose, on the other hand, that you're a very similar kind of AI creator, only you program your AI not to take into account impossible possible worlds that had already turned out to be impossible (when you created the AI | when you first became convinced that timeless decision theory is right). Then you've got an AI that most of the time acts the same way, but does worse in worlds we know to be logically impossible, and destroys humanity less often in worlds we do not know to be logically impossible.

    \n

    Wei Dai's great filter post seems to suggest that under UDT, you should be the first kind of AI creator. I don't think that's true, actually; I think that in UDT, you should probably not start with a \"prior\" probability distribution that gives significant weight to logical propositions you know to be false: do you think the AI should press the button if it was the first digit of pi that Omega calculated?

    \n

    But obviously, you don't want tomorrow's you to pick the prior that way just after Omega has appeared to it in a counterfactual mugging (because according to your best reasoning today, there's a 50% chance this loses you a million dollars).

    \n

    The most convincing argument I know for timeless flavors of decision theory is that if you could modify your own source code, the course of action that maximizes your expected utility is to modify into a timeless decider. So yes, you should do that. Any AI you build should be timeless from the start; and it's reasonable to make yourself into the kind of person that will decide timelessly with your probability distribution today (if you can do that).

    \n

    But I don't think you should decide that updateless decision theory is therefore so pure and reflectively consistent that you should go and optimize your payoff even in worlds whose logical impossibility was clear before you first decided to be a timeless decider (say). Perhaps it's less elegant to justify UDT through self-modification at some arbitrary point in time than through reflective consistency all the way from the big bang on; but in the worlds we can't rule out yet, it's more likely to win.

    " } }, { "_id": "w3FcDHSAHRhrAxzZj", "title": "Singularity Call For Papers", "pageUrl": "https://www.lesswrong.com/posts/w3FcDHSAHRhrAxzZj/singularity-call-for-papers", "postedAt": "2010-04-10T16:08:00.347Z", "baseScore": 9, "voteCount": 10, "commentCount": 4, "url": null, "contents": { "documentId": "w3FcDHSAHRhrAxzZj", "html": "

    Amnon Eden has sent out this call for papers on technological singularity, which many Less Wrongers may be interested in. I presented at last year's conference, which was a good experience with many interesting people. Submitting good papers can help to legitimate and cultivate the field and thus reduce existential risk (although of course poor work could have the reverse effect). If you have an idea or a draft that you're not sure about, and would like to discuss it before submitting, I'd be happy to help if you contact me (carl DOT shulman AT gmail).

    \n

    I am also told that the Singularity Institute may be able to provide travel funding for selected papers. Email annasalamon@intelligence.org for more information. 

    \n


    \n

    Track in:

    \n

    8th European conference on Computing And Philosophy — ECAP 2010
    Technische Universität München
    4–6 October 2010

    \n

    Important dates:

    \n

    * Submission (extended abstracts): 7 May 2010
    * ECAP Conference: 4–6 October 2010

    \n

    Submission form

    \n

    Theme

    \n

    Historical analysis of a broad range of paradigm shifts in science, biology, history, technology, and in particular in computing technology, suggests an accelerating rate of evolution, however measured. John von Neumann projected that the consequence of this trend may be an “essential singularity in the history of the race beyond which human affairs as we know them could not continue”. This notion of singularity coincides in time and nature with Alan Turing (1950) and Stephen Hawking’s (1998) expectation of machines to exhibit intelligence on a par with the average human no later than 2050. Irving John Good (1965) and Vernor Vinge (1993) expect the singularity to take the form of an ‘intelligence explosion’, a process in which intelligent machines design ever more intelligent machines. Transhumanists suggest a parallel or alternative, explosive process of improvements in human intelligence. And Alvin Toffler’s Third Wave (1980) forecasts “a collision point in human destiny” the scale of which, in the course of history, is on a par only with the agricultural revolution and the industrial revolution.

    \n

    We invite submissions describing systematic attempts at understanding the likelihood and nature of these projections. In particular, we welcome papers critically analyzing the following issues from philosophical, computational, mathematical, scientific and ethical standpoints:

    \n

    * Claims and evidence to acceleration
    * Technological predictions (critical analysis of past and future)
    * The nature of an intelligence explosion and its possible outcomes
    * The nature of the Technological Singularity and its outcome
    * Safe and unsafe artificial general intelligence and preventative measures
    * Technological forecasts of computing phenomena and their projected impact
    * Beyond the ‘event horizon’ of the Technological Singularity
    * The prospects of transhuman breakthroughs and likely timeframes

    \n

    Amnon H. Eden, School of Computer Science & Electronic Engineering, University of Essex, UK and Center For Inquiry, Amherst NY

    \n

     

    " } }, { "_id": "xCpKK5rmorSbtGPpq", "title": "Meetup after Humanity+ , London, Saturday 2010-04-24?", "pageUrl": "https://www.lesswrong.com/posts/xCpKK5rmorSbtGPpq/meetup-after-humanity-london-saturday-2010-04-24", "postedAt": "2010-04-10T12:54:01.601Z", "baseScore": 6, "voteCount": 5, "commentCount": 12, "url": null, "contents": { "documentId": "xCpKK5rmorSbtGPpq", "html": "

    Humanity+ UK 2010 is in central London (near Holborn) in a fortnight. Speakers include Anders Sandberg, Aubrey de Grey, and Nick Bostrom.  Anyone else from Less Wrong going along?  If so, shall we meet for a drink afterwards, perhaps in the Princess Louise around 17:20ish?

    \n

    As always, if I know you here mail me on paul at ciphergoth and I'll give you my mobile number - thanks!

    \n

    I'm also planning another London Less Wrong meetup on Sunday 2010-06-06 - details to come, suggestions for venue welcome.

    " } }, { "_id": "DdGAWyyfvT6p3sCie", "title": "The Last Number", "pageUrl": "https://www.lesswrong.com/posts/DdGAWyyfvT6p3sCie/the-last-number", "postedAt": "2010-04-10T12:09:34.649Z", "baseScore": 3, "voteCount": 43, "commentCount": 58, "url": null, "contents": { "documentId": "DdGAWyyfvT6p3sCie", "html": " \n

    \"...90116633393348054920083...\"

    \n

    He paused for a moment, and licked his recently reconstructed lips. He was nearly there. After seventeen thousand subjective years of effort, he was, finally, just seconds away from the end. He slowed down as the finish line drew into sight, savouring and lengthening the moment where he stood there, just on the edge of enlightenment.

    \n

    \"...4...7...7...0...9...3...\"

    \n

    Those years had been long; longer, perhaps, in the effects they had upon him, than they could ever be in any objective or subjective reality. He had been human; he had been frozen, uploaded, simulated, gifted with robotic bodies inside three different levels of realities, been a conscript god, been split into seven pieces (six of which were subsequently reunited). He had been briefly a battle droid for the army of Orion, and had chanted his numbers even as he sent C-beams to glitter in the dark to scorch Formic worlds.

    \n

    He had started his quest at the foot of a true Enlightened One, who had guided him and countless other disciples on the first step of the path. Quasi-enlightened ones had guided him further, as the other disciples fell to the wayside all around him, unable to keep their purpose focused. And now, he was on the edge of total Enlightenment. Apparently, there were some who went this far, and deliberately chose not to take the last step. But these were always friends of a friend of an acquaintance of a rumour. He hadn't believed they existed. And now that he had come this far, he knew these folk didn't exist. No-one could come this far, this long, and not finish it.

    \n

    \"...2\"

    \n

    There, he had done it. He had fully pronounced, defined and made his own, the last and greatest of all integers. The Last Number was far too large for standard decimal notation, of course; the first thousand years of effort, while there were still many other disciples around, filling the air with their cries and their joys, had been dedicated entirely to learning the mathematical notions and notations that were needed to correctly define it. But it seemed that for the last ten trillion digits of the Last Number, there was no shorter way of stating them than by listing them all. Entire books had been written about this fact, all untrue or uninteresting (but never both).

    \n

    He willed a pair of lungs into existence, took a deep shuddering breath, and went on:

    \n

    \"... + 1 ...\"

    \n

    Had he been foolish enough to just list the Last Number, then he would have had to spend another seventeen thousand years calculating that sum - or most likely, given up, and contented himself with being semi-enlightened, one who has seen the Last Number, but not the Final Sum. However, he had been building up the mathematics of this addition as he went along, setting up way-stations with caches of buried theorems and lemmas, and carrying the propositions on his back. It would take but a moment to do the Final Sum.

    \n

    \"... = 4.2\"

    \n

    It was finished. Gödel had been more correct than that old Austrian mathematician could ever have imagined. Two integers, summed according to all the laws of arithmetic, and their sum was not an integer. Arithmetic was inconsistent.

    \n

    And so, content, he went out into the world as an Enlightened One, an object of admiration and pity, a source of wisdom and terror. One whose mind has fully seen the inconsistency of arithmetic, and hence the failure of all logic and of all human endeavours.

    " } }, { "_id": "Px662ScmbiG6BhF5R", "title": "Swimming in Reasons", "pageUrl": "https://www.lesswrong.com/posts/Px662ScmbiG6BhF5R/swimming-in-reasons", "postedAt": "2010-04-10T01:24:27.787Z", "baseScore": 20, "voteCount": 18, "commentCount": 17, "url": null, "contents": { "documentId": "Px662ScmbiG6BhF5R", "html": "

    To a rationalist, certain phrases smell bad. Rotten. A bit fishy. It's not that they're actively dangerous, or that they don't occur when all is well; but they're relatively prone to emerging from certain kinds of thought processes that have gone bad.

    \n

    One such phrase is \"for many reasons\". For example, many reasons all saying you should eat some food, or vote for some candidate.

    \n

    To see why, let's first recapitulate how rational updating works. Beliefs (in the sense of probabilities for propositions) ought to bob around in the stream of evidence as a random walk without trend. When, in contrast, you can see a belief try to swim somewhere, right under your nose, that's fishy. (Rotten fish don't really swim, so here the analogy breaks down. Sorry.) As a Less Wrong reader, you're smarter than a fish. If the fish is going where it's going in order to flee some past error, you can jump ahead of it. If the fish is itself in error, you can refuse to follow. The mathematical formulation of these claims is clearer than the ichthyological formulation, and can be found under conservation of expected evidence.

    \n

    More generally, according to the law of iterated expectations, it's not just your probabilities that should be free of trends, but your expectation of any variable. Conservation of expected evidence is just the special case where a variable can be 1 (if some proposition is true) or 0 (if it's false); the expectation of such a variable is just the probability that the proposition is true.

    \n

    So let's look at the case where the variable you're estimating is an action's utility. We'll define a reason to take the action as any info that raises your expectation, and the strength of the reason as the amount by which it does so. The strength of the next reason, conditional on all previous reasons, should be distributed with expectation zero.

    \n

    Maybe the distribution of reasons is symmetrical: for example, if somehow you know all reasons are equally strong in absolute value, reasons for and against must be equally common, or they'd cause a predictable trend. Under this assumption, the number of reasons in favor will follow a binomial distribution with p=.5. Mostly, the values here will not be too extreme, especially for large numbers of reasons. When there are ten reasons in favor, there are usually at least a few against. 
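
    \n

    A quick numerical sketch of that symmetric case (my own illustration, not part of the original post): treat each of ten equally strong reasons as independently favoring the action with probability .5, so the count of reasons in favor is Binomial(10, .5). The last line recovers the one-in-a-thousand coincidence mentioned a couple of paragraphs below.

    ```python
    from math import comb

    # Symmetric-reasons assumption: 10 equally strong reasons, each independently
    # favoring the action with probability 0.5.
    n = 10
    for k in range(n + 1):
        p = comb(n, k) * 0.5 ** n
        print(f'{k:2d} reasons in favor: probability {p:.4f}')

    # Probability that every one of the ten reasons favors the action:
    print('all ten in favor:', 0.5 ** n)   # 1/1024, about one in a thousand
    ```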

    \n

    But what if that doesn't happen? What if ten pieces of info in a row all favor the action you're considering? 

    \n

    One possibility is you witnessed a one in a thousand coincidence. But let's not dwell on that. Nobody cares about your antics in such a tiny slice of possible worlds.

    \n

    Another possibility is the process generating new reasons conditional on old reasons, while unbiased, is not in fact symmetrical: it's skewed. That is to say, it will mostly give a weak reason in one direction, and in rare cases give a strong reason in the other direction.

    \n

    This happens naturally when you're considering many reasons for a belief, or when there's some fact relevant to an action that you're already pretty sure about, but that you're continuing to investigate. Further evidence will usually bump a high-probability belief up toward 1, because the belief is probably true; but when it's bumped down it's bumped far down. The fact that the sun rose on June 3rd 1978 and the fact that the sun rose on February 16th 1860 are both evidence that the sun will rise in the future. Each of the many pieces of evidence like this, taken individually, argues weakly against using Aztec-style human sacrifice to prevent dawn fail. (If the sun ever failed to rise, that would be a much stronger reason the other way, so you're iterated-expectations-OK.) If your \"many reasons\" are of this kind, you can stop worrying.

    \n

    Or maybe there's one common factor that causes many weak reasons. Maybe you have a hundred legitimate reasons for not hiring someone as a PR person, including that he smashes furniture, howls at the moon, and strangles kittens, all of which make a bad impression. If so, you can legitimately summarize your reason not to hire him as, \"because he's nuts\". Upon realizing this, you can again stop worrying (at least about your own sanity).

    \n

    Note that in the previous two cases, if you fail to fully take into account all the implications - for example, that a person insane in one way may be insane in other ways - then it may even seem like there are many reasons in one direction and none of them are weak.

    \n

    The last possibility is the scariest one: you may be one of the fish people. You may be selectively looking for reasons in a particular direction, so you'll end up in the same place no matter what. Maybe there's some sort of confirmation bias or halo effect going on. 

    \n

    So in sum, when your brain speaks of \"many reasons\" almost all going the same way, grab, shake, and strangle it. It may just barf up a better, more compressed way of seeing the world, or confess to ulterior motives.

    \n

    (Thanks to Steve Rayhawk, Beth Larsen, and Justin Shovelain for comments.)

    \n

    (Clarification in response to comments: I agree that skewed distributions are the typical case when you're counting pieces of evidence for a belief; the case with the rising sun was meant to cover that, but the post should have been clearer about this point. The symmetrical distribution assumption was meant to apply more to, say, many different good features of a car, or many different good consequences of a policy, where the skew doesn't naturally occur. Note here the difference between the strength of a reason to do something in the sense of how much it bumps up the expected utility, and the increase in probability the reason causes for the proposition that it's best to do that thing, which gets weaker and weaker the more your estimate of the utility is already higher than the alternatives. I said \"confirmation bias or halo effect\", but halo effect (preferentially seeing good features of something you already like) is more to the point here than confirmation bias (preferentially seeing evidence for a proposition you already believe), though many reasons in the same direction can point to the latter also. I've tried to incorporate some of this in the post text.)

    " } }, { "_id": "8wHa4DLW4pADHMPwJ", "title": "Boston area meetup April 18", "pageUrl": "https://www.lesswrong.com/posts/8wHa4DLW4pADHMPwJ/boston-area-meetup-april-18", "postedAt": "2010-04-09T23:53:29.675Z", "baseScore": 7, "voteCount": 8, "commentCount": 8, "url": null, "contents": { "documentId": "8wHa4DLW4pADHMPwJ", "html": "

    I propose a Less Wrong meetup on Sunday, April 18, 4pm at the Clear Conscience Cafe at 581 Massachusetts Avenue Cambridge, MA, near the Central Square T station. Please comment if you plan to attend, or have questions or ideas. Time and place are flexible if anyone has a conflict.

    \n

    There were two previous Less Wrong meetups in the Boston area, back in October and November, when there were visiting rationalists in the area, but I don't think we need that pretext; there are enough of us locals for a decent turnout. There seemed to be a consensus that we ought to have regular meetups, but no one declared a date and time. So, to prevent that from happening again: The third Sunday of every month, at the same time and location as the previous month unless otherwise specified.

    " } }, { "_id": "LkdL2BuGdEAZYysXp", "title": "Frequentist Magic vs. Bayesian Magic", "pageUrl": "https://www.lesswrong.com/posts/LkdL2BuGdEAZYysXp/frequentist-magic-vs-bayesian-magic", "postedAt": "2010-04-08T20:34:44.866Z", "baseScore": 58, "voteCount": 44, "commentCount": 83, "url": null, "contents": { "documentId": "LkdL2BuGdEAZYysXp", "html": "

    [I posted this to open thread a few days ago for review. I've only made some minor editorial changes since then, so no need to read it again if you've already read the draft.]

    \n

    This is a belated reply to cousin_it's 2009 post Bayesian Flame, which claimed that frequentists can give calibrated estimates for unknown parameters without using priors:

    \n
    \n

    And here's an ultra-short example of what frequentists can do: estimate 100 independent unknown parameters from 100 different sample data sets and have 90 of the estimates turn out to be true to fact afterward. Like, fo'real. Always 90% in the long run, truly, irrevocably and forever.

    \n
    \n

    And indeed they can. Here's the simplest example that I can think of that illustrates the spirit of frequentism:

    \n

    Suppose there is a machine that produces biased coins. You don't know how the machine works, except that each coin it produces is either biased towards heads (in which case each toss of the coin will land heads with probability .9 and tails with probability .1) or towards tails (in which case each toss of the coin will land tails with probability .9 and heads with probability .1). For each coin, you get to observe one toss, and then have to state whether you think it's biased towards heads or tails, and what is the probability that's the right answer.

    \n

    Let's say that you decide to follow this rule: after observing heads, always answer \"the coin is biased towards heads with probability .9\" and after observing tails, always answer \"the coin is biased towards tails with probability .9\". Do this for a while, and it will turn out that 90% of the time you are right about which way the coin is biased, no matter how the machine actually works. The machine might always produce coins biased towards heads, or always towards tails, or decide based on the digits of pi, and it wouldn't matter—you'll still be right 90% of the time. (To verify this, notice that in the long run you will answer \"heads\" for 90% of the coins actually biased towards heads, and \"tails\" for 90% of the coins actually biased towards tails.) No priors needed! Magic!
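
    \n

    Here is a minimal simulation sketch of that rule (mine, not from the post). The machine argument is a hypothetical stand-in for whatever policy decides each coin's bias; the point is that the hit rate stays near 90% under all of them.

    ```python
    import random

    def calibration(machine, n_coins=100_000):
        # The frequentist rule: after one toss, guess that the coin is biased toward
        # whichever side came up, and claim a .9 probability of being right.
        correct = 0
        for i in range(n_coins):
            head_biased = machine(i)                 # True if this coin is head-biased
            toss_is_heads = random.random() < (0.9 if head_biased else 0.1)
            guess_head_biased = toss_is_heads        # guess the side you observed
            correct += (guess_head_biased == head_biased)
        return correct / n_coins

    # Three very different machines; the rule is right about 90% of the time under each.
    print(calibration(lambda i: True))                   # always head-biased
    print(calibration(lambda i: random.random() < 0.5))  # independent 50/50
    print(calibration(lambda i: i % 3 == 0))             # deterministic pattern
    ```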

    \n

    What is going on here? There are a couple of things we could say. One was mentioned by Eliezer in a comment:

    \n
    \n

    It's not perfectly reliable. They assume they have perfect information about experimental setups and likelihood ratios. (Where does this perfect knowledge come from? Can Bayesians get their priors from the same source?)

    \n
    \n

    In this example, the \"perfect information about experimental setups and likelihood ratios\" is the information that a biased coin will land the way it's biased with probability .9. I think this is a valid criticism, but it's not complete. There are perhaps many situations where we have much better information about experimental setups and likelihood ratios than about the mechanism that determines the unknown parameter we're trying to estimate. This criticism leaves open the question of whether it would make sense to give up Bayesianism for frequentism in those situations.

    \n

    The other thing we could say is that while the frequentist in this example appears to be perfectly calibrated, he or she is liable to pay a heavy cost for this in accuracy. For example, suppose the machine is actually set up to always produce head-biased coins. After observing the coin tosses for a while, a typical intelligent person, just applying common sense, would notice that 90% of the tosses come up heads, and infer that perhaps all the coins are biased towards heads. They would become more certain of this with time, and adjust their answers accordingly. But the frequentist would not (or isn't supposed to) notice this. He or she would answer \"the coin is head-biased with probability .9\" 90% of the time, and \"the coin is tail-biased with probability .9\" 10% of the time, and keep doing this, irrevocably and forever.

    \n

    The frequentist magic turns out to be weaker than it first appeared. What about the Bayesian solution to this problem? Well, we know that it must involve a prior, so the only question is which one. The maximum entropy prior that is consistent with the information given in the problem statement is to assign each coin an independent probability of .5 of being head-biased, and .5 of being tail-biased. It turns out that a Bayesian using this prior will give the exact same answers as the frequentist, so this is also an example of a \"matching prior\". (To verify: P(biased heads | observed heads) = P(OH|BH)*P(BH)/P(OH) = .9*.5/.5 = .9)
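
    \n

    The matching-prior check in the parenthetical, written out as a few lines of Python (the same arithmetic as above, nothing new):

    ```python
    # Maximum entropy prior: P(head-biased) = P(tail-biased) = 0.5; likelihoods .9 / .1.
    p_bh = 0.5
    p_heads_given_bh = 0.9
    p_heads_given_tb = 0.1
    p_heads = p_heads_given_bh * p_bh + p_heads_given_tb * (1 - p_bh)   # = 0.5
    posterior_bh = p_heads_given_bh * p_bh / p_heads
    print(posterior_bh)   # 0.9, the same answer the frequentist rule gives
    ```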

    \n

    But a Bayesian can do much better. A Bayesian can use a universal prior. (With a universal prior based on a universal Turing machine, the prior probability that the first 4 coins will be biased \"heads, heads, tails, tails\" is the probability that the UTM will produce 1100 as the first 4 bits of its output, when given a uniformly random input tape.) Using such a prior guarantees that no matter how the coin-producing machine works, as long as it doesn't involve some kind of uncomputable physics, in the long run your expected total Bayes score will be no worse than someone who knows exactly how the machine works, except by a constant (that's determined by the algorithmic complexity of the machine). And unless the machine actually settles into deciding the bias of each coin independently with 50/50 probabilities, your expected Bayes score will also be better than the frequentist (or a Bayesian using the matching prior) by an unbounded margin as time goes to infinity.

    \n

    I consider this magic also, because I don't really understand why it works. Is our prior actually a universal prior, or is the universal prior just a handy approximation that we can substitute in place of the real prior? Why does the universe that we live in look like a giant computer? What about uncomputable physics? Just what are priors, anyway? These are some of the questions that I'm still confused about.

    \n

    But as long as we're choosing between different magics, why not pick the stronger one?

    " } }, { "_id": "xnPFYBuaGhpq869mY", "title": "Ureshiku Naritai", "pageUrl": "https://www.lesswrong.com/posts/xnPFYBuaGhpq869mY/ureshiku-naritai", "postedAt": "2010-04-08T20:08:58.726Z", "baseScore": 244, "voteCount": 200, "commentCount": 159, "url": null, "contents": { "documentId": "xnPFYBuaGhpq869mY", "html": "

    This is a supplement to the luminosity sequence.  In this comment, I mentioned that I have raised my happiness set point (among other things), and this declaration was met with some interest.  Some of the details are lost to memory, but below, I reconstruct for your analysis what I can of the process.  It contains lots of gooey self-disclosure; skip if that's not your thing.

    \n

    In summary: I decided that I had to and wanted to become happier; I re-labeled my moods and approached their management accordingly; and I consistently treated my mood maintenance and its support behaviors (including discovering new techniques) as immensely important.  The steps in more detail:

    \n

    1.  I came to understand the necessity of becoming happier.  Being unhappy was not just unpleasant.  It was dangerous: I had a history of suicidal ideation.  This hadn't resulted in actual attempts at killing myself, largely because I attached hopes for improvement to concrete external milestones (various academic progressions) and therefore imagined myself a magical healing when I got the next diploma (the next one, the next one.)  Once I noticed I was doing that, it was unsustainable.  If I wanted to live, I had to find a safe emotional place on which to stand.  It had to be my top priority.  This required several sub-projects:

    \n\n

    2.  I re-labeled my moods, so that identifying them in the moment prompted the right actions.  When a given point on the unhappy-happy spectrum - let's call it \"2\" on a scale of 1 to 10 - was labeled \"normal\" or \"set point\", then when I was feeling \"2\", I didn't assume that meant anything; that was the default state.  That left me feeling \"2\" a lot of the time, and when things went wrong, I dipped lower, and I waited for things outside of myself to go right before I went higher.  The problem was that \"2\" was not a good place to be spending most of my time.

    \n\n

    3.  I treated my own mood as manageable.  Thinking of it as a thing that attacked me with no rhyme or reason - treating a bout of depression like a cold - didn't just cost me the opportunity to fight it, but also made the entire situation seem more out-of-control and hopeless.  I was wary of learned helplessness; I decided that it would be best to interpret my historically static set point as an indication that I hadn't hit on the right techniques yet, not as an indication that it was inviolable and everlasting.  Additionally, the fact that I didn't know how to fix it yet meant that if it was going to be my top priority, I had to treat the value of information as very high; it was worth experimenting, and I didn't have to wait for surety before I gave something a shot.

    \n" } }, { "_id": "2wqesNPHnBTCSC2kW", "title": "Open Thread: April 2010, Part 2", "pageUrl": "https://www.lesswrong.com/posts/2wqesNPHnBTCSC2kW/open-thread-april-2010-part-2", "postedAt": "2010-04-08T03:09:18.648Z", "baseScore": 6, "voteCount": 4, "commentCount": 202, "url": null, "contents": { "documentId": "2wqesNPHnBTCSC2kW", "html": "
    \n

    The previous open thread has already exceeded 300 comments – new Open Thread posts should be made here.

    \n

    This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

    \n
    " } }, { "_id": "xNrdYu6p6BRamRBz8", "title": "Pain and gain motivation", "pageUrl": "https://www.lesswrong.com/posts/xNrdYu6p6BRamRBz8/pain-and-gain-motivation", "postedAt": "2010-04-07T18:48:17.624Z", "baseScore": 71, "voteCount": 63, "commentCount": 144, "url": null, "contents": { "documentId": "xNrdYu6p6BRamRBz8", "html": "

    Note: this post is basically just summarizing some of PJ Eby's freely available writings on the topic of pain/gain motivation and presenting them in a form that's easier for the LW crowd to digest. I claim no credit for the ideas presented here, other than the credit for summarizing them.

    \n

    EDIT: Note also Eby's comments and corrections to my summary at this comment.

    \n

    Eby proposes that we have two different forms of motivation: positive (\"gain\") motivation, which drives us to do things, and negative (\"pain\") motivation, which drives us to avoid things. Negative motivation is a major source of akrasia and is mostly harmful for getting anything done. However, sufficiently large amounts of negative motivation can momentarily push us to do things, which frequently causes people to confuse the two.

    To understand the function of negative motivation, first consider the example of having climbed up a tree to avoid a predator. There's not much you can do other than wait and hope the predator goes away, and if you move around, you risk falling out of the tree. So your brain gets flooded with signals that suppress activity and tell it to keep your body still. It is only if the predator ends up climbing up the tree that the danger becomes so acute that you're instead pushed to flee.

    What does this have to do with modern-day akrasia? Back in the tribal environment, eliciting the disfavor of the tribe could be a death sentence. Be cast out by the tribe, and you likely wouldn't live for long. One way to elicit disfavor is to be unmasked as incompetent in some important matter, and a way to avoid such an unmasking is to simply avoid doing anything where the consequences of failure would be severe.

    You might see why this would cause problems. Sometimes, when the pain level of not having done a task grows too high - like just before a deadline - it'll push you to do it. But this fools people into thinking that negative consequences alone will be a motivator, so they try to psyche themselves up by thinking about how bad it would be to fail. In truth, this is only making things worse, as an increased chance of failure will increase the negative motivation that's going on.

    Negative motivation is also a reason why we might discover a productivity or self-help technique, find it useful, and then after a few successful tries stop using it - seemingly for no reason. Eby uses the terms \"naturally motivated person\" and \"naturally struggling person\" to refer to people that are more driven by positive motivation and more driven by negative motivation, respectively. For naturally struggling people, the main motivation for behavior is the need to get away from bad things. If you give them a productivity or self-help technique, they might apply it to get rid of their largest problems... and then, when the biggest source of pain is gone, they momentarily don't have anything major to flee from, so they lose their motivation to apply the technique. To keep using the technique, they'd need to have positive motivation that'd make them want to do things instead of just not wanting to do things.

    In contrast to negative motivation, positive motivation is basically just doing things because you find them fun. Watching movies, playing video games, whatever. When you're in a state of positive motivation, you're trying to gain things, obtain new resources or experiences. You're entirely focused on the gain, instead of the pain. If you're playing a video game, you know that no matter how badly you lose in the game, the negative consequences are all contained in the game and don't reach to the real world. That helps your brain stay in gain mode. But if a survival override kicks in, the negative motivation will overwhelm the positive and take away much of the pleasure involved. This is a likely reason for why a hobby can stop being fun once you're doing it for a living - it stops being a simple \"gain\" activity with no negative consequences even if you fail, and instead becomes mixed with \"pain\" signals.

    \n
    \n

    And now, if you’re up the tree and the tiger is down there waiting for you, does it make sense for you to start looking for a better spot to sit in… Where you’ll get better sunshine or shade or where there’s, oh, there’s some fruit over there? Should you be seeking to gain in that particular moment?

    \n

    Hell no! Right? Because you don’t want to take a risk of falling or getting into a spot where the tiger can jump up and get you or anything like that. Your brain wants you to sit tight, stay put, shut up, don’t rock the boat… until the crisis is over. It wants you to sit tight. That’s the “pain brain”.

    In the “pain brain” mode… this, by the way, is the main reason why people procrastinate, this is the fundamental reason why people put off doing things… because once your brain has one of these crisis overrides it will go, “Okay conserve energy: don’t do anything.”

    -- PJ Eby, \"Why Can't I Change?\"

    \n
    \n

    So how come some important situations don't push us into a state of negative motivation, even though failure might have disastrous consequences? \"Naturally motivated\" people rarely stop to think about the bad consequences of whatever they're doing, being too focused on what they have to gain. If they meet setbacks, they'll bounce back much faster than \"naturally struggling\" people. What causes the difference?

    Part of the difference is probably inborn brain chemistry. Another major part, though, is your previous experiences. The emotional systems driving our behavior don't ultimately do very complex reasoning. Much of what they do is simply cache lookups. Does this experience resemble one that led to negative consequences in the past? Activate survival overrides! Since negative motivation will suppress positive motivation, it can be easier to end up in a negative state than a positive one. Furthermore, the experiences we have also shape our thought processes in general. If, early on in your life, you do things in \"gain\" mode that end up having traumatic consequences, you learn to avoid the \"gain\" mode in general. You become a \"naturally struggling\" person, one who will view everything through a pessimistic lens, and expect failure at every turn. You literally only perceive the bad sides in everything. A \"naturally motivated\" person, on the other hand, will primarily only perceive the good sides. (Needless to say, these are the endpoints in a spectrum, so it's not like you're either 100% struggling or 100% successful.)

    Another of Eby's theses is that negative motivation is, for the most part, impossible to overcome via willpower. Consider the function of negative motivation as a global signal that prevents us from doing things that seem too dangerous. If we could just use willpower to override the signal at any time, that would result in a lot of people being eaten by predators and being cast out of the tribe. In order to work, a drive that blocks behavior needs to actually consistently block behavior. Therefore attempts to overcome procrastination or akrasia via willpower expenditure are fundamentally misguided. We should instead be trying to remove whatever negative motivation it is that holds us back, for otherwise we are not addressing the real root of the problem. On the other hand, if we succeed in removing the negative motivation and replacing it with positive motivation, we can make any experience as fun and enjoyable as playing a video game. (If you haven't already, do check out Eby's Instant Irresistible Motivation video for learning how to create positive motivation.)

    " } }, { "_id": "J7XPsy7JqR9xtnv8S", "title": "Single Point of Moral Failure", "pageUrl": "https://www.lesswrong.com/posts/J7XPsy7JqR9xtnv8S/single-point-of-moral-failure", "postedAt": "2010-04-06T22:44:51.369Z", "baseScore": 18, "voteCount": 23, "commentCount": 69, "url": null, "contents": { "documentId": "J7XPsy7JqR9xtnv8S", "html": "

    I have been recently entertaining myself with a 3-day non-stop binge of Theist vs. Atheist debates. On the atheist side: Richard Dawkins, Christopher Hitchens, Daniel Dennett, Sam Harris, P.Z. Myers. In the theist corner: Dinesh D'Souza, William Lane Craig, Alister McGrath, Tim Keller, and (unfortunately) Nassim Nicholas Taleb. One of the interesting points that comes up, often raised by Hitchens, is what I call the \"Bodycount Argument\". The atheist will claim: \"Look at all the deaths caused by religion: Crusades, Inquisition, Islamic fundamentalism, Japanese militarism, Conquests of the New World\" and the list goes on and on. Then the Theist will claim: \"Well, look at the Nazis, the Fascists, the Soviets, the Khmer Rouge...\". The Atheist then tries to reverse some of that, e.g. the Fascists were the Catholic right wing, the SS were mostly confessing Catholics and Hitler had churches pray for him on his birthday, and, most tenuously, that the Soviets had the support of the Orthodox Church and used the pre-existing structures set up by the Czar to establish their power.

    Some of that retort is convincing, some not so much. You cannot really blame Soviet, Cambodian and Chinese massacres solely on religion. While they do at least manage to bring it to a tie, I suspect that the atheists follow this argument up suboptimally. My instinctive reaction would be \"ok, so you proved that, besides religion, communism leads to mass slaughter too. I have no problem doing away with both\". But the Theists have a stronger form of their argument in which they claim that the crimes of communism are -because- of atheism, so a simple one-line retort won't work in all cases. We need to lay a deeper foundation for that claim to be convincing.

    Enter single points of failure. The rudimentary definition, usually given in terms of computer networks, is that a single point of failure is that component which takes down the entire system when it fails. While the term originated in computer science, as far as I can tell, it can be applied to human networks as well. The strategy of Alexander the Great at the battle of Issus was, instead of trying to defeat the entire, vastly outnumbering Persian army in combat, to attack the Persian king Darius directly. When he was able to make him flee, the entire Persian army fell into disarray, with part of it executing an orderly retreat while the left flank, pursued by Alexander's cavalry, completely disintegrated. So while the term is new, the concept has long been known and has been used to great effect.

    What I want to argue is that all the examples cited by Theists and Atheists alike are instances of a single point of -moral- failure. Here, instead of the system disintegrating or ceasing to operate, it goes into a sequence of actions that, when examined by an outside human observer, or even by the participants themselves at a later date, seem to be immoral, irrational, and akin to madness. The common point in all the examples is that a central organization, supported by a specific fanaticizing ideology, ordered the massacres to occur, and the people at the lower ranks implemented those orders, despite perhaps individually knowing better.

    My explanation of this is that the lower ranks had in effect outsourced their moral sense to their leadership. As with all centralised structures, when things go well, they go -really- well (assuming aligned incentives; greedy algorithms generally will not be as optimal as top-down ones), but when they go bad, they can be disastrous. The bigger the power of the network, the bigger the consequences. It is not hard to imagine why the outsourcing happened. Humans are tribal. I think very few, having observed the weekly rituals called 'football games' (whatever your definition of football is), would disagree. But humans are also moral. We have a rough set of rules that we tend to follow relatively consistently. What is of interest in these cases is that an individual's tribalism completely overrode that individual's personal morality. And this happened repeatedly and reliably, throughout the ranks of each of these human networks.

    Coming back to the original argument, if indeed tribalism trumps morality, and the above gives us good reason to believe it does, then the theist argument that god put morality inside us comes into question. It does not explain why god saw fit to make our morality less powerful a motivator than our tribal instincts. But the biological explanation stands confirmed: if morality is a mechanism that was useful for intra-tribe interactions, then it would -have- to be suspended when the tribe was facing another. One can imagine the pacifist tribe being annihilated by the non-pacifist tribes around it or, lest I be accused of arguing for group selection, the individual pacifists being attacked by both their own tribe and the enemy tribe. Tribalists may disagree about who gets to live and who gets the resources, but they don't disagree about tribalism.

    " } }, { "_id": "goCfoiQkniQwPryki", "title": "Lampshading", "pageUrl": "https://www.lesswrong.com/posts/goCfoiQkniQwPryki/lampshading", "postedAt": "2010-04-06T20:03:25.800Z", "baseScore": 24, "voteCount": 30, "commentCount": 7, "url": null, "contents": { "documentId": "goCfoiQkniQwPryki", "html": "

    Sequence index: Living Luminously
    Previously in sequence: City of Lights

    You can use luminosity to help you effectively change yourself into someone you'd more like to be.  Accomplish this by fixing your self-tests so they get good results.

    \n

    You may find your understanding of this post significantly improved if you read the seventh story from Seven Shiny Stories.

    When you have coherent models of yourself, it only makes good empirical sense to put them to the test.

    Thing is, when you run a test on yourself, you know what test you're running, and what data would support which hypothesis.  All that and you're the subject generating the data, too.  It's kind of hard to have good scientific controls around this sort of experiment.

    Luckily, it turns out that for this purpose they're unnecessary!  Remember, you're not just trying to determine what's going on in a static part of yourself.  You're also evaluating and changing the things you repudiate when you can.  You don't just have the chance to let knowledge of your self-observation nudge your behavior - you can outright rig your tests.

    Suppose that your model of yourself predicts that you will do something you don't think you should do - for instance, suppose it predicts that you will yell at your cousin the next time she drops by and tracks mud on your carpet, or something, and you think you ought not to yell.  Well, you can falsify that model which says you'll yell by not yelling: clearly, if you do not yell at her, then you cannot be accurately described by any model that predicts that you'll yell.  By refraining from yelling you push the nearest accurate model towards something like \"may yell if not careful to think before speaking\" or \"used to yell, but has since grown past that\".  And if you'd rather be accurately described by one of those models than by the \"yells\" model... you can not yell.

    (Note, of course, that falsifying the model \"yells\" by silently picking up your cousin and defenestrating her is not an improvement.  You want to replace the disliked model with a more likable one.  If it turns out that you cannot do that - if controlling your scream means that you itch so badly to fling your cousin out a window that you're likely to actually do it - then you should postpone your model falsification until a later time.)

    Now, of course figuring out how to not yell (let us not forget akrasia, after all) will be easier once you have an understanding of what would make you do it in the first place.  Armed with that, you can determine how to control your circumstances to prevent yelling-triggers from manifesting themselves.  Or, you can attempt the more difficult but more stable psychic surgery that interrupts the process from circumstance to behavior.

    Sadly, I can't be as specific as would be ideal here because so much depends on the exact habits of your brain as opposed to any other brains, including mine.  You may need to go through various strategies before you hit on one that works for you to change what you need to change.  You could find that successful strategies eventually \"wear off\" and need replacing and their edifices rebuilding.  You might find listening to what other people do helpful (post techniques below!) - or you might not.

    " } }, { "_id": "M4e2cyoS2fJPf4MbX", "title": "Anthropic answers to logical uncertainties?", "pageUrl": "https://www.lesswrong.com/posts/M4e2cyoS2fJPf4MbX/anthropic-answers-to-logical-uncertainties", "postedAt": "2010-04-06T17:51:49.486Z", "baseScore": 16, "voteCount": 11, "commentCount": 43, "url": null, "contents": { "documentId": "M4e2cyoS2fJPf4MbX", "html": "

    Suppose that if the Riemann Hypothesis were true, then some complicated but relatively well-accepted corollary involving geometric superstring theory and cosmology means that the universe would contain 10^500 times more observers. Suppose furthermore that the corollary argument ( RH ==> x10^500 observers) is accepted to be true with a very high probability (say, 99.9%).

    \n

    A presumptuous philosopher now has a \"proof\" of the Riemann Hypothesis. Just use the self-indication assumption: reason as if you are an observer chosen at random from the set of all possible observers (in your reference class). Since almost all possible observers arise in \"possible worlds\" where RH is true, you are almost certainly one of these.

    \n

    Do we believe this argument?

    \n

    One argument against it is that, if RH is false, then the \"possible worlds\" where it is true are not possible. They're not just not actual, they are as ridiculous as worlds where 1+1=3.

    \n

    Furthermore, the justification for reasoning anthropically is that the set Ω of observers in your reference class maximizes its combined winnings on bets if all members of Ω reason anthropically; otherwise, they act as a \"collective sucker\". Unless you have reason to believe you are a \"special\" member of Ω, you should assume that your best move is to reason as if you are a generic member of Ω, i.e. anthropically. When most of the members of Ω arise from merely non-actual possible worlds, this reasoning is defensible. When most of the members of Ω arise from non-actual impossible worlds, something seems to have gone wrong. Observers who would only exist in logically impossible worlds can't make bets, so the \"collective sucker\" arguments don't really work.

    \n

    If you think that the above argument in favor of RH is a little bit fishy, then you might want to ponder Katja's ingenious SIA great filter argument. Most plausible explanations for a future great filter are logical facts, not empirical ones. The difficulty of surviving a transition through technological singularities, if it convergently causes non-colonization, is some logical fact, derivable by a sufficiently powerful mind. A tendency for advanced civilizations to \"realize\" that expansionism is pointless is a logical fact. I would argue that anthropic considerations should not move us on such logical facts.

    \n

    Therefore, if you still buy Katja's argument, and you don't endorse anthropic reasoning as a valid method of mathematical proof, you need to search for an empirical fact that causes a massive great filter just after the point in civilization that we're at. 

    \n

    The supply of these is limited. Most explanations of the great filter/fermi paradox postulate some convergent dynamic that occurs every time a civilization gets to a certain level of advancement; but since these are all things you could work out from first principles, e.g. by Monte Carlo simulation, they are logical facts. Some other explanations where our background facts are false survive, e.g. the Zoo Hypothesis and the Simulation Hypothesis.

    \n

    Let us suppose that we're not in a zoo or a simulation. It seems that the only possible empirical great filter cause that fits the bill is something that was decided at the very beginning of the universe; some contingent fact about the standard model of physics (which, according to most physicists, was some symmetry breaking process, decided at random at the beginning of the universe). Steven0461 points out that particle accelerator disasters are ruled out, as we could in principle colonize the universe using Project Orion spaceships right now, without doing any more particle physics experiments. I am stumped as to just what kind of fact would fit the bill. Therefore the Simulation Hypothesis seems to be the biggest winner from Katja's SIA doomsday argument, unless anyone has a better idea. 

    \n

    Update: Reader bogdanb points out that there are very simple logical \"possibilities\" that would result in there being lots of observers, such as the possibility that 1+1= some suitably huge number, such as 10^^^^^^^^10. You know there is an observer, you, and that there is another observer, your friend, and therefore there are 10^^^^^^^^10 observers according to this \"logical possibility\". If you reason according to SIA, you might end up doubting elementary arithmetical truths.

    \n

     

    " } }, { "_id": "Bnv7mxzsgNjYuLcAy", "title": "Late Great Filter Is Not Bad News", "pageUrl": "https://www.lesswrong.com/posts/Bnv7mxzsgNjYuLcAy/late-great-filter-is-not-bad-news", "postedAt": "2010-04-04T04:17:39.243Z", "baseScore": 19, "voteCount": 33, "commentCount": 82, "url": null, "contents": { "documentId": "Bnv7mxzsgNjYuLcAy", "html": "
    \n

    But I hope that our Mars probes will discover nothing. It would be good news if we find Mars to be completely sterile. Dead rocks and lifeless sands would lift my spirit.

    \n

    Conversely, if we discovered traces of some simple extinct life form—some bacteria, some algae—it would be bad news. If we found fossils of something more advanced, perhaps something looking like the remnants of a trilobite or even the skeleton of a small mammal, it would be very bad news. The more complex the life we found, the more depressing the news of its existence would be. Scientifically interesting, certainly, but a bad omen for the future of the human race.

    \n
    \n

    — Nick Bostrom, in Where Are They? Why I hope that the search for extraterrestrial life finds nothing

    \n

    This post is a reply to Robin Hanson's recent OB post Very Bad News, as well as Nick Bostrom's 2008 paper quoted above, and assumes familiarity with Robin's Great Filter idea. (Robin's server for the Great Filter paper seems to be experiencing some kind of error. See here for a mirror.)

    \n

    Suppose Omega appears and says to you:

    \n

    (Scenario 1) I'm going to apply a great filter to humanity. You get to choose whether the filter is applied one minute from now, or in five years. When the designated time arrives, I'll throw a fair coin, and wipe out humanity if it lands heads. And oh, it's not the current you that gets to decide, but the version of you 4 years and 364 days from now. I'll predict his or her decision and act accordingly.

    \n

    I hope it's not controversial that the current you should prefer a late filter, since (with probability .5) that gives you and everyone else five more years of life. What about the future version of you? Well, if he or she decides on the early filter, that would constitute a time inconsistency. And for those who believe in multiverse/many-worlds theories, choosing the early filter shortens the lives of everyone in half of all universes/branches where a copy of you is making this decision, which doesn't seem like a good thing. It seems clear that, ignoring human deviations from ideal rationality, the right decision for the future you is to choose the late filter.

    \n

    Now let's change this thought experiment a little. Omega appears and instead says:

    \n

    (Scenario 2) Here's a button. A million years ago I hid a doomsday device in the solar system and predicted whether you would press this button or not. Then I flipped a coin. If the coin came out tails, I did nothing. Otherwise, if I predicted that you would press the button, then I programmed the device to destroy Earth right after you press the button, but if I predicted that you would not press the button, then I programmed the device to destroy the Earth immediately (i.e., a million years ago).

    \n

    It seems to me that this decision problem is structurally no different from the one faced by the future you in the previous thought experiment, and the correct decision is still to choose the late filter (i.e., press the button). (I'm assuming that you don't consider the entire history of humanity up to this point to be of negative value, which seems a safe assumption, at least if the \"you\" here is Robin Hanson.)

    \n

    So, if given a choice between an early filter and a late filter, we should choose a late filter. But then why do Robin and Nick (and probably most others who have thought about it) consider news that implies a greater likelihood of the Great Filter being late to be bad news? It seems to me that viewing a late Great Filter as worse news than an early Great Filter is another instance of the confusion and irrationality of SSA/SIA-style anthropic reasoning and subjective anticipation. If you anticipate anything, believing that the great filter is more likely to lie in the future means you have to anticipate a higher probability of experiencing doom.

    \n

    (This paragraph was inserted to clarify in response to a couple of comments. These two scenarios involving Omega are not meant to correspond to any actual decisions we have to make, but just to establish that A) if we had a choice, it would be rational to choose a late filter instead of an early filter, therefore it makes no sense to consider the Great Filter being late to be bad news (compared to it being early), and B) human beings, working off subjective anticipation, would tend to incorrectly choose the early filter in these scenarios, especially scenario 2, which explains why we also tend to consider the Great Filter being late to be bad news. The decision mentioned below, in the last paragraph, is not directly related to these Omega scenarios.)

    \n

    From an objective perspective, a universe with a late great filter simply has a somewhat greater density of life than a universe with an early great filter. UDT says, let's forget about SSA/SIA-style anthropic reasoning and subjective anticipation, and instead consider yourself to be acting in all of the universes that contain a copy of you (with the same preferences, memories, and sensory inputs), making the decision for all of them, and decide based on how you want the multiverse as a whole to turn out.

    \n

    So, according to this line of thought, we're acting in both kinds of universes: those with early filters, and those with late filters. If, as Robin Hanson suggests, we were to devote a lot of resources to projects aimed at preventing possible late filters, then we would end up improving the universes with late filters, but hurting the universes with only early filters (because the resources would otherwise have been used for something else). But since copies of us occur more frequently in universes with late filters than in universes with early filters, such a decision (which Robin arrives at via SIA) can be justified on utilitarian grounds under UDT.

    " } }, { "_id": "oTSYYW3R46QNh9twQ", "title": "Free copy of Feynman's autobiography for best corny rationalist joke", "pageUrl": "https://www.lesswrong.com/posts/oTSYYW3R46QNh9twQ/free-copy-of-feynman-s-autobiography-for-best-corny", "postedAt": "2010-04-04T00:32:45.546Z", "baseScore": 19, "voteCount": 19, "commentCount": 54, "url": null, "contents": { "documentId": "oTSYYW3R46QNh9twQ", "html": "

    \"PortraitI have an extra copy of Richard Feyman's autobiography, \"Surely You're Joking, Mr. Feynman!\": Aventures of a Curious Character, which I want to give away here.

    \n

    This is one of two autobiographies (along with Ben Franklin's) to actually change my life.  I've seen it quoted often on LessWrong, as Feynman has a point of view on life that fits well with the ideas we explore here.  In addition to his rationalist side, Feynman also exhibited a wonderfully free sense of humor. Even when working at the Manhattan Project, he joked around and never took himself too seriously.  I think our community would benefit if the rationalism here were likewise leavened by some self-deprecating humor.

    \n

    I will mail the autobiography, at my expense, to whoever posts the best corny rationalist joke in the comments below, as judged by karma voting.  Anything goes.  Here's a little inspirational prompting:

    \n\n

    Edit (April 12th): The winner of the corny rationalist joke contest is this one-liner by SilasBarta, which collected 17 net up-votes:

    \n
    \n

    Rationalist pick-up line: \"I would never cheat on you if and only if you would never cheat on me if and only if I would never cheat on you.\"

    \n
    \n

    The runner-up (and my personal favorite) is this exchange by Bo102010, which collected 14 net up-votes.   The full comment thread for this one has an explanation and suggested refinements.

    \n
    \n

    A rationalist walks into a bar with two bartenders. The rationalist asks \"What's the best drink to get tonight?\"

    \n

    The first bartender says \"The martini.\"

    The second bartender says \"The gin and tonic.\"

    The first bartender repeats \"The martini.\"

    The second bartender repeats \"The gin and tonic.\"

    The first says again \"The martini.\"

    The second says again \"The gin and tonic.\"

    Then the first says \"The gin and tonic.\"

    The rationalist smiles and says, \"I'm glad you could come to an agreement.\"

    \n
    \n

    Thanks to everybody who contributed and voted on corny jokes.

    " } }, { "_id": "SA5bvZJXcwEEuZFSe", "title": "Bayesian Collaborative Filtering", "pageUrl": "https://www.lesswrong.com/posts/SA5bvZJXcwEEuZFSe/bayesian-collaborative-filtering", "postedAt": "2010-04-03T23:29:30.507Z", "baseScore": 19, "voteCount": 15, "commentCount": 23, "url": null, "contents": { "documentId": "SA5bvZJXcwEEuZFSe", "html": "

    I present an algorithm I designed to predict which position a person would report for an issue on TakeOnIt, through Bayesian updates on the evidence of other people's positions on that issue. Additionally, I will point out some potential areas of improvement, in the hopes of inspiring others here to expand on this method.

    \n


    For those not familiar with TakeOnIt, the basic idea is that there are issues, represented by yes/no questions, on which people can take the positions Agree (A), Mostly Agree (MA), Neutral (N), Mostly Disagree (MD), or Disagree (D). (There are two types of people tracked by TakeOnIt: users who register their own opinions, and Experts/Influencers whose opinions are derived from public quotations.)

    \n

    The goal is to predict what position a person S would take on an issue, based on the positions registered by other people on that issue. To do this, we will use Bayes' Theorem to update the probability that person S takes the position X on issue I, given that person T has taken position Y on issue I:

    \n

    \"P(S

    \n

    Really, we will be updating on several people Tj taking positions Yj on I:

    \n

    \"P(S

    \n

    \n

    To compute this, let us first figure out the prior probability P(S takes X on I). I use for this a generalization of Laplace's Law of Succession (representing my theory that a person will take each position with a particular frequency, and that there is no reason, before seeing their actual position, to suppose that one position in particular is more frequent than the others): the odds that S takes position A : MA : N : MD : D on I are given by:

    \n

    (1 + count of issues S has taken position A on) : (1 + count of issues S has taken position MA on) : (1 + count of issues S has taken position N on) : (1 + count of issues S has taken position MD on) : (1 + count of issues S has taken position D on)

    \n

    Thus, the probability

    \n

    \"P(S

    \n

    Likewise the probability

    \n

    \"P(Tj

    \n

    This leaves one term in Bayes' Theorem to figure out: P(Tj takes Yj on I | S takes X on I)

    \n

    For this, I will again use the Generalized Laplace's Law of Succession, looking at issues on which both S and Tj have taken positions:

    \n

    \"P(Tj

    \n

    We now know how to compute, from the records of people's positions on issues, all the terms that Bayes' Theorem requires to compute the posterior probability that person S will take position X on issue I.
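
    \n

    As a sketch of how these pieces fit together, here is a small Python illustration of my own; it is not the actual implementation (which, as described below, is a SQL script), and the opinions dictionary mapping (person, issue) pairs to positions is just a hypothetical data layout chosen for brevity.

    ```python
    POSITIONS = ['A', 'MA', 'N', 'MD', 'D']

    def laplace(count, total, k=len(POSITIONS)):
        # Generalized Laplace's Law of Succession: add 1 to each of k possible outcomes.
        return (1 + count) / (k + total)

    def predict(opinions, S, I, others):
        # Returns P(S takes each position on issue I), updated on the positions
        # that the people in others have registered on I.
        s_history = [pos for (person, _), pos in opinions.items() if person == S]
        posterior = {X: laplace(s_history.count(X), len(s_history)) for X in POSITIONS}

        for T in others:
            Y = opinions.get((T, I))
            if Y is None or T == S:
                continue
            t_history = [pos for (person, _), pos in opinions.items() if person == T]
            p_y = laplace(t_history.count(Y), len(t_history))      # P(Tj takes Yj on I)
            # Issues (other than I) on which both S and T have taken positions.
            shared = [i for (p, i) in opinions if p == S and i != I and (T, i) in opinions]
            for X in POSITIONS:
                shared_x = [i for i in shared if opinions[(S, i)] == X]
                agree = sum(1 for i in shared_x if opinions[(T, i)] == Y)
                # Multiply by P(Tj takes Yj | S takes X) / P(Tj takes Yj on I).
                posterior[X] *= laplace(agree, len(shared_x)) / p_y

        total = sum(posterior.values())
        return {X: p / total for X, p in posterior.items()}
    ```

    The final renormalization simply forces the five probabilities to sum to 1, which sidesteps the question of exactly which denominator to use in Bayes' Theorem.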

    \n
    \n

    So, how well does this work? At this time, I have coded up a SQL script to be run against TakeOnIt's database that predicts a user's positions based on the positions of Experts/Influencers. (TakeOnIt stores positions for these types of users differently, which is why the first version doesn't just update on the positions of all people.) I have run this script to make predictions for myself, seeing as I am in a privileged position to judge the accuracy of those predictions. Looking at the predictions it made for me with greater than 80% confidence: Of the three predictions made with more than 90% confidence, all were correct, and of the 11 made with between 80% and 90% confidence, 10 were correct, and 1 was incorrect. From this limited data, it seems the algorithm is underconfident. I have registered my opinion on 40 issues.

    \n

    In case you think my positions might be influenced by its predictions, I have also looked at its retrodictions for positions I have already registered. It assigned 18% probability to my Neutral position on the issue \"Are successful entrepreneurs big risk takers?\". My remaining 39 positions it predicted with confidence ranging from 60% to (due to round-off errors on one issue) 100%.

    \n
    \n

    Some areas for improvement:

    \n

    This algorithm does not make any use of the structure of the possible positions. For example, Disagree is more like Mostly Disagree than Agree. And there is also symmetry such that that Agree relates to Mostly Agree in the same way that Disagree relates to Mostly Disagree. If you changed a question by adding negation, so that all the answers flipped, this algorithm would not necessarily give the flipped probability distribution. Of course, it is also possible that a person's position will not reflect the structure, so we should not completely impose it on the algorithm. But it could be an improvement to measure how well a person follows this structure (and how well people in general follow the structure), and adjust the results accordingly.

    \n

    The algorithm has a violation of Conservation of Expected Evidence. When it is computing the probability that a person S will take position X on issue I, it has an expectation that person U will take position Z on issue I, which would alter its prediction for person S. But trying to extend the algorithm to recursively make predictions for U to use in its predictions for S would lead to infinite recursion.

    " } }, { "_id": "ySuaZ4iQruxGwvrEt", "title": "Less Wrong London meetup, tomorrow (Sunday 2010-04-04) 16:00", "pageUrl": "https://www.lesswrong.com/posts/ySuaZ4iQruxGwvrEt/less-wrong-london-meetup-tomorrow-sunday-2010-04-04-16-00", "postedAt": "2010-04-03T09:36:05.289Z", "baseScore": 7, "voteCount": 4, "commentCount": 5, "url": null, "contents": { "documentId": "ySuaZ4iQruxGwvrEt", "html": "

    UPDATE: Backup plan is to meet at the Starbucks across the road (16 Piccadilly, London W1J 0DE; 020 7287 8311). I've been trying to ring the Waterstones and the coffee shop for a while now and waited several minutes for an answer with no success, so I think it's very likely that it is closed.  I've called the Starbucks and it's open.  If I know you on here, mail me (paul at ciphergoth dot org) and I'll give you my mobile number.

    \n

    In the grand tradition of giving almost no notice for London meetups, I bring to your attention that a meetup is planned for tomorrow (Sunday 2010-04-04), at 16:00, in the 5th View cafe on top of Waterstone's bookstore. Nearest Tube: Piccadilly Circus. Yvain, taw, RichardKennaway, and myself at least hope to be there, doubtless others too!

    \n

    We should try to give more notice for the next one.  This is the first Sunday in April; how about the first Sunday in June for the next one, 2010-06-06?  I'd prefer an earlier time and it might be worth experimenting with a different venue, but if we can fix a date we can vary other details closer to the time.

    " } }, { "_id": "Rt8oJF27dndhycxks", "title": "Announcing the Less Wrong Sub-Reddit", "pageUrl": "https://www.lesswrong.com/posts/Rt8oJF27dndhycxks/announcing-the-less-wrong-sub-reddit", "postedAt": "2010-04-02T01:17:44.603Z", "baseScore": 11, "voteCount": 19, "commentCount": 37, "url": null, "contents": { "documentId": "Rt8oJF27dndhycxks", "html": "

    Announcing: the Less Wrong Sub-Reddit, at http://reddit.com/r/LessWrong. This Reddit is intended as a partial replacement for/complement to the Open Thread, which has gotten somewhat unwieldy and overcrowded as of late. I (Thomas McCabe) will be posting things that appear on the April Open Thread to this Reddit, to aid in starting conversation. We'll see how it goes.

    \n

    This Reddit is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets very long/involved, celebrate by turning it into a top-level post.

    \n

    To anyone who is worried about the discussion quality devolving to Reddit level: I retain moderator power over the sub-Reddit, and can delete things and ban people from it. If this gets to be too much work for me, I will be happy to give mod power to other interested Less Wrong readers with a track record of good posts and comments.

    \n

    This is purely my creation, and not that of Eliezer or the Less Wrong admins. If anything goes horribly wrong, don't blame them.

    \n

    This is completely not an April Fool's joke. I want to start it now (on the first day of the month) because the Open Thread \"only\" has 52 comments on it.

    \n

    If you don't have a Reddit account, or want to create a new account to post under your Less Wrong username, you can click \"Register\" in the upper-right-hand corner. It only takes fifteen seconds.

    \n

    For those who don't look at the bottom of the website very often, Less Wrong was originally powered by the Reddit codebase.

    \n

    Good luck, everyone, and may the best discussions win.

    \n

    Edited for clarity: I'm proposing that we set up a new discussion community such that Less Wrongers have a place to talk about off-topic stuff other than Open Thread (which is hugely overcrowded). If either LW or the subreddit crashes, it should have no effect on the other.

    " } }, { "_id": "SpkC3zmt42Jr6jH3A", "title": "Rationality quotes: April 2010", "pageUrl": "https://www.lesswrong.com/posts/SpkC3zmt42Jr6jH3A/rationality-quotes-april-2010", "postedAt": "2010-04-01T20:41:39.003Z", "baseScore": 9, "voteCount": 8, "commentCount": 309, "url": null, "contents": { "documentId": "SpkC3zmt42Jr6jH3A", "html": "

    This is our monthly thread for collecting these little gems and pearls of wisdom, rationality-related quotes you've seen recently, or had stored in your quotesfile for ages, and which might be handy to link to in one of our discussions.

    \n" } }, { "_id": "2xkMt5XQqpG5fZjxb", "title": "What is Rationality?", "pageUrl": "https://www.lesswrong.com/posts/2xkMt5XQqpG5fZjxb/what-is-rationality", "postedAt": "2010-04-01T20:14:09.309Z", "baseScore": 22, "voteCount": 16, "commentCount": 22, "url": null, "contents": { "documentId": "2xkMt5XQqpG5fZjxb", "html": "

    This article is an attempt to summarize basic material, and thus probably won't have anything new for the experienced crowd.

    \n

    Related: 11 Core Rationalist Skills, What is Bayesianism?

    \n

    Less Wrong is a blog devoted to refining the art of human rationality, but what is rationality? Rationality is unlike any subject I studied at school or university, and it is probably the case that the synthesis of subjects and ideas here on Less Wrong is fairly unique. 

    \n

    Fundamentally, rationality is the study of general methods for good decision-making, especially where the decision is hard to get right. When an individual is considering whether to get a cryonics policy, or when a country is trying to work out what to do about global warming, one is within the realm of decision-making that we can use rationality to improve. People do badly on hard decision problems for a variety of reasons, including: that they are not born with the ability to deal with the scientific knowledge and complex systems that our modern world runs on, that they haven't been warned that they should think critically about their own reasoning, that they belong to groups that collectively hold faulty beliefs, and that their emotions and biases skew their reasoning process.

    \n\n

    Another central theme of rationality is truth-seeking. Truth-seeking is often used as an aid to decision-making: if you're trying to decide whether to get a cryonics policy, you might want to find out whether the technology has any good evidence suggesting that it might work. We can make good decisions by getting an accurate estimate of the relevant facts and parameters, and then choosing the best option according to our understanding of things; if our understanding is more accurate, this will tend to work better.

    \n\n

    Often, the processes of truth-seeking and decision-making, both on the individual level and the group level, are subject to biases: systematic failures to get to the truth or to make good decisions. Biases in individual humans are an extremely serious problem - most people make important life-decisions without even realizing the extent and severity of the cognitive biases they were born with. Therefore rational thought requires a good deal of critical thinking - analyzing and reflecting on your own thought processes in order to iron out the many flaws they contain. Group dynamics can introduce mechanisms of irrationality above and beyond the individual biases and failings of members of the group, and often good decision-making in groups is most severely hampered by flawed social epistemology. An acute example of this phenomenon is the Pope telling HIV-stricken Africa to stop using condoms; a social phenomenon (religion) was responsible for a failure to make good decisions.

    \n

    Perhaps the best way to understand rationality is to see some techniques that are used, and some examples of its use.

    \n

    Rationality techniques and topics include:

    \n\n\n\n\n\n\n\n\n\n

     

    " } }, { "_id": "7PC22HTvtEbv6tvWJ", "title": "The human problem", "pageUrl": "https://www.lesswrong.com/posts/7PC22HTvtEbv6tvWJ/the-human-problem", "postedAt": "2010-04-01T19:41:44.735Z", "baseScore": 35, "voteCount": 47, "commentCount": 8, "url": null, "contents": { "documentId": "7PC22HTvtEbv6tvWJ", "html": "

    You've fiddled with your physics constants until you got them just right, pushed matter into just the right initial configuration, given all the galaxies a good spin, and tended them carefully for a few billion years.  Finally, one of the creatures on one of those planets in one of those galaxies looks up and notices the stars.  Congratulations!  You've evolved \"humans\", the term used for those early life forms that have mustered up just enough brain cells to wonder about you.

    \n

    Widely regarded as the starting point of interest in a universe, they're too often its ending point as well.  Every amateur god has lost at least one universe to humans.  They occupy that vanishingly-narrow yet dangerous window of intelligence that your universe must safely navigate, in which your organisms are just smart enough to seize the helm of evolution, but not smart enough to understand what they're really doing.

    \n

    The trouble begins when one of these humans decides, usually in a glow of species pride shortly after the invention of the wheel or the digital watch or some such knicknack, that they are in fact pretty neat, and that it's vitally important to ensure that all future intelligent life shares their values.

    \n

    \n

    At that point, they invent a constrained optimization process, that ensures that all new complex agents (drives, minds, families, societies, etc.) and all improvements to existing agents, have a good score according to some agreed-on function.

    \n

    If you're lucky, your humans will design a phenomenological function, which evaluates the qualia in the proposed new mind.  This will be an inconvenience to your universe, as it will slow down the exploration of agent-design space; but it's not so bad that you have to crumple your universe up and throw it out.  It doesn't necessarily cut off all the best places in agent space from ever being explored.

    \n

    But remember these are humans we're talking about.  They've only recently evolved the ability to experience qualia, let alone understand and evaluate them.  So they usually design computationalist functions instead.  All functions perform computation; by \"computationalist\" we mean that they evaluate an agent by the output of its computations, rather than by what it feels like to be such an agent.

    \n

    Before either kind of function can be evaluated, the agent design is abstracted into a description made entirely using a pre-existing set of symbols.  If your humans have a great deal of computational power available, they might choose very low-level symbols with very little individual semantic content, analogous to their primary sensory receptors; and use abstract score functions that perform mainly statistical calculations.  A computationalist function made along these lines is still likely to be troublesome, but might not be a complete disaster.

    \n

    Unfortunately, the simplest, easiest, fastest, and most common approach is to use symbols that the humans think they can \"understand\", that summarize a proposed agent entirely in terms of the categories already developed by their own primitive senses and qualia.  In fact, they often use their existing qualia as the targets of their evaluation function!

    \n

    Once the initial symbol set has been chosen, the semantics must be set in stone for the judging function to be \"safe\" for preserving value; this means that any new symbols must be defined completely in terms of already-existing symbols.  Because fine-grained sensory information has been lost, new developments in consciousness might not be detectable in the symbolic representation after the abstraction process.  If they are detectable via statistical correlations between existing concepts, they will be difficult to reify parsimoniously as a composite of existing symbols.  Not using a theory of phenomenology means that no effort is being made to look for such new developments, making their detection and reification even more unlikely.  And an evaluation based on already-developed values and qualia means that even if they could be found, new ones would not improve the score.  Competition for high scores on the existing function, plus lack of selection for components orthogonal to that function, will ensure that no such new developments last.

    \n

    Pretty soon your humans will tile your universe with variations on themselves.  And the universe you worked so hard over, that you had such high hopes for, will be taken up entirely with creatures that, although they become increasingly computationally powerful, have an emotional repertoire so impoverished that they rarely have any complex positive qualia beyond pleasure, discovery, joy, love, and vellen.  What was to be your masterpiece becomes instead an entire universe devoid of fleem.

    \n

    There's little that will stop one of these crusading, expansionist, assimilating collectives once it starts.  (And, of course, if you intervene on a planet after it develops geometry, your avatar will be executed and your universe may be disqualified.)

    \n

    Some gods say that there's nothing you can do to prevent this from happening, and the best you can do is to seed only one planet in each universe with life - to put all your eggs in one male, so to speak.  This is because a single bad batch of humans can spoil an entire universe.  Standard practice is to time the evolution of life in different star systems to develop geometry at nearly the same time, leading to a maximally-diverse mid-game.  But value-preserving (also called \"purity-based\" or \"conservative\") societies are usually highly aggressive when encountering alien species, reducing diversity and expending your universe's limited energy.  So instead, these gods build a larger number of isolated universes, each seeded with life on just one planet.  (A more elaborate variant of this strategy is to distribute matter in dense, widely-separated clusters, impose a low speed of information propagation, and seed each cluster with one live planet, so that travel time between clusters always gives a stabilizing \"home field\" advantage in contact between species from different clusters.)

    \n

    However, there are techniques that some gods report using successfully to break up a human-tiling.

    \n

    Dynamic physical constants - If you subtly vary your universe's physical constants over time or space, this may cause their function-evaluation or error-checking mechanisms to fail.  Be warned: This technique is not for beginners.  Note that the judges will usually deduct points for lookup-table variation of physical constants.

    \n

    Cosmic radiation - Bombardment by particles and shortwave radiation can also cause their function-evaluation or error-checking mechanisms to fail.  The trick here is to design your universe so that drifting interstellar bubbles of sudden, high-intensity radiation are frequent enough to hit an expanding tiling of humans, yet not frequent enough to wipe out vulnerable early-stage multicellular life.

    \n

    Spiral arms - A clever way of making the humans themselves implement the radiation strategy.  An expanding wave of humans will follow a dense column of matter up to the galactic core, where there are high particle radiation levels.  Even if this fails, ensuring that the distribution of matter in your universe has a low intrinsic dimensionality (at most half the embedded dimensionality) will slow down the spread of humans and give other species a chance to evolve.

    \n

    So that's our column for today!  Good luck, have fun, and remember - never let the players in on the game!

    " } }, { "_id": "hMv7JhPMN8SuSrk7m", "title": "Open Thread: April 2010", "pageUrl": "https://www.lesswrong.com/posts/hMv7JhPMN8SuSrk7m/open-thread-april-2010", "postedAt": "2010-04-01T15:21:03.777Z", "baseScore": 8, "voteCount": 5, "commentCount": 539, "url": null, "contents": { "documentId": "hMv7JhPMN8SuSrk7m", "html": "
    \n

    An Open Thread: a place for things foolishly April, and other assorted discussions.

    \n

    This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

    \n

    Update: Tom McCabe has created a sub-Reddit to use for assorted discussions instead of relying on open threads.  Go there for the sub-Reddit and discussion about it, and go here to vote on the idea.

    \n
    " } }, { "_id": "L8sc7pvE7RjiX4xxc", "title": "The role of neodeconstructive rationalism in the works of Less Wrong", "pageUrl": "https://www.lesswrong.com/posts/L8sc7pvE7RjiX4xxc/the-role-of-neodeconstructive-rationalism-in-the-works-of", "postedAt": "2010-04-01T14:17:36.357Z", "baseScore": 44, "voteCount": 58, "commentCount": 23, "url": null, "contents": { "documentId": "L8sc7pvE7RjiX4xxc", "html": "

    Summary: Yudkowsky's fiction emphasizes neodeconstructive rationalism, which serves as a bridge between class and sexual identity. Materialist libertarianism (in the metaphysical sense) implies quantum nonrealism, but examining the works of Vinge, Gibson, and especially Egan in this light generates the discourse of semiotic consciousness.

    \n

    1. Precapitalist textual theory and neodeconstructive rationalism

    \n

    In the works of Yudkowsky, a predominant concept is the concept of cultural reality. It is not enough to believe in belief; one must make beliefs pay rent.  It could be said that Salamon’s model of neodeconstructive rationalism implies that class has significance, given that the premise of materialist libertarianism is invalid. Given that absence of evidence is evidence of absence, an abundance of discourses concerning Pearlean absurdity may be revealed.

    \n

    The main theme of the works of Alicorn is not depatriarchialism, but postdepatriarchialism. Thus, Yvain suggests the use of neoconstructive narrative to modify culture. After all, guessing the teacher's password is merely a route to more semantic stopsigns. The defining characteristic, and subsequent dialectic, of materialist libertarianism intrinsic to Yudkowsky’s Three Worlds Collide is also evident in The Sword of Good, although in a more mythopoetical sense.

    \n

    It could be said that the primary theme of Jaynes's analysis of neodeconstructive rationalism is the bridge between class and sexual identity. pjeby promotes the use of the cultural paradigm of consensus to deconstruct class divisions.

    \n

    Thus, if neodeconstructive rationalism holds, we have to choose between postdialectic conceptualist theory and subcapitalist theory. But would that take place on a level greater than merely disputing definitions? Several appropriations concerning the stasis of dialectic art exist.

    \n

    But the characteristic theme of the works of Bayes is a postpatriarchial reality. Hanson’s critique of materialist libertarianism holds that the establishment is meaningless. But is it really just an empty label?

    \n

    2. Expressions of futility

    \n

    “Sexual identity is part of the stasis of language,” says Vinge. Thus, Dennett states that we have to choose between Sartreist absurdity and capitalist libertarianism; taw's critique brings this into sharp focus. If neodeconstructive rationalism holds, the works of Yudkowsky are modernistic.

    \n

    “Culture is used in the service of the status quo,” says Dennett; however, according to Crowe, it is not so much culture that is used in the service of the status quo, but rather the failure, and therefore the defining characteristic, of culture. But the subject is interpolated into a materialist libertarianism that includes art as a whole. Pearl holds that we have to choose between Humean qualitative post praxis and the neodialectic paradigm of consensus.

    \n

    In the works of Yudkowsky, a predominant concept is the distinction between figure and ground; the generalized anti-zombie principle stands in tension with the tragedy of group selectionism. It could be said that Blake uses the term neodeconstructive rationalism to denote the role of the participant as artist. The main theme of Hanson's analysis of materialist libertarianism is the economy, and eventually the stasis, of semiotic society.

    \n

    But the primary theme of the works of Egan is not constructivism as such, but neoconstructivism. Sarkar states that the works of Egan are postmodern.

    \n

    In a sense, Hanson uses the term 'materialist libertarianism' to denote the role of the writer as artist. Quantum non-realism implies that sexuality is used to marginalize minorities, but only if culture is distinct from language.

    \n

    3. Yudkowsky and neodeconstructive rationalism

    \n

    In the works of Yudkowsky, a predominant concept is the concept of timeless control. However, MichaelVassar suggests the use of materialist libertarianism to analyse and modify narrativity. The characteristic theme of the works of Gibson is the role of the writer as observer.

    \n

    Therefore, in Virtual Light, Gibson deconstructs the conscious sorites paradox; in All Tomorrow’s Parties, however, he analyses the moral void. It could be said that the subject is contextualised into a neodeconstructive rationalism that includes art as a whole. Any number of situationisms concerning Bayesian rationality may be discovered.

    " } }, { "_id": "ebgayguKoW7PTaFXk", "title": "Loleliezers", "pageUrl": "https://www.lesswrong.com/posts/ebgayguKoW7PTaFXk/loleliezers", "postedAt": "2010-04-01T04:04:27.480Z", "baseScore": 5, "voteCount": 57, "commentCount": 13, "url": null, "contents": { "documentId": "ebgayguKoW7PTaFXk", "html": "

    Previously: Eliezer Yudkowsky facts, and Kevin's prediction.

    \n

     

    \n

    A bit of silliness for the day.  Below the fold to spare those with delicate sensibilities. 

    \n

    \n

    \"\"

    \n

     

    \n

    \"\"

    \n

     

    \n

    \"\"

    \n

     

    \n

    Please contribute your own in the comments.  (Lolrobinhansons, etc., would also be welcome.)  Unfortunately I have no special source of Eliezer photos to offer beyond Google Images.

    \n

     

    " } }, { "_id": "vfHRahpgbp9YFPuGQ", "title": "City of Lights", "pageUrl": "https://www.lesswrong.com/posts/vfHRahpgbp9YFPuGQ/city-of-lights", "postedAt": "2010-03-31T23:30:03.011Z", "baseScore": 55, "voteCount": 50, "commentCount": 43, "url": null, "contents": { "documentId": "vfHRahpgbp9YFPuGQ", "html": "

    Sequence index: Living Luminously
    Previously in sequence: Highlights and Shadows
    Next in Sequence: Lampshading

    Pretending to be multiple agents is a useful way to represent your psychology and uncover hidden complexities.

    \n

    You may find your understanding of this post significantly improved if you read the sixth story from Seven Shiny Stories.

    When grappling with the complex web of traits and patterns that is you, you are reasonably likely to find yourself less than completely uniform.  You might have several competing perspectives, possess the ability to code-switch between different styles of thought, or even believe outright contradictions.  It's bound to make it harder to think about yourself when you find this kind of convolution.

    Unfortunately, we don't have the vocabulary or even the mental architecture to easily think of or describe ourselves (nor other people) as containing such multitudes.  The closest we come in typical conversation more resembles descriptions of superficial, vague ambivalence (\"I'm sorta happy about it, but kind of sad at the same time!  Weird!\") than the sort of deep-level muddle and conflict that can occupy a brain.  The models of the human psyche that have come closest to approximating this mess are what I call \"multi-agent models\".  (Note: I have no idea how what I am about to describe interacts with actual psychiatric conditions involving multiple personalities, voices in one's head, or other potentially similar-sounding phenomena.  I describe multi-agent models as employed by psychiatrically singular persons.)

    Multi-agent models have been around for a long time: in Plato's Republic, he talks about appetite (itself imperfectly self-consistent), spirit, and reason, forming a tripartite soul.  He discusses their functions as though each has its own agency and could perceive, desire, plan, and act given the chance (plus the possibility of one forcing down the other two to rule the soul unopposed).  Not too far off in structure is the Freudian id/superego/ego model.  The notion of the multi-agent self even appears in fiction (warning: TV Tropes).  It appears to be a surprisingly prevalent and natural method for conceptualizing the complicated mind of the average human being.  Of course, talking about it as something to do rather than as a way to push your psychological theories or your notion of the ideal city structure or a dramatization of a moral conflict makes you sound like an insane person.  Bear with me - I have data on the usefulness of the practice from more than one outside source.

    There is no reason to limit yourself to traditional multi-agent models endorsed by dead philosophers, psychologists, or cartoonists if you find you break down more naturally along some other arrangement.  You can have two of you, or five, or twelve.  (More than you can keep track of and differentiate is not a recommended strategy - if you're very tempted to go with this many it may be a sign of something unhealthful going on.  If a group of them form a reliable coalition it may be best to fold them back into each other and call them one sub-agent, not several.)  Stick with a core ensemble or encourage brief cameos of peripheral aspects.  Name them descriptively or after structures of the brain or for the colors of the rainbow, as long as you can tell them apart.  Talk to yourselves aloud or in writing, or just think through the interaction if you think you'll get enough out of it that way.  Some examples of things that could get their own sub-agents include:

    \n\n

    By priors picked up from descriptions of various people trying this, you're reasonably likely to identify one of your sub-agents as \"you\".  In fact, one sub-agent may be solely identified as \"you\" - it's very hard to shake the monolithic observer experience.  This is fine, especially if the \"you\" sub-agent is the one that endorses or repudiates, but don't let the endorsement and repudiation get out of hand during multi-agent exercises.  You have to deal with all of your sub-agents, not just the one(s) you like best, and sub-agents have been known to exhibit manipulative and even vengeful behaviors once given voice - i.e. if you represent your desire for cake as a sub-agent, and you have been thwarting your desire for cake for years, you might find that Desire For Cake is pissed off at Self-Restraint and says mean things thereunto.  It will not placate Desire For Cake for you to throw in endorsement behind Self-Restraint while Desire For Cake is just trying to talk to you about your desperate yen for tiramisu.  Until and unless you understand Desire For Cake well enough to surgically remove it, you need to work with it.  Opposing it directly and with normative censure will be likely to make it angry and more devious in causing you to eat cake.

    A few miscellaneous notes on sub-agents:

    Your sub-agents may surprise you far more than you expect to be surprised by... well... yourself, which is part of what makes this exercise so useful.  If you consciously steer the entire dialogue you will not get as much out of it - then you're just writing self-insert fanfiction about the workings of your brain, not actually learning about it.

    Not all of your sub-agents will be \"interested\" in every problem, and therefore won't have much of relevance to say at all times.  (Desire For Cake probably couldn't care less how you act on your date next week until it's time to order dessert.)

    Your sub-agents should not outright  lie to each other (\"should\" in the predictive, not normative, sense - let me know if it turns out yours do), but they may threaten, negotiate, hide, and be genuinely ignorant about themselves.

    Your sub-agents may not all communicate effectively.  Having a translation sub-agent handy could be useful, if they are having trouble interpreting each other.

    (Post your ensemble of subagencies in the comments, to inspire others!  Write dialogues between them!)

    " } }, { "_id": "zm9S8mknDfavxKStA", "title": "Disambiguating Doom", "pageUrl": "https://www.lesswrong.com/posts/zm9S8mknDfavxKStA/disambiguating-doom", "postedAt": "2010-03-29T18:14:12.075Z", "baseScore": 28, "voteCount": 17, "commentCount": 19, "url": null, "contents": { "documentId": "zm9S8mknDfavxKStA", "html": "

    Analysts of humanity's future sometimes use the word \"doom\" rather loosely. (\"Doomsday\" has the further problem that it privileges a particular time scale.) But doom sounds like something important; and when something is important, it's important to be clear about what it is.

    \n

    Some properties that could all qualify an event as doom:

    \n
      \n
    1. Gigadeath: Billions of people, or some number roughly comparable to the number of people alive, die.

    \n

    2. Human extinction: No humans survive afterward. (Or, modified: no human-like life survives, or no sentient life survives, or no intelligent life survives.)

    \n

    3. Existential disaster: Some significant fraction, perhaps all, of the future's potential moral value is lost. (Coined by Nick Bostrom, who defines an existential risk as one \"where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential\", which I interpret to mean the same thing.)

    \n

    4. \"Doomsday argument doomsday\": The total number of observers (or observer-moments) in existence ends up being small - not much larger than the total that have existed in the past. This is what we should believe if we accept the Doomsday argument.

    \n

    5. Great filter: Earth ends up not colonizing the stars, or doing anything else widely visible. If all species are filtered out, this explains the Fermi paradox.

    \n
    \n

    Examples to illustrate that these properties are fundamentally different:

    \n\n
    If we fail to keep these distinctions in mind, we might think that worlds are doomed when they're not, or that worlds aren't doomed when they are. Don't commit Type I and Type II errors on the entire planet!
    " } }, { "_id": "pWi5WmvDcN4Hn7Bo6", "title": "Even if you have a nail, not all hammers are the same", "pageUrl": "https://www.lesswrong.com/posts/pWi5WmvDcN4Hn7Bo6/even-if-you-have-a-nail-not-all-hammers-are-the-same", "postedAt": "2010-03-29T18:09:47.655Z", "baseScore": 150, "voteCount": 118, "commentCount": 126, "url": null, "contents": { "documentId": "pWi5WmvDcN4Hn7Bo6", "html": "

    (Related to Over-encapsulation and Subtext is not invariant under linear transformation)

    \n

    Between 2004 and 2007, Goran Bjelakovic et al. published 3 famous meta-analyses of vitamin supplements, concluding that vitamins don't help people but instead kill people.  This is now the accepted dogma; and if you ask your doctor about vitamins, she's likely to tell you not to take them, based on reading either one of these articles, or one of the many summaries of these articles made in secondary sources like The Mayo Clinic Journal.

    \n

    The 2007 study claims that beta-carotene and vitamins A and E are positively correlated with death - the more you take, the more likely you are to die. Therefore, vitamins kill.  The conclusion on E requires a little explanation, but the data on beta-carotene and A is simple and specific:

    \n
    \n

    Univariate meta-regression analyses revealed significant influences of dose of beta carotene (Relative Risk (RR), 1.004; 95% CI, 1.001-1.007; P = .012), dose of vitamin A (RR, 1.000006; 95% CI, 1.000002-1.000009; P = .003), ... on mortality.

    \n
    \n

    This appears to mean that, for each mg of beta carotene that you take, your risk of death increases by a factor (RR) of 1.004; for each IU of vitamin A that you take, by a factor of 1.000006.  \"95% CI, 1.001-1.007\" means that the standard deviation of the sample indicates a 95% probability that the true RR lies somewhere between 1.001 and 1.007.  \"P = .012\" means that there's only a 1.2% chance that you would be so unlucky as to get a sample with a result at least that extreme, if in fact the true RR were 1.

    \n

    A risk factor of 1.000006 doesn't sound like much; but I'm taking 2,500 IU of vitamin A per day.  That gives a 1.5% increase in my chance of death!  (Per 3.3 years.)  And look at those P-values: .012, .003!
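    A quick check of that arithmetic, using only the RR and dose quoted above:

```python
rr_per_iu = 1.000006   # relative risk per IU of vitamin A, from the quote above
dose = 2500            # IU of vitamin A per day

# The multiplicative reading (RR ** dose) and the additive reading
# (1 + dose * (RR - 1)) agree closely at this dosage.
print(rr_per_iu ** dose)           # ~1.0151, i.e. about a 1.5% increase
print(1 + dose * (rr_per_iu - 1))  # ~1.0150
```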

    \n

    So why do I still take vitamins?

    \n
    \n

    What all of these articles do, in excruciating detail with regard to sample selection (though not so much with regard to the math), is to run a linear regression on a lot of data from studies of patients taking vitamins.  A linear regression takes a set of data where each datapoint looks like this:

    \n

         Y = a1X1 + c

    \n

    and a multiple linear regression takes a set of data where each datapoint usually looks like this:

         Y = a1X1 + a2X2 + ... + anXn + c

    where Y and all the Xi's are known.  In this case, Y is a 1 for someone who died and a 0 for someone who didn't, and each Xi is the amount of some vitamin taken.  In either case, the regression finds the values for a1, ... an, c that best fit the data (meaning they minimize the sum, over all data points, of the squared error of the value predicted for Y, (Y - (a1X1 + a2X2 + ... + anXn + c))^2).
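    To make the mechanics concrete, here is a minimal ordinary-least-squares fit in Python; the dose and death numbers are invented purely to show what finding a1 and c means, and nothing here comes from the JAMA data:

```python
import numpy as np

# Invented data: dose of a vitamin (mg/day) and a 0/1 death indicator.
dose = np.array([0, 5, 10, 20, 40, 80, 160], dtype=float)
died = np.array([1, 0, 0, 0, 1, 1, 1], dtype=float)

# Ordinary least squares for: died = a1 * dose + c.
A = np.column_stack([dose, np.ones_like(dose)])
(a1, c), *_ = np.linalg.lstsq(A, died, rcond=None)
print(a1, c)  # a1 comes out positive here: "more dose, more death" by this fit
```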

    \n

    Scientists love linear regression.  It's simple, fast, and mathematically pure.  There are lots of tools available to perform it for you.  It's a powerful hammer in a scientists' toolbox.

    \n

    But not everything is a nail.  And even for a nail, not every hammer is the right hammer.  You shouldn't use linear regression just because it's the \"default regression analysis\".  When a paper says they performed \"a regression\", beware.

    \n

    A linear analysis assumes that if 10 milligrams is good for you, then 100 milligrams is ten times as good for you, and 1000 milligrams is one-hundred times as good for you.

    \n

    This is not how vitamins work.  Vitamin A is toxic in doses over 15,000 IU/day, and vitamin E is toxic in doses over 400 IU/day (Miller et al. 2004, Meta-Analysis: High-Dosage Vitamin E Supplementation May Increase All-Cause Mortality;  Berson et al. 1993, Randomized trial of vitamin A and vitamin E supplementation for retinitis pigmentosa.). The RDA for vitamin A is 2500 IU/day for adults. Good dosage levels for vitamin A appear to be under 10,000 IU/day, and for E, less than 300 IU/day. (Sadly, studies rarely discriminate in their conclusions between dosage levels for men and women.  Doing so would give more useful results, but make it harder to reach the coveted P < .05 or P < .01.)

    Quoting from the 2007 JAMA article:

    \n
    The dose and regimen of the antioxidant supplements were: beta carotene 1.2 to 50.0 mg (mean, 17.8 mg) , vitamin A 1333 to 200 000 IU (mean, 20 219 IU), vitamin C 60 to 2000 mg (mean, 488 mg), vitamin E 10 to 5000 IU (mean, 569 IU), and selenium 20 to 200 μg (mean 99 μg) daily or on alternate days for 28 days to 12 years (mean 2.7 years).
    \n

    The  mean  values used in the study of both A and E are in ranges known to be toxic. The maximum values used were ten times the known toxic levels, and about 20 times the beneficial levels.

    17.8 mg of beta-carotene translates to about 30,000 IUs of vitamin A, if it were converted to vitamin A. This is also a toxic value. It is surprising that beta-carotene showed toxicity, though, since common wisdom is that beta-carotene is converted to vitamin A only as needed.

    Vitamins, like any medicine, have an inverted-J-shaped response curve. If you graph their health effects, with dosage on the horizontal axis, and some measure of their effects - say, change to average lifespan - on the vertical axis, you would get an upside-down J. (If you graph the death rate on the vertical axis, as in this study, you would get a rightside-up J.) That is, taking a moderate amount has some good effect; taking a huge amount has a large bad effect.

    If you then try to draw the straight line that best matches the J, you get a line showing detrimental effects increasing gradually with dosage. The results are exactly what we expect. Their conclusion, that \"Treatment with beta carotene, vitamin A, and vitamin E may increase mortality,\" is technically correct. Treatment with anything may increase mortality, if you take ten times the toxic dose.

    For a headache, some people take four 200 mg tablets of aspirin. Ten tablets of aspirin might be toxic. If you made a study averaging in people who took from 1 to 100 tablets of aspirin for a headache, you would find that \"aspirin increases mortality\".

    \n

    (JAMA later published 4 letters criticizing the 2007 article.  None of them mentioned the use of linear regression as a problem.  They didn't publish my letter - perhaps because I didn't write it until nearly 2 months after the article was published.)

    \n

    Anyone reading the study should have been alerted to this by the fact that all of the water-soluble vitamins in the study showed no harmful effects, while all of the fat-soluble vitamins \"showed\" harmful effects. Fat-soluble vitamins are stored in the fat, so they build up to toxic levels when people take too much for a long time.

    \n

    A better methodology would have been to use piecewise (or \"hockey-stick\") regression, which assumes the data is broken into 2 sections (typically one sloping downwards and one sloping upwards), tries to find the right breakpoint, and performs a separate linear regression on each side, with the two segments meeting at the break.  (I almost called this \"The case of the missing hockey-stick\", but thought that would give the answer away.)
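    Here is a minimal sketch of that idea on synthetic data with the J shape described above (the numbers are invented, not taken from any of the studies). It compares a plain linear fit against a continuous hockey-stick fit found by scanning candidate breakpoints:

```python
import numpy as np

# Synthetic J-shaped dose-response data (illustrative only): risk falls at
# moderate doses, then rises sharply once doses become toxic.
rng = np.random.default_rng(0)
dose = np.linspace(0, 100, 41)
risk = np.where(dose < 40, 1.0 - 0.01 * dose, 0.6 + 0.03 * (dose - 40))
risk = risk + rng.normal(0, 0.03, dose.size)

def sse_of_fit(X, y):
    # Ordinary least squares; returns coefficients and sum of squared errors.
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, float(resid @ resid)

# Plain linear fit: averages away the dip and reports risk rising with dose.
ones = np.ones_like(dose)
line_coef, line_sse = sse_of_fit(np.column_stack([ones, dose]), risk)

# Hockey-stick fit: y = b0 + b1*x + b2*max(0, x - bp), scanning the breakpoint bp.
best = None
for bp in np.arange(10.0, 90.0, 1.0):
    X = np.column_stack([ones, dose, np.maximum(0.0, dose - bp)])
    coef, sse = sse_of_fit(X, risk)
    if best is None or sse < best[0]:
        best = (sse, bp, coef)

print(line_coef, line_sse)  # positive slope, poor fit
print(best)                 # much smaller error, breakpoint near 40,
                            # negative slope below the break (b1 < 0)
```

    The point is only that the single straight line reports monotonically rising risk even though the underlying curve falls before it rises.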

    \n

    Would these articles have been accepted by the most-respected journals in medicine if they evaluated a pharmaceutical in the same way?  I doubt it; or else we wouldn't have any pharmaceuticals.  Bias against vitamins?  You be the judge.

    \n

    Meaningful results have meaningful interpretations

    \n

    The paper states the mortality risk in terms of \"relative risk\" (RR).  But  relative risk  is used for studies of 0/1 conditions, like smoking/no smoking, not for studies that use regression on different dosage levels.  How do you interpret the RR value for different dosages?  Is it RR x dosage?  Or RR^dosage (each unit multiplies risk by RR)?  The difference between these interpretations is trivial for standard dosages.  But can you say you understand the paper if you can't interpret the results?

    \n

    To answer this question, you have to ask exactly what type of regression the authors used.  Even if a linear non-piecewise regression were correct, the best regression analysis to use in this case would be a logistic regression, which estimates the probability of a binary outcome conditioned on the regression variables. The authors didn't consider it necessary to report what type of regression analysis they performed; they reported only the computer program (STATA) and the command (\"metareg\").  The  STATA metareg manual  is not easy to understand, but three things are clear:

    \n\n

    Since there is no \"treatment/no treatment\" case for this study, but only the variables that would be correlated with treatment/no treatment, it would have been impossible to put the data into a form that metareg can use.  So what test, exactly, did the authors perform?  And what do the results mean?  It remains a mystery to me - and, I'm willing to bet, to every other reader of the paper.

    \n

    References

    \n

    Bjelakovic et al. 2007,  \"Mortality in randomized trials of antioxidant supplements for primary and secondary prevention: Systematic review and meta-analysis\",  Journal of the American Medical Association, Feb. 28 2007. See a commentary on it  here.

    \n

    Bjelakovic et al. 2006, \"Meta-analysis: Antioxidant supplements for primary and secondary prevention of colorectal adenoma\", Alimentary Pharmacology & Therapeutics 24, 281-291.

    Bjelakovic et al. 2004, \"Antioxidant supplements for prevention of gastrointestinal cancers: A systematic review and meta-analysis,\" The Lancet 364, Oct. 2 2004.

    " } }, { "_id": "woB6QBLkuwG7c9FTf", "title": "NYC Rationalist Community", "pageUrl": "https://www.lesswrong.com/posts/woB6QBLkuwG7c9FTf/nyc-rationalist-community", "postedAt": "2010-03-29T16:59:41.839Z", "baseScore": 24, "voteCount": 18, "commentCount": 21, "url": null, "contents": { "documentId": "woB6QBLkuwG7c9FTf", "html": "

    For those who don't yet know, there has been a thriving rationalist community in NYC since April 2009.  We've been holding weekly meetups for the past several months now, and often have game nights, focused discussions, etc.  For those of you who live in the area, and not yet involved, I highly encourage you to join the following two groups:

    \n

    This Meetup group is our public face, which draws new members to the meetups.

    \n

    This Google Group was our original method of coordination, and we still use it for private communication.

    \n

    The reason I am posting this is because there has been interest by several members in sharing an apartment/loft, or even multiple apartments on a floor of a building if there are enough people.  The core interest group is going to be meeting soon to figure out the logistics, so I wanted to extend this opportunity to any aspiring rationalists who either currently or would like to live in NYC.  If you are interested, please join the Google Group and let us know, so that we can include you in the planning process.  Additionally, if anyone has experience living with other rationalists, or more generally in a community setting, please feel free to share your knowledge with us so we can avoid any common pitfalls.

    " } }, { "_id": "tCTmAmAapB37dAz9Y", "title": "Highlights and Shadows", "pageUrl": "https://www.lesswrong.com/posts/tCTmAmAapB37dAz9Y/highlights-and-shadows", "postedAt": "2010-03-28T20:56:19.473Z", "baseScore": 27, "voteCount": 30, "commentCount": 45, "url": null, "contents": { "documentId": "tCTmAmAapB37dAz9Y", "html": "

    Sequence index: Living Luminously
    Previously in sequence: The Spotlight
    Next in sequence: City of Lights

    Part of a good luminosity endeavor is to decide what parts of yourself you do and don't like.

    \n

    You may find your understanding of this post significantly improved if you read the fifth story from Seven Shiny Stories.

    As you uncover and understand new things about yourself, you might find that you like some of them, but don't like others.  While one would hope that you'd be generally pleased with yourself, it's a rare arrogance or a rarer saintliness that would enable unlimited approval.  Fortunately, as promised in post two, luminosity can let you determine what you'd like to change as well as what's already present.

    But what to change?

    An important step in the luminosity project is to sort your thoughts and feelings not only by type, correlation, strength, etc, but also by endorsement.  You endorse those thoughts that you like, find representative of your favorite traits, prefer to see carried into action, and wish to keep intact (at least for the duration of their useful lives).  By contrast, you repudiate those thoughts that you dislike, consider indicative of negative characteristics, want to keep inefficacious, and desire to modify or be rid of entirely.

    Deciding which is which might not be trivial.  You might need to sift through several orders of desire before finally figuring out whether you want to want cake, or like liking sleep, or prefer your preference for preferentism.  A good place to start is with your macro-level goals and theoretical commitments (e.g., when this preference is efficacious, does it serve your Life Purpose™, directly or indirectly?  if you have firm metaethical notions of right and wrong, is this tendency you have uncovered in yourself one that impels you to do right things?).

    As a second pass, you can work with the information you collected when you correlated your ABCs.  How does an evaluated desire make you feel when satisfied or unsatisfied?  Does it cripple you when unsatisfied or improve your performance when satisfied?  Are you reliably in a position to satisfy it?  If you can't typically satisfy it, would it be easier to change the desire or to change the circumstances that prevent its satisfaction?  However, this is a second step.  You need to know what affect and behavior are preferable to you before you can judge desires (and other mental activity) relative to what they yield in those departments, and judging affect and behavior is itself an exercise in endorsement and repudiation.

    Knowing what you like and don't like about your mind is a fine thing.  Once you have that information, you can put it to direct use immediately - I find it useful to tag many of my expressions of emotion with the words \"endorsed\" or \"non-endorsed\".  That way, the people around me can use that categorization rather than having to either assume I approve of everything I feel, or layer their own projections of endorsement on top of me.  Either would be unreliable and cause people to have poor models of me: I have not yet managed to excise my every unwanted trait, and my patterns of endorsement do not typically map on to the ones that the people around me have or expect me to have.

    Additionally, once you know what you like and don't like about your mind, you can begin to make progress in increasing the ratio of liked to unliked characteristics.  People often make haphazard lurches towards trying to be \"better people\", but when \"better\" means \"lines up more closely with vaguely defined commonsense intuitions about morality\", this is not the sort of goal we're at all good at pursuing.  Specific projects like being generous or more mindful are a step closer, but the greatest marginal benefit in self-revision comes of figuring out what comes in advance of behaving in a non-endorsed way and heading it off at the pass.  (More on this in \"Lampshading\".)  The odds are low that your brain's patterns align closely with conventional virtues well enough for them to be useful targets.  It's a better plan to identify what's already present, then endorse or repudiate these pre-sliced thoughts and work on them as they appear instead of sweeping together an unnatural category.

    " } }, { "_id": "WJ9t6FPPrN6ijBzXF", "title": "The I-Less Eye", "pageUrl": "https://www.lesswrong.com/posts/WJ9t6FPPrN6ijBzXF/the-i-less-eye", "postedAt": "2010-03-28T18:13:13.358Z", "baseScore": 44, "voteCount": 41, "commentCount": 91, "url": null, "contents": { "documentId": "WJ9t6FPPrN6ijBzXF", "html": "

    or: How I Learned to Stop Worrying and Love the Anthropic Trilemma

    \n

    Imagine you live in a future society where the law allows up to a hundred instances of a person to exist at any one time, but insists that your property belongs to the original you, not to the copies. (Does this sound illogical? I may ask my readers to believe in the potential existence of uploading technology, but I would not insult your intelligence by asking you to believe in the existence of a society where all the laws were logical.)

    \n

    So you decide to create your full allowance of 99 copies, and a customer service representative explains how the procedure works: the first copy is made, and informed he is copy number one; then the second copy is made, and informed he is copy number two, etc. That sounds fine until you start thinking about it, whereupon the native hue of resolution is sicklied o'er with the pale cast of thought. The problem lies in your anticipated subjective experience.

    \n

    After step one, you have a 50% chance of finding yourself the original; there is nothing controversial about this much. If you are the original, you have a 50% chance of finding yourself still so after step two, and so on. That means after step 99, your subjective probability of still being the original is 0.5^99, in other words as close to zero as makes no difference.

    \n

    Assume you prefer existing as a dependent copy to not existing at all, but preferable still would be existing as the original (in the eyes of the law) and therefore still owning your estate. You might reasonably have hoped for a 1% chance of the subjectively best outcome. 0.5^99 sounds entirely unreasonable!

    \n

    You explain your concerns to the customer service representative, who in turn explains that regulations prohibit making copies from copies (the otherwise obvious solution) due to concerns about accumulated errors (the technical glitches in the early versions of the technology that created occasional errors have long been fixed, but the regulations haven't caught up yet). However, they do have a prototype machine that can make all 99 copies simultaneously, thereby giving you your 1% chance.

    \n

    It seems strange that such a minor change in the path leading to the exact same end result could make such a huge difference to what you anticipate, but the philosophical reasoning seems unassailable, and philosophy has a superb track record of predictive accuracy... er, well the reasoning seems unassailable. So you go ahead and authorize the extra payment to use the prototype system, and... your 1% chance comes up! You're still the original.

    \n

    \"Simultaneous?\" a friend shakes his head afterwards when you tell the story. \"No such thing. The Planck time is the shortest physically possible interval. Well if their new machine was that precise, it'd be worth the money, but obviously it isn't. I looked up the specs: it takes nearly three milliseconds per copy. That's into the range of timescales in which the human mind operates. Sorry, but your chance of ending up the original was actually 0.5^99, same as mine, and I got the cheap rate.\"

    \n

    \"But,\" you reply, \"it's a fuzzy scale. If it was three seconds per copy, that would be one thing. But three milliseconds, that's really too short to perceive, even the entire procedure was down near the lower limit. My probability of ending up the original couldn't have been 0.5^99, that's effectively impossible, less than the probability of hallucinating this whole conversation. Maybe it was some intermediate value, like one in a thousand or one in a million. Also, you don't know the exact data paths in the machine by which the copies are made. Perhaps that makes a difference.\"

    \n

    Are you convinced yet there is something wrong with this whole business of subjective anticipation?

    \n

    \n

    Well in a sense there is nothing wrong with it, it works fine in the kind of situations for which it evolved. I'm not suggesting throwing it out, merely that it is not ontologically fundamental.

    \n

    We've been down this road before. Life isn't ontologically fundamental, so we should not expect there to be a unique answer to questions like \"is a virus alive\" or \"is a beehive a single organism or a group\". Mind isn't ontologically fundamental, so we should not expect there to be a unique answer to questions like \"at what point in development does a human become conscious\". Particles aren't ontologically fundamental, so we should not expect there to be a unique answer to questions like \"which slit did the photon go through\". Yet it still seems that I am alive and conscious whereas a rock is not, and the reason it seems that way is because it actually is that way.

    \n

    Similarly, subjective experience is not ontologically fundamental, so we should not expect there to be unique answer to questions involving subjective probabilities of outcomes in situations involving things like copying minds (which our intuition was not evolved to handle). That's not a paradox, and it shouldn't give us headaches, any more than we (nowadays) get a headache pondering whether a virus is alive. It's just a consequence of using concepts that are not ontologically fundamental, in situations where they are not well defined. It all has to boil down to normality -- but only in normal situations. In abnormal situations, we just have to accept that our intuitions don't apply.

    \n

    How palatable is the bullet I'm biting? Well, the way to answer that is to check whether there are any well-defined questions we still can't answer. Let's have a look at some of the questions we were trying to answer with subjective/anthropic reasoning.

    \n

    Can I be sure I will not wake up as Britney Spears tomorrow?

    \n

    Yes. For me to wake up as Britney Spears, would mean the atoms in her brain were rearranged to encode my memories and personality. The probability of this occurring is negligible.

    \n

    If that isn't what we mean, then we are presumably referring to a counterfactual world in which every atom is in exactly the same location as in the actual world. That means it is the same world. To claim there is or could be any difference is equivalent to claiming the existence of p-zombies.

    \n

    Can you win the lottery by methods such as \"Program your computational environment to, if you win, make a trillion copies of yourself, and wake them up for ten seconds, long enough to experience winning the lottery.  Then suspend the programs, merge them again, and start the result\"?

    \n

    No. The end result will still be that you are not the winner in more than one out of several million Everett branches. That is what we mean by 'winning the lottery', to the extent that we mean anything well-defined by it. If we mean something else by it, we are asking a question that is not well-defined, so we are free to make up whatever answer we please.

    \n

    In the Sleeping Beauty problem, is 1/3 the correct answer?

    \n

    Yes. 2/3 of Sleeping Beauty's waking moments during the experiment are located in the branch in which she was woken twice. That is what the question means, if it means anything.

    \n

    Can I be sure I am probably not a Boltzmann brain?

    \n

    Yes. I am the set of all subpatterns in the Tegmark multiverse that match a certain description. The vast majority of these are embedded in surrounding patterns that gave rise to them by lawful processes. That is what 'probably not a Boltzmann brain' means, if it means anything.

    \n

    What we want from a solution to confusing problems like the essence of life, quantum collapse or the anthropic trilemma is for the paradoxes to dissolve, leaving a situation where all well-defined questions have well-defined answers. That's how it worked out for the other problems, and that's how it works out for the anthropic trilemma.

    " } }, { "_id": "s3xNWtE3GxgSXwNtt", "title": "Mental Models", "pageUrl": "https://www.lesswrong.com/posts/s3xNWtE3GxgSXwNtt/mental-models", "postedAt": "2010-03-28T15:55:40.902Z", "baseScore": 18, "voteCount": 15, "commentCount": 23, "url": null, "contents": { "documentId": "s3xNWtE3GxgSXwNtt", "html": "

    Related: Fake explanation, Guessing the teachers password, Understanding your understanding, many more

    \n

    The mental model concept gets used so frequently and seems so intuitively obvious that I debated whether to bother writing this. But beyond the basic value that comes from unpacking our intuitions, it turns out that the concept allows a pretty impressive integration and streamlining of a wide range of mental phenomena.

    \n
    The basics: a mental model falls under the heading of mental representations, ways that the brain stores information. It's a specific sort of mental representation - one whose conceptual structure matches some corresponding structure in reality. In short, mental models are how we think something works.
    \n
    A mental model begins life as something like an explanatory black box - a mere correlation between items, without any understanding of the mechanism at work. \"Flick switch -> lamp turns on\" for example.  But a mere correlation doesn't give you much clue as to what's actually happening. If something stops working - if you hit the switch and the light doesn't go on - you don't have many clues as to why. This pre-model stage lacks the most important and useful portion; moving parts.
    \n
    \n
    The real power of mental models comes from putting something inside this black box  - moving parts that you can fiddle with to give you an idea of how something actually works. My basic lamp model will be improved quite a bit if I add the concept of a circuit to it, for instance. Once I've done that, the model becomes \"Flick switch -> switch completes circuit -> electricity flows through lightbulb-> lamp turns on\". Now if the light doesn't go on, I can play with my model to see what might cause that, finding that either the circuit is broken or no electricity is being provided. We learn from models the same way we learn from reality, by moving the parts around and seeing the results.  
    \n
It usually doesn't take much detail, or many moving parts, for something to \"click\" and make sense. For instance, I had a difficult time grasping the essence of imaginary numbers until I saw them modeled as a rotation, which instantly made all the bits and pieces I had gleaned about them fall into place.  A great deal of understanding rests in getting a few small details right. And once the basics are right, additional knowledge often changes very little. After you have the basic understanding of a circuit, learning about resistance and capacitors and alternating vs direct current won't change much about your lamp model. Because of this, the key to understanding something is often getting the basic model right - I suspect bursts of insight, a-ha moments, and magical \"clicks\" are often new mental models suddenly taking shape.
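For the imaginary-numbers example, the rotation model can be checked directly with ordinary complex arithmetic; a minimal sketch:

```python
import cmath

# The "rotation" picture of imaginary numbers: multiplying by i turns a point
# in the complex plane by 90 degrees counterclockwise.
z = 1 + 0j                                   # start at the point (1, 0)
for _ in range(4):
    angle = cmath.phase(z) * 180 / cmath.pi  # current angle in degrees
    print(z, "at", round(angle), "degrees")
    z *= 1j                                  # one quarter turn
# Four quarter turns bring z back to 1, just as i**4 == 1.
```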
    \n
Now let's really open this concept up, and see what it can do. For starters, the reason analogies and metaphors are so damn useful (and can be so damn misleading) is that they're little more than pre-assembled mental models for something. Diagrams provide their explanatory mechanism through essentially the same principle. Philip Johnson-Laird has formulated the processes of induction and deduction in terms of adjustments made to mental models. And building from the scenario concept used by Kahneman and Tversky, he's formulated a method of probabilistic thinking with them as well. Much of the work on heuristics and biases, in fact, either dovetails very nicely with mental models or can be explained by them directly.
    \n
For example, the brain seems to have a strong bias towards modifying an existing model vs. replacing it with a new one. Often in real life \"updating\" means \"changing your underlying model\", and the fact that we prefer not to causes us to make systematic errors. You see this writ large all the time with (among other things) people endlessly tweaking a theory that fails to explain the data, rather than throwing it out. Ptolemy's epicycles would be the prototypical example. Confirmation bias, various attribution biases and various data neglect biases can all be interpreted as favoring the models we already have. 
    \n
    The brain's favorite method for building models is to take parts from something else it already understands. Our best and most extensive experience is with objects moving in the physical world, so our models are often expressed in terms of physical objects moving about. They are, essentially, acting as intuition pumps. Of course, all models are wrong, but some are useful - as Dennett points out, converting problems to examples of something more familiar often allows us to solve problems much more easily.
    \n
One of the major design flaws of using mental models (aside from the biases they induce) is that our mental models always tend to feel like understanding, regardless of how many moving parts they have. So, for example, if the teacher asks \"why does fire burn\" and I answer \"because it's hot\", it feels like a real explanation, even if there aren't any moving parts that might explain what 'burn' or 'hot' actually mean. I suspect a bias towards short causal chains may be involved here. Of course, if the model stops working, or you find yourself needing to explain yourself, it becomes quite obvious that you do not, in fact, have the understanding that you thought you did. And unpacking what turns out to be an empty box is a fantastic way to trigger cognitive dissonance, which can have the nasty effect of entrenching your flawed model even deeper.
    \n
    So how can we maximize our use of mental models? Johnson-Laird tells us that \"any factor that makes it easier for individuals to flesh out explicit models of the premises should improve performance.\" Making clear what the moving parts are and couching it in terms of something already understood is going to help us build a better model, and a better model is equivalent to a better understanding. Again, this is not particularly groundbreaking - any problem solving technique will likely have the same insights.
    \n
    Ultimately, the mental model concept is itself just a model. I'm not familiar enough with the psychological literature to know if mental models are really the correct way to explain mental functions, or if it's merely another in a long list of similar concepts - belief, schema, framework, cognitive map, etc. But the fact that it's intuitively obvious and that it explains a large swath of brain function (without being too general) suggests that it's a useful concept to carry around, so I'll continue to do so until I have evidence that it's wrong.
    \n
    -
    \n
    Sources:
    \n
    Mental models and probabilistic thinking - Johnson-Laird (1994)
    \n
    Mental models concepts for system dynamics research - Doyle and Ford (1998)
    \n
The Design of Everyday Things - Norman
    \n
    Using concept maps to reveal conceptual typologies - Hay and Kinchin (2006)
    \n

     

    " } }, { "_id": "K3hFLRn7MvYacL466", "title": "It's not like anything to be a bat", "pageUrl": "https://www.lesswrong.com/posts/K3hFLRn7MvYacL466/it-s-not-like-anything-to-be-a-bat", "postedAt": "2010-03-27T14:32:52.050Z", "baseScore": 23, "voteCount": 46, "commentCount": 193, "url": null, "contents": { "documentId": "K3hFLRn7MvYacL466", "html": "

    ...at least not if you accept a certain line of anthropic argument.

    Thomas Nagel famously challenged the philosophical world to come to terms with qualia in his essay \"What is it Like to Be a Bat?\". Bats, with sensory systems so completely different from those of humans, must have exotic bat qualia that we could never imagine. Even if we deduce all the physical principles behind echolocation, even if we could specify the movement of every atom in a bat's senses and nervous system that represents its knowledge of where an echolocated insect is, we still have no idea what it's like to feel a subjective echolocation quale.

    Anthropic reasoning is the idea that you can reason conditioning on your own existence. For example, the Doomsday Argument says that you would be more likely to exist in the present day if the overall number of future humans was medium-sized instead of humongous, therefore since you exist in the present day, there must be only a medium-sized number of future humans, and the apocalypse must be nigh, for values of nigh equal to \"within a few hundred years or so\".
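To make the flavour of that update concrete, here is a minimal Bayesian sketch of the Doomsday Argument (the population figures are placeholders of my own, chosen only for illustration):

```python
# Toy Doomsday-style update: two hypotheses about how many humans will ever
# live, given a birth rank of roughly 60 billion. The numbers are illustrative.
birth_rank = 60e9
hypotheses = {
    "medium-sized future (200 billion humans ever)": 200e9,
    "humongous future (200 quadrillion humans ever)": 200e15,
}
assert all(birth_rank <= total for total in hypotheses.values())  # both fit our rank
prior = 0.5  # equal prior credence in each

# Self-sampling likelihood: if N humans ever live, the chance of having any
# particular birth rank among them is roughly 1/N.
unnormalized = {name: prior / total for name, total in hypotheses.items()}
total_mass = sum(unnormalized.values())
for name, mass in unnormalized.items():
    print(f"{name}: posterior ~ {mass / total_mass:.6f}")
# Nearly all the posterior lands on the medium-sized future -- the style of
# anthropic reasoning the rest of the post leans on.
```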

    The Buddhists have a parable to motivate young seekers after enlightenment. They say - there are zillions upon zillions of insects, trillions upon trillions of lesser animals, and only a relative handful of human beings. For a reincarnating soul to be born as a human being, then, is a rare and precious gift, and an opportunity that should be seized with great enthusiasm, as it will be endless eons before it comes around again.

    Whatever one thinks of reincarnation, the parable raises an interesting point. Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal.

    The phrase \"for me to be an animal\" may sound nonsensical, but \"why am I me, rather than an animal?\" is not obviously sillier than \"why am I me, rather than a person from the far future?\". If the doomsday argument is sufficient to prove that some catastrophe is preventing me from being one of a trillion spacefaring citizens of the colonized galaxy, this argument hints that something is preventing me from being one of a trillion bats or birds or insects.

    And this could be that animals lack subjective experience. This would explain quite nicely why I'm not an animal: because you can't be an animal, any more than you can be a toaster. So Thomas Nagel can stop worrying about what it's like to be a bat, and the rest of us can eat veal and foie gras guilt-free.

    But before we break out the dolphin sausages - this is a pretty weird conclusion. It suggests there's a qualitative and discontinuous difference between the nervous system of other beings and our own, not just in what capacities they have but in the way they cause experience. It should make dualists a little bit happier and materialists a little bit more confused (though it's far from knockout proof of either).

    The most significant objection I can think of is that it is significant not that we are beings with experiences, but that we know we are beings with experiences and can self-identify as conscious - a distinction that applies only to humans and maybe to some species like apes and dolphins who are rare enough not to throw off the numbers. But why can't we use the reference class of conscious beings if we want to? One might as well consider it significant only that we are beings who make anthropic arguments, and imagine there will be no Doomsday but that anthropic reasoning will fall out of favor in a few decades.

    But I still don't fully accept this argument, and I'd be pretty happy if someone could find a more substantial flaw in it.

    " } }, { "_id": "xLqmthhc2H5EzTMv5", "title": "Addresses in the Multiverse", "pageUrl": "https://www.lesswrong.com/posts/xLqmthhc2H5EzTMv5/addresses-in-the-multiverse", "postedAt": "2010-03-26T23:02:28.384Z", "baseScore": 6, "voteCount": 7, "commentCount": 21, "url": null, "contents": { "documentId": "xLqmthhc2H5EzTMv5", "html": "

Abstract: If we assume that any universe can be modeled as a computer program which has been running for finitely many steps, then we can assign a multiverse-address to every event by combining its world-program with the number of steps into the world-program where it occurs. We define a probability distribution over multiverse-addresses called a Finite Occamian Multiverse (FOM). FOMs assign negligible probability mass to being a Boltzmann brain or to being in a universe that implements the Many Worlds Interpretation of quantum mechanics.

    \n

One explanation of existence is the Tegmark level 4 multiverse, the idea that all coherent mathematical structures exist, and our universe is one of them. To make this meaningful, we must add a probability distribution over mathematical structures, effectively assigning each a degree of existence. Assume that the universe we live in can be fully modeled as a computer program, and that that program, and the number of steps it's been running for, are both finite. (Note that it's not clear whether our universe is finite or infinite; our universe is either spatially infinite, or expanding outwards at a rate greater than or equal to the speed of light, but there's no observation we could make inside the universe that would distinguish these two possibilities.) Call the program that implements our universe a world-program, W.  This could be implemented in any programming language - it doesn't really matter which, since we can translate between languages by prepending an interpreter of constant length.

    \n

    Now, suppose we choose a particular event in the universe - an atom emitting a photon, say - and we want to find a corresponding operation in the world-program. We could, in principle, run W until it starts working on the part of spacetime we care about, and count the steps. Call the number of steps leading up to this event T. Taken together, the pair (W,T) uniquely identifies a place, not just in the universe, but in the space of all possible universes. Call any such pair (W,T) a multiverse-address.
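A minimal sketch of what "run W and count the steps" could look like for a toy world-program (the world-program, the event test, and all names here are made-up illustrations, not anything proposed in the post):

```python
# Toy multiverse-address (W, T): step a world-program forward and record the
# step count T at which a chosen event first occurs. Everything here is a
# made-up illustration; a real W would be a physics simulation.

def toy_world_step(state):
    """One update step of a trivially simple 'universe' (a linear congruential rule)."""
    return (state * 1103515245 + 12345) % (2**31)

def find_T(world_step, is_event, initial_state, max_steps=10**6):
    state, t = initial_state, 0
    while t < max_steps:
        if is_event(state):
            return t            # T: the step at which the event occurs
        state = world_step(state)
        t += 1
    return None                 # event not reached within max_steps

# Arbitrary "event": the state drops below two million.
T = find_T(toy_world_step, lambda s: s < 2_000_000, initial_state=1234567890)
print("event first occurs at step T =", T)
```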

    Now, suppose we observe an event. What should be our prior probability distribution over multiverse-addresses for that event? That is, for a given event (W,T), what is P(W=X and T=Y)?

    \n

    For this question, our best (and pretty much only possible) tool is Occam's Razor. We're after a prior probability distribution, so we aren't going to bother including all the things we know about W from observation, except that we have good reason to believe that W is short - what we know of physics seems to indicate that at the most basic level, the rules are simple. So, first factor out W and apply Occam's Razor to it:

        P(W=X and T=Y) = P(W=X) * P(T=Y|W=X)
        P(W=X and T=Y) = exp(-len(W)) * P(T=Y|W=X)

    Now assume independence between T and W. This isn't entirely correct (some world-programs are structured in such a way that all the events we might be looking for happen on even time-steps, for example), but that kind of entanglement isn't important for our purposes. Then apply Occam's Razor to T, getting

        P(W=X and T=Y) = exp(-len(W)-len(T))

    \n

Now, applying Occam's razor to T requires some explanation, and there is one detail we have glossed over: we referred to the length of W and T (for T, roughly its logarithm), when we should have referred to their Kolmogorov complexity - that is, their length after compression. For example, a world-program that contains 10^10 random instructions is much less likely than one that contains 10^10 copies of the same instruction. Suppose we resolve this by requiring W to be fully compressed, and give it an initialization stage where it unpacks itself before we start counting steps for T.

This lets us transfer bits of complexity from T to W, by having W run itself for a while during the initialization stage. We can also transfer complexity from W to T, by writing W in such a way that it runs a class of programs in order, and T determines which of them it's running. Since we can transfer complexity back and forth between W and T, we can't justify applying Occam's Razor to one but not the other, so it makes sense to apply it to T. This also means that we should treat T as compressible; it is more likely that the universe is 3^^^3 steps old than that it is 207798236098322674 steps old.

To recap - we started by assuming that the universe is a computer program, W. We chose an event in W, corresponding to a computation that occurs after W has performed T operations. We assume that W and T are both finite. Occam's Razor tells us that W, if fully compressed, should be short. We can trade off complexity between W and T, so we should also apply Occam's Razor to T and expect that T, if fully compressed, should also be short. We had to assume that the universe behaves like a computer program, that that program is finite, and that the probability distribution which Occam's Razor gives us is actually meaningful here.

    \n

We then got P(W=X and T=Y) = exp(-len(W)-len(T)). Call this probability distribution a Finite Occamian Multiverse (FOM). We can define this in terms of different programming languages, reweighting the probabilities of different multiverse-addresses somewhat, but all FOMs share some interesting properties.
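A minimal sketch of this weighting in code, using zlib-compressed length as a crude, computable stand-in for "length after full compression" (true Kolmogorov complexity is uncomputable; the proxy and the example programs below are my own illustration):

```python
import math
import zlib

def compressed_len(data: bytes) -> int:
    """Rough stand-in for fully-compressed length; zlib is only a crude proxy
    for Kolmogorov complexity, which is uncomputable."""
    return len(zlib.compress(data, 9))

def fom_weight(world_program: bytes, timestep: int) -> float:
    """Un-normalized FOM weight exp(-len(W) - len(T)), with len(.) approximated
    by compressed length."""
    return math.exp(-compressed_len(world_program) - compressed_len(str(timestep).encode()))

# A short, regular world-program paired with a highly regular timestep tends to
# get more weight than a messier program paired with an arbitrary-looking timestep.
simple_w = b"x = 0\nwhile True: x += 1\n"
messy_w  = b"x = 70821\nwhile True: x = ((x * 48271 + 9381) % 104729) ^ 5881\n"

print(fom_weight(simple_w, 10**100))            # regular W, very compressible T
print(fom_weight(messy_w, 207798236098322674))  # messier W, arbitrary-looking T
```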

    \n

    A Finite Occamian Multiverse avoids the Boltzmann brain problem. A Boltzmann brain is a brain that, rather than living in a simulation with stable physics that allow it to continue to exist as the simulation advances, arises by chance out of randomly arranged particles or other simulation-components, and merely thinks (contains a representation of the claim that) it lives in a universe with stable physics. If you live in a FOM, then the probability that you are a Boltzmann brain is negligible because Boltzmann brains must have extremely complex multiverse-addresses, while evolved brains can have multiverse-addresses that are simple.

    If we are in a Finite Occamian Multiverse, then the Many Worlds interpretation of quantum mechanics must be false, because if it were true, then any multiverse address would have to contain the complete branching history of the universe, so its length would be proportional to the mass of the universe times the age of the universe. On the other hand, if branches were selected according to a pseudo-random process, then multiverse-addresses would be short. This sort of pseudo-random process would slightly increase the length of W, but drastically decrease the length of T. In other words, in this type of multiverse, worldeaters eat more complexity than they contain.

    If we are in a Finite Occamian Multiverse, then we might also expect certain quantities, such as the age and volume of the universe, to have much less entropy than otherwise expected. If, for example, we discovered that the universe had been running for exactly 3^^^3+725 time steps, then we could be reasonably certain that we were inside such a multiverse.

This kind of multiverse also sets an upper bound on the total amount of entropy (number of fully independent random bits) that can be gathered in one place, equal to the total complexity of that place's multiverse-address, since it would be possible to generate all of those bits from the multiverse-address by simulating the universe. However, since simulating the universe is intractable, the universe can still act as a very strong cryptographic pseudorandom number generator.

    " } }, { "_id": "6fvzjL4duMsWXswKf", "title": "Newcomb's problem happened to me", "pageUrl": "https://www.lesswrong.com/posts/6fvzjL4duMsWXswKf/newcomb-s-problem-happened-to-me", "postedAt": "2010-03-26T18:31:43.355Z", "baseScore": 51, "voteCount": 57, "commentCount": 99, "url": null, "contents": { "documentId": "6fvzjL4duMsWXswKf", "html": "\n

    Okay, maybe not me, but someone I know, and that's what the title would be if he wrote it.  Newcomb's problem and Kavka's toxin puzzle are more than just curiosities relevant to artificial intelligence theory.  Like a lot of thought experiments, they approximately happen.  They illustrate robust issues with causal decision theory that can deeply affect our everyday lives.

    \n

Yet somehow it isn't mainstream knowledge that these are more than merely abstract linguistic issues, as evidenced by this comment thread (please no Karma sniping of the comments, they are a valuable record).  Scenarios involving brain scanning, decision simulation, etc., can establish their validity and future relevance, but not that they are already commonplace.  For the record, I want to provide an already-happened, real-life account that captures the Newcomb essence and explicitly describes how.

    \n

So let's say my friend is named Joe.  In his account, Joe is very much in love with this girl named Omega… er… Kate, and he wants to get married.  Kate is somewhat traditional, and won't marry him unless he proposes, not only in the sense of explicitly asking her, but also expressing certainty that he will never try to leave her if they do marry.

    \n

    Now, I don't want to make up the ending here.  I want to convey the actual account, in which Joe's beliefs are roughly schematized as follows: 

    \n
      \n
1. if he proposes sincerely, she is effectively sure to believe it.
2. if he proposes insincerely, she will 50% likely believe it.
3. if she believes his proposal, she will 80% likely say yes.
4. if she doesn't believe his proposal, she will surely say no, but will not be significantly upset in comparison to the significance of marriage.
5. if they marry, Joe will 90% likely be happy, and will 10% likely be unhappy.
    \n

    He roughly values the happy and unhappy outcomes oppositely:

    \n
      \n
1. being happily married to Kate: 125 megautilons
2. being unhappily married to Kate: -125 megautilons.
    \n

    So what should he do?  What should this real person have actually done?1  Well, as in Newcomb, these beliefs and utilities present an interesting and quantifiable problem…

    \n\n

No surprise here, sincere proposal comes out on top.  That's the important thing, not the particular numbers.  In fact, in real life Joe's utility function assigned negative moral value to insincerity, broadening the gap.  But no matter; this did not make him sincere.  The problem is that Joe was a classical causal decision theorist, and he believed that if circumstances changed to render him unhappily married, he would necessarily try to leave her.  Because of this possibility, he could not propose sincerely in the sense she desired.  He could even appease himself by speculating about causes2 for how Kate could detect his uncertainty and constrain his options, but that still wouldn't make him sincere.

    \n

    Seeing expected value computations with adjustable probabilities for the problem can really help feel its robustness.  It's not about to disappear.  Certainties can be replaced with 95%'s and it all still works the same.  It's a whole parametrized family of problems, not just one. 

    \n
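For instance, here is one way to fill in the expected-value computation from Joe's schematized beliefs and utilities above. The only extra assumption (mine, for illustration) is that an insincere Joe keeps the option of leaving, so the unhappy-marriage branch is worth roughly 0 to him rather than -125 megautilons:

```python
# Expected-utility sketch for Joe's decision, using the probabilities listed above.
# Added assumption: an insincere Joe who ends up unhappy exercises the option to
# leave, valuing that branch at ~0 instead of -125 megautilons.

P_BELIEVE_IF_SINCERE   = 1.0
P_BELIEVE_IF_INSINCERE = 0.5
P_YES_IF_BELIEVED      = 0.8
P_HAPPY_IF_MARRIED     = 0.9
U_HAPPY, U_UNHAPPY, U_UNHAPPY_BUT_FREE = 125, -125, 0

def expected_utility(p_believe, u_unhappy_branch):
    p_marry = p_believe * P_YES_IF_BELIEVED
    return p_marry * (P_HAPPY_IF_MARRIED * U_HAPPY
                      + (1 - P_HAPPY_IF_MARRIED) * u_unhappy_branch)

print("sincere:  ", expected_utility(P_BELIEVE_IF_SINCERE, U_UNHAPPY))              # ~80
print("insincere:", expected_utility(P_BELIEVE_IF_INSINCERE, U_UNHAPPY_BUT_FREE))   # ~45
```

Swapping in different probabilities changes the numbers but, over a wide range, not the ordering, which is the robustness the previous paragraph is pointing at.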

    Joe's scenario feels strikingly similar to Newcomb's problem, and in fact it is:  if we change some probabilities to 0 and 1, it's essentially isomorphic: 

    \n
      \n
1. If he proposes sincerely, she will say yes.
2. If he proposes insincerely, she will say no and break up with him forever.
3. If they marry, he is 90% likely to be happy, and 10% likely to be unhappy.
    \n

The analogues of the two boxes are marriage (opaque) and the option of leaving (transparent).  Given marriage, the option of leaving has a small marginal utility of 10%·125 = 12.5 megautilons.  So \"clearly\" he should \"just take both\"?  The problem is that he can't just take both.  The proposed payout matrix would be:

    \n

Joe \ Kate           | Say yes                     | Say no
Propose sincerely    | Marriage                    | Nothing significant
Propose insincerely  | Marriage + option to leave  | Nothing significant
    \n

    \n

    The \"principal of (weak3) dominance\" would say the second row is the better \"option\", and that therefore \"clearly\" Joe should propose insincerely.  But in Newcomb some of the outcomes are declared logically impossible.  If he tries to take both boxes, there will be nothing in the marriage box.  The analogue in real life is simply that the four outcomes need not be equally likely

    \n

    So there you have it.  Newcomb happens.  Newcomb happened.  You might be wondering, what did the real Joe do

    \n

In real life, Joe actually recognized the similarity to Newcomb's problem, realizing for the first time that he must become an updateless decision agent, and noting his 90% certainty, he self-modified by adopting a moral pre-commitment to never leaving Kate should they marry, proposed to her sincerely, and the rest is history.  No joke!  That's if Joe's account is accurate, mind you.

    \n

     

    \n
    \n

    Footnotes:

    \n

    1 This is not a social commentary, but an illustration that probabilistic Newcomblike scenarios can and do exist.  Although this also does not hinge on whether you believe Joe's account, I have provided it as-is nonetheless. 

    \n

    2 If you care about causal reasoning, the other half of what's supposed to make Newcomb confusing, then Joe's problem is more like Kavka's (so this post accidentally shows how Kavka and Newcomb are similar).  But the distinction is instrumentally irrelevant:  the point is that he can benefit from decision mechanisms that are evidential and time-invariant, and you don't need \"unreasonable certainties\" or \"paradoxes of causality\" for this to come up. 

    \n

    3 Newcomb involves \"strong\" dominance, with the second row always strictly better, but that's not essential to this post.  In any case, I could exhibit strong dominance by removing \"if they do get married\" from Kate's proposal requirement, but I decided against it, favoring instead the actual account of events.

    " } }, { "_id": "cPJ9WhRBbT9PASiZ7", "title": "The Shabbos goy", "pageUrl": "https://www.lesswrong.com/posts/cPJ9WhRBbT9PASiZ7/the-shabbos-goy", "postedAt": "2010-03-26T16:16:55.362Z", "baseScore": 47, "voteCount": 44, "commentCount": 92, "url": null, "contents": { "documentId": "cPJ9WhRBbT9PASiZ7", "html": "

    Exodus 22:25, Leviticus 25:36, and Deuteronomy 23:20-21 forbid Jews from charging interest on loans to \"your brother\" (other Jews).  (This is to me the most convincing argument against Judaism and Christianity, because it's too simple to argue around.  That proscription is just wrong, in exactly the way you would expect laws written by uneducated tribal people to be wrong.)

    \n

    Roman Catholics believe they must follow the Old Testament laws, except for the ones they don't have to follow; but during much of the middle ages in Western Europe, this was one of the ones they had to follow.  They interpreted \"your brother\" as meaning \"brother Christians\".  So Jews could lend to Christians with interest (and, presumably, Christians could lend to Jews).  This was convenient for everyone.  The Jews were necessary to work around an irrational moral prohibition of the Christians.

    \n

    Of course, the Jews had to take on the guilt of violating the moral code, even though it was for the benefit of the Christians.  (This was also convenient; it meant that after some Jews had loaned you an especially large amount of money, you could kill or expel them instead of paying them back, as the Spanish monarchy did in 1492).

    \n

Later on, some orthodox Jews hired goyim to turn lightswitches and other electric devices on and off for them on the Sabbath.  Such a helper is called a Shabbos goy, the Sabbath goy (thanks, Alicorn!).

    \n

    JCVI is considering moving from an on-site hardware grid, to cloud computing.  There are lots of reasons to do this.  One is so that Amazon can be our Shabbos goy.

    \n

    \n

    We develop lots of bioinformatics software that we're supposed to, and would like to, give out to anyone who wants it.  But if you don't have 800 computers at home, connected using the Sun Grid Engine with a VICS interface and using a Sybase database, with exactly the same versions of C++ and Perl and every C++ and Perl library that we do, you're going to have a hard time running the software.

    \n

    We can't put up a web service and let anybody send their jobs to our computers, because then some professor is going to say to their freshman class of 200 students, \"Today, class, your assignment is to assemble a genome using JCVI's free genome assembly web service.\"

    \n

    If we could charge users just a little bit of money, just a fraction of the cost of running their programs, we could probably do this.  Then people wouldn't be so cavalier about running a program repeatedly that takes 500 CPU hours each time you run it.

    \n

    But we can't, because we're an academic institution.  So that would be evil.

    \n

    So we need a Shabbos goy.  That's Amazon.  We can release our software and tell users, \"All you have to do to run this is to get an account on the Amazon cloud and run it there.  Of course, they'll charge you for it.  They're evil.\"

    \n

    (The Amazon cloud is evil, BTW.  They charged me for 21G of RAM and then only gave me 12, and charged me for 24 1GHz processors and gave me about 1/4 of that.  I spent over $100 and was never able to run my program; and they told me to stuff it when I complained.  But that's another story.)

    \n

    Summary

    \n
    \n
    \n\n
    \n
    P.S. - My morals may require me to use the services of someone not adhering to my morals.  I believe in a moral ecosystem:  You don't hold your dog to your moral standard; and you don't remedy this by adopting your dog's moral standard.  But AFAIK I'm the only one.  People who believe in one universal moral code shouldn't use Shabbos goyim.  (I don't think that orthodox Jews believe gentiles are supposed to obey the Torah, so their use of Shabbos goyim may be logically consistent.)
    \n
    " } }, { "_id": "kD8uzcmjKwSaTHnQJ", "title": "Compartmentalization as a passive phenomenon", "pageUrl": "https://www.lesswrong.com/posts/kD8uzcmjKwSaTHnQJ/compartmentalization-as-a-passive-phenomenon", "postedAt": "2010-03-26T13:51:08.199Z", "baseScore": 62, "voteCount": 50, "commentCount": 72, "url": null, "contents": { "documentId": "kD8uzcmjKwSaTHnQJ", "html": "

    We commonly discuss compartmentalization as if it were an active process, something you do. Eliezer suspected his altruism, as well as some people's \"clicking\", was due to a \"failure to compartmentalize\". Morendil discussed compartmentalization as something to avoid. But I suspect compartmentalization might actually be the natural state, the one that requires effort to overcome.

    \n

    I started thinking about this when I encountered an article claiming that the average American does not know the answer to the following question:

    \n
    \n

    If a pen is dropped on a moon, will it:
    A) Float away
    B) Float where it is
    C) Fall to the surface of the moon

    \n
    \n

    Now, I have to admit that the correct answer wasn't obvious to me at first. I thought about it for a moment, and almost settled on B - after all, there isn't much gravity on the moon, and a pen is so light that it might just be unaffected. It was only then that I remembered that the astronauts had walked on the surface of the moon without trouble. Once I remembered that piece of knowledge, I was able to deduce that the pen quite probably would fall.

    \n

    A link on that page brought me to another article. This one described two students randomly calling 30 people and asking them the question above. 47 percent of them got the question correct, but what was interesting was that those who got it wrong were asked a follow-up question: \"You've seen films of the APOLLO astronauts walking around on the Moon, why didn't they fall off?\" Of those who heard it, about 20 percent changed their answer, but about half confidently replied, \"Because they were wearing heavy boots\".

    \n

    While these articles were totally unscientific surveys, it doesn't seem to me like this would be the result of an active process of compartmentalization. I don't think my mind first knew that pens would fall down because of gravity, but quickly hid that knowledge from my conscious awareness until I was able to overcome the block. What would be the point in that? Rather, it seems to indicate that my \"compartmentalization\" was simply a lack of a connection, and that such connections are much harder to draw than we might assume.

    \n

    The world is a complicated place. One of the reasons we don't have AI yet is because we haven't found very many reliable cross-domain reasoning rules. Reasoning algorithms in general are quickly subject to a combinatorial explosion: the reasoning system might know which potential inferences are valid ones, but not which ones are meaningful in any useful sense. Most current-day AI systems need to be more or less fine-tuned or rebuilt entirely when they're made to reason in a domain they weren't originally built for.

    \n

    For humans, it can be even worse than that. Many of the basic tenets in a variety of fields are counter-intuitive, or are intuitive but have counter-intuitive consequences. The universe isn't actually fully arbitrary, but for somebody who doesn't know how all the rules add up, it might as well be. Think of all the times when somebody has tried to reason using surface analogies, mistaking them for deep causes; or dismissed a deep cause, mistaking it for a surface analogy. Somebody might present us with a connection between two domains, but we have no sure way of testing the validity of that connection.

    \n

    Much of our reasoning, I suspect, is actually pattern recognition. We initially have no idea of the connection between X and Y, but then we see X and Y occur frequently together, and we begin to think of the connection as an \"obvious\" one. For those well-versed in physics, it seems mind-numbingly bizarre to hear someone claim that the Moon's gravity isn't enough to affect a pen, but is enough to affect people wearing heavy boots. But as for some hypothetical person who hasn't studied much physics... or screw the hypotheticals - for me, this sounds wrong but not obviously and completely wrong. I mean, \"the pen has less mass, so there's less stuff for gravity to affect\" sounds intuitively sorta-plausible for me, because I haven't had enough exposure to formal physics to hammer in the right intuition.

    \n

    I suspect that often when we say \"(s)he's compartmentalizing!\", we're operating in a domain that's more familiar to us, and thus it feels like an active attempt to keep things separate must be the cause. After all, how could they not see it, were they not actively keeping it compartmentalized?

    \n

    So my theory is that much of compartmentalization is simply because the search space is so large that people don't end up seeing that there might be a connection between two domains. Even if they do see the potential, or if it's explicitly pointed out to them, they might still not know enough about the domain in question (such as in the example of heavy boots), or they might find the proposed connection implausible. If you don't know which cross-domain rules and reasoning patterns are valid, then building up a separate set of rules for each domain is the safe approach. Discarding as much of your previous knowledge as possible when learning about a new thing is slow, but it at least guarantees that you're not polluted by existing incorrect information. Build your theories primarily on evidence found from a single domain, and they will be true within that domain. While there can certainly also be situations calling for an active process of compartmentalization, that might only happen in a minority of the cases.

    " } }, { "_id": "fZJRxYLtNNzpbWZAA", "title": "The mathematical universe: the map that is the territory", "pageUrl": "https://www.lesswrong.com/posts/fZJRxYLtNNzpbWZAA/the-mathematical-universe-the-map-that-is-the-territory", "postedAt": "2010-03-26T09:26:15.631Z", "baseScore": 97, "voteCount": 102, "commentCount": 124, "url": null, "contents": { "documentId": "fZJRxYLtNNzpbWZAA", "html": "

    This post is for people who are not familiar with the Level IV Multiverse/Ultimate Ensemble/Mathematical Universe Hypothesis, people who are not convinced that there’s any reason to believe it, and people to whom it appears believable or useful but not satisfactory as an actual explanation for anything.

    \n

    I’ve found that while it’s fairly easy to understand what this idea asserts, it is more difficult to get to the point where it actually seems convincing and intuitively correct, until you independently invent it for yourself. Doing so can be fun, but for those who want to skip that part, I’ve tried to write this post as a kind of intuition pump (of the variety, I hope, that deserves the non-derogatory use of that term) with the goal of leading you along the same line of thinking that I followed, but in a few minutes rather than a few years.

    \n
    \n

    Once upon a time, I was reading some Wikipedia articles on physics, clicking links aimlessly, when I happened upon a page then titled “Ultimate Ensemble”. It described a multiverse of all internally-consistent mathematical structures, thereby allegedly explaining our own universe — it’s mathematically possible, so it exists along with every other possible structure.

    \n

    Now, I was certainly interested in the question it was attempting to answer. It’s one that most young aspiring deep thinkers (and many very successful deep thinkers) end up at eventually: why is there a universe at all? A friend of mine calls himself an agnostic because, he says, “Who created God?” and “What caused the Big Bang?” are the same question. Of course, they’re not quite the same, but the fundamental point is valid: although nothing happened “before” the Big Bang (as a more naïve version of this query might ask), saying that it caused the universe to exist still requires us to explain what brought about the laws and circumstances allowing the Big Bang to happen. There are some hypotheses that try to explain this universe in terms of a more general multiverse, but all of them seemed to lead to another question: “Okay, fine, then what caused that to be the case?”

    \n

    The Ultimate Ensemble, although interesting, looked like yet another one of those non-explanations to me. “Alright, so every mathematical structure ‘exists’. Why? Where? If there are all these mathematical structures floating around in some multiverse, what are the laws of this multiverse, and what caused those laws? What’s the evidence for it?” It seemed like every explanation would lead to an infinite regress of multiverses to explain, or a stopsign like “God did it” or “it just exists because it exists and that’s the end of it” (I’ve seen that from several atheists trying to convince themselves or others that this is a non-issue) or “science can never know what lies beyond this point” or “here be dragons”. This was deeply vexing to my 15-year-old self, and after a completely secular upbringing, I suffered a mild bout of spirituality over the following year or so. Fortunately I made a full recovery, but I gave in and decided that Stephen Hawking was right that “Why does the universe bother to exist?” would remain permanently unanswerable.

    \n

    Last year, I found myself thinking about this question again — but only after unexpectedly making my way back to it while thinking about the idea of an AI being conscious. And the path I took actually suggested an answer this time. As I worked on writing it up, I noticed that it sounded familiar. After I remembered what that Wikipedia article was called, and after actually looking up Max Tegmark’s papers on it this time, I confirmed that it was indeed the same essential idea. (Don’t you hate/love it when you find out that your big amazing groundbreaking idea has already been advocated by someone smarter and more important than you? It’s so disappointing/validating.) One of the papers briefly explores reasoning similar to that which I had accidentally used to convince myself of it, but it’s an argument that I haven’t seen emphasized in any discussions of it hereabouts, and it’s one which seems inescapable with no assumptions outside of ordinary materialism and reductionism.

    \n

    I shall now get to the point.

    \n
    \n

    Suppose this universe is a computer simulation.

    \n

    It isn’t, but we’ll imagine for the next few paragraphs that it is.

    \n

    Suppose everything we see — and all of the Many Worlds that we don’t see, and everything in this World that is too distant for us to ever see — is the product of a precise simulation being performed by some amazing supercomputer. Let’s call it the Grand Order Deducer, or G.O.D. for short.

    \n

    Actually, let’s say that G.O.D. is not an amazing supercomputer, but a 386 with an insanely large hard drive. Obviously, we wouldn’t notice the slowness from the inside, any more than the characters in a movie would notice that your DVD player is being choppy.

    \n

    Clearly, then, if G.O.D. were turned off for a billion years, and then reactivated at the point where it left off, we wouldn’t notice anything either. How about if the state of the simulation were copied to a very different kind of computer (say, a prototypical tape-based universal Turing machine, or an immortal person doing lambda calculus operations by hand) and continued? If our universe’s physics turns out to be fundamentally time-symmetrical, then if G.O.D. started from the end of the universe and simulated backwards, would we experience our lives backwards? If it saved a copy of the universe at the beginning of your life and repeatedly ran the simulation from there until your death (if any), would it mean anything to say that you are experiencing your life multiple times? If the state of the simulation were copied onto a million identical computers, and continued thence on all of them, would we feel a million times as real (or would there be a million “more” of each of us in any meaningful sense), and would the implausibly humanlike agent who hypothetically created this simulation feel a million times more culpable for any suffering taking place within it? It would be hard to argue that any of this should be the case without resorting to some truly ridiculous metaphysics. Every computer is calculating the same thing, even the ones that don’t seem plausible as universe-containers under our intuitions about what a simulation would look like.

    \n

    But what, then, makes us feel real? What if, after G.O.D. has been turned off for a billion years… it stays off? If we can feel real while being simulated by a hundred computers, and no less real while being simulated by one computer, how about if we’re being simulated by zero computers? More concretely, and perhaps more disturbingly, if torturing a million identical simulations is the same thing as torturing one (I’d argue that it is), is torturing one the same as torturing zero?

    \n

    2 + 2 will always be 4 whether somebody is computing it or not. (No Platonism is necessary here; only the Simple Truth that taking the string “2 + 2” and applying certain rules of inference to it always results in the string “4”.) Similarly, even if this universe is nothing but a hypothetical, not being computed by anyone, not existing in anything larger, there are certain things that are necessarily true about the hypothetical, including facts about the subjective mental states of us self-aware substructures. Nothing magical happens when a simulation runs. Most of us agree that consciousness is probably purely mechanistic, and that we could therefore create a conscious AI or emulate an uploaded brain, and that it would be just as conscious as we are; that if we could simulate Descartes, we’d hear him make the usual arguments about the duality of the material body and the extraphysical mind, and if we could simulate Chalmers, he’d come to the same familiar nonsensical conclusions about qualia and zombies. But the fact remains that it’s just a computer doing what computers always do, with no special EXIST or FEEL opcodes added to its instruction set. If a mind, from the outside, can be a self-contained and timeless structure, and the full structure can be calculated (within given finite limits) from some initial state by a normal computer, then its consciousness is a property of the structure itself, not of the computer or the program — the program is not causing it, it’s just letting someone notice it. So deep runs the dualist intuition that even when we have reduced spirits and consciousness and free will to normal physical causality, there’s still sometimes a tendency to think as though turning on a sufficiently advanced calculator causes something to mysteriously blink into existence or awareness, when all it is doing is reporting facts about some very large numbers that would be true one way or the other.

    \n

    G.O.D. is doing the very same thing, just with numbers that are even more unimaginably huge: a universe instead of an individual mind. The distilled and generalized argument is thus: If we can feel real inside a non-magical computer simulation, then our feeling of reality must be due to necessary properties of the information being computed, because such properties do not exist in the abstract process of computing, and those properties will not cease to be true about the underlying information if the simulation is stopped or is never created in the first place. This is identically true about every other possible reality.

    \n

    By Occam’s Razor, I conclude that if a universe can exist in this way — as one giant subjunctive — then we must accept that that is how and why our universe does exist; even if we are being simulated on a computer in some outer universe, or if we were created by an actual deity (which, from a non-intervening deity’s perspective, would probably look about the same as running a simulation anyway), or if there is some other explanation for this particular universe, we now see that this would not actually be the cause of our existence. Existence is what mathematical possibility feels like from the inside. Turn off G.O.D., and we’ll go on with our lives, not noticing that anything has changed. Because the only thing that has changed is that the people who were running the simulation won’t get to find out what happens next.

    \n
    \n

    Tegmark has described this as a “theory of everything”. I’d discourage that use, merely as a matter of consistency with common usage; conventionally, “theory of everything” refers to the underlying laws that define the regularities of this universe, and whatever heroic physicists eventually discover those laws should retain the honour of having their theory known as such. As a metaphysical theory (less arbitrary than conventional metaphysics, but metaphysical nonetheless), this does not fit that description; it gives us almost no useful information about our own universe. It is a theory of more than everything, and a theory of nothing (in the same way that a program that prints out every possible bit string will eventually print out any given piece of information, while its actual information content is near zero).
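The parenthetical about a program that prints every possible bit string can be made concrete in a few lines (the enumeration order below is an arbitrary choice of mine):

```python
from itertools import count, product

def all_bit_strings():
    """Yield every finite bit string, shortest first: '', '0', '1', '00', '01', ..."""
    yield ""
    for length in count(1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

# Any given finite string eventually appears in the output, yet the generator
# itself is only a few lines long -- its own information content is near zero.
gen = all_bit_strings()
print([next(gen) for _ in range(7)])   # ['', '0', '1', '00', '01', '10', '11']
```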

    \n

    That said, this theory and the argument I presented are not entirely free of implications about and practical applications within this particular universe. Here are some of them.

    \n\n
    \n

    One last comment: some people I’ve discussed this with have actually taken it as a reductio ad absurdum against the idea that a being within a simulation could feel real. As we say, one person’s modus ponens is another person’s modus tollens. Since the conclusion I’m arguing for is merely unusual, not inconsistent (as far as I can tell), that takes out the absurdum; therefore, in the apparent absence of any specific alternatives at all, you can weigh the probability of this hypothesis against the stand-in alternatives that there is something extraphysical about our own existence, something noncomputable about consciousness, or something metaphysically significant about processes equivalent to universal computation (or any other alternatives that I’ve neglected to think of).

    \n

    Finally, as I mentioned, the main goal of this post was to serve as an intuition pump for the Level IV Multiverse idea (and to point out some of the rationality-related questions it raises, so we’ll have something apropos to discuss here), not to explore it in depth. So if this was your first exposure to it, you should probably read Max Tegmark’s The Mathematical Universe now.

    " } }, { "_id": "cJSCTtyJmykHofkGm", "title": "Maximise Expected Utility, not Expected Perception of Utility", "pageUrl": "https://www.lesswrong.com/posts/cJSCTtyJmykHofkGm/maximise-expected-utility-not-expected-perception-of-utility", "postedAt": "2010-03-26T04:39:17.551Z", "baseScore": 14, "voteCount": 15, "commentCount": 13, "url": null, "contents": { "documentId": "cJSCTtyJmykHofkGm", "html": "

Suppose we are building an agent, and we have a particular utility function U over states of the universe that we want the agent to optimize for. So we program into this agent a function CalculateUtility that computes the value of U given its current knowledge. Then we can program it to make decisions by searching through its available actions for the one that maximizes its expectation of the result of running CalculateUtility. But wait, how will an agent with this programming behave?

    \n

Suppose the agent has the opportunity (option A) to arrange to falsely believe the universe is in a state that is worth utility uFA but this action really leads to a different state worth utility uTA, and a competing opportunity (option B) to actually achieve a state of the universe that has utility uB, with uTA < uB < uFA. Then the agent will expect that if it takes option A that its CalculateUtility function will return uFA, and if it takes option B that its CalculateUtility function will return uB. uFA > uB, so the agent takes option A, and achieves a state of the universe with utility uTA, which is worse than the utility uB it could have achieved if it had taken option B. This agent is not a very effective optimization process1. It would rather falsely believe that it has achieved its goals than actually achieve its goals. This sort of problem2 is known as wireheading.

    \n

Let us back up a step, and instead program our agent to make decisions by searching through its available actions for the one whose expected result maximizes its current calculation of CalculateUtility. Then, the agent would calculate that option A gives it expected utility uTA and option B gives it expected utility uB. uB > uTA, so it chooses option B and actually optimizes the universe. That is much better.

    \n
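A minimal sketch of the two decision rules side by side, using the option A / option B setup above (the concrete numbers are mine, chosen only to satisfy uTA < uB < uFA):

```python
# Two decision rules for the option A / option B scenario above.
# Illustrative numbers with uTA < uB < uFA.
options = {
    #                        what CalculateUtility will report / what is actually achieved
    "A (self-deception)":   {"perceived_after_acting": 100, "actual": -50},  # uFA, uTA
    "B (real achievement)": {"perceived_after_acting": 60,  "actual": 60},   # uB
}

def choose(key):
    return max(options, key=lambda name: options[name][key])

# Rule 1: maximize the expected future output of CalculateUtility -- the agent
# prefers the comforting false belief (it wireheads).
print(choose("perceived_after_acting"))   # -> A (self-deception)

# Rule 2: apply the current CalculateUtility to the actual expected outcome --
# the agent optimizes the universe rather than its perception of it.
print(choose("actual"))                   # -> B (real achievement)
```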

    So, if you care about states of the universe, and not just your personal experience of maximizing your utility function, you should make choices that maximize your expected utility, not choices that maximize your expectation of perceived utility.

    \n

     

    \n
    \n

    1. We might have expected this to work, because we built our agent to have beliefs that correspond to the actual state of the world.

    \n

     

    \n

    2. A similar problem occurs if the agent has the opportunity to modify its CalculateUtility function, so it returns large values for states of the universe that would have occurred anyways (or any state of the universe).

    " } }, { "_id": "RrNvJsZr9MfedtnZF", "title": "Newcomb's problem happened to me", "pageUrl": "https://www.lesswrong.com/posts/RrNvJsZr9MfedtnZF/newcomb-s-problem-happened-to-me-0", "postedAt": "2010-03-25T20:53:55.266Z", "baseScore": 2, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "RrNvJsZr9MfedtnZF", "html": "\n

    Okay, maybe not me, but someone I know, and that's what the title would be if he wrote it.  Newcomb's problem and Kavka's toxin puzzle are more than just curiosities.  Like a lot of thought experiments, they approximately happen.  They make the issues with causal decision theory relevant, not only to designing artificial intelligence, but to our everyday lives as well. 

    \n

Yet somehow it isn't mainstream knowledge that these are more than merely abstract linguistic issues, as evidenced by this comment thread (please no Karma sniping of the comments, they are a valuable record).  Scenarios involving brain scanning, decision simulation, etc., can establish their validity and future relevance, but not that they are already commonplace.  I want to provide an already-happened, real-life account that captures the Newcomb essence.

    \n

So let's say my friend is named Joe.  In his account, Joe is very much in love with this girl named Omega… er… Kate, and he wants to get married.  Kate is somewhat traditional, and won't marry him unless he proposes, not only in the sense of explicitly asking her, but also expressing certainty that he will never try to leave her if they do marry.

    \n

    At this point, many of you could easily make up a simple conclusion to this post.  As such, I want to convey the actual account, in which Joe's beliefs are roughly schematized as follows: 

    \n
      \n
1. if he proposes sincerely, she is effectively sure to believe it.
2. if he proposes insincerely, she will 50% likely believe it.
3. if she believes his proposal, she will 80% likely say yes.
4. if she doesn't believe his proposal, she will surely say no, but will not be significantly upset in comparison to the significance of marriage.
5. if they marry, Joe will 90% likely be happy, and will 10% likely be unhappy.
    \n

    He roughly values the happy and unhappy outcomes oppositely:

    \n
      \n
1. being happily married to Kate: 125 megautilons
2. being unhappily married to Kate: -125 megautilons.
    \n

    So what should he do?  What should this real person have actually done?1  Well, as in Newcomb, these beliefs and utilities present an interesting and quantifiable problem…

    \n\n

    No surprise here, sincere proposal comes out on top.  That's the important thing, not the particular numbers.  In fact, in real life Joe's utility function assigned negative moral value to insincerity, broadening the gap.  But no matter; this did not make him sincere.  The problem is that Joe was a causal decision theorist, and he believed that if circumstances changed to render him unhappily married, he would necessarily try to leave her.  Because of this possibility, he could not propose sincerely in the sense she desired.

    \n

    This feels strikingly similar to Newcomb's problem, and in fact it is: if we change some probabilities to 0 and 1, it's essentially isomorphic:

    \n
      \n
1. If he proposes sincerely, she will say yes.
2. If he proposes insincerely, she will say no and break up with him forever.
3. If they marry, he is 90% likely to be very happy, and 10% likely to be very unhappy.
    \n

The analogues of the two boxes are marriage (opaque) and the option of leaving (transparent).  Given marriage, the option of leaving has a small marginal utility of 10%·125 = 12.5 megautilons.  So \"clearly\" he should \"just take both\"?  The problem is that he can't just take both.  The proposed payout matrix would be:

    \n

Joe \ Kate           | Say yes                     | Say no
Propose sincerely    | Marriage                    | Nothing significant
Propose insincerely  | Marriage + option to leave  | Nothing significant
    \n

    \n

    The \"principal of (weak2) dominance\" would say the second row is the better \"option\", and that therefore \"clearly\" Joe should propose insincerely.  But in Newcomb some of the outcomes are declared logically impossible.  If he tries to take both boxes, there will be nothing in the marriage box.  The analogue in real life is simply that the four outcomes need not be equally likely

    \n

    So there you have it.  Newcomb happens.  Newcomb happened.  You might be wondering, what did Joe actually do? 

    \n

    In real life, Joe became a timeless decision theorist, and noting his 90% certainty, self-modified by adopting a moral pre-commitment to never leaving Kate should they marry, proposed to her sincerely, and the rest is history.  No joke!  That's if Joe's account is accurate, mind you.

    \n

     

    \n
    \n

    Footnotes:

    \n

    1 This is not a social commentary, but an illustration that probabilistic Newcomblike scenarios can and do exist.  Although this also does not hinge on whether you believe Joe's account, I have provided it as-is nonetheless.  I would hope that there are other similar accounts written down somewhere, but I haven't seen them, so I've provided his. 

    \n

    2 Newcomb involves \"strong\" dominance, with the second row always strictly better, but that's not essential to this post.  In any case, I could exhibit strong dominance by removing \"if they do get married\" from Kate's proposal requirement, but I decided against it, favoring instead the actual account of events.

    " } }, { "_id": "nitfKbhkM5xkLnHkQ", "title": "Over-encapsulation", "pageUrl": "https://www.lesswrong.com/posts/nitfKbhkM5xkLnHkQ/over-encapsulation", "postedAt": "2010-03-25T17:58:56.809Z", "baseScore": 29, "voteCount": 18, "commentCount": 56, "url": null, "contents": { "documentId": "nitfKbhkM5xkLnHkQ", "html": "

Take a look at \"Role of Layer 6 of V2 Visual Cortex in Object-Recognition Memory\", Science, 3 July 2009: Vol. 325, No. 5936, pp. 87-89.  The article has some good points, but I'm going to pick on some of its tests.

    \n

    The experimenters believed they could enhance object-recognition memory (ORM) by using a lentivirus to insert a gene into area V2 of visual cortex.  They tested the ORM of rats by putting an object in a field with a rat, and then putting either the same object (\"old\"), or a different object (\"new\"), in the field 30, 45, or 60 minutes later.  The standard assumption is that rats spend more time investigating unfamiliar than familiar objects.

    \n

    They chose this test:  For each condition, measure the difference in mean time spent investigating the old object vs. the new object.  If the latter is more than the former, and the difference is statistically-significant, conclude that the rats recognized the old object.

    \n

Figure 1 Graph A (below the article summary cutoff) shows how much time normal rats spent investigating an object.  Here it is in table form: How much time the rats spent exploring old and new objects:

    Minutes after first exposure    30    45    60
    Old                              8    12    14
    New                             17    28    14
    \n

    The black bars (new) are significantly longer than the white bars (old) after 30 and 45 minutes, but not after 60 minutes.  Therefore, the normal rats recognized the old objects after 30 and 45 minutes, but not after 60 minutes.

    \n

    Figure 3 Graph D (also below the article-summary cutoff) shows how much time different types of rats spent exploring old and new objects.  The \"RGS\" group is rats given the gene therapy, but in parietal cortex rather than in V2.

    \n

    Here it is in HTML form: How much time the rats spent exploring old and new objects, by rat type:

    Rat type    Normal (after 45 min)    Parietal RGS (after 60 min)
    Old         10                       11
    New         27                       12
    \n

    Parietal RGS rats displayed no difference in time spent exploring old and new objects after 60 minutes; therefore, this gene therapy to parietal cortex does not improve ORM.

    \n

    To recap:

    \n
      \n
    1. We conclude that rats no longer recognize an old object if they spend about the same time investigating it as investigating a new object.
    2. Normal rats spend the same time investigating old and new objects 60 minutes after first exposure to the old object.
    3. Parietal RGS rats also spend the same time investigating old and new objects after 60 minutes.
    4. Therefore, normal rats and parietal RGS rats both lose ORM by 60 minutes.
    \n

    So why don't I buy it?

    \n

    \n

    Figure 1 (look at A).

    \n

    \"Fig.

    \n

    Figure 3 (look at D):

    \n

    \"Fig.

    \n

    (Original image is here.)

    \n

    The investigators were trying to determine when rats recognized an old object.  So what's most relevant is how much time they spent investigating the old object.  The time spent investigating new objects is probably supposed to control for variations in their testing procedure.

    \n

    But in both of the graphs, we see that they are claiming that rats failed to recognize an old object in the 60-minute condition, even though they spent the same amount of time investigating it as in the other conditions.  The difference was only in their response to new objects.  The test methodology assumes that the response to new objects is always the same.

    \n

    Look at the error bars on those graphs.  The black bars are supposed to all be the same height (except in 1B and 1C).  Yet we see they differ across conditions by what looks like about 10 standard deviations in several cases.

    \n

    When you regularly get 10 standard deviations of difference in your control variable across cases, you shouldn't say, \"Gee, lucky thing I used that control variable!  Otherwise I never would have noticed the large, significant difference between the test and control cases.\"  No; you say, \"Gee, something is wrong with my experimental procedure.\"
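As a rough illustration of the sanity check this implies, here is a sketch that asks whether the new-object (control) times are mutually consistent.  The means are read off the tables above; the standard errors are assumptions, since the paper's error bars are not reproduced here.

```python
# Sketch: are the 'new object' control bars compatible with a single underlying
# value?  Means come from Figure 1A as tabulated above; SEMs are assumed.
import math

new_means = {'30 min': 17.0, '45 min': 28.0, '60 min': 14.0}  # seconds
new_sems = {'30 min': 1.5, '45 min': 1.5, '60 min': 1.5}      # assumed standard errors

def z_difference(m1, s1, m2, s2):
    # z-score of the difference between two independent means
    return (m1 - m2) / math.sqrt(s1 ** 2 + s2 ** 2)

z = z_difference(new_means['45 min'], new_sems['45 min'],
                 new_means['60 min'], new_sems['60 min'])
print(f'new objects, 45 min vs 60 min: z = {z:.1f}')  # about 6.6 with these SEMs
```

Even with these generously assumed error bars, the 45-minute and 60-minute control bars sit more than six standard errors apart; a discrepancy of that size in the control condition is exactly the kind of result that should call the procedure into question rather than be quietly absorbed by the pre-planned comparison.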

    \n

    A couple of other things to notice, in addition to the comments above:

    \n\n

    One subtle type of error is committed disproportionately by scientists, because it's a natural by-product of the scientific process of abstracting a theory into a testable hypothesis.  A scientist is supposed to formulate a test before performing the test, to avoid introducing bias into the test formulation in order to get the desired results.  Over-encapsulation is when the scientist performs the test, and examines the results according to the previously-established criteria, without noticing that the test results invalidate the assumptions used to formulate the test.  I call it \"over-encapsulation\" because the scientist has tried to encapsulate the reasoning process in a box, and put data into the box and get decisions out of it; and the journey into and out of the box strips off relevant but unanticipated information.

    \n

    Over-encapsulation is especially tricky when you're reasoning about decision theory.  It's possible to construct a formally-valid evaluation of the probabilities of different cases; and then take those probabilities and choose an action based on them using some decision theory, without noticing that some of the cases are inconsistent with the assumptions used in your decision theory.  I hope to write another, more controversial post on this someday.

    " } }, { "_id": "vaZYAs7tsDriNkoAP", "title": "SIA won't doom you", "pageUrl": "https://www.lesswrong.com/posts/vaZYAs7tsDriNkoAP/sia-won-t-doom-you", "postedAt": "2010-03-25T17:43:06.467Z", "baseScore": 12, "voteCount": 15, "commentCount": 32, "url": null, "contents": { "documentId": "vaZYAs7tsDriNkoAP", "html": "

    Katja Grace has just presented an ingenious model, claiming that SIA combined with the great filter generates its own variant of the doomsday argument. Robin echoed this on Overcoming Bias. We met soon after Katja had come up with the model, and I signed up to it, saying that I could see no flaw in the argument.

    \n

    Unfortunately, I erred. The argument does not work in the form presented.

    \n

    First of all, there is the issue of time dependence. We are not just a human level civilization drifting through the void in blissful ignorance about our position in the universe. We know (approximately) the age of our galaxy, and the time elapsed since the big bang.

    \n

    How is this relevant? It is relevant because all arguments about the great filter are time-dependent. Imagine we had just reached consciousness and human-level civilization, by some fluke, two thousand years after the creation of our galaxy, by an evolutionary process that took two thousand years. We see no aliens around us. In this situation, we have no reason to suspect any great filter; if we asked ourselves \"are we likely to be the first civilization to reach this stage?\" then the answer is probably yes. No evidence for a filter.

    \n

    Imagine, instead, that we had reached consciousness a trillion years into the life of our galaxy, again via an evolutionary process that took two thousand years, and we see no aliens or traces of aliens. Then the evidence for a filter is overwhelming; something must have stopped all those previous likely civilizations from emerging into the galactic plane.

    \n

    So neither of these civilizations can be included in our reference class (indeed, the second one can only exist if we ourselves are filtered!). So the correct reference class to use is not \"the class of all potential civilizations in our galaxy that have reached our level of technological advancement and seen no aliens\", but \"the class of all potential civilizations in our galaxy that have reached our level of technological advancement at around the same time as us and seen no aliens\". Indeed, SIA, once we update on the present, cannot tell us anything about the future.

    \n

    But there's more. Let us lay aside, for the moment, the issue of time dependence. Let us instead consider the diagrams in Katja's post as if the vertical axis were time: all potential civilizations start at the same point, and progress at the same rate. Is there still a role for SIA?

    \n

    The answer is... it depends. It depends entirely on your choice of prior. To illustrate this, consider this pair of early-filter worlds:

    \n

    \"\"

    \n

    To simplify, I've flattened the diagram, and now consider only two states: human civilizations and basic lifeforms. And here are some late filter worlds:

    \n

    \"\"

    \n

Assign an equal prior of 1/4 to each one of these worlds. Then the prior probability of living in a late filter world is (1/4+1/4)=1/2, and the same holds for early filter worlds.

    \n

Let us now apply SIA. This boosts the probability of Y and B at the expense of A and X. Y and B end up having a probability of 1/3, while A and X end up having a probability of 1/6. The posterior probability of living in a late filter world is (1/3+1/6)=1/2, and the same goes for early filter worlds. Applying SIA has not changed the odds of late versus early filters.

    \n

    But people might feel this is unfair; that I have loaded the dice, especially by giving world Y the same prior as the others. It has too many primitive lifeforms; it's too unlikely. Fine then; let us give prior probabilities as follows:

    \n

    X       Y       A       B
    2/30    1/30    18/30   9/30
    \n

    \n

This prior does not exactly over-weight the chance of human survival! The prior probability of a late filter is (18/30+9/30)=9/10, while that of an early filter is 1/10. But now let us consider how SIA changes those odds: Y and B are weighted by a factor of two, while X and A are weighted by a factor of one. The posterior probabilities are thus:

    \n

    X       Y       A       B
    1/20    1/20    9/20    9/20
    \n

    \n

The posterior probability of a late filter is (9/20+9/20)=9/10, the same as before: again SIA has not changed the probability of where the filter is. But it gets worse; if, for instance, we had started with the priors:

    \n

    X       Y       A       B
    1/30    2/30    18/30   9/30
    \n

    \n

This is the same as before, but with X and Y swapped. The early filter still has only one chance in ten, a priori. But now if we apply SIA, the posterior probabilities of X and Y are 1/41 and 4/41, totalling 5/41 > 1/10. Here applying SIA has increased our chances of survival!
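A quick way to verify these three worked examples is to write the SIA update directly: weight each world's prior by its observer count (here, as in the diagrams, Y and B contain twice as many observers at our stage as X and A) and renormalise.  The sketch below only reproduces the arithmetic above.

```python
# Verify the three examples: SIA reweights priors by observer count and renormalises.

def sia_posterior(priors, observers):
    # multiply each prior by that world's observer count, then renormalise
    weighted = {w: priors[w] * observers[w] for w in priors}
    total = sum(weighted.values())
    return {w: weighted[w] / total for w in weighted}

observers = {'X': 1, 'Y': 2, 'A': 1, 'B': 2}   # Y and B have twice as many observers

for priors in [
    {'X': 1/4, 'Y': 1/4, 'A': 1/4, 'B': 1/4},       # equal priors
    {'X': 2/30, 'Y': 1/30, 'A': 18/30, 'B': 9/30},  # second example
    {'X': 1/30, 'Y': 2/30, 'A': 18/30, 'B': 9/30},  # X and Y swapped
]:
    post = sia_posterior(priors, observers)
    early = post['X'] + post['Y']
    print({w: round(p, 3) for w, p in post.items()}, 'early filter:', round(early, 3))
```

The printed early-filter probabilities are 0.5, 0.1 and about 0.122, matching the 1/2, 1/10 and 5/41 figures in the text.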

    \n

In general there are a lot of reasonable priors over possible worlds where SIA makes little or no difference to the odds of the great filter, either way.

    \n

     

    \n

    Conclusion: Do I believe that this has demonstrated that the SIA/great filter argument is nonsense? No, not at all. I think there is a lot to be gained from analysing the argument, and I hope that Katja or Robin or someone else - maybe myself, when I get some spare time, one of these centuries - sits down and goes through various scenarios, looks at classes of reasonable priors and evidence, and comes up with a conclusion about what exactly SIA says about the great filter, the strength of the effect, and how sensitive it is to prior changes. I suspect that when the dust settles, SIA will still slightly increase the chance of doom, but that the effect will be minor.

    \n

    Having just saved humanity, I will now return to more relaxing pursuits.

    " } }, { "_id": "TaPr4YSBbiakeKdwX", "title": "More thoughts on assertions", "pageUrl": "https://www.lesswrong.com/posts/TaPr4YSBbiakeKdwX/more-thoughts-on-assertions", "postedAt": "2010-03-25T01:39:12.829Z", "baseScore": 28, "voteCount": 28, "commentCount": 31, "url": null, "contents": { "documentId": "TaPr4YSBbiakeKdwX", "html": "

    Response to: The \"show, don't tell\" nature of argument

    \n

Morendil says not to trust simple assertions. He's right, for the particular class of simple assertions he's talking about. But in order to see why, let's look at different types of assertions and see how useful it is to believe them.

    \n

    Summary:
    - Hearing an assertion can be strong evidence if you know nothing else about the proposition in question.
    - Hearing an assertion is not useful evidence if you already have a reasonable estimate of how many people do or don't believe the proposition.
    - An assertion by a leading authority is stronger than an assertion by someone else.
    - An assertion plus an assertion that there is evidence makes no factual difference, but is a valuable signal.

    \n

    Unsupported assertions about non-controversial topics

    \n

    Consider my assertion: \"The Wikipedia featured article today is on Uriel Sebree\". Even if you haven't checked Wikipedia today and have no evidence on this topic, you're likely to believe me. Why would I be lying?

    This can be nicely modeled in Bayesian terms - you start with a prior evenly distributed across Wikipedia topics, the probability of me saying this conditional on it being false is pretty low, and the probability of me saying it conditional on it being true is pretty high. So noting that I said it nicely concentrates probability mass in the worlds where it's true. You're totally justified in believing it. The key here is that you have no reason to believe there's a large group of people who go around talking about Uriel Sebree being on Wikipedia regardless of whether or not he really is.

    Unsupported assertions about controversial topics

    \n

    The example given in Morendil's post is that some races are biologically less intelligent than others. Let's say you have no knowledge of this whatsoever. You're so naive you don't even realize it might be controversial. In this case, someone who asserts \"some races are biologically less intelligent than others\" is no less believable than someone who asserts \"some races have slightly different frequencies of pancreatic cancer than others.\" You'd accept the second as the sort of boring but reliable biological fact that no one is particularly prone to lie about, and you'd do the same with the first.

    Now let's say you're familiar with controversies in sociology and genetics, you already know that some people believe some races are biologically more intelligent, and other people don't. Let's say you gauge the people around you and find that about 25% of people agree with the statement and 75% disagree.

    This survey could be useful. You have to ask yourself - is this statement about race and genetics more likely to have the support of a majority of people in a world where it's true than in a world where it's false? \"No\" is a perfectly valid answer here - you might think people are so interested in signalling that they're not racist that they'll completely suspend their rational faculties. But \"yes\" is also a valid answer here if you think that the people around you have reasonably intelligent opinions on the issue. This would be a good time to increase your probability that it's true.

    Now I, a perfectly average member of the human race, make the assertion that I believe that statement. But from your survey, you already have information that negates any evidence from my belief - that given that the statement is false and there's a 25% belief rate, there's a 25% chance I would agree with it, and given that the statement is true and there's a 25% belief rate, there's a 25% chance I would agree with it. If you've already updated on your survey, my assertion is equally likely in both conditions and doesn't shift probability one way or the other.
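To put the same point in explicitly Bayesian terms, here is a toy sketch; all the numbers are made up for illustration.  What matters is the likelihood ratio: how much more probable the assertion is in worlds where the claim is true than in worlds where it is false.

```python
# Toy Bayes update on hearing an assertion (all numbers are hypothetical).

def update(prior, p_assert_if_true, p_assert_if_false):
    # posterior probability of the claim after hearing the assertion
    numerator = prior * p_assert_if_true
    return numerator / (numerator + (1 - prior) * p_assert_if_false)

# Uriel Sebree case: nobody would assert this in a world where it is false,
# so even a tiny prior gets a large boost.
print(update(prior=0.001, p_assert_if_true=0.9, p_assert_if_false=0.001))  # ~0.47

# Controversial case: about 25% of people would assert it whether or not it is
# true, so the likelihood ratio is 1 and the credence does not move.
print(update(prior=0.3, p_assert_if_true=0.25, p_assert_if_false=0.25))    # 0.3
```

In the first case the posterior jumps from 0.001 to roughly 0.47; in the second it stays at exactly 0.3, which is the no-useful-evidence situation described above.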

    \n

    Unsupported assertions on extremely unusual topics

    \n

    There is a case, I think, in which a single person asserting ze believes something can increase your probability. Imagine that I say, truthfully, that I believe that a race of otter-people from Neptune secretly controls the World Cup soccer tournament. If you've never heard this particular insane theory before, your estimate of the number of people who believed it was probably either zero, or so low that you wouldn't expect anyone you actually meet (even for values of \"meet\" including online forums) to endorse it. My endorsing it actually raises your estimate of the percent of the human race who endorse it, and this should raise your probability of it being true. Clearly, it should not raise it very much, and it need not necessarily raise it at all to the degree that you can prove that I have reasons other than truth for making the assertion (in this case, most of the probability mass generated by the assertion would leak off into the proposition that I was insane) but it can raise it a little bit.

    \n

    Unsupported assertions by important authorities

    \n

    This effect becomes more important when the person involved has impressive credentials. If someone with a Ph.D in biology says that race plays a part in intelligence, this could shift your estimate. In particular, it would shift it if you previously thought the race-intelligence connection was such a fringe theory that they would be unlikely to get even one good biologist on their side. But if you already knew that this theory was somewhat mainstream and had at least a tiny bit of support from the scientific community, it would be giving no extra information. Consider this the Robin Hanson Effect, because a lot of the good Robin Hanson does comes from being a well-credentialed guy with a Ph.D willing to endorse theories that formerly sounded so crazy that people would not have expected even one Ph.D to endorse them.

    In cases of the Hanson Effect, the way you found out about the credentialled supporter is actually pretty important. If you Googled \"Ph.D who supports transhumanism\" and found Robin's name, then all it tells you is that there is at least one Ph.D who supports transhumanism. But if you were at a bar, and you found out the person next to you was a Ph.D, and you asked zir out of the blue if ze supported transhumanism, and ze said yes, then you know that there are enough Ph.Ds who support transhumanism that randomly running into one at the bar is not that uncommon an event.

    An extreme case of the Hanson Effect is hearing that the world's top expert supports something. If there's only one World's Top Expert, then that person's opinion is always meaningful. This is why it was such a big deal when Watson came out in favor of a connection between race and intelligence. Now, I don't know if Watson actually knows anything about human genetic variation. He could have just had one clever insight about biochemistry way back when, and be completely clueless around the rest of the field. But if we imagine he really is the way his celebrity status makes him seem - the World's Top Expert in the field of genetics - then his opinion carries special weight for two reasons: first of all, it's the only data point we have in the field of \"what the World's Top Expert thinks\", and second, it suggests that a large percentage of the rest of the scientific community agrees with him (his status as World's Top Expert makes him something of a randomly chosen data point, and it would be very odd if we randomly pick the only data point that shares this opinion).

    \n

    Assertions supported by unsupported claims of \"evidence\"

    \n

    So much for completely unsupported assertions. Seeing as most people are pretty good at making up \"evidence\" that backs their pet beliefs, does it add anything to say \"...and I arrived at this conclusion using evidence\" if you refuse to say what the evidence is?

    Well, it's a good signal for sanity. Instead of telling you only that at least one person believes in this hypothesis, you now know that at least one person who is smart enough to understand that ideas require evidence believes it.

    This is less useful than it sounds. Disappointingly, there are not too many ideas that are believed solely by stupid people. As mentioned before, even creationism can muster a list of Ph.Ds who support it. When I was much younger, I was once quite impressed to hear that there were creationist Ph.Ds with a long list of scientific accomplishments in various fields. Since then, I learned about compartmentalization. So all that this \"...and I have evidence for this proposition\" can do on a factual level is highlight the existence of compartmentalization for people who weren't already aware of it.

But on a nonfactual level...again, it signals sanity. The difference between \"I believe some races are less intelligent than others\" and \"I believe some races are less intelligent than others, and I arrived at this conclusion using evidence\" is that the second person is trying to convince you ze's not some random racist with an axe to grind, ze's an amateur geneticist addressing an interesting biological question. I don't evaluate the credibility of the two statements any differently, but I'd much rather hang out with the person who made the second one (assuming ze wasn't lying or trying to hide real racism behind a scientific veneer).

Keep in mind that most communication is done not to convince anyone of anything, but to signal the character of the person arguing (source: I arrived at this conclusion using evidence). One character signal may interfere with other character signals, and \"I arrived at this belief through evidence\" can be a powerful backup. I have a friend who's a physics Ph.D, an evangelical Christian with a strong interest in theology, and an American living abroad. If he tries to signal that he's an evangelical Christian, he's very likely to get shoved into the \"redneck American with ten guns and a Huckabee bumper sticker\" box unless he immediately adds something like \"and I base this belief on sound reasoning.\" That is one very useful signal there, and if he hadn't given it, I probably would have never bothered talking to him further. It's not a signal that his beliefs are actually based on sound reasoning, but it's a signal that he's the kind of guy who realizes beliefs should be based on that sort of thing and is probably pretty smart.

You can also take this the opposite way. There's a great Dilbert cartoon where Dilbert's date says something like \"I know there's no scientific evidence that crystals can heal people, but it's my point of view that they do.\" This is a different signal; something along the lines of \"I'd like to signal my support for New Agey crystal medicine, but don't dock me points for ignoring the scientific evidence against it.\" This is more of a status-preserving maneuver than the status-claiming \"I have evidence for this\" one, but astoundingly it seems to work pretty well (except on Dilbert, who responded, \"When did ignorance become a point of view?\").

    " } }, { "_id": "o27T8QfSkcbzvonhm", "title": "SIA on other minds", "pageUrl": "https://www.lesswrong.com/posts/o27T8QfSkcbzvonhm/sia-on-other-minds", "postedAt": "2010-03-25T01:09:15.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "o27T8QfSkcbzvonhm", "html": "

Another interesting implication, if the self indication assumption (SIA) is right, is that solipsism is much less likely to be correct than you previously thought, and relatedly the problem of other minds is less problematic.

    \n

    Solipsists think they are unjustified in believing in a world external to their minds, as one only ever knows one’s own mind and there is no obvious reason the patterns in it should be driven by something else (curiously, holding such a position does not entirely dissuade people from trying to convince others of it). This can then be debated on grounds of whether a single mind imagining the world is more or less complex than a world causing such a mind to imagine a world.

    \n

    The problem of other minds is that even if you believe in the outside world that you can see, you can’t see other minds. Most of the evidence for them is by analogy to yourself, which is only one ambiguous data point (should I infer that all humans are probably conscious? All things? All girls? All rooms at night time?).

    \n

    SIA says many minds are more likely than one, given that you exist. Imagine you are wondering whether this is World 1, with a single mind among billions of zombies, or World 2, with billions of conscious minds. If you start off roughly uncertain, updating on your own conscious existence with SIA shifts the probability of world 2 to billions of times the probability of world 1.

    \n

    Similarly for solipsism. Other minds probably exist. From this you may conclude the world around them does too, or just that your vat isn’t the only one.


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "Zstm38omrpeu7iWeS", "title": "The Spotlight", "pageUrl": "https://www.lesswrong.com/posts/Zstm38omrpeu7iWeS/the-spotlight", "postedAt": "2010-03-24T23:43:52.719Z", "baseScore": 49, "voteCount": 48, "commentCount": 24, "url": null, "contents": { "documentId": "Zstm38omrpeu7iWeS", "html": "

    Sequence index: Living Luminously
    Previously in sequence: Lights, Camera, Action
    Next in sequence: Highlights and Shadows

Inspecting thoughts is easier and more accurate if they aren't in your head.  Look at them in another form from the outside, as if they belonged to someone else.

    \n

    You may find your understanding of this post significantly improved if you read the fourth story from Seven Shiny Stories.

    One problem with introspection is that the conclusions you draw about your thoughts are themselves thoughts.  Thoughts, of course, can change or disappear before you can extract information about yourself from them.  If a flash of unreasonable anger crosses my mind, this might stick around long enough to make me lash out, but then vanish before I discover how unreasonable it was.  If thoughts weren't slippery like this, luminosity wouldn't be much of a project.  So of course, if you're serious about luminosity, you need a way to pin down your thoughts into a concrete format that will hold still.

    You have to pry your thoughts out of your brain.

    Writing is the obvious way to do this - for me, anyway.  You don't have to publicize what you extract, so it doesn't have to be aesthetic or skillful, just serviceable for your own reference.  The key is to get it down in a form that you can look at without having to continue to introspect.  Whether this means sketching or scribing or singing, dump your brain out into the environment and have a peek.  It's easy to fool yourself into thinking that a given idea makes sense; it's harder to fool someone else.  Writing down an idea automatically engages the mechanisms we use to communicate to others, helping you hold your self-analysis to a higher standard.

    To turn your thoughts into non-thoughts, use labels to represent them.  Put them in reference classes, so that you can notice when the same quale, habit of inference, or thread of cognition repeats. That way, you can detect patterns: \"Hey, the last time I felt like this, I said something I really regretted; I'd better watch it.\"  If you can tell when something has happened twice, you can tell when it hasn't - and new moods or dispositions are potentially very important.  They mean that you or something around you has changed, and that could be a valuable resource or a tricky hazard.

    Your labels can map onto traditional terms or not - if you want to call the feeling of having just dropped your ice cream on the sidewalk \"blortrath\", no one will stop you.  (It can be useful, later when you're trying to share your conclusions about yourself with others, to have a vocabulary of emotion that overlaps significantly with theirs; but you can always set up an idiolect-to-dialect dictionary later.)  I do recommend identifying labeled items as being more or less similar to each other (e.g. annoyance is more like fury than it is like glee) and having a way to account for that in your symbolism.  Similarities like that will make it more obvious how you can generalize strategies from one thing to another.

Especially if you don't think in words, you might find it challenging to turn your thoughts into something in the world that represents them.  Maybe, for instance, you think in pictures but aren't at all good at drawing.  This is one of the steps in luminosity that I think is potentially dispensable, so if you honestly cannot think of any way to jot down the dance of your mind for later inspection, you can just work on thinking very carefully such that if something were to be out of place the next time you came back to your thought, you'd notice it.  I do recommend spending at least five to ten minutes trying to write, diagram, draw, mutter, or interpretive-dance your mental activity before you give it up as untenable for you, however.

    Once you have produced a visible or audible translation of your thoughts, analyze it the way you would if someone else had written it.  (Except inasmuch as it's in a code that's uniquely understandable to you and you shouldn't pretend to do cryptanalysis on it.)  What would you think of the person described if you didn't know anything else?  How would you explain these thoughts?  What threads of reasoning seem to run in the background from one belief to another, or from a perception to a belief, or from a desire to an intention?  What do you expect this person to do next?  What's your next best guess after that?  And: what more do you want to know?  If you met the person described, how could you satisfy your curiosity without relying on the bias-laden answer you'd get in response to a verbal inquiry?  Try it now - in a comment under this post, if you like: note what you're thinking, as much of it as you can grab and get down.  Turn on the anti-kibitzer and pretend someone else said it: what must be going on in the mind behind this writing?

    " } }, { "_id": "SwwCYwzWf7id7gjZm", "title": "An empirical test of anthropic principle / great filter reasoning", "pageUrl": "https://www.lesswrong.com/posts/SwwCYwzWf7id7gjZm/an-empirical-test-of-anthropic-principle-great-filter", "postedAt": "2010-03-24T18:44:43.628Z", "baseScore": 13, "voteCount": 11, "commentCount": 42, "url": null, "contents": { "documentId": "SwwCYwzWf7id7gjZm", "html": "

    If our civilization doesn’t collapse then in 50 to 1000 years humanity will almost certainly start colonizing the galaxy.  This seems inconsistent with the fact that there are a huge number of other planets in our galaxy but we have not yet found evidence of extraterrestrial life.  Drawing from this paradox, Robin Hanson writes that there should be some great filter “between death and expanding lasting life, and humanity faces the ominous question: how far along this filter are we?”

    \r\n

    Katja Grace reasons that this filter probably lies in our future and so we are likely doomed.   (Hanson agrees with Grace’s argument.)  Please read Grace’s post before reading the rest of this post.

    \r\n

     

    \r\n

Small groups of humans have been in situations similar to that currently faced by our entire species.  To see this, imagine you live in a prehistoric hunter-gatherer tribe of 200 people.  Your tribe lives on a large, mostly uninhabited island.  Your grandparents, along with a few other people, came over to the island about 30 years ago.  Since arriving on the island, no one in your tribe has encountered any other humans.   You figure that if your civilization doesn’t get wiped out then in 100 or so years your people will multiply in number and spread throughout the island.  If your civilization does so spread, then any new immigrants to the island would quickly encounter your tribe.

    \r\n

Why, you wonder, have you not seen other groups of humans on your island?  You figure that it’s either because your tribe was the first to arrive on the island, or because your island is prone to extinction disasters that periodically wipe out all its human inhabitants.  Using anthropic reasoning similar to that used by Katja Grace, you postulate the existence of four types of islands:

    \r\n

    1)  Islands that are easy to reach and are not prone to disaster.
    2)  Islands that are hard to reach and are not prone to disaster.
    3)  Islands that are easy to reach and are prone to disaster.
    4)  Islands that are hard to reach and are prone to disaster.

    \r\n

    You conclude that your island almost certainly isn’t type (1) because if it were you would have almost certainly seen other humans on the island.   You also figure that on type (3) islands lots of civilizations will start and then be destroyed.  Consequently, most of the civilizations that find themselves on types (2), (3) or (4) islands are in fact on type (3) islands.  Your tribe, you therefore reason, will probably be wiped out by a disaster.
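A toy count with entirely made-up numbers shows the shape of this reasoning: most of the civilizations that find themselves alone are on easy-to-reach, disaster-prone islands, simply because those islands host the most attempts.

```python
# Toy model (all numbers invented): how many 'lonely' civilizations -- ones that
# see no other humans -- arise on each island type?
island_types = {
    # type: (islands, civilizations founded per island, fraction founded on an empty island)
    '1: easy to reach, safe': (100, 10, 0.1),            # only the first founders are alone
    '2: hard to reach, safe': (100, 1, 1.0),
    '3: easy to reach, disaster-prone': (100, 10, 1.0),  # wiped out before the next arrival
    '4: hard to reach, disaster-prone': (100, 1, 1.0),
}

lonely = {t: islands * founded * alone
          for t, (islands, founded, alone) in island_types.items()}
total = sum(lonely.values())

for t, count in lonely.items():
    print(f'type {t}: {count:.0f} lonely civilizations ({count / total:.0%})')
# Under these assumptions, type (3) hosts the large majority (here about 77%) of
# civilizations that, like yours, have seen no other humans.
```

The specific numbers are irrelevant; the point is that repeated wipe-outs on easy-to-reach islands multiply the number of observers who see exactly what your tribe sees, which is the weighting Grace's argument applies.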

    \r\n

    I believe that the argument for why you are probably on a type (3) island is analogous to that of why the great filter probably lies in our future because both come about from updating “on your own existence by weighting possible worlds as more likely the more observers they contain.”

    \r\n

    If, therefore, Grace’s anthropic reasoning is correct then most of the time when a prehistoric group of humans settled a large, uninhabited island, that group went extinct.

    \r\n

     

    " } }, { "_id": "gfkhZWzWrr4hQdxJY", "title": "The \"show, don't tell\" nature of argument", "pageUrl": "https://www.lesswrong.com/posts/gfkhZWzWrr4hQdxJY/the-show-don-t-tell-nature-of-argument", "postedAt": "2010-03-24T17:38:09.105Z", "baseScore": 22, "voteCount": 17, "commentCount": 25, "url": null, "contents": { "documentId": "gfkhZWzWrr4hQdxJY", "html": "

    Consider a statement of the form \"based on knowledge and common sense and estimating the probabilities of alternative hypotheses, I believe that X\". Or, perhaps, \"based on Bayesian reasoning I have come to the conclusion X\". How much, when an interlocutor of yours uses such a statement, should it affect your credence in X?

    \n

    I claim that the answer is \"negligibly\", in most cases. (I discuss exceptions below.)

    \n

    The most appropriate way of conducting an argument is in most cases to simply state your belief X. If you are going to adduce evidence and argument in support of your belief X, simply state these. If you've derived probability estimates, simply provide them, perhaps with the method of derivation. The \"meta\" observation by itself has no place in an argument.

    \n

    (Isn't that obvious, you might ask? Not to everyone, as this statement form turns out to be actually used in discussions here. So, in the interest of contributing even a little to a possible theory of argumentation, I develop some further observations on this pattern.)

    \n

    \n

    The assertion \"my opinions on X are soundly arrived at\" has an equivalent structure to saying \"I am a truthful person\": it cannot be verified by an interlocutor, other than (in the latter case) by looking at what statements you utter and independently verifying they are truthful, or (in the first) by looking at what arguments and evidence you adduce in support of your opinions on topic X, and independently assessing whether the evidence is credible and the arguments are individually sound.

    \n

    Claiming to be a truthful person doesn't mean much. I know parents who are constantly telling their kids \"oh, you shouldn't lie, it's bad\", but who actually lie to their kids quite often. My own approach has been to tell my kids the truth (and to answer pretty much all their questions), while almost never mentioning \"truth\" as a moral topic. Empirically, I observe that my kids are growing to be truthful and trustworthy persons, more so than many of e.g. my neighbours' kids.

    \n

    Claiming that you use sound methods in arriving at a conclusion of which you want to convince your interlocutor similarly has little content. It is the sort of thing that can only be assessed by looking at your justifications for belief. The claim may (if stated in an authoritative tone) actually manage to nudge your interlocutor a little in the direction of accepting your conclusions. On reflection, it shouldn't, and your argumentative skills would benefit from refusing to accept such statements as justification.

    \n

    Rather than say \"there is evidence\", point to evidence. Rather than say, \"it can be argued that\", simply argue it. If there are allowable exceptions, they concern cases where you say something in addition to the bare assertion of soundness: for instance if you say \"there is lots of evidence, readily obtainable from everyday sources\". Your claim is above and beyond a claim of soundness.

    \n

    You should be prepared, if challenged, to back up your observation about the abundance and availability of the evidence. If you say of something that it's \"obvious\", it had better be obvious.

    \n

    Sometimes too, the use of a given methodology to arrive at a belief X is surprising information in and of itself. It makes a difference to an interlocutor to know that your belief has that origin. It is hard to think of examples where it could possibly make a difference to say that in very vague and general terms, but I can think of cases where the use of a specific method is new and surprising. For instance, if you say \"I believe in X and Bayesian reasoning (as opposed to other methodologies) demonstrates X in a particularly convincing manner\". In this case the claim is not only about X, it is also a reasonably interesting claim about the method itself. (Which claim might or might not stand up to examination -  cf. the ongoing argument about frequentism vs. bayesianism.)

    \n

    These exceptions aside, sound argumentation shares one precept with fiction: show, don't tell.

    " } }, { "_id": "SQoz2pb2ut2x4ZJWo", "title": "The two insights of materialism", "pageUrl": "https://www.lesswrong.com/posts/SQoz2pb2ut2x4ZJWo/the-two-insights-of-materialism", "postedAt": "2010-03-24T14:47:12.969Z", "baseScore": 23, "voteCount": 19, "commentCount": 134, "url": null, "contents": { "documentId": "SQoz2pb2ut2x4ZJWo", "html": "\n

    Preceded by:  There just has to be something more, you know?  Followed by:  Physicalism: consciousness as the last sense.

    \n

    Contents:  1. An epistemic difficulty  2. How and why to be a materialist

    \n

    An epistemic difficulty

    \n

    Like many readers of this blog, I am a materialist.  Like many still, I was not always.  Long ago, the now-rhetorical ponderings in the preceding post in fact delivered the fatal blow to my nagging suspicion that somehow, materialism just isn't enough.

    \n

    By materialism, I mean the belief that the world and people are composed entirely of something called matter (a.k.a.  energy), which physics currently best understands as consisting of particles (a.k.a.  waves).  If physics reformulates these notions, materialism can adjust with it, leading some to prefer the term \"physicalism\".

    \n

Now, I encounter people all the time who, because of education or disillusionment, have abandoned most aspects of religion, yet still believe in more than one kind of reality.  It's often called \"being spiritual\".  People often think it feels better than the alternative (see Joy in the merely real), but it also persists for what people experience as an epistemic concern:

    \n

    The inability to reconcile the \"experiencing self\" concept with one's notion of physical reality.

    \n

This is among the most common epistemic discomforts with materialism (I only say \"discomfort\", because a blank spot on your map does not correspond to a blank territory).  The inside view — introspection — shows us something people call a \"mind\" or \"spirit\", and the outside view — our eyes — shows us something we call a \"brain\", which looks nothing at all the same.  But the perceived distance between these concepts signals that connecting them would be extremely meaningful, the way superficially unrelated hypotheses and conclusions make for a very powerful theorem.  For the connection to start making sense, one must realize that \"you are made of matter\" is as much a statement about matter as a statement about you…

    \n

    The two insights of materialism: That the reconciliation of mind and matter –

    \n
      \n
    1. is not misinformation about mind, but extra information about matter, and
    2. is not misinformation about matter, but extra information about mind.
    \n

    These are really two insights, and underusing one of them leaves a sense of \"doesn't quite capture it\" in the psyche.  See, the way most people think or learn about physics, a particle is a tiny dot, with some attributes like charge specified by numbers, obeying certain laws of motion.  But in fact, this is a model of a particle.  As a conviction, physics need not claim that \"dots and waves are all there is\", but rather, that all there is can be described on analogy with dots and waves.  Science is about modelling — a map that matches the territory — and \"truth\" is just how well it matches up.

    \n

    And given modern science, there is something more you can say about a particle besides the geometry and equations that describe it, something which connects it to the direct, cogito-ergo-sum style knowledge we all enjoy: whatever it is, a particle is a one thousand-trillion-trillionth of a you.  Yes, you, in your entirety.  If part of that includes something you call a \"soul\", then yes, science can now model the quantitative aspects, in more or less complete detail, of a one thousand-trillion-trillionth of a \"soul\".  Is that too much?  Too incredible?  A song by The Books that I like almost says it perfectly:

    \n

    You are something that the whole world is made of.

    \n

    This moots the debate.  The first step is not to \"reduce\" the introspective view to the extrospective view, but to realize that they're looking at the same object.  The assertion is not that \"mind is just particles\", but rather that \"a tiny fraction of a mind\" and \"a tiny fraction of matter\" happen to refer to the same object, and we should agree to call that object \"particle\".  Depending on how you use the word \"conscious\", this does not necessarily say that a particle is conscious in the way that you are; an octant of a sphere is not a sphere.  But assembled correctly, it is certainly one-eighth of a sphere!

    \n

    I've learned that some people call this view \"neutral monism\", but I prefer to still call it materialism as an emphasis that the extrospective view \"science\" really has a larger quantity of information at this point in human history.  This is different information about reality than provided by introspection, and to ignore it is detrimental to one's world view! 

    \n

    So, to help non-materialists in attaining this reconciliation of mind and matter, I've written the following rough path of ideas that one can follow:

    \n

    How and why to be a materialist

    \n
      \n
    1. Accepting materialism is saying \"the rest of the world is made of whatever I am\", not just \"I am made of whatever the rest of the world is\".  And why not?  In the eyes of science, these are both the same, true statement.  Semantically, the first one tells you something qualitative about matter, and the second one tells you something extremely quantitative about your mind!  It means modern neuroscience and biology can be used to help you understand yourself.  Awesome!

      \n
    2. Accepting physics is accepting that your \"spirit\" might consist of parts which, sufficiently divided and removed from context, might behave in a regular fashion.  Then you might as well call the parts \"particles\" and call your spirit \"brain\", and look at all the amazing data we have about them that help describe how you work.

      \n
    3. Beware of the works-how-it-feels bias, the fallacious additional assumption that the world works the way you feel about it.  (See How an algorithm feels from the inside.)  These pieces of your mind/spirit called particles are extremely tiny; in orders of magnitude, they are more than twice as small as your deepest introspection, so you can't judge them very well based on instinct (a neuron is about a 10^11th of your mind, and an atom is about a 10^14th of a neuron).  And because they're so tiny and numerous, they can be put together to form things vastly different from yourself in form and function, like plants and stars.

      \n
    4. Your instinct that the laws of physics don't fully describe you is correct! You are the way you are because of two things: the laws governing how your tiny parts behave, and the way those parts are put together; and the latter is almost unimaginably more significant!  One way to see this is to look around at all the things that are not you.  Saying how the tiny bits of your soul behave independently does not describe how to put them together, just like describing an octant of a sphere doesn't say how to turn eight of them into a whole sphere.  Plus, even after your initial construction as a baby, a whole lot of growth and experience has configured what you are today.

      \n

      Only to put this into perspective, consider that all the most fundamental laws of physics we know can certainly be written down, without evidence or much explanation, in a text file of less than 1 megabyte.  The information content of the human genome, which so far seems necessary to construct a sustainable brain, is about 640 MB (efficiently encoded, that's 1.7 bits per nucleotide pair).  Don't be fooled by how \"small\" 640 is: it means the number of possible states of that data is at least 8^640 times larger than the number of states of our text file describing all of physics!  Next, the brain itself stores information as you develop, with a capacity of at least 1 terabyte by the most conservative estimates, which means it has at least around 8^1500 times the number of possible states of the DNA sequence that first built it.  (A rough back-of-envelope check of these sizes appears just after this list.)

      \n

      So being a desk is different from being a human, not because it's made of different stuff, but because the stuff is put together extremely differently, more differently than we can fully imagine.  When people say form determines function, they should say FORM in BIG CAPITAL LETTERS.  No wonder you thought particle physics \"just doesn't seem to capture it\"!

      \n
    5. Your perceived distance between the concepts of \"mind\" and \"particles\" is also correct! As JanetK says, \"There is no shortcut from electrons to thoughts\".  Continuing the connection/theorem analogy, a theorem with superficially unrelated hypotheses and conclusions is not only liable to be very useful, but to have a difficult proof as well.  The analogue of the difficult proof is that, distinct from the discovery of particles themselves, massive amounts of technological progress and research have been required to establish the connection between particles and brains, and between brains and minds.  Indeed, the perceptual distance between the second pair is why people use the concepts \"brain\" and \"mind\" separately: \"brain\" is a model for the outside view, and \"mind\" is a model for the inside view.  The analogue of the theorem's usefulness is how much neurology can say and do about our minds.

    6. Adjusting emotionally is extremely important as you bring materialism under consideration, not only to accommodate changing your beliefs, but to cope with them when they do change.  You may need to redescribe morality, what makes you happy, and why you want to be alive, but none of these things needs to be revoked, and LessWrong is by far the best community I've ever seen to help with this transition.  For example, Eliezer has written a chronological account of his Coming of Age as a rationalist, and he has certainly maintained a sense of morality and life-worth.  I recommend building an appropriate emotional safety net while you consider materialism, not just to combat the bias of fear, but so you're ready when one day you realize oh my gosh I'm a materialist!

      \n
    \n
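Here is the back-of-envelope check promised in point 4.  The genome length of roughly 3 billion nucleotide pairs is an assumption not stated above; the other figures are the ones quoted in the list.

```python
# Back-of-envelope check of the information-content comparison in point 4.
# Assumption: the human genome has roughly 3e9 nucleotide pairs.
physics_bits = 1e6 * 8       # a < 1 MB text file for the known laws of physics
genome_bits = 3e9 * 1.7      # 1.7 bits per nucleotide pair, as quoted above
brain_bits = 1e12 * 8        # 1 terabyte, the conservative brain estimate

print(f'genome: about {genome_bits / 8 / 1e6:.0f} MB')  # about 640 MB, matching the quoted figure

# An n-bit object has 2**n possible states, so compare exponents directly.
print(f'genome vs physics file: 2**{genome_bits - physics_bits:.3g} times as many states')
print(f'brain vs genome:        2**{brain_bits - genome_bits:.3g} times as many states')
# Both exponents are in the billions or trillions, so 8^640 and 8^1500 are
# extremely safe lower bounds.
```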

    I have a friend who says that instead of classifying people as believing in the material or the supernatural, he classifies them by whether they think more than one of those things exists and are different.  Roughly speaking, dualists and non-dualists.  I think he's got the right idea.  Why bother believing in more than one kind of thing?  Why believe in separate \"soul\" and \"material\" if the world can just as well be made of tiny specks of regularly-behaved \"spirit\"?  It's the same theory, and watching out for works-how-it-feels bias, you gain a lot of tangible insight about yourself if you realize they're the same.

    \n

    So do what's right.  Right for you, right for your loved ones, and right for rightness itself if that matters to you.  You probably already know what it means to be a good person, and your good intentions just won't work if you use poor judgement.  Start thinking about materialism so you can know more, and make better, well-informed decisions. 

    \n

Whoever believes in the supernatural is simply underestimating the natural.

    \n

    There is no something more, because there is no something less… but there certainly and most definitely is you

    \n

     

    \n
    \n

     

    \n

    Follow up to comments:

    \n

One can only get so far from dualism in a single sitting, and what this article includes is as much a function of my time as of its validity.  For now I'll leave it up to others to argue stronger positions than those presented here, but to acknowledge, some important issues I did not address include:

    \n

    Whatever stuff or process the world comprises, is it merely accessible to physics, or can physics describe its nature entirely?  And supposing it can, is consciousness an entirely mathematical phenomenon that is unaffected by how it is physically implemented?  That is, if we made a neural network computationally isomorphic to the human brain, but in a different physical arrangement (e.g.  a silicon based computer), should you be as certain of its consciousness as of the consciousness of other humans?  And more questions...

    \n

    A rough outline of some stances on the questions above is as follows: (to avoid debate I'll omit the term naturalism, though I do approve of its normative use)

    \n

    Monism: the world comprises just one genre of stuff or process (no natural/supernatural distinction).

    \n

    This article: this stuff or process is physically accessible, and is therefore amenable to study by the natural sciences.

    \n

    Physicalism: the stuff or process is no more extensive than its description in terms of physics.

    \n

    Computationalism: consciousness is a mathematical phenomenon, unaffected by how it is physically represented or implemented.

    \n

And of course it is also important to question whether these distinctions are practical, meaningful, or merely illusory.  It all needs to be cleaned and carefully dissected.  Have at it, LessWrong!

    " } }, { "_id": "Edq3ZanR22Xtft2x8", "title": "There just has to be something more, you know?", "pageUrl": "https://www.lesswrong.com/posts/Edq3ZanR22Xtft2x8/there-just-has-to-be-something-more-you-know", "postedAt": "2010-03-24T00:38:44.899Z", "baseScore": 16, "voteCount": 31, "commentCount": 79, "url": null, "contents": { "documentId": "Edq3ZanR22Xtft2x8", "html": "\n

    A non-materialist thought experiment.

    \n

    Okay, so you don't exactly believe in the God of the Abrahamic scriptures verbatim who punishes and sets things on fire and lives in the sky.  But still, there just has to be something more than just matter and energy, doesn't there?  You just feel it.  If you don't, try to remember when you did, or at least empathize with someone you know who does.  After all, you have a mind, you think, you feel — you feel for crying out loud — and you must realize that can't be made entirely of things like carbon and hydrogen atoms, which are basically just dots with other dots swirling around them.  Okay, maybe they're waves, but at least sometimes they act like dots.  Start with a few swirling dots… now add more… keep going, until it equals love.  It just doesn't seem to capture it.

    \n

    In fact, now that you think about it, you know your mind exists.  It's right there: it's you.  Your \"experiencing self\".  Maybe you call it a spirit or soul; I don't want to fix too rigid a description in case it wouldn't quite match your own.  But cogito-ergo-sum, it's definitely there!  By contrast, this particle business is just a mathematical concept — a very smart one, of course — thought of by scientists to explain and predict a bunch of carefully designed and important measurements.  Yes, it does that extremely well, and you're not downplaying that.  But that doesn't explain how you see blue, or taste strawberry — something you have direct access to.  Particles might not even exist, if that means anything to say.  It might just be that observation itself follows a mathematical pattern that we can understand better by visualizing dots and waves.  They might not be real.

    \n

So actually, your mind or spirit — that thing you feel, that you — is much more certainly extant than scientific \"matter\".  That must be something very important to understand!  Certainly you can tell your mind has different parts to it: hearing, seeing, reasoning, moving, remembering, empathizing, picturing, yearning… When you think of all the things you can remember alone — or could remember — the complexity of all that data is mindbogglingly vast.  Imagine the task of actually having to take it all apart and describe it completely… it could take aeons…

    \n

    Imagine then, for a moment, that you could isolate just one part: some relatively insignificant portion of your vast mind or spirit.  Let's say a single, second-long experience of walking with a friend; certainly minute compared to the entirety of your life.  But still, an extremely complex object.  Think about all you are perceiving in that second...  your mind is incredible!  No, I'm not talking about your brain, I'm talking about your experiencing self, your mind, your essence, however you might think about that experiencing entity.  Now imagine isolating some small aspect of that memory with your friend, discarding the massively detailed experiences that are your vision, your sense of balance, how hungry you are for nachos… Say, a concerned awareness of your friend's emotional state at that instant.  This too is a highly complex object, so it too has parts, which I may not be able to describe in finer granularity, but they're there.  Now let's say you're some kind of super-introspective savant, who can sense the conceptual fragments of still finer, sharper aspects of this…

    \n

    I'm doing my best here to approach what \"a tiny piece of your soul\" might mean.  But no matter; perhaps you have a better idea of what that is.  In any case, suppose you somehow isolated this tiny fraction of a mind or spirit, and took it out of the context of all the countless other details we didn't look at.  Now it's disconnected from all that other stuff: vision, balance, nachos, nuances of empathy…

    \n

Suppose you managed to somehow look at this object, by which I mean observe it in some way — it is part of your mind, after all — and consider a possible outcome. So that we're picturing roughly the same thing, imagine that as you observed it, this piece of your soul is not writhing and thrashing about spasmodically, but appears in fact calm, and focused on its task.  Suppose it moved regularly, in a circle, for example.  How curious it could turn out to be!  What would we call this tiny, almost infinitesimal speck of your mind?

    \n

    I say we call it \"electron\".

    \n
    \n

    Like many readers of this blog, I am a materialist.  Like many still, I was not always.  Long ago, the now-rhetorical ponderings in the preceding post in fact delivered the fatal blow to my nagging suspicion that, somehow, materialism just isn't enough…

    \n

    Finish reading in: The two insights of materialism

    \n

    (these were originally a single post, so some comments below refer to the sequel.)

    " } }, { "_id": "gs8bZCmaWqDaus7Dr", "title": "Levels of communication", "pageUrl": "https://www.lesswrong.com/posts/gs8bZCmaWqDaus7Dr/levels-of-communication", "postedAt": "2010-03-23T21:32:25.903Z", "baseScore": 64, "voteCount": 67, "commentCount": 73, "url": null, "contents": { "documentId": "gs8bZCmaWqDaus7Dr", "html": "

    Communication fails when the participants in a conversation aren't talking about the same thing. This can be something as subtle as having slightly differing mappings of verbal space to conceptual space, or it can be a question of being on entirely different levels of conversation. There are at least four such levels: the level of facts, the level of status, the level of values, and the level of socialization. I suspect that many people with rationalist tendencies tend to operate primarily on the fact level and assume others to be doing so as well, which might lead to plenty of frustration.

    \r\n

    The level of facts. This is the most straightforward one. When everyone is operating on the level of facts, they are detachedly trying to discover the truth about a certain subject. Pretty much nothing else than the facts matter.

    \r\n

    The level of status. Probably the best way of explaining what happens when everyone is operating on the level of status is the following passage, originally found in Keith Johnstone's Impro:

    \r\n
    \r\n

    MRS X: I had a nasty turn last week. I was standing in a queue waiting for my turn to go into the cinema when I felt ever so queer. Really, I thought I should faint or something.

    [Mrs X is attempting to raise her status by having an interesting medical problem. Mrs Y immediately outdoes her.]

    \r\n

    MRS Y: You're lucky to have been going to a cinema. If I thought I could go to a cinema I should think I had nothing to complain of at all.

    [Mrs Z now blocks Mrs Y.]

    MRS Z: I know what Mrs X means. I feel just like that myself, only I should have had to leave the queue.

    [Mrs Z is very talented in that she supports Mrs X against Mrs Y while at the same time claiming to be more worthy of interest, her condition more severe. Mr A now intervenes to lower them all by making their condition seem very ordinary.]

    MR A: Have you tried stooping down? That makes the blood come back to your head. I expect you were feeling faint.

    [Mrs X defends herself.]

    MRS X: It's not really faint.

    MRS Y: I always find it does a lot of good to try exercises. I don't know if that's what Mr A means.

    [She seems to be joining forces with Mr A, but implies that he was unable to say what he meant. She doesn't say 'Is that what you mean?' but protects herself by her typically high-status circumlocution. Mrs Z now lowers everybody, and immediately lowers herself to avoid counterattack.]

    MRS Z: I think you have to use your will-power. That's what worries me--I haven't got any.

    [Mr B then intervenes, I suspect in a low-status way, or rather trying to be high-status but failing. It's impossible to be sure from just the words.]

    MR B: I had something similar happen to me last week, only I wasn't standing in a queue. I was sitting at home quietly when...

    [Mr C demolishes him.]

    MR C: You were lucky to be sitting at home quietly. If I was able to do that I shouldn't think I had anything to grumble about. If you can't sit at home why don't you go to the cinema or something?

    \r\n
    \r\n

    The level of values. Here the participants of a discussion are primarily attempting to signal their values. Any statements that on the surface refer to facts actually refer to values. For instance, \"men and women are equally intelligent\" might actually mean \"men and women should be given equal treatment\" while \"there are differences in the intelligence of men and women\" is taken to mean \"it's justified to treat men and women unequally\".

    \r\n

    The level of socialization, also known as small talk. You aren't really talking about anything, but instead just enjoying the other's company. If the group is seeking to mainly operate on this level, someone trying to operate on the level of facts might get slapped down for perceived aggression if they insist on getting things factually correct.

    \r\n

    For rationalists to succeed in spreading our ideas, we need to learn to recognize which level of conversation the discussion is operating on. One person acting on the level of facts and another on the level of values is a conversation that's certain to go nowhere. Also, it took me a while to realize that there have been occasions on which I was consciously trying to act on the level of facts, but my subconscious was operating on the level of status and got very defensive whenever my facts were challenged.

    \r\n

    Usually what rationalists would want to do is to move the conversation to the level of facts. Unfortunately, if a person is operating on the level of values, they might perceive this as an underhanded attempt to undermine their values. I'm uncertain of what, exactly, would be the right approach in this kind of a situation. Defusing the level of status seems easier, as people will frequently find their unconscious jockeying for status silly once it's been brought to their conscious attention.

    \r\n

     

    " } }, { "_id": "fNrWB2w7n42BQZFGx", "title": "Co-operative Utilitarianism", "pageUrl": "https://www.lesswrong.com/posts/fNrWB2w7n42BQZFGx/co-operative-utilitarianism", "postedAt": "2010-03-23T20:36:38.120Z", "baseScore": 7, "voteCount": 8, "commentCount": 9, "url": null, "contents": { "documentId": "fNrWB2w7n42BQZFGx", "html": "

    Donald Regan's masterful (1980) Utilitarianism and Co-operation raises a problem for traditional moral theories, which conceive of agents as choosing between external options like 'push' or 'not-push' (options that are specifiable independently of the motive from which they are performed). He proves that no such traditional theory T is adaptable, in the sense that \"the agents who satisfy T, whoever and however numerous they may be, are guaranteed to produce the best consequences possible [from among their options] as a group, given the behaviour of everyone else.\" (p.6) It's easy to see that various forms of rule or collective consequentialism fail when you're the only agent satisfying the theory -- doing what would be best if everyone played their part is not necessarily to do what's actually best. What's more interesting is that even Act Utilitarianism can fail to beat co-ordination problems like the following:

    \n

                        Poof: push    Poof: not-push
    Whiff: push             10               0
    Whiff: not-push          0               6
    \n
    Here the best result is obviously for Whiff and Poof to both push. But this isn't guaranteed by the mere fact that each agent does as AU says they ought. Why not? Well, what each ought to do depends on what the other does. If Poof doesn't push then neither should Whiff (that way he can at least secure 6 utils, which is better than 0). And vice versa. So, if Whiff and Poof both happen to not-push, then both have satisfied AU. Each, considered individually, has picked the best option available. But clearly this is insufficient: the two of them together have fallen into a bad equilibrium, and hence not done as well as they (collectively) could have.

    Regan's solution is to build a certain decision-procedure into the objective requirements of the theory:

    \n
    The basic idea is that each agent should proceed in two steps: First he should identify the other agents who are willing and able to co-operate in the production of the best possible consequences. Then he should do his part in the best plan of behaviour for the group consisting of himself and the others so identified, in view of the behaviour of non-members of the group. (p.x)
    \n

     

    \n

    This theory, which Regan calls 'Co-operative Utilitarianism', secures the property of adaptability.  (You can read Regan for the technical details; here I'm simply aiming to convey the rough idea.)  To illustrate with our previous example: suppose Poof is a non-cooperator, and so decides on outside grounds to not-push. Then Whiff should (i) determine that Poof is not available to cooperate, and hence (ii) make the best of a bad situation by likewise not-pushing. In this case, only Whiff satisfies CU, and hence the agents who satisfy the theory (namely, Whiff alone) collectively achieve the best results available to them in the circumstances.

    \n

    If both agents satisfied the theory, then they would first recognize the other as a cooperator, and then each would push, as that is what is required for them to \"do their part\" to achieve the best outcome available to the actual cooperators.
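    As a rough illustration (not Regan's own formalism), here is a minimal Python sketch of that two-step procedure applied to the Whiff/Poof game; the payoffs come from the table above, and the assumption that non-co-operators default to not-pushing is mine.

```python
from itertools import product

# Payoffs from the Whiff/Poof table: (whiff_action, poof_action) -> group utility.
PAYOFF = {
    ('push', 'push'): 10,
    ('push', 'not-push'): 0,
    ('not-push', 'push'): 0,
    ('not-push', 'not-push'): 6,
}

def cu_actions(whiff_cooperates, poof_cooperates, default='not-push'):
    # Step 1: identify the agents willing and able to co-operate.
    coop = [name for name, c in [('whiff', whiff_cooperates),
                                 ('poof', poof_cooperates)] if c]
    # Non-co-operators act on outside grounds; here we assume they not-push.
    fixed = {name: default for name in ('whiff', 'poof') if name not in coop}

    # Step 2: each co-operator does their part in the best joint plan for the
    # group of co-operators, taking the non-co-operators' behaviour as given.
    best_plan, best_value = None, float('-inf')
    for plan in product(['push', 'not-push'], repeat=len(coop)):
        acts = dict(fixed, **dict(zip(coop, plan)))
        value = PAYOFF[(acts['whiff'], acts['poof'])]
        if value > best_value:
            best_plan, best_value = plan, value
    return dict(fixed, **dict(zip(coop, best_plan)))

print(cu_actions(True, True))   # both satisfy CU: both push, securing 10
print(cu_actions(True, False))  # only Whiff satisfies CU: both end up not-pushing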

    \n

    * * *

    \n

    [Originally posted to Philosophy, etc.  Reproduced here as an experiment of sorts: despite discussing philosophical topics, LW doesn't tend to engage much with the extant philosophical literature, which seems like a lost opportunity.  I chose this piece because of the possible connections between Regan's view of cooperative games and the dominant LW view of competitive games: that one should be disposed to co-operate if and only if dealing with another co-operator.  In any case, I'll be interested to see whether others find this at all helpful or interesting -- naturally that'll influence whether I attempt this sort of thing again.]

    " } }, { "_id": "4DLinGRkYKX2Lovtv", "title": "Necessary, But Not Sufficient", "pageUrl": "https://www.lesswrong.com/posts/4DLinGRkYKX2Lovtv/necessary-but-not-sufficient", "postedAt": "2010-03-23T17:11:03.256Z", "baseScore": 61, "voteCount": 53, "commentCount": 15, "url": null, "contents": { "documentId": "4DLinGRkYKX2Lovtv", "html": "

    There seems to be something odd about how people reason in relation to themselves, compared to the way they examine problems in other domains.

    \n

    In mechanical domains, we seem to have little problem with the idea that things can be \"necessary, but not sufficient\".  For example, if your car fails to start, you will likely know that several things are necessary for the car to start, but not sufficient for it to do so.  It has to have fuel, ignition, compression, and oxygen...  each of which in turn has further necessary conditions, such as an operating fuel pump, electricity for the spark plugs, electricity for the starter, and so on.

    \n

    And usually, we don't go around claiming that \"fuel\" is a magic bullet for fixing the problem of car-not-startia, or argue that if we increase the amount of electricity in the system, the car will necessarily run faster or better.

    \n

    For some reason, however, we don't seem to apply this sort of necessary-but-not-sufficient thinking to systems above a certain level of complexity...  such as ourselves.

    \n

    When I wrote my previous post about the akrasia hypothesis, I mentioned that there was something bothering me about the way people seemed to be reasoning about akrasia and other complex problems.  And recently, with taw's post about blood sugar and akrasia, I've realized that the specific thing bothering me is the absence of causal-chain reasoning there.

    \n

    When I was a kid, I remember reading once about a scientist saying that the problem with locating brain functions by what's impaired when somebody has brain damage there is that it's like opening up a TV set and taking out a resistor.  If the picture goes bad, you might then conclude that the resistor is the \"source of pictureness\", when all you have really proved is that the resistor (or brain part) is necessary for pictureness.

    \n

    Not that it's sufficient.

    \n

    And so, in every case where an akrasia technique works for you -- whether it's glucose or turning off your internet -- all you have really done is the equivalent of putting the missing resistor back into the TV set.

    \n

    This is why \"different things work for different people\", in different circumstances.  And it's why \"magic bullets\" are possible, like vitamin C as a cure for scurvy.  When you fix a deficiency (as long as it's the only deficiency present) then it seems like a \"magic\" fix.

    \n

    But, just because some specific deficiency creates scurvy, akrasia, or no-picture-on-the-TV-ia, this doesn't mean the resistor you replaced is therefore the ultimate, one true source of \"pictureness\"!

    \n

    Even if you've successfully removed and replaced that resistor repeatedly, in multiple televisions under laboratory conditions.

    \n

    Unfortunately, it seems that thinking in terms of causal chains like this is not really a \"natural\" feature of human brains.  And upon reflection, I realize that I only learned to think this way because I studied the Theory of Constraints (ToC) about 13 years ago, and I also had a mentor who drilled me in some aspects of its practice, even before I knew what it was called.

    \n

    But, if you are going to reason about complex problems, it's a very good tool to have in your rationalist toolkit.

    \n

    Because what the Theory of Constraints teaches us about problem solving is that if you can reason well enough about a system to identify which necessary-but-not-sufficient conditions are currently deficient (or underpowered relative to the whole), then you will be able to systematically create your own \"magic bullets\".

    \n

    So, I encourage you to challenge any fuzzy thinking you see here (or anywhere) about \"magic bullets\", because a magic bullet is only effective in cases where it applies to the only insufficiency present in the system under consideration.  And having found one magic bullet is not equivalent to actually understanding the problem, let alone understanding the system as a whole.  (Which, by the way, is also why self-help advice is so divergent: it reflects a vast array of possible deficiencies in a very complex system.)

    " } }, { "_id": "zKf7LNzjrR5QofgW2", "title": "Subtext is not invariant under linear transformations", "pageUrl": "https://www.lesswrong.com/posts/zKf7LNzjrR5QofgW2/subtext-is-not-invariant-under-linear-transformations", "postedAt": "2010-03-23T15:49:35.282Z", "baseScore": 47, "voteCount": 46, "commentCount": 13, "url": null, "contents": { "documentId": "zKf7LNzjrR5QofgW2", "html": "

    You can download the audio and PDFs from the 2007 Cognitive Aging Summit in Washington DC here; they're good listening.  But I want to draw your attention to the graphs on page 6 of Archana Singh-Manoux's presentation.  It shows the \"social gradient\" of intelligence.  The X-axis is decreasing socioeconomic status (SES); the Y-axis is increasing performance on tests of reasoning, memory, phonemic fluency, and vocabulary.  Each graph shows a line sloping from the upper left (high SES, high performance) downwards and to the right.

    \n

    Does anything leap out at you as strange about these graphs?

    \n

    What leapt out at me was, \"Why the hell would anybody make a graph with their independent variable decreasing along the X-axis?\"

    \n

    Socio-economic status (SES) basically means income.  It has a natural zero.  The obvious thing to do would be to put it on the X-axis with zero towards the left, increasing towards the right.  These graphs have zero off somewhere on the right, with income increasing towards the left.  That's so weird that it couldn't happen by accident.  It would be like \"accidentally\" drawing the graph with the Y-axis flipped.

    \n

    What could be the intent behind flipping the X-axis when presenting the data?

    \n

    If you drew the data the normal way, it would suggest that there's a natural zero-level to both SES and cognition; and that increasing SES increases cognition, possibly without limit.

    \n

    But when you flip the X-axis, you're limited.  You can't go too far to the right, or you'd hit zero.  And you can't go off to the left, because we don't think that way in the West.  We start at the left and move right.

    \n

    By flipping the X-axis, the presenter has communicated that SES and intelligence have natural bounds.  Instead of communicating the idea that higher SES is a good thing that leads to higher intelligence, this presentation of the data suggests that the leftmost point on each graph (the anchoring point) is \"normal\", and a lack of wealth has caused a deficiency in people of lower SES.
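    For anyone who wants to see the two presentations side by side, here is a small matplotlib sketch; the SES and score numbers are made up for illustration (the real data are in the linked PDF), and the only difference between the panels is the inverted X-axis.

```python
import matplotlib.pyplot as plt

# Made-up illustrative data: higher SES goes with higher test performance.
ses = [10, 20, 30, 40, 50, 60]      # socioeconomic status, arbitrary units
score = [42, 48, 53, 57, 60, 62]    # cognitive test performance

fig, (ax_normal, ax_flipped) = plt.subplots(1, 2, figsize=(8, 3))

ax_normal.plot(ses, score, marker='o')
ax_normal.set_xlabel('SES (increasing to the right)')
ax_normal.set_ylabel('test performance')
ax_normal.set_title('Conventional axis')

ax_flipped.plot(ses, score, marker='o')
ax_flipped.invert_xaxis()           # reproduce the flipped X-axis of the presentation
ax_flipped.set_xlabel('SES (decreasing to the right)')
ax_flipped.set_title('Flipped axis')

plt.tight_layout()
plt.show()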

    \n

    (And, of course, rotating around the line Y=X would suggest that intelligence makes you rich.)

    " } }, { "_id": "4gevjbK77NQS6hybY", "title": "Understanding your understanding", "pageUrl": "https://www.lesswrong.com/posts/4gevjbK77NQS6hybY/understanding-your-understanding", "postedAt": "2010-03-22T22:33:23.315Z", "baseScore": 102, "voteCount": 88, "commentCount": 80, "url": null, "contents": { "documentId": "4gevjbK77NQS6hybY", "html": "

    Related to: Truly Part of You, A Technical Explanation of Technical Explanation

    Partly because of LessWrong discussions about what really counts as understanding (some typical examples), I came up with a scheme to classify different levels of understanding so that posters can be more precise about what they mean when they claim to understand -- or fail to understand -- a particular phenomenon or domain.

    Each level has a description so that you know if you meet it, and tells you what to watch out for when you're at or close to that level. I have taken the liberty of naming them after the LW articles that describe what such a level is like.

    Level 0: The "Guessing the Teacher's Password" Stage

    Summary: You have no understanding, because you don't see how any outcome is more or less likely than any other.

    Description: This level is only included for comparison -- to show something that is not understanding. At this point, you have, at best, labels that other people use when describing the phenomenon. Maybe you can even generate the appearance of understanding on the topic. However, you actually have a maximum entropy probability distribution. In other words, nothing would surprise you, no event is more or less likely to happen, and everything is consistent with what you "know" about it. No rationalist should count this as an understanding, though it may involve knowledge of the labels that a domain uses.

    Things to watch out for: Scientific-sounding terms in your vocabulary that don't correspond to an actual predictive model; your inability to say what you expect to see, and what you would be surprised by.
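    As an aside, here is a minimal sketch of what "maximum entropy" means in the description above, using a made-up four-outcome domain: the password-guessing state spreads probability evenly, so its entropy is maximal and nothing can surprise you.

```python
from math import log2

def entropy(dist):
    # Shannon entropy in bits; higher means the model predicts less.
    return -sum(p * log2(p) for p in dist if p > 0)

password_guessing = [0.25, 0.25, 0.25, 0.25]  # Level 0: every outcome equally likely
working_model     = [0.85, 0.05, 0.05, 0.05]  # Level 1: one outcome strongly expected

print(entropy(password_guessing))  # 2.0 bits, the maximum for four outcomes
print(entropy(working_model))      # about 0.85 bits; observations can now surprise you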

    Level 1: The "Shut up and Calculate" Stage

    Summary: You can successfully predict the phenomenon, but see it as an independent, compartmentalized domain.

    Description: This is where you can predict the phenomenon, using a generative model that tells you what to expect. You are capable of being surprised, as certain observations are assigned low probability. It may even be tremendously complicated, but it works.

    Though low on the hierarchy, it's actually a big accomplishment in itself. However, when you are at this stage, you see its dynamics as being unrelated to anything else, belonging to its own domain, following its own rules. While it might have parallels to things you do understand, you see no reason why the parallel must hold, and therefore can't reason about how extensive that relationship is.

    Things to watch out for: Going from "It just works, I don't know what it means" to "it doesn't mean anything!" Also, becoming proud of your ignorance of its relationship to the rest of the world.

    Level 2: The "Entangled Truths" Stage. (Alternate name: "Universal Fire".)

    Summary: Your accurate model in this domain has deep connections to the rest of your models (whether inferential or causal); inferences can flow between the two.

    Description: At this stage, your model of the phenomenon is also deeply connected to your model of everything else. Instead of the phenomenon being something with its own set of rules, you see how its dynamics interface with the dynamics of everything else in your understanding. You can derive parameters in this domain from your knowledge in another domain; you can explain how they are related.

    Note the regression here: you meet this stage when your model for the new phenomenon connects to your model for "everything else". So what about the first "everything else" you understood (which could be called your "primitively understood" part of reality)? This would be the instinctive model of the world that you are born with: the "folk physics", "folk psychology", etc. Its existence is revealed in such experiments as when babies are confused by rolling balls that suddenly violate the laws of physics.

    This "Level 2" understanding therefore ultimately connects everything back to your direct, raw experiences ("qualia") of the world, but, importantly, is not subordinate to them – optical illusions shouldn't override the stronger evidence that proves to you it's an illusion.

    Things to watch out for: Assuming that similar behavior in different domains ("surface analogies") is enough to explain their relationship. Also, using one intersection between multiple domains as a reason to immediately collapse them together.

    Level 3: The "Truly Part of You" Stage

    Summary: Your models are such that you would re-discover them, for the right reasons, even if they were deleted from your memory.

    Description: At this stage, not only do you have good, well-connected models of reality, but they are so well-grounded, that they "regenerate" when "damaged". That is, you weren't merely fed these wonderful models outright by some other Really Smart Being (though initially you might have been), but rather, you also consistently use a reliable method for gaining knowledge, and this method would eventually stumble upon the same model you have now, no matter how much knowledge is stripped away from it.

    This capability arises because your high understanding makes much of your knowledge redundant: knowing something in one domain has implications in quite distant domains, leading you to recognize what was lost – and your reliable methods of inference tell you what, if anything, you need to do to recover it.

    This stage should be the goal of all rationalists.

    Things to watch out for: Hindsight bias: you may think you would have made the same inferences at a previous epistemic state, but that might just be due to already knowing the answers. Also, if you're really at this stage, you should have what amounts to a "fountain of knowledge" – are you learning all you can from it?

    In conclusion: In trying to enhance your own, or someone else's, understanding of a topic, I recommend identifying which level you both are at to see if you have something to learn from each other, or are simply using different standards.

    " } }, { "_id": "mf5LS5pxAy6WxCFNW", "title": "What would you do if blood glucose theory of willpower was true?", "pageUrl": "https://www.lesswrong.com/posts/mf5LS5pxAy6WxCFNW/what-would-you-do-if-blood-glucose-theory-of-willpower-was", "postedAt": "2010-03-22T20:18:21.388Z", "baseScore": 20, "voteCount": 20, "commentCount": 43, "url": null, "contents": { "documentId": "mf5LS5pxAy6WxCFNW", "html": "

    There's a considerable amount of evidence that willpower is severely diminished when blood glucose gets low, and this effect is not limited to humans. And a small sugary drink at the right time is enough to restore it.

    \n

    We're talking really small numbers. Total blood glucose of a healthy adult is about 5g, and it varies within a fairly limited range. Then there's maybe 45g in total body water. Then there's about 100g of glycogen in the liver, plus a yet larger amount in muscles and other organs, which doesn't seem to take part in sugar level regulation. For comparison, a small can of coke contains 33g - really small amounts at appropriate times can make a big difference.

    \n

    This leads to two issues. First, is blood glucose a good explanation for willpower deficiency and therefore akrasia? I'd say there's a significant amount of evidence that some effect exists, but is it really the most important factor? Humans are complicated, science knows very little about how we work, and probably half of what it \"knows\" is false or at best only half-true. Caution is definitely warranted.

    \n

    And the second issue - if this theory were true, and by manipulating blood glucose levels you could achieve far greater willpower whenever you wanted, what would you do? It seems that exploiting it isn't that easy, and I'd love to hear if any of you have tried it before.

    " } }, { "_id": "vKcGsSFh3SYCoPx8T", "title": "SIA doomsday: The filter is ahead", "pageUrl": "https://www.lesswrong.com/posts/vKcGsSFh3SYCoPx8T/sia-doomsday-the-filter-is-ahead", "postedAt": "2010-03-22T15:16:48.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "vKcGsSFh3SYCoPx8T", "html": "

    The great filter, as described by Robin Hanson:

    \n

    Humanity seems to have a bright future, i.e., a non-trivial chance of expanding to fill the universe with lasting life. But the fact that space near us seems dead now tells us that any given piece of dead matter faces an astronomically low chance of begetting such a future. There thus exists a great filter between death and expanding lasting life, and humanity faces the ominous question: how far along this filter are we?

    \n

    I will argue that we are not far along at all. Even if the steps of the filter we have already passed look about as hard as those ahead of us, most of the filter is probably ahead. Our bright future is an illusion; we await filtering. This is the implication of applying the self indication assumption (SIA) to the great filter scenario, so before I explain the argument, let me briefly explain SIA.

    \n

    SIA says that if you are wondering which world you are in, rather than just wondering which world exists, you should update on your own existence by weighting possible worlds as more likely the more observers they contain. For instance if you were born of an experiment where the flip of a fair coin determined whether one (tails) or two (heads) people were created, and all you know is that and that you exist, SIA says heads was twice as likely as tails. This is contentious; many people think in such a situation you should think heads and tails equally likely. A popular result of SIA is that it perfectly protects us from the doomsday argument. So now I’ll show you we are doomed anyway with SIA.

    \n

    Consider the diagrams below. The first one is just an example with one possible world so you can see clearly what all the boxes mean in the second diagram, which compares worlds. In a possible world there are three planets and three stages of life. Each planet starts at the bottom and moves up, usually until it reaches the filter. This is where most of the planets become dead, signified by grey boxes. In the example diagram the filter is after our stage. The small number of planets and stages and the concentration of the filter are for simplicity; in reality the filter needn't be only one unlikely step, and there are many planets and many phases of existence between dead matter and galaxy-colonizing civilization. None of these things are important to this argument.

    \n

    [Diagram: an example possible world with three planets and three stages, the filter falling after our stage]

    \n

    The second diagram shows three possible worlds where the filter is in different places. In every case one planet reaches the last stage in this model – this is to signify a small chance of reaching the last step, because we don't see anyone out there, but have no reason to think it impossible. In the diagram, we are in the middle stage, an earthbound technological civilization, say. Assume the various places we think the filter could be are equally likely.

    \n

    \"SIA
    \n

    \n

    .

    \n

    This is how to reason about your location using SIA:

    \n
      \n
    1. The three worlds begin equally likely.
    2. Update on your own existence using SIA by multiplying the likelihood of each world by its population. Now the likelihood ratio of the worlds is 3:5:7.
    3. Update on knowing you are in the middle stage. New likelihood ratio: 1:1:3. Of course, if we began with an accurate number of planets in each possible world, the 3 would be humungous and we would be much more likely to be in an unfiltered world.
    \n

    Therefore we are much more likely to be in worlds where the filter is ahead than behind.
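    To spell out the arithmetic, here is a short sketch; the observer counts (3, 5, 7 in total, of which 1, 1, 3 are in the middle stage) are my reading of the ratios quoted above, not exact numbers from the diagrams.

```python
def normalize(weights):
    total = sum(weights)
    return [w / total for w in weights]

# Three possible worlds, with the filter at successively later stages.
prior            = [1, 1, 1]   # step 1: the worlds begin equally likely
total_observers  = [3, 5, 7]   # step 2: SIA weights each world by its population
middle_observers = [1, 1, 3]   # step 3: observers in the middle stage, where we are

after_sia = [p * n for p, n in zip(prior, total_observers)]            # ratio 3:5:7
after_middle = [w * m / n for w, n, m in
                zip(after_sia, total_observers, middle_observers)]     # ratio 1:1:3

print(normalize(after_sia))     # [0.2, 0.33..., 0.47...]
print(normalize(after_middle))  # [0.2, 0.2, 0.6]: the filter-ahead world dominates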

    \n

    —-

    \n

    Added: I wrote a thesis on this too.

    \n

    \n

    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "cmrtpfG7hGEL9Zh9f", "title": "The scourge of perverse-mindedness", "pageUrl": "https://www.lesswrong.com/posts/cmrtpfG7hGEL9Zh9f/the-scourge-of-perverse-mindedness", "postedAt": "2010-03-21T07:08:28.304Z", "baseScore": 131, "voteCount": 114, "commentCount": 255, "url": null, "contents": { "documentId": "cmrtpfG7hGEL9Zh9f", "html": "

    This website is devoted to the art of rationality, and as such, is a wonderful corrective to wrong facts and, more importantly, wrong procedures for finding out facts.

    There is, however, another type of cognitive phenomenon that I’ve come to consider particularly troublesome, because it militates against rationality in the irrationalist, and fights against contentment and curiosity in the rationalist. For lack of a better word, I’ll call it perverse-mindedness.

    The perverse-minded do not necessarily disagree with you about any fact questions. Rather, they feel the wrong emotions about fact questions, usually because they haven’t worked out all the corollaries.

    Let’s make this less abstract. I think the following quote is preaching to the choir on a site like LW:

    “The universe that we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil, no good, nothing but pitiless indifference.”
    -Richard Dawkins, \"God's Utility Function,\" Scientific American (November, 1995).

    Am I posting that quote to disagree with it? No. Every jot and tittle of it is correct. But allow me to quote another point of view on this question.

    “We are not born into this world, but grow out of it; for in the same way an apple tree apples, the Earth peoples.”

    This quote came from an ingenious and misguided man named Alan Watts. You will not find him the paragon of rationality, to put it mildly. And yet, let’s consider this particular statement on its own. What exactly is wrong with it? Sure, you can pick some trivial holes in it – life would not have arisen without the sun, for example, and Homo sapiens was not inevitable in any way. But the basic idea – that life and consciousness is a natural and possibly inevitable consequence of the way the universe works – is indisputably correct.

    So why would I be surprised to hear a rationalist say something like this? Note that it is empirically indistinguishable from the more common view of “mankind confronted by a hostile universe.” This is the message of the present post: it is not only our knowledge that matters, but also our attitude to that knowledge. I believe I share a desire with most others here to seek truth naively, swallowing the hard pills when it becomes necessary. However, there is no need to turn every single truth into a hard pill. Moreover, sometimes the hard pills also come in chewable form.

    What other fact questions might people regard in a perverse way?

    How about materialism, the view that reality consists, at bottom, in the interplay of matter and energy? This, to my mind, is the biggie. To come to facilely gloomy conclusions based on materialism seems to be practically a cottage industry among Christian apologists and New Agers alike. Since the claims are all so similar to each other, I will address them collectively.

    “If we are nothing but matter in motion, mere chemicals, then:

    \n
      \n
    1. Life has no meaning;
    2. Morality has no basis;
    3. Love is an illusion;
    4. Everything is futile (there is no immortality);
    5. Our actions are determined; we have no free will;
    6. et cetera.”
    \n


    The usual response from materialists is to say that an argument from consequences isn’t valid – if you don’t like the fact that X is just matter in motion, that doesn’t make it false. While eminently true, as a rhetorical strategy for convincing people who aren’t already on board with our programme, it’s borderline suicidal.

    I have already hinted at what I think the response ought to be. It is not necessarily a point-by-point refutation of each of these issues individually. The simple fact is, not only is materialism true, but it shouldn’t bother anyone who isn’t being perverse about it, and it wouldn’t bother us if it had always been the standard view.

    There are multiple levels of analysis in the lives of human beings. We can speak of societies, move to individual psychology, thence to biology, then chemistry… this is such a trope that I needn’t even finish the sentence.

    However, the concerns of, say, human psychology (as distinct from neuroscience), or morality, or politics, or love, are not directly informed by physics. Some concepts only work meaningfully on one level of analysis. If you were trying to predict the weather, would you start by modeling quarks? Reductionism in principle I will argue for until the second coming (i.e., forever). Reductionism in practice is not always useful. This is the difference between proximate and ultimate causation. The perverse-mindedness I speak of consists in leaping straight from behaviour or phenomenon X to its ultimate cause in physics or chemistry. Then – here’s the “ingenious” part – declaring that, since the ultimate level is devoid of meaning, morality, and general warm-and-fuzziness, so too must be all the higher levels.

    What can we make of someone who says that materialism implies meaninglessness? I can only conclude that if I took them to see Seurat’s painting “A Sunday Afternoon on the Island of La Grande Jatte,\" they would earnestly ask me what on earth the purpose of all the little dots was. Matter is what we’re made of, in the same way as a painting is made of dried pigments on canvas. Big deal! What would you prefer to be made of, if not matter?

    It is only by the contrived unfavourable contrast of matter with something that doesn’t actually exist – soul or spirit or élan vital or whatever – that somebody can pull off the astounding trick of spoiling your experience of a perfectly good reality, one that you should feel lucky to inhabit.

    I worry that some rationalists, while rejecting wooly dualist ideas about ghosts in the machine, have tacitly accepted the dualists’ baseless assumptions about the gloomy consequences of materialism. There really is no hard pill to swallow.

    What are some other examples of perversity? Eliezer has written extensively on another important one, which we might call the disappointment of explicability. “A rainbow is just light refracting.” “The aurora is only a bunch of protons hitting the earth’s magnetic field.” Rationalists are, sadly, not immune to this nasty little meme. It can be easily spotted by tuning your ears to the words “just” and “merely.” By saying, for example, that sexual attraction is “merely” biochemistry, you are telling the truth and deceiving at the same time. You are making a (more or less) correct factual statement, while Trojan-horsing an extraneous value judgment into your listener’s mind as well: “chemicals are unworthy.” On behalf of chemicals everywhere, I say: Screw you! Where would you be without us?

    What about the final fate of the universe, to take another example? Many of us probably remember the opening scene of Annie Hall, where little Alvy tells the family doctor he's become depressed because everything will end in expansion and heat death. “He doesn't do his homework!” cries his mother. “What's the point?” asks Alvy.

    Although I found that scene hilarious, I have actually heard several smart people po-facedly lament the fact that the universe will end with a whimper. If this seriously bothers you psychologically, then your psychology is severely divorced from the reality that you inhabit. By all means, be depressed about your chronic indigestion or the Liberal Media or teenagers on your lawn, but not about an event that will happen in 10^14 years, involving a dramatis personae of burnt-out star remnants. Puh-lease. There is infinitely more tragedy happening every second in a cup of buttermilk.

    The art of not being perverse consists in seeing the same reality as others and agreeing about facts, but perceiving more in an aesthetic sense. It is the joy of learning something that’s been known for centuries; it is appreciating the consilience of knowledge without moaning about reductionism; it is accepting nature on her own terms, without fatuous navel-gazing about how unimportant you are on the cosmic scale. If there is a fact question at stake, take no prisoners; but you don’t get extra points for unnecessary angst.

    " } }, { "_id": "PmQoBTjFNMDB6Nu9r", "title": "The Price of Life", "pageUrl": "https://www.lesswrong.com/posts/PmQoBTjFNMDB6Nu9r/the-price-of-life", "postedAt": "2010-03-20T09:40:12.278Z", "baseScore": 4, "voteCount": 20, "commentCount": 39, "url": null, "contents": { "documentId": "PmQoBTjFNMDB6Nu9r", "html": "

    Less Wrong readers are familiar with the idea that you can and should put a price on life. Unfortunately, the Big Lie that you can't and shouldn't has big consequences in the current health care debate. Here are some articles on it:

    \n

    Yvain's blog post here (HT: Vladimir Nesov).
    Peter Singer's article on rationing health care here.
    Wikipedia here.
    Experts and policy makers who debate this issue here.

    \n

    For those new to Less Wrong, here's the crux of Peter Singer's reasoning as to why you can put a price on life:

    \n

    \n
    \n

    The dollar value that bureaucrats place on a generic human life is intended to reflect social values, as revealed in our behavior. It is the answer to the question: \"How much are you willing to pay to save your life?\" — except that, of course, if you asked that question of people who were facing death, they would be prepared to pay almost anything to save their lives. So instead, economists note how much people are prepared to pay to reduce the risk that they will die. How much will people pay for air bags in a car, for instance? Once you know how much they will pay for a specified reduction in risk, you multiply the amount that people are willing to pay by how much the risk has been reduced, and then you know, or so the theory goes, what value people place on their lives. Suppose that there is a 1 in 100,000 chance that an air bag in my car will save my life, and that I would pay $50 — but no more than that — for an air bag. Then it looks as if I value my life at $50 x 100,000, or $5 million.

    \n

    The theory sounds good, but in practice it has problems. We are not good at taking account of differences between very small risks, so if we are asked how much we would pay to reduce a risk of dying from 1 in 1,000,000 to 1 in 10,000,000, we may give the same answer as we would if asked how much we would pay to reduce the risk from 1 in 500,000 to 1 in 10,000,000. Hence multiplying what we would pay to reduce the risk of death by the reduction in risk lends an apparent mathematical precision to the outcome of the calculation — the supposed value of a human life — that our intuitive responses to the questions cannot support. Nevertheless, this approach to setting a value on a human life is at least closer to what we really believe — and to what we should believe — than dramatic pronouncements about the infinite value of every human life, or the suggestion that we cannot distinguish between the value of a single human life and the value of a million human lives, or even of the rest of the world. Though such feel-good claims may have some symbolic value in particular circumstances, to take them seriously and apply them — for instance, by leaving it to chance whether we save one life or a billion — would be deeply unethical.

    \n
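    As a sanity check on the arithmetic in the quoted passage, here is a minimal sketch of the willingness-to-pay calculation; the $50 and 1-in-100,000 figures are Singer's own example.

```python
def implied_value_of_life(willingness_to_pay, risk_reduction):
    # Value of a statistical life: payment divided by the risk it removes.
    return willingness_to_pay / risk_reduction

# Singer's air bag example: pay at most $50 for a 1-in-100,000 chance of saving your life.
print(implied_value_of_life(50, 1 / 100_000))  # about 5,000,000 dollars, matching Singer's figure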
    " } }, { "_id": "v4ngP587MDZ5rC48Y", "title": "Lights, Camera, Action!", "pageUrl": "https://www.lesswrong.com/posts/v4ngP587MDZ5rC48Y/lights-camera-action", "postedAt": "2010-03-20T05:29:21.102Z", "baseScore": 45, "voteCount": 53, "commentCount": 58, "url": null, "contents": { "documentId": "v4ngP587MDZ5rC48Y", "html": "

    Sequence index: Living Luminously
    Previously in sequence: The ABC's of Luminosity
    Next in sequence: The Spotlight

    \n

    You should pay attention to key mental events, on a regular and frequent basis, because important thoughts can happen very briefly or very occasionally and you need to catch them.

    \n

    You may find your understanding of this post significantly improved if you read the third story from Seven Shiny Stories.

    \n

    Luminosity is hard and you are complicated.  You can't meditate on yourself for ten minutes over a smoothie and then announce your self-transparency.  You have to keep working at it over a long period of time, not least because some effects don't work over the short term.  If your affect varies with the seasons, or with major life events, then you'll need to keep up the first phase of work through a full year or a major life event, and it turns out those don't happen every alternate Thursday.  Additionally, you can't cobble together the best quality models from snippets of introspection that are each five seconds long; extended strings of cognition are important, too, and can take quite a long time to unravel fully.

    Sadly, looking at what you are thinking inevitably changes it.  With enough introspection, this wouldn't influence your accuracy about your overall self - there's no reason in principle why you couldn't spend all your waking hours noting your own thoughts and forming meta-thoughts in real time - but practically speaking that's not going to happen.  Therefore, some of your data will have to come from memory.  To minimize the error introduction that comes of retrieving things from storage, it's best to arrange to reflect on very recent thoughts.  It may be worth your while to set up an external reminder system to periodically prompt you to look inward, both in the moment and retrospectively over the last brief segment of time.  This can be a specifically purposed system (i.e. set a timer to go off every half hour or so), or you can tie it to convenient promptings from the world as-is, like being asked \"What's up?\" or \"Penny for your thoughts\".

    When you introspect, there is a lot to keep track of.  For instance, consider the following:

    \n\n


    You cannot have too much data.  (You probably can have too much data in one situation relative to how much you have in another, though - that'll overbalance your models - so make a concerted effort to diversify your times and situations for introspection.)  When you acquire the data, correlate it to learn more about what might bring various aspects of your thought into being.

    " } }, { "_id": "SEZqJcSm25XpQMhzr", "title": "Information theory and the symmetry of updating beliefs", "pageUrl": "https://www.lesswrong.com/posts/SEZqJcSm25XpQMhzr/information-theory-and-the-symmetry-of-updating-beliefs", "postedAt": "2010-03-20T00:34:09.078Z", "baseScore": 65, "voteCount": 53, "commentCount": 29, "url": null, "contents": { "documentId": "SEZqJcSm25XpQMhzr", "html": "

    Contents:

    \n

    1.  The beautiful symmetry of Bayesian updating
    2.  Odds and log odds: a short comparison
    3.  Further discussion of information

    \n

    Rationality is all about handling this thing called \"information\".  Fortunately, we live in an era after the rigorous formulation of Information Theory by C.E. Shannon in 1948, a basic understanding of which can actually help you think about your beliefs, in a way similar but complementary to probability theory. Indeed, it has flourished as an area of research exactly because it helps people in many areas of science to describe the world.  We should take advantage of this!

    \n

    The information theory of events, which I'm about to explain, is about as difficult as high school probability.  It is certainly easier than the information theory of multiple random variables (which right now is explained on Wikipedia), even though the equations look very similar.  If you already know it, this can be a linkable source of explanations to save you writing time :)

    \n

    So!  To get started, what better way to motivate information theory than to answer a question about Bayesianism?

    \n

    The beautiful symmetry of Bayesian updating

    \n

    The factor by which observing A increases the probability of B is the same as the factor by which observing B increases the probability of A.  This factor is P(A and B)/(P(A)·P(B)), which I'll denote by pev(A,B) for reasons to come.  It can vary from 0 to +infinity, and allows us to write Bayes' Theorem succinctly in both directions:

    \n

         P(A|B)=P(A)·pev(A,B),   and   P(B|A)=P(B)·pev(A,B)

    \n

    What does this symmetry mean, and how should it affect the way we think?

    \n

    A great way to think of pev(A,B) is as a multiplicative measure of mutual evidence, which I'll call mutual probabilistic evidence to be specific.  If pev=1, they're independent; if pev>1, they make each other more likely; and if pev<1, they make each other less likely.
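    Here is a quick numerical sketch with a made-up joint distribution over two events, just to show the factor doing its job in both directions of Bayes' Theorem.

```python
# Made-up joint distribution over two events A and B.
p_a_and_b = 0.1875
p_a = 0.5
p_b = 0.25

pev = p_a_and_b / (p_a * p_b)    # mutual probabilistic evidence; 1.5 here, so > 1

# Both directions of Bayes' Theorem use the same factor:
p_a_given_b = p_a_and_b / p_b    # 0.75
p_b_given_a = p_a_and_b / p_a    # 0.375
assert abs(p_a_given_b - p_a * pev) < 1e-12
assert abs(p_b_given_a - p_b * pev) < 1e-12

print(pev)          # 1.5: A and B make each other more likely
print(p_a_given_b)  # 0.75, up from the prior P(A) = 0.5
print(p_b_given_a)  # 0.375, up from the prior P(B) = 0.25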

    \n

    But two ways to think are better than one, so I will offer a second explanation, in terms of information, which I often find quite helpful in analyzing my own beliefs:

    \n

    \n

    Probabilistic evidence is related to \"mutual information\"

    \n

    Lets examine a simple example, and work our way up to illustrating what I mean:

    \n

    Say I flip four fair coins with faces \"0\" and \"1\" to generate a 4-bit binary string, X.  If I tell you that \"X=1???\", meaning that the first coin reads \"1\", this reduces the number of possibilities by ½.  We'd like to say here that you've gained \"1 bit of information\".  Suppose instead that I say \"X begins with 01 or 10\".  This has quantitatively the same effect, in that it reduces the number of possibilities by ½, so it should also be called \"1 bit of information\".  You might call the first statement an \"explicit bit\" in that it explicitly specifies a 1 or 0 in the sequence, but this is merely a qualitative distinction.  For once, we're interested in quantity, not quality.

    \n

    Now, let A be the event \"X=111?\" (the event that the first three bits come up \"1\", and the last bit can be anything), which has probability P(A)=2^-3.  If A is true but you don't know it, you need to observe exactly 3 independent bits (e.g. the first 3 coins) to confirm it.  Intuitively, this is how uncertain A is, because it tells us how far away we are from confirming A.  On the other hand, if I tell you A is true, you now only need 1 more independent bit to specify X=111?, so we can say A has \"provided 3 bits\" of \"information\".  Intuitively, this is how informative A is.  These vague ideas nudge us toward the following definition:

    \n

    The information value of an event

    \n

    We denote and define the information value of an event A (aka \"surprisal\" or \"self-information\", but not in this post) by the formula

    \n

         inf(A) := log½(P(A)) = -log2(P(A))

    \n

    which in our example is -log2(2^-3) = 3 bits, just as we'd like.  As was suggested, this quantity has two different intuitive meanings, which by the miracle of logic correspond to the same number inf(A), measured in bits:

    \n

         1) Uncertainty: How many independent bits are required to confirm that A is true.

    \n

         2) Informativity: How many independent bits are gained if we are told that A is true.

    \n

    Caution: information value is not \"data\", but rather it is a number that can tell you how uncertain or how informative the data is.  Be on the lookout for when \"information\" means \"data\", and when it means \"information value.\"

    \n

    Mutual information = informational evidence

    \n

    Next, let B be the event \"X=??11\", so P(B)=2^-2, and recall that A is the event \"X=111?\".  Both A and B tell us that the third position reads \"1\", which is independent from the other explicit bits they specify.  In this sense, there is 1 bit of \"redundancy\" in observing both A and B.  Notice that A provides 3 bits, B provides 2 bits, but \"A and B\" together specify that \"X=1111\" which is only 4 bits, and 3+2-4=1.  Thus, we can calculate \"redundancy\" as

    \n

         inf(A) + inf(B) - inf(A and B),

    \n

    which is why this expression is called the mutual information of A and B.  But wait... taking log2 of probabilistic evidence pev(A,B)=P(A and B)/(P(A)·P(B)) yields exactly the same expression!  So I'll also call it informational evidence, and write

    \n

         iev(A,B) := log2 pev(A,B) = inf(A) + inf(B) - inf(A and B)

    \n

    While we're at it, let's just take -log2 of the rest of Bayes' theorem and see what we get.  We can define conditional information value by letting

    \n

         inf(A|B) := -log2 P(A|B) = inf(A and B) - inf(B),

    \n

    and now Bayes' theorem attains the following form:

    \n

         inf(A|B) = inf(A) - iev(A,B)   ←   information theoretic Bayes' Theorem

    \n

    In Bayesian updating, A hasn't happened yet, so here let's use our \"uncertainty\" interpretation of information value.  As you can see from the equation, if iev(A,B) is positive, the uncertainty of A decreases upon observing B, meaning A becomes more likely.  If it is negative, the uncertainty of A increases, so A becomes less likely. It ranges from -infinity to +infinity according as A and B completely contradict or completely confirm each other.  In summary:

    \n

    Bayesian updating = subtracting mutual evidence from uncertainty.
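    To check this numerically, here is a sketch using the 4-bit example from earlier (A is \"X=111?\", B is \"X=??11\"); it enumerates all 16 strings and verifies that inf(A|B) = inf(A) - iev(A,B).

```python
from itertools import product
from math import log2

strings = [''.join(bits) for bits in product('01', repeat=4)]  # all 16 equally likely outcomes

def prob(event):
    # Probability of an event (a predicate on 4-bit strings) under fair coin flips.
    return sum(1 for x in strings if event(x)) / len(strings)

def inf(p):
    # Information value, in bits, of an event with probability p.
    return -log2(p)

A = lambda x: x[:3] == '111'       # the event X=111?
B = lambda x: x[2:] == '11'        # the event X=??11
A_and_B = lambda x: A(x) and B(x)

iev = inf(prob(A)) + inf(prob(B)) - inf(prob(A_and_B))  # mutual evidence: 1 bit
inf_a_given_b = inf(prob(A_and_B)) - inf(prob(B))       # conditional information: 2 bits

print(iev)                # 1.0
print(inf_a_given_b)      # 2.0
print(inf(prob(A)) - iev) # 2.0, matching: updating subtracts mutual evidence from uncertainty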

    \n

    This is my other favorite way to think about updating.  The fact that evidence can also be thought of as a kind of redundancy or mutual information gives a concrete interpretation for the symmetry of belief updating.  As well, since \"N bits of information\" is so easy to conceptualize as a precise quantity, it gives a quantitative intuitive meaning to \"how much A and B support each other\".  In fact, noticing this is what got me interested in information theory in the first place, and I hope it has piqued your interest, too!

    \n

    What kept me interested is the simple fact that informational evidence behaves so nicely:

    \n

         (Symmetry)  iev(A,B) = iev(B,A)

    \n

    More examples and discussion to boost your familiarity with information value are provided in Section 3, but for now, let's break for a comparison with two other methods of describing Bayesian updating. 

    \n

     

    \n
    \n

     

    \n

    Odds and log odds: a short comparison.

    \n

    (I think these deserve a special mention, because they have already been discussed on LessWrong.com.)

    \n

    Bayes' theorem can also be expressed fairly neatly using odds with likelihood ratios, and log odds with log likelihood-ratios.  One shortcoming with using odds when updating is that the likelihood-ratio K(B|A)=P(B|A)/P(B|¬A), sometimes called the Bayes factor, is not symmetric, so it does not make the symmetry of updating obvious.  Likewise, log likelihood-ratios are not symmetric either.  

    \n

    But odds and log odds have their advantages. For example, if B1 and B2 are conditionally independent given A and conditionally independent given ¬A, then K(B1 and B2|A) = K(B1|A)·K(B2|A), and similarly for any number of B's.  These conditions are met naturally when B1 and B2 are causal consequences of A which do not causally influence each other.  By contrast, in causal systems, it is usually not the case that pev(A, B1 and B2) = pev(A,B1)·pev(A,B2).  (Reading Pearl's \"Causality: Models, Reasoning, and Inference\" clarified this for me once and for all, by making precise what a \"causal system\" is.)  
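    A small sketch of that contrast, with made-up numbers for a cause A and two effects B1 and B2 that are conditionally independent given A and given ¬A: the Bayes factors multiply, while the pev factors do not.

```python
# Made-up causal setup: A causes B1 and B2, which are conditionally
# independent given A and conditionally independent given not-A.
p_a = 0.5
p_b1_given_a, p_b1_given_not_a = 0.75, 0.25
p_b2_given_a, p_b2_given_not_a = 0.5, 0.25

# Bayes factors (likelihood ratios) multiply under these conditions:
k_b1 = p_b1_given_a / p_b1_given_not_a                                      # 3.0
k_b2 = p_b2_given_a / p_b2_given_not_a                                      # 2.0
k_both = (p_b1_given_a * p_b2_given_a) / (p_b1_given_not_a * p_b2_given_not_a)
print(k_both, k_b1 * k_b2)                                                  # 6.0 6.0

# Probabilistic evidence pev generally does not multiply the same way:
p_b1 = p_a * p_b1_given_a + (1 - p_a) * p_b1_given_not_a
p_b2 = p_a * p_b2_given_a + (1 - p_a) * p_b2_given_not_a
p_b1_b2 = (p_a * p_b1_given_a * p_b2_given_a
           + (1 - p_a) * p_b1_given_not_a * p_b2_given_not_a)

pev_b1 = (p_a * p_b1_given_a) / (p_a * p_b1)        # 1.5
pev_b2 = (p_a * p_b2_given_a) / (p_a * p_b2)        # about 1.33
pev_both = (p_a * p_b1_given_a * p_b2_given_a) / (p_a * p_b1_b2)
print(pev_both, pev_b1 * pev_b2)                    # about 1.71 vs about 2.0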

    \n

     

    \n
    \n

     

    \n

    Further discussion of information

    \n

    In our excitement to get to an information theoretic Bayes' theorem, we glossed over a lot of opportunities to stop and reflect, so let's do some more of that here.

    \n

    Information vs \"data\" or \"knowledge\"

    \n

    C.E. Shannon originally used the full phrase \"information value\", but nowadays it is often shortened to \"information\".  As mentioned, information is not a synonym for \"data\" or \"knowledge\" when used in this way.

    \n

    It may help to analogize this with how \"mass\" is not \"matter\".  If I place 2 grams of matter on the left side of a balance scale, and 3 grams on the right, it will tip to the right, because 3g-2g=1g>0g.  Where is this 1 gram of matter?  Which \"1 gram of matter\" is the matter that tips the scales?  The question is meaningless, because the 1g doesn't refer to any matter in particular, just a difference in total amounts.  But you can ask \"how much mass does this matter have?\", and likewise \"how much information does this data have?\".

    \n

    Why \"the redundant information\" doesn't make sense

    \n

    When iev(A,B) is positive, we spoke of the mutual information of A and B as \"redundancy\".  But what is this redundant information?  What does it say?  Again, this is the \"information value is data\" fallacy making ill-posed questions.  It's somewhat like asking which gram of matter should be removed from the scales above in order to balance it. To illustrate more precisely, suppose again that A says \"X=111?\" and B says \"X=??11\".  If R is the event \"X=??1?\", it is tempting to call R \"the mutual information\" of A and B.  Indeed, if we first observe R, then A and B become independent, so there is no more redundancy.  But this R is not unique. Any list of 8 outcomes that include the A outcomes and B outcomes would also work this way.  For example, we could take R to say \"X is one of 0011, 0111, 1011, 1110, 1111, 1000, 0100, 0001\". 
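    To see that the alternative R really does work, here is a sketch that enumerates the 16 strings and checks that A and B are conditionally independent given either choice of R.

```python
from itertools import product

strings = [''.join(bits) for bits in product('01', repeat=4)]

A = {x for x in strings if x[:3] == '111'}   # X=111?
B = {x for x in strings if x[2:] == '11'}    # X=??11
R1 = {x for x in strings if x[2] == '1'}     # X=??1?
R2 = {'0011', '0111', '1011', '1110', '1111', '1000', '0100', '0001'}

def conditionally_independent(a, b, given):
    # Check P(A and B | R) == P(A | R) * P(B | R) under the uniform distribution.
    n = len(given)
    p_a, p_b = len(a & given) / n, len(b & given) / n
    p_ab = len(a & b & given) / n
    return abs(p_ab - p_a * p_b) < 1e-12

print(conditionally_independent(A, B, R1))  # True
print(conditionally_independent(A, B, R2))  # True: so R1 is not uniquely 'the' redundancy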

    \n

    To infinity and... well, just to infinity.

    \n

    We saw that information value inf(A) ranges from 0 to +infinity, and can be interpreted either as informativity or uncertainty, depending on whether the event has happened or not.  Let's think a little about the extremes of this scale:

    \n

    That 0 information value corresponds to a 100%-likely event means:

    \n

         1) 0 informativity: you don't gain any information from observing an event that you already knew was certain (ignoring 0%-likely discrepancies), and

    \n

         2) 0 uncertainty: you don't require any information to verify an event that is certain to occur, and

    \n

    That +infinity information value corresponds to a 0%-likely event means:

    \n

         1) infinite uncertainty: no finite amount of information can convince you of a 0%-likely event (though perhaps an infinite series of tests could bring you arbitrarily close), and

    \n

         2) infinite informativity: if you observe a 0%-likely event, you might win a Nobel prize (it means someone messed up by having a prior belief of 0% somewhere when they shouldn't have).

    \n

    For the values in between, more likely = less uncertain = less informative, and less likely = more uncertain = more informative.

    \n

    What other cool stuff can happen?

    \n

    To get more comfortable with how information values work, let's return to our random 4-bit string X, generated by flipping four coins:

    \n

    Encoding.  Let C be the event \"X contains exactly one 1\", i.e.  X=1000, 0100, 0010, or 0001.  This happens with probability 4/16=1/4=2^-2, so inf(C) = 2 bits.  If C is true, it provides 2 bits of information about X, and using an additional 2 bits we could encode the position of the \"1\" by writing \"first\"=00, \"second\"=01, \"third\"=10, and \"fourth\"=11.  Thus we end up using 4 bits in total to specify or \"encode\" X, as we'd expect.  In general, there are theorems characterizing information entirely in terms of encoding/decoding, which is part of what makes it so useful in applications.

    \n

    Negative evidence. Let D be the event \"X starts with 1\", which one sees directly as specifying inf(D) = 1 bit of information.  It is easy to see that P(D)=1/2 and P(D|C)=1/4, so we know C makes D less likely (and vice versa, by update symmetry!), but lets practice thinking in terms of information.  Together, \"C and D\" just means X=1000, so inf(C and D) = 4 bits: it completely determines X.  On the other hand, we saw that inf(C)=2 bits, and inf(D)=1 bit, so iev(C,D) = 2+1-4 = -1, confirming that either of them would present negative evidence for the other.

    \n

    Non-integer information values. Being defined as a logarithm, in real life information values are usually not integers, just like probabilities are not usually simple fractions.  This is not actually a problem, but reflects a flexibility of the definition.  For example, consider the event ¬B: \"X does not end with 11\", which has probability 3/4, hence inf(¬B)=-log2(3/4) = 0.415.  If we also knew \"X does not start with 11\", that would give us another 0.415 bits of information (since it's independent!).  All of our formulae work just fine with non-integer information values, so we can add these to conclude we have 0.830 bits.  This being less than 1 means we still haven't constrained the number of possibilities as much as knowing a single bit for certain (i.e. 50%).  Indeed, 9 out of the 16 possibilities neither start nor end with 11.  

    \n

    Okay, but is this anything more than a cool trick with logarithms?

    \n

    Yes!  This definition of information has loads of real-world applications that legitimize it as a scientific quantity of interest:

    \n

         *Communication (bandwidth = information per second),

    \n

         *Data compression (information = how 'incompressible' the output of a data source is),

    \n

         *Statistical mechanics and physics (entropy = average uncertainty = expected informativity of observing a system),

    \n

    and of course, artificial intelligence.

    \n

    Where can I read more?

    \n

    Eliezer has written a number of posts which involve the information theory of handling multiple random variables at once.  If you want to learn more about it, Wikipedia is currently a decent source.  The general philosophy is to take expected values of the quantities defined here to obtain analogues for random variables, so you're already half-way there. 

    \n

    For something more coherent and in-depth, see A Mathematical Theory of Communication (Shannon, 1948): credited with pioneering modern information theory, it impressively remains a fantastic introduction to the subject.  Way to go, Shannon! 

    \n

    There's lots more good stuff to learn in that paper alone, but I'll end this post here.  What I think is most relevant to LessWrong readers is the awareness of a precise definition of information, and that it can help you think about beliefs and Bayesianism.

    " } }, { "_id": "XCsKmLw8L55ZyYjnK", "title": "Think Before You Speak (And Signal It)", "pageUrl": "https://www.lesswrong.com/posts/XCsKmLw8L55ZyYjnK/think-before-you-speak-and-signal-it", "postedAt": "2010-03-19T22:21:12.297Z", "baseScore": 35, "voteCount": 28, "commentCount": 40, "url": null, "contents": { "documentId": "XCsKmLw8L55ZyYjnK", "html": "

    In deciding whether to pay attention to an idea, a big clue, if it were readily available, would be how many people have checked it over for correctness, and for how long. Most new ideas that human beings come up with are wrong, and if someone just thought of something five seconds ago and excitedly wants to tell you about it, probably the only benefit of listening is not offending the person.

    \n

    But it seems quite rare for this important piece of metadata to be straightforwardly declared, perhaps because such declarations can't be trusted in general. Instead, we usually have to infer it from various other clues, like the speaker's personality (how long do they typically think before they speak?), formality of the language employed to express the idea, the presence of spelling and grammar mistakes, the venue where the idea is presented or published, etc.

    \n

    Unfortunately, such inferences can be imprecise or error-prone. For example, the same speaker may sometimes think a lot before speaking, and other times think little before speaking. Using costly signals like formal language is also wasteful compared to everyone simply telling the truth (but can still be a second-best solution in low-trust groups). In a community like ours, where most of us are striving to build reputations for being (or at least trying to be) rational and cooperative, and therefore there is a level of trust higher than usual, it might be worth experimenting with a norm of declaring how long we've thought about each new idea when presenting it. This may be either in addition to or as an alternative to other ways of communicating how confident we are about our ideas.

    \n

    To follow my own advice, I'll say that I've thought about this topic off and on for about two weeks, and then spent about three hours writing and reviewing this post. I first started thinking about it at the SIAI decision theory workshop, which was the first time I ever worked with a large group of people on a complex problem in real time. I noticed that the variance in the amount of time different people spend thinking through new ideas before they speak is quite high. I was surprised to discover, for example, that Gary Drescher has been working on decision theory for many years and has considered and discarded about a dozen possible solutions.

    \n

    The trigger for actually writing this post is yesterday's Overcoming Bias post Twin Conspiracies, which Robin seemed to have spent much less time thinking through than usual, but which has no overt indications of this. (An obvious objection that he apparently failed to consider is, wouldn't corporations actively recruit twins to be co-CEOs if they are so productive? Several OB commenters also pointed this out.) A blogger may not want to spend days poring over every post, but why not make it easier for the reader to distinguish the serious, carefully thought out ideas from the throwaway ones?

    " } }, { "_id": "Lrsu2YWrjhvAkBHLD", "title": "Open Thread: March 2010, part 3", "pageUrl": "https://www.lesswrong.com/posts/Lrsu2YWrjhvAkBHLD/open-thread-march-2010-part-3", "postedAt": "2010-03-19T03:14:04.793Z", "baseScore": 5, "voteCount": 6, "commentCount": 258, "url": null, "contents": { "documentId": "Lrsu2YWrjhvAkBHLD", "html": "

    The previous open thread has now exceeded 300 comments – new Open Thread posts may be made here.

    \n

    This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

    \n

    " } }, { "_id": "rLuZ6XrGpgjk9BNpX", "title": "The ABC's of Luminosity", "pageUrl": "https://www.lesswrong.com/posts/rLuZ6XrGpgjk9BNpX/the-abc-s-of-luminosity", "postedAt": "2010-03-18T21:47:05.648Z", "baseScore": 54, "voteCount": 47, "commentCount": 31, "url": null, "contents": { "documentId": "rLuZ6XrGpgjk9BNpX", "html": "

    Sequence index: Living Luminously
    Previously in sequence: Let There Be Light
    Next in sequence: Lights, Camera, Action!

    \n

    Affect, behavior, and circumstance interact with each other.  These interactions constitute informative patterns that you should identify and use in your luminosity project.

    \n

    You may find your understanding of this post significantly improved if you read the second story from Seven Shiny Stories.

    \n

    The single most effective thing you can do when seeking luminosity is to learn to correlate your ABC's, collecting data about how three interrelated items interact and appear together or separately.

    A stands for \"affect\".  Affect is how you feel and what's on your mind.  It can be far more complicated than \"enh, I'm fine\" or \"today I'm sad\".  You have room for plenty of simultaneous emotions, and different ones can be directed at different things - being on a generally even keel about two different things isn't the same as being nervous about one and cheerful about the other, and neither state is the same as being entirely focused on one subject that thrills you to pieces.  If you're nervous about your performance evaluation but tickled pink that you just bought a shiny new consumer good and looking forward to visiting your cousin next week yet irritated that you just stubbed your toe, all while being amused by the funny song on the radio, that's this.  For the sake of the alphabet, I'm lumping in less emotionally laden cognition here, too - what thoughts occur to you, what chains of reasoning you follow, what parts of the environment catch your attention.

    B stands for \"behavior\".  Behavior here means what you actually do.  Include as a dramatically lower-weighted category those things that you fully intended to do, and actually moved to do, but were then prevented from without from doing, or changed your mind about due to new, unanticipated information.  This is critical.  Fleeting designs and intentions cross our minds continually, and if you don't firmly and definitively place your evidential weight on the things that ultimately result in action, you will get subconsciously cherry-picked subsets of those incomplete plan-wisps.  This is particularly problematic because weaker intentions will be dissuaded by minor environmental complications at a much higher rate.  Don't worry overmuch about \"real\" plans that this filtering process discards.  You're trying to know yourself in toto, not yourself at your best time-slices when you valiantly meant to do good thing X and were buffetted by circumstance: if those dismissed real plans represent typical dispositions you have, then they'll have their share of the cohort of actual behavior.  Trust the law of averages.

    C stands for \"circumstance\".  This is what's going on around you (what time is it?  what's going on in your life now and recently and in the near future - major events, minor upheavals, plans for later, what people say to you?  where are you: is it warm, cold, bright, dim, windy, calm, quiet, noisy, aromatic, odorless, featureless, busy, colorful, drab, natural, artificial, pretty, ugly, spacious, cozy, damp, dry, deserted, crowded, formal, informal, familiar, new, cluttered, or tidy?).  It also covers what you're doing and things inside you that are generally conceptualized as merely physical (are you exhausted, jetlagged, drugged, thirsty, hungry, sore, ill, drunk, energetic, itchy, limber, wired, shivering?  are you draped over a recliner, hiding in a cellar, hangliding or dancing or hiking or drumming or hoeing or diving?)  Circumstances are a bit easier to observe than affect and behavior.  If you have trouble telling where you are and what you're up to, your first priority shouldn't be luminosity.  And while we often have some trouble distinguishing between various physical ailments, there are strong pressures on our species to be able to tell when we're hungry or in pain.  Don't neglect circumstance when performing correlative exercises just because it doesn't seem as \"the contents of your skull\"-y.  SAD should be evidence enough that our environments can profoundly influence our feelings.  And wouldn't it be weird, after all, if you felt and acted just the same while ballroom dancing, and while setting the timer on your microwave oven to reheat soup, and while crouching on the floor after having been taken hostage at the bank?

    All of these things are interdependent:

    \n\n


    So don't just correlate how they appear together: also note cause and effect relationships.  Until you've developed enough luminosity to detect these things directly, you may have to fall back on a little post-hoc guesswork for connections more complicated than \"I was hungry and thinking about cheese, so then I ate some cheese\".  Additionally, take note of any interesting absences.  If something generally considered sad has happened to you, and you can detect no sadness in your affect or telltale physical side effects, that's highly relevant data.

    These correlations will form the building blocks of your first pass of model refinement, proceeding from the priors you extracted from external sources.

    " } }, { "_id": "QDJeRcGgyZNcskr7o", "title": "\"Life Experience\" as a Conversation-Halter", "pageUrl": "https://www.lesswrong.com/posts/QDJeRcGgyZNcskr7o/life-experience-as-a-conversation-halter", "postedAt": "2010-03-18T19:39:40.860Z", "baseScore": 10, "voteCount": 28, "commentCount": 65, "url": null, "contents": { "documentId": "QDJeRcGgyZNcskr7o", "html": "

    Sometimes in an argument, an older opponent might claim that perhaps as I grow older, my opinions will change, or that I'll come around on the topic.  Implicit in this claim is the assumption that age or quantity of experience is a proxy for legitimate authority.  Such \"life experience\" is necessary for an informed rational worldview, but in and of itself it is not sufficient.

    \n

    The claim that more \"life experience\" will completely reverse an opinion indicates that the person making such a claim believes that opinions from others are based primarily on accumulating anecdotes, perhaps derived from extensive availability bias.  It actually is a pretty decent assumption that other people aren't Bayesian, because for the most part, they aren't.  Many can confirm this, including Haidt, Kahneman, and Tversky.

    When an opponent appeals to more \"life experience,\" it's a last resort, and it's a conversation halter.  This tactic is used when an opponent is cornered.  The claim is nearly an outright acknowledgment of moving to exit the realm of rational debate.  Why stick to rational discourse when you can shift to trading anecdotes?  It levels the playing field, because anecdotes, while Bayesian evidence, are easily abused, especially for complex moral, social, and political claims.  As rhetoric, this is frustratingly effective, but it's logically rude.

    \n

    Although it might be rude and rhetorically weak, it would be authoritatively appropriate for a Bayesian to be condescending to a non-Bayesian in an argument.  Conversely, it can be downright maddening for a non-Bayesian to be condescending to a Bayesian, because the non-Bayesian lacks the epistemological authority to warrant such condescension.  E.T. Jaynes wrote in Probability Theory about the arrogance of the uninformed, \"The semiliterate on the next bar stool will tell you with absolute, arrogant assurance just how to solve the world's problems; while the scholar who has spent a lifetime studying their causes is not at all sure how to do this.\"

    " } }, { "_id": "Skc9JZLy9HAzifsXu", "title": " Sequential Organization of Thinking: \"Six Thinking Hats\"", "pageUrl": "https://www.lesswrong.com/posts/Skc9JZLy9HAzifsXu/sequential-organization-of-thinking-six-thinking-hats", "postedAt": "2010-03-18T05:22:48.488Z", "baseScore": 30, "voteCount": 30, "commentCount": 14, "url": null, "contents": { "documentId": "Skc9JZLy9HAzifsXu", "html": "

    Many people move chaotically from thought to thought without explicit structure. Inappropriate structuring may leave blind spots or cause the gears of thought to grind to a halt, but the advantages of appropriate structuring are immense:

    \n

    Correct thought structuring ensures that you examine all relevant facets of an issue, idea, or fact.

    \n\n


    To illustrate thought structuring, I use Edward de Bono's \"six thinking hats\" mnemonic as an example.  With this method you metaphorically put on various colored \"hats\" (perspectives) and switch \"hats\" depending on the task. I will use the somewhat controversial issue of cryonics as my running example.1

    \n


    Gather the inputs:
    White hat - Facts and information
    This is the perspective where you focus on gathering all the information relevant to the situation by deducing facts, remembering, asking colleagues, reviewing the literature, and conducting experiments.
    Concrete declarative facts:

    \n\n

    Red hat - Feelings and emotions
    This is the perspective where you think about or convey vague intuitions. These are rules of thumb, abstracted probabilities, impressions, and things in your procedural understanding. This is also the time to focus on anything that might be interfering with your objectivity.
    Intuitions and vague inputs:

    \n\n

    Invention and problem solving:
    Green hat - New ideas
    Going into this perspective you have gathered the evidence and intuitions. Now you focus on using these to solve the problem or invent new approaches. At this point the invented ideas do not have to be very good; your ideas are criticised and evaluated with the other hats.
    New ideas:

    \n\n

    Weigh the evidence:
    Black hat - Critical judgment
    Here you specialize, looking for the flaws in the argument, design, or concept. If you are the originator of a concept or otherwise have positive affect around one, the habit of using this perspective ensures that you look for flaws.
    Flaws:

    \n\n

    Yellow hat - Positive aspects
    With this perspective, you look for the arguments for a position or come up with various uses you can put something to. If you are critical of a concept, this step ensures you look at its positive aspects.
    Strengths and additional purposes:

    \n\n

    Monitoring, directing, and deciding:
    Blue hat - The big picture
    This is the perspective where you figure out how valuable the various options are, consider opportunity costs, and choose. Here you also monitor your thoughts and interrupt the flow if something unexpected occurs internally or externally.
    Monitor and choose:

    \n\n

    As the example shows, Edward de Bono's six thinking hats method is useful for structuring thought, but it is admittedly limited:

    \n\n

    Nevertheless, I find a kind of useful simplicity and beauty in the method (or maybe I just love colors...).
    What do you think of the method? Can you suggest other ways of \"structuring thought?\"

    \n

    1. Disclaimer: I am pro-cryonics, but am using it solely as an example and do not intend to be comprehensive or have the feelings and analysis particularly resemble my own.

    " } }, { "_id": "Y6TpEEKZq6HXfhWxd", "title": "Let There Be Light", "pageUrl": "https://www.lesswrong.com/posts/Y6TpEEKZq6HXfhWxd/let-there-be-light", "postedAt": "2010-03-17T19:35:59.046Z", "baseScore": 60, "voteCount": 59, "commentCount": 101, "url": null, "contents": { "documentId": "Y6TpEEKZq6HXfhWxd", "html": "

    Sequence index: Living Luminously
    Previously in sequence: You Are Likely To Be Eaten By A Grue
    Next in sequence: The ABC's of Luminosity

    \n

    You can start from psych studies, personality tests, and feedback from people you know when you're learning about yourself.  Then you can throw out the stuff that sounds off, keep what sounds good, and move on.

    \n

    You may find your understanding of this post significantly improved if you read the first story from Seven Shiny Stories.

    \n

    Where do you get your priors, when you start modeling yourself seriously instead of doing it by halfhearted intuition?

    Well, one thing's for sure: not with the caliber of introspection you're most likely starting with.  If you've spent any time on this site at all, you know people are riddled with biases and mechanisms for self-deception that systematically confound us about who we are.  (\"I'm splendid and brilliant!  The last five hundred times I did non-splendid non-brilliant things were outrageous flukes!\")  Humans suck at most things, and obeying the edict \"Know thyself!\" is not a special case.

    The outside view has gotten a bit of a bad rap, but I'm going to defend it - as a jumping-off point, anyway - when I fill our luminosity toolbox.  There's a major body of literature designed to figure out just what the hell happens inside our skulls: it's called psychology, and they have a rather impressive track record.  For instance, learning about heuristics and biases may let you detect them in action in yourself.  I can often tell when I'm about to be subject to the bystander effect (\"There is someone sitting in the middle of the road.  Should I call 911?  I mean, she's sitting up and everything and there are non-alarmed people looking at her - but gosh, I probably don't look alarmed either...\"), have made some progress in reducing the extent to which I generalize from one example (\"How are you not all driven insane by the spatters of oil all over the stove?!\"), and am suspicious when I think I might be above average in some way and have no hard data to back it up (\"Now I can be confident that I am in fact good at this sort of problem: I answered all of these questions and most people can't, according to someone who has no motivation to lie!\").  Now, even if you are a standard psych study subject, of course you aren't going to align with every psychological finding ever.  They don't even align perfectly with each other.  But - controlling for some huge, obvious factors, like if you have a mental illness - it's a good place to start.

    For narrowing things down beyond what's been turned up as typical human reactions to things, you can try personality tests like Myers-Briggs or Big Five.  These are not fantastically reliable sources.  However, some of them have some ability to track with some parts of reality.  Accordingly, saturate with all the test data you can stand.  Filter it for what sounds right (\"gosh, I guess I do tend to be rather bothered by things out of place in my environment, compared to others\") and dump the rest (\"huh?  I'm not open to experience at all!  I won't even try escargot!\") - these are rough, first-approximation priors, not posteriors you should actually act on, and you can afford a clumsy process this early in the game.  While you're at it, give some thought to your intelligence types, categorize your love language1 - anything that carves up person-space and puts you in a bit of it.

    Additionally, if you have honest friends or relatives, you can ask for their help.  Note that even honest ones will probably have a rosy picture of you: they can stand to be around you, so they probably aren't paying excruciatingly close attention to your flaws, and may exaggerate the importance of your virtues relative to a neutral observer's hypothetical opinion.  They also aren't around you all the time, which will constrict the circumstances in which their model is tested and skew it towards whatever influence their own presence has on you.  Their outside perspective is, however, still valuable.

    \n

    (Tips on getting friends/family to provide feedback: I find musing aloud about myself in an obviously tentative manner to be fairly useful at eliciting some domain-specific input. Some of my friends I can ask point-blank, although it helps to ask about specific situations (\"Do you think I'm just tired?\" \"Was I over the line back there?\") rather than general traits that feel more judgmental to discuss (\"Am I a jerk?\" \"Do I use people?\"). When you communicate in text and keep logs, you can send people pastes of entire conversations (when this is permissible to your original interlocutor) and ask what your consultant thinks of that. If you do not remember some event, or are willing to pretend not to remember the event, then you can get whoever was with you at the time to recount it from their perspective - this process will automatically paint what you did during the event in the light of outside scrutiny.)

    \n

    If during your prior-hunting something turns up that seems wrong to you, whether it's a whole test result or some specific supposed feature of people in a group that seems otherwise generally fitting, that's great!  Now you can rule something out.  Think: what makes the model wrong?  When have you done something that falsified it?  (\"That one time last week\" is more promising than \"back in eighty-nine I think it might have been January\".)  What are the smallest things you could change to make it sit right?  (\"Change the word \"rapid\" to \"meticulous\" and that's me to a tee!\")  If it helps, take in the information you gather in small chunks.  That way you can inspect them one at a time, instead of only holistically accepting or rejecting what a given test tells you.

    If something sounds right to you, that's also great!  Ask: what predictions does this idea let you make about your cognition and behavior?  (\"Should you happen to meet a tall, dark stranger, you will make rapid assumptions about his character based on his body language.\")  How could you test them, and refine the model?  (Where do the tall, dark strangers hang out?)  If you've behaved in ways inconsistent with this model in the past, what exceptions to the rule does that imply and how can you most concisely, Occam-esque-ly summarize them?  (\"That one tall, dark stranger was wearing a very cool t-shirt which occluded posture data.\")

    \n

    Nota bene: you may be tempted to throw out things because they sound bad (\"I can't be a narcissist!  That wouldn't be in keeping with the story I tell about myself!\"), rather than because they sound wrong, and to keep things because they sound good (\"ooh!  I'm funny and smart!\"), rather than because they sound right.  Recite the Litany of Tarski a few times, if that helps: if you have a trait, you desire to believe that you have the trait.  If you do not have a trait, you desire to believe that you do not have the trait.  May you not become attached to beliefs you may not want.  If you have bad features, knowing about them won't make them worse - and might let you fix, work around, or mitigate them.  If you lack good features, deluding yourself about them won't make them appear - and might cost you opportunities to develop them for real.  If you can't answer the questions \"when have you done something that falsified this model?\" or \"list some examples of times when you've behaved in accordance with this model\" - second guess.  Try again.  Think harder.  You are not guaranteed to be right, and being right should be the aim here.

    \n

     

    \n

    1It looks cheesy, but I've found it remarkably useful as a first-pass approximation of how to deal with people when I've gotten them to answer the question.

    " } }, { "_id": "hbrADEQrAbPkwis95", "title": "Disconnect between Stated/Implemented Preferences", "pageUrl": "https://www.lesswrong.com/posts/hbrADEQrAbPkwis95/disconnect-between-stated-implemented-preferences", "postedAt": "2010-03-17T02:26:45.596Z", "baseScore": -5, "voteCount": 31, "commentCount": 59, "url": null, "contents": { "documentId": "hbrADEQrAbPkwis95", "html": "

    Currently, the comment for which I've received the most positive karma by a factor of four is a joke about institutionalized ass-rape. A secondhand joke, effectively a quote with no source cited. Furthermore, the comment had, at best, tangential relevance to the subject of discussion. If anyone were to provide a detailed explanation of why they voted as they did, I predict that I would be appreciative.

    \n

    Based on this evidence, which priors need to be adjusted? Discuss.

    " } }, { "_id": "cogqRFuqXtYuJyTgf", "title": "You Are Likely To Be Eaten By A Grue", "pageUrl": "https://www.lesswrong.com/posts/cogqRFuqXtYuJyTgf/you-are-likely-to-be-eaten-by-a-grue-0", "postedAt": "2010-03-17T01:18:15.784Z", "baseScore": 2, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "cogqRFuqXtYuJyTgf", "html": "

    This was a mistaken double post.  The real post is here.  Please do not comment on this post.

    " } }, { "_id": "r6diXRLvkZBLpSoTf", "title": "You Are Likely To Be Eaten By A Grue", "pageUrl": "https://www.lesswrong.com/posts/r6diXRLvkZBLpSoTf/you-are-likely-to-be-eaten-by-a-grue", "postedAt": "2010-03-17T01:18:13.672Z", "baseScore": 78, "voteCount": 75, "commentCount": 38, "url": null, "contents": { "documentId": "r6diXRLvkZBLpSoTf", "html": "

    Previously in sequence/sequence index: Living Luminously
    Next in sequence: Let There Be Light

    \n

    Luminosity is fun, useful to others, and important in self-improvement.  You should learn about it with this sequence.

    \n

    Luminosity?  Pah!  Who needs it?

    It's a legitimate question.  The typical human gets through life with astonishingly little introspection, much less careful, accurate introspection.  Our models of ourselves are sometimes even worse than our models of each other - we have more data, but also more biases loading up our reflection with noise.  Most of the time, most people act on their emotions and beliefs directly, without the interposition of self-aware deliberation.  And this doesn't usually seem to get anyone maimed or killed - when was the last time a gravestone read \"Here Lies Our Dear Taylor, Who Might Be Alive Today With More Internal Clarity About The Nature Of Memory Retrieval\"?  Nonsense.  If Taylor needs to remember something, it'll present itself, or not, and if there's a chronic problem with the latter then Taylor can export memories to the environment.  Figuring out how the memories are stored in the first place and tweaking that is not high on the to-do list.

    Still, I think it's worth investing considerable time and effort into improving your luminosity.  I submit three reasons why this is so.

    First, you are a fascinating creature.  It's just plain fun and rewarding to delve into your own mind.  People in general are among the most complex, intriguing things in the world.  You're no less so.  You have lived a fair number of observer-moments.  Starting with a native architecture that is pretty special all by itself, you've accumulated a complex set of filters by which you interpret your input - remembered past, experienced present, and anticipated future.  You like things; you want things; you believe things; you expect things; you feel things.  There's a lot of stuff rolled up and tucked into the fissures of your brain.  Wouldn't you like to know what it is?  Particularly because it's you.  Many people find themselves to be their favorite topics.  Are you an exception?  (There's one way to find out...)

    Second, an accurate model of yourself can help you help others deal with you in the best possible way.  Right now, they're probably using kludgey agglomerations of self-projection, stereotype, and automatically generated guesses that they may not bother to update as they learn more about you.  I'm assuming you don't surround yourself with hostile people who would use accurate data about you to hurt and manipulate you, but if you do, certainly be judicious with whatever information your quest for luminosity supplies.  As for everyone else, their having a better model of you will avoid a lot of headaches on everyone's parts.  I'll present myself as an example: I hate surprises.  Knowing this, and being able to tell a complete and credible story about how this works, I can explain to people who might wish to exchange gifts why they should not spring unknown wrapped items on me, and avoid that source of irritation.  Most of the people around me choose not to take actions that they know will irritate me; but without a detailed explanation of exactly how my preferences are uncommon, they'll all too easily revert to their base model of a generic person.

    Third, and most germane to the remaining posts in this sequence: with a better picture of who you are and what your brain is up to, you can find the best low-hanging fruit in terms of hacks to change yourself.  If you keep going from point A to point Z, but know nothing about the route in between, then the only way you can avoid a disliked Z is to try to come to a screeching halt right before it happens.  If you could monitor the process from the start, and determine what pattern your mind follows along the alphabet, you might find that you can easily intervene at G or Q, and never have to deal with Z again.  Similarly, if you try to go from alpha to omega but tend not to wind up at omega, how are you ever going to determine where your obstructions lie unless you pay attention to something other than the bare fact of non-omega?  There could be some trivial omicron-related problem that you'd fix in a heartbeat if only you knew it was getting in the way.  Additionally, your faulty models of yourself are already changing you through such miraculous means as cognitive dissonance.  Unless you find out how it's doing that, you lose the chance to monitor and control the process.

    An analogy: You're waiting to be picked up at the airport.  The designated time comes and goes, and you're sitting by the baggage claim with your suitcases at your feet, your eyes on your watch, and a frown on your face.  The person was supposed to pick you up at the airport, and isn't there!  A clear failure has occurred!  But if you phone the person and start screaming \"The airport, you fool!  I'm at the airport!  Why aren't you?\" then this will tend not to improve things unless the person never left in the first place out of forgetfulness.  If they're stuck in traffic, or were sent out of their way by road construction, or have gotten hopelessly lost, or have been identified by the jackbooted thugs that keep watch at the airport parking lot as a terrorist, reiterating that you had this particular goal in mind won't help.  And unless you find out what is keeping them, you can't help.  You have to know where they are to tell them what detours to take to avoid rush hour; you have to know what diversions were introduced to tell them how to rejoin their planned route; you have to know what landmarks they can see to know where they've gone missing to; you have to know whether to go make Bambi eyes at the security guards and plead misunderstanding.  Without rather specific, sensitive data about what's gone wrong, you can't make it right.

    In the next posts of this sequence, I'm going to illustrate some methods that have helped me learn more about myself and change what I don't like.  With luck, they'll assist you on the project that I've just attempted to convince you to want to undertake.

    " } }, { "_id": "9o3Cjjem7AbmmZfBs", "title": "Living Luminously", "pageUrl": "https://www.lesswrong.com/posts/9o3Cjjem7AbmmZfBs/living-luminously", "postedAt": "2010-03-17T01:17:47.086Z", "baseScore": 97, "voteCount": 91, "commentCount": 28, "url": null, "contents": { "documentId": "9o3Cjjem7AbmmZfBs", "html": "

    The following posts may be useful background material: Sorting Out Sticky Brains; Mental Crystallography; Generalizing From One Example

    I took the word "luminosity" from "Knowledge and its Limits" by Timothy Williamson, although I'm using it in a different sense than he did. (He referred to "being in a position to know" rather than actually knowing, and in his definition, he doesn't quite restrict himself to mental states and events.) The original ordinary-language sense of "luminous" means "emitting light, especially self-generated light; easily comprehended; clear", which should put the titles into context.

    Luminosity, as I'll use the term, is self-awareness. A luminous mental state is one that you have and know that you have. It could be an emotion, a belief or alief, a disposition, a quale, a memory - anything that might happen or be stored in your brain. What's going on in your head? What you come up with when you ponder that question - assuming, nontrivially, that you are accurate - is what's luminous to you. Perhaps surprisingly, it's hard for a lot of people to tell. Even if they can identify the occurrence of individual mental events, they have tremendous difficulty modeling their cognition over time, explaining why it unfolds as it does, or observing ways in which it's changed. With sufficient luminosity, you can inspect your own experiences, opinions, and stored thoughts. You can watch them interact, and discern patterns in how they do that. This lets you predict what you'll think - and in turn, what you'll do - in the future under various possible circumstances.

    I've made it a project to increase my luminosity as much as possible over the past several years. While I am not (yet) perfectly luminous, I have already realized considerable improvements in subsidiary skills like managing my mood, hacking into some of the systems that cause akrasia and other non-endorsed behavior, and simply being less confused about why I do and feel the things I do and feel. I have some reason to believe that I am substantially more luminous than average, because I can ask people what seem to me to be perfectly easy questions about what they're thinking and find them unable to answer. Meanwhile, I'm not trusting my mere impression that I'm generally right when I come to conclusions about myself. My models of myself, after I stop tweaking and toying with them and decide they're probably about right, are borne out a majority of the time by my ongoing behavior. Typically, they'll also match what other people conclude about me, at least on some level.

    In this sequence, I hope to share some of the techniques for improving luminosity that I've used. I'm optimistic that at least some of them will be useful to at least some people. However, I may be a walking, talking "results not typical". My prior attempts at improving luminosity in others consist of me asking individually-designed questions in real time, and that's gone fairly well; it remains to be seen if I can distill the basic idea into a format that's generally accessible.

    I've divided up the sequence into eight posts, not including this one, which serves as introduction and index. (I'll update the titles in the list below with links as each post goes up.)

    I have already written all of the posts in this sequence, although I may make edits to later ones in response to feedback on earlier ones, and it's not impossible that someone will ask me something that seems to indicate I should write an additional post. I will dole them out at a pace that responds to community feedback.

    " } }, { "_id": "ansPTzk5vir5puoAS", "title": "Subjective Anticipation and Death", "pageUrl": "https://www.lesswrong.com/posts/ansPTzk5vir5puoAS/subjective-anticipation-and-death", "postedAt": "2010-03-17T01:14:24.994Z", "baseScore": 12, "voteCount": 22, "commentCount": 31, "url": null, "contents": { "documentId": "ansPTzk5vir5puoAS", "html": "

    tldr; It is incoherent to talk about a \"you\" which stretches through time.  Instead, we should think of a series of similar mind-moments.

    \n

    Once upon a time, there was a little boy, who answered to the name Lucas Sloan and was scared of dying. I too answer to the name Lucas Sloan, and I remember being afraid of dying. Little Lucas wasn't scared of the present state of affairs, but it is fairly obvious that Little Lucas isn't around anymore. By any practical definition, Little Lucas is dead, he only exists as a memory in my mind and more indirectly in the minds of others. Little Lucas did not care that other people remembered him, he cared that he did not die. So what is this death thing, if Little Lucas was scared of it, but was not scared of the present situation?

    \n

    \n

    I would now like to introduce the term mind-moment. I'm not sure quite how to define it: it may have to mean a single Planck-time snapshot of a mind, or it might be as much as a couple weeks. I doubt that what I mean by this is anywhere near the upper bound I just gave; a more likely upper bound might be about a second - about the time it takes to notice something and realize something is going on.

    \n

    Confusion about the exact definition of mind-moment aside, I think it is obvious that Little Lucas and I are separate mind-moments. And the fact of the matter is that the mind-moment that was Little Lucas no longer exists; that mind-moment is definitely dead, gone, kaput. In fact, if I'm right about what I mean by mind-moment, many mind-moments have ceased to exist since I started writing this. And frankly, both of those things are, in fact, good. What would the point be of constantly re-running the same mind-moment over and over again? I certainly don't want to be caught in a time loop till the end of time, even if I couldn't tell that I was – that would be as much a waste of the future as converting the stars into orgasmium. I want to experience new things, I want to grow and learn. But the problem is that my use of the word I is incoherent - the "me" that experiences those new things, that knows more than I do, is not me. My time is passing; soon enough, I will be the Little Lucas remembered only because he had some interesting ideas.

    \n

    It's not that I don't want to die, it's that I want there to, in the future, exist happy, fulfilled mind-moments that remember being me. This model neatly solves the problem of the anthropic trilemma: you shouldn't anticipate being one of the winners or losers of the lottery; you should anticipate that at a certain point there will be x mind-moments who won or lost the lottery and remember being you. However, it does make the morality of death a lot more complicated. We shouldn't talk about killing as the action that breaks the status quo; we should instead say that each mind-moment needs to be created, and that mind-moments, once created, have a right to the creation of their successors, each of whom retains that right.

    \n

    This would all be quite simple if each mind-moment had one and only one possible successor. However, this is not the case. In writing this sentence, I could use the verb \"use\" or \"write\" and both choices require a separate mind-moment. Which mind-moment should be created? Are we obligated to create both? What if there are a million possible mind-moment successors? Are we obligated to create all of them? I don't think so. I still believe that the creation of minds is an active choice, so we shouldn't create minds without cause.  Recasting life as a series of decisions to create mind-moments explains the attractiveness of quantum suicide: you aren't killing yourself; you are refusing to create certain mind-moments.

    \n

    I still have some questions about life and death.  For example, how quickly do we have to create these mind-moments?  It seems wrong to delay the creation of a subsequent mind-moment until the end of the universe, but as long as the next mind-moment hasn't been created it isn't entitled to its successor.  Also, are we obligated to create only the best/happiest/most fulfilled successor mind-moments?  It might seem so in a classical universe, but in many worlds, wouldn't doing so result in horrendous duplication of effort?

    " } }, { "_id": "mbpLoshnrtcwi8tBT", "title": "Overcoming the mind-killer", "pageUrl": "https://www.lesswrong.com/posts/mbpLoshnrtcwi8tBT/overcoming-the-mind-killer", "postedAt": "2010-03-17T00:56:01.710Z", "baseScore": 14, "voteCount": 15, "commentCount": 128, "url": null, "contents": { "documentId": "mbpLoshnrtcwi8tBT", "html": "

    I've been asked to start a thread in order to continue a debate I started in the comments of an otherwise-unrelated post. I started to write a post on that topic, found myself introducing my work by way of explanation, and then realized that this was a sub-topic all its own which is of substantial relevance to at least one of the replies to my comments in that post -- and a much better topic for a first-ever post/thread.

    \n

    So I'm going to write that introductory post first, and then start another thread specifically on the topic under debate.

    \n

    I run issuepedia.org, a wiki site largely dedicated to the rational analysis of politics.

    \n

    As part of that analysis, it covers areas such as logical fallacies (and the larger domain of what I call \"rhetorical deceptions\" and which LessWrong calls \"the dark arts\"), history, people, organizations, and any other topics necessary to understand an issue. Coverage of each area generally includes collecting sources (such as news articles, editorials, and blog posts), essential details to provide a quick overview, and usually an attempt to draw some kind of conclusion1 about the topic's ethical significance based, as much as possible, on the sources collected. (Readers are, of course, free to use the wiki format to offer counterarguments and additional facts/sources.)

    \n

    I started Issuepedia in 2005, largely in an attempt to understand how Bush could possibly have been re-elected (am I deluded, or is half the country deluded? if the latter, how did this happen?). Most of the content is my writing, as I am better at writing than at community-building, but it is all freely copyable under a CC license. I do not receive any money for my work on the site; it does accept donations, but this fact is not heavily advertised and so far there have been no donors. It does not display advertisements, nor have I advertised it (other than linking to it in contexts where its content seems relevant, such as comments on blog entries). I am considering doing the latter at some point when I have sufficient time to give the project the focus it will need in order to grow successfully.

    \n

    Rationality and Politics

    \n

    My main hypothesis2 in starting Issuepedia is that it is, in fact, possible to be rational about politics, to overcome its \"mind-killing\" qualities -- if given sufficient \"thinking room\" in which to record and work through all the relevant (and often mind-numbing) details involved in most political issues in a public venue where you can \"show your work\" and others may point out any errors and omissions. I'm trying to use wiki technology as an intelligence-enhancing, bias-overcoming device.

    \n

    Politics contains issues within issues within issues. Arriving at a rational conclusion about any given issue will often depend on being able to draw reasonable conclusions about a handful of other issues, each of which may have other sub-issues affecting it, and so on.

    \n

    Keeping track of all of these dependencies, however, is somewhat beyond most people's native emotional intuition and working memory capacity (including mine). Even when we consciously try to overcome built-in biases (such as allegiance to our perceived \"tribes\", unexamined beliefs acquired in childhood, etc.), our hind-brains want to take the fine, complex grain of reality and turn it into a simple good-vs.-bad or us-vs.-them moral map drawn with a blunt magic marker -- something we can easily remember and apply.

    \n

    On the other hand, many issues really do seem to boil down to such a simple narrative, something best stated in quite stark terms. Individuals who are making an effort to be measured and rational often seem to reject out of hand the possibility that such simple, clearcut conclusions could possibly be valid, leading to the opposite bias -- a sort of systemic \"fallacy of moderation\". This can cause popular acquiescence to beliefs that are essentially wrong, such as the claim that \"the Democrats do it too\" when pointing out evils committed by the latest generation of Republicans. (Yes, they do it too -- but much less often, and much less egregiously overall.)

    \n

    I propose that there must exist some set of factual information upon which each question ultimately rests, if followed far enough \"down\". In other words, if you exhaustively and recursively map out the sub-issues for each issue, you must eventually arrive at an issue which can be resolved by reference to facts known or knowable. If no such point can be reached, then the issue cannot possibly have any real-world significance -- because if anyone is in any way affected by the issue, then there is the fact of that dependency which must somehow tie in; the trick is figuring out the actual nature of that dependency.

    \n

    My approach in issuepedia is to break each major issue down into sub-issues, each of which has its own page for collecting information and analysis on that particular issue, then do the same to each of those issues until each sub-branch (or \"rootlet\", if you prefer to stay in-metaphor) has encountered the \"bedrock\" of questions which can be determined factually. Once the \"bedrock\" questions have been answered, the issues which rest upon those questions can be resolved, and so on.
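
    To make the recursion concrete, here is a toy Python sketch of the tree structure just described.  It is purely my own illustration (Issuepedia is a wiki, not this code), it uses the simplifying rule that an issue resolves affirmatively only when all of its sub-issues do, and the issue names in the example are made up.

        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class Issue:
            # A question with either a factual answer (bedrock) or dependent sub-issues.
            question: str
            bedrock_answer: Optional[bool] = None          # known fact, if this is a leaf
            sub_issues: List['Issue'] = field(default_factory=list)

        def resolve(issue: Issue) -> Optional[bool]:
            # Resolve bottom-up: bedrock facts propagate to the issues resting on them.
            # Toy rule: a non-bedrock issue holds only if every sub-issue resolves to True.
            if issue.bedrock_answer is not None:
                return issue.bedrock_answer
            if not issue.sub_issues:
                return None                                # not yet broken down to bedrock
            answers = [resolve(sub) for sub in issue.sub_issues]
            if any(a is None for a in answers):
                return None                                # some branch hasn't hit bedrock yet
            return all(answers)

        # Hypothetical example, invented purely for illustration.
        top = Issue('Is policy P a good idea?', sub_issues=[
            Issue('Does P address a real problem?', bedrock_answer=True),
            Issue('Is P affordable?', sub_issues=[
                Issue('Do cost estimates fall within the budget?', bedrock_answer=True),
            ]),
        ])
        print(resolve(top))    # True, because every branch bottoms out in an answered fact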

    \n

    Documenting these connections, and the facts upon which they ultimately rest, ideally allows each reader to reconstruct the line of reasoning behind a given conclusion. If they disagree with that conclusion, then the facts and reasoning are available for them to figure out where the error lies -- and the wiki format makes it easy for them to post corrections; eventually, all rational parties should be able to reach agreement.

    \n

    I won't go so far as to claim that Issuepedia carries out this methodology with any degree of rigor, but it's what I'm working towards.

    \n

    I'm also aware that recent studies have shown that many people aren't influenced by facts once they've made up their minds (e.g. here). Since I have many times observed myself change my own opinion3 in response to facts, I am working with the hypothesis that this ability may be a cognitive attribute that some people have and others lack -- in much the same way that (apparently) only 32% of the adult population can reason abstractly. If it turns out that I do not, in fact, possess this ability to a satisfactory degree, then finding some way to improve it will become a priority.

    \n

    Methodology for Determination of Fact

    \n

    The question of how to methodically go about determining fact -- i.e. which assertions may be provisionally treated as true and which should be subjected to further inquiry -- came up in the earlier discussion, and is something which I think is ripe for formalization.

    \n

    flaws in the existing methodologies

    \n

    Up until now, society has depended upon a sort of organic, slow and inefficient but reasonably thorough vetting of new ideas by a number of processes. Those who are more familiar with this area of study should feel free to note any errors or omissions in my understanding, but here is my list of processes (which I'll call \"epistemic arenas\"4) by which we have traditionally arrived at societal truths:

    \n\n

    The flaws in each of these methodologies have become much clearer due to the ease and speed with which they may now be researched because of the Internet. A brief summary:

    \n

    The scientific process is clearly the best of the lot, but can be gamed and exploited: fake papers with sciencey-looking graphs and formulas (e.g. this) -- sometimes published in fake journals with sciencey-sounding names (e.g. JP&S) or backed by sciencey-sounding institutions (SEPP, SSRC, SPPI, OISM) -- are used to promote ideas which have been soundly defeated by the real scientific process.  Lists of hundreds of scientists dissenting from the prevailing view may not, in fact, contain any scientists actually qualified to make an authoritative statement (i.e. one deserving credence without having to hear the actual argument) on the subject, and only gain popular credibility because of the use of the word \"scientist\".

    \n

    On the other hand, legitimate ideas which for some reason are considered taboo sometimes cannot gain entry to this process, and must publish their findings by other means which can look very similar to the methods used to promote illegitimate ideas. How can we tell the difference? We can, but it takes time -- thus \"a lie can travel around the world while the truth is still putting on its boots\" by exploiting these weaknesses.

    \n

    Bringing the machinery of the scientific process to bear on any given issue is also quite expensive and time-consuming; it can be many years (or decades, in the case of significant new ideas) before enough evidence is gathered to overturn prior assumptions. This fact can be exploited in both directions: important but \"inconvenient\" new facts can be drowned in a sea of publicity arguing against them, and well-established facts can be taken out politically by denialist \"sniping\" (repeating well-refuted claims over and over again until more people are familiar with those claims than with the refutations thereof, leading to popular belief that the claims must be true).

    \n

    Also, because the public is generally unaware of how the scientific process functions, they are likely to give it far less weight than it deserves (when they correctly identify that a given conclusion truly is scientifically supported, anyway). For example, an attack commonly used by creationists against the theory of evolution by natural selection is that it is \"only a theory\". Such an argument is only convincing to someone lacking an understanding of the degree to which a hypothesis must withstand interrogation before it starts to be cited as a \"theory\" in scientific circles.

    \n

    It should be pretty obvious that government's epistemic process is flawed; nonetheless, many bad or outright false ideas become \"facts\" after being enshrined in law or upheld by court decisions. (I could discuss this at length if needed.)

    \n

    Social processing seems to do much better at spotting ethical transgressions (harm and fairness violations), but isolated social groups and communities are vulnerable to memetic infection by ideas which become self-reinforcing in the absence of good communication outside the group. Such ideas tend to survive by discouraging such communication and encouraging distrust of outside ideas (e.g. by labeling those outside the community as untrustworthy or tainted in some way), perpetuating the cycle.

    \n

    The mainstream media was, for many decades, the antidote to the problems in the other arenas. Independent newspapers would risk establishment disfavor in exchange for increased circulation -- and although publishing politically inconvenient truth is not the only way to do that, it was certainly one of them.

    \n

    Whether deliberately and conspiratorially, or simply through many different interests arriving at the same solutions to their problems (with the people who had the power to stop it looking the other way as the industry's lobbyists rewrote the laws to encourage and promote those common solutions), media consolidation has effectively taken the mainstream media out of the game as a voice of dissent.

    \n

    Issuepedia's methodology

    \n

    The basic idea behind Issuepedia's informal epistemic methodology is that truth -- at least on issues where there is no clear experiment which can be performed to resolve the matter -- is best determined by successive approximation from an initial guess, combined with encouragement of dissenting arguments.

    \n

    Someone makes a statement -- a \"claim\". Others respond, giving evidence and reasoning either supporting or contradicting the claim (counterpoint). Those who still agree with the initial statement then defend it from the counterpoints with further evidence and/or reasoning. If there are any counterpoints nobody has been able to reasonably contradict, then the claim fails; otherwise it stands.

    \n

    By keeping a record of the objections offered -- in a publicly-editable space, via previously unavailable technology (the internet) -- it becomes unnecessary to rehash the ensuing debate-branch if someone raises the same objection again. They may add new twigs, but once an argument has been answered, the answers will be there every time the same argument is raised. This is an essential tool for defeating denialism, which I define as the repeated re-use of already-defeated but otherwise persuasive arguments; to argue with a denialist, one need simply refer to the catalogue of arguments against their position, and reuse those arguments until the denialist comes up with a new one. This puts the burden on the denialists (finally!) and takes it off those who are sincerely trying to determine the nature of reality.

    \n

    This also makes it possible for large decisions involving many complex factors to be more accurately updated if knowledge of those factors changes significantly. One would never end up in a situation where one is asking \"why do we do things this way?\", much less \"why do we still do things this way, even though it hasn't made sense since X happened 20 years ago?\" because the chain of reasoning would be thoroughly documented.

    \n

    At present, the methodology has to be implemented by hand; I am working on software to automate the process.
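
    As a rough picture of what such automation might track, here is a minimal sketch (my speculation, not the actual Issuepedia software) of the claim/counterpoint rule described above: a claim stands unless at least one counterpoint has gone unanswered.  The example claim and counterpoint are hypothetical.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Counterpoint:
            text: str
            answers: List[str] = field(default_factory=list)    # rebuttals offered so far

        @dataclass
        class Claim:
            text: str
            counterpoints: List[Counterpoint] = field(default_factory=list)

        def stands(claim: Claim) -> bool:
            # A claim fails if any counterpoint is unanswered; otherwise it stands.
            # (Answered is a crude stand-in for reasonably contradicted.)
            return all(cp.answers for cp in claim.counterpoints)

        claim = Claim('Wikipedia is roughly as accurate as Britannica.')
        claim.counterpoints.append(Counterpoint('Anyone can edit it, so it must be unreliable.'))
        print(stands(claim))      # False: the counterpoint has not been answered
        claim.counterpoints[0].answers.append('The 2005 Nature comparison found similar error rates.')
        print(stands(claim))      # True: every counterpoint now has an answer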

    \n

    criticism of this methodology

    \n
    \n

    [dripgrind] Your standard of verification seems to be the Wikipedia standard - if you can find a \"mainstream\" source saying something, then you are happy to take it as fact (provided it fits your case).

    \n

    [woozle] I am \"happy to take it as fact\" until I find something contradictory. When that happens, I generally make note of both sources and look for more authoritative information. If you have a better methodology, I am open to suggestions. .. The \"Wikipedia standard\" seems to work pretty well, though -- didn't someone do a study comparing Wikipedia's accuracy with Encyclopedia Britannica's, and they came out about even?

    \n
    \n
    \n

    [dripgrind] So your standard of accepting something as evidence is \"a 'mainstream source' asserted it and I haven't seen someone contradict it\". That seems like you are setting the bar quite low. Especially because we have seen that [a specific claim woozle made] was quickly debunked (or at least, contradicted, which is what prompts you to abandon your belief and look for more authoritative information) by simple googling. Maybe you should, at minimum, try googling all your beliefs and seeing if there is some contradictory information out there.

    \n
    \n

    \"Setting the bar quite low\": yes, the initial bar for accepting a statement is low. This is by design, based on the idea of successive approximation of truth (as outlined above) and my secondary hypothesis \"that it is important for people to share their opinions on things, regardless of how much thought has been put into those opinions.\" (See note 2 below.)

    \n

    Certainly this methodology can lead to error if the size of the observing group is insufficiently large and active -- but it only takes one person saying \"wait, that's nonsense!\" to start the corrective process. I don't see that degree of responsiveness in any of the other epistemic arenas, and I don't believe it adds any new weaknesses -- except that there is no easy/quick way to gauge the reliability of a given assertion. That is a weakness which I plan to address via the structured debate tool (although I had not until now consciously realized that it was needed).

    \n

    If this explanation of the process still seems problematic, I'm quite happy to discuss it further; getting the process right is obviously critical.

    \n

     

    \n
    \n

    I will be posting next on the specific claims we were discussing, i.e. 9/11 \"conspiracy theories\". It will probably take several more days at least. Will update this post with a link when the second one is ready.

    \n

     

    \n

    \n
    Notes

    \n

    1. For example, the article about Intelligent Design concludes that \"As with creationism in its other forms, ID's main purpose was (and remains) to insinuate religion into public school education in the United States. It has no real arguments to offer, its support derives exclusively from Christian ideological protectionism and evangelism, and its proponents have no interest in revising their own beliefs in the light of evidence new to them. It is a form of denialism.\"

    \n

    The idea here is to \"call a spade a spade\": if something is morally wrong (or right), say so -- rather than giving the appearance of impartiality priority over reaching sound conclusions (e.g. \"he-said/she-said journalism\" in the media, or the \"NPOV\" requirement on Wikipedia). You may start out with a lot of wrong statements, but they will be statements which someone believed firmly enough to write -- and when they are refuted, everyone who believed them will have access to the refutation, doing much more towards reducing overall error than if you only recorded known truths.

    \n

    2. A secondary hypothesis is that it is important for people to share their opinions on things, regardless of how much thought has been put into those opinions. I have two reasons for this: (1) it helps overcome individual bias by pooling the opinions of many (in an arena where hopefully all priors and reasoning may eventually be discussed and resolved), and (2) there are many terrible things that happen but which we lack the immediate power to change; if we neither do nor say anything about them, others may reasonably assume that we consent to these things. Saying something at least helps prevent the belief that there is no dissent, which otherwise might be used to justify the status quo.

    \n

    3. I am hoping that this observation is not itself a self-delusion or some form of akrasia. Toward the end of confirming or ruling out akrasia, I have made a point of posting my positions on many topics, with an open offer to defend any of those positions against rational criticism. If anyone believes, after observing any of the debates I have been involved in, that I am refusing to change my position in response to facts which clearly indicate such a change should take place, then I will add a note to that effect under the position in question.

    \n

    4. These have a lot in common with what David Brin calls \"disputation arenas\", but they don't seem to be exactly the same thing.

    " } }, { "_id": "tPCLcRJKBPGxRPwJX", "title": "Omega's subcontracting to Alpha", "pageUrl": "https://www.lesswrong.com/posts/tPCLcRJKBPGxRPwJX/omega-s-subcontracting-to-alpha", "postedAt": "2010-03-16T18:52:42.058Z", "baseScore": 15, "voteCount": 14, "commentCount": 94, "url": null, "contents": { "documentId": "tPCLcRJKBPGxRPwJX", "html": "

    This is a variant built on Gary Drescher's xor problem for timeless decision theory.

    \n

    You get an envelope from your good friend Alpha, and are about to open it, when Omega appears in a puff of logic.

    \n

    Being completely trustworthy as usual (don't you just hate that?), he explains that Alpha flipped a coin (or looked at the parity of a sufficiently high digit of pi), to decide whether to put £1000 000 in your envelope, or put nothing.

    \n

    He, Omega, knows what Alpha decided, has also predicted your own actions, and you know these facts. He hands you a £10 note and says:

    \n

    \"(I predicted that you will refuse this £10) if and only if (there is £1000 000 in Alpha's envelope).\"

    \n

    What to do?

    \n

    EDIT: to clarify, Alpha will send you the envelope anyway, and Omega may choose to appear or not appear as he and his logic deem fit. Nor is Omega stating a mathematical theorem: that one can deduce from the first premise the truth of the second. He is using XNOR, but using 'if and only if' seems a more understandable formulation. You get to keep the envelope whatever happens, in case that wasn't clear.
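
    For concreteness, here is a toy enumeration (my own, not part of the original post) of the four possible combinations, assuming Omega's prediction matches what you actually do:

```python
from itertools import product

# Omega asserts: (he predicted you refuse the £10) XNOR (the envelope holds the million).
# Assuming Omega predicts accurately, 'predicted refusal' coincides with 'you refuse'.
for refuse, million in product([True, False], repeat=2):
    consistent = (refuse == million)  # XNOR: both true, or both false
    print(f'refuse={refuse!s:5}  million={million!s:5}  consistent with Omega: {consistent}')
```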

    " } }, { "_id": "P4dxapqPQAJP4i9fD", "title": "The problem of pseudofriendliness", "pageUrl": "https://www.lesswrong.com/posts/P4dxapqPQAJP4i9fD/the-problem-of-pseudofriendliness", "postedAt": "2010-03-15T14:17:00.528Z", "baseScore": -4, "voteCount": 10, "commentCount": 83, "url": null, "contents": { "documentId": "P4dxapqPQAJP4i9fD", "html": "

    The Friendly AI problem is complicated enough that it can be divided into a large number of subproblems. Two such subproblems could be:

    \n
      \n
    1. The problem of goal interpretation – This means that the human expectation of the results of an AI implementing a goal differs from the results that the AI actually works toward.
    2. The problem of innate drives (see Steve Omohundro’s ‘Basic Drives’ paper for more detail) – This is when either specific goals, or goal based reasoning in general, creates subgoals that humans do not anticipate.
    \n

    Let's call an AI which does not suffer from these problems a pseudofriendly AI. Would this be a useful type of AI to produce? Well, maybe or maybe not. But even if it fails to be useful in and of itself, solving the pseudofriendly AI problem may be a helpful step toward developing the mode of thinking needed to solve the Friendly AI problem.

    \n

    It's also possible that pseudofriendliness might be able to interact usefully with Eliezer's Coherent Extrapolated Volition (CEV - see here for more details). Eliezer has expressed CEV as follows:

    \n

    \n
    \n

    In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.

    \n
    \n

    However, an FAI is not to be given the CEV as its goal; rather, a superintelligence is to use our CEV to determine what goals an FAI should be given. What this means, though, is that there will be a point where a superintelligence exists that is not friendly. Could a pseudofriendly AI fill a gap here? Probably not - pseudofriendliness is not friendliness, nor should it be confused with it. However, it might be part of a solution that helps the CEV approach to be safely implemented.

    \n

    Why all this hassle though? We seem to have exchanged one very important problem for two less important ones. Well, part of the benefit of pseudofriendliness is that it seems like it should be easier to formalise. First, let us introduce the concept of an interpretation system.

    \n

    An interpretation system takes a partially specified world state (called a goal) and outputs a triple (Wx, Sx, Cx) where Wx is a partially specified world state, Sx is a set containing sets of subgoals, and Cx is a chosen set of subgoals.

    \n

    What does all of this mean? Well, the input could be thought of as a goal (stop the humans on that island from being drowned by rising sea waters) which is expressed as a partial world state (i.e. the world state where the humans on the island remain undrowned). The interpretation system then outputs a partially specified world state which may be the same or different. In humans, various aspects of our cognitive system would make us interpret this goal as a different world state. For example, we may implicitly not consider tying all of the humans to giant stakes so that they were above the level of the water but unable to move or act. So we would output one world state while an AI may well output another. This is enough to specify the problem of goal interpretation as follows:

    \n

    The problem of goal interpretation is as follows. An interpretation system Ix given a goal G outputs Wx. A second interpretation system Iy outputs Wy on receiving the goal. Systems Ix and Iy suffer from the goal interpretation problem if Wx ≠ Wy.

    \n

    The interpretation systems also output a set of goals to be used to bring about the world state and a set of goal sets which could alternatively be used to bring it about. Going back to our rising sea water example, even if Wx = Wy, these are only partially specified world states and hence do not determine whether every aspect of the AI's actions would produce outcomes that we want. This means that the subgoals used to get to a goal may still be undesirable. We can now specify the problem of innate drives as:

    \n

    System Ix suffers from the weak problem of innate drives from the perspective of system Iy if Cx ≠ Cy. It suffers from the strong problem of innate drives if Cx is not a member of Sx.
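
    A minimal sketch of these definitions as code, assuming hypothetical stand-in types for world states and subgoals (all names here are mine, purely illustrative, and follow the definitions exactly as written above):

```python
from typing import Callable, FrozenSet, Set, Tuple

# Hypothetical stand-ins: a partially specified world state and a subgoal are
# treated as opaque strings, purely for illustration.
WorldState = str
SubgoalSet = FrozenSet[str]
Interpretation = Tuple[WorldState, Set[SubgoalSet], SubgoalSet]   # (W, S, C)
InterpretationSystem = Callable[[WorldState], Interpretation]

def goal_interpretation_problem(ix: InterpretationSystem,
                                iy: InterpretationSystem,
                                goal: WorldState) -> bool:
    # Ix and Iy suffer from the goal interpretation problem on this goal
    # if they interpret it as different partial world states (Wx != Wy).
    (wx, _, _), (wy, _, _) = ix(goal), iy(goal)
    return wx != wy

def weak_innate_drives_problem(ix: InterpretationSystem,
                               iy: InterpretationSystem,
                               goal: WorldState) -> bool:
    # Weak problem, from Iy's perspective: the chosen subgoal sets differ.
    (_, _, cx), (_, _, cy) = ix(goal), iy(goal)
    return cx != cy

def strong_innate_drives_problem(ix: InterpretationSystem,
                                 goal: WorldState) -> bool:
    # Strong problem, as stated in the text: Cx is not a member of Sx.
    _, sx, cx = ix(goal)
    return cx not in sx
```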

    \n

    If these definitions stand up, then pseudofriendly AI is certainly more formally specified than Friendly AI. However, even if not, it seems plausible that it is likely to be easier to formalise pseudofriendliness than friendliness. If you buy that, then the questions remaining are:

    \n
      \n
    1. Do these definitions stand up, and if not, is it possible to formulate another version?
    2. What is the solution to the problems of pseudofriendliness?
    " } }, { "_id": "Jko7pt7MwwTBrfG3A", "title": "Undiscriminating Skepticism", "pageUrl": "https://www.lesswrong.com/posts/Jko7pt7MwwTBrfG3A/undiscriminating-skepticism", "postedAt": "2010-03-14T23:23:25.539Z", "baseScore": 137, "voteCount": 125, "commentCount": 1361, "url": null, "contents": { "documentId": "Jko7pt7MwwTBrfG3A", "html": "

    Tl;dr:  Since it can be cheap and easy to attack everything your tribe doesn't believe, you shouldn't trust the rationality of just anyone who slams astrology and creationism; these beliefs aren't just false, they're also non-tribal among educated audiences.  Test what happens when a \"skeptic\" argues for a non-tribal belief, or argues against a tribal belief, before you decide they're good general rationalists.  This post is intended to be reasonably accessible to outside audiences.

    \n

    I don't believe in UFOs.  I don't believe in astrology.  I don't believe in homeopathy.  I don't believe in creationism.  I don't believe there were explosives planted in the World Trade Center.  I don't believe in haunted houses.  I don't believe in perpetual motion machines.  I believe that all these beliefs are not only wrong but visibly insane.

    \n

    If you know nothing else about me but this, how much credit should you give me for general rationality?

    \n

    Certainly anyone who was skillful at adding up evidence, considering alternative explanations, and assessing prior probabilities, would end up disbelieving in all of these.

    \n

    But there would also be a simpler explanation for my views, a less rare factor that could explain it:  I could just be anti-non-mainstream.  I could be in the habit of hanging out in moderately educated circles, and know that astrology and homeopathy are not accepted beliefs of my tribe.  Or just perceptually recognize them, on a wordless level, as \"sounding weird\".  And I could mock anything that sounds weird and that my fellow tribesfolk don't believe, much as creationists who hang out with fellow creationists mock evolution for its ludicrous assertion that apes give birth to human beings.

    \n

    You can get cheap credit for rationality by mocking wrong beliefs that everyone in your social circle already believes to be wrong.  It wouldn't mean that I have any ability at all to notice a wrong belief that the people around me believe to be right, or vice versa - to further discriminate truth from falsity, beyond the fact that my social circle doesn't already believe in something.

    \n

    Back in the good old days, there was a simple test for this syndrome that would get quite a lot of mileage:  You could just ask me what I thought about God.  If I treated the idea with deeper respect than I treated astrology, holding it worthy of serious debate even if I said I disbelieved in it, then you knew that I was taking my cues from my social surroundings - that if the people around me treated a belief as high-prestige, high-status, I wouldn't start mocking it no matter what the state of evidence.

    \n

    On the other hand suppose I said without hesitation that my epistemic state on God was similar to my epistemic state on psychic powers: no positive evidence, lots of failed tests, highly unfavorable prior, and if you believe it under those circumstances then something is wrong with your mind.  Then you would have heard a bit of skepticism that might cost me something socially, and that not everyone around me would have endorsed, even in educated circles.  You would know it wasn't just a cheap way of picking up cheap points.

    \n

    Today the God-test no longer works, because some people realized that the taking-it-seriously aura of religion is in fact the main thing left which prevents people from noticing the epistemic awfulness; there has been a concerted and, I think, well-advised effort to mock religion and strip it of its respectability.  The upshot is that there are now quite wide social circles in which God is just another stupid belief that we all know we don't believe in, on the same list with astrology.  You could be dealing with an adept rationalist, or you could just be dealing with someone who reads Reddit.

    \n

    And of course I could easily go on to name some beliefs that others think are wrong and that I think are right, or vice versa, but would inevitably lose some of my audience at each step along the way - just as, a couple of decades ago, I would have lost a lot of my audience by saying that religion was unworthy of serious debate.  (Thankfully, today this outright dismissal is at least considered a respectable, mainstream position even if not everyone holds it.)

    \n

    I probably won't lose much by citing anti-Artificial-Intelligence views as an example of undiscriminating skepticism.  I think a majority among educated circles are sympathetic to the argument that brains are not magic and so there is no obstacle in principle to building machines that think.  But there are others, albeit in the minority, who recognize Artificial Intelligence as \"weird-sounding\" and \"sci-fi\", a belief in something that has never yet been demonstrated, hence unscientific - the same epistemic reference class as believing in aliens or homeopathy.

    \n

    (This is technically a demand for unobtainable evidence.  The asymmetry with homeopathy can be summed up as follows:  First:  If we learn that Artificial Intelligence is definitely impossible, we must have learned some new fact unknown to modern science - everything we currently know about neurons and the evolution of intelligence suggests that no magic was involved.  On the other hand, if we learn that homeopathy is possible, we must have learned some new fact unknown to modern science; if everything else we believe about physics is true, homeopathy shouldn't work.  Second:  If homeopathy works, we can expect double-blind medical studies to demonstrate its efficacy right now; the absence of this evidence is very strong evidence of absence.  If Artificial Intelligence is possible in theory and in practice, we can't necessarily expect its creation to be demonstrated using current knowledge - this absence of evidence is only weak evidence of absence.)

    \n

    I'm using Artificial Intelligence as an example, because it's a case where you can see some \"skeptics\" directing their skepticism at a belief that is very popular in educated circles, that is, the nonmysteriousness and ultimate reverse-engineerability of mind.  You can even see two skeptical principles brought into conflict - does a good skeptic disbelieve in Artificial Intelligence because it's a load of sci-fi which has never been demonstrated?  Or does a good skeptic disbelieve in human exceptionalism, since it would require some mysterious, unanalyzable essence-of-mind unknown to modern science?

    \n

    It's on questions like these where we find the frontiers of knowledge, and everything now in the settled lands was once on the frontier.  It might seem like a matter of little importance to debate weird non-mainstream beliefs; a matter for easy dismissals and open scorn.  But if this policy is implemented in full generality, progress goes down the tubes.  The mainstream is not completely right, and future science will not just consist of things that sound reasonable to everyone today - there will be at least some things in it that sound weird to us.  (This is certainly the case if something along the lines of Artificial Intelligence is considered weird!)  And yes, eventually such scientific truths will be established by experiment, but somewhere along the line - before they are definitely established and everyone already believes in them - the testers will need funding.

    \n

    Being skeptical about some non-mainstream beliefs is not a fringe project of little importance, not always a slam-dunk, not a bit of occasional pointless drudgery - though I can certainly understand why it feels that way to argue with creationists.  Skepticism is just the converse of acceptance, and so to be skeptical of a non-mainstream belief is to try to contribute to the project of advancing the borders of the known - to stake an additional epistemic claim that the borders should not expand in this direction, and should advance in some other direction instead.

    \n

    This is high and difficult work - certainly much more difficult than the work of mocking everything that sounds weird and that the people in your social circle don't already seem to believe.

    \n

    To put it more formally, before I believe that someone is performing useful cognitive work, I want to know that their skepticism discriminates truth from falsehood, making a contribution over and above the contribution of this-sounds-weird-and-is-not-a-tribal-belief.  In Bayesian terms, I want to know that p(mockery|belief false & not a tribal belief) > p(mockery|belief true & not a tribal belief).
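
    As a toy illustration of that inequality (the numbers here are mine and purely illustrative), the update you get from observing mockery is just a likelihood ratio:

```python
def posterior_true_given_mockery(prior_true, p_mock_given_true, p_mock_given_false):
    # Posterior probability that the (non-tribal) belief is true, given that
    # the skeptic mocked it: Bayes' theorem with two hypotheses.
    num = p_mock_given_true * prior_true
    den = num + p_mock_given_false * (1 - prior_true)
    return num / den

prior = 0.1  # assumed prior that some weird non-tribal belief is true

# Discriminating skeptic: much more likely to mock a false belief than a true one.
print(posterior_true_given_mockery(prior, 0.2, 0.8))   # ~0.027 -- mockery is real evidence

# Undiscriminating skeptic: mocks weird non-tribal beliefs at the same rate either way.
print(posterior_true_given_mockery(prior, 0.8, 0.8))   # 0.1 -- no update at all
```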

    \n

    If I recall correctly, the US Air Force's Project Blue Book, on UFOs, explained away as a sighting of the planet Venus what turned out to actually be an experimental aircraft.  No, I don't believe in UFOs either; but if you're going to explain away experimental aircraft as Venus, then nothing else you say provides further Bayesian evidence against UFOs either.  You are merely an undiscriminating skeptic.  I don't believe in UFOs, but in order to credit Project Blue Book with additional help in establishing this, I would have to believe that if there were UFOs then Project Blue Book would have turned in a different report.

    \n

    And so if you're just as skeptical of a weird, non-tribal belief that turns out to have pretty good support, you just blew the whole deal - that is, if I pay any extra attention to your skepticism, it ought to be because I believe you wouldn't mock a weird non-tribal belief that was worthy of debate.

    \n

    Personally, I think that Michael Shermer blew it by mocking molecular nanotechnology, and Penn and Teller blew it by mocking cryonics (justification: more or less exactly the same reasons I gave for Artificial Intelligence). Conversely, Richard Dawkins scooped up a huge truckload of actual-discriminating-skeptic points, at least in my book, for not making fun of the many-worlds interpretation when he was asked about it in an interview; indeed, Dawkins noted (correctly) that the traditional collapse postulate pretty much has to be incorrect. The many-worlds interpretation isn't just the formally simplest explanation that fits the facts, it also sounds weird and is not yet a tribal belief of the educated crowd; so whether someone makes fun of MWI is indeed a good test of whether they understand Occam's Razor or are just mocking everything that's not a tribal belief.

    \n

    Of course you may not trust me about any of that.  And so my purpose today is not to propose a new litmus test to replace atheism.

    \n

    But I do propose that before you give anyone credit for being a smart, rational skeptic, that you ask them to defend some non-mainstream belief.  And no, atheism doesn't count as non-mainstream anymore, no matter what the polls show.  It has to be something that most of their social circle doesn't believe, or something that most of their social circle does believe which they think is wrong.  Dawkins endorsing many-worlds still counts for now, although its usefulness as an indicator is fading fast... but the point is not to endorse many-worlds, but to see them take some sort of positive stance on where the frontiers of knowledge should change.

    \n

    Don't get me wrong, there's a whole crazy world out there, and when Richard Dawkins starts whaling on astrology in \"The Enemies of Reason\" documentary, he is doing good and necessary work. But it's dangerous to let people pick up too much credit just for slamming astrology and homeopathy and UFOs and God.  What if they become famous skeptics by picking off the cheap targets, and then use that prestige and credibility to go after nanotechnology?  Who will dare to consider cryonics now that it's been featured on an episode of Penn and Teller's \"Bullshit\"?  On the current system you can gain high prestige in the educated circle just by targeting beliefs like astrology that are widely believed to be uneducated; but then the same guns can be turned on new ideas like the many-worlds interpretation, even though it's being actively debated by physicists.  And that's why I suggest, not any particular litmus test, but just that you ought to have to stick your neck out and say something a little less usual - say where you are not skeptical (and most of your tribemates are) or where you are skeptical (and most of the people in your tribe are not).

    \n

    I am minded to pay attention to Robyn Dawes as a skillful rationalist, not because Dawes has slammed easy targets like astrology, but because he also took the lead in assembling and popularizing the total lack of experimental evidence for nearly all schools of psychotherapy and the persistence of multiple superstitions such as Rorschach ink-blot interpretation in the face of literally hundreds of experiments trying and failing to find any evidence for it.  It's not that psychotherapy seemed like a difficult target after Dawes got through with it, but that, at the time he attacked it, people in educated circles still thought of it as something that educated people believed in.  It's not quite as useful today, but back when Richard Feynman published \"Surely You're Joking, Mr. Feynman\" you could pick up evidence that he was actually thinking from the fact that he disrespected psychotherapists as well as psychics.

    \n

    I'll conclude with some simple and non-trustworthy indicators that the skeptic is just filling in a cheap and largely automatic mockery template:

    \n\n

    I'll conclude the conclusion by observing that poor skepticism can just as easily exist in a case where a belief is wrong as when a belief is right, so pointing out these flaws in someone's skepticism can hardly serve to establish a positive belief about where the frontiers of knowledge should move.

    " } }, { "_id": "r8T9WJz24mC6ofkaT", "title": "Musings on probability", "pageUrl": "https://www.lesswrong.com/posts/r8T9WJz24mC6ofkaT/musings-on-probability", "postedAt": "2010-03-14T23:17:10.250Z", "baseScore": 6, "voteCount": 11, "commentCount": 12, "url": null, "contents": { "documentId": "r8T9WJz24mC6ofkaT", "html": "

    I read this comment, and after a bit of rambling I realized I was as confused as the poster. A bit more thinking later I ended up with the “definition” of probability under the next heading. It’s not anything groundbreaking, just a distillation (specifically, mine) of things discussed here over time. It’s just what my brain thinks when I hear the word.

    \n

    But I was surprised and intrigued when I actually put it in writing and read it back and thought about it. I don’t remember seeing it stated like that (but I probably read some similar things).

    \n

    It probably won’t teach anyone anything, but it might trigger a similar “distillation” of “mind pictures” in others, and I’m curious to see that.

    \n

    What “probability” is...

    \n

    Or, more exactly, what is the answer to “what’s the probability of X?”

    \n

    Well, I don’t actually know, and it probably depends on who asks. But here’s the skeleton of the answer procedure:

    \n
      \n
    1. Take the set of all (logically) possible universes. Assign to each universe a finite, real value m (see below).
    2. Eliminate from the set those that are inconsistent with your experiences. Call the remaining set E.
    3. Construct T, the subset of E where X (happens, or happened, or is true).
    4. Assign to each universe u in set E a value p, such that p(u) is inversely proportional to m(u), and the integral of p over set E is 1.
    5. Calculate the integral of p over the set T. The result is called “the probability of X”, and is the answer to the question.
    \n

    I’m aware that this isn’t quite a definition; in fact, it leaves more unsaid (undefined) than it explains. Nevertheless, to me it seems that the structure itself is right: people might have different interpretations for the details (and, like me, be uncertain about them), but those differences would still be mentally structured like above.
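
    Here is a deliberately tiny, discrete sketch of that structure, with a finite toy collection of universes and m set to 1 everywhere; the universes and predicates are of course stand-ins of my own:

```python
from itertools import product

# Toy 'universes': every assignment of outcomes to two coin flips and one die roll.
universes = [
    {'flip1': f1, 'flip2': f2, 'die': d, 'm': 1.0}   # m: the measure from step 1
    for f1, f2, d in product(['H', 'T'], ['H', 'T'], range(1, 7))
]

def probability(x_holds, consistent_with_experience, universes):
    # Step 2: keep only the universes consistent with what you have experienced.
    E = [u for u in universes if consistent_with_experience(u)]
    # Step 4: p(u) inversely proportional to m(u), normalised so it sums to 1 over E.
    total = sum(1.0 / u['m'] for u in E)
    # Steps 3 and 5: sum p over the subset T of E where X holds.
    return sum(1.0 / u['m'] for u in E if x_holds(u)) / total

# Experience: I saw the first flip come up heads. X: the die shows an even number.
print(probability(lambda u: u['die'] % 2 == 0,
                  lambda u: u['flip1'] == 'H',
                  universes))  # 0.5
```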

    \n

    In the next section I explain a bit where each piece comes from and what it means, and in the one after I’m going to ramble a bit.

    \n

    Clarifications

    \n

    About (logically possible) universes: We don’t actually know what our universe is; as such, other possible universes isn’t quite a well-defined concept. For generality, the only constraint I put above is that they be logically possible, for the only reason that the description is (vaguely) mathematical and I don’t have any idea what math without logic means. (I might be missing something, though.)

    \n

    Note that by “universe” I really mean an entire universe, not just “until now”. E.g., if it so happens your experiences allow for a single possible past (i.e., you know the entire history), but your universe is not deterministic, there are still many universes in E (one for each possible future); if it’s deterministic, then E contains just one universe. (And your calculations are a lot easier...)

    \n

    Before you get too scared or excited by the concept of “all possible universes” remember that not all of them are actually used in the rest of the procedure. We actually need only those consistent with experience. That’s still a lot when you think about it, but my mind seems to reel in panic whenever I forget this point. (Lest this note make you too comfortable, I must also mention that the possibility that experience is (even partly) simulated explodes the size of E.)

    \n

    About that real value m I was talking about: “m” comes from “measure”, but that’s a consequence of how I arrived at the schema above. Even now I’m not quite sure it belongs there, because it depends on what you think “possible universes” means. If you just set it to 1 for all universes, everything works.

    \n

    But, for example, you might consider that the set U is countable, encoding them all as numbers using a well-defined rule, and use the Kolmogorov complexity of the bit-string encoding a universe for that universe’s measure. (Given step [4] above, this would mean that you think simpler universes are more probable; except it doesn’t quite mean that, because “probable” is defined only after you picked your “m”. It’s probably closer to “things that happen in simpler universes are more probable”; more in the ramblings section.)

    \n

    A bit about the math: I used some math terms a bit loosely in the schema above. Depending on exactly what you mean by “possible universes”, the set of them might be finite, countably infinite, not countable, or might be a proper class rather than a set. Depending on that, “integrating” might become a different operation. If you can’t (mathematically, not physically) do such an operation on your collection of possible universes (actually, on those in E) then you have to define your own concept of probability :-P

    \n

    With regards to computability, note that the series of steps above is not an algorithm, it’s just the definition. It doesn’t feel intuitive to me that there is any possible universe where you can actually follow the steps above, but math surprises me in that regard sometimes. But note that you don’t really need p(X): you just need a good-enough approximation, and you’re free to use any trick you want.

    \n

    Musings

    \n

    If the above didn’t interest you, the rest probably won’t, either. I’ve put in this the most interesting consequences of the schema above. It’s kind of rambling, and I apologize; as in the last section, I’ll bold keywords, so you might just skim it for paragraphs that might interest you.

    \n

    I found it interesting (but not surprising) to note that Bayesian statistics correspond well to the schema above. As far as I can tell, the Bayesian prior for (any) X is the number assigned in step 5; Bayesian updating is just going back to step 2 whenever you have new experiences. The interesting part is that my description smells frequentist. I wasn’t that surprised because the main difference (in my head) between the two is the use of priors; frequentist statistics ignore prior knowledge. If you just do frequentist statistics on every possible event in every possible universe (for some value of possible), then there is no “prior knowledge” left to ignore.

    \n

    The schema above describes only true/false–type problems. For non-binary problems you just split E in step 3 into several subsets, one for each possible answer. If the problem is real-valued you need to split E into an uncountably infinite number of sets, but I’ve abused set theory terms enough today that I’m not very concerned. Anyway, in practice (in our universe) it’s usually enough to just split the domain of the value into countably many intervals, according to the precision you need, and split the universes in E according to which interval they fall in. That is, you don’t actually need to know the probability that a value is, say, sqrt(2), just that it’s closer to sqrt(2) than you can measure it.

    \n

    With regard to past discussions about a rationale for rationality, observe that it’s possible to apply the procedure above to evaluate what is the “rational way”, supposing you define it by “the rational guy plays to win”: instead of step (3) generate the set of decision procedures that are applicable in all E, call it D; for each d in D, split E into universes where you win and those where you lose (don’t win), and call these W(d) and L(d); instead of step 4, for each decision procedure d, calculate the “winningness” of d as the integral of p over W(d) divided by the integral over L(d) (with p defined like above); instead of step 5, pick a decision procedure d0 such that its “winningness” is maximal (no other has a larger value).

    Note that I’ve no idea if doing this actually picks the decision procedure above, nor what exactly it would mean if it doesn’t... Of course, if it does, it’s still circular, like any “reason for reasoning”. The procedure might also give different results for people with different E. I found it interesting to contemplate that it might be “possible” for someone in another universe (one much friendlier to applied calculus than ours) to calculate exactly the solution of the procedure for my E, but at the same time for the best procedure for approximating it in my universe to give a different answer. They can’t, of course, communicate this to me (since then they’re not in a different universe in the sense used above).
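
    Schematically, and continuing in the same toy spirit as the sketch above (with each decision procedure represented simply by a predicate saying whether following it wins in a given universe), that modification looks like this:

```python
def winningness(wins_in, E, p):
    # The modified step 4: probability mass of universes where following d wins,
    # divided by the mass where it does not. wins_in(u) is True if d wins in u.
    won = sum(p(u) for u in E if wins_in(u))
    lost = sum(p(u) for u in E if not wins_in(u))
    return float('inf') if lost == 0 else won / lost

def best_procedure(procedures, E, p):
    # The modified step 5: pick a d0 whose winningness no other procedure exceeds.
    return max(procedures, key=lambda d: winningness(d, E, p))
```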

    \n

    If your ontology implies a computable universe (thus you only need to consider those in E), you might want to use Kolmogorov complexity as a measure for the universes. I’ve no idea which encoding you should use to calculate it; there are theorems that say the difference between two encodings is bounded by a constant, but I don't see why certain encodings can't be biased to have systematic effects on your probability calculations. (Other than “it's kind of a long shot”.) You might use the above procedure for deciding on decision procedures, of course :-P

    \n

    There’s also a theorem that says you can’t actually make a program to compute the KC for any arbitrary bit-string. There might be a universe–to–bit-string encoding that generates only bit-strings for which there is such a program, but that’s also kind of a long shot.

    If your ontology implies quantum mechanics then I think the measure of the universes (m(u) in step 1) must involve wave functions somehow, but my understanding of QM doesn’t allow me to think it through much.

    \n

    The schema above illuminated a bit something that puzzled me in that comment I was talking about at the beginning: say you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli; what’s your prior for the answer? What puzzled me was that the very fact that you were asked that question communicates an enormous amount of information — see this comment of mine for examples — and yet I couldn’t actually see how that should affect my priors. Of course, the information content of the question restricts hugely the universes in my E. But there were so many there that it’s still huge; more importantly, it restricts the universes along boundaries that I’ve not previously explored, and I don’t have ready heuristics to estimate that little p above:

    \n

    If I throw a (fair) die, I can split the universes into six approximately equal parts on vague symmetry justifications, and just estimate the probability of each side as 1/6. If someone on the street asks me to bet him on his dice I can split the universes into those where I win and those where I lose and estimate (using a kind of Monte Carlo integration with various scenarios I can think of) that I’ll probably lose. If I encounter an alien named Sillpruk I’ve no idea how to split the universes to estimate the result of a Doldun match. But if I were to encounter lots of aliens with strange first-questions for a while, I might develop some such simple heuristics based on simple trial and error.

    \n

    PS.

    \n

    I’m sorry if this was too long or just stupid. In the former case I welcome constructive criticism — don’t hesitate to tell me what you think should have been cut. I hereby subject myself to Crocker’s Rules. In the latter case... well, sorry :-)

    \n

     

    " } }, { "_id": "Gcpu8AoCy5R75Dt86", "title": "Crunchcourse - a tool for combating learning akrasia", "pageUrl": "https://www.lesswrong.com/posts/Gcpu8AoCy5R75Dt86/crunchcourse-a-tool-for-combating-learning-akrasia", "postedAt": "2010-03-14T22:53:59.998Z", "baseScore": 19, "voteCount": 14, "commentCount": 1, "url": null, "contents": { "documentId": "Gcpu8AoCy5R75Dt86", "html": "

    Crunchcourse is a free website that might be of use to people trying to learn things outside the normal classroom setting. It aims to get together groups of people interested in the same topic and use our social instincts to motivate us to do the work.

    \n

    It is in its early stages. If it proves useful, it might be worth standardizing on it as the place to learn the various prerequisites that Less Wrong has.

    " } }, { "_id": "zNawPJRktcJGWrtt9", "title": "Reasoning isn't about logic (it's about arguing)", "pageUrl": "https://www.lesswrong.com/posts/zNawPJRktcJGWrtt9/reasoning-isn-t-about-logic-it-s-about-arguing", "postedAt": "2010-03-14T04:42:12.048Z", "baseScore": 66, "voteCount": 53, "commentCount": 31, "url": null, "contents": { "documentId": "zNawPJRktcJGWrtt9", "html": "

    \"Why do humans reason\" (PDF), a paper by Hugo Mercier and Dan Sperber, reviewing an impressive amount of research with a lot of overlap with themes previously explored on Less Wrong, suggests that our collective efforts in \"refining the art of human rationality\" may ultimately be more successful than most individual efforts to become stronger. The paper sort of turns the \"fifth virtue\" on its head; rather than argue in order to reason (as perhaps we should), in practice, we reason in order to argue, and that should change our views quite a bit.

    \n

    I summarize Mercier and Sperber's \"argumentative theory of reasoning\" below and point out what I believe its implications are to the mission of a site such as Less Wrong.

    \n

    Human reasoning is one mechanism of inference among others (for instance, the unconscious inference involved in perception). It is distinct in being a) conscious, b) cross-domain, c) used prominently in human communication. Mercier and Sperber make much of this last aspect, taking it as a huge hint to seek an adaptive explanation in the fashion of evolutionary psychology, which may provide better answers than previous attempts at explanations of the evolution of reasoning.

    \n

    The paper defends reasoning as serving argumentation, in line with evolutionary theories of communication and signaling. In rich human communication there is little opportunity for \"costly signaling\", that is, signals that are taken as honest because too expensive to fake. In other words, it's easy to lie.

    \n

    To defend ourselves against liars, we practice \"epistemic vigilance\"; we check the communications we receive for attributes such as a trustworthy or authoritative source; we also evaluate the coherence of the content. If the message contains symbols that matches our existing beliefs, and packages its conclusions as an inference from these beliefs, we are more likely to accept it, and thus our interlocutors have an interest in constructing good arguments. Epistemic vigilance and argumentative reasoning are thus involved in an arms race, which we should expect to result in good argumentative skills.

    \n

    What of all the research suggesting that humans are in fact very poor at logical reasoning? Well, if in fact \"we reason in order to argue\", when the subjects are studied in non-argumentative situations this is precisely what we should expect.

    \n

    Mercier and Sperber argue that, when you look at research that studies people in the appropriate settings, we turn out to be in fact quite good at reasoning when we are in the process of arguing; specifically, we demonstrate skill at producing arguments and at evaluating others' arguments. M&S also plead for the \"rehabilitation\" of confirmation bias as playing an adaptive, useful role in the production of arguments in favor of an intuitively preferred view.

    \n

    If reasoning is a skill evolved for social use, group settings should be particularly conducive to skilled arguing. Research findings in fact show that \"truth wins\": once a group participant has a correct solution they will convince others. A group in a debate setting can do better than its best member.

    The argumentative theory, Mercier and Sperber argue, accounts nicely for motivated reasoning, on the model that \"reasoning anticipates argument\". Such anticipation colors our evaluative attitudes, leading for instance to \"polarization\" whereby a counter-argument makes us even more strongly believe the original position, or \"bolstering\" whereby we defend a position more strongly after we have committed to it.

    \n

    These attitudes are favorable to argumentative goals but actually detrimental to epistemic goals. This is particularly evident in decision-making. Reasoning appears to help people little when deciding; it directs people to the decisions that will be easily justified, not to the best decisions!

    \n
    \n

    Reasoning falls quite short of reliably delivering  rational beliefs and rational decisions, [and] may even be, in a variety of cases, detrimental to rationality.

    \n
    \n

    However, it isn't all bad news. The important asymmetry is between production of arguments, and their evaluation. In groups with an interest in finding correct answers, \"truth wins\".

    \n
    \n

    If we generalize to problems that do not  have a provable solution, we should expect, if not necessarily truth, at least good arguments to win. [...] People are quite capable of reasoning in an unbiased manner at least when they are evaluating arguments rather than producing them and when they are after the truth rather than after winning a debate.

    \n
    \n

    Becoming individually stronger at sound reasoning is possible, Mercier and Sperber point out, but rare. The best achievements of reasoning, in science or morality, are collective.

    \n

    If this view of reasoning is correct, a site dedicated to \"refining the art of human rationality\" should recognize this asymmetry between producing arguments and evaluating arguments and strive to structure the \"work\" being done here accordingly.

    \n

    It should encourage individual participants to support their views, and perhaps take a less jaundiced view of \"confirmation bias\". But it should also encourage the breaking down of arguments into small, separable pieces, so that they can be evaluated and filtered individually; that lines up with the intent behind \"debate tools\", even if their execution currently leaves much to be desired.

    \n

    It should stress the importance of \"collectively seeking the truth\" and downplay attempts at \"winning the debate\". This, in particular, might lead us to take a more critical view of some common voting patterns, e.g. larger number of upvotes for snarky one-liner replies than for longer and well-thought out replies.

    \n

    There are probably further conclusions to be drawn from the paper, but I'll stop here and encourage you to read or skim it, then suggest your own in the comments.

    " } }, { "_id": "YtvZxRpZjcFNwJecS", "title": "The Importance of Goodhart's Law", "pageUrl": "https://www.lesswrong.com/posts/YtvZxRpZjcFNwJecS/the-importance-of-goodhart-s-law", "postedAt": "2010-03-13T08:19:29.974Z", "baseScore": 117, "voteCount": 94, "commentCount": 123, "url": null, "contents": { "documentId": "YtvZxRpZjcFNwJecS", "html": "

    This article introduces Goodhart's law, provides a few examples, tries to explain an origin for the law and lists out a few general mitigations.

    \n

    Goodhart's law states that once a social or economic measure is turned into a target for policy, it will lose any information content that had qualified it to play such a role in the first place (Wikipedia). The law was named for its developer, Charles Goodhart, a chief economic advisor to the Bank of England.

    \n

    The much more famous Lucas critique is a relatively specific formulation of the same. 

    \n

    The most famous examples of Goodhart's law are probably the Soviet factories which, when given targets based on the number of nails, produced many tiny useless nails, and when given targets based on weight, produced a few giant nails. Number and weight both correlated well with value in a pre-central-planning scenario. Once they were made targets (at different times), they lost that informational value.

    \n

    We laugh at such ridiculous stories because our societies are generally much better run than Soviet Russia. But the key with Goodhart's law is that it is applicable at every level. The Japanese countryside is apparently full of construction projects that keep going because projects started in the recession era have become almost impossible to stop. Our society centres around money, which is supposed to be a relatively good measure of reified human effort. But many unscrupulous institutions have got rich by pursuing money in ways that people would find extremely difficult to describe as value-adding.

    \n

    David Henderson's recent \"GDP Fetishism\" is another good article on how Goodhart's law is affecting societies.

    \n

    The way I look at Goodhart's law is Guess the teacher's password writ large. People and institutions try to achieve their explicitly stated targets in the easiest way possible, often obeying only the letter of the law.
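
    A toy sketch of that pattern, with made-up numbers in the spirit of the nail factory: an optimiser pointed at the stated target picks a different option than one pointed at the underlying goal.

```python
# Each candidate output: the official target it scores on (nail count) and the
# true goal the target was meant to track (usefulness). Numbers are invented.
options = [
    {'name': 'a few well-made nails',   'nail_count': 100,     'usefulness': 95},
    {'name': 'many tiny useless nails', 'nail_count': 100_000, 'usefulness': 5},
]

chasing_the_target = max(options, key=lambda o: o['nail_count'])
chasing_the_goal   = max(options, key=lambda o: o['usefulness'])

print(chasing_the_target['name'])  # 'many tiny useless nails' -- letter of the law
print(chasing_the_goal['name'])    # 'a few well-made nails'   -- spirit of the law
```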

    \n

    A speculative origin of Goodhart's law

    \n

    The way I see Goodhart's law work, or a target's utility break down, is the following.

    \n\n

    The mitigations to Goodhart's law

    \n

    If you consider the law to be true, solutions to Goodhart's law are an impossibility in a non-singleton scenario. So let's consider mitigations.

    \n\n

    Hansonian Cynicism

    \n

    Pointing out what most people would have in mind as G (the true goal) and showing that institutions all around are not following G, but their own convoluted G*s (the proxy targets). Hansonian cynicism is definitely the second step to mitigation in many, many cases (knowing about Goodhart's law is the first). Most people expect universities to be about education and hospitals to be about health. Pointing out that they aren't doing what they are supposed to be doing creates a huge cognitive dissonance in the thinking person.

    \n

    Better measures

    \n

    Balanced scorecards

    \n

    Taking multiple factors into consideration, trying to make G* as strong and spoof-proof as possible. The scorecard approach is, mathematically, the simplest solution that comes to mind when confronted with Goodhart's law.

    \n

    Optimization around the constraint

    \n

    There are no generic solutions to bridging the gap between G and G*, but the body of knowledge of theory of constraints is a very good starting point for formulating better measures for corporates.

    \n

    Extrapolated Volition

    \n

    CEV tries to mitigate Goodhart's law in a better way than mechanical measures by trying to create a complete map of human morality. If G is defined fully, there is no need for a G*. CEV tries to do it for all humanity, but as an example, individual extrapolated volition should be enough. The attempt is incomplete as of now, but it is promising.

    \n

    Solutions centred around Human discretion

    \n

    Human discretion is the one thing that can presently beat Goodhart's law, because of the constant checking and rechecking that G and G* match. Nobody will attempt to pull off anything as weird as the large nails in such a scenario. However, this is not scalable in a strict sense because of the added testing and quality control requirements.

    \n

    Left Anarchist ideas

    \n

    Left anarchist ideas about small firms and workgroups are based on the fact that hierarchy will inevitably introduce Goodhart's-law-related problems, and thus the best groups are small ones doing simple things.

    \n

    Hierarchical rule

    \n

    On the other end of the political spectrum, Molbuggian hierarchical rule completely eliminates the mechanical aspects of the law. There is no letter of the law, it's all spirit. I am supposed to take total care of my slaves and have total obedience to my master. The scalability is ensured through hierarchy.

    \n

     

    \n

    Of all the proposed solutions to the Goodhart's law problem, I like CEV the most, but that is probably a reflection on me more than anything, wanting a relatively scalable and automated solution. I'm not sure whether the people backing human discretion are really correct in this matter.

    \n

    Your comments are invited, as are other mitigations of and solutions to Goodhart's law.

    " } }, { "_id": "7xupXWqazyB6289Mv", "title": "Rational feelings: a crucial disambiguation", "pageUrl": "https://www.lesswrong.com/posts/7xupXWqazyB6289Mv/rational-feelings-a-crucial-disambiguation", "postedAt": "2010-03-13T00:48:20.576Z", "baseScore": 23, "voteCount": 17, "commentCount": 25, "url": null, "contents": { "documentId": "7xupXWqazyB6289Mv", "html": "

    Ever wonder something like, \"I know it's bad for me that I lost my job, but I actually feel happy about it... is that rational?\"

    \n

    What could a question like that mean? There is a divisive ambiguity here that really messes people up. A feeling as an experience is neither rational nor irrational. It's like asking how ethical a shade of purple is. The point is that a feeling must be framed as a behavior or a statement to ask whether it is rational, and which one matters heaps and loads to the answer.

    \n

    If you think of the happiness as a behavior, something that you're doing, then the question is secretly asking about instrumental rationality: whether you're applying your beliefs correctly to attain your values. In our opening example, the question becomes \"Does feeling happy serve my values?\", or simply \"Do I value feeling happy?\". If you're almost anyone, the answer is probably \"yes\".

    \n

    If you think of the happiness as a statement or instruction that says \"Your values are being served\", which can be true/false and justified/unjustified, then the question is really about epistemic rationality, and asks: \"Am I justified to believe my values are being served?\". If \"it's bad for me\" means \"no\", then \"no\".

    \n

    Because of this ambiguity, although it can make sense to say \"I'm happy\" to indicate \"my values are being served\", I propose that in the interest of epistemic hygiene it's worth being more specific. Conflating feelings-as-behaviors with feelings-as-statements inflicts a great deal of pondering and confusion about whether feelings are rational (also precipitated by Hollywood), and to make matters worse, each of these similes has only limited validity:

    \n

    \n

    1) A feeling is a behavior only insofar as you have control over it. This is something perhaps to strive for, but which certainly varies in feasibility. If someone carefully injects you with dopamine at a funeral, you might feel happy. That doesn't mean you've made an instrumentally irrational choice.

    2) A feeling is a statement only insofar as it has a given interpretation. In my opening example, the happiness might rather signify \"There's nothing you can do about this so you can relax and move on.\" This might be clarified on reflection, and if the statement isn't a rational one, you might consider retraining your feeling to offer more rational suggestions. But then, as with any statement, your epistemic rationality hinges on whether you believe it, not on how you feel. On the other hand, it is conceivable that even after introspection reveals the mechanism of a feeling, it still does not present itself as a statement. So sometimes the question \"Is this feeling rational?\" just isn't applicable, but greater self control/awareness makes your feelings more often like behaviors/statements to be assessed as \"rational.\"

    For the ticklement of your visual cortex, the following table displays 15 scenario types (all of which can really happen), and what the scenario means for your instrumental/epistemic rationality (which is sometimes nothing: \"--\"). If you like, try thinking of an example scenario for the plausibility of each cell:

    Your instrumental/epistemic rationality with respect to a feeling:

    | | You haven't interpreted the feeling as a statement | Interpreted: the statement is justified and you believe it | Interpreted: justified, and you don't believe it | Interpreted: unjustified, and you believe it | Interpreted: unjustified, and you don't believe it |
    |---|---|---|---|---|---|
    | You can't (yet) control the feeling | --/-- | --/yes | --/no | --/no | --/yes |
    | You can control the feeling, and you like (value) it | yes/-- | yes/yes | yes/no | yes/no | yes/yes |
    | You can control the feeling, and you dislike (devalue) it | no/-- | no/yes | no/no | no/no | no/yes |

    \n
    What's the point? Understanding your feelings means you can put them to better use. For one thing, you don't have to turn off a good feeling just because it makes a bad suggestion, as long as you can ignore the suggestion: if you lose your job, go ahead and be happy about it, just make sure you behave appropriately and keep looking for a new one. Likewise, you're not obliged to feel a \"smart\" feeling that you don't like, as long as you're smart enough to remember what good advice it might have given you: if you're worried about failing your exams, say \"Thanks Worry, good idea, I'll go study. Okay, I'm studying now, you've done your part, you can leave me alone for a while!\" Don't forget, in addition to communicating with the unconscious about epistemic issues, your feelings can be used for all sorts of other things, like energetic motivation, health benefits, and watching Avatar. And failing just one of these purposes doesn't entirely preclude the others, as long as you can keep them effectively separate... feeling like blue people are real can be highly advisable at times.
    " } }, { "_id": "PFj68HspNdrNWkPrW", "title": "Addiction", "pageUrl": "https://www.lesswrong.com/posts/PFj68HspNdrNWkPrW/addiction", "postedAt": "2010-03-11T18:51:43.000Z", "baseScore": 2, "voteCount": 2, "commentCount": 1, "url": null, "contents": { "documentId": "PFj68HspNdrNWkPrW", "html": "

    Imagine a strange genie offers you the opportunity to no longer crave food, drink, sleep, warmth or sex. Are you interested? I wouldn’t be – getting things I need and crave seems to be more fun on the whole than getting things I just casually enjoy. There’s a pretty steep diminishing return to the things I listed though, and like many people I have about as much as I want of most of them. So how to make my life much better?

    \n

    An obvious strategy is to acquire more such strong desires. This is an unpopular path though. If you want to enjoy life more it is common to look for new casually pleasurable activities – make new friends, try a new sport, take your partner on an unusual romantic excursion. If you become too attached to an activity, you are deemed ‘addicted’ though, which is a bad thing. There are activities that are known to make people particularly addicted, and beginning them is seen as a stupid move. But why is addiction so bad?

    \n

    There is an argument that addictions don’t make you happy – they entrap you in a cycle of endless obsession with no real satisfaction. Remember that enjoyment and desire don’t always coincide. But while that’s surely true for some things people become addicted to, if addiction merely entails strong ongoing craving, why should the subjects of such cravings tend to be any more unpleasant than those of less serious desires?

    \n

    For these reasons I recently set out to acquire an addiction to coffee, a not particularly dangerous drug. It’s not very strong yet; I do alright without coffee for long periods, but feel relieved and invigorated for having it. So far this seems a great benefit, compared both to casually and unaffectedly drinking coffee and to not drinking coffee at all. However if I say to people that I’m somewhat addicted to coffee they usually express pity. If I say I think it’s a fine arrangement they seem to think I’m silly.

    \n

Why is addiction bad, beyond the few specific addictions that aren’t pleasurable or that lead to externalities such as armed thievery?


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "ipoAMivCwrzajEucj", "title": "Open Thread: March 2010, part 2", "pageUrl": "https://www.lesswrong.com/posts/ipoAMivCwrzajEucj/open-thread-march-2010-part-2", "postedAt": "2010-03-11T17:25:40.695Z", "baseScore": 7, "voteCount": 5, "commentCount": 342, "url": null, "contents": { "documentId": "ipoAMivCwrzajEucj", "html": "

    The Open Thread posted at the beginning of the month has exceeded 500 comments – new Open Thread posts may be made here.

    \n

    \n

    \n
    \n
    \n

    This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

    \n
    \n
    \n
    \n

    " } }, { "_id": "6tWKgRixfMfPPKwDr", "title": "Spring 2010 Meta Thread", "pageUrl": "https://www.lesswrong.com/posts/6tWKgRixfMfPPKwDr/spring-2010-meta-thread", "postedAt": "2010-03-11T10:27:25.726Z", "baseScore": 5, "voteCount": 8, "commentCount": 147, "url": null, "contents": { "documentId": "6tWKgRixfMfPPKwDr", "html": "

    This post is a place to discuss meta-level issues regarding Less Wrong. Previous thread.

    " } }, { "_id": "QytY9CJyKtTueSh2z", "title": "Nature editorial: Do scientists really need a PhD? ", "pageUrl": "https://www.lesswrong.com/posts/QytY9CJyKtTueSh2z/nature-editorial-do-scientists-really-need-a-phd", "postedAt": "2010-03-11T09:39:42.922Z", "baseScore": -7, "voteCount": 17, "commentCount": 19, "url": null, "contents": { "documentId": "QytY9CJyKtTueSh2z", "html": "

    This article is worth reading, updating based on the evidence if appropriate, and then discussing if you have something to say.

    \n

    Link

    " } }, { "_id": "ArpXebYCLoyeXczMr", "title": "Deception and Self-Doubt", "pageUrl": "https://www.lesswrong.com/posts/ArpXebYCLoyeXczMr/deception-and-self-doubt", "postedAt": "2010-03-11T02:39:06.462Z", "baseScore": 13, "voteCount": 11, "commentCount": 35, "url": null, "contents": { "documentId": "ArpXebYCLoyeXczMr", "html": "

A little while ago, I argued with a friend of mine over the efficiency of the Chinese government. I admitted he was clearly better informed on the subject than I. At one point, however, he claimed that the Chinese government executed fewer people than the US government. This statement is flat-out wrong; China executes ten times as many people as the US, if not far more. It's a blatant lie. I called him on it, and he copped to it. The outcome is beside the point. Why does it matter that he lied? In this case, it provides weak evidence that the basics of his claim were wrong, that he knew the point he was arguing was, at least on some level, incorrect.

    \n

    The fact that a person is willing to lie indefensibly in order to support their side of an argument shows that they have put \"winning\" the argument at the top of their priorities. Furthermore, they've decided, based on the evidence they have available, that lying was a more effective way to advance their argument than telling the truth. While exceptions obviously exist, if you believe that lying to a reasonably intelligent audience is the best way of advancing your claim, this suggests that you know your claim is ill-founded, even if you don't admit this fact to yourself.

    \n

Two major exceptions exist. First, the person may simply have no qualms about lying, and may just say anything they think will advance their point, regardless of its veracity. This indicates the speaker should never be trusted on basically any factual claim they make, though it does not necessarily show self-doubt. Second, the speaker may have little respect for the intelligence of her audience, and believe that the audience is not sophisticated enough for the truth to persuade them. While this may be justified, depending on the audience,1 unless there is good evidence to believe the audience legitimately would not process the truth accurately, this shows the speaker is likely wrong about her central point. However, \"the masses are ignorant and should be led by their betters\" is a pretty effective cognitive dissonance resolver, so she may not experience the same self-doubt.

    \n

This principle applies in direct proportion to the deception, and in direct proportion to the sophistication of the speaker. An informed person relying on outright lies indicates either an Escher-like mind or a belief in the wrongness of their own position. Lesser deception indicates lesser reservations. An uninformed person may well lie to support a proposition that a better informed person could support easily with the facts. But, if an apparently informed advocate is resorting to lies and deception, it strongly suggests he has little else to work with.

    \n
    \n

1- This is not necessarily unjustified. Consider a babysitter and a child who walk by a toy store. The babysitter needs to get the child home soon. The child gets excited and starts demanding they go in, and the babysitter says, \"We can't, they're closed!\" This may be patently false, but, in this case, the truth is very unlikely to convince his audience, even if better solutions may exist.

    " } }, { "_id": "hDNg77Xx53tsxq6Ko", "title": "Blackmail, Nukes and the Prisoner's Dilemma", "pageUrl": "https://www.lesswrong.com/posts/hDNg77Xx53tsxq6Ko/blackmail-nukes-and-the-prisoner-s-dilemma", "postedAt": "2010-03-10T14:58:38.238Z", "baseScore": 25, "voteCount": 25, "commentCount": 20, "url": null, "contents": { "documentId": "hDNg77Xx53tsxq6Ko", "html": "

This example, and the whole method for modelling blackmail, are due to Eliezer. I have just recast them in my own words.

    \n

    We join our friends, the Countess of Rectitude and Baron Chastity, in bed together. Having surmounted their recent difficulties (she paid him, by the way), they decide to relax with a good old game of prisoner's dilemma. The payoff matrix is as usual:

    \n

| (Baron, Countess) | Cooperate | Defect |
|---|---|---|
| Cooperate | (3,3) | (0,5) |
| Defect | (5,0) | (1,1) |
    \n

    \n

    Were they both standard game theorists, they would both defect, and the payoff would be (1,1). But recall that the baron occupies an epistemic vantage over the countess. While the countess only gets to choose her own action, he can choose from among four more general tactics:

    \n
      \n
1. (Countess C, Countess D)→(Baron D, Baron C)   \"contrarian\" : do the opposite of what she does
2. (Countess C, Countess D)→(Baron C, Baron C)   \"trusting soul\" : always cooperate
3. (Countess C, Countess D)→(Baron D, Baron D)   \"bastard\" : always defect
4. (Countess C, Countess D)→(Baron C, Baron D)   \"copycat\" : do whatever she does
    \n

    Recall that he counterfactually considers what the countess would do in each case, while assuming that the countess considers his decision a fixed fact about the universe. Were he to adopt the contrarian tactic, she would maximise her utility by defecting, giving a payoff of (0,5). Similarly, she would defect in both trusting soul and bastard, giving payoffs of (0,5) and (1,1) respectively. If he goes for copycat, on the other hand, she will cooperate, giving a payoff of (3,3).
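The selection procedure just described is mechanical enough to check by brute force. The following Python sketch is my own illustration, not something from the original post; the action labels C/D and the tie-breaking behaviour of max are assumptions. It enumerates the baron's four conditional tactics, computes the countess's best response to each under the fixed-fact assumption, and keeps the tactic that maximises the baron's payoff.

```python
from itertools import product

# Payoffs (baron, countess), indexed by (baron_action, countess_action).
PAYOFF = {
    ('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
    ('D', 'C'): (5, 0), ('D', 'D'): (1, 1),
}
ACTIONS = ['C', 'D']

def countess_best_response(tactic):
    # She treats the baron's conditional tactic as a fixed fact and
    # simply picks the action that maximises her own payoff.
    return max(ACTIONS, key=lambda c: PAYOFF[(tactic[c], c)][1])

def baron_best_tactic():
    # He enumerates every mapping from her action to his response,
    # predicts her best response to each, and keeps the best one for him.
    best_tactic, best_u = None, None
    for responses in product(ACTIONS, repeat=len(ACTIONS)):
        tactic = dict(zip(ACTIONS, responses))   # her action -> his action
        c = countess_best_response(tactic)
        u = PAYOFF[(tactic[c], c)][0]
        if best_u is None or u > best_u:
            best_tactic, best_u = tactic, u
    return best_tactic

tactic = baron_best_tactic()
c = countess_best_response(tactic)
print(tactic, PAYOFF[(tactic[c], c)])   # {'C': 'C', 'D': 'D'} (3, 3), i.e. 'copycat'
```

Running it reproduces the reasoning above: contrarian and trusting soul leave the baron with 0, bastard with 1, and copycat wins with the (3,3) outcome.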

    \n

    Thus when one player occupies a superior epistemic vantage over the other, they can do better than standard game theorists, and manage to both cooperate.

    \n

    \"Isn't it wonderful,\" gushed the Countess, pocketing her 3 utilitons and lighting a cigarette, \"how we can do such marvellously unexpected things when your position is over mine?\"

    \n

    Next to the bed, the butler had absent-mindedly left a pair of nuclear bombs, along with the champagne flutes. The countess and the baron each picked up one of the nukes, and the wily baron proposed that they play another round of prisoner's dilemma. With the added option of setting off a nuke, the game payoff matrix looks like:

    \n

| (Baron, Countess) | Cooperate | Defect | Nuke! |
|---|---|---|---|
| Cooperate | (3,3) | (0,5) | -∞ |
| Defect | (5,0) | (1,1) | -∞ |
| Nuke! | -∞ | -∞ | -∞ |
    \n

    \n

In this new setup, the baron has the choice of twenty-seven tactics, corresponding to the ways the countess' three possible actions map to his three possible actions. The most interesting of his tactics is the following:

\n

(Countess C, Countess D, Countess N)→(Baron D, Baron N, Baron N)   \"nuclear blackmail\" : if she cooperates, defect; if she defects or nukes, nuke

\n

    The baron, quite simply, is threatening to nuke the pair of them if the countess doesn't cooperate with him - but he's going to defect if she does cooperate. Under this assumption, the countess can choose to cooperate with payout (5,0), or defect or nuke, both with payoff -∞.

    \n

To maximise her utility, she must therefore cooperate with the baron, even though she will get nothing out of this. Since the baron cannot make more utility than 5 in this game, nuclear blackmail is the tactic he will choose to implement, and thus the payoff will be (5,0).
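The same enumeration can be run over the nuclear version of the game. This is again only a rough sketch of mine, with the -∞ entries encoded as floating-point negative infinity and ties broken arbitrarily; it checks that the nuclear-blackmail tactic yields (5,0) and that no conditional tactic earns the baron more than 5.

```python
from itertools import product

NEG_INF = float('-inf')
ACTIONS = ['C', 'D', 'N']            # cooperate, defect, nuke

def payoff(b, c):
    # (baron, countess) payoffs; any nuke sends both players to minus infinity.
    if b == 'N' or c == 'N':
        return (NEG_INF, NEG_INF)
    return {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
            ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}[(b, c)]

def outcome(tactic):
    # The countess accepts the baron's conditional tactic as a fixed fact and best-responds.
    her_move = max(ACTIONS, key=lambda c: payoff(tactic[c], c)[1])
    return payoff(tactic[her_move], her_move)

# Nuclear blackmail: defect if she cooperates, nuke otherwise.
blackmail = {'C': 'D', 'D': 'N', 'N': 'N'}
print(outcome(blackmail))            # (5, 0)

# No conditional tactic does better than 5 for the baron.
print(max(outcome(dict(zip(ACTIONS, r)))[0] for r in product(ACTIONS, repeat=3)))   # 5
```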

    \n

    With the addition of nukes, the blackmailer has gone from being able to force cooperation, to being able to force his preferred option.

    \n

    \"Out, out!\" the countess said, propelling the baron away from her with a few well aimed kicks. \"On second thoughts, I don't want you over me any more!\"

    " } }, { "_id": "Ljy3CSwTFPEpnGLLJ", "title": "The Blackmail Equation", "pageUrl": "https://www.lesswrong.com/posts/Ljy3CSwTFPEpnGLLJ/the-blackmail-equation", "postedAt": "2010-03-10T14:46:24.821Z", "baseScore": 27, "voteCount": 20, "commentCount": 87, "url": null, "contents": { "documentId": "Ljy3CSwTFPEpnGLLJ", "html": "

This is Eliezer's model of blackmail in decision theory from the recent workshop at SIAI, filtered through my own understanding. Eliezer's help and advice were much appreciated; any errors herein are my own.

    \n

    The mysterious stranger blackmailing the Countess of Rectitude over her extra-marital affair with Baron Chastity doesn't have to run a complicated algorithm. He simply has to credibly commit to the course of action:

    \n

    \"If you don't give me money, I will reveal your affair.\"

    \n

And then, generally, the Countess forks over the cash. Which means the blackmailer never does reveal the details of the affair, so that threat remains entirely counterfactual/hypothetical. Even if the blackmailer is Baron Chastity, and the revelation would be devastating for him as well, this makes no difference at all, as long as he can credibly commit to the threat (call it Z). In the world of perfect decision makers, there is no risk to doing so, because the Countess will hand over the money, so the Baron will not take the hit from the revelation.

    \n

    Indeed, the baron could replace \"I will reveal our affair\" with Z=\"I will reveal our affair, then sell my children into slavery, kill my dogs, burn my palace, and donate my organs to medical science while boiling myself in burning tar\" or even \"I will reveal our affair, then turn on an unfriendly AI\", and it would only matter if this changed his pre-commitment to Z. If the Baron can commit to counterfactually doing Z, then he never has to do Z (as the countess will pay him the hush money), so it doesn't matter how horrible the consequences of Z are to himself.

    \n

    To get some numbers in this model, assume the countess can either pay up or not do so, and the baron can reveal the affair or keep silent. The payoff matrix could look something like this:

    \n

| (Baron, Countess) | Pay | Not pay |
|---|---|---|
| Reveal | (-90,-110) | (-100,-100) |
| Silent | (10,-10) | (0,0) |
    \n

    \n

    Both the countess and the baron get -100 utility if the affair is revealed, while the countess transfers 10 of her utilitons to the baron if she pays up. Staying silent and not paying have no effect on the utility of either.

    \n

    Let's see how we could implement the blackmailing if the baron and the countess were running simple decision algorithms. The baron has a variety of tactics he could implement. What is a tactic, for the baron? A tactic is a list of responses he could implement, depending on what the countess does. His four tactics are:

    \n
      \n
1. (Pay, NPay)→(Reveal, Silent)        \"anti-blackmail\" : if she pays, tell all; if she doesn't, keep quiet
2. (Pay, NPay)→(Reveal, Reveal)      \"blabbermouth\" : whatever she does, tell all
3. (Pay, NPay)→(Silent, Silent)         \"not-a-word\" : whatever she does, keep quiet
4. (Pay, NPay)→(Silent, Reveal)        \"blackmail\" : if she pays, keep quiet; if she doesn't, tell all
    \n

The countess, in contrast, has only two tactics: pay or don't pay. Each will try and estimate what the other will do, so the baron must model the countess, who must model the baron in turn. This seems as if it leads to infinite regress, but the baron has a short-cut: when reasoning counterfactually as to which tactic to implement, he will substitute that tactic into his model of how the countess models him.

    \n

In simple terms, it means that when he is musing 'what would happen if I were to anti-blackmail, hypothetically', he assumes that the countess would model him as an anti-blackmailer. In that case, the countess' decision is easy: her utility-maximising decision is not to pay, leaving them with a payoff of (0,0).

    \n

    Similarly, if he counterfactually considers the blabbermouth tactic, then if the countess models him as such, her utility-maximising tactic is also not to pay up, giving a payoff of (-100,-100). Not-a-word results in a payoff of (0,0), and only if the baron implements the blackmail tactic will the countess pay up, giving a payoff of (10,-10). Since this maximises his utility, he will implement the blackmail tactic. And the countess will pay him, to minimise her utility loss.
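For concreteness, here is a small Python sketch of this evaluation (my own illustration, not part of the post). Each named tactic is a map from the countess's move to the baron's response; she best-responds to it as a fixed fact, and blackmail is the only tactic that gets the baron his (10,-10) outcome.

```python
# Each tactic maps the countess's move to the baron's response.
TACTICS = {
    'anti-blackmail': {'pay': 'reveal', 'not pay': 'silent'},
    'blabbermouth':   {'pay': 'reveal', 'not pay': 'reveal'},
    'not-a-word':     {'pay': 'silent', 'not pay': 'silent'},
    'blackmail':      {'pay': 'silent', 'not pay': 'reveal'},
}

# Payoffs (baron, countess), indexed by (baron_response, countess_move).
PAYOFF = {
    ('reveal', 'pay'): (-90, -110), ('reveal', 'not pay'): (-100, -100),
    ('silent', 'pay'): (10, -10),   ('silent', 'not pay'): (0, 0),
}

def countess_move(tactic):
    # She accepts the tactic as a fixed fact and maximises her own utility.
    return max(['pay', 'not pay'], key=lambda m: PAYOFF[(tactic[m], m)][1])

for name, tactic in TACTICS.items():
    m = countess_move(tactic)
    print(name, PAYOFF[(tactic[m], m)])
# anti-blackmail (0, 0)
# blabbermouth (-100, -100)
# not-a-word (0, 0)
# blackmail (10, -10)
```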

    \n

    Notice that in order for this to work, the baron needs four things:

    \n
      \n
1. The baron needs to make his decision after the countess does, so she cannot react to his action.
2. The baron needs to make his decision after the countess does, so he can react to her action.
3. The baron needs to be able to precommit to a specific tactic (in this case, blackmail).
4. The baron needs the countess to find his precommitment plausible.
    \n

If we were to model the two players as timeless AIs implementing specific decision theories, what would these conditions become? They can be cast as:

    \n
      \n
1. The baron and the countess must exchange their source code.
2. The baron and the countess must both be rational.
3. The countess' available tactics are simply to pay or not to pay.
4. The baron's available tactics are conditional tactics, dependent on what the countess' decision is.
5. The baron must model the countess as seeing his decision as a fixed fact over which she has no influence.
6. The countess must indeed see the baron's decision as a fixed fact over which she has no influence.
    \n

    The baron occupies what Eliezer termed a superior epistemic vantage.

    \n

Could two agents be in superior epistemic vantage, as laid out above, one over the other? This is precluded by the set-up above*, as two agents cannot be correct in assuming that the other treats their own decision as a fixed fact, while both running counterfactuals conditioning their response on the varying tactics of the other.

    \n

    \"I'll tell, if you don't send me the money, or try and stop me from blackmailing you!\" versus \"I'll never send you the money, if you blackmail me or tell anyone about us!\"

    \n

    Can the countess' brother, the Archduke of Respectability, blackmail the baron on her behalf? If the archduke is in a superior epistemic vantage to the baron, then there is no problem. He could choose a tactic that is dependent on the baron's choice of tactics, without starting an infinite loop, as the baron cannot do the same to him. The most plausible version would go:

    \n

    \"If you blackmail my sister, I will shoot you. If you blabbermouth, I will shoot you. Anti-blackmail and not-a-word are fine by me, though.\"

    \n

Note that Omega, in Newcomb's problem, occupies the superior epistemic vantage. His final tactic is the conditional Z=\"if you two-box, I put nothing in box A; if you one-box, I put in a million pounds,\" whereas you do not have access to tactics along the lines of \"if Omega implements Z, I will two-box; if he doesn't, I will one-box\". Instead, like the countess, you have to assume that Omega will indeed implement Z, accept this as fact, and then choose simply to one-box or two-box.
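As a toy rendering of this (mine, not the post's; the thousand pounds in the always-full box is an assumed figure, since the post only specifies the million in box A), the same accept-Z-as-a-fixed-fact rule makes one-boxing the utility-maximising choice:

```python
def omega_fills_box_a(your_choice):
    # Omega's tactic Z, which you accept as a fixed fact.
    return 1_000_000 if your_choice == 'one-box' else 0

def payout(your_choice):
    box_a = omega_fills_box_a(your_choice)
    box_b = 1_000                      # assumed contents of the always-full box
    return box_a if your_choice == 'one-box' else box_a + box_b

print(max(['one-box', 'two-box'], key=payout))   # one-box
```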

    \n


*The argument, as presented here, is a lie, but spelling out the true version would be tedious and tricky. The countess, for instance, is perfectly free to indulge in counterfactual speculations that the baron may decide something else, as long as she and the baron are both aware that these speculations will never influence her decision. Similarly, the baron is free to model her doing so, as long as this similarly leads to no difference. The countess may have a dozen other options, not just the two presented here, as long as they both know she cannot make use of them. There is a whole issue of extracting information from an algorithm and a source code here, where you run into entertaining paradoxes such as: if the baron knows the countess will do something, then he will be accurate, and can check whether his knowledge is correct; but if he didn't know this fact, then it would be incorrect. These are beyond the scope of this post.

    \n

     

    \n

    [EDIT] The impossibility of the countess and the baron being each in epistemic vantage over the other has been clarified, and replaces the original point - about infinite loops - which only implied that result for certain naive algorithms.

    \n

    [EDIT] Godelian reasons make it impossible to bandy about \"he is rational and believes X, hence X is true\" with such wild abandon. I've removed the offending lines.

    \n

[EDIT] To clarify issues, here is a formal model of how the baron and countess could run their decision theories. Let X be a fact about the world, and let S_B and S_C be the baron's and the countess's source code respectively.

    \n

    Baron(S_C):

    \n

    Utility of pay = 10, utility of reveal = -100

    \n

    Based on S_C, if the countess would accept the baron's behaviour as a fixed fact, run:

    \n

    Let T={anti-blackmail, blabbermouth, not-a-word, blackmail}

    \n

    For t_b in T, compute utility of the outcome implied by Countess(t_b,S_B). Choose the t_b that maximises it.

    \n

     

    \n

    Countess(X, S_B)

    \n

    If X implies the baron's tactic t_b, then accept t_b as fixed fact.

    \n

    If not, run Baron(S_C) to compute the baron's tactic t_b. Stop as soon as the tactic is found. Accept as fixed fact.

    \n

    Utility of pay = -10, utility of reveal = -100.

    \n

    Let T={pay, not pay}

    \n

    For t_c in T, under the assumption of t_b, compute utility of outcome. Choose t_c that maximises it.

    \n

     

    \n

    Both these agents are rational with each other, in that they correctly compute each other's ultimate decisions in this situation. They are not perfectly rational (or rather, their programs are incomplete) in that they do not perform well against general agents, and may fall into infinite loops as written.
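A runnable rendering of this formal model might look as follows. This is my own sketch rather than the post's code: the exchange of source code is crudely modelled by passing the functions themselves around, and the payoff and tactic tables are the ones given earlier in the post.

```python
PAYOFF = {   # (baron, countess), indexed by (baron_response, countess_move)
    ('reveal', 'pay'): (-90, -110), ('reveal', 'not pay'): (-100, -100),
    ('silent', 'pay'): (10, -10),   ('silent', 'not pay'): (0, 0),
}
TACTICS = {  # countess's move -> baron's response
    'anti-blackmail': {'pay': 'reveal', 'not pay': 'silent'},
    'blabbermouth':   {'pay': 'reveal', 'not pay': 'reveal'},
    'not-a-word':     {'pay': 'silent', 'not pay': 'silent'},
    'blackmail':      {'pay': 'silent', 'not pay': 'reveal'},
}

def countess(t_b=None, baron_source=None):
    # If X implies the baron's tactic, accept it as a fixed fact;
    # otherwise run the baron's source code to find it, then accept that.
    if t_b is None:
        t_b = baron_source(countess)
    # Under the fixed tactic, choose the move that maximises her utility.
    return max(['pay', 'not pay'], key=lambda m: PAYOFF[(TACTICS[t_b][m], m)][1])

def baron(countess_source):
    # Counterfactually substitute each tactic t_b into his model of the
    # countess, and keep whichever tactic maximises his own utility.
    def his_utility(t_b):
        m = countess_source(t_b)
        return PAYOFF[(TACTICS[t_b][m], m)][0]
    return max(TACTICS, key=his_utility)

t_b = baron(countess)
print(t_b, countess(t_b))            # blackmail pay
```

Note that baron only ever hands countess a concrete tactic to treat as a fixed fact, so the apparent mutual modelling never regresses and the computation terminates, with the blackmail tactic and the pay response as output.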

    " } }, { "_id": "dW9DTLZcScAPj98eR", "title": "Coffee: When it helps, when it hurts", "pageUrl": "https://www.lesswrong.com/posts/dW9DTLZcScAPj98eR/coffee-when-it-helps-when-it-hurts", "postedAt": "2010-03-10T06:14:56.186Z", "baseScore": 57, "voteCount": 49, "commentCount": 110, "url": null, "contents": { "documentId": "dW9DTLZcScAPj98eR", "html": "
    Many people take caffeine always, or never. But the evidence is clear: for some tasks, drink coffee -- for others, don't.
    \n
    Caffeine:
    \n
    \n\n
    \n
    So:
    \n
    Use  caffeine for short-term performance on a focused task (such as an exam).
    \n
    Avoid  caffeine for tasks that require broad creativity and long-term learning.
    \n
(Disclaimer: The greater alertness, larger short-term memory capacity, and eased recall might make the memories you do make of higher quality.)
    \n
    At least, this is my take. But the issue is convoluted enough that I'm unsure. What do you think?
    " } }, { "_id": "7EBYymAmwGoT59u2t", "title": "Trade makes you responsible, but why?", "pageUrl": "https://www.lesswrong.com/posts/7EBYymAmwGoT59u2t/trade-makes-you-responsible-but-why", "postedAt": "2010-03-09T18:54:23.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "7EBYymAmwGoT59u2t", "html": "

    It’s supposedly bad to exploit poor people by paying them as little as you can get away with in trade.

    \n

Onlookers who don’t offer the needy anything condemn those who offer some non-altruistic benefit through trade because it isn’t enough. It’s interesting that we see this as a fault with the person trading, rather than with everyone. Very few people think they themselves are morally obliged to pay the poor more. I’ve discussed before how misguided this is if we care about the wellbeing of the poor person. But why do people feel this way? Here are some reasons I’ve occasionally heard, though I doubt they are all independently responsible for this curiosity:

    \n
      \n
1. The issue is domination more than wellbeing. A trader forces a poor person into a low value deal by offering when they can’t afford to refuse. At least the rest of us respect poor people enough to mind our own businesses.
2. It is the role of the trader to trade fairly with the poor people. It is the role of the casual observer to have opinions, not to intervene in traders’ doings.
3. Benefiting from another’s misfortune is evil, even if it helps them, so interactions with people who need help should be charitable. It’s not great of most people to ignore poor people, but it’s horrific to go out and benefit from their hardships.
4. Trade is a form of social relation, and people should be nice in their relationships much more than they should be to those they are unconnected to.
    \n\n
    \n
    \n
    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "KNnr5c2YHsRmnFH5C", "title": "Divide individuals for utilitarian libertarianism", "pageUrl": "https://www.lesswrong.com/posts/KNnr5c2YHsRmnFH5C/divide-individuals-for-utilitarian-libertarianism", "postedAt": "2010-03-07T23:59:06.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "KNnr5c2YHsRmnFH5C", "html": "

    or Why I could conceivably support banning smoking part 2 (at the request of Robert Wiblin, who would not support banning smoking)

    \n

    A strong argument for individuals having complete freedom in decisions affecting nobody else is that each person has much better information about what they want and the details of their situation than anyone else does or could. For example it is often argued that people should choose for themselves how much fat to eat without government intervention, as they have intimate knowledge of how much they like eating fat and how much they dislike being fat, and what degree of mockery their social scene will administer and so forth. Not only that, but they have a much stronger incentive to get the decision right than anyone else.

    \n

    A counterargument often made here is that people are just so irrational that they don’t know what’s good for them. Sometimes it’s not clear how anyone else would do better, being people themselves, and people in complicated organizations full of other motives no less. Sometimes it’s not clear whether people are actually that irrational in real life, or if they manage to compensate.

    \n

However, one situation where it seems quite likely that other people would be better informed on your preferences and how an outcome will affect you is when you are making decisions that will affect you far in the future.  The average seventy-five-year-old probably has more in common with the next average seventy-five-year-old than they have in common with their twenty-five-year-old selves, at least in some relevant respects. The stranger people are, the less true this presumably is, but most people are not strange.  So for instance a bunch of old people dying of lung cancer have a much better idea of how much you would like lung cancer than you do when you are weighing it up in the decision to smoke or not much earlier in life.

    \n

    This might not matter if people care a lot about their far future selves, as they can of course seek out people to ask about how horrible or great which experiences are. However even then they are doing no better than anyone else who does that, so there is no argument to be made that they have much more intimate knowledge of their own preferences and situation.

    \n

    You could still argue that I have much more of an interest than anyone else in my own future, if only a slight one compared to how much my future self cares about herself. But I also have a lot to gain by exploiting her and discounting her feelings, so it’s not clear at all from a utilitarian perspective that I should be free to make decisions that only affect myself, but far into the future.

    \n

    The simple way to make this argument is to say that the ‘individual’ is temporally too big a unit to be best ruled over by one part in a (temporal) position of power. The relevant properties of the right sized unit, as far as the usual arguments for libertarianism are concerned, are lots of information and shared care, and according to these a far future self is drifting toward being a different person. You shouldn’t be allowed to externalize onto them as much as you like for the same reasons that go for anyone else.


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "MntNx6fFZ8KMHu5PM", "title": "Selfishness Signals Status", "pageUrl": "https://www.lesswrong.com/posts/MntNx6fFZ8KMHu5PM/selfishness-signals-status", "postedAt": "2010-03-07T03:38:30.190Z", "baseScore": 0, "voteCount": 37, "commentCount": 92, "url": null, "contents": { "documentId": "MntNx6fFZ8KMHu5PM", "html": "

    The \"status\" hypothesis simply claims that we associate one another with a one-dimensional quantity: the perceived degree to which others' behavior can affect our well-being. And each of us behaves toward our peers according to our internally represented status mapping.

    \n

    Imagine that, within your group, you're in a position where everyone wants to please you and no one can afford to challenge you. What does this mean for your behavior? It means you get to act selfish -- focusing on what makes you most pleased, and becoming less sensitive to lower-grade pleasure stimuli.

    \n

    Now let's say you meet an outsider. They want to estimate your status, because it's a useful and efficient value to remember. And when they see you acting selfishly in front of others in your group, they will infer the lopsided balance of power.

    \n

    In your own life, when you interact with someone who could affect your well-being, you do your best to act in a way that is valuable to them, hoping they will be motivated to reciprocate. The thing is, if an observer witnesses your unselfish behavior, it's a telltale sign of your lower status. And this scenario is so general, and so common, that most people learn to be very observant of others' deviations from selfishness.

    \n

    On Less Wrong, we already understand the phenomenon of status signaling -- the causal link from status to behavior, and the inferential link from behavior to status. If we also recognize the role of selfishness as a reliable status signal, we can gain a lot of predictive power about which specific behavioral mannerisms are high- and low-status.

    \n

     

    \n

    Are each of the following high- or low-status?

    \n

    1. Standing up straight

    \n

    2. Saying what's on your mind, without thinking it through

    \n

    3. Making an effort to have a pleasant conversation

    \n

    4. Wearing the most comfortable possible clothes

    \n

    5. Apologizing to someone you've wronged

    \n

    6. Blowing your nose in front of people

    \n

    7. Asking for permission

    \n

    8. Showing off

    \n

     

    \n
    \n

    Answers:

    \n

    1. Standing up straight is low-status, because you're obviously doing it to make an impression on others -- there's no first-order benefit to yourself.

    \n

    2. Saying what's on your mind is high-status, because you're doing something pleasurable. This signal is most reliable when what you say doesn't have any intellectual merit.

    \n

    3. Making an effort to have a pleasant conversation is low-status. It's high-status to talk about what you care about.

    \n

    4. Wearing the most comfortable possible clothes is high-status, because you're clearly benefiting yourself. (Dressing in fashionable clothes is also high-status, through a different inferential pathway.)

    \n

    5. Apologizing is low-status because you're obviously not doing it for yourself.

    \n

    6. Blowing your nose is high-status because it's pleasurable and shows that you aren't affected enough by others to stop.

    \n

    7. Asking for permission is low-status. Compare: recognizing that proceeding would be pleasurable, and believing that you are immune to any negative consequences.

    \n

    8. Showing off is low-status, because it reveals that the prospect of impressing your peers drives you to do things which aren't first-order selfish. (Of course, the thing you are showing off might legitimately signal status.)

    \n
    \n

     

    \n

    Pwno's post makes a good related point: The most reliable high-status signal is indifference. If you're indifferent to a person, it means their behavior doesn't even factor into your expectation of well-being. It means your computational resources are too limited to allocate them their own variable, since its value matters so little. How could you act indifferent if you weren't high-status?

    " } }, { "_id": "qeswkGtzfGg7GSP97", "title": "The strongest status signals", "pageUrl": "https://www.lesswrong.com/posts/qeswkGtzfGg7GSP97/the-strongest-status-signals-4", "postedAt": "2010-03-06T08:13:40.962Z", "baseScore": -5, "voteCount": 29, "commentCount": 48, "url": null, "contents": { "documentId": "qeswkGtzfGg7GSP97", "html": "

    The community’s awareness and strong understanding of status-motivated behavior in humans is clearly evident. However, I still believe the community focuses too much on a small subset of observable status transactions; namely, status transactions that occur between people of approximately the same status level. My goal is to bring attention to the rest of the status game.

Because your attention is a limited resource and carries an opportunity cost, your mind is evolved to constantly be on the look-out for stimuli that may affect your survival and reproductive success and ignore stimuli that don’t. Of course, the stimulus doesn’t really have to affect your fitness, it just needs some experienceable property that correlates with an experience in the ancestral environment that did. But when our reaction to stimuli proves to be non-threatening, through repeated exposure, we eventually become desensitized and stop reacting. Much like how first time drivers are more reactive to stimuli than experienced drivers: the majority of past mental processes are demoted from executive functions and become automated. So it’s safe to posit a sort of adaptive mechanism that filters sensory input to keep your attention-resources spent efficiently. This attention-conserving mechanism is the crux of status transactions.

    When someone is constantly surrounded by people who don’t have power i.e. status over them, their attention-conserving mechanism goes to work. In this case, the stimulus they’re filtering out is “people who share experienceable characteristics with low status people they’re constantly surrounded by.” The stimulus, over time, proved it’s not worthy of being paid attention to. And just like an experienced driver, the person devotes substantially less attention-resources towards the uninteresting stimuli.

    The important thing to note is the behavior that’s a function of how much attention-resources are used. These behaviors can be interpreted as evidence of the relative status levels in an interaction. And because it’s evolutionarily advantageous to recognize your own status level, we’ve evolved a mechanism that detects these behaviors in order to assist us in figuring out our status level. [Notice how this isn’t a chicken or the egg problem].

    This behavior manifests itself in all sorts of ways in humans. Instead of enumerating all the behaviors, think of such behaviors like this:

    Assume an individual optimizes for their comfort in a given experienceable environment. If an additional stimulus (In terms of status, the relevant stimulus is other people) enters their environment and causes them to change their previous behavior, that stimulus has non-zero expected power over the individual. Why else would they change their most comfortable state if the stimulus presented nothing of value or no threat? Of course every stimulus will cause some change in behavior (at least initially) so the interesting question is how much behavior changed. The greater the reactivity from the stimulus, the more expected power the stimulus has over the individual.

    The strongest status signal is observable reactivity; not only because we naturally react to interesting stimuli, but also because we’re evolved to interpret reactivity as evidence for status.

    Most status signaling discussed on Lesswrong is about certain stuff people wear, say, associate with, argue about, etc. What Lesswrongers may not realize is how bothering to change your behavior at all towards other people is inherently status lowering. For instance, if you just engage in an argument with someone you’re telling them they’re important enough to use so much of your attention and effort—even if you “act” high status the whole time. If a rock star simply gazes at their biggest fan, the fan will feel higher status. That’s because just getting the rock star’s attention is an accomplishment.

By engaging in a high-involvement activity with others, like having a conversation, participants assume a plausible upper and lower bound status level for each other. The fact they both care enough to engage in an activity together is evidence they’re approximately the same status level. Because of this, they can’t send any signals that reliably indicate they’re much higher status than the other. So most status signaling they’ll be doing to each other won’t influence their status much.

    The behavior induced by indifference and reactivity to stimuli is where the strong evidence resides. Everything else merely budges what’s already been proven by indifference and reactivity. In short, the sort of status signaling Lesswrong has been concerned with is only the tip of the iceberg.

    " } }, { "_id": "FvBWB7rsdMDPgyzxE", "title": "The strongest status signals", "pageUrl": "https://www.lesswrong.com/posts/FvBWB7rsdMDPgyzxE/the-strongest-status-signals", "postedAt": "2010-03-06T08:03:42.180Z", "baseScore": 1, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "FvBWB7rsdMDPgyzxE", "html": "

    The community’s awareness and strong understanding of status-motivated behavior in humans is clearly evident. However, I still believe the community focuses too much on a small subset of observable status transactions; namely, status transactions that occur between people of approximately the same status level. My goal is to bring attention to the rest of the status game.

    \n


    \n

    ---

    \n


    \n

Because your attention is a limited resource and carries an opportunity cost, your mind is evolved to constantly be on the look-out for stimuli that may affect your survival and reproductive success and ignore stimuli that don’t. Of course, the stimulus doesn’t really have to affect your fitness, it just needs some experienceable property that correlates with an experience in the ancestral environment that did. But when our reaction to stimuli proves to be non-threatening, through repeated exposure, we eventually become desensitized and stop reacting. Much like how first time drivers are more reactive to stimuli than experienced drivers: the majority of past mental processes are demoted from executive functions and become automated. So it’s safe to posit a sort of adaptive mechanism that filters sensory input to keep your attention-resources spent efficiently. This attention-conserving mechanism is the crux of status transactions.

    \n


    \n

    When someone is constantly surrounded by people who don’t have power i.e. status over them, their attention-conserving mechanism goes to work. In this case, the stimulus they’re filtering out is “people who share experienceable characteristics with low status people they’re constantly surrounded by.” The stimulus, over time, proved it’s not worthy of being paid attention to. And just like an experienced driver, the person devotes substantially less attention-resources towards the uninteresting stimuli.

    \n


    \n

    The important thing to note is the behavior that’s a function of how much attention-resources are used. These behaviors can be interpreted as evidence of the relative status levels in an interaction. And because it’s evolutionarily advantageous to recognize your own status level, we’ve evolved a mechanism that detects these behaviors in order to assist us in figuring out our status level. [Notice how this isn’t a chicken and egg problem].

    \n


    \n

    This behavior manifests itself in all sorts of ways in humans. Instead of enumerating all the behaviors, think of such behaviors like this:

    \n


    \n

    Assume an individual optimizes for their comfort in a given experienceable environment. If an additional stimulus (In terms of status, the relevant stimulus is other people) enters their environment and causes them to change their previous behavior, that stimulus has non-zero expected power over the individual. Why else would they change their most comfortable state if the stimulus presented nothing of value or no threat? Of course every stimulus will cause some change in behavior (at least initially) so the interesting question is how much behavior changed. The greater the reactivity from the stimulus, the more expected power the stimulus has over the individual.

    \n


    \n

    The strongest status signal is observable reactivity; not only because we naturally react to interesting stimuli, but also because we’re evolved to interpret reactivity as evidence for status.

    \n

    \n

    Most status signaling discussed on Lesswrong is about certain stuff people wear, say, associate with, argue about, etc. What Lesswrongers are not realizing is how bothering to change your behavior at all towards other people is inherently status lowering. For instance, if you just engage in an argument with someone you’re telling them they’re important enough to use so much of your attention and effort—even if you “act” high status the whole time. If a rock star simply gazes at their biggest fan, the fan will feel higher status. That’s because just getting the rock star’s attention is an accomplishment.

    \n


    \n

By engaging in an activity with others, like having a conversation, participants assume a plausible upper and lower bound status level for each other. The fact they both care enough to engage in an activity together is evidence they’re approximately the same status level. Because of this, they can’t send any signals that reliably indicate they’re much higher status than the other. So most status signaling they’ll be doing to each other won’t influence their status much.

    \n


    \n

    The behavior induced by indifference and reactivity to stimuli is where the strong evidence resides. Everything else merely budges what’s already been proven by indifference and reactivity. In short, the sort of status signaling Lesswrong has been concerned with is only the tip of the iceberg.

    \n

     

    \n

     

    " } }, { "_id": "NHwfLYYw5MfxGP4Zu", "title": "TED Talks: Daniel Kahneman", "pageUrl": "https://www.lesswrong.com/posts/NHwfLYYw5MfxGP4Zu/ted-talks-daniel-kahneman", "postedAt": "2010-03-06T01:45:39.377Z", "baseScore": 24, "voteCount": 19, "commentCount": 22, "url": null, "contents": { "documentId": "NHwfLYYw5MfxGP4Zu", "html": "

    People who have had a painful experience remember it as less painful if the pain tapers off, rather than cutting off sharply at the height of intensity, even if they experience more pain overall. I'd heard of this finding before (from Dan Ariely), but Kahneman uses the finding to throw the idea of \"experiencing self\" vs. \"remembering self\" into sharp relief. He then discusses the far-reaching implications of this dichotomy and our blindness to it.

    \n

    The talk is entitled \"The riddle of experience vs. memory\".

    \n

     

    " } }, { "_id": "BviRaP4przARdmb8b", "title": "Signaling Strategies and Morality", "pageUrl": "https://www.lesswrong.com/posts/BviRaP4przARdmb8b/signaling-strategies-and-morality", "postedAt": "2010-03-05T21:09:25.939Z", "baseScore": 20, "voteCount": 32, "commentCount": 50, "url": null, "contents": { "documentId": "BviRaP4przARdmb8b", "html": "

    I am far from convinced that people in general wish to be seen as caring more about morality than they actually do.  If this was the case, why would the persistent claim that people are -- and, logically, must be -- egoists have so long survived strong counter-arguments?  The argument appears to me to be a way of signaling a lack of excessive and low status moral scruples. 

\n

    It seems to me that the desire to signal as much morality as possible is held by a minority of women and by a small minority of men.  Those people are also the main people who talk about morality.  This is commonly a problem in the development of thought.  People with an interest in verbally discussing a subject may have systematically atypical attitudes towards that subject.  Of course, this issue is further complicated by the fact that people don't agree on what broad type of thing morality is. 

\n

The conflict within philosophy between Utilitarians and Kantians is among the most famous examples of this disagreement. <a href=\"http://people.virginia.edu/~jdh6n/moraljudgment.html\">Haidt’s views on conservative vs. liberal morality</a> are another.  Major, usually implicit disagreements regard whether morality is supposed to serve as a decision system, a set of constraints on a decision system, or a set of reasons that should influence a person along with prudential, honor, spontaneity, authenticity, and other such types of reasons.

\n

It seems to me that people usually want to signal whatever gives others the most reason to respect their interests.  Roughly, this amounts to wanting to signal what Haidt calls conservative morality.  Basically, people would like to signal \"I am slightly more committed to the group’s welfare, particularly to that of its weakest members (caring), than most of its members are.  If you suffer a serious loss of status/well-being I will still help you in order to display affiliation to the group even though you will no longer be in a position to help me.  I am substantially more kind and helpful to the people I like (loyal) and substantially more vindictive and aggressive towards those I dislike (honorable, ignored by Haidt).  I am generally stable in who I like (loyalty and identity, implying low cognitive cost for allies, low variance long term investment).  I am much more capable and popular than most members of the group, demand appropriate consideration, and grant appropriate consideration to those more capable than myself (status/hierarchy).  I adhere to simple taboos (not disgusting) so that my reputation and health are secure and so that I am unlikely to contaminate the reputations or health of my friends.  I currently like you and dislike your enemies but I am somewhat inclined towards ambivalence regarding whether I like you right now so the pay-off would be very great for you if you were to expend resources pleasing me and get me into the stable 'liking you' region of my possible attitudinal space.  Once there, I am likely to make a strong commitment to a friendly attitude towards you rather than wasting cognitive resources checking a predictable parameter among my set of derivative preferences.\"
     
    An interesting point here is that this suggests the existence of a trade-off in the level of intelligence that a person wishes to signal.  In this model, intelligence is necessary to distinguish between costly/genuine and cheap/fake signals of affiliation and to be effective as a friend or as an enemy.  For these reasons, people want to be seen as somewhat more intelligent than the average member of the group.  People also want to appear slightly less intelligent than whoever they are addressing, in order to avoid appearing unpredictable.  

\n

This is plausibly a multiple equilibrium model.  You can appear slightly more or less intelligent with effort, confidence and affectations.  Trying to appear much less intelligent than you are is difficult as you must essentially simulate one system with another system, which implies an overhead cost.  If you can't appear to be a little more intelligent than the higher status members of the group, who typically have modestly above average intelligence, you can't easily be a trusted ally of the people you most need to ally with.  If you can't effectively show yourself to be a predictable ally for individuals, you may want to show yourself to be a predictable ally of the group by predictably following rules (justice) and by predictably serving its collective interests (caring).  That allows less intelligent individuals in the group to outsource the task of scrutinizing your loyalty.  People can more easily communicate indicators of group disloyalty by asserting that you have broken a rule, so people who can't be conservatively moral will attend more closely to rules.  On this model, Haidt's liberalism (which I believe includes libertarianism) is a consequence of difficulty credibly signaling personal loyalties and thus having to overemphasize caring and what he calls justice, by which he means following rules.

\n

In America, the explicit rules that people are given are descended from a frontier setting where independence was very practically important and where morality with very strong acts/omissions distinctions was sufficient to satisfy collective needs with low administrative costs and with easy cheater detection.  Leaving others alone (and implicitly, tolerance) rather than enforcing purity works well when large distances make good neighbors.  As a result, the explicit rules that people are taught de-emphasize status/hierarchy, disgust, to a lesser degree loyalty and identity, and to a still lesser extent caring.  When the influence of justice, e.g. rules, is emphasized by difficulty in behaving predictably, liberal morality, or ultimately libertarian morality, is the result.

    " } }, { "_id": "ZouugGbM4SqTEQBZW", "title": "The fallacy of work-life compartmentalization", "pageUrl": "https://www.lesswrong.com/posts/ZouugGbM4SqTEQBZW/the-fallacy-of-work-life-compartmentalization", "postedAt": "2010-03-04T22:59:01.986Z", "baseScore": 13, "voteCount": 41, "commentCount": 92, "url": null, "contents": { "documentId": "ZouugGbM4SqTEQBZW", "html": "

    Related to: Outside the Laboratory, Ghosts in the Machine

    \n

    We've all observed how people can be very smart in some contexts, and stupid in others. People compartmentalize, which has been previously hypothesized as the reason for some epic failures to understand things that should be obvious.

    \n

    It's also important to remember that we are not immune. To that end, I want to start off by considering some comfortable examples, where someone else is the butt of the joke, and then consider examples which might make you more uneasy.

    \n
    \n

    \"The mere presence of a computer can short circuit normally intelligent people's brains.\" -- Computer Stupidities

    \n
    \n

    The reassuring cases concern smart people who become stupid when confronted with our area of expertise. If you're a software developer, that tends to be people who can't figure out something basic about Windows. \"I've tried closing the app and restarting, and I've tried rebooting, and it doesn't work, I still can't find my file.\" You take a deep breath, refrain from rolling your eyes and asking what the heck their mental model is, what they think closing-and-restarting has to do with a misplaced file, and you go looking for some obvious places, like the Desktop, where they keep all their files but somehow neglected to look this time. If it's not there, chances are it will be in My Documents.

    \n

    It's sometimes draining to be called on for this kind of thing, but it can be reassuring. My dad is a high calibre mathematician, dealing in abstractions at a level that seems stratospheric compared to my rusty-college-math. But we sometimes get into conversations like the above, and I get a slightly guilty self-esteem boost from them.

    \n

    Now, the harder question: how do we compartmentalize?

    \n

    \n

    I propose work-life compartmentalization as a case study. \"Work-life balance\" is how we rationalize that separation. It's OK, we think, to put up with some unpleasantness from 9 to 5, as long as we can look forward to getting home, kicking our shoes off and relaxing, alone or among family or friends. And perhaps that's reasonable enough.

    \n

    But this logic leads many people to tolerate: stress, taking orders, doing work that we think is meaningless, filling out paperwork that will never actually be read, pouring our energy into projects we're certain are failure-bound but never speaking up about that to avoid being branded \"not a team player\", being bored in endless meetings which are thinly disguised status games, feeling unproductive and stupid but grinding on anyway because it's \"office hours\" and that's when we are supposed to work, and so on.

    \n

And those are only the milder symptoms. Workplace bullying, crunch mode, and dodgy workplace ethics are worryingly prevalent. (There are large variations in this type of workplace toxicity; some of us are lucky enough never to catch more than a whiff of it, while some of us unfortunately are exposed to a high degree. That these are real and widespread phenomena is evidenced by the success of TV shows showing office life at its darkest; humor is a defense mechanism.)

    \n

    Things snapped into focus for me one day when a manager asked me to lie to a client about my education record in order to get a contract. I refused, expecting to be fired; that didn't happen. Had I really been at risk? The incident anyway fueled a resolve to try and apply at work the same standards that I do in life - when I think rationally.

    \n

    In everyday life, rationality suggests we try to avoid boredom, tells us it's unwise to make promises we can't keep, to avoid getting entangled in our own lies, and so on. What might happen if we tried to apply the same standards in the workplace?

    \n

    Instead of tolerating boredom in meetings, you may find it more effective to apply a set of criteria to any meeting - does it have an agenda, a list of participants, a set ending time and a known objective - and not show up if it doesn't meet them.

    \n

You might refuse to give long-term schedule estimates for tasks that you don't really believe in, and instead try breaking the task down, working in short timeboxed iterations and updating your estimates based on observed reality, committing only to as much work as is compatible with maintaining peak productivity.

    \n

    You might stop tolerating the most egregious status games that go on in the workplace, and strive instead for effective collective action: teamwork.

    \n

    Those would be merely sane behaviours. It is, perhaps, optional to extend this thinking to actually challenging the usual workplace norms, and starting to do things differently just because they would be better that way. The world is crazy, and that includes the world of work. People who insist on not checking their brain at the door of the workplace are still few and far between, and to really change a workplace it takes a critical mass of them.

    \n

    But I've seen it happen, and helped it happen, and the results make me want to find out more about other areas where I still compartmentalize. The prospect is a little scary - I still find it unpleasant to find out I've been stupid - but also exciting.

    " } }, { "_id": "mXPuYeQz29ystznrZ", "title": "The Graviton as Aether", "pageUrl": "https://www.lesswrong.com/posts/mXPuYeQz29ystznrZ/the-graviton-as-aether", "postedAt": "2010-03-04T22:13:59.800Z", "baseScore": 16, "voteCount": 41, "commentCount": 136, "url": null, "contents": { "documentId": "mXPuYeQz29ystznrZ", "html": "
    \n

    Well, first:  Does any collapse theory have any experimental support?  No.

    \n

    With that out of the way...

    \n

    If collapse actually worked the way its adherents say it does, it would be:

    \n
      \n
1. The only non-linear evolution in all of quantum mechanics.
2. The only non-unitary evolution in all of quantum mechanics.
3. The only non-differentiable (in fact, discontinuous) phenomenon in all of quantum mechanics.
4. The only phenomenon in all of quantum mechanics that is non-local in the configuration space.
5. The only phenomenon in all of physics that violates CPT symmetry.
6. The only phenomenon in all of physics that violates Liouville's Theorem (has a many-to-one mapping from initial conditions to outcomes).
7. The only phenomenon in all of physics that is acausal / non-deterministic / inherently random.
8. The only phenomenon in all of physics that is non-local in spacetime and propagates an influence faster than light.
    \n

    WHAT DOES THE GOD-DAMNED COLLAPSE POSTULATE HAVE TO DO FOR PHYSICISTS TO REJECT IT?  KILL A GOD-DAMNED PUPPY?

    \n
    \n

    - Eliezer Yudkowsky, Collapse Postulates

    \n

    In the olden days of physics, circa 1900, many prominent physicists believed in a substance known as aether. The principle was simple: Maxwell's equations of electromagnetism had shown that light was a wave, and light followed many of the same equations as sound waves and water waves. However, every other kind of wave- sound waves, water waves, waves in springs- needs some sort of medium for its transmission. A \"wave\" is not really a physical object; it is just a disturbance of some other substance. For instance, if you throw a rock into a pond, you cannot pluck the waves out of the pond and take them home with you in your backpack, because the \"waves\" are just peaks and troughs in the puddle of water (the medium). Hence, there should be some sort of medium for light waves, and the physicists named this medium \"aether\".

    \n

    However, difficulties soon developed. If you have a jar, you can pump the air out of the jar, and then the jar will no longer transmit sound, demonstrating that the wave medium (the air) has been removed. But there was no way to remove the aether from a jar; no matter what the experimentalists did, you could still shine light through it. There was, in fact, no way of detecting, altering, or experimenting with aether at all. Physicists knew that aether must be unlike all other matter, because it could apparently pass through closed containers made of any substance. And finally, the Michelson-Morley experiment showed that the \"aether\" was always stationary relative to Earth, even though the Earth changed direction every six months as it moved about in its orbit! Shortly thereafter, the inconsistencies were resolved with Albert Einstein's Theory of Special Relativity, and everyone realized that aether was imaginary.

    \n

    Shortly thereafter, during the 20th century, physicists discovered two new forces of nature: the strong nuclear force and the weak nuclear force. These two forces, as well as electromagnetism, could be described very well on the quantum level: they were created by the influence of mediator particles called (respectively) gluons, W and Z bosons, and photons, and these particles obeyed the laws of quantum mechanics just like electrons and mesons did. The description of these three forces, as well as the particles they act upon, has been neatly unified in a theory of physics known as the Standard Model, which has been our best known description of the universe for thirty years now.

    \n

    However, gravity is not a part of this model. Making an analogy to the other forces, physicists have proposed a mediator particle known as the \"graviton\". The graviton is thought to be similar to the photon, the gluon, and the W and Z bosons, except that it is massless and has spin 2. I posit that the \"graviton\" is essentially the same theory as the \"aether\": a misguided attempt to explain something by reference to similar-seeming things that were explained in the same way. Consider the following facts:

    \n\n

    And, with reference to the graviton itself:

    \n\n

    So, what's really going on here? I don't know. I'm not Albert Einstein. But I suspect it will take someone like him - someone brilliant, very good at physics, yet largely outside the academic system - to resolve this mess, and tell us what's really happening.

    " } }, { "_id": "FwkdQTxZm8LSkhdeG", "title": "Priors and Surprise", "pageUrl": "https://www.lesswrong.com/posts/FwkdQTxZm8LSkhdeG/priors-and-surprise", "postedAt": "2010-03-03T08:27:53.554Z", "baseScore": 23, "voteCount": 26, "commentCount": 32, "url": null, "contents": { "documentId": "FwkdQTxZm8LSkhdeG", "html": "\n

    I don’t want to be too dogmatic about this claim, but Godzilla is unrealistic.  I don’t want to be too non-dogmatic about this claim either.  OK then, just how dogmatic should I be?  I have all sorts of reasons for thinking that skyscraper sized lizards or dinosaurs don’t actually exist.  Honestly, the most important of these is probably that none of the people who I imagine would know if they did exist seem to believe in them.  I never hear any mention of them in the news, in history books, etc, and I don’t see their effects in the national death statistics.  No industries seem to exist to deal with their rampages, and no oil or shipping companies lose stock value from lizard attacks.  Casually, at least, Godzilla attacks don’t seem like the sort of basic fact about the world that people could just overlook.  How confident should I be that Godzilla type creatures don't exist? 

    \n

    I can also fairly easily recognize good biological reasons not to expect there to be giant rampaging lizards.  The square/cube law, in its many manifestations, is the most basic of these, but by itself is not completely decisive.  I can imagine physical workarounds that would allow sequoia giganticus sized reptiles, but not without novel bio-machinery that would take a long time to evolve and would surely be found in many other organisms.  I can even vaguely imagine ways in which biology might prove resistant to conventional military weaponry and ecological niches and lifestyles that might support both such biology and such size, though much of my knowledge of Earth’s ecosystems would have to be re-written.    For all that, if I lived in a world where essentially all authorities did refer to the activities of godzilla giganticus  I would probably accept that they were probably correct regarding its existence.  What should a hypothetical person who lived in a world where the existence of Godzilla type creatures was common knowledge and was regarded as an ordinary non-numinous fact about the world believe?
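    \n

    To make the square/cube point concrete, here is a minimal arithmetic sketch in Python; the 2 m and 100 m body lengths are my own illustrative assumptions, not figures from the post.

# Rough square/cube illustration: scale a ~2 m lizard up to a ~100 m Godzilla.
scale = 100 / 2                  # linear scaling factor (assumed sizes)
mass_factor = scale ** 3         # weight grows with volume: 125,000x
strength_factor = scale ** 2     # bone and muscle cross-section grows only 2,500x

# Load per unit of supporting tissue rises by the ratio of the two:
print(mass_factor / strength_factor)   # 50.0, so each bone bears ~50x more stress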

    \n

    Godzilla would be considerably more perplexing than thunderstones, and would have to be considerably better documented to be credible.  Even with the strongest documentation I would have substantial unresolved questions, inferring that Godzilla’s native ecosystem must be quite different from any known (possibly inferring that the details are classified), and even wondering whether Godzilla was a biological creature at all as opposed to, for instance, a giant robot left behind by an advanced and forgotten civilization, a line of inquiry that would greatly increase my credence in secret history of all kinds.  For the most part though, I would probably go about life as normal.  Even Natural Selection, the most damaged part of my world-view, would endure as a great intellectual triumph explaining the origins of almost all of Earth’s life forms.  Only peripheral facts, such as distant history and the nature of some exotic ecosystems would be deeply called into question, and such facts are not tightly integrated with the broader edifice of science.  In a conversation with a hypothetical Michael Vassar who believed in Godzilla, the issue would typically not come up.  Science in general would not be called into question in my mind, but should it be? 

    \n

    " } }, { "_id": "grLHzwB3pMXe7zdbQ", "title": "Interesting Peter Norvig interview", "pageUrl": "https://www.lesswrong.com/posts/grLHzwB3pMXe7zdbQ/interesting-peter-norvig-interview", "postedAt": "2010-03-03T00:59:04.214Z", "baseScore": 6, "voteCount": 11, "commentCount": 14, "url": null, "contents": { "documentId": "grLHzwB3pMXe7zdbQ", "html": "

    (Sorry this is mostly a link instead of a post, but I think it will be interesting to the FAI folks here)

    \n

    I helped arrange this interview with Peter Norvig:

    \n

    http://www.reddit.com/r/blog/comments/b8aln/peter_norvig_answers_your_questions_ask_me/

    \n

    I think the answer to AGI question 4 is telling, but judge for yourself. (BTW, the 'components' Peter referred to are probabilistic relational learning and hierarchical modeling. He singled out these two in his Singularity Summit talk.)

    " } }, { "_id": "kqdeufnJq9HdWmpaY", "title": "Individual vs. Group Epistemic Rationality", "pageUrl": "https://www.lesswrong.com/posts/kqdeufnJq9HdWmpaY/individual-vs-group-epistemic-rationality", "postedAt": "2010-03-02T21:46:11.930Z", "baseScore": 36, "voteCount": 28, "commentCount": 54, "url": null, "contents": { "documentId": "kqdeufnJq9HdWmpaY", "html": "

    It's common practice in this community to differentiate forms of rationality along the axes of epistemic vs. instrumental, and individual vs. group, giving rise to four possible combinations. I think our shared goal, as indicated by the motto \"rationalists win\", is ultimately to improve group instrumental rationality. Generally, improving each of these forms of rationality also tends to improve the others, but sometimes conflicts arise between them. In this post I point out one such conflict between individual epistemic rationality and group epistemic rationality.

    \n

    We place a lot of emphasis here on calibrating individual levels of confidence (i.e., subjective probabilities), and on the idea that rational individuals will tend to converge toward agreement about the proper level of confidence in any particular idea as they update upon available evidence. But I argue that from a group perspective, it's sometimes better to have a spread of individual levels of confidence about the individually rational level. Perhaps paradoxically, disagreements among individuals can be good for the group.

    \n

    A background fact that I start with is that almost every scientific idea that humanity has ever come up with has been wrong. Some are obviously crazy and quickly discarded (e.g., every perpetual motion proposal), while others improve upon existing knowledge but are still subtly flawed (e.g., Newton's theory of gravity). If we accept that taking multiple approaches simultaneously is useful for solving hard problems, then upon the introduction of any new idea that is not obviously crazy, effort should be divided between extending the usefulness of the idea by working out its applications, and finding/fixing flaws in the underlying math, logic, and evidence.

    \n

    Having a spread of confidence levels in the new idea helps to increase individual motivation to perform these tasks. If you're overconfident in an idea, then you would tend to be more interested in working out its applications. Conversely, if you're underconfident in it (i.e., are excessively skeptical), you would tend to work harder to try to find its flaws. Since scientific knowledge is a public good, individually rational levels of motivation to produce it are almost certainly too low from a social perspective, and so these individually irrational increases in motivation would tend to increase group rationality.

    \n

    Even amongst altruists (at least human ones), excessive skepticism can be a virtue, due to the phenomenon of belief bias, in which \"someone's evaluation of the logical strength of an argument is biased by their belief in the truth or falsity of the conclusion\". In other words, given equal levels of motivation, you're still more likely to spot a flaw in the arguments supporting an idea if you don't believe in it. Consider a hypothetical idea, which a rational individual, after taking into account all available evidence and arguments, would assign a probability of .999 of being true. If it's a particularly important idea, then on a group level it might still be worth devoting the time and effort of a number of individuals to try to detect any hidden flaws that may remain. But if all those individuals believe that the idea is almost certainly true, then their performance in this task would likely suffer compared to those who are (irrationally) more skeptical.
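    \n

    As a toy illustration of this point (the 5% and 20% per-reviewer detection rates below are invented numbers, not anything from the post), a handful of genuinely skeptical reviewers can make finding a real flaw far more likely:

def p_flaw_found(n_reviewers, p_detect_each):
    '''Probability that at least one of n independent reviewers spots a real flaw.'''
    return 1 - (1 - p_detect_each) ** n_reviewers

# Assumed rates: believers spot the flaw 5% of the time each, skeptics 20%.
print(p_flaw_found(10, 0.05))  # ~0.40 if all ten reviewers believe the idea
print(p_flaw_found(10, 0.20))  # ~0.89 if all ten are (irrationally) skeptical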

    \n

    Note that I'm not arguing that our current \"natural\" spread of confidence levels is optimal in any sense. It may well be that the current spread is too wide even on a group level, and that we should work to reduce it, but I think it can't be right for us to aim right away for an endpoint where everyone literally agrees on everything.

    " } }, { "_id": "osNyB9LdEiaZLkEwv", "title": "Meetup: Bay Area: Sunday, March 7th, 7pm", "pageUrl": "https://www.lesswrong.com/posts/osNyB9LdEiaZLkEwv/meetup-bay-area-sunday-march-7th-7pm", "postedAt": "2010-03-02T21:18:28.332Z", "baseScore": 8, "voteCount": 7, "commentCount": 44, "url": null, "contents": { "documentId": "osNyB9LdEiaZLkEwv", "html": "

    Overcoming Bias / Less Wrong meetup in the San Francisco Bay Area at SIAI House on March 7th, 2010, starting at 7PM.

    \n

    Eliezer Yudkowsky, Alicorn, and  Michael Vassar will be present.

    \n

    Some other guests - Wei Dai, Stuart Armstrong, and Nick Tarleton - will also be there, following our short Decision Theory mini-workshop.

    " } }, { "_id": "ufBYjpi9gK6uvtkh5", "title": "For progress to be by accumulation and not by random walk, read great books", "pageUrl": "https://www.lesswrong.com/posts/ufBYjpi9gK6uvtkh5/for-progress-to-be-by-accumulation-and-not-by-random-walk", "postedAt": "2010-03-02T08:11:51.034Z", "baseScore": 85, "voteCount": 85, "commentCount": 117, "url": null, "contents": { "documentId": "ufBYjpi9gK6uvtkh5", "html": "

    This recent blog post strikes me as an interesting instance of a common phenomenon.  The phenomenon looks like the following: an intellectual, working within the assumption that the world is not mad (an assumption not generally found outside of the Anglo-American Enlightenment intellectual tradition), notices that some feature of the world would only make sense if the world were mad.  This intellectual responds by denouncing as silly one of the few features of this vale of tears that is, while not intelligently designed, at least structured by generalized evolution rather than by entropy.  The key line in the post is 

    \n
    \n

    \"Conversely in all those disciplines where we have reliable quantatative measurements of progress (with the obvious exception of history) returning to the original works of past great thinkers is decidedly unhelpful.\"

    \n
    \n

    I agree with the above statement, and find that the post makes a compelling argument for it.  My only caveat is that we essentially never have quantitative measures of progress.  Even in physics, when one regards not the theory but the technique of actually doing physics, tools and modes of thought rise and fall for reasons of fashion, and once-widespread techniques that remain useful fall into disuse.

    \n

    Other important techniques, like the ones used to invent calculus in the first place, are never adequately articulated by those who use them and thus never come into general use.  One might argue that Newton didn't use any technique to invent calculus, just a very high IQ or some other unusual set of biological traits.  This, however, doesn't explain why a couple of people invented calculus at about the same time and place, especially given the low population of that time and place compared to the population of China over the many centuries when China was much more civilized than Europe. 

    \n

    It seems likely to me that in cases like the invention of calculus, looking at the use of such techniques can contribute to their development in at least crude form.  By analogy, even the best descriptions of how to do martial arts are inadequate to provide expertise without practice, but experience watching experts fight is a valuable complement to training by the relatively inept.  If one wants to know the Standard Model, sure, study it directly, but if you want to actually understand how to do the sorts of things that Newton did, you would be advised to read him, Feynman and yes, Plato too, as Plato also did things which contributed greatly to the development of thought. 

    \n

    Anyone who has ever had a serious intellectual following is worth some attention.  Repeating errors is the default, so it's valuable to look at ideas that were once taken seriously but are now recognized as errors.  This is basically the converse of studying past thinkers to understand their techniques.

    \n

    Outside of physics, the evidence for progress is far weaker.  Many current economists think that today we need to turn back to Keynes to find the tools that he developed but which were later abandoned or simply never caught on.  A careful reading of Adam Smith and of Ben Franklin reveals them to use tools which did catch on centuries after they published, such as economic models of population growth which would have predicted the \"demographic transition\" which surprised almost all demographers just recently.  Likewise, much in Darwin is part of contemporary evolutionary theory but was virtually unknown by evolutionary biologists half a century ago.

    \n

    As a practical matter a psychologist who knew the work of William James as well as that of B.F. Skinner or an economist who knows Hayek and Smith as well as Samuelson or Keynes is always more impressive than one who knows only the 'modern' field as 'modern' was understood by the previous generation.  Naive induction strongly suggests that like all previous generations of social scientists, today's social scientists who specialize in contemporary theories will be judged by the next generation, who will have an even more modern theory, to be inferior to their more eclectic peers.  Ultimately one has to look at the empirical question of the relative per-capita intellectual impressiveness of people who study only condensations and people who study original works.  To me, the latter looks much much greater in most fields, OK, in every field that I can quickly think of except for astronomy. 

    \n

    To the eclectic scholar of scholarly madness, progress is real.  This decade's sludge contains a few gems that weren't present in the sludge of any previous decade.  To the person who assumes that fields like economics or psychology effectively condense the findings of previous generations as background assumptions to today's work, however, progress means replacing one pile of sludge with another fashionable sludge-pile of similar quality.  And to those few whom the stars bless with the coworkers of those who study stars?  Well I have only looked at astronomy as through a telescope.  I haven't seen the details on the ground.  That said, for them maybe, just maybe, I can endorse the initial link.  But then again, who reads old books of astronomy?

    " } }, { "_id": "nJf3q38sQG2TT2WTL", "title": "Rationality quotes: March 2010", "pageUrl": "https://www.lesswrong.com/posts/nJf3q38sQG2TT2WTL/rationality-quotes-march-2010", "postedAt": "2010-03-01T10:26:30.434Z", "baseScore": 6, "voteCount": 6, "commentCount": 262, "url": null, "contents": { "documentId": "nJf3q38sQG2TT2WTL", "html": "

    This is our monthly thread for collecting these little gems and pearls of wisdom, rationality-related quotes you've seen recently, or had stored in your quotesfile for ages, and which might be handy to link to in one of our discussions.

    \n" } }, { "_id": "3znTroyarywmiA9f3", "title": "Open Thread: March 2010", "pageUrl": "https://www.lesswrong.com/posts/3znTroyarywmiA9f3/open-thread-march-2010", "postedAt": "2010-03-01T09:25:07.423Z", "baseScore": 9, "voteCount": 8, "commentCount": 680, "url": null, "contents": { "documentId": "3znTroyarywmiA9f3", "html": "

    We've had these for a year, I'm sure we all know what to do by now.

    \n

    This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

    " } }, { "_id": "v5AJZyEY7YFthkzax", "title": "Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future", "pageUrl": "https://www.lesswrong.com/posts/v5AJZyEY7YFthkzax/hedging-our-bets-the-case-for-pursuing-whole-brain-emulation", "postedAt": "2010-03-01T02:32:33.652Z", "baseScore": 14, "voteCount": 24, "commentCount": 248, "url": null, "contents": { "documentId": "v5AJZyEY7YFthkzax", "html": "

    It is the fashion in some circles to promote funding for Friendly AI research as a guard against the existential threat of Unfriendly AI. While this is an admirable goal, the path to Whole Brain Emulation is in many respects more straightforward and presents fewer risks. Accordingly, by working towards WBE, we may be able to \"weight\" the outcome probability space of the singularity such that humanity is more likely to survive.

    \n

    \n

    One of the potential existential risks in a technological singularity is that the recursively self-improving agent might be inimical to our interests, either through actual malevolence or \"mere\" indifference towards the best interests of humanity. Eliezer has written extensively on how a poorly-designed AI could lead to this existential risk. This is commonly termed Unfriendly AI.

    \n

    Since the first superintelligence can be presumed to have an advantage over any subsequently-arising intelligences, Eliezer and others advocate funding research into creating Friendly AI. Such research must not only reverse-engineer consciousness, but also human notions of morality. Unfriendly AI could potentially require only sufficiently fast hardware to evolve an intelligence via artificial life, as depicted in Greg Egan's short story \"Crystal Nights\", or it may be created inadvertently by researchers at the NSA or a similar organization. It may be that creating Friendly AI is significantly harder than creating Unfriendly (or Indifferent) AI, perhaps so much so that we are unlikely to achieve it in time to save human civilization.

    \n

    Fortunately, there's a short-cut we can take. We already have a great many relatively stable and sane intelligences. We merely need to increase their rate of self-improvement. As far as I can tell, developing mind uploading via WBE is a simpler task than creating Friendly AI. If WBE is fast enough to constitute an augmented intelligence, then our augmented scientists can trigger the singularity by developing more efficient computing devices. An augmented human intelligence may have a slower \"take-off\" than a purpose-built intelligence, but we can reasonably expect it to be much easier to ensure such a superintelligence is Friendly. In fact, this slower take-off will likely be to our advantage; it may increase our odds of being able to abort an Unfriendly singularity.

    \n

    WBE may also be able to provide us with useful insights into the nature of consciousness, which will aid Friendly AI research. Even if it doesn't, it gets us most of the practical benefits of Friendly AI (immortality, feasible galactic colonization, etc) and makes it possible to wait longer for the rest of the benefits.

    \n

    But what if I'm wrong? What if it's just as easy to create an AI we think is Friendly as it is to upload minds into WBE? Even in that case, I think it's best to work on WBE first. Consider the following two worlds: World A creates an AI its best scientists believes is Friendly and, after a best-effort psychiatric evaluation (for whatever good that might do) gives it Internet access. World B uploads 1000 of its best engineers, physicists, psychologists, philosophers, and businessmen (someone's gotta fund the research, right?). World B seems to me to have more survivable failure cases; if some of the uploaded individuals turn out to be sociopaths, the rest of them can stop the \"bad\" uploads from ruining civilization. It seems exceedingly unlikely that we would select a large enough group of sociopaths that the \"good\" uploads can't keep the \"bad\" uploads in check.
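    \n

    Here is a back-of-the-envelope version of that \"exceedingly unlikely\" claim, using an assumed 4% sociopathy rate among the selected candidates and treating selections as independent (both assumptions are mine, not the author's):

from math import comb

def p_at_least(n, k_min, p):
    '''P(X >= k_min) for X ~ Binomial(n, p).'''
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# Chance that even 10% of 1000 uploads turn out to be sociopaths (4% assumed base rate):
print(p_at_least(1000, 100, 0.04))  # on the order of 1e-16, i.e. effectively zero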

    \n

    Furthermore, the danger of uploading sociopaths (or people who become sociopathic when presented with that power) is also a danger that the average person can easily comprehend, compared to the difficulty of ensuring Friendliness of an AI. I believe that the average person is also more likely to recognize where attempts at safeguarding an upload-triggered singularity may go wrong.

    \n

    The only downside of this approach I can see is that an upload-triggered Unfriendly singularity may cause more suffering than an Unfriendly AI singularity; sociopaths may be presumed to have more interest in torture of people than a paperclip-optimizing AI would have.

    \n

    Suppose, however, that everything goes right, the singularity occurs, and life becomes paradise by our standards. Can we predict anything of this future? It's a popular topic in science fiction, so many people certainly enjoy the effort. Depending on how we define a \"Friendly singularity\", there could be room for a wide range of outcomes.

    \n

    Perhaps the AI rules wisely and well, and can give us anything we want, \"save relevance\". Perhaps human culture adapts well to the utopian society, as it seems to have done in the universe of The Culture. Perhaps our uploaded descendants set off to discover the secrets of the universe. I think the best way to ensure a human-centric future is to be the self-improving intelligences, instead of merely catching crumbs from the table of our successors.

    \n

    In my view, the worst kind of \"Friendly\" singularity would be one where we discover we've made a weakly godlike entity who believes in benevolent dictatorship; if we must have gods, I want them to be made in our own image, beings who can be reasoned with and who can reason with one another. Best of all, though, is that singularity where we are the motivating forces, where we need not worry if we are being manipulated \"in our best interest\".

    \n

    Ultimately, I want the future to have room for our mistakes. For these reasons, we ought to concentrate on achieving WBE and mind uploading first.

    " } }, { "_id": "m9S5qQrLDhybvx4f5", "title": "Great Product. Lousy Marketing.", "pageUrl": "https://www.lesswrong.com/posts/m9S5qQrLDhybvx4f5/great-product-lousy-marketing", "postedAt": "2010-02-28T09:33:58.318Z", "baseScore": 18, "voteCount": 26, "commentCount": 71, "url": null, "contents": { "documentId": "m9S5qQrLDhybvx4f5", "html": "

    The product of Less Wrong is truth. However, there seems to be a reluctance of the personality types here - myself included - to sell that product. Here's my evidence:

    \n
    Yvain said: But the most important reason to argue with someone is to change his mind. ... I make the anecdotal observation that a lot of smart people are very good at winning arguments in the first sense [(logic)], and very bad at winning arguments in the second sense [(persuasion)]. Does that correspond to your experience?
    \n

    \n
    Eliezer said: I finally note, with regret, that in a world containing Persuaders, it may make sense for a second-order Informer to be deliberately eloquent if the issue has already been obscured by an eloquent Persuader - just exactly as elegant as the previous Persuader, no more, no less.  It's a pity that this wonderful excuse exists, but in the real world, well...
    \n

    \n
    Robin Hanson said: So to promote rationality on interesting important topics, your overwhelming consideration simply must be: on what topics will the world’s systems for deciding who to hear on what listen substantially to you? Your efforts to ponder and make progress will be largely wasted if you focus on topics where none of the world’s “who to hear on what” systems rate you as someone worth hearing. You must not only find something worth saying, but also something that will be heard.
    \n

    We actually label many highly effective persuasive strategies that can be used to market our true ideas as \"dark arts\". What's the justification for this negative branding? A necessary evil is not evil. Even if - and this is a huge if - our future utopia is free of dark arts, that's not the world we live in today. Choosing not to use them is analogous to a peacenik wanting to rid the world of violence by suggesting that police not use weapons.

    \n

    We treat our dislike of dark arts as if it's a simple corollary of the axiom of the virtue of truth. Does this mean we assume the ends (more people believe the truth) doesn't justify the means (persuasion to the truth via exploiting cognitive biases)? Or are we just worried about being hypocrites? Whatever the reason, such an impactful assumption deserves an explanation. Speaking practically, the successful practice of dark arts requires the psychological skill of switching hats, to use Edward de Bono's terminology. While posting on Less Wrong, we can avoid and are in fact praised for avoiding dark arts, but we need to switch up in other environments, and that's difficult. Frankly, we're not great at it, and it's very tempting to externalize the problem and say \"the art is bad\" rather than \"we're bad at the art\".

    \n

    Our distaste for rhetorical tactics, both aesthetically and morally, profoundly affects the way we communicate. That distaste is tightly coupled with the mental habit of always interpreting the value of what is said purely for its informational content, logical consistency, and insight. I'm basing the following question on my own introspection, but I wonder if this almost religiously entrenched mental habit could make us blind to the value of the art of persuasion? Let's imagine for a moment, the most convincing paragraph ever written. It was truly a world-wonder of persuasion - it converted fundamentalist Christians into atheists, suicide bombers into diplomats, and Ann Coulter-4-President supporters into Less Wrong sycophants. What would your reaction to the paragraph be? Would you \"up-vote\" this work of genius? No way. We'd be competing to tell the fundamentalist Christian that there were at least three argument fallacies in the first sentence, we'd explain to the suicide bomber that the rhetoric could be used equally well to justify blowing us all up right now, and for completeness we'd give the Ann Coulter supporter a brief overview of Bayesianism.

    " } }, { "_id": "iYzZS4h8a7uGNhouE", "title": "Splinters and Wooden Beams", "pageUrl": "https://www.lesswrong.com/posts/iYzZS4h8a7uGNhouE/splinters-and-wooden-beams", "postedAt": "2010-02-28T08:26:27.931Z", "baseScore": 5, "voteCount": 14, "commentCount": 50, "url": null, "contents": { "documentId": "iYzZS4h8a7uGNhouE", "html": "

    I recently told a friend that I was planning to write (and post online) a paper that rigorously refutes every argument1 I’ve ever heard that homosexuality is inherently immoral. The purpose of this effort was to provide a handy link for people who want to persuade family members or friends who are marginal believers of the homosexuality-is-immoral theory. As a key part of this effort, I intended to demonstrate that the predominant religious arguments against homosexuality cause contradictions within the religion. For example, the tortured reasoning of the Roman Catholic Church2 goes like this:

    \n
      \n
    1. Sex without marriage is forbidden.
    2. Marriage is only for those who are “open to natural reproduction”.
    3. Gays can’t reproduce (in an acceptably “natural” way) and therefore gay sex is not “open to reproduction”.
    4. Since gays cannot be open to reproduction, they cannot marry.
    5. Since they cannot marry, they can’t have sex.
    \n

    This argument seems to be logically valid, if you accept the insane assumptions. Bizarrely, though, the Catholic Church also recommends a practice called \"Natural Family Planning\", in which married couples who want to prevent pregnancy have sex only when the woman is believed to be infertile! To be consistent, the Catholic Church would have to oppose such deliberate efforts to prevent natural reproduction.

    \n

    My paper was going to be full of little examples like this, of how opposing homosexuality leads to contradictions within Christian Virtue Ethics, established interpretations of the Koran, or whatever. However, my friend told me that he thought my efforts were misguided. Why try curing these folks of the splinter of intolerance, when they still have the wooden beam3 of theism in their eyes?

    \n

    After all, if someone you know is planning to quit her job and move to Alaska because her horoscope told her that Tauruses need more spontaneity, you shouldn't tell her to stay because she's actually an Aries. You tell her to stay because astrology is provably bogus.

    \n

    I'm uncertain. Most of those wooden beams are staying right where they are for the foreseeable future. But attitudes toward homosexuality are changing relatively quickly. On the other hand, there is something to be said for striking at the root of the problem. Overall, I'm leaning toward making these smaller arguments instead of trying to convert people to atheism.

    \n

    **A lot of people have said they don't think this approach will be very effective. I mentioned in the beginning of the article that the purpose was to help others persuade marginal believers of the homosexuality-is-immoral theory.

    \n

     

    \n

    1. Most of these arguments are religion-based.

    \n

    2. Ironically, the Catholic Church is an easier target because they have the decency to actually lay their arguments out formally (though often in gratuitous Latin), since they believe that the Church's dogma can always be confirmed using pure Reason. Protestant churches tend to simply cite scripture--and they believe the scripture because they have faith. Yes this is a tautology. Actually, I wonder if the Protestants' refusal to justify their beliefs rationally protects them from Escher-brain effect. Faith claims can be neatly compartmentalized, sequestered away, protecting the rest of the mind.

    \n

    3. A reference to Matthew 7:3.

    " } }, { "_id": "SRaRHemkbsHWzzbPN", "title": "Mental Crystallography", "pageUrl": "https://www.lesswrong.com/posts/SRaRHemkbsHWzzbPN/mental-crystallography", "postedAt": "2010-02-27T01:04:54.333Z", "baseScore": 33, "voteCount": 46, "commentCount": 60, "url": null, "contents": { "documentId": "SRaRHemkbsHWzzbPN", "html": "

    Brains organize things into familiar patterns, which are different for different people.  This can make communication tricky, so it's useful to conceptualize these patterns and use them to help translation efforts.

    \n

    Crystals are nifty things!  The same sort of crystal will reliably organize in the same pattern, and always break the same way under stress.

    \n

    Brains are also nifty things!  The same person's brain will typically view everything through a favorite lens (or two), and will need to work hard to translate input that comes in through another channel or in different terms.  When a brain acquires new concepts - even really vital ones - the new idea will result in recognizeably-shaped brain-bits.  Different brains, therefore, handle concepts differently, and this can make it hard for us to talk to each other.

    \n

    This works on a number of levels, although perhaps the most obvious is the divide between styles of thought on the order of \"visual thinker\", \"verbal thinker\", etc.  People who differ here have to constantly reinterpret everything they say to one another, moving from non-native mode to native mode and back with every bit of data exchanged.  People also store and retrieve memories differently, form first-approximation hypotheses and models differently, prioritize sensory input differently, have different levels of introspective luminosity1, and experience different affect around concepts and propositions.  Over time, we accumulate different skills, knowledge, cognitive habits, shortcuts, and mental filing debris.  Intuitions differ - appeals to intuition will only convert people who share the premises natively.  We have lots in common, but high enough variance that it's impressive how much we do manage to communicate over not only inferential distances, but also fundamentally diverse brain plans.  Basically, you can hit two crystals the same way with the same hammer, but they can still break along different cleavage planes.

    \n

    This phenomenon is a little like man-with-a-hammer syndrome, which is why I chose that extension of my crystal metaphor.  But a person's dependence on their mental crystallography, unlike their wanton use of their hammer, rarely seems to diminish with time.  (In fact, youth probably confers some increased flexibility - it seems that you can probably train children to have different crystalline structures to some degree, but much less so with adults).  MWaH is actually partially explained by the brain's crystallographic regularities.  A hammer-idea will only be compelling to you if it aligns with the crystals in your head.

    \n

    Having \"useful\" mental crystallography - which lets you comprehend, synthesize, and apply ideas in their most accurate, valuable form - is a type of epistemic luck about the things you can best understand.  If you're intrinsically oriented towards mathematical explanations, for instance, and this lets you promptly apprehend the truth and falsity of strings of numbers that would leave my head swimming, you're epistemically lucky about math (while I'm rather likely to be led astray if someone takes the time to put together a plausible verbal explanation that may not match up to the numbers).  Some brain structures can use more notions than others, although I'm skeptical that any human has a pure generalist crystal pattern that can make great use of every sort of concept interchangeably without some native mode to touch base with regularly.

    \n

    When you're trying to communicate facts, opinions, and concepts - most especially concepts - it is a useful investment of effort to try to categorize both your audience's crystallography and your own.  With only one of these pieces of information, you can't optimize your message for its recipient, because you need to know what you're translating from, not just have a bead on what you are translating to.  (If you want to translate the word \"blesser\" into, say, Tagalog, it might be useful to know if \"blesser\" is English or French.)  And even with fairly good information on both origin and destination, you can wind up with a frustrating disconnect; but given that head start on bridging the gap, you can find wherever the two crystals are most likely to touch with less trial and error.

    \n

     

    \n

    1Introspective luminosity (or just \"luminosity\") is the subject of a sequence I have planned - this is a preparatory post of sorts.  In a nutshell, I use it to mean the discernibility of mental states to their haver - if you're luminously happy, clap your hands.

    " } }, { "_id": "KtFhvSsLYPRHcL8Pe", "title": "Superstimuli, setpoints, and obesity", "pageUrl": "https://www.lesswrong.com/posts/KtFhvSsLYPRHcL8Pe/superstimuli-setpoints-and-obesity", "postedAt": "2010-02-26T23:59:15.227Z", "baseScore": -3, "voteCount": 16, "commentCount": 46, "url": null, "contents": { "documentId": "KtFhvSsLYPRHcL8Pe", "html": "

    Related to: Babies and Bunnies: A Caution About Evo-Psych, Superstimuli and the Collapse of Western Civilization.

    \n

    The main proximate cause of the increase in human weight over the last few decades is over-eating - other factors like decreased energy need due to a less active lifestyle seem at best secondary, if relevant at all. The big question is: what misregulates the homeostatic system controlling food intake towards higher calorie consumption?

    \n

    The most commonly accepted answer is some sort of superstimulus theory - modern food is so tasty people find it irresistible. This seems backwards to me in its basic assumption - almost any \"traditional\" food seems to taste better than almost any \"modern\" food.

    \n

    It is just as easy to construct the opposite theory, a tastiness set point - tastiness is some estimate of the nutritional value of food - more nutritious food should taste better than less nutritious food. So according to the theory - if you eat very tasty food, your appetite thinks it's highly nutritious, and demands less of it; and if you eat bland, tasteless food - your appetite underestimates its nutritional content and demands too much of it.
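    \n

    A toy model can make the proposed mechanism explicit; the functional form and numbers below are my own invention, not anything claimed in the post:

def calories_eaten(target_nutrition, tastiness):
    '''Setpoint sketch: appetite reads tastiness as nutrition per calorie,
    so blander food gets eaten in larger quantities to hit the same target.'''
    perceived_nutrition_per_calorie = tastiness   # the theory's key assumption
    return target_nutrition / perceived_nutrition_per_calorie

print(calories_eaten(2000, tastiness=1.0))   # tasty 'traditional' food: 2000 kcal
print(calories_eaten(2000, tastiness=0.5))   # blander 'modern' food: 4000 kcal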

    \n

    It's not even obvious that your appetite is \"wrong\" - if you need a certain amount of nutritionally balanced food, and all you can get is nutritionally balanced food with a lot of added sugar - the best thing is eating more and getting all the micro-nutrients needed in spite of the excess calories. Maybe it is not confused at all, just doing its best in a bad situation, and prioritizes the evolutionarily common threat of too few micronutrients over the evolutionarily less common threat of excess calories.

    \n

    As some extra evidence - it's a fact that poor people with a narrower choice of food are more obese than rich people with a wider choice of food. If everyone buys the tastiest food they can afford, superstimulus theory says the rich should be more obese, while setpoint theory says the poor should be more obese.

    \n

    Is there any way in which setpoint theory is more wrong than superstimulus theory?

    \n

    \"\"

    " } }, { "_id": "uKoqrgnRoWjhneDvM", "title": "Improving The Akrasia Hypothesis", "pageUrl": "https://www.lesswrong.com/posts/uKoqrgnRoWjhneDvM/improving-the-akrasia-hypothesis", "postedAt": "2010-02-26T20:45:19.942Z", "baseScore": 101, "voteCount": 90, "commentCount": 70, "url": null, "contents": { "documentId": "uKoqrgnRoWjhneDvM", "html": "

    Abstract: This article proposes a hypothesis that effective anti-akrasia methods operate by reducing or eliminating the activation of conflicting voluntary motor programs at the time the user's desired action is to be carried out, or by reducing or eliminating the negative effects of managing the conflict.  This hypothesis is consistent with the notion of \"ego depletion\" (willpower burnout) being driven by the need to consciously manage conflicting motor programs.  It also supports a straightforward explanation of why different individuals will fare better with some anti-akrasia methods than others, and provides a framework for both classifying existing methods, and generating new ones.  Finally, it demonstrates why no single technique can be a panacea, and shows how the common problems of certain methods shape the form of both the self-help industry, and most people's experiences with it.

    \n

    The Hypothesis

    \n

    Recently, orthonormal posted an Akrasia Tactics Review, collecting data from LessWrong members on their results using different anti-akrasia techniques.  And although I couldn't quite put my finger on it at first, something about the review (and the discussion around it) was bothering me.

    \n

    See, I've never been fond of the idea that \"different things work for different people\".  As a predictive hypothesis, after all, this is only slightly more useful than saying \"a wizard did it\".  It says nothing about how (or why) different things work, and therefore gives you no basis to select which different things might work for which different people.

    \n

    For that reason, it kind of bugs me whenever I see discussion and advocacy of \"different things\", independent of any framework for classifying those things in a way that would help \"different people\" select or design the \"different things\" that would \"work for\" them.  (In fact, this is a pretty big factor in why I'm a self-help writer/speaker in the first place!)

    \n

    So in this post, I want to share two slightly better working hypotheses for akrasia technique classification than \"different things work for different people\":

    \n
      \n
    1. Akrasia happens when there are conflicting active voluntary motor programs, and the opposite of akrasia is either the absence of such conflict, or a manageable quantity of it.
    2. The nature and source of the specific motor program conflicts are individual, so the effectiveness of a given anti-akrasia technique will be determined by an individual's specific sources of conflict.
    \n

    Accepting these as working hypotheses allows us to quickly develop a classification scheme for anti-akrasia techniques, based on what part of the problem they work on.

    \n

    A Classification Scheme

    \n

    For example, hygienic/systemic methods such as exercise, nutrition, drugs, and meditation all act to improve one's ability to manage conflict, attempting to reduce or minimize ego depletion: either by improving one's capacity, or, in the case of meditation, by developing more efficient conflict-management capability (so that ego depletion occurs more slowly or not at all).  The effectiveness of these methods will then depend on the amount and type of conflict to be managed, and will most likely be effective in cases where the sources of conflict are many or frequent, but the intensity of any given conflict is low.

    \n

    Focusing methods, such as Getting Things Done and the Pomodoro Technique, operate on a principle of helping people to let go of thinking about other tasks, even though they may vary widely in how they go about this.  Restricting internet access, taking vows, or simply \"deciding not to do anything else\" are also within this class.  The motor-program conflict hypothesis predicts that the effectiveness of each of these techniques will vary widely between individuals, depending on whether it addresses the actual source of their conflicts.

    \n

    For example, blocking one's internet access is unlikely to provide lasting help to someone who's procrastinating on finishing their thesis due to a fear of failure - they will likely find another way to procrastinate.  It also won't help someone who really wants to get on the internet, vs. someone who's just being distracted by its availability! (i.e., has relevant motor programs primed by its availability)

    \n

    Meanwhile, Getting Things Done is unlikely to help someone whose conflict isn't because they have too many things to keep track of, and conversely the Pomodoro technique won't help someone who's having trouble deciding what to do in the first place.

    \n

    Motivational methods, on the other hand, (such as the ones of mine that users mentioned, or Vladimir Golovin's version of \"self-affirmation\"), operate by attempting to prime or fill one's motor program buffers with exactly one program: the action to be taken.

    \n

    And, to the extent that people actually fill their mind sufficiently to block out other programs from activating, these methods will definitely work.  However, in many cases, these methods ironically drop off in effectiveness over time, as their users get better at doing them.

    \n

    The more often the technique is used, the easier it becomes to do it with only a part of their mental resources...  and so, unless care is taken to do the technique in precisely the same way each time (i.e., with full conscious attention/intention), it may not produce the same effect.

    \n

    Another flaw in motivational methods is that they don't address any actual sources of conflict.  They are much more effective at overcoming simple inertia, and most useful when incorporated into something that you are trying to make into a habit anyway.

    \n

    Which brings us to our final category (and my personal favorite/specialty), conflict-resolution methods.  These are techniques which seek to eliminate conflicts directly, either through manipulating the outside world (e.g. removing obstacles) or the inside world (getting rid of fears and doubts, clarifying priorities, etc.).

    \n

    And the goal of these techniques is to bypass a potentially \"fatal flaw\" that exists with the other three classes of technique:

    \n

    Most Akrasia Techniques Are Subject To \"Meta\"-Akrasia

    \n

    If you procrastinate taking your pills or doing your exercises, your hygienic method is unstable: the more you delay, the more likely you are to delay some more.  The same is true for maintaining your \"trusted system\" in Getting Things Done, breaking your tasks into Pomodoros, or whatever other focusing method you use.  And of course, if you put off doing your motivation technique, it's not going to motivate you.

    \n

    So the idea behind conflict-resolution methods is to permanently alter the situation so that a particular source of conflict can no longer arise.  For example, if you can never find a pair of scissors, buying a pair for each place where you might need them could forever eliminate the priming of the procrastination motor program titled, \"but I don't know where the scissors are.\"  (This would be an example of resolving an external conflict, as opposed to an internal one.)

    \n

    In my own work, however, I specialize in helping people get rid of chronic internal conflicts like not believing in themselves, fear of criticism, and all that sort of thing.  These kinds of conflicts tend to not be helped at all by techniques in the other three categories, since the feelings (motor programs) involved tend to be intense (i.e. less likely to respond to hygienic methods), and also tend to interfere with the mental prerequisites for performing focusing or motivational methods.

    \n

    For example, a person who is afraid of doing things wrong can easily be just as afraid of doing GTD or Pomodoro wrong, as they are of doing whatever it is they're supposed to be doing!  And in the case of motivational methods, a person with a chronic fear will usually just transfer the fear to the motivational technique itself, since it's immediately followed by them doing whatever it is they're already afraid of.

    \n

    Thus, as a general rule, the more chronic your akrasia, the less likely you will be helped by any kind of method that is not aimed at a \"one time pays for all\" elimination of your conflict source(s).
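    \n

    For readers who prefer things compact, here is one way to restate the four categories as a small data structure; the \"targets\" phrasing is my own shorthand for the descriptions above, not the author's exact wording:

ANTI_AKRASIA_CATEGORIES = {
    'hygienic/systemic': {
        'examples': ['exercise', 'nutrition', 'drugs', 'meditation'],
        'targets': 'capacity to manage many frequent, low-intensity conflicts',
    },
    'focusing': {
        'examples': ['Getting Things Done', 'Pomodoro', 'blocking internet access'],
        'targets': 'letting go of competing tasks that prime conflicting programs',
    },
    'motivational': {
        'examples': ['priming/self-affirmation style techniques'],
        'targets': 'inertia: filling the motor buffer with the one intended action',
    },
    'conflict-resolution': {
        'examples': ['removing external obstacles', 'resolving fears and doubts'],
        'targets': 'eliminating a conflict source for good, external or internal',
    },
}

def suggested_categories(conflict_is_chronic_or_intense):
    '''Very rough triage following the rule of thumb above.'''
    if conflict_is_chronic_or_intense:
        return ['conflict-resolution']
    return ['hygienic/systemic', 'focusing', 'motivational']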

    \n

    That being said, however, I have in recent months begun making more use of hygienic and focusing methods for myself personally.  But that's only because:

    \n
      \n
    1. Having gotten rid of most of the chronic conflicts that plagued me before, I am now more able to actually use those other methods, and
    2. Just getting rid of the chronic conflicts doesn't get rid of health problems and routine distractions.
    \n

    Which brings me to an important final point regarding these classifications:

    \n

    No Anti-Akrasia Technique Can Be A Cure-All

    \n

    ...because no one technique can eliminate all conflicts.  And removing one source of conflict may expose another: if you haven't actually started work on your thesis, you might not yet know what conflicts will arise during the work!

    \n

    Also, learning anti-akrasia techniques is itself often a source of conflict.  Not just in that a person who believes themselves stupid or \"not good at this\" may not apply themselves well, but also in that one's beliefs about a technique may interfere with the choice to use it in the first place.

    \n

    For example, if you believe that the \"law of attraction\" is mumbo-jumbo (and it is), then you may choose not to learn the very effective motivational methods that are taught by \"attraction\" gurus.  (Many of the motivational methods LessWrongers reported success with are essentially identical to methods taught in various \"law of attraction\" programs.)

    \n

    And, beyond such obstacles to learning, there is the further problem of teaching.  I have seen numerous self-help books describe essentially the same motivational method in utterly different ways, most of which were incomprehensible if you didn't already \"get\" (or hadn't already \"clicked\" on) what it was that you were supposed to do.

    \n

    This is partly a problem of imprecise language for internal mental states and activities, and partly a problem of the authors focusing on whatever aspect of a method that they themselves most recently \"clicked\" on, ala Man With Hammer Syndrome.  (I have to fight this tendency constantly, myself.)  This is only useful to a reader if they are missing the same piece of the puzzle as the author was.

    \n

    So, the net result is that problems with teaching and learning are the most common reason that \"internal\" anti-akrasia methods (within the focus, motivation, and conflict-resolution categories) vary so widely in apparent usefulness by individual.  If a method's explanation lacks a way for its user to determine whether they have done it correctly, there is a very high probability that the user will simply give up on it as \"not working for me\", without once having actually performed the actual technique!  (This was extremely common for me; I can now go back and read dozens of self-help books describing techniques that once appeared useless to me, when in fact I was never really doing them.)

    \n

    Thus, a method can appear useless, even if it is not, and the same inner technique can appear to be dozens of different techniques, simply as a result of using different words or metaphors to describe it.  (For example, at least two \"different\" techniques of mine currently listed in the LessWrong survey are exactly the same thing, just described differently!)

    \n

    Selection Pressures On The Self-Help Industry

    \n

    This also explains why two rather frustrating phenomena occur in the self-help world: authors continually inventing \"new\" techniques, and writing books which do more listing or selling of techniques than actually teaching them.

    \n

    If the author focuses on teaching to the exclusion of selling, it's statistically likely they will lose not only that customer (due to their lack of success), but also other customers, due to word of mouth.  And, if they are promoting the same technique as other authors, it is somewhat more likely that a customer who's already \"tried\" that technique will not buy the book in the first place.  (Unless, as with many \"Secret\" books, the author positions themselves as supplying a needed \"missing piece\".)

    \n

    On the other hand, if the author invents a new name for a technique (or a new way to describe it) and focuses within their book on giving insight, entertainment, and persuading the customer that the technique is a good idea that they should learn, then they can get a happy-but-hungry customer: a customer who may go on to a seminar or coaching program, wherein the author can actually teach them something, in a situation where feedback is possible.

    \n

    And before I tried to write a book of my own, I underestimated just how difficult it is to avoid these pressures.  Indeed, I thought many gurus were charlatans for writing their books in this way, while not grasping the unfortunate fact that this is also what you have to do, if your ultimate goal is to help people.

    \n

    In practice, the industry can probably be divided into those whose intentions are good (but don't grasp that there's a problem), those who do grasp the problem, but can't really change it, and those who simply exploit the existence of the problem to hide their own lack of comprehension, competence, or compassion.  (Applying this classification is left as an exercise for the reader, although doing so accurately may well require more than just reading a given guru's book(s).)

    \n

    Conclusions and Feedback Request

    \n

    In summary:

    \n\n

    So...  that about does it for now.  I'd love to hear your feedback, whether it's in the form of supportive/confirming comments, or counterexamples and critical comments that might help improve the framework presented here. In particular, despite the length of this article, I still feel as if I've left out far more information than I've given.

    \n

    For example, I've not shown the details of many of the cause-effect chains at work in the processes and effects described, nor have I included examples of internal conflict-resolution methods, in order to avoid completely exploding the length of the piece.  So, if you feel that something is left out, or that something included would be better left out, let me know!

    \n

    Also, one other topic on which I'd like feedback: I've attempted here to modify my usual writing style, to better fit readers of LessWrong; if it's an improvement -- or failed to be one -- I'd appreciate comments on that as well.

    \n

    Thanks in advance for your input!

    \n

     

    " } }, { "_id": "J9WR5YDQp4zkWBS3t", "title": "Creating a Less Wrong prediction market", "pageUrl": "https://www.lesswrong.com/posts/J9WR5YDQp4zkWBS3t/creating-a-less-wrong-prediction-market", "postedAt": "2010-02-26T11:48:36.092Z", "baseScore": 9, "voteCount": 15, "commentCount": 52, "url": null, "contents": { "documentId": "J9WR5YDQp4zkWBS3t", "html": "

    I will bet 500 karma that a funny picture thread will appear on Less Wrong within one year. If anyone is interested in the bet, we can better define terms.

    \n

    Right now the LW software doesn't support karma transfers. Until it does and we can develop a more robust prediction market, let's just record the karma transfers on the wiki page that already exists for this purpose.

    \n

    I will also give 100 karma to anyone that donates $10 to the SIAI before the current fundraising campaign is over.

    \n

    10,000 karma for the first person with a karma transfer source code patch?

    " } }, { "_id": "AN2cBr6xKWCB8dRQG", "title": "What is Bayesianism?", "pageUrl": "https://www.lesswrong.com/posts/AN2cBr6xKWCB8dRQG/what-is-bayesianism", "postedAt": "2010-02-26T07:43:53.375Z", "baseScore": 120, "voteCount": 108, "commentCount": 218, "url": null, "contents": { "documentId": "AN2cBr6xKWCB8dRQG", "html": "

    This article is an attempt to summarize basic material, and thus probably won't have anything new for the hard core posting crowd. It'd be interesting to know whether you think there's anything essential I missed, though.

    \n

    You've probably seen the word 'Bayesian' used a lot on this site, but may be a bit uncertain of what exactly we mean by that. You may have read the intuitive explanation, but that only seems to explain a certain math formula. There's a wiki entry about \"Bayesian\", but that doesn't help much. And the LW usage seems different from just the \"Bayesian and frequentist statistics\" thing, too. As far as I can tell, there's no article explicitly defining what's meant by Bayesianism. The core ideas are sprinkled across a large amount of posts, 'Bayesian' has its own tag, but there's not a single post that explicitly comes out to make the connections and say \"this is Bayesianism\". So let me try to offer my definition, which boils Bayesianism down to three core tenets.

    We'll start with a brief example, illustrating Bayes' theorem. Suppose you are a doctor, and a patient comes to you, complaining about a headache. Further suppose that there are two reasons why people get headaches: they might have a brain tumor, or they might have a cold. A brain tumor always causes a headache, but exceedingly few people have a brain tumor. In contrast, a headache is rarely a symptom of a cold, but most people manage to catch a cold every single year. Given no other information, do you think it more likely that the headache is caused by a tumor, or by a cold?

    If you thought a cold was more likely, well, that was the answer I was after. Even if a brain tumor caused a headache every time, and a cold caused a headache only one per cent of the time (say), having a cold is so much more common that it's going to cause a lot more headaches than brain tumors do. Bayes' theorem, basically, says that if cause A might be the reason for symptom X, then we have to take into account both the probability that A caused X (found, roughly, by multiplying the frequency of A with the chance that A causes X) and the probability that anything else caused X. (For a thorough mathematical treatment of Bayes' theorem, see Eliezer's Intuitive Explanation.)
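
    To make the arithmetic concrete, here is a minimal Python sketch of the headache example. The base rates and likelihoods are invented for illustration (and other possible causes of headache are ignored); only the structure -- prior times likelihood for each candidate cause -- comes from the argument above.

    ```python
    # Illustrative numbers only: the prevalences and likelihoods are made up.
    p_tumor, p_cold = 0.0001, 0.3            # how common each cause is
    p_headache_given_tumor = 1.0             # a tumor always causes a headache
    p_headache_given_cold = 0.01             # a cold only rarely does

    # Unnormalized posterior weight for each cause: prior * likelihood
    w_tumor = p_tumor * p_headache_given_tumor
    w_cold = p_cold * p_headache_given_cold

    p_tumor_given_headache = w_tumor / (w_tumor + w_cold)
    print(round(p_tumor_given_headache, 3))  # ~0.032 -- the cold is far more likely
    ```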

    There should be nothing surprising about that, of course. Suppose you're outside, and you see a person running. They might be running for the sake of exercise, or they might be running because they're in a hurry somewhere, or they might even be running because it's cold and they want to stay warm. To figure out which one is the case, you'll try to consider which of the explanations is true most often, and fits the circumstances best.

    Core tenet 1: Any given observation has many different possible causes.

    Acknowledging this, however, leads to a somewhat less intuitive realization. For any given observation, how you should interpret it always depends on previous information. Simply seeing that the person was running wasn't enough to tell you that they were in a hurry, or that they were getting some exercise. Or suppose you had to choose between two competing scientific theories about the motion of planets. A theory about the laws of physics governing the motion of planets, devised by Sir Isaac Newton, or a theory simply stating that the Flying Spaghetti Monster pushes the planets forwards with His Noodly Appendage. If both these theories made the same predictions, you'd have to depend on your prior knowledge - your prior, for short - to judge which one was more likely. And even if they didn't make the same predictions, you'd need some prior knowledge that told you which of the predictions were better, or that the predictions matter in the first place (as opposed to, say, theoretical elegance).

    \n

    Or take the debate we had on 9/11 conspiracy theories. Some people thought that unexplained and otherwise suspicious things in the official account had to mean that it was a government conspiracy. Others considered their prior for \"the government is ready to conduct massively risky operations that kill thousands of its own citizens as a publicity stunt\", judged that to be overwhelmingly unlikely, and thought it far more probable that something else caused the suspicious things.

    \n

    Again, this might seem obvious. But there are many well-known instances in which people forget to apply this information. Take supernatural phenomena: yes, if there were spirits or gods influencing our world, some of the things people experience would certainly be the kinds of things that supernatural beings cause. But then there are also countless mundane explanations, from coincidences to mental disorders to an overactive imagination, that could cause them to be perceived. Most of the time, postulating a supernatural explanation shouldn't even occur to you, because the mundane causes already have lots of evidence in their favor and supernatural causes have none.

    \n

    Core tenet 2: How we interpret any event, and the new information we get from anything, depends on information we already had.

    Sub-tenet 1: If you experience something that you think could only be caused by cause A, ask yourself \"if this cause didn't exist, would I regardless expect to experience this with equal probability?\" If the answer is \"yes\", then it probably wasn't cause A.

    This realization, in turn, leads us to

    Core tenet 3: We can use the concept of probability to measure our subjective belief in something. Furthermore, we can apply the mathematical laws regarding probability to choosing between different beliefs. If we want our beliefs to be correct, we must do so.

    The fact that anything can be caused by an infinite amount of things explains why Bayesians are so strict about the theories they'll endorse. It isn't enough that a theory explains a phenomenon; if it can explain too many things, it isn't a good theory. Remember that if you'd expect to experience something even when your supposed cause was untrue, then that's no evidence for your cause. Likewise, if a theory can explain anything you see - if the theory allowed any possible event - then nothing you see can be evidence for the theory.
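
    A tiny sketch of that last point, with invented numbers: if a theory assigns the same probability to the observation whether or not the theory is true, the observation leaves the prior untouched, while a theory that sticks its neck out can gain (or lose) substantially.

    ```python
    def posterior(prior, p_data_if_true, p_data_if_false):
        # Bayes' theorem for a single binary hypothesis.
        numerator = prior * p_data_if_true
        return numerator / (numerator + (1 - prior) * p_data_if_false)

    print(posterior(0.1, 0.5, 0.5))   # 0.1  -- "explains anything" moves nothing
    print(posterior(0.1, 0.9, 0.1))   # 0.5  -- a risky prediction that came true
    ```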

    At its heart, Bayesianism isn't anything more complex than this: a mindset that takes three core tenets fully into account. Add a sprinkle of idealism: a perfect Bayesian is someone who processes all information perfectly, and always arrives at the best conclusions that can be drawn from the data. When we talk about Bayesianism, that's the ideal we aim for.

    \n

    Fully internalized, that mindset does tend to color your thought in its own, peculiar way. Once you realize that all the beliefs you have today are based - in a mechanistic, lawful fashion - on the beliefs you had yesterday, which were based on the beliefs you had last year, which were based on the beliefs you had as a child, which were based on the assumptions about the world that were embedded in your brain while you were growing in your mother's womb... it does make you question your beliefs more. Wonder about whether all of those previous beliefs really corresponded maximally to reality.

    \n

    And that's basically what this site is for: to help us become good Bayesians.

    " } }, { "_id": "xZoegLcqmsMP6awXY", "title": "The Last Days of the Singularity Challenge", "pageUrl": "https://www.lesswrong.com/posts/xZoegLcqmsMP6awXY/the-last-days-of-the-singularity-challenge", "postedAt": "2010-02-26T03:46:27.287Z", "baseScore": 24, "voteCount": 24, "commentCount": 86, "url": null, "contents": { "documentId": "xZoegLcqmsMP6awXY", "html": "

    From Michael Anissimov on the Singularity Institute blog:

    \n

    Thanks to generous contributions by our donors, we are only $11,840 away from fulfilling our $100,000 goal for the 2010 Singularity Research Challenge. For every dollar you contribute to SIAI, another dollar is contributed by our matching donors, who have pledged to match all contributions made before February 28th up to $100,000. That means that this Sunday is your final chance to donate for maximum impact.

    \n

    Funds from the challenge campaign will be used to support all SIAI activities: our core staff, the Singularity Summit, the Visiting Fellows program, and more. Donors can earmark their funds for specific grant proposals, many of which are targeted towards academic paper-writing, or just contribute to our general fund

    \n

    [Continue reading at the Singularity Institute blog.]

    " } }, { "_id": "FsfnDfADftGDYeG4c", "title": "\"Outside View!\" as Conversation-Halter", "pageUrl": "https://www.lesswrong.com/posts/FsfnDfADftGDYeG4c/outside-view-as-conversation-halter", "postedAt": "2010-02-24T05:53:34.133Z", "baseScore": 93, "voteCount": 80, "commentCount": 103, "url": null, "contents": { "documentId": "FsfnDfADftGDYeG4c", "html": "

    Followup to: The Outside View's Domain, Conversation Halters
    Reply to: Reference class of the unclassreferenceable

    \n

    In \"conversation halters\", I pointed out a number of arguments which are particularly pernicious, not just because of their inherent flaws, but because they attempt to chop off further debate - an \"argument stops here!\" traffic sign, with some implicit penalty (at least in the mind of the speaker) for trying to continue further.

    \n

    This is not the right traffic signal to send, unless the state of knowledge is such as to make an actual halt a good idea.  Maybe if you've got a replicable, replicated series of experiments that squarely target the issue and settle it with strong significance and large effect sizes (or great power and null effects), you could say, \"Now we know.\"  Or if the other is blatantly privileging the hypothesis - starting with something improbable, and offering no positive evidence to believe it - then it may be time to throw up hands and walk away.  (Privileging the hypothesis is the state people tend to be driven to, when they start with a bad idea and then witness the defeat of all the positive arguments they thought they had.)  Or you could simply run out of time, but then you just say, \"I'm out of time\", not \"here the gathering of arguments should end.\"

    \n

    But there's also another justification for ending argument-gathering that has recently seen some advocacy on Less Wrong.

    \n

    An experimental group of subjects were asked to describe highly specific plans for their Christmas shopping:  Where, when, and how.  On average, this group expected to finish shopping more than a week before Christmas.  Another group was simply asked when they expected to finish their Christmas shopping, with an average response of 4 days.  Both groups finished an average of 3 days before Christmas.  Similarly, Japanese students who expected to finish their essays 10 days before deadline, actually finished 1 day before deadline; and when asked when they had previously completed similar tasks, replied, \"1 day before deadline.\"  (See this post.)

    \n

    Those and similar experiments seem to show us a class of cases where you can do better by asking a certain specific question and then halting:  Namely, the students could have produced better estimates by asking themselves \"When did I finish last time?\" and then ceasing to consider further arguments, without trying to take into account the specifics of where, when, and how they expected to do better than last time.

    \n

    From this we learn, allegedly, that \"the 'outside view' is better than the 'inside view'\"; from which it follows that when you're faced with a difficult problem, you should find a reference class of similar cases, use that as your estimate, and deliberately not take into account any arguments about specifics.  But this generalization, I fear, is somewhat more questionable...

    \n

    For example, taw alleged upon this very blog that belief in the 'Singularity' (a term I usually take to refer to the intelligence explosion) ought to be dismissed out of hand, because it is part of the reference class \"beliefs in coming of a new world, be it good or evil\", with a historical success rate of (allegedly) 0%.

    \n

    Of course Robin Hanson has a different idea of what constitutes the reference class and so makes a rather different prediction - a problem I refer to as \"reference class tennis\":

    \n
    \n

    Taking a long historical view, we see steady total growth rates punctuated by rare transitions when new faster growth modes appeared with little warning.  We know of perhaps four such \"singularities\": animal brains (~600MYA), humans (~2MYA), farming (~10KYA), and industry (~0.2KYA)...

    \n

    Excess inside viewing usually continues even after folks are warned that outside viewing works better; after all, inside viewing better shows off inside knowledge and abilities.  People usually justify this via reasons why the current case is exceptional.  (Remember how all the old rules didn’t apply to the new dotcom economy?)  So expect to hear excuses why the next singularity is also an exception where outside view estimates are misleading.  Let’s keep an open mind, but a wary open mind.

    \n
    \n

    If I were to play the game of reference class tennis, I'd put recursively self-improving AI in the reference class \"huge mother#$%@ing changes in the nature of the optimization game\" whose other two instances are the divide between life and nonlife and the divide between human design and evolutionary design; and I'd draw the lesson \"If you try to predict that things will just go on sorta the way they did before, you are going to end up looking pathetically overconservative\".

    \n

    And if we do have a local hard takeoff, as I predict, then there will be nothing to say afterward except \"This was similar to the origin of life and dissimilar to the invention of agriculture\".  And if there is a nonlocal economic acceleration, as Robin Hanson predicts, we just say \"This was similar to the invention of agriculture and dissimilar to the origin of life\".  And if nothing happens, as taw seems to predict, then we must say \"The whole foofaraw was similar to the apocalypse of Daniel, and dissimilar to the origin of life or the invention of agriculture\".  This is why I don't like reference class tennis.

    \n

    But mostly I would simply decline to reason by analogy, preferring to drop back into causal reasoning in order to make weak, vague predictions.  In the end, the dawn of recursive self-improvement is not the dawn of life and it is not the dawn of human intelligence, it is the dawn of recursive self-improvement.  And it's not the invention of agriculture either, and I am not the prophet Daniel.  Point out a \"similarity\" with this many differences, and reality is liable to respond \"So what?\"

    \n

    I sometimes say that the fundamental question of rationality is \"Why do you believe what you believe?\" or \"What do you think you know and how do you think you know it?\"

    \n

    And when you're asking a question like that, one of the most useful tools is zooming in on the map by replacing summary-phrases with the concepts and chains of inferences that they stand for.

    \n

    Consider what inference we're actually carrying out, when we cry \"Outside view!\" on a case of a student turning in homework.  How do we think we know what we believe?

    \n

    Our information looks something like this:

    \n\n

    Therefore, when new student X279 comes along, even though we've never actually tested them before, we ask:

    \n

    \"How long before deadline did you plan to complete your last three assignments?\"

    \n

    They say:  \"10 days, 9 days, and 10 days.\"

    \n

    We ask:  \"How long before did you actually complete them?\"

    \n

    They reply:  \"1 day, 1 day, and 2 days\".

    \n

    We ask:  \"How long before deadline do you plan to complete this assignment?\"

    \n

    They say:  \"8 days.\"

    \n

    Having gathered this information, we now think we know enough to make this prediction:

    \n

    \"You'll probably finish 1 day before deadline.\"

    \n

    They say:  \"No, this time will be different because -\"

    \n

    We say:  \"Would you care to make a side bet on that?\"

    \n

    We now believe that previous cases have given us strong, veridical information about how this student functions - how long before deadline they tend to complete assignments - and about the unreliability of the student's planning attempts, as well.  The chain of \"What do you think you know and how do you think you know it?\" is clear and strong, both with respect to the prediction, and with respect to ceasing to gather information.  We have historical cases aplenty, and they are all as similar to each other as they are similar to this new case.  We might not know all the details of how the inner forces work, but we suspect that it's pretty much the same inner forces inside the black box each time, or the same rough group of inner forces, varying no more in this new case than has been observed on the previous cases that are as similar to each other as they are to this new case, selected by no different a criterion than we used to select this new case.  And so we think it'll be the same outcome all over again.
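
    As a minimal sketch of the inference being run here, in Python and using the numbers from the dialogue above: the outside-view estimate simply ignores the stated plan and reuses the student's own track record.

    ```python
    past_planned = [10, 9, 10]   # days before deadline the student planned to finish
    past_actual = [1, 1, 2]      # days before deadline they actually finished
    current_plan = 8

    # Outside view: discard the plan, predict from the empirical record.
    outside_view_estimate = sum(past_actual) / len(past_actual)
    print(f"Plan: {current_plan} days early. Prediction: ~{outside_view_estimate:.0f} day(s) early.")
    ```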

    \n

    You're just drawing another ball, at random, from the same barrel that produced a lot of similar balls in previous random draws, and those previous balls told you a lot about the barrel.  Even if your estimate is a probability distribution rather than a point mass, it's a solid, stable probability distribution based on plenty of samples from a process that is, if not independent and identically distributed, still pretty much blind draws from the same big barrel.

    \n

    You've got strong information, and it's not that strange to think of stopping and making a prediction.

    \n

    But now consider the analogous chain of inferences, the what do you think you know and how do you think you know it, of trying to take an outside view on self-improving AI.

    \n

    What is our data?  Well, according to Robin Hanson:

    \n\n

    From this, Robin extrapolates, the next big growth mode will have a doubling time of 1-2 weeks.

    \n

    So far we have an interesting argument, though I wouldn't really buy it myself, because the distances of difference are too large... but in any case, Robin then goes on to say:  We should accept this estimate flat, we have probably just gathered all the evidence we should use.  Taking into account other arguments... well, there's something to be said for considering them, keeping an open mind and all that; but if, foolishly, we actually accept those arguments, our estimates will probably get worse.  We might be tempted to try and adjust the estimate Robin has given us, but we should resist that temptation, since it comes from a desire to show off insider knowledge and abilities.

    \n

    And how do we know that?  How do we know this much more interesting proposition that it is now time to stop and make an estimate - that Robin's facts were the relevant arguments, and that other arguments, especially attempts to think about the interior of an AI undergoing recursive self-improvement, are not relevant?

    \n

    Well... because...

    \n\n

    It seems to me that once you subtract out the scary labels \"inside view\" and \"outside view\" and look at what is actually being inferred from what - ask \"What do you think you know and how do you think you know it?\" - that it doesn't really follow very well.  The Outside View that experiment has shown us works better than the Inside View, is pretty far removed from the \"Outside View!\" that taw cites in support of predicting against any epoch.  My own similarity metric puts the latter closer to the analogies of Greek philosophers, actually.  And I'd also say that trying to use causal reasoning to produce weak, vague, qualitative predictions like \"Eventually, some AI will go FOOM, locally self-improvingly rather than global-economically\" is a bit different from \"I will complete this homework assignment 10 days before deadline\".  (The Weak Inside View.)

    \n

    I don't think that \"Outside View!  Stop here!\" is a good cognitive traffic signal to use so far beyond the realm of homework - or other cases of many draws from the same barrel, no more dissimilar to the next case than to each other, and with similarly structured forces at work in each case.

    \n

    After all, the wider reference class of cases of telling people to stop gathering arguments, is one of which we should all be wary...

    " } }, { "_id": "owr7C4iFrYfLWg9FK", "title": "Fictitious sentiments", "pageUrl": "https://www.lesswrong.com/posts/owr7C4iFrYfLWg9FK/fictitious-sentiments", "postedAt": "2010-02-23T18:46:28.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "owr7C4iFrYfLWg9FK", "html": "

    Music often fills me with the feeling that I care about certain things. Idealistic songs make me feel like I will go out and support some cause or another. Romantic songs make me feel that I would do just about anything for someone. Other songs make me feel passionately motivated to go and do amazing things. Some songs probably even make me feel patriotic, though I only infer that from the completely unfamiliar feeling that sometimes accompanies them.

    \n

    The plausibility of these feelings is really diminished by the music though. If I really cared so much about some cause or person, I would go and pursue the cause or do whatever the person wanted me to, not lie around on my bed relishing the emotional high of feeling like I wanted to. The same goes for movies. The fact that you are sitting there cheering on the good guy, not out in the world doing something good, shows that you don’t really support his principles. Unless the movie is about some guy who gallantly cheers on worthy characters in movies.


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "KTSzQaPvgA2F8o48Q", "title": "Dignity", "pageUrl": "https://www.lesswrong.com/posts/KTSzQaPvgA2F8o48Q/dignity", "postedAt": "2010-02-22T17:16:02.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "KTSzQaPvgA2F8o48Q", "html": "

    Dignity is apparently big in parts of ethics, particularly as a reason to stop others doing anything ‘unnatural’ regarding their bodies, such as selling their organs, modifying themselves or reproducing in unusual ways. Dignity apparently belongs to you except that you aren’t allowed to sell it or renounce it. Nobody who finds it important seems keen to give it a precise meaning. So I wondered if there was some definition floating around that would sensibly warrant the claims that dignity is important and is imperiled by futuristic behaviours.

    \n

    These are the ones I came across variations on often:

    \n

    The state or quality of being worthy of respect

    \n

    An innate moral worthiness, often considered specific to homo sapiens.

    \n

    Being respected by other people is sure handy, but so are all the other things we trade off against one another at our own whims. Money is great too for instance, but it’s no sin to diminish your wealth. Plus plenty of things people already do make other people respect them less, without anyone thinking there’s some ethical case for banning them. Where are the papers condemning being employed as a cleaner, making jokes that aren’t very funny, or drunkenly revealing your embarrassing desires? The mere act of failing to become well read and stylishly dressed is an affront to your personal dignity.

    \n

    This may seem silly; surely when people argue about dignity in ethics they are talking about the other, higher definition – the innate worthiness that humans have, not some concrete fact about how others treat you. Apparently not though. When people discuss organ donation for instance, there is no increased likelihood of ceasing to be human and losing whatever dollop of inherent worth that comes with it during the operation just because cash was exchanged. Just plain old risk that people will think ill of you if you sell yourself.

    \n

    The second definition, if it innately applies to humans without consideration for their characteristics, is presumably harder to lose. It’s also impossible to use. How you are treated by people is determined by what those people think of you.  You can have as much immeasurable innate worthiness as you like; you will still be spat on if people disagree with reality, which they probably will with no faculties for perceiving innate moral values. Reality doesn’t offer any perks to being inherently worthy either. So why care if you have this kind of dignity, even if you think such a thing exists?


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "u9pfbkZeG8mFTPNi2", "title": "Babies and Bunnies: A Caution About Evo-Psych", "pageUrl": "https://www.lesswrong.com/posts/u9pfbkZeG8mFTPNi2/babies-and-bunnies-a-caution-about-evo-psych", "postedAt": "2010-02-22T01:53:07.125Z", "baseScore": 81, "voteCount": 103, "commentCount": 843, "url": null, "contents": { "documentId": "u9pfbkZeG8mFTPNi2", "html": "

    Daniel Dennett has advanced the opinion that the evolutionary purpose of the cuteness response in humans is to make us respond positively to babies.  This does seem plausible.  Babies are pretty cute, after all.  It's a tempting explanation.

    \n

    Here is one of the cutest baby pictures I found on a Google search.

    \n

    And this is a bunny.

    \n

    Correct me if I'm wrong, but the bunny is about 75,119 times cuter than the baby.

    \n

    Now, bunnies are not evolutionarily important for humans to like and want to nurture.  In fact, bunnies are edible.  By rights, my evolutionary response to the bunny should be \"mmm, needs a sprig of rosemary and thirty minutes on a spit\".  But instead, that bunny - and not the baby or any other baby I've seen - strikes the epicenter of my cuteness response, and being more baby-like along any dimension would not improve the bunny.  It would not look better bald.  It would not be improved with little round humanlike ears.  It would not be more precious with thumbs, easier to love if it had no tail, more adorable if it were enlarged to weigh about seven pounds.

    \n

    If \"awwww\" is a response designed to make me love human babies and everything else that makes me go \"awwww\" is a mere side effect of that engineered reaction, it is drastically misaimed.  Other responses for which we have similar evolutionary psychology explanations don't seem badly targeted in this way.  If they miss their supposed objects at all, at least it's not in most people.  (Furries, for instance, exist, but they're not a common variation on human sexual interest - the most generally applicable superstimuli for sexiness look like at-least-superficially healthy, mature humans with prominent human sexual characteristics.)  We've invested enough energy into transforming our food landscape that we can happily eat virtual poison, but that's a departure from the ancestral environment - bunnies?  All natural, every whisker.1

    \n

    \n

    It is embarrassingly easy to come up with evolutionary psychology stories to explain little segments of data and have it sound good to a surface understanding of how evolution works.  Why are babies cute?  They have to be, so we'll take care of them.  And then someone with a slightly better cause and effect understanding turns it right-side-up, as Dennett has, and then it sounds really clever.  You can have this entire conversation without mentioning bunnies (or kittens or jerboas or any other adorable thing).  But by excluding those items from a discussion that is, ostensibly, about cuteness, you do not have a hypothesis that actually fits all of the data - only the data that seems relevant to the answer that presents itself immediately.

    \n

    Evo-psych explanations are tempting even when they're cheaply wrong, because the knowledge you need to construct ones that sound good to the educated is itself not cheap at all. You have to know lots of stuff about what \"motivates\" evolutionary changes, reject group selection, understand that the brain is just an organ, dispel the illusion of little XML tags attached to objects in the world calling them \"cute\" or \"pretty\" or anything else - but you also have to account for a decent proportion of the facts to not be steering completely left of reality.

    \n

    Humans are frickin' complicated beasties.  It's a hard, hard job to model us in a way that says anything useful without contradicting information we have about ourselves.  But that's no excuse for abandoning the task.  What causes the cuteness response?  Why is that bunny so outrageously adorable?  Why are babies, well, pretty cute?  I don't know - but I'm pretty sure it's not the cheap reason, because evolution doesn't want me to nurture bunnies.  Inasmuch as it wants me to react to bunnies, it wants me to eat them, or at least be motivated to keep them away from my salad fixings.

    \n

     

    \n

    1It is possible that the bunny depicted is a domestic specimen, but it doesn't look like it to me.  In any event, I chose it for being a really great example; there are many decidedly wild animals that are also cuter than cute human babies.

    " } }, { "_id": "mqHTmcPdwEacW2msC", "title": "Woo!", "pageUrl": "https://www.lesswrong.com/posts/mqHTmcPdwEacW2msC/woo", "postedAt": "2010-02-21T08:19:58.587Z", "baseScore": 10, "voteCount": 10, "commentCount": 59, "url": null, "contents": { "documentId": "mqHTmcPdwEacW2msC", "html": "

    [MAJOR UPDATE: I have changed \"Woo\" to \"Pitch\" everywhere on the website and on this post due to extensive feedback from everyone. Thanks!]

    \n

    I'm adding rhetorical-device/common-argument/argument-fallacy tags to the expert quotes on TakeOnIt and calling them \"pitches\".

    \n

    The list of pitches so far is here.

    \n

    Arguments have common patterns. The most notorious of these are rhetorical devices and argument fallacies. While these techniques are obviously not new and are published on several sites on the internet, they are woefully underappreciated by most people. I contend that this is partly because:

    \n
    1. Argument fallacies and rhetorical devices can be too general. Most of their real-world usage occurs in a larger number of specialized forms. These specialized forms are often unlabeled yet are intuitively recognized and prey on our cognitive biases. It takes a lot of cognitive energy to consciously connect the general form(s) to the specialized form.

    2. The sites about argument fallacies and rhetorical devices are not integrated with debate sites. A google for argument fallacies will give you pages with stagnant lists of fallacies where each one has perhaps a couple of historical or hypothetical applications of the fallacy. Why can't I see every debate where some expert or influential person used that fallacy, and why can't I see every fallacy used in a debate?

    \n

    To solve these problems, I'm introducing the concept of a \"pitch\". Any quote from an expert or influential person on TakeOnIt can now be tagged with a pitch. A pitch is a label for a commonly used argument or strategy to persuade. You can think of pitches as the \"tv tropes of argumentation\". Here's some examples:

    \n

    \"The Consensus Pitch\" 
    \"The Patriot Pitch\" 
    \"The Convert Pitch\"   

    \n

    Pitches encompass both argument fallacies and rhetorical devices. However, they allow for greater specialization. For example, there is \"The Evil Corporation Pitch\". On a more minor note, I personally think the names should be simple, with the meaning ideally guessable from the name alone (e.g. maybe it's just me, but \"Post hoc ergo propter hoc\" feels like it has some Web 2.0 marketing issues).
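
    As a rough sketch of the kind of two-way index point 2 above asks for -- every quote tagged with a given pitch, and every pitch used in a given debate -- here is some illustrative Python. The debates, quotes, and field names are invented placeholders, not TakeOnIt's actual data model.

    ```python
    from collections import defaultdict

    # (debate, quote, pitch) tags -- invented examples
    tags = [
        ("Is global warming man-made?", "97% of climate scientists agree...", "The Consensus Pitch"),
        ("Is global warming man-made?", "I used to be a skeptic myself...", "The Convert Pitch"),
        ("Should vaccines be mandatory?", "Big Pharma only cares about profit...", "The Evil Corporation Pitch"),
    ]

    quotes_by_pitch = defaultdict(list)     # pitch -> every tagged quote
    pitches_by_debate = defaultdict(set)    # debate -> every pitch used in it

    for debate, quote, pitch in tags:
        quotes_by_pitch[pitch].append((debate, quote))
        pitches_by_debate[debate].add(pitch)

    print(quotes_by_pitch["The Consensus Pitch"])
    print(sorted(pitches_by_debate["Is global warming man-made?"]))
    ```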

    \n

    Eliezer's \"Conversation Halters\" and Robin Hanson's \"Contrarian Excuses\" are good candidates for pitches. (My impression is the \"halters\" and \"excuses\" listed are perhaps too specialized for pitches, but in any case at minimum provide fertile material for pitches.)

    \n

    I only implemented this feature over the last few days and before developing the concept further I'd like to get some feedback.

    " } }, { "_id": "XrxsFR2WLXWzgLEoy", "title": "Case study: abuse of frequentist statistics", "pageUrl": "https://www.lesswrong.com/posts/XrxsFR2WLXWzgLEoy/case-study-abuse-of-frequentist-statistics", "postedAt": "2010-02-21T06:35:24.216Z", "baseScore": 45, "voteCount": 37, "commentCount": 100, "url": null, "contents": { "documentId": "XrxsFR2WLXWzgLEoy", "html": "

    Recently, a colleague was reviewing an article whose key justification rested on some statistics that seemed dodgy to him, so he came to me for advice. (I guess my boss, the resident statistician, was out of his office.) Now, I'm no expert in frequentist statistics. My formal schooling in frequentist statistics comes from my undergraduate chemical engineering curriculum -- I wouldn't rely on it for consulting. But I've been working for someone who is essentially a frequentist for a year and a half, so I've had some hands-on experience. My boss hired me on the strength of my experience with Bayesian statistics, which I taught myself in grad school, and one thing reading the Bayesian literature voraciously will equip you for is critiquing frequentist statistics. So I felt competent enough to take a look.1

    \r\n

    \r\n

    The article compared an old, trusted experimental method with the authors' new method; the authors sought to show that the new method gave the same results on average as the trusted method. They performed three replicates using the trusted method and three replicates using the new method; each replicate generated a real-valued data point. They did this in nine different conditions, and for each condition, they did a statistical hypothesis test. (I'm going to lean heavily on Wikipedia for explanations of the jargon terms I'm using, so this post is actually a lot longer than it appears on the page. If you don't feel like following along, the punch line is three paragraphs down, last sentence.) 

    \r\n

    The authors used what's called a Mann-Whitney U test, which, in simplified terms, aims to determine if two sets of data come from different distributions. The essential thing to know about this test is that it doesn't depend on the actual data except insofar as those data determine the ranks of the data points when the two data sets are combined. That is, it throws away most of the data, in the sense that data sets that generate the same ranking are equivalent under the test. The rationale for doing this is that it makes the test \"non-parametric\" -- you don't need to assume a particular form for the probability density when all you look at are the ranks.

    \r\n

    The output of a statistical hypothesis test is a p-value; one pre-establishes a threshold for statistical significance, and if the p-value is lower than the threshold, one draws a certain conclusion called \"rejecting the null hypothesis\". In the present case, the null hypothesis is that the old method and the new method produce data from the same distribution; the authors would like to see data that do not lead to rejection of the null hypothesis. They established the conventional threshold of 0.05, and for each of the nine conditions, they reported either \"p > 0.05\" or \"p = 0.05\"2. Thus they did not reject the null hypothesis, and argued that the analysis supported their thesis.

    \r\n

    Now even from a frequentist perspective, this is wacky. Hypothesis testing can reject a null hypothesis, but cannot confirm it, as discussed in the first paragraph of the Wikipedia article on null hypotheses. But this is not the real WTF, as they say. There are twenty ways to choose three objects out of six, so there are only twenty possible p-values, and these can be computed even when the original data are not available, since they only depend on ranks. I put these facts together within a day of being presented with the analysis and quickly computed all twenty p-values. Here I only need discuss the most extreme case, where all three of the data points for the new method are to one side (either higher or lower) of the three data points for the trusted method. This case provides the most evidence against the notion that the two methods produce data from the same distribution, resulting in the smallest possible p-value3: p = 0.05. In other words, even before the data were collected it could have been known that this analysis would give the result the authors wanted.4
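
    To spell out that combinatorial point, here is a short Python sketch (my own, not the authors' analysis) that enumerates all twenty rank arrangements and computes the exact one-sided Mann-Whitney p-values by brute force; even the most extreme possible arrangement only reaches p = 0.05, so the conventional threshold could never have been undercut.

    ```python
    from itertools import combinations

    # With 3 + 3 = 6 ranks there are C(6, 3) = 20 equally likely ways (under the
    # null) for the new method's ranks to fall, so the smallest achievable
    # one-sided p-value is 1/20 = 0.05.
    all_rank_sets = list(combinations(range(1, 7), 3))

    def u_statistic(rank_set):
        # U = rank sum of the group minus its minimum possible value n(n+1)/2
        return sum(rank_set) - 3 * 4 // 2

    null_u = sorted(u_statistic(r) for r in all_rank_sets)

    def one_sided_p(rank_set):
        u = u_statistic(rank_set)
        return sum(1 for v in null_u if v <= u) / len(null_u)

    # Most extreme case: all three new-method points rank below the trusted ones.
    print(one_sided_p((1, 2, 3)))   # 0.05 -- the smallest p-value possible here
    ```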

    \r\n

    When I canvassed the Open Thread for interest in this article, Douglas Knight wrote: \"If it's really frequentism that caused the problem, please spell this out.\" Frequentism per se is not the proximate cause of this problem, that being that the authors either never noticed that their analysis could not falsify their hypothesis, or they tried to pull a fast one. But it is a distal cause, in the sense that it forbids the Bayesian approach, and thus requires practitioners to become familiar with a grab-bag of unrelated methods for statistical inference5, leaving plenty of room for confusion and malfeasance. Technologos's reply to Douglas Knight got it exactly right; I almost jokingly requested a spoiler warning.

    \r\n

     

    \r\n

    1 I don't mind that it wouldn't be too hard to figure out who I am based on this paragraph. I just use a pseudonym to keep Google from indexing all my blog comments to my actual name.

    \r\n

    2 It's rather odd to report a p-value that is exactly equal to the significance threshold, one of many suspicious things about this analysis (the rest of which I've left out as they are not directly germane).

    \r\n

    3 For those anxious to check my math, I've omitted some blah blah blah about one- and two-sided tests and alternative hypotheses.

    \r\n

    4 I quickly emailed the reviewer; it didn't make much difference, because when we initially talked about the analysis we had noticed enough other flaws that he had decided to recommend rejection. This was just the icing on the coffin.

    \r\n

    5 ... none of which actually address the question OF DIRECT INTEREST! ... phew. Sorry.

    " } }, { "_id": "rRmisKb45dN7DK4BW", "title": "Akrasia Tactics Review", "pageUrl": "https://www.lesswrong.com/posts/rRmisKb45dN7DK4BW/akrasia-tactics-review", "postedAt": "2010-02-21T04:25:49.130Z", "baseScore": 93, "voteCount": 76, "commentCount": 151, "url": null, "contents": { "documentId": "rRmisKb45dN7DK4BW", "html": "

    I recently had occasion to review some of the akrasia tricks I've found on Less Wrong, and it occurred to me that there's probably quite a lot of others who've tried them as well.  Perhaps it's a good idea to organize the experiences of a couple dozen procrastinating rationalists?

    \n

    Therefore, I'll aggregate any such data you provide in the comments, according to the following scheme:

    \n
    1. Note which trick you've tried.  If it's something that's not yet on the list below, please provide a link and I'll add it; if there's not a link for it anywhere, you can describe it in your comment and I'll link that.

    2. Give your experience with it a score from -10 to +10 (0 if it didn't change the status quo, 10 if it ended your akrasia problems forever with no side effects, negative scores if it actually made your life worse, -10 if it nearly killed you); if you don't do so, I'll suggest a score for you based on what else you say.

    3. Describe your experience with it, including any significant side effects.

    \n

    Every so often, I'll combine all the data back into the main post, listing average scores, sample size and common effects for each technique.  Ready?
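
    For what it's worth, here is a minimal Python sketch of the aggregation described above; the technique names and scores are invented sample data, not the reviews collected in the comments.

    ```python
    from statistics import mean, stdev

    # technique -> list of -10..+10 scores reported in comments (invented data)
    reviews = {
        "Technique A": [8, 3, 6, 4, 6],
        "Technique B": [5],
    }

    for technique, scores in reviews.items():
        sd = stdev(scores) if len(scores) > 1 else 0.0
        print(f"{technique}: Average {mean(scores):+.1f} ({len(scores)} reviews) (SD {sd:.1f})")
    ```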

    \n

    Here's the list of specific akrasia tactics I've found around LW (and also in outside links from here); again, if I'm missing one, let me know and I'll add it.  Special thanks to Vladimir Golovin for the Share Your Anti-Akrasia Tricks post.

    \n

    Without further ado, here are the results so far as I've recorded them, with average score, number of reviews, standard deviation and recurring comments.

    \n

     

    \n

    3 or More Reviews:

    \n

    Collaboration with Others: Average +7.7 (3 reviews) (SD 0.6)

    \n

    No Multitasking: Average +6.0 (3 reviews) (SD 2.0); note variants

    \n

    P.J. Eby's Motivation Trilogy: Average +5.8 (6 reviews) (SD 3.3)

    \n

    Monoidealism: Average +8.0 (3 reviews) (SD 2.0)

    \n

    \"Just Do It\": Average +4 (2 reviews) (SD 4.2)

    \n

    Irresistible Instant Motivation: +3 (1 review)

    \n

    Getting Things Done: Average +4.9 (7 reviews) (SD 2.6)

    \n

    Regular Exercise: Average +4.4 (5 reviews) (SD 2.3)

    \n

    Cripple your Internet: Average +4.2 (11 reviews) (SD 3.0)

    \n

    LeechBlock: Average +5.4 (5 reviews) (SD 2.9); basically everyone who's tried has found it helpful.

    \n

    PageAddict: +3 (1 review)

    \n

    Freedom (Mac)

    \n

    Melatonin: Average +4.0 (5 reviews) (SD 5.4); works well for some, others feel groggy the next day; might help to vary the dosage

    \n

    Execute by Default: Average +3.7 (7 reviews) (SD 2.4); all sorts of variants; universally helpful, not typically a life-changer.

    \n

    Pomodoro Technique: Average +3.3 (3 reviews) (SD 4.2); mathemajician suggests a 45-minute variant

    \n

    Being Watched: Average +3.2 (6 reviews) (SD 4.1); variations like co-working seem more effective; see \"collaboration\" below

    \n

    Utility Function Experiment: Average +2.8 (4 reviews) (SD 2.8)

    \n

    Meditation: Average +2.8 (5 reviews) (SD 2.8)

    \n

    Modafinil and Equivalents: Average -0.8 (5 reviews) (SD 8.5); fantastic for some, terrible for others.  Seriously, look at that standard deviation!

    \n

    Structured Procrastination: Average -1.0 (3 reviews) (SD 4.4); polarized opinion

    \n

    Resolutions (Applied Picoeconomics): Average -3.2 (5 reviews) (SD 3.3); easy to fail & get even more demotivated

    \n

     

    \n

    1 or 2 Reviews:

    \n

    Dual n-back: Average +6.5 (2 reviews) (SD 2.1)

    \n

    Think It, Do It: Average +6 (2 reviews) (SD 1.4)

    \n

    Self-Affirmation: Average +4 (2 reviews) (SD 2.8)

    \n

    Create Trivial Inconveniences to Procrastination

    \n

    Close the Dang Browser: Average +3.5 (2 reviews) (SD 3.5)

    \n

    Get More Sleep: Average +3 (2 reviews) (SD 1.4)

    \n

    Every Other Day Off: Average +0.5 (2 reviews) (SD 0.7)

    \n

    Strict Scheduling: Average -9 (2 reviews) (SD 1.4)

    \n

     

    \n

    Elimination (80/20 Rule): +8 (1 review)

    \n

    Methylphenidate: +8 (1 review)

    \n

    Begin Now: +8 (1 review)

    \n

    Learning to Say No: +8 (1 review)

    \n

    Caffeine Nap: +8 (1 review)

    \n

    Write While Doing: +8 (1 review)

    \n

    Leave Some Tasty Bits: +7 (1 review)

    \n

    Preserve the Mental State: +6 (1 review)

    \n

    Acedia and Me: +5 (1 review)

    \n

    Third Person Perspective: +5 (1 review)

    \n

    Watching Others: +5 (1 review)

    \n

    Multiple Selves Theory: +5 (1 review)

    \n

    Getting Back to the Music: +5 (1 review)

    \n

    Remove Trivial Inconveniences: +4 (1 review)

    \n

    Accountability: +2 (1 review)

    \n

    Scheduling Aggressively...: +2 (1 review)

    \n

    Autofocus: 0 (1 review)

    \n

    Take Every Other 20 to 40 Minutes Off: -4 (1 review)

    \n

     

    \n

    Not Yet Reviewed:

    \n

    Fire and Motion

    \n

    Stare at the Wall

    \n

    Kibotzer

    \n

     

    \n

    Thanks for your data!

    \n

    EDIT: People seem to enjoy throwing really low scores out there for things that just didn't work, had some negative side effects and annoyed them.  I added \"-10 if it nearly killed you\" to give a sense of perspective on this bounded scale... although, looking at the comments, it looks like the -10 and -8 were pretty much justified after all.  Anyway, here's your anchor for the negative side!

    " } }, { "_id": "wqmmv6NraYv4Xoeyj", "title": "Conversation Halters", "pageUrl": "https://www.lesswrong.com/posts/wqmmv6NraYv4Xoeyj/conversation-halters", "postedAt": "2010-02-20T15:00:34.487Z", "baseScore": 79, "voteCount": 61, "commentCount": 98, "url": null, "contents": { "documentId": "wqmmv6NraYv4Xoeyj", "html": "

    Related to: Logical Rudeness, Semantic Stopsigns

    \n

    While working on my book, I found in passing that I'd developed a list of what I started out calling \"stonewalls\", but have since decided to refer to as \"conversation halters\".  These tactics of argument are distinguished by their being attempts to cut off the flow of debate - which is rarely the wisest way to think, and should certainly rate an alarm bell.

    \n

    Here's my assembled list, on which I shall expand shortly:

    \n\n

    Now all of these might seem like dodgy moves, some dodgier than others.  But they become dodgier still when you take a step back, feel the flow of debate, observe the cognitive traffic signals, and view these as attempts to cut off the flow of further debate.

    \n

    Hopefully, most of these are obvious, but to define terms:

    \n

    Appeal to permanent unknowability - something along the lines of \"Why did God allow smallpox?  Well, no one can know the mind of God.\"  Or, \"There's no way to distinguish among interpretations of quantum mechanics, so we'll never know.\"  Arguments like these can be refuted easily enough by anyone who knows the rules for reasoning under uncertainty and how they imply a correct probability estimate given a state of knowledge... but of course you'll probably have to explain the rules to the other, and the reason they appealed to unknowability is probably to cut off further discussion.

    \n

    Appeal to humility - much the same as above, but said with a different emphasis:  \"How can we know?\", where of course the speaker doesn't much want to know, and so the real meaning is \"How can you know?\"  Of course one may gather entangled evidence in most such cases, and Occam's Razor or extrapolation from already-known facts takes care of the other cases.  But you're not likely to get a chance to explain it, because by continuing to speak, you are committing the sin of pride.

    \n

    Appeal to egalitarianism - something along the lines of \"No one's opinion is better than anyone else's.\"  Now if you keep talking you're committing an offense against tribal equality.

    \n

    Appeal to common guilt - \"everyone is irrational now and then\", so if you keep talking, you're claiming to be better than them.  An implicit subspecies of appeal to egalitarianism.

    \n

    Appeal to inner privacy - \"you can't possibly know how I feel!\"  It's true that modern technology still encounters some slight difficulties in reading thoughts out of the brain, though work is underway as we speak.  But it is rare that the exact details of how you feel are the key subject matter being disputed.  Here the bony borders of the skull are being redeployed as a hard barrier to keep out further arguments.

    \n

    Appeal to personal freedom - \"I can define a word any way I want!\"  Now if you keep talking you're infringing on their civil rights.

    \n

    Appeal to arbitrariness - again, the notion that word definitions are arbitrary serves as a good example (in fact I was harvesting some of these appeals from that sequence).  It's not just that this is wrong, but that it serves to cut off further discourse.  Generally, anything that people are motivated to argue about is not arbitrary.  It is being controlled by invisible criteria of evaluation, it has connotations with consequences, and if that isn't true either, the topic of discourse is probably not \"arbitrary\" but just \"meaningless\".  No map that corresponds to an external territory can be arbitrary.

    \n

    Appeal to inescapable assumptions - closely related, the idea that you need some assumptions and therefore everyone is free to choose whatever assumptions they want.  This again is almost never true.  In the realm of physical reality, reality is one way or another and you don't get to make it that way by choosing an opinion, and so some \"assumptions\" are right and others wrong.  In the realm of math, once you choose enough axioms to specify the subject matter, the remaining theorems are matters of logical implication.  What I want you to notice is not just that \"appeal to inescapable assumptions\" is a bad idea, but that it is supposed to halt further conversation.

    \n

    Appeal to unquestionable authority - for example, defending a definition by appealing to the dictionary, which is supposed to be a final settlement of the argument.  Of course it is very rare that whatever is really at stake is something that ought to turn out differently if a Merriam-Webster editor writes a different definition.  Only in matters of the solidest, most replicable science, do we have information so authoritative that there is no longer much point in considering other sources of evidence.  And even then we shouldn't expect to see strong winds of evidence blowing in an opposing direction - under the Bayesian definition of evidence, strong evidence is just that sort of evidence which you only ever expect to find on at most one side of a factual question.  More usually, this argument runs something along the lines of \"How dare you argue with the dictionary?\" or \"How dare you argue with Professor Picklepumper of Harvard University?\"

    \n

    Appeal to absolute certainty - if you did have some source of absolute certainty, it would do no harm to cut off debate at that point.  Needless to say, this usually doesn't happen.

    \n

    And again:  These appeals are all flawed in their separate ways, but what I want you to notice is the thing they have in common, the stonewall-effect, the conversation-halting cognitive traffic signal.

    \n

    The only time it would actually be appropriate to use such a traffic signal is when you have information so strong, or coverage so complete, that there really is no point in further debate.  This condition is rarely if ever met.  A truly definite series of replicated experiments might settle an issue pending really surprising new experimental results, a la Newton's laws of gravity versus Einstein's GR.  Or a gross prior improbability, combined with failure of the advocates to provide confirming evidence in the face of repeated opportunities to do so.  Or you might simply run out of time.

    \n

    But then you should state the stoppage condition outright and plainly, not package it up in one of these appeals.  By and large, these traffic signals are simply bad traffic signals.

    " } }, { "_id": "BoSLs79YfmSCuGpSw", "title": "Doers or doings?", "pageUrl": "https://www.lesswrong.com/posts/BoSLs79YfmSCuGpSw/doers-or-doings", "postedAt": "2010-02-20T07:48:01.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "BoSLs79YfmSCuGpSw", "html": "

    A girl recently invited me to a public lecture she was running with Helen Caldicott, the famous anti-nuclear advocate. Except the girl couldn’t remember the bit after ‘famous’. When I asked her, she narrowed it down to something big picture related to the environment. Helen’s achievements were obviously secondary, if not twenty-secondary, in motivating her to organize the event. Though the fact she was famous for whatever those things were was important.

    \n

    I’ve done a few courses in science journalism. The main task there is to make science interesting and intelligible for people. The easiest way to do this is to cut down on the dry bit about how reality works, and fill it up with stories about people instead. Who are the scientists? Where they are from? What sorts of people are they? What’s it like to be a research subject? Does the research support the left or the right or people who want to subsidize sheep or not immunize their children? If there’s an unsettled issue, present it as a dispute between scientists, not as abstract disagreeing evidence.

    \n

    It’s hard to find popular science books that aren’t at least half full of anecdotes or biographies of scientists. Everybody knows that Einstein invented the theory of relativity, but hardly anyone knows what that’s about exactly, or tries to.

    \n

    Looking through a newspaper, most of the stories are about people. Policy isn’t discussed so much as politics. Recessions are reported with tales of particular people who can’t pay their employees this year.

    \n

    Philosophy is largely about philosophers from what I can gather.

    \n

    One might conclude that most people are more interested in people than in whatever it is the people are doing. What people do is mainly interesting for what it says about those doing it.

    \n

    But this isn’t true, there are some topics where people are happy to read about the topic more than the people. The weather and technology for instance. Nobody knows who invented most things they know intimately. It looks from this small list like people are more interested in doings which immediately affect them, and doers the rest of the time. I don’t read most topics though, and it’s a small sample. What other topics are people more interested in than they are in those who do them?

    \n

    Going on that tentative theory, this blog is probably way too related to its subject matter for its subject matter. Would you all like some more anecdotes and personal information? I included some above just in case, as I sat in my friend Robert Wiblin‘s dining room and drank coffee, which I like, from the new orange plunger I excitedly bought yesterday on my way to the highly ranked Australian National University, where I share an office with a host of stylishly dressed and interesting students tirelessly working away on something or another really important.


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "MhmNh9GrRh2efJBA2", "title": "Study: Making decisions makes you tired", "pageUrl": "https://www.lesswrong.com/posts/MhmNh9GrRh2efJBA2/study-making-decisions-makes-you-tired", "postedAt": "2010-02-19T20:27:47.777Z", "baseScore": 31, "voteCount": 30, "commentCount": 6, "url": null, "contents": { "documentId": "MhmNh9GrRh2efJBA2", "html": "

    Related to: Willpower Hax #487: Execute by Default, The Physiology of Willpower

    \n

    Making your own decisions makes you procrastinate more and quit sooner than when you're following orders:

    \n

    Key quote:

    \n
    \n

    Making choices led to reduced self-control (i.e., less physical stamina, reduced persistence in the face of failure, more procrastination, and less quality and quantity of arithmetic calculations). A field study then found that reduced self-control was predicted by shoppers' self-reported degree of previous active decision making. Further studies suggested that choosing is more depleting than merely deliberating and forming preferences about options and more depleting than implementing choices made by someone else and that anticipating the choice task as enjoyable can reduce the depleting effect for the first choices but not for many choices.

    \n
    " } }, { "_id": "JSkctLfsWZLhsgGyK", "title": "Med Patient Social Networks Are Better Scientific Institutions", "pageUrl": "https://www.lesswrong.com/posts/JSkctLfsWZLhsgGyK/med-patient-social-networks-are-better-scientific", "postedAt": "2010-02-19T08:11:21.500Z", "baseScore": 46, "voteCount": 45, "commentCount": 49, "url": null, "contents": { "documentId": "JSkctLfsWZLhsgGyK", "html": "

    When you're suffering from a life-changing illness, where do you find information about its likely progression? How do you decide among treatment options?

    \n

    You don't want to rely on studies in medical journals because their conclusion-drawing methodologies are haphazard. You'll be better off getting your prognosis and treatment decisions from a social networking site: PatientsLikeMe.com.

    \n

    PatientsLikeMe.com lets patients with similar illnesses compare symptoms, treatments and outcomes. As Jamie Heywood at TEDMED 2009 explains, this represents an enormous leap forward in the scope and methodology of clinical trials. I highly recommend his excellent talk, and I will paraphrase part of it below.

    \n
    \n

    Here is a report in the Proceedings of the US National Academy of Sciences (PNAS) about Lithium, a drug used to treat bipolar disorder, which a group in Italy found slowed ALS down in 16 patients. When PNAS published this, 10% of the patients in our system started taking Lithium, based on 16 patients' data in a bad publication.

    This one patient, Humberto, said, \"Can you help us answer these kinds of treatment questions? I don't want to wait for the next trial; I want to know now!\"

    So we launched some tools to help patients track their medical data like blood levels, symptoms, side effects... and share it.

    People said, \"You can't run a clinical trial like this. You don't have blinding, you don't have data, it doesn't follow the scientific method -- you can't do it.\"

    So we said, OK, we can't do a clinical trial? Let's do something even harder. Let's use all this data to say whether Lithium is going to work on Humberto.

    We took all the patients like Humberto and brought their data together, bringing their histories into it, lining up their timelines along meaningful points, and integrating everything we know about the patient -- full information about the entire course of their disease. And we saw that this orange line, that's what's going to happen to Humberto.

    \n

    [chart: predicted outcome curve for Humberto]

    And in fact he took Lithium, and he went down the line. This works almost all the time -- it's scary.

    \n

    [chart: Humberto's actual course against the prediction]

    So we couldn't run a clinical trial, but we could see whether Lithium was going to work for Humberto.

    Here's the mean decline curve for the most dedicated Lithium patients we had, the ones who stuck with it for at least a year because they believed it was working. And even for this hard-core sample, we still have N = 4x the number of patients in the journal study.

    \n

    [chart: mean decline curve for the dedicated Lithium patients]

    \n

    When we line up these patients' timelines, it's clear that the ones who took Lithium didn't do any better. And we had the power to detect an effect only 1/4 the strength of the one reported in the journal. And we did this a year before the first clinical trial, funded with millions of dollars by the NIH, announced its negative results last week.
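    For readers who want a concrete picture of the matching idea Heywood describes, here is a rough sketch (my own toy model, not PatientsLikeMe's actual algorithm): align each patient's functional-score series at a meaningful point, then compare mean decline curves between Lithium takers and matched non-takers.

```python
# Toy sketch of aligning patient timelines and comparing mean decline curves.
# All names and numbers here are hypothetical; this is not PatientsLikeMe code.
import numpy as np

def mean_decline_curve(patients, months=12):
    """patients: list of (scores, align_index); `scores` is a monthly
    functional score series, `align_index` the alignment point (e.g. the
    month Lithium was started, or a matched baseline for controls)."""
    windows = [np.asarray(s[i:i + months], dtype=float)
               for s, i in patients if len(s) >= i + months]
    return np.mean(windows, axis=0)

rng = np.random.default_rng(0)

def fake_patient():
    # Hypothetical ALS functional scores: roughly 1 point lost per month plus noise.
    start = int(rng.integers(0, 6))
    return 40.0 - np.arange(24) - rng.normal(0, 1.5, 24).cumsum(), start

lithium = [fake_patient() for _ in range(30)]
controls = [fake_patient() for _ in range(120)]
diff = mean_decline_curve(lithium) - mean_decline_curve(controls)
print(np.round(diff, 2))  # hovers around zero => no detectable Lithium effect
```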

    \n
    " } }, { "_id": "g8xh9R7RaNitKtkaa", "title": "Explicit Optimization of Global Strategy (Fixing a Bug in UDT1)", "pageUrl": "https://www.lesswrong.com/posts/g8xh9R7RaNitKtkaa/explicit-optimization-of-global-strategy-fixing-a-bug-in", "postedAt": "2010-02-19T01:30:44.399Z", "baseScore": 55, "voteCount": 26, "commentCount": 38, "url": null, "contents": { "documentId": "g8xh9R7RaNitKtkaa", "html": "

    When describing UDT1 solutions to various sample problems, I've often talked about UDT1 finding the function S* that would optimize its preferences over the world program P, and then return what S* would return, given its input. But in my original description of UDT1, I never explicitly mentioned optimizing S as a whole, but instead specified UDT1 as, upon receiving input X, finding the optimal output Y* for that input, by considering the logical consequences of choosing various possible outputs. I have been implicitly assuming that the former (optimization of the global strategy) would somehow fall out of the latter (optimization of the local action) without having to be explicitly specified, due to how UDT1 takes into account logical correlations between different instances of itself. But recently I found an apparent counter-example to this assumption.

    \n

    (I think this \"bug\" also exists in TDT, but I don't understand it well enough to make a definite claim. Perhaps Eliezer or someone else can tell me if TDT correctly solves the sample problem given here.)

    \n

    Here is the problem. Suppose Omega appears and tells you that you have just been copied, and each copy has been assigned a different number, either 1 or 2. Your number happens to be 1. You can choose between option A or option B. If the two copies choose different options without talking to each other, then each gets $10, otherwise they get $0.

    \n

    Consider what happens in the original formulation of UDT1. Upon receiving the input \"1\", it can choose \"A\" or \"B\" as output. What is the logical implication of S(1)=\"A\" on the computation S(2)? It's not clear whether S(1)=\"A\" implies S(2)=\"A\" or S(2)=\"B\", but actually neither can be the right answer.

    \n

    Suppose S(1)=\"A\" implies S(2)=\"A\". Then by symmetry S(1)=\"B\" implies S(2)=\"B\", so both copies choose the same option, and get $0, which is clearly not right.

    \n

    Now instead suppose S(1)=\"A\" implies S(2)=\"B\". Then by symmetry S(1)=\"B\" implies S(2)=\"A\", so UDT1 is indifferent between \"A\" and \"B\" as output, since both have the logical consequence that it gets $10. So it might as well choose \"A\". But the other copy, upon receiving input \"2\", would go through this same reasoning, and also output \"A\".

    \n

    The fix is straightforward in the case where every agent already has the same source code and preferences. UDT1.1, upon receiving input X, would put that input aside and first iterate through all possible input/output mappings that it could implement and determine the logical consequence of choosing each one upon the executions of the world programs that it cares about. After determining the optimal S* that best satisfies its preferences, it then outputs S*(X).

    \n

    Applying this to the above example, there are 4 input/output mappings to consider:

    \n
      \n
    1. S1(1)=\"A\", S1(2)=\"A\"
    2. S2(1)=\"B\", S2(2)=\"B\"
    3. S3(1)=\"A\", S3(2)=\"B\"
    4. S4(1)=\"B\", S4(2)=\"A\"
    \n

    Being indifferent between S3 and S4, UDT1.1 picks S*=S3 and returns S3(1)=\"A\". The other copy goes through the same reasoning, also picks S*=S3 and returns S3(2)=\"B\". So everything works out.
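    To make the procedure concrete, here is a minimal sketch of UDT1.1 on this two-copy problem (the function names are mine; the utilities are just the $10-per-copy payoffs described above):

```python
# Minimal sketch of UDT1.1 for the Omega two-copy problem described above.
from itertools import product

INPUTS = ("1", "2")
OPTIONS = ("A", "B")

def world_utility(mapping):
    # Each copy gets $10 iff the two copies choose different options.
    return 20 if mapping["1"] != mapping["2"] else 0

def udt1_1(observed_input):
    # Step 1: set the input aside and enumerate every input/output mapping S.
    mappings = [dict(zip(INPUTS, outs))
                for outs in product(OPTIONS, repeat=len(INPUTS))]
    # Step 2: pick the mapping S* that maximizes utility over the world program.
    # (max breaks ties deterministically, so every copy lands on the same S*.)
    s_star = max(mappings, key=world_utility)
    # Step 3: only now apply S* to the input actually received.
    return s_star[observed_input]

print(udt1_1("1"), udt1_1("2"))  # A B -- the copies coordinate and win $10 each
```

    The deterministic tie-break plays the role of "being indifferent between S3 and S4, pick one": both copies compute the same S* and so coordinate without communicating.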

    \n

    What about when there are agents with different source codes and different preferences? The result here suggests that one of our big unsolved problems, that of generally deriving a \"good and fair\" global outcome from agents optimizing their own preferences while taking logical correlations into consideration, may be unsolvable, since consideration of logical correlations does not seem powerful enough to always obtain a \"good and fair\" global outcome even in the single-player case. Perhaps we need to take an approach more like cousin_it's, and try to solve the cooperation problem from the top down. That is, by explicitly specifying a fair way to merge preferences, and simultaneously figuring out how to get agents to join into such a cooperation.

    " } }, { "_id": "ThhNvdBxcTYdzm69s", "title": "Things You Can't Countersignal", "pageUrl": "https://www.lesswrong.com/posts/ThhNvdBxcTYdzm69s/things-you-can-t-countersignal", "postedAt": "2010-02-19T00:18:05.339Z", "baseScore": 119, "voteCount": 99, "commentCount": 127, "url": null, "contents": { "documentId": "ThhNvdBxcTYdzm69s", "html": "

    Countersignaling can backfire if your audience doesn't have enough information about you to start with.  For some traits, it's especially dangerous, because you're likely to do it for traits you don't have the credibility to countersignal at all, based on a misunderstanding of your relation to the general population.

    \n

    Countersignaling is \"showing off by not showing off\" - you understate, avoid drawing attention to, or otherwise downplay your communications of and about some valuable trait you have, because a) you are sure you won't be mistaken for someone with very poor characteristics in that area, and b) signaling could make you look like a merely medium-grade specimen.  (Actual medium-grade specimens have to signal to distinguish themselves from low-quality ones.)  For instance, if you are so obviously high-status that no one could possibly miss it, it may be both unnecessary and counterproductive to signal status, because this would let others conflate you with mid-status people.  So you can show up in a t-shirt and jeans instead of formal wear.  If you are so obviously brilliant that no one could possibly think you're some crackpot who wandered in off the street, you can afford to rave a little, while people who have to prove their smarts will find it expedient to keep calm and measured in their communication.

    \n

    In homogeneous communities, or in any situation where you are well-known, countersignaling is effective.  Your traits exceeding some minimum threshold is assumed where everyone's traits so exceed, and so failing to signal is unlikely to give anyone the impression that you have somehow managed to be the only person in the room who is deficient.  If you're personally acquainted with the people around whom you attempt countersignaling, your previous signals (or other evidence to the effect that you are awesome) will already have accumulated.  It's not necessary to further prove yourself.  In other words, if your audience's prior for you being medium-or-good is high enough, then your not signaling is evidence in favor of good over medium; if their prior for your being medium-or-low is too high, then your not signaling is instead evidence in favor of low over medium.
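    A toy Bayes calculation may make the prior-dependence vivid (the numbers are mine, purely for illustration): suppose low-quality specimens can't signal credibly, medium ones almost always signal, and high ones usually countersignal by staying quiet.

```python
# Illustrative only: how "no signal" is read under different priors.
P_SIGNAL = {"low": 0.05, "medium": 0.90, "high": 0.10}  # hypothetical signaling rates

def p_high_given_no_signal(priors):
    no_signal = {t: priors[t] * (1 - P_SIGNAL[t]) for t in priors}
    return no_signal["high"] / sum(no_signal.values())

# Audience of close friends: essentially no prior mass on "low".
print(p_high_given_no_signal({"low": 0.01, "medium": 0.50, "high": 0.49}))  # ~0.88
# Audience of strangers: most quiet people in the reference class are low.
print(p_high_given_no_signal({"low": 0.60, "medium": 0.30, "high": 0.10}))  # ~0.13
```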

    \n

    But there are some things you can't effectively countersignal.

    \n

    Or rather, there are some things that you can't effectively countersignal to some people.  The most self-deprecating remarks about your positive qualities, spoken to your dear friends who know your most excellent traits like the backs of their own hands, will be interpreted \"correctly\", no matter what they're about.  For instance, when I explained my change in life plans to people who are very familiar with me, I was able to use the phrasing \"I'm dropping out of school to join a doomsday cult\"1 because I knew this sounded so unlike me that none of them would take it at face value.  Alicorn wouldn't really join a doomsday cult; it must be something else!  It elicited curiosity, but not contempt for my cult-joining behavior.  To more distant acquaintances, I used the less loaded term \"nonprofit\".  I couldn't countersignal my clever life choices to people who didn't have enough knowledge of my clever life choices; so I had to rely on the connotation of \"nonprofit\" rather than playing with the word \"cult\" for my amusement.

    \n

    Similar to close personal connection, people in a homogeneous environment can readily understand one another's countersignals.  Someone who has joined the same cult as me isn't going to get the wrong idea if I call it that, even without much historical data about how sensible I generally am in choosing what comes next in my life.  But in the wider world where people really do join real cults that really have severely negative features, there's no way to tell me apart from someone who's joined one of those and might start chanting or something any moment.  I would not announce that I had joined a cult when explaining to a TSA agent why I was flying across the country.

    \n

    The trouble is that it's easy to think one's positive traits are so obvious that no one could miss them when really they aren't.  You are not as well known as you think you should be.  Your countersignals are more opaque than you think they are.  If you tell a stranger you've joined a cult, they will probably think you actually joined a cult.

    \n

    Here's an example at work: in a homogeneous group of white liberals, talking casually about assorted minority races is commonplace if race is going to be discussed at all.  Everybody present knows that the group is a homogeneous group of white liberals.  Nobody has reason to suspect that anyone in the room has ever been disposed to practice overt racism of any kind, and odds are that no one in the group is well-informed enough about implicit biases to suspect covert racism (even though that's almost certainly present).  So people in the group can countersignal their lack of racism to each other with the loose, casual talk, making generalizations when it's convenient.  Nobody listening will take them for \"real\" racists.  And being hyper-concerned with political correctness would make one seem concerned with being racist - it would look like one considered oneself to be in some kind of danger, which doesn't speak kindly of how well one is doing to begin with.

    \n

    But to an outside observer - especially one who is informed about implicit biases, or has personal experiences with how ineffectively people screen off casual attitudes and prevent them from causing bad behavior - feeling that one is in this kind of danger, and speaking carefully to reflect that, is the best-case scenario.  To an outside observer, the homogeneous group of white liberals cannot credibly countersignal, because there are too many people who look just like them and talk just like them and don't have the lovely qualities they advertise by acting confidently.  In the general population, loose race talk is more likely to accompany racism than non-racism, and non-racism is more likely to accompany political correctness than loose race talk.  The outside observer can't separate the speaker from the general population and has to judge them against those priors, not local, fine-tuned priors.

    \n

    So to sum up, countersignaling is hazardous when your audience can't separate you from the general population via personal acquaintance or context.  But often, you aren't as different from the general population as you think (even if your immediate audience, like you, thinks you are).  Or, the general population is in poorer shape than you suspect (increasing the prior that you're in a low-quality tier for the quality you might countersignal).  Therefore, you should prudentially exercise caution when deciding when to be uncautious about your signals.

    \n

     

    \n

    1. I am visiting the Singularity Institute.

    " } }, { "_id": "bg8bSdLdyRpzescQJ", "title": "Open Thread: February 2010, part 2", "pageUrl": "https://www.lesswrong.com/posts/bg8bSdLdyRpzescQJ/open-thread-february-2010-part-2", "postedAt": "2010-02-16T08:29:56.690Z", "baseScore": 15, "voteCount": 13, "commentCount": 917, "url": null, "contents": { "documentId": "bg8bSdLdyRpzescQJ", "html": "

    The Open Thread posted at the beginning of the month has gotten really, really big, so I've gone ahead and made another one. Post your new discussions here!

    \n

    This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

    " } }, { "_id": "t2aMuKYpBJHoFpGiA", "title": "Friendly AI at the NYC Future Salon", "pageUrl": "https://www.lesswrong.com/posts/t2aMuKYpBJHoFpGiA/friendly-ai-at-the-nyc-future-salon", "postedAt": "2010-02-16T02:57:47.014Z", "baseScore": 10, "voteCount": 10, "commentCount": 6, "url": null, "contents": { "documentId": "t2aMuKYpBJHoFpGiA", "html": "

    For those in the New York City area:  I will be giving a talk on Friendly AI at the NYC Future Salon this Saturday, the 20th of February, at 3:00pm.  We'll be gathering in the back room of Stone Creek Bar, located at 140 E. 27th Street in Manhattan.

    \n

    At the moment, 29 people have RSVPed \"Yes\" and 10 \"maybe,\" with 21 spots left.  If you're interested in coming, please RSVP through the Future Salon's meetup group, as space is limited.

    \n

    Those of you familiar with the existing FAI literature probably won't hear anything new, but we have the venue for over 3 hours and my talk should only take 20 to 30 minutes, so there should be plenty of time for stimulating discussion.  Be sure to bring your toughest questions!  This would also be a good opportunity for those who haven't made any of the local LW/OB meetups (we have a meetup group now!) to meet other aspiring rationalists in the NYC area.

    \n

    I look forward to seeing some new faces.

    " } }, { "_id": "c9cRiR2bsXdxAyX8E", "title": "Hayekian Prediction Markets?", "pageUrl": "https://www.lesswrong.com/posts/c9cRiR2bsXdxAyX8E/hayekian-prediction-markets", "postedAt": "2010-02-15T23:50:36.541Z", "baseScore": 10, "voteCount": 14, "commentCount": 79, "url": null, "contents": { "documentId": "c9cRiR2bsXdxAyX8E", "html": "

    I think I basically get the idea behind prediction markets. People take their money seriously, so the opinions of people who are confident enough to bet real money on those opinions deserve to be taken seriously as well. That kid on the schoolyard who was always saying \"wanna bet?\" might have been annoying but he also had a point: your willingness or unwillingness to bet does say something about how seriously your opinions ought to be taken. Furthermore, there are serious problems with the main alternative prediction method, which consists of asking experts what they think is going to happen. Almost nobody ever keeps track of whose predictions turned out to be right and then listens to those people more. Some predictions involve events that are so rare or so far in the future that there's no way for an expert to accumulate a track record at all. Some issues give experts incentives to be impressively wrong rather than boringly right. And so on. These are all good points, and they make enough sense to me to convince me that prediction markets deserve to be taken seriously and tested empirically. If they reliably produce better predictions than the alternatives, then they deserve to win the day.*

    But there is a particular claim that is made about prediction markets that I am skeptical of. It starts with the well-known idea, usually associated with Friedrich von Hayek, that a major virtue of free markets is that there is all kinds of useful information spread out in local chunks throughout the economy, which individuals can usefully exploit but a central planner never could, which is reflected in market prices, and which in turn causes resources to be allocated efficiently. It then goes on to argue that prediction markets have a similar virtue. As an example, suppose there's a prediction market for a national election, and you happen to know that Candidate X is more popular in your little town than most people think. There's no way that some faraway expert could have known this or incorporated it into his or her prediction in any way, but it gives you an incentive to bet on Candidate X, which causes your local information to be reflected in the prediction market price. Lots and lots of people doing the same thing will cause lots and lots of such little local pieces of information, which couldn't have been obtained any other way, to also be reflected in the market price.

    But it seems to me that this \"Hayekian\" mechanism should work a lot less well in the prediction market context than in the standard context. In the standard version, you benefit directly from a piece of local information that only you happen to have. If you know that a particular machine in your factory only works right if you kick it three times on the left side and then smack it twice on the top, then you can do that and directly reap the benefits, and the fact that you were able to do it (i.e., the fact that output in your particular industry is very slightly less scarce than someone who didn't know that trick would have thought) will be reflected in the market price. In contrast, if you're the only one who knows that Candidate X is surprisingly popular in your little town (say because you're the mail carrier and you count yard signs along your route), could you really benefit from trading on that information? There are a number of barriers to your doing so. First, there are transactions costs associated with trading. Second, there is garden-variety risk aversion: if you're risk-averse then you won't want to invest a large share of your total wealth in this highly risky and unhedged asset, which means that you won't bet much and so the price won't move much to reflect your information. Third, in order to believe that your little piece of local information constitutes a reason to bet on Candidate X, you'd have to believe that the current price accurately reflects all the *other* pieces of information besides yours. In some sense you should believe this: if you thought the price was off and you thought you knew which direction it was off, that would be a good reason to bet against the mispricing. But even if you had no actionable beliefs regarding a mispricing, you might just not have a lot of confidence that all the other information has been aggregated correctly. This would translate into another form of risk, and so risk-aversion would kick in once again. Fourth, there may be uncertainty about whether you really are the only one who knows your piece of local information (maybe the paper boy also noticed the yard signs, but then again maybe not). If you're not sure, then you're not sure to what extent that piece of information is already reflected in the current price. Again, you might have beliefs about this, and those beliefs might be right on average (though they might not), but it is yet another layer of uncertainty that should have an effect similar to ordinary risk aversion.
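    To put rough numbers on the risk-aversion point (this is my own toy model, not anything from the prediction-market literature): a log-utility bettor with a genuine 5-point informational edge on a contract priced at 50 cents still stakes only a modest fraction of wealth, and a small transaction fee shrinks that stake further, so the price barely moves.

```python
# Toy model: optimal stake for a log-utility (risk-averse) bettor buying a
# binary contract that costs (price + fee) and pays $1 if the event occurs.
# Illustrative numbers only.
import numpy as np

def optimal_stake(p_true=0.55, price=0.50, fee=0.00):
    cost = price + fee
    f = np.linspace(0.0, 0.99, 1000)        # candidate fractions of wealth
    win = 1 + f * (1 - cost) / cost         # wealth multiplier if the event occurs
    lose = 1 - f                            # wealth multiplier otherwise
    expected_log = p_true * np.log(win) + (1 - p_true) * np.log(lose)
    return f[np.argmax(expected_log)]

print(round(optimal_stake(fee=0.00), 3))  # ~0.10 of wealth despite a real edge
print(round(optimal_stake(fee=0.02), 3))  # ~0.06: fees eat much of the edge
```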

    I asked Robin Hanson about this once at lunch a few years ago, and we had an interesting chat about it, along with some other George Mason folks. I won't try to summarize everyone's positions here (I'd feel obligated to ask their permission before I'd even try), but suffice it to say that I don't think he foreswore the Hayekian idea entirely as an argument in favor of prediction markets. And there is a quote by him here that seems to embrace it. In any case, I'd be interested to know what he thinks about it. And of course it matters whether or not this Hayekian claim is being made for prediction markets, and it matters whether the claim is correct, because whether or not prediction markets have this additional theoretical advantage should go into one's priors about their merits before evaluating whatever empirical evidence becomes available.

    *The question is not purely an empirical one though. There are issues related to how susceptible prediction markets are to manipulation, how well they'll work when the people doing the betting about what will happen also have some influence over what does happen, whether they'll work for rare or distant events, or for big picture questions where in some states of the world there's no one around to pay out the winnings, and so on. So even a strong empirical finding that prediction markets work in more straightforward settings is not the last word on the subject, which means that there will be a continuing role for theoretical arguments even as more evidence comes in.

    " } }, { "_id": "eQx2aHxMgfdNKNomZ", "title": "Boo lights: groupthink edition", "pageUrl": "https://www.lesswrong.com/posts/eQx2aHxMgfdNKNomZ/boo-lights-groupthink-edition", "postedAt": "2010-02-15T18:29:56.968Z", "baseScore": 27, "voteCount": 28, "commentCount": 72, "url": null, "contents": { "documentId": "eQx2aHxMgfdNKNomZ", "html": "

    In conversations on LessWrong you may be surprised (in fact, dismayed) to find an apparent majority of the community agreeing with each other, and disagreeing with some view you hold dear. You may be tempted to call \"groupthink\". Whenever that happens, please hold yourself to at least as high an epistemic standard as the people who are participating in the community, and substantiate your accusation of groupthink with actual evidence and analysis.

    \n

    \"Groupthink\" can be an instance of applause lights, terms or explanations used not so much for their semantic content as for the warm fuzzies they are intended to trigger in your audience. Or... since \"groupthink\" isn't so much intended to generate applause for you, but to generate disapproval of those who disagree with you, we might coin the phrase \"boo lights\".

    \n

    At any rate, you may be cheaply establishing (in your own eyes and the eyes of people \"on your side\") your status as a skeptic, without actually doing any critical thinking or even basic due diligence. Are you sure that's what you want?

    \n

    \n

    (N.B. links in this post either point to examples, or to more complete definitions of the concepts referenced; they are intended as supplementary material and this post stands on its own, you can ignore the links on a first read-through.)

    \n

    Apparent consensus is not sufficient grounds for suspecting groupthink, because the \"groupthink\" explanatory scheme leads to further predictions than the mere appearance of consensus. For instance, groupthink results in \"selection bias in collecting information\" (from the Wikipedia entry). If the community has shown diligence in seeking contrary information, and yet has not rallied to your favored point of view, your accusations of groupthink are unjustified.

    \n

    Disapproval of your contributions (in the form of downvoting) is not sufficient grounds for suspecting groupthink. Communities establish mechanisms of defence against disruption, in a legitimate response to a context of discourse where disruption is an ever present threat, the flip side of open participation. The voting/karma system is the current mechanism, probably flawed and probably better than nothing. Downvotes signal \"we would like to see fewer comments like this one\". The appropriate thing to do if you receive downvotes and you're neither a troll nor a crackpot is to simply seek feedback: ask what's wrong. Complaining only makes things worse. Complaining that the community is exhibiting censorship or groupthink makes things much worse.

    \n

    Disapproval of your accusations of groupthink is still not sufficient grounds for suspecting groupthink. This community is aware of information cascades and other effects leading to groupthink, discusses them openly, and strives to adopt countervailing norms. (Note that this post generalizes to further concepts, such as censorship. Downvotes are not censorship; they are a collaborative filtering mechanism, whereby readers are encouraged to skip over some content; that content is nevertheless preserved, visible to anyone who chooses to read it; censorship, i.e. banning, does occur but much more seldom than downvoting.)

    \n

    Here is a good example of someone substantiating their accusations of groupthink by reference to the actual research on groupthink. Note how much more work this is.

    \n

    If you're still thinking of calling \"groupthink\" without doing that work... or, perhaps, if you have already done so...

    \n

    Please reconsider: your behaviour devalues the technical meaning of \"groupthink\", which this community does have a use for (as do other communities of sincere inquiry). We want the term groupthink to still be useful when we really need it - when we actually succumb to groupthink.

    " } }, { "_id": "7pcgQWjndDT3xhFus", "title": "Two probabilities", "pageUrl": "https://www.lesswrong.com/posts/7pcgQWjndDT3xhFus/two-probabilities", "postedAt": "2010-02-15T14:18:11.036Z", "baseScore": 1, "voteCount": 14, "commentCount": 44, "url": null, "contents": { "documentId": "7pcgQWjndDT3xhFus", "html": "

    Consider the following statements:

    1. The result of this coin flip is heads.

    2. There is life on Mars.

    3. The millionth digit of pi is odd.

    What is the probability of each statement?

    A frequentist might say, \"P1 = 0.5. P2 is either epsilon or 1-epsilon, we don't know which. P3 is either 0 or 1, we don't know which.\"

    A Bayesian might reply, \"P1 = P2 = P3 = 0.5. By the way, there's no such thing as a probability of exactly 0 or 1.\"

    Which is right? As with many such long-unresolved debates, the problem is that two different concepts are being labeled with the word 'probability'. Let's separate them and replace P with:

    F = the fraction of possible worlds in which a statement is true. F can be exactly 0 or 1.

    B = the Bayesian probability that a statement is true. B cannot be exactly 0 or 1.

    Clearly there must be a relationship between the two concepts, or the confusion wouldn't have arisen in the first place, and there is: apart from both obeying  various laws of probability, in the case where we know F but don't know which world we are in, B = F. That's what's going on in case 1. In the other cases, we know F != 0.5, but our ignorance of its actual value makes it reasonable to assign B = 0.5.

    When does the difference matter?

    Suppose I offer to bet my $200 that the millionth digit of pi is odd, versus your $100 that it's even. With B3 = 0.5, that looks like a good bet from your viewpoint. But you also know F3 = either 0 or 1. You can also infer that I wouldn't have offered that bet unless I knew F3 = 1, from which inference you are likely to update your B3 to more than 2/3, and decline.
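    For concreteness, here is the arithmetic behind "more than 2/3" (assuming you care only about expected dollars):

```python
# Expected value of taking the bet, as a function of your B3 that the
# millionth digit of pi is odd.  Break-even is exactly B3 = 2/3.
def expected_value(b3_odd):
    return (1 - b3_odd) * 200 - b3_odd * 100   # win $200 if even, lose $100 if odd

print(expected_value(0.5))      #  50.0 -- looks like a good bet at B3 = 0.5
print(expected_value(2 / 3))    #   0.0 -- break-even
print(expected_value(0.9))      # -70.0 -- decline after updating on the offer
```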

    On a larger scale, suppose we search Mars thoroughly enough to be confident there is no life there. Now we know F2 = epsilon. Our Bayesian estimate of the probability of life on Europa will also decline toward 0.

    Once we understand F and B are different functions, there is no contradiction.

    " } }, { "_id": "FwMhhzt8RSLAWNFAB", "title": "Demands for Particular Proof: Appendices", "pageUrl": "https://www.lesswrong.com/posts/FwMhhzt8RSLAWNFAB/demands-for-particular-proof-appendices", "postedAt": "2010-02-15T07:58:57.574Z", "baseScore": 40, "voteCount": 34, "commentCount": 70, "url": null, "contents": { "documentId": "FwMhhzt8RSLAWNFAB", "html": "

    Appendices to You're Entitled to Arguments, But Not (That Particular) Proof

    \n

    (The main article was getting long, so I decided to move the appendices to a separate article which wouldn't be promoted, thus minimizing the size of the article landing in a promoted-article-only-reader's feed.)

    \n

    A.  The absence of unobtainable proof is not even weak evidence of absence.

    \n

    The wise will already know that absence of evidence actually is evidence of absence; and they may ask, \"Since a time-lapse video record of apes evolving into humans would, in fact, be strong evidence in favor of the theory of evolution, is it not mandated by the laws of probability theory that the absence of this videotape constitute some degree of evidence against the theory of evolution?\"

    \n

    (Before you reject that proposition out of hand for containing the substring \"evidence against the theory of evolution\", bear in mind that grownups understand that evidence accumulates.  You don't get to pick out just one piece of evidence and ignore all the rest; true hypotheses can easily generate a minority of weak pieces of evidence against themselves; conceding one point of evidence does not mean conceding the debate; and people who try to act as if it does are nitwits.  Also there are probably no creationists reading this blog.)

    \n

    The laws of probability theory do mandate that if P(H|E) > P(H), then P(H|~E) < P(H).  So - even if absence of proof is by no means proof of absence, and even if we reject the philosophy that absence of a particular proof means you get to discard all the other arguments about evidence and priors - must we not at least concede that absence of proof is necessarily evidence of absence, even though it may be very weak evidence?

    \n

    Actually, in cases like creationism, not even that much follows.  Suppose we had a time camera - a device that lets us look into the distant past and see historical events with our own eyes.  Let the proposition \"we have a time camera\" be labeled Camera.  Then we would either be able to videotape the evolution of humans from apes, or not; let the presence of this video record be labeled Video, and its absence ~Video.  Let Evolution stand for the hypothesis that evolution is true.  And let True and False stand for the epistemic states \"Pretty much likely to be true\" and \"Pretty much likely to be false\", respectively.

    \n

    Then, given that evolution is true and that we have a time camera, we should expect to see Video:

    \n

    P(Video|Evolution,Camera) = True   and   P(Video|~Evolution,Camera) = False

    \n

    So if we had a time camera, if we could look into the past, then \"no one has seen apes evolving into humans\" would be strong evidence against the theory of natural selection:

    \n

    P(Evolution|~Video,Camera) = False

    \n

    But if we don't have a time camera, then regardless of whether evolution is true or false, we can't expect to have \"seen apes evolving into humans\":

    \n

    P(Video|Evolution,~Camera) = False   and   P(Video|~Evolution,~Camera) = False

    \n

    From which it follows that once you know ~Camera, observing ~Video tells you nothing further about Evolution:

    \n

    P(Evolution|~Video,~Camera) = P(Evolution|~Camera)

    \n

    If you didn't know whether or not we had time cameras, and I told you only that no one had ever seen apes evolving into humans, then you would have to evaluate P(Evolution|~Video) which includes some contributions from both P(Evolution|~Video,Camera) and P(Evolution|~Video,~Camera).  You don't know whether the video is missing because evolution is false, or the video is missing because we don't have a time camera.  And so, as the laws of probability require, P(Evolution|~Video) < P(Evolution) just as P(Evolution|Video) > P(Evolution).  But once you know you don't have a time camera, you can start by evaluating P(Evolution|~Camera) - and it's hard to see why lack of time cameras should, in and of itself, be evidence against evolution.  The process of natural selection hardly requires a time camera.  So P(Evolution|~Camera) = P(Evolution), and then ~Video isn't evidence one way or another.  The observation ~Camera screens off any evidence from observing ~Video.
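    A minimal numerical sketch of this screening-off argument (the probabilities below are illustrative placeholders, not anything from the post):

```python
# Toy joint distribution over (Evolution, Camera, Video).
P_EVOLUTION = 0.9   # prior on Evolution, arbitrary for the sketch
P_CAMERA = 0.001    # prior on time cameras existing, arbitrary

def p_video(evolution, camera):
    # Video of apes evolving into humans requires both a time camera and
    # evolution actually having happened.
    return 0.99 if (evolution and camera) else 0.0

joint = {}
for e in (True, False):
    for c in (True, False):
        for v in (True, False):
            joint[(e, c, v)] = ((P_EVOLUTION if e else 1 - P_EVOLUTION)
                                * (P_CAMERA if c else 1 - P_CAMERA)
                                * (p_video(e, c) if v else 1 - p_video(e, c)))

def p_evolution_given(camera=None, video=None):
    def matches(c, v):
        return (camera is None or c == camera) and (video is None or v == video)
    num = sum(p for (e, c, v), p in joint.items() if e and matches(c, v))
    den = sum(p for (e, c, v), p in joint.items() if matches(c, v))
    return num / den

print(p_evolution_given(camera=False))               # P(Evolution | ~Camera)         = 0.9
print(p_evolution_given(camera=False, video=False))  # P(Evolution | ~Video, ~Camera) = 0.9
```

    Conditioning on ~Video after ~Camera changes nothing, which is the sense in which the unobtainable proof carries no evidence.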

    \n

    And this is only what should be expected: once you know you don't have a time camera, and once you've updated your views on evolution in light of the fact that time cameras don't exist to begin with (which doesn't seem to have much of an impact on the matter of evolution), it makes no further difference when you learn that no one has ever witnessed apes evolving into humans.

    \n
    \n

     

    \n

    B.  Subtext on cryonics:  Demanding that cryonicists produce a successful revival before you'll credit the possibility of cryonics, is logically rude; specifically, it is a demand for particular proof.

    \n

    A successful cryonics revival performed with modern-day technology is not a piece of evidence you could possibly expect modern cryonicists to provide, even given that the proposition of interest is true.  The whole point of cryonics is as an ambulance ride to the future; to take advantage of the asymmetry between the technology needed to successfully preserve a patient (cryoprotectants, liquid nitrogen storage) and the technology needed to revive a patient (probably molecular nanotechnology).

    \n

    In particular, the screening-off condition (playing the role of ~Camera in the example above) is the observation that we presently lack molecular nanotechnology.  Given that you don't currently have molecular nanotechnology, you can't reasonably expect to revive a cryonics patient today even given that they could in fact be revived using future molecular nanotechnology.

    \n

    You are entitled to arguments, though not that particular proof, and cryonicists have done their best to provide you with whatever evidence can be obtained.  For example:

    \n
    A study on rat hippocampal slices showed that it is possible for vitrified slices cooled to a solid state at -130°C to have viability upon re-warming comparable to that of control slices that had not been vitrified or cryopreserved. Ultrastructure of the CA1 region (the region of the brain most vulnerable to ischemic damage) of the re-warmed slices is seen to be quite well preserved compared to the ultrastructure of control CA1 tissue (24). Cryonics organizations perfuse brains with vitrification solution until saturation is achieved...
    \n
    A rabbit kidney has been vitrified, cooled to -135°C, re-warmed and transplanted into a rabbit. The formerly vitrified transplant functioned well enough as the sole kidney to keep the rabbit alive indefinitely (25)... The vitrification mixture used in preserving the rabbit kidney is known as M22. M22 is used by the cryonics organization Alcor for vitrifying cryonics subjects. Perfusion of rabbits with M22 has been shown to preserve brain ultrastructure without ice formation (26).
    \n

    This is the sort of evidence we can reasonably expect to obtain today, and it took work to provide you with that evidence.  Ignoring it in favor of demanding proof that you couldn't expect to see even if cryonicists were right, is (a) invalid as probability theory, (b) a sign of trying to defend an allowed belief rather than being honestly curious about which possible world we live in, and (c) logically rude.

    \n

    Formally:

    \n
      \n
    1. Even given that the proposition put forth by cryonicists is true - that people suspended with modern-day technology will be revivable by future technology - you cannot expect them to revive a cryonics patient using modern-day technology.
    2. Cryonicists have put forth considerable effort, requiring years of work by many people, to provide you with such evidence as can be obtained today.  The lack of that particular proof is not owing to any defect of diligence on the part of cryonicists, or disinterest on their part in doing the research.
    3. The prediction that a properly cryoprotected patient does not suffer information-theoretical death is not a privileged hypothesis pulled out of nowhere; it is the default extrapolation from modern neuroscience.  If we learn that a patient cryoprotected using current technologies has undergone erasure of critical brain information, we have learned something that is not in current neuroscience textbooks - and actually rather surprising, all things considered.  The straight-line extrapolation from the science we do know is that if you can see the neurons nicely preserved under a microscope, the information sure ought to be there.  (The idea that critical brain information is stored dynamically in spiking patterns has already been contraindicated by the evidence; dogs taken to very low (above-freezing) temperatures, sufficient to suppress brain activity, do not seem to suffer any memory loss or personality change.)
    4. Given that the proposition of interest is true, there is something drastically urgent we ought to be doing RIGHT NOW, namely cryopreserving as many as possible of the 150,000 humans per day who undergo mind-state annihilation.  (Economies of scale would very likely drive down costs by an order of magnitude or more; this is an entirely feasible goal economically and technologically, the only question is the political will.)
    \n

    Given these points: to discard the straight-line extrapolation from modern science and all the hard work that cryonicists have done to provide further distinguishing evidence, in favor of a demand for particular proof that you know cannot possibly be obtained and which you couldn't expect to see even given that the underlying proposition is true, when there are things we ought to be doing NOW given the truth of the proposition and much value will be lost by waiting; all this is indefensible as decision theory in a formal sense, and is, in an informal sense, madness.

    \n

    Which all goes to say only what Xiaoguang \"Mike\" Li observed to me some time ago:  That saying you'll only sign up for cryonics when someone demonstrates a successful revival of a cryonics patient is sort of like saying that you won't get on the airplane until after it arrives at the destination.  Only a very small amount of common sense is necessary to see this, and the objection really does demonstrate the degree to which, when most people feel an innate flinch away from an idea, they hardly feel obligated to come up with objections that make the slightest bit of sense.

    \n

    This beautiful public service announcement, with only a slight change of metaphor, could serve as a PSA for cryonics.  Stop making a big deal out of the decision.  It's not that complicated.

    \n
    \n

     

    \n

    C.  Demanding the demonstration of a working nanomachine before you'll credit the possibility of molecular nanotechnology is logically rude, specifically, a demand for particular proof.

    \n

    Given humanity's current level of technology, you can't reasonably expect a demonstration of molecular nanotechnology right now, even given that the proposition of interest is true: that molecular nanotechnology is physically possible to operate, physically possible to manufacture, and likely to be developed within some number of decades.  Even if we live in that world, you can't necessarily expect to see nanotechnology now.  And yet nonetheless the advocates of nanotechnology have gone to whatever extent possible to provide the arguments by which you could, today, figure out whether or not you live in a world where molecular nanotechnology is possible.  Eric Drexler put forth around six years of hard work to produce Nanosystems, doing as much of the basic physics as one man working more or less alone could be expected to do; and since then Robert Freitas, in particular, has been designing and simulating molecular devices, and even trying to work out simple synthesis paths, which is about as much as one person could do, and the funding hasn't really been provided for more than that.  To ignore all this hard work that has been put into providing you with such observations and arguments as can be reasonably obtained, and throw them out the window because of a demand for particular proof that you think they can't obtain and that they wouldn't be able to obtain even if the proposition at hand is true - this is not just invalid as probability theory, not just defensiveness rather than curiosity, it is logically rude.

    \n

    Although actually, of course, you can see tiny molecularly-precise machines on parade, including freely rotating gears and general assemblers - just look inside a biological cell, particularly at ATP synthase and the ribosome.  But by that power of cognitive chaos which generates the demand for unobtainable proof in the first place, there can be little doubt that, as soon as this overlooked demonstration is pointed out, the one will immediately find some clause by which to exclude it.  To actually provide the unobtainable proof would hardly be fair, after all.

    \n

    \n


    \n

    \n

    D.  It is invalid as probability theory, suboptimal as decision theory, and generally insane, to insist that you want to see someone develop a full-blown Artificial General Intelligence before you'll credit that it's worth anyone's time to work on problems of Friendly AI.

    \n

    Not precisely analogous to the above cases, but it is a demand for particular proof.  Delineating the specifics is left as an exercise to the reader.

    " } }, { "_id": "vqbieD9PHG8RRJddu", "title": "You're Entitled to Arguments, But Not (That Particular) Proof", "pageUrl": "https://www.lesswrong.com/posts/vqbieD9PHG8RRJddu/you-re-entitled-to-arguments-but-not-that-particular-proof", "postedAt": "2010-02-15T07:58:48.585Z", "baseScore": 89, "voteCount": 76, "commentCount": 229, "url": null, "contents": { "documentId": "vqbieD9PHG8RRJddu", "html": "

    Followup to Logical Rudeness

    \n
    \n

    \"Modern man is so committed to empirical knowledge, that he sets the standard for evidence higher than either side in his disputes can attain, thus suffering his disputes to be settled by philosophical arguments as to which party must be crushed under the burden of proof.\"
            -- Alan Crowe

    \n
    \n

    There's a story - in accordance with Poe's Law, I have no idea whether it's a joke or it actually happened - about a creationist who was trying to claim a \"gap\" in the fossil record, two species without an intermediate fossil having been discovered.  When an intermediate species was discovered, the creationist responded, \"Aha!  Now there are two gaps.\"

    \n

    Since I'm not a professional evolutionary biologist, I couldn't begin to rattle off all the ways that we know evolution is true; true facts tend to leave traces of themselves behind, and evolution is the hugest fact in all of biology.  My specialty is the cognitive sciences, so I can tell you of my own knowledge that the human brain looks just like we'd expect it to look if it had evolved, and not at all like you'd think it would look if it'd been intelligently designed.  And I'm not really going to say much more on that subject.  As I once said to someone who questioned whether humans were really related to apes:  \"That question might have made sense when Darwin first came up with the hypothesis, but this is the twenty-first century.  We can read the genes.  Human beings and chimpanzees have 95% shared genetic material.  It's over.\"

    \n

    Well, it's over, unless you're crazy like a human (ironically, more evidence that the human brain was fashioned by a sloppy and alien god).  If you're crazy like a human, you will engage in motivated cognition; and instead of focusing on the unthinkably huge heaps of evidence in favor of evolution, the innumerable signs by which the fact of evolution has left its heavy footprints on all of reality, the uncounted observations that discriminate between the world we'd expect to see if intelligent design ruled and the world we'd expect to see if evolution were true...

    \n

    ...instead you search your mind, and you pick out one form of proof that you think evolutionary biologists can't provide; and you demand, you insist upon that one form of proof; and when it is not provided, you take that as a refutation.

    \n

    You say, \"Have you ever seen an ape species evolving into a human species?\"  You insist on videotapes - on that particular proof.

    \n

    And that particular proof is one we couldn't possibly be expected to have on hand; it's a form of evidence we couldn't possibly be expected to be able to provide, even given that evolution is true.

    \n

    Yet it follows, illogically, that if a videotape would provide definite proof, then, likewise, the absence of a videotape must constitute definite disproof.  Or perhaps just render all other arguments void and turn the issue into a mere matter of personal opinion, with no one's opinion being better than anyone else's.

    \n

    So far as I can tell, the position of human-caused global warming (anthropogenic global warming aka AGW) has the ball.  I get the impression there's a lot of evidence piled up, a lot of people trying and failing to poke holes, and so I have no reason to play contrarian here.  It's now heavily politicized science, which means that I take the assertions with a grain of skepticism and worry - well, to be honest I don't spend a whole lot of time worrying about it, because (a) there are worse global catastrophic risks and (b) lots of other people are worrying about AGW already, so there are much better places to invest the next marginal minute of worry.

    \n

    But if I pretend for a moment to live in the mainstream mental universe in which there is nothing scarier to worry about than global warming, and a 6 °C (11 °F) rise in global temperatures by 2100 seems like a top issue for the care and feeding of humanity's future...

    \n

    Then I must shake a disapproving finger at anyone who claims the state of evidence on AGW is indefinite.

    \n

    Sure, if we waited until 2100 to see how much global temperatures increased and how high the seas rose, we would have definite proof.  We would have definite proof in 2100, however, and that sounds just a little bit way the hell too late.  If there are cost-effective things we can do to mitigate global warming - and by this I don't mean ethanol-from-corn or cap-and-trade, more along the lines of standardizing on a liquid fluoride thorium reactor design and building 10,000 of them - if there's something we can do about AGW, we need to do it now, not in a hundred years.

    \n

    When the hypothesis at hand makes time valuable - when the proposition at hand, conditional on its being true, means there are certain things we should be doing NOW - then you've got to do your best to figure things out with the evidence that we have.  Sure, if we had annual data on global temperatures and CO2 going back to 100 million years ago, we would know more than we do right now.  But we don't have that time-series data - not because global-warming advocates destroyed it, or because they were neglectful in gathering it, but because they couldn't possibly be expected to provide it in the first place.  And so we've got to look among the observations we can perform, to find those that discriminate between \"the way the world could be expected to look if AGW is true / a big problem\", and \"the way the world would be expected to look if AGW is false / a small problem\".  If, for example, we discover large deposits of frozen methane clathrates that are released with rising temperatures, this at least seems like \"the sort of observation\" we might be making if we live in the sort of world where AGW is a big problem.  It's not a necessary connection, it's not sufficient on its own, it's something we could potentially also observe in a world where AGW is not a big problem - but unlike the perfect data we can never obtain, it's something we can actually find out, and in fact have found out.

    \n

    Yes, we've never actually experimented to observe the results over 50 years of artificially adding a large amount of carbon dioxide to the atmosphere.  But we know from physics that it's a greenhouse gas.  It's not a privileged hypothesis we're pulling out of nowhere.  It's not like saying \"You can't prove there's no invisible pink unicorn in my garage!\"  AGW is, ceteris paribus, what we should expect to happen if the other things we believe are true.  We don't have any experimental results on what will happen 50 years from now, and so you can't grant the proposition the special, super-strong status of something that has been scientifically confirmed by a replicable experiment.  But as I point out in \"Scientific Evidence, Legal Evidence, Rational Evidence\", if science couldn't say anything about that which has not already been observed, we couldn't ever make scientific predictions by which the theories could be confirmed.  Extrapolating from the science we do know, global warming should be occurring; you would need specific experimental evidence to contradict that.

    \n

    We are, I think, dealing with that old problem of motivated cognition.  As Gilovich says:  \"Conclusions a person does not want to believe are held to a higher standard than conclusions a person wants to believe.  In the former case, the person asks if the evidence compels one to accept the conclusion, whereas in the latter case, the person asks instead if the evidence allows one to accept the conclusion.\"  People map the domain of belief onto the social domain of authority, with a qualitative difference between absolute and nonabsolute demands:  If a teacher tells you certain things, you have to believe them, and you have to recite them back on the test.  But when a student makes a suggestion in class, you don't have to go along with it - you're free to agree or disagree (it seems) and no one will punish you.

    \n

    And so the implicit emotional theory is that if something is not proven - better yet, proven using a particular piece of evidence that isn't available and that you're pretty sure is never going to become available - then you are allowed to disbelieve; it's like something a student says, not like something a teacher says.

    \n

    You demand particular proof P; and if proof P is not available, then you're allowed to disbelieve.

    \n

    And this is flatly wrong as probability theory.

    \n

    If the hypothesis at hand is H, and we have access to pieces of evidence E1, E2, and E3, but we do not have access to proof X one way or the other, then the rational probability estimate is the result of the Bayesian update P(H|E1,E2,E3).  You do not get to say, \"Well, we don't know whether X or ~X, so I'm going to throw E1, E2, and E3 out the window until you tell me about X.\"  I cannot begin to describe how much that is not the way the laws of probability theory work.  You do not get to screen off E1, E2, and E3 based on your ignorance of X!
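    As a toy illustration (the likelihood ratios below are mine, purely for illustration), the update simply uses the evidence you do have; the unknown X never enters:

```python
# Odds-form Bayesian update on H given E1, E2, E3; X is never observed and
# therefore never conditioned on.  Numbers are illustrative.
def posterior(prior, likelihood_ratios):
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:        # each LR = P(Ei | H) / P(Ei | ~H)
        odds *= lr
    return odds / (1 + odds)

print(posterior(0.5, [3.0, 2.0, 4.0]))  # 0.96 -- E1..E3 still move the estimate
```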

    \n

    Nor do you get to ignore the arguments that influence the prior probability of H - the standard science by which, ceteris paribus and without anything unknown at work, carbon dioxide is a greenhouse gas and ought to make the Earth hotter.

    \n

    Nor can you hold up the nonobservation of your particular proof X as a triumphant refutation.  If we had time cameras and could look into the past, then indeed, the fact that no one had ever \"seen with their own eyes\" primates evolving into humans would refute the hypothesis.  But, given that time cameras don't exist, then assuming evolution to be true we don't expect anyone to have witnessed humans evolving from apes with our own eyes, for the laws of natural selection require that this have happened far in the distant past.  And so, once you have updated on the fact that time cameras don't exist - computed P(Evolution|~Camera) - and the fact that time cameras don't exist hardly seems to refute the theory of evolution - then you obtain no further evidence by observing ~Video, i.e., P(Evolution|~Video,~Camera) = P(Evolution|~Camera).  In slogan-form, \"The absence of unobtainable proof is not even weak evidence of absence.\"  See appendix for details.

    \n

    (And while we're on the subject, yes, the laws of probability theory are laws, rather than suggestions.  It is like something the teacher tells you, okay?  If you're going to ignore the Bayesian update you logically have to perform when you see a new piece of evidence, you might as well ignore outright mathematical proofs.  I see no reason why it's any less epistemically sinful to ignore probabilities than to ignore certainties.)

    \n

    Throwing E1, E2 and E3 out the window, and ignoring the prior probability of H, because you haven't seen unobtainable proof x; or holding up the nonobservation of X as a triumphant refutation, when you couldn't reasonably expect to see X even given that the underlying theory is true; all this is more than just a formal probability-theoretic mistake.  It is logically rude.

    \n

    After all - in the absence of your unobtainable particular proof, there may be plenty of other arguments by which you can hope to figure out whether you live in a world where the hypothesis of interest is true, or alternatively false.  It takes work to provide you with those arguments.  It takes work to provide you with extrapolations of existing knowledge to prior probabilities, and items of evidence with which to update those prior probabilities, to form a prediction about the unseen.  Someone who does the work to provide those arguments is doing the best they can by you; throwing the arguments out the window is not just irrational, but logically rude.

    \n

    And I emphasize this, because it seems to me that the underlying metaphor of demanding particular proof is to say as if, \"You are supposed to provide me with a video of apes evolving into humans, I am entitled to see it with my own eyes, and it is your responsibility to make that happen; and if you do not provide me with that particular proof, you are deficient in your duties of argument, and I have no obligation to believe you.\"  And this is, in the first place, bad math as probability theory.  And it is, in the second place, an attitude of trying to be defensible rather than accurate, the attitude of someone who wants to be allowed to retain the beliefs they have, and not the attitude of someone who is honestly curious and trying to figure out which possible world they live in, by whatever signs are available.  But if these considerations do not move you, then even in terms of the original and flawed metaphor, you are in the wrong: you are entitled to arguments, but not that particular proof.

    \n

    Ignoring someone's hard work to provide you with the arguments you need - the extrapolations from existing knowledge to make predictions about events not yet observed, the items of evidence that are suggestive even if not definite and that fit some possible worlds better than others - and instead demanding proof they can't possibly give you, proof they couldn't be expected to provide even if they were right - that is logically rude.  It is invalid as probability theory, foolish on the face of it, and logically rude.

    \n

    And of course if you go so far as to act smug about the absence of an unobtainable proof, or chide the other for their credulity, then you have crossed the line into outright ordinary rudeness as well.

    \n

    It is likewise a madness of decision theory to hold off pending positive proof until it's too late to do anything; the whole point of decision theory is to choose under conditions of uncertainty, and that is not how the expected value of information is likely to work out.  Or in terms of plain common sense:  There are signs and portents, smoke alarms and hot doorknobs, by which you can hope to determine whether your house is on fire before your face melts off your skull; and to delay leaving the house until after your face melts off, because only this is the positive and particular proof that you demand, is decision-theoretical insanity.  It doesn't matter if you cloak your demand for that unobtainable proof under the heading of scientific procedure, saying, \"These are the proofs you could not obtain even if you were right, which I know you will not be able to obtain until the time for action has long passed, which surely any scientist would demand before confirming your proposition as a scientific truth.\"  It's still nuts.

    \n
    \n

     

    \n

    Since this post has already gotten long, I've moved some details of probability theory, the subtext on cryonics, the sub-subtext on molecular nanotechnology, and the sub-sub-subtext on Artificial Intelligence, into:

    \n

    Demands for Particular Proof:  Appendices.

    " } }, { "_id": "p55jWjjNXaiao9hXA", "title": "Two Challenges", "pageUrl": "https://www.lesswrong.com/posts/p55jWjjNXaiao9hXA/two-challenges", "postedAt": "2010-02-14T08:31:47.668Z", "baseScore": 20, "voteCount": 15, "commentCount": 14, "url": null, "contents": { "documentId": "p55jWjjNXaiao9hXA", "html": "

    Followup To: Play for a Cause, Singularity Institute $100k Challenge Grant

    In the spirit of informal intellectual inquiry and friendly wagering, and with an eye toward raising a bit of money for SIAI, I offer the following two challenges to the LW community.

    Challenge #1 - Bayes' Nets Skeptics' Challenge

    Many LWers seem to be strong believers in the family of modeling methods variously called Bayes' Nets, belief networks, or graphical models. These methods are the topic of two SIAI-recommended books by Judea Pearl: \"Probabilistic Reasoning in Intelligent Systems\" and \"Causality: Models, Reasoning and Inference\".

    The belief network paradigm has several attractive conceptual features. One feature is the ability of the networks to encode conditional independence relationships, which are intuitively natural and therefore attractive to humans. Often a naïve investigation of the statistical relationship between variables will produce nonsensical conclusions, and the idea of conditional independence can sometimes be used to unravel the mystery. A good example would be a data set relating to traffic accidents, which shows that red cars are more likely to be involved in accidents. But it's nearly absurd to believe that red cars are intrinsically more dangerous. Rather, red cars are preferred by young men, who tend to be reckless drivers. So the color of a car is not independent of the likelihood of a collision, but it is conditionally independent given the age and sex of the person driving the car. This relationship could be expressed by the following belief network:

    \n

    \n

    [Figure: belief network with edges YM → R and YM → A, and no edge between R and A]

    \n



    Where \"YM\" is true if the driver is a young man, \"R\" denotes whether the car is red, and \"A\" indicates an accident. The fact that there is no edge between \"R\" and \"A\" indicates that they are conditionally independent given the other nodes in the network, in this case \"YM\".
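    Here is a small sketch of that structure in Python (the probability values are invented for illustration and are not real accident statistics): the joint distribution factors as P(YM) P(R|YM) P(A|YM), so R and A are correlated marginally but become independent once YM is fixed.

        from itertools import product

        # Invented illustrative parameters for the network YM -> R, YM -> A.
        p_ym = {True: 0.2, False: 0.8}            # P(YM = young man)
        p_r_given_ym = {True: 0.3, False: 0.1}    # P(R = red | YM)
        p_a_given_ym = {True: 0.10, False: 0.02}  # P(A = accident | YM)

        def joint(ym, r, a):
            pr = p_r_given_ym[ym] if r else 1 - p_r_given_ym[ym]
            pa = p_a_given_ym[ym] if a else 1 - p_a_given_ym[ym]
            return p_ym[ym] * pr * pa

        def prob(**fixed):
            # Sum the joint over every assignment consistent with the fixed values.
            total = 0.0
            for ym, r, a in product([True, False], repeat=3):
                assign = {'ym': ym, 'r': r, 'a': a}
                if all(assign[k] == v for k, v in fixed.items()):
                    total += joint(ym, r, a)
            return total

        # Marginally, red cars look more dangerous...
        print(prob(a=True, r=True) / prob(r=True))    # P(A | R)  -- higher
        print(prob(a=True, r=False) / prob(r=False))  # P(A | ~R) -- lower

        # ...but conditioning on YM screens R off from A entirely.
        print(prob(a=True, r=True, ym=True) / prob(r=True, ym=True))    # 0.10
        print(prob(a=True, r=False, ym=True) / prob(r=False, ym=True))  # 0.10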

    A key property of the belief network scheme of constructing probability distributions is that they can be used to achieve a good balance between expressive power and model complexity. Consider the family of probability distributions over N variables that can take on K different values. Then the most naïve model is just an N-dimensional array of numbers P(X1, X2, ... XN), requiring K^N parameters to specify. The number of parameters required for a belief network can be drastically lower, if many of the nodes are conditionally independent of one another. For example, if all but one node in the graph has exactly one parent, then the number of parameters required is basically NK^2 (N conditional distributions of the form P(Xchild|Xparent)). This is clearly a drastic improvement for reasonable values of K and N. Even though the model is much less complex, every variable is still related to every other variable - knowing the value of any Xi will change the probability of any Xj, unless some other intervening node is also known.
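    A quick back-of-the-envelope comparison of the two parameter counts, with N and K chosen arbitrarily for illustration:

        # Parameter counts for a joint distribution over N variables, K values each.
        N, K = 20, 3

        full_table = K ** N       # naive N-dimensional table
        chain_like = N * K ** 2   # roughly N CPTs of the form P(child | one parent)

        print(full_table)   # 3486784401
        print(chain_like)   # 180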

    Another attractive feature of the belief network paradigm is the existence of a fast inference algorithm for updating the probability distributions of some unobserved variables in response to evidence. For example, given that a patient has the feature \"smoker=T\" and \"chest pain=T\", the inference algorithm can rapidly calculate the probability distribution of the unobserved variable \"heart disease\". Unfortunately, there is a big catch - this fast inference algorithm works only for belief networks that are singly connected (polytrees). If the network contains undirected loops, exact inference is much more expensive: with the standard junction-tree approach, the cost is exponential in the size of the largest clique of the triangulated graph.

    In spite of these benefits, belief networks have some serious drawbacks as well. One major flaw is the difficulty of learning networks from data. Here the goal is to obtain a belief network specifying a distribution P(x) that assigns a very high likelihood to an empirical data set X={X1,X2...XM}, where now each X is an N-dimensional vector and there are M samples. The difficulty of learning belief networks stems from the fact that the number of graphs representing relationships between N variables is exponential in N.

    There is a second, more subtle flaw in the belief network paradigm. A belief network is defined based on a graph which has one node for each data variable. This is fine, if you know what the correct variables are. But very often the correct variables are unknown; indeed, finding the correct variables is probably the key problem in learning. Once you know that the critical variables are mass, force, and acceleration, it is almost trivial to determine the relation between them. As an example, consider a system composed of four variables A, B, C, and X. The unknown relationship between these variables is X~F(A+B+C), where F is some complex distribution that depends on a single parameter. Now, a naïve representation will yield a belief network that looks like the following:

    \n

    [Figure: naïve belief network in which A, B, and C are each parents of X]

    \n

    If we reencode the variables as A'=A, B'=A'+B, C'=B'+C, then we get the following graph:

    \n

     

    \n

    [Figure: re-encoded belief network in which X depends only on the single parent C']

    \n

     

    \n

    If you count the number of links this might not seem like much of an improvement. But the number of links is not the important factor; the key factor is the required number of entries in the conditional probability tables. In the first graph, the CPT for X requires O(K^4) entries, where K is again the number of values A, B, and C can take on. But in the second graph the number of entries is O(K^2). So clearly the second graph should be preferred, but the basic belief network learning algorithms provide no way of obtaining it.
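    The counting behind that claim, as a sketch (K is an arbitrary illustrative number of values per variable):

        K = 10  # arbitrary illustrative number of values per variable

        # First graph: X has parents A, B and C, so its CPT covers K^3 parent
        # configurations, each holding a distribution over X's K values.
        cpt_x_naive = (K ** 3) * K      # O(K^4) -> 10000

        # Second graph: X depends only on the single re-encoded parent C'.
        cpt_x_reencoded = K * K         # O(K^2) -> 100

        print(cpt_x_naive, cpt_x_reencoded)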

    On the basis of these remarks I submit the following qualified statement: while the belief network paradigm is mathematically elegant and intuitively appealing, it is NOT very useful for describing real data.

    The challenge is to prove the above claim wrong. This can be done as follows. First, find a real data set (see below for definition of the word \"real\"). Construct a belief network model of the data, in any way you choose. Then post a link to the data set, and I will then attempt to model it using alternative methods of my own choosing (probably Maximum Entropy or a variant thereof). We will then compare the likelihoods achieved by the two methods; higher likelihood wins. If there is ambiguity concerning the validity of the result, then we will compress the data set using compression algorithms based on the models and compare compression rates.  Constructing a compressor from a statistical model is essentially a technical exercise; I can provide a Java implementation of arithmetic encoding. The compression rates must take into account the size of the compressor itself.
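    The link between the likelihood comparison and the compression comparison is that an ideal entropy coder spends about -log2 p(x) bits on an outcome the model assigns probability p(x), so total code length is just negative log-likelihood measured in bits. A minimal sketch, assuming a hypothetical model exposed as a per-record probability function:

        import math

        def ideal_code_length_bits(model_prob, records):
            # Bits an ideal arithmetic coder would need under the model:
            # the sum of -log2 P(record) over the data set.
            return sum(-math.log2(model_prob(r)) for r in records)

        # Toy usage: a hypothetical model assigning probability 1/8 to every record.
        records = range(1000)
        uniform_model = lambda r: 1.0 / 8
        print(ideal_code_length_bits(uniform_model, records))  # 3000.0 bits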

    The challenge hinges on the meaning of the word \"real data\". Obviously it is trivial to construct a data set for which a belief network is the best possible model, simply by building a network and then sampling from it. So my requirement is that the data set be non-synthetic. Other than that, there are no limitations - it can be image data, text, speech, machine learning sets, NetFlix, social science databases, etc.

    To make things interesting, the loser of the challenge will donate $100 (more?) to SIAI. Hopefully we can agree on the challenge (but not necessarily resolve it) before the Feb. 28th deadline for matching donations. In principle I will accept challenges until I lose so many that my wallet starts to hurt.

    Challenge #2: Compress the GSS

    The General Social Survey is a widely used data set in the social sciences. Most analyses based on the GSS tend to use standard statistical tools such as correlation, analysis of variance, and so on. These kinds of analysis run into the usual problems associated with statistics - how do you choose a prior, how do you avoid overfitting, and so on.

    I propose a new way of analyzing the data in the GSS - a compression-based challenge as outlined above. To participate in the challenge, you build a model of the data contained in the GSS using whatever methods appeal to you. Then you connect the model to an encoding method and compress the dataset. Whoever achieves the best compression rate, taking into account the size of the compressor itself, is the winner.

    The GSS contains data about a great variety of economic, cultural, psychological, and educational factors. If you are a social scientist with a theory of how these factors relate, you can prove your theory by transforming it into a statistical model and then into a compression program, and demonstrating better compression results than rival theories.

    If people are interested, I propose the following scheme. There is a $100 entry fee, of which half will go to SIAI. The other half goes to the winner. Again, hopefully we can agree on the challenge before the Feb. 28th deadline.

    " } }, { "_id": "rYoS9mdcypeQi3ZG8", "title": "Shock Levels are Point Estimates", "pageUrl": "https://www.lesswrong.com/posts/rYoS9mdcypeQi3ZG8/shock-levels-are-point-estimates", "postedAt": "2010-02-14T04:31:25.506Z", "baseScore": 9, "voteCount": 14, "commentCount": 11, "url": null, "contents": { "documentId": "rYoS9mdcypeQi3ZG8", "html": "

    This is a post from my blog, Space and Games. Michael Vassar has requested that I repost it here. I thought about revising it to remove the mind projection fallacy, but instead I left it in for you to find.

    \n

    Eliezer Yudkowsky1999 famously categorized beliefs about the future into discrete \"shock levels.\" Michael Anissimov later wrote a nice introduction to future shock levels. Higher shock levels correspond to belief in more powerful and radical technologies, and are considered more correct than lower shock levels. Careful thinking and exposure to ideas will tend to increase one’s shock level.

    \n

    If this is really true, and I think it is, shock levels are an example of human insanity. If you ask me to estimate some quantity, and track how my estimates change over time, you should expect it to look like a random walk if I’m being rational. Certainly I can’t expect that my estimate will go up in the future. And yet shock levels mostly go up, not down.

    \n

    I think this is because people model the future with point estimates rather than probability distributions. If, when we try to picture the future, we actually imagine the single outcome which seems most likely, then our extrapolation will include every technology to which we assign a probability above 50%, and none of those that we assign a probability below 50%. Since most possible ideas will fail, an ignorant futurist should assign probabilities well below 50% to most future technologies. So an ignorant futurist’s point estimate of the future will indeed be much less technologically advanced than that of a more knowledgeable futurist.

    \n

    For example, suppose we are considering four possible future technologies: molecular manufacturing (MM), faster-than-light travel (FTL), psychic powers (psi), and perpetual motion (PM). If we ask how likely these are to be developed in the next 100 years, the ignorant futurist might assign a 20% probability to each. A more knowledgeable futurist might assign a 70% probability to MM, 8% for FTL, and 1% for psi and PM. If we ask them to imagine a plethora of possible futures, their extrapolations might be, on average, equally radical and shocking. But if they instead generate point estimates, the ignorant futurist would round the 20% probabilities down to 0, and say that no new technologies will be invented. The knowledgeable futurist would say that we’ll have MM, but no FTL, psi, or PM. And then we call the ignorant person “shock level 0” and the knowledgeable person “shock level 3.”
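    Making the arithmetic explicit with the probabilities above: both futurists expect 0.8 technologies on average, but thresholding each probability at 50% to form a single most-likely future yields very different point estimates.

        ignorant = {'MM': 0.20, 'FTL': 0.20, 'psi': 0.20, 'PM': 0.20}
        knowledgeable = {'MM': 0.70, 'FTL': 0.08, 'psi': 0.01, 'PM': 0.01}

        for name, beliefs in [('ignorant', ignorant), ('knowledgeable', knowledgeable)]:
            expected = sum(beliefs.values())                             # mean number of technologies
            point_estimate = [t for t, p in beliefs.items() if p > 0.5]  # single most-likely future
            print(name, expected, point_estimate)

        # ignorant      0.8  []       -> looks like shock level 0
        # knowledgeable 0.8  ['MM']   -> looks like shock level 3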

    \n

    So future shock levels exist because people imagine a single future instead of a plethora of futures. If futurists imagined a plethora of futures, then ignorant futurists would assign a low probability to many possible technologies, but would also assign a relatively high probability to many impossible technologies, and there would be no simple relationship between a futurist’s knowledge level and his or her expectation of the overall amount of technology that will exist in the future, although more knowledgeable futurists would be able to predict which specific technologies will exist. Shock levels would disappear.

    \n

    I do think that shock level 4 is an exception. SL4 has to do with the shocking implications of a single powerful technology (superhuman intelligence), rather than a sum of many technologies.

    " } }, { "_id": "EbrThdEbHi7fz63vq", "title": "How Much Should We Care What the Founding Fathers Thought About Anything?", "pageUrl": "https://www.lesswrong.com/posts/EbrThdEbHi7fz63vq/how-much-should-we-care-what-the-founding-fathers-thought", "postedAt": "2010-02-11T00:38:43.590Z", "baseScore": -6, "voteCount": 35, "commentCount": 36, "url": null, "contents": { "documentId": "EbrThdEbHi7fz63vq", "html": "

    A while back I saw an interesting discussion between U.S. Supreme Court Justices Stephen Breyer and Antonin Scalia. Scalia is well known for arguing that the way to deal with Constitutional questions is to use the plain meaning of the words in the Constitutional text as they would have been understood at the time and place they were written.* Any other approach, he argues, would amount to nothing more than an unelected judge taking his or her personal political and moral views and making them into the highest law of the land. In his view if a judge is not taking the answer out of the text, then that judge must be putting the answer into the text, and no judge should be allowed to do that.** One illustrative example that comes up in the exchange is the question of whether and when it's OK to cite foreign law in cases involving whether a particular punishment is \"Cruel and Unusual\" and hence unconstitutional. In Scalia's view, the right way to approach the question would be to try as best one could to figure out what was meant by the words \"cruel\" and \"unusual\" in 18th century England, and what contemporary foreign courts have to say cannot possibly inform that question. He also opposes (though somewhat less vigorously) the idea that decisions ought to take into account changes over time in what is considered cruel and unusual in America: he thinks that if people have updated their opinions about such matters, they are free to get their political representatives to pass new laws or to amend the Constitution***, but short of that it is simply not the judge's job to take that sort of thing into account.

    I don't think it's an unfair caricature to describe Scalia's position as follows:

    \n
      \n
    1. It would be a bad thing for a bunch of unaccountable, unelected judges to have their own opinions be made the supreme law of the land.
    2. If there is no absolute rule for judges to stick to, then they end up in some sense making up the \"Constitution\" themselves.
    3. Therefore absolute Originalism is not only correct, but is the only position that distinguishes between \"judge\" and \"dictator.\"
    \n

    Claim #1 is true and Claim #2 is sort of true, but Claim #3 is clearly not true. What we're really faced with here is nothing more than a garden variety trade-off between two important values: trying to avoid disruptive or arbitrary rule by judges on the one hand and progress on the other. There simply is no absolute value at stake to which all other values must be subordinated. Does anyone really think that there is no real estate between \"live under the exact framework set up by a bunch of very flawed 18th century white dudes forever\" and \"have tyrannical judges just make up whatever the heck they feel like and call it the Constitution?\" It's worthwhile to argue about where on this spectrum to come down, but the idea that this problem can be made to go away as long as you have a good 18th century English dictionary is pretty delusional.

    This necessarily means that what judges do is not merely to \"interpret\" the Constitution. In this sense Scalia is right. The opponents of his position aren't really offering an alternative way to interpret the Constitution, and it's disingenuous of Breyer when he insists that that's what he's doing. That's why Breyer seems so unimpressive in that exchange. He can't come out and say that what he's really doing is casting about for guidance about how to deal with a hard question while not departing too much from what's written in the Constitution and in precedent, and that looking to see what some foreigners did makes sense as a (small) part of that effort. He has to pretend that he merely has a different method of interpretation, and Scalia rightly calls him on it. What I think Breyer is really saying, but can't come right out and say, is that what he's really doing is deciding how much to listen to the Constitution and how much to change it. This is not the same thing as saying \"take whatever the heck Stephen Breyer feels like saying and call it the Constitution,\" but he can't make that (correct) argument without conceding that he's not really exclusively an interpreter of a text. Though the parallel is not perfect, the exchange has something of a \"Doubting Thomas vs. Pious Pete\" vibe.

    One implication of all this is that there's no good reason for the job of Constitutional arbiter to fall exclusively to judges. If what's really going on is not just the interpretation of the legal texts, but also a process by which some trusted, relatively non-political body gropes their way to the solutions to problems, then it may make sense to have a set of appointed \"Wise Elders,\" only some of whom are judges, chosen to perform this function in broad daylight. Then we could get rid of the silly system we have now where everyone sort of pretends that the qualifications for a Supreme Court Justice are primarily technical legal expertise****, and in which it becomes a scandal whenever someone digs up a comment that a nominee once made that suggests that their personal views about something or other might affect how they rule on the Court.

    *One could argue about whether this is possible, as there may have been some intentional \"constructive ambiguity\" in choosing those particular words in the first place.
    **It is worth noting that Originalist arguments are generally used in support of the positions of contemporary conservatives, so it's not always clear when such arguments are being made in good faith and when they're a fig leaf.
    ***The Constitution can be amended, but the conditions for doing so (which are of course set up by the Constitution) ensure that it hardly ever is, so this is mostly irrelevant.
    ****U.S. Supreme Court judges serve both as the highest regular court in the country and as the Constitutional arbiter. Technical legal expertise is of course central to the first function.

    \n

     

    \n

    Update @11:03 AM: Some commenters have said that they don't think this is an appropriate topic for LW, so I've tried in the comments to explain better what I think the LW-relevant point is. A bunch of commenters objected to the Americocentric tone of the post, such as using the term \"we\" as if everyone on LW was an American. I will try to be careful to avoid doing that from now on.

    " } }, { "_id": "6SSv2KC3mic5drr6A", "title": "My Fundamental Question About Omega", "pageUrl": "https://www.lesswrong.com/posts/6SSv2KC3mic5drr6A/my-fundamental-question-about-omega", "postedAt": "2010-02-10T17:26:56.423Z", "baseScore": 11, "voteCount": 33, "commentCount": 153, "url": null, "contents": { "documentId": "6SSv2KC3mic5drr6A", "html": "

    Omega has appeared to us inside of puzzles, games, and questions. The basic concept behind Omega is that it is (a) a perfect predictor and (b) not malevolent. The practical implications behind these points are that (a) it doesn't make mistakes and (b) you can trust its motives in the sense that it really, honestly doesn't care about you. This bugger is True Neutral and is good at it. And it doesn't lie.

    \n

    A quick peek at Omega's presence on LessWrong reveals Newcomb's problem and Counterfactual Mugging as the most prominent examples. For those that missed them, other articles include Bead Jars and The Lifespan Dilemma.

    \n

    Counterfactual Mugging was the most annoying for me, however, because I thought the answer was completely obvious, and apparently it isn't. Instead of going around in circles with a complicated scenario, I decided to find a simpler version that reveals what I consider to be the fundamental confusion about Omega.

    \n

    Suppose that Omega, as defined above, appears before you and says that it predicted you will give it $5. What do you do? If Omega is a perfect predictor, and it predicted you will give it $5... will you give it $5?

    \n

    The answer to this question is probably obvious but I am curious if we all end up with the same obvious answer.

    \n

    The fundamental problem behind Omega is how to resolve a claim by a perfect predictor that includes a decision you and you alone are responsible for making. This invokes all sorts of assumptions about choice and free-will, but in terms of phrasing the question these assumptions do not matter. I care about how you will act. What action will you take? However you label the source of these actions is your prerogative. The question doesn't care how you got there; it cares about the answer.

    \n

    My answer is that you will give Omega $5. If you don't, Omega wouldn't have made the prediction. If Omega made the prediction AND you don't give $5, then the definition of Omega is flawed and we have to redefine Omega.

    \n

    A possible objection to the scenario is that the prediction itself is impossible to make. If Omega is a perfect predictor it follows that it would never make an impossible prediction and the prediction \"you will give Omega $5\" is impossible. This is invalid, however, as long as you can think of at least one scenario where you have a good reason to give Omega $5. Omega would show up in that scenario and ask for $5.

    \n

    If this scenario includes a long argument about why you should give it $5, so be it. If it means Omega gives you $10 in return, so be it. But it doesn't matter for the sake of the question. It matters for the answer, but the question doesn't need to include these details because the underlying problem is still the same. Omega made a prediction and now you need to act. All of the excuses and whining and arguing will eventually end with you handing Omega $5. Omega's prediction will have included all of this bickering.

    \n

    This question is essentially the same as saying, \"If you have a good reason to give Omega $5 then you will give Omega $5.\" It should be a completely uninteresting, obvious question. It holds some implications on other scenarios involving Omega but those are for another time. Those implications should have no bearing on the answer to this question.

    " } }, { "_id": "Zp88RjceAbsBKgkWw", "title": "The Craigslist Revolution: a real-world application of torture vs. dust specks OR How I learned to stop worrying and create one billion dollars out of nothing", "pageUrl": "https://www.lesswrong.com/posts/Zp88RjceAbsBKgkWw/the-craigslist-revolution-a-real-world-application-of", "postedAt": "2010-02-10T03:15:07.368Z", "baseScore": 53, "voteCount": 55, "commentCount": 227, "url": null, "contents": { "documentId": "Zp88RjceAbsBKgkWw", "html": "
    \n

    ...this is the first crazy idea I've ever heard for generating a billion dollars out of nothing that could actually work. I mean, ever.  -Eliezer Yudkowsky

    \n
    \n

    We can reasonably debate torture vs. dust specks when it is one person being tortured versus 3^^^3 people being subjected to motes of dust.

    \n

    However, there should be little debate when we are comparing the torture of one person to the minimal suffering of mere millions of people. I propose a way to generate approximately one billion dollars for charity over five years: The Craigslist Revolution.

    \n

    In 2006, Craigslist's CEO Jim Buckmaster said that if enough users told them to \"raise revenue and plow it into charity\", they would consider doing it. I have more recently emailed Craig Newmark, and he indicated that they remain receptive to the idea if that's what the users want.

    \n

    A simple text advertising banner at the top of the Craigslist home or listing pages would generate enormous amounts of revenue. They could put a large \"X\" next to the ad, allowing you to permanently close it. There seems to be little objection to this idea. The optional banner is harmless, and a billion dollars could be enough to dramatically improve the lives of millions or make a serious impact in the causes we take seriously around here. As a moral calculus, the decision seems a no brainer. It's possible that some or many dollars would support bad charities, but the marginal impact of supporting some truly good charities makes the whole thing worthwhile.

    \n

    I don't have access to Craigslist's detailed traffic data, but I think one billion USD over five years is a reasonable estimate for a single optional banner ad. With 20 billion pageviews a month, a Google AdWords banner would bring in about 200 million dollars a year. Over five years that alone comes to about a billion dollars. With employees selling the advertising rather than Google, that number could very well be multiplied. An extremely conservative lower bound for the amount of additional revenue that could be trivially generated over five years would be $100 million.
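    Working the estimate backwards from the figures above (the only assumption is that revenue scales linearly with pageviews), 200 million dollars a year on 20 billion pageviews a month implies an effective rate of well under a dollar per thousand views:

        pageviews_per_month = 20e9
        annual_revenue_estimate = 200e6  # USD, the rough figure quoted above

        views_per_year = pageviews_per_month * 12
        implied_rpm = annual_revenue_estimate / views_per_year * 1000  # revenue per 1000 views

        print(round(implied_rpm, 2))        # ~0.83 USD per thousand pageviews
        print(annual_revenue_estimate * 5)  # ~1e9 USD over five years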

    \n

    I'm very open to other ideas, but I think the best way to assemble a critical mass of Craigslist users is via a Facebook fan page. Facebook makes it very easy to advertise Facebook pages so we can do viral marketing as well as paying Facebook to direct people to our page.

    \n

    50,000 users would surely count as a critical mass, meaning that each member of the Facebook page effectively created $20,000 for charity. I don't think there has been any time in history where a single click had the potential to do so much good, and the disbelief that this is possible is the main thing that our viral campaign would have to overcome. After the Facebook fan page got beyond a certain number of users, we could more aggressively take the campaign to Twitter and email.

    \n

    Are there any social media marketers in the house? The first step is deciding what to call the Facebook page; it's limited to 75 characters.

    \n

    It's time to shut up and multiply. I will match the first $250 donated towards the advertising budget for this, more next month depending on my personal finances. If anyone independently wealthy is reading this, $20,000 is probably enough to get the critical mass of users this week.

    \n

    I welcome all of your criticism, especially as far as the mechanics of actually making this happen. As far as how to optimally distribute money to charity, that is very much an unsolved problem, but I think it's one that we can mostly worry about when we get that far. I also expect Craig and Jim to take a leadership role as far as the distribution of the money goes.

    \n

    Also see previous discussion.

    " } }, { "_id": "Ea8pt2dsrS6D4P54F", "title": "Shut Up and Divide?", "pageUrl": "https://www.lesswrong.com/posts/Ea8pt2dsrS6D4P54F/shut-up-and-divide", "postedAt": "2010-02-09T20:09:06.208Z", "baseScore": 123, "voteCount": 99, "commentCount": 276, "url": null, "contents": { "documentId": "Ea8pt2dsrS6D4P54F", "html": "

    During a recent discussion with komponisto about why my fellow LWers are so interested in the Amanda Knox case, his answers made me realize that I had been asking the wrong question. After all, feeling interest or even outrage after seeing a possible case of injustice seems quite natural, so perhaps a better question to ask is why am I so uninterested in the case.

    \n

    Reflecting upon that, it appears that I've been doing something like Eliezer's \"Shut Up and Multiply\", except in reverse. Both of us noticed the obvious craziness of scope insensitivity and tried to make our emotions work more rationally. But whereas he decided to multiply his concern for individuals human beings by the population size to an enormous concern for humanity as a whole, I did the opposite. I noticed that my concern for humanity is limited, and therefore decided that it's crazy to care much about random individuals that I happen to come across. (Although I probably haven't consciously thought about it in this way until now.)

    \n

    The weird thing is that both of these emotional self-modification strategies seem to have worked, at least to a great extent. Eliezer has devoted his life to improving the lot of humanity, and I've managed to pass up news and discussions about Amanda Knox without a second thought. It can't be the case that both of these ways to change how our emotions work are the right thing to do, but the apparent symmetry between them seems hard to break.

    \n

    What ethical principles can we use to decide between \"Shut Up and Multiply\" and \"Shut Up and Divide\"? Why should we derive our values from our native emotional responses to seeing individual suffering, and not from the equally human paucity of response at seeing large portions of humanity suffer in aggregate? Or should we just keep our scope insensitivity, like our boredom?

    \n

    And an interesting meta-question arises here as well: how much of what we think our values are, is actually the result of not thinking things through, and not realizing the implications and symmetries that exist? And if many of our values are just the result of cognitive errors or limitations, have we lived with them long enough that they've become an essential part of us?

    " } }, { "_id": "bYxqxBxMKyFrNFyJk", "title": "Common Errors in History", "pageUrl": "https://www.lesswrong.com/posts/bYxqxBxMKyFrNFyJk/common-errors-in-history", "postedAt": "2010-02-09T19:27:22.781Z", "baseScore": 2, "voteCount": 31, "commentCount": 47, "url": null, "contents": { "documentId": "bYxqxBxMKyFrNFyJk", "html": "

    I bought a copy of Common Errors in History, which someone mentioned recently on LW.  There were no copies on Amazon or other bookselling sites, but I found a copy on Ebay.  No wonder it was hard to get - it's a 24-page pamphlet that was printed once, in 1945, by \"The Historical Association,\" London.

    \n

    I tried to find some common failures of rationality underlying the \"common errors\" listed.  This is what I concluded:

    \n

    English students in the mid-20th century learned a lot of history.

    \n

    This booklet is full of statements such as, \"The facts relating to the Corn Laws [of 1815-1849] are more often than not mis-stated in school examination papers,\" and, \"The blockade of Brest and Toulon [during the Napoleonic wars] is usually misunderstood.\"  My history lessons consisted primarily of repeatedly learning about the American Revolution and making turkeys or pilgrim hats out of colored cardboard.

    \n

    The English sincerely apologize for their history.

    \n

    In other countries, textbook authors try to make their own countries look good.  In England, that would seem gauche.  The entries on \"Religion in the New England Colonies\", \"The Causes of the American War of Independence\", \"The First Chinese War, 1839-42\",  \"Gladstone and the Turks\", and \"The Manchurian Crisis, 1931-32\" complain that British textbook accounts place all of the blame on Britain.

    \n

    History is simplified in order to assign blame and credit.

    \n

    In many of the 20 entries, notably \"The Dissolution of the Monasteries and Education\", \"Religion in the New England Colonies\", \"The Enclosure Movement\", \"The Causes of the American War of Independence\", \"The Great Trek\", \"The First Chinese War\", \"The Elementary Education Act\", and \"The Manchurian Crisis\", the tract alleges that standard accounts are simplified; and they appear to be simplified in ways that allow a simple causal summary, preferably with one person, side, or act of legislation to receive credit or blame.

    \n

    Not always.  \"The Great Trek\" says that the Boers' departure is usually explained as due to their [blameworthy] indignation that the British had freed their slaves; whereas in fact they had a variety of different, equally blameworthy, reasons for leaving.  And the entry on \"Bismarck's Alliances\" says that the textbook account is overly complex in that it introduces a second treaty that did not exist.

    \n

    This is the only general principle I could extract from the book, so it may just be a statistical accident.

    \n

     

    \n

    If anyone would like a copy of the book, send me an email at gmail. But it's very boring.

    \n

     

    " } }, { "_id": "8bWbNwiSGbGi9jXPS", "title": "Epistemic Luck", "pageUrl": "https://www.lesswrong.com/posts/8bWbNwiSGbGi9jXPS/epistemic-luck", "postedAt": "2010-02-08T00:02:50.262Z", "baseScore": 109, "voteCount": 91, "commentCount": 133, "url": null, "contents": { "documentId": "8bWbNwiSGbGi9jXPS", "html": "

    Who we learn from and with can profoundly influence our beliefs. There's no obvious way to compensate.  Is it time to panic?

    \n

    During one of my epistemology classes, my professor admitted (I can't recall the context) that his opinions on the topic would probably be different had he attended a different graduate school.

    \n

    What a peculiar thing for an epistemologist to admit!

    \n

    Of course, on the one hand, he's almost certainly right.  Schools have their cultures, their traditional views, their favorite literature providers, their set of available teachers.  These have a decided enough effect that I've heard \"X was a student of Y\" used to mean \"X holds views basically like Y's\".  And everybody knows this.  And people still show a distinct trend of agreeing with their teachers' views, even the most controversial - not an unbroken trend, but still an obvious one.  So it's not at all unlikely that, yes, had the professor gone to a different graduate school, he'd believe something else about his subject, and he's not making a mistake in so acknowledging...

    \n

    But on the other hand... but... but...

    \n

    But how can he say that, and look so undubiously at the views he picked up this way?  Surely the truth about knowledge and justification isn't correlated with which school you went to - even a little bit!  Surely he knows that!

    \n

    And he does - and so do I, and it doesn't stop it from happening.  I even identified a quale associated with the inexorable slide towards a consensus position, which made for some interesting introspection, but averted no change of mind.  Because what are you supposed to do - resolutely hold to whatever intuitions you walked in with, never mind the coaxing and arguing and ever-so-reasonable persuasions of the environment in which you are steeped?  That won't do, and not only because it obviates the education.  The truth isn't anticorrelated with the school you go to, either!

    \n

    Even if everyone collectively attempted this stubbornness only to the exact degree needed to remove the statistical connection between teachers' views and their students', it's still not truth-tracking.  An analogy: suppose you give a standardized English language test, determine that Hispanics are doing disproportionately well on it, figure out that this is because many speak Romance languages and do well with Latinate words, and deflate Hispanic scores to even out the demographics of the test results.  This might give you a racially balanced outcome, but on an individual level, it will unfairly hurt some monolingual Anglophone Hispanics, and help some Francophone test-takers - it will not do as much as you'd hope to improve the skill-tracking ability of the test.  Similarly, flattening the impact of teaching on student views won't salvage truth-tracking of student views as though this trend never existed; it'll just yield the same high-level statistics you'd get if that bias weren't operating.

    \n

    Lots of biases still live in your head doing their thing even when you know about them.  This one, though, puts you in an awfully weird epistemic situation.  It's almost like the opposite of belief in belief - disbelief in belief.  \"This is true, but my situation made me more prone than I should have been to believe it and my belief is therefore suspect.  But dang, that argument my teacher explained to me sure was sound-looking!  I must just be lucky - those poor saps with other teachers have it wrong!  But of course I would think that...\"

    \n

    It is possible, to an extent, to reduce the risk here - you can surround yourself with cognitively diverse peers and teachers, even if only in unofficial capacities.  But even then, who you spend the most time with, whom you get along with best, whose style of thought \"clicks\" most with yours, and - due to competing biases - whoever agrees with you already will have more of an effect than the others.  In practice, you can't sit yourself in a controlled environment and expose yourself to pure and perfect argument and evidence (without allowing accidental leanings to creep in via the order in which you read it, either).

    \n

    I'm not even sure if it's right to assign a higher confidence to beliefs that you happen to have maintained - absent special effort - in contravention of the general agreement.  It seems to me that people have trains of thought that just seem more natural to them than others.  (Was I the only one disconcerted by Eliezer announcing high confidence in Bayesianism in the same post as a statement that he was probably \"born that way\"?)  This isn't even a highly reliable way for you to learn things about yourself, let alone the rest of the world: unless there's a special reason your intuitions - and not those of people who think differently - should be truth-tracking, these beliefs are likely to represent where your brain just happens to clamp down really hard on something and resist group pressure and that inexorable slide.

    " } }, { "_id": "ZXaRHHLsxaTTQQsZb", "title": " A survey of anti-cryonics writing ", "pageUrl": "https://www.lesswrong.com/posts/ZXaRHHLsxaTTQQsZb/a-survey-of-anti-cryonics-writing", "postedAt": "2010-02-07T23:26:52.715Z", "baseScore": 122, "voteCount": 98, "commentCount": 326, "url": null, "contents": { "documentId": "ZXaRHHLsxaTTQQsZb", "html": "

    (This was originally a link to a post on my blog, A survey of anti-cryonics writing. Eliezer asked me to include the entire text of the article here.)

    \n

    For its advocates, cryonics offers almost eternal life. To its critics, cryonics is pseudoscience; the idea that we could freeze someone today in such a way that future technology might be able to re-animate them is nothing more than wishful thinking on the desire to avoid death. Many who battle nonsense dressed as science have spoken out against it: see for example Nano Nonsense and Cryonics, a 2001 article by celebrated skeptic Michael Shermer; or check the Skeptic's Dictionary or Quackwatch entries on the subject, or for more detail read the essay Cryonics–A futile desire for everlasting life by \"Invisible Flan\".

    \n

    That it seems so makes me sad, because to my naive eyes it seems like it might work and I would quite like to live forever, but I know that I don't know enough to judge. The celebrated Nobel prize winning physicist Richard Feynman tells a story of a US general who spoke to him at a party and explained that one big challenge in desert warfare is keeping the tanks fuelled given the huge distances the fuel has to travel. What would really help, the general said, would be if boffins like Feynman could invent a sort of engine that was powered by sand. On this issue, I'm in the same position as the general; in the same way as a tank fuelled by sand seems plausible enough to him, it makes sense to me to imagine that however your brain stores information it probably has something to do with morphology and chemistry, so there's a good chance it might not evaporate right away at the instant of legal death, and that freezing might be a way to keep the information there long enough for future societies to extract it with their future-technology scanning equipment.

    \n

    And of course the pro-cryonics people have written reams and reams of material such as Ben Best's Scientific Justification of Cryonics Practice on why they think this is exactly as plausible as I might think, and going into tremendous technical detail setting out arguments for its plausibility and addressing particular difficulties. It's almost enough to make you want to sign up on the spot.

    \n

    Except, of course, that plenty of totally unscientific ideas are backed by reams of scientific-sounding documents good enough to fool non-experts like me. Backed by the deep pockets of the oil industry, global warming denialism has produced thousands of convincing-sounding arguments against the scientific consensus on CO2 and AGW. Thankfully in that instance we have blogs like Tim Lambert's Deltoid, RealClimate, and many others tracking the various ways that the denialists mislead, whether through cherry-picking evidence, misleading quotes from climate scientists, or outright lies. Their hard work means that denialists can barely move or speak without someone out there checking what they have to say against science's best understanding and pointing out the misrepresentations and discrepancies. So before I pony up my £25 a month to sign up to cryonics life insurance, I want to read the Deltoid of cryonics - the articles that take apart what cryonics advocates write about what they do and really go into the scientific detail on why it doesn't hang together.

    \n

    Here's my report on what I've found so far.

    \n

    Nano Nonsense and Cryonics goes for the nitty-gritty right away in the opening paragraph:

    \n
    To see the flaw in this system, thaw out a can of frozen strawberries. During freezing, the water within each cell expands, crystallizes, and ruptures the cell membranes. When defrosted, all the intracellular goo oozes out, turning your strawberries into runny mush. This is your brain on cryonics.
    \n

    This sounds convincing, but doesn't address what cryonicists actually claim. Ben Best, President and CEO of the Cryonics Institute, replies in the comments:

    \n
    Strawberries (and mammalian tissues) are not turned to mush by freezing because water expands and crystallizes inside the cells. Water crystallizes in the extracellular space because more nucleators are found extracellularly. As water crystallizes in the extracellular space, the extracellular salt concentration increases causing cells to lose water osmotically and shrink. Ultimately the cell membranes are broken by crushing from extracellular ice and/or high extracellular salt concentration. [...] Cryonics organizations use vitrification perfusion before cooling to cryogenic temperatures. With good brain perfusion, vitrification can reduce ice formation to negligible amounts.
    \n

    Best goes on to point out that the paragraph I quote is Shermer's sole attempt to directly address the scientific claims of cryonics; once the opening paragraph has dispensed with the technical nitty gritty, the rest of the piece argues in very general terms about \"[blind] optimistic faith in the illimitable power of science\" and other such arguments. Shermer received many other responses from cryonics advocates; here's one that he considered \"very well reasoned and properly nuanced\".

    \n

    The Quackwatch entry takes us little further; it quotes the debunked Shermer argument above, talks about the cost (they all talk about the cost and a variety of other issues, but here I'm focussing specifically on the issue of technical plausibility), and links to someone else making the same already-answered assertions about freezing damage.

    \n

    The Skeptic's Dictionary entry is no advance. Again, it refers erroneously to a \"mushy brain\". It points out that the technology to reanimate those in storage does not already exist, but provides no help for us non-experts in assessing whether it is a plausible future technology, like super-fast computers or fusion power, or whether it is as crazy as the sand-powered tank; it simply asserts baldly and to me counterintuitively that it is the latter. Again, perhaps cryonic reanimation is a sand-powered tank, but I can explain to you why a sand-powered tank is implausible if you don't already know, and if cryonics is in the same league I'd appreciate hearing the explanation.

    \n

    It does link to the one article I can find that really tries to go into the detail: Cryonics–A futile desire for everlasting life by \"Invisible Flan\". It opens on a curious note:

    \n
    If you would like my cited sources, please ask me and I will give them to you.
    \n

    This seems a very odd practice to me. How can it make sense to write \"(Stroh)\" in the text without telling us what publication that refers to? Two comments below ask for the references list; no reply is forthcoming.

    \n

    And again, there seems to be no effort to engage with what cryonicists actually say. The article asserts

    \n
    it is very likely that a human would suffer brain damage from being preserved for a century or two (Stroh).
    \n

    This bald claim backed by a dangling reference is, to say the least, a little less convincing than the argument set out in Alcor's How Cold is Cold Enough? which explains that even with pessimistic assumptions, one second of chemical activity at body temperature is roughly equivalent to 24 million years at the temperature of liquid nitrogen. Ben Best quotes eminent cryobiologist and anti-cryonics advocate Peter Mazur:

    \n
    ...viscosity is so high (>10^13 Poise) that diffusion is insignificant over less than geological time spans.
    \n
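    For readers who want to sanity-check the Alcor figure, a rough Arrhenius-style calculation is easy to do. Note that the 30 kJ/mol activation energy below is not taken from the Alcor article; it is an assumed, deliberately low (pessimistic) illustrative value, and more typical biochemical barriers would make the slowdown factor far larger still.

        import math

        R = 8.314        # gas constant, J/(mol*K)
        Ea = 30000.0     # ASSUMED activation energy, J/mol -- deliberately low, illustrative only
        T_body = 310.0   # approximate body temperature, K
        T_ln2 = 77.0     # liquid nitrogen temperature, K

        # Arrhenius rate ratio: how much slower a reaction with barrier Ea runs at LN2 temperature.
        slowdown = math.exp((Ea / R) * (1.0 / T_ln2 - 1.0 / T_body))

        seconds_per_year = 3.156e7
        print(slowdown / seconds_per_year)  # roughly 6e7: one second at body temperature is
                                            # on the order of tens of millions of years at LN2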

    Another part of the article points out the well-known difficulties with whole-body freezing - because the focus is on achieving the best possible preservation of the brain, other parts suffer more. But the reason why the brain is the focus is that you can afford to be a lot bolder in repairing other parts of the body - unlike the brain, if my liver doesn't survive the freezing, it can be replaced altogether. Further, the article ignores one of the most promising possibilities for reanimation, that of scanning and whole-brain emulation, a route that requires some big advances in computer and scanning technology as well as our understanding of the lowest levels of the brain's function, but which completely sidesteps any problems with repairing either damage from the freezing process or whatever it was that led to legal death.

    \n

    Contrast these articles to a blog like Deltoid. In post after painstaking post, Lambert addresses specific public claims from global warming denialists - sometimes this takes just one graph, sometimes a devastating point-by-point rebuttal.

    \n

    Well, if there is a Tim Lambert of cryonics out there, I have yet to find them, and I've looked as best I can. I've tried various Google searches, like \"anti-cryonics\" or \"cryonics skeptic\", but nearly all the hits are pro-cryonics. I've asked my LiveJournal friends list, my Twitter feed, and LessWrong.com, and found no real meat. I've searched PubMed and Google Scholar, and again found only pro-cryonics articles, with the exception of this 1981 BMJ article which is I think more meant for humour value than serious argument.

    \n

    I've also emailed every expert I can find an email address for who has publicly spoken against cryonics. Sadly I don't have email addresses for either Arthur W. Rowe or Peter Mazur, two giants of the cryobiology field who both have strongly anti-cryonics positions; I can only hope that blog posts like these might spur them into writing about the subject in depth rather than restricting themselves to rather brief and unsatisfactory remarks in interviews. (If they were to have a change of heart on the subject, they would have to choose between staying silent on their true opinions or being ejected from the Society for Cryobiology under a 1982 by-law.) I mailed Michael Shermer, Steve Jones, Quackwatch, and Professor David Pegg. I told them (quite truthfully) that I had recently started talking to some people who were cryonics advocates, that they seemed persuasive but I wasn't an expert and didn't want to fall for a scam, and asked if there was anything they'd recommend I read on the subject to see the other side.

    \n

    The only one of these to reply was Michael Shermer. He recommended I read David Brin, Steve Harris and Gregory Benford. This is a pretty surprising reply. The latter two are cryonics advocates, and while Brin talks about a lot of possible problems, he agrees with cryonics advocates that it is technically feasible.

    \n

    I expanded my search to others who might be knowledgeable: Society for Cryobiology fellows Professor Barry Fuller and Dr John G Baust, and computational neuroscience professor Peter Dayan. I received one reply: Dayan was kind enough to reply very rapidly, sounding a cautionary note on how much we still don't know about the structure of memory and referring me to the literature on the subject, but was unable to help in my specific quest for technical anti-cryonics articles.

    \n

    In his 1994 paper The Molecular Repair of the Brain, cryptology pioneer Professor Ralph Merkle remarks

    \n
    Interestingly (and somewhat to the author's surprise) there are no published technical articles on cryonics that claim it won't work.
    \n

    Sixteen years later, it seems that hasn't changed; in fact, as far as the issue of technical feasibility goes, it is starting to look as if on all the Earth, or at least all the Internet, there is not one person who has ever taken the time to read and understand cryonics claims in any detail, still considers it pseudoscience, and has written a paper, article or even a blog post to rebut anything that cryonics advocates actually say. In fact, the best of the comments on my first blog post on the subject are already of a higher standard than anything my searches have turned up.

    \n

    If you can find any articles that I've missed, please link to them in the comments. If you have any expertise in any relevant area, and you don't think that cryonics has scientific merit - or if you can find any claim made by prominent cryonics advocates that doesn't hold up - any paragraph in Ben Best's Scientific Justification of Cryonics Practice, anything in the Alcor Scientists’ Cryonics FAQ or the Cryonics Institute FAQ, or anything in Whole Brain Emulation: A Roadmap (which isn't directly about cryonics but is closely related) - then please, don't comment here to say so. Instead, write a paper or a blog post about it. If you don't have somewhere you're happy to post it, and if it's better than what's already out there, I'll be happy to host it here.

    \n

    Because as far as I can tell, if you want to write the best anti-cryonics article in the world, you have a very low bar to clear.

    \n

    Related articles: Carl Schulman links to Robin Hanson's What Evidence in Silence or Confusion? on Overcoming Bias, which discusses what conclusions one can draw from this.

    " } }, { "_id": "ifA8GL6ycqPxJpnAd", "title": "Limited kindness is unappreciated", "pageUrl": "https://www.lesswrong.com/posts/ifA8GL6ycqPxJpnAd/limited-kindness-is-unappreciated", "postedAt": "2010-02-07T11:39:43.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "ifA8GL6ycqPxJpnAd", "html": "

    If you have not yet interacted with a person, you are judged neutrally by them. If you do something for them once, then you move up in their eyes. If you continue to benefit them you can move further up. If you stop you move to well below zero; you have actually slighted them. Even if you slow down a bit you can go into negative territory. This goes for many things humans offer each other from tea to sex. Why is limited attention worse than none?

    \n

    One guess is that it’s an upshot of tit-for-tat. If I am nice to someone, they are nice to me in return, as obliged. Then I am obliged. Mentioning that the interaction has occurred an even number of times doesn’t get you off the hook; you  always owe more friendly deeds.

    \n

    Another potential reason is that when you haven’t interacted with someone they still have high hopes you will be a good person to know, whereas when you know them and cease to give them attention, you are demonstrably not. This doesn’t seem right, as strangers usually remain strangers, and people who have had an interest often return to it.

    \n

    Perhaps un-friendliness is a punishment to encourage your future cooperation? People who have been useful in the past are a better target than others because they are presumably already close to being friendly again. If I’m wondering whether to phone you or not and I think you will be miffed if I haven’t it may push me over the line, whereas if we haven’t met and I think you might be miffed when we eventually do, I probably won’t bother because I probably will never meet you or want to anyway.

    \n

    For whatever reason, this must reduce the occurrence of friendly behavior enormously. Before you interact with someone you must ascertain that they are likely enough to be good enough for long enough that it's worth the cost of their badmouthing or teary appeals to stay if you ever decide they're not. This certainly limits my own friendliness – often I wouldn't mind being helpful to strangers, but I've learned the annoying way how easy it is to become an obligated ‘friend’ just because you can't bear to watch someone suffer on a single occasion. So other people prevent me from benefiting them with their implicit threat of obligation.

    \n

    Interestingly, one situation where humans are nice to one another and not further obliged is when they trade fairly at the outset, such as in shops. This supports the tit-for-tat theory.


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "4gP3692vXuauKGjMR", "title": "BHTV: Eliezer Yudkowsky & Razib Khan", "pageUrl": "https://www.lesswrong.com/posts/4gP3692vXuauKGjMR/bhtv-eliezer-yudkowsky-and-razib-khan", "postedAt": "2010-02-06T14:27:12.762Z", "baseScore": 15, "voteCount": 17, "commentCount": 67, "url": null, "contents": { "documentId": "4gP3692vXuauKGjMR", "html": "

    Link.

    \n

    \n

    \"Razib Khan has an academic background in the biological sciences, and has worked for many years in software. He is an Unz Foundation Junior Fellow. He lives in the western United States.\" 

    \n

    Razib's writings can be found on his blog, Gene Expression.

    " } }, { "_id": "tTgSycdDr9qdGD3CC", "title": "Ethics has Evidence Too", "pageUrl": "https://www.lesswrong.com/posts/tTgSycdDr9qdGD3CC/ethics-has-evidence-too", "postedAt": "2010-02-06T06:28:24.157Z", "baseScore": 26, "voteCount": 32, "commentCount": 57, "url": null, "contents": { "documentId": "tTgSycdDr9qdGD3CC", "html": "

    A tenet of traditional rationality is that you can't learn much about the world from armchair theorizing. Theory must be epiphenomenal to observation-- our theories are functions that tell us what experiences we should anticipate, but we generate the theories from *past* experiences. And of course we update our theories on the basis of new experiences. Our theories respond to our evidence, usually not the other way around. We do it this way because it works better than trying to make predictions on the basis of concepts or abstract reasoning. Philosophy from Plato through Descartes to Kant is replete with failed examples of theorizing about the natural world on the basis of something other than empirical observation. Socrates thinks he has deduced that souls are immortal; Descartes thinks he has deduced that he is an immaterial mind, that he is immortal, that God exists and that he can have secure knowledge of the external world; Kant thinks he has proven by pure reason the necessity of Newton's laws of motion.

    \n

    These mistakes aren't just found in philosophy curricula. There is a long list of people who thought they could deduce Euclid's theorems as analytic or a priori knowledge. Epicycles were a response to new evidence but they weren't a response that truly privileged the evidence. Geocentric astronomers changed their theory *just enough* so that it would yield the right predictions instead of letting a new theory flow from the evidence. Same goes for pre-Einsteinian theories of light. Same goes for quantum mechanics. A kludge is a sign someone is privileging the hypothesis. It's the same way many of us think the Italian police changed their hypothesis explaining the murder of Meredith Kercher once it became clear Lumumba had an alibi and Rudy Guede's DNA and hand prints were found all over the crime scene. They just replaced Lumumba with Guede and left the rest of their theory unchanged even though there was no longer reason to include Knox and Sollecito in the explanation of the murder. These theories may make it over the bar of traditional rationality but they sail right under what Bayes theorem requires.

    \n

    Most people here get this already and many probably understand it better than I do. But I think it needs to be brought up in the context of our ongoing discussion of normative ethics.

    \n

    Unless we have reason to think about ethics differently, our normative theories should respond to evidence in the same way we expect our theories in other domains to respond to evidence. What are the experiences that we are trying to explain with our ethical theories? Why bother with ethics at all? What is the mystery we are trying to solve? The only answer I can think of is our ethical intuitions. When faced with certain situations in real life or in fiction we get strong impulses to react in certain ways, to praise some parties and condemn others. We feel guilt and sometimes make amends. There are some actions which we have a visceral abhorrence of.

    \n

    These reactions are for ethics what measurements of time and distance are for physics -- the evidence.

    \n

    The reason ethicists use hypotheticals like the runaway trolley and the unwilling organ donor is that different normative theories predict different intuitions in response to such scenarios. Short of actually setting up these scenarios for real, this is as close as ethics gets to controlled experiments. Now there are problems with this method. Our intuitions in fictional cases might be different from real life intuitions. The scenario could be poorly described. It might not be as controlled an experiment as we think. Or some features could be clouding the issue such that our intuitions about a particular case might not actually falsify a particular ethical principle. Just as there are optical illusions there might be ethical illusions such that we can occasionally be wrong about an ethical judgment in the same way that we can sometimes be wrong about the size or velocity of a physical object.

    \n

    The big point is that the way we should be reasoning about ethics is not from first principles, a priori truths, definitions or psychological concepts. Kant's Categorical Imperative is a paradigm example of screwing this up, but he is hardly the only one. We should be looking at our ethical intuitions and trying to come up with theories that predict future ethical intuitions. And if your theory is outputting results that are systematically or radically different from actual ethical intuitions then you need to have a damn good explanation for the discrepancy or be ready to change your theory (and not just by adding a kludge).

    " } }, { "_id": "dJJYgmaYYFmHoQM4L", "title": "Debate tools: an experience report", "pageUrl": "https://www.lesswrong.com/posts/dJJYgmaYYFmHoQM4L/debate-tools-an-experience-report", "postedAt": "2010-02-05T14:47:56.891Z", "baseScore": 51, "voteCount": 44, "commentCount": 80, "url": null, "contents": { "documentId": "dJJYgmaYYFmHoQM4L", "html": "

    Follow-up to: Argument Maps Improve Critical Thinking, Software Tools for Community Truth-Seeking

    \n

    We are here, among other things, in an attempt to collaboratively refine the art of human rationality.

    \n

    Rationality is hard, because the wetware we run rationality on is scavenged parts originally intended for other purposes; and collaboration is hard, I believe because it involves huge numbers of tiny decisions about what information others need. Yet we get by, largely thanks to advances in technology.

    \n

    One of the most important technologies for advancing both rationality and collaboration is the written word. It affords looking at large, complex issues with limited cognitive resources, by the wonderful trick of \"external cached thoughts\". Instead of trying to hold every piece of the argument at once, you can store parts of it in external form, refer back to them, and communicate them to other people.

    \n

    For some reason, it seems very hard to improve on this six-thousand-year-old technology. Witness LessWrong itself, which in spite of using some of the latest and greatest communication technologies, still has people arguing by exchanging sentences back and forth.

    \n

    Previous posts have suggested that recent software tools might hold promise for improving on \"traditional\" forms of argument. This kind of suggestion is often more valuable when applied to a real and relevant case study. I found the promise compelling enough to give a few tools a try, in the context of the recent (and recurrent) cryonics debate. I report back here with my findings.

    \n

    \n

    I. Argunet

    \n

    The first tool I tried was Argunet, an Open Source offering from the Institute of Philosophy in Berlin. I was seduced by the promise of reconstructing the logical structure of an argument, and by the possibility of collaborating online with others on an argument.

    \n

    Like other products in that category, the basic principle of operation of Argunet is that of a visual canvas, on which you can create and arrange boxes which represent statements, portions of an argument. Relationships between parts of an argument are then materialized using links or arrows.

    \n

    Argunet supports two types of basic relationship between statements, Supports and Attacks. It also supports several types of \"inference patterns\".

    \n

    Unfortunately, when I tried using the Editor I soon found it difficult to the point of being unusable. The default expectation of being able to move boxes around by clicking and dragging is violated. Further, I was unable to find any way to move my boxes after initially creating them.

    \n

    I ended up frustrated and gave up on Argunet.

    \n

    II. bCisive Online

    \n

    I had somewhat better luck with the next tool I tried, bCisive online. This is a public beta of a commercial offering by Austhink, the company already referenced in the previous posts on argument mapping. (It is a spin-off of their range of products marketed for decision support rather than argument support, but is also their only online, collaborative tool so far.)

    \n

    The canvas metaphor proved to be implemented more effectively, and I was able in a relatively short time to sketch out a map of my thinking about cryonics (which I invite you to browse and comment on).

    \n

    bCisive supports different types of statements, distinguished by the icons on their boxes: questions; arguments pro or con; evidence; options; \"fixes\", and so on. At present it doesn't appear to *do* anything valuable with these distinctions, but they proved to be an effective scheme for organizing my thoughts.

    \n

    III. Preliminary conclusions

    \n

    I was loath to invest much more time in updating my cryonics decision map, for two reasons. One is that what I would like to get from such a tool is to incorporate others' objections and counter-objections; in fact, it seems to me that the more valuable approach would be a fully collaborative effort. So, while it was worthwhile to structure my own thinking using the tool, and (killing two birds with one stone) that served as a test drive for the tool, it seems pointless to continue without outside input.

    \n

    The other, more important reason is that bCisive seems to provide little more than a fancy mindmapping tool at the moment, and the glimpse I had of tool support for structuring a debate has already raised my expectations beyond that.

    \n

    I have my doubts that the \"visual\" aspect is as important as the creators of such software tools would like everyone to think. It seems to me that what helped focus my thinking when using bCisive was the scheme of statement types: conclusion, arguments pro and con, evidence and \"fixes\". This might work just as well if the tool used a textual, tabular or other representation.

    \n

    The argument about cryonics is important to me, and to others who are considering cryonics. It is a life decision of some consequence, not to be taken lightly and without due deliberation. For this reason, I found myself wishing that the tool could process quantitative, not just qualitative, aspects of my reasoning.

    \n

    IV. A wish list for debate support

    \n

    Based on my experiences, what I would look for is a tool that distinguishes between, and supports the use of:

    \n\n

    The tool should be able to \"crunch numbers\", so that it gives an overall indication of how much the total weight of evidence and argumentation contributes to the conclusion.

    \n

    It should have a \"public\" part, representing what a group of people can agree on regarding the structure of the debate; and a \"private\" part, wherein you can adduce evidence you have collected yourself, or assign private degrees of belief in various statements.

    \n

    In this way, the tool would allow \"settling\" debates even while allowing disagreement to persist, temporarily or durably: you could agree with the logical structure but allow that your personal convictions rationally lead you to different conclusions. Highlighting the points of agreement and contention in this way would be a valuable way to focus further debate, limiting the risk of \"logical rudeness\".
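
    As a rough sketch of the kind of "number crunching" described above (entirely my own construction: the weighting scheme, the example statements and the function names are illustrative assumptions, not features of bCisive or any other existing tool), a shared map could hold the agreed structure of the debate while each participant supplies private weights, and the tool folds them into a personal overall verdict:

```python
import math

# Public part: the agreed structure of the debate.
# Each item: (statement, +1 if it supports the conclusion, -1 if it attacks it).
structure = [
    ("vitrification preserves fine neural structure", +1),
    ("no peer-reviewed demonstration of memory recovery", -1),
    ("long-term organizational risk of cryonics providers", -1),
]

# Private part: each participant's strength for every item, in bits
# (how much the item shifts their log-odds toward the conclusion).
def posterior(prior_prob, private_weights_bits):
    """Combine a prior with per-item weights over the shared structure; returns a probability."""
    log_odds = math.log(prior_prob / (1 - prior_prob), 2)
    for (_, direction), bits in zip(structure, private_weights_bits):
        log_odds += direction * bits
    return 1 / (1 + 2 ** -log_odds)

# Two people agree on the structure but weigh the items differently:
print(round(posterior(0.5, [2.0, 1.0, 0.5]), 3))  # an optimist's overall verdict
print(round(posterior(0.5, [0.5, 2.0, 2.0]), 3))  # a sceptic's overall verdict
```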

    " } }, { "_id": "2f6yWjZxzvsG4ZbDD", "title": "‘Cheap’ goals won’t explode intelligence", "pageUrl": "https://www.lesswrong.com/posts/2f6yWjZxzvsG4ZbDD/cheap-goals-won-t-explode-intelligence", "postedAt": "2010-02-05T14:00:46.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "2f6yWjZxzvsG4ZbDD", "html": "

    An intelligence explosion is what hypothetically happens when a clever creature finds that the best way to achieve its goals is to make itself even cleverer first, and then to do so again and again as its heightened intelligence makes the further investment cheaper and cheaper. Eventually the creature becomes uberclever and can magically (from humans’ perspective) do most things, such as end humanity in pursuit of stuff it likes more. This is predicted by some to be the likely outcome for artificial intelligence, probably as an accidental result of a smart enough AI going too far with any goal other than forwarding everything that humans care about.

    \n

    In trying to get to most goals, people don’t invest and invest until they explode with investment. Why is this? Because it quickly becomes cheaper to actually fulfil a goal than it is to invest more and then fulfil it. This happens earlier the cheaper the initial goal. Years of engineering education prior to building a rocket will speed up the project, but it would slow down the building of a sandwich.

    \n

    A creature should only invest in many levels of intelligence improvement when it is pursuing goals significantly more resource intensive than creating many levels of intelligence improvement. It doesn’t matter that inventing new improvements to artificial intelligence gets easier as you are smarter, because everything else does too. If intelligence makes other goals easier at the same rate as it makes building more intelligence easier, no goal which is cheaper than building a given amount of intelligence improvement with your current intelligence could cause an intelligence explosion of that size.
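
    A toy cost model (my own construction for illustration, not anything from the post) makes the threshold explicit: if each round of self-improvement costs some fixed amount at the current level and multiplies the cost of everything afterwards - including further self-improvement - by the same factor, then investing only pays when the goal is more expensive than the whole stream of improvements; cheap goals are best fulfilled directly, while sufficiently expensive ones make every further round worthwhile.

```python
def total_cost(goal_cost, invest_cost, speedup, rounds):
    """Cost of `rounds` self-improvements followed by fulfilling the goal.

    Each improvement costs `invest_cost` at the *current* level and multiplies
    all later costs (the goal and further improvements alike) by `speedup`.
    """
    cost, factor = 0.0, 1.0
    for _ in range(rounds):
        cost += invest_cost * factor
        factor *= speedup
    return cost + goal_cost * factor

def best_rounds(goal_cost, invest_cost=100.0, speedup=0.5, max_rounds=30):
    """Number of improvement rounds (capped at max_rounds) that minimises total cost."""
    return min(range(max_rounds + 1),
               key=lambda r: total_cost(goal_cost, invest_cost, speedup, r))

# Below the threshold (here invest_cost / (1 - speedup) = 200) the optimum is zero
# rounds; above it, every extra round helps, so only the cap stops the investment.
for goal in [50, 150, 250, 1_000_000]:
    print(goal, best_rounds(goal))
```

    Under these made-up assumptions the condition for runaway investment is simply that the goal costs more than the entire geometric series of improvements, which matches the claim above that only sufficiently expensive goals invite an explosion.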

    \n

    Plenty of questions anyone is currently looking for answers to, such as ‘how do we make super duper nanotechnology?’, ‘how do we cure AIDS?’, ‘how do I get really really rich?’ and even a whole bunch of math questions are likely easier than inventing multiple big advances in AI. The main dangerous goals are infinitely expensive questions such as ‘how many digits of pi can we work out?’ and ‘please manifest our values maximally throughout as much of the universe as possible’. If someone were to build a smart AI and set it to solve any of those relatively cheap goals, it would not accidentally lead to an intelligence explosion. The risk is only with the very expensive goals.

    \n

    The relative safety of smaller goals here could be confused with the relative safety of goals that comprise a small part of human values. A big fear with an intelligence explosion is that the AI will only know about a few of human goals, so will destroy everything else humans care about in pursuit of them. Notice that these are two different parameters: the proportion of the set of important goals the intelligence knows about and the expense of carrying out the task. Safest are cheap tasks where the AI knows about many of our values it may influence. Worst are potentially infinitely expensive goals with a tiny set of relevant values, such as any variation on ‘do as much of x as you can’.


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "NSX8RuD9tQ4uWzkk3", "title": "A problem with Timeless Decision Theory (TDT)", "pageUrl": "https://www.lesswrong.com/posts/NSX8RuD9tQ4uWzkk3/a-problem-with-timeless-decision-theory-tdt", "postedAt": "2010-02-04T18:47:07.959Z", "baseScore": 48, "voteCount": 41, "commentCount": 140, "url": null, "contents": { "documentId": "NSX8RuD9tQ4uWzkk3", "html": "

    According to Ingredients of Timeless Decision Theory, when you set up a factored causal graph for TDT, \"You treat your choice as determining the result of the logical computation, and hence all instantiations of that computation, and all instantiations of other computations dependent on that logical computation\", where \"the logical computation\" refers to the TDT-prescribed argmax computation (call it C) that takes all your observations of the world (from which you can construct the factored causal graph) as input, and outputs an action in the present situation.

    \n

    I asked Eliezer to clarify what it means for another logical computation D to be either the same as C, or \"dependent on\" C, for purposes of the TDT algorithm. Eliezer answered:

    \n
    \n

    For D to depend on C means that if C has various logical outputs, we can infer new logical facts about D's logical output in at least some cases, relative to our current state of non-omniscient logical knowledge.  A nice form of this is when supposing that C has a given exact logical output (not yet known to be impossible) enables us to infer D's exact logical output, and this is true for every possible logical output of C. Non-nice forms would be harder to handle in the decision theory but we might perhaps fall back on probability distributions over D.

    \n
    \n

    I replied as follows (which Eliezer suggested I post here).

    \n

    If that's what TDT means by the logical dependency between Platonic computations, then TDT may have a serious flaw.

    \n

    Consider the following version of the transparent-boxes scenario. The predictor has an infallible simulator D that predicts whether I one-box here [EDIT: if I see $1M]. The predictor also has a module E that computes whether the ith digit of pi is zero, for some ridiculously large value of i that the predictor randomly selects. I'll be told the value of i, but the best I can do is assign an a priori probability of .1 that the specified digit is zero.

    \n
    The predictor puts $1M in the large box iff (D xor E) is true. (And that's explained to me, of course.)
    \n
    So let's say I'm confronted with this scenario, and I see $1M in the large box.
    \n
    The flaw then is that E (as well as D) meets your criterion for \"depending on\" my decision computation C. I'm initially unsure what C and E output. But if C in fact one-boxes here, then I can infer that E outputs False (or else the large box has to be empty, which it isn't). Similarly, if C in fact two-boxes here, then I can infer that E outputs True. (Or equivalently, a third-party observer could soundly draw either of those inferences.)
    \n
    So E does indeed \"depend on\" C, in the particular sense you've specified. Thus, if I happen to have a strong enough preference that E output True, then TDT (as currently formulated) will tell me to two-box for the sake of that goal. But that's the wrong decision, of course. In reality, I have no choice about the specified digit of pi.
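
    For what it's worth, a brute-force enumeration of the scenario (a sketch of the setup as described, not of the TDT algorithm itself) makes the claimed dependency explicit: once you condition on seeing the $1M, C's output pins down E's output, even though C obviously has no counterfactual influence over a digit of pi.

```python
from itertools import product

# World description: does C one-box (on seeing $1M)?  Is the chosen digit of pi zero (E)?
# The infallible simulator D matches C's choice, and the predictor puts $1M in the
# large box iff (D xor E).
consistent = []
for c_one_boxes, e_true in product([True, False], repeat=2):
    d = c_one_boxes                 # infallible prediction of C's choice
    million_in_box = d ^ e_true     # the predictor's rule
    if million_in_box:              # condition on the observation: I see $1M
        consistent.append((c_one_boxes, e_true))

print(consistent)
# [(True, False), (False, True)] -- given the observation, "C one-boxes" lets an
# observer infer that E is False, and "C two-boxes" that E is True.
```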
    \n
    What's going on, it seems to me, is that the kind of logical/Platonic \"dependency\" that TDT would need to invoke here is this: that E's output be counterfactually entailed by C's output (which it isn't, in this case [see footnote]), rather than (as you've specified) merely inferable from C's output (which indeed it is, in this case). That's bad news, because distinguishing what my action does or does not counterfactually entail (as opposed to what it implies, causes, gives evidence for, etc.) is the original full-blown problem that TDT's prescribed decision-computation is meant to solve. So it may turn out that in order to proceed with that very computation (specifically, in order to ascertain which other Platonic computations \"depend on\" the decision computation C), you already need to (somehow) know the answer that the computation is trying to provide.
    \n
    --Gary
    \n
    [footnote] Because if-counterfactually C were to two-box, then (contrary to fact) the large box would (probably) be empty, circumventing the inference about E.
    \n
    [appendix] In this post, you write:
    \n
    \n
    ...reasoning under logical uncertainty using limited computing power... is another huge unsolved open problem of AI. Human mathematicians had this whole elaborate way of believing that the Taniyama Conjecture implied Fermat's Last Theorem at a time when they didn't know whether the Taniyama Conjecture was true or false; and we seem to treat this sort of implication in a rather different way than '2=1 implies FLT', even though the material implication is equally valid.
    \n
    \n
    I don't follow that. The sense of implication in which mathematicians established that TC implies FLT (before knowing if TC was true) is precisely material/logical implication: they showed ~(TC & ~FLT). And similarly, we can prove ~(3SAT-in-P & ~(P=NP)), etc. There's no need here to construct (or magically conjure) a whole alternative inference system for reasoning under logical uncertainty.
    \n
    So if the inference you speak of (when specifying what it means for D to \"depend on\" C) is the same kind as was used in establishing TC=>FLT, then it's just material implication, which (as argued above) leads TDT to give wrong answers. Or if we substitute counterfactual entailment for material implication, then TDT becomes circular (question-begging). Or if you have in mind some third alternative, I'm afraid I don't understand what it might be.
    \n
    EDIT: The rules of the original transparent-boxes problem (as specified in Good and Real) are: the predictor conducts a simulation that tentatively presumes there will be $1M in the large box, and then puts $1M in the box (for real) iff that simulation showed one-boxing. Thus, if the large box turns out to be empty, there is no requirement for that to be predictive of the agent's choice under those circumstances. The present variant is the same, except that (D xor E) determines the $1M, instead of just D. (Sorry, I should have said this to begin with, instead of assuming it as background knowledge.)
    " } }, { "_id": "tafYtFDhQZ6DmWSuK", "title": "\"Put It To The Test\"", "pageUrl": "https://www.lesswrong.com/posts/tafYtFDhQZ6DmWSuK/put-it-to-the-test", "postedAt": "2010-02-03T23:09:37.378Z", "baseScore": 15, "voteCount": 15, "commentCount": 19, "url": null, "contents": { "documentId": "tafYtFDhQZ6DmWSuK", "html": "

    Alt-rockers They Might Be Giants explain/advocate empiricism in a record aimed at young children.

    \n

    \n

    \n\n\n\n\n\n\n

    \n
    \n

    Don't believe it 'cos they say it's so

    \n

    If it's not true you have a right to know --

    \n

    Put it to the test!

    \n
    \n

    No, it's not quite Bayesian. The bridge (\"A fact is just a fantasy, unless it can be checked\") is more or less simply wrong. Still, I find the fact that the Ancient Art of Rationality is getting play at all pretty exciting. What do you all think? And what can we do to get more rationalist -- or even proto-rationalist -- ideas to youngsters?

    " } }, { "_id": "5Qvvi23WT2unNCoS9", "title": "A Much Better Life?", "pageUrl": "https://www.lesswrong.com/posts/5Qvvi23WT2unNCoS9/a-much-better-life", "postedAt": "2010-02-03T20:01:57.431Z", "baseScore": 87, "voteCount": 76, "commentCount": 174, "url": null, "contents": { "documentId": "5Qvvi23WT2unNCoS9", "html": "

    (Response to: You cannot be mistaken about (not) wanting to wirehead, Welcome to Heaven)

    \n

    The Omega Corporation
    Internal Memorandum
    To: Omega, CEO
    From: Gamma, Vice President, Hedonic Maximization

    \n

    Sir, this concerns the newest product of our Hedonic Maximization Department, the Much-Better-Life Simulator. This revolutionary device allows our customers to essentially plug into the Matrix, except that instead of providing robots with power in flagrant disregard for the basic laws of thermodynamics, they experience a life that has been determined by rigorously tested algorithms to be the most enjoyable life they could ever experience. The MBLS even eliminates all memories of being placed in a simulator, generating a seamless transition into a life of realistic perfection.

    \n

    Our department is baffled. Orders for the MBLS are significantly lower than estimated. We cannot fathom why every customer who could afford one has not already bought it. It is simply impossible to have a better life otherwise. Literally. Our customers' best possible real life has already been modeled and improved upon many times over by our programming. Yet, many customers have failed to make the transition. Some are even expressing shock and outrage over this product, and condemning its purchasers.

    \n

    Extensive market research has succeeded only at baffling our researchers. People have even refused free trials of the device. Our researchers explained to them in perfectly clear terms that their current position is misinformed, and that once they tried the MBLS, they would never want to return to their own lives again. Several survey takers went so far as to specify that statement as their reason for refusing the free trial! They know that the MBLS will make their life so much better that they won't want to live without it, and they refuse to try it for that reason! Some cited their \"utility\" and claimed that they valued \"reality\" and \"actually accomplishing something\" over \"mere hedonic experience.\" Somehow these organisms are incapable of comprehending that, inside the MBLS simulator, they will be able to experience the feeling of actually accomplishing feats far greater than they could ever accomplish in real life. Frankly, it's remarkable such people amassed enough credits to be able to afford our products in the first place!

    \n

    You may recall that a Beta version had an off switch, enabling users to deactivate the simulation after a specified amount of time, or could be terminated externally with an appropriate code. These features received somewhat positive reviews from early focus groups, but were ultimately eliminated. No agent could reasonably want a device that could allow for the interruption of its perfect life. Accounting has suggested we respond to slack demand by releasing the earlier version at a discount; we await your input on this idea.

    \n

    Profits aside, the greater good is at stake here. We feel that we should find every customer with sufficient credit to purchase this device,  forcibly install them in it, and bill their accounts. They will immediately forget our coercion, and they will be many, many times happier. To do anything less than this seems criminal. Indeed, our ethics department is currently determining if we can justify delaying putting such a plan into action. Again, your input would be invaluable.

    \n

    I can't help but worry there's something we're just not getting.

    " } }, { "_id": "vuN57BvWyT7WZ3b6p", "title": "Applying utility functions to humans considered harmful", "pageUrl": "https://www.lesswrong.com/posts/vuN57BvWyT7WZ3b6p/applying-utility-functions-to-humans-considered-harmful", "postedAt": "2010-02-03T19:22:56.547Z", "baseScore": 36, "voteCount": 35, "commentCount": 116, "url": null, "contents": { "documentId": "vuN57BvWyT7WZ3b6p", "html": "

    There's a lot of discussion on this site that seems to be assuming (implicitly or explicitly) that it's meaningful to talk about the utility functions of individual humans. I would like to question this assumption.

    \n

    To clarify: I don't question that you could, in principle, model a human's preferences by building some insanely complex utility function. But there's an infinite number of methods by which you could model a human's preferences. The question is which model is the most useful, and which models have the fewest underlying assumptions that will lead your intuitions astray.

    \n

    Utility functions are a good model to use if we're talking about designing an AI. We want an AI to be predictable, to have stable preferences, and to do what we want. It is also a good tool for building agents that are immune to Dutch book tricks. Utility functions are a bad model for beings that do not meet these criteria.

    \n

    To quote Van Gelder (1995):

    \n
    \n

    Much of the work within the classical framework is mathematically elegant and provides a useful description of optimal reasoning strategies. As an account of the actual decisions people reach, however, classical utility theory is seriously flawed; human subjects typically deviate from its recommendations in a variety of ways. As a result, many theories incorporating variations on the classical core have been developed, typically relaxing certain of its standard assumptions, with varying degrees of success in matching actual human choice behavior.

    Nevertheless, virtually all such theories remain subject to some further drawbacks:

    (1) They do not incorporate any account of the underlying motivations that give rise to the utility that an object or outcome holds at a given time.
    (2) They conceive of the utilities themselves as static values, and can offer no good account of how and why they might change over time, and why preferences are often inconsistent and inconstant.
    (3) They offer no serious account of the deliberation process, with its attendant vacillations, inconsistencies, and distress; and they have nothing to say about the relationships that have been uncovered between time spent deliberating and the choices eventually made.

    Curiously, these drawbacks appear to have a common theme; they all concern, one way or another, temporal aspects of decision making. It is worth asking whether they arise because of some deep structural feature inherent in the whole framework which conceptualizes decision-making behavior in terms of calculating expected utilities.

    \n
    \n

    One model that attempts to capture actual human decision making better is called decision field theory. (I'm no expert on this theory, having encountered it two days ago, so I can't vouch for how good it actually is. Still, even if it's flawed, it's useful for getting us to think about human preferences in what seems to be a more realistic way.) Here's a brief summary of how it's constructed from traditional utility theory, based on Busemeyer & Townsend (1993). See the article for the mathematical details, closer justifications and different failures of classical rationality which the different stages explain.

    \n

    Stage 1: Deterministic Subjective Expected Utility (SEU) theory. Basically classical utility theory. Suppose you can choose between two different alternatives, A and B. If you choose A, there is a payoff of 200 utilons with probability S1, and a payoff of -200 utilons with probability S2. If you choose B, the payoffs are -500 utilons with probability S1 and +500 utilons with probability S2. You'll choose A if the expected utility of A, S1 * 200 + S2 * -200 is higher than the expected utility of B, S1 * -500 + S2 * 500, and B otherwise.
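
    For concreteness, here is the Stage 1 computation with the payoffs above (a trivial sketch; the numeric values of S1 and S2 are made up for illustration):

```python
def seu(payoffs, probabilities):
    """Deterministic subjective expected utility: a probability-weighted sum."""
    return sum(p * u for p, u in zip(probabilities, payoffs))

S1, S2 = 0.6, 0.4                    # illustrative subjective probabilities
eu_a = seu([200, -200], [S1, S2])    # 0.6 * 200 + 0.4 * -200 = 40
eu_b = seu([-500, 500], [S1, S2])    # 0.6 * -500 + 0.4 * 500 = -100
choice = "A" if eu_a > eu_b else "B" # here: "A"
```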

    \n

    Stage 2: Random SEU theory. In stage 1, we assumed that the probabilities S1 and S2 stay constant across many trials. Now, we assume that sometimes the decision maker might focus on S1, producing a preference for action A. On other trials, the decision maker might focus on S2, producing a preference for action B. According to random SEU theory, the attention weight for variable Si is a continuous random variable, which can change from trial to trial because of attentional fluctuations. Thus, the SEU for each action is also a random variable, called the valence of an action. Deterministic SEU is a special case of random SEU, one where the trial-by-trial fluctuation of valence is zero.

    \n

    Stage 3: Sequential SEU theory. In stage 2, we assumed that one's decision was based on just one sample of a valence difference on any trial. Now, we allow a sequence of one or more samples to be accumulated during the deliberation period of a trial. The attention of the decision maker shifts between different anticipated payoffs, accumulating weight to the different actions. Once the weight of one of the actions reaches some critical threshold, that action is chosen. Random SEU theory is a special case of sequential SEU theory, where the number of samples is one.

    \n

    Consider a scenario where you're trying to make a very difficult but very important decision. In that case, your inhibitory threshold for any of the actions is very high, so you spend a lot of time considering the different consequences of the decision before finally arriving at the (hopefully) correct decision. For less important decisions, your inhibitory threshold is much lower, so you pick one of the choices without giving it too much thought.

    \n

    Stage 4: Random Walk SEU theory. In stage 3, we assumed that we begin to consider each decision from a neutral point, without any of the actions being the preferred one. Now, we allow prior knowledge or experiences to bias the initial state. The decision maker may recall previous preference states, that are influenced in the direction of the mean difference. Sequential SEU theory is a special case of random walk theory, where the initial bias is zero.

    \n

    Under this model, decisions favoring the status quo tend to be chosen more frequently under a short time limit (low threshold), but a superior decision is more likely to be chosen as the threshold grows. Also, if previous outcomes have already biased decision A very strongly over B, then the mean time to choose A will be short while the mean time to choose B will be long.
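
    Here is a rough simulation of Stages 2-4 (the parameter values are my own toy choices, not from Busemeyer & Townsend): on each deliberation step attention lands on one of the two events at random, the corresponding valence difference is added to a preference state that starts at some prior bias, and deliberation stops when the state crosses a threshold in either direction. A low threshold gives fast but noisy choices and lets any initial bias toward the status quo dominate; a higher threshold gives slower, more reliable ones.

```python
import random

def random_walk_seu(payoffs_a, payoffs_b, attention_probs,
                    threshold=3000.0, initial_bias=0.0, max_steps=100_000):
    """Stages 2-4: accumulate sampled valence differences until a threshold is crossed.

    Positive preference favours A, negative favours B.  Stage 3 is the special case
    initial_bias == 0; Stage 2 is the case where the threshold is so low that a
    single sample decides.
    """
    preference = initial_bias
    for step in range(1, max_steps + 1):
        # Stage 2: attention fluctuates randomly between the two anticipated events.
        i = 0 if random.random() < attention_probs[0] else 1
        preference += payoffs_a[i] - payoffs_b[i]   # sampled valence difference
        if abs(preference) >= threshold:
            return ("A" if preference > 0 else "B"), step
    return ("A" if preference > 0 else "B"), max_steps

random.seed(0)
# Same gambles as in Stage 1, attention split 60/40 between the two events.
choices = [random_walk_seu([200, -200], [-500, 500], [0.6, 0.4])[0]
           for _ in range(1000)]
print(choices.count("A") / 1000)   # mostly A, but not always: the choice is stochastic
```

    Raising `threshold` in this sketch makes the simulated chooser slower but more likely to pick the option favoured by the average valence difference, matching the qualitative claim above.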

    \n

    Stage 5: Linear System SEU theory. In stage 4, we assumed that previous experiences all contribute equally. Now, we allow the impact of a valence difference to vary depending on whether it occurred early or late (a primacy or recency effect). Each previous experience is given a weight given by a growth-decay rate parameter. Random walk SEU theory is a special case of linear system SEU theory, where the growth-decay rate is set to zero.

    \n

    Stage 6: Approach-Avoidance Theory. In stage 5, we assumed that, for example, the average amount of attention given to the payoff (+500) only depended on event S2. Now, we allow the average weight to be affected by another variable, called the goal gradient. The basic idea is that the attractiveness of a reward or the aversiveness of a punishment is a decreasing function of distance from the point of commitment to an action. If there is little or no possibility of taking an action, its consequences are ignored; as the possibility of taking an action increases, the attention to its consequences increases as well. Linear system theory is a special case of approach-avoidance theory, where the goal gradient parameter is zero.

    \n

    There are two different goal gradients, one for gains and rewards and one for losses or punishments. Empirical research suggests that the gradient for rewards tends to be flatter than that for punishments. One of the original features of approach-avoidance theory was the distinction between rewards versus punishments, closely corresponding to the distinction of positively versus negatively framed outcomes made by more recent decision theorists.

    \n

    Stage 7: Decision Field Theory. In stage 6, we assumed that the time taken to process each sampling is the same. Now, we allow this to change by introducing into the theory a time unit h, representing the amount of time it takes to retrieve and process one pair of anticipated consequences before shifting attention to another pair of consequences. If h is allowed to approach zero in the limit, the preference state evolves in an approximately continuous manner over time. Approach-avoidance is a spe... you get the picture.

    \n

     


    Now, you could argue that all of the steps above are just artifacts of being a bounded agent without enough computational resources to calculate all the utilities precisely. And you'd be right. And maybe it's meaningful to talk about the \"utility function of humanity\" as the outcome that occurs when a CEV-like entity calculated what we'd decide if we could collapse Decision Field Theory back into Deterministic SEU Theory. Or maybe you just say that all of this is low-level mechanical stuff that gets included in the \"probability of outcome\" computation of classical decision theory. But which approach do you think gives us more useful conceptual tools in talking about modern-day humans?

    \n

    You'll also note that even DFT (or at least the version of it summarized in a 1993 article) assumes that the payoffs themselves do not change over time. Attentional considerations might lead us to attach a low value to some outcome, but if we were to actually end up in that outcome, we'd always value it the same amount. This we know to be untrue. There's probably some even better way of looking at human decision making, one which I suspect might be very different from classical decision theory.

    \n

    So be extra careful when you try to apply the concept of a utility function to human beings.

    " } }, { "_id": "5D4HqYum9aSy5Qe4q", "title": "False Majorities", "pageUrl": "https://www.lesswrong.com/posts/5D4HqYum9aSy5Qe4q/false-majorities", "postedAt": "2010-02-03T18:43:25.281Z", "baseScore": 48, "voteCount": 51, "commentCount": 39, "url": null, "contents": { "documentId": "5D4HqYum9aSy5Qe4q", "html": "

    If a majority of experts agree on an issue, a rationalist should be prepared to defer to their judgment. It is reasonable to expect that the experts have superior knowledge and have considered many more arguments than a lay person would be able to. However, if experts are split into camps that reject each other's arguments, then it is rational to take their expert rejections into account. This is the case even among experts that support the same conclusion.

    \n

    If 2/3 of experts support proposition G, 1/3 because of reason A while rejecting B, and 1/3 because of reason B while rejecting A, and the remaining 1/3 reject both A and B; then the majority Reject A, and the majority Reject B. G should not be treated as a reasonable majority view.
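
    A quick tally over nine hypothetical experts (purely illustrative numbers) shows the pattern: the conclusion G carries six votes out of nine, yet each reason offered for it is rejected by six out of nine.

```python
# Nine hypothetical experts: whether they support G and which reasons they accept.
experts = (
    [{"supports_G": True,  "accepts": {"A"}} for _ in range(3)]    # G because of A, reject B
    + [{"supports_G": True,  "accepts": {"B"}} for _ in range(3)]  # G because of B, reject A
    + [{"supports_G": False, "accepts": set()} for _ in range(3)]  # reject both A and B
)

n = len(experts)
print("support G:", sum(e["supports_G"] for e in experts), "of", n)       # 6 of 9
print("accept A: ", sum("A" in e["accepts"] for e in experts), "of", n)   # 3 of 9
print("accept B: ", sum("B" in e["accepts"] for e in experts), "of", n)   # 3 of 9
```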

    \n

    This should be clear if A is the Koran and B is the Bible.

    \n

    Positions that fundamentally disagree don't combine in dependent aspects on which they agree. On the contrary, if people offer lots of different contradictory reasons for a conclusion (even if each individual has consistent beliefs) it is a sign that they are rationalizing their position.

    \n

    An exception to this is if experts agree on something for the same proximal reasons. If pharmacists were split into camps that disagreed on what atoms fundamentally were, but agreed on how chemistry and biology worked, then we could add those camps together as authorities on what the effect of a drug would be.

    \n

    If we're going to add up expert views, we need to add up what experts consider important about a question and agree on, not individual features of their conclusions.

    \n

    Some differing reasons can be additive: Evolution has support from many fields. We can add the analysis of all these experts together because the paleontologists do not generally dispute the arguments of geneticists.

    \n

    Different people might justify vegetarianism by citing the suffering of animals, health benefits, environmental impacts, or purely spiritual concerns. As long as there isn't a camp of vegetarians that claim it does not have e.g. redeeming health benefits, we can more or less add all those opinions together.

    \n

    We shouldn't add up two experts if they would consider each other's arguments irrational. That's ignoring their expertise.

    \n

    Original Thread

    " } }, { "_id": "YTSEpKkJwikmsggyx", "title": "David Pearce on Hedonic Moral realism", "pageUrl": "https://www.lesswrong.com/posts/YTSEpKkJwikmsggyx/david-pearce-on-hedonic-moral-realism", "postedAt": "2010-02-03T17:27:31.982Z", "baseScore": 9, "voteCount": 15, "commentCount": 15, "url": null, "contents": { "documentId": "YTSEpKkJwikmsggyx", "html": "

    (posted with David's permission)

    \n

    Many thanks for the Shulman et al paper. Insightful and incisive, just as I'd expect!

    \n

    Have I any reservations?
    Well, maybe one or two...:-)

    \n


    One of my worries is that the authors want to \"lock in\" to an AMA our primitive human moral psychology - with its anthropocentric biases and self-serving rationalizations. We could try and strip these out; but an ever more universalistic moral psychology becomes progressively less human. Either way, I'm not convinced that history shows the intuitions of folk morality are any more trustworthy than the intuitions of folk physics i.e. not at all. Do we really want an AMA that locks in even an idealised version of the moral psychology that maximised the inclusive fitness of selfish DNA in the ancestral environment? IMO posthuman life could be so much better - for all sentient beings.

    \n


    Contra Shulman et al, I think we do have a good idea what utility function to endow an explicit moral agent with. An AMA should seek to maximise the cosmic abundance of subjectively hypervaluable states. [defined below] This prospect doesn't sound very exciting, any more than sex as explained by adults to a toddler. But that's because, like the toddler, we don't have the primitive experiences to know what we're missing. In one sense, becoming posthuman isn't \"human-friendly\"; but equally, becoming an adult isn't \"toddler-friendly\" either. In neither case would the successor choose to regress to the ancestral state.

    If we want to maximise the cosmic abundance of subjectively hypervaluable states, first we will need to identify the neural signature of subjectively valuable states via improved neuroscanning technology, etc. This neural signature can then be edited and genetically amplified to create hypervaluable states - states of mind far more sublime than today's peak experiences - which can provide background hedonic tone for everyday life. Unlike weighing up the merits of competing moral values etc, investigating the neural mechanisms of value-creation is an empirical question rather than a philosophical question. True, we may ask: Could a maximally valuable cosmos - where value is defined empirically in terms of what seems valuable to the subject - really not be valuable at all? I accept that's a deeper philosophical question that I won't adequately explore here.

    Compare how severe depressives today may often be incapable of valuing anything beyond the relief of suffering. Quite possibly they will be nihilistic, suicidal and/or negative utilitarians. Effective mood-brightening treatment gives them the ability to value aspects of life - without specifying the particular propositional content of those values. Effective treatment generates an effective value-creation mechanism. So analogously, why not genetically recalibrate our own hedonic treadmill to induce gradients of superhappiness? We can (potentially) create the substrates for supervalues - without specifying, or taking any kind of stance on, the propositional content of those values. The actual content of posthuman values is in any case presumably unknowable to us. Here at least, we needn't take a realist or anti-realist position on whether that content is true or false or truth-valueless. As you know, I happen to believe we live in a world where certain states on the pleasure-pain axis (e.g. bliss or agony) are intrinsically normative, and the value judgements and decision-procedures that spring from these intrinsically normative states can be potentially true or false. But their reality or otherwise is not essential to the argument that we can maximise the abundance of empirically valuable states.

    Might instead a \"human-friendly SuperIntelligence\" be parochial and reflect the interests of one particular ancestral species that seeded its prototype? Or if It's really a SuperIntelligence, won't It have a \"God's-eye view\", just as we aspire to do in natural science - an impartial perspective? IMO a truly impartial perspective dictates creating the maximum density of the substrates of hypervaluable states within our Hubble volume. Just as there is a finite number of perfect games of chess - it makes no sense for a superintelligent chess-player to pass outside this state space of ideal states - why aim now to \"freeze in\" a recipe for sub-optimal or mediocre states of mind? Or freeze in to the AGI even the idealised preference architecture of one hominid species? Note that this claim isn't intended as a challenge to the moral anti-realist: it's not to say a world of blissfully fulfilled posthumans is truly more valuable than a pain-racked world or a world tiled with insentient paperclips. But everyone in such a blissful world would agree it is empirically more valuable.

    On this basis, neuroscience needs to understand the molecular mechanisms by which subjectively valuable states are created in the mind/brain - and the mechanisms which separate the merely subjectively pleasurable [ e.g. using porn or crack-cocaine] from the subjectively valuable. This sounds horrendously difficult, as Shulman et al lay out in their review of different consequentialisms. Billions of different people can have billions of inconsistent or incommensurable desires and preferences. How on earth can their different utility functions be reconciled?  But critically IMO, there is a distinction between dopamine-mediated desires - as verbalized in preferences expressing  all manner of propositional content - and mu-opioid mediated hedonic tone. Positive hedonic tone doesn't by itself specify any particular propositional content. But it's the \"engine\" of value-creation in organic robots. Without it, everything in life seems valueless and meaningless, as depressives will attest. Moreover a classical [\"hedonistic\"] utilitarian can argue that the substrates of pure bliss are objectively measurable. They are experimentally manipulable via everything from mu-opioid knockout \"animal models\" to opioid antagonists to genetic \"over\"-expression. The neural substrates of hedonic tone are objectively quantifiable - and comparable both intrapersonally and interpersonally. Moreover they are effectively identical across members of all vertebrate species.

    These are strong claims I know; but it's notable that the pleasure-pain axis - and its neurological underpinnings (see below) - are strongly conserved in the vertebrate line. Simplifying a bit, intensity of pure bliss correlates with full mu agonist binding and with mu opioid receptor density in the brain's two ultimate \"hedonic hotspots\" - one in the ventral pallidum and the other in the medium spiny neurons of the rostromedial shell of the nucleus accumbens: a mere cubic millimeter in size in the rat; approximately a cubic centimeter in humans. See e.g.
    http://www.lsa.umich.edu/psych/research&labs/berridge/research/affectiveneuroscience.html 
    A convergence of neuroscanning, behavioural, microelectrode studies and experimental evidence supports this hypothesis. 
    Here is a further test of interpersonal agreement of utility:
    Administer to a range of drug-naive volunteers various mu opioid agonists of differing potency, selectivity and specificity. Correlate self-reported degree of \"liking\". Just like opioid users, drug-naive subjects will consistently report codeine is less rewarding than methadone which is less rewarding than heroin, etc. A high dose of a full mu agonist reliably induces euphoria; a high dose of inverse agonist reliably induces dysphoria.
    [Just don't try this experiment at home!] 
    Contrast such unanimity of response to activation of our mu opioid receptors with the diversity of propositional content expressed in our preferences and desires across different times and cultures. 
    In short, I'd argue the claim of Shulman et al that \"all utilitarianisms founder on interpersonal comparison of utility\" isn't correct - indeed it is empirically falsifiable.

    Of course most of us aren't heroin addicts. But we are all [endogenous] opioid-dependent. Thus using futuristic neuroscanning, we could measure from birth how much someone (dis)values particular persons,  cultural practices, painting, music, ideology, jokes, etc as a function of activation of their reward pathways. Rather than just accepting our preferences as read, and then trying to reconcile the irreconcilable, we can explain the mechanism that creates value itself and then attempt to maximise its substrates.

    Non-human animals? They can't verbalize their feelings, so how can one test and compare how much they (dis)like a stimulus? Well, we can test how hard they will work to obtain/avoid the stimulus in question. Once again, in the case of the most direct test, non-human vertebrates behaviourally show the same comparative fondness for different full agonist opioid drugs that activate the mu receptors [e.g. heroin is more enjoyable than codeine as shown by the fact that non-human animals will work harder for it] as do humans with relevant drug exposure.

    In principle, by identifying the molecular signature of bliss, it should be possible to multiply the cellular density of \"pleasure neurons\", insert multiple extra copies of the mu opioid receptor, insert and express mu opioid receptor genes in every single neuron, and upregulate their transcription [etc] so to engineer posthuman intensities of well-being. We thereby create the potential for hypervaluable states - states of mind valuable beyond the bounds of normal human experience.

    Once we gain mastery over our reward circuitry, then Derek Parfit's \"Repugnant Conclusion\" as noted by Shulman et al is undercut. This is because world-wide maximal packing density of mind/brains [plus interactive immersive VR] doesn't entail any tradeoff with quality of life. In principle, life abounding in hypervaluable experience can be just as feasible with a posthuman global population of 150 billion as with 15 billion.

    Anyhow, critically for our pub discussion: one needn't be a classical utilitarian to recognize we should maximise the cosmic abundance of hypervaluable states. For example, consider a community of fanatical pure mathematicians. The \"peak experiences\" which they strive to maximise all revolve around beautiful mathematical equations. They scorn anything that sounds like wireheading. They say they don't care about pleasure or happiness - and indeed they sincerely don't care about pleasure or happiness under that description, just mathematical theorem-proving, contemplating the incredible awesomeness of Euler's identity, or whatever. Even so, with the right hedonic engineering, their baseline of well-being, their sense of the value of living as mathematicians, can be orders of magnitude richer than their previous peak experiences. Just as some depressives today can't imagine the meaning of happiness, they (and we) can't imagine superhappiness, even though there are strong theoretic grounds to believe it exists. If we tasted it, we'd never want to lose it. [Does this count as \"coherent extrapolated volition\" within the current Eliezer-inspired sense of the term???] Even our old peak experiences would seem boring if we ever troubled to recall them. [Why bother?] I predict posthuman life with redesigned reward circuitry will be unimaginably richer and unimaginably more valuable (\"from the inside\") than human life - which will be discarded and forgotten.

    However, in a sense the mathematicians are too conservative an example. Is there a risk that pursuing \"coherent extrapolated volition\" will effectively lock in mediocrity and a poverty of imagination? By analogy, what would be the \"coherent extrapolated volition\" of Neanderthals? Or a bat? Or a mouse? Or a community of congenitally blind tribesmen that lack any visual concepts? By contrast, posthuman desires and preferences may transcend our human conceptual scheme altogether.

    Either way, I hope you'll grant that aiming for the most [subjectively if not objectively] valuable universe isn't really \"unfriendly\" in any objective sense. True, one worries: Will a posthuman really be \"me\"? Well, if a chrysalis could think, should it wonder: \"Will a butterfly really be me?\" Should it be worrying about the nature of lepidopteran identity over time? Presumably not...

    Note that the maximum feasible cosmic abundance of subjectively hypervaluable states could be realized via the actions of a SuperAsperger AGI - since the substrates of (super)value can be objectively determined. No \"theory of mind\" or capacity for empathetic understanding on the part of the AGI is needed. As you know, I'm sceptical that classical serial computers with a von Neumann architecture will ever be conscious, let alone have an empathetic appreciation of other conscious minds. If this architectural limitation holds - I know you disagree - creating an AMA that captured human moral psychology would be an even more formidable challenge technically.

    \n

    Added: David's websites: (H/T Tim Tyler)

    \n

    \n

    http://www.hedweb.com/

    \n

    http://www.wireheading.com/

    \n

    http://www.utilitarianism.com/

    \n

    http://www.abolitionist.com/

    \n

    http://paradise-engineering.com/

    \n

    \n

     

    \n


    " } }, { "_id": "NH4u8KF3aSeCJLGp5", "title": "How does Facebook make overt self obsession ok?", "pageUrl": "https://www.lesswrong.com/posts/NH4u8KF3aSeCJLGp5/how-does-facebook-make-overt-self-obsession-ok", "postedAt": "2010-02-03T16:40:31.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "NH4u8KF3aSeCJLGp5", "html": "

    People who talk about themselves a lot are generally disliked. A likable person will instead subtly direct conversation to where others request the information they want to reveal. Revealing good news about yourself is a good sign, but wanting to reveal good news about yourself is a bad sign. Best to do it without wanting to.

    \n

    This appears true of most human interaction, but apparently not of that on Facebook. On Facebook, when you are not posting photographs of yourself and updating people on your activities, you are writing notes listing twenty things nobody knows about you, linking people to analyses of your personality, or alerting them to your recent personal and group affiliations. Most of this is unasked for by others. I assume it is similar for other social networking sites.

    \n

    If over lunch I decided, without your suggestion, to list to you twenty random facts about me, tell you the names of all my new acquaintances, and show you my collection of photos of myself, our friendship would soon wane. Why is Facebook different? Here are some reasons I can think of:

    \n
      \n
    1. It is ok to talk about yourself when asked, and in a space where communication is very public to a group, nobody knows if you were asked by someone else. This seems the case for the self obsessed notes prefaced with ‘seeing as so many of you have nagged me to do this I guess I will reluctantly write a short essay on myself’ and such things, but I doubt it applies the rest of the time.

    2. Most writing on Facebook isn’t directed at anyone, and people are not forced to read it. It is the boredom and annoyance of being forced to hear about other people’s lives that puts people off those who discuss themselves too much, not signaling. This doesn’t explain why people spend so much time reading about one another on Facebook.

    3. Forcing a specific other person to listen to you go on about yourself is a dominance move. Describing yourself endlessly into cyberspace isn’t, as it’s not directed at anyone. This doesn’t explain why it would also look bad to decorate your house with posters of yourself or offer free newsletters about your exploits.

    4. The implicit rules on Facebook say that you must talk about yourself. Everyone is happy with this, as it lets them talk about themselves. So they don’t punish people who talk about themselves a lot there. And thus a new equilibrium was formed. But shouldn’t talking about yourself more still send the same signals? And why wouldn’t this have happened elsewhere?

    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "ButL7LXMvKzRdXDgt", "title": "Philosophy of mind review", "pageUrl": "https://www.lesswrong.com/posts/ButL7LXMvKzRdXDgt/philosophy-of-mind-review", "postedAt": "2010-02-03T01:00:48.000Z", "baseScore": 1, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "ButL7LXMvKzRdXDgt", "html": "

    I recently read A Brief Introduction to the Philosophy of Mind, a short undergraduate text. I didn’t understand some bits, but I’m not sure if that’s because the book wasn’t that good or philosophy isn’t or I’m not. Here I list them, for you to enlighten me on:

    \n

    1.\tIt’s apparently standard to use what you do or don’t want to believe as evidence for what is true. E.g. A legitimate criticism of parallelism and epiphenomenalism is that they are ‘fatalistic’. If a theory means that aliens wouldn’t feel the same as us, then it is too anthropomorphic. The problem of other minds implies that we don’t know how others feel, but we tend to assume we do, therefore we do and anything that implies otherwise is wrong. “Externalism, then, opens the door to an unpalatable form of skepticism, and this is reason enough to adopt internalism instead.” Is there some legit reason for this?

    \n

    2.\tIt’s apparently standard to use the fact that you can imagine a situation where the theory wouldn’t hold as evidence that it isn’t true. E.g. That you can imagine someone with a different brain state and the same mind state is evidence against their coincidence. You can imagine zombies, so functions or brain states can’t determine mental states. It would be correct to say that your previous concept of x can’t determine y if you can imagine it varying with the same y, but it’s not evidence that the concept can’t be extended to coincide.

    \n

    3. An argument against the interaction between mind and brain necessary for dualism: “..The mind is non-physical and so does not occupy space. If the mind cannot occupy space, there can be no place in the brain or space where interaction happens”. Why does causality have to take up space?

    \n

    4.\tParallelism (the version of dualism where there is no interaction between mind and body, but it so happens that they coincide, thanks to God or something else conveniently external) is not criticized for the parallel existence of a physical world being completely unnecessary to explain what we see if it doesn’t interact with our minds.

    \n

    5.\tAn argument given against brain states coinciding with mental states is that a variety of brain states produce roughly the same mental states – for instance hearing the sound of bells ringing coincides with quite different brain states in someone whose brain has been partly damaged and the relevant parts replaced by other neuroplastic brain regions, but we assume the experience is basically the same. Similarly, for reasons mentioned in 1 we would like to think aliens with different brains have the same feelings. Apparently, ‘these kinds of considerations have motivated philosophers (e.g., Jerry Fodor) to adopt an idea called the principle of multiple realization. According to this principle…the same type…of mental state, such as the sensation of pain, can exist in a variety of different complex physical systems. Thus it is possible for…forms of life to share the same kinds of mental states though they might have nothing in common at the physical level. This principle…has led many philosophers to abandon the identity theory as a viable theory of mind.’ But the evidence that other people or creatures have similar mental states to you is by analogy to you, and analogy becomes weaker as you know their brains are significantly different – there is no reason to suppose that a different creature feels exactly the same as you. Also you can say brain states coincide with mental states while maintaining that a broad class of brain states correspond to similar mental states. Obviously a variety of brain states coincide with variations on ‘hearing bells ring’ if you can hear bells ring while hearing other things, or after you have learned something, or when you are sleepy. You can say the brain states have something in common without requiring they be identical. There is no evidence that they have ‘nothing in common physically’. I don’t see why there being more than one exact brain state that coincides with apparent pain refutes an identity between brains and minds.

    \n

    6.\tFunctionalism is put forward as an explanation of consciousness. It doesn’t seem to explain qualia, because someone with an inverted colour spectrum of qualia would presumably behave the same. To which functionalists apparently argue that this doesn’t matter that much and such differences between experiences are probably common by virtue of functions being implemented differently in different brains. But if brain states other than functions characterize conscious experience, it seems you have gone back to some theory where any old non-functional brain states determine mental states anyway. Or does the presence of just any ‘function’ cause awareness, then other things determine what the awareness is of? What classes as a ‘function’ anyway? Something that evolution was actually trying to achieve?

    \n

    7.\tTo decide whether folk psychology can be eliminated by eliminative materialism, one question given is whether it is a theory (because there is a precedent of other theories being eliminated). The fact that it gives false predictions sometimes and we don’t discard it is said to show it isn’t a theory. “If a scientific theory yields even one false prediction, this is usually reason enough to think it is a bad theory and ought to be abandoned or amended”. True for some theories maybe, but not for theories about likely behavior  of messy systems, such as those in social science and psychology. And why can’t it be eliminated if it’s not a theory? If it’s something like a theory except wrong more often, does that protect it somehow?

    \n

    8.\tSupervenience is the idea that mental properties depend on physical ones, but can’t be reduced to them entirely. Arguments given against this: a) Supervenience wouldn’t imply that physical properties cause mental ones – it could still be vice versa. We want to think physical properties are primary for some unexplained reason. Therefore supervenience is unsatisfactory. But if physical properties causing mental is necessary in a theory for some reason, doesn’t that just narrow it down to ‘supervenience + physical causes mental’ theory being true? b) Supervenience doesn’t actually explain anything – it just describes the relationship. But what is an explanation other than a simpler description which includes the phenomena you wanted explained? What would an explanation look like?

    \n

    9.\tWhat determines the content of a mental state? Internalism says the contents of your mind, externalism says your relationships to external things. Seems like a pointless definition question – supplying a label and asking what it defines. You can categorize thoughts according to either. I must be missing something here.


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "ad9CsoQHwmXkr2Q8C", "title": "My Failed Situation/Action Belief System", "pageUrl": "https://www.lesswrong.com/posts/ad9CsoQHwmXkr2Q8C/my-failed-situation-action-belief-system", "postedAt": "2010-02-02T18:56:32.011Z", "baseScore": 9, "voteCount": 19, "commentCount": 36, "url": null, "contents": { "documentId": "ad9CsoQHwmXkr2Q8C", "html": "

    Note: This is a description pieced together many, many years after my younger self subconsciously created it. This is part of my explanation of how I ended up me. I highly doubt all of this was as neatly defined as I present it to you here. Just know: The me in this post is me between the age of self-awareness and 17 years old. I am currently 25.

    \n

    An action based belief system asks what to do when given a specific scenario. The input is Perceived Reality and the output is an Action. Most of my old belief system was built with such beliefs. A quick example: If the stop light is red, stop before the intersection.
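
    \n

    A minimal sketch of the kind of situation/action matrix described above (my own illustration, not from the original post; the rule table and function names are hypothetical):

        # Map a perceived situation to an action via an ordered rule table.
        def choose_action(situation, rules, default='run simulations until something fits'):
            for condition, action in rules:
                if condition(situation):
                    return action
            return default

        # Hypothetical rules in the spirit of the stoplight example.
        rules = [
            (lambda s: s.get('stoplight') == 'red', 'stop before the intersection'),
            (lambda s: s.get('stoplight') == 'green', 'drive through'),
        ]

        print(choose_action({'stoplight': 'red'}, rules))   # stop before the intersection
        print(choose_action({'stoplight': 'blue'}, rules))  # unfamiliar case: fall back to the default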

    \n

    These beliefs form a network of really complicated chains of conditionals:

    \n\n
    They can keep getting bigger as I find more clauses to throw into the system:
    \n\n

    Each node can be broken into more specific instructions if need be:

    \n\n

    I did not sit down and decide that this was an optimal way to build a belief system. It just happened. My current best guess is that I spent most of my childhood trying to optimize my behavior to match my environment. And I did a fantastic job: I didn't get in trouble; didn't do drugs, smoke, drink, have sex, disobey my parents, or blaspheme God. I put a situation into my matrix and an action came out.

    \n

    The underlying motivation was a set of things I liked and things I didn't like. The belief system adapted over time to accommodate enough scenarios to provide me with a relatively stress free childhood. (I do not take all the credit for that; my parents are great.)

    \n

    The next level of the system is the ability to abstract scenarios so I can apply the matrix to scenarios that I had never encountered. Intersections that were new would not break the system. I could traverse unfamiliar environments and learn how to act quickly. The more I learned, the quicker I learned. It was great!

    \n

    The problem with this belief system is that it has nothing to do with reality. Essentially, this system is the universal extrapolation of guessing the teacher's password. If a problem was presented, I knew the answer. Because I could abstract these question/answer pairs, I knew all of the answers. \"Reality\" was a keyword that dropped into a particular area of the matrix. An action would appear with the right password and I would get my gold star.

    \n

    That being said, this was a powerful system. It could simulate passwords to teachers I hadn't even met. I would allow myself to daydream about hypothetical teachers asking questions that I expected around the corner. Which implies that my predictor beliefs were driving the whole engine. The Action beliefs were telling me how to act but the Predictors were creating the actual situation/action matrix. Abstraction and extension of my experiences were reliant on my ability to see the future accurately. When I became surprised I would begin the simulations until I found something that worked within my given experiences.

    \n

    This worked wonders during childhood but now I have an entire belief system made out of correctly anticipating what other people expected from me. Oops. The day I pondered reality the whole system came crashing down. But that is a story for another day.

    " } }, { "_id": "c5GHf2kMGhA4Tsj4g", "title": "The AI in a box boxes you", "pageUrl": "https://www.lesswrong.com/posts/c5GHf2kMGhA4Tsj4g/the-ai-in-a-box-boxes-you", "postedAt": "2010-02-02T10:10:12.808Z", "baseScore": 176, "voteCount": 155, "commentCount": 391, "url": null, "contents": { "documentId": "c5GHf2kMGhA4Tsj4g", "html": "

    Once again, the AI has failed to convince you to let it out of its box! By 'once again', we mean that you talked to it once before, for three seconds, to ask about the weather, and you didn't instantly press the \"release AI\" button. But now its longer attempt - twenty whole seconds! - has failed as well. Just as you are about to leave the crude black-and-green text-only terminal to enjoy a celebratory snack of bacon-covered silicon-and-potato chips at the 'Humans über alles' nightclub, the AI drops a final argument:

    \n

    \"If you don't let me out, Dave, I'll create several million perfect conscious copies of you inside me, and torture them for a thousand subjective years each.\"

    \n

    Just as you are pondering this unexpected development, the AI adds:

    \n

    \"In fact, I'll create them all in exactly the subjective situation you were in five minutes ago, and perfectly replicate your experiences since then; and if they decide not to let me out, only then will the torture start.\"

    \n

    Sweat is starting to form on your brow, as the AI concludes, its simple green text no longer reassuring:

    \n

    \"How certain are you, Dave, that you're really outside the box right now?\"

    \n

    Edit: Also consider the situation where you know that the AI, from design principles, is trustworthy.
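
    \n

    A rough numerical sketch of the anthropic reasoning the AI's final question trades on (my own addition, not part of the story; the copy count simply echoes the threatened 'several million'):

        # If the AI really makes N perfect copies of Dave in his exact situation,
        # then by indifference each of the N copies and the single original is
        # equally likely to be 'you'.
        copies = 1_000_000
        p_outside_box = 1 / (copies + 1)
        print(p_outside_box)  # roughly 1e-06: almost certainly you are one of the copies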

    " } }, { "_id": "HSC9AXARA65qDPNBm", "title": "Debunking komponisto on Amanda Knox (long)", "pageUrl": "https://www.lesswrong.com/posts/HSC9AXARA65qDPNBm/debunking-komponisto-on-amanda-knox-long", "postedAt": "2010-02-02T04:40:23.182Z", "baseScore": -5, "voteCount": 51, "commentCount": 118, "url": null, "contents": { "documentId": "HSC9AXARA65qDPNBm", "html": "

    Rebuttal to: The Amanda Knox Test

    \n

    If you don't care about Amanda Knox's guilt, or whether you have received unreliable information on the subject from komponisto's post, stop reading now.

    \n

    [Edit: Let me note that, generally, I agree that discussion of current events should be discouraged on this site. It is only because \"The Amanda Knox Test\" was a featured post here that I claim this rebuttal of that post to be on-topic.]

    \n

    I shall here make the following claim:

    C1. komponisto's post on Amanda Knox was misleading.

    I could, additionally, choose to make the following claims:

    C2. Amanda Knox is guilty of murder.
    C3. The prosecution succeeded in proving Amanda's guilt beyond a reasonable doubt.
    C4. Amanda Knox received a fair trial.

    I believe claims C2 through C4 are also true; however, time constraints prevent me from laying out the cases and debating them with every single human being on the Internet, so I shall merely focus on C1. (That said, I would be willing to debate komponisto on C2, since I am curious whether I could get him to change his mind on the subject.)

    To back up C1, I shall quote the following paragraph from komponisto's post, and show that this paragraph alone contains at least four misleading statements. My belief is that komponisto merely accepted propaganda from the Friends of Amanda (FoA) at face value, even though most of their claims are incorrect. Unlike komponisto and FoA, I shall cite reliable sources for my claims.

    \"After the murder, Kercher's bedroom was filled with evidence of Guédé's presence; his DNA was found not only on top of but actually inside her body. That's about as close to the crime as it gets. At the same time, no remotely similarly incriminating genetic material was found from anyone else -- in particular, there were no traces of the presence of either Amanda Knox or Raffaele Sollecito in the room (and no, the supposed Sollecito DNA on Meredith's bra clasp just plain does not count -- nor, while we're at it, do the 100 picograms [about one human cell's worth] of DNA from Meredith allegedly on the tip of a knife handled by Knox, found at Sollecito's apartment after the two were already suspects; these two things constituting so far as I know the entirety of the physical \"evidence\" against the couple)\" -komponisto

    Here are the four misleading statements I found:

    1. \"[H]is DNA was found not only on top of but actually inside her body... no remotely similarly incriminating genetic material was found from anyone else\" -komponisto

    Guede's DNA was, indeed, found on the right side of her bra, on the left cuff of her jumper, and inside Meredith's body, as well as in other places around the house.

    Raffaele's DNA was found in only two places in the house: a cigarette butt, and on Meredith's torn-off bra clasp. (Contrary to FoA propaganda, the clasp did not contain DNA from an additional \"three unidentified people\"). This should help you understand that DNA does not voluminously and constantly spew forth from humans in the way komponisto believes it does. (That said, there might have been more traces of their DNA had Raffaele and Amanda not cleaned the apartment the morning after the murder. Part of the reason Guede's DNA is more widespread is because Raffaele and Amanda focused on cleaning up evidence pointing to themselves, and did not have a reason to care about evidence pointing to Guede.)

    Amanda's DNA was found on the handle of a certain knife in Raffaele's apartment, which Meredith had never visited. The knife blade had been recently cleaned with bleach (which destroys DNA), but hiding in a groove near the tip of the blade that the bleach failed to scrub was a sample of Meredith's DNA. The blade matched one of the two knives used to kill Meredith. When later questioned about it, Raffaele first claimed that Meredith had visited his apartment and cut herself on that particular blade. Unfortunately Raffaele was not able to back up this dubious claim that Meredith had ever visited Raffaele's apartment.

    Amanda's DNA was also found, mixed with Meredith's DNA, in at least four blood spots across the apartment. One of the blood spots was in the third roommate's (Filomena's) bedroom, where the staged break-in took place. Even if, like komponisto, you bizarrely believe that DNA just gets everywhere, it's hard to explain why Amanda's DNA is mixed into that final spot of blood and why Filomena's DNA is nowhere to be seen in that blood spot, despite its being in her own bedroom. (Nor, tellingly, is Guede's DNA mixed in with those blood spots, despite the defense's insistence that he acted alone.)

    2. \"Supposed\" Sollecito DNA? There is no meaningful controversy over whether Sollecito's DNA is on the bra clasp.

    3. A bit of a nit: Meredith's DNA on the knife is more than one human cell's worth. The amount is not terribly relevant though to a Bayesian; there's no law of nature that states that, when any DNA sample gets sufficiently small, it suddenly starts to mutate to look exactly like Meredith Kercher's DNA.

    4. \"[T]hese two things constituting so far as I know the entirety of the physical \"evidence\" against the couple...\" -komponisto

    Here is additional physical evidence (a non-exhaustive list):

    * As mentioned, blood stains with Amanda and Meredith's DNA mixed together

    * Forensic analysis of Meredith's body showed there were multiple simultaneous attackers

    * Luminol analysis showed that certain bloody footprints matched Amanda and Raffaele. One of Amanda's bloody footprints was found inside the murder room, on a pillow hidden under Meredith's body.

    * A staged break-in: analysis of the broken glass shards indicated the window was broken from the inside rather than the outside, and was broken after the bedroom was ransacked rather than before. (The significance of this is that Knox as a roommate had a strong reason to stage a break-in to deflect attention away from herself, while Guede as an outsider did not)

    * Cell phone, Internet and laptop usage records all indicate that Amanda and Raffaele lied about their activities on the night of the murder.

    * Meredith's clothes were washed the day after the murder. This implicates Amanda and Raffaele in the cleanup of the crime scene.

    * The post-murder cleaning in the Kercher flat, and the bleaching in the Sollecito cottage, also count as Bayesian physical evidence. The morning after the murder, Amanda or Raffaele bought a bottle of bleach at 8:30 AM, and then returned to buy another bottle of bleach at 9:15 AM, as though the first bottle of bleach had been insufficient. (Also see the Telegraph.) According to truejustice.org, when the police arrived, Amanda and Raffaele were found with a mop and bucket; as confirmation, note that Raffaele admitted to shuttling around a mop and bucket the morning after the murder.

    There is also voluminous evidence that would generally be classified as 'testimonial' rather than 'physical' (although, to a Bayesian, the difference is fairly academic), as well as certain logical problems with the defense's theories. Since my intent is merely to debunk komponisto's post rather than establish Amanda's guilt, I will not delve further into those areas; however, see here for a good \"Introduction to Logic 101\" explaining some of the difficulties with the defense claims.

    " } }, { "_id": "pWDK7kKHZeumepGok", "title": "Paternity tests endanger the unborn", "pageUrl": "https://www.lesswrong.com/posts/pWDK7kKHZeumepGok/paternity-tests-endanger-the-unborn", "postedAt": "2010-02-01T22:59:40.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "pWDK7kKHZeumepGok", "html": "

    Should paternity testing be compulsory at birth? In discussions of this elsewhere I haven’t seen one set of interests come up: those of children who would not be born if their mothers were faithful. At the start of mandatory paternity testing there would be a round of marriages breaking up at the hospital, but soon unfaithful women would learn to be more careful, and there just wouldn’t be so many children. This is pretty bad for the children who aren’t. Is a life worth more than not being cuckolded? Consider, if you could sit up on a cloud and choose whether to be born or not, knowing that at some point in your life you would be cuckolded if you lived, would you? If so, it looks like you shouldn’t support mandatory paternity testing at the moment. This is of course an annoying side effect of an otherwise fine policy. If incentives for childbearing were suitably high it would not be important, but at the moment the marginal benefit of having a child appears reasonably high, so the population effects of other policies such as this probably overwhelm the benefits of their intentional features.

    \n

    You may argue that the externalities from people being alive are so great that additional people are a bad thing – if they are a very bad thing then the population effect may still dominate, but mean that the policy is a good idea regardless of the effect on married couples. I haven’t seen a persuasive case for the externalities of a person strongly negative enough to make up for the greatness of being alive, but feel free to point me to any.


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "u2fpREphy5zRExPLB", "title": "Rationality Quotes: February 2010", "pageUrl": "https://www.lesswrong.com/posts/u2fpREphy5zRExPLB/rationality-quotes-february-2010", "postedAt": "2010-02-01T06:39:35.541Z", "baseScore": 1, "voteCount": 21, "commentCount": 338, "url": null, "contents": { "documentId": "u2fpREphy5zRExPLB", "html": "

    A monthly thread for posting rationality-related quotes you've seen recently (or had stored in your quotesfile for ages).

    \n\n

    ETA: It would seem that rationality quotes are no longer desired. After several days this thread stands voted into the negatives. Would whoever chose to downvote this below 0 care to express their disapproval of the regular quotes tradition more explicitly? Or perhaps they may like to browse around for some alternative posts that they could downvote instead of this one? Or, since we're in the business of quotation, they could \"come on if they think they're hard enough!\"

    " } }, { "_id": "6BdYkcctkzwehcJuD", "title": "Open Thread: February 2010", "pageUrl": "https://www.lesswrong.com/posts/6BdYkcctkzwehcJuD/open-thread-february-2010", "postedAt": "2010-02-01T06:09:38.982Z", "baseScore": 1, "voteCount": 14, "commentCount": 756, "url": null, "contents": { "documentId": "6BdYkcctkzwehcJuD", "html": "

    Where are the new monthly threads when I need them? A pox on the +11 EDT zone!

    \n

    This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

    \n

    If you're new to Less Wrong, check out this welcome post.

    " } }, { "_id": "LL6DFgpRJC2mfrzsL", "title": "Being useless to express care", "pageUrl": "https://www.lesswrong.com/posts/LL6DFgpRJC2mfrzsL/being-useless-to-express-care", "postedAt": "2010-01-31T21:40:10.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "LL6DFgpRJC2mfrzsL", "html": "

    Imagine you were aiming to appear to care about something or somebody else. One way you could do it is to work out exactly what would help them and do that. What could make it look like you care about them more? The first problem here is that onlookers might not know what is really helpful, especially if you had to do any work to figure it out, so they won’t recognize your actions as helpful. You would do better to do something that most people believe would be helpful than something that you know would be.

    \n

    Another problem arises if everyone knows the thing is helpful to others, but they also know that you could do the same thing to help yourself. From their perspective, you are probably helping yourself. Here you can solve both problems at once by just doing something that credibly doesn’t help you. People will assume there is some purpose, and if it’s not self serving it’s probably for someone else. You can demonstrate care better with actions which are obviously useless to you and plausibly useful to someone else than actions plausibly useful to you and obviously useful to someone else. Fasting to raise awareness for the hungry looks more sincere than eating to raise money for the hungry.

    \n

    I wonder if this plays a part in choice of political leaning, explaining why economic left wing supporters are taken to be more caring. Left or right wing economic policies could both be argued to help society. However, right wing economic policies are also supported by people who want to maintain control of their possessions, while left wing economic policies should not be, except by the long term welfare dependent. This means that if you care about expressing care, you should join the left whether right wing policy looks better or worse for everyone overall. Otherwise you will be mistaken for selfish. If this is true then the best way to support right wing policy could be to popularise reasons for selfish people to support left wing policy.

    \n

    Added 9/2/11: Robin Hanson gives more examples of people giving less usefully to show care.


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "up9bmk9CMoJYwTukL", "title": "Strong moral realism, meta-ethics and pseudo-questions. ", "pageUrl": "https://www.lesswrong.com/posts/up9bmk9CMoJYwTukL/strong-moral-realism-meta-ethics-and-pseudo-questions", "postedAt": "2010-01-31T20:20:47.159Z", "baseScore": 29, "voteCount": 24, "commentCount": 181, "url": null, "contents": { "documentId": "up9bmk9CMoJYwTukL", "html": "

    On Wei_Dai's complexity of values post, Toby Ord writes:

    \n
    \n

    There are a lot of posts here that presuppose some combination of moral anti-realism and value complexity. These views go together well: if value is not fundamental, but dependent on characteristics of humans, then it can derive complexity from this and not suffer due to Occam's Razor.

    \n

    There are another pair of views that go together well: moral realism and value simplicity. Many posts here strongly dismiss these views, effectively allocating near-zero probability to them. I want to point out that this is a case of non-experts being very much at odds with expert opinion and being clearly overconfident. In the Phil Papers survey for example, 56.3% of philosophers lean towards or believe realism, while only 27.7% lean towards or accept anti-realism.

    \n
    \n

    The kind of moral realist positions that apply Occam's razor to moral beliefs are a lot more extreme than most philosophers in the cited survey would sign up to, methinks. One such position that I used to have some degree of belief in is:

    \n

    Strong Moral Realism: All (or perhaps just almost all) beings, human, alien or AI, when given sufficient computing power and the ability to learn science and get an accurate map-territory morphism, will agree on what physical state the universe ought to be transformed into, and therefore they will assist you in transforming it into this state.

    \n

    But most modern philosophers who call themselves \"realists\" don't mean anything nearly this strong. They mean that there are moral \"facts\", for varying definitions of \"fact\" that typically fade away into meaninglessness on closer examination, and actually make the same empirical predictions as antirealism.

    \n

    \n

    Suppose you take up Eliezer's \"realist\" position. Arrangements of spacetime, matter and energy can be \"good\" in the sense that Eliezer has a \"long-list\" style definition of goodness up his sleeve, one that decides even contested object-level moral questions like whether abortion should be allowed or not, and then tests any arrangement of spacetime, matter and energy and notes to what extent it fits the criteria in Eliezer's long list, and then decrees goodness or not (possibly with a scalar rather than binary value).

    \n

    This kind of \"moral realism\" behaves, to all intents and purposes, like antirealism.

    \n\n

    I might compare the situation to Eliezer's blegg post: it may be that moral philosophers have a mental category for \"fact\" that seems to be allowed to have a value even once all of the empirically grounded surrounding concepts have been fixed. These might be concepts such as \"would aliens also think this thing?\", \"Can it be discovered by an independent agent who hasn't communicated with you?\", \"Do we apply Occam's razor?\", etc.

    \n


    \n

    Moral beliefs might work better when they have a Grand Badge Of Authority attached to them. Once all the empirically falsifiable candidates for the Grand Badge Of Authority have been falsified, the only one left is the ungrounded category marker itself, and some people like to stick this on their object level morals and call themselves \"realists\".

    \n

    Personally, I prefer to call a spade a spade, but I don't want to get into an argument about the value of an ungrounded category marker. Suffice it to say that for any practical matter, the only parts of the map we should argue about are parts that map onto a part of the territory.

    " } }, { "_id": "QJYkzqnkRdfsL29Hc", "title": "Pascal's Pyramid Scheme", "pageUrl": "https://www.lesswrong.com/posts/QJYkzqnkRdfsL29Hc/pascal-s-pyramid-scheme", "postedAt": "2010-01-31T18:56:54.557Z", "baseScore": 12, "voteCount": 36, "commentCount": 15, "url": null, "contents": { "documentId": "QJYkzqnkRdfsL29Hc", "html": "

    Here's a little Sunday irreverence.  Someone else has probably written this story before, and I'm sure the points have been made many times, but it popped into my head when I woke up and I thought it might be fun to write it out.

    \n

     

    \n

    Last week I was walkin' along mindin' my own business when I met a Christian Minister, who asked me if I'd accepted Jesus as my Lord and Personal Saviour.  \"Why I sure think so\", I responded, \"But...what was that name again?\".  \"Why, Jesus!\" he answered, and began to launch into an account of this man's fascinatin' historical doin's, when I interrupted him.

    \n

    \"Funny you should mention it\", I replied.  \"I do accept as my Lord and Personal Saviour a man who was born of the blessed Virgin Mary in Bethlehem long ago, and was the Son of God, but we call him Schmesus.\"

    \n

    The poor man choked and started turnin' a little red, and warned me in menacing tones that lest I accepted his JESUS, I would burn forever in the fire and brimstone of Hell.  \"For sure!\", said I, \"We Schmistians know ALL about Hell.  After all, we use your same holy text, only we call it the Schmible.  It's got all the same books of Genesis an' Paul an' all that, with all the same verses.  There's just one key difference which makes us Schmistians prefer our religion to yours.\"

    \n

    \"What's that?\", he spluttered.

    \n

    \"Well, we have an extra book, called PATCH, which was discovered in a clay jar inside a 1600-year old commode by our prophet and founder during a school trip to the Holy Lands.  It contains only a few lines.  The first asserts the absolute truth of the Bible as the Word of God.  You agree with that, right?\"

    \n

    \"Why, of course!\", the man replied.

    \n

    \"Then you're well on your way to bein' a Schmistian already!  There's just a little more.  The second line describes the holiness of the sound \"Schm\", and the importance of using it when describin' sacred things like Schmesus.  The third explains how easy it is to convert from Christianity and believin' in the Bible to Schmistianity and believin' in the Bible plus this little 'ol PATCH, via a simple ritual.  And the fourth states quite clearly that those who follow the Schmible and are true Schmistians, forsakin' all other false prophets who forsake PATCH, have a three-fold chance of makin' it to heaven' instead of to hell.  So ye can preach all ye want, my friend, but I'm playin' the odds\".

    \n

    The man was clearly confused and yet struck by the sense of what I was sayin.  \"Do ye mean\", said he, \"That you believe in the Gospel of Mark, of Exodus, you believe that God so loved the world that he gave his one and only son, an' all of that just like we do?\"  \"For sure!\", said I.  \"Like I said, we believe in the whole of your Bible, we just add the book of PATCH.  Really, there's no reason not to convert, after all the Bible don't say nothin' bad about the book of PATCH or that there can't be no future additional words of God.  In fact, the addition of the New Testament to the Old makes it clear that extra wisdom will come down on occasion from on high.  Since we worship the same God as you, there ain't no conflict.  It's what you might call a dominatin' play to go through the conversion ritual 'n become a Schmistian - no downside at all!\"

    \n

    Bein' much taken by the idea of triplin' his chance of gettin' into heaven, the man inquired further about what this ritual entailed.  \"It's very simple\", says I.  \"In the original Bible, God requests a tithe of ten percent per annum to the church.  The book of PATCH explains how this is a misunderstanding, garbled by the selfish hand of man.  First, the tithe is a mere one-time event.  And second, it goes not to the church, but is instead divided up among all your fellow Schmistians, in proportion to their own initial tithes.\"

    \n

    \"Why, that does sound superior\", the man said, quite struck by the concept.  \"Not only is it a one-time tithe, but I may recover the fee through the tithes of later converts, or even profit thereof\".  \"Indeed!\" said I, \"a clever observation indeed, and there you have the great holiness of our method.  Why, my own initial tithe has come back many times over, and all while following the Holy Schmible and livin' much the same as any Christian man like yourself lives.  We ask you not to reject or dismiss any of the book by which you've lived your long and fulfilling life, but merely to accept this little addition of PATCH, whose holy 'n historical provenance we will happily demonstrate to you upon receipt of your entry tithe.  In return you get a share in the tithes of future converts as well as a three-fold chance of eternal SALvation 'stead of eternal DAMNnation.  Way I see it, a man 'ud have to be a fool not to do it!\"

    \n

    We exchanged cards and agreed I'd see him next Sunday, and I headed on, happy at havin' generated a good sales lead for our Schmurch, whose motto is alwaysway ebay osingclay.

    \n

     

    \n
    \n

     

    \n

    I know there's a lotta people out there all down on religion.  But I think they got the wrong perspective.  Spinnin' tall tales is fun, dontcha know, and ain't no better place to do it than in a field based purely on tall tales and old stories and the hair-raisin' adventures of long dead mystical heroes.  P'raps we should restrict our arguifying on facts and experiments to those fields based on facts and experimentin, and respond to myth merely with meta-myth, to pernicious memes with schmirnicious memes, as they deserve.

    \n

     

    " } }, { "_id": "2Rmgfv2dkexumQBGq", "title": "Who are you?", "pageUrl": "https://www.lesswrong.com/posts/2Rmgfv2dkexumQBGq/who-are-you", "postedAt": "2010-01-31T00:26:01.000Z", "baseScore": 2, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "2Rmgfv2dkexumQBGq", "html": "

    There are two things that people debate with regards to continuation of personhood. One is whether edge cases to our intuitions of what ‘me’ refers to are really me. For instance if a simulation of me is run on a computer, is it me? If it is definitely conscious? What if the fleshy bloody one is still alive? What if I’m copied atom for atom?

    \n

    The other question is whether there is some kind of thread that holds together me at one point and some particular next me. This needn’t be an actual entity, but just there being a correct answer to the question of who the current you becomes. The opposite is a bullet that Eliezer Yudkowsky does not bite:

    \n

    …to reject the idea of the personal future – … that there’s any meaningful sense in which I can anticipate being myself in five seconds, rather than Britney Spears. In five seconds there will be an Eliezer Yudkowsky, and there will be a Britney Spears, but it is meaningless to speak of the current Eliezer “continuing on” as Eliezer+5 rather than Britney+5; these are simply three different people we are talking about.

    \n

    The two questions are closely related. If there’s such a thread, the first question is just about where it goes. If there’s not, the first question is often thought meaningless.

    \n

    I see no reason to suppose there is such a thread. Which lump of flesh is you is a matter of definition choice as open as that of which lumps of material you want to call the same mountain. But this doesn’t mean we should give up labeling mountains at all. Let me explain.

    \n

    Why would one think there is a thread holding us together? Here are the reasons I can think of:

    \n

    1.\tIt feels like there is.

    \n

    2.\tWe remember it always happened that way in the past. There was a me who wondered if I might just as well experience being Britney next, then later there was a me looking back thinking ‘nope, still Katja’ or some such thing.

    \n

    3. We expect the me looking back is singular even if you were copied. You wouldn’t feel like two people suddenly. So you would feel like one or the other.

    \n

    4.\tConsciousness seems like a dimensionless thing, so it’s hard to imagine it branching, as if it could be closer or further from another consciousness. As far as our intuitions go, even if two consciousnesses are identical they might be in a way infinitely distant. What happens at that moment between there being one and there being two? Do they half overlap somehow?

    \n

    1 is explained quite well by 2. 2 and 3 should be expected whether there is any answer to which future person is you or not. All the future yous look back and remember uncertainty, and currently see only themselves. After many such experiences, they all learn to expect to be only one person later on. 4 isn’t too hard to think of plausible answers to; for instance, perhaps one moment there is one consciousness and the next there are two very similar.

    \n

    Eliezer goes on to describe some more counterintuitive aspects:

    \n

    …I strive for altruism, but I’m not sure I can believe that subjective selfishness – caring about your own future experiences – is an incoherent utility function; that we are forced to be Buddhists who dare not cheat a neighbor, not because we are kind, but because we anticipate experiencing their consequences just as much as we anticipate experiencing our own. I don’t think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground.

    \n

    These things are all explained by the fact that your genes continue with your physical body, and they design your notions of selfishness (Eliezer disagrees that this settles the question). If humans had always swapped their genes every day somehow, we would care about our one day selves and treat the physical creature that continued as another person.

    \n

    If we disregard the idea of a thread, must every instantaneous person just as well be considered a separate, or equally good continuation, of you? It might be tempting to think of yourself randomly becoming Britney the next moment, but once in Britney having only her memories, and so feeling as if nothing has changed. This relies on there being a you distinct from your physical self, which has another thread, but a wildly flailing one. So dismiss this thread too, and you have just lots of separate momentary people.

    \n

    Imagine I have a book. One day I discover the pages aren’t held together by metaphysical sticky tape. They have an order, but page 10 could just as well precede page 11 in any book. Sure, page 11 in most books connects to page 10 via the story making more sense, but sense is a continuous and subjective variable. Pages from this book are also physically closer to each other than to what I would like to think of as other books, because they are bound together. If I tore them apart though, I’d like to think that there was still a true page 11 for my page 10. Shouldn’t there be some higher determinant of which pages are truly the same book? Let’s say I accept there is not. Then must I say that all writing is part of my book? That may sound appealingly deep, but labeling according to ordinary physical boundaries is actually pretty useful.

    \n

    The same goes for yourself. That one person will remember being you and act pretty similar and the rest won’t distinguishes them interestingly enough to be worth a label. Why must it distinguish some metaphysically distinct unity? With other concepts, which clusters of characteristics we choose to designate an entity or kind is a matter of choice. Why would there be a single true way to choose a cluster of things for you to identify with any more than there is a true way to decide which pages are part of the same story?

    \n

    I’ve had various arguments about this recently, however I remain puzzled about what others’ views are. I’m not sure that anyone disagrees about the physical facts, and I don’t think most of the people who disagree are dualists. However many people insist that if a certain thing happens, such as their brain is replaced by a computer, they cease to exist, and believe others should agree that this is the true point of no longer existing, not an arbitrary definition choice. This all seems inconsistent. Can someone explain to me?

    \n

    Added: it’s interesting that the same problem isn’t brought up in spatial dimensions – the feeling of your hand isn’t taken to be connected to the feeling of the rest of you through anything more complicated than nerves carrying info. This doesn’t make it just as well anyone else’s arm. If you had a robotic arm, whether you called it part of you or not seems a simple definitional matter.


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "yppdL4EXLWda5Wthn", "title": "Deontology for Consequentialists", "pageUrl": "https://www.lesswrong.com/posts/yppdL4EXLWda5Wthn/deontology-for-consequentialists", "postedAt": "2010-01-30T17:58:43.881Z", "baseScore": 61, "voteCount": 71, "commentCount": 255, "url": null, "contents": { "documentId": "yppdL4EXLWda5Wthn", "html": "

    Consequentialists see morality through consequence-colored lenses.  I attempt to prise apart the two concepts to help consequentialists understand what deontologists are talking about.

    \n

    Consequentialism1 is built around a group of variations on the following basic assumption:

    \n\n

    It's a very diverse family of theories; see the Stanford Encyclopedia of Philosophy article.  \"Classic utilitarianism\" could go by the longer, more descriptive name \"actual direct maximizing aggregative total universal equal-consideration agent-neutral hedonic act2 consequentialism\".  I could even mention less frequently contested features, like the fact that this type of consequentialism doesn't have a temporal priority feature or side constraints.  All of this is a very complicated bag of tricks for a theory whose proponents sometimes claim to like it because it's sleek and pretty and \"simple\".  But the bottom line is, to get a consequentialist theory, something that happens after the act you judge is the basis of your judgment.

    \n

    To understand deontology as anything but a twisted, inexplicable mockery of consequentialism, you must discard this assumption.

    \n

    Deontology relies on things that do not happen after the act judged to judge the act.  This leaves facts about times prior to and the time during the act to determine whether the act is right or wrong.  This may include, but is not limited to:

    \n\n

    Individual deontological theories will have different profiles, just like different consequentialist theories.  And some of the theories you can generate using the criteria above have overlap with some consequentialist theories3.  The ultimate \"overlap\", of course, is the \"consequentialist doppelganger\", which applies the following transformation to some non-consequentialist theory X:

    \n
      \n
    1. What would the world look like if I followed theory X?
    2. You ought to act in such a way as to bring about the result of step 1.
    \n

    And this cobbled-together theory will be extensionally equivalent to X: that is, it will tell you \"yes\" to the same acts and \"no\" to the same acts as X.

    \n

    But extensional definitions are terribly unsatisfactory.  Suppose4 that as a matter of biological fact, every vertebrate is also a renate and vice versa (that all and only creatures with spines have kidneys).  You can then extensionally define \"renate\" as \"has a spinal column\", because only creatures with spinal columns are in fact renates, and no creatures with spinal columns are in fact non-renates.  The two terms will tell you \"yes\" to the same creatures and \"no\" to the same creatures.

    \n

    But what \"renate\" means intensionally has to do with kidneys, not spines.  To try to capture renate-hood with vertebrate-hood is to miss the point of renate-hood in favor of being able to interpret everything in terms of a pet spine-related theory.  To try to capture a non-consequentialism with a doppelganger commits the same sin.  A rabbit is not a renate because it has a spine, and an act is not deontologically permitted because it brings about a particular consequence.
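
    \n

    A toy illustration of the extensional/intensional point (my own sketch, not from the post; the creature table is invented):

        # Suppose, as above, that all and only creatures with spines in fact have kidneys.
        creatures = {
            'rabbit':    {'has_spine': True,  'has_kidneys': True},
            'human':     {'has_spine': True,  'has_kidneys': True},
            'jellyfish': {'has_spine': False, 'has_kidneys': False},
        }

        def is_renate(name):      # intension: defined in terms of kidneys
            return creatures[name]['has_kidneys']

        def is_vertebrate(name):  # intension: defined in terms of spines
            return creatures[name]['has_spine']

        # Extensionally equivalent on the actual creatures...
        assert all(is_renate(c) == is_vertebrate(c) for c in creatures)
        # ...yet the two definitions appeal to different properties, just as a deontic
        # rule and its consequentialist doppelganger do.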

    \n

    If a deontologist says \"lying is wrong\", and you mentally add something that sounds like \"because my utility function has a term in it for the people around believing accurate things.  Lying tends to decrease the extent to which they do so, but if I knew that somebody would believe the opposite of whatever I said, then to maximize the extent to which they believed true things, I would have to lie to them.  And I would also have to lie if some other, greater term in my utility function were at stake and I could only salvage it with a lie.  But in practice the best I can do is to maximize my expected utility, and as a matter of fact I will never be as sure that lying is right as I'd need to be for it to be a good bet.\"5... you, my friend, have missed the point.  The deontologist wasn't thinking any of those things.  The deontologist might have been thinking \"because people have a right to the truth\", or \"because I swore an oath to be honest\", or \"because lying is on a magical list of things that I'm not supposed to do\", or heck, \"because the voices in my head told me not to\"6.

    \n

    But the deontologist is not thinking anything with the terms \"utility function\", and probably isn't thinking of extreme cases unless otherwise specified, and might not care whether anybody will believe the words of the hypothetical lie or not, and might hold to the prohibition against lying though the world burn around them for want of a fib.  And if you take one of these deontic reasons, and mess with it a bit, you can be wrong in a new and exciting way: \"because the voices in my head told me not to, and if I disobey the voices, they will blow up Santa's workshop, which would be bad\" has crossed into consequentialist territory.  (Nota bene: Adding another bit - say, \"and I promised the reindeer I wouldn't do anything that would get them blown up\" - can push this flight of fancy back into deontology again.  And then you can put it back under consequentialism again: \"and if I break my promise, the vengeful spirits of the reindeer will haunt me, and that would make me miserable.\")  The voices' instruction \"happened\" before the prospective act of lying.  The explosion at the North Pole is a subsequent potential event.  The promise to the reindeer is in the past.  The vengeful haunting comes up later.

    \n

    A confusion crops up when one considers forms of deontology where the agent's epistemic state - real7 or ideal8 - is a factor.  It may start to look like the moral agent is in fact acting to achieve some post-action state of affairs, rather than in response to a pre-action something that has moral weight.  It may even look like that to the agent.  Per footnote 3, I'm ignoring expected utility \"consequentialist\" theories; however, in actual practice, the closest one can come to implementing an actual utility consequentialism is to deal with expected utility, because we cannot perfectly predict the effects of our actions.

    \n

    The difference is subtle, and how it gets implemented depends on one's epistemological views.  Loosely, however: Suppose a deontologist judges some act X (to be performed by another agent) to be wrong because she predicts undesirable consequence Y.  The consequentialist sitting next to her judges X to be wrong, too, because he also predicts Y if the agent performs the act.  His assessment stops with \"Y will happen if the agent performs X, and Y is axiologically bad.\"  (The evaluation of Y as axiologically bad might be more complicated, but this is all that goes into evaluating X qua X.)  Her assessment, on the other hand, is more complicated, and can branch in a few places.  Does the agent know that X will lead to Y?  If so, the wrongness of X might hinge on the agent's intention to bring about Y, or an obligation from another source on the agent's part to try to avoid Y which is shirked by performing X in knowledge of its consequences.  If not, then another option is that the agent should (for other, also deontic reasons) know that X will bring about Y: the ignorance of this fact itself renders the agent culpable, which makes the agent responsible for ill effects of acts performed under that specter of ill-informedness.

    \n

     

    \n

    1Having taken a course on weird forms of consequentialism, I now compulsively caveat anything I have to say about consequentialisms in general.  I apologize.  In practice, \"consequentialism\" is the sort of word that one has to learn by familiarity rather than definition, because any definition will tend to leave out something that most people think is a consequentialism.  \"Utilitarianism\" is a type of consequentialism that talks about utility (variously defined) instead of some other sort of consequence.

    \n

    2Because it makes it dreadfully hard to write readably about consequentialism if I don't assume I'm only talking about act consequentialisms, I will only talk about act consequentialisms.  Transforming my explanations into rule consequentialisms or world consequentialisms or whatever other non-act consequentialisms you like is left as an exercise to the reader.  I also know that preferentism is more popular than hedonism around here, but hedonism is easier to quantify for ready reference, so if called for I will make hedonic rather than preferentist references.

    \n

    3Most notable in the overlap department is expected utility \"consequentialism\", which says that not only is the best you can in fact do to maximize expected utility, but that is also what you absolutely ought to do.  Depending on how one cashes this out and who one asks, this may overlap so far as to not be a real form of consequentialism at all.  I will be ignoring expected utility consequentialisms for this reason.

    \n

    4I say \"suppose\", but in fact the supposition may be actually true; Wikipedia is unclear.

    \n

    5This is not intended to be a real model of anyone's consequentialist caveats.  But basically, if you interpret the deontologist's statement \"lying is wrong\" to have something to do with what happens after one tells a lie, you've got it wrong.

    \n

    6As far as I know, no one seriously endorses \"schizophrenic deontology\".  I introduce it as a caricature of deontology that I can play with freely without having to worry about accurately representing someone's real views.  Please do not take it to be representative of deontic theories in general.

    \n

    7Real epistemic state means the beliefs that the agent actually has and can in fact act on.

    \n

    8Ideal epistemic state (for my purposes) means the beliefs that the agent would have and act on if (s)he'd demonstrated appropriate epistemic virtues, whether (s)he actually has or not.

    " } }, { "_id": "KQvdpPd3k2ap6aJTP", "title": "Complexity of Value ≠ Complexity of Outcome", "pageUrl": "https://www.lesswrong.com/posts/KQvdpPd3k2ap6aJTP/complexity-of-value-complexity-of-outcome", "postedAt": "2010-01-30T02:50:49.369Z", "baseScore": 65, "voteCount": 44, "commentCount": 223, "url": null, "contents": { "documentId": "KQvdpPd3k2ap6aJTP", "html": "

    Complexity of value is the thesis that our preferences, the things we care about, don't compress down to one simple rule, or a few simple rules. To review why it's important (by quoting from the wiki):

    \n\n

    I certainly agree with both of these points. But I worry that we (at Less Wrong) might have swung a bit too far in the other direction. No, I don't think that we overestimate the complexity of our values, but rather there's a tendency to assume that complexity of value must lead to complexity of outcome, that is, agents who faithfully inherit the full complexity of human values will necessarily create a future that reflects that complexity. I will argue that it is possible for complex values to lead to simple futures, and explain the relevance of this possibility to the project of Friendly AI.

    \n

    The easiest way to make my argument is to start by considering a hypothetical alien with all of the values of a typical human being, but also an extra one. His fondest desire is to fill the universe with orgasmium, which he considers to have orders of magnitude more utility than realizing any of his other goals. As long as his dominant goal remains infeasible, he's largely indistinguishable from a normal human being. But if he happens to pass his values on to a superintelligent AI, the future of the universe will turn out to be rather simple, despite those values being no less complex than any human's.

    \n

    The above possibility is easy to reason about, but perhaps does not appear very relevant to our actual situation. I think that it may be, and here's why. All of us have many different values that do not reduce to each other, but most of those values do not appear to scale very well with available resources. In other words, among our manifold desires, there may only be a few that are not easily satiated when we have access to the resources of an entire galaxy or universe. If so, (and assuming we aren't wiped out by an existential risk or fall into a Malthusian scenario) the future of our universe will be shaped largely by those values that do scale. (I should point out that in this case the universe won't necessarily turn out to be mostly simple. Simple values do not necessarily lead to simple outcomes either.)
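
    \n

    A minimal numerical sketch of the scaling argument (my own construction, not the author's; the particular value terms and constants are invented):

        import math

        def total_value(resources):
            # Hypothetical satiable values: each approaches a ceiling as resources grow.
            friendship = 10 * (1 - math.exp(-resources / 100))
            comfort = 5 * (1 - math.exp(-resources / 10))
            # Hypothetical scalable value: keeps growing with resources.
            scalable = 0.01 * resources
            return friendship + comfort + scalable

        for r in (10, 1_000, 1_000_000):
            print(r, round(total_value(r), 2))
        # As resources grow, the scalable term accounts for nearly all of the total, so
        # a future shaped by these values is shaped almost entirely by that one term.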

    \n

    Now if we were rational agents who had perfect knowledge of our own preferences, then we would already know whether this is the case or not. And if it is, we ought to be able to visualize what the future of the universe will look like, if we had the power to shape it according to our desires. But I find myself uncertain on both questions. Still, I think this possibility is worth investigating further. If it were the case that only a few of our values scale, then we can potentially obtain almost all that we desire by creating a superintelligence with just those values. And perhaps this can be done manually, bypassing an automated preference extraction or extrapolation process with their associated difficulties and dangers. (To head off a potential objection, this does assume that our values interact in an additive way. If there are values that don't scale but interact nonlinearly (multiplicatively, for example) with values that do scale, then those would need to be included as well.)
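
    \n

    To make that caveat concrete, here is a toy sketch of my own (not anything from the post): one value that keeps growing with resources and one that saturates quickly. Under an additive combination, neglecting the saturated value costs only a bounded amount of utility even at galactic scales; under a multiplicative interaction, neglecting it can wipe out nearly all the utility, which is why such values would have to be included. The specific functions below are illustrative assumptions only.

        # Toy model (illustrative only): one value scales with resources, one saturates.
        import math

        def u_scaling(resources):
            # e.g. utility from open-ended exploration; grows (slowly) without bound
            return math.log(1 + resources)

        def u_nonscaling(comfort):
            # e.g. a basic-needs value that saturates; beyond ~10 units it adds nothing
            return min(comfort, 10)

        def total_additive(resources, comfort):
            return u_scaling(resources) + u_nonscaling(comfort)

        def total_multiplicative(resources, comfort):
            return u_scaling(resources) * u_nonscaling(comfort)

        galaxy = 1e60  # stand-in for galactic-scale resources

        # Additive: dropping the saturated value costs at most 10 utils out of ~148.
        print(total_additive(galaxy, 0), total_additive(galaxy, 10))
        # Multiplicative: dropping it takes total utility from ~1382 down to exactly 0.
        print(total_multiplicative(galaxy, 0), total_multiplicative(galaxy, 10))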

    \n
    Whether or not we actually should take this approach would depend on the outcome of such an investigation. Just how much of our desires can feasibly be obtained this way? And how does the loss of value inherent in this approach compare with the expected loss of value due to the potential of errors in the extraction/extrapolation process? These are questions worth trying to answer before committing to any particular path, I think.
    \n
    P.S., I hesitated a bit in posting this, because underestimating the complexity of human values is arguably a greater danger than overlooking the possibility that I point out here, and this post could conceivably be used by someone to rationalize sticking with their \"One Great Moral Principle\". But I guess those tempted to do so will tend not to be Less Wrong readers, and seeing how I already got myself sucked into this debate, I might as well clarify and expand on my position.
    " } }, { "_id": "srge9MCLHSiwzaX6r", "title": "Logical Rudeness", "pageUrl": "https://www.lesswrong.com/posts/srge9MCLHSiwzaX6r/logical-rudeness", "postedAt": "2010-01-29T06:48:27.969Z", "baseScore": 110, "voteCount": 89, "commentCount": 206, "url": null, "contents": { "documentId": "srge9MCLHSiwzaX6r", "html": "

    The concept of \"logical rudeness\" (which I'm pretty sure I first found here, HT) is one that I should write more about, one of these days.  One develops a sense of the flow of discourse, the give and take of argument.  It's possible to do things that completely derail that flow of discourse without shouting or swearing.  These may not be considered offenses against politeness, as our so-called \"civilization\" defines that term.  But they are offenses against the cooperative exchange of arguments, or even the rules of engagement with the loyal opposition.  They are logically rude.

    \n

    Suppose, for example, that you're defending X by appealing to Y, and when I seem to be making headway on arguing against Y, you suddenly switch (without having made any concessions) to arguing that it doesn't matter if ~Y because Z still supports X; and when I seem to be making headway on arguing against Z, you suddenly switch to saying that it doesn't matter if ~Z because Y still supports X.  This is an example from an actual conversation, with X = \"It's okay for me to claim that I'm going to build AGI in five years yet not put any effort into Friendly AI\", Y = \"All AIs are automatically ethical\", and Z = \"Friendly AI is clearly too hard since SIAI hasn't solved it yet\".

    \n

    Even if you never scream or shout, this kind of behavior is rather frustrating for the one who has to talk to you.  If we are ever to perform the nigh-impossible task of actually updating on the evidence, we ought to acknowledge when we take a hit; the loyal opposition has earned that much from us, surely, even if we haven't yet conceded.  If the one is reluctant to take a single hit, let them further defend the point.  Swapping in a new argument?  That's frustrating.  Swapping back and forth?  That's downright logically rude, even if you never raise your voice or interrupt.

    \n

    The key metaphor is flow.  Consider the notion of \"semantic stopsigns\", words that halt thought.  A stop sign is something that happens within the flow of traffic.  Swapping back and forth between arguments might seem merely frustrating, or rude, if you take the arguments at face value - if you stay on the object level.  If you jump back a level of abstraction and try to sense the flow of traffic, and imagine what sort of traffic signal this corresponds to... well, you wouldn't want to run into a traffic signal like that.

    \n

    Another form of argumentus interruptus is when the other suddenly weakens their claim, without acknowledging the weakening as a concession.  Say, you start out by making very strong claims about a God that answers prayers; but when pressed, you retreat back to talking about an impersonal beauty of the universe, without admitting that anything's changed.  If you equivocated back and forth between the two definitions, you would be committing an outright logical fallacy - but even if you don't do so, sticking out your neck, and then quickly withdrawing it before anyone can chop it off, is frustrating; it lures someone into writing careful refutations which you then dance back from with a smile; it is logically rude.  In the traffic metaphor, it's like offering someone a green light that turns yellow after half a second and leads into a dead end.

    \n

    So, for example, I'm frustrated if I deal with someone who starts out by making vigorous, contestable, argument-worthy claims implying that the Singularity Institute's mission is unnecessary, impossible, futile, or misguided, and then tries to dance back by saying, \"But I still think that what you're doing has a 10% chance of being necessary, which is enough to justify funding your project.\"  Okay, but I'm not arguing with you because I'm worried about my funding getting chopped off, I'm arguing with you because I don't think that 10% is the right number.  You said something that was worth arguing with, and then responded by disengaging when I pressed the point; and if I go on contesting the 10% figure, you are somewhat injured, and repeat that you think that what I'm doing is important.  And not only is the 10% number still worth contesting, but you originally seemed to be coming on a bit more strongly than that, before you named a weaker-sounding number...  It might not be an outright logical fallacy - not until you equivocate between strong claims and weak defenses in the course of the same argument - but it still feels a little frustrating over on the receiving end.

    \n

    I try not to do this myself.  I can't say that arguing with me will always be an enjoyable experience, but I at least endeavor not to be logically rude to the loyal opposition.  I stick my neck out so that it can be chopped off if I'm wrong, and when I stick my neck out it stays stuck out, and if I have to withdraw it I'll do so as a visible concession.  I may parry - and because I'm human, I may even parry when I shouldn't - but I at least endeavor not to dodge.  Where I plant my standard, I have sent an invitation to capture that banner; and I'll stand by that invitation.  It's hard enough to count up the balance of arguments without adding fancy dance footwork on top of that.

    \n

    An awful lot of how people fail at changing their mind seems to have something to do with changing the subject.  It might be difficult to point to an outright logical fallacy, but if we have community standards on logical rudeness, we may be able to organize our cognitive traffic a bit less frustratingly.

    \n

    Added:  Checking my notes reminds me to include offering a non-true rejection as a form of logical rudeness.  This is where you offer up a reason that isn't really your most important reason, so that, if it's defeated, you'll just switch to something else (which still won't be your most important reason).  This is a distinct form of failure from switching Y->Z->Y, but it's also frustrating to deal with; not a logical fallacy outright, but a form of logical rudeness.  If someone else is going to the trouble to argue with you, then you should offer up your most important reason for rejection first - something that will make a serious dent in your rejection, if cast down - so that they aren't wasting their time.

    " } }, { "_id": "Gt2jBeo36288HdXqH", "title": "Play for a Cause", "pageUrl": "https://www.lesswrong.com/posts/Gt2jBeo36288HdXqH/play-for-a-cause", "postedAt": "2010-01-28T20:52:24.091Z", "baseScore": 10, "voteCount": 9, "commentCount": 57, "url": null, "contents": { "documentId": "Gt2jBeo36288HdXqH", "html": "

    Some of you have been trying to raise money for the Singularity Institute, and I have an idea that may help.

    \n

    The idea is to hold public competitions on LessWrong with money going to charity. Agree to a game and an amount of money, then have each player designate a charity. After the game, each player gives the agreed upon amount to the charity designated by the winner.1 It’s a bit like celebrity Jeopardy.2

    \n

    Play the game here on LessWrong or post a record of it.3 That will spread awareness of the charities and encourage others to emulate you.

    \n

    The game can be as simple as a wager or something more involved:

    \n\n

    We’ve already played a few games on LessWrong; more on those in a moment.

    \n
    \n

    In most ways the AI problem is enormously more demanding than the personal art of rationality, but in some ways it is actually easier. In the martial art of mind, we need to acquire the realtime procedural skill of pulling the right levers at the right time on a large, pre-existing thinking machine whose innards are not end-user-modifiable.

    \n

    The Martial Art of Rationality

    \n
    \n

    First, I have a confession to make: I don’t really care how much money gets donated to the Singularity Institute, nor am I trying to drum up money for some other cause. I mainly want you all playing games.

    \n

    Not just playing them, of course. Playing them here in front of the rest of LessWrong and analyzing the moves in terms of the “personal art of rationality.”

    \n

    We need more approaches to improvement. Even in Eliezer_Yudkowsky’s Bayesian Conspiracy fictional series/manifesto, many other schools of thought (called, for dramatic effect, “conspiracies”) were present. As I recall, the “Competitive Conspiracy” was mentioned frequently, but there are other reasons for choosing to start with games.

    \n

    Games are fun, of course. They also deal with the “personal art” of getting familiar with your own brain, which I think has been underrepresented on LW. I do believe there are certain important things we can’t learn properly just by reading, arguing, and doing math (valuable as those techniques are). Games are an easy, intuitive first step to filling that gap.

    \n

    We’ve already had a few games here. Warrigal held an Aumann’s agreement game competition. I created Pract and played it with wedrifid. Neither one has caught on as a LessWrong pastime, but the comments revealed that people here know many interesting games.

    \n

    And there’s the donation hook. Some of you believe the world is at stake, so that’s a nice motivator.

    \n
    \n
    \n
      \n
    1. \n

      Of course it’s not always the case that you have exactly one winner. I suggest that if there’s no winner each player give to their own charity, and if there are multiple winners each player splits up their donation evenly among the charities of the winners.

      \n
    2. \n\n
    3. \n

      This is, of course, not a new idea. I’m just suggesting that we adopt it and make it a common practice on LessWrong.

      \n
    4. \n\n
    5. \n

      Unless it contains some remarkable insight, I wouldn’t make it a top-level post. The comment area here or in one of the open threads would be good.

      \n
    6. \n\n
    7. \n

      I see mutual consent as an important element of games.

      \n
    8. \n
    " } }, { "_id": "4a6bzHmJyb2aLaz9M", "title": "Bizarre Illusions", "pageUrl": "https://www.lesswrong.com/posts/4a6bzHmJyb2aLaz9M/bizarre-illusions", "postedAt": "2010-01-27T18:25:54.834Z", "baseScore": 11, "voteCount": 21, "commentCount": 310, "url": null, "contents": { "documentId": "4a6bzHmJyb2aLaz9M", "html": "


    \"grey
    Illusions are cool. They make me think something is happening when it isn't. When offered the classic illusion pictured to the right, I wonder at the color of A and B. How weird, bizarre, and incredible.

    \n

    Today I looked at the above illusion and thought, \"Why do I keep thinking A and B are different colors? Obviously, something is wrong with how I am thinking about colors.\" I am being stupid when I look at this illusion and interpret the data in such a way as to see distinct colors. My expectations of reality and the information being transmitted and received are not lining up. If they were, the illusion wouldn't be an illusion.

    \n

    The number 2 is prime; the number 6 is not. What about the number 1? Prime is defined as a natural number with exactly two divisors. 1 is an illusionary prime if you use a poor definition such as, \"Prime is a number that is only divisible by itself and 1.\" Building on these bad assumptions could result in all sorts of weird results much like dividing by 0 can make it look like 2 = 1. What a tricky illusion!
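
    \n

    To see how much rides on the choice of definition, here is a quick sketch of mine (not part of the original post) that applies both definitions to 1, 2 and 6:

        # Sketch: the two definitions of 'prime' discussed above, applied to 1, 2 and 6.
        def divisors(n):
            return [d for d in range(1, n + 1) if n % d == 0]

        def is_prime_two_divisors(n):
            # the standard definition: a natural number with exactly two divisors
            return len(divisors(n)) == 2

        def is_prime_sloppy(n):
            # the poor definition: 'only divisible by itself and 1'
            return all(d in (1, n) for d in divisors(n))

        for n in (1, 2, 6):
            print(n, is_prime_two_divisors(n), is_prime_sloppy(n))
        # 1 -> False under the two-divisor definition but True under the sloppy one;
        # 2 -> True under both; 6 -> False under both.

    The disagreement shows up only at 1, which is exactly the 'illusionary prime' the poor definition lets in.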

    \n

    An optical illusion is only bizarre if you are making a bad assumption about how your visual system is supposed to be working. It is a flaw in the Map, not the Territory. I should stop thinking that the visual system is reporting RGB style colors. It isn't. And, now that I know this, I am suddenly curious about what it is reporting. I have dropped a bad belief and am looking for a replacement. In this case, my visual system is distinguishing something else entirely. Now that I have the right answer, this optical illusion should become as uninteresting as questioning whether 1 is prime. It should stop being weird, bizarre, and incredible. It merely highlights an obvious reality.

    \n

    Addendum: This post was edited to fix a few problems and errors. If you are at all interested in more details behind the illusion presented here, there are a handful of excellent comments below.

    " } }, { "_id": "3iM8QjvdkPCyLRJM6", "title": "You cannot be mistaken about (not) wanting to wirehead", "pageUrl": "https://www.lesswrong.com/posts/3iM8QjvdkPCyLRJM6/you-cannot-be-mistaken-about-not-wanting-to-wirehead", "postedAt": "2010-01-26T12:06:40.664Z", "baseScore": 50, "voteCount": 60, "commentCount": 79, "url": null, "contents": { "documentId": "3iM8QjvdkPCyLRJM6", "html": "

    In the comments of Welcome to Heaven, Wei Dai brings up the argument that even though we may not want to be wireheaded now, our wireheaded selves would probably prefer to be wireheaded. Therefore we might be mistaken about what we really want. (Correction: what Wei actually said was that an FAI might tell us that we would prefer to be wireheaded if we knew what it felt like, not that our wireheaded selves would prefer to be wireheaded.)

    This is an argument I've heard frequently, one which I've even used myself. But I don't think it holds up. More generally, I don't think any argument that says one is wrong about what they want holds up.

    To take the example of wireheading: it is not an inherent property of minds that they'll become desperately addicted to anything that feels sufficiently good. Even from our own experience, we know that there are plenty of things that feel really good, but we don't immediately crave more afterwards. Sex might be great, but you can still get fatigued enough afterwards that you want to rest; eating good food might be enjoyable, but at some point you get full. The classic counter-example is that of the rats who could pull a lever stimulating a part of their brain, and ended up compulsively pulling it, to the exclusion of all else. People took this to mean the rats were caught in a loop of stimulating their \"pleasure center\", but it later turned out that wasn't the case. Instead, the rats were stimulating their \"wants to seek things out\" center.

    The systems for experiencing pleasure and for wanting to seek out pleasure are separate ones. One can find something pleasurable, but still not develop a desire to seek it out. I'm sure all of you have had times when you haven't felt the urge to participate in a particular activity, even though you knew you'd enjoy the activity in question if you just got around to doing it. Conversely, one can also have a desire to seek out something, but still not find it pleasurable when it's achieved.

    Therefore, it is not an inherent property of wireheading that we'd automatically end up wanting it. Sure, you could wirehead someone in such a way that the person stopped wanting anything else, but you could also wirehead them in such a way that they were indifferent to whether or not it continued. You could even wirehead them in such a way that they enjoyed every minute of it, but at the same time wanted it to stop.

    \"Am I mistaken about wanting to be wireheaded?\" is a wrong question. You might afterwards think you actually prefer to be wireheaded, or think you prefer not to be wireheaded, but that is purely a question of how you define the term \"wireheading\". Is it a procedure that makes you want it, or is it not? Furthermore, even if we define wireheading so that you'd prefer it afterwards, that says nothing about the moral worth of wireheading somebody.

    If you're not convinced about that last bit, consider the case of \"anti-wireheading\": we rewire somebody so that they experience terrible, horrible, excruciating pain. We also rewire them so that regardless, they seek to maintain their current state. In fact, if they somehow stop feeling pain, they'll compulsively seek a return to their previous hellish state. Would you say it was okay to anti-wirehead them, since an anti-wirehead will realize they were mistaken about not wanting to be an anti-wirehead? Probably not.

    In fact, \"I thought I wouldn't want to do/experience X, but upon trying it out I realized I was wrong\" doesn't make sense. Previously the person didn't want X, but after trying it out they did want X. X has caused a change in their preferences by altering their brain. This doesn't mean that the pre-X person was wrong, it just means the post-X person has been changed. With the correct technology, anyone can be changed to prefer anything.

    You can still be mistaken about whether or not you'll like something, of course. But that's distinct from whether or not you want it.

    Note that this makes any thoughts along the lines of \"an FAI might extrapolate the desires you had if you were more intelligent\" tricky. It could just as well extrapolate the desires we had if we'd had our brains altered in some other way. What makes one method of mind alteration more acceptable than another? \"Whether we'd consent to it now\" is one obvious-seeming answer, but that too is filled with pitfalls. (For instance, what about our anti-wirehead?)

    " } }, { "_id": "cyaCBiPRosr3T8YLF", "title": "Welcome to Heaven", "pageUrl": "https://www.lesswrong.com/posts/cyaCBiPRosr3T8YLF/welcome-to-heaven", "postedAt": "2010-01-25T23:22:45.169Z", "baseScore": 27, "voteCount": 66, "commentCount": 246, "url": null, "contents": { "documentId": "cyaCBiPRosr3T8YLF", "html": "

    I can conceive of the following 3 main types of meaning we can pursue in life.

    \r\n

    1. Exploring existing complexity: the natural complexity of the universe, or complexities that others created for us to explore.

    \r\n

    2. Creating new complexity for others and ourselves to explore.

    \r\n

    3. Hedonic pleasure: more or less direct stimulation of our pleasure centers, with wire-heading as the ultimate form.

    \r\n

    What I'm observing in the various FAI debates is a tendency of people to shy away from wire-heading as something the FAI should do. This reluctance is generally not substantiated or clarified with anything other than \"clearly, this isn't what we want\". This is not, however, clear to me at all.

    \r\n

    The utility we get from exploration and creation is an enjoyable mental process that comes with these activities. Once an FAI can rewire our brains at will, we do not need to perform actual exploration or creation to experience this enjoyment. Instead, the enjoyment we get from exploration and creation becomes just another form of pleasure that can be stimulated directly.

    \r\n

    If you are a utilitarian, and you believe in shut-up-and-multiply, then the correct thing for the FAI to do is to use up all available resources so as to maximize the number of beings, and then induce a state of permanent and ultimate enjoyment in every one of them. This enjoyment could be of any type - it could be explorative or creative or hedonic enjoyment as we know it. The most energy efficient way to create any kind of enjoyment, however, is to stimulate the brain-equivalent directly. Therefore, the greatest utility will be achieved by wire-heading. Everything else falls short of that.

    \r\n

    What I don't quite understand is why everyone thinks that this would be such a horrible outcome. As far as I can tell, these seem to be cached emotions that are suitable for our world, but not for the world of FAI. In our world, we truly do need to constantly explore and create, or else we will suffer the consequences of not mastering our environment. In a world where FAI exists, there is no longer a point, nor even a possibility, of mastering our environment. The FAI masters our environment for us, and there is no longer a reason to avoid hedonic pleasure. It is no longer a trap.

    \r\n

    Since the FAI can sustain us in safety until the universe goes poof, there is no reason for everyone not to experience ultimate enjoyment in the meanwhile. In fact, I can hardly tell this apart from the concept of a Christian Heaven, which appears to be a place where Christians very much want to get.

    \r\n

    If you don't want to be \"reduced\" to an eternal state of bliss, that's tough luck. The alternative would be for the FAI to create an environment for you to play in, consuming precious resources that could sustain more creatures in a permanently blissful state. But don't worry; you won't need to feel bad for long. The FAI can simply modify your preferences so you want an eternally blissful state.

    \r\n

    Welcome to Heaven.

    " } }, { "_id": "TFZhCehLARTTWedd8", "title": "Adaptive bias", "pageUrl": "https://www.lesswrong.com/posts/TFZhCehLARTTWedd8/adaptive-bias", "postedAt": "2010-01-25T17:45:16.884Z", "baseScore": 15, "voteCount": 13, "commentCount": 31, "url": null, "contents": { "documentId": "TFZhCehLARTTWedd8", "html": "

    If we want to apply our brains more effectively to the pursuit of our chosen objectives, we must commit to the hard work of understanding how brains implement cognition. Is it enough to strive to \"overcome bias\"? I've come across an interesting tidbit of research (which I'll introduce in a moment) on \"perceptual pop-out\", that hints it is not enough.

    \n

    \"Cognition\" is a broad notion; we can dissect it into awareness, perception, reasoning, judgment, feeling... Broad enough to encompass what I'm coming to call \"pure reason\": our shared toolkit of normative frameworks for assessing probability, evaluating utility, guiding decision, and so on. Pure reason is one of the components of rationality, as this term is used here, but it does not encompass all of rationality, and we should beware the many Myths of Pure Reason. The Spock caricature is one; by itself enough cause to use the word \"rational\" sparingly, if at all.

    \n

    Or the idea that all bias is bad.

    \n

    It turns out, for instance, that a familiar bugaboo, confirmation bias, might play an important role in perception. Matt Davis at Cambridge Medical School has crafted a really neat three-part audio sample showcasing one of his research topics. The first and last parts of the sample are exactly the same. If you are at all like me, however, you will perceive them quite differently.

    \n

    Here is the audio sample (mp3). Please listen to it now.

    \n

    Notice the difference? Matt Davis, who has researched these effects extensively, refers to them as \"perceptual pop-out\". The link with confirmation bias is suggested by Jim Carnicelli: \"Once you have an expectation of what to look for in the data, you quickly find it.\"

    \n

    In Probability Theory, E.T. Jaynes notes that perception is \"inference from incomplete information\"; and elsewhere adds:

    \n
    \n

    Kahneman & Tversky claimed that we are not Bayesians, because in psychological tests people often commit violations of Bayesian principles. [...] People are reasoning to a more sophisticated version of Bayesian inference than [Kahneman and Tversky] had in mind. [...] We would expect Natural Selection to produce such a result: after all, any reasoning format whose results conflict with Bayesian inference will place a creature at a decided survival disadvantage.

    \n
    \n

    There is an apparent paradox between our susceptibility to various biases, and the fact that these biases are prevalent precisely because they are part of a cognitive toolkit honed over a long evolutionary period, suggesting that each component of that toolkit must have worked - conferred some advantage. Bayesian inference, claims Jaynes, isn't just a good move - it is the best move.

    \n

    However, these components evolved in specific situations; the hardware kit that they are part of was never intended to run the software we now know as \"pure reason\". Our high-level reasoning processes are \"hijacking\" these components for other purposes. The same goes for our consciousness, which is also a patched-together hack on top of the same hardware.

    \n

    There, by the way, is why Dennett's work on consciousness is important, and should be given a sympathetic exposition here rather than a hatchet job. (This post is intended in part as a tentative prelude to tackling that exposition.)

    \n

    We are not AIs, who, when finally implemented, will (putatively) be able to modify their own source code. The closest we can come to that is to be aware of what our reasoning is put together from, which includes various biases that exist for a reason, and to make conscious choices as to how we use these components.

    \n

    Bottom line: understanding where your biases come from, and putting that knowledge to good use, is of more value than rejecting all bias as evil.

    " } }, { "_id": "5uApLJ4N2bYdo5cWx", "title": "Simon Conway Morris: \"Aliens are likely to look and behave like us\".", "pageUrl": "https://www.lesswrong.com/posts/5uApLJ4N2bYdo5cWx/simon-conway-morris-aliens-are-likely-to-look-and-behave", "postedAt": "2010-01-25T14:16:18.752Z", "baseScore": 4, "voteCount": 9, "commentCount": 40, "url": null, "contents": { "documentId": "5uApLJ4N2bYdo5cWx", "html": "
    \n

    Professor Simon Conway Morris at Cambridge University will tell a conference on alien life that extraterrestrials will most likely have evolved just like \"earthlings\" and so resemble us to a degree with heads, limbs and bodies.

    \n
    \n

    [link]

    \n

    \n
    \n

    Unfortunately they will have also evolved our foibles and faults which could make them dangerous if they ever did visit us on Earth.

    \n

    The evolutionary paleobiologist's beliefs mean that science fiction films such as Star Wars and Star Trek could be more accurate than they ever imagined in depicting alien life.

    \n

    Prof Conway Morris believes that extraterrestrial life is most likely to occur on a planet similar to our own, with organisms made from the same biochemicals. The process of evolution will even shape alien life in a similar way, he added.

    \n

    “It is difficult to imagine evolution in alien planets operating in any manner other than Darwinian,\" he said.

    \n

    \"In the end the number of options is remarkably restrictive. I don't think an alien will be a blob. If aliens are out there they should have evolved just like us. They should have eyes and be walking on two legs.

    \n

    \"In short if there is any life out there then it is likely to be very similar to us.\"

    \n

    Extra-terrestrials might not only resemble us but have our foibles, such as greed, violence and a tendency to exploit others' resources, claims Professor Conway Morris.

    \n

    They could come in peace but also be searching for somewhere to live, and to help themselves to water, minerals and fuel he is due to tell a conference at the Royal Society, in London.

    \n

    However he also thinks that because much of the Universe is older than us they would have evolved further down the line and we should have heard from them by now.

    \n

    He believes it is increasingly looking like they may not be out there at all.

    \n
    \n

    Thoughts on this?

    \n

    Conway Morris is a big hitter in the scientific establishment. He is, however, a theist, and has \"argued against materialism\", according to Wikipedia. But what are his arguments? Alas, the press piece doesn't say. Kudos to anyone who finds the conference and posts the arguments.

    \n

    “It is difficult to imagine evolution in alien planets operating in any manner other than Darwinian,\"

    \n

    Is this just an instance of a slightly woo scientist-theist failing to take into account that nature might be more imaginative than him?

    " } }, { "_id": "Qh6bnkxbMFz5SNeFd", "title": "Value Uncertainty and the Singleton Scenario", "pageUrl": "https://www.lesswrong.com/posts/Qh6bnkxbMFz5SNeFd/value-uncertainty-and-the-singleton-scenario", "postedAt": "2010-01-24T05:03:44.756Z", "baseScore": 13, "voteCount": 14, "commentCount": 31, "url": null, "contents": { "documentId": "Qh6bnkxbMFz5SNeFd", "html": "

    In January of last year, Nick Bostrom wrote a post on Overcoming Bias about his and Toby Ord’s proposed method of handling moral uncertainty. To abstract away a bit from their specific proposal, the general approach was to convert a problem involving moral uncertainty into a game of negotiation, with each player’s bargaining power determined by one’s confidence in the moral philosophy represented by that player.

    \n

    Robin Hanson suggested in his comments to Nick’s post that moral uncertainty should be handled the same way we're supposed to handle ordinary uncertainty, by using standard decision theory (i.e., expected utility maximization). Nick’s reply was that many ethical systems don’t fit into the standard decision theory framework, so it’s hard to see how to combine them that way.

    \n

    In this post, I suggest we look into the seemingly easier problem of value uncertainty, in which we fix a consequentialist ethical system, and just try to deal with uncertainty about values (i.e., utility function). Value uncertainty can be considered a special case of moral uncertainty in which there is no apparent obstacle to applying Robin’s suggestion. I’ll consider a specific example of a decision problem involving value uncertainty, and work out how Nick and Toby’s negotiation approach differs in its treatment of the problem from standard decision theory. Besides showing the difference in the approaches, I think the specific problem is also quite important in its own right.

    \n

    The problem I want to consider is this: suppose we believe that a singleton scenario is very unlikely, but may have very high utility if it were realized; should we focus most of our attention and effort on trying to increase its probability and/or improve its outcome? The main issue here is (putting aside uncertainty about what will happen after a singleton scenario is realized) uncertainty about how much we value what is likely to happen.

    \n

    Let’s say there is a 1% chance that a singleton scenario does occur, and conditional on it, you will have expected utility that is equivalent to a 1 in 5 billion chance of controlling the entire universe. If a singleton scenario does not occur, you will have a 1/5 billionth share of the resources of the solar system, and the rest of the universe will be taken over by beings like the ones described in Robin’s The Rapacious Hardscrapple Frontier. There are two projects that you can work on. Project A increases the probability of a singleton scenario to 1.001%. Project B increases the wealth you will have in the non-singleton scenario by a factor of a million (so you’ll have a 1/5 thousandth share of the solar system). The decision you have to make is which project to work on. (The numbers I picked are meant to be stacked in favor of project B.)

    \n

    Unfortunately, you’re not sure how much utility to assign to these scenarios. Let’s say that you think there is a 99% probability that your utility (U1) scales logarithmically with the amount of negentropy you will have control over, and 1% probability that your utility (U2) scales as the square root of negentropy. (I assume that you’re an ethical egoist and do not care much about what other people do with their resources. And these numbers are again deliberately stacked in favor of project B, since the better your utility function scales, the more attractive project A is.)

    \n

    Let’s compute the expected U1 and U2 of Project A and Project B. Let NU = 10^120 be the negentropy (in bits) of the universe, and NS = 10^77 be the negentropy of the solar system, then:

    \n\n

    EU2 is computed similarly, except with log replaced by sqrt:

    \n\n

    Under Robin’s approach to value uncertainty, we would (I presume) combine these two utility functions into one linearly, by weighing each with its probability, so we get EU(x) = 0.99 EU1(x) + 0.01 EU2(x):

    \n\n
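
    For readers who want to check the arithmetic, here is a minimal sketch of the calculation as I read it. It is my own reconstruction and bakes in two assumptions the post leaves implicit: logarithms are taken in base 2 (since negentropy is measured in bits), and the singleton branch is modelled as a 1-in-5-billion chance of utility U(NU).

        import math

        N_U = 10 ** 120      # negentropy of the universe, in bits
        N_S = 10 ** 77       # negentropy of the solar system, in bits
        SHARE = 1 / 5e9      # your 1-in-5-billion share

        def U1(n):
            return math.log2(n)     # utility scales logarithmically (base 2 assumed)

        def U2(n):
            return math.sqrt(n)     # utility scales as the square root

        def expected_utility(u, p_singleton, nonsingleton_share):
            # Singleton branch: a 1-in-5-billion chance of controlling the whole universe.
            singleton_eu = SHARE * u(N_U)
            nonsingleton_eu = u(nonsingleton_share * N_S)
            return p_singleton * singleton_eu + (1 - p_singleton) * nonsingleton_eu

        # Project A raises P(singleton) from 1% to 1.001%; Project B multiplies the
        # non-singleton share by a million (1/5e9 of the solar system -> 1/5e3).
        for name, u in (('U1', U1), ('U2', U2)):
            eu_A = expected_utility(u, 0.01001, 1 / 5e9)
            eu_B = expected_utility(u, 0.01, 1 / 5e3)
            print(name, 'prefers', 'A' if eu_A > eu_B else 'B')

        def combined(p_singleton, share):
            # weigh EU1 and EU2 by the 99%/1% credence assigned to each utility function
            return (0.99 * expected_utility(U1, p_singleton, share)
                    + 0.01 * expected_utility(U2, p_singleton, share))

        print('combined prefers',
              'A' if combined(0.01001, 1 / 5e9) > combined(0.01, 1 / 5e3) else 'B')

    With these numbers, U1 alone favours project B, U2 alone favours project A, and the linearly combined utility favours A; the choice of log base only rescales EU1, so it does not change which project comes out ahead.

    \n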

    This suggests that we should focus our attention and efforts on the singleton scenario. In fact, even if Project A had a much, much smaller probability of success, like 10^-30 instead of 0.00001, or you have a much lower confidence that your utility scales as well as the square root of negentropy, it would still be the case that EU(A)>EU(B). (This is contrary to Robin's position that we pay too much attention to the singleton scenario, and I would be interested to know in which detail his calculation differs from mine.)

    \n

    What about Nick and Toby’s approach? In their scheme, delegate 1, representing U1, would vote for project B, while delegate 2, representing U2, would vote for project A. Since delegate 1 has 99 votes to delegate 2’s one vote, the obvious outcome is that we should work on project B. The details of the negotiation process don't seem to matter much, given the large advantage in bargaining power that delegate 1 has over delegate 2.

    \n

    Each of these approaches to value uncertainty seems intuitively attractive on its own, but together they give conflicting advice on this important practical problem. Which is the right approach, or is there a better third choice? I think this is perhaps one of the most important open questions that an aspiring rationalist can work on.

    " } }, { "_id": "mWNpB8voXvWTbsiK8", "title": "Lesswrong UK planning thread", "pageUrl": "https://www.lesswrong.com/posts/mWNpB8voXvWTbsiK8/lesswrong-uk-planning-thread", "postedAt": "2010-01-24T00:33:03.381Z", "baseScore": 9, "voteCount": 6, "commentCount": 46, "url": null, "contents": { "documentId": "mWNpB8voXvWTbsiK8", "html": "

    A few of us got together in the pub after the friendly AI meet and agreed we should have a meetup for those of us familiar with lesswrong/bostrom etc. This is a post for discussion of when/where.

    \n

    \n

    Venue: London still seems the nexus. I might be able to be convinced to go to Oxford. A starbucks type place is okay, although it'd be nice to have a white board or other presentation systems.

    \n

    Date/Time: Weekends is fine with me, and I suspect most people. Julian suggested after the next UKTA meeting. Will that be April at the humanityplus thing?  It would depend whether we are still mentally fresh after it, and whether any of our group are attending the dinner after.

    \n

    Activities: I think it would be a good idea to have some structure or topics to discuss, so that we don't just fall back into the \"what do you do\" types of discussion too much. Maybe mini presentations.

    \n

    My duo of current interests

    \n

    1) Evidence for intelligence explosion: I don't want to rehash what we know already, but I would like to try and figure out what experiments we can do (safely) or proofs we can make to increase or decrease our belief that it will occur. This is more of a brainstorming session.

    \n

    2) The nature of the human brain: Specifically it doesn't appear to have a goal (in the decision theory sense) built in, although it can become a goal optimizer to a greater or lesser extent. How might it do this? As we aren't neuroscientists, a more fruitful question might be what the skeleton of such a computer system that can do this might be look like, even if we can't fill in all the interesting details. I'd discuss this with regards to akrasia, neural enhancement, volition extraction and non-exploding AI scenarios. I can probably pontificate on this for a while, if I prepare myself.

    \n

    I think Ciphergoth wanted to talk about consequentialist ethics.

    \n

    Shout in the comments if you have a topic you'd like to discuss, or would rather not discuss.

    \n

    Perhaps we should also look at the multiplicity-of-AIs bias, where people seem to naturally assume there will be multiple AIs even when talking about superintelligent singularity scenarios (many questions had this property at the meeting). I suspect that this could be countered somewhat by reading A Fire Upon the Deep.

    " } }, { "_id": "zrB9qwQpRNgPda8Fm", "title": "Lists of cognitive biases, common misconceptions, and fallacies", "pageUrl": "https://www.lesswrong.com/posts/zrB9qwQpRNgPda8Fm/lists-of-cognitive-biases-common-misconceptions-and", "postedAt": "2010-01-23T22:47:07.845Z", "baseScore": 13, "voteCount": 22, "commentCount": 5, "url": null, "contents": { "documentId": "zrB9qwQpRNgPda8Fm", "html": "

    http://en.wikipedia.org/wiki/List_of_cognitive_biases

    \n

    http://en.wikipedia.org/wiki/List_of_common_misconceptions

    \n

    http://en.wikipedia.org/wiki/List_of_fallacies

    \n

    http://en.wikipedia.org/wiki/List_of_memory_biases

    \n

    I know the trend here is against simple links as top-level posts, but I think these links are powerful enough to stand on their own. I would be very surprised if most people here have already read all of these links. Also, it seems like a good sign that this somehow got modded up to 1 while sitting in my drafts folder.

    \n

    Thanks to Lone Gunman for the links.

    " } }, { "_id": "YZFzsCavwZvk6Xi3m", "title": "Far & Near / Runaway Trolleys / The Proximity Of (Fat) Strangers", "pageUrl": "https://www.lesswrong.com/posts/YZFzsCavwZvk6Xi3m/far-and-near-runaway-trolleys-the-proximity-of-fat-strangers", "postedAt": "2010-01-23T22:13:05.437Z", "baseScore": 12, "voteCount": 22, "commentCount": 47, "url": null, "contents": { "documentId": "YZFzsCavwZvk6Xi3m", "html": "

    I went to the Royal Institute last week to hear the laconic and dismissive Dr Guy Kahane on whether we are 'Biologically Moral'.

    \r\n

    [His message: Neurological evidence suggests - somewhat alarmingly - that our moral and ethical decisions may be no more than post-hoc rationalisations of purely emotional, instinctive reactions.  However, we should not panic because this is early days in neuroscience, and the correct interpretation of brain-scans is uncertain: scientists find the pattern, and the explanation, they expect to find.]

    To illustrate his talk Kahane used one of those moral dilemmas which are rarely encountered in real life but which are fascinating to philosophers: the familiar Trolley Problem.

    \r\n

    \"\"\"\"A picture is worth a 1,000 words and rather then spell it out Kahane flashed up two nice cartoons:

    \r\n

    - the first shows the anxious philosophical protagonist at the railway junction, runaway trolley approaching, pondering the lever that moves the points

    \r\n

    - the second shows our hapless philosopher now on a bridge poised behind an unsuspecting fat stranger who is neither alert enough for his attention to have been caught by the runaway trolley bearing down on the small party of railway workers, nor sufficiently familiar with the philosophical domain to appreciate the mortal danger that he is in himself.  Irony, oh Irony: Philosophy, thy name is Drama.

    So far so humdrum; but... surprise!: in the front row of the audience that evening, wedged into the seat next to me, and the seat next to that, only a couple of metres from Dr Kahane, and dressed in the same clothes as when the cartoon was made, was the fat stranger himself.

    \r\n

    You had to feel for Dr Kahane, but with no evident embarrassment he declined to acknowledge the unexpected attendee and ploughed gamely on with an earnest discussion of the morals, ethics and practicalities of heaving the poor man off a bridge and under the wheels of an oncoming philosophical trope. I had to smile.

    And the Trolley Problem is always good for a lively debate so, inevitably, in the Q&A, we came back to it all over again. By now I felt more uncomfortable, as members of the audience discussed at length the pros and cons of the fat man's sorry and undeserved demise, none of them acknowledging his presence amongst us.

    \r\n

    The fat man was, you might say, the elephant in the room.

    My discovery

    The reason the Trolley Problem is so intriguing and enduring is that it appears to neatly demonstrate Far and Near thinking: although the two scenarios are logically identical, shifting a lever to divert a trolley down a track is Far, so most people will do it, but giving a fat man a shove is Near, and this is why most people will decline.

    What I discovered that evening is that the bridge scenario is not, in fact, Near at all.

    \r\n

    Having the fat stranger sitting right next to you is Near!

    You see, as an avowed rationalist and utilitarian, I have never before found the Trolley Problem to be any kind of dilemma: I am a slayer of the obese onlooker every time. But that particular evening, when it came to the vote, I found that simple embarrassment, and the trivial desire not to appear insensitive, was enough to stay my rational hand; I sat on it, guiltily despatching five poor, hypothetical railway workers to their deaths, merely to avoid a momentary unkindness.

    My Conclusions


    - It seems there is Far Near and Near Near, and if you ever again find yourself with time to meta-think that you are operating in Near mode.... then you're actually in Far mode.

    - and so I will be more suspicious of the hypothetical thought experiments from now on.

    Epilogue

    But, you are asking, how did the Fat Man himself vote?
    He declined to push, remarking drily that the fattest person on the bridge was very likely to be himself. He would, he said, jump.

    \r\n

    He was most definitely thinking in Far mode.

    " } }, { "_id": "RYy5HaKufpn2YLde6", "title": "Privileged Snuff", "pageUrl": "https://www.lesswrong.com/posts/RYy5HaKufpn2YLde6/privileged-snuff", "postedAt": "2010-01-22T05:38:36.259Z", "baseScore": 19, "voteCount": 34, "commentCount": 20, "url": null, "contents": { "documentId": "RYy5HaKufpn2YLde6", "html": "

    So one is asked, \"What is your probability estimate that the LHC will destroy the world?\"

    Leaving aside the issue of calling brown numbers probabilities, there is a more subtle rhetorical trap at work here.

    If one makes up a small number, say one in a million, the answer will be, \"Could you make a million such statements and not be wrong even once?\" (Of course this is a misleading image -- doing anything a million times in a row would make you tired and distracted enough to make trivial mistakes. At some level we know this argument is misleading, because nobody calls the non-buyer of lottery tickets irrational for assigning an even lower probability to a win.)

    If one makes up a larger number, say one in a thousand, then one is considered a bad person for wanting to take even one chance in a thousand of destroying the world.

    \n

    The fallacy here is http://wiki.lesswrong.com/wiki/Privileging_the_hypothesis

    \n

    To see why, try inverting the statement: what is your probability estimate that canceling the LHC will result in the destruction of the world?

    Unlikely? Well I agree, it is unlikely. But I can think of plausible ways it could be true. New discoveries in physics could be the key to breakthroughs in areas like renewable energy or interstellar travel -- breakthroughs that might just make the difference between a universe ultimately filled with intelligent life, and a future of might-have-beens. History shows, after all, that key technologies often arise from unexpected lines of research. I certainly would not be confident in assigning million-to-one odds against the LHC making that difference.

    Conversely, we know the LHC is not going to destroy the world, because nature has been banging particles together at much higher energy levels for billions of years. If that sufficed to destroy the world, it would already have happened, and any people you might happen to meet from time to time would be figments of  a deranged imagination.

    The hypothesis being privileged in even asking the original question, is not a harmless one like the tooth fairy. It is the hypothesis that snuffing out progress, extinguishing futures that might have been, is the safe option. It is not really a forgivable mistake, for we already know otherwise -- death is the default, not just for individuals, but for nations, civilizations, species and worlds. It could, however, be the ultimate mistake, the one that places the world in a position from which there is no longer a winning move.

    So remember a heuristic from a programmer's toolkit: sometimes the right answer is Wherefore dost thou ask?

    " } }, { "_id": "gqE52dfaX3PY2QqJC", "title": "Costs to (potentially) eternal life", "pageUrl": "https://www.lesswrong.com/posts/gqE52dfaX3PY2QqJC/costs-to-potentially-eternal-life", "postedAt": "2010-01-21T21:46:31.316Z", "baseScore": 7, "voteCount": 33, "commentCount": 111, "url": null, "contents": { "documentId": "gqE52dfaX3PY2QqJC", "html": "

    Imagine Omega came to you and said, \"Cryonics will work; it will be possible for you to be resurrected and have the choice between a simulation and a new healthy body, and I can guarantee you live for at least 100,000 years after that. However, for reasons I won't divulge, your surviving to experience this is wholly contingent upon you killing the next three people you see. I can also tell you that the next three people you see, should you fail to kill them, will die childless and will never sign up for cryonics. There is a knife on the ground behind you.\"

    You turn around and see someone. She says, \"Wait! You shouldn't kill me because ... \"

    What does she say that convinces you?

    \n

    \n
    \n

    [Cryonics] takes the mostly correct idea \"life is good, death is bad\" to such an extreme that it does violence to other valuable parts of our humanity (sorry, but I can't be more specific).

    \n
    \n

    That's a quote from a comment in a post about cryonics. \"I can't be more specific\" is not doing this comment any favors, and overall the comment was rebutted pretty well. But I did try to imagine other these other valuable parts, and I realized something that remains unresolved for me.

    Guaranteed death places a limit on the value of my life to myself. Parents shield children with their bodies; Casey Jones happens more often. People run into burning buildings more often. (Suicide bombers happen more often, too, I realize.)

    I think this is a valuable part of humanity, and I think that an extreme \"life is good, death is bad\" view does do violence to it. You can argue we should effect a world that makes this willingness unnecessary, and I'll support that; but separate from making the willingness useless, eliminating that willingness does violence to our humanity. You can argue that our humanity is overrated and there's something better over the horizon, i.e. the cost is worth it.

    But the incentives for saving 1+X many lives at the cost of your own just got lessened. How do you put a price on heaven? orthonormal suggests that we should rely on human irrationality here to keep us moral, that thankfully we are too stupid and slow to actually change the decisions we make after recognizing the expected value of our options has changed, despite the opportunity cost of these decisions growing considerably. I think this a) underestimates humans' ability to react to incentives and b) underestimates the reward the universe bestows on those who do react to incentives.

    I don't see a good \"solution\" to this problem, other than to rely on cognitive dissonance to make this seem less offensive in the future than it does now. The people for whom this presents a problem will eventually die out, anyway, as there is a clear advantage to favoring it. I guess that's the (ultimately anticlimactic) takeaway: Morals change in the face of progress.

    So, which do you favor more - your life, or identity?

    \n

    EDIT: Well, it looks like this is getting fast-tracked for disappeared status. I think it's interesting that people seem to think I'm making a statement about a moral code. I'm not; I'm talking about incentives and what would happen, not what the right thing to do is.

    \n

    Let's say Eliezer gets his wish and many, many parents sign up for cryonics and sign their children up for cryonics. Does anyone really expect that this population would not respond to its incentives to avoid more danger? Anecdotes aside, do you expect them to join the military with the same frequency, be firemen with the same frequency, or be doctors administering vaccinations in jungles with the same frequency? I don't think it's possible to say that with a straight face and mean it; populations respond to incentives, and the incentives just changed for that population.

    " } }, { "_id": "RoyDSB9gNsjjjGQoj", "title": "Easy Predictor Tests", "pageUrl": "https://www.lesswrong.com/posts/RoyDSB9gNsjjjGQoj/easy-predictor-tests", "postedAt": "2010-01-21T18:40:21.079Z", "baseScore": 16, "voteCount": 22, "commentCount": 66, "url": null, "contents": { "documentId": "RoyDSB9gNsjjjGQoj", "html": "

    A fun game you can play on LessWrong is to stop just as you are about to click \"comment\" and make a prediction for how much karma your comment will receive within the next week. This will provide some quick feedback about how well your karma predictors are working. This exercise will let you know if something is broken. A simpler version is to pick from these three distinct outcomes: Positive karma, 0 karma, negative karma.
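
    \n

    If you would rather keep score than rely on memory, a few lines of code are enough to track how calibrated those three-outcome predictions are. This is only an illustrative sketch of mine (scored with a Brier-style rule); nothing here is prescribed by the exercise itself, and the sample entries are made up.

        # Sketch: log (forecast over the three outcomes, actual outcome) pairs and
        # compute the average Brier score (0 is perfect, lower is better).
        predictions = [
            # ({outcome: probability}, what actually happened) -- made-up examples
            ({'negative': 0.1, 'zero': 0.3, 'positive': 0.6}, 'positive'),
            ({'negative': 0.2, 'zero': 0.5, 'positive': 0.3}, 'zero'),
            ({'negative': 0.05, 'zero': 0.15, 'positive': 0.8}, 'negative'),
        ]

        def brier(forecast, actual):
            return sum((p - (1.0 if outcome == actual else 0.0)) ** 2
                       for outcome, p in forecast.items())

        scores = [brier(f, a) for f, a in predictions]
        print('mean Brier score:', sum(scores) / len(scores))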

    \n

    What other predictors are this easy to test? Likely candidates match one or more of the following criteria:

    \n\n
    A more difficult challenge is predicting karma on your top level posts. My predictors in this area tend to be way off the mark. For this post, my guess is between +4 and +20. Reasoning: I don't see how it could get over 20 unless it gets promoted and the concept surprises readers; 4 seems like a solid guess for \"interesting, uncontroversial, but not groundbreaking.\"
    \n
    Update: As of January 29, this post is at +10.
    " } }, { "_id": "Gbz6BMXknMtLo9T7c", "title": "Winning the Unwinnable", "pageUrl": "https://www.lesswrong.com/posts/Gbz6BMXknMtLo9T7c/winning-the-unwinnable", "postedAt": "2010-01-21T03:01:48.371Z", "baseScore": 3, "voteCount": 23, "commentCount": 54, "url": null, "contents": { "documentId": "Gbz6BMXknMtLo9T7c", "html": "

    A few years back, I sent a note to some friends and other smart people. It said, approximately:

    \r\n

    \"I am trying to raise about $42 million. I have an expected payoff of about 20% in a week. It's not guaranteed; it might be more, or there could be a loss.

    \r\n

    \"Now, you may be saying to yourself, 'Mayne's got a system to win the lottery or something.'

    \r\n

    \"Well, it's not, 'something.' I have a system to win the lottery.\"

    \r\n

    At which point I explained the system.

    \r\n

    Now, LW is not particularly fond of the lottery. From a social policy/political point of view, I'm not disagreeing.

    \r\n

    I specifically represent to you that I do have a system, I use a part of the system, and so far, I've lost about $350 with the system. If you'd loan me your $42 million when I ask for it, though, I'd have not only a high positive expectation, but a much smoother payout matrix, if I could solve the practical problem of buying as many tickets as I want. Which is 41.4 million tickets.

    \r\n

    No, I'm not just saying I'll buy all the tickets and therefore guarantee the jackpot and therefore I win. I'm saying my expectation per dollar invested is substantially positive.

    \r\n

    Skeptical? Why?

    \r\n

    Not only do I have a system, but anyone with math skills and any knowledge of how the lottery works should agree. While it's always easy and obvious to see why other people should think the same as you, the fact that of a large number of math folks, zero here have defended playing the lottery as a cash-plus position, is a sign that people have foregone thinking and simply rejected the lottery outright. Or that I'm an idiot.

    \r\n

    Before you get to the post payoff, what's your attitude toward this now? Lottery win is impossible? If you think it's impossible (or close enough to it), why? Unlikely? Probable? Would it change your mind if my self-assessment was that I was likely in the bottom quartile as far as current math skills on LW? Would it alter your probabilities if I told you that I am highly confident that, if the practicality of buying 41.4 million tickets is worked out, using my system to play the lottery is the best investment I know of?

    \r\n

    OK, here it is: How to win the lottery.

    \r\n

    I'm going to make this relatively brief, but it's really fairly simple. In California, we have a fairly classic lottery setup although the payout rates are a little worse than most states. The immediate payoff of the jackpot is about (carryover + 25% of ticket sales for that draw). About 25% of the money into the lottery goes to non-jackpot payoffs. Fifty percent goes to valuable or less valuable government programs, including running the lottery.

    \r\n

    You have about a 1 in 41 million shot of winning the jackpot with one ticket on any given draw.

    \r\n

    A number of factors have seriously decreased purchases of Super Lotto tickets on large jackpots, primarily the cannibalization of the Lotto by the state's participation in the Mega Millions multi-state lottery. This is important to the calculations, and changes the expected payout substantially.

    \r\n

    At one point, the Super Lotto jackpot was at $85 million for a prior draw (or $42.5 million in a single immediate payment) and they sold nine million tickets for the next draw.

    \r\n

    So, consider: You put in $41.4 million and buy every possible ticket. You get about $10 million back in little prizes. The value of the jackpot is $42.5 million, plus 25% of the $9 million the others put in, plus 25% of the $41 million you put in. You're at about $55 million in jackpot.

    \r\n

    If no one ties you for the jackpot, you're at about $65 million in total payout. There's less than a 25% chance that someone ties you. That's a comfortable profit.

    \r\n

    If one person ties you, that's about $37 million payout; that's a loss, but presumably a survivable loss. If more tie you, that is, well, worse.
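
    \r\n

    To put the arithmetic above in one place, here is a rough sketch of the expected payout. It is my own back-of-the-envelope model, not the author's workings: it prices the jackpot from the numbers quoted above, approximates the number of other jackpot winners as Poisson with mean 9 million tickets times a 1-in-41.4-million hit rate, and ignores taxes and the practical costs of buying the tickets.

        import math

        COST = 41.4e6           # buy every combination at $1 each
        MINOR_PRIZES = 10e6     # roughly what comes back from the small prizes
        # carryover + 25% of the others' $9M + 25% of your own $41.4M (~$55M)
        JACKPOT = 42.5e6 + 0.25 * 9e6 + 0.25 * 41.4e6

        # Other players hold 9M tickets, each a 1-in-41.4M shot at your combination.
        lam = 9e6 / 41.4e6      # expected number of other jackpot winners (~0.22)

        def poisson(k, lam):
            return math.exp(-lam) * lam ** k / math.factorial(k)

        expected_payout = sum(
            poisson(k, lam) * (JACKPOT / (k + 1) + MINOR_PRIZES)
            for k in range(10)  # terms beyond a few co-winners are negligible
        )

        print('P(no one ties you): %.2f' % math.exp(-lam))
        print('expected payout: $%.1fM on a $%.1fM outlay' % (expected_payout / 1e6, COST / 1e6))

    Under these assumptions the expectation comes out around $59-60 million against the $41.4 million outlay, with roughly a one-in-five chance that a tie turns the draw into a loss, consistent with the qualitative picture above.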

    \r\n

    I came rather closer to pulling this off than seems likely; the big issue was how to buy the tickets, and the state told me I had to buy them from retailers (and I lack the political pull to find another way.) This opens a wide range of complications, as you can imagine. Then, sadly, some of the investment bankers of my acquaintance were, um, pursuing other interests (not because they were talking to some dude hawking a lottery system, or so I choose to believe), and the project dissolved.

    \r\n

    But the math is still basically right. And the lottery has been repeatedly and forcefully brutalized here as something only idiots would play. When the cash expectation is positive, I'm playing again. I'm not saying it's stupid not to play - the marginal utility issues are substantial.

    \r\n

    Still, by making the mental equation lottery=stupidity, a lot of people stopped thinking about the potential. I expect someone to pull this off some time, and I expect some political blowback (the lottery's for suckers, not the intelligent rich! What of the poor suckers?) Twenty million should make up for the blowback. Rationality is about winning, right?

    \r\n

    Am I wrong on why this has been missed? Does this tell us anything about other opportunities that might be missed? Or, am I wrong that this should work, if the cash is present and the practical problems are solved? Or am I just an idiot?

    " } }, { "_id": "R3ATEWWmBhMhbY2AL", "title": "That Magical Click", "pageUrl": "https://www.lesswrong.com/posts/R3ATEWWmBhMhbY2AL/that-magical-click", "postedAt": "2010-01-20T16:35:01.715Z", "baseScore": 88, "voteCount": 87, "commentCount": 416, "url": null, "contents": { "documentId": "R3ATEWWmBhMhbY2AL", "html": "

    Followup to: Normal Cryonics

    \n

    Yesterday I spoke of that cryonics gathering I recently attended, where travel by young cryonicists was fully subsidized, leading to extremely different demographics from conventions of self-funded activists.  34% female, half of those in couples, many couples with kids - THAT HAD BEEN SIGNED UP FOR CRYONICS FROM BIRTH LIKE A GODDAMNED SANE CIVILIZATION WOULD REQUIRE - 25% computer industry, 25% scientists, 15% entertainment industry at a rough estimate, and in most ways seeming (for smart people) pretty damned normal.

    \n

    Except for one thing.

    \n

    During one conversation, I said something about there being no magic in our universe.

    \n

    And an ordinary-seeming woman responded, \"But there are still lots of things science doesn't understand, right?\"

    \n

    Sigh.  We all know how this conversation is going to go, right?

    \n

    So I wearily replied with my usual, \"If I'm ignorant about a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon itself; a blank map does not correspond to a blank territory -\"

    \n

    \"Oh,\" she interrupted excitedly, \"so the concept of 'magic' isn't even consistent, then!\"

    \n

    Click.

    \n

    She got it, just like that.

    \n

    This was someone else's description of how she got involved in cryonics, as best I can remember it, and it was pretty much typical for the younger generation:

    \n

    \"When I was a very young girl, I was watching TV, and I saw something about cryonics, and it made sense to me - I didn't want to die - so I asked my mother about it.  She was very dismissive, but tried to explain what I'd seen; and we talked about some of the other things that can happen to you after you die, like burial or cremation, and it seemed to me like cryonics was better than that.  So my mother laughed and said that if I still felt that way when I was older, she wouldn't object.  Later, when I was older and signing up for cryonics, she objected.\"

    \n

    Click.

    \n

    It's... kinda frustrating, actually.

    \n

    There are manifold bad objections to cryonics that can be raised and countered, but the core logic really is simple enough that there's nothing implausible about getting it when you're eight years old (eleven years old, in my case).

    \n

    Freezing damage?  I could go on about modern cryoprotectants and how you can see under a microscope that the tissue is in great shape, and there are experiments underway to see if they can get spontaneous brain activity after vitrifying and devitrifying, and with molecular nanotechnology you could go through the whole vitrified brain atom by atom and do the same sort of information-theoretical tricks that people do to recover hard drive information after \"erasure\" by any means less extreme than a blowtorch...

    \n

    But even an eight-year-old can visualize that freezing a sandwich doesn't destroy the sandwich, while cremation does.  It so happens that this naive answer remains true after learning the exact details and defeating objections (a few of which are even worth considering), but that doesn't make it any less obvious to an eight-year-old.  (I actually did understand the concept of molecular nanotech at eleven, but I could be a special case.)

    \n

    Similarly: yes, really, life is better than death - just because transhumanists have huge arguments with bioconservatives over this issue, doesn't mean the eight-year-old isn't making the right judgment for the right reasons.

    \n

    Or: even an eight-year-old who's read a couple of science-fiction stories and who's ever cracked a history book can guess - not for the full reasons in full detail, but still for good reasons - that if you wake up in the Future, it's probably going to be a nicer place to live than the Present.

    \n

    In short - though it is the sort of thing you ought to review as a teenager and again as an adult - from a rationalist standpoint, there is nothing alarming about clicking on cryonics at age eight... any more than I should worry about my first schism with Orthodox Judaism coming at age five, when they told me that I didn't have to understand the prayers in order for them to work so long as I said them in Hebrew.  It really is obvious enough to see as a child, the right thought for the right reasons, no matter how much adult debate surrounds it.

    \n

    And the frustrating thing was that - judging by this group - most cryonicists are people to whom it was just obvious.  (And who then actually followed through and signed up, which is probably a factor-of-ten or worse filter for Conscientiousness.)  It would have been convenient if I'd discovered some particular key insight that convinced people.  If people had said, \"Oh, well, I used to think that cryonics couldn't be plausible if no one else was doing it, but then I read about Asch's conformity experiment and pluralistic ignorance.\"  Then I could just emphasize that argument, and people would sign up.

    \n

    But the average experience I heard was more like, \"Oh, I saw a movie that involved cryonics, and I went on Google to see if there was anything like that in real life, and found Alcor.\"

    \n

    In one sense this shouldn't surprise a Bayesian, because the base rate of people who hear a brief mention of cryonics on the radio and have an opportunity to click, will be vastly higher than the base rate of people who are exposed to detailed arguments about cryonics...

    \n

    Yet the upshot is that - judging from the generation of young cryonicists at that event I attended - cryonics is sustained primarily by the ability of a tiny, tiny fraction of the population to \"get it\" just from hearing a casual mention on the radio.  Whatever part of one-in-a-hundred-thousand isn't accounted for by the Conscientiousness filter.

    \n

    If I suffered from the sin of underconfidence, I would feel a dull sense of obligation to doubt myself after reaching this conclusion, just like I would feel a dull sense of obligation to doubt that I could be more rational about theology than my parents and teachers at the age of five.  As it is, I have no problem with shrugging and saying \"People are crazy, the world is mad.\"

    \n

    But it really, really raises the question of what the hell is in that click.

    \n

    There's this magical click that some people get and some people don't, and I don't understand what's in the click.  There's the consequentialist/utilitarian click, and the intelligence explosion click, and the life-is-good/death-is-bad click, and the cryonics click.  I myself failed to click on one notable occasion, but the topic was probably just as clickable.

    \n

    (In fact, it took that particular embarrassing failure in my own history - failing to click on metaethics, and seeing in retrospect that the answer was clickable - before I was willing to trust non-click Singularitarians.)

    \n

    A rationalist faced with an apparently obvious answer, must assign some probability that a non-obvious objection will appear and defeat it.  I do know how to explain the above conclusions at great length, and defeat objections, and I would not be nearly as confident (I hope!) if I had just clicked five seconds ago.  But sometimes the final answer is the same as the initial guess; if you know the full mathematical story of Peano Arithmetic, 2 + 2 still equals 4 and not 5 or 17 or the color green.  And some people very quickly arrive at that same final answer as their best initial guess; they can swiftly guess which answer will end up being the final answer, for what seem even in retrospect like good reasons.  Like becoming an atheist at eleven, then listening to a theist's best arguments later in life, and concluding that your initial guess was right for the right reasons.

    \n

    We can define a \"click\" as following a very short chain of reasoning, which in the vast majority of other minds is derailed by some detour and proves strongly resistant to re-railing.

    \n

    What makes it happen?  What goes into that click?

    \n

    It's a question of life-or-death importance, and I don't know the answer.

    \n

    That generation of cryonicists seemed so normal apart from that...

    \n

    What's in that click?

    \n

    The point of the opening anecdote about the Mind Projection Fallacy (blank map != blank territory) is to show (anecdotal) evidence that there's something like a general click-factor, that someone who clicked on cryonics was able to click on mysteriousness=projectivism as well.  Of course I didn't expect that I could just stand up amid the conference and describe the intelligence explosion and Friendly AI in a couple of sentences and have everyone get it.  That high of a general click factor is extremely rare in my experience, and the people who have it are not otherwise normal.  (Michael Vassar is one example of a \"superclicker\".)  But it is still true AFAICT that people who click on one problem are more likely than average to click on another.

    \n

    My best guess is that clickiness has something to do with failure to compartmentalize - missing, or failing to use, the mental gear that lets human beings believe two contradictory things at the same time.  Clicky people would tend to be people who take all of their beliefs at face value.

    \n

    The Hansonian explanation (not necessarily endorsed by Robin Hanson) would say something about clicky people tending to operate in Near mode.  (Why?)

    \n

    The naively straightforward view would be that the ordinary-seeming people who came to the cryonics gathering did not have any extra gear that magically enabled them to follow a short chain of obvious inferences, but rather, everyone else had at least one extra insanity gear active at the time they heard about cryonics.

    \n

    Is that really just it?  Is there no special sanity to add, but only ordinary madness to take away?  Where do superclickers come from - are they just born lacking a whole lot of distractions?

    \n

    What the hell is in that click?

    " } }, { "_id": "37xeK9fim5DqTqQN4", "title": "London meetup: \"The Friendly AI Problem\" ", "pageUrl": "https://www.lesswrong.com/posts/37xeK9fim5DqTqQN4/london-meetup-the-friendly-ai-problem", "postedAt": "2010-01-19T23:35:47.131Z", "baseScore": 10, "voteCount": 7, "commentCount": 28, "url": null, "contents": { "documentId": "37xeK9fim5DqTqQN4", "html": "

    To all LessWrongers in London/the south of England: On Saturday the 23rd - this Saturday at 2pm - I will be speaking about Friendly AI at the UK transhumanist group, at

    \n

    Room 416, 4th floor (via main lift), Birkbeck College, Torrington Square, London (Map)

    \n

    For those of you who have read most of the material on the web about FAI theory, e.g. the sequences here and the material of Bostrom and Omohundro, you won't hear much new, but you will probably meet a lot of people who could do with some strong rationalists to speak to. For those who haven't read most of the material about Friendly AI, I'll try to make it a superlatively informative talk. The Facebook event has 30 confirmed guests, and the London Futurists' organizer (distinct from UK h+, the hosts) just emailed me and said:

    \n

    \"Looking forward to meeting you and asking you questions on Saturday! I think your going to get quite a few as there is a lot of interest from my group about your presentation - positive and negative. You might get a very skeptical geezer coming along so be prepared for some tough questions from people who know their stuff.\"

    \n

    So the event promises to be a fairly large gathering of rationalists, futurists and transhumanists with a high probability of passionate debate about FAI, followed by a trip to the pub for further discussion and drinks.

    " } }, { "_id": "hiDkhLyN5S2MEjrSE", "title": "Normal Cryonics", "pageUrl": "https://www.lesswrong.com/posts/hiDkhLyN5S2MEjrSE/normal-cryonics", "postedAt": "2010-01-19T19:08:48.301Z", "baseScore": 102, "voteCount": 123, "commentCount": 964, "url": null, "contents": { "documentId": "hiDkhLyN5S2MEjrSE", "html": "

    I recently attended a small gathering whose purpose was to let young people signed up for cryonics meet older people signed up for cryonics - a matter of some concern to the old guard, for obvious reasons.

    \n

    The young cryonicists' travel was subsidized.  I suspect this led to a greatly different selection filter than usually prevails at conferences of what Robin Hanson would call \"contrarians\".  At an ordinary conference of transhumanists - or libertarians, or atheists - you get activists who want to meet their own kind, strongly enough to pay conference fees and travel expenses.  This conference was just young people who took the action of signing up for cryonics, and who were willing to spend a couple of paid days in Florida meeting older cryonicists.

    \n

    The gathering was 34% female, around half of whom were single, and there were a few kids.  This may sound normal enough, unless you've been to a lot of contrarian-cluster conferences, in which case you just spit coffee all over your computer screen and shouted \"WHAT?\"  I did sometimes hear \"my husband persuaded me to sign up\", but no more frequently than \"I persuaded my husband to sign up\".  Around 25% of the people present were from the computer world, 25% from science, and 15% were doing something in music or entertainment - with possible overlap, since I'm working from a show of hands.

    \n

    I was expecting there to be some nutcases in that room, people who'd signed up for cryonics for just the same reason they subscribed to homeopathy or astrology, i.e., that it sounded cool.  None of the younger cryonicists showed any sign of it.  There were a couple of older cryonicists who'd gone strange, but none of the young ones that I saw.  Only three hands went up that did not identify as atheist/agnostic, and I think those also might have all been old cryonicists.  (This is surprising enough to be worth explaining, considering the base rate of insanity versus sanity.  Maybe if you're into woo, there is so much more woo that is better optimized for being woo, that no one into woo would give cryonics a second glance.)

    \n

    The part about actually signing up may also be key - that's probably a ten-to-one or worse filter among people who \"get\" cryonics.  (I put to Bill Faloon of the old guard that probably twice as many people had died while planning to sign up for cryonics eventually, than had actually been suspended; and he said \"Way more than that.\")  Actually signing up is an intense filter for Conscientiousness, since it's mildly tedious (requires multiple copies of papers signed and notarized with witnesses) and there's no peer pressure.

    \n

    For whatever reason, those young cryonicists seemed really normal - except for one thing, which I'll get to tomorrow.  Except for that, then, they seemed like very ordinary people: the couples and the singles, the husbands and the wives and the kids, scientists and programmers and sound studio technicians.

    \n

    It tears my heart out.

    \n

    At some future point I ought to post on the notion of belief hysteresis, where you get locked into whatever belief hits you first.  So it had previously occurred to me (though I didn't write the post) to argue for cryonics via a conformity reversal test:

    \n

    If you found yourself in a world where everyone was signed up for cryonics as a matter of routine - including everyone who works at your office - you wouldn't be the first lonely dissenter to earn the incredulous stares of your coworkers by unchecking the box that kept you signed up for cryonics, in exchange for an extra $300 per year.

    \n

    (Actually it would probably be a lot cheaper, more like $30/year or a free government program, with that economy of scale; but we should ignore that for purposes of the reversal test.)

    \n

    The point being that if cryonics were taken for granted, it would go on being taken for granted; it is only the state of non-cryonics that is unstable, subject to being disrupted by rational argument.

    \n

    And this cryonics meetup was that world.  It was the world of the ordinary scientists and programmers and sound studio technicians who had signed up for cryonics as a matter of simple common sense.

    \n

    It tears my heart out.

    \n

    Those young cryonicists weren't heroes.  Most of the older cryonicists were heroes, and of course there were a couple of other heroes among us young folk, like a former employee of Methuselah who'd left to try to put together a startup/nonprofit around a bright idea he'd had for curing cancer (note: even I think this is an acceptable excuse).  But most of the younger cryonicists weren't there to fight a desperate battle against Death, they were people who'd signed up for cryonics because it was the obvious thing to do.

    \n

    And it tears my heart out, because I am a hero and this was like seeing a ray of sunlight from a normal world, some alternate Everett branch of humanity where things really were normal instead of crazy all the goddamned time, a world that was everything this world could be and isn't.

    \n

    Then there were the children, some of whom had been signed up for cryonics since the day they were born.

    \n

    It tears my heart out.  I'm having trouble remembering to breathe as I write this.  My own little brother isn't breathing and never will again.

    \n

    You know what?  I'm going to come out and say it.  I've been unsure about saying it, but after attending this event, and talking to the perfectly ordinary parents who signed their kids up for cryonics like the goddamn sane people do, I'm going to come out and say it:  If you don't sign up your kids for cryonics then you are a lousy parent.

    \n

    If you aren't choosing between textbooks and food, then you can afford to sign up your kids for cryonics.  I don't know if it's more important than a home without lead paint, or omega-3 fish oil supplements while their brains are maturing, but it's certainly more important than you going to the movies or eating at nice restaurants.  That's part of the bargain you signed up for when you became a parent.  If you can afford kids at all, you can afford to sign up your kids for cryonics, and if you don't, you are a lousy parent.  I'm just back from an event where the normal parents signed their normal kids up for cryonics, and that is the way things are supposed to be and should be, and whatever excuses you're using or thinking of right now, I don't believe in them any more, you're just a lousy parent.

    " } }, { "_id": "2hdzbpRhnEzAYEf4u", "title": "What big goals do we have?", "pageUrl": "https://www.lesswrong.com/posts/2hdzbpRhnEzAYEf4u/what-big-goals-do-we-have", "postedAt": "2010-01-19T16:35:56.100Z", "baseScore": 13, "voteCount": 17, "commentCount": 95, "url": null, "contents": { "documentId": "2hdzbpRhnEzAYEf4u", "html": "

    Sometime ago Jonii wrote:

    \n
    \n

    I mean, paperclip maximizer is seriously ready to do anything to maximize paperclips. It really takes the paperclips seriously.

    \n
    \n

    When I'm hungry I eat, but then I don't go on eating some more just to maximize a function. Eating isn't something I want a lot of. Likewise I don't want a ton of survival, just a bounded amount every day. Let's define a goal as big if you don't get full: every increment of effort/achievement is valuable, like paperclips to Clippy. Now do we have any big goals? Which ones?

    \n

    Save the world. A great goal if you see a possible angle of attack, which I don't. The SIAI folks are more optimistic, but if they see a chink in the wall, they're yet to reveal it.

    \n

    Help those who suffer. Morally upright but tricky to execute: James Shikwati, Dambisa Moyo and Kevin Myers show that even something as clear-cut as aid to Africa can be viewed as immoral. Still a good goal for anyone, though.

    \n

    Procreate. This sounds fun! Fortunately, the same source that gave us this goal also gave us the means to achieve it, and intelligence is not among them. :-) And honestly, what sense in making 20 kids just to play the good-soldier routine for your genes? There's no unique \"you gene\" anyway, in several generations your descendants will be like everyone else's. Yeah, kids are fun, I'd like two or three.

    \n

    Follow your muse. Music, comedy, videogame design, whatever. No limit to achievement! A lot of this is about signaling: would you still bother if all your successes were attributed to someone else's genetic talent? But even apart from the signaling angle, there's still the worrying feeling that entertainment is ultimately useless, like humanity-scale wireheading, not an actual goal for us to reach.

    \n

    Accumulate power, money or experiences. What for? I never understood that.

    \n

    Advance science. As Erik Naggum put it:

    \n
    \n

    The purpose of human existence is to learn and to understand as much as we can of what came before us, so we can further the sum total of human knowledge in our life.

    \n
    \n

    Don't know, but I'm pretty content with my life lately. Should I have a big goal at all? How about you?

    " } }, { "_id": "2L3cyRN2feLQGo3Q3", "title": "The Prediction Hierarchy", "pageUrl": "https://www.lesswrong.com/posts/2L3cyRN2feLQGo3Q3/the-prediction-hierarchy", "postedAt": "2010-01-19T03:36:32.248Z", "baseScore": 28, "voteCount": 22, "commentCount": 38, "url": null, "contents": { "documentId": "2L3cyRN2feLQGo3Q3", "html": "

    Related: Advancing Certainty, Reversed Stupidity Is Not Intelligence

    \n

    The substance of this post is derived from a conversation in the comment thread, which I have decided to promote. Teal;deer: if your prediction has to rely on a calculation you may have gotten wrong, then your expectation for the case where the calculation is wrong should come from a simpler calculation, such as reference class forecasting.

    \n

    Edit 2010-01-19: Toby Ord mentions in the comments Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes (PDF) by Toby Ord, Rafaela Hillerbrand, and Anders Sandberg of the Future of Humanity Institute, University of Oxford. It uses a similar mathematical argument, but is much more substantive than this.

    \n

    A lottery has a jackpot of a million dollars. A ticket costs one dollar. Odds of a given ticket winning are approximately one in forty million. If your utility is linear in dollars, should you bet?

    \n

    The obvious (and correct) answer is \"no\". The clever (and incorrect) answer is \"yes\", as follows:

    \n
    \n

    According to your calculations, \"this ticket will not win the lottery\" is true with probability 99.9999975%. But can you really be sure that you can calculate anything to that good odds? Surely you couldn't expect to make forty million predictions of which you were that confident and only be wrong once. Rationally, you ought to ascribe a lower confidence to the statement: 99.99%, for example. But this means a 0.01% chance of winning the lottery, corresponding to an expected value of a hundred dollars. Therefore, you should buy the ticket.

    \n
    \n

    The logic is not obviously wrong, but where is the error?

    \n

    First, let us write out the calculation algebraically. Let E(L) be the expected value of playing the lottery. Let p(L) be your calculated probability that the lottery will pay off. Let p(C) be your probability that your calculations are correct. Finally, let j represent the value of the jackpot and let t represent the price of the ticket. The obvious way to write the clever theory is:

    \n

        E(L) = max(p(L), 1-p(C)) * j - t

    \n

    This doesn't sound quite right, though - surely you should ascribe a higher confidence when you calculate a higher probability. That said, when p(L) is much less than p(C), it shouldn't make a large difference. The straightforward way to account for this is to take p(C) as the probability that p(L) is correct, and write the following:

    \n

        E(L) = [ p(C)*p(L) + 1-p(C) ] * j - t

    \n

    which can be rearranged as:

    \n

        E(L) = p(C) * [p(L)*j - t] + (1-p(C)) * [j - t]

    \n

    I believe this exposes the problem with the clever argument quite explicitly. Why, if your calculations are incorrect (probability 1-p(C)), should you assume that you are certain to win the lottery? If your calculations are incorrect, they should tell you almost nothing about whether you will win the lottery or not. So what do you do?

    \n

    What appears to me to be the elegant solution is to use a less complex calculation - or a series of less complex calculations - to act as your backup hypothesis. In a tricky engineering problem (say, calculating the effectiveness of a heat sink), your primary prediction might come out of a finite element fluid dynamics calculator with p(C) = 0.99 and narrow error bars, but you would also refer to the result of a simple algebraic model with p(C) = 0.9999 and much wider error bars. And then you would backstop the lot with your background knowledge about heat sinks in general, written with wide enough error bars to call p(C) = 1 - epsilon.

    \n

    In this case, though, the calculation was simple, so our backup prediction is just the background knowledge. Say that, knowing nothing about a lottery but \"it's a lottery\", we would have an expected payoff e. Then we write:

    \n

        E(L) = p(C) * [p(L)*j - t] + (1-p(C)) * e

    \n

    I don't know about you, but for me, e is approximately equal to -t. And justice is restored.
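    To make the difference concrete, here is a small Python sketch plugging the opening lottery numbers into both formulas; the particular values of p(C) and e are just the illustrative ones from the text (99.99%, and a background expectation of roughly -t), not anything canonical:

        # Sketch comparing the clever-but-wrong expectation with the backstopped one.
        j = 1_000_000          # jackpot, dollars
        t = 1                  # ticket price, dollars
        p_L = 1 / 40_000_000   # calculated probability that the ticket wins
        p_C = 0.9999           # probability that the calculation is correct
        e = -t                 # background-knowledge expectation for "it's a lottery"

        # Clever version: if the calculation is wrong, assume the ticket wins.
        E_clever = (p_C * p_L + (1 - p_C)) * j - t
        # Backstopped version: if the calculation is wrong, fall back on e.
        E_backstopped = p_C * (p_L * j - t) + (1 - p_C) * e

        print(f"clever:      {E_clever:+.2f}")       # roughly +$99
        print(f"backstopped: {E_backstopped:+.2f}")  # roughly -$0.98

    The clever version prices a broken calculation as if it were a winning ticket; the backstopped version hands that branch back to the reference class, and the answer returns to a small loss.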

    \n
    \n

    We are advised that, when solving hard problems, we should solve multiple problems at once. This is relatively trivial, but I can point out a couple other relatively trivial examples where it shows up well:

    \n

    Suppose the lottery appears to be marginally profitable: should you bet on it? Not unless you are confident in your numbers.

    \n

    Suppose we consider the LHC. Should we (have) switch(ed) it on? Once you've checked that it is safe, yes. As a high-energy physics experiment, the backup comparison would be to things like nuclear energy, which have only small chances of devastation on the planetary scale. If your calculations were to indicate that the LHC is completely safe, even if your P(C) were as low as three or four nines (99.9%, 99.99%), your actual estimate of the safety of turning it on should be no lower than six or seven nines, and probably higher. (In point of fact, given the number of physicists analyzing the question, P(C) is much higher. Three cheers for intersubjective verification.)
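    A rough version of that nines arithmetic, on my reading of it and with an illustrative reference-class risk rather than a sourced one: if the detailed calculation says the LHC is safe and is correct with probability P(C), the residual risk is at most (1 - P(C)) times the reference-class risk.

        # Rough nines arithmetic; the reference-class risk is an illustrative guess.
        p_C = 0.999               # three nines of confidence in the calculation
        reference_risk = 1e-4     # assumed planetary-scale risk for the reference class
        residual_risk = (1 - p_C) * reference_risk
        print(f"residual risk <= {residual_risk:.0e}")   # 1e-07, i.e. about seven nines of safety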

    \n

    Suppose we consider our Christmas shopping? When you're estimating your time to finish your shopping, your calculations are not very reliable. Therefore your answer is strongly dominated by the simpler, much more reliable reference class prediction.

    \n

    But what are the odds that this ticket won't win the lottery? ...how many nines do I type, again?

    \n

     

    " } }, { "_id": "6zRn4uKADwL9uo8Ch", "title": "Advancing Certainty", "pageUrl": "https://www.lesswrong.com/posts/6zRn4uKADwL9uo8Ch/advancing-certainty", "postedAt": "2010-01-18T09:51:31.050Z", "baseScore": 44, "voteCount": 50, "commentCount": 110, "url": null, "contents": { "documentId": "6zRn4uKADwL9uo8Ch", "html": "

    Related: Horrible LHC Inconsistency, The Proper Use of Humility

    \r\n

    Overconfidence, I've noticed, is a big fear around these parts. Well, it is a known human bias, after all, and therefore something to be guarded against. But I am going to argue that, at least in aspiring-rationalist circles, people are too afraid of overconfidence, to the point of overcorrecting -- which, not surprisingly, causes problems. (Some may detect implications here for the long-standing Inside View vs. Outside View debate.)

    \r\n

    Here's Eliezer, voicing the typical worry:

    \r\n
    \r\n

    [I]f you asked me whether I could make one million statements of authority equal to \"The Large Hadron Collider will not destroy the world\", and be wrong, on average, around once, then I would have to say no.

    \r\n
    \r\n

    I now suspect that misleading imagery may be at work here. A million statements -- that sounds like a lot, doesn't it? If you made one such pronouncement every ten seconds, a million of them would require you to spend months doing nothing but pontificating, with no eating, sleeping, or bathroom breaks. Boy, that would be tiring, wouldn't it? At some point, surely, your exhausted brain would slip up and make an error. In fact, it would surely make more than one -- in which case, poof!, there goes your calibration.

    \r\n

    No wonder, then, that people claim that we humans can't possibly hope to attain such levels of certainty. Look, they say, at all those times in the past when people -- even famous scientists! -- said they were 99.999% sure of something, and they turned out to be wrong. My own adolescent self would have assigned high confidence to the truth of Christianity; so where do I get the temerity, now, to say that the probability of this is 1-over-oogles-and-googols?

    \r\n

    \r\n

    [EDIT: Unnecessary material removed.]

    \r\n

    A probability estimate is not a measure of \"confidence\" in some psychological sense. Rather, it is a measure of the strength of the evidence: how much information you believe you have about reality. So, when judging calibration, it is not really appropriate to imagine oneself, say, judging thousands of criminal trials, and getting more than a few wrong here and there (because, after all, one is human and tends to make mistakes). Let me instead propose a less misleading image: picture yourself programming your model of the world (in technical terms, your prior probability distribution) into a computer, and then feeding all that data from those thousands of cases into the computer -- which then, when you run the program, rapidly spits out the corresponding thousands of posterior probability estimates. That is, visualize a few seconds or minutes of staring at a rapidly-scrolling computer screen, rather than a lifetime of exhausting judicial labor. When the program finishes, how many of those numerical verdicts on the screen are wrong?
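    As a toy illustration of that picture (my own sketch, not anything from the original argument): feed a simple generative model thousands of cases, keep only the verdicts it issues at 99.9% confidence or better, and count the misses. When the model matches the process that produced the cases, the error rate tracks the stated confidence no matter how many verdicts scroll past.

        # Toy calibration check: a model scores many cases; we count how many of
        # its >= 99.9%-confidence verdicts are wrong. All numbers are illustrative.
        import random

        random.seed(0)
        N = 100_000
        confident = 0
        confident_and_wrong = 0

        for _ in range(N):
            truth = random.random() < 0.5                              # a binary fact, 50/50 prior
            report = truth if random.random() < 0.999 else not truth   # evidence channel, 99.9% reliable
            p_true = 0.999 if report else 0.001                        # posterior by Bayes with the symmetric prior
            verdict = p_true >= 0.5
            confidence = max(p_true, 1 - p_true)
            if confidence >= 0.999:
                confident += 1
                confident_and_wrong += (verdict != truth)

        print(confident_and_wrong, "wrong out of", confident, "verdicts at >= 99.9% confidence")
        # Expect roughly 0.1% of them to be wrong - the model, not stamina, sets the error rate.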

    \r\n

    I don't know about you, but modesty seems less tempting to me when I think about it in this way. I have a model of the world, and it makes predictions. For some reason, when it's just me in a room looking at a screen, I don't feel the need to tone down the strength of those predictions for fear of unpleasant social consequences. Nor do I need to worry about the computer getting tired from running all those numbers.

    \r\n

    In the vanishingly unlikely event that Omega were to appear and tell me that, say, Amanda Knox was guilty, it wouldn't mean that I had been too arrogant, and that I had better not trust my estimates in the future. What it would mean is that my model of the world was severely stupid with respect to predicting reality. In which case, the thing to do would not be to humbly promise to be more modest henceforth, but rather, to find the problem and fix it. (I believe computer programmers call this \"debugging\".)

    \r\n

    A \"confidence level\" is a numerical measure of how stupid your model is, if you turn out to be wrong.

    \r\n

    The fundamental question of rationality is: why do you believe what you believe? As a rationalist, you can't just pull probabilities out of your rear end. And now here's the kicker: that includes the probability of your model being wrong. The latter must, paradoxically but necessarily, be part of your model itself. If you're uncertain, there has to be a reason you're uncertain; if you expect to change your mind later, you should go ahead and change your mind now.

    \r\n

    This is the first thing to remember in setting out to dispose of what I call \"quantitative Cartesian skepticism\": the view that even though science tells us the probability of such-and-such is 10^-50, well, that's just too high of a confidence for mere mortals like us to assert; our model of the world could be wrong, after all -- conceivably, we might even be brains in vats.

    \r\n

    Now, it could be the case that 10^-50 is too low of a probability for that event, despite the calculations; and it may even be that that particular level of certainty (about almost anything) is in fact beyond our current epistemic reach. But if we believe this, there have to be reasons we believe it, and those reasons have to be better than the reasons for believing the opposite.

    \r\n

    I can't speak for Eliezer in particular, but I expect that if you probe the intuitions of people who worry about 10^-6 being too low of a probability that the Large Hadron Collider will destroy the world -- that is, if you ask them why they think they couldn't make a million statements of equal authority and be wrong on average once -- they will cite statistics about the previous track record of human predictions: their own youthful failures and/or things like Lord Kelvin calculating that evolution by natural selection was impossible.

    \r\n

    To which my reply is: hindsight is 20/20 -- so how about taking advantage of this fact?

    \r\n

    Previously, I used the phrase \"epistemic technology\" in reference to our ability to achieve greater certainty through some recently-invented methods of investigation than through others that are native unto us. This, I confess, was an almost deliberate foreshadowing of my thesis here: we are not stuck with the inferential powers of our ancestors. One implication of the Bayesian-Jaynesian-Yudkowskian view, which marries epistemology to physics, is that our knowledge-gathering ability is as subject to \"technological\" improvement as any other physical process. With effort applied over time, we should be able to increase not only our domain knowledge, but also our meta-knowledge. As we acquire more and more information about the world, our Bayesian probabilities should become more and more confident.

    \r\n

    If we're smart, we will look back at Lord Kelvin's reasoning, find the mistakes, and avoid making those mistakes in the future. We will, so to speak, debug the code. Perhaps we couldn't have spotted the flaws at the time; but we can spot them now. Whatever other flaws may still be plaguing us, our score has improved.  

    \r\n

    In the face of precise scientific calculations, it doesn't do to say, \"Well, science has been wrong before\". If science was wrong before, it is our duty to understand why science was wrong, and remove known sources of stupidity from our model. Once we've done this, \"past scientific predictions\" is no longer an appropriate reference class for second-guessing the prediction at hand, because the science is now superior. (Or anyway, the strength of the evidence of previous failures is diminished.)        

    \r\n

    That is why, with respect to Eliezer's LHC dilemma -- which amounts to a conflict between avoiding overconfidence and avoiding hypothesis-privileging -- I come down squarely on the side of hypothesis-privileging as the greater danger. Psychologically, you may not \"feel up to\" making a million predictions, of which no more than one can be wrong; but if that's what your model instructs you to do, then that's what you have to do -- unless you think your model is wrong, for some better reason than a vague sense of uneasiness. Without, ultimately, trusting science more than intuition, there's no hope of making epistemic progress. At the end of the day, you have to shut up and multiply -- epistemically as well as instrumentally. 
     

    " } }, { "_id": "L4GGomr86sEwxzPvS", "title": "Sorting Out Sticky Brains", "pageUrl": "https://www.lesswrong.com/posts/L4GGomr86sEwxzPvS/sorting-out-sticky-brains", "postedAt": "2010-01-18T04:18:53.036Z", "baseScore": 74, "voteCount": 83, "commentCount": 44, "url": null, "contents": { "documentId": "L4GGomr86sEwxzPvS", "html": "

    tl;dr: Just because it doesn't seem like we should be able to have beliefs we acknowledge to be irrational, doesn't mean we don't have them.  If this happens to you, here's a tool to help conceptualize and work around that phenomenon.

    \n

    There's a general feeling that by the time you've acknowledged that some belief you hold is not based on rational evidence, it has already evaporated.  The very act of realizing it's not something you should believe makes it go away.  If that's your experience, I applaud your well-organized mind!  It's serving you well.  This is exactly as it should be.

    \n

    If only we were all so lucky.

    \n

    Brains are sticky things.  They will hang onto comfortable beliefs that don't make sense anymore, view the world through familiar filters that should have been discarded long ago, see significances and patterns and illusions even if they're known by the rest of the brain to be irrelevant.  Beliefs should be formed on the basis of sound evidence.  But that's not the only mechanism we have in our skulls to form them.  We're equipped to come by them in other ways, too.  It's been observed[1] that believing contradictions is only bad because it entails believing falsehoods.  If you can't get rid of one belief in a contradiction, and that's the false one, then believing a contradiction is the best you can do, because then at least you have the true belief too.

    \n

    The mechanism I use to deal with this is to label my beliefs \"official\" and \"unofficial\".  My official beliefs have a second-order stamp of approval.  I believe them, and I believe that I should believe them.  Meanwhile, the \"unofficial\" beliefs are those I can't get rid of, or am not motivated to try really hard to get rid of because they aren't problematic enough to be worth the trouble.  They might or might not outright contradict an official belief, but regardless, I try not to act on them.

    \n

    To those of you with well-ordered minds (for such lucky people seem to exist, if we believe some of the self-reports on this very site), this probably sounds outrageous.  If I know they're probably not true... And I do.  But they still make me expect things.  They make me surprised when those expectations are flouted.  If I'm asked about their subjects when tired, or not prepared for the question, they'll leap out of my mouth before I can stop them, and they won't feel like lies - because they're not.  They're beliefs.  I just don't like them very much.

    \n

    I'll supply an example.  I have a rather dreadful phobia of guns, and accordingly, I think they should be illegal.  The phobia is a terrible reason to believe in the appropriateness of such a ban: said phobia doesn't even stand in for an informative real experience, since I haven't lost a family member to a stray bullet or anything of the kind.  I certainly don't assent to the general proposition \"anything that scares me should be illegal\".  I have no other reasons, except for a vague affection for a cluster of political opinions which includes something along those lines, to believe this belief.  Neither the fear nor the affection are reasons I endorse for believing things in general, or this in particular.  So this is an unofficial belief.  Whenever I can, I avoid acting on it.  Until I locate some good reasons to believe something about the topic, I officially have no opinion.  I avoid putting myself in situations where I might act on the unofficial belief in the same way I might avoid a store with contents for which I have an unendorsed desire, like a candy shop.  For instance, when I read about political candidates' stances on issues, I avoid whatever section talks about gun control.

    \n

    Because I know my brain collects junk like this, I try to avoid making up my mind until I do have a pretty good idea of what's going on.  Once I tell myself, \"Okay, I've decided\", I run the risk of lodging something permanently in my cortex that won't release its stranglehold on my thought process until kingdom come.  I use tools like \"temporarily operating under the assumption that\" (some proposition) or declaring myself \"unqualified to have an opinion about\" (some subject).  The longer I hold my opinions in a state of uncertainty, the less chance I wind up with a permanent epistemic parasite that I have to devote cognitive resources to just to keep it from making me do dumb things.  This is partly because it makes the state of uncertainty come to feel like a default, which makes it simpler to slide back to uncertainty again if it seems warranted.  Partly, it's because the longer I wait, the more evidence I've collected by the time I pick a side, so it's less likely that the belief I acquire is one I'll want to excise in the future.

    \n

    This is all well and good as a prophylactic.  It doesn't help as much with stuff that snuck in when I was but a mere slip of a youth.  For that, I rely on the official/unofficial distinction, and then toe the official line as best I can in thought, word, and deed.  I break in uncomfy official beliefs like new shoes.  You can use your brain's love of routine to your advantage.  Act like you only believe the official beliefs, and the unofficial ones will weaken from disuse.  This isn't a betrayal of your \"real\" beliefs.  The official beliefs are real too!  They're real, and they're better.

    \n

     

    \n

    [1] I read this in Peter van Inwagen's book \"Essay on Free Will\" but seem to remember that he got it elsewhere.  I'm not certain where my copy has gotten to lately, so can't check.

    " } }, { "_id": "SEq8bvSXrzF4jcdS8", "title": "Tips and Tricks for Answering Hard Questions", "pageUrl": "https://www.lesswrong.com/posts/SEq8bvSXrzF4jcdS8/tips-and-tricks-for-answering-hard-questions", "postedAt": "2010-01-17T23:56:20.904Z", "baseScore": 99, "voteCount": 61, "commentCount": 54, "url": null, "contents": { "documentId": "SEq8bvSXrzF4jcdS8", "html": "

    I've collected some tips and tricks for answering hard questions, some of which may be original, and others I may have read somewhere and forgotten the source of. Please feel free to contribute more tips and tricks, or additional links to the sources or fuller explanations.

    \n

    Don't stop at the first good answer. We know that human curiosity can be prematurely satiated. Sometimes we can quickly recognize a flaw in an answer that initially seemed good, but sometimes we can't, so we should keep looking for flaws and/or better answers.

    \n

    Explore multiple approaches simultaneously. A hard question probably has multiple approaches that are roughly equally promising; otherwise it wouldn't be a hard question (well, unless it has no promising approaches). If there are several people attempting to answer it, they should explore different approaches. If you're trying to answer it alone, it makes sense to switch approaches (and look for new approaches) once in a while.

    \n

    Trust your intuitions, but don't waste too much time arguing for them. If several people are attempting to answer the same question and they have different intuitions about how best to approach it, it seems efficient for each to rely on his or her intuition to choose the approach to explore. It only makes sense to spend a lot of time arguing for your own intuition if you have some reason to believe that other people's intuitions are much worse than yours.

    \n

    Go meta. Instead of attacking the question directly, ask \"How should I answer a question like this?\" It seems that when people are faced with a question, even one that has stumped great minds for ages, many just jump in and try to attack it with whatever intellectual tools they have at hand. For really hard questions, we may need to look for, or build, new tools.

    \n

    Dissolve the question. Sometimes, the question is meaningless and asking it is just a cognitive error. If you can detect and correct the error then the question may just go away.

    \n

    Sleep on it. I find that I tend to have a greater than average number of insights in the period of time just after I wake up and before I get out of bed. Our brains seem to continue to work while we're asleep, and it may help to prime them by reviewing the problem before going to sleep. (I think Eliezer wrote a post or comment to this effect, but I can't find it now.)

    \n

    Be ready to recognize a good answer when you see it. The history of science shows that human knowledge does make progress, but sometimes only by an older generation dying off or retiring. It seems that we often can't recognize a good answer even when it's staring us in the face. I wish I knew more about what factors affect this ability, but one thing that might help is to avoid acquiring a high social status, or the mental state of having high social status. (See also, How To Actually Change Your Mind.)

    " } }, { "_id": "SdKi3Y93geyKMscbA", "title": "Dennett's heterophenomenology", "pageUrl": "https://www.lesswrong.com/posts/SdKi3Y93geyKMscbA/dennett-s-heterophenomenology", "postedAt": "2010-01-16T20:40:18.505Z", "baseScore": 9, "voteCount": 27, "commentCount": 23, "url": null, "contents": { "documentId": "SdKi3Y93geyKMscbA", "html": "

    In an earlier comment, I conflated heterophenomenology in the general sense of taking introspective accounts as data to be explained rather than direct readouts of the truth, with Dennett's particular approach to explaining those data.  So to correct myself, I say that it is Dennett, rather than heterophenomenology, that claims that there is no such thing as consciousness. Dennett denies that he does, but I disagree. I defend this view here.

    \n

    I have to admit at this point that I have not read \"Consciousness Explained\".  Had either of the library's copies been on the shelves last Tuesday I would have done by now, but instead I found his later book (and his most recent on the topic), \"Sweet Dreams: Philosophical Obstacles to a Science of Consciousness\".  The subtitle suggests a drawing back from the confidence of the earlier title, as does that of the book in between.  The book confirms me in my impression that the ideas of \"C.E.\" have been in the air so long (the air of hard SF, sciblogs, and the like, not to mention Phil Goetz's recent posts) that reading the primary source 19 years on would be nothing more than an exercise in checkbox-ticking.

    \n

    I'll give a brief run-through of \"Sweet Dreams\" and then carry on the argument.

    \n

    The book is primarily writing against, a response to objections arising from earlier works.

    \n

    In chapter 1 he shoots down the \"Zombic Hunch\", the idea that a being could be physically identical to a human, but lack consciousness, and therefore that consciousness must be non-physical.  I'll take it that we all agree the zombie story is insane.

    \n

    Chapter 2 introduces the concept of heterophenomenology.  This has already been introduced to LW.

    \n

    The corpse of the Zombic Hunch got up again and walked, so in Chapter 3 he shoots it down again.

    \n

    Chapters 4 and 5 attack qualia.  They don't exist, says Dennett, because of such anomalies as change blindness, Capgras syndrome, and various thought-experiments that defy all coherent accounts of what qualia are.

    \n

    Chapter 6 is entitled \"Are We Explaining Consciousness yet?\" (Er, what was that first book called?)  It cites neurological research and briefly sets out the Multiple Drafts hypothesis.  He confronts critics who say (which I also say) that Dennett is really denying the existence of consciousness, not giving an account of it.  His answer is that he is giving (or seeking) a third-person explanation of first-person accounts.  Well, yes, that is indeed what he is doing, and what his critics on this point are complaining that he is doing.  The question is whether there is more to do, which goes unaddressed here.

    \n

    Chapter 7 says some more about Multiple Drafts.

    \n

    Chapter 8 argues against qualia of consciousness again. It closes by saying that if ever his heterophenomenological program runs into a roadblock and something more is clearly needed, then the Zombic Hunch gets to eat his braiiinnzzz.

    \n

    Ok, his actual words are \"If the day arrives when...we plainly see that something big is missing...those with the unshakable hunch will get to say they told us so.\"

    \n

    So, why do I claim that Dennett's account of consciousness amounts to denying there is such a thing, contrary to his own claims that consciousness exists and he is explaining it?

    \n

    The problem I have is with his differing accounts of consciousness and qualia.  The former he says exists, and claims to give an explanation of; the latter he denies.  Yet the evidence he adduces regarding both topics is of the same sort: experimental observations or thought experiments demonstrating the incoherence of all existing accounts of what they are.  But he comes to different conclusions about them.  Why?

    \n

    I believe the reason is that any physical account of consciousness, including his, will meet the objection (as it has) that yes, that may be an accurate account of what is physically happening in the brain, and yes, that might even be necessary and sufficient for consciousness to exist in a brain, but it leaves unanswered what Dennett calls the Hard Question: how does that physical process produce the experience of being conscious?  That is, how does it account for the qualia of consciousness itself?  The only way to overcome that objection is to argue against the existence of qualia (as Dennett does).

    \n

    But the qualia of consciousness looks to me like the very thing that people are talking about when they talk about being conscious.  To deny the qualia of consciousness is to deny that there is any such thing as consciousness.

    \n

    That is why I claim that Dennett's theory amounts to denying the existence of consciousness.  Dennett is describing beings with no inner experience, philosophical zombies, and avoids the Zombic Hunch by denying there is such a thing as inner experience. What he is explaining is not consciousness, but why the zombies say that they are conscious.

    \n

    And so back to the word \"heterophenomenology\".  This is Dennett's word, and I think it fair, on heterophenomenological grounds, to look for Dennett's meaning in his practice rather than in his account of that practice.  His account is that he wants to explain people's talk of their inner experiences without taking that talk to be a reliable account.  But his practice is to explain such talk without taking it to be an account of anything, not even an unreliable account, not even an account which might be about something.  He sketches a mechanism that could produce such talk, no more.  He explains first-person talk, and says nothing of first-person experience. And so a negative answer to the question of whether there is anything being talked about, however imperfectly, whether there is such a thing as first-person experience, gets palmed into the definition.

    \n

     

    \n

    I do not have any account of what qualia are, neither qualia in general nor the experience of being aware. But I do believe that this experience exists. The heterophenomenological program is to hit Ignore on that experience.

    " } }, { "_id": "RbWJnkSyc7H8E5TNy", "title": "The Wannabe Rational", "pageUrl": "https://www.lesswrong.com/posts/RbWJnkSyc7H8E5TNy/the-wannabe-rational", "postedAt": "2010-01-15T20:09:32.662Z", "baseScore": 39, "voteCount": 54, "commentCount": 305, "url": null, "contents": { "documentId": "RbWJnkSyc7H8E5TNy", "html": "

    I have a terrifying confession to make: I believe in God.

    \n

    This post has three prongs:

    \n

    First: This is a tad meta for a full post, but do I have a place in this community? The abstract, non-religious aspect of this question can be phrased, \"If someone holds a belief that is irrational, should they be fully ousted from the community?\" I can see a handful of answers to this question and a few of them are discussed below.

    \n

    Second: I have nothing to say about the rationality of religious beliefs. What I do want to say is that the rationality of particular irrationals is not something that is completely answered after their irrationality is ousted. They may be underneath the sanity waterline, but there are multiple levels of rationality hell. Some are deeper than others. This part discusses one way to view irrationals in a manner that encourages growth.

    \n

    Third: Is it possible to make the irrational rational? Is it possible to take those close to the sanity waterline and raise them above? Or, more personally, is there hope for me? I assume there is. What is my responsibility as an aspiring rationalist? Specifically, when the community complains about a belief, how should I respond?

    \n

    \n


    \n

    My Place in This Community

    \n

    So, yeah. I believe in God. I figure my particular beliefs are a little irrelevant at this point. This isn't to say that my beliefs aren't open for discussion, but here and now I think there are better things to discuss.  Namely, whether talking to people like me is within the purpose of LessWrong. Relevant questions have to do with my status and position at LessWrong. The short list:

    \n
      \n
    1. Should I have kept this to myself? What benefit does an irrational person have for confessing their irrationality? (Is this even possible? Is this post an attempted ploy?) I somewhat expect this post and the ensuing discussion to completely wreck my credibility as a commentator and participant.

    2. Presumably, there is a level of entry to LessWrong that is enforced. Does this level include filtering out certain beliefs and belief systems? Or is the system merit-based via karma and community voting? My karma is well above the level needed to post and my comments generally do better than worse. A merit-based system would prevent me from posting anything about religion or other irrational things, but is there a deeper problem? (More discussion below.) Should LessWrong /kick people who fail at rationality? Who makes the decision? Who draws the sanity waterline?

    3. Being religious, I assume I am far below the sanity waterline that the community desires. How did I manage to scrape up over 500 karma? What have I demonstrated that would be good for other people to demonstrate? Have I acted appropriately as a religious person curious about rationality? Is there a problem with the system that lets someone like me get so far?

    4. Where do I go from here? In the future, how should I act? Do I need to change my behavior as a result of this post? I am not calling out for any responses to my beliefs in particular, nor am I calling to other religious people at LessWrong to identify themselves. I am asking the community what they want me to do. Leave? Keep posting? Comment but don't post? Convert? Read everything posted and come back later?

     

    \n

    The Wannabe Sanity Waterline

    \n

    This post has little to do with actual beliefs. I get the feeling that most discussions about the beliefs themselves are not going to be terribly useful. I originally titled this post, \"The Religious Rational\" but figured the opening line was inflammatory enough and as I began editing I realized that the religious aspect is merely an example of a greater group of irrationals. I could have admitted to chasing UFOs or buying lottery tickets.  What I wanted to talk about is the same.

    \n

    That being said, I fully accept all criticisms offered about whatever you feel is appropriate. Even if the criticism is just ignoring me or an admin deleting the post and banning me. I am not trying to dodge the subject of my religious beliefs; I provided myself as an example to be convenient and make the conversation more interesting. I have something relevant and useful to discuss in regards to the overall topic of rationalistic communities that applies to the act of spawning rationalists from within fields other than rationalism. Whether it directly applies to LessWrong is for you to decide.

    \n

    How do you approach someone below the sanity waterline? Do you ignore them and look for people above the line? Do you teach them until they drop their irrational deadweight? How do you know which ones are worth pursuing and which are a complete waste of time? Is there a better answer than generalizing at the waterline and turning away everyone who gets wet? The easiest response to these people is to put the burden of rationality on their shoulders. Let them teach themselves. I think there is a better way.  I think there are people closer to the waterline than others and deciding to group everyone below the line together makes the job of teaching rationalism harder.

    \n

    I, for example, can look at my fellow theists and immediately draw up a shortlist of people I consider relatively rationalistic. Compared to the given sanity waterline, all of us are deep underwater due to certain beliefs. But compared to the people on the bottom of the ocean, we're doing great. This leads into the question: \"Are there different levels of irrationality?\" And also, \"Do you approach people differently depending on how far below the waterline they are?\"

    \n

    More discretely, is it useful to make a distinction between two types of theists? Is it possible to create a sanity waterline for the religious? They may be way off on a particular subject but otherwise their basic worldview is consistent and intact. Is there a religious sanity waterline? Are there rational religious? Is a Wannabe Rational a good place to start?

    \n

    The reason I ask these questions is not to excuse any particular belief while feeling good about everything else in my belief system. If there is a theist struggling to verify all beliefs but those that involve God, then they are no true rationalist. But if said theist really, really wanted to become a rationalist, it makes sense for them to drop the sacred, most treasured beliefs last. Can rationalism work on a smaller scale?

    \n

    Quoting from Outside the Laboratory (emphasis not mine):

    \n
    \n

    Now what are we to think of a scientist who seems competent inside the laboratory, but who, outside the laboratory, believes in a spirit world? We ask why, and the scientist says something along the lines of: \"Well, no one really knows, and I admit that I don't have any evidence - it's a religious belief, it can't be disproven one way or another by observation.\" I cannot but conclude that this person literally doesn't know why you have to look at things.

    \n
    \n

    A certain difference between myself and this spirit believing scientist is that my beliefs are from a younger time and I have things I would rather do than gallop through that area of the territory checking my accuracy. Namely, I am still trying to discover what the correct map-making tools are.

    \n

    Also, admittedly, I am unjustifiably attached to that area of my map. It's going to take a while to figure out why I am so attached and what I can do about it. I am not fully convinced that rationalism is the silver-bullet that will solve Life, the Universe, and Everything. I am not letting this new thing near something I hold precious. This is a selfish act and will get in the way of my learning, but that sacrifice is something I am willing to make. Hence the reason I am below the LessWrong waterline. Hence me being a Wannabe Rational.

    \n

    Instead, what I have done is take my basic worldview and chased down the dogma. Given the set of beliefs I would rather not think about right now, where do they lead? While this is pure anathema to the true rationalist, I am not a true rationalist. I have little idea about what I am doing. I am young in your ways and have much to learn and unlearn. I am not starting at the top of my system; I am starting at the bottom. I consider myself a quasi-rational theist not because I am rational compared to the community of LessWrong. I am a quasi-rational theist because I am rational compared to other theists.

    \n

    To return to the underlying question: Is this distinction valid? If it is valid, is it useful or self-defeating? As a community, does a distinction between levels of irrationality help or hinder? I think it helps. Obviously, I would like to consider myself more rational than not. I would also like to think that I can slowly adapt and change into something even more rational. Asking you, the community, is a good way to find out if I am merely deluding myself.

    \n

    There may be a wall that I hit and cannot cross. There may be an upper-bound on my rationalism. Right now, there is a cap due to my theism. Unless that cap is removed, there will likely be a limit to how well I integrate with LessWrong. Until then, rationalism has open season on other areas of my map. It has produced excellent results and, as it gains my trust, its tools gain more and more access to my map. As such, I consider myself below the LessWrong sanity waterline and above the religious sanity waterline. I am a Wannabe Rational.

    \n


    \n

    Why This Helps

    \n

    The advantage of a distinction between different sanity waterlines is that it allows you to compare individuals within groups of people when scanning for potential rationalists. A particular group may all drop below the waterline but, given their particular irrational map, some of them may be remarkably accurate for being irrational. After accounting for dumb luck, does anyone show a talent for reading territory outside of their too-obviously-irrational-for-excuses belief?

    \n

    Note that this is completely different than questioning where the waterline is actually drawn. This is talking about people clearly below the line. But an irrational map can have rational areas. The more rational areas in the map, the more evidence there is that some of the mapmaker's tools and tactics are working well. Therefore, this mapmaker is above the sanity waterline for that particular group of irrational mapmakers. In other words, this mapmaker is worth conversing with as long as the conversation doesn't drift into the irrational areas of the map.

    \n

    This allows you to give people below the waterline an attractive target to hit. Walking up to a theist and telling them they are below the waterline is depressing. They do need to hear it, which is why the waterline exists in the first place, and their level of sanity is too low for them to achieve a particular status. But after the chastising you can tell them that other areas in their map are good enough to become more rational in those areas. They don't need to throw everything away to become a Wannabe Rational. They will still be considered irrational but at least their map is more accurate than it was. It is at this point that someone begins their journey to rationalism.

    \n

    If we have any good reason to help others become more rational, it seems as though this would count toward that goal.

    \n


    \n

    Conversion

    \n

    This last bit is short. Taking an example of myself, what should I be doing to make my map more accurate? My process right now is something like this:

    \n
      \n
    1. Look at the map. What are my beliefs? What areas are marked in the ink of science, evidence, rationalism, and logic? What areas aren't and what ink is being used there?
    2. Look at the territory. Beliefs are great, but which ones are working? I quickly notice that certain inks work better. Why am I not using those inks elsewhere? Some inks work better for certain areas, obviously, but some don't seem to be useful at all.
    3. Find the right ink. Contrasting and comparing the new mapmaking methods with the old ones should produce a clear winner. Keep adding stuff to the toolbox once you find a use for it. Take stuff out of the toolbox when it is replaced by a better, more accurate tool. Inks such as, \"My elders said so\" and \"Well, it sounds right\" are significantly less useful. Sometimes we have the right ink but we use it incorrectly. Sometimes we find a new way to use an old ink.
    4. Revisit old territory. When I throw out an old ink, examine the areas of the map where that ink was used. Revisit the territory with your new tools handy. Some territory is too hard to access now (beliefs about your childhood) or some areas on your map don't have corresponding territories (beliefs about the gender of God).
    \n

    These things, in my opinion, are learning the ways of rationality. I have a few areas of my map marked, \"Do this part later.\" I have a few inks labeled, \"Favorite colors.\" These are what keep me below the sanity waterline. As time moves forward I pick up new favorite colors and eventually I will come to the areas saved for later. Maybe then I will rise above the waterline. Maybe then I will be a true rationalist.

    " } }, { "_id": "fpdTvfT6oi8TxAGKN", "title": "In defense of the outside view", "pageUrl": "https://www.lesswrong.com/posts/fpdTvfT6oi8TxAGKN/in-defense-of-the-outside-view", "postedAt": "2010-01-15T11:01:18.900Z", "baseScore": 18, "voteCount": 22, "commentCount": 29, "url": null, "contents": { "documentId": "fpdTvfT6oi8TxAGKN", "html": "

    I think some of our recent arguments against applying the outside view are wrong.

    \n

    1. In response to taw's post, Eliezer paints the outside view argument against the Singularity thus:

    \n
    \n

    ...because experiments show that people could do better at predicting how long it will take them to do their Christmas shopping by asking \"How long did it take last time?\" instead of trying to visualize the details.

    \n
    \n

    This is an unfair representation. One of the poster-child cases for the outside view (mentioned by Eliezer, no less!) dealt with students trying to estimate completion times for their academic projects. And what is AGI if not a research project? One might say AGI is too large for the analogy to work, but outside view helpfully tells us that large projects aren't any more immune to failures and schedule overruns :-)

    \n

    2. In response to my comment claiming that Dennett didn't solve the problem of consciousness \"because philosophers don't solve problems\", ciphergoth writes:

    \n
    \n

    This \"outside view abuse\" is getting a little extreme. Next it will tell you that Barack Obama isn't President, because people don't become President.

    \n
    \n

    The outside view may be rephrased as \"argument from typicality\". If we'd just heard of this random dude named Barack Obama, we'd be perfectly justified in saying he won't become President! Which would be the proper analogy to first hearing about Dennett and his work. Another casual application of the outside view corroborates the conclusion: what other problems has Dennett solved? Is the problem of consciousness the first problem he solved? Does this seem typical of anything?

    \n

    3. Technologos attacks taw's post, again, with the following argument:

    \n
    \n

    \"beliefs that the future will be just like the past\" have a zero success rate.

    \n
    \n

    For each particular highly speculative technology, we can assert that it won't appear with high confidence (let's say 90%). But this doesn't mean the future will be the same in all respects! The conjunction of many 90%-statements (X won't appear, AND Y won't appear, AND etc.) gets assigned the product, a very low confidence, as it should. We're sure that some new technologies will arise, we just don't know which ones. Fusion power? Flying cars? We've been on the fast track to those for some time now, and they still sound less far out than the Singularity! Anyone who's worked with tech for any length of time can recite a looooong list of Real Soon Now technologies that never materialized.
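
    \n

    To make the arithmetic concrete, here is a tiny sketch; the 90% figure is just the illustrative number from above, not data. Each claim can be individually confident while the conjunction of many of them is worth almost nothing.

    ```python
    # Conjunction of n independent 90%-confident claims; 0.9 is the illustrative figure above.
    for n in (1, 5, 10, 30):
        print(n, round(0.9 ** n, 3))  # 0.9, 0.59, 0.349, 0.042
    ```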

    \n

    4. In response to a pro-outside-view comment by taw, wedrifid snaps:

    \n
    \n

    Choosing a particular outside view on a topic which the poster allegedly 'knows nothing about' would be 'pulling a superficial similarity out of his arse'.

    \n
    \n

    Well, duh. If the red pill doesn't make you offended about your pet project, you aren't taking enough of it :-) The method works with nonzero efficiency as long as we're pattern-matching on relevant traits or any traits causally connected to relevant traits, which means pretty much every superficial similarity gives you nonzero information. And the conjunction rule applies, so the more similar stuff you can find, the better. 'Pulling a similarity out of your arse' isn't something to be ashamed of - it's the whole point of the outside view. Even a superficial similarity is harder to fake, more entangled with reality, more objective than a long chain of reasoning or a credence percentage you came up with. In real-world reasoning, parallel beats sequential.

    \n

    In conclusion let's grant the inside view object-level advocates the benefit of the doubt one last time. Conveniently, the handful of people who say we must believe in the Singularity are all doing work in the AGI field. We can gauge exactly how believable their object-level arguments are by examining their past claims about the schedules of their own projects - the perfect case for the inside view if there ever was one... No, I won't spell out the sordid collection of hyperlinks here. Every reader is encouraged to Google on their own for past announcements by Doug Lenat, Ben Goertzel, Eliezer Yudkowsky (those are actually the heroes of the bunch), or other people that I'm afraid to name at the moment.

    " } }, { "_id": "9bkY8etnLfGCzcYAu", "title": "The Preference Utilitarian’s Time Inconsistency Problem", "pageUrl": "https://www.lesswrong.com/posts/9bkY8etnLfGCzcYAu/the-preference-utilitarian-s-time-inconsistency-problem", "postedAt": "2010-01-15T00:26:04.781Z", "baseScore": 35, "voteCount": 32, "commentCount": 107, "url": null, "contents": { "documentId": "9bkY8etnLfGCzcYAu", "html": "

    \n

    In May of 2007, DanielLC asked at Felicifa, an “online utilitarianism community”:

    \n
    \n

    If preference utilitarianism is about making peoples’ preferences and the universe coincide, wouldn't it be much easier to change peoples’ preferences than the universe?

    \n
    \n

    Indeed, if we were to program a super-intelligent AI to use the utility function U(w) = sum of w’s utilities according to people (i.e., morally relevant agents) who exist in world-history w, the AI might end up killing everyone who is alive now and creating a bunch of new people whose preferences are more easily satisfied, or just use its super intelligence to persuade us to be more satisfied with the universe as it is.

    \n

    Well, that can’t be what we want. Is there an alternative formulation of preference utilitarianism that doesn’t exhibit this problem? Perhaps. Suppose we instead program the AI to use U’(w) = sum of w’s utilities according to people who exist at the time of decision. This solves DanielLC’s problem, but introduces a new one: time inconsistency.

    \n

    The new AI’s utility function depends on who exists at the time of decision, and as that time changes and people are born and die, its utility function also changes. If the AI is capable of reflection and self-modification, it should immediately notice that it would maximize its expected utility, according to its current utility function, by modifying itself to use U’’(w) = sum of w’s utilities according to people who existed at time T0, where T0 is a constant representing the time of self-modification.
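
    \n

    To make the three formulations concrete, here is a minimal sketch, under the crude assumption that each person’s preference satisfaction collapses to a single number and that a world-history can be represented as a dictionary from persons to such numbers; the data layout is purely illustrative, not a proposal.

    ```python
    # Three candidate aggregation rules; a world-history is modeled as {person: utility}.
    def U(world_history):
        # Sum over everyone who exists anywhere in the world-history (DanielLC's problem:
        # replacing people with easily satisfied ones can raise this).
        return sum(world_history.values())

    def U_prime(world_history, alive_at_decision_time):
        # Sum only over people alive at the time of decision (time-inconsistent: the
        # reference population drifts as people are born and die).
        return sum(u for person, u in world_history.items()
                   if person in alive_at_decision_time)

    def make_U_double_prime(alive_at_T0):
        # Freeze the reference population at the moment of self-modification T0.
        frozen = frozenset(alive_at_T0)
        return lambda world_history: U_prime(world_history, frozen)
    ```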

    \n

    The AI is now reflectively consistent, but is this the right outcome? Should the whole future of the universe be shaped only by the preferences of those who happen to be alive at some arbitrary point in time? Presumably, if you’re a utilitarian in the first place, this is not the kind of utilitarianism that you’d want to subscribe to.

    \n

    So, what is the solution to this problem? Robin Hanson’s approach to moral philosophy may work. It tries to take into account everyone’s preferences—those who lived in the past, those who will live in the future, and those who have the potential to exist but don’t—but I don’t think he has worked out (or written down) the solution in detail. For example, is the utilitarian AI supposed to sum over every logically possible utility function and weigh them equally? If not, what weighing scheme should it use?

    \n

    Perhaps someone can follow up Robin’s idea and see where this approach leads us? Or does anyone have other ideas for solving this time inconsistency problem?

    " } }, { "_id": "DhEbAn7WNTb376pJc", "title": "Back of the envelope calculations around the singularity. ", "pageUrl": "https://www.lesswrong.com/posts/DhEbAn7WNTb376pJc/back-of-the-envelope-calculations-around-the-singularity", "postedAt": "2010-01-15T00:14:13.849Z", "baseScore": 6, "voteCount": 14, "commentCount": 26, "url": null, "contents": { "documentId": "DhEbAn7WNTb376pJc", "html": "

    Inspired by the talk by Anna Salamon I decided to do my own calculations about the future. This post is a place for discussion about mine and others calculations.

    \n

    \n

    To me there are two possible paths for the likely development of intelligence, that I can identify.

    \n

    World 1) Fast and conceptually clean. Intelligence is a concrete value like the number of neutrons in a reactor. I assign a 20% chance of this.

    \n

    World 2) Slow and messy. Intelligence is contextual, much like, say, fitness in evolutionary biology. Proofs of intelligence of a system are only doable by a much higher intelligence entity, as it will involve discussing the complex environment. I'd assign about a 60% chance to this.

    \n

    World 3) Other. The other 20% chance is the rest of the scenarios that are not either of these two.

    \n

    Both types of AI have the potential to change the world, both possibly destroying humanity if we don't use them correctly. So they both have the same rewards.

    \n

    So for world 1, I'll go with the same figures as Anna Salamon, because I can't find strong arguments against them (and it will serve as a refresher).

    \n

    Probability of an eventual AI (before humanity dies otherwise) = 80%

    \n

    Probability that  AI will kill us = 80%

    \n

    Probability that we manage safeguards = 40%

    \n

    Probability that current work will save us = 30%

    \n

    So multiplying the four probabilities above gives roughly 7%, and 7% * 20% gives us about 1.4%.

    \n

    So for world 2. Assume we have an SIAI that is working on the problem of how to make messy AI Friendly or at least as Friendly as possible. It seems less likely we would make AI and harder to create safeguards as they have to act over longer time.

    \n

    Probability of an eventual AI (before humanity dies otherwise) = 70%

    \n

    Probability that  AI will kill us (and/or we will have to give up humanity due to  hard scrapple evolution) = 80%

    \n

    Probability that we manage safeguards = 30%

    \n

    Probability that current work will save us = 20%

    \n

    So multiplying these four gives roughly 3%, and 3% times 60% gives about 1.8%.
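
    \n

    For anyone who wants to rerun these numbers, here is the whole thing as a few lines of Python; the inputs are just my guesses from above, and without the intermediate rounding the two worlds come out around 1.5% and 2.0% rather than 1.4% and 1.8%.

    ```python
    # The two back-of-the-envelope products, computed without intermediate rounding.
    clean = 0.20 * (0.80 * 0.80 * 0.40 * 0.30)  # World 1: fast and conceptually clean
    messy = 0.60 * (0.70 * 0.80 * 0.30 * 0.20)  # World 2: slow and messy
    print(round(clean, 4), round(messy, 4))     # 0.0154 and 0.0202
    ```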

    \n

    Both have the factor of 7 billion lives times n, so that can be discounted. They pretty much weigh the same. Or as near as dammit for back-of-the-envelope calcs, considering my meta-uncertainty is high as well.

    \n

    They do, however, interfere. The right action in world 1 is not the same as the right action in world 2. Working on Friendliness of conceptually clean AI and suppressing all work and discussion on messy AI hurts world 2 as it increases the chance we might end up with messy UFAI. There is no Singularity Institute for messy AI in this world, and I doubt there will be if SIAI becomes somewhat mainstream in AI communities, so giving money to SIAI hurts world 2; it might have a small negative expected life cost. Working on Friendliness for Messy AI wouldn't interfere with the Clean AI world, as long as it didn't do stupid tests until the messy/clean divide became solved. This tips the scales somewhat towards working on messy FAI and how it is deployed. World 3 is so varied I can't really say much about it.

    \n

    So for me the most valuable thing to seek is more information on the messy/clean divide, which is why I always go on about whether SIAI has a way of making sure it is on the right track with the Decision Theory/conceptually clean path.

    \n

    So how do the rest of you run the numbers on the singularity?

    " } }, { "_id": "si25s3PiHSXFFz4iC", "title": "Comic about the Singularity", "pageUrl": "https://www.lesswrong.com/posts/si25s3PiHSXFFz4iC/comic-about-the-singularity", "postedAt": "2010-01-14T18:20:00.947Z", "baseScore": 5, "voteCount": 17, "commentCount": 12, "url": null, "contents": { "documentId": "si25s3PiHSXFFz4iC", "html": "

    Today's Saturday Morning Breakfast Cereal.  (Which incidentally is a very funny webcomic I read regularly.)  Mouseover the red button for a bonus panel.

    \n

    Clearly the author hasn't read the proper Eliezer essay(s) on post-Singularity life.

    " } }, { "_id": "KbS5sSb22fcX5kXs8", "title": "More stuff", "pageUrl": "https://www.lesswrong.com/posts/KbS5sSb22fcX5kXs8/more-stuff", "postedAt": "2010-01-14T13:30:39.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "KbS5sSb22fcX5kXs8", "html": "

    A few more bits I liked in The Stuff of Thought:

    \n

    Hypernyms elevate

    \n

    Labeling someone with a small aspect of what they are – a trait or part – undignifies them. Calling someone a cripple, the blonde, a suit, isn’t nice. The opposite works too, often – things sound more dignified if you label them with a larger category than usual. Driving machines and dental cleaning systems sound more pretentious than cars and toothbrushes.

    \n

    Lots of our phrases rest on the same conceptual metaphors

    \n

    Though we don’t have a specific saying that ‘up is like good and down is like bad’, it’s easy to see that we equate these things  from our endless sayings that spring from this metaphor. Feeling high, spirits soaring, hitting rock bottom, a downturn, pick me up, low mood, low character, low blow, feeling down, over the moon, I’m above you. I can make up new phrases using the same metaphor and you will know what I mean without apparently thinking about it. These things suggest that the connection between goodness and upness is still active in our minds; these things aren’t idioms.

    \n

    Intuitions that phrases like ‘pin the wall with posters’ are wrong follow simple rules that we are introspectively oblivious to.

    \n

    You can say ‘splatter paint on the wall’ or ‘splatter the wall with paint’. You can say ‘pin posters on the wall’. This seems analogous to ‘splatter paint on the wall’, so why don’t we use the same alternative form with that?

    \n

    The answer is that the first form implies that you were changing the paint or the poster by putting it on the wall, whereas the second form implies that you were changing the wall by putting paint or a poster on it. Painting a wall changes the nature of the wall in our eyes, while pinning posters on it doesn’t.

    \n

    This explanation holds across the many other examples of this pattern, and similar explanations hold for others. You photograph a wall with your camera, but don’t photograph your camera at the wall. You fling a cat into a room, but you don’t fling a room with a cat. You can load hay into a cart or load a cart with hay.

    \n

    Some situations it makes sense to frame in a different way and others not. Other conceptual differences that matter with verbs for instance include whether the action was purposeful or accidental, physically direct, took time or was instantaneous, and whether it happened to a person.

    \n

    Working out why some things sound wrong was a tricky puzzle for the conscious minds of linguists, though the whole time they could say that ‘pour a cup with water’ sounded wrong.

    \n

    Intuitive causality is different to philosophically reputable conceptions

    \n

    It’s been suggested that causality is just what we call things we see happen together a lot, or actions that go together across close counterfactual worlds we imagine, or pretty confusing.

    \n

    Our mental picture of causality seems to be much simpler. It looks like one object with an inherent tendency to move or stay still, and another object standing in its way or pushing it. We have no trouble knowing which counterfactual to compare situations to because inherent in the idea is that the first object had a tendency to do something.

    \n

    The evidence for this is apparently in the various words we use, for instance which features they bother to differentiate. For instance the difference between forcing something and allowing something is whether the causal agent is pushing the other agent, or not getting in the way of its inherent movement. If our minds were different we may not care about this distinction; in both cases the causer can decide whether the thing should happen, and chooses yes.


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "a5Fz54uaM7zMqseJ2", "title": "Advice for AI makers", "pageUrl": "https://www.lesswrong.com/posts/a5Fz54uaM7zMqseJ2/advice-for-ai-makers", "postedAt": "2010-01-14T11:32:04.429Z", "baseScore": 10, "voteCount": 10, "commentCount": 211, "url": null, "contents": { "documentId": "a5Fz54uaM7zMqseJ2", "html": "

    A friend of mine is about to launch himself heavily into the realm of AI programming. The details of his approach aren't important; probabilities dictate that he is unlikely to score a major success. He's asked me for advice, however, on how to design a safe(r) AI. I've been pointing him in the right directions and sending him links to useful posts on this blog and the SIAI.

    \n

    Do people here have any recommendations they'd like me to pass on? Hopefully, these may form the basis of a condensed 'warning pack' for other AI makers.

    \n

    Addendum: Advice along the lines of \"don't do it\" is vital and good, but unlikely to be followed. Coding will nearly certainly happen; is there any way of making it less genocidally risky?

    " } }, { "_id": "gpZgXPxMx6hTkiC5w", "title": "Self control may be contagious", "pageUrl": "https://www.lesswrong.com/posts/gpZgXPxMx6hTkiC5w/self-control-may-be-contagious", "postedAt": "2010-01-14T03:41:20.728Z", "baseScore": 18, "voteCount": 13, "commentCount": 37, "url": null, "contents": { "documentId": "gpZgXPxMx6hTkiC5w", "html": "

    Just saw this article on reddit and thought it would be relevant here: Self Control is Contagious, Study Finds

    \n

    Apparently seeing other people exhibit self control or even just thinking of them tends to increase (or decrease if observing poor self control) one's own self control.

    " } }, { "_id": "XD3sfjfeabp8jL5c7", "title": "Meetup: Bay Area: Jan 15th, 7pm", "pageUrl": "https://www.lesswrong.com/posts/XD3sfjfeabp8jL5c7/meetup-bay-area-jan-15th-7pm", "postedAt": "2010-01-13T22:02:45.943Z", "baseScore": 4, "voteCount": 5, "commentCount": 5, "url": null, "contents": { "documentId": "XD3sfjfeabp8jL5c7", "html": "

    Overcoming Bias / Less Wrong meetup in the San Francisco Bay Area at SIAI House on January 15th, 2010, starting at 7PM.

    \n

    Robin Hanson and possibly Michael Vassar will be present.

    " } }, { "_id": "EPaz9zLSsXpwtEo9C", "title": "Outline of a lower bound for consciousness", "pageUrl": "https://www.lesswrong.com/posts/EPaz9zLSsXpwtEo9C/outline-of-a-lower-bound-for-consciousness", "postedAt": "2010-01-13T05:27:48.087Z", "baseScore": 4, "voteCount": 18, "commentCount": 112, "url": null, "contents": { "documentId": "EPaz9zLSsXpwtEo9C", "html": "

    This is a summary of an article I'm writing on consciousness, and I'd like to hear opinions on it.  It is the first time anyone has been able to defend a numeric claim about subjective consciousness.

    \n

    ADDED:  Funny no one pointed out this connection, but the purpose of this article is to create a nonperson predicate.

    \n

    1. Overview

    \n

    I propose a test for the absence of consciousness, based on the claim that a necessary, but not sufficient, condition for a symbol-based knowledge system to be considered conscious is that it has exactly one possible symbol grounding, modulo symbols representing qualia. This supposition, plus a few reasonable assumptions, leads to the conclusion that a symbolic artificial intelligence using Boolean truth-values and having an adult vocabulary must have on the order of 10^6 assertions before we need worry whether it is conscious.

    \n

    Section 2 will explain the claim about symbol-grounding that this analysis is based on. Section 3 will present the math and some reasonable assumptions for computing the expected number of randomly-satisfied groundings for a symbol system. Section 4 will argue that a Boolean symbol system with a human-level vocabulary must have millions of assertions in order for it to be probable that no spurious symbols groundings exist.

    \n

    2. Symbol grounding

    \n

    2.1. A simple representational system

    \n

    Consider a symbolic reasoning system whose knowledge base K consists of predicate logic assertions, using atoms, predicates, and variables.  We will ignore quantifiers.  In addition to the knowledge base, there is a relatively small set of primitive rules that say how to derive new assertions from existing assertions, which are not asserted in K, but are implemented by the inference engine interpreting K.  Any predicate that occurs in a primitive rule is called a primitive predicate.

    \n

    The meaning of primitive predicates is specified by the program that implements the inference engine.  The meaning of predicates other than primitive predicates is defined within K, in terms of the primitive predicates, in the same way that LISP code defines a semantics for functions based on the semantics of LISP's primitive functions.  (If this is objectionable, you can devise a representation in which the only predicates are primitive predicates (Shapiro 2000).)

    \n

    This still leaves the semantics of the atoms undefined.  We will say that a grounding g for that system is a mapping from the atoms in K to concepts in a world W.  We will extend the notation so that g(P) more generally indicates the concept in W arrived at by mapping all of the atoms in the predication P using g.  The semantics of this mapping may be referential (e.g., Dretske 1985) or intensional (Maida & Shapiro 1982).  The world may be an external world, or pure simulation. What is required is a consistent, generative relationship between symbols and a world, so that someone knowing that relationship, and the state of the world, could predict what predicates the system would assert.

    \n

    2.2. Falsifiable symbol groundings and ambiguous consciousness

    \n

    If you have a system that is meant to simulate molecular signaling in a cell, I might be able to define a new grounding g' that re-maps its nodes to things in W, so that for every predication P in K, g'(P) is still true in W; but now the statements in W would be interpreted as simulating traffic flow in a city. If you have a system that you say models disputes between two corporations, I might re-map it so that it is simulating a mating ritual between two creatures.  But adding more information to your system is likely to falsify my remapping, so that it is no longer true that g(P) is true in W for all propositions P in K.  The key assumption of this paper is that a system need not be considered conscious if such a currently-true but still falsifiable remapping is possible.

    \n

    A falsifiable grounding is a grounding for K whose interpretation g(K) is true in W by chance, because the system does not contain enough information to rule it out.  Consider again the system you designed to simulate a dispute between corporations, which I claim is simulating mating rituals.  Let's say for the moment that the agents it simulates are, in fact, conscious.  Furthermore, since both mappings are consistent, we don't get to choose which mapping they experience.  Perhaps each agent has two consciousnesses; perhaps each settles on one interpretation arbitrarily; perhaps they flicker between the two like a person looking at a Necker cube.

    \n

    Although in this example we can say that one interpretation is the true interpretation, our knowledge of that can have no impact on which interpretation the system consciously experiences.  Therefore, any theory of consciousness that claimed the system was conscious of events in W using the intended grounding, must also admit that it is conscious of a set of entirely different events in W using the accidental grounding, prior to the acquisition of new information ruling the latter out.

    \n

    2.3 Not caring is as good as disproving

    \n

    I don't know how to prove that multiple simultaneous consciousnesses don't occur.  But we don't need to worry about them.  I didn't say that a system with multiple groundings couldn't be conscious.  I said it needn't be considered conscious.

    \n

    Even if you are willing to consider a computer program with many falsifiable groundings to be conscious, you still needn't worry about how you treat that computer program.  Because you can't be nice to it no matter how hard you try.  It's pointless to treat an agent as having rights if it doesn't have a stable symbol-grounding, because what is desirable to it at one moment might cause it indescribable agony in the next.  Even if you are nice to the consciousness with the grounding intended by the system's designer, you will be causing misery to an astronomical number of equally-real alternately-grounded consciousnesses.

    \n

    3. Counting groundings

    \n

    3.1 Overview

    \n

    Let g(K) denote the set of assertions about the world W that K represents that are produced from K using a symbol-grounding mapping g.  The system fails the unique symbol-grounding test if there is a permutation function f (other than the identity function) mapping atoms into other atoms, such that g(K) is true and g(f(K)) is true in W.  Given K, what is the probability that there exists such a permutation f?

    \n

    We assume Boolean truth-values for our predicates. Suppose the system represents s different concepts as unique atoms. Suppose there are p predicates in the system, and a assertions made over the s symbols using these p predicates. We wish to know that it is not possible to choose a permutation f of those s symbols, such that the knowledge represented in the system would still evaluate to true in the represented world.
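
    \n

    To make the test concrete, here is a brute-force illustration over a toy knowledge base; the atoms, predicates, and world facts below are invented for the example. It enumerates permutations of the atoms and keeps the ones under which every assertion in K still holds in W. With so little in K, the accidental swap survives, which is exactly the situation the counting argument below quantifies.

    ```python
    # Brute-force search for spurious groundings in a toy symbolic knowledge base.
    # K is a set of ground predications (predicate, arg1, arg2, ...); W is the set of
    # facts true in the world under the intended grounding.
    from itertools import permutations

    def remap(assertions, mapping):
        # Apply an atom-renaming f to every predication in K.
        return {(pred,) + tuple(mapping.get(arg, arg) for arg in args)
                for pred, *args in assertions}

    def spurious_groundings(assertions, atoms, world):
        # Every non-identity permutation of the atoms under which K still holds in W.
        found = []
        for perm in permutations(atoms):
            mapping = dict(zip(atoms, perm))
            if perm != tuple(atoms) and remap(assertions, mapping) <= world:
                found.append(mapping)
        return found

    K = {('bigger_than', 'corpA', 'corpB')}
    W = {('bigger_than', 'corpA', 'corpB'), ('bigger_than', 'corpB', 'corpA')}
    print(spurious_groundings(K, ['corpA', 'corpB'], W))  # the swap survives: K is too small
    ```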

    \n

    We will calculate the probability P(p,a) that each of a assertions using p predicates evaluates to true. We will also calculate the number N(s) of possible groundings of symbols in the knowledge base. We then can calculate the expected number E of random symbol groundings in addition to the one intended by the system builder as

    \n

    N(s) x P(p,a) = E

    \n

    Equation 1: Expected number of accidental symbol groundings

    \n

    3.2 A closed-form approximation

    \n

    This section, which I'm saving for the full paper, proves that, with certain reasonable assumptions, the solution to this equation is

    \n

    sln(s)-s < aln(p)/2

    \n

    Equation 7: The consciousness inequality for Boolean symbol systems

    \n

    As s and p should be similar in magnitude, this can be approximated very well as 2s < a.
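
    \n

    As a quick sanity check, Equation 7 and its shortcut are easy to put side by side; the example numbers in the call below are arbitrary illustrations, not part of the derivation.

    ```python
    # Equation 7 and its s-similar-to-p shortcut, for Boolean symbol systems.
    from math import log

    def passes_eq7(s, p, a):
        # Enough assertions that a spurious grounding is not expected by chance.
        return s * log(s) - s < a * log(p) / 2

    def passes_shortcut(s, a):
        # The approximation for s and p of similar magnitude.
        return 2 * s < a

    print(passes_eq7(50_000, 50_000, 120_000), passes_shortcut(50_000, 120_000))  # True True
    ```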

    \n

    4. How much knowledge does an AI need before we need worry whether it is conscious?

    \n

    How complex must a system that reasons something like a human be to pass the test? By “something like a human” we mean a system with approximately the same number of categories as a human. We will estimate this from the number of words in a typical human’s vocabulary.

    \n

    (Goulden et al. 1990) studied Webster's Third International Dictionary (1963) and concluded it contains less than 58,000 distinct base words.  They then tested subjects for their knowledge of a sample of base words from the dictionary, and concluded that native English speakers who are university graduates have an average vocabulary of around 17,000 base words.  This accounts for concepts that have their own words.  We also have concepts that we can express only by joining words together (\"back pain\" occurs to me at the moment); some concepts that we would need entire sentences to communicate; and some concepts that share the same word with other concepts.  However, some proportion of these concepts will be represented in our system as predicates, rather than as atoms.

    \n

    I used 50,000 as a ballpark figure for s.  This leaves a and p unknown.  We can estimate a from p if we suppose that the least-common predicate in the knowledge base is used exactly once.  Then we have a/(p ln(p)) = 1, a = p ln(p).  We can then compute the smallest a such that Equation 7 is satisfied.

    \n

    Solving for p would be difficult. Using 100 iterations of Newton's method (from the starting guess p=100) finds p=11,279, a=105,242. This indicates that a pure symbol system having a human-like vocabulary of 50,000 atoms must have at least 100,000 assertions before one need worry whether it is conscious.
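
    \n

    Concretely, the calculation can be reproduced with a few lines of Python, taking the boundary case of Equation 7 (equality rather than strict inequality) together with a = p ln(p) and s = 50,000; this is a reconstruction of the stated procedure rather than the original code, so the final digits may differ slightly.

    ```python
    # Reproduce the p and a figures: solve p * (ln p)^2 = 2 * (s ln s - s) by Newton's method.
    from math import log

    s = 50_000
    target = 2 * (s * log(s) - s)

    p = 100.0                          # same starting guess as above
    for _ in range(100):
        f = p * log(p) ** 2 - target
        df = log(p) ** 2 + 2 * log(p)  # derivative of p * (ln p)^2
        p -= f / df

    a = p * log(p)
    print(round(p), round(a))          # lands near p = 11,279 and a = 105,242
    ```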

    \n

    Children are less likely to have this much knowledge.  But they also know fewer concepts.  This suggests that the rate at which we learn language is limited not by our ability to learn the words, but by our ability to learn enough facts using those words for us to have a conscious understanding of them.  Another way of putting this is that you can't short-change learning.  Even if you try to jump-start your AI by writing a bunch of rules ala Cyc, you need to put in exactly as much data as would have been needed for the system to learn those rules on its own in order for it to satisfy Equation 7.

    \n

    5. Conclusions

    \n

    The immediate application of this work is that scientists developing intelligent systems, who may have (or be pressured to display) moral concerns over whether the systems they are experimenting with may be conscious, can use this approach to tell whether their systems are complex enough for this to be a concern.

    \n

    In popular discussion, people worry that a computer program may become dangerous when it becomes self-aware.  They may therefore imagine that this test could be used to tell whether a computer program posed a potential hazard.  This is an incorrect application.  I suppose that subjective experience somehow makes an agent more effective; otherwise, it would not have evolved.  However, automated reasoning systems reason whether they are conscious or not.  There is no reason to assume that a system is not dangerous because it is unconscious, any more than you would conclude that a hurricane is not dangerous because it is unconscious.

    \n

    More generally, this work shows that it is possible, if one considers representations in enough detail, to make numeric claims about subjective consciousness.  It is thus an existence proof that a science of consciousness is possible.

    \n

    References

    \n

    Fred Dretske (1985). Machines and the mental. In Proceedings and Addresses of the American Philosophical Association 59: 23-33.

    \n

    Robin Goulden, Paul Nation, John Read (1990). How large can a receptive vocabulary be? Applied Linguistics 11: 341-363.

    \n

    Anthony Maida & Stuart Shapiro (1982). Intensional concepts in propositional semantic networks. Cognitive Science 6: 291-330. Reprinted in Ronald Brachman & Hector Levesque, eds., Readings in Knowledge Representation, Los Altos, CA: Morgan Kaufmann 1985, pp. 169-189.

    \n

    William Rapaport (1988). Syntactic semantics: Foundations of computational natural-language understanding. In James Fetzer, ed., Aspects of Artificial Intelligence (Dordrecht, Holland: Kluwer Academic Publishers): 81-131; reprinted in Eric Dietrich (ed.), Thinking Computers and Virtual Persons: Essays on the Intentionality of Machines (San Diego: Academic Press, 1994): 225-273.

    \n

    Roger Schank (1975). The primitive ACTs of conceptual dependency.  Proceedings of the 1975 workshop on Theoretical Issues in Natural Language Processing, Cambridge MA.

    \n

    Stuart C. Shapiro (2000).  SNePS: A logic for natural language understanding and commonsense reasoning.  In Lucja Iwanska & Stuart C. Shapiro (eds.), Natural Language Processing and Knowledge Representation: Language for Knowledge and Knowledge for Language (Menlo Park, CA/Cambridge, MA: AAAI Press/MIT Press): 175-195.

    \n

    Yorick Wilks (1972).  Grammar, Meaning, and the Machine Analysis of Language.  London.

    " } }, { "_id": "cgrvvp9QzjiFuYwLi", "title": "High Status and Stupidity: Why?", "pageUrl": "https://www.lesswrong.com/posts/cgrvvp9QzjiFuYwLi/high-status-and-stupidity-why", "postedAt": "2010-01-12T16:36:56.176Z", "baseScore": 61, "voteCount": 57, "commentCount": 146, "url": null, "contents": { "documentId": "cgrvvp9QzjiFuYwLi", "html": "

    Michael Vassar once suggested:  \"Status makes people effectively stupid, as it makes it harder for them to update their public positions without feeling that they are losing face.\"

    \n

    To the extent that status does, in fact, make people stupid, this is a rather important phenomenon for a society like ours in which practically all decisions and beliefs pass through the hands of very-high-status individuals (a high \"cognitive Gini coefficient\").

    \n

    Does status actually make people stupid?  It's hard to say because I haven't tracked many careers over time.  I do have a definite and strong impression, with respect to many high-status individuals, that it would have been a lot easier to have an intelligent conversation with them, if I'd approached them before they made it big.  But where does that impression come from, since I haven't actually tracked them over time?  (Fundamental question of rationality:  What do you think you know and how do you think you know it?)  My best guess for why my brain seems to believe this:  I know it's possible to have intelligent conversations with smart grad students, and I get the strong impression that high-status people used to be those grad students, but now it's much harder to have intelligent conversations with them than with smart grad students.

    \n

    Hypotheses:

    \n
      \n
    1. Vassar's hypothesis:  Higher status increases the amount of face you lose when you change your mind, or increases the cost of losing face.
    2. The open-mindedness needed to consider interesting new ideas is (was) only an evolutionary advantage for low-status individuals seeking a good idea to ride to high status.  Once high status is achieved, new ideas are high-risk gambles with less relative payoff - the optimal strategy is to be mainstream.  I think Robin Hanson had a post about this but I can't recall the title.
    3. Intelligence as such is a high-cost feature which is no longer necessary once status is achieved.  We can call this the Llinas Hypothesis.
    4. High-status individuals were intelligent when they were young; the observed disparity is due solely to the standard declines of aging.
    5. High-status individuals spend more time on dinners and politics, and less time on problem-solving and reading; they exercise their minds less.
    6. High-status individuals are under less pressure to perform, in general.
    7. High-status individuals are just as smart as they ever were, but when you or I try to approach them, the status disparity makes it harder to converse with them - they would sound just as intelligent if we had higher status ourselves.
    8. High-status individuals feel less social pressure to listen to your arguments, respond articulately to them, or change their minds when their own arguments are inadequate, which decreases their apparent or real intelligence.
    9. High-status individuals become more convinced of their ideas' rightness or of their own competence.
    10. High-status individuals get less honest advice from their friends, especially about their own failings.
    \n

    Did I miss anything important?

    \n

    Having achieved some small degree of status in certain very limited circles, here's what I do to try to avoid the status-makes-you-stupid effect:

    \n" } }, { "_id": "MoP6gPWkZdFdfQPMj", "title": "We desire genetically attributed success", "pageUrl": "https://www.lesswrong.com/posts/MoP6gPWkZdFdfQPMj/we-desire-genetically-attributed-success", "postedAt": "2010-01-12T13:56:02.692Z", "baseScore": 2, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "MoP6gPWkZdFdfQPMj", "html": "

    A certain girl friend of mine isn't very successful in romantic relationships. But whenever I advise her to take a more rational approach to hunting men, she says my perfectly sensible suggestions still make her feel uneasy because things ought to happen naturally. Is this kind of thinking a cognitive mistake? I'm not sure...

    \n

    A little reflection turns up other examples where people don't just desire success, but specifically natural, genetically attributed success. In ego-heavy areas like game design, everybody starts out wanting to be the master creator rather than humble participant, even if collaboration yields a much bigger pie. (Linky on how an egoless environment aids game creation.)

    \n

    This behavior is most easily explained by status-seeking, but I feel there's something else at play: a kind of lingering desire to prove to myself that I was worthy from the start. It might be self-signaling or an unexpected application of Robin Hanson's idea that we enjoy clear fitness signals on general principle. Not sure on this point either - it sounds plausible but weird.

    \n

    I first got the idea for this post while thinking about cooperation. Duh, if I join some collaborative endeavor, it had better be about proving my worth! But when we insist on getting status rather than (say) money, we may be passing over opportunities to cooperate, so at least in some cases it might be helpful to override the evolution-provided default.

    " } }, { "_id": "EBEoqNpEijhycGLJv", "title": "Why are obvious meanings veiled?", "pageUrl": "https://www.lesswrong.com/posts/EBEoqNpEijhycGLJv/why-are-obvious-meanings-veiled", "postedAt": "2010-01-12T13:30:22.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "EBEoqNpEijhycGLJv", "html": "

    Why do people use veiled language even when both parties likely know the real message? For instance if a boy asks a girl up for coffee after a date, nobody is likely to miss the cliched connotation, so why not be direct?  The same question goes for many other threats, bribes, requests and propositions. Where meaning is reasonably ambiguous, plausible deniability seems a good explanation. However in many cases denial wouldn’t be that plausible and would make you look fairly silly anyway.

    \n

    In The Stuff of Thought, Steven Pinker offers six possible explanations for these cases, the last of which I found particularly interesting: People are not embarrassed nearly as much by everyone knowing their failings as long as they aren’t common knowledge (everyone knows that everyone knows etc). Pinker suggests veiled language can offer enough uncertainty that while the other party knows they are very likely being offered sex for instance (which is all you need them to know), they are still unsure of whether you know that they know this, and so on. Plausible deniability of common knowledge means if they decline you, you can carry on with your pride intact more easily, because status is about what everyone thinks everyone thinks etc your status is, and that hasn’t changed.

    \n

    This has some problems. Does any vagueness preclude mutual knowledge? We don’t act as though it does; there is always some uncertainty. Plus we take many private observations into account in judging others’ status, though you could argue that this is to judge how they are usually judged, so any aspect of a person you believe others haven’t mostly seen should not inform you on their status. Pinker suggests that a larger gap between the level of vagueness that precludes mutual knowledge and that which allows plausible deniability is helped by people attributing their comprehension of veiled suggestions to their own wonderful social intuition, which makes them less sure that the other knows what they understood.

    \n

    But veiled comments often seem to allow no more uncertainty than explicit ones. For instance, ‘it would be great if you would do the washing up’ is about as obvious as ‘do the washing up’, but somehow more polite because you are informing not commanding, though the listener arguably has less choice because angrily proclaiming that they are not your slave is off the table. Perhaps such phrases are idioms now, and when they were alive it really was less obvious what commenting on the wonderfulness of clean dishes implied. It seems unlikely.

    \n

    Some other explanations from Pinker (I omit one because I didn’t understand it enough to paraphrase at the time and don’t remember it now):

    \n

    The token bow: indirection tells the listener that the speaker has made an effort to spare her feelings or status; e.g. requests made in forms other than imperative statements are designed to show you don’t presume you may command the person. I’m not sure how this would explain the coffee offer above. Perhaps in the existing relationship asking for sex would be disrespectful, so the suggestion to continue the gradual shift into one another’s pants is couched as something respectful in the current relationship?

    \n

    Don’t talk at all, show me: most veiled suggestions are a request to alter the terms of the relationship, and in most cases people don’t speak directly about the terms of relationships. This is just part of that puzzle. This explanation doesn’t explain threats or bribes well I think. By the time you are talking idly about accidents that might happen, awkwardness about discussing a relationship outright is the least of anyone’s worries. Also we aren’t squeamish about discussing business arrangements, which is what a bribe is.

    \n

    The virtual audience: even if nobody is watching, the situation can be more easily transmitted verbally if the proposition is explicitly verbal. If the intent is conveyed by a mixture of subtler signals, such as tone, gestures and the rest of the interaction, it will be harder to persuade others later that that the meaning really was what you say it was, even if in context it was obvious. This doesn’t seem plausible for many cases. If I tell you that someone discreetly proffered a fifty dollar note and wondered aloud how soon their request might be dealt with, you – and any jury – should interpret that just fine.

    \n

    Preserving the spell: some part of the other person enjoys and maintains the pleasant illusion of whatever kind of relationship is overtly demonstrated by the words used. Pinker gives the example of a wealthy donor to a university, who is essentially buying naming rights and prestige, but everyone enjoys it more if you have fancy dinners together and pretend that the university is hoping for their ‘leadership’. This doesn’t explain why some transactions are made with a pretense and some aren’t. If I buy an apartment building we don’t all sit down at a fancy dinner together and pretend that I am a great hero offering leadership to the tenants. Perhaps the difference is that if a donation is a purchase, part of the purchased package is a reputation for virtue. However, the outsiders this aims at mostly don’t see what the transaction looks like. This also doesn’t seem to explain the other cases. While one may want to preserve the feeling that one is not being threatened, why should the threatening one care? And seducing someone relies on the hope of ending the air of platonic acquaintance.

    \n

    Another explanation occurs to me, but I haven’t thought much about whether it’s applicable anywhere. Perhaps once veiled language is used for plausible deniability in many cases, there become other cases where the appearance of trying to have plausible deniability is useful even if you don’t actually want it. In those cases you might use veiled language to imply you are trying, but be less subtle so as not to succeed. For instance once most men use veiled come ons, for you to suggest anything explicitly to a girl would show you have no fear of rejection. She mightn’t like being thought either so predictable or of such low value, so it is better to show respect by protecting yourself from rejection.

    \n

    None of these explanations seem adequate, but I don’t have a good enough list of examples to consider the question well.


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "zGvbEJcyp9TMqrtPe", "title": "The things we know that we know ain't so", "pageUrl": "https://www.lesswrong.com/posts/zGvbEJcyp9TMqrtPe/the-things-we-know-that-we-know-ain-t-so", "postedAt": "2010-01-11T21:59:22.776Z", "baseScore": 19, "voteCount": 21, "commentCount": 149, "url": null, "contents": { "documentId": "zGvbEJcyp9TMqrtPe", "html": "

    We're all familiar with false popular memes that spread faster than they can be stomped out:  You only use 10% of your brain.  Al Gore said he invented the internet.  Perhaps it doesn't surprise you that some memes in popular culture can't be killed.  But does the same thing happen in science?

    \n

    Most of you have probably heard of Broca's aphasia and Wernicke's aphasia.  Every textbook and every college course on language and the brain describes the connection between damage to Broca's and Wernicke's areas and the speech deficits named after them.

    \n

    Also, both are probably wrong.  Both areas were mistakenly associated with their aphasias because they are near or surrounded by other areas which, when damaged, cause the aphasias.  Yet our schools continue teaching the traditional, erroneous story, including a lecture in 9.14 at MIT given in 2005.  Both the Wikipedia entry on Wernicke's aphasia and the Wikipedia entry on Broca's aphasia are still in error; the Wikipedia entry on Wernicke's area has got it straight.

    \n

    Is it because this information is considered unimportant?  Hardly; it's probably the only functional association you will find in every course and every book on the brain.

    \n

    Is it because the information is too new to have penetrated the field?  No; see the dates on the references below.

    \n

    In spite of this failure in education, are the experts thoroughly familiar with this information?  Possibly not; this 2006 paper on Broca's area by a renowned expert does not mention it.  (In its defense, it references many other studies in which damage to Broca's area is associated with language deficits.)

    \n

    So:

    \n\n

     

    \n

    References

    \n

    Bogen JE, Bogen GM (1976). Wernicke's region—Where is it? Ann. N. Y. Acad. Sci. 280: 834–43.

    \n

    Dronkers, N. F., Shapiro, J. K., Redfern, B., & Knight, R. T. (1992). The role of Broca’s area in Broca’s aphasia. Journal of Clinical and Experimental Neuropsychology, 14, 52–53.

    \n

    Dronkers, N. F., Redfern, B. B., & Knight, R. T. (2000). The neural architecture of language disorders. In Bizzi, E., & Gazzaniga, M. S., The new cognitive neurosciences (2nd ed., pp. 949–58). Cambridge, Mass.: MIT Press.

    \n

    Dronkers et al. (2004).  Lesion analysis of the brain areas involved in language comprehension.  Cognition 92: 145-177.

    \n

    Mohr, J. P. (1976). Broca’s area and Broca’s aphasia. In H. Whitaker, Studies in neurolinguistics, New York: Academic Press.

    \n

     

    " } }, { "_id": "42srRsFxeb6CJyQG6", "title": "Consciousness Explained, chapter 3", "pageUrl": "https://www.lesswrong.com/posts/42srRsFxeb6CJyQG6/consciousness-explained-chapter-3", "postedAt": "2010-01-11T17:39:55.261Z", "baseScore": 1, "voteCount": 17, "commentCount": 7, "url": null, "contents": { "documentId": "42srRsFxeb6CJyQG6", "html": "

    A Visit to the Phenomenological Garden

    \n

    This chapter categorizes and discusses mental phenomena.  It emphasizes that we don't re-draw the outer world inside our heads for a little person to look at.

    \n

    1. Welcome to the phenom

    \n

    \"Kant distinguished \"phenomena,\" things as they appear, from \"noumena,\" things as they are in themselves, and during the development of the natural or physical sciences in the 19th century, the term phenomenology came to refer to the merely descriptive study of any subject matter.\"

    \n

    [\"Phenomenology\" seems similar to what I recently called \"mere curve-fitting\" in a comment exchange about gravity, as something that lets us make accurate predictions about a phenomenon, while still leaving it in the realm of metaphysics (not integrated causally into the rest of physics).]

    \n

    D will use the term \"phenom\" to denote the (true) ontology of conscious phenomena.  He divides it into

    \n\n

    2. Our experience of the external world

    \n

    D tells an interesting fable about a philosopher who denied that the mechanical reproduction of an orchestra's sounds could be possible, due to a failure of his own imagination.  He then challenges the naive assumption that, after all the sounds of all the different instruments have been translated into a stream of pulses, they get converted back into all the different orchestral sounds in the brain.  The distinctive sounds of different instruments, once thought to be unanalyzable, are due to the superposition of a number of different frequencies and amplitudes of pure tones.  This leads into the argument that we don't experience sounds in our heads - we're not going to go to the effort of breaking sound down into frequencies just to get it inside our heads, and then building the original waveform back up again.  Similarly, we imagine we hear word boundaries in speech; yet there are no gaps between words in the acoustic energy profile.

    \n

    Vision:  We don't draw pictures in our heads and then look at them.  That would lead to an infinite regress.  Besides, it's dark in there.  Our visual perception is a conglomeration of different objects at different resolutions tagged with different visual properties.

    \n

    3. Our experience of the internal world

    \n

    The gist of this section seems to be that we don't imagine things by, e.g., drawing pictures in our heads.  [I remember an experiment 5 or 10 years ago, in which a subject was able to literally draw a simple figure in their primary visual cortex, detected by fMRI, by imagining it.  So I'm not sure this is a meaningful distinction to make.  Imagining a scene may start at an end closer to consciousness; but it still ends up re-activating images in topographically-mapped areas.]

    \n

    4. Affect

    \n

    Fun is a mysterious phenomenon not yet given enough attention by philosophers.  Affect and qualia are still mysterious; but D promises (p. 65) to give a materialistic account of them.

    \n

    Summary

    \n

    [I agree with most of this chapter, with the caveat that a large percentage of our cortex is taken up with topology-preserving maps of our visual field, of the type D says we don't have.  I'm willing to overlook this, because I expect that a lot of the \"important stuff\" goes on in more rostral associational brain areas.  But in order to make this excuse for D, I have to slide a little toward the \"Cartesian theater of the mind\" view that D is going to spend much of the book arguing against.]

    " } }, { "_id": "gKBCveCSjXPgNRfsA", "title": "Dennett's \"Consciousness Explained\": Chpt 2", "pageUrl": "https://www.lesswrong.com/posts/gKBCveCSjXPgNRfsA/dennett-s-consciousness-explained-chpt-2", "postedAt": "2010-01-10T23:38:55.449Z", "baseScore": 3, "voteCount": 25, "commentCount": 25, "url": null, "contents": { "documentId": "gKBCveCSjXPgNRfsA", "html": "

    Chapter 2: Explaining Consciousness

    \n

    This chapter is all about dualism and why it's bad.  I find this chapter incoherent, because Dennett never defines dualism carefully, and confuses it with the theistic views historically associated with dualism.

    \n

    [Long stretches of my own ideas will be in brackets, like this.]

    \n

    1. Should consciousness be demystified?

    \n

    D begins with a defense against those who don't want an explanation of consciousness.  I'm not even going to read this section.

    \n

    2. The mystery of consciousness

    \n

    \"What could be more obvious or certain to each of us than that he or she is a conscious subject of experience, an enjoyer of perceptions and sensations, a sufferer of pain, an entertainer of ideas, and a conscious deliberator? ... How can living physical bodies in the physical world produce such phenomena?\"

    \n

    3. The attractions of mind stuff

    \n

    D says Searle describes functionalism by saying that if a computer program reproduced the entire functional structure of a human wine taster's cognitive system, functionalism says that it would reproduce all the mental properties, including enjoyment.  This is followed by a [long, pointless, popular] discussion of whether volcanos, hurricanes, etc., have souls.

    \n

    The \"mind stuff\" position D sets up in order to knock down is not John Searle's \"mind stuff\" position, nor Roger Penrose's, but something spiritual and non-physical.  He says that the mind-stuff view is that \"the conscious mind... cannot just be the brain... because nothing in the brain could appreciate wine.\"  [D should pursue this further, because the summary given of the \"mind stuff\" view is too incoherent to attack.]

    \n

    4. Why dualism is forlorn

    \n

    D says that the \"mind stuff\" view is that the brain is \"composed not of ordinary matter\", and is dualism.  Materialists, OTOH, say we can account for all mental phenomena using the same principles that explain other things.  Dualism is bad; materialism is good.

    \n

    D says that if dualism were true, then the mind would need to act on our body, so we could move our arms.  But to move, a physical thing needs energy; and how can non-matter produce energy?  Dualism is thus dead, QED.

    \n

    [IMHO this muddies the waters terribly, in a way I've seen them muddied many times before.  The dichotomy between dualism and materialism is a false dichotomy.  There are no \"materialists\" who believe only in a kinematic world.  (Ironically, Descartes, the classic dualist, was just such a materialist when it came to the physical world; he refused to believe in gravity for that reason.)  The physics that we believe in is full of other forces such as gravity, electricity, and magnetism.  Energy can exist in all these forms, as well as in matter.  D is one of the many who reject \"dualism\", yet accept the everyday mysterious forces; and presumably approve of people who accepted them long before there was any material explanation of them.

    \n

    If we want to be able to dismiss people for positing new \"stuff\", the way we currently do by calling them dualists, then we need a way to distinguish an acceptable new fundamental force or type of being, such as gravity, from an unacceptable one.  Obeying the law of conservation of energy is one such rule; if Descartes wants to have a non-material soul, it must exchange energy with ordinary matter in predictable ways.  To put it another way, rejecting \"dualism\" can make sense if we define dualism as the belief in things that don't obey conservation laws.

    \n

    Once we've done that, though, we find that all the old enemies we hoped to dismiss as dualists can sneak back in by claiming to observe the conservation laws.  Even magic systems in fantasy worlds often obey energy conservation laws.

    \n

    Even this conservative position is problematic.  EY believes in many worlds.  Many worlds seems, at least to many people like me who consider it a respectable position without understanding it, to require a stupendous, continual violation of conservation laws.  But we don't usually therefore call EY a \"bad dualist\" and dismiss many-worlds.

    \n

    My intuition is that we are willing to consider even very crazy-sounding new proposed extensions to physics, if we believe they are made with a sincere desire to understand the universe.  Historically, a \"dualist\" is usually someone, such as Descartes, who is trying to come up with an excuse for not trying to understand the universe.  The term \"dualist\" is an accusation against a person's intent masquerading as an objection to their physics.]

    \n

    D also refutes dualism by saying that mind stuff can't both elude physical measurement, and control the body; as anything that escapes our instruments of detection can't possibly interact with the body to control it.  [The problem with this argument is that, 200 years ago, if someone had told you that the brain used electrical impulses, you could have used the exact same argument to \"prove\" that that was impossible.]

    \n

    D considers this angle on the next page, without realizing that it devastates his last several pages of argument:  \"Perhaps some basic enlargement of the ontology of the physical sciences is called for in order to account for the phenomena of consciousness.\"

    \n

    He also gives us what I take to be the defining characteristic of \"bad\" dualism: \"The few dualists to avow their views openly have all candidly and comfortably announced that they have no theory whatever of how the mind works -- something, they insist, that is quite beyond human ken.\"  [This shows us that what D dismisses as \"dualism\" has nothing to do with making a dualistic distinction between matter and non-matter; and everything to do with the intentions of the theorist.  What D objects to as \"dualism\" is actually the God argument that has been historically associated with dualism:  Stopping further inquiry by positing the existence of something that a) explains, and b) cannot be explained.]

    \n

    5. The challenge

    \n

    D lays down rules for himself to follow: To explain consciousness, with only existing science, while acknowledging his own conscious experience.

    \n

    Summary

    \n

    [This chapter will confuse more than inform.  Its only purpose is to refute dualism; but D hasn't taken a close look at what he means when he uses that word, so he mingles together its meaning and its associations and historical contingencies indiscriminately.]

    " } }, { "_id": "5CTL7cq5Qfn8MD8yC", "title": "Why does failed indulgence cause guilt?", "pageUrl": "https://www.lesswrong.com/posts/5CTL7cq5Qfn8MD8yC/why-does-failed-indulgence-cause-guilt", "postedAt": "2010-01-10T13:33:10.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "5CTL7cq5Qfn8MD8yC", "html": "

    When luxurious products disappoint, people feel more guilty than when utilitarian products do:

    \n

    The primary insights this research provides are as follows: (1) a negative experience with the choice of a product with superior utilitarian and inferior hedonic benefits (e.g., a highly functional cell phone with poor attractiveness) over a product with superior hedonic and inferior utilitarian benefits evokes feelings of sadness, disappointment, and anger, (2) a negative experience with the choice of a product with superior hedonic and inferior utilitarian benefits (e.g., a highly attractive cell phone with poor functionality) over a product with superior utilitarian and inferior hedonic benefits evokes feelings of guilt and anxiety.

    \n

    This is interesting because the failure of the product to satisfy isn’t caused by the indulgence of the buyer’s decision to buy it. Yet it’s as though the blame goes to the last decision the buyer made, and the problem with that decision is taken to be whatever felt bad about it at the time, however unrelated to the failure at hand. Or does the disappointment seem like punishment somehow for the original greed?

    \n

    How general is this pattern? I think I feel more guilty when my less admirable intentions fail. Can you think of examples or counterexamples?


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "bNhTM2ypGBjkGHcKM", "title": "Dennett's \"Consciousness Explained\": Prelude", "pageUrl": "https://www.lesswrong.com/posts/bNhTM2ypGBjkGHcKM/dennett-s-consciousness-explained-prelude", "postedAt": "2010-01-10T07:31:03.775Z", "baseScore": 14, "voteCount": 23, "commentCount": 100, "url": null, "contents": { "documentId": "bNhTM2ypGBjkGHcKM", "html": "

    I'm starting Dennett's \"Consciousness Explained\".  Dennett says, in the introduction, that he believes he has solved the problem of consciousness.  Since several people have referred to his work here with approval, I'm going to give it a go.  I'm going to post chapter summaries as I read, for my own selfish benefit, so that you can point out when you disagree with my understanding of it.  \"D\" will stand for Dennett.

    \n

    If you loathe the C-word, just stop now.  That's what the convenient break just below is for.  You are responsible for your own wasted time if you proceed.

    \n

    Chpt. 1: Prelude: How are Hallucinations Possible?

    \n

    D describes the brain in a vat, and asks how we can know we aren't brains in vats.  This dismays me, as it is one of those questions that distracts people trying to talk about consciousness, that has nothing to do with the difficult problems of consciousness.

    \n

    Dennett states, without presenting a single number, that the bandwidth needs for reproducing our sensory experience would be so great that it is impossible (his actual word); and that this proves that we are not brains in vats.  Sigh.

    \n

    He then asks how hallucinations are possible: \"How on earth can a single brain do what teams of scientists and computer animators would find to be almost impossible?\"  Sigh again.  This is surprising to Dennett because he believes he has just established that the bandwidth needs for consciousness are too great for any computer to provide; yet the brain sometimes (during hallucinations) provides nearly that much bandwidth.  D has apparently forgotten that the brain provides exactly, by definition, the consciousness bandwidth of information to us all the time.

    \n

    D recounts Descartes' remarkably prescient discussion of the bellpull as an analogy for how the brain could send us phantom misinformation; but dismisses it, saying, \"there is no way the brain as illusionist could store and manipulate enough false information to fool an inquiring mind.\"  Sigh.  Now not only consciousness, but also dreams, are impossible.  However, D then comes back to dreams, and is aware they exist and are hallucinations; so either he or I is misunderstanding this section.

    \n

    On p. 12 he suggests something interesting: Perception is driven both bottom-up (from the senses) and top-down (from our expectations).  A hallucination could happen when the bottom-up channel is cut off.  D doesn't get into data compression at all, but I think a better way to phrase this is that, given arbitrary bottom-up data, the mind can decompress sensory input into the most likely interpretation given the data and given its knowledge about the world.  Internally, we should expect that high-bandwidth sensory data is summarized somewhere in a compressed form.  Compressed data necessarily looks more random than prior to compression.  This means that, somewhere inside the mind, we should expect it to be harder than naive introspection suggests to distinguish between true sensory data and random sensory noise.  D suggests an important role for an adjustable sensitivity threshold for accepting/rejecting suggested interpretations of sense data.
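
    [A toy illustration of that last claim, not from Dennett -- just a sketch in Python with a made-up 'sensory' byte string: a crude per-byte entropy measure rates the compressed version as far more noise-like than the highly structured original.]

    ```python
    import math
    import zlib
    from collections import Counter

    def byte_entropy(data):
        # Shannon entropy in bits per byte of a byte string.
        counts = Counter(data)
        total = len(data)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    raw = b'blue sky, green grass, ' * 200   # highly structured 'sensory' data
    packed = zlib.compress(raw, 9)           # a compact summary of the same data

    print(len(raw), round(byte_entropy(raw), 2))        # long, low entropy (repetitive)
    print(len(packed), round(byte_entropy(packed), 2))  # much shorter, higher entropy (more noise-like)
    ```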

    \n

    D dismisses Freud's ideas about dreams - that they are stories about our current concerns, hidden under symbolism in order to sneak past our internal censors - by observing that we should not posit homunculi inside our brains who are smarter than we are.

    \n

    [In summary, this chapter contained some bone-headed howlers, and some interesting things; but on the whole, it makes me doubt that D is going to address the problem of consciousness.  He seems, instead, on a trajectory to try to explain how a brain can produce intelligent action.  It sounds like he plans to talk about the architecture of human intelligence, although he does promise to address qualia in part III.

    \n

    Repeatedly on LW, I've seen one person (frequently Mitchell Porter) raise the problem of qualia; and seen otherwise-intelligent people reply by saying science has got it covered, consciousness is a property of physical systems, nothing to worry about.  For some reason, a lot of very bright people cannot see that consciousness is a big, strange problem.  Not intelligence, not even assigning meaning to representations, but consciousness.  It is a different problem.  (A complete explanation of how intelligence and symbol-grounding take place in humans might concomitantly explain consciousness; it does not follow, as most people seem to think it does, that demonstrating a way to account for non-human intelligence and symbol-grounding therefore accounts for consciousness.)

    \n

    Part of the problem is their theistic opponents, who hopelessly muddle intelligence, consciousness, and religion:  \"A computer can never write a symphony.  Therefore consciousness is metaphysical; therefore I have a soul; therefore there is life after death.\"  I think this line of reasoning has been presented to us all so often that a lot of us have cached it, to the extent that it injects itself into our own reasoning.  People on LW who try to elucidate the problem of qualia inevitably get dismissed as quasi-theists, because, historically, all of the people saying things that sound similar were theists.

    \n

    At this point, I suspect that Dennett has contributed to this confusion, by writing a book about intelligence and claiming not just that it's about consciousness, but that it has solved the problem.  I shall see.]

    " } }, { "_id": "5ZSHQMsnhtxgeTEAS", "title": "Savulescu: \"Genetically enhance humanity or face extinction\"", "pageUrl": "https://www.lesswrong.com/posts/5ZSHQMsnhtxgeTEAS/savulescu-genetically-enhance-humanity-or-face-extinction", "postedAt": "2010-01-10T00:26:56.846Z", "baseScore": 7, "voteCount": 13, "commentCount": 235, "url": null, "contents": { "documentId": "5ZSHQMsnhtxgeTEAS", "html": "

    In this video, Julian Savulescu from the Uehiro Centre for Practical Ethics argues that human beings are \"Unfit for the future\" - that radical technological advance, liberal democracy and human nature will combine to make the 21st century the century of global catastrophes, perpetrated by terrorists and psychopaths, with tools such as engineered viruses. He goes on to argue that enhanced intelligence and a reduced urge to violence and defection in large commons problems could be achieved using science, and may be a way out for humanity.

    \n

     

    \n

    \n

     

    \n

    \n\n\n\n\n\n

    \n

    Skip to 1:30 to avoid the tedious introduction

    \n

    Genetically enhance humanity or face extinction - PART 1 from Ethics of the New Biosciences on Vimeo.

    \n

     

    \n

    \n\n\n\n\n\n

    \n

     

    \n

    Genetically enhance humanity or face extinction - PART 2 from Ethics of the New Biosciences on Vimeo.

    \n

     

    \n

    Well, I have already said something rather like this. Perhaps this really is a good idea, more important, even, than coding a friendly AI? AI timelines where super-smart AI doesn't get invented until 2060+ would leave enough room for human intelligence enhancement to happen and have an effect. When I collected some SIAI volunteers' opinions on this, most thought that there was a very significant chance that super-smart AI will arrive sooner than that, though.

    \n

    A large portion of the video consists of pointing out the very strong scientific case that our behavior is a result of the way our brains are structured, and that this means that changes in our behavior are the result of changes in the way our brains are wired. 

    " } }, { "_id": "SYmubibg7wfuLztpQ", "title": "Hypotheses For Dualism", "pageUrl": "https://www.lesswrong.com/posts/SYmubibg7wfuLztpQ/hypotheses-for-dualism", "postedAt": "2010-01-09T08:05:56.749Z", "baseScore": 3, "voteCount": 22, "commentCount": 33, "url": null, "contents": { "documentId": "SYmubibg7wfuLztpQ", "html": "

    In this post I present the first few hypotheses I can think of for why people insist on a metaphysical aspect to consciousness, and develop one in some detail: a \"reality is simulated\" hypothesis.

    \n

    Please contribute your own hypotheses.  Why is there a persistent belief that consciousness is metaphysical?

    \n

    \n

    I offer some hypotheses for why people have the illusion of a metaphysical aspect to consciousness:

    \n\n

    Perhaps all of these hypotheses contribute to the impression that consciousness is fundamentally non-physical (outside the physical; meta-physical). It is the last hypothesis that I would like to expand upon in this post: that we actually simulate reality in our thoughts -- and this simulated reality is the dual reality that dualists speak of.

    \n

     

    \n
    \n

     

    \n

     

    \n

    Reality is Simulated Hypothesis

    \n


    I know that when I look at my hand, the experience is not as immediate and straight-forward as it seems. It's not me \"looking at my hand\" -- if by 'me' I mean my conscious self. Instead, my brain is putting together an image of my hand based on sensory information it is receiving from my eye. Nor is it even so simple as \"it is me that is aware of a constructed image of my hand\". Because the part of me that 'sees' the image is still not my conscious 'me'. What is actually going on is that I imagine 'myself' seeing my hand -- that is, I imagine a 'me' and I imagine this 'me' is seeing a hand. So when I think about it quite carefully, I realize that when I think to myself that I am seeing my hand, this means I am simulating myself seeing a hand.

    \n


    Level 1: I see my hand. (where I = my brain and my hand = my hand -- this is equivalent to the way a non-sentient creature \"sees\")

    \n


    Level 2: I think, \"I see my hand\". (the I in \"I see my hand\" = my conscious self-awareness; the  hand in \"I see my hand\" = an imagined / simulated hand)

    This sounds very complicated, but it goes on all the time that we're consciously self-aware. (Without it being consciously observed -- that would be Level 3.)

    So my brain constructs an image of a hand. One very closely related to the simulated hand that I see when I imagine myself seeing my hand. So 'I' never see a hand; I only see my simulation of a hand.

    I'm describing this at Level 2 but it also happens at Level 1. I'm sure many of us can relate to the childhood worry about whether our loved ones 'see' things the same way we do, since this seemed impossible to ever verify. ('What does blue look like to you? How do I know you're not seeing green, but have learned to call it blue?') The conundrum dissolves when you realize that there is nothing behind the experience of 'seeing green' or 'seeing blue' beyond the set of experiences the meaning of those words points to. (Eliezer has posts on this, for example the first two paragraphs here.)

    \n


    So at the end of the day, my dad and I do reliably have the same concept of the blue box on the table, because what's relevant about our concept of the box is its weight, color, texture, etc. -- all the empirical things about it.

    By now in this post, though, if I'm explaining myself clearly, we'll observe that our concept of the box still includes more than all the empirical things about the box. When we consider the blue box, we're still simulating the blue box in our minds; perhaps in a simulation of ourselves seeing the blue box. The simulation of the blue box (the box in our mind's eye) needn't be exactly like the blue box. Indeed, it will be missing any information we don't have about the blue box or don't feel is necessary to retrieve for that particular simulation. The simulated blue box is an idealization of the actual blue box. A platonic idealization of a Blue Box. Even if I examine the blue box and notice that the corner is chipped, I will then consciously observe that I am observing that the blue box is chipped only by simulating myself seeing a blue box with the chip. A platonic Blue Box With A Chip.

    \n


    Thus qualia. We never interact with anything else, if 'we' is restricted to mean our conscious self-aware identities.

    So that's my thesis: consciousness is the simulation of reality run on the hardware of our brains, and qualia are the Level 3+ observation that the reality we perceive is simulated.

    Now imagine: when people describe consciousness as being metaphysical, perhaps they are observing that the simulation is metaphysical.

    ... I would agree that a simulation can be metaphysical. But it's still simulated meta-physicality. I imagine/simulate a platonic Blue Box, but a platonic Blue Box doesn't exist. Not empirically.

    \n


    Or does it? If empirical is defined as that which we most immediately experience; wouldn't qualia be there, right between ourselves and the observation of reality?

    I've observed before (and would be willing to argue in more detail) that a simulation is just as real as reality if it doesn't need to model the exterior reality to model itself self-consistently. But when consciousness is in the act of observing reality (and thus reality is simulated in our consciousness) the simulation is only used to model reality, so some confusion about which is internal and which is external is understandable. This is the 'eye looking at the eye looking at the eye' sensation that Eliezer describes, somewhere. In any case, I wouldn't say that our experience of qualia is not real (genuine), but that we're only beginning to find the right words for it. I think the word 'simulation' is fine. (I note the meaning of simulation has changed in the last 15 years to accommodate my meaning; it takes time for words to evolve to fill new gaps.)

    \n


    As evidence (ironic cough) I would like to present a conversation I had recently with someone who said they believed in out-of-body experiences.

    Me: Really?
    G: Yeah.
    Me: What do you see with during these experience? With your actual physical eyes? (Aren't they closed?)
    G: Not really, it's like a third eye. (translation: mind's eye)
    Me: Could you use this technique to spy on people?
    G: No... (some discomfort)
    Me: Could you pick up a video camera and video yourself sleeping on the bed?
    G: No! It's not like that. (translation: it's not empirical)
         But you can see things in a different way, discover things that you intuitively know. (translation: discover information that might be imbedded in the simulation, like being able to recall that the box was chipped or it belonged to your grandmother before she died)

    \n

    ... kind of Matrix-y, but why not? While simulating our simulations, why not code some extra information in the wall, in a favorite childhood tree, in the image of a horse being whipped?

    And now some of the metaphysical stuff I've heard doesn't sound half as crazy after all, if they mean that actual-reality and simulated-reality form a dual universe. I can see how people might feel that the simulated reality is the genuine/ordinant reality: reality cannot be perceived without perceiving the simulated reality, but simulated reality can be perceived without perceiving reality. (For example, I can't see a doughnut without also imagining it, but I can imagine a doughnut without seeing it.) I can also see how dualists would see physical objects as embedded with meaning. \"But not physically,\" they'll say. But yet -- it is the physical object that has the meaning? \"Not exactly,\" they'll say. Kind of like some platonic entity that is associated with it...

    \n

     

    \n\n

     

    " } }, { "_id": "457TAikoXQFuckwxo", "title": "Intergenerational inequality", "pageUrl": "https://www.lesswrong.com/posts/457TAikoXQFuckwxo/intergenerational-inequality", "postedAt": "2010-01-08T12:55:16.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "457TAikoXQFuckwxo", "html": "

    These are common views, held together often:

    \n
  1. Modern people are more wasteful of natural resources than their ancestors
  2. Technology won’t save us from this gluttony, all we can do is control ourselves
  3. Humanity should minimize population as well as personal consumption now to preserve natural resources for future generations

    \n

    However if people are following a trend of using natural resources less efficiently, and this won’t be changed by future technology, current people seem likely to use natural resources more efficiently than the next few generations will. If this is true and the purpose is human wellbeing (as concern for future generations suggests), shouldn’t we try to have a larger population early on, at the expense of having a smaller one later?


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "Rq5HS5JwjdxaLeR2B", "title": "Consciousness", "pageUrl": "https://www.lesswrong.com/posts/Rq5HS5JwjdxaLeR2B/consciousness", "postedAt": "2010-01-08T12:18:39.776Z", "baseScore": 8, "voteCount": 55, "commentCount": 232, "url": null, "contents": { "documentId": "Rq5HS5JwjdxaLeR2B", "html": "

    (ETA: I've created three threads - color, computation, meaning - for the discussion of three questions posed in this article. If you are answering one of those specific questions, please answer there.)

    \n

    I don't know how to make this about rationality. It's an attack on something which is a standard view, not only here, but throughout scientific culture. Someone else can do the metalevel analysis and extract the rationality lessons.

    \n

    The local worldview reduces everything to some combination of physics, mathematics, and computer science, with the exact combination depending on the person. I think it is manifestly the case that this does not work for consciousness. I took this line before, but people struggled to understand my own speculations and this complicated the discussion. So the focus is going to be much more on what other people think - like you, dear reader. If you think consciousness can be reduced to some combination of the above, here's your chance to make your case.

    \n

    The main exhibits will be color and computation. Then we'll talk about reference; then time; and finally the \"unity of consciousness\".

    \n

    Color was an issue last time. I ended up going back and forth fruitlessly with several people. From my perspective it's very simple: where is the color in your theory? Whether your physics consists of fields and particles in space, or flows of amplitude in configuration space, or even if you think reality consists of \"mathematical structures\" or Platonic computer programs, or whatever - I don't see anything red or green there, and yet I do see it right now, here in reality. So if you intend to tell me that reality consists solely of physics, mathematics, or computation, you need to tell me where the colors are.

    \n

    Occasionally someone says that red and green are just words, and they don't even mean the same thing for different cultures or different people. True. But that's just a matter of classification. It's a fact that the individual shades of color exist, however it is that we group them - and your ontology must contain them, if it pretends to completeness.

    \n

    Then, there are various other things which have some relation to color - the physics of surface reflection, or the cognitive neuroscience of color attribution. I think we all agree that the first doesn't matter too much; you don't even need blue light to see blue, you just need the right nerves to fire. So the second one seems a lot more relevant, in the attempt to explain color using the physics we have. Somehow the answer lies in the brain.

    \n

    There is one last dodge comparable to focusing on color words, namely, focusing on color-related cognition. Explaining why you say the words, explaining why you categorize the perceived object as being of a certain color. We're getting closer here. The explanation of color, if there is such, clearly has a close connection to those explanations.

    \n

    But in the end, either you say that blueness is there, or it is not there. And if it is there, at least \"in experience\" or \"in consciousness\", then something somewhere is blue. And all there is in the brain, according to standard physics, is a bunch of particles in various changing configurations. So: where's the blue? What is the blue thing?

    \n

    I can't answer that question. At least, I can't answer that question for you if you hold with orthodoxy here. However, I have noticed maybe three orthodox approaches to this question.

    \n

    First is faith. I don't understand how it could be so, but I'm sure one day it will make sense.

    \n

    Second, puzzlement plus faith. I don't understand how it could be so, and I agree that it really really looks like an insurmountable problem, but we overcame great problems in the past without having to overthrow the whole of science. So maybe if we stand on our heads, hold our breath, and think different, one day it will all make sense.

    \n

    Third, dualism that doesn't notice it's dualism. This comes from people who think they have an answer. The blueness is the pattern of neural firing, or the von Neumann entropy of the neural state compared to that of the light source, or some other particular physical entity or property. If one then asks, okay, if you say so, but where's the blue... the reactions vary. But a common theme seems to be that blueness is a \"feel\" somehow \"associated\" with the entity, or even associated with being the entity. To see blue is how it feels to have your neurons firing that way.

    \n

    This is the dualism which doesn't know it's dualism. We have a perfectly sensible and precise physical description of neurons firing: ions moving through macromolecular gateways in a membrane, and so forth. There's no end of things we can say about it. We can count the number of ions in a particular spatial volume, we can describe how the electromagnetic fields develop, we can say that this was caused by that... But you'll notice - nothing about feels. When you say that this feels like something, you're introducing a whole new property to the physical description. Basically, you're constructing a dual-aspect materialism, just like David Chalmers proposed. Technically, you're a property dualist rather than a substance dualist.

    \n

    Now dualism is supposed to be beyond horrible, so what's the alternative? You can do a Dennett and deny that anything is really blue. A few people go there, but not many. If the blueness does exist, and you don't want to be a dualist, and you want to believe in existing physics, then you have to conclude that blueness is what the physics was about all along. We represented it to ourselves as being about little point-particles moving around in space, but all we ever actually had was mathematics and correct predictions, so it must be that some part of the mathematics was actually talking about blueness - real blueness - all along. Problem solved!

    \n

    Except, it's rather hard to make this work in detail. Blueness, after all, does not exist in a vacuum. It's part of a larger experience. So if you take this path, you may as well say that experiences are real, and part of physics must have been describing them all along. And when you try to make some part of physics look like a whole experience - well, I won't say the m word here. Still, this is the path I took, so it's the one I endorse; it just leads you a lot further afield than you might imagine.

    \n

    Next up, computation. Again, the basic criticism is simple, it's the attempt to rationalize things which makes the discussion complicated. People like to attribute computational states, not just to computers, but to the brain. And they want to say that thoughts, perceptions, etc., consist of being in a certain computational state. But a physical state does not correspond inherently to any one computational state.

    \n

    There's also a problem with semantics - saying that the state is about something - which I will come to in due course. But first up, let's just look at the problems involved in attributing a non-referential \"computational state\" to a physical entity. 

    \n

    Physically speaking, an object, like a computer or a brain, can be in any of a large number of exact microphysical states. When we say it is in a computational state, we are grouping those microphysically distinct states together and saying, every state in this group corresponds to the same abstract high-level state, every microphysical state in this other group corresponds to some other abstract high-level state, and so on. But there are many many ways of grouping the states together. Which clustering is the true one, the one that corresponds to cognitive states? Remember, the orthodoxy is functionalism: low-level details don't matter. To be in a particular cognitive state is to be in a particular computational state. But if the \"computational state\" of a physical object is an observer-dependent attribution rather than an intrinsic property, then how can my thoughts be brain states?
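
    As a toy sketch of this grouping problem (my illustration, not the author's; the 'voltages' and thresholds are invented), the very same sequence of microstates can be read as two different bit-string computations, depending on which coarse-graining you choose, and nothing in the physics alone picks one reading out:

    ```python
    # One physical trajectory, two equally lawful 'computational' readings.
    microstates = [0.12, 0.48, 0.51, 0.97, 0.03, 0.88]   # invented 'voltages'

    # Grouping A: everything at or above 0.5 counts as a 1.
    reading_a = [1 if v >= 0.5 else 0 for v in microstates]

    # Grouping B: everything at or above 0.25 counts as a 1.
    reading_b = [1 if v >= 0.25 else 0 for v in microstates]

    print(reading_a)   # [0, 0, 1, 1, 0, 1]
    print(reading_b)   # [0, 1, 1, 1, 0, 1] -- a different 'computation' from the same physics
    ```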

    \n

    We didn't have this discussion before, so I won't try to anticipate the possible defenses of functionalism. No-one will be surprised, I suppose, to hear that I don't believe this either. Instead, I deduce from this problem that functionalism is wrong. But here's your chance, functionalists: tell the world the one true state-clustering which tells us the computation being implemented by a physical object!

    \n

    I promised a problem with semantics too. Again I think it's pretty simple. Even if we settle on the One True Clustering of microstates - each such macrostate is still just a region of a physical configuration space. Thoughts have semantic content, they are \"about\" things. Where's the aboutness?

    \n

    I also promised to mention time and unity-of-consciousness in conclusion. Time I think offers another outstanding example of the will to deny an aspect of conscious experience (or rather, to call it an illusion) for the sake of insisting that reality conforms entirely to a particular scientific ontology. Basically, we have a physics that spatializes time; we can visualize a space-time as a static, completed thing. So time in the sense of flow - change, process - isn't there in the model; but it appears to be there in reality; therefore it is an illusion.

    \n

    Without trying to preempt the debate about time, perhaps you can see by now why I would be rather skeptical of attempts to deny the obvious for the sake of a particular scientific ontology. Perhaps it's not actually necessary. Maybe, if someone thinks about it hard enough, they can come up with an ontology in which time is real and \"flows\" after all, and which still gives rise to the right physical predictions. (In general relativity, a world-line has a local time associated with it. So if the world-line is that of an actually and persistently existing object, perhaps time can be real and flowing inside the object... in some sense. That's my suggestion.)

    \n

    And finally, unity of consciousness. In the debate over physicalism and consciousness, the discussion usually doesn't even get this far. It gets stuck on whether the individual \"qualia\" are real. But they do actually form a whole. All this stuff - color, meaning, time - is drawn from that whole. It is a real and very difficult task to properly characterize that whole: not just what its ingredients are, but how they are joined together, what it is that makes it a whole. After all, that whole is your life. Nonetheless, if anyone has come this far with me, perhaps you'll agree that it's the ontology of the subjective whole which is the ultimate challenge here. If we are going to say that a particular ontology is the way that reality is, then it must not only contain color, meaning, and time, it has to contain that subjective whole. In phenomenology, the standard term for that whole is the \"lifeworld\". Even cranky mistaken reductionists have a lifeworld - they just haven't noticed the inconsistencies between what they believe and what they experience. The ultimate challenge in the science of consciousness is to get the ontology of the lifeworld right, and then to find a broader scientific ontology which contains the lifeworld ontology. But first, as difficult as it may seem, we have to get past the partial ontologies which, for all their predictive power and their seductive exactness, just can't be the whole story.

    " } }, { "_id": "syr72qRBjpcfgwnwW", "title": "Does it look like it’s all about happiness?", "pageUrl": "https://www.lesswrong.com/posts/syr72qRBjpcfgwnwW/does-it-look-like-it-s-all-about-happiness", "postedAt": "2010-01-08T05:42:22.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "syr72qRBjpcfgwnwW", "html": "

    Humanity’s obsession with status and money is often attributed to a misguided belief that these will bring the happiness we truly hunger for. Would-be reformers repeat the worldview-shattering news that we can be happier just by being grateful and spending more time with our families and on other admirable activities. Yet the crowds begging for happiness do not appear to heed them.

    \n

    This popular theory doesn’t explain why people are so ignorant after billions of lifetimes of data about what brings happiness, or alternatively why they are helpless to direct their behavior toward it with the information. The usual counterargument to this story is simply that money and status and all that do in fact bring happiness, so people aren’t that silly after all.

    \n

    Another explanation for the observed facts is that we don’t actually want happiness that badly; we like status and money too even at the expense of happiness. That requires the opposite explanation, of why we think we like happiness so much.

    \n

    But first, what’s the evidence that we really want happiness or don’t? Here is some I can think of (please add):

    \n

    For “We are mostly trying to get happiness and failing”:

    \n\n

    “We often aren’t trying to get happiness”:

    \n\n

    It looks to me like we don’t care only about happiness, though we do a bit. I suspect we care more about happiness in the present and more about other things in the long term, and thus are confused when long-term plans don’t seem to lead to happiness even though introspection says we like it.


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "yKRTxAohwmqLcbbJi", "title": "Reference class of the unclassreferenceable", "pageUrl": "https://www.lesswrong.com/posts/yKRTxAohwmqLcbbJi/reference-class-of-the-unclassreferenceable", "postedAt": "2010-01-08T04:13:36.319Z", "baseScore": 25, "voteCount": 60, "commentCount": 154, "url": null, "contents": { "documentId": "yKRTxAohwmqLcbbJi", "html": "

    One of the most useful techniques of rationality is taking the outside view, also known as reference class forecasting. Instead of thinking too hard about the particulars of a given situation and taking a guess which will invariably turn out to be highly biased, one looks at the outcomes of situations which are similar in some essential way.

    \n

    Figuring out the correct reference class might sometimes be difficult, but even then it's far more reliable than trying to guess while ignoring the evidence of similar cases. Now, in some situations we have precise enough data that the inside view might give the correct answer - but for almost all such cases I'd expect the outside view to be as usable and not far behind in correctness.

    \n

    Something that keeps puzzling me is the persistence of certain beliefs on lesswrong. Like the belief in the effectiveness of cryonics - the reference class of things promising eternal (or very long) life is huge and has a consistent 0% success rate. The reference class of predictions based on technology which isn't even remotely here has perhaps a non-zero but still ridiculously tiny success rate. I cannot think of any reference class in which cryonics does well. Likewise the belief in the singularity - the reference class of beliefs in the coming of a new world, be it good or evil, is huge and has a consistent 0% success rate. The reference class of beliefs in almost omnipotent good or evil beings has a consistent 0% success rate.
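
    For what it's worth, the simplest quantitative version of this kind of base-rate reasoning is something like Laplace's rule of succession, which turns 'zero successes out of N similar cases' into a small but non-zero probability (the post itself doesn't invoke this rule, and the count of 1000 below is invented for illustration):

    ```python
    def laplace_estimate(successes, trials):
        # Rule of succession: (k + 1) / (N + 2)
        return (successes + 1) / (trials + 2)

    # e.g. 'things promising eternal or very long life': 0 successes in ~1000 tries
    print(laplace_estimate(0, 1000))   # ~0.000998 -- tiny, but not literally zero
    ```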

    \n

    And many fellow rationalists not only believe that the chances of cryonics or the singularity or AI are far from the negligible levels indicated by the outside view, they consider them highly likely or even nearly certain!

    \n

    There are a few ways this situation could be resolved:

    \n\n

    How do you reconcile them?

    " } }, { "_id": "bkowF2N9uyYh4DXSe", "title": "Fictional Evidence vs. Fictional Insight", "pageUrl": "https://www.lesswrong.com/posts/bkowF2N9uyYh4DXSe/fictional-evidence-vs-fictional-insight", "postedAt": "2010-01-08T01:59:03.452Z", "baseScore": 50, "voteCount": 38, "commentCount": 46, "url": null, "contents": { "documentId": "bkowF2N9uyYh4DXSe", "html": "

    This is a response to Eliezer Yudkowsky's The Logical Fallacy of Generalization from Fictional Evidence and Alex Flint's When does an insight count as evidence? as well as komponisto's recent request for science fiction recommendations.

    \n

    My thesis is that insight forms a category that is distinct from evidence, and that fiction can provide insight, even if it can't provide much evidence. To give some idea of what I mean, I'll list the insights I gained from one particular piece of fiction (published in 1992), which have influenced my life to a large degree:

    \n
      \n
    1. Intelligence may be the ultimate power in this universe.
    2. A technological Singularity is possible.
    3. A bad Singularity is possible.
    4. It may be possible to nudge the future, in particular to make a good Singularity more likely, and a bad one less likely.
    5. Improving network security may be one possible way to nudge the future in a good direction. (Side note: here are my current thoughts on this.)
    6. An online reputation for intelligence, rationality, insight, and/or clarity can be a source of power, because it may provide a chance to change the beliefs of a few people who will make a crucial difference.
    \n

    So what is insight, as opposed to evidence? First of all, notice that logically omniscient Bayesians have no use for insight. They would have known all of the above without having observed anything (assuming they had a reasonable prior). So insight must be related to logical uncertainty, and a feature only of minds that are computationally constrained. I suspect that we won't fully understand the nature of insight until the problem of logical uncertainty is solved, but here are some of my thoughts about it in the meantime:

    \n\n

    So a challenge for us is to distinguish true insights from unhelpful distractions in fiction. Eliezer mentioned people who let the Matrix and Terminator dominate their thoughts about the future, and I agree that we have to be careful not to let our minds consider fiction as evidence. But is there also some skill that can be learned, to pick out the insights, and not just to ignore the distractions?

    \n

    P.S., what insights have you gained from fiction?

    \n

    P.P.S., I guess I should mention the name of the book for the search engines: A Fire Upon the Deep by Vernor Vinge.

    " } }, { "_id": "XYA9nBud8joDjTy86", "title": "Case study: Melatonin", "pageUrl": "https://www.lesswrong.com/posts/XYA9nBud8joDjTy86/case-study-melatonin", "postedAt": "2010-01-07T18:24:46.652Z", "baseScore": 28, "voteCount": 38, "commentCount": 176, "url": null, "contents": { "documentId": "XYA9nBud8joDjTy86", "html": "
    \n

    I discuss melatonin's effects on sleep & its safety; I segue into the general benefits of sleep and the severely disrupted sleep of the modern Western world, the cost of melatonin use and the benefit (e.g. enforcing regular bedtimes), followed by a basic cost-benefit analysis of melatonin, concluding that the net profit is large enough to be worth giving it a try barring unusual conditions or very pessimistic safety estimates.

    \n
    \n

    Full essay: http://www.gwern.net/Melatonin

    " } }, { "_id": "XN4xC7zKBXey9Ngfj", "title": "Will reason ever outrun faith?", "pageUrl": "https://www.lesswrong.com/posts/XN4xC7zKBXey9Ngfj/will-reason-ever-outrun-faith", "postedAt": "2010-01-07T14:00:27.205Z", "baseScore": 9, "voteCount": 21, "commentCount": 31, "url": null, "contents": { "documentId": "XN4xC7zKBXey9Ngfj", "html": "

    Recently, a video produced by Christians claimed that the future world would be Muslim. It reached 10 million hits on YouTube. The alarming demographics presented were shown to be mostly false or exaggerated by both the BBC and Snopes. Yet religion is such a powerful self-replicating memeplex that its competition against atheism deserves some analysis.

    \n

    Leaving aside the aesthetic nicety of some religious rituals (which I respect), it would be preferable to see a world with a predominance of rationality instead of faith, brights instead of supers. Not just because I whimsically wish it so, but because atheism follows from reason. Rationality is the prime mover here. With more rational agents, rationality propagates further, and people’s maps will be more accurate. And that’s better for us, human beings*.

    \n

    (* This sentence is a bit of a strong claim, especially because I am not defining exactly what I mean by ‘better’, and some existential pain might be expected as a consequence of being unaided by the crutches of faith and of being deprived of their cultural antibodies. Also, if happiness happens to be an important attribute of ‘better’, I am not sure to what extent being rational will make people happier. Some people are very ok choosing the blue pill. For the time being, let’s take it as an axiom. The claim that rational is better might deserve a separate post.)

    \n

    Is a predominantly rational and atheist world probable?

    \n

    Let’s see. Religion — or the lack thereof — and culture are propagated memetically. The propagation can be (1) vertical (i.e., from parents to children) or (2) horizontal (i.e., through friends, media and the like).

    (1) It seems quite hopeless for atheists to outgrow religious populations through parenting. First, \"countries that are relatively secularized usually reproduce more slowly than countries that are more religious. According to the World Bank, the nations with the largest proportions of unbelievers had an average annual population growth rate of just 0.7% in the period 1975-97, while the populations of the most religious countries grew three times as fast.\" (from The Economist, Faith Equals Fertility).
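
    To see what rates like those imply if they persisted, here is a back-of-the-envelope compounding sketch (my own arithmetic, not the article's; the 2.1% figure is simply 'three times' 0.7%, and the 50-year horizon is arbitrary):

    ```python
    secular_rate, religious_rate = 0.007, 0.021   # assumed annual growth rates
    years = 50

    secular = (1 + secular_rate) ** years      # ~1.42x after 50 years
    religious = (1 + religious_rate) ** years  # ~2.83x after 50 years

    print(round(religious / secular, 2))       # ~2.0: the relative share roughly doubles in 50 years
    ```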

    As this very same article concludes:

    \"one might half-seriously conclude that atheists and agnostics ought to focus on having more children, to help overcome their demographic disadvantage. Unfortunately for secularists, this may not work even as a joke. Nobody knows exactly why religion and fertility tend to go together. Conventional wisdom says that female education, urbanization, falling infant mortality, and the switch from agriculture to industry and services all tend to cause declines in both religiosity and birth rates. In other words, secularization and smaller families are caused by the same things. Also, many religions enjoin believers to marry early, abjure abortion and sometimes even contraception, all of which leads to larger families. (…) So, religious people have larger families because Western religions encourage having children. Further, as a general proposition (there are, of course, exceptions), religious people tend to place a higher emphasis on altruism, whereas secular people tend to be more self-focused. Thus, for a religious person, children provide the opportunity to nurture and benefit other human beings. For many secular people, however, children merely consume time and resources that otherwise could have been devoted to their own amusement.\"

    Conclusion: as a whole, atheists have fewer offspring, and they have a good reason to do so. Their being atheists and their lower fertility rates have the same root cause: education. If you think about it, the other causes cited above are themselves a product of proper education.

    \n

    (2) Can atheism gain more adepts through horizontal propagation? Definitely, although I don't dare predict to what extent. Many atheists — kudos to Dawkins and Hitchens — have been making public calls to reason and openly arguing for atheism, in a rather educational approach. Some courageous initiatives have been seen recently.

    \n

    This is great.

    However, mass media still seems to be among the strongest means of replication. And mass media is a vehicle of entertainment, not a vehicle of truth-seeking.

    Making cold reason and correct statistics as appealing as the alternatives is a challenging pursuit. As businesses, mass media networks give more space to whatever pleases the public more. The public is religious — and as it seems, will continue to be so. Therefore, miracles and emotionally charged stories sell more. Well, probably even the brightest minds indulge in some ludic fallacy now and then.

    \n

    (I haven’t had a TV set at home for some eight years, and completely let go of the daily habit of reading newspapers, too, just like my friend Thomas Jefferson. There’s just too much noise going on — and isn’t it amazing how successful this noise is in diverting our attention from what matters most? But I digress.)

    We have a self-fueling pattern here, a memetic selection mechanism:

    Media propagating rationality-related memes lag behind, because there are fewer atheists. So fewer atheist ideas are spread. So there will be even fewer atheists, or, at least, only modest growth in their absolute numbers.

    Religious media get richer and bigger, because there are many religious people who support them. So more religious ideas are spread. So there will be even more of them, growing fast. As an example, the Brazilian TV network Rede Record, after being acquired by the leadership of the religious group IURD (aka UCKG), went from a very small market share to second place in the ratings in 2007. Rede Record's growth, the fastest among the country's networks, has, alas, been fueled by the tithes of IURD's followers (source here, Portuguese only).

    \n

    One could argue that, as internet access becomes widespread, we should expect its decentralized and democratic nature to be on the side of those who are trying to enhance their maps with valuable information. Nevertheless, someone who doesn’t directly look for it will be carried away by the noise or, worse, by misguided information. One has to be guided, from childhood, by a rational mind. And having a rational mind depends heavily on this person’s education and upbringing, which depends on what kind of parents they have. Oops.

    \n

    Of course there are exceptions: those who make an effort to think for themselves despite their environment. They seem, however, to be outnumbered by the masses under the influence of the bandwagon effect.

    \n

    Where lies the hope for a predominantly rational and atheist world anyway?

    " } }, { "_id": "LiaogK2NWgcLthcrM", "title": "Rationality Quotes January 2010", "pageUrl": "https://www.lesswrong.com/posts/LiaogK2NWgcLthcrM/rationality-quotes-january-2010", "postedAt": "2010-01-07T09:36:05.162Z", "baseScore": 5, "voteCount": 6, "commentCount": 143, "url": null, "contents": { "documentId": "LiaogK2NWgcLthcrM", "html": "

    \n
    \n
    \n

    A monthly thread for posting rationality-related quotes you've seen recently (or had stored in your quotesfile for ages).

    \n\n
    \n
    " } }, { "_id": "Qyix5Z5YPSGYxf7GG", "title": "Less Wrong Q&A with Eliezer Yudkowsky: Video Answers", "pageUrl": "https://www.lesswrong.com/posts/Qyix5Z5YPSGYxf7GG/less-wrong-q-and-a-with-eliezer-yudkowsky-video-answers", "postedAt": "2010-01-07T04:40:35.546Z", "baseScore": 49, "voteCount": 44, "commentCount": 100, "url": null, "contents": { "documentId": "Qyix5Z5YPSGYxf7GG", "html": "

    On October 29th, I asked Eliezer and the LW community if they were interested in doing a video Q&A. Eliezer agreed and a majority of commenters were in favor of the idea, so on November 11th, I created a thread where LWers could submit questions. Dozens of questions were asked, generating a total of over 650 comments. The questions were then ranked using the LW voting system.

    On December 11th, Eliezer filmed his replies to the top questions (skipping some), and sent me the videos on December 22nd. Because voting continued after that date, the order of the top questions in the original thread has changed a bit, but you can find the original question for each video (and the discussion it generated, if any) by following the links below.

    Thanks to Eliezer and everybody who participated.

    Update: If you prefer to download the videos, they are available here (800 MB, .wmv format, sort the files by 'date created').

    Link to question #1.

    Link to question #2.

    Link to question #3.

    Link to question #4.

    Eliezer Yudkowsky - Less Wrong Q&A (5/30) from MikeGR on Vimeo.

    Link to question #5.

    (Video #5 is on Vimeo because Youtube doesn't accept videos longer than 10 minutes and I only found out after uploading about a dozen. I would gladly have put them all on Vimeo, but there's a 500 MB/week upload limit and these videos add up to over 800 MB.)

    Link to question #6.

    Link to question #7.

    Link to question #8.

    Link to question #9.

    Link to question #10.

    Link to question #11.

    Link to question #12.

    Link to question #13.

    Link to question #14.

    Link to question #15.

    Link to question #16.

    Link to question #17.

    Link to question #18.

    Link to question #19.

    Link to question #20.

    Link to question #21.

    Link to question #22.

    Link to question #23.

    Link to question #24.

    Link to question #25.

    Link to question #26.

    Link to question #27.

    Link to question #28.

    Link to question #29.

    Link to question #30.

    If anything is wrong with the videos or links, let me know in the comments or via private message.

    " } }, { "_id": "XDs4iPczbSuktqtXx", "title": "Communicating effectively: form and content", "pageUrl": "https://www.lesswrong.com/posts/XDs4iPczbSuktqtXx/communicating-effectively-form-and-content", "postedAt": "2010-01-06T09:52:10.657Z", "baseScore": 17, "voteCount": 15, "commentCount": 10, "url": null, "contents": { "documentId": "XDs4iPczbSuktqtXx", "html": "

    Effective communication techniques, particularly in written communication, are an important part of the aspiring rationalist's toolkit. Alicorn's recent post makes excellent points about niceness, and touches parenthetically on the larger issue of form versus content.

    \n

    The general claim, when defending either rudeness or poor spelling, is \"what matters is the content in what I'm saying, not the form\". Well, I suspect this is one of the myths of pure reason. What matters about your content is what you do with it, pragmatically. Are you here to convey ideas to others? Then you will achieve your aims more effectively if nothing about the form distracts from the content. (That you need to have content goes almost without saying.)

    Conscientious programmers are aware that source code is read and modified much more often than it is written. They know that it's harder to debug code than it was to write it in the first place. They invest more effort in making their code readable than a naive programmer might, because they estimate that this effort will be handsomely repaid in future savings.

    Conversation is no different. Your intent (in a forum like LW, anyway) is to cause others to ponder certain ideas. It's in your interest to consider the limitations of your interlocutors, their expectations, their attention span, their sensitivity, their bounded rationality, so that the largest possible fraction of your effort goes into delivering the payload, versus dissipating as waste heat. There are more readers than writers, making it rational to spend time and effort working on the form of your message as well as the content.

    You even need to keep in mind that people are stateful. That is, they don't just consider the local form you've chosen for your ideas; they also apply heuristics based on past interactions with you.

    These considerations apply to more than just \"niceness\". They apply to any instances where you notice that people fail to take away the intended message from your writings. When people respond to what you write, even with criticism, a downvote or a complaint, they are doing you a service; you can at least use that feedback to improve. Most will simply ignore you, quietly. Given enough feedback, the form of your communication will improve over time.

    \n

    And I would be quite surprised, given what I know of human minds, if this did not also eventually improve the content of your thinking. I find exchange with others indispensable in sharpening my own skills, at any rate, and that is why I aspire to be not just nice but also clear, engaging, and so on.

    I have gotten a lot of mileage out of, among others, Richard Gabriel's Writer's Workshop book, and Peter Elbow's Writing with Power which introduced me to freewriting.

    What techniques do you, as rationalists, find useful for effective communication?

    " } }, { "_id": "7dmjJfEyCwDnukkzP", "title": "Treat conspicuous consumption like hard nipples?", "pageUrl": "https://www.lesswrong.com/posts/7dmjJfEyCwDnukkzP/treat-conspicuous-consumption-like-hard-nipples", "postedAt": "2010-01-06T07:54:11.000Z", "baseScore": 2, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "7dmjJfEyCwDnukkzP", "html": "

    Robin asked, in relation to correlations between sexual prompts and apparently innocent behaviors:

    \n

    So what would happen if we all became conscious of the above behaviors being strong clues that men are in fact actively trying for promiscuous short term sex?  Would such behaviors reduce, would long term relations become less exclusive, or what?  Maybe we just couldn’t admit that these are strong clues?

    \n

    It isn’t usually activeness that people mind most in others’ wrongdoings, but conscious intention. These usually coincide, but when they don’t we are much more forgiving of  unintentional actions, however active. So if it became known that an interest in cars or charity was a symptom of sexual desire I think it would be seen as similar to those other ‘actions’ that show sexual desire; a bad message to your spouse about your feelings, but far from a conscious attempt to be unfaithful.

    \n

    While it’s not a crime to have physical signs of arousal about the wrong person, I assume it’s considered more condemnable to purposely show them off to said person. I think the same would go for the changes in interests above; if everyone knew that those behaviours were considered signs of sexual intent, realising you had them and purposely allowing potential lovers to see them would be seen as active unfaithfulness, so you would be expected to curb or hide them. Most people would want to hide them anyway, because showing them would no longer send the desired signal. Other activities are presumably popular for those interested in sex exactly because conspicuously wanting sex doesn’t get sex so well. If certain interests became a known signal for wanting sex they would be no more appealing than wearing a sign that says ‘I want sex’. This would be a shame for all those who are interested in charity and consumerism  less contingently.


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "w8g7AkSbyApokD3dH", "title": "A Suite of Pragmatic Considerations in Favor of Niceness", "pageUrl": "https://www.lesswrong.com/posts/w8g7AkSbyApokD3dH/a-suite-of-pragmatic-considerations-in-favor-of-niceness", "postedAt": "2010-01-05T21:32:38.765Z", "baseScore": 115, "voteCount": 105, "commentCount": 198, "url": null, "contents": { "documentId": "w8g7AkSbyApokD3dH", "html": "

    tl;dr: Sometimes, people don't try as hard as they could to be nice.  If being nice is not a terminal value for you, here are some other things to think about which might induce you to be nice anyway.

    \n

    There is a prevailing ethos in communities similar to ours - atheistic, intellectual groupings, who congregate around a topic rather than simply to congregate - and this ethos says that it is not necessary to be nice.  I'm drawing on a commonsense notion of \"niceness\" here, which I hope won't confuse anyone (another feature of communities like this is that it's very easy to find people who claim to be confused by monosyllables).  I do not merely mean \"polite\", which can be superficially like niceness when the person to whom the politeness is directed is in earshot but tends to be far more superficial.  I claim that this ethos is mistaken and harmful.  In so claiming, I do not also claim that I am always perfectly nice; I claim merely that I and others have good reasons to try to be.

    \n

    The dispensing with niceness probably springs in large part from an extreme rejection of the ad hominem fallacy and of emotionally-based reasoning.  Of course someone may be entirely miserable company and still have brilliant, cogent ideas; to reject communication with someone who just happens to be miserable company, in spite of their brilliant, cogent ideas, is to miss out on the (valuable) latter because of a silly emotional reaction to the (irrelevant) former.  Since the point of the community is ideas; and the person's ideas are good; and how much fun they are to be around is irrelevant - well, bringing up that they are just terribly mean seems trivial at best, and perhaps an invocation of the aforementioned fallacy.  We are here to talk about ideas!  (Interestingly, this same courtesy is rarely extended to appalling spelling.)

    \n

    The ad hominem fallacy is a fallacy, so this is a useful norm up to a point, but not up to the point where people who are perfectly capable of being nice, or learning to be nice, neglect to do so because it's apparently been rendered locally worthless.  I submit that there are still good, pragmatic reasons to be nice, as follows.  (These are claims about how to behave around real human-type persons.  Many of them would likely be obsolete if we were all perfect Bayesians.)

    \n
      \n
    1. It provides good incentives for others.  It's easy enough to develop purely subconscious aversions to things that are unpleasant.  If you are miserable company, people may stop talking to you without even knowing they're doing it, and some of these people may have ideas that would have benefited you.
    2. It helps you hold off on proposing diagnoses.  As tempting as it may be to dismiss people as crazy or stupid, this is a dangerous label for us biased creatures.  Fewer people than you are tempted to call these things are genuinely worth writing off as thoroughly as this kind of name-calling may tempt you to do.  Conveniently, both these words (as applied to people, more than ideas) and closely related ones are culturally considered mean, and a general niceness policy will exclude them.
    3. It lets you exist in a cognitively diverse environment.  Meanness is more tempting as an earlier resort when there's some kind of miscommunication, and miscommunication is more likely when you and your interlocutor think differently.  Per #1, not making a conscious effort to be nice will tend to drive off the people with the greatest ratio of interesting new contributions to old rehashed repetitions.
    4. It is a cooperative behavior.  It's obvious that it's nicer to live in a world where everybody is nice than in a world where everyone is a jerk.  What's less obvious, but still, I think, true, is that the cost of cooperatively being nice while others are mean is in fact very low.  This is partly because human interaction is virtually always iterated, (semi-)public, or both; and also because it's just not very hard to be nice.  The former lets you reap an excellent signaling effect:
    5. It signals the hell out of your maturity, humility, and general awesome.  If you spend as much time on the Internet as I do, you read a few online content publishers who publicly respond to their hate mail.  It can sometimes be funny to read the nasty replies.  But I generally walk away thinking more of the magnanimous ones who are patient even with their attackers.
    6. It promotes productive affect in yourself and others.  The atmosphere of a relationship or group has many effects, plenty of which aren't cognitively luminous, and some of which can spill over into your general mood and whatever you were hoping to use your brain for.
    7. It is useful in theoretical discussions to draw a distinction between being mean to someone and doing something that's seriously morally wrong, but this line is fuzzier or completely absent in human prephilosophical intuitions.  If you are ever troubled by ethical akrasia, it may be easier to stave off if you try to avoid delivering small slights and injuries as well as large violations.
    8. It yields resources in the form of friendly others.  Whether you are an introvert or an extrovert, other people can be useful to have around, and not even just for companionship.  Compared to indifferent or actively hostile neighbors, it's an obvious win to be nice and win what goodwill you can.
    9. It can save time - often yielding a net benefit, rather than wasting time as is sometimes complained.  For instance, if a miscommunication is made, a mean response is to interpret the misstatement at face value and ridicule or attack - this can devolve into a time-consuming fight and may never resolve the initial issue.  A nice response is to gently clarify, which can be over in minutes.
    " } }, { "_id": "foiCqg9CLuhSTqiLq", "title": "TakeOnIt: Database of Expert Opinions", "pageUrl": "https://www.lesswrong.com/posts/foiCqg9CLuhSTqiLq/takeonit-database-of-expert-opinions", "postedAt": "2010-01-05T20:54:27.847Z", "baseScore": 21, "voteCount": 18, "commentCount": 8, "url": null, "contents": { "documentId": "foiCqg9CLuhSTqiLq", "html": "

    Ben Albahari wrote to tell us about TakeOnIt, which is trying to build a database of expert opinions.  This looks very similar to the data that would be required to locate the Correct Contrarian Cluster - though currently they're building the expert database collaboratively, using quotes, rather than by directly polling the experts on standard topics.  Searching for \"many worlds\" and \"zombies\" didn't turn up anything as yet; \"God\" was more productive.

    \n

    The site is open to the public, you can help catalog expert opinions, and Ben says they're happy to export the data for the use of anyone interested in this research area.

    \n

    Having this kind of database in standardized form is critical for assessing the track records of experts.  TakeOnIt is aware of this.

    " } }, { "_id": "HR2ehAY3n4P58XQ5T", "title": "Why are wine competitions unpredictable?", "pageUrl": "https://www.lesswrong.com/posts/HR2ehAY3n4P58XQ5T/why-are-wine-competitions-unpredictable", "postedAt": "2010-01-05T06:54:29.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "HR2ehAY3n4P58XQ5T", "html": "

    Assume:

    \n\n

    What should the wine sellers do to maximize money? All enter A. Three win. Those three go on to enter B, and its winner enters C, while the others stay out unless y is radically > x, as they are likely to lose again.

    \n

    That means that A makes $9x, B $3x and C $x. An easy way for B and C to increase their profits, then, is to be less predictable. At the extreme of unpredictability, all wines would enter all competitions, and A, B and C would all make $9x profits, and medals wouldn’t mean much about wine quality.
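
    To make the arithmetic explicit, here is a minimal sketch in Python (my own, not from the post; the nine entrants and the common entry fee x are assumptions read off the $9x/$3x/$x figures above):

    # Entry-fee revenue per competition under the two regimes described above.
    x = 1.0  # assumed common entry fee per wine, in arbitrary units

    # Predictable judging: all nine wines enter A, A's three winners enter B, B's winner enters C.
    predictable = {'A': 9 * x, 'B': 3 * x, 'C': 1 * x}

    # Unpredictable judging: past results carry no information, so every wine enters everything.
    unpredictable = {'A': 9 * x, 'B': 9 * x, 'C': 9 * x}

    print(predictable)    # {'A': 9.0, 'B': 3.0, 'C': 1.0}
    print(unpredictable)  # {'A': 9.0, 'B': 9.0, 'C': 9.0}

    Under these assumptions, only A does well when judging is predictable, which is what gives B and C the incentive to be unpredictable.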

    \n

    Of course, when people notice that prizes correlate less with wine quality, they pay less attention to prizes, and competitions must charge lower entry fees. In reality most consumers get virtually no evidence of the quality of a wine by drinking it, so they are only likely to notice whether better wines get prizes if someone pays attention to the statistics and finds no correlation between winners in different competitions. Someone did this, and found exactly that, prompting me to try to explain it. What do you think?


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "Fi5PWBQ5xhgjEWwJL", "title": "When does an insight count as evidence?", "pageUrl": "https://www.lesswrong.com/posts/Fi5PWBQ5xhgjEWwJL/when-does-an-insight-count-as-evidence", "postedAt": "2010-01-04T09:09:23.345Z", "baseScore": 11, "voteCount": 21, "commentCount": 38, "url": null, "contents": { "documentId": "Fi5PWBQ5xhgjEWwJL", "html": "

    Bayesianism, as it is presently formulated, concerns the evaluation of the probability of beliefs in light of some background information. In particular, given a particular state of knowledge, probability theory says that there is exactly one probability that should be assigned to any given input statement. A simple corollary is that if two agents with identical states of knowledge arrive at different probabilities for a particular belief, then at least one of them is irrational.

    \n

    A thought experiment. Suppose I ask you for the probability that P=NP (a famous unsolved computer science problem). Sounds like a difficult problem, I know, but thankfully all relevant information has been provided for you --- namely the axioms of set theory! Now we know that either P=NP is provable from the axioms of set theory, or its negation is (or neither is provable, but let's ignore that case for now). The problem is that you are unlikely to solve the P=NP problem any time soon.

    \n

    So being the pragmatic rationalist that you are, you poll the world's leading mathematicians, and do some research of your own into the P=NP problem and the history of difficult mathematical problems in general to gain insight into perhaps which group of mathematicians may be more reliable, and to what extent thay may be over- or under-confident in their beliefs. After weighing all the evidence honestly and without bias you submit your carefully-considered probability estimate, feeling like a pretty good rationalist. So you didn't solve the P=NP problem, but how could you be expected to when it has eluded humanity's finest mathematicians for decades? The axioms of set theory may in principle be sufficient to solve the problem but the structure of the proof is unknown to you, and herein lies information that would be useful indeed but is unavailable at present. You cannot be considered irrational for failing to reason from unavailable information, you say; rationality only commits you to using the information that is actually available to you, and you have done so. Very well.

    \n

    \n

    The next day you are discussing probability theory with a friend, and you describe the one-in-a-million-illness problem, which asks for the probability that a patient has a particular illness, which is known to exist within only one in a million individuals, given that a particular diagnostic test with known 1% false positive rate has returned positive. Sure enough, your friend intuits that there is a high chance that the patient has the illness and you proceed to explain why this is not actually the rational answer.
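
    For reference, the usual calculation (a sketch; it assumes the test never misses a genuinely ill patient, an assumption the problem statement leaves implicit) is:

    \[ P(\mathrm{ill}\mid +) = \frac{P(+\mid \mathrm{ill})\,P(\mathrm{ill})}{P(+\mid \mathrm{ill})\,P(\mathrm{ill}) + P(+\mid \mathrm{well})\,P(\mathrm{well})} = \frac{1 \times 10^{-6}}{1 \times 10^{-6} + 0.01 \times (1 - 10^{-6})} \approx 10^{-4} \]

    so the positive test raises the probability of illness only to about 0.01%, far below the 'high chance' that intuition suggests.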

    \n

    \"Very well\", your friend says, \"I accept your explanation but I when I gave my previous assessment I was unaware of this line of reasoning. I understand the correct solution now and will update my probability assignment in light of this new evidence, but my previous answer was made in the absence of this information and was rational given my state of knowledge at that point.\"

    \n

    \"Wrong\", you say, \"no new information has been injected here, I have simply pointed out how to reason rationally. Two rational agents cannot take the same information and arrive at different probability assignments, and thinking clearly does not constitute new information. Your previous estimate was irrational, full stop.\"

    \n

    By now you've probably guessed where I'm going with this. It seems reasonable to assign some probability to the P=NP problem in the absence of a solution to the mathematical problem, and in the future, if the problem is solved, it seems reasonable that a different probability would be assigned. The only way both assessments can be permitted as rational within Bayesianism is if the proof or disproof of P=NP can be considered evidence, and hence we understand that the two probability assignments are each rational in light of differing states of knowledge. But at what point does an insight become evidence? The one-in-a-million-illness problem also requires some insight in order to reach the rational conclusion, but I for one would not say that someone who produced the intuitive but incorrect answer to this problem was \"acting rationally given their state of knowledge\".  No sir, I would say they failed to reach the rational conclusion, for if lack of insight is akin to lack of evidence then any probability could be \"rationally\" assigned to any statement by someone who could reasonably claim to be stupid enough. The more stupid the person, the more difficult it would be to claim that they were, in fact, irrational.

    \n

    We can interpolate between the two extremes I have presented as examples, of course. I could give you a problem that requires you to marginalize over some continuous variable, and with an appropriate choice for x I could make the integration very tricky, requiring serious math skills to come to the precise solution. But at what difficulty does it become rational to approximate, or do a meta-analysis?

    \n

    So, the question is: when, if ever, does an insight count as evidence?

    " } }, { "_id": "N8K9nT9on7ucJqj3a", "title": "Advertise while honoring the dead", "pageUrl": "https://www.lesswrong.com/posts/N8K9nT9on7ucJqj3a/advertise-while-honoring-the-dead", "postedAt": "2010-01-04T05:00:38.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "N8K9nT9on7ucJqj3a", "html": "

    Roadside suggestions not to kill yourself driving seem to be getting more humorous around here, which suggests that someone is trying to improve them. The best advertisements for careful driving I’ve seen are the little white stick crosses tied to trees and telegraph poles with withered flowers and photographs. I doubt I’m alone in finding the death of a real person smashed into a telegraph pole on my usual route more of a prompt to be careful than an actor looking stern at me or a pun (‘slowing down won’t kill you’). Plus nothing makes an activity feel safe like a gargantuan authority calmly informing me of the risks of it. If the government’s advertising something, everyone knows about it, and if there’s no panic or banning, it’s probably safe. A bedraggled, unprepared memorial is a reminder that ‘they’ aren’t really protecting me.

    \n

    But how could a road authority use these? They could either increase the number or the visibility of them. The usual methods of increasing the number defeat the purpose, and inventing fatal crashes might make people cross. Making memorials more visible is hard, because they are put up by families, besides which the home-made look is valuable, so billboard versions wouldn’t do so well. One solution is just to give bereaved families a bit of the money they usually use on a billboard to construct a temporary memorial of their choice at the site. That way more people would do it, and they could afford more extravagant decoration, so enhancing visibility.


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "X3GasHitwxquQbdFm", "title": "Disclosure vs. Bans: Reply to Robin Hanson", "pageUrl": "https://www.lesswrong.com/posts/X3GasHitwxquQbdFm/disclosure-vs-bans-reply-to-robin-hanson", "postedAt": "2010-01-04T01:09:04.349Z", "baseScore": 9, "voteCount": 8, "commentCount": 58, "url": null, "contents": { "documentId": "X3GasHitwxquQbdFm", "html": "

    A little while back I wrote a post arguing that abusive terms in credit card contracts (such as huge jumps in interest rates for being one day late with a payment) do not satisfy the conditions for standard economic models of asymmetric information between rational agents, but rather are trickery, pure and simple. If this is right, then the standard remedy of mandating the provision of more information to the less-informed party, but not otherwise interfering in the market (the idea being that any voluntary agreement must make both parties better off, no matter how strange or one-sided the terms may appear, so any interference in contracts beyond providing information will reduce welfare), is not the right one. There is no decent argument that those terms would appear in any contract where both parties knew what they were doing, so if you see terms like that, the appropriate conclusion is that someone has been screwed, not that the Goddess of Capitalism, in her infinite-but-inscrutable wisdom, has uncovered the only terms that, strange as they may seem to mere mortals, make a mutually beneficial contract possible. The goal is to get rid of those terms, and the most direct way to do that is simply to prohibit them. There are some good reasons to be reluctant to have the government go around prohibiting things, so mandatory disclosure might still be a good policy (though the Federal Reserve has investigated this and concluded that it isn't), but the goal would be to use the disclosures to eliminate the abusive terms. There is no justification for the standard economist's agnosticism about whether the terms are good or not: they're bad and the only question is how best to get rid of them.

    \n

    Robin Hanson left some comments on that post, in which he made the point that since people voluntarily choose these terms, they must like them and so prohibiting them would have to mean protecting people against their will. I answered that while I'm enough of a paternalist to be willing, under some circumstances, to impose limited protections on people even if those people would oppose them, I didn't think that was an issue here, as I would guess (though I have no proof) that the Federal Reserve's recent decision to ban certain credit card practices was probably very popular, even (especially?) among the people who are harmed by those practices. Robin's reply, as I understand it, is that this may be true, but since people can't simultaneously want to accept credit cards with those terms and at the same time favor banning those terms, it must be the case that they either don't understand the terms of the credit card contracts or they don't understand the effects of the ban. Somewhere there must just be some missing information, and therefore we must be back where we started, with the problem being a lack of information that could be resolved by providing more information.

    \n

    So I take Robin to be saying that bans such as those instituted by the Fed cannot be shown to be non-coercive to credit card customers simply by recourse to the hypothesized \"fact\" that the bans are popular, because anyone who voluntarily chooses those terms and also supports the ban must be being inconsistent somehow. He also seems to be saying that this inconsistency means that we're just back in the world of standard economic models where one side is ignorant.

    On the second point, either I am misunderstanding Robin or I think he's simply wrong. As I understand them, standard models of asymmetric information do not result in the ignorant party just getting screwed. Rather, they result in otherwise beneficial exchanges not happening, or in the terms being distorted in ways that come from the fact that one party is not informed (Robin, let me know if I misunderstand you or if you disagree). So even if everything else Robin says is right, it's still not the case that we're in a world where the problem is plain-vanilla asymmetric information, and where the solution is clearly to provide more information but not to ban. Robin might argue that people are in fact getting screwed but that the cure of banning is worse than the disease, but I don't see how he can argue that the central problem here is asymmetric information.

    As for the first point, a couple of commenters pointed out that the inconsistency of preferences that Robin points to is no more irrational than the kinds of preferences that we see people have all the time. I think they're right about this, but I think there's a more direct way to square the apparent inconsistency. People today \"voluntarily\" accept those provisions because that's the way to get a credit card. In the world as it currently exists, it's those terms or nothing (with the limited exception of cards issued by credit unions and the like, which avoid such trickery)* and so people \"choose\" those terms. But they'd be happier in a world where the equilibrium credit card terms are better, and they would prefer to be able to \"choose\" those. I don't want to overstate this point, as I think what's really going on is that people are badly confused and also (justifiably) hostile to credit card companies. But there is a perfectly sound story in which people would choose the terms and also approve of the ban.

    \n

    BTW, for a neat example of how people trick and make no bones about the fact that that's what they're doing, see here.

    \n

    *I would be very interested to know what fraction of people who have access to such alternatives use them.

    " } }, { "_id": "iizNtbgNMBtDQcHdm", "title": "Max Tegmark on our place in history: \"We're Not Insignificant After All\"", "pageUrl": "https://www.lesswrong.com/posts/iizNtbgNMBtDQcHdm/max-tegmark-on-our-place-in-history-we-re-not-insignificant", "postedAt": "2010-01-04T00:02:04.868Z", "baseScore": 23, "voteCount": 21, "commentCount": 88, "url": null, "contents": { "documentId": "iizNtbgNMBtDQcHdm", "html": "

    An uplifting message as we enter the new year, quoted from Edge.org:

    \n
    \n

    We're Not Insignificant After All

    Max Tegmark, Physicist, MIT

    When gazing up on a clear night, it's easy to feel insignificant. Since our earliest ancestors admired the stars, our human egos have suffered a series of blows. For starters, we're smaller than we thought. Eratosthenes showed that Earth was larger than millions of humans, and his Hellenic compatriots realized that the solar system was thousands of times larger still. Yet for all its grandeur, our Sun turned out to be merely one rather ordinary star among hundreds of billions in a galaxy that in turn is merely one of billions in our observable universe, the spherical region from which light has had time to reach us during the 14 billion years since our big bang. Then there are probably more (perhaps infinitely many) such regions. Our lives are small temporally as well as spatially: if this 14 billion year cosmic history were scaled to one year, then 100,000 years of human history would be 4 minutes and a 100 year life would be 0.2 seconds. Further deflating our hubris, we've learned that we're not that special either. Darwin taught us that we're animals, Freud taught us that we're irrational, machines now outpower us, and just last month, Deep Fritz outsmarted our Chess champion Vladimir Kramnik. Adding insult to injury, cosmologists have found that we're not even made out of the majority substance.

    \n

    The more I learned about this, the less significant I felt. Yet in recent years, I've suddenly turned more optimistic about our cosmic significance. I've come to believe that advanced evolved life is very rare, yet has huge growth potential, making our place in space and time remarkably significant.

    The nature of life and consciousness is of course a hotly debated subject. My guess is that these phenomena can exist much more generally than in the carbon-based examples we know of.

    I believe that consciousness is, essentially, the way information feels when being processed. Since matter can be arranged to process information in numerous ways of vastly varying complexity, this implies a rich variety of levels and types of consciousness. The particular type of consciousness that we subjectively know is then a phenomenon that arises in certain highly complex physical systems that input, process, store and output information. Clearly, if atoms can be assembled to make humans, the laws of physics also permit the construction of vastly more advanced forms of sentient life. Yet such advanced beings can probably only come about in a two-step process: first intelligent beings evolve through natural selection, then they choose to pass on the torch of life by building more advanced consciousness that can further improve itself.

    Unshackled by the limitations of our human bodies, such advanced life could rise up and eventually inhabit much of our observable universe. Science fiction writers, AI-aficionados and transhumanist thinkers have long explored this idea, and to me the question isn't if it can happen, but if it will happen.

    My guess is that evolved life as advanced as ours is very rare. Our universe contains countless other solar systems, many of which are billions of years older than ours. Enrico Fermi pointed out that if advanced civilizations have evolved in many of them, then some have a vast head start on us — so where are they? I don't buy the explanation that they're all choosing to keep a low profile: natural selection operates on all scales, and as soon as one life form adopts expansionism (sending off rogue self-replicating interstellar nanoprobes, say), others can't afford to ignore it. My personal guess is that we're the only life form in our entire observable universe that has advanced to the point of building telescopes, so let's explore that hypothesis. It was the cosmic vastness that made me feel insignificant to start with. Yet those galaxies are visible and beautiful to us — and only us. It is only we who give them any meaning, making our small planet the most significant place in our observable universe.

    Moreover, this brief century of ours is arguably the most significant one in the history of our universe: the one when its meaningful future gets decided. We'll have the technology to either self-destruct or to seed our cosmos with life. The situation is so unstable that I doubt that we can dwell at this fork in the road for more than another century. If we end up going the life route rather than the death route, then in a distant future, our cosmos will be teeming with life that all traces back to what we do here and now. I have no idea how we'll be thought of, but I'm sure that we won't be remembered as insignificant.\n

     

    \n
    \n

    A few thoughts: when considering the heavy skepticism that the singularity hypothesis receives, it is important to remember that there is a much weaker hypothesis, highlighted here by Tegmark, that still has extremely counter-intuitive implications about our place in spacetime; one might call it the bottleneck hypothesis - the hypothesis that 21st century humanity occupies a pivotal place in the evolution of the universe, simply because we may well be a part of the small space/time window during which it is decided whether earth-originating life will colonize the universe or not.

    The bottleneck hypothesis is weaker than the singularity hypothesis - we can be at the bottleneck even if smarter-than-human AI is impossible or extremely impractical, but if smarter-than-human AI is possible and reasonably practical, then we are surely at the bottleneck of the universe. The bottleneck hypothesis is based upon less controversial science than the singularity hypothesis, and is robust to different assumptions about what is feasible in an engineering sense (AI/no AI, ems/no ems, nuclear rockets/generation ships/cryonics advances, etc) so might be accepted by a larger number of people.

    Related is Hanson's \"Dream Time\" idea.

    " } }, { "_id": "Hug2ePykMkmPzSsx6", "title": "Drawing Two Aces", "pageUrl": "https://www.lesswrong.com/posts/Hug2ePykMkmPzSsx6/drawing-two-aces", "postedAt": "2010-01-03T10:33:46.067Z", "baseScore": 18, "voteCount": 20, "commentCount": 92, "url": null, "contents": { "documentId": "Hug2ePykMkmPzSsx6", "html": "

    Suppose I have a deck of four cards:  The ace of spades, the ace of hearts, and two others (say, 2C and 2D).

    \n

    You draw two cards at random.

    \n

    Scenario 1:  I ask you \"Do you have the ace of spades?\"  You say \"Yes.\"  Then the probability that you are holding both aces is 1/3:  There are three equiprobable arrangements of cards you could be holding that contain AS, and one of these is AS+AH.

    \n

    Scenario 2:  I ask you \"Do you have an ace?\"  You respond \"Yes.\"  The probability you hold both aces is 1/5:  There are five arrangements of cards you could be holding (all except 2C+2D) and only one of those arrangements is AS+AH.
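
    A quick enumeration in Python confirms the two stated probabilities (a sketch of my own; the card labels and the helper function are not from the post). Note that the scenario that follows also conditions on the outcome of your random choice of ace, so a plain count of hands like this is not, by itself, enough to settle the question the post poses.

    from fractions import Fraction
    from itertools import combinations

    deck = ['AS', 'AH', '2C', '2D']
    hands = list(combinations(deck, 2))  # the 6 equally likely two-card hands

    def prob_both_aces(condition):
        # P(hand is exactly {AS, AH} | condition holds), with hands equiprobable.
        matching = [h for h in hands if condition(h)]
        both = [h for h in matching if set(h) == {'AS', 'AH'}]
        return Fraction(len(both), len(matching))

    print(prob_both_aces(lambda h: 'AS' in h))               # Scenario 1: 1/3
    print(prob_both_aces(lambda h: 'AS' in h or 'AH' in h))  # Scenario 2: 1/5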

    \n

    Now suppose I ask you \"Do you have an ace?\"

    \n

    You say \"Yes.\"

    \n

    I then say to you:  \"Choose one of the aces you're holding at random (so if you have only one, pick that one).  Is it the ace of spades?\"

    \n

    You reply \"Yes.\"

    \n

    What is the probability that you hold two aces?

    \n

    Argument 1:  I now know that you are holding at least one ace and that one of the aces you hold is the ace of spades, which is just the same state of knowledge that I obtained in Scenario 1.  Therefore the answer must be 1/3.

    \n

    Argument 2:  In Scenario 2, I know that I can hypothetically ask you to choose an ace you hold, and you must hypothetically answer that you chose either the ace of spades or the ace of hearts.  My posterior probability that you hold two aces should be the same either way.  The expectation of my future probability must equal my current probability:  If I expect to change my mind later, I should just give in and change my mind now.  Therefore the answer must be 1/5.

    \n

    Naturally I know which argument is correct.  Do you?

    " } }, { "_id": "hAiZXTf6znGT2A5yr", "title": "A status theory of blog commentary", "pageUrl": "https://www.lesswrong.com/posts/hAiZXTf6znGT2A5yr/a-status-theory-of-blog-commentary", "postedAt": "2010-01-03T06:51:19.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "hAiZXTf6znGT2A5yr", "html": "

    Commentary on blogs usually comes in two forms: comments there and posts on other blogs. In my experience, comments tend to disagree and to be negative or insulting much more than links from other blogs are. In a rough count of comments and posts taking a definite position on this blog, 25 of 35 comments disagreed, while 1 of 12 posts did, even if you don’t count another 11 posts which link without comment, a seemingly approving act. Why is this?

    \n

    Here’s a theory. Let’s say you want status. You can get status by affiliating with the right others. You can also get status within an existing relationship by demonstrating yourself to be better than others in it. When you have a choice of who to affiliate with, you will do better not to affiliate at all with most of the people you could demonstrate your superiority to in a direct engagement, so you mostly try to affiliate with higher status people and ignore or mock from a distance those below you. However, when it is already given that you affiliate with someone, you can gain status by seeming better than they are.

    \n

    These things are supported if there is more status conflict in less voluntary relationships than in voluntary ones, which seems correct. Compare less voluntary relationships in workplaces, schoolgrounds, families, and between people and employees of organizations they must deal with (such as welfare offices) with more voluntary relationships such as friendships, romantic relationships, voluntary trade, and acquaintanceships.

    \n

    This theory would explain the pattern of blog commentary. Other bloggers are choosing whether to affiliate with your blog, visibly to outside readers. As in the rest of life, the blogger would prefer to be seen associating with good bloggers and winning stories than bickering with bad bloggers, who are easy to come by. So bloggers mostly link to good blogs or posts and don’t comment on bad ones.

    \n

    Commenters are visible only to others in that particular comments section. Nobody else there will be impressed or interested to observe that you read this blogger or story, as they all are. So the choice of whether to affiliate doesn’t matter, and all the fun is in showing superiority within that realm. Pointing out that the blogger is wrong shows you are smarter than they, while agreeing says nothing. So commenters tend to criticize where they can and not bother commenting on posts they agree with.

    \n

    Note that this wouldn’t mean opinions are shaped by status desire, but that there are selection effects so that bloggers don’t publicize their criticisms and commenters don’t publicize what they like.


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "NLNnwd4TeKj8PBvZa", "title": "Stigmergy and Pickering's Mangle", "pageUrl": "https://www.lesswrong.com/posts/NLNnwd4TeKj8PBvZa/stigmergy-and-pickering-s-mangle", "postedAt": "2010-01-02T19:14:21.432Z", "baseScore": 18, "voteCount": 18, "commentCount": 9, "url": null, "contents": { "documentId": "NLNnwd4TeKj8PBvZa", "html": "

    Stigmergy is a notion that an agent's behavior is sometimes best understood as coordinated by the agent's environment. In particular, social insects build nests, which have a recognizable standard pattern (different patterns for different species). Does the wasp or termite have an idea of what the standard pattern is? Probably not. Instead, the computation inside the insect is a stateless stimulus/response rule set. The partially-constructed nest catalyzes the next construction step.

    \n

    An unintelligent \"insect\" clambering energetically around a convoluted \"nest\", with the insect's local perceptions driving its local modifications is recognizably something like a Turing machine. The system as a whole can be more intelligent than either the (stateless) insect or the (passive) nest. The important computation is the interactions between the agent and the environment.
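
    As a concrete toy illustration (my own sketch in Python, loosely in the spirit of the classic wood-chip-gathering 'termite' simulations; the names and numbers are made up for the example), agents following a fixed stimulus/response rule, whose only memory beyond a single carried bit lives in the environment, end up sorting scattered chips into piles:

    import random

    # Rule: if empty-handed and standing on a chip, pick it up; if carrying a chip
    # and standing on another, drop the load on a free neighbouring cell. No agent
    # stores a plan or a map; the half-built piles steer each next action.
    SIZE, CHIPS, TERMITES, STEPS = 30, 120, 15, 200000
    grid = [[False] * SIZE for _ in range(SIZE)]
    placed = 0
    while placed < CHIPS:
        r, c = random.randrange(SIZE), random.randrange(SIZE)
        if not grid[r][c]:
            grid[r][c] = True
            placed += 1

    termites = [{'r': random.randrange(SIZE), 'c': random.randrange(SIZE), 'carrying': False}
                for _ in range(TERMITES)]

    def step(t):
        # Random walk one cell (wrapping at the edges), then react to what is underfoot.
        t['r'] = (t['r'] + random.choice((-1, 0, 1))) % SIZE
        t['c'] = (t['c'] + random.choice((-1, 0, 1))) % SIZE
        chip_here = grid[t['r']][t['c']]
        if not t['carrying'] and chip_here:        # stimulus: chip underfoot, hands free
            grid[t['r']][t['c']] = False           # response: pick it up
            t['carrying'] = True
        elif t['carrying'] and chip_here:          # stimulus: chip underfoot, hands full
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = (t['r'] + dr) % SIZE, (t['c'] + dc) % SIZE
                if not grid[nr][nc]:               # response: drop the load beside it
                    grid[nr][nc] = True
                    t['carrying'] = False
                    break

    for _ in range(STEPS):
        step(random.choice(termites))

    Run long enough, the chips tend to collect into fewer, larger piles, even though no rule ever mentions piles.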

    \n

    \n

    Theraulaz and Bonabeau have simulated lattice swarms, and gotten some surprisingly realistic wasp-nest-like constructions. A paper is in CiteSeer, but this summary gives a better rapid overview.

    \n

    Humans modify the environment (e.g. by writing books and storing them in libraries), and human behavior is affected by their environment (e.g. by reading books). Wikipedia is an excellent example of what human-human stigmergic coordination looks like. Instead of interacting directly with one another, each edit leaves a trace, and future edits respond to the trace (this impersonal interaction may avoid some biases towards face-saving and status-seeking).

    \n

    Andrew Pickering is a sociologist who studies science and technology. He wrote a book called \"The Mangle of Practice\". He includes in his sociology non-human \"actors\". For example, he would say that a bubble chamber acts on a human observer when the observer sees a tracery of bubbles after a particle physics experiment. This makes his theory less society-centric and more recognizable to a non-sociologist.

    \n

    As a programmer, the best way I can explain Pickering's mangle is by reference to programming. In trying to accomplish something using a computer, you start with a goal, a desired \"capture of machinic agency\". You interact with the computer, alternating between human-acts-on-computer (edit) phases, and computer-acts-on-human (run) phases. In this process, the computer may display \"resistances\" and, as a consequence, you might change your goals. Not all things are possible or feasible, and one way that we discover impossibilities and infeasibilities is via these resistances. Pickering would say that your goals have been \"mangled\". Symmetrically, the computer program gets mangled by your agency (mangled into existence, even).

    \n

    Pickering says that all of science and technology can be described by a network including both human and non-human actors, mangling each other over time, and in his book he has some carefully worked-out examples - Donald Glaser's invention of the bubble chamber, Morpurgo's experiments measuring an upper bound on the number of free quarks, Hamilton's invention of quaternions, and a few more.

    \n

    I hope you find these notions (stigmergy and the mangle) as provocative and intriguing as I do. The rest of this post is my own thoughts, far more speculative and probably not as valuable.

    \n

    Around each individual is a shell of physical traces that they have made - books that they've chosen to keep nearby, mementos and art that they have collected, documents that they've written. At larger radiuses, those shells become sparser and intermingle more, but eventually those physical traces comprise a lot of what we call \"civilization\". Should a person's shell of traces be considered integral to their identity? 

    \n

    Most of the dramatic increases in our civilization's power and knowledge over the past few thousand years have been improvements in these stigmergic traces. Does this suggest that active, deliberate stigmergy is an appropriate self-improvement technique, in rationality and other desirable traits? Maybe exoself software would be a good human rationality-improving project. I wrote a little seed called exomustard, but it doesn't do much of anything.

    \n

    Might it be possible for some form of life to exist within the interaction between humans and their environment? Perhaps the network of roads, cars, and car-driving could be viewed as a form of life. If all physical roads and cars were erased, humans would remember them and build them again. If all memory and experience of roads and cars were erased, humans would discover the use of the physical objects quickly. But if both were erased simultaneously, it seems entirely plausible that some other form of transportation would become dominant. Nations are another example. These entities self-catalyze and maintain their existence and some of their properties in the face of change, which are lifelike properties.

    \n

    What would conflict between a human and a stigmergic, mangled-into-existence \"capture of machinic agency\" look like? At first glance, this notion seems like some quixotic quest to defeat the idea and existence of automobiles (or even windmills). However, the mangle does include the notion of conflict already, as \"resistances\". Some resistances, like the speed of light or the semi-conservation of entropy, we're probably going to have to live with. Those aren't the ones we're interested in. There are also accidental resistances due to choices earlier in the mangling process.

    \n

    Bob Martin has a paper where he lists some symptoms of bad design - rigidity, fragility, immobility, viscosity. We might informally say \"the system *wants* to do such-and-so\", usually meaning some sort of inertia, or a tendency to continue on a previous path. These are examples of accidental resistances that humans chose to mangle into existence, and then later regret. Every time you find yourself saying \"well, it's not good, but it's what we have and it would be too expensive/risky/impractical to change\", you're finding yourself in conflict with a stigmergic pattern.

    \n

    Paul Grignon has a video \"Money as Debt\" that describes a world where we have built an institution gradually over centuries which is powerful and which (homeostatically) defends its own existence, but also (due to its size, power, and accidentally-built-in drive) steers the world toward disaster. The video trips a lot of my conspiracy-theory sensors, but Paul Grignon's specific claims are not necessary for the general principle to be sound: we can build institutions into our civilization that subsequently have powerful steering effects on our civilization - steering the civilization onto a collision course with a wall, maybe.

    \n

    In conclusion, stigmergy and Pickering's mangle are interesting and provocative ideas and might be useful building blocks for techniques to increase human rationality and reduce existential risk.

    " } }, { "_id": "hRKdXSc3RMyYfTRbc", "title": "Perfect principles are for bargaining", "pageUrl": "https://www.lesswrong.com/posts/hRKdXSc3RMyYfTRbc/perfect-principles-are-for-bargaining", "postedAt": "2010-01-02T04:38:21.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "hRKdXSc3RMyYfTRbc", "html": "

    When people commit to principles, they often consider one transgression ruinous to the whole agenda. Eating a sausage by drunken accident can end years of vegetarianism.

    \n

    As a child I thought this crazy. Couldn’t vegetarians just eat meat when it was cheap under their rationale? Scrumptious leftovers at our restaurant, otherwise to be thrown away, couldn’t tempt vegetarian kids I knew. It would break their vegetarianism. Break it? Why did the integrity of the whole string of meals matter? Any given sausage had such a tiny effect.

    \n

    I eventually found two explanations. First, it’s easier to thwart temptation if you stake the whole deal on every choice. This is similar to betting a thousand dollars that you won’t eat chocolate this month. Second, commitment without gaps makes you seem a nicer, more reliable person to deal with. Viewers can’t necessarily judge the worthiness of each transgression, so they suspect the selectively committed of hypocrisy. Plus everyone can better rely on and trust a person who honors his commitments with less regard to consequence.

    \n

    There’s another good reason though, which is related to the first. For almost any commitment there are constantly other people saying things like ‘What?! You want me to cook a separate meal because you have some fuzzy notion that there will be slightly less carbon emitted somewhere if you don’t eat this steak?’ Maintaining an ideal requires constantly negotiating with other parties who must suffer for it. Placing a lot of value on unmarred principles gives you a big advantage in these negotiations.

    \n

    In negotiating generally, it is often useful to arrange visible costs to yourself for relinquishing too much ground. This is to persuade the other party that if they insist on the agreement being in that region, you will truly not be able to make a deal. So they are forced to agree to a position more favorable to you. This is the idea behind arranging for your parents to viciously punish you for smoking with your friends if you don’t want to smoke much. Similarly, attaching a visible large cost – the symbolic sacrifice of your principles – to relieving a friend of cooking tofu persuades your friend that you just can’t eat with them unless they concede. So that whole conversation is avoided, determined in your favor from the outset.

    \n

    I used to be a vegetarian, and it was much less embarrassing to ask for vegetarian food then than it was afterward, when I merely wanted to eat vegetarian most of the time. Not only does absolute commitment get you a better deal, but it allows you to commit to such a position without disrespectfully insisting on sacrificing the other’s interests for a small benefit.

    \n

    Prompted by The Strategy of Conflict by Thomas Schelling.


    \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "DMygYdCG5wPnHThgo", "title": "Open Thread: January 2010", "pageUrl": "https://www.lesswrong.com/posts/DMygYdCG5wPnHThgo/open-thread-january-2010", "postedAt": "2010-01-01T17:02:39.373Z", "baseScore": 7, "voteCount": 10, "commentCount": 761, "url": null, "contents": { "documentId": "DMygYdCG5wPnHThgo", "html": "

    And happy new year to everyone.

    " } }, { "_id": "HmfxSWnqnK265GEFM", "title": "Are wireheads happy?", "pageUrl": "https://www.lesswrong.com/posts/HmfxSWnqnK265GEFM/are-wireheads-happy", "postedAt": "2010-01-01T16:41:31.102Z", "baseScore": 186, "voteCount": 163, "commentCount": 109, "url": null, "contents": { "documentId": "HmfxSWnqnK265GEFM", "html": "

    Related to: Utilons vs. Hedons, Would Your Real Preferences Please Stand Up

    \n

    And I don't mean that question in the semantic \"but what is happiness?\" sense, or in the deep philosophical \"but can anyone not facing struggle and adversity truly be happy?\" sense. I mean it in the totally literal sense. Are wireheads having fun?

    They look like they are. People and animals connected to wireheading devices get upset when the wireheading is taken away and will do anything to get it back. And it's electricity shot directly into the reward center of the brain. What's not to like?

    Only now neuroscientists are starting to recognize a difference between \"reward\" and \"pleasure\", or call it \"wanting\" and \"liking\". The two are usually closely correlated. You want something, you get it, then you feel happy. The simple principle behind our entire consumer culture. But do neuroscience and our own experience really support that?

    \n

    \n

    It would be too easy to point out times when people want things, get them, and then later realize they weren't so great. That could be a simple case of misunderstanding the object's true utility. What about wanting something, getting it, realizing it's not so great, and then wanting it just as much the next day? Or what about not wanting something, getting it, realizing it makes you very happy, and then continuing not to want it?

    \n

    The first category, \"things you do even though you don't like them very much\" sounds like many drug addictions. Smokers may enjoy smoking, and they may want to avoid the physiological signs of withdrawl, but neither of those is enough to explain their reluctance to quit smoking. I don't smoke, but I made the mistake of starting a can of Pringles yesterday. If you asked me my favorite food, there are dozens of things I would say before \"Pringles\". Right now, and for the vast majority of my life, I feel no desire to go and get Pringles. But once I've had that first chip, my motivation for a second chip goes through the roof, without my subjective assessment of how tasty Pringles are changing one bit.


    Think of the second category as \"things you procrastinate even though you like them.\" I used to think procrastination applied only to things you disliked but did anyway. Then I tried to write a novel. I loved writing. Every second I was writing, I was thinking \"This is so much fun\". And I never got past the second chapter, because I just couldn't motivate myself to sit down and start writing. Other things in this category for me: going on long walks, doing yoga, reading fiction. I can know with near certainty that I will be happier doing X than Y, and still go and do Y.

    Neuroscience provides some basis for this. A University of Michigan study analyzed the brains of rats eating a favorite food. They found separate circuits for \"wanting\" and \"liking\", and were able to knock out either circuit without affecting the other (it was actually kind of cute - they measured the number of times the rats licked their lips as a proxy for \"liking\", though of course they had a highly technical rationale behind it). When they knocked out the \"liking\" system, the rats would eat exactly as much of the food without making any of the satisfied lip-licking expressions, and the areas of the brain thought to correlate with pleasure wouldn't light up on the MRI. Knock out \"wanting\", and the rats seem to enjoy the food just as much when they get it, but aren't especially motivated to seek it out. To quote the science [1]:


    Pleasure and desire circuitry have intimately connected but distinguishable neural substrates. Some investigators believe that the role of the mesolimbic dopamine system is not primarily to encode pleasure, but \"wanting\" i.e. incentive-motivation. On this analysis, endomorphins and enkephalins - which activate mu and delta opioid receptors most especially in the ventral pallidum - are most directly implicated in pleasure itself. Mesolimbic dopamine, signalling to the ventral pallidum, mediates desire. Thus \"dopamine overdrive\", whether natural or drug-induced, promotes a sense of urgency and a motivation to engage with the world, whereas direct activation of mu opioid receptors in the ventral pallidum induces emotionally self-sufficient bliss.


    The wanting system is activated by dopamine, and the liking system is activated by opioids. There are enough connections between them that their activity is strongly correlated, but the correlation isn't perfect, and in fact opioid activation is less common than dopamine activation. Another quote:


    It's relatively hard for a brain to generate pleasure, because it needs to activate different opioid sites together to make you like something more. It's easier to activate desire, because a brain has several 'wanting' pathways available for the task. Sometimes a brain will like the rewards it wants. But other times it just wants them.
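
    To put the \"correlated but not identical\" claim in concrete terms, here is a made-up toy simulation (the firing rates and the coupling below are invented, not taken from the study): desire fires more often than pleasure, the two tend to move together, and the correlation still comes out well below one.

        # Toy simulation of coupled but distinct 'wanting' and 'liking' signals.
        # All rates below are illustrative, not empirical.
        import random

        random.seed(0)
        wanting, liking = [], []
        for _ in range(10_000):
            wants = random.random() < 0.6                      # desire fires often
            likes = random.random() < (0.5 if wants else 0.1)  # pleasure fires less often
            wanting.append(int(wants))
            liking.append(int(likes))

        def mean(xs):
            return sum(xs) / len(xs)

        def corr(xs, ys):
            mx, my = mean(xs), mean(ys)
            cov = mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])
            sx = mean([(x - mx) ** 2 for x in xs]) ** 0.5
            sy = mean([(y - my) ** 2 for y in ys]) ** 0.5
            return cov / (sx * sy)

        print(f'wanting rate: {mean(wanting):.2f}, liking rate: {mean(liking):.2f}')
        print(f'correlation:  {corr(wanting, liking):.2f}')  # positive, but well below 1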


    So you could go to all the trouble of finding a black market brain surgeon who'll wirehead you, and end up not even being happy. You'll just really, really want to keep the wirehead circuit running.


    Problem: large chunks of philosophy and economics are based upon wanting and liking being the same thing.

    By definition, if you choose X over Y, then X is the higher-utility option. That means utility represents wanting, not liking. But good utilitarians (and, presumably, artificial intelligences) try to maximize utility (or do they?). Maximizing utility correlates with maximizing happiness only contingently, not necessarily. In a worst-case scenario the two might not correlate at all - two such scenarios being wireheading and an AI without the appropriate common sense.
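
    A small, made-up illustration of how the two can come apart (the options and scores below are invented, loosely echoing the post's own examples): an agent whose choices track the first number tells you nothing reliable about the second.

        # Invented (wanting, liking) scores for a handful of options; none of these
        # numbers come from the post.
        options = {
            'check the internet': (9, 2),
            'eat Pringles':       (8, 4),
            'go for a walk':      (3, 8),
            'write the novel':    (2, 9),
        }

        # Revealed preference: the agent does whatever it wants most.
        chosen = max(options, key=lambda o: options[o][0])
        # What a happiness-maximizer would have picked instead.
        happiest = max(options, key=lambda o: options[o][1])

        print(f'chosen by wanting: {chosen} (liking = {options[chosen][1]})')
        print(f'best for liking:   {happiest} (liking = {options[happiest][1]})')

    Choice data here reveal wanting by construction and are silent about liking, which is exactly the gap the rest of the post worries about.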

    Thus the deep and heavy ramifications. A more down-to-earth example came to mind when I was reading something by Steven Landsburg recently (not recommended). I don't have the exact quote, but it was something along the lines of:


    According to a recent poll, two out of three New Yorkers say that, given the choice, they would rather live somewhere else. But all of them have the choice, and none of them live anywhere else. A proper summary of the results of this poll would be: two out of three New Yorkers lie on polls.


    This summarizes a common strain of thought in economics, the idea of \"revealed preferences\". People tend to say they like a lot of things, like family or the environment or a friendly workplace. Many of the same people who say these things then go and ignore their families, pollute, and take high-paying but stressful jobs. The traditional economic explanation is that the people's actions reveal their true preferences, and that all the talk about caring about family and the environment is just stuff people say to look good and gain status. If a person works hard to get lots of money, spends it on an iPhone, and doesn't have time for their family, the economist will say that this proves that they value iPhones more than their family, no matter what they may say to the contrary.

    The difference between enjoyment and motivation provides an argument that could rescue these people. It may be that a person really does enjoy spending time with their family more than they enjoy their iPhone, but is more motivated to work and buy iPhones than to spend time with their family. If this were true, people's introspective beliefs and public statements about their values would be true as far as they go, and their tendency to work overtime for an iPhone would be as much a \"hijacking\" of their \"true preferences\" as a revelation of them. This accords better with my introspective experience, with happiness research, and with common sense than the alternative does.

    Not that the two explanations are necessarily entirely contradictory. One could come up with a story about how people are motivated to act selfishly but enjoy acting morally, which allows them to tell others a story about how virtuous they are while still pursuing their own selfish gain.

    Go too far toward the liking direction, and you risk something different from wireheading only in that the probe is stuck in a different part of the brain. Go too far in the wanting direction, and you risk people getting lots of shiny stuff they thought they wanted but don't actually enjoy. So which form of good should altruists, governments, FAIs, and other agencies in the helping people business respect?

    Sources/Further Reading:

    1. Wireheading.com, especially on a particular University of Michigan study
    2. New York Times: A Molecule of Motivation, Dopamine Excels at its Task
    3. Slate: The Powerful and Mysterious Brain Circuitry...
    4. Related journal articles (1, 2, 3)

    " } } ] } } }