Thought prompted by having watched way too much television at work this week:
what we need are variable length television episodes. The entire problem with Law and Order is that, if they’ve found the bad guy and we’re only at minute 20, he’s not the bad guy. Even if it’s four minutes from the ending there’s probably still a twist coming, so someone is going to pull out a gun and/or jump out a window. You know everything you need to know about how an episode is going to play out just by looking at the clock.
Movies have this problem too. No, the protagonist isn’t going to die, we’re only 45 minutes in. No, their grand plan to crush the villain isn’t going to work, we’ve still got another hour that they’re going to have to fill somehow. Okay, this grand plan is going to work, because we’re down to eight minutes.
Reading a detective story or legal drama is pretty much the exact same problem - setup, obvious misdirection, apparent resolution that we know is a lie because we’re only halfway through the page count. I knew Harry Potter wasn’t dead because I could feel seventy more pages in my hand.
And that’s print, so we can’t fix it, but now that lots of people read on ebooks I’m astonished there’s not an app that lets authors set false endings and false lengths to their stories. And has no one recut Law and Order to be a thousand times less predictable just by virtue of not always lasting exactly 43 minutes plus commercial breaks? I would pay a lot of money for a Netflix-of-lies full of television episodes and movies of varying length and thus, for once, genuinely unpredictable.
Anonymous asked: Is it wrong of me to want to be able to talk to my friends a lot, to at least get more of a conversation than shallow "I agree" and "makes sense"? Throughout my life I've met people who started out being the friends I wanted, who talked to me and I got along with, but slowly end up just turning our talking into a sentence at most of an answer, or just slowly quitting talking to me. I just don't know if it's wrong of me to expect that a friend could be that way, or what, or if I'm a bad person
Feelings, needs, and wants never make you a bad person.
This is tumblr so I think the convention for emphasis is to repeat that, three times, emphasized and bolded and maybe in all caps. But saying it three times won’t make you believe it any more than saying it once.
This is my rule 1 because it’s the one I have to remind myself of most often. It’s just so easy to slip into a headspace of “that was an evil thought” or “that is a wrong preference” or “I am a bad person for needing this”, and it’s also easy to make the corresponding mistake, “he’s a cruel person for abandoning me when I was depressed” and “it’s wrong for people to cut me out of their life when I haven’t done anything wrong”.
Internalizing this rule will not just make you a happier person; I think it also makes you a kinder one. My needs and desires and preferences are not evil and I am not a bad person for them, and it’s only by believing that and giving myself that level of respect that I can fully respect the needs and desires and boundaries of others.
You sound as if you think all of these relationships ending is proof that there’s something wrong with what you want or who you are. There isn’t. There might be something wrong with how you’re going about it (not wrong as in “bad”, wrong as in “not strategic”), but there’s nothing at all wrong with wanting a deep and meaningful connection with people, or with being sad when you can’t find one.
It is pretty common for friends to grow apart. In my experience it is even more common for friendships that are based mostly on shared intellectual interests or on conversations to fall apart once one person loses interest in the topic or energy for those conversations. The situation you describe is more common with online/text-based friendships than in-person ones - often for in-person friendships there is a shared activity or experience that acts as a reinforcer, while online if people grow apart or develop different interests they just stop talking. If that’s part of your problem you could seek out more in-person relationships, but I realize that for some people that’s not an option at all.
It could help to have friendships that are built around a common activity or interest other than talking - for example, friendships in a team activity or video game or fandom - so that, when one person’s interest in intellectual conversation is waning, you have another way of connecting with that person to fall back on. If you notice that conversations are becoming one-sided, you could suggest “hey, is there a movie you’ve been wanting to see? we should watch it together?” or “want to make a dungeon run in [I don’t actually play any MMOs that have dungeon runs but I’m sure there are some]” or “I saw this and thought of you” with a link to something pretty, funny, or interesting.
And (I hope this doesn’t come across as condescending or obvious, I didn’t learn it until I was 19) it’s important to keep in mind why people make friends. People mostly are friends with people who are fun to be around. As in, time spent in their company is pleasant.
When trying to make friends I used to fall into the trap of trying to be a certain kind of person - trying to show them that I was smart, or useful, or funny - instead of trying to make them feel a certain way - happy, relaxed, intrigued. Making it pleasant to spend time with you is a much better way to make lasting friendships than being an objectively impressive person. Doing interesting things with people is a better way to get close to them than trying to be a certain sort of person.
Anonymous asked: What are your thoughts on a utilitarian marriage to a man, i.e. for the explicit purposes of finances, children, and professional development. I mean a pretty good man too, one you like just not like like due to your Sinful Sexual Perversions. As aromantic partners both could choose their own sexual lifestyle, and in annoying situations where your taste for the taco could cause some turbulence, you just play Ozzie and Harriet. It seems a stable situation with role models of both genders for tykes.
Do you mean, like, in general or for me personally? In general I approve wholeheartedly of people getting married for immigration, or for benefits, or for money, or to satisfy your conservative parents, or whatever. I think more marriages that are not ‘romantically and sexually exclusive partnership with one other person to whom I’m attracted’ would be a good thing, on the whole. I think that if you are planning to have kids you should have a co-parent and a legal arrangement that ensures the kids are provided for and the people who plan to parent them have a fair and predictable path to dispute custody if the relationship ends. Marriage is the best-known way of doing this but if people object to marriage and just get a lawyer to draw up a contract that’s great, and if people want to co-parent with platonic friends or agreeable ex-partners or relatives or sworn enemies, that’s also great (whether they marry them or just get a contract with equivalent protections).
Personally I think it’d be a bad idea for me to marry a man if he even sorta wanted to have sex with me. This is because I am super terrible awful bad at saying no to people if the thing they’re asking of me is something it’s socially acceptable for them to ask. So this marriage likely results in sex I don’t want, don’t enjoy, and am bad at saying ‘no’ to - and if he’s a half-decent guy, that’s an outcome he’d want to avoid too. This rules out any convenience marriages if there are feelings on one side.
I could marry an asexual guy (if he is also okay with no kissing) or a gay guy or a guy who has enough other partners he’s fine with his marriage categorically not involving sex, but I am not sure there are very many of those. And I really haven’t encountered enough homophobia for ‘marry a man to avoid homophobia’ to come out ahead on the cost/benefit analysis. I actually think on the whole I get social capital by being gay.
Altogether I am thinking this is not the best relationship model for me personally.
Maybe if he were veeeery rich and wasn’t going to donate any of the money unless I married him.
Today is my parents’ 25th wedding anniversary. They were 23 and 24 when they married. “So, two years older than I am now,” I said to my mom, and she looked absolutely appalled and jumped all over herself to declare that they were far too young and I’d better be planning to wait another five, which I find very reassuring.
Do wonder if she’d say the same thing if I were dating a guy.
So people are talking again about what it means to disagree with effective altruism.
The last time this came up I expressed doubt that you could really separate out apparently contingent facts about the movement (demographics, politics, prominence of certain ideas and organizations) from the near-definitional good of EA-as-a-question (how can we best promote global welfare?).
I’m also uneasy with proclamations to the effect that “effective altruism is just asking how to best promote global welfare, that’s not controversial!” I’m uneasy with it because it is controversial - there are lots of people who don’t think best promoting global welfare ought to be the purpose of charity, though rather few who think it’s actively a bad one.
I’m also uneasy with it because “our highly demanding ideology is really just unobjectionable common sense” seems to me to be… sort of a stealth insult to critics (“see, you don’t actually disagree with us” or else “you’d have to be really stupid to disagree with us”) and even inadvertently discouraging critics would be really bad.
The average American gives about 3% of their income to charity (rich people give a smaller percentage). Effective altruism demands that middle class and rich people give 10-50% of their income to charity. That is not trivial; it is not common sense; it cannot be assumed without justification.
Most people give to their church, school, and community. I give to those things too, but I don’t count it towards my 30% pledge - the pledged money goes to whoever needs it most, and none of those people live in Palo Alto, California. The claim that it is morally important to research and identify the charities doing the most good - instead of giving to causes you care about personally, or are knowledgeable about personally - is not trivial. Like more-whales pointed out, one argument against it is that if most variance is within-cause, acting in an area where you are knowledgeable would be better than giving in a domain where you don’t have any information.
(I think there’s overwhelming evidence that most expected-value variance is between-cause, but that can’t be assumed, it must be established.)
And - even if the philosophical case were trivial, acting on what you believe to be morally right is not easy. Conservative Christians think that the philosophical case against extramarital sex is obvious and straightforward, but I’ve never seen them say “it’s trivial to not have extramarital sex”. What effective altruism will have to do, to actually be meaningful, is not ‘construct an airtight case for giving’ but ‘actually convince people to give’. To be fair, no one has said “giving is trivial!” But I worry that some arguments that effective altruism is trivial/obvious could be misread in that way. To the extent we fail to emphasize this, we’re selling ourselves short - EA does motivate people to give, it’s one of the things we’re really pretty good at, and we should talk about that more openly.
Saying “effective altruism is trivial and obvious” just doesn’t seem to capture, to me, what made it compelling. My effective altruism isn’t trivial or obvious; it is complicated and fraught and full of self-doubt and not always gratifying. It could be catastrophically wrong. Effective altruism is demanding. Effective altruism asks a lot of you. Effective altruism doesn’t give you confidence or moral certainty; it just gives you new questions to ask.
Yeah, you get compensating benefits. You save lives. You get to join a community of people who are - and I don’t think this is a coincidence - among the most compassionate people I’ve ever known, in their personal lives as well as in their giving. You change the world. But you’re choosing the path of non-trivial questions and hard tradeoffs, and I suspect anyone I lured in by claiming otherwise would quickly leave.
like, to be clear, I don’t think it’s morally obligatory to support anorexics. There are a lot of people who, for their own mental health, cannot have a supportive healthy relationship with someone who has disordered thoughts around their weight/the desire to lose weight dangerously/scary beliefs about their own body and habits/whatever.
And if you think that it’s just not going to be healthy for you, it’s okay to say to a friend “I can’t be part of those conversations” and if the conversations are turning out to be unavoidable to say “maybe we need some distance”.
What I think is a problem is when people think it’s “supportive” to be there emotionally for your friend only when they’re saying the right things, only when they’re expressing a desire to beat the eating disorder and get to a healthy weight, only when they’re not experiencing distorted thinking. Because that just creates a dynamic where most of our intimate relationships are founded on not admitting (to ourselves or to other people) that we feel conflicted about recovery, that we’ve found mental workarounds that don’t actually challenge our distorted thoughts but which help make us functional, that we actually don’t think of our eating disorder as a separate beast that lunged at us from outside but as a natural outgrowth of our own preferences.
If you say “I can only support you when you’re working on recovery”, what you get is people who will learn, automatically, to lie and assure you they’re working on recovery, and who will have to seek out the actual hard emotional support from someone else.
And if you say “I can only support you while you’re working on recovery, because you expressing your distorted thoughts is an evil and malicious act on your part”, then, congrats, you’ve just given someone with an anxiety/self-loathing disorder something new to be anxious and self-loathing about!
It’s always okay to say “I can’t listen to this”. It’s pretty much never okay to say “how dare you experience this.”
constipation-isnt-very-romantic:
“I totally support anorexic people, unless of course they are using dangerous methods to try to lose weight”
I totally support anorexic people, unless of course they encourage others to use dangerous methods to lose weight. Having an ED is not an excuse for harming others.
The thing about ‘anti-pro-ana’ people, though, is that the things they consider “encouraging others” include “expressing a desire to lose weight unhealthily” or “having disordered thoughts about eating” and sometimes “publicly existing or being happy while having an eating disorder”. Even though there’s no scientific evidence that pro-ana causes anyone to develop an eating disorder, people use the “encouraging others” excuse to close those sites down.
Like, if a blogger posts “today I didn’t eat, and that felt amazing, I’m going to try for tomorrow as well”, that is objectively not ‘encouraging others’ to do anything. And yet it’s exactly the sort of thing that you consider sufficient license to ‘stop supporting’ that person because now they’re guilty of ‘harming others’.
In other words, you only support mentally ill people who never publicly admit to having disordered thoughts or objectionable goals, even for themselves. You only support anorexics who unconditionally want to recover, or who keep silent whenever they’re conflicted or uncertain about whether they want to recover. You think of anorexics as dangerous contagions who are morally obligated to lie about our own experiences lest the unvarnished truth seduce others down our path. Which means, let’s face it, that you don’t support us at all.
Problem: Justify the use of induction in science
Answer: Well, it’s worked so far.
Problem: Justify the use of anti-induction in science
Answer: Well, it’s never worked before.
(Not original to me - nor to the Sequences either, though it appears there.)
Interesting. When I read that article, I thought it was something Yudkowsky had come up with. At least, that’s the impression the essay gave me. I admit that sounds idiotic now that I write it down – the joke is too obvious to not already be known.
I guess we can add “plagiarism” to the list of reasons Yudkowsky is a bad writer. (If it wasn’t already on there because of his dearth of citations.)
This problem is discussed in philosophy as part of the problem of induction. In the blog post in question, “Where Recursive Justification Hits Bottom”, Eliezer writes:
But this doesn’t answer the legitimate philosophical dilemma: If every belief must be justified, and those justifications in turn must be justified, then how is the infinite recursion terminated?
And if you’re allowed to end in something assumed-without-justification, then why aren’t you allowed to assume anything without justification?
A similar critique is sometimes leveled against Bayesianism—that it requires assuming some prior—by people who apparently think that the problem of induction is a particular problem of Bayesianism, which you can avoid by using classical statistics. I will speak of this later, perhaps.
But first, let it be clearly admitted that the rules of Bayesian updating, do not of themselves solve the problem of induction.[…]
There are possible minds in mind design space who have anti-Occamian and anti-Laplacian priors; they believe that simpler theories are less likely to be correct, and that the more often something happens, the less likely it is to happen again.
And when you ask these strange beings why they keep using priors that never seem to work in real life… they reply, “Because it’s never worked for us before!”
Now, one lesson you might derive from this, is “Don’t be born with a stupid prior.” This is an amazingly helpful principle on many real-world problems, but I doubt it will satisfy philosophers.
If you read that and thought that Eliezer had invented, or claimed to have invented, the problem of induction - or even that his engagement with the problem of induction presents it as novel and fails to mention that it already exists and is discussed in the philosophical literature - then I think that is a misreading on your part.
Saying that (because you got that impression) Eliezer’s presentation of the problem of induction in this post is guilty of “plagiarism” is just patently unfair.
I was referring to the joke. As far as I can tell, no credit for that joke was given. So, if the joke isn’t his – and reliable sources indicate that it isn’t – that’s plagiarism.
I’m aware the problem of induction is well-known. I read Hume in undergrad!
Wait a second. If someone tells me “Eliezer Yudkowsky? I don’t take him seriously because he engaged in plagiarism” and what they meant is “he made a joke in a blog post and didn’t claim the joke was his, but left some people with that impression” I would be really, really angry with them.
But it’s worse than that. ogingat said “this joke is not original to the Sequences”, and from that you got “if the joke isn’t [Eliezer’s] – and reliable sources indicate that it isn’t - that’s plagiarism”? Even if we agreed that retelling jokes in a blog post without acknowledging where you heard them was plagiarism, you don’t have “reliable sources” to that effect. You don’t even have a link to a telling of the joke pre-Eliezer. You are vastly overstating the evidence in favor of your accusation and the gravity of the accusation itself to get from “Eliezer made a joke and some people on tumblr think it’s a joke that’s been around for a while” to “Yudkowsky is guilty of plagiarism”.
I’m just…really not comfortable with that. I guess technically all restating of things that you heard from someone else without disclaiming their nonoriginality, in any context, is “plagiarism”, but it feels a little bit like saying “I don’t trust Eliezer because of all the animals he’s ordered people to torture” when what you mean is that he eats meat.
Plagiarism is rightly a career-destroying accusation, and accusing someone of plagiarism over the content of the above blog post is kind of horrifying (by contrast, if Eliezer did claim to have invented the problem of induction, plagiarism would then be a fair accusation, which is why I assumed that was what you meant). I’ll double down that it’s not fair to level a charge of that gravity over something that trivial. I mean, come on, dude, it’s a one-line throwaway joke whose conceptual origins in the philosophical tradition are absolutely acknowledged in the blog post. (Though ogingat has the good point that reading the posts out of order would make it less obvious that he’s acknowledging the relevant philosophical tradition; maybe you were thinking of a different blog post when you made the accusation?)