Plato’s weekend readings, episode 56

Here it is, our regular Friday diet of suggested readings for the weekend.

A brief history of the term “liberalism.”

The weird history of the concept of empathy, and a question: is it really that useful?

How stable are democracies? Possibly not as much as we’d like.

Fish have feelings. Maybe. And they recognize human faces. Possibly.

Passive aggressiveness: not a good idea.

A critique of the concept of “post-truth” that quickly goes nowhere.

Boredom: the elucidation of the obvious.

Newcomb’s problem divides philosophers: which box(es) would you take?

203 thoughts on “Plato’s weekend readings, episode 56”

  1. Liam Uber

    As a well-adjusted conservative liberal with empathy for all living creatures except flies in my kitchen, I think Newcomb’s problem is suitable for bored children that have nothing better to do. Some people play the lottery and some people don’t – perhaps it has something to do with the ineffability of truth, or the intractability of our political process.


  2. SocraticGadfly

    You saw my G+ answer on the Newcomb boxes …. as a secularist, I simply cut that Gordian knot of its premise with “the word of philosophy, sharper than any two-edged sword.”


  3. Daniel Kaufman

    This sort of thing — i.e. the plants and fish etc — is why scientists and the public desperately need philosophy. People read this stuff and think that eating a salad or linguine con vongole is like eating your uncle.


  4. stevenjohnson

    The Super-Intelligent Being has predicted what you’re going to choose, and filled the boxes accordingly. That is the premise according to the presentation of the problem. Our choice, then, is which do we want: $1,000 or $1,000,000? That doesn’t seem to be much of a problem.

    I’m not quite sure what the talk about free will is about. One reasonable meaning of free will is doing what you want without compulsion or restraint. No issue for this kind of free will arises in this problem. Another meaning of free will sees it as the ability to freely choose what you want, such as choosing to have nice fingernails instead of choosing the temporary relief to your nerves from biting them. Except this kind of free will is not free; it costs. And I think it can cost more than the person can pay. I think this notion of free will is an absurdity.

    If the problem is about denying the premise that your choice can be predicted, then it is illogical to accept the premise that anybody is giving away $1,000, much less a million.

    Straining as hard as I could, it occurred to me that maybe this is a problem of induction à la Hume and Popper, where your due diligence about the Super-Intelligent Being’s track record is entirely irrelevant, and her perfect record was basically luck or miraculous. But the problem doesn’t seem to be well formulated to address this issue. Also, this kind of objection to induction applies to an individual’s perception of a seemingly predictable outside, objective world, too. The question then is how thinking of any sort is feasible.

    And that’s not Newcomb’s problem…I think. I really can only conclude I don’t really understand what Newcomb’s problem is, bereft of philosophy as I am. Can anyone explain what Newcomb’s problem is about?


  5. synred

    I more or less knew the history of ‘liberal’. The article seems to have a bit of a conservative bent.

    It would have been more interesting if they had discussed the transition in meaning, what commonalities there are between the old and newer meanings, and how it got made into a dirty word such that wussy liberals stopped calling themselves ‘liberals.’


  6. Robin Herbert

    If the premiss were that the super intelligent being was an infallible predictor then the answer would be easy. Picking two boxes could never possibly net you more than $1,000 and picking box B will infallibly net you $1,000,000.

    Then you should try to pick box B, although, since the fact of which box you choose has already been decided in advance, you may not be able to.

    Those things would strictly follow from the premisses.

    But since the premiss is only that the super intelligent being has had a perfect track record so far, there is no strict logical conclusion. We are left constrained by the fact that, as far as we know, there is no such thing as a super intelligent being which predicts everything perfectly, and a lot hinges on what is not known about universes in which such beings exist.

    If we are living in a universe where there are super intelligent beings with perfect track records of prediction, then how do we know that this universe does not also have retro-causality, so that our decision may well affect the choice of the super-intelligent being? So Dr Edmonds’ premiss that you cannot influence a decision in the past depends entirely on whether or not we are imagining a universe in which your decision can influence the past.

    Similarly, Dr Ahmed’s premiss that the being is always right is not quite correct. The being has always been right in the past. In universes in which there are super intelligent beings with perfect track records of prediction, does it follow that it is a good bet that she will be correct this time? Maybe the Gambler’s Fallacy holds in such a universe and the more often she gets it right, the higher the probability that the next guess will be wrong (for example, her predictive power may be drained by each prediction).

    Or maybe in such a universe of perfect predictors there are also a small number of perfectly unpredictable choosers and when one comes across the other the universe is destroyed in a massive paradox. And you might be one such perfectly unpredictable chooser. In which case it might be wiser to refrain from taking part in the exercise.

    Formulated as it is, there is no right or wrong answer.


  7. Coel

    Re: Newcomb’s boxes. Isn’t the best choice blatantly obvious?

    If there is a fact of the matter as to what the boxes contain, at the time of your decision, then you cannot possibly lose by taking both.

    The only way in which you could lose is if retro-active causation enabled your decision to affect what is in the boxes. Yet I’m pretty sure that retro-active causation is not a feature of our world at the macroscopic level of boxes and wads of dosh. (It just about might feature with quantum entangled particles and Bell’s inequalities, though it is weird enough there.)

    Hence, am I right in thinking that this problem is actually about human psychology, rather than a genuine puzzle?


  8. Robin Herbert

    Hi Coel,

    Yet I’m pretty sure that retro-active causation is not a feature of our world on the macroscopic level of boxes and wads of dosh.

    But, as I point out above, I am pretty sure that super intelligent perfect predictors are also not a feature of our world.

    So is this thought experiment about our world?


  9. Coel

    Hi Robin,

    So Dr Edmonds’ premiss that you cannot influence a decision in the past depends entirely on whether or not we are imagining a universe in which your decision can influence the past.

    But, if, in constructing our answer, we’re not supposed to assume basic facts about how the world is, then the problem becomes ill-posed. It becomes a non-problem because not enough information is given to arrive at an answer.

    Effectively, adding in “retro-active causation affects what the Super-Being put in the envelope” is a bit like adding “but quantities of money put in boxes by Super-Beings always develop minus signs when taken out”. If we’re allowed to add in extra things like that then we can always change the answer.


  10. synred

    The super intelligent being has had a perfect track record

    The problem of induction.

    There once was a scientific chicken who noticed that the big ‘robot’ always brought chicken feed every morning. The chickens developed a hypothesis that the ‘robot’ was a chicken-feeding machine. The hypothesis worked, and the chicken began to run to the ‘robot’ when it appeared. The scientific chicken’s hypothesis was repeatedly verified.

    Then one day, when the chicken ran to the ‘robot’, it grabbed her, cut her head off, plucked her, and cooked her for dinner.

    The problem of induction.

    Like the chicken we have to make assumptions. Let’s hope we’re right and we’re not living in some ‘To Serve Man’ scenario.


  11. Coel

    Hi Robin,

    So is this thought experiment about our world?

    To be in any way sensible, any such thought experiment has to presume: “everything about our world holds, except as specified”. Thus super-intelligent beings are allowed because the thought experiment specifies them.

    Retro-causation is not allowed, because it’s not specified. As above, if you’re allowed to add in extra non-world features at will, then the problem is simply not specified sufficiently to be sensible.

    In fact, if the being has accurately predicted our choice and filled the box accordingly then, strictly speaking, macro level retro causality has occurred.

    That doesn’t follow. A correct prediction does not require retro-causation.


  12. Robin Herbert

    Suppose the predictor is a massive computer, all the relevant data is fed into it, and it makes 99.99% accurate predictions about what I will choose and fills the boxes accordingly.

    This would mean that 99.99% of people who choose both boxes get only $1,000 and 99.99% of people who choose box B get $1,000,000.

    Knowing this, what would you choose?


  13. Coel

    Hi Robin,

    Knowing this, what would you choose?

    Let’s distinguish between the questions of what I would choose, and what I should choose to maximise my money.

    If I were a deterministic decision-making device, and the computer knew enough about me to compute what decision I would make, then most likely I would make the decision that the computer predicts.

    If the question is what I should choose (to maximise my money) then — in the absence of retro-active causation — I should choose both boxes. If I were competent enough to think this through and choose accordingly, then that’s also what I would choose.


  14. Robin Herbert

    Hi Coel,

    “But, if, in constructing our answer, we’re not supposed to assume basic facts about how the world is, then the problem becomes ill-posed.”

    Are we also not supposed to assume that there could be a perfectly accurate predictor without there being, de facto, retro-causality at the macro level?


  15. synred

    Apply Bayes’ theorem.

    But also, if you’re middle class and $1,000 doesn’t mean that much to you, choose the bigger payout regardless. This is why I think some people play the lottery. The cost is minimal and the payout seems infinite.
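    One way to cash out the Bayes/expected-value suggestion is to find the predictor accuracy at which the two strategies tie. The sketch below uses the standard $1,000 / $1,000,000 payoffs; the variable names are illustrative.

    ```python
    # Break-even predictor accuracy p for the standard Newcomb payoffs.
    #   EV(one-box)  = p * 1,000,000
    #   EV(two-box)  = 1,000 + (1 - p) * 1,000,000
    # Setting the two expected values equal and solving for p:
    BOX_A, BOX_B = 1_000, 1_000_000
    p_break_even = (BOX_A + BOX_B) / (2 * BOX_B)
    print(p_break_even)  # → 0.5005
    ```

    On the evidential reading, any credence in the predictor above 0.5005 favours taking only box B; the causal reading disputes that this calculation is the relevant one, which is exactly the split in this thread.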


  16. Robin Herbert

    Hi Coel,

    ” If I were competent enough to think this through and choose accordingly, then that’s also what I would choose.”

    Remember the computer has predicted your choice with 99.99% accuracy and left box B empty accordingly.

    So you would maximise your money by taking the choice that would guarantee you only $1,000?

    And has the person who chooses box B, knowing that the computer has predicted his choice with 99.99% accuracy and has put $1,000,000 in box B accordingly, minimised his money by making the choice that will guarantee him $1,000,000?


  17. Robin Herbert

    Let me put it another way.

    If the computer I described existed and 10,000 people chose both boxes and 10,000 people chose box B then on average 9,999 of the people who chose both boxes would get only $1,000 and one would get $1,000,000.

    Of the people who chose box B, 9,999 would receive $1,000,000 and one would receive nothing at all.

    Which group would you prefer to be in?
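    The two groups can be compared with a quick expected-value sketch, using the 99.99% accuracy figure from the scenario (the names and numbers here are illustrative):

    ```python
    # Expected payouts when a predictor that is right with probability 0.9999
    # fills box B before the player chooses.
    ACCURACY = 0.9999   # predictor's probability of being right
    BOX_A = 1_000       # box A always contains $1,000
    BOX_B = 1_000_000   # box B holds $1,000,000 iff one-boxing was predicted

    # Taking both boxes: box B is full only when the predictor was wrong.
    ev_two_box = BOX_A + (1 - ACCURACY) * BOX_B

    # Taking only box B: it is full whenever the predictor was right.
    ev_one_box = ACCURACY * BOX_B

    print(f"Expected value, two boxes: ${ev_two_box:,.0f}")   # $1,100
    print(f"Expected value, box B only: ${ev_one_box:,.0f}")  # $999,900
    ```

    Whether this expected-value comparison is the right way to decide, given that the boxes are already filled, is of course the very point at issue between Robin and Coel.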


  18. Coel

    Hi Robin,

    Are we also not supposed to assume that there could be a perfectly accurate predictor without there being, de facto retro causality at the macro level?

    I guess one could argue that, though in that case the specification of the problem is ambiguous. (And, note that the specification is only a perfect past track record, not a future one.)

    But, anyhow, the resolution of the problem seems clear:

    If one doesn’t allow for retro-causation then one picks both.
    If one does allow for retro-causation then one picks B.

    Sorted.


  19. synred

    As the predictor is perfect, the mismatched pairs (predict A+B, choose A) and (predict A, choose A+B) cannot occur. Thus, if the predictor’s prediction is based on you working this out (rather than on mysterious powers), then it will always predict A and you will always get the million by picking A. A zero payout never occurs. You can’t lose even if you act irrationally.

    As there’s no loss, if you get to play multiple times, you get rich no matter what choice you make. I doubt Monty Hall will introduce this game.

