Talk:Two envelopes problem/Arguments

From Wikipedia, the free encyclopedia

Notice This page is devoted to discussions and arguments concerning the two envelopes problem itself. Previous discussions of this kind have been moved from the main talk page, which is now reserved for editorial discussions only.



[edit] Simple Solution to Hardest Problem

"Let the amount in the envelope you chose be A. Then by swapping, if you gain you gain A but if you lose you lose A/2. So the amount you might gain is strictly greater than the amount you might lose."

Yes, but if you swap again, as it's the FIRST envelope that has A dollars, then by switching again, you either gain A/2 or lose A. Therefore, you can't switch forever and gain forever. It operates under the false assumption that whatever envelope you have has A, but that A is also a constant. You can't have both.

                 -t3h 1337 r0XX0r

Oh, just noticed it's irrelevance. Sorry, won't do this again... 1337 r0XX0r 15:39, 19 January 2006 (UTC)


Its irrelevance? I'm confused. The solution to the hardest problem seems pretty simple... you are switching the amount in the envelope depending on whether you look at it as having the larger or the smaller amount, from Y to 2Y. You could do the same thing with Y (you gain a Y if the other envelope has more) and 100Y (you lose 50Y if your envelope had the most). ~Tever

  • "To be rational I will thus end up swapping envelopes indefinitely" - This sentence is clearly false. After the first swap, I already know the contents of both the envelopes, and were I allowed a second swap, the decision whether or not to do it is simple. - Mike Rosoft 23:42, 24 March 2006 (UTC)

No, you never look into any envelopes in the original statement of the problem. INic 21:14, 27 March 2006 (UTC)

[[ The solution is simple. Everyone is trying to discern the probability of getting the greater amount, thus gaining the most out of the problem. However, if you were to not open either one, and continuously follow the pattern of swapping between envelopes, you would thus gain nothing. So, in order to gain the most, one would have to be content upon gaining at all, thus gaining something, and thus gaining the most out of the probability, for without taking one, you gain nothing. ]] - Apocalyptic_Kisses, April 6th, 2006

You are right. But what rule of decision theory do you suggest should be added to allow for this behaviour? It seems to me to be a meta-rule rather than an ordinary rule. Something like "When decision theory leads to absurd results please abandon decision theory!" INic 17:43, 18 April 2006 (UTC)

I may have missed something here, but my own analysis is based on the fact that money exists in units and A or whatever the amount is in the envelope is therefore a discrete variable. Any amount which is an odd number of units can only be doubled, which means that for any reasonable range of monetary amounts, amounts with even numbers of units are more likely to be halved than doubled. In fact, if you assume that there is a limit on the amount of cash available, any large even number can only be halved, and there are no large odd numbers. This means that across the distribution the losses from halving come out equal to the gains from doubling - because the larger numbers cannot be doubled. If you were to open the first envelope (which you might expect to tell you nothing), you would immediately swap any odd amount and stick with an even amount unless you were sufficiently confident it was a small amount. (Mark Bennet 26 Nov 07) —Preceding unsigned comment added by MDBennet (talk • contribs) 21:35, 26 November 2007 (UTC)

That money comes in discrete units is irrelevant to the problem itself as it is easy to restate the problem using a continuous reward. We can put gold instead of money in the envelopes for example. As gold doesn't come in discrete units (except at atomic levels but then you can't see it anyway) your reasoning based on odd/even amounts doesn't help us at all. We can assume that there is a limit to the amount of money/gold available (even though I don't know what the upper limit would be), but it's irrelevant to this problem too as the subject opening the first envelope doesn't know the limit anyway. It is also possible to restate the problem in such a way that no limit exists, so any solution based on a limit assumption will fail to shed light upon the real cause of the problem. iNic (talk) 23:33, 28 November 2007 (UTC)

[edit] What's the problem?

Can someone explain why this is such a "paradox"? It seems to me to be so much mathematical sleight of hand. Representing the payoffs as Y and 2Y or 2A and A/2 is all just misdirection. Just being allowed to play the game means you're getting paid Y. The decision involved is all about the other chunk. So you've got 1/2 chance of getting it right the first time and 1/2 of getting it wrong. Switching envelopes doesn't change that... 1/2*0 + 1/2*Y = Y/2, which, added to the guaranteed payoff of Y, gives 3Y/2, which is the expectation that we had before we started playing. 68.83.216.237 03:59, 30 May 2006 (UTC)
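For what it's worth, the arithmetic in the comment above is easy to check by simulation. This is only an illustrative sketch; the concrete value Y = 100 is my own choice, not from the comment:

```python
import random

# Envelopes hold Y and 2Y; we pick one at random. Compare the average
# payoff of keeping the pick with the average payoff of switching.
random.seed(0)
Y = 100
trials = 100_000
keep_total = switch_total = 0
for _ in range(trials):
    envelopes = [Y, 2 * Y]
    random.shuffle(envelopes)
    keep_total += envelopes[0]    # payoff if we keep our pick
    switch_total += envelopes[1]  # payoff if we switch
print(keep_total / trials, switch_total / trials)  # both near 150 = 3Y/2
```

Both averages come out the same, matching the 3Y/2 expectation described above.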

The problem is not to find another way to calculate that doesn't lead to contradictions (that is easy), but to pinpoint the erroneous step in the presented reasoning leading to the contradiction. That includes being able to say exactly why that step is not correct, and under what conditions it's not correct, so we can be absolutely sure we don't make this mistake in a more complicated situation where the fact that it's wrong isn't this obvious. So far no one has managed to give an explanation that others haven't objected to. That some of the explanations are very mathematical in nature might indicate that at least some think that this is a subtle problem in need of a lot of mathematics to be fully understood. You are, of course, free to disagree! INic 21:17, 30 July 2006 (UTC)

[edit] The problem is logically flawed

The solution of the paradox is that it is incorrect to assume that, with just two amounts of money available, you have a fixed amount of money in your hand and still the possibility of getting both a larger and a smaller amount by swapping (this isn't changed at all by opening the first envelope).

As an example, assume that there are two envelopes on the table, one with $50 and one with $100. If you choose the $50 envelope first, you gain $50 by swapping, and if you choose the $100 envelope first, you lose $50 by swapping, so on average you neither gain nor lose anything.

The formula resulting in a gain of 5/4 of the original amount applies only if there are three amounts of money available (with the ratios 4/2/1) and you have the middle amount in the first place. So if you have $50 and there are $100 and $25 on the table you gain (0.5*$100 + 0.5*$25 - $50) = $12.50.
If on the other hand you have initially $25, you gain (0.5*$100 + 0.5*$50 - $25) = $50,
whereas if you have the $100 initially, you lose (0.5*$50 + 0.5*$25 - $100) = -$62.50,
so on average you wouldn't gain or lose anything either with this 'three envelope' situation.

Thomas
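Thomas's $50/$100 example can be checked numerically. A short sketch (the dollar amounts come from the example itself):

```python
import random

# Two envelopes, $50 and $100; we pick one at random and record the
# gain (or loss) from swapping. The average gain should be zero.
random.seed(0)
trials = 100_000
gain = 0
for _ in range(trials):
    pair = [50, 100]
    random.shuffle(pair)
    mine, other = pair
    gain += other - mine  # +50 if we held $50, -50 if we held $100
print(gain / trials)      # near 0: swapping neither gains nor loses
```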

[edit] The problem of measuring gain/loss

I suggest that the problem exists with the fact that the "gain" or "loss" of the envelope holder is being considered. Assuming A is the smaller amount in one of the envelopes, then the average amount of money to be holding after picking the first envelope at random is:

{1 \over 2} A + {1 \over 2} 2A = {3 \over 2}A

Conversely, assuming A is the larger amount in one of the envelopes, the average amount of money to be holding after picking the first envelope at random is:

{1 \over 2} A + {1 \over 2}{A \over 2} = {3 \over 4}A

In both cases, after having picked the first envelope, the envelope holder is unable to determine whether the amount in his/her envelope is the greater or lesser amount. If he/she were to assume that he/she had the average amount, in both cases, he/she would realise that he/she had the same amount to gain/lose in switching envelopes ({A \over 2} and {A \over 4} respectively).
--TechnocratiK 16:56, 22 September 2006 (UTC)

[edit] Another proposed solution

Motion to include this in the main article... it's proposed by me. All in favour say "I"... seriously though, please give me feedback (rmessenger@gmail.com if you like), and please read the whole thing:

First, take A to be the quantity of money in the envelope chosen first, and B to be that of the other envelope. Then take A and B to be multiples of a quantity Q. Even if the quantity in A is known, Q is not. In this situation, there are two possible outcomes:

(I) A = Q, B = 2Q
(II) A = 2Q, B = Q

The question is whether we would stand to benefit, on average, by taking the money in B instead of our random first choice A. Both situations (I) and (II) are equally likely. In situation (I), we would gain Q by choosing B. In situation (II), we would lose Q by choosing B. Thus, the average gain would be zero:

{1 \over 2} Q + {1 \over 2} (-Q) = 0

The average proportion of B to A is irrelevant in the determination of whether to choose A or B. The entire line of reasoning in the problem is a red herring. It is true that the average proportion of B to A is greater than one, but it does not follow from that determination that the better option is to choose B.

For Example: Assume one of the envelopes contains $100 and the other $50. The two possibilities are:

(I) A = $50, B = $100
(II) A = $100, B = $50

If you repeat the above event many, many times, each time recording the following:

{R_i}={{B_i} \over {A_i}}, {A_i}, {B_i}

then \bar{R} will approach {5 \over 4}, and both \bar{A} and \bar{B} will approach $75. The value of \bar{R} is totally irrelevant. What is relevant is that \bar{A} = \bar{B}. This means that it makes no difference which envelope you choose. On average, you will end up with the same amount of money, which is the expected result.

Step 8 assumes that because \bar{R} \neq 1, one will gain on average by choosing B. This is simply and plainly false, and represents the source of the "paradox."

I like your reasoning! The only reason I deleted your addition of it was that it was original research, and that is strictly forbidden. However, it's not forbidden here at the talk page (but not encouraged either). I think you should elaborate on your idea some and try to get it published somewhere. What I lack in your explanation right now is a clear statement of the scope of your reasoning; how can I know in some other context that the expected value is of no use? I don't think you say it's always of no use, right? And another thing that is interesting with your solution is that you say it's correct to calculate the expected value as in step 7, only that it's of no use. But if it's of no use, in what sense is it correct? iNic 22:46, 13 November 2006 (UTC)
No, I was wrong about that. The expected value calculation is done wrong. This is because we are supposed to be comparing the expected values of two options. One option is to keep A, the other is to go with B. If we do two different EV calculations for both, we will see that we can expect the same value for each. Take Q to be the smallest value between the two envelopes. There are two equally likely possibilities for each option: we hold the greater amount, or we don't. If we hold the greater amount, we hold 2Q, if not, we hold Q; as such:
E_A = {1 \over 2}Q + {1 \over 2}2Q = {3 \over 2}Q
E_B = {1 \over 2}Q + {1 \over 2}2Q = {3 \over 2}Q
E_A = E_B = {3 \over 2}Q
They are equal, so we can expect the same amount of money regardless of whether we keep A or go with B. Amen.
And, their equation for E_B is wrong for one simple reason: the value of A is different in the two possibilities. As soon as the two envelopes are on the table, the value of Q never changes. Since we don't know Q from opening one envelope, A must take on two different values in the two terms of their EV equation. A is not a constant in the equation even if we know what it is. Here's their equation:
E_B = {1 \over 2}{1 \over 2}A + {1 \over 2}2A
In the first term, A=Q, in the second, A=2Q. Since Q is constant, A must change. An EV equation must be written in terms of constants. Since theirs isn't, it's wrong.
Think of it this way: since there are two different possibilities, there are two different A's. You don't know which one you have:
Possibility 1: A_1 = 2Q, B_1 = Q
Possibility 2: A_2 = Q, B_2 = 2Q
So their EV equation would look like this:
E_B = {1 \over 2}{1 \over 2}A_1 + {1 \over 2}2A_2
OK, so this means that you now agree with all other authors that step 7 and not step 8 is the erroneous step in the first case? And you seem to claim that step 7 is the culprit even in the next case where we look in the envelope, as you say that "A is not a constant in the equation even if we know what it is." Does this mean that you disagree with the common opinion that it's step 2 that is the erroneous step there? And what about the next variant? Is the expected value calculation not valid there either? And how can I know, in some other context, that a number, say 1, is not really a constant? You still have to specify how we can avoid this paradox in general. As long as you haven't generalized your idea to a general rule you haven't really proposed any new solution, I'm afraid. iNic 02:26, 15 November 2006 (UTC)
You're right. But it's simple: forget probability for a second, and imagine that as soon as the envelopes are on the table, you repeat the possible outcomes of the event millions of times. Each time the event takes place, there is some A_i and B_i which are the A and B values you got that time. These are different each time you do it. So as a simple rule, they can play no part in an expected value calculation; unless A actually was a constant. Here's where it gets interesting: it seems like the same situation, but it's not. If we imagine A is a constant, and if the same event were repeated, A would not change, as is explained in steps 1-6. If you model the situation this way (I have done computer models for both), such that the experimenter, if you will (the computer), first chooses a value for A, then randomly selects a value for B as a function of A based on steps 1-6, then the expected value of B actually does end up equaling five fourths of the EV of A!
If instead you tell the experimenter to first choose a value to be the smallest value, then choose which envelope to put it in, and put two times that in the other: the result is as you would expect for the real-life scenario. So while the probability of B>A is the same for each instance of both situations... and the two possible values of B_i with respect to A_i are the same, 'tis not the same situation.
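Since the comment above mentions computer models for both setups, here is one way such models might look. This is only a sketch of my own; the concrete values A = 100 and Q = 100 are illustrative choices, not from the original comment:

```python
import random

random.seed(1)
trials, A, Q = 100_000, 100, 100

# Model 1 ("other event"): the experimenter fixes A first, then makes
# the other envelope worth 2A or A/2 with equal probability.
b_total = sum(random.choice([2 * A, A / 2]) for _ in range(trials))
print(b_total / trials)  # near 125 = (5/4)A, as the comment claims

# Model 2 (true event): the experimenter fixes the smaller amount Q,
# the envelopes hold Q and 2Q, and we pick one of them at random.
a_total = b2_total = 0
for _ in range(trials):
    pair = [Q, 2 * Q]
    random.shuffle(pair)
    a_total += pair[0]   # the envelope we picked (A)
    b2_total += pair[1]  # the other envelope (B)
print(a_total / trials, b2_total / trials)  # both near 150 = (3/2)Q
```

The two models really do behave differently: only the first produces the 5/4 factor.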
I can make it even clearer how there are two different situations being represented by separating out the probabilistic variable:
Starting with the true event: the experimenter chooses a smallest value, call it Q, puts it in one envelope, and puts 2Q in the other. Then, let's say you flip a coin: if it's heads, Z_i = 0, if tails, Z_i = 1, for instance... now observe:
A_i = (1 + Z_i)Q
B_i = (2 - Z_i)Q
B_i = {{2-Z_i} \over {1+Z_i}}A_i
As you can see, now we have an unchanging algebraic expression for all the variables. It's easy to see that the result of the coin toss simply dictates which envelope you pick up. Please note that (referring to the last expression) B_i is either (1/2)A_i or 2A_i with equal probability. It's also easy to write our expected value expression! There is only one (well understood) probabilistic event: a coin toss. We know the expected value of Z is exactly 1/2. Plug it in and we get what we would expect: E_B = E_A = {3 \over 2}Q, which holds up regardless of whether Q is the average of a constantly changing value, or simply a constant through all instances. NOW, the "OTHER EVENT":
The experimenter first chooses a value for A, then chooses B_i based on steps 1-6. If we accommodate the coin toss, we get:
B_i = ({1 \over 2} + {3 \over 2}Z_i)A
First, notice that if Z=0, B_i = {1 \over 2}A, and if Z=1, B_i = 2A, and since the probability of either is equal, we would expect that this is identical to the true event, but it's not! If we try to solve for the expected value of B, we plug in our known expected value for Z and what do we get? Five fourths. What a surprise! So it's easy to see that, while these two situations appear similar, the similarities break down when Z assumes values other than 0 or 1. It's like saying: I'm a function with roots at 2, 4 and 7... what function am I? There can be many seemingly similar situations at first glance, with the primitive tools used to analyze this problem. But they behave differently.
So the moral: if it changes from instance to instance AND its expected value isn't already known, it CAN'T be used as a term in an expected value calculation! Just as you'd expect! We can't even use B_i to find the expected value of B, because if an EV equation were written in terms of a variable that changes all the time, then the expected value would always be changing!
And specifically addressing the problem: Step 1 is fine IF we remember that they have set A to equal the money in the envelope THIS TIME. Step 2 is testably TRUE. Steps 3-5 are stating the obvious. Step 6 is the problem. It is true within the parameters of step 1: that is, AT EACH INSTANCE, half of the time B=2A, and the other half B=(1/2)A. But this relates specifically to values of B_i and A_i. As I stated, if we want to calculate an expected value in this type of situation, it must be written in terms of ALREADY KNOWN expected values! We don't already know the expected values of either A or B, hence we can't squeeze blood from a turnip!
In the true situation, there are two truly random probabilistic variables: Q and Z. A and B are simply functions of Q and Z (Z being the binary decision, Q being the random number). In their situation, they took the random probabilistic variables to be A and Z, and let B be a function of those. This is different because it assumes that the probability distribution of A is totally random, when in fact it depends on that of the true independent variables (the ones that are actually chosen at random!!!). A was never chosen at random! Only Q and Z are!
When determining the expected value of a variable: Express the variable at any instance in terms of the random decisions or numbers that compose it. This equation will always hold true when relating the expected value of the variable to the known expected values of the random decisions or numbers.
i.e. suppose I said I was giving away between 0-5 dollars to one of two people, AND between 10-15 dollars to the other. You're one of those people, and... say, Joe is the other. So Y is the amount of money you just got, and J is the amount Joe got. Is it worth your while to switch with Joe?
First, identify the variables: I am choosing between two people, so there is one binary decision Z (independent). There is the amount of money between 0-5 dollars, call it q (independent), and the other quantity between 0-5 dollars (to be added to 10), call it Q (independent). Y and J are dependent upon these as such:
Y_i = Z_i q_i + (1 - Z_i)(10 + Q_i)
J_i = (1 - Z_i)q_i + Z_i(10 + Q_i)
And write expected value expressions as follows:
E_Y = E_Z E_q + (1 - E_Z)(10 + E_Q)
E_J = (1 - E_Z)E_q + E_Z(10 + E_Q)
We know E_Q = E_q = 2.5 and E_Z = 1/2. As such:
E_Y = E_J = $7.50
The point is, had you assumed your money Y_i was an independent variable and somehow tried to define poor Joe's money J_i in a given instance in terms of your own, you would have gotten it wrong! Neither J nor Y is an independent variable, hence we know nothing of their expected values, SO: if we had an expression for J in terms of Y, we wouldn't be able to fill in the expected value of Y, because it's dependent! We would be forced to do something tragic... assume Y is a constant, and define Joe's money in terms of this magical made-up constant! Therein lies the mistake! Note: When I give you your money, you know Y_i. This doesn't mean you know anything about the infinitely many other possible values of Y!
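The Joe example above can be simulated directly. This sketch just re-implements the formulas for Y and J with random q, Q and Z (my own illustration, assuming q and Q are drawn uniformly from [0, 5]):

```python
import random

# You get between $0-$5 or between $10-$15; Joe gets the other,
# decided by a coin flip Z. Average both amounts over many trials.
random.seed(0)
trials = 100_000
you_total = joe_total = 0
for _ in range(trials):
    q = random.uniform(0, 5)  # the small amount
    Q = random.uniform(0, 5)  # the amount added to 10
    z = random.randint(0, 1)  # coin flip: who gets which
    you_total += z * q + (1 - z) * (10 + Q)
    joe_total += (1 - z) * q + z * (10 + Q)
print(you_total / trials, joe_total / trials)  # both near 7.50
```

Neither party expects to gain by switching, exactly as the E_Y = E_J = $7.50 calculation says.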


Why is it that this thorough and indisputable resolution is not in the article? It just doesn't make sense that the original "problem", which is an insult to intelligence, has its own article, and the rational argument debunking the "problem" is disallowed from that article. Instead the article remains, an inane problem with a bunch of inane pseudo-solutions, while the real rational solution will not be submitted for reasons of academic bureaucracy. What happened to you Wikipedia, you used to be cool... Denito 10:28, 28 June 2007 (UTC)

[edit] History of the problem

Someone correct me if I'm wrong here, but in the original problem

Two people, equally rich, meet to compare the contents of their wallets. Each is ignorant of the contents of the two wallets. The game is as follows: whoever has the least money receives the contents of the wallet of the other (in the case where the amounts are equal, nothing happens). One of the two men can reason: "Suppose that I have the amount A in my wallet. That's the maximum that I could lose. If I win (probability 0.5), the amount that I'll have in my possession at the end of the game will be more than 2A. Therefore the game is favourable to me." The other man can reason in exactly the same way. In fact, by symmetry, the game is fair. Where is the mistake in the reasoning of each man?

the mistake in reasoning is two-fold. The first mistake is that the probability of winning is not 0.5. If they both are equally rich, then they both have a total wealth of W. The amount in one wallet would be any amount A such that 0 <= A <= W. So the probability that A is greater than the amount in the other wallet is A/W, which may or may not be 0.5.

Second, if A is greater than B, then A + B < 2A and the amount in his possession at the end of the game cannot be more than 2A.

Fryede 20:42, 11 December 2006 (UTC)
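As an aside, the "by symmetry, the game is fair" claim in the quoted problem can be illustrated by simulation. Note that the uniform wealth distribution used here is my own assumption; the original problem specifies none:

```python
import random

# Wallet game: whoever has less receives the contents of the other's
# wallet. Track one player's average net gain over many rounds,
# assuming (for illustration) wallets uniform on [0, W].
random.seed(0)
W, trials = 100, 100_000
net = 0
for _ in range(trials):
    a = random.uniform(0, W)  # my wallet
    b = random.uniform(0, W)  # the other wallet
    if a < b:
        net += b   # I have less: I win the other wallet's contents
    elif b < a:
        net -= a   # I have more: I lose my wallet's contents
print(net / trials)  # near 0: the game is fair by symmetry
```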

[edit] Three envelopes

I don't have a solution to propose but the formula (1/2 * 2A) + (1/2 * A/2) can also be interpreted as: there are three envelopes, they contain the amounts A/2, A and 2A, you know you are holding the envelope with the amount A, and you are given the choice to keep it or swap with one of the other two envelopes. So in this case if you choose to swap envelopes, the average or expected value is indeed 5/4A, but with this interpretation the above formula can be applied only for the first swap. H eristo 00:17, 5 March 2007 (UTC)


I agree with Heristo; in my opinion the paradox originates because we consider only 2 envelopes, but the envelopes are really 3, containing:

X/2, X, 2X.

Only 2 are actual envelopes, and the only possible pairs are {X/2, X} and {X, 2X}.

If we assume X = 100 and we consider that we have the same probability of holding each of the three envelopes (2 real and one virtual):

  $     Probability   Value x Prob.
  50        1/3           16.66
 100        1/3           33.33
 200        1/3           66.66


If we have the first possible couple {X/2, X}, we will have:

  $     Probability   Value x Prob.
  50        1/3           16.66
 100        1/3           33.33
 Sum                      50

And it will be indifferent whether we change envelopes.


If we have the second possible couple {X, 2X}, we will have:

  $     Probability   Value x Prob.
 100        1/3           33.33
 200        1/3           66.66
 Sum                     100

And again it will be indifferent whether we change envelopes.

In all possible scenarios it will be indifferent whether we change envelopes.


Salvacam 14:23, 24 April 2007 (UTC)



Comment

The fallacy or source of error may be in that the formula seems to allow 3 possible amounts for 2 envelopes. There is A, 2A and A/2.

I would suggest assigning only the values A and 2A OR A and 0.5A. Using A and 2A: by selecting one envelope, I have a 50% chance of selecting amount A and a 50% chance of selecting 2A. Without knowing what A may be, that gives me an expected value of 1.5A for my initial selection.

If I have selected the envelope holding A, then the other holds 2A. If I selected 2A, then the other holds A.

The two outcomes are equally likely, so the expected value of the opposite envelope is:

0.5 * A + 0.5 * 2A, or 1.5A.

This is no more than what I am holding. I have no incentive to switch.

Srobidoux 18:19, 13 May 2007 (UTC) srobidoux

[edit] Commentary 1

In general, I agree with Chase's answer. However, I also believe there is another, which is more straightforward, and ties in with his.

This paradox results from a wishful thinking bias. It only addresses the "destined to win" sequence, while there is a whole other "destined to lose" possibility that is not being addressed. If there is an apparently correct rationale for the "destined to win" sequence (which this article provides) then there must also be an apparently correct rationale for the "destined to lose" sequence which suffers from the same fallacy in and of itself. Combining the two and balancing them against each other gives you the correct answer of "coinflip". --76.217.81.40 16:21, 29 May 2007 (UTC)

[edit] my comment

Suppose that all we know is one envelope contains more money and the other less money. Then there is no paradox, right? This double/half version is just a special case of that, so there should be no paradox there either.

[edit] I think it starts off wrong

The steps and resulting formula do not take into account the initial state of the participant. The way I see it, anybody who is presented with this situation first has the option to choose A, B, or N (for Neither). Obviously, since N does not result in any benefit, it is not likely that a rational participant would choose to decline both A and B. However, it is important for N to be included, because it IS an option. The importance of including N becomes apparent when either A or B becomes a loss of benefit. In other words, if the scenario was...

You have $50. Before you are two envelopes that appear identical to you. One contains $100, while the other contains nothing. You may purchase either envelope for $50, but since that is all you have, you may only purchase one. You may also, if you so desire, choose neither and keep your existing $50.

...everything changes. We can now see how valuable having the option to choose N becomes. Now the "player" has a chance to have either $0, $50, or $100. Unlike the original scenario, we have an additional variable to consider: initial condition. The danger of losing money makes the option to not participate a valuable benefit to be considered. While the scenario I presented does vary from the "officially" stated scenario, I still think that the option to consider the initial condition applies. I'm nobody special, but this is my "math statements" for it.

N is a known amount that the participant currently has. A, B are unknown amounts that the participant does not have. N+A and N+B are confirmed, however, as being greater than N. Therefore, both A and B are better options than N alone.

I guess I can't represent this in numbers like other people can, but my understanding is that the value of switching is equal to the value of keeping. There is no further benefit, so except for non-rational reasons (greed, random choice, insanity, preference to choose left over right, etc) there should be no switching.

Maybe someone else can tell me how the math works out? - Nathaniel Mitchell 63.239.183.126 20:30, 9 August 2007 (UTC)

[edit] Problematic problem formulation

The reason for the paradox is an incorrectly formulated problem: we need to notice that we are dealing with a random experiment.

Let's analyze the problem mathematically. First, two definitions:

- A random variable is a mathematical function that maps outcomes of random experiments to numbers.

- Expected value of a random variable is the sum of the probability of each possible outcome of the experiment multiplied by its payoff ("value").

In order to calculate an expected value we must:

1. Define our experiment (Done in 'the setup' part)

2. Declare / Find-out the set of possible outcomes (constant values), and the probability of each outcome (constant probability)

3. Define the random variable (Define a payoff for each outcome of the experiment)

4. Calculate the expected value of our random variable.

Now, regarding the problem, as formulated:

Argument (1) states 'I denote by A the amount in my selected envelope'.

This argument can be understood in two manners:

a. We are talking about a specific instance of the experiment. A is the amount in the selected envelope in this specific instance, hence, A is constant.

b. A is a random variable that denotes the amount in the selected envelope in our experiment.

Since arguments (4) and (5) deal with the value of A, and not the expected value of A, we can rule out option (b).

So we are left with the interpretation (a).

Hence, Arguments 1 through 6 are referring to our specific instance of the experiment, and they are flawless.

The problem is with argument (7): it calculates the expected value of a constant. Expected values can only be calculated for random variables. This is where the problem is incorrectly formulated.

[edit] solution

The first proposed solution solves the paradox, the rest of the article just muddies the issue in my view. Petersburg 21:11, 12 August 2007 (UTC)

I disagree. The first solution states that step 7 is the cause of the paradox. If one denotes by A the value in a selected envelope then A must remain constant even if the value of A is unknown (the amount of money in the envelope is not changed, nor is the amount of money in the other envelope assigned to A). While the mistake becomes obvious in step 7, the critical mistake is made at step 6. If your envelope contains A then the other envelope contains either 2A or 1/2A, but the chance is 100/0 rather than 50/50. After all, one of the envelopes (2A or 1/2A) never even existed. Oddly enough, step 2 still remains true as a way for one to calculate the probability, even if there really is only one possibility. One must keep in mind that since we chose to denote one envelope by A, the 50/50 chance has already happened and determined the value in the other envelope (2A or 1/2A) as well as the value of A. Thus it would be foolish to take these values linked to the past probability and use them as if they were not linked. --NecaEum 91.155.63.118 00:31, 26 August 2007 (UTC)

[edit] Solution

The flaw in the switching argument is as follows:

In the calculation we assume two different cases. One in which the envelopes contain A and A/2 and one in which the envelopes contain A and 2A. These two cases give four possible combinations of the content of the envelopes:

1. A in my envelope and A/2 in the other.

2. A/2 in my envelope and A in the other.

3. A in my envelope and 2A in the other.

4. 2A in my envelope and A in the other.


In the erroneous calculation of the expected value we use only two of the four possible values and assume that the probability of each of them is 0.5. The correct calculation would be to sum all four possible values, each multiplied by 0.25. As we can see, the possible values of both envelopes are the same, so their expected values are also the same.

More generally

Expected value is defined as the sum of each possible value of a variable multiplied by the probability of its occurrence. In this problem we have no information about the possible values. The only thing we know is the proportion between the values in the two envelopes. We cannot calculate an expected value based on this proportion only. Yet we can see that every possible value in one envelope is also possible with the same probability in the other envelope, so the expected value of both envelopes is the same. This is easy to see in a case where there is a limited number of possible values with equal probability. For example, in one envelope there may be any sum from $1 to $1000 at random, and in the other its double. In the example above we took only two values: A/2 and A. But the same is also true if we have an infinite number of possible values with any distribution.


--Rafimoor (talk) 22:22, 2 May 2008 (UTC)
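Rafimoor's $1 to $1000 example above can be simulated directly. A sketch (assuming, as in the example, whole-dollar amounts chosen uniformly at random, with the other envelope holding the double):

```python
import random

# One envelope holds a random whole-dollar amount from 1 to 1000, the
# other its double; we pick one at random and compare keeping with
# switching.
random.seed(0)
trials = 100_000
keep_total = switch_total = 0
for _ in range(trials):
    x = random.randint(1, 1000)
    pair = [x, 2 * x]
    random.shuffle(pair)
    keep_total += pair[0]    # the envelope we picked
    switch_total += pair[1]  # the other envelope
print(keep_total / trials, switch_total / trials)
# both near 750.75 = (3/2) x mean of 1..1000: switching gains nothing
```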

[edit] Simple Solution

I propose this very simple solution:

You entered with $0. You just got an envelope with more than $0 in it. Don't debate about switching indefinitely, it doesn't matter which envelope you take, you just got something for nothing. —Preceding unsigned comment added by 129.3.157.107 (talk) 15:18, 8 May 2008 (UTC)