Coupon collector's problem

In probability theory, the coupon collector's problem describes "collect all coupons and win" contests. It asks for the probability that more than t sample trials are needed to collect all n coupons when coupons are drawn uniformly at random with replacement. The mathematical analysis of the problem shows that the expected number of trials grows as Θ(n log n). For example, when n = 50 it takes about 225 trials on average to collect all 50 coupons.

Understanding the problem

The key to solving the problem is observing that it takes very little time to collect the first few coupons, but a long time to collect the last few. For instance, with n = 50 coupons, once 49 distinct coupons have been collected, each trial yields the missing coupon with probability 1/50, so it takes 50 trials on average to collect the very last one. This is why the expected time to collect all coupons is much longer than 50. The idea is therefore to split the total time into n intervals, one for each new coupon, and compute the expected length of each, as the simulation sketch below illustrates.
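To make this concrete, here is a minimal simulation sketch in Python (not part of the original article; collect_all is a hypothetical helper name) that records how many trials each new coupon takes:

import random

def collect_all(n, rng):
    """Draw coupons uniformly at random until all n types are seen;
    return the number of trials spent waiting for each new coupon type."""
    seen = set()
    gaps = []       # gaps[i] = trials spent waiting for the (i+1)-th new coupon
    trials = 0
    while len(seen) < n:
        trials += 1
        coupon = rng.randrange(n)
        if coupon not in seen:
            seen.add(coupon)
            gaps.append(trials)
            trials = 0
    return gaps

# Average waiting time for the last of n = 50 coupons over many runs:
runs = 10_000
avg_last = sum(collect_all(50, random.Random(seed))[-1] for seed in range(runs)) / runs
print(avg_last)  # ≈ 50: the final coupon alone takes about n trials on average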

Solution

Calculating the expectation

Let T be the time to collect all n coupons, and let t_i be the time to collect the i-th new coupon after i − 1 distinct coupons have been collected. Think of T and the t_i as random variables. Observe that the probability of drawing a new coupon when i − 1 distinct coupons are already held is p_i = (n − i + 1)/n. Therefore, t_i has a geometric distribution with expectation 1/p_i. By linearity of expectation we have:


\begin{align}
\operatorname{E}(T)& = \operatorname{E}(t_1) + \operatorname{E}(t_2) + \cdots + \operatorname{E}(t_n) 
= \frac{1}{p_1} + \frac{1}{p_2} +  \cdots + \frac{1}{p_n} \\
& = \frac{n}{n} + \frac{n}{n-1} +  \cdots + \frac{n}{1}  = n \cdot \left(\frac{1}{1} + \frac{1}{2} + \cdots + \frac{1}{n}\right) \, = \, n \cdot H_n.
\end{align}

Here H_n is the n-th harmonic number. Using the asymptotics of the harmonic numbers, we obtain:


\operatorname{E}(T) = n \cdot H_n = n \ln n + \gamma n + \frac{1}{2} + o(1), \ \ \text{as} \ n \to \infty,

where \gamma \approx 0.5772156649 is the Euler–Mascheroni constant.
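As a numerical check (a minimal Python sketch, not part of the original article), one can compare the exact value n · H_n with the asymptotic approximation for n = 50:

import math

def expected_trials(n):
    # Exact expectation E(T) = n * H_n, with H_n the n-th harmonic number.
    return n * sum(1.0 / k for k in range(1, n + 1))

n = 50
exact = expected_trials(n)                      # ≈ 224.96
gamma = 0.5772156649                            # Euler–Mascheroni constant
approx = n * math.log(n) + gamma * n + 0.5      # n ln n + γn + 1/2
print(exact, approx)                            # both ≈ 225, agreeing to two decimals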

Now one can use Markov's inequality to bound the desired probability:

\operatorname{P}(T \geq c n H_n) \le \frac1c.
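As an empirical sanity check (a minimal Python sketch, not from the original article; trials_to_collect is a hypothetical helper name), one can estimate the tail probability for n = 50 and c = 2 and compare it with the Markov bound of 1/2:

import random

def trials_to_collect(n, rng):
    # Count draws until all n coupon types have appeared.
    seen, trials = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        trials += 1
    return trials

n, c = 50, 2
threshold = c * n * sum(1.0 / k for k in range(1, n + 1))   # c * n * H_n ≈ 450
runs = 10_000
tail = sum(trials_to_collect(n, random.Random(s)) >= threshold
           for s in range(runs)) / runs
print(tail, "vs. Markov bound", 1 / c)   # empirically well below 0.5

The Markov bound is quite loose here; the true tail probability decays much faster than 1/c, as the sharper Chebyshev bound below already suggests.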

Calculating the variance

Using the independence of the random variables t_i, we obtain:


\begin{align}
\operatorname{Var}(T)& = \operatorname{Var}(t_1) + \operatorname{Var}(t_2) + \cdots + \operatorname{Var}(t_n) \\ 
& = \frac{1-p_1}{p_1^2} + \frac{1-p_2}{p_2^2} +  \cdots + \frac{1-p_n}{p_n^2} \\
& \leq \frac{n^2}{n^2} + \frac{n^2}{(n-1)^2} +  \cdots + \frac{n^2}{1^2} \\
& \leq n^2 \cdot \left(\frac{1}{1^2} + \frac{1}{2^2} + \cdots \right) = \frac{\pi^2}{6} n^2 \leq 2 n^2,
\end{align}

where the first inequality uses 1 − p_i ≤ 1, and the equality \sum_{k \ge 1} 1/k^2 = \pi^2/6 is the value \zeta(2) of the Riemann zeta function, evaluated in the Basel problem.
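As a numerical illustration (again a minimal Python sketch, not part of the original article), the exact variance can be summed directly and compared against both bounds for n = 50:

import math

def variance(n):
    # Exact Var(T) = sum over i of (1 - p_i) / p_i**2, with p_i = (n - i + 1) / n.
    total = 0.0
    for i in range(1, n + 1):
        p = (n - i + 1) / n   # probability of a new coupon given i - 1 collected
        total += (1 - p) / p ** 2
    return total

n = 50
print(variance(n))               # exact ≈ 3837.9
print(math.pi ** 2 / 6 * n ** 2) # bound ≈ 4112.3
print(2 * n ** 2)                # cruder bound = 5000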

Now one can use Chebyshev's inequality to bound the desired probability:

\operatorname{P}\left(|T- n H_n| \geq cn\right) \le \frac{2}{c^2}.
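Spelling out the step: applying Chebyshev's inequality to T with \operatorname{E}(T) = n H_n and the bound \operatorname{Var}(T) \le 2n^2 derived above gives

\operatorname{P}\left(|T - n H_n| \geq cn\right) \le \frac{\operatorname{Var}(T)}{(cn)^2} \le \frac{2n^2}{c^2 n^2} = \frac{2}{c^2}.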

Connection to probability generating functions

An alternative combinatorial technique based on probability generating functions can also be used to solve the problem: see Coupon collector's problem (generating function approach).
