Talk:Jensen's inequality
From Wikipedia, the free encyclopedia
University logo
Jensen's inequality serves as the logo of the mathematics department of the University of Copenhagen.
In the language of measure theory
The statement in the language of measure theory holds iff μ is a probability measure, i.e. a positive measure of total mass 1. So I do not see the point of keeping the two different statements (language of measure theory and of probability theory): they are exactly the same, in two different notations! And there are several other coherent notations used in measure/probability theory that we could then use here as well (but of course, the Jensen article is not the right place to discuss notation in general). Therefore, I propose to delete the "language of measure theory" section and just leave the theorem stated in probability notation. I will delete it in a few days if I do not receive comments. gala.martin (what?) 18:39, 30 April 2006 (UTC)
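For reference, the two statements being compared can be written side by side; a sketch, assuming μ is a probability measure on (Ω, F), g is μ-integrable, and φ is convex:

```latex
% Measure-theoretic form (\mu a positive measure with \mu(\Omega) = 1):
\varphi\!\left(\int_\Omega g \, d\mu\right) \le \int_\Omega \varphi \circ g \, d\mu
% Probabilistic form: writing X for g and \operatorname{E} for the
% integral against \mu, the same inequality reads
\varphi\!\left(\operatorname{E}[X]\right) \le \operatorname{E}\!\left[\varphi(X)\right]
```

The two lines are term-by-term translations of one another, which is the point of the comment above.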
Use of g
In the measure-theoretic notation, the use of g is IMHO misleading. We should simply replace it by x. Indeed, the inequality written with x is no less general than the one with g(x), since the generality of the measure μ allows one to recover any function with no effort. Inserting a function in place of the identity here is not a generalization; it is just confusing. I'll try to be clearer. If you want to state a theorem about a random variable, you say "let X be a random variable with properties A and B; then X has property C". This is no less general than "let X be a random variable such that g(X) has properties A and B; then g(X) has property C". I think that is exactly what we are writing. Am I right? --gala.martin (what?) 09:36, 29 August 2006 (UTC)
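The reduction alluded to above can be made explicit via the pushforward (image) measure; a sketch, assuming g is measurable:

```latex
% Let \nu = \mu \circ g^{-1} be the pushforward of \mu under g.
% By the change-of-variables formula,
\int_\Omega \varphi(g(\omega)) \, d\mu(\omega)
  = \int_{\mathbb{R}} \varphi(x) \, d\nu(x),
% so Jensen's inequality for a general g under \mu is exactly the
% inequality for the identity function under the probability measure \nu.
```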
Different proofs
The graphical proof can be made clearer with a concrete example, say φ(x) = e^x and a particular distribution, say a discrete uniform random variable. The abstract proof number 2, in measure-theoretic notation, can also be illustrated graphically, so that it ties in perfectly with the intuitive graphical argument.
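The concrete example suggested above can be checked numerically; a minimal sketch, assuming φ = exp and X uniform on {0, 1, 2, 3} (the support is an illustrative choice, not from the discussion):

```python
import math

# Check Jensen's inequality phi(E[X]) <= E[phi(X)] for phi = exp
# and X discrete uniform on {0, 1, 2, 3}.
support = [0.0, 1.0, 2.0, 3.0]
p = 1.0 / len(support)  # uniform weights

mean = sum(p * x for x in support)                    # E[X] = 1.5
phi_of_mean = math.exp(mean)                          # phi(E[X])
mean_of_phi = sum(p * math.exp(x) for x in support)   # E[phi(X)]

print(phi_of_mean, mean_of_phi)
assert phi_of_mean <= mean_of_phi  # Jensen's inequality holds
```

Because exp is strictly convex and X is non-degenerate, the inequality here is strict.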
The first proof, by induction, does not look simple in the generalization step, with its use of delta functions and other notions. The third proof appears to have overly complicated notation, and the proof idea is unclear at the end; a summary or conclusion would help clarify it. It would also be good to point out how it differs from the second proof, if at all, beyond the notation.
The second proof is concise yet general. A translation to probability notation would simply involve rewriting the integral as an expectation and translating linearity of integration into linearity of expectation. It would be better placed first, with the following changes:
- use X instead of g for the random variable
- point out at the end that any subderivative could have been used in place of the right-hand derivative
- tie it in with the graphical proof and a concrete example.
--Chungc 05:55, 4 December 2006 (UTC)
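For context, the proof strategy discussed above (where any subderivative works, not just the right-hand derivative) is the supporting-line argument; a sketch:

```latex
% Since \varphi is convex, at x_0 = \operatorname{E}[X] there exists a
% subderivative c (e.g. the right-hand derivative) such that
\varphi(x) \ge \varphi(x_0) + c\,(x - x_0) \quad \text{for all } x.
% Substituting x = X and taking expectations (linearity of E):
\operatorname{E}[\varphi(X)]
  \ge \varphi(x_0) + c\,(\operatorname{E}[X] - x_0)
  = \varphi(\operatorname{E}[X]).
```

The last step uses only that the second term vanishes at x_0 = E[X], which is why any choice of subderivative c gives the same conclusion.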

