Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia

The Wikipedia Reference Desk covering the topic of mathematics.
WP:RD/MA

Welcome to the mathematics reference desk.
Before asking a question
  • Search first. Wikipedia is huge, and you can probably find the answer to your question much more quickly by looking for it yourself. Use the search box on the left or Google site search for searching Wikipedia. If there is no relevant information on Wikipedia, try an Internet search engine.
  • Do your own homework. The reference desk will not give you answers for your homework, although we will try to help you out if there is a specific part of your homework you do not understand. Make an effort to show that you have tried solving it first.
  • Do not request medical or legal advice. Any such questions may be removed. If you need medical or legal advice, do not ask it here. Ask a doctor, dentist, veterinarian, or lawyer instead. See also Wikipedia:Medical disclaimer and Wikipedia:Legal disclaimer.
  • Do not start debates or post diatribes. The reference desk is not a soapbox.

How to ask a question

  • Include a title and a question. It is easier for our volunteers if question formatting is consistent.
  • Be specific. Make the title meaningful, so volunteers who can help with your question will find it. Clearly state your question and include any information that might help to understand the context (for example, a wikilink or a link to an online resource). If your question deals with local or national issues, make sure you specify what area of the world it applies to.
  • Do not provide contact information, such as your e-mail address, home address, or telephone number. Be aware that the content on Wikipedia is extensively copied to many websites; making your e-mail address public here may make it very public throughout the Internet.
  • Sign your question. Type ~~~~ (four tildes) at the end of your question, to let the reference desk know who is asking.
  • Do not cross-post. Post your question at one section of the reference desk only.
  • Be patient. Your question probably will not be answered right away, so come back later and check for a response. Questions are normally answered at the same page on which they were asked. A complete answer to your question may be developed over a period of up to four days.
After reading the above, you may
ask a new question by clicking here.
How to answer a question
  • Be thorough. Provide as much of the answer as you are able to.
  • Be concise, not terse. Please write in a clear and easily understood manner. Keep your answer within the scope of the question as stated.
  • Provide links when available, such as wikilinks to related articles, or links to the information that you used to find your answer.
  • Be polite and assume good faith, especially with users new to Wikipedia.
  • Don't edit others' comments, except to fix formatting errors that interfere with readability.
  • Don't give any legal or medical advice. The reference desk cannot answer these types of questions.
 
See also:
Help desk
Village pump
Help manual



June 3

Non measurable sets having continuous boundaries

Is there any example of a non-measurable set in the plane whose boundary is a continuous closed curve?--Pokipsy76 (talk) 07:09, 3 June 2008 (UTC)

Yes. You can choose a Vitali set which is dense in [0,1]. Cross it with the interval, and you get a non-measurable subset of [0,1]x[0,1] whose boundary is the whole square. But [0,1]x[0,1] is the image of a continuous closed curve (a space-filling curve). Algebraist 10:25, 3 June 2008 (UTC)
Ok, let's consider a more restrictive hypothesis: the boundary must be a Jordan curve, that means it must also be non self-intersecting. Can we still find an example?--Pokipsy76 (talk) 13:31, 3 June 2008 (UTC)
Let B be a set in the plane whose boundary is some set C. Suppose we are given that B and C are disjoint. This implies that B is open, and thus Lebesgue measurable.
If C is a Jordan curve, then we know from the Jordan curve theorem that its complement in the plane consists of two connected components, one unbounded component A and one bounded component B, whose boundaries equal C. By definition B and C are disjoint, so the above argument applies and B is Lebesgue measurable. Eric. 144.32.89.104 (talk) 14:35, 3 June 2008 (UTC)
I don't follow your argument. By what definition are B and C disjoint? You've assumed they are, but you defined B to be any set in the plane... Doesn't that proof still allow for a non-measurable set which is the union of an open set and part (or all) of its Jordan curve boundary? I suspect that it isn't actually a problem (I haven't really studied any measure theory, so I'm not 100% sure), but it still needs to be addressed. --Tango (talk) 16:08, 3 June 2008 (UTC)
Suppose B is non-measurable and has boundary C, a Jordan curve. By subtracting the interior of B, we may assume that B is contained in C. Since Lebesgue measure is complete, it's enough to show that Jordan curves have zero area. Is this true? Algebraist 18:16, 3 June 2008 (UTC)
JSTOR tells me it is not. Damn. Algebraist 18:17, 3 June 2008 (UTC)
I was thinking the same thing, but didn't know enough measure theory to know either way. I know not all curves have zero area (Peano curves being the obvious example), but wasn't sure about Jordan curves. Did your investigation find an example of such a curve? Although, we don't actually need the measure to be zero, as long as it is measurable, correct? Did JSTOR tell you they weren't necessarily measurable, or just that the measure wasn't necessarily zero? --Tango (talk) 18:42, 3 June 2008 (UTC)
A Jordan curve is measurable (it's closed, after all), but it is not obvious to me that any dense subset of one is measurable, which is what we need. Any subset of a zero area set is measurable, so I was hoping for that. Yes, Osgood's ancient paper gives a construction, though I didn't read it. Algebraist 18:45, 3 June 2008 (UTC)
Assuming property 15 in Lebesgue measure is true, we're fine (any Jordan curve is a continuous injective image of the circle, all of whose subsets are measurable). That property is neither obvious nor referenced. Algebraist 19:01, 3 June 2008 (UTC)
Well, this conclusion seems to be incompatible with the assertion that there's a Jordan curve of positive area -- any set of positive measure has a non-measurable subset, in fact a subset of inner measure zero and outer measure equal to the measure of the original set.
Are you sure that any Jordan curve is a continuous image of a circle embedded in the plane? You seem to need that to get your conclusion from the property 15 stated above. But I think the stated property is just false -- there are continuous (in fact order-preserving) injections that blow the middle-thirds Cantor set up to a set of positive measure, so if you take a non-measurable subset of that set and pull it back, you refute the claim. --Trovatore (talk) 03:28, 4 June 2008 (UTC)
Actually I seem to have misread it -- I thought it was talking about continuous functions from Rn to Rn. It doesn't matter; it's wrong in either case. I went ahead and took it out. --Trovatore (talk) 03:36, 4 June 2008 (UTC)
So we know that any subset of a Jordan curve is measurable and this completely solves the problem. However I'm still wondering if a Jordan curve can really have non-zero measure...--Pokipsy76 (talk) 19:24, 3 June 2008 (UTC)
Go read A Jordan Curve of Positive Area, William F. Osgood, Transactions of the American Mathematical Society, Vol. 4, No. 1 (Jan., 1903), pp. 107-112 if you really want to. Unfortunately JSTOR doesn't have the diagrams, and I don't feel up to understanding it without them. Algebraist 19:32, 3 June 2008 (UTC)
Tango, yes for some reason I was (incorrectly) assuming that the set whose boundary was a Jordan curve was necessarily one of the two connected components produced by Jordan's theorem. But fortunately Algebraist has sorted it out. Eric. 144.32.89.104 (talk) 21:16, 3 June 2008 (UTC)

The answer to the original question is YES:

  1. Any Lebesgue set of positive measure has non-measurable subsets.
  2. There are Jordan curves with positive Lebesgue measure.
  3. Let γ be a Jordan curve with positive Lebesgue measure and let A be a non-measurable subset of γ, then the domain surrounded by γ union with A is a non-measurable set as required.

I'll be happy to provide a reference or proof for any of these claims upon demand. Oded (talk) 05:07, 4 June 2008 (UTC)

Yes, I thought your (1) was true, but I couldn't find a reference online. Could you provide one, so I can add it to Lebesgue measure? Algebraist 09:42, 4 June 2008 (UTC)
While you are at it, you may consider adding 2 to Lebesgue measure. There are references for it at books.google.com and curve mentions it. I don't know a reference for 1, but it is easy to prove. First, we construct a measure-preserving transformation from the set to an interval in R. Then argue that the pre-image of a non-measurable set in the interval has to be non-measurable as well. (Alternatively, perhaps one of the proofs of the existence of non-measurable sets in R applies directly to this case as well.) I've got to go now, but I'll come back to this with more details soon. Oded (talk) 10:25, 4 June 2008 (UTC)
So does that mean property 15 in Lebesgue measure isn't actually true? --Tango (talk) 12:03, 4 June 2008 (UTC)
Yes. Trovatore has removed it. Algebraist 12:07, 4 June 2008 (UTC)
I don't have a reference, but I can give you a proof: Let A be a set of positive measure, and together wellorder the open sets of measure less than m(A) and the closed sets of positive measure, in order-type 2^{\aleph_0}. Now we're going to build up disjoint sets B and C by transfinite recursion. When you hit a closed set K of positive measure, pick an element of K that is not yet committed, and throw it into C (guaranteeing that K will not be a subset of B). You can do that because so far you've committed fewer than 2^{\aleph_0} points, and K has cardinality 2^{\aleph_0}. Similarly, when you come to an open set U, pick an element of A\U that is not yet committed, and throw it into B, guaranteeing that B will not be a subset of U.
At the end, B is a subset of A that has no closed subsets of positive measure, so it has inner measure 0. Also, B is not contained in any open set of measure less than m(A), so B has outer measure at least (and therefore exactly) m(A). --Trovatore (talk) 17:51, 4 June 2008 (UTC)
Thanks. Algebraist 21:22, 4 June 2008 (UTC)
By the way, in answer to your question above, it is the case that for any Jordan curve in the plane, there is a homeomorphism of the plane that maps the unit circle onto your curve. This is the Jordan–Schönflies theorem. Algebraist 21:27, 4 June 2008 (UTC)
I think the following proof is easier and more transparent than the one by transfinite induction. (I mean a proof that any measurable set of positive Lebesgue measure has a non-measurable subset.) Let A be the set, and with no loss of generality assume that A is bounded (this is anyway the case in the present application). Say that two points in A are equivalent if the difference between them is a vector with rational coordinates. Let B be a subset of A which contains one element from each equivalence class. (I guess that the existence of B is proved using Zorn's lemma; the axiom of choice is used here.) We claim that B is not measurable. Fix a bounded open set U, and let U' be the set of points in U with rational coordinates. For u in U' set B_u := B+u. Then these sets are disjoint by construction. If they were measurable, the union would have measure equal to the sum of the measures, and this has to be zero or infinity, since the measure of each B_u is the same as that of B. But now we get a contradiction, since if U is sufficiently large, then on the one hand the union of all the B_u (for u in U') contains A, while on the other hand it is bounded. Thus, at least one of the B_u is non-measurable, which implies that B is non-measurable. (This proof mimics one of the standard constructions of a non-measurable set in R.) Oded (talk) 21:57, 4 June 2008 (UTC)
Thanks again. You don't need Zorn there, by the way: it's just a direct application of the axiom of choice. Algebraist 22:05, 4 June 2008 (UTC)
Hmm -- it's a nice proof, and I suppose it is a little easier. But more transparent? With the transfinite recursion proof, you just figure out what you need to do (refute all possible sets witnessing positive inner measure, or outer measure less than m(A)), and straightforwardly deal with them one at a time. It also gets you more information. There's a diffidence about transfinite recursion in a lot of circles that I don't really understand. I think it's a basic technique that every mathematician (well, at least every mathematician who deals with infinite structures) should know. --Trovatore (talk) 22:52, 4 June 2008 (UTC)
Agreed. I also agree with your sentiment regarding transfinite recursion and with your observation that some mathematicians disdain it. I guess it all depends on one's background and taste. Oded (talk) 04:23, 5 June 2008 (UTC)

Set Notation, bis

In a previous question on this reference desk, someone asked about set notation. One thing that ended up being mentioned was this:

\sum_{d \in D,\;dBc} \ldots

I've seen things that look like that on various Wikipedia articles, but my question is, what exactly does notation like that mean? Digger3000 (talk) 14:03, 3 June 2008 (UTC)

It means "sum over all the elements of D that are related to c by the relation B". --Tango (talk) 14:09, 3 June 2008 (UTC)
Perhaps slightly more generally, "sum over all the elements of D satisfying dBc". Of course, in our case dBc means "d is related to c by the relation B". -- Meni Rosenfeld (talk) 14:36, 3 June 2008 (UTC)

And what is a relation, exactly? Digger3000 (talk) 20:37, 3 June 2008 (UTC)

Well, we do have an article about everything. The short version: Things like "=" and "<" are relations. -- Meni Rosenfeld (talk) 20:41, 3 June 2008 (UTC)
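As a loose illustration of the notation in a programming setting (the set D, the value c and the relation B below are made-up examples, not from the question): summing over all d in D with dBc just means filtering by the relation before summing.

# Illustrative Python sketch: sum over all d in D satisfying d B c,
# where B is an example relation ("d divides c"); D, c and B are made up.
D = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12}
c = 12

def B(d, c):
    return c % d == 0   # example relation: d B c holds when d divides c

total = sum(d for d in D if B(d, c))   # corresponds to \sum_{d \in D,\;dBc} d
print(total)   # 1 + 2 + 3 + 4 + 6 + 12 = 28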

question about equations and expressions

What is the difference between an equation and an expression? Could I solve for a variable in an expression? Could I solve for a variable in an equation? —Preceding unsigned comment added by Lighteyes22003 (talkcontribs) 14:46, 3 June 2008 (UTC)

Take a look at equation and expression. It may be helpful to think of an "equation" as a complete sentence, and an "expression" as a noun: for example, the equation "8 = 3 + 5" is like a complete sentence asserting that 8 is the sum of 3 and 5, whereas the expression "3 + 5" is like a noun, representing the sum of 3 and 5. After all, in English the phrase "the sum of three and five" is a noun, and the phrase "the sum of three and five is equal to eight" is a sentence. Eric. 144.32.89.104 (talk) 15:06, 3 June 2008 (UTC)
Put simply, an equation has an equals sign in it (hence the name). An expression is a more general term: equations are examples of expressions, but so is pretty much any other meaningful sequence of mathematical symbols. "Solving" doesn't make sense for general expressions. You can solve an equation, a system of equations, an inequality or system of inequalities, and probably a few other types of expression, but not all types (and even then, they need to contain a variable; 1+1=2 is an equation, but it would be meaningless to try and "solve" it). Of course, even if you have an equation with a variable, that doesn't mean it actually has a solution. There are no real (or even complex) solutions to "e^x=0", for example, but that's a perfectly valid equation. --Tango (talk) 16:00, 3 June 2008 (UTC)
Also, an equation, too, might not contain any variables, so you won't be able to solve 1+1=2. -- Xedi (talk) 21:59, 3 June 2008 (UTC)
I said that... I even used that example... --Tango (talk) 23:49, 3 June 2008 (UTC)
Haha, sorry, I must've read your answer very absent-mindedly. --XediTalk 05:49, 4 June 2008 (UTC)
Expressions are phrases made up of combinations of variables. Equations represent relationships that expressions obey.--Fangz (talk) 22:43, 3 June 2008 (UTC)
Combinations of variables, constants, operators, etc. --Tango (talk) 23:49, 3 June 2008 (UTC)
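A minimal sketch of the distinction in code, assuming the SymPy library (the numbers are arbitrary): an expression is just a value that can be evaluated or simplified, while an equation pairs two expressions and, if it contains a variable, can be solved.

# Sketch of "expression vs. equation" using SymPy (assumed to be installed).
from sympy import symbols, Eq, solve

x = symbols('x')
expr = 3*x + 5             # an expression: a "noun", nothing is asserted
equation = Eq(3*x + 5, 8)  # an equation: asserts that 3x + 5 equals 8

print(expr.subs(x, 1))     # expressions can only be evaluated/simplified: 8
print(solve(equation, x))  # equations with a variable can be solved: [1]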


June 4

Laws of Cosines

I understand how the laws of cosines work, but what I can't figure out is how, when they give you all three sides of a triangle, you do the problem. I get to a certain point and then get stuck... Could someone go step by step through an example and explain what they are doing each step? --Devol4 (talk) 06:45, 4 June 2008 (UTC)

Easy!

side_A = 10 cm
side_B = 7 cm
angle_C = 20 degrees

Find the length of side_C

Solution:

side_C ^ 2 = side_A ^2 + side_B ^2 - 2 * side_A * side_B * Cosine(angle_C)
side_C ^ 2 = 10^2 + 7^2 - 2 * 10 * 7 * Cosine(20 degrees)
side_C ^ 2 = 100 + 49 - 140 * Cosine(20 degrees)
side_C ^ 2 = 100 + 49 - 140 * 0.9397
side_C ^ 2 = 100 + 49 - 131.558
side_C ^ 2 = 17.442
thus after we take the (positive) square root
side_C = 4.176 cm

The end. 122.107.152.72 (talk) 08:42, 4 June 2008 (UTC)
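For anyone who wants to check the arithmetic above, here is a short Python sketch using the same numbers (the slight difference from 17.442 comes from rounding the cosine to 0.9397 in the worked example):

# Check of the worked example: c^2 = a^2 + b^2 - 2ab*cos(C).
import math

a, b = 10.0, 7.0          # side_A and side_B in cm
C = math.radians(20)      # angle_C converted from degrees to radians

c_squared = a**2 + b**2 - 2*a*b*math.cos(C)
print(round(c_squared, 3), round(math.sqrt(c_squared), 3))   # about 17.443 and 4.176 cm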

I think he actually wanted to know how it works when you have all three sides known. Suppose you know the length of sides a and b, which are adjacent to angle C, and you also know side c which is opposite C. Then you just rearrange the formula to put cosC on its own. Then all the values on the opposite side you should know, and so you have the exact value of cosC. Take the inverse cosine of each side and you're there. -mattbuck (Talk) 08:59, 4 June 2008 (UTC)

Alternatively, you can be given all three sides and are asked to find an angle.

side_A = 10 cm
side_B = 7 cm
side_C = 4.176 cm

Find the value of angle_C

Solution:

side_C ^ 2 = side_A ^2 + side_B ^2 - 2 * side_A * side_B * Cosine(angle_C)

After moving "side_A ^2 + side_B ^2" across to the Left Hand Side

side_C ^ 2 - side_A ^2 - side_B ^2 = - 2 * side_A * side_B * Cosine(angle_C)
Next we multiply both sides by negative one (-1)
side_A ^2 + side_B ^2 - side_C ^ 2 = 2 * side_A * side_B * Cosine(angle_C)
Next we divide both sides by "2 * side_A * side_B"
(side_A ^2 + side_B ^2 - side_C ^ 2) / (2 * side_A * side_B) = Cosine(angle_C)

Now we put the numerical values in

(100 + 49 - 17.442) / ( 2 * 10 * 7) = Cosine(angle_C)
131.558 / 140 = Cosine(angle_C)
0.9397 = Cosine(angle_C)
Now I look up my ArcCosine Table for the entry 0.9397 to get
19.99 degrees = angle_C

Thank you Ohanian (talk) 08:57, 4 June 2008 (UTC)
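The rearranged formula can be checked the same way; a short Python sketch with the same side lengths:

# Check of the rearrangement: cos(C) = (a^2 + b^2 - c^2) / (2ab).
import math

a, b, c = 10.0, 7.0, 4.176   # side lengths in cm

cos_C = (a**2 + b**2 - c**2) / (2*a*b)
print(round(cos_C, 4), round(math.degrees(math.acos(cos_C)), 2))   # about 0.9397 and 20 degrees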

Alright, thanks. I managed to figure out that I was typing it into the calculator wrong... Bad me. But thanks for the help, I now understand it better and it should help me in the test next period.

--Devol4 (talk) 11:43, 4 June 2008 (UTC)

ax^n+bx+c

I think this is a special class of equation that has been studied a bit, but can’t figure out what they're called. Equations of the form ax^n+bx+c. —Preceding unsigned comment added by 130.127.186.122 (talk) 19:22, 4 June 2008 (UTC)

Those would most likely be polynomials, assuming that n is a non-negative integer. — Lomn 19:32, 4 June 2008 (UTC)

A Mathematical Puzzle

You have 25 horses, and a track on which you can race five of them at a time. You can determine in what order the horses in a race finished, but not how long they took, and so can not compare times from one race to another. A given horse runs at the same speed under all circumstances, and no two horses run at the same speed. How many races does it take to find the three fastest? —Preceding unsigned comment added by 171.69.159.185 (talk) 22:07, 4 June 2008 (UTC)

It was never completely answered, though: no-one proved that you can't do it in six races. Algebraist 22:26, 4 June 2008 (UTC)
I still say I could do it in only 53,130 races. -mattbuck (Talk) 22:32, 4 June 2008 (UTC)
Well done. Algebraist 22:37, 4 June 2008 (UTC)
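(Presumably the 53,130 refers to running every possible group of five horses; a one-line check of that count:)

# 53,130 = C(25, 5), the number of distinct groups of 5 horses chosen from 25.
import math
print(math.comb(25, 5))   # 53130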
I have no idea. But if you think about it like this: 1st race: 5 horses; who wins means nothing because they could all be the fastest or all be the slowest, so take the top three against 2 more. So far, you have 7 horses and you know your relative fastest three. Continue the top three against the next two and you get 11 races. But I probably did something wrong, and knowing how everything in math has to do with patterns I'm going to say 15, 1+2+3+4+5, or the 5th triangle number.--Xtothe3rd (talk) 03:12, 5 June 2008 (UTC)
As mentioned in the archived discussion, it's fairly easy to see that seven races are enough for the top three, and that five can't even give you the top one. Probably six aren't enough, but I at least can't prove it. The problem is that six races are enough to demonstrate which are the top three if you somehow know already, so the naive proof fails. Algebraist 07:36, 5 June 2008 (UTC)
If you decided to follow the following strategy - which in general is *not* optimal;
  • Race the first 5 horses
  • Race the next four horses with the current overall 3rd placed horse, based on the races already done.
  • If the 3rd placed overall horse wins, race it again against the next four horses.
  • If the 3rd placed overall horse does not win, race the current overall fastest and 2nd fastest against the first three that beat the 3rd overall fastest.
If it turns out that the winner, runner up, and 3rd place of race one are actually the fastest three horses, then, if you followed the above strategy, the top three horses would be known after only 6 races. So for some cases, it would be possible to do it in less than 7. Richard B (talk) 14:06, 5 June 2008 (UTC)
You could look at it as a combinatorial game: one player chooses which horses to race in each round and tries to deduce the top three, the other chooses the outcomes of the races (subject to consistency with earlier rounds) and tries to stop the first player from succeeding in six rounds or less. It's obvious that one player or the other must have a winning strategy, you just need to find out which one it is. Anyone have a good game tree solver? —Ilmari Karonen (talk) 14:46, 5 June 2008 (UTC)
By the way, just to clarify, does the task require determining the order of the fastest three horses, or just which ones they are? That is, would a method that allowed one to determine that A, B and C are the fastest three horses, but not whether A is faster than B, qualify? —Ilmari Karonen (talk) 14:46, 5 June 2008 (UTC)
Having thought about this while on the bus, I can now prove that six races are not enough, at least not if the fastest three horses have to be ranked. In fact, I can prove a stronger result: six races are not enough to uniquely determine both the fastest and the second-fastest horse. The proof proceeds as follows:
To determine the fastest horse, 24 contenders for the first place must be eliminated. This can be done in six races, but only just: each race can eliminate at most four contenders, for a total of 6 × 4 = 24. To achieve this, every horse in each race must be a contender for the first place; that is, none of them may have lost an earlier race.
Thus, the sixth and final race must be between five horses that have not lost any previous race. Further, at least one of them must've won an earlier race; after five races there can be at most four horses who haven't yet participated in any race. Call that horse X. If horse X wins the final race, there will be at least two other horses that have only ever lost to X: the one that came second in the last race, and the one that came second in the previous race that X won. Thus, the second-fastest horse will not be uniquely determined in only six races, Q.E.D. —Ilmari Karonen (talk) 20:37, 5 June 2008 (UTC)
Looks good to me. --Tango (talk) 21:11, 5 June 2008 (UTC)
How about if they don't have to be ranked? Black Carrot (talk) 01:13, 6 June 2008 (UTC)
This was the interpretation I took, that it asks only for the 3 fastest though not necessarily ordering those 3. Does this change the question significantly? 98.221.167.113 (talk) 02:08, 6 June 2008 (UTC) whoops, mine Someletters<Talk> 02:09, 6 June 2008 (UTC)
Probably not, but it invalidates the proof technique I used (consider all strategies for finding the winner in six races, then show that none of them is able to always determine the second place too). By the way, note that my proof above has a minor, fortunately inconsequential omission: to eliminate four contenders for the first place in one race, it's not necessary for all of the entrants to have never lost before, merely for all but the winner. However, since one cannot determine the winner of such a race in advance, there's no way to take advantage of this. —Ilmari Karonen (talk) 11:42, 6 June 2008 (UTC)
But what about my situation above? I chose a different strategy in deciding who races - in picking a horse for a subsequent race that *had* already been beaten (into 3rd place) - and if you were fortuitous enough that the fastest three horses were all selected to take part in the first race - then the top three horses can be both determined - and ranked - in 6 races.Richard B (talk) 12:15, 6 June 2008 (UTC)
That works only if you already know which the fastest three horses will be, or if you're really lucky. The original request was for a method that would allow determining the fastest three horses in six races in every case, no matter how unlucky you might be. —Ilmari Karonen (talk) 13:01, 6 June 2008 (UTC)
(ec) I think a slight modification will work, though. This is off the top of my head, and so probably more convoluted than necessary:
Even if you don't need to order the fastest three horses, that still means you need to eliminate at least 22 contenders for the first place (since, clearly, any horse who might be the fastest could also be one of the three fastest). In five races you can eliminate at most 20, which means that, before the last race, there will be at least five horses who could be fastest (not to mention several more who could be second- or third-fastest) and again, at least one of them (call it X) must've won at least one earlier race.
If there are more than five, the last race can disqualify at most two of them from being among the fastest three, which is obviously not enough. If there are exactly five, that means exactly 20 must've been eliminated in the first five rounds, and thus no horse can have lost more than one race. Thus, the horse who came second in the previous race that X won has not lost to any other horse, and is thus still a contender for the second place. We'll call that horse Y.
If at most three of the five remaining first-place contenders participate in the last race, we'll assume they score among the fastest three in that race, and will thus all remain contenders for the first three places overall. If four do, we'll assume three of them take the first three places in that race, and so there will still be at least four contenders for the first three places overall. Finally, if all of the five first-place contenders participate in the last race, then we'll assume that the winner of that race will be X. Then Y, together with the first three horses from the last race, will still be a contender for the first three places overall. In each case, there will be at least four horses left who might be among the fastest three, Q.E.D. —Ilmari Karonen (talk) 12:59, 6 June 2008 (UTC)
Yup. Sounds good. Black Carrot (talk) 18:38, 6 June 2008 (UTC)


June 5

Limits

I am studying the basics of limits and I have two questions:

1. In calculating limits of some functions, you will have the situation that the function is undefined at a point because substituting that point gives division by 0. For example:

 \lim_{x \to \ 3}\frac{\sqrt{x-3}}{x^2-9}

You can't get the limit unless you eliminate the x-3 somehow. But basically, what changes is not the value of the function but what it looks like. I feel like I have skipped the problem of zero division rather than solved it right through the limit. The feeling I have is similar to the history that people couldn't notate some numbers because they didn't have the concept of zero. Is there also some imperfection in the modern notation system so that we don't know how to manage zero division?

2. The limit of a function should be "a value that a function can infinitely APPROACH". This implies that the function is not necessarily continuous at the limit. However, the process of limit calculation is just substitution, as if the function can surely reach the limit without being undefined. Isn't there any problem?

--Lowerlowerhk (talk) 05:03, 5 June 2008 (UTC)

I think your two questions are very much related, and I hope that what I say below will help you. It is not true that the process of limit calculation is just substitution. Let's consider some function f(x) and consider the limit as x tends to zero. Then think of f as undefined at zero (even if it is defined there) and figure out the value to assign to f at zero so that it would become continuous. This would be the limit then. A very simple example would be \frac{x-3}{x-3}. It is undefined at 3, but the only way to define it at 3 in order for it to be continuous is to set it to 1. Oded (talk) 05:42, 5 June 2008 (UTC)
(edit conflict) The best way to think of it is that the value of a limit is equal to the number you approach when you choose a value at a point arbitrarily close to the intended point, without actually choosing that point. That is, you pick a point close to the given point, then you pick one that's closer, and one that's closer than that, and so on. The smaller the distance is, the closer the number is to the actual value of the limit, regardless of what would happen if you actually tried to evaluate the function at the target value. As for your question on "imperfection in modern notating system", see the article on infinitesimals. « Aaron Rotenberg « Talk « 05:50, 5 June 2008 (UTC)
In cases like this you can use L'Hôpital's rule. I will not go over the specifics unless you wish me to, but in a nutshell you can use it where substitution would give \frac{0}{0} or \frac{\infty}{\infty}. Rambo's Revenge (talk) 12:25, 5 June 2008 (UTC)
Woooah ... in the case of
 \frac{f(x)}{g(x)}=\frac{\sqrt{x-3}}{x^2-9}
L'Hôpital's rule does not help us because
 \lim_{x \to \ 3}\frac{f'(x)}{g'(x)}
does not exist, and so this tells us nothing about the original limit (L'Hôpital's rule is an if, not an if and only if). Anyway, there is a simpler algebraic solution - cancel the common factor from numerator and denominator to get a function that is identical to the original function except at the point x=3, then use the behaviour of the new function as x approaches 3 to draw a conclusion about the existence of the original limit.
Returning to Lowerlowerhk's original questions, I suspect that some of the underlying confusion may be due to their exposure to an informal and non-rigorous treatment of limits and continuity. Throw away the "infinities" and work through a proper epsilon and delta treatment of limits, and all will become much clearer. Gandalf61 (talk) 12:57, 5 June 2008 (UTC)
Of course if we do that, i.e. cancel the common factor, you get a function that by inspection you can see doesn't converge as x->3.Richard B (talk) 13:48, 5 June 2008 (UTC)
Getting back to L'Hôpital's rule for a second, doesn't it still apply if  \lim_{x \to \ 3}\frac{f'(x)}{g'(x)} is \infty? Specifically, if  \lim_{x \to \ 3}\frac{f'(x)}{g'(x)} = \infty then doesn't it follow that  \lim_{x \to \ 3}\frac{f(x)}{g(x)} = \infty? 63.95.36.13 (talk) 16:03, 5 June 2008 (UTC)
The rule actually does apply here: note that the requirement is only that the resulting "derivative limit" take on a value in the extended reals, so an infinite result does carry back to the original limit. --Tardis (talk) 16:05, 5 June 2008 (UTC)
Yes, I stand corrected. But I still think that l'Hôpital's rule is an unnecessarily baroque approach for this particular problem. Gandalf61 (talk) 16:27, 5 June 2008 (UTC)
Woooah, I opened a can of worms here. Whilst theoretically it can be applied, i did overlook the precise question, and practically it is not the best method. Apologies Rambo's Revenge (talk) 17:14, 5 June 2008 (UTC)
We might want to talk about left and right limits. Note that the original expression is imaginary for x<3, so there is no (real) left limit. For the right limit:
\lim_{x \to 3^+} \frac{\sqrt{x-3}}{x^2-9} = \lim_{x \to 3^+} \frac{\sqrt{x-3}}{(x+3)(x-3)} = \lim_{x \to 3^+} \frac{1}{(x+3)\sqrt{x-3}} = +\infty
--Prestidigitator (talk) 18:37, 5 June 2008 (UTC)
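A quick symbolic check of the right-hand limit above, assuming the SymPy library:

# The right-hand limit of sqrt(x-3)/(x^2-9) at x = 3 (SymPy assumed).
from sympy import symbols, sqrt, limit

x = symbols('x')
f = sqrt(x - 3) / (x**2 - 9)
print(limit(f, x, 3, dir='+'))   # oo, i.e. +infinity, matching the computation above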
And, since positive infinity isn't a real number, there's no real right limit, either... --Tango (talk) 19:50, 5 June 2008 (UTC)
Mmm...You can't say the value of a function is infinite somewhere, but an infinite limit is fine as far as I know. --Prestidigitator (talk) 20:13, 6 June 2008 (UTC)
There's nothing wrong with functions that have infinite limits, or for that matter functions that take infinite values. But in neither case is the answer a real number, which was Tango's point. It's an extended real number. --Trovatore (talk) 20:22, 6 June 2008 (UTC)

This was placed on the science desk..you guys might be able to answer it

What is the name of this equation and what is its significance? What does it mean? \langle\Omega|\mathcal{T}\{\hat{\phi}(x_1)\cdots \hat{\phi}(x_n)\}|\Omega\rangle=\frac{\int \mathcal{D}\phi \phi(x_1)\cdots \phi(x_n) e^{i\int d^4x \left({1\over 2}\partial^\mu \phi \partial_\mu \phi -{m^2 \over 2}\phi^2-{\lambda\over 4!}\phi^4\right)}}{\int \mathcal{D}\phi e^{i\int d^4x \left({1\over 2}\partial^\mu \phi \partial_\mu \phi -{m^2 \over 2}\phi^2-{\lambda\over 4!}\phi^4\right)}} Ζρς ι'β' ¡hábleme! 05:15, 5 June 2008 (UTC)

I think it means, "I really wanted to show off when I was writing this equation". Or maybe, "Nah, nah, my brain is bigger than yours!" « Aaron Rotenberg « Talk « 06:23, 5 June 2008 (UTC)
Holy shit, is that an integral inside an exponent inside an integral???? Someguy1221 (talk) 06:44, 5 June 2008 (UTC)


Wisdom89 (T / C) 06:52, 5 June 2008 (UTC)

I think this actually belongs in the physics desk. This looks more like something a physicist would write. More precisely, it looks like statistical physics / quantum statistical physics / conformal field theory. I would not assume that this is necessarily showing off. Oded (talk) 09:49, 5 June 2008 (UTC)

Yeah, looks like physics to me. I think some context would be useful - where did you find the equation? --Tango (talk) 14:32, 5 June 2008 (UTC)
I found it on Uncyclopedia and YouTube; however, it has been verified as a legitimate equation on the Science refdesk. Note: I didn't cross-post this so you don't have to tell me not to. I don't know who did though. Wisdom 89 did. Ζρς ι'β' ¡hábleme! 19:23, 5 June 2008 (UTC)

I think I've seen this before in a book about prime numbers. I think it's something to do with Srinivasa Ramanujan; although I'm most probably wrong... Jonny23415552 (talk) 19:52, 5 June 2008 (UTC)

Guessing: The left hand side is a Dirac bracket of quantum mechanics. Ω is a state vector and T is an operator. The right hand side includes covariant tensor derivations of a potential Φ. The denominator is a normalization constant that makes the fraction a probability. The integration inside the exponent is over a four-dimensional volume. The m is a mass and the λ is the cosmological constant. The value of this integral is a phase. The Planck constant does not enter explicitly into the equation, so the units chosen make the Planck constant equal to unity. So this may be an equation attempting to express quantum gravity. I am merely guessing. Bo Jacoby (talk) 21:53, 5 June 2008 (UTC).
Someguy1221: single-variable ODEs tend to produce integrals in the exponential when you solve them. But why on earth does this formula have two bars in the angle brackets? – b_jonas 07:49, 9 June 2008 (UTC)

Operator

What does this operator mean (the "\bigoplus" one)? V=\bigoplus_{n=0}^\infty V_n --Ζρς ι'β' ¡hábleme! 19:23, 5 June 2008 (UTC)

Direct sum. --Tango (talk) 19:49, 5 June 2008 (UTC)
(ec) The "\bigoplus" is just a big version of "\oplus"; it can be used to depict iterated application of the operator, like the sigma is used for summation. Your formula \bigoplus_{n=0}^\infty V_n is thus equivalent to V_0 \oplus V_1 \oplus V_2 \oplus \cdots, as you probably know.
The interpretation of \oplus itself depends on context; I've seen it used for the XOR operation and for the intersection of constraints in constraint satisfaction literature (the latter is thus identical to the intersection operator \cap). Oliphaunt (talk) 19:56, 5 June 2008 (UTC)
With the Vs, though, it's extremely likely to be the direct sum of vector spaces. Algebraist 23:03, 5 June 2008 (UTC)
See also Wikipedia:Reference desk archive/Mathematics/2006 August 18#What is this called?.  --Lambiam 05:14, 7 June 2008 (UTC)
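As a loose programming analogy (not the vector-space direct sum itself): a big operator is just the small operator folded over a sequence, the same way Σ folds +; for instance, reading ⊕ as XOR, one of the meanings mentioned above:

# Folding a binary operator over a sequence, here XOR, as an analogy for \bigoplus.
from functools import reduce
from operator import xor

values = [0b1010, 0b0110, 0b0011]
print(bin(reduce(xor, values)))   # 0b1111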

June 6

Quotient of an algebraic expression?

I have no idea how to do this one, I just can't figure it out. Please help! I think it's because the coefficient in the first part can't be factored out and I'm not sure how to solve it.

\frac{4x^2-8x+10} {2x+1}

Thanks a bunch! --71.98.9.18 (talk) 02:23, 6 June 2008 (UTC)

Are you wanting to factor the numerator? What are you wanting to solve for? There is no equation here, just an expression. Ζρς ι'β' ¡hábleme! 02:47, 6 June 2008 (UTC)
Never mind, I think I see what you seek. Cancel the part of the top from the bottom. That should leave you with a simple polynomial to factor. Ζρς ι'β' ¡hábleme! 02:51, 6 June 2008 (UTC)
Yes I want to simplify it...and I know I must simplify the top and bottom, then cross out the same terms, but I don't understand how to do it because of the 4 as the coefficient in front of the x^2. --71.98.9.18 (talk) 03:01, 6 June 2008 (UTC)
If you're hoping this simplifies into a simple expression (like a monic linear polynomial or something), I think you'll be disappointed. At least as written, the function has a singularity at x = − .5. 98.221.167.113 (talk) 03:17, 6 June 2008 (UTC)I swear I forget to log in every time! Someletters<Talk> 03:18, 6 June 2008 (UTC)
Hmm, I thought so. I graphed it. It is a rational function. Ζρς ι'β' ¡hábleme! 03:22, 6 June 2008 (UTC)
What "simplified" means may depend on your teacher's goals, but one approach would be to rewrite it as:
a*x+b+{c \over 2x +1}
where a, b, c are constants you find. If that is what you want to do, then a generalizable approach is to realize that the ratio:
4x^2-8x+10 \over 2x+1
can also be written as:
{A*(2x+1)^2+B*(2x+1)+C \over 2x+1} = A*(2x+1) + B + {C \over 2x+1}
Where A, B, and C are things you find by expanding the terms and comparing coefficients. Dragons flight (talk) 03:31, 6 June 2008 (UTC)
Okay...ha sorry I'm just having a difficult time understanding this. So the answer sheet I have says the answer is \frac{2x-5+15} {2x+1} but I'm not sure how to get that. Would this be correct according to the stuff you listed, Dragon? --71.98.9.18 (talk) 03:36, 6 June 2008 (UTC)
Btw- I really appreciate the help and responses! --71.98.9.18 (talk) 03:37, 6 June 2008 (UTC)
I think you must mean 2x-5+\frac{15}{2x+1}, yes? That's what Dragon's flight's first method was doing - take a look at polynomial division to see what's happening. Confusing Manifestation(Say hi!) 03:48, 6 June 2008 (UTC)
Take a ratio of the two highest-degree terms: \frac{4x^2}{2x}={\color{Red}2x}. Now multiply the denominator by it: {\color{Red}2x}\cdot(2x+1)={\color{OliveGreen}4x^2+2x} and subtract the result from the numerator: (4x^2-8x+10)-({\color{OliveGreen}4x^2+2x})={\color{Blue}-10x+10}. That makes a simplified form of your expression:
\tfrac{4x^2-8x+10} {2x+1} = \tfrac{({\color{OliveGreen}4x^2+2x})+({\color{Blue}-10x+10})} {2x+1} = \tfrac{{\color{Red}2x}\cdot(2x+1)+(-10x+10)} {2x+1} = 2x - \tfrac{10x-10}{2x+1}.
Iterate the method to further reduce the numerator degree, until it gets lower than the denominator degree. --CiaPan (talk) 06:35, 6 June 2008 (UTC)
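The division above is easy to verify with a computer algebra system; a short sketch assuming SymPy:

# Verify the polynomial division: quotient 2x - 5, remainder 15,
# i.e. (4x^2 - 8x + 10)/(2x + 1) = 2x - 5 + 15/(2x + 1).
from sympy import symbols, div

x = symbols('x')
quotient, remainder = div(4*x**2 - 8*x + 10, 2*x + 1, x)
print(quotient, remainder)   # 2*x - 5 and 15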
See also Polynomial long division.  --Lambiam 05:17, 7 June 2008 (UTC)
Ahh, okay, thanks guys! I think I'm getting it! :D --71.117.39.109 (talk) 17:40, 7 June 2008 (UTC)

Evolution

Why have we evolved an ability to visualize objects in three dimensions, but no more than that? If that is the case, then that means 3 dimensions have more importance than 2 dimensions, as well as more importance than 4 dimensions or more. 68.148.164.166 (talk) 04:26, 6 June 2008 (UTC)

Likely because being able to understand and deal with a 3D world was essential to survival. A key factor in evolution. -- Tcncv (talk) 04:35, 6 June 2008 (UTC)

One answer is that humans can only survive in a 3D world; see Anthropic principle. --Lowerlowerhk (talk) 08:06, 6 June 2008 (UTC)
I don't think the Anthropic principle quite applies. The Anthropic Principle says that only universes which are capable of supporting intelligent life can contain intelligent observers within them, but that doesn't preclude the possibility that a higher-dimensional universe could support higher-dimensional intelligent creatures. A universe with five dimensions of space-time could theoretically have five dimensional intelligent creatures within it. We just don't happen to be in such a universe.
Au contraire! This paper from the peer-reviewed journal Classical and Quantum Gravity [1] shows that 3+1 dimensional spacetime is the only one that allows for the existence of intelligent observers who can make reasonable predictions about the future. —Keenan Pepper 18:09, 6 June 2008 (UTC)
Thanks for the interesting sounding link, Keenan, although unfortunately it appears that I can't read the paper without coughing up money. :( From the abstract it sounds like they are ruling out multiple temporal dimensions, so higher dimensional beings would have to occupy a universe with one temporal dimension and 3+ spatial dimensions. However it doesn't sound like they necessarily ruled out observers in a universe with 4+ spatial dimensions and 1 temporal dimension saying only that "in a space with more than three dimensions, there can be no traditional atoms and perhaps no stable structures." Notice the word "perhaps", implying that they left the door open on the question of having stable "atomic" structures. 63.111.163.13 (talk) 19:25, 6 June 2008 (UTC)
If you want to read it, you can give me an email address and I'll send you a PDF. —Keenan Pepper 18:09, 7 June 2008 (UTC)
Getting back to the original question, though, we can only internally visualize the universe based on the reflection of observable energy like light and sound and through movement of matter and energy in space-time. Therefore we can only internally visualize objects in terms of three spatial dimensions and one temporal dimension. The only way we would have been able to evolve to conceptualize more than three spatial dimensions would be if there was some observable form of energy transmission that travelled through a fourth spatial dimension. 63.111.163.13 (talk) 14:56, 6 June 2008 (UTC)
It's not completely the case that we can't visualize things in four spatial dimensions, and it's not completely the case that we can in three. The most natural things for us to visualize are two-dimensional, since that's the shape of our retina and our internal viewing screen. That is, however visual information is processed, it comes to us as a flat image with a bit of extra info scattered around to imply depth, and especially to imply overlap. The transformation from three dimensions to two is roughly projective, and there's a lot that gets distorted in the process. The only way to actually directly visualize three dimensions would be to have something like a three-dimensional array of memory spaces plus something to keep track of their geometric relationships. That can certainly be finangled within a single person's imagination, and can be generalized to a few more dimensions with effort. Past that, it's got to be hard to hold 20 dimensions of any kind of information in your head at once, but that doesn't mean it's impossible. And yes, it appears that most of that would be unhelpful in ordinary life, which is probably why we don't do it. Two dimensions and a bit is more than enough to get by. I have a problem with the previous post, incidentally. I don't think it's the case that we can only visualize things we've sensed, or could sense. I get more the impression that we have some capacities, and some needs, and try to fit them together as efficiently as is practical. For instance, I've heard (and could probably track down) that people blind from birth still use the same parts of the cortex to keep track of spatial information, suggesting that they experience the world the same way we do (other than the not-colored part). What happens in our heads has to be to some extent independent of what happens in the world around us. For a more mathematically-inclined example, how about curved space? To many people for a very long time, it was flat-out inconceivable. Not just that it couldn't happen in the real world, but that it couldn't even be imagined. Now we know that it is real, and can be imagined, but so can flat space, or space curved differently. Likewise continuous versus discrete. We can imagine all kinds of things that can't exist at the same time because they're mutually exclusive. We're much more flexible than the real world seems to be. Black Carrot (talk) 18:06, 6 June 2008 (UTC)
Your point that humans have an easier time envisioning two dimensional projections is valid, and we have more difficulty envisioning 3D spaces. However the original question was why humans evolved to visualize objects in 3 dimensions and not more, and the answer is that we have no 4D spatial objects and energy does not travel in 4 spatial dimensions. Everything we encounter and all the lines of transmission are 3D.
As an analogy, consider that we could hypothetically program a computer to work with 4D objects, and such a program could in essence internally view such objects completely accurately, given proper sensory input for all the object's 4D spatial coordinates. But that would be an "intelligent design", you might say, of an AI capable of envisioning things in higher spatial dimensions. By contrast there would be no need to be able to envision things in four dimensions in nature because we never encounter such objects. So mutations which might assist a human to envision things four dimensionally offer no obvious benefit to the organism and therefore are probably not likely to be successful evolutionary off-shoots. 63.95.36.13 (talk) 19:43, 6 June 2008 (UTC)

I need to use this equality but I can't without knowing it's true.

This page says that \scriptstyle\sin^4\theta = \frac{3 - 4 \cos 2\theta + \cos 4\theta}{8}, but there is no proof, even here, where a proof would logically be. I'm unsure that it is in fact the case. Please supply a proof, so I can be sure it's true. Thanks in advance, 71.220.219.115 (talk) 19:00, 6 June 2008 (UTC)

Alas, we do not provide proofs for every single degree of n in sin^n θ. Nevertheless, I suggest you start on the right-hand side and use cos 2θ = 1 − 2sin^2 θ and sin 2θ = 2 sin θ cos θ several times. x42bn6 Talk Mess 19:09, 6 June 2008 (UTC)
(ec) Have you tried breaking it down in terms of sin(θ)’s and cos(θ)’s? GromXXVII (talk) 19:11, 6 June 2008 (UTC)
It's definitely true, but try and prove it as Grom suggests. You should be able to do it in 5 lines. -mattbuck (Talk) 20:28, 6 June 2008 (UTC)
Just toss in some numbers and check if it works.--Fangz (talk) 20:52, 6 June 2008 (UTC)
Yeah, "proof by example" isn't particularly rigorous... --Tango (talk) 22:18, 6 June 2008 (UTC)
Well, sure. But for something like trig functions, it's vanishingly unlikely that it will work for 3 or 4 integers by pure chance. If you just want to confirm something that probably can be proved with more effort, five minutes on a calculator can save you lots of work at something that is pretty unimportant for your problem. But then again I work in statistics. --Fangz (talk) 22:27, 6 June 2008 (UTC)
Because precision is fun: trigonometric functions are analytic, and thus if two trigonometric expressions are different, then they are equal at at most countably many points. Thus for any sensible random choice of argument, the values of the two expressions will almost surely be different. Less precisely: if you plug in a random value into two trig expressions and get the same value, this is very very good evidence that the expressions are equal. Algebraist 22:35, 6 June 2008 (UTC)
Thinking about it, in fact, there is a reasoning here - cos and sin are analytic functions. Hence, this means (roughly) that either the difference of the two expressions is constantly zero, or the set over which it is zero has zero measure, making it hard to hit by chance - provided you are selecting numbers actually randomly. Sadly, I doubt your teacher will be pleased with this approach. Dammit, I was going to say the same thing. --Fangz (talk) 22:42, 6 June 2008 (UTC)
More to the point, the zeros of an analytic function do not have limit points. This is important since it implies that, for example, the probability of an analytic function being zero at n / 10p, for a given integer p and a uniformly chosen integer 0 ≤ n ≤ 10p (which is what you get in practice if you ask someone to pick a random number between zero and one), tends to zero as p tends to infinity. Mere countability of the set of zeros is not enough to show that, since e.g. the set of all rational numbers is countable. —Ilmari Karonen (talk) 23:35, 6 June 2008 (UTC)

"Alas, we do not provide proofs for every single degree of n in sinnθ. " Well we should! As it is a general formula with a general answer. \sin^n\theta = \frac{1}{(2i)^n}(e^{i\theta} - e^{-i\theta})^n. so then its just a matter of binomial expansion in this case \sin^4\theta  = \frac{1}{(2i)^4}(e^{i\theta} - e^{-i\theta})^4 = \frac{1}{16}(e^{i4\theta} - 4e^{i2\theta} + 6 - 4e^{i2\theta} + e^{-i4\theta}) = \frac{1}{16}(2\cos 4\theta - 8\cos 2\theta + 6) Hope this answers your question. Sorry its taken several days. Philc 0780 23:49, 6 June 2008 (UTC)

I've added the general formula here. Please check to see if I've done the right thing. Thanks, --hydnjo talk 12:23, 7 June 2008 (UTC) Guess not! --hydnjo talk 19:01, 7 June 2008 (UTC)

.999...=1

Alright, my dad and I have differing opinions on this. He refuses this fact and I have tried to convince him of it by presenting the algebraic and fraction proofs. This question is about the proof by a fraction.

\frac{1}{3}=.\overset{-}{3}

\frac{2}{3}=.\overset{-}{6}

.\overset{-}{3}+.\overset{-}{6}=.\overset{-}{9}

\frac{1}3 + \frac{2}3 =1

Therefore:

.\overset{-}{9}=.\overset{-}{3}+.\overset{-}{6}=\frac{1}3 + \frac{2}3 =1

My dad refuses this proof because he says the repeating decimal never actually exactly equals the fraction; therefore, .999... never quite equals 1. I tell him that "after an infinite number of decimal places" (for lack of a better term) the decimal is exactly equal to the fraction. He refuses this by claiming I am treating an infinite number of places as if it were finite. In this situation, which case is correct, and what is the way to disprove the contrary? Thank you, Ζρς ι'β' ¡hábleme! 23:05, 6 June 2008 (UTC) Oops. I changed "refute" to "refuse" now.

See 0.999.... --Prestidigitator (talk) 23:20, 6 June 2008 (UTC)

Just by the way, "refute" means "prove false". If you disagree with him, then it follows that you don't think he's refuted the arguments. --Trovatore (talk) 23:56, 6 June 2008 (UTC)

Your dad believes that 0.9999 etc. does not equal 1. Here's the standard proof:

x = 0.\overset{-}{9}

10x = 9.\overset{-}{9}

10x - x = 9x = 9.\overset{-}{9} - 0.\overset{-}{9}

9x = 9

x = 1

Wikiant (talk) 00:26, 7 June 2008 (UTC)

Yes, I have read the article and shown him that proof too; however, I would prefer something regarding the proof by fractions. Thanks, Ζρς ι'β' ¡hábleme! 01:50, 7 June 2008 (UTC)

11 × 1/11 = .09090909... × 11
11/11 = .9999...
Same with 1/3 and .333... --Xtothe3rd (talk) 02:10, 7 June 2008 (UTC)
Sometimes it helps if you ask the sufferer of this delusion what they think the result is of subtracting 0.999... from 1.  --Lambiam 05:05, 7 June 2008 (UTC)
He says 1-.\overset{-}{9}=.\overset{-}{0}1 Ζρς ι'β' ¡hábleme! 07:14, 7 June 2008 (UTC)
Then what is the result of multiplying that number by 10?  --Lambiam 09:24, 7 June 2008 (UTC)
Unfortunately, that combination of symbols has no meaning. Take a look at decimal expansion. When you see something like x = 0.d_1 d_2 d_3 \ldots d_n \ldots, you have to understand that there is a precise definition lurking underneath. In this case, the decimal expansion article tells you that, by definition,
0.\overline{9} = \lim_{N \to \infty} \sum_{j=1}^N \frac{9}{10^j}.
A limit is not "moving" or approaching anything. It has a formal definition, and when the limit exists it specifies a single number (or element of the topological space, or whatever). Applying basic limit rules,
\lim_{N \to \infty} \sum_{j=1}^N \frac{9}{10^j} = 9 * \lim_{N \to \infty} \sum_{j=1}^N \frac{1}{10^j} = 9 * \frac{1/10}{1-1/10} = 1.
The derivation of the second to last equality is in geometric series, and it only relies on some basic algebra and properties of the limit. Any other explanation is simply an attempt at persuasion. If someone is uninterested in being persuaded (that is, uninterested in accepting a plausibility argument), then they are obligated to acquire a sufficient understanding of the formal definitions. 24.8.49.212 (talk) 07:52, 7 June 2008 (UTC)
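To make the limit above concrete, here is a small exact computation with Python's standard-library fractions: each partial sum 0.9, 0.99, 0.999, ... falls short of 1 by exactly 1/10^N, and that gap is what the limit argument sends to zero.

# Exact partial sums of 9/10 + 9/100 + ...: the gap to 1 is exactly 1/10^N.
from fractions import Fraction

partial = Fraction(0)
for n in range(1, 8):
    partial += Fraction(9, 10**n)
    print(n, partial, 1 - partial)   # e.g. 3 999/1000 1/1000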
Yeah, I'm also personally uncomfortable with the original fractions proof (and to an extent, the algebraic proof), since both of them rely on theorems like the fact that you can add, subtract and multiply convergent sequences. It's far better to return to how 0.999... is defined, and say it itself isn't 'a number' (because numbers can be denoted using any number system), but rather it is a statement referring to a number which has 0.9999... as its decimal approximations. Perhaps something that works would be to show him how some 'nice numbers' have an infinite expansion in e.g. base 2.--Fangz (talk) 08:04, 7 June 2008 (UTC)
Maybe you should try thinking of it this way. Your dad says that 0.333... never exactly equals 1/3. But just assume it is another symbol for 1/3, one that is very unintuitive to humans, but does ultimately represent 1/3 because we say so. Then once he accepts that 0.333...=1/3 he might accept the proof. —Preceding unsigned comment added by RMFan1 (talkcontribs) 17:41, 7 June 2008 (UTC)


June 7

How small can small get?

Is there an infinity for the small? Or is there a certain point where you simply can't divide anymore? I've heard of Planck lengths, but I've always had a suspicion that these are just man's mental limits because the math backs me up (1/2, 1/4, 1/8, etc.). Of course this raises the question of how big something can be also. It seems there's no limit there either. --Sam Science (talk) 16:44, 7 June 2008 (UTC)

In maths, there is no limit to how big or small numbers can get. If there was a smallest positive number, you could just halve it and get a smaller one, so there clearly isn't a smallest. Likewise, if there was a biggest number, you could double it and get a bigger one, so there can't be a biggest. The equivalent of infinity for the very small is called "infinitesimal", however for regular numbers (real numbers), the only infinitesimal is actually zero. You keep halving again and again and again and you get closer and closer to zero; you'll never actually get there after a finite number of steps, but the limit as the number of steps approaches infinity is zero. If you want the technical version, see Archimedean property.
However, in physics, things are a little different. Mathematical numbers are just an abstract concept, they don't always apply to the real world. While you can keep halving numbers, you can't keep halving physical objects. In quantum mechanics, things are "quantised", which means they can only come in integer multiples of a particular value, so the smallest it can get (without being zero) is fixed at one times that value. That's where the Planck length comes in, roughly speaking (the Planck length isn't quite "the smallest length possible", but it's kind of related - read the article for more information). Zero-point energy is a better example of a smallest possible value - it's the lowest energy level a given physical system can have, and it's not zero. --Tango (talk) 17:37, 7 June 2008 (UTC)
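A tiny exact illustration of the "keep halving" point, using Python's Fraction type: the value stays strictly positive at every step, but gets arbitrarily close to zero.

# Halve 1 repeatedly with exact rational arithmetic: never zero, but tending to 0.
from fractions import Fraction

x = Fraction(1)
for _ in range(10):
    x /= 2
print(x, x > 0)   # 1/1024 True -- and no finite number of halvings reaches 0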

Thank you for answering my question. While we're on the subject of size extremes I think I will ask if there is indeed a limit to "big". A small limit I can kinda understand because at some point small can become meaningless. But big - that seems to be a different story. You can always add matter or space, correct? I hear the universe is continually expanding, but into what? Doesn't that mean it gets "bigger"? Is there a limit to how big big can get? Or at some point does matter's own weight collapse upon itself and cease to be? --Sam Science (talk) 21:39, 7 June 2008 (UTC)

I don't believe the laws of physics have any upper limits in the way they have lower limits. I'm an idiot, see below. Yes, the universe is expanding, which does indeed mean it is getting bigger. It's not really meaningful to ask what it's expanding into; it's more space itself stretching. There isn't an "edge" that's moving further and further from the "centre", each bit of space is just getting bigger and bigger. It might help to imagine a balloon being blown up. Ignore the 3D space that the balloon occupies and just think about the 2D rubber surface as being the universe. There is no centre and no edge to the rubber (ignoring where you're blowing the air into, at least!), it expands by each bit of rubber stretching more and more. There is no limit to how big the universe can get (it may stop expanding eventually, but not because it's reached any kind of size limit - current observations suggest it will just keep going, anyway). I think the closest you get to an upper limit of anything is black holes. Once density increases past a certain point in a region (that point depends on the size of the region, it's smaller for larger regions), it will collapse into a black hole. That kind of puts a limit on densities. --Tango (talk) 21:54, 7 June 2008 (UTC)
Well, universal expansion doesn't necessarily mean the universe is getting bigger (it may already be infinite); what it means is that things are moving farther apart. As for the observable universe, it may be getting smaller. --Trovatore (talk) 22:15, 7 June 2008 (UTC)
True, I guess doubling something infinite doesn't really count as making it bigger. --Tango (talk) 22:49, 7 June 2008 (UTC)
... aaand while we're on the subject, may as well cover the middle. What is the ultimate middlest middle? The "planck middle", if you will? The absolute centerest center of all existence? It can't be my consciousness, because the universe was here long before my chaotically firing neurons showed up  :) --Sam Science (talk) 00:00, 8 June 2008 (UTC)
Well, how about zero? That's the middle of numbers. As for a middle in physics, there isn't one. The universe is generally assumed to be (roughly) homogeneous, which means every point is pretty much the same as every other point. You'll always have an origin for your coordinates, which is basically the centre, but you can move that origin to a different place and, on the cosmic scale, it makes no difference to the laws of physics (obviously, on a smaller scale, it matters - things are far different in deep space than they are on the surface of the Earth, for example, but there isn't much difference between the local cluster of galaxies and some cluster billions of light years away). --Tango (talk) 01:04, 8 June 2008 (UTC)
I would think that, assuming there was a big bang of sorts, the location of that would be a middle of sorts. I'm thinking along these lines: suppose one could easily travel anywhere in the universe; if they travel out far enough from this place they should be able to see it taking place. But travel any farther and they would be able to see nothing that resulted from it. E.g. there is a sphere from that center (that constantly gets larger at the speed of light) outside of which the light-borne information from the big bang cannot be observed. GromXXVII (talk) 11:02, 8 June 2008 (UTC)
The observable universe has a centre - the point where the observer is. The universe as a whole doesn't. The big bang happened "everywhere", since the whole universe was a single point at the beginning (assuming it's finite, which it could well not be, but the basic concept is the same for an infinite universe, just more difficult to visualise). See a couple of paragraphs up where I explained the balloon analogy, note that the balloon has no centre, it's basically a sphere and no point on it is special. --Tango (talk) 11:19, 8 June 2008 (UTC)
[ec]This is a common misconception regarding the big bang. Think about the balloon again (assuming it is perfectly round with no actual air filling hole). When there is no air in the balloon, its surface is a point and its density is infinite. When you start filling it with air, it expands and the distances start increasing. But there is no point on the surface in which the expansion "started". In other words, it's not like there was an infinite space and all the mass was concentrated in a point in this space. The entire space was a point. -- Meni Rosenfeld (talk) 11:21, 8 June 2008 (UTC)
Hmm. I guess whether or not it was valid I was thinking of the universe as being nested within some larger space where it therein expanded. GromXXVII (talk) 16:28, 8 June 2008 (UTC)
Yes, it's very tempting to do that, but it's best not to if you can help it, it can lead to significant confusion. --Tango (talk) 20:27, 8 June 2008 (UTC)
The free physics textbook Motion Mountain describes how all the results of general relativity can be derived starting from the assumption that there is a maximum power (physics) in the universe (or equivalently maximum force, or equivalently maximum rate of mass flow). These maximum amounts are all equal to 1/(4G) times the appropriate power of the speed of light, e.g. the maximum power is c^5/(4G). (G is the gravitational constant.) —Keenan Pepper 03:51, 8 June 2008 (UTC)
I'm an idiot... of course the laws of physics have upper limits, that's the whole premise of relativity... thanks! (See speed of light for the most obvious one.) --Tango (talk) 11:19, 8 June 2008 (UTC)

June 9

20th Century Maths

How have developments in 20th century maths affected our living and social life?

nb: This question has been asked before, but I need some starters. Thanks, 220.244.76.78 (talk) 00:36, 9 June 2008 (UTC)

This is an interesting question. I am sure I will miss some important items, but here is a list of some effects that jump to mind:

  1. Encryption, and specifically public-key encryption schemes, have enabled a lot of what we do on the web today (e-commerce, e-banking, ...)
  2. Game theory and mathematical economics affect commerce, and to a lesser extent law and politics. For example, the idea of emissions trading can be seen as a game theory idea.
  3. Communication, image compression and, for example, DVD technology are based on mathematics such as Fourier analysis, information theory (entropy coding), and coding theory
  4. Statistics plays a very significant role in many different disciplines, e.g., health, political planning,...
  5. There is a lot of sophisticated math that goes into various engineering projects: cars, planes, bridges, buildings... —Preceding unsigned comment added by OdedSchramm (talkcontribs) 01:37, 9 June 2008 (UTC)