Talk:Eigenvalue, eigenvector and eigenspace
Old discussion can be found in the archive.
Vibrational modes - erroneous treatment in the article
The current version of the article claims that the eigenvalue for a standing-wave problem is the amplitude. This is an absurd and totally nonstandard way to formulate the problem, if it even can be made to make sense at all. For vibrational modes, one writes down the equation for the acceleration in the form:

d²x/dt² = −Ax

where x is the amplitude and A is the operator giving the acceleration = force/mass (i.e. from the coupled force equations). In a linear, lossless system, this is a real-symmetric (Hermitian) linear operator. (More generally, e.g. if the density is not constant, one writes it as a generalized Hermitian eigenproblem.) Ordinarily, A is positive semi-definite too.

To get the normal modes, one writes x in the form of a harmonic mode: x = x₀e^(iωt) for a frequency ω. (Of course, the physical solution is the real part of this.) Then one obtains the eigenequation:

Ax₀ = ω²x₀

and so one obtains the frequency from the eigenvalue. Since A is Hermitian and positive semi-definite, the frequencies are real, corresponding to oscillating modes (as opposed to decaying/growing modes for complex ω). Furthermore, because A is Hermitian the eigenvectors are orthogonal (hence, the normal modes), and form a complete basis (at least, for a reasonable physical A). Other nice properties follow, too (e.g. the eigenvalues are discrete in an infinite-dimensional problem with compact support).

If losses (damping) are included, then A becomes non-Hermitian, leading to complex ω that give exponential decay.

Notice that the amplitude per se is totally arbitrary, as usual for eigenvalue problems. One can scale x₀ by any constant and still have the same normal mode.
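(For concreteness, a minimal GNU Octave sketch of the formulation above; the two-mass spring chain is an illustrative assumption, not an example from the article:)

% Two unit masses coupled by unit springs (wall-mass-spring-mass-wall),
% so d2x/dt2 = -A*x with A real-symmetric and positive definite.
A = [ 2 -1;
     -1  2];
[V, D] = eig(A);        % columns of V are the normal modes x0
omega = sqrt(diag(D))   % eigenvalues are omega^2, so omega = 1 and sqrt(3)
V' * V                  % the modes are orthogonal: ~identity matrix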
User:Jitse Niesen claimed at Wikipedia:Featured article review/Eigenvalue, eigenvector and eigenspace that the article was talking about the eigenvalues of the "time-evolution operator" U(t) taking x(0) to x(t). The first problem with this is that such a U(t) is not a well-defined time-evolution operator, because this is a second-order problem and x(t) is not determined by the initial value x(0). You could convert it to a set of first-order equations of twice the size, but then your eigenvector is not the mode shape any more, it is the mode shape along with the velocity profile. Even then, the eigenvalue is only a factor multiplying the amplitude, since the absolute amplitude is still arbitrary. Anyway, it's a perverse approach; the whole point of working with harmonic modes is to eliminate t in favor of ω.
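(And a sketch of the first-order reformulation just mentioned, reusing the illustrative matrix from the sketch above: the eigenvalues become ±iω, and each eigenvector pairs the mode shape with its velocity profile.)

% First-order form of d2x/dt2 = -A*x: d/dt [x; v] = B*[x; v]
A = [ 2 -1;
     -1  2];
B = [zeros(2), eye(2);
     -A,       zeros(2)];
eig(B)                  % purely imaginary pairs: +/-1i and +/-sqrt(3)*1i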
Note that the discussion above is totally general and applies to all sorts of oscillation problems, from discrete oscillators (e.g. coupled pendulums) to vibrating strings, to acoustic waves in solids, to optical resonators.
As normal modes of oscillating systems, from jumpropes to drumheads, are probably the most familiar example of eigenproblems to most people, and in particular illustrate the important case of Hermitian eigenproblems, this subject deserves to be treated properly. (I'm not saying that the initial example needs the level of detail above; it can just be a brief summary, with more detail at the end, perhaps for a specific case. But it should summarize the right approach in any case.)
—Steven G. Johnson 15:29, 24 August 2006 (UTC)
- I readily agree that my comment at FAR was sloppy, and I'm glad you worked out what I had in mind. Originally, I thought that the "perverse" approach was not a bad idea to explain the concept of eigenvalues/functions, but I now think that it's too confusing for those that have already seen the standard approach. -- Jitse Niesen (talk) 12:14, 25 August 2006 (UTC)
- Thanks for your thoughtful response. Let me point out another problem with saying that the "amplitude" is the eigenvalue. Knowing the amplitude at any given time is not enough to know the behavior or frequency. You need to know the amplitude for at least two times. By simply calling the eigenvalue "the" amplitude, you've underspecified the result. —Steven G. Johnson 15:23, 25 August 2006 (UTC)
Request for clarification of the standing wave example for eigenvalues
The standing wave example of eigenvalues isn't very clear. It is just stated that the standing waves are the eigenvalues. Why is this the case? How do they fit the definition / satisfy the criterion for being an eigenvalue? —The preceding unsigned comment was added by 67.80.149.169 (talk • contribs) .
- Actually, the wave is the eigenfunction, not the eigenvalue. Did you not see this part?:
The standing waves correspond to particular oscillations of the rope such that the shape of the rope is scaled by a factor (the eigenvalue) as time passes. Each component of the vector associated with the rope is multiplied by this time-dependent factor. This factor, the eigenvalue, oscillates as time goes by.
- I don't see any way to improve that. —Keenan Pepper 03:10, 7 September 2006 (UTC)
- Except that calling the amplitude the "time-dependent eigenvalue" is horribly misleading and bears little relation to how this problem is actually studied, as I explained above. Sigh. —Steven G. Johnson 20:54, 7 September 2006 (UTC)
I think the new version of the rope example is exactly what I was trying to avoid! I think the time-evolution operator is something anybody can understand. This is also the only operator which is interesting from an experimental point of view. It doesn't require any math knowledge. It doesn't even require the rope to be a Hamiltonian system. The shape of the rope is an eigenfunction of this operator if it remains proportional to itself as time passes by. That it is an eigenvector of the frequency operator (the Hamiltonian) is irrelevant, and also valid only if the system has a Hamiltonian! Vb
- As I explained above, the shape of the rope isn't the eigenvector of this operator. Because it is second order in time, you would need the shape of the rope plus the velocity profile. And there are other problems as well, as I explained above. —Steven G. Johnson 12:48, 8 September 2006 (UTC)
- Sorry for the delay. Perhaps then the statement "the standing wave is the eigenfunction" could be elaborated on a little more. I'm having trouble visualising what that means, and how the notion of a vector whose direction remains unchanged by the transformation applies to this example. My apologies for the confusion. --165.230.132.126 23:35, 11 October 2006 (UTC)
Orthogonality
When are eigenvectors orthogonal? Symmetric matrix says "Another way of stating the spectral theorem is that the eigenvectors of a symmetric matrix are orthogonal." So A is symmetric => A has orthogonal eigenvectors, but does that relation go both ways? If not, is there some non-trivial property of A such that A has orthogonal eigenvectors iff ___? —Ben FrantzDale 20:36, 7 September 2006 (UTC)
- eigenvectors corresponding to distinct eigenvalues are orthogonal iff the matrix is normal. in general, eigenvectors corresponding to distinct eigenvalues are linearly independent. Mct mht 22:31, 7 September 2006 (UTC)
- btw, "symmetric matrix" in the above quote should mean symmetric matrix with real entries. Mct mht 22:34, 7 September 2006 (UTC)
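(A quick GNU Octave illustration of the normal case; the rotation matrix is an illustrative choice:)

% A rotation matrix is normal (A*A' == A'*A) but not symmetric.
A = [0 -1;
     1  0];
[V, D] = eig(A);        % distinct eigenvalues +i and -i
V' * V                  % ~identity: the eigenvectors are orthogonal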
- Thanks. Normal matrix does say this; I'm updating other places which should refer to it such as symmetric matrix, spectral theorem, and eigenvector. —Ben FrantzDale 13:30, 8 September 2006 (UTC)
An arrow from the center of the Earth to the Geographic South Pole would be an eigenvector of this transformation
i think the geographic pole is different from the magnetic pole...-- Boggie 09:57, 21 November 2006 (UTC)
- You are correct that the geographic south pole is different from the magnetic south pole. However, it is the geographic pole that we need here, isn't it? -- Jitse Niesen (talk) 11:33, 21 November 2006 (UTC)
- yes, u r right, i just mixed them up;-) -- Boggie 16:51, 22 November 2006 (UTC)
Existence of Eigenvectors, eigenvalues and eigenspaces
Some confusion about the existence of eigenvectors, eigenvalues and eigenspaces for transformations can possibly arise when reading this article (as it did when I asked a question at the mathematics reference desk, after reading the article half-asleep - my bad!).
My question here would be: what could be done to correct it? The existence of eigenvectors is a property of the transformation, not a property of the eigenvector itself, making it unclear which of the pages needs revision. Does anybody have any ideas?
Also, this article seems a bit long; I suggest placing the applications in a separate article (as are some of the theorems in the earlier sections).
- Putting the application elsewhere would be very damaging. Math without applications is useless, i.e. not interesting!
Continuous spectrum
Figure 3 in the article has a nice picture of the absorption spectrum of chlorine. The caption says that the sharp lines correspond to the discrete spectrum and that the rest is due to the continuous spectrum. Could someone explain some things?:
- What operator it is that this is the spectrum of?
- The Hamiltonian of the chlorine atom.
- If the discrete spectrum corresponds to eigenvalues, are those eigenvalues shown on the x or y axis of the graph?
- The x-values, i.e. the energies of the atomic eigenstates.
- What would the corresponding eigenfunctions be for elements of the spectrum?
- Those are multielectronic wavefunctions which are often approximated by (or better, expanded in a basis of) Slater determinants.
Thanks. —Ben FrantzDale 15:35, 20 December 2006 (UTC)
Going back to featured status!
I am so sad this article is not featured anymore. I did the job to get it featured. I have no time to improve it now. I wish someone could do the job. The article is not pedagogic anymore. Such a pain! Vb 09:38, 28 December 2006 (UTC)
minor typo?
Excerpt from the text: "Having found an eigenvalue, we can solve for the space of eigenvectors by finding the nullspace of A − (1)I = 0."
Shouldn't this be ...finding the nullspace of A - λI = 0?
216.64.121.34 04:06, 8 February 2007 (UTC)
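(For what it's worth, that step is easy to check numerically; a GNU Octave sketch with an illustrative matrix:)

A = [2 1;
     1 2];
lambda = 1;                  % one eigenvalue of A
null(A - lambda*eye(2))      % basis of the eigenspace, ~[0.7071; -0.7071]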
small technical question
It says in the article: A vector function A is linear if it has the following two properties:
Additivity: A(x + y) = A(x) + A(y)
Homogeneity: A(αx) = αA(x)
where x and y are any two vectors of the vector space L and α is any real number.[10]
doesn't α just have to be a scalar from the base field of the vector space, not necessarily a real number? LkNsngth (talk) 23:43, 11 April 2008 (UTC)
- Thanks for the good and concrete question. I am checking it and will try to answer ASAP. --Lantonov (talk) 06:14, 14 April 2008 (UTC)
- Seems that you are right. Korn and Korn (section 14.3-1) say that α is a scalar. I could not find it said there that α is necessarily real. I will look through some more books for this definition and change it accordingly. --Lantonov (talk) 10:24, 14 April 2008 (UTC)
- On the other hand, Akivis, where I took this definition from, explicitly states that α is a real number. --Lantonov (talk) 15:11, 14 April 2008 (UTC)
- Mathworld says that α is any scalar. --Lantonov (talk) 15:14, 14 April 2008 (UTC)
- Shilov - any α --Lantonov (talk) 15:17, 14 April 2008 (UTC)
- Strang - all numbers ... --Lantonov (talk) 15:24, 14 April 2008 (UTC)
- Linear transformation - any scalar α --Lantonov (talk) 15:28, 14 April 2008 (UTC)
- Kuttler - a and b scalars --Lantonov (talk) 15:34, 14 April 2008 (UTC)
- Beezer - --Lantonov (talk) 15:37, 14 April 2008 (UTC)
- Now the last one really convinced me. Changing ... --Lantonov (talk) 15:37, 14 April 2008 (UTC)
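(A tiny GNU Octave check that homogeneity also works with a scalar from the base field C, not just R; the matrix, vector, and scalar are arbitrary illustrative choices:)

A = [1 2;
     3 4];                   % some linear map
x = [1; -1];
alpha = 2 + 3i;              % a complex scalar
A*(alpha*x) - alpha*(A*x)    % -> zeros, as homogeneity requires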
Left and right eigenvectors
Where to put this in the article? Something to the effect: A right eigenvector v corresponding to eigenvalue λ satisfies the equation Av = λv for matrix A. Contrast this with the concept of left eigenvector u which satisfies the equation uTA = λuT. ?? --HappyCamper 04:39, 15 March 2007 (UTC)
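(A short GNU Octave sketch of the contrast, using an illustrative matrix; the left eigenvectors are obtained here as eigenvectors of the transpose:)

A = [2 0;
     1 3];
[V, D] = eig(A);        % right eigenvectors: A*v = lambda*v
[W, L] = eig(A.');      % eigenvectors of the transpose of A
u = W(:, 1);            % a left eigenvector of A
u.' * A - L(1,1) * u.'  % -> zeros: u'*A = lambda*u'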
- It's briefly mentioned in the section "Entries from a ring". Are you saying that you want to have it somewhere else? -- Jitse Niesen (talk) 06:16, 15 March 2007 (UTC)
- No no...just that I didn't see the note on the first reading. I think the little note is plenty for the article already. --HappyCamper 23:36, 16 March 2007 (UTC)
Just a Note
I think the Mona Lisa picture and its description help to clarify these concepts very much! Thanks to the contributor(s)!
Comment on level of presentation
As far as mathematics articles go, this is certainly one of the best. To the general reader, however, it remains quite arcane, but I'm not too sure how it could be improved, or whether a little dumbing down would be either possible or desirable.
To begin with, the earliest eigenvalue problem, I understand, was that of the frequency of a stretched string. Pythagoras didn't solve it, but observed the simple relationship between string lengths which formed harmonies, and also that the frequency depended on the parameters (string tension and length) and not on the excitation.
Using the examples of the Earth's axis and Schrödinger's equation endows the article with a certain ivory-tower feel, as if this subject were only relevant to the 'difficult' problems which 'clever' people preoccupy themselves with. Maybe the intention is to impress rather than inform, but I doubt it; the attempt to communicate with Joe Public appears sincere.
However, eigenvalue problems are to be found everywhere; they are not obscure mathematical concepts applicable only to arcane problems. They are to be found in stock market crashes and population dynamics. Even the 12-inch ruler on the desk, with its buckling load, illustrates an eigenvalue problem. They are to be found in every musical instrument from tin whistle to church organ; in the swinging of a pendulum, the stability of a bullet, the automatic pilot of an aircraft. Such questions as: why does a slender ship turn turtle? Why does a train sway at high speed? Why is the boy racer's dream car a 'whiteknuckles special'? Why do arrows have flights? Why doesn't a spinning top fall over? All of these are much more immediate and relevant to the lay person, who is surely the one we seek to educate.
Some articles do indeed fall into the category of 'ignotum per ignotius': an explanation which is more obscure than the thing it purports to explain. This is not one of them. Gordon Vigurs 07:45, 10 April 2007 (UTC)
- The introduction seems particularly opaque. I'm going to try to say something that a non-mathematician could understand. Rick Norwood 14:29, 27 April 2007 (UTC)
picture
The picture of the Mona Lisa appears only to have been turned a little, and it is not clear that the blue arrow has changed direction. A more extreme distortion, where the blue arrow was clearly pointing in a different direction, would make the point better. If the length of the red arrow were also changed, that would also be good. Rick Norwood 15:02, 27 April 2007 (UTC)
- You misunderstand. It hasn't been turned at all. It's been sheared. There's a link in the caption. The change of the arrow direction is quite clear, and the length of the red arrow does not change in a shear. -- GWO
I understand that it has been sheared. That's why I said "appears" instead of "is". But the average viewer is going to assume that it has been turned, because that's what it "looks like". Also, the change in direction of the blue arrow is not clear, it is a few degrees at most. Which is why a more extreme distortion would better convey the idea that the red arrow keeps its direction while the blue arrow changes direction. Rick Norwood 12:42, 28 April 2007 (UTC)
I find the caption of the Mona Lisa picture a little misleading. The red eigenvector is not an eigenvector because it hasn't been stretched or compressed (which would be the special case of its eigenvalue being equal to 1); it's an eigenvector because the axis it lies along hasn't changed direction. I feel that this should be clarified. 18.56.0.51 03:10, 25 May 2007 (UTC)
Illustration
I don't think the current Mona Lisa illustration is as helpful as it could be. In very many applications, the operator is symmetric and so there is a set of n real orthogonal eigenvectors. In the case of an image transformation, that would be an orthotropic scaling, not necessarily axis-aligned. The current image shows shearing which has only one eigenvector and so isn't representative of most "typical" eigenvector problems. I propose replacing the sheared Mona Lisa with an orthotropically stretched Mona Lisa with two eigenvectors. —Ben FrantzDale 07:14, 29 April 2007 (UTC)
- Let me amend that. I think the example would do better with a Cartesian grid rather than a picture, since a Cartesian grid has a clear origin and so is easier for a novice to connect to linear algebra. Those new to linear algebra won't necessarily connect image transformation with matrix operations. —Ben FrantzDale 12:01, 25 May 2007 (UTC)
- Perhaps a grid could be superimposed on the image? I think it is not such a bad idea to illustrate an "atypical" problem (e.g. to combat the misperception that there is always a basis of eigenvectors). On the other hand, I don't think it is a good lead image, and the text there should be used to annotate the copy of the image in the "Mona Lisa" example (which should be renamed to avoid OR). Something like the electron eigenstates image seems like a more appealing lead image to me. Geometry guy 16:54, 9 June 2007 (UTC)
Great contribution
The lead has been very much improved recently. The definition of linear transformation is now very clear! Thanks to the editors! Vb 12:50, 30 April 2007 (UTC)
Return the serial comma?
The article summary repeatedly uses the serial comma for lists — it seems to me that the title is the only place without it. Admittedly, I'd like to rename the article just for my love of commas; but, as a legitimate reason, shouldn't we rename it for consistency? ~ Booya Bazooka 15:43, 12 June 2007 (UTC)
- Support. —Ben FrantzDale 16:59, 4 November 2007 (UTC)
- Wikipedia on the whole uses the serial comma reasonably consistently. For this reason, and because the serial comma is used throughout the article, and is often needed for clarity, the title of the page, and any remaining outliers should be fixed, for consistency above all.--169.237.30.164 (talk) 11:05, 13 December 2007 (UTC)
Sense or direction?
"An eigenvalue of +1 means that the eigenvector stays the same, while an eigenvalue of −1 means that the eigenvector is reversed in direction."
Shouldn't it be "sense" instead of direction? By definition, the direction of an eigenvector doesn't change.
Somebody should fix it up.
Answer: I'm not sure what you mean here. "Sense" has the same meaning as "direction", but "direction" is clearer because "sense" has other meanings. And yes, the direction of an eigenvector can be reversed by a transformation. The basic equation is AX = λX, where A = transformation matrix, X = eigenvector and λ = eigenvalue. So if λ is negative, the transformed eigenvector AX points in the direction opposite to the original eigenvector X. I believe that this sentence is correct as is. Dirac66 22:01, 3 August 2007 (UTC)
- I think "reversed in direction" makes excellent sense to a lay audience. The direction of an eigenvector is only unchanged if you believe that North is the same direction as South. -- GWO
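(A one-line numerical illustration of the λ = −1 case, with an arbitrary reflection as the example:)

A = [1  0;
     0 -1];      % reflection across the x-axis
A * [0; 5]       % -> [0; -5]: an eigenvector with eigenvalue -1, reversed in direction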
Misleading colors
The colors used to highlight the text in the first picture are rather misleading - blue is usually used for links, while red is usually for broken links. I think just bolding them would be sufficient; the colors are perhaps unnecessary.--Freiddie 18:55, 4 August 2007 (UTC)
Note of thanks and a brief suggestion regarding need for eigenvalues/eigenvectors
Just a quick note of thanks especially for the practical examples of eigenvectors/values such as the Mona Lisa picture and the description of the vectors of the earth and the rubber sheet stretching. I find that having a number of different descriptions helps me generalise the concept, much more so than the mathematical treatment.
I certainly don't imply that the rigorous mathematical treatment should be removed, as this is also important, however I do feel that often mathematical symbology (while very concise and important) hides the meaning of what can actually be simple concepts (at least to the beginner). Thanks to all for providing both practical and theoretical aspects in this article.
As a suggestion, it would also be very helpful to have some simple examples of why eigenvalues/eigenvectors are useful in some different problem domains. For example, why would I want to find the eigenvectors in the first place? How do they help us?
131.181.33.112 07:12, 4 September 2007 (UTC)
Poor Example
The example of how to find an eigenvalue and eigenvector is far too simplistic. Steps are skipped or poorly explained so often that if the reader doesn't already have a good understanding of the material he will not be able to follow along. I think the whole thing needs to be redone, starting with a more typical problem: one with two distinct eigenvalues and two distinct eigenvectors. Because the example has only one eigenvector, it is misleading about what to expect when working with eigenvalues. Marcusyoder 05:35, 17 October 2007 (UTC)
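(For example, a worked 2x2 case of the kind requested might look like this GNU Octave sketch; the matrix is an illustrative assumption:)

A = [2 1;
     1 2];
[V, D] = eig(A)
% D = diag(1, 3): two distinct eigenvalues
% columns of V: ~[0.7071; -0.7071] and [0.7071; 0.7071], two distinct eigenvectors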
Infinite dimensional spaces
I removed the following paragraph from the Infinite-dimensional spaces section.
The exponential growth or decay <!---of WHAT??--> provides an example of a continuous spectrum, as does the vibrating string example illustrated above. The hydrogen atom is an example where both types of spectra appear. The bound states of the hydrogen atom correspond to the discrete part of the spectrum while the ionization processes are described by the continuous part.
(I have also made visible a hidden note in the paragraph.) I am not sure this fits with the rest of the section, and I have a hard time seeing what it is driving at. Thoughts? --TeaDrinker 23:27, 10 November 2007 (UTC)
In answer to "of WHAT??", I suggest the sentence makes more sense if the first word "The" is deleted, leaving "Exponential growth or decay provides an example ..." which would refer to exponential growth or decay in general. Dirac66 00:03, 11 November 2007 (UTC)
- I reintroduced the paragraph and tried to respond to your comments. Vb 15:24, 16 January 2008 (UTC) —Preceding unsigned comment added by 87.78.200.17 (talk)
Algebraic multiplicity
Redirects here, but was not mentioned in the text, so I put in a short definition. The formal definition is:
an eigenvalue λ of A has algebraic multiplicity a if πA(z) = (z − λ)^a p(z), where πA(z) = det(A − zI) is the characteristic polynomial and p(λ) ≠ 0.
Thomas Nygreen (talk) 00:10, 6 December 2007 (UTC)
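(A numerical illustration in GNU Octave, with an illustrative matrix: the characteristic polynomial has a repeated root exactly where an eigenvalue has algebraic multiplicity greater than one.)

A = [2 1 0;
     0 2 0;
     0 0 3];
p = poly(A);     % coefficients of the characteristic polynomial of A
roots(p)         % two roots at 2 and one at 3: lambda = 2 has algebraic multiplicity 2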
Applications
Factor analysis, Principal components analysis, Eigenfaces, Tensor of inertia, Stress tensor.
To me at least the first three are actually the exact same application, just with different types of data. Each of those three has a spread of data, looked at from its center of mass; in the case of the tensor of inertia the data points are each differential element of mass. Then, to find the principal axes, if you didn't know about the eigenvalue/eigenvector shortcut, you'd ask what direction of a best-fit line would give you the least squared error/moment of inertia. The derivative of the error with respect to direction will be zero at the local minima & maxima.
Once you have the components in the covariance matrix/tensor of inertia, you notice that stresses and shears behave exactly the same way as the variance and covariance components. I mean, I think it's no accident that people use sigma as the symbol for stresses. In fact I've found it helpful to use Mohr's circle in those earlier cases of statistics and inertia.
This is the long way of saying that I don't think we should let terminology get in the way of the explanation. There seem to be a few people very interested in this article, so I won't just redo that section without your input. Sukisuki (talk) 14:58, 29 December 2007 (UTC)
- I agree with these comments. I however think this section should not appear as one more mathematical concept, but as examples of practical application in utterly distinct domains. The stress should be on the applications, not on the theory behind them. This section must be appealing to the user of math, not to the mathematicians! Vb 15:04, 16 January 2008 (UTC) —Preceding unsigned comment added by 87.78.200.17 (talk)
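(To make the shared structure concrete, a GNU Octave sketch on synthetic data - an illustrative assumption, not from the article: the principal axes fall out as eigenvectors of the covariance matrix, exactly as with the tensor of inertia.)

X = randn(200, 2) * [2 1; 0 0.5];    % a correlated 2-D point cloud
C = cov(X);                          % symmetric covariance matrix
[V, D] = eig(C)                      % columns of V: principal axes; diag(D): variances along them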
When is P, the matrix of the eigenvectors, to be normed?
Is it right that P never has to be normalized, but that normalizing makes P⁻¹ easier to calculate, since then, if P is orthogonal, P⁻¹ = Pᵀ?
On my calculator (TI-89), however, it always normalizes P when searching for the eigenvectors ... --Saippuakauppias ⇄ 09:14, 16 January 2008 (UTC)
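(A quick GNU Octave check of why the normalized form is convenient, with an illustrative symmetric matrix:)

A = [2 1;
     1 2];
[P, D] = eig(A);   % Octave returns eigenvectors normalized to unit length
P' - inv(P)        % ~zeros: for symmetric A the normalized P is orthogonal, so inv(P) = P'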
Vibrating System
I feel that this article should reflect the importance of the eigenvalue problem in solving for the frequencies and damping ratios of reciprocating systems. I think this should be in the 'applications' section. 130.88.114.226 (talk) 13:03, 23 January 2008 (UTC)
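(For example, a minimal sketch of the usual state-space approach, with arbitrary illustrative parameters: the complex eigenvalues of the first-order system matrix carry both the damped frequency and the damping.)

% Damped oscillator x'' + 2*zeta*w*x' + w^2*x = 0 in first-order form
w = 2; zeta = 0.1;
A = [0, 1;
     -w^2, -2*zeta*w];
eig(A)    % -0.2 +/- 1.9899i: real part gives the decay rate, imaginary part the damped frequency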
Structure
I think the definite turnoff point is the definitions. Do we have to make so many definitions in such a small space, even of things we haven't mentioned so far? MathStyle says that it is good to have 2 definitions for a thing: 1 formal, and 1 informal, and many explanations around each definition. Changing the structure of the article to introduce everything one at a time will make it more accessible. The lead section and history are tolerable. My suggestion for the structure after them is:
- Introduce eigenvector by easy examples
- Make a formal definition
- Make informal definition or, alternatively
- Explain the formal definition with more examples
- Do 1-4 for eigenvalue
- Do 1-4 for eigenspace
- Introduce and/or define the characteristic equation
- Introduce and define eigenmatrix
- Finding eigens for the 2-case
- Finding eigens for the 3-case
- Introduce complications: zero eigenvectors and equal eigenvalues
- Calculations in the complicated cases: complex matrices
- Linking those concepts with matrix orthogonalization as it is where most applications come from
- Applications from simple to complex, starting with geometry on a plane, space, n-dim, then electrodynamics, relativity, etc.
Although I like the image of Mona Lisa, having it twice is too much. Also, a heading "Mona Lisa" on a section, which is an example of finding specific eigenvalues is just a little bit misleading. --Lantonov (talk) 13:33, 11 February 2008 (UTC)
Five of the 8 definitions in section "Definitions" are dealt with. The remaining 3 will go to characteristic equation when that section is repaired. --Lantonov (talk) 10:43, 13 February 2008 (UTC)
Concern over new "Eigenvalue in mathematical analysis" section
I see the section Eigenvalue, eigenvector and eigenspace#Eigenvalue in mathematical analysis has just been added. Now, I'm no expert mathematician, but I fail to see what this has to do with eigenvalues and eigenfunctions, in that the way this section defines them appears to have nothing to do with the "standard" definition, as used by the rest of this article.
The "standard" definition of "eigenvalue" is, roughly, the scaling that an eigenvector/eigenfunction undergoes when the transformation in question is applied. This new section appears to define it as, roughly, the value of the transformation parameter such that non-zero solutions are obtained when the transformation is inverted. I just don't see how this is in any way related!
Oli Filth(talk) 12:32, 13 February 2008 (UTC)
- After some thought, I see what the example in that section is getting at, i.e. it may be rewritten as:
- However, this connection is less than clear. Furthermore, the next example is x² = λ, which appears to be a non-linear transformation. Again, it is less than clear what this has to do with eigenvalues, etc., as the rest of the article defines them for linear transforms only.
- Going back to the first example, even if the connection were explained more clearly, is this really anything more than a specific application of linear algebra, which is only possible because the RHS happened to be [0 0]T? In other words, is this section really imparting anything new to the reader, which they wouldn't have got already by reading the rest of the article? Oli Filth(talk) 12:55, 13 February 2008 (UTC)
I put this section under this provisional name because I was concerned with the definition, and I wanted to make it more general. Of course, it is out of place here. Most of the material will go to extend the "standard" definition, after it becomes clarified in the "standard" way by illustrating it with simple examples of linear transformations. After the basics of eigenvector scaling are introduced come the matrix notation of the definition of eigenvectors, and also matrix notation of systems of linear equations, and matrix methods for their solution (Cramer, Gram-Schmidt, etc.). The latter bear direct relationships to eigenvectors and eigenvalues. I put in the non-linear transformation in an attempt to extend the definition to non-linear transformations but am hesitating whether to introduce it that early. I think better not. As you see, there is still much work on the definition and introductory material, so most of the effort goes there. --Lantonov (talk) 13:19, 13 February 2008 (UTC)
As for the notation, I prefer to use HTML wherever possible because I hate the way math symbols come out when transformed from TeX. --Lantonov (talk) 13:34, 13 February 2008 (UTC)
- In general, the style should at least be consistent with the rest of the existing article. Oli Filth(talk) 13:43, 13 February 2008 (UTC)
Agreed, sure enough. --Lantonov (talk) 13:45, 13 February 2008 (UTC)
Scrapped the section. Some of the material in it can be fitted somewhere later. --Lantonov (talk) 14:08, 13 February 2008 (UTC)
Request for attention from an expert no longer needed.
At the top of this article there is a notice saying that "This article or section is in need of attention from an expert on the subject." In view of the very thorough revision by Lantonov over the last four weeks, I think that this notice is no longer necessary. Also this talk page does not indicate what is supposedly wrong with the article. Therefore I have decided to remove the notice, with thanks to Lantonov for much hard work well done. If anyone else wants to replace the notice, please indicate what you think should still be changed. Dirac66 (talk) 17:06, 11 March 2008 (UTC)
- Thanks, Dirac. I am not yet half done. At the moment I am making figures with Inkscape to illustrate shear and rotation. A good program, but not easy to work with. --Lantonov (talk) 17:14, 11 March 2008 (UTC)
Incorrect Rotation Illustration?
At first I thought I had found a nice illustration for complex linear maps, but then I found that there seem to be several mistakes: Here the author assigns u1 = 1 + i and calls this a vector. Also, I found the illustration very confusing: the complex plane is the Im(y) part, but it also has an x-component?? The "eigenvector" has one complex component, not two, as it should. I believe that's the reason for all this mess - the space is complex-2D, and thus real-4D, and so cannot be displayed... Flo B. 62.47.208.168 (talk) 21:07, 1 May 2008 (UTC)
- I cannot understand the source of your confusion. Vectors on the complex plane correspond to complex numbers and have one real x component, and one imaginary iy component. The complex plane itself is determined by one real X axis and one imaginary iY axis. The iY axis alone is one-dimensional and cannot determine a plane. The complex plane is NOT the Im(y) part, in fact, Im(y) for complex y is a REAL number. Real numbers are a subset of the complex numbers, and in the same way vectors that lie on the x axis are a subset of vectors in the complex plane for which Im(y) = 0. The two complex conjugated eigenvectors have a common real component and opposite sign complex components, as they should. The X axis is the only geometric site of points that are common for the real and complex planes (this quoted from the text). The only difference between the complex and Euclidean planes is the definition of the point z = Infinity so that the complex plane represents a projection of a sphere on a plane. See, e.g., Korn & Korn, section 1.3-2 and section 7.2-4, or some other more detailed book on complex geometry. Also, see the following link [1] for an animated illustration of roots of quadratic equation on the complex plane or this one [2] for the roots of a cubic. --Lantonov (talk) 05:55, 7 May 2008 (UTC)
- On thinking a bit more, I think I found where the confusion springs from. You are thinking about a general vector in the complex plane which is determined by two points on the complex plane which are, in general, complex numbers. Instead, the eigenvectors u1 and u2 are radius vectors, respectively, of the points 1 + i and 1 - i, and as such are determined only by those two points (the other ends of those vectors are in the coordinate origin O, and that's why they are complex conjugates). --Lantonov (talk) 10:02, 7 May 2008 (UTC)
- I have to admit there is one mistake though, which I knew from the start but did not correct for the sake of not burdening the exposition with details, because this requires a long explanation. It concerns normalization of eigenvectors. As they are now in the picture, u1 and u2 are not normalized as required for eigenvectors. To be normalized, they must have moduli equal to 1. As it is now, their moduli are √2 ≈ 1.414 > 1. --Lantonov (talk) 06:00, 8 May 2008 (UTC)
- So it is, the accepted practice is to represent normalized eigenvectors in normalizable spaces. Anyway, I may leave the picture as it is for the time being until it needs some more important change. --Lantonov (talk) 12:01, 8 May 2008 (UTC)
- Ok, thanks for participating, and allaying my qualms. Cheers. --Lantonov (talk) 05:48, 9 May 2008 (UTC)
- Lantonov -- what you've written above literally makes no sense. Please don't revert the page until you can explain what you think
- The complex eigenvectors u1 = 1 + i and u2 = 1 − i are radius vectors of the complex conjugated eigenvalues and lie in the complex plane
- means? How can an eigenvalue have a radius vector?
- Do you not see that a 2D complex vector CANNOT be represented meaningfully in a 3D picture?
- That arbitrarily adding the imaginary part of Y as a third dimension is not at all helpful to anyone's understanding, since without the complex part of X no one can see how that causes the vector to be scaled? And since complex multiplication on the Argand plane looks like rotation, even if the picture made sense (which it doesn't), it doesn't inform, because it STILL looks like rotation, not multiplication.
- Seriously, trying to show 4D vectors rotating on a 3D picture is doomed to failure, and your picture, with its nonsense caption, does not aid understanding. -- GWO (talk)
- See section "Complex plane" below. 4D plane? 2D vector? Where have you taken such notions from? --Lantonov (talk) 08:31, 14 May 2008 (UTC)
Complex plane
Hi, Gareth Owen. I reverted your deletion of the rotation matrix illustration and text. As much as I appreciate the other corrections you did on the article, I am certain that you are wrong on this one.
First, the definition of the complex plane. A plane cannot be 4D; it is always 2D, be it real or complex. To quote the definition from the article Complex plane: "In mathematics, the complex plane is a geometric representation of the complex numbers established by the real axis and the orthogonal imaginary axis. It can be thought of as a modified Cartesian plane, with the real part of a complex number represented by a displacement along the x-axis, and the imaginary part by a displacement along the y-axis. The complex plane is sometimes called the Argand plane because it is used in Argand diagrams. ..." If you do not believe in Wiki, look in the first convenient textbook of complex geometry. As a most widely used one, I can recommend Korn & Korn (see the References in the article). Section 1.3-2 there reads: "The complex number z = x + iy is conveniently drawn as a point z = (x, y) or the corresponding radius vector on the complex plane. Axes Ox and Oy (in the orthogonal Cartesian coordinates) are called, respectively, real and imaginary axes. The abscissa and the ordinate of each point z are drawn, respectively, as the real part x and the imaginary part y of the number z." This answers also your second objection: a radius vector on the complex plane corresponds to a point (a complex number) on the complex plane in the same way as a radius vector on the real plane corresponds to a point (a real number) on the real plane. I can quote half a dozen more books I have at my disposal but do not find it necessary because they do not differ on these points. The diagram in the beginning of the article Complex plane is an almost exact copy of the same diagram in Alexandrov, Shilov, Pigolkina, and almost any other book of analytic geometry I leaf through. The same diagram is also part of the rotation picture here. Another good source is [3] and the references therein.
Please do not mix the dimension of an object with the number of coordinates that are used to represent it. Thus, a point in real 3D space has 0 dimensions, although it is represented with 3 coordinates (x, y, z). Its radius vector is one-dimensional, and it is expressed also with these 3 coordinates. In the same way, a complex radius vector, if it is on the complex plane, is represented with 2 coordinates (components) - 1 real (X axis) and one imaginary (iY axis). A point in 3D projective space is represented with 4 homogeneous coordinates, and so on.
As for your third objection, about the Y axis (real) and iY axis (imaginary), look in the rotation picture to see that they are not collinear. The Y and iY axes intersect in O, so they are at an angle different from 0, 180, 360, ... degrees. I use colors in diagrams in order for people (to be more precise, the majority of us who are not color-blind) to orient themselves better in 3D pictures drawn on a 2D sheet. Thus, in this picture, which depicts a 3D space, the real plane (the plane of rotation) is in blue, and the complex plane is in yellow. The 3 axes (basis) of this 3D space are the following: real X axis, real Y axis, and imaginary iY axis that determines a complex dimension (one dimension, one axis). The real and complex planes are at an angle ≠ 0, 180, 360 degrees to each other and intersect at the X axis. I really cannot be more clear or explicit than this. If you have any objections, or read some different books than those listed here, please discuss this before deleting. I would like especially to see where it is written that a complex plane is 4D and a complex vector is 2D. --Lantonov (talk) 07:02, 13 May 2008 (UTC)
Sorry, but while I was writing this you posted before me on the talk page, resulting in an edit conflict. I will not revert your changes back today because of the 3RR rule. Read all of the above, also read the references listed, and convince yourself. I expect you to reinstate the rotation picture yourself. --Lantonov (talk) 07:06, 13 May 2008 (UTC)
There is one thing wrong in the caption, though, and I will correct it. The eigenvectors are radius vectors in the complex plane, all right, but they are not radius vectors of the eigenvalues; they are radius vectors of the complex numbers (points) u = 1 + i and ū = 1 − i (more precisely, the normalized u = √2/2 + (√2/2)i and ū = √2/2 − (√2/2)i, as they are given in many books), respectively, while the eigenvalues are λ1 = cos φ + i sin φ and λ2 = cos φ − i sin φ. --Lantonov (talk) 08:00, 13 May 2008 (UTC) Anyway, thanks to this discussion, I got an idea for a simpler picture. I will change it and reinsert it again. --Lantonov (talk) 09:58, 14 May 2008 (UTC)
- Really, it's hard to know where to begin...
- First, the definition of complex plane. A plane cannot be be 4D, it is always 2D, be that real or complex.
- That depends. In the vector space sense, C is a one-dimensional complex vector space, but it's isomorphic to R^2 - i.e. it requires two real numbers to pin down a location.
- But you're dealing with C^2 - that's a 2D complex space, but it's isomorphic to R^4, i.e. it requires 4 real numbers to pin down a single point. So when we're thinking about drawing a picture, we need four dimensions. Just like we need two dimensions to draw the complex plane (which is a 1D vector space over the complex numbers).
- And that's why your diagram cannot help but be confusing. There's simply no way to draw a picture of those 4-real-dimensions.
- Let's imagine we're only interested in C, not C^2.
- Considered as a vector space over the complex numbers, this is a 1D space, and every linear map is simply multiplication by a complex constant. Let's pick a simple T, corresponding to multiplication by the complex unit i. Now, every complex number, z, is an eigenvector with eigenvalue i, because Tz = iz, by definition.
- So let's draw the Argand diagram and see what happens to 1+i under this map:
[two Argand diagrams, side by side: the point 1+i on the left maps to the point −1+i on the right]
1+i maps to -1+i
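(Numerically, in GNU Octave:)

i * (1 + i)    % -> -1 + 1i: scaling by i in C looks like a 90-degree rotation of the real plane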
- So, when we plot the complex numbers onto two real axes (i.e. a 1D complex space mapped onto a 2D real space), the transformation that we know to be a 1D scaling by i looks exactly like a rotation in the real plane. And that diagram could not convince a lay person that (in the complex space) our point is just being scaled, not rotated. It's simply not a helpful explanatory tool to someone who doesn't already know what's going on.
- Now imagine what happens if you try to draw this diagram based on C^2, resulting in a picture that require 4 real axes... Well, I hope you can see why I don't think your picture is very informative.
- GWO (talk) 06:12, 15 May 2008 (UTC)
- The eigenvectors are radius vectors in the complex plane, all right, but they are not radius vectors of the eigenvalues, they are radius vectors of the complex numbers (points) u = 1 + i and ū = 1 - i (more precisely, the normalized u = √2/2 + (√2/2)i and ū = √2/2 − (√2/2)i, as they are given in many books), respectively, while the eigenvalues are λ1 = cos φ + i sin φ and λ2 = cos φ - i sin φ
- I'm sorry, but your use of the standard terminology is all over the place here.
- It simply makes no sense, when dealing with C^2 (as we are), to talk about "radius vectors of the complex number 1+i". Single complex numbers are not elements of C^2; they're elements of the scalar field. It's quite apparent that you're terribly confused about what's going on. The complex eigenvectors of a 2x2 rotation matrix each have two complex components. A 2D complex vector simply cannot be a "radius vector of a scalar".
- In fact, that last phrase is literally meaningless. Restricting ourselves to a +90 degree rotation, please explain to me what you think it means to say that the vector (1, ±i)ᵀ is a radius vector of the scalar 1 ± i.
- -- GWO (talk)
- "It simply makes no sense, when dealing with C^2 (as we are), to talk about "radius vectors of the complex number 1+i"." I do not understand what you are talking about. No, we are not dealing with C^2 (whatever you mean by this). We are dealing with a rotation in a 2D real plane in the positive (counterclockwise) direction about the origin O.
- "Single complex numbers are not elements of C^2, they're elements of the scalar field." Of course; as geometric elements, complex numbers are points that lie in a 2D complex plane, real numbers (Im(z) = 0) are points that lie on the real X axis, and purely imaginary numbers (no real part - Re(z) = 0) are points that lie on the imaginary iY axis. I do not understand where you get such strange notions as the above quote. Scalars can be real or complex; algebraically they are numbers, and geometrically they are points. You can see this in every single geometry book that you care to open. Also, scalars are 0-dimensional, vectors and lines are 1-dimensional, and planes and surfaces are 2-dimensional, always. Elements with more than 2 dimensions are called hypersurfaces, and their dimensions are specified additionally. Thus, the sphere is a 3-dimensional hypersurface, and a tesseract is a 4-dimensional hypersurface. That's standard elementary geometry, which is all that is needed to understand rotation.
- "The complex eigenvectors of a 2x2 rotation matrix each have two complex components." Now this is outright wrong. The 2x2 rotation matrix has 2 complex eigenvectors, and each of these eigenvectors has one real component which is measured on the real X axis and one imaginary component that is measured on the imaginary iY axis. It appears that you mix here the terms "complex" - "imaginary" and "component" - "dimension". The components of the eigenvector that is associated with the eigenvalue λ1 are cos φ (real component, on the X axis) and i sin φ (imaginary component on the iY axis). The components of the eigenvector that is associated with the eigenvalue λ2 are cos φ (real component, on the X axis) and - i sin φ (imaginary component on the iY axis). Please look at Fig. 1 in Complex plane to get visual input (you can see this also in the figure that you deleted). Geometrically, those eigenvectors are radius vectors of their corresponding eigenvalues and lie in the complex plane which has 2 dimensions (basis (e1, e2)) with the 2 basis vectors respectively along the X and iY axes. Note that the real components of the two eigenvectors are the same: cos φ, and they both lie on the real X axis (they are congruent). This is why the 2 eigenvalues and their associated radius vectors (eigenvectors) are called complex conjugated - their real parts (components) are conjugated (fused, congruent, ...). No need for additional dimensions (basis vectors, coordinate axes), sorry. --Lantonov (talk) 10:12, 15 May 2008 (UTC)
- I'm sorry, but your use of the standard terminology is all over the place here. No need to be sorry for this. This is an encyclopedia to be read by all - specialist and non-specialist alike and this is why use of a standard terminology is highly recommended. I always strive to use standard terminology in order to be better understood. Non-standard terminology is only to be used in highly specialised texts. --Lantonov (talk) 10:26, 15 May 2008 (UTC)
- About your use of Argand diagrams. Here we do not talk about a rotation in the complex plane. We are talking about a rotation in the real plane. To obtain the complex plane, we do not map the whole real plane (2D). We map only the real Y axis into an imaginary iY axis, so that, e.g. number 1 becomes i, and sin φ becomes i sin φ. --Lantonov (talk) 10:38, 15 May 2008 (UTC)
- "Restricting ourselves to a +90 degree rotation, please explain to me what you think it means to say that the vector (1, ±i)ᵀ is a radius vector of the scalar 1 ± i." Readily. It is easy to explain, although too long (I lost the better part of the day explaining here). First, there is no such animal as a vector (1, ±i)ᵀ. Those are two vectors: (1, i)ᵀ and (1, −i)ᵀ. Geometrically, these are vectors with equal moduli (lengths), √2. They lie on the complex plane, which has a real axis X and imaginary axis iY. The origin of both vectors is point O, which is the coordinate origin. The end of the first vector is the point (complex scalar) C(1, i), and the end of the second vector is the point (complex scalar) C*(1, −i) (BTW, there is no scalar 1 ± i). All 3 points (O(0,0), C(1, i), and C*(1, −i)) lie in the complex plane (which is 2D, determined by the 2 axes X (real) and iY (imaginary)). Therefore, the two vectors also lie in the complex plane. Moreover, they are radius vectors of the 2 said scalars in the complex plane because they originate at the origin of the coordinates. If we look at these 2 vectors as rotated relative to the X axis, then (1, i)ᵀ is rotated counterclockwise at 45° and (1, −i)ᵀ is rotated clockwise at −45°. Sorry, but I have no more time to lose in explanations. If you want to clear this out for yourself, please look in the references that I gave for you above, as it appears that you have not done this effectively so far. --Lantonov (talk) 11:27, 15 May 2008 (UTC)
- "Considered as a vector space over the complex numbers, this is a 1D space, and every linear map is simply multiplication by a complex constant. Let's pick a simple T, corresponding to multiplication by the complex unit i. Now, every complex number, z, is an eigenvector with eigenvalue i, because Tz = iz, by definition." I am not sure what you want to show with this, but if you want to present T as a linear transformation, it goes by the rules of all linear transformations. You say that T is simply the complex unit i (a number); therefore T is a scalar. This transformation transforms each vector z into the vector iz. Now let's consider dimensions. A 1D space has only one basis vector and therefore only one coordinate axis, so the vector z will have only one component (if it is not zero). It could be a real component (on the real X axis) or an imaginary component (on the imaginary iY axis). If it is imaginary, when multiplied with i, it will give i × i × z = −z. In your example it gives iz when multiplied by i; therefore z is a real number. Geometrically, z is a point on the X axis whose radius vector is ze1, with e1 (also designated i) being the single basis vector lying on the X axis. iz is a point on the iY axis, which is a purely imaginary number. Thus, the transformation T maps a point (or its corresponding radius vector) on the X axis onto a point (or its corresponding radius vector) on the imaginary iY axis. Now let us consider the general case of a complex 2D plane. As repeated everywhere, this has one real X axis and one complex iY axis. Each vector not on the axes will have 1 real component and 1 imaginary component, the vector itself remaining 1-dimensional. Let the component on the X axis be x and the component on the iY axis be iy. The transformation is (x, iy)ᵀ ↦ (−y, ix)ᵀ. This is not homothety because, in general, x ≠ y. This transformation maps the complex vector (x, iy)ᵀ onto the complex vector (−y, ix)ᵀ, so we have mapping of the X axis on the iY axis and simultaneous mapping of the iY axis on the X axis, and a corresponding change of components. This is not the case that we consider here. The above transformation never can be a proper rotation. For one thing, AᵀA = −I and not I. The matrix A = [0, i; i, 0] is not in the rotation group. The complex rotation matrix is [exp(iφ), 0; 0, exp(−iφ)] and it acts on real vectors (all components of which are real). When such a rotation matrix is multiplied with its Hermitian transpose, it gives the identity matrix. The 2x2 rotation matrices considered here act on real vectors in the real space, and the eigenvalues and eigenvectors that are obtained are generally complex numbers and complex vectors in the complex 2D space (complex plane). --Lantonov (talk) 13:23, 15 May 2008 (UTC)
- Tell me then: what is [exp(iπ/4), 0; 0, exp(−iπ/4)] * [7; 5]? Is it a vector of two real numbers? A single point in the real plane? A single complex number? A single point in the complex plane? Or none of these things? -- GWO (talk)
- Well, now we get to the crux of the matter. The above is a rotation in the real plane (axes X and Y real) of a real vector with components 7 units on the X axis and 5 units on the Y axis. It is in the first quadrant. This vector rotates by 45° (π/4) counterclockwise in the real plane. The result will be a real vector in the second quadrant of the real plane whose components are real numbers. I can calculate the exact components of the rotated vector but I have really no time now, I have to go. Listen to my advice, read the references I gave, and you will receive answers to your questions. --Lantonov (talk) 16:51, 15 May 2008 (UTC)
- No. You see. It's not. I have a PhD in applied mathematics, and I don't need your references to do basic matrix algebra.
- If you'll excuse the use of decimals:
- i) exp(iπ / 4) = 0.70711 + 0.70711i
- ii) exp(−iπ / 4) = 0.70711 − 0.70711i
- Therefore:
m = [exp(iπ/4), 0; 0, exp(−iπ/4)] = [0.70711 + 0.70711i, 0; 0, 0.70711 − 0.70711i]
- So
m * [7; 5] = [4.9497 + 4.9497i; 3.5355 − 3.5355i]
- Don't believe me: try running this code in GNU Octave
octave> m = [exp(i*pi/4) 0; 0 exp(-i*pi/4)];
octave> m * [7;5]
ans =

   4.9497 + 4.9497i
   3.5355 - 3.5355i
OK I've got it
- "It simply makes no sense, when dealing with C^2 (as we are) to talk about "radius vectors of the complex number 1+i." I do not understand what are you talking about. No, we are not dealing with C^2 (whatever you mean by this). We are dealing with a rotation in a 2D real plane in the positive (counterclockwise) direction about the origin O.
I understand now. You have a serious miscomprehension of what it means to say that the matrix [cos φ, −sin φ; sin φ, cos φ] has complex eigenvalues and eigenvectors. You seem to believe that this means we can treat a planar rotation as multiplication by a complex. It is true that we CAN do this, but this is not what we mean when we talk about the complex eigenvectors. What this means is we can find a vector (z1, z2)ᵀ with A (z1, z2)ᵀ = λ (z1, z2)ᵀ, where z1 and z2 are complex numbers. That's what "C^2" means. It means an ordered pair of complex numbers. Something like this: (1 + i, 2 − 3i)ᵀ. It's not a single complex number, and it doesn't exist on a single complex plane. It's a vector of complex numbers and it exists in the Cartesian product of two complex planes (which we call C^2). Specifically, the vector (1, i)ᵀ is NOT the same thing as the complex scalar 1+i.
I can add elements of C^2 together: (1, i)ᵀ + (1, −i)ᵀ = (2, 0)ᵀ. But I cannot add a complex vector to a complex scalar: (1, i)ᵀ + (1 + i) is undefined.
- They lie on the complex plane which has a real axis X and imaginary axis iY
No no no no and no. The complex scalar 1+i lives on the complex plane. The complex vector (1, i)ᵀ, like the complex vector (1, −i)ᵀ, is an entirely different entity. You are confusing the two. -- GWO (talk)
- I do not think that "we can treat a planar rotation as multiplication by a complex" - whatever you mean by this. By complex number? By complex matrix? You are right (and I am wrong) to say that [exp(iπ/4), 0; 0, exp(−iπ/4)] * [7; 5] will give a complex vector. However, this is not a vector function (transformation) if the vector (7, 5)ᵀ is on the real plane. To quote from the article: "In a vector space L a vector function A is defined if for each vector x of L there corresponds a unique vector y = A(x) of L." The vector transformation acts on a vector in L, and the result is also a vector in L. This is an endomorphism. Anything else is not a vector transformation. The implication is, e.g., that if a real vector is multiplied by a complex scalar, this is not a vector transformation. The elements of the transformation matrix must be scalars from the base field of the vector space. With this in mind, the above example [exp(iπ/4), 0; 0, exp(−iπ/4)] * [7; 5] can be a vector transformation iff the vector space has complex scalars as its base field. The vector space must be defined by defining a basis in it in order that a transformation has a meaning as a vector transformation. Writing C^2 implies that the base field consists of complex scalars, so that the above transformation is an endomorphism of C^2. With the deleted figure, I did not intend to show, e.g., that the real plane is transformed by rotation to a complex plane. This is very wrong. All that I was trying to show is to illustrate the eigenvalues and eigenvectors on the complex plane. I do not mix vectors (1, i)ᵀ and scalars (1 + i) either. I think that they are different elements, the same as the point P(1,1) is different from the vector (1, 1)ᵀ which is its radius vector in the real Cartesian plane. Could you explain in some more detail your strong objection to "They lie on the complex plane which has a real axis X and imaginary axis iY"? Maybe it is in using i in the column vector (1, i)ᵀ? Should this i become part of the basis (e1, i e2)? Or something else? --Lantonov (talk) 08:12, 17 May 2008 (UTC)
- All that I was trying to show is to illustrate the eigenvalues and eigenvectors on the complex plane.
- The problem with trying to show this is that it is not true. The eigenvalues are on the complex plane. The eigenvectors are elements of the Cartesian product of two complex planes, denoted C2 (just as the Cartesian product of two real lines is called R2). That's my strong objection.
- The eigenvalues lie in the field, the eigenvectors lie in the vector space. OK?
- The complex eigenvalues of a 2x2 matrix lie in C.
- The complex eigenvectors of a 2x2 matrix lie in C2.
- You can plot the eigenvalues on C. You can't plot the eigenvectors on C, because they are not complex numbers, they are ordered pairs of complex numbers.
- I'll demonstrate (again). Let A be a 45 degree rotation matrix:
A = [cos(π/4), −sin(π/4); sin(π/4), cos(π/4)] = (√2/2) * [1, −1; 1, 1]
- The eigenvalues of A are exp(iπ/4) and exp(−iπ/4). Ok. They are complex scalars (i.e. elements of C), and you can plot them on an Argand diagram.
- The corresponding eigenvectors of A are (1, −i)ᵀ and (1, i)ᵀ.
- Now, I don't know how many times I need to keep saying it but THESE VECTORS ARE NOT THE COMPLEX SCALARS 1+i and 1-i. OK? Can you agree with that? They ARE NOT ELEMENTS OF THE SET OF COMPLEX NUMBERS in exactly the same way that the vector $\begin{pmatrix}1\\ 1\end{pmatrix}$ is NOT AN ELEMENT OF THE SET OF REAL NUMBERS. - A vector of real numbers is not itself a real number. It is not an element of the real line R, but it is in R^2 (the Cartesian product of two copies of the real line).
- A vector of complex numbers is not itself a complex number. It is not an element of the complex plane C, but it is in C^2 (the Cartesian product of two copies of the complex plane).
- You cannot plot the vector $\begin{pmatrix}1\\ 1\end{pmatrix}$ on the real line, because it is not a real number. - You cannot plot the vector $\begin{pmatrix}1\\ i\end{pmatrix}$ on the complex plane, because it is not a complex number. If you try, and you have, you will fail, and you did. - The basis is still $e_1 = \begin{pmatrix}1\\ 0\end{pmatrix}$, $e_2 = \begin{pmatrix}0\\ 1\end{pmatrix}$. The difference is you make complex vectors by multiplying the basis vectors by complex scalars. So
$\begin{pmatrix}1\\ i\end{pmatrix} = 1 \cdot e_1 + i \cdot e_2$
- or
$\begin{pmatrix}1\\ -i\end{pmatrix} = 1 \cdot e_1 + (-i) \cdot e_2$
- or in general
$\begin{pmatrix}z_1\\ z_2\end{pmatrix} = z_1 e_1 + z_2 e_2$
- where z1 and z2 are complex numbers.
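A quick numerical check of the above (a sketch only; it assumes NumPy, takes the 45-degree angle from the demonstration, and the printed values are rounded):

 import numpy as np

 # Eigen-decomposition of the 45-degree rotation matrix A.
 theta = np.pi / 4
 A = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
 vals, vecs = np.linalg.eig(A)
 print(vals)        # approx [0.707+0.707j, 0.707-0.707j]: scalars in C
 print(vecs[:, 0])  # a column of two complex numbers: an element of C^2,
                    # proportional to (1, -i) or (1, i), not a point of C

The eigenvalues print as single complex numbers; each eigenvector prints as an ordered pair of complex numbers, which is exactly the distinction being made here.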
- Look, I'm fed up with politely correcting you, so I've asked for arbitration. I appreciate you're well meaning, but your grasp of this subject leaves an awful lot to be desired. If you really attend VA Tech, please stroll over to the maths department and ask a complex variable professor whether they think the complex eigenvectors of a 2x2 matrix can be plotted on the complex plane, because I'm not getting paid to wade through any more of your half-informed screeds. -- GWO (talk) 09:40, 17 May 2008 (UTC)
- Ok, thanks for the detailed explanation. Sorry for taking up your time. This explanation will help me edit and correct myself what I have written about rotation. Please accept my apology for offending you with the advice to read references. In fact, I am glad to have you here correcting mistakes. --Lantonov (talk) 10:06, 17 May 2008 (UTC)
- See answers to your comments in the following section. --Lantonov (talk) 11:27, 21 May 2008 (UTC)
[edit] Complex vectors
These are point by point answers to comments above:
- The eigenvectors of rotation are $\begin{pmatrix}1\\ i\end{pmatrix}$ and $\begin{pmatrix}1\\ -i\end{pmatrix}$. True. - Now, I don't know how many times I need to keep saying it but THESE VECTORS ARE NOT THE COMPLEX SCALARS 1+i and 1-i. OK? Can you agree with that? True. Agreed.
- They ARE NOT ELEMENTS OF THE SET OF COMPLEX NUMBERS in exactly the same way that the vector $\begin{pmatrix}1\\ 1\end{pmatrix}$ is NOT AN ELEMENT OF THE SET OF REAL NUMBERS. True. - A vector of real numbers is not itself a real number. True.
- It is not an element of the real line R, but it is in R^2 (the Cartesian product of two copies of the real line). False. A vector (directed intercept) can be an element of the real line. Namely, if we take a point with coordinates (a, 0), where a is a real number, as point 1, and a point with coordinates (b, 0), where b is a real number, as point 2, and build a vector between point 1 and point 2, this vector $\begin{pmatrix}b-a\\ 0\end{pmatrix}$ will be an element of the real line because it lies in the real line R. This is so because in the underlying field of scalars of R^2 there are scalars (a, 0). Vectors on the real line are a subset of vectors in R^2. Furthermore, the real line can be a one-dimensional vector space because it is closed under addition and multiplication by a real scalar. Check: $\begin{pmatrix}a\\ 0\end{pmatrix} + \begin{pmatrix}b\\ 0\end{pmatrix} = \begin{pmatrix}a+b\\ 0\end{pmatrix}$ and $c\begin{pmatrix}a\\ 0\end{pmatrix} = \begin{pmatrix}ca\\ 0\end{pmatrix}$, and the resulting vectors are in R. - A vector of complex numbers is not itself a complex number. True. However, we must be careful what we understand by a "vector of complex numbers". In the notation $\begin{pmatrix}z_1\\ z_2\end{pmatrix} = z_1 e_1 + z_2 e_2$, where z1 and z2 are complex numbers (scalars) and where $z_k e_k$ is supposed to mean multiplication of a vector by a scalar and not a vector product, a vector for which z1 is real and z2 is complex is also a complex vector. Note that the 2 eigenvectors of rotation are exactly of this type: z1 is real, and z2 is complex. More specifically, z1 = 1 and z2 = i or z2 = −i. - It is not an element of the complex plane C, but it is in C^2 (the Cartesian product of two copies of the complex plane). False, if stated in this way. True, if it is supposed to mean that the complex plane is not a vector space of complex vectors. A complex vector can be an element of a complex plane C for a similar reason as the above. Such a plane is defined if one takes 2 lines (one-dimensional spaces) such that one line contains the whole set of real numbers (line X) while the other contains the whole set of purely imaginary numbers (line Y). Then make the Cartesian product $X \times Y$ to obtain a set of ordered pairs (x, iy). If these elements are points, they will form a two-dimensional space, in which each point has coordinates (x, iy) where x and y are real. If we take in this plane a point with coordinates (x1, iy1) as point 1 and a point with coordinates (x2, iy2) as point 2 and direct an intercept between these two points in such a way that point 1 is the origin, and point 2 is the end, we will obtain (build) a vector in the complex plane which as a column matrix is $\begin{pmatrix}x_2 - x_1\\ i(y_2 - y_1)\end{pmatrix}$. This vector will be an element of the complex plane because it is built in this plane as a vector and lies in this plane. Whether such a plane is a vector space is another matter. It is NOT a vector space because it is not closed under multiplication by a complex number, and such numbers are in the set of the underlying field of scalars. - You cannot plot the vector $\begin{pmatrix}1\\ i\end{pmatrix}$ on the complex plane, because it is not a complex number. If you try, and you have, you will fail, and you did. False. I can plot the vector $\begin{pmatrix}1\\ i\end{pmatrix}$ on the complex plane. For this I must use the standard textbook definition of the complex plane: it is a plane of 2 real axes (rays, directed lines, composed of real points only) that are the geometric loci of points corresponding to the real Re(z) and imaginary Im(z) parts of the complex number z = x + iy. Axis X contains the points for x (all real numbers), and axis Y contains the points for y (all real numbers). X and Y are orthogonal. The Cartesian product $X \times Y$ gives the set of ordered pairs (x, y). In order for this set to be a ring with a unit, we define two operations: addition (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2) and multiplication (x1, y1) . (x2, y2) = (x1 . x2 + y1 . y2, x1 . y2 + x2 . y1). These operations satisfy the conditions of associativity, distributivity, and multiplicative closure. Also define the unit as (1,1) and zero as (0,0). We do not need to define an inverse element for the present purpose (we do not need a field) because a ring with unit is all that is needed to define a vector space. Those elements (x, y) defined in such a way are the scalars. We define vectors as $\begin{pmatrix}x\\ y\end{pmatrix}$. To define a vector space over the ring of scalars, we define the following two operations: addition of vectors $\begin{pmatrix}x_1\\ y_1\end{pmatrix} + \begin{pmatrix}x_2\\ y_2\end{pmatrix} = \begin{pmatrix}x_1 + x_2\\ y_1 + y_2\end{pmatrix}$ and multiplication of a vector by a scalar, $(a, b)\begin{pmatrix}x\\ y\end{pmatrix}$, carried out via the ring multiplication defined above. These two operations should satisfy all the necessary conditions for a vector space: commutative addition, closure upon multiplication by a scalar, associativity for multiplication by a scalar, distributive laws and so on (too lazy to check this). To cut it short, now we have a vector space which is one-dimensional relative to the scalars (x, y) but two-dimensional relative to the real numbers x and y (which are scalars (x, 0) and (0, y) on the X and Y axes). I need again to stress: this is a real vector space and the vectors and scalars in it are real. However, vectors $\begin{pmatrix}x\\ y\end{pmatrix}$ have all the properties of complex vectors $\begin{pmatrix}x\\ iy\end{pmatrix}$, and scalars (x, y) can be added and multiplied as complex scalars x + iy. Other properties of complex numbers can be easily obtained if we suitably define a field of (x, y) to have division and other operations with complex numbers. Now we have all that is needed to plot the eigenvalues x + iy and eigenvectors $\begin{pmatrix}x\\ iy\end{pmatrix}$ on the complex plane. I did it, and I succeeded. Note that we need only 2 real numbers x and y and this helps to plot eigenvalues and eigenvectors in two dimensions.
I understand what all those comments are driving at: to represent the diagonalized rotation matrix $\begin{pmatrix}\cos\varphi + i\sin\varphi & 0\\ 0 & \cos\varphi - i\sin\varphi\end{pmatrix}$ as a homothety in the complex C^2 space. This can also be drawn on a two-dimensional sheet because we need only two real numbers. For it, X and Y can contain all complex numbers, but this is not good for further theorizing. Therefore, it is preferable that X contains only the complex numbers x + iy and Y contains their conjugates x − iy. In this way axis X can be represented by the upper half-plane plus the real axis, and Y can be represented by the lower half-plane plus the real axis; these, in fact, are the eigenspaces of the two eigenvectors of rotation. As a unit on X we will choose 1 + i, and as a unit on Y, 1 − i. All vectors on such a plane must be defined so that they form a vector space with vectors of the type $x\begin{pmatrix}1\\ i\end{pmatrix} + y\begin{pmatrix}1\\ -i\end{pmatrix}$ where x and y are real. We need only 2 real numbers so we can draw this on a sheet. Then the diagonalized matrix is a homothety with eigenvalue sin φ. I might be mistaken somewhere in the last paragraph, especially in the construction of this supposed complex space, and welcome any good-faith corrections. Also, I would welcome any arbitration or third-editor opinion on this. --Lantonov (talk) 11:14, 21 May 2008 (UTC)
I see first mistake - not closed under multiplication by scalars. --Lantonov (talk) 13:04, 21 May 2008 (UTC)
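For reference, the standard diagonalization being aimed at reads (a sketch in conventional notation; this particular eigenvector matrix P is one common choice, not something fixed by the discussion above):

$\begin{pmatrix}\cos\varphi & -\sin\varphi\\ \sin\varphi & \cos\varphi\end{pmatrix} = P \begin{pmatrix}e^{i\varphi} & 0\\ 0 & e^{-i\varphi}\end{pmatrix} P^{-1}, \qquad P = \begin{pmatrix}1 & 1\\ -i & i\end{pmatrix}, \qquad P^{-1} = \frac{1}{2}\begin{pmatrix}1 & i\\ 1 & -i\end{pmatrix}$

In the eigenbasis the rotation multiplies each eigenvector by the unit-modulus scalar $e^{\pm i\varphi} = \cos\varphi \pm i\sin\varphi$, so the diagonal form scales by unit complex numbers rather than by sin φ.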
- You said "A vector (directed intercept) can be an element of the real line." That is true in a way: as you point out, some 2-D vectors lie on the x axis, so in as much as the x axis is isomorphic to the real number line, two-vectors of the form (x, 0) are isomorphic to the real numbers. However, arbitrary two-vectors cannot be considered elements of the real line. The thing is, vectors are first-order mathematical objects, even if we often write them in a coordinate system. Using vector operations alone, how can I write the two-vector v as an element on the real line? I can't: R2 is not isomorphic to R. —Ben FrantzDale (talk) 21:37, 23 May 2008 (UTC)
I can add vectors in R space and I can multiply them by scalars of R space. The resulting vectors are in R. Fill in the rest by yourself, because it seems someone is trying to set limits on my discussion space. --Lantonov (talk) 12:24, 26 May 2008 (UTC)
[edit] Movements
Movements are affine transformations, this is true. Affine transformations are a more general class, in which the new reference frame is, in general, not necessarily the same as the initial reference frame. The necessary and sufficient conditions for a transformation to be linear are additivity and homogeneity. Geometrically, planar movements are classified as three types: those that preserve the direction of infinitely many lines (translations), those that preserve the direction of exactly one line (shears: horizontal and vertical), and those that preserve only one point (rotations). Of those, shear and rotation are linear transformations in non-homogeneous coordinates. Translation is a linear transformation in homogeneous coordinates. In addition to those three, affine transformations include scaling, which is not a movement. Do you agree with the above? If not, what is wrong? --Lantonov (talk) 08:22, 19 May 2008 (UTC)
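To make the classification concrete, here is a minimal sketch (the angle, shear factor and shift are arbitrary choices of mine, not taken from the discussion):

 import numpy as np

 # Rotation and shear act on (x, y) directly as 2x2 matrices;
 # translation only becomes a matrix in the homogeneous 3-coordinate form.
 phi = np.pi / 6
 rotation = np.array([[np.cos(phi), -np.sin(phi)],
                      [np.sin(phi),  np.cos(phi)]])
 shear = np.array([[1.0, 0.5],    # horizontal shear with factor 0.5
                   [0.0, 1.0]])
 translation = np.array([[1.0, 0.0, 2.0],   # shift by (2, 3)
                         [0.0, 1.0, 3.0],
                         [0.0, 0.0, 1.0]])
 print(rotation @ np.array([1.0, 0.0]))          # rotated unit vector
 print(shear @ np.array([0.0, 1.0]))             # [0.5 1.]: direction changed
 print(translation @ np.array([1.0, 0.0, 1.0]))  # [3. 3. 1.]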
This was helpful. I will not answer immediately; I need a specialist in projective geometry and projective spaces for this. For starters, check this: "Since a translation is an affine transformation but not a linear transformation, homogeneous coordinates are normally used to represent the translation operator by a matrix and thus to make it linear." in Translation (geometry). Mistake? --Lantonov (talk) 13:53, 21 May 2008 (UTC)
- Translation is a linear transformation in homogeneous coordinates.
- Again, you're writing sentences that don't mean anything. In vector space theory the concept of "homogeneous coordinates" has absolutely no relevance. Sentences don't become true just because you add a lot more irrelevant jargon. Vector spaces are simple structures. Their properties do not depend on your choice of representation.
- Translations are not linear maps, because they are not additive (except for translation by zero). Please don't discuss them in this article; they simply do not belong here.
- You are simply not allowed to move the origin of vector spaces about, only change the bases with respect to the same origin. The zero element of a vector space is not dependent on the choice of basis. Bases are not the same as co-ordinate systems.
- End of discussion.
- Please stop writing articles about subjects you simply don't understand. -- GWO (talk)
The above is truer than true. Translation defies both conditions of linearity: additivity and homogeneity. I will not start arguing against that. However, there are some things that are nagging. In the matrix representation of translation in the plane (a 3x3 matrix, I don't have time to write it here), the eigenvalue is 1 with algebraic multiplicity 3. It generates 2 non-zero eigenvectors that must be interpreted in some way, probably in terms of lines at infinity (2-dimensional objects in projective geometry). I am not going to do it because projective geometry is inherently distasteful to me. I may heed your advice and go on to more interesting subjects. --Lantonov (talk) 06:10, 22 May 2008 (UTC)
Somehow, I got interested in this subject (a fleeting interest though), and dug out some more. In projective geometry, we have a class of elements at infinity: the point at infinity (one-dimensional), the line at infinity (two-dimensional), and one plane at infinity (three-dimensional), which is the whole space. Each element at infinity is represented by infinitely many finite elements. Finite elements are the usual elements of Euclidean geometry -- points, lines, and planes. So a point at infinity is represented by infinitely many parallel lines, and a line at infinity is represented by infinitely many parallel planes. All finite parallel lines intersect at a point at infinity, and all finite parallel planes intersect at a line at infinity. Finite elements that are representatives of one element at infinity are exactly one and the same element -- the element at infinity. Thus, all parallel lines are a point at infinity, and all parallel planes are a line at infinity. In homogeneous coordinates, this central concept is expressed like this: In the plane, all elements are written with 3 coordinates: (x/r, y/r, r), where x and y are the usual coordinates and / is division. r is any real number. Because we have division, there are 2 important cases: 1) r is 0 and 2) r is not 0. If r is 0, the element is at infinity. If r is not 0, the element is finite. There are infinitely many elements for which r is not 0, but only one element for which r is 0. Those infinitely many finite elements with r not 0 are representatives of that one element with r = 0. In coordinates: (x, y, 1) is exactly the same as, e.g., (x/2, y/2, 2). This is all that is needed initially to treat translation. Going to vectors:
- It's wrong in this context, yes. You can write a translation like that, as a matrix, but when you do, it ceases to be a map on a vector space. Vector spaces are closed under addition and scalar multiplication, so the set of all vectors (w1,w2,w3,1) does not form a vector space, since (a) 0 is not an element, (b) it's not closed under addition, and (c) it's not closed under scalar multiplication. We gain a trick for representing translations as matrices, but we lose the ability to add vectors in any meaningful sense. Since that representation breaks vector addition, we also break the definition of linear. You agree that linearity means
- T(x + y) = T(x) + T(y) and
- T(λx) = λT(x)?
- That's the very definition of linearity, right?
- Let T be a translation by p:
- T(u + v) = u + v + p right? But
- T(u) + T(v) = (u + p) + (v + p) = u + v + 2 * p.
- T(λx) = λx + p right? But
- λT(x) = λ(x + p) = λx + λp.
- Forget about matrix representations, are you really telling me that's a linear map?
- It violates both the axioms that define linearity.
- If that's a linear map, I'm Ann Coulter. -- GWO (talk)
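The failure is easy to check mechanically (a sketch; the shift p and the test vectors are arbitrary choices):

 import numpy as np

 # Translation by p violates both linearity axioms unless p = 0.
 p = np.array([3.0, 4.0])
 T = lambda x: x + p

 u, v = np.array([1.0, 0.0]), np.array([0.0, 2.0])
 print(T(u + v))            # [4. 6.]  = u + v + p
 print(T(u) + T(v))         # [7. 10.] = u + v + 2p  -> additivity fails
 print(T(2 * u), 2 * T(u))  # [5. 4.] vs [8. 8.]     -> homogeneity fails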
(x, y, 1) and (2x/2, 2y/2, 2) is one and the same vector: a representative of a vector that lies in a point at infinity (which is a one-dimensional object): the vector (x/0, y/0, 0). Check the conditions for additivity and homogeneity for translation with such vectors (the finite ones: r not 0) and you will see that both conditions are satisfied if you have in mind the above. I knew this before, but I have difficulty in the interpretation of the 2 eigenvectors of translation: (1, 0, 0) and (0, 1, 0). Anyway, I do not intend to pursue this topic. --Lantonov (talk) 09:18, 22 May 2008 (UTC)
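For comparison, the usual textbook convention for homogeneous representatives is (x, y, w) ~ (x/w, y/w, 1) for w ≠ 0, with w = 0 naming a point at infinity; a minimal sketch of that convention (not of the (x/r, y/r, r) notation above):

 import numpy as np

 # All nonzero multiples of (x, y, w) name the same projective point.
 def normalize(v):
     x, y, w = v
     if w == 0:
         return None  # point at infinity: no finite representative
     return np.array([x / w, y / w, 1.0])

 print(normalize(np.array([2.0, 4.0, 2.0])))  # [1. 2. 1.], same point as (1, 2, 1)
 print(normalize(np.array([1.0, 2.0, 0.0])))  # None: the direction (1, 2) at infinity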
"A linear map (also called a linear transformation or linear operator) is a function between two vector spaces that preserves the operations of vector addition and scalar multiplication. The term "linear transformation" is in particularly common use, especially for linear maps from a vector space to itself (endomorphisms)." The linear map (since you prefer this term over transformation or operator)
- You're overcomplicating everything again. It's really nothing to do with projective geometry (another subject of which you clearly have only a sufficiently-superficial understanding to write long rambling incoherent paragraphs. A little knowledge is a dangerous thing, especially among over-eager wikipedians.).
- The problem is that the switch to and from homogeneous coordinates is not itself a linear map (because of the "1" in the last co-ordinate). So when you compose your linear matrix operations with this non-linear map, the overall transformation is not linear.
- I've no interest in discussing projective geometry with someone who doesn't know what a vector space is, or what linear maps are, so go and annoy the projective geometry people. Or, better, go work on some biology articles, as your CV suggests you do actually know something about biology. -- GWO (talk)
transforms the vector (x, y, 1) into the vector (x + p1, y + p2, 1). Both vectors are in the same vector space, so this linear map is an endomorphism. Now tell me, where do you see here "a switch to and from homogeneous coordinates"? Also, how can I "compose a linear matrix operation with a non-linear map" when a "linear matrix operation" (more exactly, operator) is the same as a "linear map"? And this time try to be more polite because my patience with you is running out. --Lantonov (talk) 12:08, 22 May 2008 (UTC) About vector space:
Axioms of closure:
- 1) associative: a + (b + c) = (a + b) + c
- 2) commutative: a + b = b + a
- 3) identity element: a + 0 = a (there is only one zero vector over the whole vector space)
- 4) inverse elements: a + (−a) = 0
- 5) distributivity for scalar multiplication over vector addition: λ(a + b) = λa + λb
- 6) distributivity for scalar multiplication over field (better: ring) addition: (λ + μ)a = λa + μa
- 7) scalar multiplication is compatible with multiplication in the field (better: ring) of scalars: λ(μa) = (λμ)a
- 8) identity element: 1a = a
- 9) a + b ∈ V
- 10) λa ∈ V
I do not need "a switch to and from homogeneous coordinates" in defining the vector space either. The definition in the vector space article is incorrect in one place though: "vector space over the field F". We do not need a field of scalars (no need for a⁻¹) to define a vector space; it is sufficient to have a ring with a multiplicative unit. In this respect, the definition in Korn & Korn, section 12.4-1 is better. You can see from this how that statement of yours: "Vector spaces are closed under addition and scalar multiplication, so the set of all vectors (w1,w2,w3,1) does not form a vector space, since (a) 0 is not an element, (b) it's not closed under addition, and (c) it's not closed under scalar multiplication" is plain wrong. --Lantonov (talk) 15:30, 22 May 2008 (UTC)
- What is it that you claim to be a vector space V here?
- Is it the set of vectors (x, y, 1) or the set (x, y, z)?
- Because if it's the former, then surely (x1, y1, 1) + (x2, y2, 1) = (x1+x2, y1+y2, 2), which is not in V?
The third element is NOT the coordinate z, it is any real number.
- If it's the former, you've not got closure, unless you now claim that 1 + 1 = 1, which might (and I stress, might) be your most wrong-headed suggestion yet.
- If it's the latter then your zero element is not in the space.
- Either way, you've not defined a vector space.
- If you define precisely your set, how to add two such vectors, and how to multiply a vector and a scalar, I'll be able to tell you more precisely why you're wrong. At the moment, you're molding your definitions on the fly.
- Oh, and you absolutely need a field for a vector space. Otherwise, you've just got a module (mathematics). And modules are a pain, because you can't find bases very easily. -- GWO (talk)
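The closure question itself takes two lines to check (a sketch, assuming the ordinary componentwise operations):

 import numpy as np

 # With standard addition and scaling, triples ending in 1 are not closed.
 u = np.array([1.0, 2.0, 1.0])
 v = np.array([3.0, 4.0, 1.0])
 print(u + v)  # [4. 6. 2.]: last entry is 2, so the sum leaves the set
 print(0 * u)  # [0. 0. 0.]: the zero vector is not of the form (x, y, 1)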
All vectors in the definition for the vector space are vectors in the plane, and they have only two non-homogeneous coordinates. So this is the set of vectors (x/r, y/r, r)
- This doesn't mean anything. I'm not hung up on z. We can call it q if you like.
- Is your space the set of vectors (x, y, q) where x, y, q are real numbers? If so, what is your concept of vector addition in this space?
and not the set (x, y, z). And the zero element is in the space because vectors of the type (0/r, 0/r, r) are in the space. Adding of vectors and multiplication by scalars is the same as in non-homogeneous coordinates. "Module over a ring" is a generalization of vector space which is more general than a module over a ring with multiplicative identity. --Lantonov (talk) 16:48, 22 May 2008 (UTC)
- So, all the vectors in the set are of the form (x, y, 1)... but (0/r, 0/r, r) is in the set? What? Can you not see that that's completely contradictory? (And, again, you haven't defined vector addition or scalar multiplication.) - You can, of course, define a vector space by the set of all (x, y, 1) with addition defined by (x1, y1, 1) ⊕ (x2, y2, 1) = (x1+x2, y1+y2, 1) and scalar multiplication λ(x, y, 1) = (λx, λy, 1).
- That's a perfectly well defined vector space (it's just a copy of the real plane, with a completely superfluous "1" attached to every pair of co-ordinates). Your zero element is just (0, 0, 1).
- Only get this: because you're not using the standard definition of "addition" on the third component, multiplication by a 3x3 matrix does not define a linear map. Matrix multiplication is only linear when we use the standard rules of vector addition!
- Check this: T((x1, y1, 1) ⊕ (x2, y2, 1)) = (x1+x2+p1, y1+y2+p2, 1) but T(x1, y1, 1) ⊕ T(x2, y2, 1) = (x1+x2+2p1, y1+y2+2p2, 1)
- And, once again, translation is not linear!
- Alternatively, if you claim (0, 0, 0) is in the set, we can use normal vector addition (x1, y1, z1) + (x2, y2, z2) = (x1+x2, y1+y2, z1+z2)
- So now we've got normal matrix multiplication, hence linearity, but the matrix isn't a translation anymore! It's only a translation when we force the last co-ordinate to be "1".
- And, once again, you're completely wrong! You're batting 1.000 on wrongness here! You're the Ted Williams of incorrect! The Don Bradman of not-the-right-answer! The Babe Ruth of completely-wrongheaded-nonsense! Want to play again?
- -- GWO (talk)
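Numerically the point looks like this (a sketch; the shift (3, 4) is an arbitrary choice): the 3x3 matrix is perfectly linear as a map on all of R^3 with standard addition, and the non-linearity only appears when the last coordinate is forced back to 1.

 import numpy as np

 T = np.array([[1.0, 0.0, 3.0],
               [0.0, 1.0, 4.0],
               [0.0, 0.0, 1.0]])
 u = np.array([1.0, 1.0, 1.0])
 v = np.array([2.0, 5.0, 1.0])
 # Matrix multiplication respects the standard addition on R^3:
 print(T @ (u + v))    # [ 9. 14.  2.]
 print(T @ u + T @ v)  # [ 9. 14.  2.]: equal, as linearity on R^3 demands
 # But u + v has last coordinate 2, so it no longer encodes a plane point;
 # rescaling it back to (1.5, 3, 1) is the non-linear step.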
And, once again, you're completely wrong! You're batting 1.000 on wrongness here! You're the Ted Williams of incorrect! The Don Bradman of not-the-right-answer! The Babe Ruth of completely-wrongheaded-nonsense! Want to play again?
I like this. Too bad that I understand nothing about the sport you refer to. --Lantonov (talk) 08:42, 23 May 2008 (UTC)
[edit] Oooh, I'd missed this bit
- Axioms of closure:
- 9) a + b ∈ V
- 10) λa ∈ V
This is genius. I love this. At exactly the crux of your definition of V, you dodge the bullet. At precisely the point where you had to choose between (x1+x2, y1+y2, 1) or (x1+x2, y1+y2, 2), you suddenly get all coy and just write a + b ∈ V. Similarly, when you have to choose between (λx, λy, 1) and (λx, λy, λ), again no detail, just a dishonest little λa ∈ V. You spell absolutely everything else out longhand, and when you get to the bit you can't be consistent about, you write ∈ V. That's truly magnificent.
Until now, I've assumed good faith, but I've just realised: you know you're wrong. If you didn't know you were wrong, you would have provided enough detail to hang yourself right there. -- GWO (talk)
- I tried to explain to you above the whole of projective geometry and homogeneous coordinates in a nutshell, but you dismissed everything as "rambling" and evidently didn't read it, much less try to understand it. That's why you completely fail to understand what these formulae are telling you. Well, I will try to explain for a second and, I hope, last time. You failed to see that I write homogeneous coordinates in the form (x/r, y/r, r), didn't you? This is not my whim; that's how those coordinates are defined. So don't make me explain about the double quotient, etc., because it will be too much rambling without anybody listening to it. Usually, I give my students a D for such homework.
Let me see where to start from. Probably with your biggest mistake of failing to see the meaning of homogeneous coordinates. Now heed this. !!!VERY IMPORTANT!!! Vector (x, y, 1) is exactly the same vector as (2x/2, 2y/2, 2), (3x/3, 3y/3, 3), (rx/r, ry/r, r), and infinitely many other vectors. All these vectors are the same as the vector (in non-homogeneous coordinates) (x, y), and all these vectors are in the vector space. If the last coordinate is not 0, these vectors are the usual vectors in the Euclidean plane. If the last coordinate is 0, then the vector (x/0, y/0, 0) is in the projective space; more exactly, it is built on the point at infinity (a 1D object). The vector (x/0, y/0, 0) is in the vector space. For starters, this is enough to debunk the composition you are gloating over. I will start from the last of your statements because I do not want to read everything. Go back and correct the rest accordingly.
This is genius. I love this. At exactly the crux of your definition of V, you dodge the bullet. At precisely the point where you had to choose between (x1+x2, y1+y2, 1) or (x1+x2, y1+y2, 2), you suddenly get all coy and just write a + b ∈ V.
I will write this more explicitly, to fill in the details you missed. (x1, y1, 1) + (x2, y2, 1) = (2(x1+x2)/2, 2(y1+y2)/2, 2) is the same as (x1+x2, y1+y2, 1).
The vector (2(x1+x2)/2, 2(y1+y2)/2, 2) is in the vector space, and is exactly the same as the vector (x1+x2, y1+y2, 1). Got this? Now correct the rest to see if you understand it. --Lantonov (talk) 06:56, 23 May 2008 (UTC)
- I think you have the normalization backwards. For example, it should be (x y 1) = (2x 2y 2). But with that, addition no longer works: (x y 1) + (x y 1) = (2x/2 2y/2 2/2) = (x y 1). Also, you say the vector (x/0 y/0 0) is in the vector space. I assume you mean (0x 0y 0) is in the vector space; but how do you normalize that? Divide by zero? While homogeneous coordinates are tremendously useful, I don't think they are a vector space in the usual sense. After all, the entire idea is to remove the zero from the vector space to get an affine space. That is why addition and scalar multiplication don't work. The only operation that makes sense is matrix operations. That's not to say the eigenvectors of those matrix operations aren't meaningful, but it's not going to have the usual meaning after perspective division. —Ben FrantzDale (talk) 11:29, 23 May 2008 (UTC)
- BenFrantzDale, welcome to this discussion. It promises to get interesting and productive. Sorry, but your additions are wrong too. Your mistake is dividing the last coordinate by two in your addition. You see, the last coordinate is not a coordinate in the usual sense. It is any real number, including 0. So to write your examples explicitly, it is:
- (x y 1) + (x y 1) = (2*2x/2 2*2y/2 2)
- If you do not understand why this is so, give more examples to thrash that out.
- Your example (x y 1) = (2x 2y 2) is wrong too. It should be:
- (x y 1) = (2x/2 2y/2 2)
- The trick is that the divisor should equal the last coordinate. The other 2 coordinates should be multiplied according to the divisor so that the balance is maintained: x = x/1 = 2*x/2 = 1000*x/1000 and so on. This is done depending on the last coordinate.
- You correctly observed that (x/0 y/0 0) is not written very correctly if by x and y are understood the usual, non-homogeneous coordinates. In such a case, it should be written (0*x/0 0*y/0 0), but the geometric meaning of this vector is now different; it is not exactly the same as (x y) because (0*x/0 0*y/0 0) is in projective space while (x y) is in the Euclidean plane. Let's also observe that 0/0 is a finite number, so it can be a bona fide coordinate.
- After all, the entire idea is to remove the zero from the vector space to get an affine space. Exact. Objects with 0 as the third (or fourth) coordinate are the pets of projective geometry, and are defined extensively as the point (1D), line (2D), and plane (3D) at infinity. The plane at infinity is one single object and it is the whole space. Translation sends its eigenvectors into projective space and rotation sends its eigenvectors into complex space, and this is why they are inherently difficult to understand. Translation has eigenvalue 1 with algebraic multiplicity 3 and geometric multiplicity 2. The two eigenvectors are (1 0 0) and (0 1 0) and are in the projective space (0 as the third coordinate). Regards. --Lantonov (talk) 12:20, 23 May 2008 (UTC)
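That multiplicity claim is easy to verify (a sketch; the shift (3, 4) is an arbitrary choice):

 import numpy as np

 # Homogeneous translation matrix: eigenvalue 1, algebraic multiplicity 3.
 T = np.array([[1.0, 0.0, 3.0],
               [0.0, 1.0, 4.0],
               [0.0, 0.0, 1.0]])
 print(np.linalg.eigvals(T))                  # [1. 1. 1.]
 # Geometric multiplicity = 3 - rank(T - I) = 2, with eigenspace spanned
 # by (1, 0, 0) and (0, 1, 0), both having third coordinate 0.
 print(np.linalg.matrix_rank(T - np.eye(3)))  # 1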
- I disagree; I maintain that in homogeneous coordinates, (x y 1) = (2x 2y 2) up to perspective division. You say that
- (x y 1) = (2x/2 2y/2 2),
- but if we apply perspective division to that, we get
- (x y 1) = (x/2 y/2 1)
- which implies x = x/2. Surely you aren't claiming that? —Ben FrantzDale (talk) 00:10, 24 May 2008 (UTC)
- When one works only with finite elements, perspective is applied as you state, by multiplying the coordinates with the perspective factor. In this case, yes, we have indeed (x y 1) = (2x 2y 2) with a perspective factor 2. The difficulty comes with infinite elements when the perspective factor is 0. Then all vectors have coordinates 0 so there is no way to work with them. If instead of perspective factor, we apply perspective quotient, in this case, 2/2, then finite quotients 0/0 can be handled. We must be careful in handling the perspective quotient in this case as we have to distinguish between 2*x/2, and 1*x/1. We cannot have vectors of the type (x/2 y/2 1). All three numbers (3rd coordinate, divisor and multiplier) must be equal. --Lantonov (talk) 06:34, 26 May 2008 (UTC)
- Lantonov, thanks for the reply. You've convinced me that you are talking about something I don't quite understand but that you have read and seem to understand. I may read this back over and try to understand what you are talking about. That said, I think this discussion has strayed from talk of Eigen-things. —Ben FrantzDale (talk) 00:19, 28 May 2008 (UTC)
- I can explain the rationale behind it but since talk strayed from Eigens, I can explain it on my or your talk pages, if you want. The reason is not in what I said here. --Lantonov (talk) 09:04, 28 May 2008 (UTC)
[edit] Enough is enough
I commend the other editors for their inordinate patience with Lantonov, but at some point one simply has to say that he/she obviously doesn't know what he/she is talking about, and cannot comprehend the limits of his/her knowledge in this subject. There is no point in expending kilobytes of discussion on this nonsense. Wikipedia is not a debating society.
Here is a simple test: Lantonov, can you point to a reputable published source that describes your so-called vector space etc., using your terminology and definitions? If not, we will all merely point you to the WP:NOR policy every time you post.
—Steven G. Johnson (talk) 14:39, 23 May 2008 (UTC)
- In the future, the WP:NOR policy is a great way to avoid at least some of these endless debates (and was originally designed for just such a purpose). Just ask them for a reference supporting their novel interpretation/theory/definition/notation/etc., and if they can't provide one, then you can dismiss the issue without further argument. —Steven G. Johnson (talk) 21:59, 23 May 2008 (UTC)
- Yes, I can point to a reputable published source that describes "my so-called vector space". It is my textbook in analytic geometry, which I studied in my second year of undergraduate study in Mathematics in Sofia University about 30 years ago. It is: Petkanchin, Boyan (1966) Analytic Geometry. Science and Arts Publishers, 3rd edition, Sofia, 846 pp. Unfortunately, this book is not with me at the moment. Tomorrow I will take it from my home and provide you with exact locations and citations from it, should you need them. About the reputability of Academician Prof. Dr. Boyan Petkanchin, see the article in Bulgarian Wiki bg:Боян Петканчин. There, this book is listed as Аналитична геометрия. Наука и изкуство, Sofia, 1952, 807 pp.; 2nd ed. 1961, 843 pp.; 3rd ed. 1966, 846 pp. Note that most of my contributions in this article are provided with a source, and the larger parts of the references and inline citations here are provided by me, including proofs of theorems in Wiki books. --Lantonov (talk) 14:04, 26 May 2008 (UTC)
- The book is with me now. The axioms for the vector space are on pp. 330-331. --Lantonov (talk) 06:59, 27 May 2008 (UTC)
- How convenient that it is impossible for almost anyone on the English Wikipedia to read this book and check that it backs you up, or whether you are fundamentally misunderstanding it (or mistranslating its terminology into English). We have to insist that you provide a reputable reference in English. If this is a textbook definition as you claim, then you should be able to find a widely available English textbook that has the same material. —Steven G. Johnson (talk) 02:33, 28 May 2008 (UTC)
- One of the reasons I did not include this material in the article was that I was unable to find an English textbook on it. Another reason was that it is too old. I do not know who are the "we" and on what grounds they are insisting, but I have to point out that there is no requirement in the guideline for scientific references that the text be in English. The most important requirement is that the text be academic, and the book that I point to is purely academic, as you can see it in the Sofia University curricula up until the mid 1990s when it was replaced by more modern textbooks. I can translate relevant parts of the text verbatim but, given the mistrust with which my contributions are met here, I doubt it is worth the effort. Alternatively, I can scan the relevant pages and send them to anyone interested; however, I am not sure if this infringes the author's copyright. I see no point in it anyway, as most of my text, sourced from widely available English books and provided with inline references, was deleted. Still, I will look for English sources and will give them here when I find some. I have 3 sources in English that describe the general properties of projective spaces but none of them explicitly includes the defining axioms as the above textbook does. For such a general description, you can look, of course, in the article Projective space and references therein. I can give you, however, an English book that says verbatim: "in the homogenous coordinates the translation is a linear transformation". It is: Treil, Sergei. 2004. Linear Algebra Done Wrong. Brown University Press. Sergei Treil is from the Department of Mathematics at Brown University. The cited text is on p. 33. When you read it, you will see that there is nothing wrong in it, in spite of its title, which is obviously a calque of the Schaum series book: Linear Algebra Done Right. However, if you tell me the opposite statement: "In homogenous coordinates translation is NOT a linear transformation", I will believe you and will not argue about it. I will accept that I "fundamentally misunderstand" something. Obviously, translation cannot be both linear and non-linear. Anyway, things may be more complicated and you may not want to explain them to me because you think they are beyond my ability to comprehend. --Lantonov (talk) 05:35, 28 May 2008 (UTC)
- I was wrong in saying that Petkanchin was discontinued in the mid 1990s. See the official curriculum from Sofia University for 2005 [4] and (in English) in 2008 [5] for Analytical Geometry, where 1. Петканчин. Аналитична геометрия. 1966. София and 1. B. Petkanchin, Analytical Geometry, Nauka & Izkustvo, Sofia, 1966 (in Bulgarian) is still present as a textbook. --Lantonov (talk) 09:55, 28 May 2008 (UTC)