Talk:Graph isomorphism
From Wikipedia, the free encyclopedia
[edit] ?
http://arxiv.org/abs/math/0607770 —Preceding unsigned comment added by 212.22.96.208 (talk) 12:24, 30 January 2008 (UTC)
[edit] Known Algorithms
Nauty, VF2, and others are discussed in this paper http://amalfi.dis.unina.it/graph/db/papers/benchmark.pdf
Here is an algorithm that I've been using to solve the ISOMORPHISM problem in the general case of non-directed graphs.
Okay... here's my algorithm for determining graph isomorphisms.
Let G1 and G2 be graphs. G1 has m vertices and n edges; G2 has x vertices and y edges.
First, does m = x? If yes, they are potentially isomorphic (as they have the same number of vertices). Next, does n = y? If yes, they are potentially isomorphic (as they have the same number of edges).
Next, compute each vertex's degree in G1 and G2 (that is, count how many edges are incident to each vertex).
For G1, sort the vertex degrees in descending order; we'll call this S1. For G2, sort the vertex degrees in descending order; we'll call this S2.
If S1 = S2, they could be isomorphic, in the sense that the degree sequences are the same.
Loop:
Now from G1 remove all vertices with degree (m-1), and from G2 remove all vertices with degree (m-1). Now the number of edges and the sorted degree sequences may be different! Update the degree of every remaining vertex.
For G1, sort the vertex degrees in descending order; we'll call this S3. For G2, sort the vertex degrees in descending order; we'll call this S4. If S3 = S4 they could be isomorphic, in the sense that the degree sequences are the same. Count the number of edges in G1 and G2; if equal, we're still potentially isomorphic.
Go back to Loop: and repeat with (m-2), (m-3), and so on down to 0.
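The procedure described above can be sketched in Python (my own reconstruction of the comment's steps; all function names are mine, and like the original it is only a necessary-condition filter, not a full isomorphism test):

```python
def degree_sequence(adj):
    """Vertex degrees sorted in descending order (S1, S2 in the text)."""
    return sorted((len(nbrs) for nbrs in adj.values()), reverse=True)

def remove_vertices(adj, victims):
    """Copy of adj with the given vertices and their incident edges removed."""
    victims = set(victims)
    return {v: {u for u in nbrs if u not in victims}
            for v, nbrs in adj.items() if v not in victims}

def maybe_isomorphic(adj1, adj2):
    """Compare vertex counts and degree sequences, then repeatedly strip
    the current maximum-degree vertices and re-compare.  False means
    provably non-isomorphic; True means only 'possibly isomorphic'
    (regular graphs defeat it, as the thread discusses)."""
    while adj1 and adj2:
        if len(adj1) != len(adj2):
            return False
        s1, s2 = degree_sequence(adj1), degree_sequence(adj2)
        if s1 != s2:
            return False
        d = s1[0]  # current maximum degree
        adj1 = remove_vertices(adj1, [v for v in adj1 if len(adj1[v]) == d])
        adj2 = remove_vertices(adj2, [v for v in adj2 if len(adj2[v]) == d])
    return not adj1 and not adj2

# Example: a 3-vertex path and a triangle have equal vertex counts
# but different degree sequences, so the filter rejects them.
path3 = {1: {2}, 2: {1, 3}, 3: {2}}
triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
print(maybe_isomorphic(path3, triangle))  # False
```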
Copyright August 2007, Andrew Walduck. Permission granted to Wikiversity to license under the GFDL.
andrew.walduck@gmail.com
74.99.31.225 04:54, 7 September 2007 (UTC)
<NOTES> I tried to prove this by induction: for I = 0, trivial case; for I = 1, trivial case; assume it is true for I = N, then prove N+1. Consider the graph X1 with N nodes and Y1 with N nodes also, such that they are isomorphic. Add node P to graph X1 and node Q to Y1 (the N+1 part). If P == Q then the sorted node-weight sequences will be the same; if P != Q then the sorted node-weight sequences will be different.
Retrieved from "http://en.wikiversity.org/wiki/School_of_Mathematics:Introduction_to_Graph_Theory:Problems_1"
- This is a standard approach but it runs into difficulties with regular graphs. You can try to compute more powerful information at each vertex than just its degree that might help better to distinguish them from each other, say the vector of distances to all other vertices, but you will still run into trouble with distance-regular graphs. —David Eppstein 05:26, 7 September 2007 (UTC)
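The distance-vector idea mentioned here can be sketched as follows (a hedged illustration, not Eppstein's code; `distance_invariant` is my own name). Equal invariants are necessary but not sufficient for isomorphism, and as noted, distance-regular graphs make this invariant uninformative:

```python
from collections import deque

def bfs_distances(adj, src):
    """BFS distances from src to every vertex reachable from it."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return dist

def distance_invariant(adj):
    """For each vertex, the sorted vector of distances to the other
    vertices; the sorted multiset of these vectors is a label-independent
    graph invariant, strictly stronger than the degree sequence."""
    return sorted(tuple(sorted(bfs_distances(adj, v).values())) for v in adj)

# A path and a triangle are distinguished immediately:
path3 = {1: {2}, 2: {1, 3}, 3: {2}}
triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
print(distance_invariant(path3) == distance_invariant(triangle))  # False
```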
Hi David... could you provide an example of two regular graphs that are isomorphic? Also, what are distance-regular graphs? Thanks Andrew Walduck 74.99.31.225 17:54, 11 September 2007 (UTC)
- It's not quite the same as distance-regular, but Brendan McKay has encoded adjacency matrices of many strongly regular graphs at http://cs.anu.edu.au/~bdm/data/graphs.html —Preceding unsigned comment added by David Eppstein (talk • contribs) 18:13, 11 September 2007 (UTC)
Hi David... I tried to come up with a regular graph and its isomorphism. I came up with two triangles, basically a 6-node graph in which every vertex has the same edge count of 2. I tried to come up with an isomorphism but failed... :-( However, if you have two graphs, both 2-regular with 6 nodes each, the algorithm still works: you remove the 6 nodes with 2 edges... note that my algorithm does not give the mapping array, it will only tell you "yes or no"... Cheers Andrew Walduck 74.99.31.225 20:34, 11 September 2007 (UTC)
If you could come up with a 3-regular graph
- 2-regular graphs are just cycles and disjoint unions of cycles, not interesting from a graph isomorphism point of view. The page I pointed you at has large numbers of 3-regular graphs, but if you want two specific nonisomorphic ones with the same numbers of vertices, try Desargues graph and dodecahedron. Of course, one of these graphs is bipartite and the other isn't, so they're not difficult to tell apart, but it doesn't sound like your algorithm will determine that. —David Eppstein 20:38, 11 September 2007 (UTC)
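The bipartiteness point can be illustrated on an even smaller pair than the Desargues graph and the dodecahedron (my own example, not from the thread): K3,3 and the triangular prism are both 3-regular with 6 vertices and 9 edges, so every degree-sequence check passes, yet one is bipartite and the other contains triangles:

```python
from collections import deque

def is_bipartite(adj):
    """Try to 2-color the graph by BFS; succeeds iff it is bipartite."""
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if u not in color:
                    color[u] = 1 - color[v]
                    queue.append(u)
                elif color[u] == color[v]:
                    return False  # odd cycle found
    return True

# K3,3: parts {0,1,2} and {3,4,5}; 3-regular, bipartite.
k33 = {v: set(range(3, 6)) if v < 3 else set(range(3)) for v in range(6)}
# Triangular prism: triangles 0-1-2 and 3-4-5 joined by a perfect matching;
# also 3-regular with 6 vertices, but not bipartite.
prism = {0: {1, 2, 3}, 1: {0, 2, 4}, 2: {0, 1, 5},
         3: {4, 5, 0}, 4: {3, 5, 1}, 5: {3, 4, 2}}

print(is_bipartite(k33), is_bipartite(prism))  # True False
```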
Hi David, I looked at the Desargues graph and the dodecahedron; they clearly break my algorithm! Both have 20 vertices, 30 edges, and 3 edges per vertex... so they disappear on the first pass if you remove all vertices with edge count 3 at the same time!! Back to the drawing board... maybe we should just delete this mess??? Andrew Walduck 74.99.31.225 21:10, 11 September 2007 (UTC)
Hi David... I walked through the algorithm on the Desargues graph and the dodecahedron. If you only remove one node with the maximum number of edges each time, it "works for a while" and then the sorted node-edge-count arrays (the S1, S2, etc.) eventually become different... If you look at my "proof" it's clear that my algorithm doesn't work for regular graphs: you can't just add one node to each and maintain regularity... It's interesting that the smallest cycle size is different for these two graphs... for the Desargues graph it's 6 and for the dodecahedron it's 5... this is probably another condition that should be checked for isomorphism... (thanks for the counterexample!) Can you find a regular graph with the same cycle sizes (in addition to node count and edge count) that is not isomorphic?
cheers Andrew Walduck 74.99.31.225 21:45, 11 September 2007 (UTC)
- Andrew, before reinventing the wheel, please take a look at several quite old but still quite informative surveys of the graph isomorphism problem. About 20-30 years ago it was a fashionable topic, dubbed the "Graph Isomorphism Disease". There were literally hundreds of articles on the GI problem published, with numerous algorithms. It seems that new generations have thoroughly forgotten the "prior art". I am adding references to surveys to the article.
- What is more, Wikipedia talk pages are for discussing article content. This is not a message board for doing research in math. Please feel free to communicate with David via private channels and don't clutter this page, which is a workspace for Wikipedia editors working on the article. `'Míkka 22:20, 11 September 2007 (UTC)
[edit] IMHO
The following new and important result may be added to the article:
An effective algorithm for graph isomorphism testing was used for organic chemistry tasks; the algorithm's complexity does not depend directly on the number of vertices of the given graphs. The authors tested this algorithm on ~95,000,000 graphs and did not find any limitation on using this approach for other types of graphs.
M.I.Trofimov, E.A.Smolenskii, Russian Chemical Bulletin, 2005, Vol. 54, 9, 2235. (http://dx.doi.org/10.1007/s11172-006-0105-6).
Tim32 16:15, 7 October 2007 (UTC)
- I'm not convinced that this is a sufficiently notable contribution to the problem to include in the article. I note for instance that Google scholar doesn't find any citations. It is not new or surprising that graph isomorphism may be solved efficiently on many practical instances. Assuming it's the one entitled "application of electronegativity indices...", it's even less surprising that nontrivial vertex labelings can help a lot in speeding up the task. —David Eppstein 04:08, 22 October 2007 (UTC)
Dear Prof Eppstein,
The fact is that this algorithm's complexity does not depend directly on the number of vertices of the given graphs. Perhaps you know of other similar facts? Yes, "It is not new or surprising that graph isomorphism may be solved efficiently", for example by Nauty. The Nauty program is a very effective implementation of a very effective algorithm, but the algorithm has exponential complexity, O(exp(n)), where n is the number of vertices of the given graphs.
Yes, I also like Google very much, but very often Google is unable to find any info for me. So, IMHO, the fact that Google Scholar doesn't find any citations is Google's problem ;-) And also, are you sure that citation counting is the best method to get at scientific truth?
Also, I am surprised: I wrote about my idea to edit this page a long time ago. You did not post this comment of yours before the editing work was done, but just after the moment when the page was updated. It looks like an indelicate act against my work for Wiki.
BTW, did you read this article?
--Tim32 08:37, 22 October 2007 (UTC)
- Since you insist on being responded to: In fact, I have not read the article. My reading your article and making a judgement here about its worthiness would be perilously close to WP:OR, precisely because I do know something about the subject. At Wikipedia, we should not be in the business of sieving through uncited recent research papers attempting to pick out the diamonds among so much dross — there is no particular hurry, we can wait until the research attracts the attention it deserves, and then judge its notability by the quality of that attention. That goes not just for whether or not to create an article on a subject, but also for whether to include new results in an article. Graph isomorphism is such a big subject that one can't simply cite everything relevant (as I might do for some topics with smaller literatures) so we need to use good encyclopedic principles to choose what's important, and that means having the patience to let the dust settle in the scientific literature first. —David Eppstein 06:48, 25 October 2007 (UTC)
1) This addition of mine was not original research, because there was a source printed in a scientific journal by a well-known publisher (Springer). "Primary sources that have been published by a reliable source may be used in Wikipedia, but only with care, because it is easy to misuse them." (WP:OR#Primary, secondary, and tertiary sources) I used this source with care.
2) You wrote: "we can wait until the research attracts the attention" - No! Very often we cannot wait! Otherwise we lose the general Wiki advantage: please see Wikipedia:About#Wikipedia vs. paper encyclopedias. Otherwise we replace the neutral point of view with a conservative point of view.
3) Also, you wrote: "Graph isomorphism is such a big subject that one can't simply cite everything relevant" Yes, agreed in the common sense. But I disagree for this case. It is not "everything relevant", but very, very important. NP-complexity is the most important point for the graph isomorphism problem. If you know other facts pro or contra, you should add them to the article. The current edition looks very poorly informed in contrast with paper encyclopedias. However, you want to wait, you do not want to improve the article, and moreover you block other users from editing it. And moreover you do not want to discuss it: before my edit I asked about it here, and I heard nothing from you. Sorry, but may I remind you that this article is not your personal article.
4) May I note, also, that you tell me a lot about Wiki rules, but say nothing about the subject. ("In fact, I have not read the article.") However, we have the "WP:Ignore all rules" rule ;-) "Rules have zero importance compared to that goal." (Wikipedia:What "Ignore all rules" means) So either read the article and explain here why it is not so important, or restore my edit, before somebody else comes up with a valid argument against it. The article was printed two years ago, and during this time period nobody has reported any error in it.
In a few words: I noted a very important fact ("...complexity does not depend directly on the number of vertices of given graphs") that has been published by a reliable source. I asked you "Do you know of other similar facts?" -- You did not answer this question of mine.
--Tim32 10:09, 25 October 2007 (UTC)
- I didn't read the entire article, but searched it for some relevant terms. I couldn't find the authors' claim that the algorithm complexity is not directly dependent on the number of vertices. They do state "(...) the number of iterations also depend [sic] only slightly on the number of vertices (...)" (pages 2241 and 2244, emphasis mine); they do not clarify what they mean by this 'slight' dependence.
- Were you maybe confused by their statement that "(...) the result is independent of enumeration of vertices." (page 2242, emphasis mine)? As far as I can tell, they mean the order in which the vertices are processed, not their number.
- Finally, I note that the authors only considered graphs with up to 12 vertices in their tests.
- Regards, Phaunt 12:00, 4 November 2007 (UTC)
"The recursion depth and the number of iterations in algorithm 3 depend on the number and size of partition blocks for the sets of the solutions of the systems of linear equations for the vertex weights obtained by consecutive replacement of the initial weights by the modified ones […], being only slightly dependent on the number of vertices m, just like the number of identical solutions of the system of linear equations of the type (2) is not simply related to the number, m, of the unknown quantities. The number and sizes of the partition blocks depend on the initial weights and degrees of vertices and on the discriminating capability of the index employed for the weights. They also depend, in a complex fashion, on the symmetry of the system of equations. In turn, this symmetry is directly related to the symmetry of the adjacency matrix of the corresponding graph. Each replacement of the initial weight of a vertex by the modified weight leads to a strong lowering of the symmetry of the system of equations, thus dividing the partition into progressively decreasing blocks and minimizing the enumeration." (p.2244).
Also you wrote: "I note that the authors only considered graphs with up to 12 vertices in their tests" - that is not right. "The test set for the method proposed included a total of 95,000,000 molecules containing up to sixty carbon atoms." (p.2235, abstract)
"Sixty carbon atoms" means graphs with up to 120 vertices.
--Regards, Tim32 17:27, 5 November 2007 (UTC)
- I still don't see what this "slight" dependency means. AFAIK, they don't give a general mathematical relation between the number of vertices and number of iterations.
- As to the number of vertices, the number of 95,000,000 in the abstract seems to refer to the summed value of K in Table 1 (p.2239). You can see that this table considers graphs of up to 12 vertices only.
- In addition, as David Eppstein said, it is well known that the GI problem can be solved relatively easily in many cases; this is already in the article. In my opinion, the paper you refer to doesn't make any further sweeping statements that should be incorporated.
- Regards, Phaunt 10:53, 7 November 2007 (UTC)
One thing missing in the above discussion is that the paper is NOT, in fact, a reliable source. It is a practical solution to a problem in applied chemistry, written by chemists, reviewed by their chemist peers, and published by a chemical journal. Such a publication is a perfectly acceptable source for issues in chemistry, but certainly not for issues in theoretical computer science. -- EJ 11:58, 7 November 2007 (UTC)
- To Phaunt: Re: there are also Tables 2, 3, 4 (p.2171) on the following pages: "The total number of generated graphs G was 93,160,816 (Table 1) + 560,000 (Table 2) + 1,112,718 (Table 3) + 200,000 (Table 4) = 95,033,534." (p.2241) "K is the number of external calls of the IsoTest procedure (Algorithm 3);" (p.2239)
- Also, do you know that the number of identical solutions of a system of linear equations is not related to the number of unknown quantities? (If not, please read any linear algebra textbook first.) And anyhow, please read the Algorithm 3 description. Sorry, but it is too hard to explain something from a text you did not read.
- To EJ: Re: A very strange reason. (But first, a small correction: this is not applied chemistry, this is fundamental (theoretical) chemistry.) It seems you want to say that "theoretical computer science" is absolutely isolated from other scientific areas, and so any achievement in any other area has no influence on computer science. However, there are many well-known counterexamples in the history of science: quantum physics and quantum chemistry, for instance. Did you think that 2*2 is 4 for theoretical computer science, but 5 for chemistry? ;-) Or maybe you think that a well-known scientific journal printed by a well-known publisher has no specialists in graph theory? BTW, articles by Smolenskii were noted in a well-known book: Alexander A. Zykov, Fundamentals of Graph Theory, Moscow, Idaho, USA: BCS Associates, 1990. --Tim32 13:48, 7 November 2007 (UTC)
Colleagues, you are wasting your time on a totally moot issue simply because you didn't take the boring step of actually looking into the article, or at least its summary. The author clearly writes: "The test set for the method proposed included a total of 95,000,000 molecules containing up to sixty carbon atoms." 60 for the number of vertices is a ridiculously small number to talk about in terms of computational complexity. A possible talk could be the efficiency of implementation, but the author gives no comparison with other approaches. There are no mentions of third-party, independent evaluations of the algorithm. There are hundreds of GI algorithms published, and each author says theirs is the best. Wikipedia is not a place for peer review of numerous math articles. I suggest we no longer waste time on this discussion unless sources are provided or other experts publish a discussion of this algorithm. `'Míkka 17:15, 7 November 2007 (UTC)
- Wow! You used the absolute weapon for any discussion, the paradox of the heap: you wrote: "60 for the number of vertices is a ridiculously small number to talk about in terms of computational complexity". It is a universal method against every investigation: "the authors used too small a test set". And it makes no difference what the exact size of the test set is: 10 members, 100 members, 1K or 1M members may all be classified as "too small". But unfortunately for this "argument" of yours, the test set is for illustration only in the article, because Algorithm 3 is not a heuristic algorithm. For example, the well-known Euclidean algorithm does not need any test set to establish its correctness, because it is not a heuristic algorithm either.
- Also you wrote: "A possible talk could be the efficiency of implementation, but the author gives no comparison with other approaches." – Other approaches are O(exp(n)), where n is the number of vertices of the given graphs. This one does not depend on n. Is any more comparison needed?
- You wrote: "There are hundreds of GI algorithms published, and each author says theirs is the best." Yes, but that is not an argument at all. Moreover, I noted this algorithm not because it is the best of all, but only because it does not depend on n. Do you see the difference?
- And also you wrote: "Colleagues, you are wasting your time…" IMHO, the colleagues are wasting MY time, because they did not read the article (you noted this fact also :), and so one of the colleagues mixed up the total number of generated graphs and the number of external calls, etc.
- BTW, you wrote "the author…", but note, please, that the article is signed by two authors. --Tim32 08:47, 8 November 2007 (UTC)
- First off, take it easy and remain civil, please.
- The number of 95,000,000 test cases you wanted to add to the article is hardly relevant: the great majority of these (not all of them, I submit) are for graphs with up to 12 vertices, as you surely also see.
- Most importantly, the authors nowhere write that the complexity does not depend on the number of vertices. They say that the number of iterations is "not simply related" to the number of vertices (which just means they cannot express an upper bound in terms of the number of vertices, not that it doesn't exist); however, clearly the matrix operations depend on the number of vertices. The authors themselves write at the very end of the paper that "to this day, it remains unclear whether GI is NP-complete or not". If they had found an algorithm with time complexity unrelated (or even polynomially related) to the number of vertices, surely they would have noted that.
- Again, the authors note that their approach works well on a number of specific cases, but this is already in the wikipedia article.
- Finally, if you feel this is a waste of your time, nobody is forcing you to pursue this debate. I don't think I myself will continue it for much longer. Phaunt 11:01, 8 November 2007 (UTC)
Yes, of course, please, remain civil!
And I think that for any civil discussion of any text, it is necessary to read the text first… :) Otherwise a totalitarian style of discussion will win. For example, and only for example!, I remember the totalitarian style of some discussions that took place many years ago in the Soviet Union, when everybody began his/her speech with the words: "I did not read this novel by Solzhenitsyn [or Boris Pasternak], but I want to say…" Similar discussions took place in the chemical area and so on… Sorry for this reminiscence…
The most important point (thank you for this interesting question): you wrote: "however, clearly the matrix operations depend on the number of vertices". Yes! OK! But there is no contradiction. The Gauss algorithm was used; its complexity is O(n^3). For simplified understanding I can try to write down the following expressions (I do understand that some criticism of these expressions may be possible; note, please, that the following is only a simplification): for this case the function k(f + n^3) may be estimated as O(ku), where n is the number of graph vertices, f and u are some functions, and k is the number of Gauss calls; k, f, and u do not depend on n. For example, W. Lipski does something similar in his book Kombinatoryka dla Programistów [Combinatorics for Programmers], Wydawnictwa Naukowo-Techniczne, Warszawa, 1982, for the Hopcroft–Karp algorithm (maximum matchings): the estimate O((m+n)·n^(1/2)) he replaces with O(n^(5/2)), where m is the number of edges and n is the number of vertices.
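The Lipski-style simplification referred to here can be made explicit (my own rendering of the step, not a quotation from either book): for a simple graph the edge count is at most quadratic in n, so the mixed bound collapses to a bound in n alone.

```latex
m \le \binom{n}{2} = O(n^2)
\quad\Longrightarrow\quad
O\!\left((m+n)\,n^{1/2}\right) \subseteq O\!\left(n^2 \cdot n^{1/2}\right) = O\!\left(n^{5/2}\right)
```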
About "95,000,000 test cases": there are 93,160,816 test cases for graphs with up to 12 vertices, and ~2M for the other cases (up to 120 vertices). I think that in spite of 93M >> 2M, this 2M set is hardly relevant as well. However, it is an example (an illustration only) – please see my reply to Míkka.
Also you wrote: "The authors themselves write at the very end of the paper that "to this day, it remains unclear whether GI is NP-complete or not". If they had found an algorithm with time complexity unrelated (or even polynomially related) to the number of vertices, surely they would have noted that." -- It is very interesting for me to read about what we supposedly wanted to say... Please be attentive and compare the email on my Wiki user page with the email in the article -- you will see it is the same address, beginning with mtrofimov@… Yes, my name is Michael Trofimov and I am one of the two authors of this article. (I thought you had found this fact yourself.) First of all, we also wrote:
"In spite of the fact that this result is surely expected, its explanation in terms of formal estimates of computational complexity faces problems. Criticism has been repeatedly expressed in the graph theory studies, concerning practical limitations of the method of estimates of the hard-to-solve problems and the notion of NP-completeness (see., e.g., Ref. 28). [...] therefore, we will only mention that real performance has as much in common with the theoretical complexity as ideal gas with real gas or as famous MIX ideal computer by Knuth with Pentium 4."(p.2245)
Yes, today I do not know "whether GI is NP-complete or not"; moreover, I am not sure that the NP concept is valid for GI ;)
Anyhow, I am sure that this discussion is very important for Wiki. (I have used Wiki for a few years, but only a month ago did I become a registered user.) As I wrote to Prof. Eppstein:
"You wrote: "we can wait until the research attracts the attention" - No! Very often we cannot wait! Otherwise we lose the general Wiki advantage: please see Wikipedia:About#Wikipedia vs. paper encyclopedias. Otherwise we replace the neutral point of view with a conservative point of view."
And finally, yes, "nobody is forcing you [me] to pursue this debate". That investigation was finished, and now I have other work and other problems to solve. I only want to add a little more info to this Wiki article. I believe it may be useful for somebody who looks on the Internet for more info than he/she can find in paper encyclopedias. (Also, as I wrote already, this discussion is becoming very important for "the general Wiki advantage".) I do not think that I am wasting my time on productive work for Wiki. And to make our work productive: ladies and gentlemen, please read any text attentively before discussing it. --Tim32 17:56, 10 November 2007 (UTC)
- No, I will not read the entire article. I spent enough time on this issue as it is; I'm sorry. However, I do believe I have seen enough of the article (in particular, I already read all the phrases you cite above) to be able to make the following statements, which are partially a reiteration from above:
- First you assert ("this one does not depend on n", above) that the algorithm's complexity is constant with regard to the number of vertices, which I dismissed by pointing out the matrix operations, and now you write that the polynomial complexity of the matrix operations (a clear dependency) does not matter. That is a drastic change of view, but probably you just miswrote. But still, you did not address my point that the authors do not express an upper bound on the number of necessary iterations in terms of the number of vertices, nor do they claim that there is no relation at all (i.e. that the number of iterations is constant in the number of vertices).
- I mentioned the amount of 95,000,000 because your edit included this figure. I am glad to see that you now agree that this is not relevant.
- The authors demonstrated an approach that they claim works well for some cases; they included some experiments to support this claim. However, this does not render their approach interesting enough to include in the article; the article already mentions that efficient algorithms exist for specific instances. If and when the authors are able to clearly define a specific set of instances and express a theoretical polynomial upper bound on the time complexity of their algorithm for these instances, this might warrant inclusion.
- Finally, as others already wrote and regardless of any of the previous points I made, it still holds that the medium of publication is not a reliable source for graph theory. If the authors' claim is truly valid, they are able to formally prove it, and it is indeed a nontrivial new result, it would probably be easy to get it published in a reliable resource on graph theory. At that point, it can certainly be incorporated into Wikipedia. It wouldn't have mattered if I completely agreed with your claim or not; verifiability is of prime importance.
- I am sorry to have to criticise the result you published. I continued writing in the third person because we are debating a published result, not a wikipedian's opinion (which would be inappropriate). Please be aware of possible conflicts of interest, however nice it would be to have your publication cited here. I hope you can see reason in my points. It seems that there is little hope for including your findings at this point; you could try to bring attention to this debate via WP:RFC, but I'm afraid it won't help.
- With this, I think I'll stop contributing to this discussion. Best regards, Phaunt 00:52, 12 November 2007 (UTC)
The fact is that today the majority of mathematicians consider the GI problem to be NP-complete, yet nobody can prove it. It looks like a myth. In this situation every opposing fact is very important to keep the neutral point of view. My article also reviewed other opposing facts from other sources. The current state of the Graph isomorphism article is very incomplete. (For example, I cannot find the word "automorphism" in this article. I think you know that "nauty" means "no automorphism, yes?".) A few references to special cases (trees, planar graphs), to non-deterministic algorithms, and a few common words about specific cases are not serious arguments for keeping the neutral point of view. Because you will not read my article, the discussion about it is not serious. Also, I note again and again that you use conflation instead of strong arguments. For example, you wrote: "I mentioned the amount of 95,000,000 because your edit included this figure. I am glad to see that you now agree that this is not relevant." However, I had written: "I think that in spite of 93M >> 2M, this 2M set is hardly relevant as well." Another example: again and again you do not want to hear what this "slight" dependency means; please compare: "If a function f(n) can be written as a finite sum of other functions, then the fastest growing one determines the order of f(n)." (Big O notation#Properties) Of course, my case is more complicated, because the function f has several arguments (I illustrated a similar approach to computational complexity with the example from Lipski's book). But again, let's stop this discussion about my article, since you will not read it, and let's continue the discussion about the Graph isomorphism article. Now, as a result, we can consider that we have different points of view. To keep the neutral point of view, I insist on adding this point of mine to the Graph isomorphism article. Lately I have edited a lot of Wiki articles, and nowhere else have I had a problem like this one.
For example, please see Comparison of Pascal and C: the editing situation there is very hard, as one part of the editors dislikes C (like me) and another part dislikes Pascal. Anyhow, a stable improvement process is obvious there, in contrast with the Graph isomorphism article, whose content seems to be frozen; the current content looks like a resource for distributing the myth I noted at the beginning of this message.
Finally, the "reason" that "the medium of publication is not a reliable source for graph theory" is absurd. For example, recently I added a reference to Bull. Chem. Soc. Japan to another graph theory article, and nobody said anything against it. I think you know that unfortunately not all graph theory problems are interesting for practical applications or for other scientific areas. On the other hand, some mathematicians do not want to recognize graph theory as a real mathematical area, and this situation is a very hard problem for many graph theory specialists. So every graph theory application has significant importance for the status of graph theory. Also, historically, outside ideas have had significant importance for the progress of graph theory.
You wrote: "I am sorry to have to criticise the result you published" – No problem; you did not criticise the result, since you did not read my article ;)--Tim32 11:19, 12 November 2007 (UTC)
- I don't know where you get that "the majority of mathematics (sic)" considers GI to be NP-complete, but the wikipedia article certainly doesn't share this point of view; it states that this is unknown.
- 'Hardly relevant' means 'nearly irrelevant'. You probably agreed with me without intending to.
- Regards, Phaunt 14:06, 12 November 2007 (UTC)
- "[…] wikipedia article certainly doesn't share this point of view; it states that this is unknown" – common words only, like the following: "In both organic chemical research and circuit design it is important to build databases of graphs" (Graph isomorphism). There is no example of, or reference to, organic chemical research where it would be important to build databases of graphs. But this is not so simple: many chemical databases ignore the GI problem ;)
- Sorry for mistake: I wanted to say "very relevant". So, I disagreed with you: this 2 M set is very relevant as well.
- No more? Very "detailed" reply! You prefer to play words games, rather than to read small paper to ask interesting (for Wiki) questions :-(
Regards,--Tim32 18:47, 12 November 2007 (UTC)
- I am not interested in pursuing this debate, as I stated above. Best regards, Phaunt 22:18, 12 November 2007 (UTC)
- On 8 November 2007, Phaunt wrote: "First off, take it easy and remain civil". Now I see his original understanding of these concepts. Sorry for that.
- I had written: "I can not find the word automorphism in this article", and just the next day this word was added - perhaps as a result of this discussion, perhaps not. Anyhow, I am glad to see this improvement.
--Tim32 09:08, 14 November 2007 (UTC)
[edit] Applications
"In both organic chemical research and circuit design it is important to build databases of graphs"
-Inorganic chemical compounds are too simple to require a GI test!--Tim32 10:01, 14 November 2007 (UTC)
- What? Inorganic molecules can be complex too, and are routinely searched using graph-based algorithms (for example, using Scifinder or the Cambridge Structural Database). --Itub 11:28, 15 November 2007 (UTC)
Please give a reference. I am focused on organic chemistry problems and may not know the latest achievements in other chemical areas. Yes, inorganic molecules can be complex as well, but very often the elemental composition is sufficient: NaCl, KSCN, K3[Fe(CN)6], etc. Anyhow, organic compounds are more obvious objects for GI-testing.--Tim32 12:17, 15 November 2007 (UTC)
- There is no fundamental difference between organic and inorganic molecules, and no need for a reference. As I said, both of the chemical databases I just mentioned have many thousands of inorganic molecules and use graph representations of them. Inorganic molecules do have structure, even Fe(CN)6! And that's without even getting into the huge variety of boron and silicon compounds, for example. --Itub 13:11, 15 November 2007 (UTC)
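As a neutral illustration of the test both sides are discussing: the sketch below (illustrative only; atoms become labelled vertices, bonds become edges, hydrogens and bond orders are ignored) checks two labelled molecular graphs for isomorphism by brute force. The butane/isobutane example also shows why elemental composition alone is not always sufficient.

```python
from itertools import permutations

def are_isomorphic(labels1, edges1, labels2, edges2):
    """Brute-force labelled-graph isomorphism: try every vertex bijection.
    Feasible only for very small graphs (n! permutations)."""
    n = len(labels1)
    if n != len(labels2) or len(edges1) != len(edges2):
        return False
    e2 = {frozenset(e) for e in edges2}
    for perm in permutations(range(n)):
        # perm maps vertex i of graph 1 to vertex perm[i] of graph 2
        if all(labels1[i] == labels2[perm[i]] for i in range(n)) and \
           all(frozenset((perm[a], perm[b])) in e2 for a, b in edges1):
            return True
    return False

# Carbon skeletons of n-butane (a path) and isobutane (a star): the same
# formula C4H10 and the same vertex/edge counts, yet NOT isomorphic, so
# elemental composition alone cannot distinguish them.
path = (["C"] * 4, [(0, 1), (1, 2), (2, 3)])
star = (["C"] * 4, [(0, 1), (0, 2), (0, 3)])
print(are_isomorphic(*path, *star))    # False

# The same path skeleton under a different vertex numbering IS isomorphic:
path2 = (["C"] * 4, [(2, 0), (0, 3), (3, 1)])
print(are_isomorphic(*path, *path2))   # True
```

Real chemical databases do not use brute force, of course; this is only the problem statement in executable form.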
You wrote: “There is no fundamental difference between organic and inorganic molecules, and no need for a reference. As I said, both of the chemical databases I just mentioned have many thousands of inorganic molecules and use graph representations of them.” Does this mean that you have no reference, and that it is only your own idea? The Chemical Abstracts printed on paper contains a lot of organic and inorganic compounds, but it does not use a GI test to find any of them. CAS-online uses SMILES, a canonical notation that avoids GI-testing, generally for organic compounds. The majority of inorganic compounds are very simple, and standard chemical notations are sufficient to avoid GI-testing for them as well. Yes, there are boron and silicon compounds whose molecules have many atoms, but these are very special cases, and I am not sure that GI-testing is necessary for them. For example, biochemical molecules seem more complex than the usual organic molecules studied by classical organic chemistry, yet there are special notations for biochemical systems, and as a rule GI-testing is not necessary for a biochemical database. Again, do you have any reference?--Tim32 10:39, 16 November 2007 (UTC)
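The point about canonical notations can be sketched in code. A canonical form maps every isomorphic copy of a graph to the same string, so a database lookup becomes an exact string match instead of a pairwise isomorphism test against every stored entry. The brute-force canonicalization below is illustrative only and feasible only for tiny graphs; real systems such as canonical SMILES achieve the same effect with far more efficient algorithms.

```python
from itertools import permutations

def canonical_form(n, edges):
    """Return the lexicographically smallest sorted edge list over all
    relabellings of n vertices -- a (very slow) canonical form."""
    best = None
    for perm in permutations(range(n)):
        relabelled = sorted(tuple(sorted((perm[a], perm[b])))
                            for a, b in edges)
        key = str(relabelled)
        if best is None or key < best:
            best = key
    return best

# A toy "database": store graphs keyed by canonical form.
database = {canonical_form(4, [(0, 1), (1, 2), (2, 3)]): "path on 4 vertices"}

# An isomorphic copy with a different vertex numbering hits the same key,
# with no pairwise isomorphism test at query time:
query = canonical_form(4, [(2, 0), (0, 3), (3, 1)])
print(database.get(query))  # path on 4 vertices
```

The design choice is to pay the canonicalization cost once per graph at insertion and query time, rather than once per stored graph per query.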
- Do you have one? The point that inorganic chemistry databases exist and use graph isomorphism is so blatantly obvious that you are the one who should have to provide a reference for your extraordinary claim. --Itub 11:06, 16 November 2007 (UTC)
- Note, please, that I did not use words like "blatantly".
- You wrote: "The point that inorganic chemistry databases exist and use graph isomorphism" -- I do not understand which point you mean, and I do not know what reference I am supposed to give you.
- I note that you have no reference, so the statement that inorganic chemistry databases use a GI test is your idea only.---- Tim32 (talk) 17:50, 16 November 2007 (UTC)
- Do you have a reference for your idea that GI can only be used for organic molecules? ---- Itub (talk) 17:55, 16 November 2007 (UTC)
I had no idea "that GI can only be used for organic molecules". GI-testing may be used for very simple graphs as well. But I am not sure that it has REALLY been used. You must prove it, if you want. ---- Tim32 (talk) 18:19, 16 November 2007 (UTC)
[edit] Oops, I reversed meaning
In my last edit, I committed a "typo" that reversed my meaning. I meant to say that the earlier version implies that P is not equal to NP. BTW, I am surprised by the flaming associated with this article. Such things are usually reserved for controversial articles, such as those associated with global warming. Vegasprof (talk) 10:05, 5 January 2008 (UTC)
[edit] One reference
I am quite a neutral reader and I don't know much about the rules of Wikipedia. But it seems obvious to me that it is simply unjust for a scientist, whoever he or she was, to advertise his or her own article in an encyclopedia. A fair scientist should attract attention through the quality of his or her scientific work. Even more is it unjust to advertise a work without agreement of most of the (very professional) editors. I know that Wikipedia has a good means to ensure democracy inside. Meanwhile, I am surprised to see the article in question as the first reference and even in blue - as an outsider I would be made to think that this article is the most important in the area. —Preceding unsigned comment added by 129.67.117.253 (talk) 01:57, 7 January 2008 (UTC)
-
- You mean a WP:SOCKPUPPET, I assume? Phaunt (talk) 12:46, 17 January 2008 (UTC)
[edit] Friedland, January 2008
For the record: On January 2, 2008, Shmuel Friedland submitted a paper "Graph isomorphism is polynomial" to arxiv.org, see arXiv:0801.0398v1. Friedland is a mathematician with more than 100 publications in peer-reviewed journals. Claims about this paper were added to our page on Jan 3.
However, Friedland replaced his arxiv paper by a new version, this one titled "On the graph isomorphism problem", see arXiv:0801.0398v2. This new version seems to retract the original claim. So I think that his paper should not be mentioned in the article -- at least for the moment. If it turns out to be an important step towards determining the complexity, we should mention it, of course.
arXiv.org also contains an earlier paper "A Polynomial Time Algorithm for Graph Isomorphism" by Reiner Czerwinski, arXiv:0711.2010v3. Unlike Friedland, Czerwinski has no refereed publications in mathematical journals so far. (Or possibly one?) Czerwinski quotes a 2005 paper by Trofimov and Smolenskii in the "Russian Chemical Bulletin" (sic), and claims that already that article contained a "program for graph isomorphism with polynomial run time".
Aleph4 (talk) 10:09, 8 January 2008 (UTC)
- This is simply the way of scientific progress! ;) Only this way is possible... --Tim32 (talk) 12:21, 17 January 2008 (UTC)
- Please, let us not use the ad hominem "has many publications" argument as a tool for assessing the quality of a paper or (dis)crediting the author... 87.64.17.84 (talk) 09:34, 24 April 2008 (UTC)
[edit] The absurd reason that looks like vandalism against NPOV
“Please do not use "vandalism" to describe edits that are clearly in good faith, as you did when reverting this edit. It is a violation of our policies and guidelines on assuming good faith and civil behavior. —David Eppstein (talk) 22:12, 16 January 2008 (UTC)”
- "self-promotion of ref to an insignificant paper which adds nothing to the article" is an absurd reason; you should first prove that this is an "insignificant paper", firstly for chemistry and secondly for graph theory. This is not a task for Wiki editors, and nobody here should make such proofs and claims (Wiki editors are not official experts), but everybody has to keep the neutral point of view: "NPOV is absolute and non-negotiable", so when anybody deletes a reference to an opposing view, the action looks like vandalism. I have written a lot for Wiki, and I write only on topics where I have professional knowledge. So I use references to my papers (I have written more than 100) as well as to other papers. There is no Wiki rule against this; hence "self-promotion" is an absurd reason. There are a lot of references to my papers in Wiki, and nowhere else have I heard about "self-promotion"!
- Not long ago I wrote to you that you deleted one link to the arXiv (because it "is not peer-reviewed") and added another link to the arXiv, and that your criterion of notability seems too original. It seems you now remember this incident…;)
- Also not long ago I wrote to User:Mikkalai: "You wrote the Hosoya index article just after I had inserted its definition into the matching article…" It seems he now remembers this incident…;)
- --Tim32 (talk) 12:06, 17 January 2008 (UTC)
- Wikipedia is not a collection of bibliographical references to various scientific articles. The goal of a wikipedian is to add article content. Please explain which content was added by this added reference. The existing link was to a whole book specifically devoted to the particular application in question, and I highly doubt that the reference to the article adds anything but self-promotion of its author. And by the way, what is your problem with Hosoya index? Instead, I do remember a long-lasting and insistent promotion push right here, in Graph isomorphism. `'Míkka>t 20:22, 17 January 2008 (UTC)
-
-
- It is well known that every such book is based on articles printed before the book was written. It is also well known that writing and printing a book are very long processes, so it is not surprising that no book is able to include the newest information. The book you mention is quite suitable as a link to some classical approaches, and it also describes some new ideas about the subject. I do not agree with all the comments in this book, and my results prove it. The article I added as a second reference adds significant information that could not be included in the book, because both the book and the article were printed in 2005. There are 10 pages in the article, and if you want to understand the details, then read it (or did you seriously think I must retell all 10 pages here? :). But first of all, your mistake is that you claimed that a big article printed in a well-known scientific journal by Springer is not significant for its general subject, that it has no new important results, and so on. This contradiction of the official experts (who inspected the article before printing) seems absurd. Moreover, this article is the concluding article of 20 years of original investigations done at the well-known N.D. Zelinsky Institute of Organic Chemistry of the Russian Academy of Sciences, and it also consolidates results from many previous articles written by my colleagues and me (see the list of references in the article). Nobody has the right to make such claims without very significant proof. Think a little about it!--Tim32 (talk) 15:48, 18 January 2008 (UTC)
-
Wow! —David Eppstein wrote that a big article printed in a well-known scientific journal by Springer is a "non-notable paper" - is that vandalism?--Tim32 (talk) 17:32, 18 January 2008 (UTC)
- It's clearly a reliable source — it's a published, refereed, scientific paper. But that's not the same as being important enough to add a reference to it to this article, among the (roughly) 6000 papers on graph isomorphism listed by Google scholar. —David Eppstein (talk) 18:11, 18 January 2008 (UTC)
Where did you find 6000 published papers on the subject of graph isomorphism? In Google? Every graph theory textbook has a few words about graph isomorphism; the number of original papers may be smaller. But this is not a reference to the graph isomorphism problem!!! It is a reference to its chemical application! Only this application is noted, and only one other reference is used in Graph isomorphism:
“The graph isomorphism problem arises in a variety of practical applications. For example, in cheminformatics and in mathematical chemistry, graph isomorphism and other graph matching techniques are used to identify a chemical compound within a chemical database.” [1] [2]
[1] Christophe-André Mario Irniger (2005) "Graph Matching: Filtering Databases of Graphs Using Machine Learning", ISBN 1586035576
[2] M.I. Trofimov, E.A. Smolenskii, Russian Chemical Bulletin, 2005, Vol. 54, No. 9, p. 2235. (http://dx.doi.org/10.1007/s11172-006-0105-6)
And I have already written in Talk:Graph isomorphism, above, that no book is able to include the newest information: the book you mention and the article I added as a second reference were both printed in 2005, so the article's significant additions could not have been included in the book.
I can add that there are differences ([1] vs [2]) regarding the following significant problems: planar graphs in organic chemistry and regular graphs in organic chemistry. That is important enough to add a reference to this article!!! Stop the edit war and read the article by Trofimov and Smolenskii! Note, please, that you will need some skills in organic chemistry...--Tim32 (talk) 21:26, 21 January 2008 (UTC)
I reverted the edits of Tim32. It seems his edits are used mainly for self-promotion and in any case we are in no hurry to integrate very recent research (2005) into wikipedia. MathMartin (talk) 13:04, 22 January 2008 (UTC)
[edit] Edits by Tim32
User Tim32 recently edited the section ==Applications== and added the following text
- However, for non-trivial research in organic mathematical chemistry effective algorithm for filtration of data containing information on the structures of organic molecules based on strong isomorphism testing may be quite necessary.
with a citation to
- M.I. Trofimov, E.A. Smolenskii, Russian Chemical Bulletin, 2005, Vol. 54, No. 9, p. 2235. (http://dx.doi.org/10.1007/s11172-006-0105-6)
I have removed the sentence and the citation from the article. Tim32, please clarify what this sentence means (what is strong isomorphism testing? what is filtration of data?), and explain why the citation is important enough to be included in the article. David Eppstein, Míkka, and I removed the citation from the article several times because it does not seem notable and relevant enough to be included. As the cited paper is quite recent (2005), Tim32 has to supply sufficient proof of its importance, which he has not yet done; if its importance is unclear, it should not be included. MathMartin (talk) 13:59, 23 January 2008 (UTC)

