Talk:Low-density parity-check code


WP:TEL This article is within the scope of WikiProject Telecommunications, an attempt to build a comprehensive and detailed guide to telecommunications on Wikipedia.

For a person who has not studied information theory, the introduction says absolutely nothing. It's some state-of-the-art code, rrright. What is it used for, where, and why? I think the introduction should answer these questions first; then it's okay to jump into the technical details. Besides, what is Forney's factor graph notation anyway? Maybe an article about it should be created, or at least Forney linked somewhere (somewhere other than Forney, Texas). --ZeroOne 19:52, 12 Oct 2004 (UTC)


LDPC advantage

I'd agree with the first comment. I am not an information theory specialist and have been trying to get an idea of what advantage LDPC gives over, e.g., Reed-Solomon, but cannot find this basic information anywhere. Andrew P, UK.

Not impressed with the article either. But in answer to your question: LDPC codes approach the theoretical (Shannon) limit as the block size increases (i.e., they are, in terms of error recovery, about the best that an error-correcting code can be). A Google search for LDPC turns up a lot of good articles. Nahaj 15:25, 8 November 2005 (UTC)
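To make the "Shannon limit" remark concrete, here is a minimal sketch (my own illustration, not from either comment above) computing the capacity of a binary symmetric channel. A rate-1/2 code can at best tolerate a crossover probability of about 0.11; that kind of bound is what LDPC codes approach as the block size grows:

```python
import math

def binary_entropy(p):
    """H2(p): entropy of a Bernoulli(p) source, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Shannon capacity of a binary symmetric channel with crossover p."""
    return 1.0 - binary_entropy(p)

# A rate-1/2 code can in principle be reliable up to a crossover
# probability of about 0.11, since C(0.11) ~= 0.5.
print(bsc_capacity(0.11))
```

The helper names are mine; the formula C = 1 - H2(p) is the standard BSC capacity.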

Generating the parity-check matrix

Although Gallager's original codes used a randomly generated parity-check matrix, this doesn't have to be the case. A subset of Gallager codes can be built analytically by an algebraic method based on shortened Reed-Solomon codes with two information symbols. LDPC codes can also be constructed algebraically based on the points and lines of finite geometries. The highest performance LDPC codes are constructed using a method known as density evolution.
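As an illustration of the randomly generated case mentioned above, here is a rough sketch of a Gallager-style (wc, wr)-regular construction via random pairing of edge "sockets". The function name and the collision-tolerant simplification are my own; practical constructions additionally remove repeated edges and short cycles:

```python
import random

def regular_ldpc_matrix(n, wc, wr, seed=0):
    """Random (wc, wr)-regular parity-check matrix sketch: each of the
    n variable nodes gets wc edge sockets, each of the m check nodes
    gets wr sockets, and a random permutation pairs them up."""
    assert (n * wc) % wr == 0
    m = n * wc // wr
    rng = random.Random(seed)
    sockets = [j for j in range(m) for _ in range(wr)]
    rng.shuffle(sockets)
    H = [[0] * n for _ in range(m)]
    for i in range(n):
        for k in range(wc):
            # Repeated edges may collide on the same cell, slightly
            # lowering a row/column weight; real constructions fix this.
            H[sockets[i * wc + k]][i] = 1
    return H

H = regular_ldpc_matrix(n=12, wc=3, wr=4)  # 9 x 12 sparse matrix
```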

Density evolution is a measure of the code, not a method of code construction. To quote Moon's "Error Correction Coding": "Density evolution is an analytical technique which can be used to understand limits of performance of LDPC decoders." Although, obviously, ANYTHING that measures a code can be used to AID in generating good codes. (And there have been many articles on building various subsets of Gallager codes by many different methods.) If the author of the anonymous comment above would like to provide a *current* reference for their interesting claim that "The highest performance LDPC codes are constructed using a method known as density evolution.", I'd appreciate it. Work on finding GOOD non-randomly generated LDPC codes is still ongoing in a number of places... I'd recommend "IEEE Trans. Information Theory" and "IEEE Trans. Communications" as jumping-in points. Nahaj 17:18, 16 January 2006 (UTC)

The reference I had in mind when making the statement about "The highest performance LDPC codes..." was S.-Y. Chung, G. D. Forney, T. J. Richardson, R. L. Urbanke, "On the Design of Low-Density Parity-Check Codes within 0.0045 dB of the Shannon Limit," IEEE Communications Letters, vol. 5, no. 2, Feb. 2001. The authors develop an improved implementation of density evolution called discretized density evolution. Using this algorithm and an improved optimization algorithm they design good rate-1/2 irregular LDPC codes for binary-input AWGN channels that approach the Shannon limit very closely, i.e., to within 0.0045 dB. I'm not sure whether this is regarded as a current reference or not. Duncan Aitken, UK.
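For readers wondering what density evolution actually computes: on the binary erasure channel it collapses to a one-dimensional recursion in the erasure probability, so the decoding threshold of a regular ensemble can be found numerically. A sketch of my own (the function name is illustrative; the discretized density evolution in the Chung et al. paper tracks full message densities on the AWGN channel, which is considerably more involved):

```python
def bec_threshold(wc, wr, tol=1e-6):
    """Erasure threshold of a (wc, wr)-regular LDPC ensemble on the
    binary erasure channel, via the density-evolution recursion
        x_{l+1} = eps * (1 - (1 - x_l)**(wr - 1))**(wc - 1).
    The threshold is the largest eps for which x_l -> 0."""
    def converges(eps):
        x = eps
        for _ in range(100000):
            x_new = eps * (1 - (1 - x) ** (wr - 1)) ** (wc - 1)
            if x_new < 1e-12:
                return True           # erasures die out: decoding succeeds
            if abs(x_new - x) < 1e-15:
                return False          # stuck at a nonzero fixed point
            x = x_new
        return False
    lo, hi = 0.0, 1.0                 # bisect for the threshold
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if converges(mid):
            lo = mid
        else:
            hi = mid
    return lo

# The (3,6)-regular ensemble has threshold ~0.4294, versus the
# Shannon limit of 0.5 for rate-1/2 codes on the BEC.
print(round(bec_threshold(3, 6), 4))
```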

The claim was made that "The highest performance LDPC codes" are constructed by the method. I'm still waiting for a reference supporting that extreme claim. When it comes to "highest", or best, or any other such term for an algorithm, the status can change rapidly... and only a current article can tell which is currently best. An article from several years ago [which is very good, by the way] showing that a method was good doesn't address whether it is the method that currently produces the highest-performance codes. Nahaj 15:32, 18 February 2006 (UTC)

There are two separate issues here: the origin of good parity-check matrices, and the current "best" performance.

  • I agree with Nahaj that there is more than one way of creating a good parity-check matrix; good methods need to be documented. The LDPC world splits slightly into two camps, pseudo-random generation and non-random approaches. I mainly have experience of the former and am happy to write about the approaches used in this community (i.e., determination of good degree sequences via density evolution or an approximation, and then construction of codes with this degree sequence). Do we have a good author to write about deterministic methods?
  • As far as I'm aware, the best performance is still from S.-Y. Chung's paper. I'm not sure what standard of proof is required for Wikipedia, as the lack of specific counter-evidence cannot be proved without citing every LDPC paper since!

Edratzer 17:17, 17 September 2006 (UTC)

Applications of LDPC

Other than the mention that LDPC is adopted for satellite broadcasting, it would be useful if someone added more text on other applications of LDPC. In particular, any thoughts on comparisons with turbo codes in "real world" cases (mobile, etc.).--Alistair9210 08:42, 4 July 2006 (UTC)

Isn't LDPC used in 802.11n? 192.114.175.2 (talk) 11:57, 5 December 2007 (UTC)

Forgotten?

Were LDPC codes "forgotten"? Gallager was a famous name in the information theory community in the 1970s and 1980s, so I doubt they were truly forgotten; they just couldn't be practically used. Calbaer 01:03, 21 July 2006 (UTC)

In the paper that effectively reintroduced LDPC codes (MacKay and Neal, "Near Shannon Limit Performance of Low Density Parity Check Codes") there is a brief mention of this topic. The suggestion is that a slightly related family of codes (concatenated codes) was believed to be better, and hence LDPC codes were ignored. This paper cites personal communication from Gallager on this topic. Edratzer 17:25, 17 September 2006 (UTC)

Links

I have just reverted the addition of a link added by an anonymous user to a University repository of various LDPC papers. The repository does not appear to be a general source of reference or contain VHDL code that may aid the new reader. I propose adding a link to [1] instead as this page contains open-source hardware source code. Any comments? Can the anonymous link adder come forward? Edratzer 08:07, 8 January 2007 (UTC)

Expert attention needed

Apparently, some diagrams were deleted as lacking source information because the diagram creator did not use a *-self template, leaving this article incomplete because the text depends on them. Could someone who knows about LDPC codes redraw the missing diagrams? Jesse Viviano 07:24, 26 September 2007 (UTC)

Can someone let me know what diagrams need redrawing? There is a pretty good chance that I can redraw them. User dfhayes 24/04/2008 —Preceding unsigned comment added by Dfhayes (talk • contribs) 11:03, 24 April 2008 (UTC)

  • The page used to have a series of steps of belief propagation on a channel with erasures. It would be good to reinstate these. If you have a good way of showing belief propagation on a channel with soft messages that would be good too. Edratzer (talk) 19:08, 24 April 2008 (UTC)
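For reference while the diagrams are being redrawn: on the erasure channel, belief propagation reduces to the "peeling" rule that the deleted figures presumably stepped through. A small sketch of my own (using Hamming-code checks as a toy stand-in for a genuinely sparse parity-check matrix):

```python
def peel_decode(H, received):
    """Iterative erasure decoding (the BEC form of belief propagation):
    any check with exactly one erased bit determines that bit as the
    XOR of its known neighbours. H is a list of 0/1 rows; received
    uses None to mark erasures."""
    bits = list(received)
    progress = True
    while progress:
        progress = False
        for row in H:
            unknown = [i for i, h in enumerate(row) if h and bits[i] is None]
            if len(unknown) == 1:          # exactly one erasure in this check
                i = unknown[0]
                parity = 0
                for j, h in enumerate(row):
                    if h and j != i:
                        parity ^= bits[j]
                bits[i] = parity           # fill it in and keep iterating
                progress = True
    return bits

# (7,4) Hamming parity checks as a toy stand-in for a sparse H.
H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]
received = [0, None, 0, None, 0, 0, 0]     # all-zero codeword, two erasures
print(peel_decode(H, received))            # both erasures recovered: all zeros
```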

Gallager Codes

I think that this page should also be reachable under the title Gallager Codes, so that it will come up easily in an internet search. I don't know how to do this or I would do it myself. —Preceding unsigned comment added by Lansey (talk • contribs) 09:12, 4 June 2008 (UTC)