Talk:Digital

I like the terminology of digital being a transmission of symbols. - Omegatron 21:52, May 24, 2005 (UTC)

the human eye

The article currently claims

the human eye may be able to detect tens of thousands of different intensities of pure green

Really? I've been told that humans cannot distinguish even 256 intensities of pure green. See "How many bits do I need to smoothly shade from black to white?" by Charles Poynton. (Should this be posted on the eye article?) --DavidCary 09:37, 26 Jun 2005 (UTC)

I agree, and changed the example. But the article misses an important point, and I'm not sure how to best add it in: Compared with digital, analog has higher resolution but lower accuracy (due to noise, amplifier distortion, etc.). Compared with analog, digital has lower resolution but higher accuracy. In a sense, using digital sacrifices resolution for accuracy. --Rick Sidwell 17:14, 22 July 2005 (UTC)
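A small sketch can make that trade concrete (hypothetical Python, not from the article; the value and bit depth are made up for illustration): quantizing to n bits caps resolution at 2^n steps, but every stored step is then exactly reproducible, so copies suffer no generational loss.

    # Hypothetical sketch: digital trades resolution for exact reproducibility.
    def quantize(x, bits=8):
        """Snap x in [0.0, 1.0] to the nearest of 2**bits evenly spaced levels."""
        step = 1.0 / (2 ** bits - 1)
        return round(x / step) * step

    original = 0.123456789           # an "analog" value of unlimited resolution
    stored = quantize(original)      # a digital copy: one of only 256 values...
    copy = quantize(stored)          # ...yet every further copy is bit-exact
    assert stored == copy            # no generational loss, unlike analog copying
    print(original, stored)          # 0.123456789 0.12156862745098...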

Removed text

I removed the following text because, while probably true, it doesn't really apply to the subject at hand. --Rick Sidwell 01:04, 6 September 2005 (UTC)

It should be noted that photographic film is not perfect, being subject to aberrations. Losses in analog systems are often modelled as a noise spectrum and modulation transfer function (MTF). The MTF of many analog systems, including film, typically "rolls off" with increasing frequency.

Nits, and the picking thereof

I think the article accurately reflects the way the term is most commonly used, but the comparison of digital to discrete and the comparison of digital to analog are flawed if you're being very precise.

The problem is a tacit assumption made when calling digital data "numbers": the assumption of fixed-length representation that is inherent in computer system design.

In the limit, digital and analog systems have the same resolution and the same accuracy. In practice, people are most familiar with situations, like audio recording, where analog systems behave as described above. But there are other, less common situations, like the solution of certain partial differential equations, where analog systems are both higher-resolution and more accurate. The most extreme example is using a wind tunnel rather than a computer to simulate fluid flow. Both are models; the first is analog, the second digital. Here the analog system is more accurate, and of higher resolution, than the digital one.

Confused

The entry begins with:

A digital system is one that uses numbers, especially binary numbers, for input, processing, transmission, storage, or display, rather than a continuous spectrum of values (an analog system) or non-numeric symbols such as letters or icons.

The entry then concludes with a catalogue of non-numeric symbols, which are identified as "Historical Digital Systems." Isn't this a bit incoherent? I think the entry would benefit overall by concentrating on the discrete nature of digital encoding (as opposed to the non-discrete nature of analogue).

I'd also like to query the supposition that digital encoding necessarily implies some form of numerical representation. I may be out on a limb here (I'm not a mathematician), but, while strictly speaking it's true that any enumerable quantity is a number, isn't it a bit misleading to say that "digital" fundamentally or necessarily implies the representation of number? Doesn't that privilege one possible interpretation of a digital encoding? Wouldn't it be more correct to say that number can be represented digitally, i.e. the "digital representation of number", but that a digital encoding can just as easily represent a letter of the alphabet (ASCII), part of a sound recording, and so on?
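(A trivial Python illustration of that ASCII point, added here for concreteness: the stored bit pattern encodes a letter, and reading it as the number 65 is just one interpretation.)

    # The letter 'A' stored digitally: the same bit pattern can be read as a
    # letter or as the number 65; the encoding itself is neither.
    codepoint = ord("A")                      # ASCII/Unicode code point: 65
    bits = format(codepoint, "08b")           # the stored pattern: '01000001'
    print(bits, codepoint, chr(codepoint))    # 01000001 65 A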

Confused? Me too!

I think the "confused" entry is making a great point. We really need to talk about the fundamental differences between digital and analog in this article; and we have not achieved this yet.

The concept of digital is very well defined and very straightforward. It has nothing to do with numbers, computers, binary numerals, transmission, storage, or display of data or signals. That is a layman's definition, and it might very well be fine for this article, since it's a high-level definition of the term. But something more is needed.

The opening three paragraphs are, in fact, inaccurate. Most digital systems do not use binary numbers to represent data. And the first sentence of paragraph two is just wrong: the distinction of "digital" versus "analog" has nothing to do with the "method of input, data storage and transfer, or the internal working of a device". Though "internal workings of a device" is a vague term, for sure; these terms are all so vague.

Can't we talk about the concept of having a pre-arranged alphabet of symbols, which reduces randomness in the system? And about the fact that REGENERATION of the original data, rather than amplification, is really what makes a system digital? Analog systems can only amplify and reproduce; digital systems can RECREATE the original data. These are fundamental concepts at the heart of the definition of digital.
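To make the regeneration point concrete, here is a hypothetical Python sketch (the two-symbol alphabet, noise level, and hop count are all invented for illustration): a repeater that snaps each received level back to the nearest symbol recreates the original data exactly, so long as the noise stays well below half the symbol spacing. An amplifier, by contrast, would carry the accumulated noise of every hop forward.

    import random
    random.seed(1)

    SYMBOLS = [0.0, 1.0]                   # a pre-arranged two-symbol alphabet

    def noisy_hop(levels, sigma=0.1):
        """One link in a relay chain: Gaussian noise is added to every level."""
        return [v + random.gauss(0.0, sigma) for v in levels]

    def regenerate(levels):
        """A digital repeater: snap each received level to the nearest symbol."""
        return [min(SYMBOLS, key=lambda s: abs(v - s)) for v in levels]

    data = [random.choice(SYMBOLS) for _ in range(1000)]

    received = data
    for _ in range(10):                    # ten noisy hops, regenerating each time
        received = regenerate(noisy_hop(received))

    print(received == data)                # True: flipping even one symbol would
                                           # take a 5-sigma noise excursion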

Thoughts? —The preceding unsigned comment was added by Dje (talk • contribs) 03:58, 6 April 2007 (UTC).

The comment you're reacting to is over a year old. If you have a good improvement over the present definition, let's see it. I agree that the "especially binary numbers" is quite lame; the rest is not bad as general-audience definitions go, I think, but I agree it can be improved. Note that your complaint about the distinction having nothing to do with input, output, etc., is a bit off base, since it does not imply that those uses are part of the distinction. Dicklyon 06:11, 6 April 2007 (UTC)

'Unreferenced' tag

An article such as this should be capable of being written by a reasonably competent engineer without reference to other material. Unfortunately, this article is very poor in its current state. It tries to be too complicated.

It has a number of major problems. For example, there is a section on noise being introduced into the digital signal, but it totally fails to mention the single largest source of noise: the digitisation process itself.
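For the record, that digitisation noise is easy to exhibit numerically (a hypothetical Python sketch, illustrating the standard result that an ideal uniform quantizer gives roughly 6 dB of signal-to-noise ratio per bit):

    import math

    def quantization_snr_db(bits, samples=100_000):
        """SNR of an ideal uniform quantizer fed a full-scale sine wave."""
        step = 2.0 / (2 ** bits)                      # step size over [-1, 1]
        signal = noise = 0.0
        for k in range(samples):
            x = math.sin(2 * math.pi * k / samples)   # the "analog" input
            q = round(x / step) * step                # the digitised value
            signal += x * x
            noise += (q - x) ** 2
        return 10 * math.log10(signal / noise)

    for n in (8, 12, 16):
        print(n, "bits:", round(quantization_snr_db(n), 1), "dB")
    # Approximately 49.9, 74.0 and 98.1 dB -- the textbook 6.02*n + 1.76 dB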

It also gives an example of a digital system: smoke signals. Fair enough, the smoke (or absence thereof) is a reasonable example of digital communication. But it is spoilt because it refers to the smoke itself as "an analogue carrier". It is not an analogue carrier, precisely because it does not represent anything else (i.e. it is not an analogue).

For Morse code it refers to five digital states. Not so: there are two digital states, the presence or absence of an electric current or carrier. What the various intervals represent is purely a matter of interpretation.
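A toy encoding (hypothetical Python; the timing units follow the usual convention of dot = 1 unit, dash = 3, gap between letters = 3) shows how a strictly two-state channel carries the whole alphabet through duration alone:

    # Two channel states only: 1 = carrier on, 0 = carrier off.
    # Dots, dashes and gaps are interpretations of how long each state lasts.
    ON, OFF = 1, 0
    DOT, DASH = [ON], [ON] * 3                  # on for 1 or 3 time units
    ELEMENT_GAP, LETTER_GAP = [OFF], [OFF] * 3  # off for 1 or 3 time units

    MORSE = {"S": [DOT, DOT, DOT], "O": [DASH, DASH, DASH]}

    def keyed_signal(text):
        """Flatten letters into one stream of on/off time units."""
        out = []
        for letter in text:
            for element in MORSE[letter]:
                out += element + ELEMENT_GAP
            out[-1:] = LETTER_GAP               # widen the gap after each letter
        return out

    print(keyed_signal("SOS"))   # only ever 1s and 0s; the timing is the code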

For the modem it refers once again to an 'analogue carrier signal' that is most definitely not an analogue. This discussion is by no means exhaustive.

Should I get the time, I will try to provide a better article, but it won't be anytime real soon. I B Wright 10:37, 8 August 2007 (UTC)


I think both the analog and digital columns need pictures. Show a smoothly varying soundwave, and then show a discrete approximation of it. —Preceding unsigned comment added by 75.50.59.217 (talk) 00:06, 24 October 2007 (UTC)
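Something along these lines would do it (a hypothetical Python sketch using matplotlib; the frequency, sample count, and level count are arbitrary choices):

    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(0, 1, 1000)                # quasi-continuous time axis
    analog = np.sin(2 * np.pi * 3 * t)         # the smoothly varying soundwave

    ts = np.linspace(0, 1, 24)                 # a handful of sample instants
    digital = np.round(np.sin(2 * np.pi * 3 * ts) * 7) / 7   # coarse levels

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
    ax1.plot(t, analog)
    ax1.set_title("Analog: continuous in time and value")
    ax2.step(ts, digital, where="post")        # staircase approximation
    ax2.set_title("Digital: discrete samples, discrete levels")
    fig.savefig("analog_vs_digital.png")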


A "digital" system is more abstract than what is presented in this article. John Haugeland defines a digital system as a "set of positive and reliable techniques (methods, devices) for producing and reidentifying tokens, or configurations of tokens..." I think someone should start from something like this, and then move into examples of more concrete digital systems, and digital vs. analog... —Preceding unsigned comment added by 24.20.30.127 (talk) 04:31, 5 November 2007 (UTC)