GeForce 256
From Wikipedia, the free encyclopedia
| Nvidia GeForce 256 | |
|---|---|
| Codename(s) | NV10 |
| Created | 1999 |
| Mid-range GPU | GeForce 256 SDR |
| High-end GPU | GeForce 256 DDR |
| Direct3D and Shader version | Direct3D 7.0 |
The GeForce 256 was the first of Nvidia's "GeForce" product line. Released on August 31, 1999, the GeForce 256 improved on its predecessor (the RIVA TNT2) by increasing the number of fixed-function pixel pipelines, offloading host geometry calculations to a hardware transform and lighting (T&L) engine, and adding hardware motion compensation for MPEG-2 video. It offered a notable leap in 3D gaming performance and was the first fully Direct3D 7-compliant 3D accelerator released. The GeForce 256 firmly established Nvidia as the industry leader and resulted in the demise of competitors in the discrete graphics industry, most notably 3dfx. One year later, only ATI, with its comparable Radeon series, remained in direct competition with Nvidia.
The GeForce 256 name originated from a contest held by Nvidia in early 1999. Called "Name That Chip", the contest invited the public to name the successor to the RIVA TNT2 line of graphics boards. Over 12,000 entries were received, and seven winners each received a RIVA TNT2 Ultra graphics board as a reward.[1][2]
Overview
Architecture
Upon release, the GeForce 256 offered industry-leading 3D rendering performance and cemented Nvidia's position as a key figure in the PC graphics industry. It was marketed as "the world's first 'GPU', or Graphics Processing Unit," a term Nvidia had just coined and defined at the time as "a single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines that is capable of processing a minimum of 10 million polygons per second."
The new "GPU" term was intended to set GeForce 256 apart from professional graphics cards with on-board, separate geometry processors, and previous products with less powerful on-chip T&L support like the 3Dlabs Permedia 2. Notably, while Direct3D 7 was the first release of that API to support hardware T&L, OpenGL had supported it much longer and was typically the purview of these older professionally-oriented 3D accelerators which were designed for computer-aided design (CAD) instead of games. NV10's T&L engine allowed Nvidia to enter this market for the first time with a product called Quadro. The Quadro line uses the same silicon chips as the GeForce cards, but has different driver support and certifications tailored to the unique requirements of CAD applications.[3] Nonetheless, the GeForce 256 performed competently as a "poor man's" workstation card, and undercut sales of its expensive Quadro sibling.
As studies of the architecture progressed, it became clear that the GeForce 256 was heavily memory-bandwidth constrained, especially the SDRAM model. The same problem affected the succeeding GeForce 2 line, as neither GPU had bandwidth- or fill-rate-saving mechanisms, unlike the competing ATI Radeon series with its HyperZ technology. As such, the GeForce 256 was unable to reach its peak fill rate in actual game use, especially when 32-bit color depth was used instead of 16-bit. The later GeForce4 MX line, while based upon the GeForce 256, was more efficient because it gained various efficiency-boosting technologies from the GeForce 3 and GeForce4 Ti. As a result, the two-pixel-pipeline GeForce4 MX 460 could rival even the four-pipeline GeForce 3 and Radeon 8500 in some games.[4]
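A rough, back-of-the-envelope calculation makes the bandwidth limit concrete. The clock and bus figures below are the card's published specifications (four pixel pipelines at 120 MHz, a 128-bit memory bus at 166 MHz SDR or 300 MT/s DDR); the estimate of roughly 12 bytes of memory traffic per rendered pixel at 32-bit color (color write plus Z-buffer read and write) is an assumption made for illustration.

```c
#include <stdio.h>

/* Back-of-the-envelope check of the bandwidth limit described above.
 * Clock and bus figures are the card's published specifications; the
 * ~12 bytes of memory traffic per rendered pixel (32-bit colour write
 * plus Z-buffer read and write) is an assumption for illustration. */
int main(void)
{
    const double pipelines      = 4.0;
    const double core_mhz       = 120.0;                 /* NV10 core clock       */
    const double peak_fill_mpix = pipelines * core_mhz;  /* 480 Mpixels/s         */

    const double bytes_per_pixel = 12.0;                 /* assumed traffic/pixel */
    const double needed_gbs = peak_fill_mpix * 1e6 * bytes_per_pixel / 1e9;

    const double sdr_gbs = (128.0 / 8.0) * 166e6 / 1e9;  /* 128-bit SDR @ 166 MHz */
    const double ddr_gbs = (128.0 / 8.0) * 300e6 / 1e9;  /* 128-bit DDR, 300 MT/s */

    printf("needed at peak fill: %.1f GB/s\n", needed_gbs);  /* ~5.8 GB/s */
    printf("SDR board supplies:  %.1f GB/s\n", sdr_gbs);     /* ~2.7 GB/s */
    printf("DDR board supplies:  %.1f GB/s\n", ddr_gbs);     /* ~4.8 GB/s */
    return 0;
}
```

Under these assumptions the SDR board supplies less than half the bandwidth needed to sustain peak fill rate at 32-bit color, while the DDR board comes considerably closer, which is consistent with the behavior described above.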
Performance and value
The GeForce 256 offered exceptional rendering performance that surpassed existing high-end graphics cards. Compared to previous high-end 3D game accelerators, such as the 3dfx Voodoo3 3500 and Nvidia RIVA TNT2 Ultra, the card provided a 50% or greater improvement in frame rate in a number of then-popular game titles.[5] Its support for the full Direct3D 7 API also assured the card a strong future, unlike its initial Direct3D 6 competition. The GeForce 256 remained supported in games until approximately 2006, with titles such as Star Wars: Empire at War.
However, without broad application support at the time, critics contended that the T&L technology had little real-world value. It was only somewhat beneficial in a few OpenGL-based 3D first-person shooters of the time, most notably Quake III Arena. 3dfx and other competing graphics card companies contended that a fast CPU would make up for the lack of a T&L unit. The GeForce 256 was also quite expensive for its time, and its performance outside state-of-the-art 3D gaming was mediocre, confining it to a niche market as a high-end "gamer's card."
Only after the GeForce 256 was replaced by the GeForce 2, and ATI's T&L-equipped Radeon was also on the market, did hardware T&L become a heavily utilized feature in games. While the GeForce 2 GTS supplanted the GeForce 256 as the mid-to-high-end card, the GeForce 2 MX offered performance close to the GeForce 256 at a lower production cost and power consumption.
Specifications
Competitors
Nvidia's success with the GeForce 256 came at the expense of 3dfx, Matrox, and S3 Graphics, whose products could not compete with the new chip. These companies' products, such as the 3dfx Voodoo3, Matrox G400, and S3 Savage4, were based on the older Direct3D 6 paradigm and lacked hardware T&L.
S3 Graphics launched the Savage 2000 accelerator in late 1999, shortly after the GeForce 256. The Savage 2000 supposedly offered similar performance to the GeForce 256 while using around half as many transistors (12 million versus 23 million), which could have lowered production costs. Its features also included hardware T&L, but the feature was not activated because only Direct3D 6.0 drivers were available when the card shipped. (Though hardware T&L could be forced on through a registry setting, the result was not a performance increase but instability and rendering errors.) Consequently, the Savage 2000 generally did not perform as well as the GeForce 256, and S3 Graphics never developed working Direct3D 7.0 drivers.[6]
ATI's initial response to the GeForce 256 was the dual-chip Rage Fury MAXX. By using two Rage 128 chips, each rendering an alternate frame, the card was able to approach the performance of SDR-memory GeForce 256 cards, but the GeForce 256 DDR retained the top speed.[7] ATI's hardware T&L-capable Radeon (later renamed Radeon 7200) launched months later and exceeded the GeForce 256 in features and performance. However, by the time it was released it had to compete with the NV10 refresh, the GeForce 2 GTS, and generally did not outperform that card except at 32-bit color.[8]
References
- ^ Nvidia "Name That Chip" contest results, nvidia.com (via the Internet Archive Wayback Machine).
- ^ Taken, Femme. Nvidia "Name that chip" contest, tweakers.net, April 17, 1999.
- ^ Nvidia Workstation Products, Nvidia, accessed October 2, 2007.
- ^ Worobyew, Andrew and Medvedev, Alexander. MSI GF4MX420, GF4MX440 and GF4MX460 Video Cards Review, Digit Life, accessed October 2, 2007.
- ^ Ross, Alex and Wood, Joan. Nvidia GeForce 256 DDR Guide, Sharky Extreme, October 14, 1999.
- ^ Yu, James. Diamond Viper II Z200 Savage2000 Review, Firing Squad, November 15, 1999.
- ^ Fastsite. ATI RAGE FURY MAXX Review, X-bit Labs, February 4, 2000.
- ^ Witheiler, Matthew. ATI Radeon 64MB DDR, Anandtech, July 17, 2000.