IOPS
From Wikipedia, the free encyclopedia
IOPS (Input/Output Operations Per Second) is a standard performance benchmark provided by applications such as IOMeter (originally developed by Intel) and is used primarily with servers to determine their best configuration settings. The number of IOPS achievable in any server configuration varies greatly with the parameters the tester enters into the program, including the balance of read and write operations, the simulated randomness of the data access, the number of worker threads and the queue length, and the data block sizes. Because of this, IOPS figures published by drive and SAN vendors are often misleading and generally represent best-case scenarios.
The most common performance characteristics that are measured or defined are:
- Total IOPS: Average number of I/O operations per second.
- Read IOPS: Average number of read I/O operations per second.
- Write IOPS: Average number of write I/O operations per second.
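These averages follow directly from counting operations over a measurement window. As a minimal sketch (the function name and sample figures are illustrative, not from any particular benchmark tool):

```python
# Sketch: deriving total, read, and write IOPS from raw operation
# counts collected over a measurement interval.

def iops(read_ops, write_ops, elapsed_seconds):
    """Return (total, read, write) IOPS for one measurement window."""
    read_iops = read_ops / elapsed_seconds
    write_iops = write_ops / elapsed_seconds
    return read_iops + write_iops, read_iops, write_iops

# Example: 4,500 reads and 1,500 writes completed in a 60-second run.
total, r, w = iops(read_ops=4500, write_ops=1500, elapsed_seconds=60)
print(total, r, w)  # 100.0 75.0 25.0
```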
Some common published IOPS numbers:
- Solid-state drive: 40,000 IOPS
- Standard SATA hard drive: 75 IOPS (1 outstanding I/O)
- Western Digital Raptor 150GB: 126 IOPS (1 outstanding I/O)
- Seagate Savvio 2.5" 15K SAS: 183 IOPS (1 outstanding I/O)
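For rotating drives with a single outstanding I/O, figures like these can be roughly approximated from mechanical characteristics: each random operation costs an average seek plus, on average, half a rotation. A sketch of that model follows; the seek times used are assumed, typical values, not the specifications of the drives listed above.

```python
# Rough model: random-access IOPS of a rotating drive at queue depth 1
# is about 1 / (average seek time + average rotational latency).

def estimated_iops(avg_seek_ms, rpm):
    # Average rotational latency is the time for half a revolution.
    rotational_latency_ms = 0.5 * 60_000 / rpm
    service_time_ms = avg_seek_ms + rotational_latency_ms
    return 1000 / service_time_ms

# Assumed, typical parameters (not vendor specs):
print(round(estimated_iops(avg_seek_ms=9.0, rpm=7200)))   # ~76, commodity SATA class
print(round(estimated_iops(avg_seek_ms=2.9, rpm=15000)))  # ~204, 15K SAS class
```

The model reproduces the order of magnitude of the published numbers, which is why random-access IOPS on mechanical drives is dominated by seek and rotation rather than interface speed.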
Some hard drives will improve in performance as the number of outstanding I/Os increases. This is usually the result of more advanced controller logic on the drive performing command queuing and reordering, commonly called tagged command queuing (TCQ) or native command queuing (NCQ). Most commodity SATA drives either cannot do this, or their implementation is so poor that no performance benefit is seen. Good SATA drives, such as the Western Digital Raptor, will improve slightly, usually by no more than 50%. High-end SCSI drives, more commonly found in servers, generally show much greater improvement, with the Seagate Savvio exceeding 400 IOPS, more than doubling its performance.

