Page (computing)

In the context of computer virtual memory, a page, memory page, or virtual page is a fixed-length block of main memory that is contiguous in both physical and virtual memory addressing. A page is usually the smallest unit of data for memory allocation performed by the operating system and for transfers between main memory and an auxiliary store, such as a hard disk drive.

The virtual memory abstraction allows a page that does not currently reside in main memory to be addressed and used. If a program tries to access a location in such a page, it generates an exception called a page fault. The hardware or operating system is notified and loads the required page from the auxiliary store automatically. The program addressing the memory has no knowledge of the page fault or of the process that follows it; as a consequence, a program can be allowed to address more RAM than actually exists in the computer.

The transfer of pages between main memory and an auxiliary store, such as a hard disk drive, is referred to as paging or swapping.[1]

Page size trade-off

Page size is usually determined by the processor architecture. Traditionally, pages in a system had uniform size, for example 4096 bytes. However, processor designs often allow two or more, sometimes simultaneous, page sizes because of the trade-offs involved. There are several points that can factor into choosing the best page size.

Page size versus page table size

A system with a smaller page size uses more pages, requiring a page table that occupies more space. For example, if a 2³² byte virtual address space is mapped to 4KB (2¹² byte) pages, the number of virtual pages is 2²⁰ (since 20 = 32 − 12). However, if the page size is increased to 32KB (2¹⁵ bytes), only 2¹⁷ pages are required.
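
The arithmetic above can be illustrated with a short C program. This is only a sketch for the example numbers in this section, not an operating-system interface; it simply prints how many page-table entries are needed in each case.

#include <stdio.h>
 
int main(void)
{
        /* Illustrative only: a 32-bit (2^32 byte) virtual address space
           mapped first with 4KB pages, then with 32KB pages. */
        unsigned long long address_space = 1ULL << 32;  /* 2^32 bytes */
        unsigned long small_page = 1UL << 12;           /* 4KB pages  */
        unsigned long large_page = 1UL << 15;           /* 32KB pages */
 
        printf("4KB pages:  %llu entries\n", address_space / small_page); /* 2^20 */
        printf("32KB pages: %llu entries\n", address_space / large_page); /* 2^17 */
        return 0;
}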

Page size versus TLB usage

Processors maintain a Translation Lookaside Buffer (TLB), a cache of virtual-to-physical address mappings that is consulted on every memory access. The TLB is typically of limited size, and when it cannot satisfy a given request (a TLB miss), the page tables must be walked (either in hardware or in software, depending on the architecture) to find the correct mapping, which is a time-consuming process. Larger page sizes mean that a TLB of the same size can keep track of a larger amount of memory, which reduces the number of costly TLB misses.

Internal fragmentation of pages

Rarely do processes require the use of an exact number of pages. As a result, the last page will likely only be partially full, wasting some amount of memory. Larger page sizes clearly increase the potential for wasted memory this way, as more potentially unused portions of memory are loaded into main memory. Smaller page sizes ensure a closer match to the actual amount of memory required in an allocation.

As an example, assume the page size is 1MB. If a process allocates 1025KB, two pages must be used, resulting in 1023KB of unused space.
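
As a sketch of this calculation, the following C program rounds an allocation up to a whole number of pages and reports the unused remainder; the numbers mirror the example above and are purely illustrative.

#include <stdio.h>
 
int main(void)
{
        unsigned long page_size  = 1024 * 1024;  /* 1MB page size   */
        unsigned long allocation = 1025 * 1024;  /* 1025KB request  */
 
        /* Round the allocation up to a whole number of pages. */
        unsigned long pages  = (allocation + page_size - 1) / page_size;
        unsigned long wasted = pages * page_size - allocation;
 
        printf("%lu page(s) used, %luKB of unused space\n", pages, wasted / 1024);
        return 0;
}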

Page size versus disk access

When transferring from disk, much of the delay is caused by seek time. Because of this, large sequential transfers are more efficient than several smaller ones; transferring a larger page from disk to memory therefore does not take much more time than transferring a smaller page.

Determining the page size in a program

Most operating systems allow programs to determine the page size at run time. This allows programs to use memory more efficiently by aligning allocations to this size and reducing overall internal fragmentation of pages.

UNIX and POSIX-based operating systems

UNIX and POSIX-based systems use the system function sysconf(), as illustrated in the following example written in the C programming language.

#include <stdio.h>
#include <unistd.h>    // sysconf(3)
 
int main()
{
        printf("The page size for this system is %ld bytes\n", sysconf(_SC_PAGESIZE)); //_SC_PAGE_SIZE is OK too.
        return 0;
}

Windows-based operating systems

Win32-based operating systems, such as Windows 9x, Windows NT, and ReactOS, use the system function GetSystemInfo() from kernel32.dll.

#include <stdio.h>
#include <windows.h>
 
int main()
{
        SYSTEM_INFO si;
 
        GetSystemInfo(&si);
        printf("The page size for this system is %u bytes\n", si.dwPageSize);
 
        return 0;
}

Huge pages

Intel x86 supports 4MB pages (2MB pages if using PAE) in addition to its standard 4KB pages, and other architectures often have similar features. IA-64 supports as many as eight different page sizes, from 4KB up to 256MB. This support for huge pages (or, in Microsoft Windows terminology, large pages) allows for "the best of both worlds": it reduces pressure on the TLB (sometimes increasing speed by as much as 15%, depending on the application and the allocation size) for large allocations while still keeping memory usage at a reasonable level for small allocations.

Huge pages, despite being supported by most contemporary personal computers, are not in common use except in large servers and computational clusters. Their use typically requires elevated privileges, cooperation from the application making the large allocation (usually setting a flag to ask the operating system for huge pages), or manual administrator configuration; in addition, operating systems often cannot, sometimes by design, page them out to disk.
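
As an illustration of the application cooperation mentioned above, on Linux a program can ask for a huge-page-backed mapping by passing the MAP_HUGETLB flag to mmap(). The sketch below assumes a recent Linux kernel with a 2MB huge page size and with huge pages already reserved by the administrator (for example through /proc/sys/vm/nr_hugepages); if either assumption does not hold, the call returns MAP_FAILED and the program would have to fall back to ordinary pages.

#define _GNU_SOURCE          /* for MAP_ANONYMOUS and MAP_HUGETLB */
#include <stdio.h>
#include <sys/mman.h>
 
int main(void)
{
        size_t length = 2 * 1024 * 1024;  /* one 2MB huge page (assumed size) */
 
        /* Ask the kernel for an anonymous mapping backed by huge pages. */
        void *buf = mmap(NULL, length, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (buf == MAP_FAILED) {
                perror("mmap with MAP_HUGETLB");
                return 1;
        }
 
        printf("Mapped %zu bytes backed by huge pages\n", length);
        munmap(buf, length);
        return 0;
}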

Linux has supported huge pages on several architectures since the 2.6 kernel series. Windows Server 2003 (SP1 and newer), Windows Vista and Windows Server 2008 support huge pages under the name of large pages, but Windows XP does not. Solaris supports large pages on SPARC and x86 beginning with version 9.[2][3]

References

  1. ^ Belzer, Jack; Holzman, Albert G.; Kent, Allen, eds. (1981), "Virtual memory systems", Encyclopedia of Computer Science and Technology, vol. 14, CRC Press, p. 32, ISBN 0824722140, <http://books.google.com/books?id=KUgNGCJB4agC&printsec=frontcover>
  2. ^ Supporting Multiple Page Sizes in the Solaris Operating System. Sun BluePrints Online. Sun Microsystems. Retrieved on 2008-01-19.
  3. ^ Supporting Multiple Page Sizes in the Solaris Operating System Appendix. Sun BluePrints Online. Sun Microsystems. Retrieved on 2008-01-19.

Dandamudi, Sivarama P. (2003). Fundamentals of Computer Organization and Design. Springer, pp. 740-741. ISBN 038795211X.
