Data parallelism
Data parallelism (also known as loop-level parallelism) is a form of parallelization of computing across multiple processors in parallel computing environments. Data parallelism focuses on distributing the data across different parallel computing nodes. It contrasts with task parallelism, another form of parallelism.
Description
In a multiprocessor system executing a single set of instructions (SIMD), data parallelism is achieved when each processor performs the same task on different pieces of distributed data. In some situations, a single execution thread controls operations on all pieces of data. In others, different threads control the operation, but they execute the same code.
For instance, consider running code on a 2-processor system (CPUs A and B) in a parallel environment, with the goal of performing some task on data D. We can tell CPU A to perform the task on one part of D and CPU B on another part simultaneously, thereby reducing the overall runtime. The data can be assigned using conditional statements, as described below. As a specific example, consider adding two matrices. In a data parallel implementation, CPU A could add all elements from the top half of the matrices, while CPU B could add all elements from the bottom half. Since the two processors work in parallel, the matrix addition would ideally take half the time of performing the same operation in serial on one CPU alone (ignoring the overhead of coordinating the processors).
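As a concrete illustration, the following Python sketch splits the matrix addition between two threads in the way just described. The names a, b, c and add_rows are illustrative choices, not part of any standard API; and since CPython's global interpreter lock limits parallel speedup for pure-Python arithmetic, the sketch shows the structure of the decomposition rather than a real performance gain.

import threading

n = 4
a = [[1] * n for _ in range(n)]   # first input matrix
b = [[2] * n for _ in range(n)]   # second input matrix
c = [[0] * n for _ in range(n)]   # result matrix

def add_rows(start, stop):
    # Each worker adds its own band of rows; the bands are disjoint,
    # so no locking is needed.
    for i in range(start, stop):
        for j in range(n):
            c[i][j] = a[i][j] + b[i][j]

# "CPU A" takes the top half of the rows, "CPU B" the bottom half.
top = threading.Thread(target=add_rows, args=(0, n // 2))
bottom = threading.Thread(target=add_rows, args=(n // 2, n))
top.start(); bottom.start()
top.join(); bottom.join()
print(c)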
Data parallelism emphasizes the distributed (parallelized) nature of the data, as opposed to the processing (task parallelism). Most real programs fall somewhere on a continuum between task parallelism and data parallelism.
Example
The pseudocode below illustrates data parallelism:
program:
  ...
  if CPU = "a" then
    low_limit = 1
    upper_limit = 50
  else if CPU = "b" then
    low_limit = 51
    upper_limit = 100
  end if
  do i = low_limit, upper_limit
    Task on d(i)
  end do
  ...
end program
The goal of the program is to perform some task on the data array "d" of size 100 (for example). If we write the code as above and launch it on a 2-processor system, the runtime environment will execute it as follows.
- In a SIMD system, both CPUs execute the same code.
- In a parallel environment, both have access to "d".
- A mechanism is presumed to be in place whereby each CPU creates its own copy of "low_limit" and "upper_limit", independent of the other.
- The "if" clause differentiates between the CPUs: CPU "a" reads true on the "if" and CPU "b" reads true on the "else if", so each CPU holds its own values of "low_limit" and "upper_limit".
- Both CPUs now execute "Task on d(i)", but since each CPU has different values of the limits, they operate on different parts of "d" simultaneously, distributing the task between themselves. Provided the per-element work outweighs the coordination overhead, this will be faster than using a single CPU. (A runnable sketch of this pattern follows the per-CPU listings below.)
Code executed by CPU "a":
program:
  ...
  low_limit = 1
  upper_limit = 50
  do i = low_limit, upper_limit
    Task on d(i)
  end do
  ...
end program
Code executed by CPU "b":
program:
  ...
  low_limit = 51
  upper_limit = 100
  do i = low_limit, upper_limit
    Task on d(i)
  end do
  ...
end program
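A minimal runnable version of this two-CPU pattern might look as follows in Python, with each worker branching on its own identifier exactly as in the pseudocode. The names worker, task and result are illustrative, the two threads stand in for the two CPUs, and the 1-based limits from the pseudocode are mapped onto Python's 0-based list indices.

import threading

d = list(range(100))        # the data array "d" of size 100
result = [0] * 100

def task(x):
    return x * 2            # stand-in for "Task on d(i)"

def worker(cpu):
    # Each worker holds its own private copy of the limits.
    if cpu == "a":
        low_limit, upper_limit = 1, 50
    elif cpu == "b":
        low_limit, upper_limit = 51, 100
    for i in range(low_limit, upper_limit + 1):
        result[i - 1] = task(d[i - 1])   # 1-based limits over a 0-based list

threads = [threading.Thread(target=worker, args=(c,)) for c in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()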
This concept generalizes to any number of processors: rather than enumerating the limits in conditional branches, each processor can compute its own limits from its identifier.
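For example, under the assumption that the workers are numbered 0 through n_workers - 1, each one can derive a near-equal contiguous chunk of the index range from its own rank. The names rank, n_workers and chunk are illustrative, not from the original text.

import threading

N = 100
d = list(range(N))
result = [0] * N
n_workers = 4               # any number of workers

def worker(rank):
    # Divide the index range into near-equal contiguous chunks,
    # one per worker, computed from the worker's rank.
    chunk = (N + n_workers - 1) // n_workers   # ceiling division
    low = rank * chunk
    high = min(low + chunk, N)
    for i in range(low, high):
        result[i] = d[i] * 2    # stand-in for "Task on d(i)"

threads = [threading.Thread(target=worker, args=(r,)) for r in range(n_workers)]
for t in threads:
    t.start()
for t in threads:
    t.join()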