Concurrency (computer science)

The "Dining Philosophers", a classic problem involving concurrency and shared resources
The "Dining Philosophers", a classic problem involving concurrency and shared resources

In computer science, concurrency is a property of systems in which several computational processes execute at the same time and potentially interact with each other.[1] The study of concurrency encompasses a broad range of systems, from tightly coupled, largely synchronous parallel computing systems to loosely coupled, largely asynchronous distributed systems.[2] Concurrent processes may execute truly simultaneously, when they run on separate processors, or their execution steps may be interleaved to produce the appearance of simultaneity, as with separate processes running on a multitasking system. Because the processes in a concurrent system can interact with each other while they are executing, the number of possible execution paths can be extremely large, and the resulting behavior can be very complex. The difficulties associated with concurrency have been tackled both through the construction of languages and concepts that make the complexity of concurrent execution manageable, and through the development of theories for reasoning about interacting concurrent processes.[1]
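
The scale of the problem is easy to see even in the simplest setting: if two independent processes consist of m and n atomic steps respectively, the number of distinct interleavings of their steps is

    \binom{m+n}{m} = \frac{(m+n)!}{m!\,n!}

so two processes of just ten steps each already admit 184,756 possible execution orders, and the count grows exponentially as processes and steps are added.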

Issues

What distinguishes a concurrent system from a sequential one is that the processes which make up a concurrent system can interact with each other.[1] Concurrent use of shared resources is the source of many difficulties: race conditions involving shared resources can result in unpredictable system behavior, and the introduction of mutual exclusion to prevent race conditions can itself lead to problems such as deadlock and starvation.[2]
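
As a minimal illustration (the Counter class below is a sketch written for this article, not code from the cited sources), two Java threads incrementing a shared counter lose updates unless the increment is protected by mutual exclusion:

    public class Counter {
        private long count = 0;

        // Unsynchronized: count++ is a read-modify-write sequence, so two
        // threads can interleave the steps and overwrite each other's
        // update (a race condition).
        public void unsafeIncrement() { count++; }

        // synchronized enforces mutual exclusion, making the increment
        // atomic with respect to other synchronized methods of this object.
        public synchronized void safeIncrement() { count++; }

        public synchronized long get() { return count; }

        public static void main(String[] args) throws InterruptedException {
            Counter c = new Counter();
            Runnable work = () -> { for (int i = 0; i < 100_000; i++) c.safeIncrement(); };
            Thread t1 = new Thread(work), t2 = new Thread(work);
            t1.start(); t2.start();
            t1.join(); t2.join();
            System.out.println(c.get()); // always 200000; with unsafeIncrement it varies
        }
    }

The locking that repairs the race is itself the source of the further hazards mentioned above: threads that acquire several locks in inconsistent orders can deadlock, and a thread can starve if competitors always beat it to the lock.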

In addition to internal interactions, many concurrent systems, such as operating systems and databases, are intended to participate in an ongoing interaction with users and with other systems. Traditional notions of program correctness, which relate an initial input to the output expected at program termination, do not apply to systems that are not meant to terminate. Alternative ways of defining what it means for the operation of a concurrent system to be correct, such as safety properties (nothing bad ever happens) and liveness properties (something good eventually happens), are therefore required.[2]

The design of concurrent systems often entails finding reliable techniques for coordinating their execution, data exchange, memory allocation, and execution scheduling so as to minimize response time and maximize throughput.[citation needed]
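
The following sketch (illustrative only; it uses the standard java.util.concurrent library) shows one such coordination technique, a bounded queue that paces data exchange between a producer and a consumer so that neither outruns the other:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class PipelineSketch {
        public static void main(String[] args) throws InterruptedException {
            // The bounded queue coordinates the two threads: put() blocks
            // when the queue is full, take() blocks when it is empty.
            BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(8);

            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 100; i++) queue.put(i);
                    queue.put(-1); // sentinel marking the end of the stream
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });

            Thread consumer = new Thread(() -> {
                try {
                    int sum = 0;
                    for (int v = queue.take(); v != -1; v = queue.take()) sum += v;
                    System.out.println(sum); // prints 4950
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });

            producer.start(); consumer.start();
            producer.join(); consumer.join();
        }
    }

The capacity of the queue is the tuning knob here: it bounds memory use and applies backpressure to the producer, trading response time against throughput.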

Theory

Concurrency theory has been an active field of research in theoretical computer science since Carl Adam Petri's seminal work on Petri nets in the early 1960s.[3] In the years since, a wide variety of formalisms have been developed for modeling and reasoning about concurrency.

Models

A number of formalisms for modeling and understanding concurrent systems have been developed, including:[4]

  • Petri nets
  • process calculi such as CSP (Communicating Sequential Processes) and CCS (the Calculus of Communicating Systems)
  • the actor model
  • input/output automata

Some of these models of concurrency are primarily intended to support reasoning and specification, while others can be used through the entire development cycle, including design, implementation, proof, testing and simulation of concurrent systems.

The proliferation of different models of concurrency has motivated some researchers to develop ways to unify these different theoretical models. For example, Lee and Sangiovanni-Vincentelli have demonstrated that a so-called "tagged-signal" model can be used to provide a common framework for defining the denotational semantics of a variety of different models of concurrency,[5] while Nielsen, Sassone, and Winskel have demonstrated that category theory can be used to provide a similar unified understanding of different models.[6]

Logics

Various types of temporal logic[7] can be used to help reason about concurrent systems. Some of these logics, such as linear temporal logic and computation tree logic, allow assertions to be made about the sequences of states that a concurrent system can pass through. Others, such as action computation tree logic, Hennessy-Milner logic, and Lamport's temporal logic of actions, build their assertions from sequences of actions (changes in state). The principal application of these logics is in writing specifications for concurrent systems.[2]
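
For example, the two central requirements of a mutual-exclusion protocol can be written in linear temporal logic as follows (the atomic propositions try_i and cs_i, meaning "process i is trying to enter" and "process i is in its critical section", are illustrative):

    % Safety: the two processes are never in their critical sections at once.
    \Box\, \neg(\mathit{cs}_1 \land \mathit{cs}_2)

    % Liveness: a process that is trying to enter eventually succeeds.
    \Box\, (\mathit{try}_1 \rightarrow \Diamond\, \mathit{cs}_1)

The first formula is a safety property and the second a liveness property; a model checker establishes that every execution of the system satisfies both.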

Practice

Concurrent programming encompasses the programming languages and algorithms used to implement concurrent systems. Concurrent programming is usually considered more general than parallel programming because it can involve arbitrary and dynamic patterns of communication and interaction, whereas parallel systems generally have a predefined and well-structured communication pattern. The basic goals of concurrent programming include correctness, performance, and robustness. Concurrent systems such as operating systems are generally designed to operate indefinitely and not to terminate unexpectedly. Some concurrent systems implement a form of transparent concurrency, in which concurrent computational entities may compete for and share a single resource, but the complexities of this competition and sharing are shielded from the programmer.
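
A thread pool is a familiar instance of such transparency. In the sketch below (illustrative only; it uses the standard java.util.concurrent library), tasks compete for a fixed set of worker threads, but the queuing and dispatching that resolve the competition are hidden from the caller:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class PoolSketch {
        public static void main(String[] args) throws Exception {
            // Four worker threads are shared by all submitted tasks; the
            // executor decides which task runs on which thread, and when.
            ExecutorService pool = Executors.newFixedThreadPool(4);
            Future<Integer> sum = pool.submit(() -> {
                int s = 0;
                for (int i = 1; i <= 100; i++) s += i;
                return s;
            });
            System.out.println(sum.get()); // prints 5050
            pool.shutdown();
        }
    }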

Because they use shared resources, concurrent systems generally require the inclusion of some kind of arbiter somewhere in their implementation (often in the underlying hardware) to control access to those resources. The use of arbiters introduces the possibility of indeterminacy in concurrent computation, which has major implications for practice, including correctness and performance. For example, arbitration introduces unbounded nondeterminism, which raises issues for model checking because it causes explosion in the state space and can even cause models to have an infinite number of states.
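
The effect is visible even with two threads. In the following sketch (illustrative only), an atomic compare-and-set operation acts as a tiny arbiter: it grants a shared slot to exactly one thread, but which thread wins is decided by the scheduler and hardware and varies from run to run:

    import java.util.concurrent.atomic.AtomicReference;

    public class ArbiterSketch {
        public static void main(String[] args) throws InterruptedException {
            // compareAndSet(null, ...) succeeds for exactly one of the two
            // threads; which one is resolved nondeterministically at run time.
            AtomicReference<String> slot = new AtomicReference<>(null);
            Thread a = new Thread(() -> slot.compareAndSet(null, "A"));
            Thread b = new Thread(() -> slot.compareAndSet(null, "B"));
            a.start(); b.start();
            a.join(); b.join();
            System.out.println(slot.get()); // prints "A" or "B", varying by run
        }
    }

A model checker exploring this program must consider both outcomes; with many threads and many arbitration points, the set of reachable states multiplies accordingly.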

References

  1. Roscoe, A. W. (1997). The Theory and Practice of Concurrency. Prentice Hall. ISBN 0-13-674409-5.
  2. Cleaveland, Rance; Smolka, Scott (December 1996). "Strategic Directions in Concurrency Research". ACM Computing Surveys 28 (4): 607. doi:10.1145/242223.242252.
  3. Baeten, J. C. M. (2004). A Brief History of Process Algebra. Rapport CSR 04-02, Vakgroep Informatica, Technische Universiteit Eindhoven.
  4. Filman, Robert; Friedman, Daniel (1984). Coordinated Computing: Tools and Techniques for Distributed Software. McGraw-Hill. ISBN 0-07-022439-0. http://ic.arc.nasa.gov/people/filman/text/dpl/dpl.html
  5. Lee, Edward; Sangiovanni-Vincentelli, Alberto (December 1998). "A Framework for Comparing Models of Computation". IEEE Transactions on CAD 17 (12): 1217-1229.
  6. Nielsen, Mogens; Sassone, Vladimiro; Winskel, Glynn (1993). "Relationships Between Models of Concurrency". REX School/Symposium.
  7. Stirling, Colin (2001). Modal and Temporal Properties of Processes. Springer. ISBN 0-387-98717-7.

Further reading

  • Lynch, Nancy A. (1996). Distributed Algorithms. Morgan Kaufmann. ISBN 1-55860-348-4.
  • Tanenbaum, Andrew S.; Van Steen, Maarten (2002). Distributed Systems: Principles and Paradigms. Prentice Hall. ISBN 0-13-088893-1. 
  • Kurki-Suonio, Reino (2005). A Practical Theory of Reactive Systems. Springer. ISBN 3-540-23342-3. 
  • Garg, Vijay K. (2002). Elements of Distributed Computing. Wiley-IEEE Press. ISBN 0-471-03600-5. 
  • Magee, Jeff; Kramer, Jeff (2006). Concurrency: State Models and Java Programming. Wiley. ISBN 0-470-09355-2.
