Tree-adjoining grammar
From Wikipedia, the free encyclopedia
Tree-adjoining grammar (TAG) is a grammar formalism defined by Aravind Joshi. Tree-adjoining grammars are somewhat similar to context-free grammars, but the elementary unit of rewriting is the tree rather than the symbol. Whereas context-free grammars have rules for rewriting symbols as strings of other symbols, tree-adjoining grammars have rules for rewriting the nodes of trees as other trees (see tree (graph theory) and tree data structure).
History
TAG originated in investigations by Joshi and his students into the family of adjunction grammars (AG),[1] the "string grammar" of Zellig Harris. AGs handle endocentric properties of language in a natural and effective way, but do not have a good characterization of exocentric constructions; the converse is true of rewrite grammars, or phrase-structure grammar (PSG). In 1969, Joshi introduced a family of grammars that exploits this complementarity by mixing the two types of rules. A few very simple rewrite rules suffice to generate the vocabulary of strings for adjunction rules. This family is distinct from the Chomsky hierarchy but intersects it in interesting and linguistically relevant ways.[2]
Description
The rules in a TAG are trees with a special leaf node known as the foot node, which is anchored to a word. There are two types of basic trees in TAG: initial trees (often represented as 'α') and auxiliary trees ('β'). Initial trees represent basic valency relations, while auxiliary trees allow for recursion.[3] Auxiliary trees have the root (top) node and foot node labeled with the same symbol. A derivation starts with an initial tree, combining via either substitution or adjunction. Substitution replaces a frontier node with another tree whose top node has the same label. Adjunction inserts an auxiliary tree into the center of another tree.[4] The root and foot label of the auxiliary tree must match the label of the node at which it adjoins.
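The adjunction operation described above can be sketched in a few lines of code. The tuple encoding of trees, the `adjoin` and `leaves` helpers, and the example trees are all illustrative assumptions for this sketch, not part of any standard TAG toolkit.

```python
# A minimal sketch of TAG adjunction. Trees are (label, children) tuples;
# leaves are plain strings, and "*" marks the foot node of an auxiliary tree.
# All names here are hypothetical, chosen for illustration only.

def adjoin(tree, aux, path):
    """Adjoin auxiliary tree `aux` at the node of `tree` reached by `path`.

    The subtree rooted at the adjunction site is detached and re-attached
    at the foot node of `aux`, whose root label must match the site label.
    """
    label, children = tree
    if not path:  # reached the adjunction site
        assert aux[0] == label, "root/foot label must match the adjunction site"
        return _plug_foot(aux, (label, children))
    i, *rest = path
    new_children = list(children)
    new_children[i] = adjoin(children[i], aux, rest)
    return (label, new_children)

def _plug_foot(aux, subtree):
    """Replace the foot node "*" inside `aux` with `subtree`."""
    label, children = aux
    new_children = []
    for c in children:
        if c == "*":
            new_children.append(subtree)
        elif isinstance(c, tuple):
            new_children.append(_plug_foot(c, subtree))
        else:
            new_children.append(c)
    return (label, new_children)

def leaves(tree):
    """Collect the string yield of a tree, left to right."""
    if isinstance(tree, str):
        return [tree]
    return [w for child in tree[1] for w in leaves(child)]

# Initial tree (alpha) for "John sleeps": S(NP(John) VP(sleeps))
alpha = ("S", [("NP", ["John"]), ("VP", ["sleeps"])])
# Auxiliary tree (beta) for the adverb "soundly": VP(VP* soundly)
beta = ("VP", ["*", ("ADV", ["soundly"])])

# Adjoin beta at the VP node (child 1 of S).
result = adjoin(alpha, beta, [1])
print(" ".join(leaves(result)))  # John sleeps soundly
```

The auxiliary tree wraps around the original VP subtree: the old VP becomes the foot of the new one, which is what lets adjunction model unbounded recursion (e.g. stacking adverbs) without changing the rest of the derivation.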
Other variants of TAG allow multi-component trees, trees with multiple foot nodes, and other extensions.
Complexity and application
Tree-adjoining grammars are often described as mildly context-sensitive, meaning that they possess certain properties that make them more powerful (in terms of weak generative capacity) than context-free grammars, but less powerful than indexed or context-sensitive grammars. Mildly context-sensitive grammars are conjectured to be powerful enough to model natural languages while remaining efficiently parseable in the general case.[5]
Because of this limited formal power, tree-adjoining grammars are often used in computational linguistics and natural language processing.
A TAG can describe the language of squares (in which some arbitrary string is repeated), i.e. { ww | w a string }, and the language { a^n b^n c^n d^n }. This type of processing can be represented by an embedded pushdown automaton.
Languages with cubes (i.e., triplicated strings such as www) or with more than four distinct character strings of equal length cannot be generated by tree-adjoining grammars.
For these reasons, languages generated by tree-adjoining grammars are referred to as mildly context-sensitive languages.
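As a quick illustration of which string patterns fall inside this boundary, the toy checkers below test membership in the two TAG-generable languages just mentioned. The helper names are hypothetical and for illustration only; they check string shape directly rather than parsing with an actual TAG.

```python
# Membership checks for the two TAG-generable languages discussed above.
# Illustrative sketch only; function names are not from any parsing library.

def is_square(s):
    """True iff s = ww for some string w (the copy language)."""
    half, rem = divmod(len(s), 2)
    return rem == 0 and s[:half] == s[half:]

def is_count_four(s):
    """True iff s = a^n b^n c^n d^n for some n >= 0."""
    n, rem = divmod(len(s), 4)
    return rem == 0 and s == "a" * n + "b" * n + "c" * n + "d" * n

print(is_square("abcabc"))        # True: "abc" repeated
print(is_count_four("aabbccdd"))  # True: n = 2
print(is_count_four("aabbcc"))    # False: only three equal blocks
```

A TAG can generate both of these languages, but not the cube analogue { www } or { a^n b^n c^n d^n e^n }, which marks the boundary of its mildly context-sensitive power.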
References
- ^ Joshi, Aravind; S. R. Kosaraju; H. Yamada. "String Adjunct Grammars". Proceedings of the Tenth Annual Symposium on Automata Theory, Waterloo, Canada.
- ^ Joshi, Aravind. "Properties of Formal Grammars with Mixed Types of Rules and Their Linguistic Relevance". Proceedings of the Third International Symposium on Computational Linguistics, Stockholm, Sweden.
- ^ Jurafsky, Daniel; James H. Martin (2000). Speech and Language Processing. Upper Saddle River, NJ: Prentice Hall, p. 354.
- ^ Joshi, Aravind; Owen Rambow (2003). "A Formalism for Dependency Grammar Based on Tree Adjoining Grammar". Proceedings of the Conference on Meaning-Text Theory.
- ^ Joshi, Aravind (1985). "How much context-sensitivity is necessary for characterizing structural descriptions", in D. Dowty, L. Karttunen, and A. Zwicky, (eds.): Natural Language Processing: Theoretical, Computational, and Psychological Perspectives. New York, NY: Cambridge University Press, 206–250.
External links
- The XTAG project, which uses a TAG for natural language processing.
- A tutorial on TAG
- Another tutorial, with a focus on comparison with Lexical Functional Grammar and on grammar extraction from treebanks
- SemConst Documentation: a quick survey of syntax and semantics interface issues within the TAG framework.
- The TuLiPa project: the Tübingen Linguistic Parsing Architecture (TuLiPA) is a multi-formalism syntactic (and semantic) parsing environment, designed mainly for Multi-Component Tree Adjoining Grammars with Tree Tuples.
- The Metagrammar Toolkit, which provides several tools to edit and compile metagrammars into TAGs. It also includes wide-coverage French metagrammars.
- LLP2, a lexicalized tree-adjoining grammar parser that provides an easy-to-use graphical environment (page in French).
Chomsky hierarchy

| | Grammars | Languages | Minimal automaton |
|---|---|---|---|
| Type-0 | Unrestricted | Recursively enumerable | Turing machine |
| n/a | (no common name) | Recursive | Decider |
| Type-1 | Context-sensitive | Context-sensitive | Linear-bounded |
| n/a | Indexed | Indexed | Nested stack |
| n/a | Tree-adjoining etc. | (Mildly context-sensitive) | Embedded pushdown |
| Type-2 | Context-free | Context-free | Nondeterministic pushdown |
| n/a | Deterministic context-free | Deterministic context-free | Deterministic pushdown |
| Type-3 | Regular | Regular | Finite |
| n/a | | Star-free | Counter-free |

Each category of languages or grammars is a proper subset of the category directly above it, and any automaton in each category has an equivalent automaton in the category directly above it.

