Nested word

In computer science, more specifically in automata and formal language theory, nested words are a concept proposed by Alur and Madhusudan as a joint generalization of words, as traditionally used for modelling linearly ordered structures, and of ordered unranked trees, as traditionally used for modelling hierarchical structures. Finite-state acceptors for nested words, so-called nested word automata, then give a more expressive generalization of finite automata on words. The linear encodings of languages accepted by finite nested word automata give the class of visibly pushdown languages. The latter language class lies properly between the regular languages and the deterministic context-free languages. Since their introduction in 2004, these concepts have triggered much research in that area.[1]

Formal definition

To define nested words, first define matching relations. For a nonnegative integer [math]\displaystyle{ \ell }[/math], the notation [math]\displaystyle{ [\ell] }[/math] denotes the set [math]\displaystyle{ \{1,2,\ldots,\ell-1,\ell\} }[/math], with the special case [math]\displaystyle{ [0]=\emptyset }[/math].

A matching relation ↝ of length [math]\displaystyle{ \ell\ge 0 }[/math] is a subset of [math]\displaystyle{ \{-\infty, 1,2,\ldots,\ell-1,\ell\}\times\{1,2,\ldots,\ell-1,\ell,\infty\} }[/math] such that:

  1. all nesting edges are forward, that is, if i ↝ j then i < j;
  2. nesting edges never have a finite position in common, that is, for −∞ < i < ∞, there is at most one position h such that h ↝ i, and there is at most one position j such that i ↝ j; and
  3. nesting edges never cross, that is, there are no i < i′ ≤ j < j′ such that both i ↝ j and i′ ↝ j′.

A position i is referred to as

  • a call position, if i ↝ j for some j,
  • a pending call if i ↝ ∞,
  • a return position, if h ↝ i for some h,
  • a pending return if −∞ ↝ i, and
  • an internal position in all remaining cases.

A nested word of length [math]\displaystyle{ \ell }[/math] over an alphabet Σ is a pair (w,↝), where w is a word, or string, of length [math]\displaystyle{ \ell }[/math] over Σ and ↝ is a matching relation of length [math]\displaystyle{ \ell }[/math].
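
As an illustration of these conditions, the following Python sketch (not part of the formal definition; the endpoints −∞ and ∞ are represented by floating-point infinities, and all names are chosen for this sketch) checks whether a given relation is a matching relation:

  from itertools import combinations

  NEG_INF, POS_INF = float("-inf"), float("inf")

  def is_matching_relation(edges, length):
      """Check the three conditions above for a relation given as a set of pairs (i, j),
      where i is a position in 1..length or NEG_INF and j is a position or POS_INF."""
      finite = [p for e in edges for p in e if p not in (NEG_INF, POS_INF)]
      if any(not (1 <= p <= length) for p in finite):
          return False                                   # endpoint outside the allowed domain
      # 1. all nesting edges are forward
      if any(not (i < j) for i, j in edges):
          return False
      # 2. no finite position is the source of two edges or the target of two edges
      sources = [i for i, j in edges if i != NEG_INF]
      targets = [j for i, j in edges if j != POS_INF]
      if len(sources) != len(set(sources)) or len(targets) != len(set(targets)):
          return False
      # 3. nesting edges never cross: no i < i' <= j < j' with i ~ j and i' ~ j'
      for (i, j), (i2, j2) in combinations(edges, 2):
          if i < i2 <= j < j2 or i2 < i <= j2 < j:
              return False
      return True

  # the matching relation of the nested word used in the example further below
  edges = {(NEG_INF, 1), (2, POS_INF), (3, 4), (5, 7), (8, POS_INF)}
  assert is_matching_relation(edges, 9)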

Encoding nested words into ordinary words

Nested words over the alphabet [math]\displaystyle{ \Sigma=\{a_1,a_2,\ldots,a_n\} }[/math] can be encoded into "ordinary" words over the tagged alphabet [math]\displaystyle{ \hat{\Sigma} }[/math], in which each symbol a from Σ has three tagged counterparts: the symbol ⟨a for encoding a call position in a nested word labelled with a, the symbol a⟩ for encoding a return position labelled with a, and finally the symbol a itself for representing an internal position labelled with a. More precisely, let φ be the function mapping nested words over Σ to words over [math]\displaystyle{ \hat{\Sigma} }[/math] such that each nested word ([math]\displaystyle{ w_1w_2\cdots w_\ell }[/math],↝) is mapped to the word [math]\displaystyle{ x_1x_2...x_\ell }[/math], where the letter [math]\displaystyle{ x_i }[/math] equals ⟨a, a, and a⟩, if [math]\displaystyle{ w_i=a }[/math] and i is a (possibly pending) call position, an internal position, and a (possibly pending) return position, respectively.

Example

For illustration, let n = (w,↝) be the nested word over the ternary alphabet {a,b,c} with w=abaabccca and matching relation ↝ = {(−∞,1),(2,∞),(3,4),(5,7),(8,∞)}. Then its encoding as a word reads φ(n) = a⟩⟨b⟨aa⟩⟨bcc⟩⟨ca.
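
A small Python sketch of the encoding (an illustration only; the tagged symbols ⟨a and a⟩ are written in ASCII as <a and a>, and the function name is chosen for this sketch):

  NEG_INF, POS_INF = float("-inf"), float("inf")

  def encode(w, edges):
      """Map a nested word (w, ~) to its linear encoding over the tagged alphabet."""
      calls = {i for i, j in edges if i != NEG_INF}     # (possibly pending) call positions
      returns = {j for i, j in edges if j != POS_INF}   # (possibly pending) return positions
      out = []
      for pos, a in enumerate(w, start=1):
          if pos in calls:
              out.append("<" + a)                       # call position: <a
          elif pos in returns:
              out.append(a + ">")                       # return position: a>
          else:
              out.append(a)                             # internal position: a
      return "".join(out)

  edges = {(NEG_INF, 1), (2, POS_INF), (3, 4), (5, 7), (8, POS_INF)}
  print(encode("abaabccca", edges))                     # prints a><b<aa><bcc><ca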

Automata

Nested word automaton

A nested word automaton has a finite number of states, and operates in almost the same way as a deterministic finite automaton on classical strings: a classical finite automaton reads the input word [math]\displaystyle{ w = w_1\cdots w_\ell }[/math] from left to right, and the state of the automaton after reading the jth letter [math]\displaystyle{ w_j }[/math] depends on the state in which the automaton was before reading [math]\displaystyle{ w_j }[/math].

In a nested word automaton, the position [math]\displaystyle{ j }[/math] in the nested word (w,↝) might be a return position; if so, the state after reading [math]\displaystyle{ w_j }[/math] will not only depend on the linear state in which the automaton was before reading [math]\displaystyle{ w_j }[/math], but also on a hierarchical state propagated by the automaton at the time it was in the corresponding call position. In analogy to regular languages of words, a set L of nested words is called regular if it is accepted by some (finite-state) nested word automaton.
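
A minimal Python sketch of how such an automaton processes its input (assuming, for simplicity, a deterministic automaton and a nested word without pending calls or returns; the transition-function names are chosen for this sketch):

  def run_nwa(w, edges, q0, delta_call, delta_int, delta_ret, accepting):
      """Run a deterministic nested word automaton on the nested word (w, ~)."""
      match = dict(edges)                  # maps each call position to its matching return position
      pending = {}                         # return position -> hierarchical state sent along the edge
      q = q0                               # linear state
      for pos, a in enumerate(w, start=1):
          if pos in match:                 # call position: produce a linear and a hierarchical state
              q, p = delta_call(q, a)
              pending[match[pos]] = p      # p is propagated to the matching return position
          elif pos in pending:             # return position: consume the propagated hierarchical state
              q = delta_ret(q, pending.pop(pos), a)
          else:                            # internal position
              q = delta_int(q, a)
      return q in accepting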

Visibly pushdown automaton

Nested word automata are an automaton model accepting nested words. There is an equivalent automaton model operating on (ordinary) words. Namely, the notion of a deterministic visibly pushdown automaton is a restriction of the notion of a deterministic pushdown automaton.

Following Alur and Madhusudan,[2] a deterministic visibly pushdown automaton is formally defined as a 6-tuple [math]\displaystyle{ M=(Q, \hat{\Sigma}, \Gamma, \delta, q_0, F) }[/math] where

  • [math]\displaystyle{ Q }[/math] is a finite set of states,
  • [math]\displaystyle{ \hat{\Sigma} }[/math] is the input alphabet, which – in contrast to that of ordinary pushdown automata – is partitioned into three sets [math]\displaystyle{ \Sigma_\text{c} }[/math], [math]\displaystyle{ \Sigma_\text{r} }[/math], and [math]\displaystyle{ \Sigma_\text{int} }[/math]. The alphabet [math]\displaystyle{ \Sigma_\text{c} }[/math] denotes the set of call symbols, [math]\displaystyle{ \Sigma_\text{r} }[/math] contains the return symbols, and the set [math]\displaystyle{ \Sigma_\text{int} }[/math] contains the internal symbols,
  • [math]\displaystyle{ \Gamma }[/math] is a finite set which is called the stack alphabet, containing a special symbol [math]\displaystyle{ \bot\in\Gamma }[/math] denoting the empty stack,
  • [math]\displaystyle{ \delta = \delta_\text{c} \cup \delta_\text{r} \cup \delta_\text{int} }[/math] is the transition function, which is partitioned into three parts corresponding to call transitions, return transitions, and internal transitions, namely
    • [math]\displaystyle{ \delta_\text{c}\colon Q \times \Sigma_\text{c} \to Q \times \Gamma }[/math], the call transition function
    • [math]\displaystyle{ \delta_\text{r}\colon Q \times \Sigma_\text{r} \times \Gamma \to Q }[/math], the return transition function
    • [math]\displaystyle{ \delta_\text{int}\colon Q \times \Sigma_\text{int} \to Q }[/math], the internal transition function,
  • [math]\displaystyle{ q_0\in\, Q }[/math] is the initial state, and
  • [math]\displaystyle{ F \subseteq Q }[/math] is the set of accepting states.

The notion of computation of a visibly pushdown automaton is a restriction of the one used for pushdown automata: a visibly pushdown automaton pushes a symbol onto the stack only when reading a call symbol [math]\displaystyle{ a_\text{c}\in \Sigma_\text{c} }[/math], it pops the top element from the stack only when reading a return symbol [math]\displaystyle{ a_\text{r}\in\Sigma_\text{r} }[/math], and it does not alter the stack when reading an internal symbol [math]\displaystyle{ a_\text{int}\in\Sigma_\text{int} }[/math]. A computation ending in an accepting state is an accepting computation.
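
The following Python sketch (an illustration; the input is given as a sequence of pairs (kind, letter) mirroring the partition of the tagged alphabet, and all names are chosen for this sketch) simulates such a computation:

  def run_vpa(word, delta_call, delta_ret, delta_int, q0, bottom, accepting):
      """Simulate a deterministic visibly pushdown automaton on a tagged word."""
      stack = [bottom]                     # the bottom-of-stack symbol is never removed
      q = q0
      for kind, a in word:
          if kind == "call":               # push exactly one stack symbol
              q, gamma = delta_call(q, a)
              stack.append(gamma)
          elif kind == "ret":              # pop exactly one stack symbol, or read the bottom symbol
              gamma = stack.pop() if len(stack) > 1 else bottom
              q = delta_ret(q, a, gamma)
          else:                            # internal symbol: the stack is untouched
              q = delta_int(q, a)
      return q in accepting

  # Example: accept tagged words in which every return is matched by an earlier call
  # carrying the same letter ("fail" acts as a rejecting sink state).
  delta_call = lambda q, a: (q, a)                       # remember the call letter on the stack
  delta_ret  = lambda q, a, g: q if g == a else "fail"
  delta_int  = lambda q, a: q
  print(run_vpa([("call", "a"), ("int", "b"), ("ret", "a")],
                delta_call, delta_ret, delta_int, "ok", "$", {"ok"}))   # True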

As a result, a visibly pushdown automaton cannot push onto and pop from the stack on the same input symbol. Thus the language [math]\displaystyle{ L=\{a^nba^n \mid n\in\mathbb{N} \} }[/math] cannot be accepted by a visibly pushdown automaton for any partition of [math]\displaystyle{ \Sigma }[/math]; there are, however, pushdown automata accepting this language.

If a language [math]\displaystyle{ L }[/math] over a tagged alphabet [math]\displaystyle{ \hat{\Sigma} }[/math] is accepted by a deterministic visibly pushdown automaton, then [math]\displaystyle{ L }[/math] is called a visibly pushdown language.

Nondeterministic visibly pushdown automata

Nondeterministic visibly pushdown automata are as expressive as deterministic ones. Hence one can transform a nondeterministic visibly pushdown automaton into a deterministic one, but if the nondeterministic automaton has [math]\displaystyle{ s }[/math] states, the deterministic one may need up to [math]\displaystyle{ 2^{s^2} }[/math] states.[3]

Decision problems

Let [math]\displaystyle{ |A| }[/math] be the size of the description of an automaton [math]\displaystyle{ A }[/math]. Then it is possible to check whether a nested word n of length [math]\displaystyle{ \ell }[/math] is accepted by the automaton in time [math]\displaystyle{ O(|A|^3\ell) }[/math]. In particular, the emptiness problem is solvable in time [math]\displaystyle{ O(|A|^3) }[/math]. If [math]\displaystyle{ A }[/math] is fixed, membership is decidable in time [math]\displaystyle{ O(\ell) }[/math] and, in a streaming setting, in space [math]\displaystyle{ O(d) }[/math], where [math]\displaystyle{ d }[/math] is the nesting depth of n. It is also decidable with space [math]\displaystyle{ O(\log(\ell)) }[/math] and time [math]\displaystyle{ O(\ell^2 \log (\ell)) }[/math], and by a uniform boolean circuit of depth [math]\displaystyle{ O(\log \ell) }[/math].[2]

For two nondeterministic automata A and B, deciding whether the set of words accepted by A is a subset of the set of words accepted by B is EXPTIME-complete. It is also EXPTIME-complete to decide whether there is a word that is not accepted by a given nondeterministic automaton.[2]

Languages

As the definition of visibly pushdown automata shows, deterministic visibly pushdown automata can be seen as a special case of deterministic pushdown automata; thus the set VPL of visibly pushdown languages over [math]\displaystyle{ \,\hat{\Sigma} }[/math] forms a subset of the set DCFL of deterministic context-free languages over the set of symbols in [math]\displaystyle{ \,\hat{\Sigma} }[/math]. In particular, the function that removes the matching relation from nested words transforms regular languages over nested words into context-free languages.

Closure properties

The set of visibly pushdown languages is closed under the following set operations, thus giving rise to a boolean algebra:[2][3]

  • union
  • intersection
  • complement

For the intersection operation, one can construct a VPA M simulating two given VPAs [math]\displaystyle{ M_1 }[/math] and [math]\displaystyle{ M_2 }[/math] by a simple product construction (Alur Madhusudan): For [math]\displaystyle{ i=1,2 }[/math], assume [math]\displaystyle{ M_i }[/math] is given as [math]\displaystyle{ (Q_i,\ \hat{\Sigma},\ \Gamma_i,\ \delta_i, \ s_{i},\ Z_i, \ F_i) }[/math]. Then for the automaton M, the set of states is [math]\displaystyle{ \, Q_1\times Q_2 }[/math], the initial state is [math]\displaystyle{ \left(s_{1}, s_2\right) }[/math], the set of final states is [math]\displaystyle{ F_1 \times F_2 }[/math], the stack alphabet is given by [math]\displaystyle{ \,\Gamma_1\times\Gamma_2 }[/math], and the initial stack symbol is [math]\displaystyle{ (Z_1,Z_2) }[/math].

If [math]\displaystyle{ M }[/math] is in state [math]\displaystyle{ (p_1,p_2) }[/math] on reading a call symbol [math]\displaystyle{ \left\langle a\right. }[/math], then [math]\displaystyle{ M }[/math] pushes the stack symbol [math]\displaystyle{ (\gamma_1,\gamma_2) }[/math] and goes to state [math]\displaystyle{ (q_1,q_2) }[/math], where [math]\displaystyle{ \gamma_i }[/math] is the stack symbol pushed by [math]\displaystyle{ M_i }[/math] when transitioning from state [math]\displaystyle{ p_i }[/math] to [math]\displaystyle{ q_i }[/math] on reading input [math]\displaystyle{ \left\langle a\right. }[/math].

If [math]\displaystyle{ M }[/math] is in state [math]\displaystyle{ (p_1,p_2) }[/math] on reading an internal symbol [math]\displaystyle{ a }[/math], then [math]\displaystyle{ M }[/math] goes to state [math]\displaystyle{ (q_1,q_2) }[/math], whenever [math]\displaystyle{ M_i }[/math] transitions from state [math]\displaystyle{ p_i }[/math] to [math]\displaystyle{ q_i }[/math] on reading [math]\displaystyle{ a }[/math].

If [math]\displaystyle{ M }[/math] is in state [math]\displaystyle{ (p_1,p_2) }[/math] on reading a return symbol [math]\displaystyle{ \left. a\right\rangle }[/math], then [math]\displaystyle{ M }[/math] pops the symbol [math]\displaystyle{ (\gamma_1,\gamma_2) }[/math] from the stack and goes to state [math]\displaystyle{ (q_1,q_2) }[/math], where [math]\displaystyle{ \gamma_i }[/math] is the stack symbol popped by [math]\displaystyle{ M_i }[/math] when transitioning from state [math]\displaystyle{ p_i }[/math] to [math]\displaystyle{ q_i }[/math] on reading [math]\displaystyle{ \left. a\right\rangle }[/math].

Correctness of the above construction crucially relies on the fact that the push and pop actions of the simulated machines [math]\displaystyle{ M_1 }[/math] and [math]\displaystyle{ M_2 }[/math] are synchronized along the input symbols read. In fact, a similar simulation is not possible for general deterministic pushdown automata, as the larger class of deterministic context-free languages is not closed under intersection.
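
A Python sketch of this product construction in terms of the transition functions (names chosen for the sketch; the result can be run with the run_vpa sketch above by pairing the initial states and bottom symbols and by taking [math]\displaystyle{ F_1 \times F_2 }[/math] as the accepting set):

  def product_vpa(trans1, trans2):
      """Combine the transition triples (delta_call, delta_ret, delta_int) of two
      deterministic VPAs over the same partitioned alphabet into their product."""
      (c1, r1, i1), (c2, r2, i2) = trans1, trans2

      def delta_call(state, a):
          p1, p2 = state
          q1, g1 = c1(p1, a)
          q2, g2 = c2(p2, a)
          return (q1, q2), (g1, g2)        # push a pair of stack symbols

      def delta_ret(state, a, gamma):
          (p1, p2), (g1, g2) = state, gamma
          return (r1(p1, a, g1), r2(p2, a, g2))

      def delta_int(state, a):
          p1, p2 = state
          return (i1(p1, a), i2(p2, a))

      return delta_call, delta_ret, delta_int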

In contrast to the construction for intersection shown above, the complementation construction for visibly pushdown automata parallels the standard construction[4] for deterministic pushdown automata.

Moreover, like the class of context-free languages, the class of visibly pushdown languages is closed under prefix closure and under reversal, and hence also under suffix closure.

Relation to other language classes

(Alur Madhusudan) point out that the visibly pushdown languages are more general than the parenthesis languages suggested in (McNaughton 1967). As shown by (Crespi Reghizzi Mandrioli), the visibly pushdown languages in turn are strictly contained in the class of languages described by operator-precedence grammars, which were introduced by (Floyd 1963) and enjoy the same closure properties and characteristics (see (Lonati Mandrioli) for ω-languages and for logic- and automata-based characterizations). In comparison to conjunctive grammars, a generalization of context-free grammars, (Okhotin 2011) shows that the linear conjunctive languages form a superclass of the visibly pushdown languages. The table at the end of this article puts the family of visibly pushdown languages in relation to other language families in the Chomsky hierarchy. Rajeev Alur and Parthasarathy Madhusudan[5][6] related a subclass of regular binary tree languages to visibly pushdown languages.

Other models of description

Visibly pushdown grammars

Visibly pushdown languages are exactly the languages that can be described by visibly pushdown grammars.[2]

Visibly pushdown grammars can be defined as a restriction of context-free grammars. A visibly pushdown grammar G is defined by the 4-tuple:

[math]\displaystyle{ G = (V=V^0\cup V^1\,, \Sigma\,, R\,, S\,) }[/math] where

  • [math]\displaystyle{ V^0\, }[/math] and [math]\displaystyle{ V^1\, }[/math] are disjoint finite sets; each element [math]\displaystyle{ v\in V }[/math] is called a non-terminal character or a variable. Each variable represents a different type of phrase or clause in the sentence. Each variable defines a sub-language of the language defined by [math]\displaystyle{ G\, }[/math], and the sub-languages defined by the variables in [math]\displaystyle{ V^0\, }[/math] are those without pending calls or pending returns.
  • [math]\displaystyle{ \Sigma\, }[/math] is a finite set of terminals, disjoint from [math]\displaystyle{ V\, }[/math], which make up the actual content of the sentence. The set of terminals is the alphabet of the language defined by the grammar [math]\displaystyle{ G\, }[/math].
  • [math]\displaystyle{ R\, }[/math] is a finite relation from [math]\displaystyle{ V\, }[/math] to [math]\displaystyle{ (V\cup\Sigma)^{*} }[/math] such that [math]\displaystyle{ \exist\, w\in (V\cup\Sigma)^{*}: (S,w)\in R }[/math]. The members of [math]\displaystyle{ R\, }[/math] are called the (rewrite) rules or productions of the grammar. There are three kinds of rewrite rules. For [math]\displaystyle{ X,Y\in V ,Z\in V^0 }[/math], [math]\displaystyle{ a\in \hat\Sigma }[/math] and [math]\displaystyle{ b\in \hat\Sigma }[/math]
    • [math]\displaystyle{ X\to \epsilon }[/math]
    • [math]\displaystyle{ X\to aY }[/math] and if [math]\displaystyle{ X\in V^0 }[/math] then [math]\displaystyle{ Y\in V^0 }[/math] and [math]\displaystyle{ a\in \Sigma }[/math]
    • [math]\displaystyle{ X\to \langle aZb\rangle Y }[/math] and if [math]\displaystyle{ X\in V^0 }[/math] then [math]\displaystyle{ Y\in V^0 }[/math]
  • [math]\displaystyle{ S\in V\, }[/math] is the start variable (or start symbol), used to represent the whole sentence (or program).

Here, the asterisk represents the Kleene star operation and [math]\displaystyle{ \epsilon }[/math] is the empty word.
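
For example, taking [math]\displaystyle{ \Sigma=\{a\} }[/math] and a single variable [math]\displaystyle{ S\in V^0 }[/math], the rules [math]\displaystyle{ S\to \epsilon }[/math] and [math]\displaystyle{ S\to \langle aSa\rangle S }[/math] form a visibly pushdown grammar generating exactly the well-matched words over the call symbol ⟨a and the return symbol a⟩.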

Uniform Boolean circuits

The problem whether a word of length [math]\displaystyle{ \ell }[/math] is accepted by a given nested word automaton can be solved by uniform boolean circuits of depth [math]\displaystyle{ O(\log\ell) }[/math].[2]

Logical description

Regular languages over nested words are exactly the set of languages described by monadic second-order logic with two unary predicates call and return, linear successor and the matching relation ↝.[2]
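
For instance, in this signature the property that every call position has a matching return within the word (i.e., that there are no pending calls) can be expressed by a formula such as [math]\displaystyle{ \forall x\,(\text{call}(x)\rightarrow\exists y\,(x \leadsto y)) }[/math].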

Notes

  1. Google Scholar search results for "nested words" OR "visibly pushdown"
  2. (Alur Madhusudan)
  3. (Alur Madhusudan)
  4. (Hopcroft Ullman).
  5. Alur, R.; Madhusudan, P. (2004). "Visibly pushdown languages". Proceedings of the thirty-sixth annual ACM symposium on Theory of computing - STOC '04. pp. 202–211. doi:10.1145/1007352.1007390. ISBN 978-1581138528. http://www.cis.upenn.edu/~alur/Stoc04.pdf. Sect. 4, Theorem 5.
  6. Alur, R.; Madhusudan, P. (2009). "Adding nesting structure to words". Journal of the ACM 56 (3): 1–43. doi:10.1145/1516512.1516518. http://www.cis.upenn.edu/~alur/Jacm09.pdf. Sect. 7.

References

  • Floyd, R. W. (July 1963). "Syntactic Analysis and Operator Precedence". Journal of the ACM 10 (3): 316–333. doi:10.1145/321172.321179. 
  • McNaughton, R. (1967). "Parenthesis Grammars". Journal of the ACM 14 (3): 490–500. doi:10.1145/321406.321411. 
  • Alur, R.; Arenas, M.; Barcelo, P.; Etessami, K.; Immerman, N.; Libkin, L. (2008). Grädel, Erich. ed. "First-Order and Temporal Logics for Nested Words". Logical Methods in Computer Science 4 (4). doi:10.2168/LMCS-4(4:11)2008. 
  • Crespi Reghizzi, Stefano; Mandrioli, Dino (2012). "Operator precedence and the visibly pushdown property". Journal of Computer and System Sciences 78 (6): 1837–1867. doi:10.1016/j.jcss.2011.12.006. 
  • Lonati, Violetta; Mandrioli, Dino; Panella, Federica; Pradella, Matteo (2015). "Operator Precedence Languages: Their Automata-Theoretic and Logic Characterization". SIAM Journal on Computing 44 (4): 1026–1088. doi:10.1137/140978818. 
  • Okhotin, Alexander (2011). "Comparing linear conjunctive languages to subfamilies of the context-free languages". 37th International Conference on Current Trends in Theory and Practice of Computer Science (SOFSEM 2011). 
  • Hopcroft, John E.; Ullman, Jeffrey D. (1979). Introduction to Automata Theory, Languages, and Computation. Addison-Wesley. ISBN 978-0-201-02988-8. 
