Foundations Of Cognitive Science

Context-Sensitive Grammar

Consider a grammar that has an alphabet of nonterminal symbols and an alphabet of terminal symbols.  Let this grammar have rules that conform to one basic pattern: a → b, where both a and b can be any combination of terminal and nonterminal symbols.  The restrictions on this rule are that a must contain at least one nonterminal symbol, and that b must be at least as long as a (rules never shrink an expression).  If these properties are true of a grammar, then the grammar is called context-sensitive (Parkes, 2002).  This is because in this grammar, on the left side of the rule, some symbols can flank others, providing those other symbols a context that determines how they may be rewritten by the right side of the rule (Chomsky, 1965).
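The classic textbook example of such a grammar generates the language aⁿbⁿcⁿ (equal numbers of as, bs, and cs, in that order), which no context-free grammar can generate.  The sketch below (a minimal illustration, not part of the sources cited here) encodes that grammar as rewrite rules and derives its sentences by exhaustive search; note the rules whose left sides contain more than one symbol, which is exactly the context-sensitivity described above.

```python
from collections import deque

# A context-sensitive grammar for {a^n b^n c^n : n >= 1}.  Uppercase
# letters are nonterminals, lowercase letters are terminals.  The rule
# "CB" -> "BC" rewrites the pair only when C and B occur in that order,
# and "aB" -> "ab" rewrites B only in the context of a preceding a.
RULES = [
    ("S", "aSBC"),
    ("S", "aBC"),
    ("CB", "BC"),
    ("aB", "ab"),
    ("bB", "bb"),
    ("bC", "bc"),
    ("cC", "cc"),
]

def derive(max_len):
    """Breadth-first search over sentential forms, collecting every
    all-terminal string of length <= max_len derivable from S."""
    seen, results = {"S"}, set()
    queue = deque(["S"])
    while queue:
        form = queue.popleft()
        for lhs, rhs in RULES:
            start = 0
            while (i := form.find(lhs, start)) != -1:
                new = form[:i] + rhs + form[i + len(lhs):]
                start = i + 1
                if len(new) > max_len or new in seen:
                    continue
                seen.add(new)
                if new.islower():          # all terminals: a derived sentence
                    results.add(new)
                else:
                    queue.append(new)
    return results

print(sorted(derive(9)))   # every sentence derivable within length 9
```

Running the sketch yields exactly abc, aabbcc, and aaabbbccc: every derived sentence has matched counts of the three terminals.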

Note that in a context-sensitive grammar it is possible for expressions to grow to the left, to grow to the right, or to grow from the middle by inserting internal “clauses”.  As well, because contextual symbols are permitted on the left side of a rule, a context-sensitive grammar is less restricted than a context-free grammar, and must therefore be accommodated by a more powerful computational device. While a context-free grammar is the domain of a pushdown automaton, a context-sensitive grammar falls into the domain of the linear bounded automaton (i.e., a device like a Turing machine, but with a tape whose length is bounded by the length of the input). It goes without saying that context-sensitive grammars are too complex (too unrestricted) to be accepted or generated by a finite state automaton.
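The bounded tape matters because rules in a context-sensitive grammar never shrink an expression: a derivation of a sentence w can only pass through sentential forms no longer than w, so the search for a derivation is finite.  The sketch below (an illustration of that finiteness, not an efficient parser) decides membership by brute force, again using the standard textbook grammar for aⁿbⁿcⁿ.

```python
# The same context-sensitive grammar for {a^n b^n c^n : n >= 1}.
RULES = [
    ("S", "aSBC"),
    ("S", "aBC"),
    ("CB", "BC"),
    ("aB", "ab"),
    ("bB", "bb"),
    ("bC", "bc"),
    ("cC", "cc"),
]

def accepts(rules, start, w):
    """Decide whether the grammar derives w.  Because no rule shortens a
    sentential form, every form on a derivation of w has length <= len(w);
    the search space is therefore finite -- the essential fact a linear
    bounded automaton exploits."""
    seen = {start}
    frontier = [start]
    while frontier:
        form = frontier.pop()
        if form == w:
            return True
        for lhs, rhs in rules:
            i = form.find(lhs)
            while i != -1:
                new = form[:i] + rhs + form[i + len(lhs):]
                if len(new) <= len(w) and new not in seen:
                    seen.add(new)
                    frontier.append(new)
                i = form.find(lhs, i + 1)
    return False

print(accepts(RULES, "S", "aabbcc"))   # a sentence of the language
print(accepts(RULES, "S", "aabbc"))    # unbalanced counts: not derivable
```

A single stack cannot check three matched counts, which is why a pushdown automaton fails here, but memory bounded by the input length suffices.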


  1. Chomsky, N. (1965). Aspects Of The Theory Of Syntax. Cambridge, MA: MIT Press.
  2. Parkes, A. (2002). Introduction to Languages, Machines and Logic: Computable Languages, Abstract Machines and Formal Logic. London; New York: Springer.

(Added September 2010)