Chapter 5: Context-free Languages

Computation Theory

Context-free Grammars (CFG): Definition

A grammar G = (V, T, S, P) is said to be context-free if all production rules in P have the form

A → x, where A ∈ V and x ∈ (V ∪ T)*

A language L is said to be context-free iff there is a context-free grammar G such that L = L(G).

Context-free Grammars (CFG)

Context-free means that there is a single variable on the left side of each grammar rule.

Example of a rule where this condition does not hold:

1Z1 → 101

• The variable Z goes to 0 only in the context of a 1 on its left and a 1 on its right. This is a context-sensitive rule.

Non-regular languages

There are non-regular languages that can be generated by CFGs.

The grammar G = ({S}, {a, b}, S, P), with production rules

S → aSa | bSb | λ

is context-free.

• This grammar is linear (at most a single variable on the RHS), but it is neither right-linear nor left-linear, so it is not regular.

• Example: The language {aⁿbⁿ : n ≥ 0} is not regular, but is generated by the grammar S → aSb | λ.
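As an illustration (our own sketch, not from the text), the grammar S → aSb | λ can be simulated directly: apply S → aSb n times, then close with S → λ, and every sentential form along the way is recorded. The function name `derive` is an assumption of this sketch.

```python
# Hypothetical sketch: simulate the grammar S -> aSb | lambda.
# Applying S -> aSb n times and then S -> lambda yields a^n b^n.
def derive(n):
    """Return every sentential form along the derivation of a^n b^n."""
    form = "S"
    steps = [form]
    for _ in range(n):
        form = form.replace("S", "aSb", 1)   # apply S -> aSb once
        steps.append(form)
    steps.append(form.replace("S", "", 1))   # finish with S -> lambda
    return steps

print(derive(2))  # ['S', 'aSb', 'aaSbb', 'aabb']
```

Each step wraps the single variable S in one a and one b, which is why the count of a's and b's always matches.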

Example of a CFL: Palindromes

Palindromes are strings that are spelled the same way backwards and forwards. The language of palindromes, PAL, is not regular.

Given the grammar G = ({S}, {a, b}, S, P), with production rules S → aSa | bSb | λ, a typical derivation in this grammar might be:

S ⇒ aSa ⇒ aaSaa ⇒ aabSbaa ⇒ aabbaa

The language generated by this grammar is:

L(G) = {wwᴿ : w ∈ {a, b}*}
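The derivation above can be replayed mechanically (a sketch with our own naming): each symbol in `choices` selects which rule, aSa or bSb, to apply next, and S → λ finishes the derivation.

```python
# Sketch (names are ours): follow S -> aSa or S -> bSb for each symbol
# in `choices`, then finish with S -> lambda.
def derive_palindrome(choices):
    form = "S"
    for c in choices:                      # c is 'a' or 'b'
        form = form.replace("S", c + "S" + c, 1)
    return form.replace("S", "", 1)        # S -> lambda

w = derive_palindrome("aab")   # S => aSa => aaSaa => aabSbaa => aabbaa
print(w)                       # 'aabbaa'
assert w == w[::-1]            # every generated string is a palindrome
```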

Regular vs. context-free

Are regular languages context-free?

• Yes, because context-free means that there is a single variable on the LHS of each rule. All regular languages are generated by grammars that have a single variable on the LHS of each grammar rule.

• But, as we have seen, not all context-free grammars are regular.

• So regular languages are a proper subset of the class of context-free languages.

Derivation

Given the grammar S → aaSB | λ, B → bB | b, the string aab can be derived in different ways:

S ⇒ aaSB ⇒ aaB ⇒ aab

S ⇒ aaSB ⇒ aaSb ⇒ aab

Parse tree

Both derivations on the previous slide (S ⇒ aaSB ⇒ aaB ⇒ aab and S ⇒ aaSB ⇒ aaSb ⇒ aab) correspond to the following parse (or derivation) tree:

         S
      / / \ \
     a a   S  B
           |  |
           λ  b

• The tree structure shows the rule that is applied to each nonterminal, without showing the order of rule applications.

• Each internal node of the tree corresponds to a nonterminal, and the leaves of the derivation tree represent the string of terminals.

Leftmost (rightmost) derivation

In the derivation

S ⇒ aaSB ⇒ aaB ⇒ aab

• after the first step, S was replaced with λ, and then B was replaced with b.
• we moved from left to right, replacing the leftmost variable at each step.
• this is called a leftmost derivation.

Similarly, the derivation

S ⇒ aaSB ⇒ aaSb ⇒ aab

• is called a rightmost derivation.

Leftmost (rightmost) derivation

Definition: In a leftmost derivation, the leftmost nonterminal is replaced at each step. In a rightmost derivation, the rightmost nonterminal is replaced at each step.

• Many derivations are neither leftmost nor rightmost.
• If there is a single parse tree, there is also a single leftmost derivation.
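Both derivation orders can be made concrete with a small sketch (our own code, using the grammar S → aaSB | λ, B → bB | b from the earlier example): the only difference is which occurrence of a variable is expanded at each step.

```python
# Sketch: grammar S -> aaSB | lambda, B -> bB | b, encoded as rule lists.
# RULES[v][i] is the i-th alternative for variable v ('' encodes lambda).
RULES = {"S": ["aaSB", ""], "B": ["bB", "b"]}

def step(form, choice, leftmost=True):
    """Expand the leftmost (or rightmost) variable using alternative `choice`."""
    idxs = [i for i, ch in enumerate(form) if ch in RULES]
    i = idxs[0] if leftmost else idxs[-1]
    return form[:i] + RULES[form[i]][choice] + form[i + 1:]

# Leftmost:  S => aaSB => aaB  => aab
lm = step(step(step("S", 0), 1), 1)
# Rightmost: S => aaSB => aaSb => aab
rm = step(step(step("S", 0, False), 1, False), 1, False)
print(lm, rm)  # both yield 'aab'
```

The two orders apply the same rules, matching the fact that both derivations share one parse tree.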

Parse (derivation) trees

Definition: Let G = (V, T, S, P) be a context-free grammar. An ordered tree is a derivation tree for G iff it has the following properties:

1. The root is labeled S.

2. Every leaf has a label from T ∪ {λ}.

3. Every interior vertex (not a leaf) has a label from V.

4. If a vertex has label A ∈ V, and its children are labeled (from left to right) a₁, a₂, ..., aₙ, then P must contain a production of the form A → a₁a₂...aₙ.

5. A leaf labeled λ has no siblings; that is, a vertex with a child labeled λ can have no other children.

Parse (derivation) trees

A partial derivation tree is one in which property 1 does not necessarily hold and in which property 2 is replaced by:

Every leaf has a label from V ∪ T ∪ {λ}

The yield of the tree is the string of symbols in the order they are encountered when the tree is traversed in a depth-first manner, always taking the leftmost unexplored branch.

• A partial derivation tree yields a sentential form of the grammar G that the tree is associated with.

• A derivation tree yields a sentence of the grammar G that the tree is associated with.

Parse (derivation) trees

Theorem: Let G = (V, T, S, P) be a context-free grammar. Then for every w ∈ L(G) there exists a derivation tree of G whose yield is w. Conversely, the yield of any derivation tree of G is in L(G).

If t_G is any partial derivation tree for G whose root is labeled S, then the yield of t_G is a sentential form of G.

Any w ∈ L(G) has a leftmost and a rightmost derivation.

• The leftmost derivation is obtained by always expanding the leftmost variable in the derivation tree at each step.
• Similarly for the rightmost derivation.

Ambiguity

A grammar is ambiguous if there is a string with two possible parse trees.

A string has more than one parse tree if and only if it has more than one leftmost derivation.

Example:

V = {S}, T = {+, *, (, ), 0, 1}
P = {S → S + S | S * S | (S) | 1 | 0}

• The string 0 * 0 + 1 has two different parse trees. The derivation begins from S; the leftmost variable is S. We can replace it with S + S, S * S, (S), 1, or 0. Pick one of these at random, say S + S.

• This parse corresponds to: compute 0 * 0 first, then add it to 1, which equals 1.

          S
        / | \
       S  +  S
     / | \   |
    S  *  S  1
    |     |
    0     0

Example

Our string is still 0 * 0 + 1.

V = {S}, T = {+, *, (, ), 0, 1}
P = {S → S + S | S * S | (S) | 1 | 0}

• But there is another, different parse tree that also generates the string 0 * 0 + 1. The derivation begins from S; the leftmost variable is S. We can replace it with S + S, S * S, (S), 1, or 0. Pick another one of these at random, say S * S.

• This parse corresponds to: take 0, and then multiply it by the sum 0 + 1, which equals 0. Here is the parse tree:

          S
        / | \
       S  *  S
       |   / | \
       0  S  +  S
          |     |
          0     1
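The claim that 0 * 0 + 1 has exactly two parse trees can be verified by brute force. The sketch below (our own code, not from the text) counts leftmost derivations of a target string, which by the fact stated above equals the number of parse trees. It prunes sentential forms that are already longer than the target, or whose terminal prefix disagrees with it; both prunings are safe because every rule of this grammar is length-nondecreasing.

```python
# Ambiguous grammar: S -> S+S | S*S | (S) | 1 | 0
RULES = ["S+S", "S*S", "(S)", "1", "0"]

def count_leftmost(w):
    """Count distinct leftmost derivations of w (= number of parse trees)."""
    def count(form):
        if "S" not in form:
            return 1 if form == w else 0
        i = form.index("S")             # leftmost variable
        if form[:i] != w[:i]:           # terminal prefix already disagrees
            return 0
        total = 0
        for rhs in RULES:
            new = form[:i] + rhs + form[i + 1:]
            if len(new) <= len(w):      # every rule is length-nondecreasing
                total += count(new)
        return total
    return count("S")

print(count_leftmost("0*0+1"))  # 2 parse trees -> the grammar is ambiguous
```

The two derivations found correspond exactly to the (0 * 0) + 1 and 0 * (0 + 1) trees drawn above.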

Equivalent grammars

Here is a non-ambiguous grammar that generates the same language:

S → S + A | A
A → A * B | B
B → (S) | 1 | 0

Two grammars that generate the same language are said to be equivalent.

To make parsing easier, we prefer grammars that are not ambiguous.
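One payoff of the unambiguous grammar is that it fixes the precedence of * over +, so a simple evaluator follows its structure directly. Below is a minimal sketch in Python (our own code): recursive descent cannot call a left-recursive rule like S → S + A directly, so, under the standard rewrite, S is treated as "A followed by zero or more + A", and A as "B followed by zero or more * B".

```python
# Evaluator for the unambiguous grammar
#   S -> S + A | A,  A -> A * B | B,  B -> (S) | 1 | 0
# with left recursion replaced by iteration.
def parse(s):
    pos = 0
    def peek():
        return s[pos] if pos < len(s) else None
    def S():                       # S: A ('+' A)*
        nonlocal pos
        v = A()
        while peek() == "+":
            pos += 1
            v = v + A()
        return v
    def A():                       # A: B ('*' B)*
        nonlocal pos
        v = B()
        while peek() == "*":
            pos += 1
            v = v * B()
        return v
    def B():                       # B: '(' S ')' | '1' | '0'
        nonlocal pos
        if peek() == "(":
            pos += 1
            v = S()
            pos += 1               # skip ')'
            return v
        v = int(peek())            # terminal '1' or '0'
        pos += 1
        return v
    return S()

print(parse("0*0+1"))  # * binds tighter than +, so this is (0*0)+1 = 1
```

Because each string now has a single parse tree, the evaluator never has to choose between the two readings that the ambiguous grammar allowed.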

Ambiguous grammars & equivalent grammars

There is no general algorithm for determining whether a given CFG is ambiguous.

There is no general algorithm for determining whether a given CFG is equivalent to another CFG.

Dangling else

x = 3;
if x > 2 then if x > 5 then x = 1; else x = 5;

What value does x have at the end?

Ambiguous grammar:

<statement> := IF <expression> THEN <statement> |
               IF <expression> THEN <statement> ELSE <statement> |
               <otherstatement>

Unambiguous grammar:

<statement> := <st1> | <st2>
<st1> := IF <expression> THEN <st1> ELSE <st1> | <otherstatement>
<st2> := IF <expression> THEN <statement> |
         IF <expression> THEN <st1> ELSE <st2>

Ambiguous grammars

Definition: If L is a context-free language for which there is an unambiguous grammar, then L is said to be unambiguous. If every grammar that generates L is ambiguous, then the language is called inherently ambiguous.

Parsing

In practical applications, it is usually not enough to decide whether a string belongs to a language. It is also important to know how the string can be derived in the grammar.

Parsing uncovers the syntactic structure of a string, which is represented by a parse tree. (The syntactic structure is important for assigning semantics to the string -- for example, if it is a program.)
Parsing

Let G be a context-free grammar for C++. Let the string w be a C++ program.

One thing a compiler does - in particular, the part of the compiler called the "parser" - is determine whether w is a syntactically correct C++ program. It also constructs a parse tree for the program that is used in code generation.

There are many sophisticated and efficient algorithms for parsing. You may study them in more advanced classes (for example, on compilers).

        (1)

      The Decision question for CFL’s

         What if a string w belongs to L(G) generated by a CFG, can we always decide that it does belong to L(G) ?  Yes . Just do top-down parsing, in which we list all the sequential forms that can be generated in one step, two steps, three steps, etc. This is a type of exhaustive search parsing. Eventually, w will be generated.

         What if

        w does not belong to L(G). Can we always decide that it doesn’t ?

         Not unless we restrict the kinds of rules we can have in our

        grammar. Suppose we ask if w = aab is a string in L(G). If we have λ-rules, such as B  λ, in G, we might have a sentential 5000 form like aabB and still be able to end up with aab.

The Decision question for CFL's

What we need to do is restrict the kinds of rules in our CFGs so that each rule, when it is applied, is guaranteed either to increase the length of the sentential form generated or to increase the number of terminals in the sentential form.

That means that we don't want rules of the following two forms in our CFGs:

A → λ
A → B

If we have a CFG that lacks these kinds of rules, then as soon as a sentential form is generated that is longer than our string w, we can abandon any attempt to generate w from this sentential form.

The Decision question for CFL's

If the grammar does not have these two kinds of rules, then, in a finite number of steps, applying our exhaustive search parsing technique to G will generate all possible sentential forms of G with a length ≤ |w|. If w has not been generated by this point, then w is not a string in the language, and we can stop generating sentential forms.

The Decision question for CFL's

Consider the grammar G = ({S}, {a, b}, S, P), where P is:

S → SS | aSb | bSa | ab | ba

Looking at the production rules, it is easy to see that the length of the sentential form grows by at least one symbol during each derivation step.

Thus, in ≤ |w| derivation steps, G will either produce a string of all terminals, which may be compared directly to w, or a sentential form too long to be capable of producing w. Hence, given any w ∈ {a, b}⁺, the exhaustive search parsing technique decides in a finite number of steps whether w belongs to L(G).
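Exhaustive search parsing for this grammar can be sketched as a breadth-first search over leftmost derivations (our own code, not from the text). Since every rule grows the sentential form by at least one symbol, any form longer than |w| is discarded, which is exactly why the search always terminates.

```python
from collections import deque

# Exhaustive search parsing for S -> SS | aSb | bSa | ab | ba.
RULES = ["SS", "aSb", "bSa", "ab", "ba"]

def exhaustive_parse(w):
    """Return True iff w is in L(G), by BFS over leftmost derivations."""
    queue = deque(["S"])
    seen = {"S"}
    while queue:
        form = queue.popleft()
        if form == w:
            return True
        if "S" not in form or len(form) > len(w):
            continue                        # dead end: prune this form
        i = form.index("S")                 # expand the leftmost variable
        for rhs in RULES:
            new = form[:i] + rhs + form[i + 1:]
            if len(new) <= len(w) and new not in seen:
                seen.add(new)
                queue.append(new)
    return False

print(exhaustive_parse("abab"), exhaustive_parse("aabb"), exhaustive_parse("aab"))
# True True False
```

Restricting the search to leftmost derivations loses nothing, since every string in L(G) has a leftmost derivation.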

The Decision question for CFL's

Theorem: Assume that G = (V, T, S, P) is a context-free grammar with no rules of the form A → λ or A → B, where A, B ∈ V. Then the exhaustive search parsing technique can be made into an algorithm which, for any w ∈ Σ*, either

• produces a parsing for w, or tells us that no parsing is possible.

The Decision question for CFL's

Since we don't know ahead of time which derivation sequences to try, we have to try all of the possible applications of rules, which result in one of two conditions: a string of all terminals of length |w|, or a sentential form of length |w| + 1.

The application of any one rule must result in either: replacing a variable with one or more terminals, or increasing the length of the sentential form by one or more characters.

The worst-case scenario is applying |w| rules that increase the length of the sentential form to |w|, and then applying |w| rules that replace each variable with a terminal symbol, ending up with a string of |w| terminals after 2|w| rule applications.

The Decision question for CFL's

How many sentential forms will we have to examine? Restricting ourselves to leftmost derivations, it is obvious that, with |P| production rules, applying each rule one time to S gives us |P| sentential forms.

Example: Given the 5 production rules S → SS | aSb | bSa | ab | ba, one round of leftmost derivations produces 5 sentential forms:

S ⇒ SS
S ⇒ aSb
S ⇒ bSa
S ⇒ ab
S ⇒ ba

The Decision question for CFL's

The second round of leftmost derivations produces 15 sentential forms:

SS ⇒ SSS    SS ⇒ aSbS    SS ⇒ bSaS    SS ⇒ abS    SS ⇒ baS
aSb ⇒ aSSb  aSb ⇒ aaSbb  aSb ⇒ abSab  aSb ⇒ aabb  aSb ⇒ abab
bSa ⇒ bSSa  bSa ⇒ baSba  bSa ⇒ bbSaa  bSa ⇒ baba  bSa ⇒ bbaa

ab and ba don't produce any new sentential forms, since they consist of all terminals. If they had contained variables, then the second round of leftmost derivations would have produced 25, or |P|², sentential forms. Similarly, the third round of leftmost derivations can produce up to |P|³ sentential forms, and so on.

The Decision question for CFL's

We know from our worst-case scenario that we never have to run through more than 2|w| rounds of rule applications in any one derivation sequence before being able to stop the derivation.

Therefore, the total number of sentential forms that we may have to generate to decide whether string w belongs to L(G) generated by grammar G = (V, T, S, P) is

|P| + |P|² + ... + |P|^(2|w|)

Unfortunately, this means that the work we might have to do to answer the decision question for CFGs could grow exponentially with the length of the string.

The Decision question for CFL's

It can be shown that more efficient parsing techniques for CFGs exist.

Theorem 5.3: For every context-free grammar there exists an algorithm that parses any w ∈ L(G) in a number of steps proportional to |w|³.

Your textbook does not offer a proof for this theorem.

Anyway, what is really needed is a linear-time parsing algorithm for CFGs. Such an algorithm exists for some special cases.

S-grammars

Definition 5.5: A context-free grammar G = (V, T, S, P) is said to be a simple grammar or s-grammar if all of its productions are of the form

A → ax, where A ∈ V, a ∈ T, x ∈ V*,

and any pair (A, a) occurs at most once in P.

Example: The following grammar is an s-grammar:

S → aS | bSS | c

The following grammar is not an s-grammar. Why not?

        S-grammars

        If G is an s-grammar, then any string w in L(G) can be parsed with an effort proportional to |w|.

S-grammars

Let's consider the grammar expressed by the following production rules:

S → aS | bSS | c

Since G is an s-grammar, all rules have the form A → ax. Assume that w = abcc.

Due to the restrictive condition that any pair (A, a) may occur at most once in P, we know immediately which production rule must have generated the a in abcc - the rule S → aS. Similarly, there is only one way to produce the b and the two c's. So we can parse w in no more than |w| steps.
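The deterministic, one-rule-per-symbol behavior described above can be sketched as a stack machine (our own code, not from the text): keep the variables still to be expanded on a stack, and let each input symbol pick the unique matching rule.

```python
# Linear-time parsing for the s-grammar S -> aS | bSS | c.
# Each (variable, terminal) pair selects exactly one rule, so parsing
# is deterministic: one input symbol is consumed per step.
RULES = {("S", "a"): "aS", ("S", "b"): "bSS", ("S", "c"): "c"}

def s_parse(w):
    """Return the list of rules used, or None if w is not in L(G)."""
    stack = ["S"]                  # variables still to expand, leftmost on top
    used = []
    for ch in w:
        if not stack:
            return None            # input left over after derivation finished
        rhs = RULES.get((stack.pop(), ch))
        if rhs is None:
            return None            # no rule matches this (variable, symbol) pair
        used.append(f"S -> {rhs}")
        # push the RHS variables (everything after the leading terminal),
        # reversed so the leftmost variable ends up on top
        stack.extend(reversed(rhs[1:]))
    return used if not stack else None

print(s_parse("abcc"))
# ['S -> aS', 'S -> bSS', 'S -> c', 'S -> c']
```

Exactly |w| = 4 rules are applied for w = abcc, matching the |w|-step bound stated above.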

Exercise

Let G be the grammar

S → abSc | A
A → cAd | cd

1) Give a derivation of ababccddcc.

Programming languages

• Programming languages are context-free, but not regular.
• Programming languages have the following features that require infinite "stack memory":
  – matching parentheses in algebraic expressions
  – nested if .. then .. else statements, and nested loops
  – block structure

Programming languages

• Programming languages are often defined using a convention for specifying grammars called Backus-Naur form, or BNF.

Example:

<expression> ::= <term> | <expression> + <term>

Programming languages

Backus-Naur form is very similar to the standard CFG form, but variables are listed within angular brackets, ::= is used instead of →, and {X} is used to mean 0 or more occurrences of X. The | is still used to mean "or".

Pascal's if statement:

<if-statement> ::= if <expression> <then-clause> <else-clause>

Programming languages

S-grammars are not sufficiently powerful to handle all the syntactic features of a typical programming language.

LL grammars and LR grammars (see next chapter) are normally used for specifying programming languages. They are more complicated than s-grammars, but still permit parsing in linear time.

Some aspects of programming languages (i.e., semantics) cannot be handled by context-free grammars.

Example of a Non-linear Context-free Grammar

Consider the grammar G = ({S}, {a, b}, S, P), with production rules:

S → aSa | SS | λ

This grammar is context-free. Why? Is this grammar linear? Why or why not?