# Context-free Grammar, Part II: Bottom-Up Parsing

## 1 Bottom-up Parsing

An LR parser scans and parses the input text in one forward pass over the text. The parser builds up the parse tree incrementally, bottom up, and left to right, without guessing or backtracking.

Figure 1: Bottom-up parse of A*2 + 1

### 1.1 Viable Prefixes and Handles

E' -> E
E -> E + n | n


Figure 2 shows the bottom-up parse of the string n + n using this grammar.
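The parse amounts to running the rightmost derivation in reverse, reducing the handle at each step:

```latex
n + n \;\Leftarrow\; E + n \;\Leftarrow\; E \;\Leftarrow\; E'
```

The first step reduces the leftmost n (the handle) by E -> n, the second reduces E + n by E -> E + n, and the last step is the accepting reduction E' -> E.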

### 1.2 LR(0) Parsing

#### 1.2.1 Bottom-up Parsing with an Automaton

Once the DFA has been constructed, it can be used to recognize a concrete input string.

#### 1.2.2 What Is an LR(0) Item?

An LR(0) item is a production alternative annotated with a position marker on its right-hand side; by convention the position is written as a dot.

An LR(0) item can serve as a state of a finite automaton that records information about the parse stack and the progress of the shift-reduce parse.

#### 1.2.3 Constructing an NFA from LR(0) Items (Defining the Transitions)

The 8 LR(0) items correspond to 8 NFA states. Applying the construction rules introduced earlier yields the following NFA (in the figure below, ε-transitions are added as follows: for each item A -> α.Xβ where X is a nonterminal, add an ε-transition to X -> .γ for every production X -> γ):

#### 1.2.4 Kernel and Nonkernel Items (also Called Closure Items) among LR(0) Items

(1) Kernel items: the initial item S' -> .S plus every item whose dot is not at the left end.
(2) Nonkernel (closure) items: every item whose dot is at the left end, except S' -> .S.

The closure of a set of LR(0) items is exactly the second case in the transition rules of the LR(0) automaton (i.e., how the ε-transitions are added).
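The closure rule can be sketched in a few lines of Python. The item encoding as a (lhs, rhs-tuple, dot-position) triple and the use of the grammar E' -> E, E -> E + n | n are assumptions of this sketch:

```python
# Closure of a set of LR(0) items.  An item A -> alpha . beta is encoded as
# (lhs, rhs, dot).  The epsilon-transition rule says: whenever the dot stands
# before a nonterminal X, add the item X -> . gamma for every production X -> gamma.
GRAMMAR = {
    "E'": [("E",)],
    "E": [("E", "+", "n"), ("n",)],
}

def closure(items):
    items = set(items)
    work = list(items)
    while work:
        lhs, rhs, dot = work.pop()
        if dot < len(rhs) and rhs[dot] in GRAMMAR:   # dot before a nonterminal
            for prod in GRAMMAR[rhs[dot]]:
                item = (rhs[dot], prod, 0)           # X -> . gamma
                if item not in items:
                    items.add(item)
                    work.append(item)
    return items
```

For the kernel {E' -> .E}, the closure adds E -> .E + n and E -> .n, giving the start state of the automaton.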

#### 1.2.5 LR(0) Parsing Algorithm

The LR(0) parsing algorithm is simple. With s the state on top of the parse stack, it proceeds as follows:

1. If s contains an item of the form A -> α.Xβ (a "shift" item), where X is a terminal, the action is to shift the current input token onto the stack. If the input token is X, the new state pushed is the one containing A -> αX.β; if it is not X, report an error.
2. If s contains an item of the form A -> γ. (a "complete" item), the action is to reduce by the production A -> γ. The parse stack is updated as follows: first pop γ and the corresponding states, which returns the stack to the state it was in before γ was built (that state must contain an item of the form B -> α.Aβ); then push A, and push the state containing the item B -> αA.β.

#### 1.2.7 An LR(0) Grammar Example

The DFA is usually represented as a parsing table. The table for the grammar A -> (A) | a:
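As a concrete illustration of both the table and the driver loop, here is a minimal table-driven LR(0) parser for A -> (A) | a in Python. The state numbering is one possible layout of the DFA, chosen for this sketch:

```python
# LR(0) parse table for the grammar A -> ( A ) | a, augmented with S' -> A.
# States: 0 = start, 1 = {S' -> A.}, 2 = {A -> (.A), ...}, 3 = {A -> a.},
#         4 = {A -> (A.)}, 5 = {A -> (A).}
SHIFT = {                             # (state, terminal) -> next state
    (0, '('): 2, (0, 'a'): 3,
    (2, '('): 2, (2, 'a'): 3,
    (4, ')'): 5,
}
REDUCE = {3: ('A', 1), 5: ('A', 3)}   # "complete" states -> (lhs, |rhs|)
GOTO = {(0, 'A'): 1, (2, 'A'): 4}     # nonterminal transitions after a reduce

def parse(tokens):
    stack = [0]                           # the parse stack holds states
    tokens = list(tokens) + ['$']
    i = 0
    while True:
        s = stack[-1]
        if s in REDUCE:                   # complete item: reduce by A -> gamma
            lhs, n = REDUCE[s]
            del stack[-n:]                # pop |gamma| states
            stack.append(GOTO[(stack[-1], lhs)])
        elif s == 1 and tokens[i] == '$':
            return True                   # S' -> A. and input exhausted: accept
        elif (s, tokens[i]) in SHIFT:     # shift item: consume one token
            stack.append(SHIFT[(s, tokens[i])])
            i += 1
        else:
            return False                  # no applicable action: syntax error
```

parse("((a))") accepts, while parse("(a") fails when the input runs out before the closing parenthesis.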

#### 1.2.8 Another View of the LR(0) Automaton

The LR(0) automaton can also be described in terms of viable prefixes and handles.
The LR(0) automaton is a DFA which accepts viable prefixes of right sentential forms, ending in a handle.

### 1.3 SLR(1) Parsing

#### 1.3.1 Constructing the SLR(1) Automaton

SLR(1) parsing uses exactly the same automaton as LR(0).

#### 1.3.2 SLR(1) Parsing Algorithm

SLR(1) parsing is a simple but effective extension of LR(0): by consulting the lookahead symbol, SLR(1) gains enough power to handle many practical language constructs.

The SLR(1) parsing algorithm is as follows:
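The key change relative to LR(0) can be sketched as a modified reduce test: reduce by A -> γ only when the lookahead is in Follow(A). The Follow sets below are hand-computed for the running grammar E' -> E, E -> E + n | n, which is an assumption of this sketch:

```python
# SLR(1) refinement of the LR(0) reduce action: a "complete" item A -> gamma .
# triggers a reduction only if the current lookahead token is in Follow(A).
FOLLOW = {"E": {"+", "$"}, "E'": {"$"}}   # hand-computed for the example grammar

def slr_reduce(state_items, lookahead):
    """Return the production (lhs, rhs) to reduce by, or None."""
    for lhs, rhs, dot in state_items:
        if dot == len(rhs) and lookahead in FOLLOW[lhs]:
            return (lhs, rhs)             # complete item with a valid lookahead
    return None
```

In the state containing E -> n., an SLR(1) parser reduces on + or $ but reports an error on n, where a pure LR(0) parser would reduce unconditionally.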

#### 1.3.3 An Equivalent Characterization of SLR(1) Grammars

The definition of an SLR(1) grammar is not very intuitive; an equivalent, more intuitive characterization is given below.

#### 1.3.5 SLR(1) Is Not Powerful Enough

The grammar of Example 4.48 in Section 4.6.4 of Compilers: Principles, Techniques, and Tools (2nd edition) is not SLR(1); Example 4.61 shows that it is LALR(1).

### 1.4 LR(1) Parsing (Canonical LR)

The idea of LR(1) is to exploit the lookahead symbol further: rather than using it only to choose between actions, build the lookahead information into the DFA itself. Each LR(1) item therefore carries a lookahead terminal as its second component:

$[A \to \alpha . \beta , \, a]$

#### 1.4.1 Constructing the Automaton of LR(1) Items

Transitions on grammar symbols carry the lookahead along unchanged:

$[A\to\alpha.X\gamma,\,a]\overset{X}{\longrightarrow}[A\to\alpha X.\gamma,\,a]$

The ε-transitions must also compute the lookahead of the new item:

$[A\to\alpha.B\gamma,\,a]\overset{\epsilon}{\longrightarrow}[B\to . \beta,\,b]\quad b\in First(\gamma a)$

#### 1.4.2 Kernel and Nonkernel Items of LR(1) Item Sets

bison --report=state calc.y      # reports kernel items only
bison --report=itemset calc.y    # reports kernel and nonkernel (closure) items


#### 1.4.4 An Example: the NFA and DFA of LR(1) Items

S -> ABc
A -> a
B -> b
B -> ε
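For this grammar, the LR(1) closure rule (with b ∈ First(γa), as in Section 1.4.1) can be sketched in Python. The encoding of items as (lhs, rhs, dot, lookahead) tuples is an assumption of the sketch:

```python
# LR(1) closure for the grammar S -> ABc, A -> a, B -> b | epsilon.
# For each item [A -> alpha . B gamma, a], add [B -> . beta, b] for every
# production B -> beta and every b in First(gamma a).
GRAMMAR = {
    "S": [("A", "B", "c")],
    "A": [("a",)],
    "B": [("b",), ()],               # () is the epsilon production
}

def first_seq(symbols):
    """First set of a symbol sequence; the last symbol is always a terminal here."""
    out = set()
    for X in symbols:
        if X not in GRAMMAR:         # terminal: nothing beyond it contributes
            out.add(X)
            return out
        nullable = False
        for prod in GRAMMAR[X]:
            if prod == ():
                nullable = True      # X can vanish; keep scanning
            else:
                out |= first_seq(prod)
        if not nullable:
            return out
    return out

def lr1_closure(items):
    items = set(items)
    work = list(items)
    while work:
        lhs, rhs, dot, la = work.pop()
        if dot < len(rhs) and rhs[dot] in GRAMMAR:
            for b in first_seq(rhs[dot + 1:] + (la,)):
                for prod in GRAMMAR[rhs[dot]]:
                    item = (rhs[dot], prod, 0, b)
                    if item not in items:
                        items.add(item)
                        work.append(item)
    return items
```

Starting from the kernel [S -> .ABc, $], the closure adds [A -> .a, b] and [A -> .a, c]: since B may derive ε, both b and c are in First(Bc$).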


#### 1.4.5 LR(1) Parsing Algorithm

The LR(1) parsing algorithm closely resembles the SLR(1) algorithm.

#### 1.4.6 An Equivalent Characterization of LR(1) Grammars

An intuitive, equivalent characterization of LR(1) grammars:

### 1.5 LALR(1) Parsing

LALR(1) is more powerful than SLR(1) and less powerful than LR(1), but it combines SLR(1)'s small number of states with much of LR(1)'s broad applicability.

#### 1.5.1 The Basis of LALR(1): Two Properties of the DFA of LR(1) Items

(1) The core of each state of the LR(1) DFA is a state of the LR(0) DFA.
(2) If two states s1 and s2 of the LR(1) DFA have the same core, and there is a transition on symbol X from s1 to a state t1, then there is also a transition on X from s2 to a state t2, and t1 and t2 have the same core.

The DFA of LALR(1) items is obtained as follows: merge the states of the LR(1) DFA that share the same core, taking the union of the corresponding lookahead sets as the second component of the LALR(1) items. Can this merging introduce conflicts? The answer: it cannot introduce shift/reduce conflicts, but it can introduce reduce/reduce conflicts (see Compilers: Principles, Techniques, and Tools, 2nd edition, Section 4.7.4).
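The merging step itself is mechanical and can be sketched as follows; items are assumed to be (lhs, rhs, dot, lookahead) tuples, and this is an illustration of the merge only, not the full table construction:

```python
# LALR(1) state merging: group LR(1) states by their LR(0) core and take the
# union of the item sets.  Unioning the item sets unions the lookahead sets,
# because items with equal (lhs, rhs, dot) differ only in the lookahead.
from collections import defaultdict

def merge_by_core(states):
    """states: iterable of sets of (lhs, rhs, dot, lookahead) LR(1) items."""
    merged = defaultdict(set)
    for state in states:
        core = frozenset((l, r, d) for l, r, d, _ in state)  # drop lookaheads
        merged[core] |= set(state)
    return list(merged.values())
```

Two states whose items differ only in their lookaheads collapse into a single LALR(1) state carrying both lookaheads.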

#### 1.5.2 LALR(1)和LR(1)的区别

SLR, LALR and LR parsers can all be implemented using exactly the same table-driven machinery.
The difference between LALR and LR lies in the table generator. LR parser generators keep track of all possible reductions from specific states and their precise lookahead sets; you end up with states in which every reduction is associated with its exact lookahead set from its left context. This tends to build rather large sets of states.

LALR parser generators are willing to combine states if the GOTO tables and lookahead sets for reductions are compatible and don't conflict; this produces considerably smaller numbers of states, at the price of not being able to distinguish certain symbol sequences that LR can distinguish. So LR parsers can parse a larger set of languages than LALR parsers, but have much bigger parser tables.

In practice, one can find LALR grammars which are close enough to the target languages that the size of the state machine is worth optimizing; the places where the LR parser would be better are handled by ad hoc checking outside the parser.

#### 1.5.3 是LR(1)但不是LALR(1)的例子

It is well known that there are grammars which are LR(1) but not LALR(1). For example:

S -> aEa | bEb | aFb | bFa
E -> e
F -> e

After a or b followed by e, the LR(1) automaton reaches one of two states with the common core {E -> e., F -> e.} but with swapped lookaheads (a versus b). Merging them unions the lookaheads, so both reductions become valid on both a and b, and the LALR(1) table contains a reduce/reduce conflict.


#### 1.5.4 Constructing the LALR(1) Parsing Table

There are many algorithms for constructing LALR(1) parsing tables; Parsing Techniques, 2nd edition surveys them.

#### 1.5.5 The Classic LALR(1) Table Construction Algorithm, as Used by Yacc and Lemon

##### 1.5.5.1 Why Divide Lookaheads into Spontaneously Generated and Propagated?

(1) Propagation does not depend on the particular lookahead symbol: either every lookahead propagates from one item to another, or none does.
(2) A propagated lookahead, since it was propagated from an earlier item, ultimately originates from some spontaneously generated lookahead.
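The trick Yacc-style generators use to discover both kinds at once (Dragon Book Algorithm 4.62) can be sketched in Python: take the LR(1) closure of a kernel item with a dummy lookahead '#' that is not a grammar symbol. The grammar below is the book's S -> L = R | R example, and the item encoding is an assumption of the sketch:

```python
# Dummy-lookahead trick: in the closure of [kernel, '#'], a generated item whose
# lookahead is a real terminal got that lookahead *spontaneously*; an item whose
# lookahead is '#' *receives propagated* lookaheads from the kernel item.
GRAMMAR = {
    "S'": [("S",)],
    "S": [("L", "=", "R"), ("R",)],
    "L": [("*", "R"), ("id",)],
    "R": [("L",)],
}

def first_seq(symbols):
    """First set of a sequence; no nonterminal here derives the empty string."""
    X = symbols[0]
    if X not in GRAMMAR:
        return {X}                        # terminal (including the dummy '#')
    out = set()
    for prod in GRAMMAR[X]:
        out |= first_seq(prod)
    return out

def lr1_closure(items):
    items = set(items)
    work = list(items)
    while work:
        lhs, rhs, dot, la = work.pop()
        if dot < len(rhs) and rhs[dot] in GRAMMAR:
            for b in first_seq(rhs[dot + 1:] + (la,)):
                for prod in GRAMMAR[rhs[dot]]:
                    item = (rhs[dot], prod, 0, b)
                    if item not in items:
                        items.add(item)
                        work.append(item)
    return items

def classify(kernel):
    """Split the closure of one kernel item into spontaneous/propagated parts."""
    spontaneous, propagated = set(), set()
    for lhs, rhs, dot, la in lr1_closure({kernel + ("#",)}):
        if la == "#":
            propagated.add((lhs, rhs, dot))       # inherits kernel's lookaheads
        else:
            spontaneous.add((lhs, rhs, dot, la))  # lookahead generated here
    return spontaneous, propagated
```

For the kernel S' -> .S, the items L -> .*R and L -> .id acquire the lookahead = spontaneously (from First(=R#)), while they and the other closure items also receive, by propagation, whatever lookaheads the kernel item carries.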

## 2 Comparing Bottom-up and Top-down Parsing

Figure 38 shows how Bottom-up (LR) and Top-down (LL) parsing build the parse tree.

A first comparison of Bottom-up and Top-down is shown in Figure 39 (taken from: http://www.montefiore.ulg.ac.be/~geurts/Cours/compil/2011/03-syntaxanalysis-3.pdf).

A second comparison of Bottom-up and Top-down:

LL or LR?
This question has already been answered much better by someone else, so I'm just quoting his news message in full here:
I hope this doesn't start a war…
First - - Frank, if you see this, don't shoot me. (My boss is Frank DeRemer, the creator of LALR parsing…)
(I borrowed this summary from Fischer&LeBlanc's "Crafting a Compiler")

Simplicity - - LL
Generality - - LALR
Actions - - LL
Error repair - - LL
Table sizes - - LL
Parsing speed - - comparable (me: and tool-dependent)

Simplicity - - LL wins
========
The workings of an LL parser are much simpler. And, if you have to debug a parser, looking at a recursive-descent parser (a common way to program an LL parser) is much simpler than the tables of an LALR parser.

Generality - - LALR wins
========
For ease of specification, LALR wins hands down. The big difference here between LL and (LA)LR is that in an LL grammar you must left-factor rules and remove left recursion.
Left factoring is necessary because LL parsing requires selecting an alternative based on a fixed number of input tokens.
Left recursion is problematic because a lookahead token of a rule is always in the lookahead token on that same rule. (Everything in set A is in set A…) This causes the rule to recurse forever and ever and ever and ever…
To see ways to convert LALR grammars to LL grammars, take a look at my page on it:
http://www.jguru.com/thetick/articles/lalrtoll.html
Many languages already have LALR grammars available, so you'd have to translate. If the language doesn't have a grammar available, then I'd say it's not really any harder to write a LL grammar from scratch. (You just have to be in the right "LL" mindset, which usually involves watching 8 hours of Dr. Who before writing the grammar… I actually prefer LL if you didn't know…)

Actions - - LL wins
=====
In an LL parser you can place actions anywhere you want without introducing a conflict.

Error repair - - LL wins
==========
LL parsers have much better context information (they are top-down parsers) and therefore can help much more in repairing an error, not to mention reporting errors.

Table sizes - - LL
=========
Assuming you write a table-driven LL parser, its tables are nearly half the size. (To be fair, there are ways to optimize LALR tables to make them smaller, so I think this one washes…)

Parsing speed - comparable (me: and tool-dependent)

Created: <2013-01-19 Sat 00:00>

Last updated: <2017-12-09 Sat 13:57>

Creator: Emacs 25.3.1 (Org mode 9.1.4)