\documentclass[a4paper,UKenglish]{lipics}
\usepackage{graphic}
\usepackage{data}
\usepackage{tikz-cd}
\usepackage{tikz}

%\usetikzlibrary{graphs}
%\usetikzlibrary{graphdrawing}
%\usegdlibrary{trees}

%\usepackage{algorithm}
\usepackage{amsmath}
\usepackage{xcolor}
\usepackage[noend]{algpseudocode}
\usepackage{enumitem}
\usepackage{nccmath}
\usepackage{soul}

\definecolor{darkblue}{rgb}{0,0,0.6}
\hypersetup{colorlinks=true,allcolors=darkblue}
\newcommand{\comment}[1]%
{{\color{red}$\Rightarrow$}\marginpar{\raggedright\small{\bf\color{red}#1}}}

% \documentclass{article}
%\usepackage[utf8]{inputenc}
%\usepackage[english]{babel}
%\usepackage{listings}
% \usepackage{amsthm}
%\usepackage{hyperref}
% \usepackage[margin=0.5in]{geometry}
%\usepackage{pmboxdraw}

\title{POSIX Regular Expression Matching and Lexing}
\author{Chengsong Tan}
\affil{King's College London\\
London, UK\\
\texttt{chengsong.tan@kcl.ac.uk}}
\authorrunning{Chengsong Tan}
\Copyright{Chengsong Tan}

\newcommand{\dn}{\stackrel{\mbox{\scriptsize def}}{=}}%
\newcommand{\ZERO}{\mbox{\bf 0}}
\newcommand{\ONE}{\mbox{\bf 1}}
\def\erase{\textit{erase}}
\def\bders{\textit{bders}}
\def\lexer{\mathit{lexer}}
\def\blexer{\textit{blexer}}
\def\fuse{\textit{fuse}}
\def\flatten{\textit{flatten}}
\def\map{\textit{map}}
\def\blexers{\mathit{blexer\_simp}}
\def\simp{\mathit{simp}}
\def\mkeps{\mathit{mkeps}}
\def\bmkeps{\textit{bmkeps}}
\def\inj{\mathit{inj}}
\def\Empty{\mathit{Empty}}
\def\Left{\mathit{Left}}
\def\Right{\mathit{Right}}
\def\Stars{\mathit{Stars}}
\def\Char{\mathit{Char}}
\def\Seq{\mathit{Seq}}
\def\Der{\mathit{Der}}
\def\nullable{\mathit{nullable}}
\def\Z{\mathit{Z}}
\def\S{\mathit{S}}
\def\flex{\textit{flex}}
\def\rup{r^\uparrow}
\def\retrieve{\textit{retrieve}}
\def\AALTS{\textit{AALTS}}
\def\AONE{\textit{AONE}}
%\theoremstyle{theorem}
%\newtheorem{theorem}{Theorem}
%\theoremstyle{lemma}
%\newtheorem{lemma}{Lemma}
%\newcommand{\lemmaautorefname}{Lemma}
%\theoremstyle{definition}
%\newtheorem{definition}{Definition}
\algnewcommand\algorithmicswitch{\textbf{switch}}
\algnewcommand\algorithmiccase{\textbf{case}}
\algnewcommand\algorithmicassert{\texttt{assert}}
\algnewcommand\Assert[1]{\State \algorithmicassert(#1)}%
% New "environments"
\algdef{SE}[SWITCH]{Switch}{EndSwitch}[1]{\algorithmicswitch\ #1\ \algorithmicdo}{\algorithmicend\ \algorithmicswitch}%
\algdef{SE}[CASE]{Case}{EndCase}[1]{\algorithmiccase\ #1}{\algorithmicend\ \algorithmiccase}%
\algtext*{EndSwitch}%
\algtext*{EndCase}%

\begin{document}

\maketitle

\begin{abstract}
Brzozowski introduced in 1964 a beautifully simple algorithm for
regular expression matching based on the notion of derivatives of
regular expressions. In 2014, Sulzmann and Lu extended this
algorithm to not just give a YES/NO answer for whether or not a
regular expression matches a string, but in case it does, to also
answer \emph{how} it matches the string. This is important for
applications such as lexing (tokenising a string). The problem is to
make the algorithm by Sulzmann and Lu fast on all inputs without
breaking its correctness. Being fast depends on a complete set of
simplification rules, some of which
have been put forward by Sulzmann and Lu. We have extended their
rules in order to obtain a tight bound on the size of regular expressions.
We have tested these extended rules, but have not
formally established their correctness. We have also not yet looked
at extended regular expressions, such as bounded repetitions,
negation and back-references.
\end{abstract}

\section{Introduction}
%Regular expressions' derivatives, which have received
%renewed interest in the new millenium, is a beautiful....
While we believe derivatives of regular expressions, written
$r\backslash s$, are a beautiful concept (in terms of ease of
implementing them in functional programming languages and in terms of
reasoning about them formally), they have one major drawback: every
derivative step can make regular expressions grow drastically in
size. This in turn has a negative effect on the runtime of the
corresponding lexing algorithms. Consider for example the regular
expression $(a+aa)^*$ and the short string $aaaaaaaaaaaa$. The
corresponding derivative already contains 8668 nodes when the
derivative is given as a tree. The reason for the poor runtime of
the derivative-based lexing algorithms is that they need to traverse
such trees over and over again. The solution is to find a complete set
of simplification rules that keep the sizes of derivatives uniformly
small.

This has been partially addressed by the function $\blexer_{simp}$,
with which the derivatives in the $(a+aa)^*$ example above are reduced
from more than 8000 nodes to just 6, a size that stays constant in
each derivative step.
The part that still needs more work is the correctness proof of this
function, namely,
\begin{equation}\label{mainthm}
\blexers \; r \; s = \blexer \;r\;s
\end{equation}

\noindent
and this is what this report is mainly about. A condensed
version of the last report is provided in the next section
to help the reader understand this report; our attempts
at the proof follow after it.

\section{Recapitulation of Concepts From the Last Report}

\subsection*{Regular Expressions and Derivatives}
Suppose (basic) regular expressions are given by the following grammar:

\[ r ::= \ZERO \mid \ONE
	 \mid c
	 \mid r_1 \cdot r_2
	 \mid r_1 + r_2
	 \mid r^*
\]

\noindent
The ingenious contribution of Brzozowski is the notion of \emph{derivatives} of
regular expressions, written~$\_ \backslash \_$. It uses the auxiliary notion of
$\nullable$ defined below.

\begin{center}
\begin{tabular}{lcl}
$\nullable(\ZERO)$     & $\dn$ & $\mathit{false}$ \\
$\nullable(\ONE)$      & $\dn$ & $\mathit{true}$ \\
$\nullable(c)$ 	       & $\dn$ & $\mathit{false}$ \\
$\nullable(r_1 + r_2)$ & $\dn$ & $\nullable(r_1) \vee \nullable(r_2)$ \\
$\nullable(r_1\cdot r_2)$  & $\dn$ & $\nullable(r_1) \wedge \nullable(r_2)$ \\
$\nullable(r^*)$       & $\dn$ & $\mathit{true}$ \\
\end{tabular}
\end{center}

\begin{center}
\begin{tabular}{lcl}
$\ZERO \backslash c$ & $\dn$ & $\ZERO$\\
$\ONE \backslash c$  & $\dn$ & $\ZERO$\\
$d \backslash c$     & $\dn$ &
$\mathit{if} \;c = d\;\mathit{then}\;\ONE\;\mathit{else}\;\ZERO$\\
$(r_1 + r_2)\backslash c$     & $\dn$ & $r_1 \backslash c \,+\, r_2 \backslash c$\\
$(r_1 \cdot r_2)\backslash c$ & $\dn$ & $\mathit{if} \, \nullable(r_1)$\\
	&   & $\mathit{then}\;(r_1\backslash c) \cdot r_2 \,+\, r_2\backslash c$\\
	&   & $\mathit{else}\;(r_1\backslash c) \cdot r_2$\\
$(r^*)\backslash c$           & $\dn$ & $(r\backslash c) \cdot r^*$\\
\end{tabular}
\end{center}
\noindent
These clauses define how a regular expression evolves into
a new regular expression when a head character $c$ is chopped off
the strings it matches.

The main property of the derivative operation is that

\begin{center}
$c\!::\!s \in L(r)$ holds
if and only if $s \in L(r\backslash c)$.
\end{center}

\noindent
We can generalise the derivative operation shown above for single characters
to strings as follows:

\begin{center}
\begin{tabular}{lcl}
$r \backslash (c\!::\!s) $ & $\dn$ & $(r \backslash c) \backslash s$ \\
$r \backslash [\,] $ & $\dn$ & $r$
\end{tabular}
\end{center}

\noindent
and then define Brzozowski's regular-expression matching algorithm as:

\[
match\;s\;r \;\dn\; nullable(r\backslash s)
\]

\noindent
Assuming that a string is given as a sequence of characters, say $c_0c_1\ldots c_{n-1}$,
this algorithm can be presented graphically as follows:

\begin{equation}\label{graph:*}
\begin{tikzcd}
r_0 \arrow[r, "\backslash c_0"]  & r_1 \arrow[r, "\backslash c_1"] & r_2 \arrow[r, dashed]  & r_n  \arrow[r,"\textit{nullable}?"] & \;\textrm{YES}/\textrm{NO}
\end{tikzcd}
\end{equation}

\noindent
where we start with a regular expression $r_0$, build successive
derivatives until we exhaust the string and then use \textit{nullable}
to test whether the result can match the empty string. It can be
relatively easily shown that this matcher is correct (that is given
an $s = c_0...c_{n-1}$ and an $r_0$, it generates YES if and only if $s \in L(r_0)$).

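To make the recapitulated definitions concrete, the matcher above can be sketched as a short program. This is an illustrative Python sketch only (not the formalisation used in this report); regular expressions are represented here as tagged tuples for brevity:

```python
# Regular expressions as tagged tuples; Brzozowski's nullable and derivative.
ZERO, ONE = ("zero",), ("one",)

def char(c):     return ("char", c)
def alt(r1, r2): return ("alt", r1, r2)
def seq(r1, r2): return ("seq", r1, r2)
def star(r):     return ("star", r)

def nullable(r):
    tag = r[0]
    if tag in ("one", "star"): return True
    if tag == "alt": return nullable(r[1]) or nullable(r[2])
    if tag == "seq": return nullable(r[1]) and nullable(r[2])
    return False                       # zero, char

def der(c, r):
    tag = r[0]
    if tag == "char":
        return ONE if r[1] == c else ZERO
    if tag == "alt":
        return alt(der(c, r[1]), der(c, r[2]))
    if tag == "seq":
        if nullable(r[1]):             # r1 may match [], so c may start in r2
            return alt(seq(der(c, r[1]), r[2]), der(c, r[2]))
        return seq(der(c, r[1]), r[2])
    if tag == "star":
        return seq(der(c, r[1]), r)    # unfold one iteration
    return ZERO                        # zero, one

def matches(s, r):
    for c in s:                        # compute r \ s character by character
        r = der(c, r)
    return nullable(r)
```

Running this sketch on the $(a+aa)^*$ example also makes the size problem visible: printing the intermediate derivatives shows them growing rapidly with each step.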
\subsection*{Values and the Lexing Algorithm by Sulzmann and Lu}

One limitation of Brzozowski's algorithm is that it only produces a
YES/NO answer for whether a string is being matched by a regular
expression. Sulzmann and Lu~\cite{Sulzmann2014} extended this algorithm
to allow generation of an actual matching, called a \emph{value} or
sometimes also \emph{lexical value}. These values and regular
expressions correspond to each other as illustrated in the following
table:

\begin{center}
\begin{tabular}{c@{\hspace{20mm}}c}
\begin{tabular}{@{}rrl@{}}
\multicolumn{3}{@{}l}{\textbf{Regular Expressions}}\medskip\\
$r$ & $::=$  & $\ZERO$\\
& $\mid$ & $\ONE$   \\
& $\mid$ & $c$          \\
& $\mid$ & $r_1 \cdot r_2$\\
& $\mid$ & $r_1 + r_2$   \\
\\
& $\mid$ & $r^*$         \\
\end{tabular}
&
\begin{tabular}{@{\hspace{0mm}}rrl@{}}
\multicolumn{3}{@{}l}{\textbf{Values}}\medskip\\
$v$ & $::=$  & \\
&        & $\Empty$   \\
& $\mid$ & $\Char(c)$          \\
& $\mid$ & $\Seq\,v_1\, v_2$\\
& $\mid$ & $\Left(v)$   \\
& $\mid$ & $\Right(v)$  \\
& $\mid$ & $\Stars\,[v_1,\ldots\,v_n]$ \\
\end{tabular}
\end{tabular}
\end{center}

\noindent
The contribution of Sulzmann and Lu is an extension of Brzozowski's
algorithm by a second phase (the first phase being building successive
derivatives---see \eqref{graph:*}). In this second phase, a POSIX value
is generated in case the regular expression matches the string.
Pictorially, the Sulzmann and Lu algorithm is as follows:

\begin{ceqn}
\begin{equation}\label{graph:2}
\begin{tikzcd}
r_0 \arrow[r, "\backslash c_0"]  \arrow[d] & r_1 \arrow[r, "\backslash c_1"] \arrow[d] & r_2 \arrow[r, dashed] \arrow[d] & r_n \arrow[d, "mkeps" description] \\
v_0           & v_1 \arrow[l,"inj_{r_0} c_0"]                & v_2 \arrow[l, "inj_{r_1} c_1"]              & v_n \arrow[l, dashed]
\end{tikzcd}
\end{equation}
\end{ceqn}

\noindent
For convenience, we shall employ the following notations: the regular
expression we start with is $r_0$, and the given string $s$ is composed
of characters $c_0 c_1 \ldots c_{n-1}$. In the first phase, from
left to right, we build the derivatives $r_1$, $r_2$, \ldots according
to the characters $c_0$, $c_1$ until we exhaust the string and obtain
the derivative $r_n$. We test whether this derivative is
$\textit{nullable}$ or not. If not, we know the string does not match
$r$ and no value needs to be generated. If yes, we start building the
values incrementally by \emph{injecting} back the characters into the
earlier values $v_n, \ldots, v_0$. This is the second phase of the
algorithm, from right to left. For the first value $v_n$, we call the
function $\textit{mkeps}$, which builds the lexical value
for how the empty string has been matched by the (nullable) regular
expression $r_n$. This function is defined as

\begin{center}
\begin{tabular}{lcl}
$\mkeps(\ONE)$ 		& $\dn$ & $\Empty$ \\
$\mkeps(r_{1}+r_{2})$	& $\dn$
& \textit{if} $\nullable(r_{1})$\\
& & \textit{then} $\Left(\mkeps(r_{1}))$\\
& & \textit{else} $\Right(\mkeps(r_{2}))$\\
$\mkeps(r_1\cdot r_2)$ 	& $\dn$ & $\Seq\,(\mkeps\,r_1)\,(\mkeps\,r_2)$\\
$\mkeps(r^*)$	        & $\dn$ & $\Stars\,[]$
\end{tabular}
\end{center}

\noindent
After the $\mkeps$-call, we inject back the characters one by one in order to build
the lexical value $v_i$ for how the regular expression $r_i$ matches the string $s_i$
($s_i = c_i \ldots c_{n-1}$) from the previous lexical value $v_{i+1}$.
After injecting back $n$ characters, we get the lexical value for how $r_0$
matches $s$. For this Sulzmann and Lu defined a function that reverses
the ``chopping off'' of characters during the derivative phase. The
corresponding function is called \emph{injection}, written
$\textit{inj}$; it takes three arguments: the first one is a regular
expression ${r_{i-1}}$, before the character is chopped off, the second
is the character ${c_{i-1}}$ we want to inject, and the
third argument is the value ${v_i}$, into which one wants to inject the
character (it corresponds to the regular expression after the character
has been chopped off). The result of this function is a new value. The
definition of $\textit{inj}$ is as follows:

\begin{center}
\begin{tabular}{l@{\hspace{1mm}}c@{\hspace{1mm}}l}
$\textit{inj}\,(c)\,c\,\Empty$            & $\dn$ & $\Char\,c$\\
$\textit{inj}\,(r_1 + r_2)\,c\,\Left(v)$ & $\dn$ & $\Left(\textit{inj}\,r_1\,c\,v)$\\
$\textit{inj}\,(r_1 + r_2)\,c\,\Right(v)$ & $\dn$ & $\Right(\textit{inj}\,r_2\,c\,v)$\\
$\textit{inj}\,(r_1 \cdot r_2)\,c\,\Seq(v_1,v_2)$ & $\dn$  & $\Seq(\textit{inj}\,r_1\,c\,v_1,v_2)$\\
$\textit{inj}\,(r_1 \cdot r_2)\,c\,\Left(\Seq(v_1,v_2))$ & $\dn$  & $\Seq(\textit{inj}\,r_1\,c\,v_1,v_2)$\\
$\textit{inj}\,(r_1 \cdot r_2)\,c\,\Right(v)$ & $\dn$  & $\Seq(\textit{mkeps}(r_1),\textit{inj}\,r_2\,c\,v)$\\
$\textit{inj}\,(r^*)\,c\,\Seq(v,\Stars\,vs)$         & $\dn$  & $\Stars((\textit{inj}\,r\,c\,v)\,::\,vs)$\\
\end{tabular}
\end{center}

\noindent This definition is by recursion on the ``shape'' of regular
expressions and values.

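The two phases, derivatives forward and injections backward, can likewise be sketched in Python. This is again only an illustrative sketch of the algorithm, not the report's formalisation; values are tagged tuples such as \texttt{("Seq", v1, v2)}, and the matcher pieces are repeated so that the sketch runs standalone:

```python
# Sulzmann and Lu's two-phase lexer: derivatives forward, injections backward.
ZERO, ONE = ("zero",), ("one",)

def nullable(r):
    tag = r[0]
    if tag in ("one", "star"): return True
    if tag == "alt": return nullable(r[1]) or nullable(r[2])
    if tag == "seq": return nullable(r[1]) and nullable(r[2])
    return False

def der(c, r):
    tag = r[0]
    if tag == "char": return ONE if r[1] == c else ZERO
    if tag == "alt":  return ("alt", der(c, r[1]), der(c, r[2]))
    if tag == "seq":
        if nullable(r[1]):
            return ("alt", ("seq", der(c, r[1]), r[2]), der(c, r[2]))
        return ("seq", der(c, r[1]), r[2])
    if tag == "star": return ("seq", der(c, r[1]), r)
    return ZERO

def mkeps(r):                          # value for how a nullable r matches []
    tag = r[0]
    if tag == "one":  return ("Empty",)
    if tag == "alt":
        return ("Left", mkeps(r[1])) if nullable(r[1]) else ("Right", mkeps(r[2]))
    if tag == "seq":  return ("Seq", mkeps(r[1]), mkeps(r[2]))
    if tag == "star": return ("Stars", [])

def inj(r, c, v):                      # undo one derivative step inside v
    rt, vt = r[0], v[0]
    if rt == "char" and vt == "Empty": return ("Char", c)
    if rt == "alt"  and vt == "Left":  return ("Left", inj(r[1], c, v[1]))
    if rt == "alt"  and vt == "Right": return ("Right", inj(r[2], c, v[1]))
    if rt == "seq"  and vt == "Seq":   return ("Seq", inj(r[1], c, v[1]), v[2])
    if rt == "seq"  and vt == "Left":  # v = Left(Seq(v1, v2))
        return ("Seq", inj(r[1], c, v[1][1]), v[1][2])
    if rt == "seq"  and vt == "Right": # r1 matched [], c went into r2
        return ("Seq", mkeps(r[1]), inj(r[2], c, v[1]))
    if rt == "star":                   # v = Seq(v1, Stars vs)
        return ("Stars", [inj(r[1], c, v[1])] + v[2][1])

def lexer(r, s):
    if not s:
        return mkeps(r) if nullable(r) else None
    v = lexer(der(s[0], r), s[1:])
    return None if v is None else inj(r, s[0], v)
```

For example, lexing the string $ab$ with $a \cdot (b + c)$ yields the value $\Seq(\Char(a), \Left(\Char(b)))$.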
\subsection*{Simplification of Regular Expressions}

The main drawback of building successive derivatives according
to Brzozowski's definition is that they can grow very quickly in size.
This is mainly due to the fact that the derivative operation often
generates ``useless'' $\ZERO$s and $\ONE$s in derivatives. As a result, if
implemented naively both algorithms by Brzozowski and by Sulzmann and Lu
are excruciatingly slow. For example when starting with the regular
expression $(a + aa)^*$ and building 12 successive derivatives
w.r.t.~the character $a$, one obtains a derivative regular expression
with more than 8000 nodes (when viewed as a tree). Operations like
$\textit{der}$ and $\nullable$ need to traverse such trees and
consequently the bigger the size of the derivative the slower the
algorithm.

Fortunately, one can simplify regular expressions after each derivative
step.
Various simplifications of regular expressions are possible, such
as the simplification of $\ZERO + r$, $r + \ZERO$, $\ONE\cdot r$, $r
\cdot \ONE$, and $r + r$ to just $r$.
Suppose we apply simplification after each derivative step, and compose
these two operations together as an atomic one: $a \backslash_{simp}\,c \dn
\textit{simp}(a \backslash c)$. Then we can build values without having
a cumbersome regular expression, and fortunately if we are careful
enough in making some extra rectifications, the POSIX value of how
regular expressions match strings will not be affected---although this is much harder
to establish. Some initial results in this regard have been
obtained in \cite{AusafDyckhoffUrban2016}.

If we want the size of derivatives in Sulzmann and Lu's algorithm to
stay even lower, we would need more aggressive simplifications.
Essentially we need to delete useless $\ZERO$s and $\ONE$s, as well as
delete duplicates whenever possible. For example, the parentheses in
$(a+b) \cdot c + b\cdot c$ can be opened up to get $a\cdot c + b \cdot c + b
\cdot c$, and then simplified to just $a \cdot c + b \cdot c$. Another
example is simplifying $(a^*+a) + (a^*+ \ONE) + (a +\ONE)$ to just
$a^*+a+\ONE$. Adding these more aggressive simplification rules helps us
to achieve a very tight size bound, namely,
the same size bound as that of the \emph{partial derivatives}.
We also want to get rid of the complex and fragile rectification of values.

In order to implement the idea of ``spilling out alternatives'' and to
make this compatible with the $\textit{inj}$-mechanism, we use
\emph{bitcodes}. They were first introduced by Sulzmann and Lu.
Here bits and bitcodes (lists of bits) are defined as:

\begin{center}
$b ::=   1 \mid  0 \qquad
bs ::= [] \mid b::bs
$
\end{center}

\noindent
The $1$ and $0$ are not in bold in order to avoid
confusion with the regular expressions $\ZERO$ and $\ONE$. Bitcodes (or
bit-lists) can be used to encode values (or potentially incomplete values) in a
compact form. This can be straightforwardly seen in the following
coding function from values to bitcodes:

\begin{center}
\begin{tabular}{lcl}
$\textit{code}(\Empty)$ & $\dn$ & $[]$\\
$\textit{code}(\Char\,c)$ & $\dn$ & $[]$\\
$\textit{code}(\Left\,v)$ & $\dn$ & $0 :: code(v)$\\
$\textit{code}(\Right\,v)$ & $\dn$ & $1 :: code(v)$\\
$\textit{code}(\Seq\,v_1\,v_2)$ & $\dn$ & $code(v_1) \,@\, code(v_2)$\\
$\textit{code}(\Stars\,[])$ & $\dn$ & $[0]$\\
$\textit{code}(\Stars\,(v\!::\!vs))$ & $\dn$ & $1 :: code(v) \;@\;
code(\Stars\,vs)$
\end{tabular}
\end{center}

\noindent
Here $\textit{code}$ encodes a value into a bitcode by converting
$\Left$ into $0$, $\Right$ into $1$, and marking the start of a non-empty
star iteration by $1$. The border where a local star terminates
is marked by $0$. This coding is lossy, as it throws away the information about
characters, and also does not encode the ``boundary'' between two
sequence values. Moreover, with only the bitcode we cannot even tell
whether the $1$s and $0$s are for $\Left/\Right$ or $\Stars$. The
reason for choosing this compact way of storing information is that the
relatively small bits can be easily manipulated and ``moved
around'' in a regular expression. In order to recover values, we will
need the corresponding regular expression as extra information. This
means the decoding function is defined as:

%\begin{definition}[Bitdecoding of Values]\mbox{}
\begin{center}
\begin{tabular}{@{}l@{\hspace{1mm}}c@{\hspace{1mm}}l@{}}
$\textit{decode}'\,bs\,(\ONE)$ & $\dn$ & $(\Empty, bs)$\\
$\textit{decode}'\,bs\,(c)$ & $\dn$ & $(\Char\,c, bs)$\\
$\textit{decode}'\,(0\!::\!bs)\;(r_1 + r_2)$ & $\dn$ &
$\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r_1\;\textit{in}\;
(\Left\,v, bs_1)$\\
$\textit{decode}'\,(1\!::\!bs)\;(r_1 + r_2)$ & $\dn$ &
$\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r_2\;\textit{in}\;
(\Right\,v, bs_1)$\\
$\textit{decode}'\,bs\;(r_1\cdot r_2)$ & $\dn$ &
$\textit{let}\,(v_1, bs_1) = \textit{decode}'\,bs\,r_1\;\textit{in}$\\
& & $\textit{let}\,(v_2, bs_2) = \textit{decode}'\,bs_1\,r_2$\\
& & \hspace{35mm}$\textit{in}\;(\Seq\,v_1\,v_2, bs_2)$\\
$\textit{decode}'\,(0\!::\!bs)\,(r^*)$ & $\dn$ & $(\Stars\,[], bs)$\\
$\textit{decode}'\,(1\!::\!bs)\,(r^*)$ & $\dn$ &
$\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r\;\textit{in}$\\
& & $\textit{let}\,(\Stars\,vs, bs_2) = \textit{decode}'\,bs_1\,r^*$\\
& & \hspace{35mm}$\textit{in}\;(\Stars\,v\!::\!vs, bs_2)$\bigskip\\
$\textit{decode}\,bs\,r$ & $\dn$ &
$\textit{let}\,(v, bs') = \textit{decode}'\,bs\,r\;\textit{in}$\\
& & $\textit{if}\;bs' = []\;\textit{then}\;\textit{Some}\,v\;
\textit{else}\;\textit{None}$
\end{tabular}
\end{center}
%\end{definition}

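The coding and decoding pair can be sketched in Python as follows. This is an illustrative sketch only; values are tagged tuples such as \texttt{("Seq", v1, v2)} and regular expressions use lowercase tags such as \texttt{("alt", r1, r2)}:

```python
# code: flatten a value into a bitcode; decode: recover it, guided by the regex.
def code(v):
    tag = v[0]
    if tag in ("Empty", "Char"): return []          # characters are not stored
    if tag == "Left":  return [0] + code(v[1])
    if tag == "Right": return [1] + code(v[1])
    if tag == "Seq":   return code(v[1]) + code(v[2])
    if tag == "Stars":
        if not v[1]: return [0]                     # 0 ends the iterations
        return [1] + code(v[1][0]) + code(("Stars", v[1][1:]))

def decode1(bs, r):                                 # decode' in the text
    tag = r[0]
    if tag == "one":  return ("Empty",), bs
    if tag == "char": return ("Char", r[1]), bs
    if tag == "alt":
        if bs[0] == 0:
            v, rest = decode1(bs[1:], r[1]); return ("Left", v), rest
        v, rest = decode1(bs[1:], r[2]); return ("Right", v), rest
    if tag == "seq":
        v1, bs1 = decode1(bs, r[1])
        v2, bs2 = decode1(bs1, r[2])
        return ("Seq", v1, v2), bs2
    if tag == "star":
        if bs[0] == 0: return ("Stars", []), bs[1:]
        v, bs1 = decode1(bs[1:], r[1])              # 1 starts one iteration
        stars, bs2 = decode1(bs1, r)
        return ("Stars", [v] + stars[1]), bs2

def decode(bs, r):
    v, rest = decode1(bs, r)
    return v if rest == [] else None                # Some v / None
```

For instance, the value $\Stars\,[\Char(a),\Char(a)]$ for $a^*$ is coded as the bit-list $[1,1,0]$, and decoding that bit-list with $a^*$ as a guide recovers the value.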
Sulzmann and Lu integrated the bitcodes into regular expressions to
create annotated regular expressions \cite{Sulzmann2014}.
\emph{Annotated regular expressions} are defined by the following
grammar:%\comment{ALTS should have an $as$ in the definitions, not just $a_1$ and $a_2$}

\begin{center}
\begin{tabular}{lcl}
$\textit{a}$ & $::=$  & $\ZERO$\\
& $\mid$ & $_{bs}\ONE$\\
& $\mid$ & $_{bs}{\bf c}$\\
& $\mid$ & $_{bs}\sum\,as$\\
& $\mid$ & $_{bs}a_1\cdot a_2$\\
& $\mid$ & $_{bs}a^*$
\end{tabular}
\end{center}
%(in \textit{ALTS})

\noindent
where $bs$ stands for bitcodes, $a$ for \textbf{a}nnotated regular
expressions and $as$ for a list of annotated regular expressions.
The alternative constructor ($\sum$) has been generalised to
accept a list of annotated regular expressions rather than just two.
We will show that these bitcodes encode information about
the (POSIX) value that should be generated by the Sulzmann and Lu
algorithm.

To do lexing using annotated regular expressions, we shall first
transform the usual (un-annotated) regular expressions into annotated
regular expressions. This operation is called \emph{internalisation} and
defined as follows:

%\begin{definition}
\begin{center}
\begin{tabular}{lcl}
$(\ZERO)^\uparrow$ & $\dn$ & $\ZERO$\\
$(\ONE)^\uparrow$ & $\dn$ & $_{[]}\ONE$\\
$(c)^\uparrow$ & $\dn$ & $_{[]}{\bf c}$\\
$(r_1 + r_2)^\uparrow$ & $\dn$ &
$_{[]}\sum[\textit{fuse}\,[0]\,r_1^\uparrow,\,
\textit{fuse}\,[1]\,r_2^\uparrow]$\\
$(r_1\cdot r_2)^\uparrow$ & $\dn$ &
$_{[]}r_1^\uparrow \cdot r_2^\uparrow$\\
$(r^*)^\uparrow$ & $\dn$ &
$_{[]}(r^\uparrow)^*$\\
\end{tabular}
\end{center}
%\end{definition}

\noindent
We use up arrows here to indicate that the basic un-annotated regular
expressions are ``lifted up'' into something slightly more complex. In the
fourth clause, $\textit{fuse}$ is an auxiliary function that helps to
attach bits to the front of an annotated regular expression. Its
definition is as follows:

\begin{center}
\begin{tabular}{lcl}
$\textit{fuse}\;bs \; \ZERO$ & $\dn$ & $\ZERO$\\
$\textit{fuse}\;bs\; _{bs'}\ONE$ & $\dn$ &
$_{bs @ bs'}\ONE$\\
$\textit{fuse}\;bs\;_{bs'}{\bf c}$ & $\dn$ &
$_{bs@bs'}{\bf c}$\\
$\textit{fuse}\;bs\,_{bs'}\sum\textit{as}$ & $\dn$ &
$_{bs@bs'}\sum\textit{as}$\\
$\textit{fuse}\;bs\; _{bs'}a_1\cdot a_2$ & $\dn$ &
$_{bs@bs'}a_1 \cdot a_2$\\
$\textit{fuse}\;bs\,_{bs'}a^*$ & $\dn$ &
$_{bs @ bs'}a^*$
\end{tabular}
\end{center}

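Internalisation and $\textit{fuse}$ can be sketched in Python as follows. This is an illustrative sketch, not the report's formalisation; annotated regular expressions are represented as tuples whose second component is the bitcode list:

```python
# Annotated regexes: ("AZERO",), ("AONE", bs), ("ACHAR", bs, c),
# ("AALTS", bs, as), ("ASEQ", bs, a1, a2), ("ASTAR", bs, a).

def fuse(bs, a):
    if a[0] == "AZERO":
        return a                       # no bits are attached to AZERO
    return (a[0], bs + a[1]) + a[2:]   # prepend bs to the existing bitcode

def internalise(r):                    # the up-arrow operation
    tag = r[0]
    if tag == "zero": return ("AZERO",)
    if tag == "one":  return ("AONE", [])
    if tag == "char": return ("ACHAR", [], r[1])
    if tag == "alt":                   # record which branch would be taken
        return ("AALTS", [], [fuse([0], internalise(r[1])),
                              fuse([1], internalise(r[2]))])
    if tag == "seq":
        return ("ASEQ", [], internalise(r[1]), internalise(r[2]))
    if tag == "star":
        return ("ASTAR", [], internalise(r[1]))
```

For example, internalising $a + b$ yields an alternative whose branches carry the bits $[0]$ and $[1]$ respectively, recording which branch a later match would correspond to.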
\noindent
After internalising the regular expression, we perform successive
derivative operations on the annotated regular expressions. This
derivative operation is the same as what we had previously for the
basic regular expressions, except that we need to take care of
the bitcodes:

\iffalse
%\begin{definition}{bder}
\begin{center}
\begin{tabular}{@{}lcl@{}}
$(\textit{ZERO})\,\backslash c$ & $\dn$ & $\textit{ZERO}$\\
$(\textit{ONE}\;bs)\,\backslash c$ & $\dn$ & $\textit{ZERO}$\\
$(\textit{CHAR}\;bs\,d)\,\backslash c$ & $\dn$ &
$\textit{if}\;c=d\; \;\textit{then}\;
\textit{ONE}\;bs\;\textit{else}\;\textit{ZERO}$\\
$(\textit{ALTS}\;bs\,as)\,\backslash c$ & $\dn$ &
$\textit{ALTS}\;bs\,(as.map(\backslash c))$\\
$(\textit{SEQ}\;bs\,a_1\,a_2)\,\backslash c$ & $\dn$ &
$\textit{if}\;\textit{bnullable}\,a_1$\\
& &$\textit{then}\;\textit{ALTS}\,bs\,List((\textit{SEQ}\,[]\,(a_1\,\backslash c)\,a_2),$\\
& &$\phantom{\textit{then}\;\textit{ALTS}\,bs\,}(\textit{fuse}\,(\textit{bmkeps}\,a_1)\,(a_2\,\backslash c)))$\\
& &$\textit{else}\;\textit{SEQ}\,bs\,(a_1\,\backslash c)\,a_2$\\
$(\textit{STAR}\,bs\,a)\,\backslash c$ & $\dn$ &
$\textit{SEQ}\;bs\,(\textit{fuse}\, [\Z] (r\,\backslash c))\,
(\textit{STAR}\,[]\,r)$
\end{tabular}
\end{center}
%\end{definition}

\begin{center}
\begin{tabular}{@{}lcl@{}}
$(\textit{ZERO})\,\backslash c$ & $\dn$ & $\textit{ZERO}$\\
$(_{bs}\textit{ONE})\,\backslash c$ & $\dn$ & $\textit{ZERO}$\\
$(_{bs}\textit{CHAR}\;d)\,\backslash c$ & $\dn$ &
$\textit{if}\;c=d\; \;\textit{then}\;
_{bs}\textit{ONE}\;\textit{else}\;\textit{ZERO}$\\
$(_{bs}\textit{ALTS}\;\textit{as})\,\backslash c$ & $\dn$ &
$_{bs}\textit{ALTS}\;(\textit{as}.\textit{map}(\backslash c))$\\
$(_{bs}\textit{SEQ}\;a_1\,a_2)\,\backslash c$ & $\dn$ &
$\textit{if}\;\textit{bnullable}\,a_1$\\
& &$\textit{then}\;_{bs}\textit{ALTS}\,List((_{[]}\textit{SEQ}\,(a_1\,\backslash c)\,a_2),$\\
& &$\phantom{\textit{then}\;_{bs}\textit{ALTS}\,}(\textit{fuse}\,(\textit{bmkeps}\,a_1)\,(a_2\,\backslash c)))$\\
& &$\textit{else}\;_{bs}\textit{SEQ}\,(a_1\,\backslash c)\,a_2$\\
$(_{bs}\textit{STAR}\,a)\,\backslash c$ & $\dn$ &
$_{bs}\textit{SEQ}\;(\textit{fuse}\, [0] \; r\,\backslash c )\,
(_{bs}\textit{STAR}\,[]\,r)$
\end{tabular}
\end{center}
%\end{definition}
\fi

\begin{center}
\begin{tabular}{@{}lcl@{}}
$(\ZERO)\,\backslash c$ & $\dn$ & $\ZERO$\\
$(_{bs}\ONE)\,\backslash c$ & $\dn$ & $\ZERO$\\
$(_{bs}{\bf d})\,\backslash c$ & $\dn$ &
$\textit{if}\;c=d\; \;\textit{then}\;
_{bs}\ONE\;\textit{else}\;\ZERO$\\
$(_{bs}\sum \;\textit{as})\,\backslash c$ & $\dn$ &
$_{bs}\sum\;(\textit{as.map}(\backslash c))$\\
$(_{bs}\;a_1\cdot a_2)\,\backslash c$ & $\dn$ &
$\textit{if}\;\textit{bnullable}\,a_1$\\
& &$\textit{then}\;_{bs}\sum\,[(_{[]}(a_1\,\backslash c)\cdot a_2),$\\
& &$\phantom{\textit{then}\;_{bs}\sum\,}(\textit{fuse}\,(\textit{bmkeps}\,a_1)\,(a_2\,\backslash c))]$\\
& &$\textit{else}\;_{bs}\,(a_1\,\backslash c)\cdot a_2$\\
$(_{bs}a^*)\,\backslash c$ & $\dn$ &
$_{bs}((\textit{fuse}\, [1]\; (a\,\backslash c))\cdot
(_{[]}a^*))$
\end{tabular}
\end{center}

%\end{definition}
\noindent
For instance, when we take the derivative of $_{bs}a^*$ with respect to $c$,
we need to unfold it into a sequence,
and attach an additional bit $1$ to the front of $a \backslash c$
to indicate that there is one more star iteration. The sequence clause
is more subtle: when $a_1$ is $\textit{bnullable}$ (here
\textit{bnullable} is exactly the same as $\textit{nullable}$, except
that it is for annotated regular expressions; therefore we omit the
definition) and $\textit{bmkeps}$ correctly extracts the bitcode for how
$a_1$ matches the string prior to character $c$ (more on this later),
then the right branch of the alternative, which is $\textit{fuse} \; (\bmkeps \; a_1)\; (a_2
\backslash c)$, will collapse the regular expression $a_1$ (as it has
already been fully matched) and store the parsing information at the
head of the regular expression $a_2 \backslash c$ by fusing to it. The
bitsequence $\textit{bs}$, which was initially attached to the
first element of the sequence $a_1 \cdot a_2$, has
now been elevated to the top-level of $\sum$, as this information will be
needed whichever way the sequence is matched---no matter whether $c$ belongs
to $a_1$ or $a_2$. After building these derivatives and maintaining all
the lexing information, we complete the lexing by collecting the
bitcodes using a generalised version of the $\textit{mkeps}$ function
for annotated regular expressions, called $\textit{bmkeps}$:

%\begin{definition}[\textit{bmkeps}]\mbox{}
\begin{center}
\begin{tabular}{lcl}
$\textit{bmkeps}\,(_{bs}\ONE)$ & $\dn$ & $bs$\\
$\textit{bmkeps}\,(_{bs}\sum a::\textit{as})$ & $\dn$ &
$\textit{if}\;\textit{bnullable}\,a$\\
& &$\textit{then}\;bs\,@\,\textit{bmkeps}\,a$\\
& &$\textit{else}\;bs\,@\,\textit{bmkeps}\,(_{[]}\sum \textit{as})$\\
$\textit{bmkeps}\,(_{bs} a_1 \cdot a_2)$ & $\dn$ &
$bs \,@\,\textit{bmkeps}\,a_1\,@\, \textit{bmkeps}\,a_2$\\
$\textit{bmkeps}\,(_{bs}a^*)$ & $\dn$ &
$bs \,@\, [0]$
\end{tabular}
\end{center}
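The definition above can be turned into running code. The following is a minimal Scala sketch of $\textit{bnullable}$ and $\textit{bmkeps}$; the constructor names (\texttt{AZERO}, \texttt{AONE}, \ldots) and the representation of bitcodes as \texttt{List[Int]} are assumptions of this sketch and need not coincide with the actual implementation:

```scala
// Annotated regular expressions; every constructor carries its bitcode bs
// (names and the List[Int] bit representation are assumptions)
sealed trait ARexp
case object AZERO extends ARexp
case class AONE(bs: List[Int]) extends ARexp
case class ACHAR(bs: List[Int], c: Char) extends ARexp
case class AALTS(bs: List[Int], as: List[ARexp]) extends ARexp
case class ASEQ(bs: List[Int], a1: ARexp, a2: ARexp) extends ARexp
case class ASTAR(bs: List[Int], a: ARexp) extends ARexp

// bnullable is ordinary nullability; the bits play no role here
def bnullable(a: ARexp): Boolean = a match {
  case AZERO           => false
  case AONE(_)         => true
  case ACHAR(_, _)     => false
  case AALTS(_, as)    => as.exists(bnullable)
  case ASEQ(_, a1, a2) => bnullable(a1) && bnullable(a2)
  case ASTAR(_, _)     => true
}

// bmkeps collects the bits along a nullable path; like the definition in
// the text it is partial: it may only be called on bnullable expressions
def bmkeps(a: ARexp): List[Int] = a match {
  case AONE(bs)            => bs
  case AALTS(bs, a1 :: as) =>
    if (bnullable(a1)) bs ++ bmkeps(a1) else bs ++ bmkeps(AALTS(Nil, as))
  case ASEQ(bs, a1, a2)    => bs ++ bmkeps(a1) ++ bmkeps(a2)
  case ASTAR(bs, _)        => bs ++ List(0) // 0 marks the end of the star
  case _                   => sys.error("bmkeps: not bnullable")
}
```

For instance, on $_0(\ZERO + {_1\ONE})$ the function skips the non-nullable $\ZERO$ and returns the bits $[0,1]$.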
%\end{definition}

\noindent
This function completes the value information by travelling along the
path of the regular expression that corresponds to a POSIX value,
collecting all the bitcodes and using the bit $0$ to mark the end of star
iterations. If we take the bitcodes produced by $\textit{bmkeps}$ and
decode them, we get the value we expect. The corresponding lexing
algorithm looks as follows:

\begin{center}
\begin{tabular}{lcl}
$\textit{blexer}\;r\,s$ & $\dn$ &
$\textit{let}\;a = (r^\uparrow)\backslash s\;\textit{in}$\\
& & $\;\;\textit{if}\; \textit{bnullable}(a)$\\
& & $\;\;\textit{then}\;\textit{decode}\,(\textit{bmkeps}\,a)\,r$\\
& & $\;\;\textit{else}\;\textit{None}$
\end{tabular}
\end{center}
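The function $\textit{decode}$ is used here but not defined in this section. The following Scala sketch shows one standard way it can be defined, adapted to the bit conventions of this report ($1$ for one more star iteration, $0$ for the end of a star); all names (\texttt{Rexp}, \texttt{Val}, \texttt{Chr}, \ldots) are assumptions, and the value constructors are renamed to avoid clashes with the Scala standard library:

```scala
// Plain regular expressions and values (assumed names; the text writes the
// value constructors as Empty, Char, Left, Right, Seq, Stars)
sealed trait Rexp
case object ZERO extends Rexp
case object ONE extends Rexp
case class CHAR(c: Char) extends Rexp
case class ALT(r1: Rexp, r2: Rexp) extends Rexp
case class SEQ(r1: Rexp, r2: Rexp) extends Rexp
case class STAR(r: Rexp) extends Rexp

sealed trait Val
case object Empty extends Val
case class Chr(c: Char) extends Val
case class VLeft(v: Val) extends Val
case class VRight(v: Val) extends Val
case class Sequ(v1: Val, v2: Val) extends Val
case class Stars(vs: List[Val]) extends Val

// decodePrefix consumes a prefix of the bits and returns the leftover bits
def decodePrefix(bs: List[Int], r: Rexp): (Val, List[Int]) = (bs, r) match {
  case (bs, ONE)     => (Empty, bs)
  case (bs, CHAR(c)) => (Chr(c), bs)
  case (0 :: rest, ALT(r1, _)) =>
    val (v, bs1) = decodePrefix(rest, r1); (VLeft(v), bs1)
  case (1 :: rest, ALT(_, r2)) =>
    val (v, bs1) = decodePrefix(rest, r2); (VRight(v), bs1)
  case (bs, SEQ(r1, r2)) =>
    val (v1, bs1) = decodePrefix(bs, r1)
    val (v2, bs2) = decodePrefix(bs1, r2)
    (Sequ(v1, v2), bs2)
  case (1 :: rest, STAR(r1)) => // bit 1: one more star iteration follows
    val (v, bs1) = decodePrefix(rest, r1)
    decodePrefix(bs1, STAR(r1)) match {
      case (Stars(vs), bs2) => (Stars(v :: vs), bs2)
      case _                => sys.error("decode: ill-formed star bits")
    }
  case (0 :: rest, STAR(_)) => (Stars(Nil), rest) // bit 0: end of the star
  case _ => sys.error("decode: bits do not fit the regular expression")
}

def decode(bs: List[Int], r: Rexp): Val = decodePrefix(bs, r)._1
```

With this, the result of $\textit{blexer}$ on a bnullable derivative is $\textit{decode}\,(\textit{bmkeps}\,a)\,r$.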

\noindent
In this definition $\_\backslash s$ is the generalisation of the derivative
operation from characters to strings (just like the derivatives for
unannotated regular expressions).


\subsection*{Our Simplification Rules}

The main point of the bitcodes and annotated regular expressions is that
we can apply rather aggressive (in terms of size) simplification rules
in order to keep derivatives small. We have developed such
``aggressive'' simplification rules and generated test data that show
that the expected bound can be achieved. Obviously we could only
partially cover the search space as there are infinitely many regular
expressions and strings.

One modification we introduced is to allow a list of annotated regular
expressions in the $\sum$ constructor. This allows us to not just
delete unnecessary $\ZERO$s and $\ONE$s from regular expressions, but
also unnecessary ``copies'' of regular expressions (very similar to
simplifying $r + r$ to just $r$, but in a more general setting). Another
modification is that we use simplification rules inspired by Antimirov's
work on partial derivatives. They maintain the idea that only the first
``copy'' of a regular expression in an alternative contributes to the
calculation of a POSIX value. All subsequent copies can be pruned away from
the regular expression. A recursive definition of our simplification function
that looks somewhat similar to our Scala code is given below:
%\comment{Use $\ZERO$, $\ONE$ and so on.
%Is it $ALTS$ or $ALTS$?}\\

\begin{center}
\begin{tabular}{@{}lcl@{}}

$\textit{simp} \; (_{bs}a_1\cdot a_2)$ & $\dn$ & $ (\textit{simp} \; a_1, \textit{simp} \; a_2) \; \textit{match} $ \\
&&$\quad\textit{case} \; (\ZERO, \_) \Rightarrow \ZERO$ \\
&&$\quad\textit{case} \; (\_, \ZERO) \Rightarrow \ZERO$ \\
&&$\quad\textit{case} \; (_{bs'}\ONE, a_2') \Rightarrow \textit{fuse} \; (bs\,@\,bs') \; a_2'$ \\
&&$\quad\textit{case} \; (a_1', \ONE) \Rightarrow \textit{fuse} \; bs \; a_1'$ \\
&&$\quad\textit{case} \; (a_1', a_2') \Rightarrow _{bs}a_1' \cdot a_2'$ \\

$\textit{simp} \; (_{bs}\sum \textit{as})$ & $\dn$ & $\textit{distinct}( \textit{flatten} ( \textit{as.map(simp)})) \; \textit{match} $ \\
&&$\quad\textit{case} \; [] \Rightarrow \ZERO$ \\
&&$\quad\textit{case} \; a :: [] \Rightarrow \textit{fuse} \; bs \; a$ \\
&&$\quad\textit{case} \; as' \Rightarrow _{bs}\sum \textit{as'}$\\

$\textit{simp} \; a$ & $\dn$ & $\textit{a} \qquad \textit{otherwise}$
\end{tabular}
\end{center}

\noindent
The simplification function performs a pattern match on the regular
expression. When it detects that the regular expression is an alternative
or a sequence, it tries to simplify its children regular expressions
recursively and then checks whether one of the children turns into $\ZERO$
or $\ONE$, which might trigger further simplification at the current level.
The most involved part is the $\sum$ clause, where we use two
auxiliary functions $\textit{flatten}$ and $\textit{distinct}$ to open up nested
alternatives and reduce as many duplicates as possible. Function
$\textit{distinct}$ keeps only the first occurring copy and removes all
duplicates that appear later. Function $\textit{flatten}$ opens up nested $\sum$s.
Its recursive definition is given below:

\begin{center}
\begin{tabular}{@{}lcl@{}}
$\textit{flatten} \; ((_{bs}\sum \textit{as}) :: \textit{as'})$ & $\dn$ & $(\textit{map} \;
(\textit{fuse}\;bs)\; \textit{as}) \; @ \; \textit{flatten} \; as' $ \\
$\textit{flatten} \; (\ZERO :: as')$ & $\dn$ & $ \textit{flatten} \; \textit{as'} $ \\
$\textit{flatten} \; (a :: as')$ & $\dn$ & $a :: \textit{flatten} \; \textit{as'}$ \quad(otherwise)
\end{tabular}
\end{center}

\noindent
Here $\textit{flatten}$ behaves like the traditional functional programming
flatten function, except that it also removes $\ZERO$s. In terms of regular
expressions, it removes parentheses, for example changing $a+(b+c)$ into
$a+b+c$.
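The two functions can be sketched in Scala as follows. Constructor names and the \texttt{List[Int]} bitcodes are assumptions; for $\textit{distinct}$ we use Scala's built-in \texttt{List.distinct} (which keeps the first occurrence), whereas the real implementation compares erased regular expressions; and we leave out the clause for a $\ONE$ in the second component of a sequence, since the bits of that $\ONE$ would have to go to the end rather than the front:

```scala
sealed trait ARexp
case object AZERO extends ARexp
case class AONE(bs: List[Int]) extends ARexp
case class ACHAR(bs: List[Int], c: Char) extends ARexp
case class AALTS(bs: List[Int], as: List[ARexp]) extends ARexp
case class ASEQ(bs: List[Int], a1: ARexp, a2: ARexp) extends ARexp
case class ASTAR(bs: List[Int], a: ARexp) extends ARexp

// fuse attaches bits to the front of an annotated regular expression
def fuse(bs: List[Int], a: ARexp): ARexp = a match {
  case AZERO             => AZERO
  case AONE(bs1)         => AONE(bs ++ bs1)
  case ACHAR(bs1, c)     => ACHAR(bs ++ bs1, c)
  case AALTS(bs1, as)    => AALTS(bs ++ bs1, as)
  case ASEQ(bs1, a1, a2) => ASEQ(bs ++ bs1, a1, a2)
  case ASTAR(bs1, a1)    => ASTAR(bs ++ bs1, a1)
}

// flatten opens up nested alternatives (fusing their bits) and drops ZEROs
def flatten(as: List[ARexp]): List[ARexp] = as match {
  case AALTS(bs1, as1) :: rest => as1.map(fuse(bs1, _)) ++ flatten(rest)
  case AZERO :: rest           => flatten(rest)
  case a :: rest               => a :: flatten(rest)
  case Nil                     => Nil
}

def simp(a: ARexp): ARexp = a match {
  case ASEQ(bs, a1, a2) => (simp(a1), simp(a2)) match {
    case (AZERO, _)       => AZERO
    case (_, AZERO)       => AZERO
    case (AONE(bs1), a2s) => fuse(bs ++ bs1, a2s) // keep the ONE's bits
    case (a1s, a2s)       => ASEQ(bs, a1s, a2s)
  }
  case AALTS(bs, as) => flatten(as.map(simp)).distinct match {
    case Nil       => AZERO
    case a1 :: Nil => fuse(bs, a1)
    case as1       => AALTS(bs, as1)
  }
  case a => a
}
```

For instance, simplifying $\ZERO + {_0(_0\ONE + {_1a})} + \ZERO$ drops both $\ZERO$s and opens the nested alternative, fusing the bit $0$ inwards.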

Having defined the $\simp$ function,
we can use the previous notation of natural
extension from derivative w.r.t.~character to derivative
w.r.t.~string:%\comment{simp in the [] case?}

\begin{center}
\begin{tabular}{lcl}
$r \backslash_{simp} (c\!::\!s) $ & $\dn$ & $(r \backslash_{simp}\, c) \backslash_{simp}\, s$ \\
$r \backslash_{simp} [\,] $ & $\dn$ & $r$
\end{tabular}
\end{center}

\noindent
to obtain an optimised version of the algorithm:

\begin{center}
\begin{tabular}{lcl}
$\textit{blexer\_simp}\;r\,s$ & $\dn$ &
$\textit{let}\;a = (r^\uparrow)\backslash_{simp}\, s\;\textit{in}$\\
& & $\;\;\textit{if}\; \textit{bnullable}(a)$\\
& & $\;\;\textit{then}\;\textit{decode}\,(\textit{bmkeps}\,a)\,r$\\
& & $\;\;\textit{else}\;\textit{None}$
\end{tabular}
\end{center}

\noindent
This algorithm keeps the regular expression size small: for example,
with this simplification the derivative of our previous $(a + aa)^*$
example, which before had 8000 nodes, is reduced to just 6 nodes and
stays constant, no matter how long the input string is.



\section{Current Work and Progress}
For reasons beyond this report, it turns out that a complete set of
simplification rules depends on values being encoded as
bitsequences.\footnote{Values are the results the lexing algorithms
generate; they encode how a regular expression matched a string.} We
already know that the lexing algorithm using bitsequences but
\emph{without} simplification is correct, albeit horribly
slow. Therefore in the past 6 months I was trying to prove that the
algorithm using bitsequences plus our simplification rules is
also correct. Formally this amounts to showing that

\begin{equation}\label{mainthm}
\blexers \; r \; s = \blexer \;r\;s
\end{equation}

\noindent
whereby $\blexers$ simplifies (makes derivatives smaller) in each
step, whereas with $\blexer$ the size can grow exponentially. This
would be an important milestone for my thesis, because we already
have a very good idea how to establish that our set of simplification
rules keeps the size of derivatives below a relatively tight bound.

In order to prove the main theorem in \eqref{mainthm}, we need to prove
that the two functions produce the same output. The definitions of these
two functions are shown below.

\begin{center}
\begin{tabular}{lcl}
$\textit{blexer}\;r\,s$ & $\dn$ &
$\textit{let}\;a = (r^\uparrow)\backslash s\;\textit{in}$\\
& & $\;\;\textit{if}\; \textit{bnullable}(a)$\\
& & $\;\;\textit{then}\;\textit{decode}\,(\textit{bmkeps}\,a)\,r$\\
& & $\;\;\textit{else}\;\textit{None}$
\end{tabular}
\end{center}

\begin{center}
\begin{tabular}{lcl}
$\blexers \; r \, s$ &$\dn$ &
$\textit{let} \; a = (r^\uparrow)\backslash_{simp}\, s\; \textit{in}$\\
& & $\; \; \textit{if} \; \textit{bnullable}(a)$\\
& & $\; \; \textit{then} \; \textit{decode}\,(\textit{bmkeps}\,a)\,r$\\
& & $\;\; \textit{else}\;\textit{None}$
\end{tabular}
\end{center}

\noindent
In these definitions $r^\uparrow$ is an internalisation function that
is the same in both cases; similarly $\textit{decode}$ and $\textit{bmkeps}$
are the same in both cases. Our main
theorem~\eqref{mainthm} therefore boils down to proving the following
two propositions (depending on which branch the if-else clause
takes). They establish that the derivatives \emph{with} simplification
do not change the computed result:

\begin{itemize}
\item{(a)} If a string $s$ is in the language $L(r)$, then \\
$\textit{bmkeps}\, ((r^\uparrow)\backslash_{simp}\,s) = \textit{bmkeps}\, ((r^\uparrow)\backslash s)$,\\
\item{(b)} If a string $s$ is not in the language $L(r)$, then
$\rup \backslash_{simp} \,s$ is not nullable.
\end{itemize}

\noindent
We have already proved the second part in Isabelle. This is actually
not too difficult because we can show that simplification does not
change the language of regular expressions.

If we can prove the first part, that is, that the bitsequence algorithm
with simplification produces the same result as the one without
simplification, then we are done. Unfortunately that part requires
more effort, because simplification not only needs to preserve the
language, but also needs to preserve the value (that is, the
computed result).

%\bigskip\noindent\rule[1.5ex]{\linewidth}{5pt}
%Do you want to keep this? You essentially want to say that the old
%method used retrieve, which unfortunately cannot be adopted to
%the simplification rules. You could just say that and give an example.
%However you have to think about how you give the example....nobody knows
%about AZERO etc yet. Maybe it might be better to use normal regexes
%like $a + aa$, but annotate bitsequences as subscript like $_1(_0a + _1aa)$.

%\bigskip\noindent\rule[1.5ex]{\linewidth}{5pt}
%REPLY:\\
%Yes, I am essentially saying that the old method
%cannot be adopted without adjustments.
%But this does not mean we should skip
%the proof of the bit-coded algorithm
%as it is still the main direction we are looking into
%to prove things. We are trying to modify
%the old proof to suit our needs, but not give
%up it totally, that is why i believe the old
%proof is fundamental in understanding
%what we are doing in the past 6 months.
%\bigskip\noindent\rule[1.5ex]{\linewidth}{5pt}

\subsubsection*{Existing Proof}

For this we have started by looking at the proof of
\begin{equation}\label{lexer}
\blexer \; (r^\uparrow) s = \lexer \;r \;s,
\end{equation}

%\noindent
%might provide us insight into proving
%\begin{center}
%$\blexer \; r^\uparrow \;s = \blexers \; r^\uparrow \;s$
%\end{center}

|
143
|
890 |
which established that the bit-sequence algorithm produces the same
|
134
|
891 |
result as the original algorithm, which does not use
|
|
892 |
bit-sequence.
|
106
|
893 |
The proof uses two ``tricks''. One is that it uses a \flex-function
|
104
|
894 |
|
94
|
895 |
\begin{center}
\begin{tabular}{lcl}
$\textit{flex} \;r\; f\; (c\!::\!s) $ & $\dn$ & $\textit{flex} \; (r\backslash c) \;(\lambda v. f (inj \; r \; c \; v)) \;s$ \\
$\textit{flex} \;r\; f\; [\,] $ & $\dn$ & $f$
\end{tabular}
\end{center}

\noindent
to prove the following property of the right-hand side in \eqref{lexer}:

\begin{center}
$\lexer \;r\; s = \flex \;r\;\textit{id} \; s \;(\mkeps \; (r\backslash s))$.
\end{center}


\noindent
This property links $\flex$ and $\lexer$: $\flex$ essentially performs
lexing by stacking up injection functions while doing derivatives,
explicitly showing the order in which the characters are
injected back in each step.
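The stacking of injection functions can be illustrated with a small generic Scala sketch, in which the derivative and injection operations are passed in as parameters (in the actual proof they are the concrete $\backslash$ and $\inj$ functions; the toy instantiation below is purely illustrative):

```scala
// flex stacks one injection per character: when the resulting function is
// finally applied to a value, the injections run in reverse order of the
// derivatives, putting the characters back one by one
def flex[R, V](r: R, f: V => V, s: List[Char],
               der: (R, Char) => R, inj: (R, Char, V) => V): V => V =
  s match {
    case c :: cs => flex(der(r, c), (v: V) => f(inj(r, c, v)), cs, der, inj)
    case Nil     => f
  }

// toy instantiation: a "regular expression" is just the list of characters
// consumed so far and injection puts a character back at the front of a
// value; the stacked injections then rebuild the input string in order
val rebuilt = flex[List[Char], List[Char]](
  Nil, identity, List('a', 'b', 'c'),
  (r, c) => r :+ c,
  (_, c, v) => c :: v)(Nil)
```

Applying the stacked function to the empty value returns \texttt{List('a', 'b', 'c')}, which makes visible the order in which the characters are injected back.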

\noindent
The other trick, which is the crux of the existing proof,
is the use of the $\retrieve$-function by Sulzmann and Lu:
\begin{center}
\begin{tabular}{@{}l@{\hspace{2mm}}c@{\hspace{2mm}}l@{}}
$\textit{retrieve}\,(_{bs}\ONE)\,\Empty$ & $\dn$ & $bs$\\
$\textit{retrieve}\,(_{bs}{\bf c})\,(\Char\,d)$ & $\dn$ & $bs$\\
$\textit{retrieve}\,(_{bs}\sum a::as)\,(\Left\,v)$ & $\dn$ &
$bs \,@\, \textit{retrieve}\,a\,v$\\
$\textit{retrieve}\,(_{bs}\sum a::as)\,(\Right\,v)$ & $\dn$ &
$\textit{bs} \,@\, \textit{retrieve}\,(_{[]}\sum as)\,v$\\
$\textit{retrieve}\,(_{bs}a_1\cdot a_2)\,(\Seq\,v_1\,v_2)$ & $\dn$ &
$bs \,@\,\textit{retrieve}\,a_1\,v_1\,@\, \textit{retrieve}\,a_2\,v_2$\\
$\textit{retrieve}\,(_{bs}a^*)\,(\Stars\,[])$ & $\dn$ &
$bs \,@\, [0]$\\
$\textit{retrieve}\,(_{bs}a^*)\,(\Stars\,(v\!::\!vs))$ & $\dn$ &\\
\multicolumn{3}{l}{
\hspace{3cm}$bs \,@\, [1] \,@\, \textit{retrieve}\,a\,v\,@\,
\textit{retrieve}\,(_{[]}a^*)\,(\Stars\,vs)$}\\
\end{tabular}
\end{center}
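A Scala sketch of $\retrieve$ follows, with the same assumed datatypes as before; the value constructors $\Left$, $\Right$ and $\Char$ are renamed to avoid clashing with the Scala standard library:

```scala
sealed trait ARexp
case object AZERO extends ARexp
case class AONE(bs: List[Int]) extends ARexp
case class ACHAR(bs: List[Int], c: Char) extends ARexp
case class AALTS(bs: List[Int], as: List[ARexp]) extends ARexp
case class ASEQ(bs: List[Int], a1: ARexp, a2: ARexp) extends ARexp
case class ASTAR(bs: List[Int], a: ARexp) extends ARexp

sealed trait Val
case object Empty extends Val
case class Chr(c: Char) extends Val      // Char in the text
case class VLeft(v: Val) extends Val     // Left in the text
case class VRight(v: Val) extends Val    // Right in the text
case class Sequ(v1: Val, v2: Val) extends Val
case class Stars(vs: List[Val]) extends Val

// retrieve assembles the bitcode from an annotated regular expression and
// a value that describes how the (erased) expression matched
def retrieve(a: ARexp, v: Val): List[Int] = (a, v) match {
  case (AONE(bs), Empty)                => bs
  case (ACHAR(bs, _), Chr(_))           => bs
  case (AALTS(bs, a1 :: _), VLeft(v1))  => bs ++ retrieve(a1, v1)
  case (AALTS(bs, _ :: as), VRight(v1)) => bs ++ retrieve(AALTS(Nil, as), v1)
  case (ASEQ(bs, a1, a2), Sequ(v1, v2)) =>
    bs ++ retrieve(a1, v1) ++ retrieve(a2, v2)
  case (ASTAR(bs, _), Stars(Nil))       => bs ++ List(0)
  case (ASTAR(bs, a1), Stars(v1 :: vs)) =>
    bs ++ List(1) ++ retrieve(a1, v1) ++ retrieve(ASTAR(Nil, a1), Stars(vs))
  case _ => sys.error("retrieve: value does not fit the regular expression")
}
```

Note that $\retrieve$ is partial: the value must fit the structure of the annotated regular expression, which is exactly the issue discussed later when retrieving from simplified expressions.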

\noindent
This function assembles the bitcode
using information from both the derivative regular expression and the
value. Sulzmann and Lu proposed this function, but did not prove
anything about it. Ausaf and Urban made use of the following
fact about $\retrieve$ in their proof:

\begin{center}
$\retrieve\; (\rup \backslash c) \; v = \retrieve \; \rup \; (\inj \;r \;c \; v)$
\end{center}

\noindent
whereby $r^\uparrow$ stands for the internalised version of $r$.
This fact, together with the following fact about how $\flex$ relates to
injection:

\begin{equation}\label{flex}
\flex \; r \; id \; (s@[c]) \; v = \flex \; r \; id \; s \; (inj \; (r\backslash s) \; c\; v),
\end{equation}

\noindent
can be used to prove what we want:
\begin{center}
$ \flex \; r\; id\; s\; v = \textit{decode} \;( \textit{bmkeps}\; (\rup \backslash s) ) \; r$
\end{center}
\noindent
If we state the inductive hypothesis to be
\begin{center}
$ \flex \; r\; id\; s\; v = \textit{decode} \;( \textit{retrieve}\; (\rup \backslash s) \; v \;) r$
\end{center}
\noindent
where $v$ stands for $\mkeps(r\backslash s)$,
then a reverse induction on the string
helps with using fact~\eqref{flex} in proving the $n+1$ step:
\begin{center}
$ \flex \; r\; id\; (s@[c])\; v = \textit{decode} \;( \textit{retrieve}\; (\rup \backslash s) \; (\inj \; (r\backslash s) \;c\;v)\;) r$
\end{center}
Using the lemma
\begin{center}
$\textit{retrieve}\; (\rup \backslash s) \; (\inj \; (r\backslash s) \;c\;v)\; = \textit{retrieve}\; (\rup \backslash (s@[c])) \; v\; $
\end{center}
we get
\begin{center}
$ \textit{decode} \;( \textit{retrieve}\; (\rup \backslash s) \; (\inj \; (r\backslash s) \;c\;v)\;) r = \textit{decode} \;( \textit{retrieve}\; (\rup \backslash (s@[c])) \; v\;) r $
\end{center}
and the inductive step is done because
\begin{center}
$ \textit{retrieve}\; (\rup \backslash (s@[c])) \; \mkeps(r\backslash (s@[c])) = \bmkeps \;(\rup \backslash (s@[c]))$.
\end{center}

\noindent
To use
$ \flex \; r\; id\; s\; v = \textit{decode} \;( \textit{retrieve}\; (\rup \backslash s) \; v \;) r$
for our
correctness proof, we simply replace the $v$ with
$\mkeps\;(r\backslash s)$ and apply the lemma
\begin{center}
$ \; \bmkeps \; \rup = \textit{retrieve} \; \rup \; (\mkeps(r))$
\end{center}
\noindent
(here instantiated with $\rup\backslash s$ and $r\backslash s$).
We then get the correctness of our bit-coded algorithm:
\begin{center}
$\flex \;r\; id \; s \; (\mkeps \; (r\backslash s)) = \textit{decode} \; (\bmkeps \; (\rup\backslash s)) \; r$.
\end{center}
\noindent



\subsubsection{Using the Retrieve Function in a New Setting}
Ausaf
and Urban used $\retrieve$ to prove the correctness of the bitcoded
algorithm without simplification. Our purpose of using it, however,
is to establish

\begin{center}
$ \textit{retrieve} \;
a \; v \;=\; \textit{retrieve} \; (\textit{simp}\,a) \; v'.$
\end{center}
The idea is that using $v'$, a simplified version of $v$ that has gone
through the same simplification steps as $\textit{simp}(a)$, we are able
to extract the bitcode that gives the same parsing information as the
unsimplified one.
If we want to use a similar technique as
that of the existing proof,
we face the problem that in the above
equality,
$\retrieve \; a \; v$ is not always defined.
For example,
$\retrieve \; _0(_1\ONE+_0\ONE) \; \Left(\Empty)$
is defined, but not $\retrieve \; (_{01}\ONE) \;\Left(\Empty)$,
though we can extract the same POSIX
bits from the two annotated regular expressions.
The latter might occur when we try to retrieve from
a simplified regular expression using the same value
as the unsimplified one.
This is because $\Left(\Empty)$ corresponds to
the regular expression structure $\ONE+r_2$ instead of
$\ONE$.
That means, if we
want to prove that
\begin{center}
$\textit{decode} \; (\bmkeps \; (\rup\backslash s)) \; r = \textit{decode} \; (\bmkeps \; (\rup\backslash_{simp} s)) \; r$
\end{center}
\noindent
holds by using $\retrieve$,
we probably need to prove an equality like the one below:
\begin{center}
%$\retrieve \; \rup\backslash_{simp} s \; \mkeps(r\backslash_{simp} s)=\textit{retrieve} \; \rup\backslash s \; \mkeps(r\backslash s)$
$\retrieve \; (\rup\backslash_{simp} s) \; \mkeps(f(r\backslash s))=\textit{retrieve} \; (\rup\backslash s) \; \mkeps(r\backslash s)$
\end{center}
\noindent
Here $f$ rectifies $r\backslash s$ so that the value $\mkeps(f(r\backslash s))$
becomes something simpler,
making the retrieve function defined.
\subsubsection{Ways to Rectify the Value}
One way to do this is to prove the following:
\begin{center}
$\retrieve \; (\rup\backslash_{simp} s) \; \mkeps(\simp(r\backslash s))=\textit{retrieve} \; (\rup\backslash s) \; \mkeps(r\backslash s)$
\end{center}
\noindent
The reason why we choose $\simp$ as $f$ is that
$\rup\backslash_{simp} \, s$ and $\simp(\rup\backslash \, s)$
have the same shape:
\begin{center}
$\erase (\rup\backslash_{simp} \, s) = \erase(\simp(\rup\backslash s))$
\end{center}

\noindent
$\erase$ in the above equality means removing the bit-codes
from an annotated regular expression and keeping only the original
regular expression (just like ``erasing'' the bits). Its definition
is omitted.
$\rup\backslash_{simp} \, s$ and $\simp(\rup\backslash s)$
are very closely related, but not identical.
\subsubsection{Example for $\rup\backslash_{simp} \, s \neq \simp(\rup\backslash s)$}
For example, let $r$ be the regular expression
$(a+b)(a+a^*)$ and $s$ be the string $aa$; then
both $\erase (\rup\backslash_{simp} \, s)$ and $\erase (\simp (\rup\backslash s))$
are $\ONE + a^*$. However, without $\erase$,
\begin{center}
$\rup\backslash_{simp} \, s$ is equal to $_0(_0\ONE +_{11}a^*)$
\end{center}
\noindent
whereas
\begin{center}
$\simp(\rup\backslash s)$ is equal to $(_{00}\ONE +_{011}a^*)$.
\end{center}
\noindent
(For the sake of visual simplicity, we use numbers to denote the bits
in bitcodes as we have previously defined for annotated
regular expressions: $\S$ is replaced by the
subscript $_1$ and $\Z$ by $_0$.)
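This example can be replayed with a Scala sketch that implements the derivative, $\simp$ and the helper functions as described above (all names are assumptions of the sketch; $\erase$ is rendered as a string here, purely for comparing shapes):

```scala
sealed trait ARexp
case object AZERO extends ARexp
case class AONE(bs: List[Int]) extends ARexp
case class ACHAR(bs: List[Int], c: Char) extends ARexp
case class AALTS(bs: List[Int], as: List[ARexp]) extends ARexp
case class ASEQ(bs: List[Int], a1: ARexp, a2: ARexp) extends ARexp
case class ASTAR(bs: List[Int], a: ARexp) extends ARexp

def fuse(bs: List[Int], a: ARexp): ARexp = a match {
  case AZERO             => AZERO
  case AONE(bs1)         => AONE(bs ++ bs1)
  case ACHAR(bs1, c)     => ACHAR(bs ++ bs1, c)
  case AALTS(bs1, as)    => AALTS(bs ++ bs1, as)
  case ASEQ(bs1, a1, a2) => ASEQ(bs ++ bs1, a1, a2)
  case ASTAR(bs1, a1)    => ASTAR(bs ++ bs1, a1)
}

def bnullable(a: ARexp): Boolean = a match {
  case AZERO           => false
  case AONE(_)         => true
  case ACHAR(_, _)     => false
  case AALTS(_, as)    => as.exists(bnullable)
  case ASEQ(_, a1, a2) => bnullable(a1) && bnullable(a2)
  case ASTAR(_, _)     => true
}

def bmkeps(a: ARexp): List[Int] = a match {
  case AONE(bs)            => bs
  case AALTS(bs, a1 :: as) =>
    if (bnullable(a1)) bs ++ bmkeps(a1) else bs ++ bmkeps(AALTS(Nil, as))
  case ASEQ(bs, a1, a2)    => bs ++ bmkeps(a1) ++ bmkeps(a2)
  case ASTAR(bs, _)        => bs ++ List(0)
  case _                   => sys.error("bmkeps: not bnullable")
}

def bder(c: Char, a: ARexp): ARexp = a match {
  case AZERO         => AZERO
  case AONE(_)       => AZERO
  case ACHAR(bs, d)  => if (c == d) AONE(bs) else AZERO
  case AALTS(bs, as) => AALTS(bs, as.map(bder(c, _)))
  case ASEQ(bs, a1, a2) =>
    if (bnullable(a1))
      AALTS(bs, List(ASEQ(Nil, bder(c, a1), a2),
                     fuse(bmkeps(a1), bder(c, a2))))
    else ASEQ(bs, bder(c, a1), a2)
  case ASTAR(bs, a1) => ASEQ(bs, fuse(List(1), bder(c, a1)), ASTAR(Nil, a1))
}

def flatten(as: List[ARexp]): List[ARexp] = as match {
  case AALTS(bs1, as1) :: rest => as1.map(fuse(bs1, _)) ++ flatten(rest)
  case AZERO :: rest           => flatten(rest)
  case a :: rest               => a :: flatten(rest)
  case Nil                     => Nil
}

def simp(a: ARexp): ARexp = a match {
  case ASEQ(bs, a1, a2) => (simp(a1), simp(a2)) match {
    case (AZERO, _)       => AZERO
    case (_, AZERO)       => AZERO
    case (AONE(bs1), a2s) => fuse(bs ++ bs1, a2s)
    case (a1s, a2s)       => ASEQ(bs, a1s, a2s)
  }
  case AALTS(bs, as) => flatten(as.map(simp)).distinct match {
    case Nil       => AZERO
    case a1 :: Nil => fuse(bs, a1)
    case as1       => AALTS(bs, as1)
  }
  case a => a
}

def bders(a: ARexp, s: List[Char]): ARexp =
  s.foldLeft(a)((acc, c) => bder(c, acc))
def bdersSimp(a: ARexp, s: List[Char]): ARexp =
  s.foldLeft(a)((acc, c) => simp(bder(c, acc)))

// erased (bit-free) shape, printed as a string for easy comparison
def erase(a: ARexp): String = a match {
  case AZERO           => "0"
  case AONE(_)         => "1"
  case ACHAR(_, c)     => c.toString
  case AALTS(_, as)    => as.map(erase).mkString("(", "+", ")")
  case ASEQ(_, a1, a2) => erase(a1) + erase(a2)
  case ASTAR(_, a1)    => "(" + erase(a1) + ")*"
}

// internalised (a+b)(a+a*)
val rup = ASEQ(Nil,
  AALTS(Nil, List(ACHAR(List(0), 'a'), ACHAR(List(1), 'b'))),
  AALTS(Nil, List(ACHAR(List(0), 'a'), ASTAR(List(1), ACHAR(Nil, 'a')))))

val lhs = bdersSimp(rup, List('a', 'a'))   // simplify after each derivative
val rhs = simp(bders(rup, List('a', 'a'))) // simplify only at the end
```

Running this reproduces the two results above: \texttt{lhs} is $_0(_0\ONE + {_{11}a^*})$ and \texttt{rhs} is $_{00}\ONE + {_{011}a^*}$; their erased shapes agree, but the bits sit in different places.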

What makes the difference?

%Two "rules" might be inferred from the above example.

%First, after erasing the bits the two regular expressions
%are exactly the same: both become $1+a^*$. Here the
%function $\simp$ exhibits the "one in the end equals many times
%at the front"
%property: one simplification in the end causes the
%same regular expression structure as
%successive simplifications done alongside derivatives.
%$\rup\backslash_{simp} \, s$ unfolds to
%$\simp((\simp(r\backslash a))\backslash a)$
%and $\simp(\rup\backslash s)$ unfolds to
%$\simp((r\backslash a)\backslash a)$. The one simplification
%in the latter causes the resulting regular expression to
%become $1+a^*$, exactly the same as the former with
%two simplifications.

%Second, the bit-codes are different, but they are essentially
%the same: if we push the outmost bits ${\bf_0}(_0\ONE +_{11}a^*)$ of $\rup\backslash_{simp} \, s$
%inside then we get $(_{00}\ONE +_{011}a^*)$, exactly the
%same as that of $\rup\backslash \, s$. And this difference
%does not matter when we try to apply $\bmkeps$ or $\retrieve$
%to it. This seems a good news if we want to use $\retrieve$
%to prove things.

%If we look into the difference above, we could see that the
%difference is not fundamental: the bits are just being moved
%around in a way that does not hurt the correctness.
During the first derivative operation,
$\rup\backslash a=(_0\ONE + \ZERO)(_0a + _1a^*)$ is
in the form of a sequence regular expression with
two components, the first
one, $_0\ONE + \ZERO$, being nullable.
Recall the simplification function definition:
\begin{center}
\begin{tabular}{@{}lcl@{}}

$\textit{simp} \; (\textit{SEQ}\;bs\,a_1\,a_2)$ & $\dn$ & $ (\textit{simp} \; a_1, \textit{simp} \; a_2) \; \textit{match} $ \\
&&$\quad\textit{case} \; (\ZERO, \_) \Rightarrow \ZERO$ \\
&&$\quad\textit{case} \; (\_, \ZERO) \Rightarrow \ZERO$ \\
&&$\quad\textit{case} \; (_{bs'}\ONE, a_2') \Rightarrow \textit{fuse} \; (bs\,@\,bs') \; a_2'$ \\
&&$\quad\textit{case} \; (a_1', \ONE) \Rightarrow \textit{fuse} \; bs \; a_1'$ \\
&&$\quad\textit{case} \; (a_1', a_2') \Rightarrow \textit{SEQ} \; bs \; a_1' \; a_2'$ \\

$\textit{simp} \; (\textit{ALTS}\;bs\,as)$ & $\dn$ & $\textit{distinct}( \textit{flatten} ( \textit{as.map(simp)})) \; \textit{match} $ \\
&&$\quad\textit{case} \; [] \Rightarrow \ZERO$ \\
&&$\quad\textit{case} \; a :: [] \Rightarrow \textit{fuse} \; bs \; a$ \\
&&$\quad\textit{case} \; as' \Rightarrow \textit{ALTS}\;bs\;as'$\\

$\textit{simp} \; a$ & $\dn$ & $\textit{a} \qquad \textit{otherwise}$
\end{tabular}
\end{center}

\noindent
and the definition of $\flatten$:
\begin{center}
\begin{tabular}{c c c}
$\flatten \; []$ & $\dn$ & $[]$\\
$\flatten \; (\ZERO::rs)$ & $\dn$ & $\flatten \; rs$\\
$\flatten \;((_{\textit{bs}_1}\sum \textit{rs}_1) ::rs)$ & $\dn$ & $(\map \, (\fuse \, \textit{bs}_1) \,\textit{rs}_1) ::: \flatten(rs)$\\
$\flatten \; (r :: rs)$ & $\dn$ & $r::\flatten(rs)$
\end{tabular}
\end{center}

\noindent
If we call $\simp$ on $\rup\backslash a$, just as $\backslash_{simp}$
requires, then we go through the $\ONE$-clause of
the sequence case.
The $\ZERO$ of $(_0\ONE + \ZERO)$ is thrown away
by $\flatten$ and
$_0\ONE$ is merged into $(_0a + _1a^*)$ by simply
putting its bits ($_0$) to the front of the second component:
${\bf_0}(_0a + _1a^*)$.
After a second derivative operation,
namely, $(_0(_0a + _1a^*))\backslash a$, we get
$
_0(_0 \ONE + _1(_1\ONE \cdot a^*))
$, and this simplifies to $_0(_0 \ONE + _{11} a^*)$
by the third clause of the alternative case:
\begin{center}
$\quad\textit{case} \; as' \Rightarrow _{bs}\sum{as'}$.
\end{center}

\noindent
The outermost bit $_0$ stays with
the outermost regular expression, rather than being fused to
its child regular expressions, as we will later see happens
to $\simp(\rup\backslash \, s)$.
If we choose not to simplify in the midst of derivative operations,
but only do it at the end after the string has been exhausted,
namely, $\simp(\rup\backslash \, s)=\simp((\rup\backslash a)\backslash a)$,
then at the {\bf second} derivative of
$(\rup\backslash a)\,{\bf \backslash a}$
we go through the sequence clause of $\backslash$:
\begin{center}
\begin{tabular}{lcl}
$(\textit{SEQ}\;bs\,a_1\,a_2)\,\backslash c$ & $\dn$ &
$(\textit{when} \; \textit{bnullable}\,a_1)$\\
& &$_{bs}\sum\,\;[_{[]}((a_1\,\backslash c) \cdot \,a_2),$\\
& &$(\textit{fuse}\,(\textit{bmkeps}\,a_1)\,(a_2\,\backslash c))]$\\
\end{tabular}
\end{center}

because
$\rup\backslash a = (_0\ONE + \ZERO)(_0a + _1a^*)$
is a sequence
with the first component being nullable
(unsimplified, unlike in the first round of running $\backslash_{simp}$).
Therefore $((_0\ONE + \ZERO)(_0a + _1a^*))\backslash a$ splits into
$([(\ZERO + \ZERO)\cdot(_0a + _1a^*)] + _0( _0\ONE + _1[_1\ONE \cdot a^*]))$.
After these two successive derivatives without simplification,
we apply $\simp$ to this regular expression, which goes through
the alternative clause, and each component of
$([(\ZERO + \ZERO)\cdot(_0a + _1a^*)] + _0( _0\ONE + _1[_1\ONE \cdot a^*]))$
is simplified, giving us the list $[\ZERO, _0(_0\ONE + _{11}a^*)]$.
This list is then ``flattened''---$\ZERO$ is
thrown away by $\textit{flatten}$, while $ _0(_0\ONE + _{11}a^*)$
is opened up to make a list consisting of the two separate elements
$_{00}\ONE$ and $_{011}a^*$; note that $\flatten$
$\fuse$s the bit $_0$ to the front of $_0\ONE $ and $_{11}a^*$.
The order of simplification, which impacts the order in which alternatives
are opened up, causes
the bits to be moved differently.

\subsubsection{A Failed Attempt To Remedy the Problem Above}
A simple class of regular expression and string
pairs $(r, s)$ that trigger the difference between
$\rup\backslash_{simp} \, s$
and $\simp(\rup\backslash s)$ can be deduced from the above example:
\begin{center}
\begin{tabular}{lcl}
$D =\{ (r_1 \cdot r_2,\; [c_1c_2]) \mid $ & $\simp(r_2) = r_2,\; \simp(r_1 \backslash c_1) = \ONE,$\\
$r_1 \; \textit{not} \; \nullable,\; c_2 \in L(r_2),$ & $\exists \textit{rs},\textit{bs}.\; r_2 \backslash c_2 = \;_{bs}{\sum rs},$\\
$\exists \textit{rs}_1. \; \simp(r_2 \backslash c_2) = \;_{bs}{\sum \textit{rs}_1}$ & $\textit{and} \;\simp(r_1 \backslash [c_1c_2]) = \ZERO\}$\\
\end{tabular}
\end{center}
We take a pair $(r, \;s)$ from the set $D$.

Now we compute ${\bf \rup \backslash_{simp} s}$; we get:
\begin{center}
\begin{tabular}{lcl}
$(r_1\cdot r_2)\backslash_{simp} \, [c_1c_2]$ & $= \simp\left[ \big(\simp\left[ \left( r_1\cdot r_2 \right) \backslash c_1\right] \big)\backslash c_2\right]$\\
& $= \simp\left[ \big(\simp \left[ \left(r_1 \backslash c_1\right) \cdot r_2 \right] \big) \backslash c_2 \right]$\\
& $= \simp \left[ (\fuse \; \bmkeps(r_1\backslash c_1) \; \simp(r_2) ) \backslash c_2 \right]$\\
& $= \simp \left[ (\fuse \; \bmkeps(r_1\backslash c_1) \; r_2 ) \backslash c_2 \right]$
\end{tabular}
\end{center}
\noindent
From the definition of $D$ we know that $r_1 \backslash c_1$ is nullable; therefore
$\bmkeps(r_1\backslash c_1)$ returns a bitcode, which we shall call
$\textit{bs}_2$.
The above term can be rewritten as
\begin{center}
$ \simp \left[ \fuse \; \textit{bs}_2\; r_2 \backslash c_2 \right]$,
\end{center}
which is equal to
\begin{center}
$\simp \left[ \fuse \; \textit{bs}_2 \; _{bs}{\sum rs} \right]$\\
$=\simp \left[ \; _{bs_2++bs}{\sum rs} \right]$\\
$= \; _{bs_2++bs}{\sum \textit{rs}_1} $
\end{center}
\noindent
by using the properties from the set $D$ again
and again (the reason we impose so many conditions
on the pair $(r,s)$ is that they allow us to
rewrite the terms easily when constructing the difference).

Now we compute ${\bf \simp(\rup \backslash s)}$:
\begin{center}
$\simp \big[(r_1\cdot r_2) \backslash [c_1c_2] \big]= \simp \left[ ((r_1 \cdot r_2 )\backslash c_1) \backslash c_2 \right]$
\end{center}
\noindent
Again, using the properties above, we obtain
the following chain of equalities:
\begin{center}
$\simp(\rup \backslash s)= \simp \left[ ((r_1 \cdot r_2 )\backslash c_1) \backslash c_2 \right]= \simp\left[ \big( \left(r_1 \backslash c_1\right) \cdot r_2 \big) \backslash c_2 \right]$\\
$= \simp \left[ \sum[\big( \left(r_1 \backslash c_1\right) \backslash c_2 \big) \cdot r_2 \; , \; \fuse \; (\bmkeps \;r_1\backslash c_1) \; (r_2 \backslash c_2) ] \right]$,
\end{center}
\noindent
As before, we call the bitcode returned by
$\bmkeps(r_1\backslash c_1)$
$\textit{bs}_2$.
Also, $\simp(r_2 \backslash c_2)$ is
$_{bs}\sum \textit{rs}_1$,
and $\big( \left(r_1 \backslash c_1\right) \backslash c_2 \big) \cdot r_2$
simplifies to $\ZERO$,
so the above term can be expanded as
\begin{center}
\begin{tabular}{l}
$\textit{distinct}(\flatten[\ZERO\;, \; _{\textit{bs}_2++\textit{bs}}\sum \textit{rs}_1] ) \; \textit{match} $ \\
$\textit{case} \; [] \Rightarrow \ZERO$ \\
$\textit{case} \; a :: [] \Rightarrow \fuse \; \textit{bs} \; a$ \\
$\textit{case} \; as' \Rightarrow \,_{[]}\sum as'$\\
\end{tabular}
\end{center}
\noindent
Applying the definition of $\flatten$, we get
\begin{center}
$_{[]}\sum (\textit{map} \; (\fuse \; (\textit{bs}_2 ++ \textit{bs})) \; \textit{rs}_1)$
\end{center}
\noindent
compared to the result
\begin{center}
$ \; _{bs_2++bs}{\sum \textit{rs}_1} $
\end{center}
\noindent
Note how these two regular expressions only
differ in terms of the position of the bits
$\textit{bs}_2++\textit{bs}$: they are the same otherwise.
What caused this difference?
The culprit is the $\flatten$ function, which spills
out the bitcodes of the inner alternatives when
there exists an outer alternative.
Note how the absence of intermediate simplification
caused $\simp(\rup \backslash s)$ to
generate the nested alternatives structure:
\begin{center}
$ \sum[\ZERO \;, \; _{bs}\sum \textit{rs} ]$
\end{center}
and this will always trigger $\flatten$ to
spill out the inner alternative's bitcode $\textit{bs}$,
whereas when
simplification is done along the way,
the structure of nested alternatives is never created (we can
actually prove that the simplification function never allows nested
alternatives to occur; more on this later).
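The divergence just described can be reproduced on a small executable model of the bitcoded operations. The following Python sketch is our own tuple-based reimplementation (a hypothetical encoding; the exact bit conventions may differ from the formalisation, but the structural discrepancy survives): for $r = (a+b)\cdot(a+a^*)$ and $s = aa$, simplifying after each derivative and simplifying once at the end yield structurally different regular expressions, while $\bmkeps$ extracts the same bitcode from both.

```python
# Bitcoded regexes as tuples (a hypothetical encoding, not the formalisation's):
# ('Z',) ZERO; ('O', bs) ONE; ('C', bs, ch) char; ('A', bs, rs) alternatives;
# ('S', bs, r1, r2) sequence; ('*', bs, r) star.  Bitcodes are lists of 0/1.

def fuse(bs, r):
    if r[0] == 'Z': return r
    if r[0] == 'O': return ('O', bs + r[1])
    if r[0] == 'C': return ('C', bs + r[1], r[2])
    if r[0] == 'A': return ('A', bs + r[1], r[2])
    if r[0] == 'S': return ('S', bs + r[1], r[2], r[3])
    return ('*', bs + r[1], r[2])

def bnull(r):
    return (r[0] in ('O', '*')
            or (r[0] == 'A' and any(bnull(x) for x in r[2]))
            or (r[0] == 'S' and bnull(r[2]) and bnull(r[3])))

def bmkeps(r):
    if r[0] == 'O': return r[1]
    if r[0] == 'A': return r[1] + bmkeps(next(x for x in r[2] if bnull(x)))
    if r[0] == 'S': return r[1] + bmkeps(r[2]) + bmkeps(r[3])
    return r[1] + [1]                      # star: stop bit

def bder(c, r):
    if r[0] in ('Z', 'O'): return ('Z',)
    if r[0] == 'C': return ('O', r[1]) if r[2] == c else ('Z',)
    if r[0] == 'A': return ('A', r[1], [bder(c, x) for x in r[2]])
    if r[0] == 'S':
        if bnull(r[2]):
            return ('A', r[1], [('S', [], bder(c, r[2]), r[3]),
                                fuse(bmkeps(r[2]), bder(c, r[3]))])
        return ('S', r[1], bder(c, r[2]), r[3])
    return ('S', r[1], fuse([0], bder(c, r[2])), ('*', [], r[2]))

def flatten(rs):
    out = []
    for r in rs:
        if r[0] == 'Z': continue            # throw away ZEROs
        if r[0] == 'A': out.extend(fuse(r[1], x) for x in r[2])  # spill bits
        else: out.append(r)
    return out

def distinct(rs):
    out = []
    for r in rs:
        if r not in out: out.append(r)
    return out

def simp(r):
    if r[0] == 'S':
        r1, r2 = simp(r[2]), simp(r[3])
        if r1[0] == 'Z' or r2[0] == 'Z': return ('Z',)
        if r1[0] == 'O': return fuse(r[1] + r1[1], r2)
        return ('S', r[1], r1, r2)
    if r[0] == 'A':
        rs = distinct(flatten([simp(x) for x in r[2]]))
        if not rs: return ('Z',)
        if len(rs) == 1: return fuse(r[1], rs[0])
        return ('A', r[1], rs)
    return r

def bders(r, s):
    for c in s: r = bder(c, r)
    return r

def bders_simp(r, s):                      # simp after every derivative step
    for c in s: r = simp(bder(c, r))
    return r

# r = (a+b).(a+a*), internalised by hand
ch_a, ch_b = ('C', [], 'a'), ('C', [], 'b')
r = ('S', [], ('A', [], [fuse([0], ch_a), fuse([1], ch_b)]),
              ('A', [], [fuse([0], ch_a), fuse([1], ('*', [], ch_a))]))

lhs = bders_simp(r, "aa")
rhs = simp(bders(r, "aa"))
print(lhs == rhs)                   # False: the bits sit in different places
print(bmkeps(lhs) == bmkeps(rhs))   # True: the extracted bitcodes agree
```

Under this model the final $\bmkeps$ values coincide even though the regular expressions differ structurally, which is exactly why $\blexer$ and $\blexers$ can agree despite the mismatch above.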

What if we do not allow the function $\simp$
to fuse out the bits when it is unnecessary?
For the above regular expression, we might
just delete the outer layer of the alternative
\begin{center}
\st{$ {\sum[\ZERO \;,}$} $_{bs}\sum \textit{rs}$ \st{$]$}
\end{center}
and get $_{bs}\sum \textit{rs}$ instead, without
fusing the bits $\textit{bs}$ into every element
of $\textit{rs}$.
This idea can be realized by making the following
changes to the $\simp$-function:
\begin{center}
\begin{tabular}{@{}lcl@{}}
$\textit{simp}' \; (_{\textit{bs}}(a_1 \cdot a_2))$ & $\dn$ & $\textit{as} \; \simp \; \textit{was} \; \textit{before} $ \\
$\textit{simp}' \; (_{bs}\sum as)$ & $\dn$ & \st{$\textit{distinct}( \textit{flatten} ( \textit{map simp as})) \; \textit{match} $} \\
&&\st{$\quad\textit{case} \; [] \Rightarrow \ZERO$} \\
&&\st{$\quad\textit{case} \; a :: [] \Rightarrow \textit{fuse bs a}$} \\
&&\st{$\quad\textit{case} \; as' \Rightarrow \textit{ALTS}\;bs\;as'$}\\
&&$\textit{if}(\textit{hollowAlternatives}( \textit{map \; simp \; as}))$\\
&&$\textit{then} \; \fuse \; \textit{bs}\; \textit{extractAlt}(\textit{map} \; \simp \; \textit{as} )$\\
&&$\textit{else} \; \simp(_{bs} \sum \textit{as})$\\
$\textit{simp}' \; a$ & $\dn$ & $\textit{a} \qquad \textit{otherwise}$
\end{tabular}
\end{center}

\noindent
where $\textit{hollowAlternatives}$ and $\textit{extractAlt}$ are defined as follows:
\begin{center}
$\textit{hollowAlternatives}( \textit{rs}) \dn
\exists r = (_{\textit{bs}_1}\sum \textit{rs}_1) \in \textit{rs}. \; \forall r' \in \textit{rs}, \;
\textit{either} \; r' = \ZERO \; \textit{or} \; r' = r $\\
$\textit{extractAlt}( \textit{rs}) \dn \textit{if}\big(
\exists r = (_{\textit{bs}_1}\sum \textit{rs}_1) \in \textit{rs}. \; \forall r' \in \textit{rs}, \;
\textit{either} \; r' = \ZERO \; \textit{or} \; r' = r \big)\; \textit{then} \; \textit{return} \; r$
\end{center}
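As a small executable sketch of these two helpers (using a hypothetical tuple encoding in which $\ZERO$ is `('Z',)` and an alternatives node is `('A', bs, rs)`; the function names follow the text):

```python
# Sketch of the two helpers used by the modified simp' (hypothetical encoding:
# ('Z',) is ZERO, ('A', bs, rs) is an alternatives node).

def is_sum(r):
    return r[0] == 'A'

def hollow_alternatives(rs):
    # True iff some alternative r occurs in rs and every element is ZERO or r
    alts = [r for r in rs if is_sum(r)]
    return bool(alts) and all(r == ('Z',) or r == alts[0] for r in rs)

def extract_alt(rs):
    # partial: only meaningful when hollow_alternatives(rs) holds
    return next(r for r in rs if is_sum(r))

rs = [('Z',), ('A', [1], [('*', [0, 1, 1], 'a'), ('O', [1])])]
print(hollow_alternatives(rs))                   # True
print(extract_alt(rs) == rs[1])                  # True
print(hollow_alternatives([('Z',), ('O', [])]))  # False: no alternative at all
```

With these helpers, the modified clause simply returns the extracted alternative with the outer bits fused onto its top node only, rather than pushed into each child.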
\noindent
Basically, $\textit{hollowAlternatives}$ captures the feature of
a list of regular expressions of the shape
\begin{center}
$ \sum[\ZERO \;, \; _{bs}\sum \textit{rs} ]$
\end{center}
and this means we can simply elevate the
inner regular expression $_{bs}\sum \textit{rs}$
to the outermost level
and throw away the useless $\ZERO$s and
the outer $\sum$ wrapper.
Using this new definition of $\simp$,
in the example where $r$ is the regular expression
$(a+b)\cdot(a+a^*)$ and $s$ is the string $aa$,
the problem of $\rup\backslash_{simp} \, s \neq \simp(\rup\backslash s)$
is resolved.

Unfortunately this causes new problems:
for the counterexample where
\begin{center}
$r$ is the regular expression
$(ab+(a^*+aa))$ and $s$ is the string $aa$,
\end{center}
\noindent
$\rup\backslash_{simp} \, s$ is equal to
$ _1(_{011}a^* + _1\ONE) $ whereas
$ \simp(\rup\backslash s) = (_{1011}a^* + _{11}\ONE)$.
This discrepancy does not appear for the old
version of $\simp$.

Why?

We shall again illustrate in detail what happens in each recursive call of
$\backslash$ and $\backslash_{simp}$.
During the first derivative operation,
\begin{center}
$\rup\backslash a=( _0[ \ONE\cdot {\bf b}] + _1( _0[ _1\ONE \cdot {\bf a}^*] + _1[ \ONE \cdot {\bf a}]) )$,
\end{center}
\noindent
and the second derivative gives us
\begin{center}
$(\rup\backslash a)\backslash a=(_0( [\ZERO\cdot {\bf b}] + \ZERO) + _1( _0( [\ZERO\cdot {\bf a}^*] + _1[ _1\ONE \cdot {\bf a}^*]) + _1( [\ZERO \cdot {\bf a}] + \ONE) ))$,
\end{center}

\noindent
and this simplifies to
\begin{center}
$ _1(_{011}{\bf a}^* + _1\ONE) $
\end{center}
because when $(\rup\backslash a)\backslash a$ goes
through simplification, according to our new $\simp$
clause,
each component of the list
$[_0( [\ZERO\cdot {\bf b}] + \ZERO) , \; _1( _0( [\ZERO\cdot {\bf a}^*] + _1[ _1\ONE \cdot {\bf a}^*]) + _1( [\ZERO \cdot {\bf a}] + \ONE) )]$
is simplified, giving the list
$[\ZERO, \; _1(_{011}{\bf a}^* + _1\ONE) ]$.
This fits the definition of $\textit{hollowAlternatives}$,
so the structure of the annotated regular expression
\begin{center}
$_1(_{011}{\bf a}^* + _1\ONE) $
\end{center}
is preserved, in the sense that the outside bit $_1$
is not fused inside.
If instead we apply simplification after the first derivative, we get
$(_0{\bf b} + _{101}{\bf a}^* + _{11}{\bf a} )$,
and doing another derivative gives
$(\ZERO + (_{101}(\ONE \cdot _1{\bf a}^*)+_{11}\ONE))$,
which simplifies to
\begin{center}
$ (_{1011}a^* + _{11}\ONE) $
\end{center}


We have changed the algorithm to suppress the old
counterexample, but this gives rise to new counterexamples.
This dilemma means that the amendment is not a successful
attempt at making $\rup\backslash_{simp} \, s = \simp(\rup\backslash s)$
hold for every possible regular expression and string.
\subsection{Properties of the Function $\simp$}

We have proved in Isabelle quite a few properties
of the $\simp$-function, which help the proof to go forward,
and we list them here to aid comprehension.

To start, we need a bit of auxiliary notation,
which is quite basic and is only written here
for clarity.

$\textit{sub}(r)$ computes the set of
sub-regular-expressions of $r$:
\begin{center}
$\textit{sub}(\ONE) \dn \{\ONE\}$\\
$\textit{sub}(r_1 \cdot r_2) \dn \textit{sub}(r_1) \cup \textit{sub}(r_2) \cup \{r_1 \cdot r_2\}$\\
$\textit{sub}(r_1 + r_2) \dn \textit{sub}(r_1) \cup \textit{sub}(r_2) \cup \{r_1+r_2\}$\\
\end{center}
$\textit{good}(r) \dn r \neq \ZERO \land
\forall r' \in \textit{sub}(r), \; \textit{if} \; r' = \,_{bs_1}\sum(rs_1), \;
\textit{then} \; \nexists r'' \in \textit{rs}_1 \; s.t.\;
r'' = \,_{bs_2}\sum \textit{rs}_2 $

The properties are mainly the ones below:
\begin{itemize}
\item
\begin{center}
$\simp(\simp(r)) = \simp(r)$
\end{center}
\item
\begin{center}
$\textit{if} \; r = \simp(r') \; \textit{then} \; \textit{good}(r) $
\end{center}
\end{itemize}
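Both properties can be sanity-checked on a small executable model. The sketch below re-implements $\simp$, $\textit{sub}$ and $\textit{good}$ over a hypothetical tuple encoding (our own approximation: stars are omitted from the test pool, and we guard the $\ZERO$ case, which the second property leaves implicit):

```python
# Sanity check of the simp properties on a pool of small annotated regexes.
# Encoding: ('Z',) ZERO; ('O', bs) ONE; ('C', bs, ch); ('A', bs, rs); ('S', bs, r1, r2)

def fuse(bs, r):
    if r[0] == 'Z': return r
    if r[0] == 'O': return ('O', bs + r[1])
    if r[0] == 'C': return ('C', bs + r[1], r[2])
    if r[0] == 'A': return ('A', bs + r[1], r[2])
    return ('S', bs + r[1], r[2], r[3])

def flatten(rs):
    out = []
    for r in rs:
        if r[0] == 'Z': continue
        if r[0] == 'A': out.extend(fuse(r[1], x) for x in r[2])
        else: out.append(r)
    return out

def distinct(rs):
    out = []
    for r in rs:
        if r not in out: out.append(r)
    return out

def simp(r):
    if r[0] == 'S':
        r1, r2 = simp(r[2]), simp(r[3])
        if r1[0] == 'Z' or r2[0] == 'Z': return ('Z',)
        if r1[0] == 'O': return fuse(r[1] + r1[1], r2)
        return ('S', r[1], r1, r2)
    if r[0] == 'A':
        rs = distinct(flatten([simp(x) for x in r[2]]))
        if not rs: return ('Z',)
        if len(rs) == 1: return fuse(r[1], rs[0])
        return ('A', r[1], rs)
    return r

def sub(r):
    # all sub-regular-expressions, including r itself (star clause omitted)
    if r[0] == 'S': return sub(r[2]) + sub(r[3]) + [r]
    if r[0] == 'A': return [x for c in r[2] for x in sub(c)] + [r]
    return [r]

def good(r):
    # r is not ZERO and no alternative has an alternative as a direct child
    return (r != ('Z',) and
            all(not (s[0] == 'A' and any(c[0] == 'A' for c in s[2]))
                for s in sub(r)))

base = [('Z',), ('O', [0]), ('C', [1], 'a'), ('C', [], 'b')]
pool = list(base)
for r1 in base:
    for r2 in base:
        pool += [('S', [0], r1, r2), ('A', [1], [r1, r2])]
pool += [('A', [0], [r, ('Z',)]) for r in pool[:12]]   # some nested sums

for r in pool:
    assert simp(simp(r)) == simp(r)            # idempotence
    if simp(r) != ('Z',):
        assert good(simp(r))                   # simplified results are good
print("ok")
```

The nested sums in the pool are exactly the shapes that $\flatten$ must remove for the second property to hold, so they are the interesting test cases here.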
\subsection{The Contains Relation}
$\retrieve$ is too strong a relation in that
it extracts only one bitcode instead of a set of them.
Therefore we define another relation (a predicate)
to capture the fact that the regular expression has bits
being moved around but still has all the bits needed.
The contains symbol, written $\gg$, is a relation that
takes two arguments in infix form
and returns a truth value.

In other words, from the set of regular expression and
bitcode pairs
$\textit{RV} = \{(r, \textit{bs}) \mid r \; \text{is a regular expression},\; \textit{bs} \; \text{is a bitcode}\}$,
those that satisfy the following requirements are in the set
$\textit{RV}_{\textit{Contains}}$.
Unlike the $\retrieve$
function, which takes two arguments $r$ and $v$ and
produces a unique answer $\textit{bs}$, the relation takes only
one argument $r$ and relates it to the set of bitcodes that
can be generated by $r$.
\begin{center}
\begin{tabular}{llclll}
& & & $_{bs}\ONE$ & $\gg$ & $\textit{bs}$\\
& & & $_{bs}{\bf c}$ & $\gg$ & $\textit{bs}$\\
$\textit{if} \; r_1 \gg \textit{bs}_1$ & $\textit{and} \; r_2 \gg \textit{bs}_2$
& $\textit{then}$ &
$_{bs}{r_1 \cdot r_2}$ &
$\gg$ &
$\textit{bs} @ \textit{bs}_1 @ \textit{bs}_2$\\
$\textit{if} \; r \gg \textit{bs}_1$ & & $\textit{then}$ &
$_{bs}{\sum(r :: \textit{rs}})$ &
$\gg$ &
$\textit{bs} @ \textit{bs}_1 $\\
$\textit{if} \; _{bs}(\sum \textit{rs}) \gg \textit{bs} @ \textit{bs}_1$ & & $\textit{then}$ &
$_{bs}{\sum(r :: \textit{rs}})$ &
$\gg$ &
$\textit{bs} @ \textit{bs}_1 $\\
$\textit{if} \; r \gg \textit{bs}_1\; \textit{and}$ & $_{bs}r^* \gg \textit{bs} @ \textit{bs}_2$ & $\textit{then}$ &
$_{bs}r^* $ &
$\gg$ &
$\textit{bs} @ [0] @ \textit{bs}_1@ \textit{bs}_2 $\\
& & & $_{bs}r^*$ & $\gg$ & $\textit{bs} @ [1]$\\
\end{tabular}
\end{center}
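These clauses can be read as an executable check. The following Python sketch (a hypothetical tuple encoding; our own approximation of the relation) decides $r \gg \textit{bs}$ by trying all ways of splitting the bitcode among the sub-expressions:

```python
# A sketch of the "contains" relation as an executable check.
# Annotated regexes as tuples: ('O', bs) ONE, ('C', bs, ch) char,
# ('A', bs, rs) alternatives, ('S', bs, r1, r2) sequence, ('*', bs, r) star.

def contains(r, bs):
    if r[0] in ('O', 'C'):
        return bs == r[1]
    if bs[:len(r[1])] != r[1]:       # every clause emits r's own bits first
        return False
    rest = bs[len(r[1]):]
    if r[0] == 'A':
        # a sum contains bs @ bs1 if some member contains bs1
        return any(contains(x, rest) for x in r[2])
    if r[0] == 'S':
        # split rest between the two components
        return any(contains(r[2], rest[:i]) and contains(r[3], rest[i:])
                   for i in range(len(rest) + 1))
    if r[0] == '*':
        # either the stop bit [1], or [0] @ (one iteration) @ (rest of star)
        def iters(t):
            if t == [1]:
                return True
            return t[:1] == [0] and any(
                contains(r[2], t[1:i]) and iters(t[i:])
                for i in range(1, len(t) + 1))
        return iters(rest)
    return False

alt = ('A', [], [('C', [0], 'a'), ('C', [1], 'b')])
star = ('*', [], ('C', [0], 'a'))
print(contains(alt, [0]), contains(alt, [1]), contains(alt, [0, 1]))
print(contains(star, [1]), contains(star, [0, 0, 1]), contains(star, [0]))
```

The splitting search makes the predicate behave like a (slow) membership test for the regular language of bitcodes that $r$ denotes over $\Sigma = \{0,1\}$, which matches the language view of $\gg$ described below.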
It is a predicate in the sense that given
a regular expression and a bitcode, it
returns true or false, depending on whether
or not the regular expression can actually produce that
bitcode. If the predicate returns true, then
we say that the regular expression $r$ contains
the bitcode $\textit{bs}$, written
$r \gg \textit{bs}$.
The $\gg$ operator with the
regular expression $r$ may also be seen as denoting a
regular language by itself over the alphabet
$\Sigma = \{0,1\}$.
The definition of the contains relation
is given in an inductive form, similar to that
of regular expressions, so it might not be surprising
that the set it denotes contains basically
everything a regular expression can
produce during the derivative and lexing process.
This can be seen in the subsequent lemmas we have
proved about contains:
\begin{itemize}
\item
\begin{center}
\begin{equation}\label{contains1}
\textit{if}\; \models v:r \; \textit{then} \; \rup \gg \textit{code}(v)
\end{equation}
\end{center}
This lemma states that the set
$\{\textit{bs}\; \mid \rup \gg \textit{bs} \}$
``contains'' all the underlying values $v$ of $r$
in a coded form.
These values include the ones created in the
lexing process; for example, when the regular
expression $r$ is nullable, then we have:
\item
\begin{center}
$r \gg \textit{bmkeps}(r)$
\end{center}
This can be seen as a corollary of the previous lemma,
because $\models \textit{mkeps}((r\downarrow)):(r\downarrow)$
and $\textit{code}(\mkeps((r\downarrow))) = \bmkeps(r)$.
Another corollary of \eqref{contains1} is:
\item
\begin{center}
$\textit{if}\; \models v:r \; \textit{then} \; \rup \gg \textit{retrieve} \; \rup \; v$
\end{center}
as $\textit{retrieve} \; \rup \; v = \textit{code}(v)$.
It says that if you can extract a bitsequence using
retrieve guided by $v$, then that bitsequence is already there in the set
$\{\textit{bs}\; \mid \rup \gg \textit{bs} \}$.
The next lemma has a slightly different form:
\item
\begin{center}
$\textit{if}\; \models v:a\downarrow \; \textit{then} \; a \gg \textit{retrieve} \; a \; v$
\end{center}
This is almost identical to the previous lemma, except that
this time we might have arbitrary bits attached
anywhere in the annotated regular expression $a$.
Here $a$ can be any ``made up'' annotated regular expression
that does not belong to the ``natural'' ones created by
internalising an unannotated regular expression.
For example, a regular expression $r = (a+b)$ after internalisation
becomes $\rup = (_0a+_1b)$. For an underlying value $v = \Left(\Char(a))$
we have $\retrieve \; (_0a+_1b) \;v = 0$ and $(_0a+_1b) \gg 0$. We could
attach arbitrary bits to the regular expression $\rup$
without breaking the structure;
for example, we could make up $a = _{0100111}(_{1011}a+_1b)$,
and we still have $\models v:a\downarrow$ and
therefore $a \gg \retrieve \; a \; v$, this time with the bitsequence
being $01001111011$.
This shows that the inductive clauses defining $\gg$
simulate what $\retrieve$ does when guided by different
values.
The set $\{\textit{bs}\; \mid \rup \gg \textit{bs} \}$ contains
a wide range of values coded as bitsequences;
the following property can be routinely established
from the previous lemma:
\item
\begin{center}
$r \gg \retrieve \; r \; (\inj \; (r\downarrow) \; c \; v) \;\;\; \textit{if} \; \models v: \textit{der} \; c \; (\erase(r))$
\end{center}
because $\inj \; (r\downarrow)\; c\; v$ is a value
underlying $r$.
Using this we can get something that looks much
less obvious:
\item
\begin{center}
\begin{tabular}{c}
$\textit{if} \; \models v: \erase(r)\backslash c \; \textit{then}$\\
$r\backslash c \gg \retrieve \; (r\backslash c) \; v \; \textit{and}$\\
$r \gg \retrieve \; r \; (\inj \; (r\downarrow) \; c \; v)$\\
\end{tabular}
\end{center}
It says that the derivative operation $\backslash c$ is basically
an operation that does not change the bits an annotated regular
expression is able to produce: both
$r\backslash c$ and $r$ can produce the same bitsequence, since
$\retrieve \; (r\backslash c) \; v = \retrieve \; r \; (\inj \; (r\downarrow) \; c \; v)$.
This invariance with respect to the derivative can be
further extended to a more surprising property:
\item
\begin{center}
\begin{tabular}{c}
$\textit{if} \; \models v: \erase(r) \backslash s \; \textit{then}$\\
$r\backslash s \gg \retrieve \; (r\backslash s) \; v \; \textit{and}$\\
$r \gg \retrieve \; r \; (\flex \; (r\downarrow) \; \textit{id} \; s \; v)$\\
\end{tabular}
\end{center}
Here $\gg$ is almost like an $\textit{NFA}$ in the sense that
it simulates the lexing process with respect to different strings.

Our hope is that using $\gg$ we can prove that the bit
information is not lost when we simplify a regular expression,
so we need to relate $\gg$ with simplification. For example,
one of the lemmas we have proved about $\gg$ is that
\item
\begin{center}
$\simp \; a \gg \textit{bs} \iff a \gg \textit{bs}$
\end{center}
This could be a step towards our goal, as
it assures that after simplification none of the
bitsequences that can be created by
the original annotated regular expression
is lost.
If we could prove the following, that would be
another step towards our proof:
\item
\begin{center}
$(\simp \;a) \backslash s \gg \textit{bs} \iff a\backslash s \gg \textit{bs}$
\end{center}
as it says that
the simplified regular expression, after derivatives, will
still have the full capacity of producing bitsequences
as the unsimplified one, which is pretty much
the intuition we are trying to establish.
And if we could prove
\item
\begin{center}
$a \backslash s \gg \textit{bs} \iff a\backslash_\textit{simp} s \gg \textit{bs}$
\end{center}
that would be just a stone's throw away
from $\blexer \; r \; s = \blexers \; r \; s$.
\end{itemize}
What we do after we work out
the proof of the above lemma
is still not clear; it is one of the next steps we need to
work on.

\subsection{The $\textit{ders}_2$ Function}
If we want to prove the result
\begin{center}
$ \textit{blexer\_simp}(r, \; s) = \textit{blexer}(r, \; s)$
\end{center}
by induction
on the structure of the regular expression,
then we need to cover the case $r_1 \cdot r_2$,
and it would be good if we could express $(r_1 \cdot r_2) \backslash s$
in terms of $r_1 \backslash s_i$ and $r_2 \backslash s_j$,
where $s_i$ and $s_j$ are substrings of $s$.
For this we introduce the $\textit{ders2}$ function,
which does a ``convolution'' on $r_1$ and $r_2$ using the string
$s$. We omit the bits here as they do not affect the
structure of the regular expression, and we are mainly
focusing on structure.
It is based on the observation that the derivative of $r_1 \cdot r_2$
with respect to a string $s$ can actually be written in an ``explicit form''
composed of $r_1$'s and $r_2$'s derivatives.
For example, we can look at how $r_1\cdot r_2$ expands
when being derived with a two-character string:
\begin{center}
\begin{tabular}{lcl}
$ (r_1 \cdot r_2) \backslash [c_1c_2]$ & $=$ & $ (\textit{if} \; \nullable(r_1)\; \textit{then} \; ((r_1 \backslash c_1) \cdot r_2 + r_2 \backslash c_1) \; \textit{else} \; (r_1\backslash c_1) \cdot r_2) \backslash c_2$\\
& $=$ & $\textit{if} \; \textit{nullable}(r_1) \;\textit{and} \; \nullable(r_1\backslash c_1) \; \textit{then} \;
(((r_1\backslash c_1c_2)\cdot r_2 + r_2\backslash c_2 )+ r_2 \backslash c_1c_2)$\\
&& $\textit{else if} \; \nullable(r_1) \;\textit{and} \;\textit{not} \; \nullable(r_1 \backslash c_1)\; \textit{then} \;
((r_1\backslash c_1c_2)\cdot r_2 + r_2 \backslash c_1c_2)$\\
&& $\textit{else} \;(r_1\backslash c_1c_2) \cdot r_2$
\end{tabular}
\end{center}
which can also be written in a ``convoluted sum''
format if we omit the order in which the alternatives
are nested:
\begin{center}
\begin{tabular}{lcl}
$(r_1 \cdot r_2) \backslash [c_1c_2] $ & $=$ & $\textit{if} \; \textit{nullable}(r_1) \;\textit{and} \; \nullable(r_1\backslash c_1) \; \textit{then} \;
(r_1 \backslash c_1c_2) \cdot r_2 + r_2 \backslash c_2 + r_2 \backslash c_1c_2$\\
&& $\textit{else if} \; \nullable(r_1) \;\textit{and} \;\textit{not} \; \nullable(r_1 \backslash c_1)\; \textit{then} \;
((r_1\backslash c_1c_2)\cdot r_2 + r_2 \backslash c_1c_2)$\\
&& $\textit{else} \;(r_1\backslash c_1c_2) \cdot r_2$\\
& $=$ & $(r_1\backslash [c_1c_2]) \cdot r_2 + \sum\limits_{s_j \in \{c_2,\, c_1c_2\} }{r_2 \backslash s_j} \; \text{where} \; \nullable(r_1\backslash s_i) \; \text{and} \;s_i @s_j = [c_1c_2]$
\end{tabular}
\end{center}
In its most general form:
\begin{center}
\begin{tabular}{lcl}
$(r_1 \cdot r_2) \backslash s $ & $=$ & $(r_1\backslash s) \cdot r_2 + \sum\limits_{s_i }{r_2 \backslash s_j} \; \text{where} \; s_i \; \text{is a true prefix of} \; s, \; s_i @s_j = s \; \text{and} \;\nullable(r_1\backslash s_i)$
\end{tabular}
\end{center}
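The convolution can be made concrete on plain regular expressions. The following Python sketch (our own tuple encoding; bits omitted, as in the text) implements $\textit{ders2}$ for sequences and checks it for language equivalence against the ordinary derivative on a brute-force sample of strings:

```python
# A sketch of ders2 on plain (bit-free) regexes, encoded as tuples:
# ('0',) ZERO, ('1',) ONE, ('c', ch), ('+', r1, r2), ('.', r1, r2), ('*', r)

from itertools import product

def nullable(r):
    return (r[0] in ('1', '*')
            or (r[0] == '+' and (nullable(r[1]) or nullable(r[2])))
            or (r[0] == '.' and nullable(r[1]) and nullable(r[2])))

def der(c, r):
    if r[0] in ('0', '1'):
        return ('0',)
    if r[0] == 'c':
        return ('1',) if r[1] == c else ('0',)
    if r[0] == '+':
        return ('+', der(c, r[1]), der(c, r[2]))
    if r[0] == '.':
        if nullable(r[1]):
            return ('+', ('.', der(c, r[1]), r[2]), der(c, r[2]))
        return ('.', der(c, r[1]), r[2])
    return ('.', der(c, r[1]), r)        # star

def ders(r, s):
    for c in s:
        r = der(c, r)
    return r

def sum_rs(rs):
    # fold a list into nested alternatives (ZERO for the empty list)
    out = ('0',)
    for r in rs:
        out = ('+', out, r)
    return out

def ders2(r1, r2, s):
    # (r1 . r2) \ s  =  (r1 \ s) . r2  +  sum of r2 \ s_j over all splits
    # s = s_i @ s_j with s_i a true prefix of s and r1 \ s_i nullable
    tails = [s[i:] for i in range(len(s)) if nullable(ders(r1, s[:i]))]
    return ('+', ('.', ders(r1, s), r2), sum_rs([ders(r2, t) for t in tails]))

def matches(r, s):
    return nullable(ders(r, s))

# language-equivalence check on an example: r1 = a*, r2 = a . b
r1, r2 = ('*', ('c', 'a')), ('.', ('c', 'a'), ('c', 'b'))
ok = all(matches(ders2(r1, r2, s), t) == matches(ders(('.', r1, r2), s), t)
         for n in range(4) for s in map(''.join, product('ab', repeat=n))
         for m in range(3) for t in map(''.join, product('ab', repeat=m)))
print(ok)
```

Note that $\textit{ders2}$ produces a regular expression that is only language-equivalent, not syntactically identical, to $(r_1\cdot r_2)\backslash s$: the nesting order of the alternatives differs, which is exactly the ``convoluted sum'' reading above.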
We have formalised this
alternative definition of the derivative, proved it correct,
and call it $\textit{ders2}$ to
distinguish it from the $\textit{ders}$-function.
Note that it differs from the lexing algorithm in the sense that
it calculates the results $r_1\backslash s_i$ and $r_2 \backslash s_j$ first
and then glues them together
into nested alternatives, whereas the $(r_1 \cdot r_2) \backslash s$ procedure,
used by the algorithm $\lexer$, can only produce the elements of the
resulting alternatives regular expression
all together, step by step, rather than
generating each of the children nodes
in a single recursive call dedicated to that
very expression.
$\lexer$ does lexing in a ``breadth first'' manner, whereas
$\textit{ders2}$ does it in a ``depth first'' manner.
Using this intuition we can also define the annotated regular expression version of the
derivative, call it $\textit{bders2}$, and prove its equivalence with $\textit{bders}$.
Our hope is to use this alternative definition as a guide
for our induction.
Using $\textit{bders2}$ we have a clearer idea
of what $r\backslash s$ and $\simp(r\backslash s)$ look like.
\section{Conclusion}
Under exhaustive tests we believe the main
result holds, yet a proof still seems elusive.
We have tried out different approaches and
found a lot of properties of the function $\simp$.
The counterexamples where $\rup\backslash_{simp} \, s \neq \simp(\rup\backslash s)$
are also valuable in the sense that
we get to know better why the two are not equal, and what
the subtle differences are between a
regular expression simplified along the way and a
regular expression that is simplified only at the final moment.
We are almost there, but a last step is needed to make the proof work.
Hopefully in the next few weeks we will be able to find one.


\bibliographystyle{plain}
\bibliography{root}


\end{document}