\documentclass[a4paper,UKenglish]{lipics}
\usepackage{graphic}
\usepackage{data}
\usepackage{tikz-cd}
%\usepackage{algorithm}
\usepackage{amsmath}
\usepackage[noend]{algpseudocode}
\usepackage{enumitem}
\usepackage{nccmath}

\definecolor{darkblue}{rgb}{0,0,0.6}
\hypersetup{colorlinks=true,allcolors=darkblue}
\newcommand{\comment}[1]%
{{\color{red}$\Rightarrow$}\marginpar{\raggedright\small{\bf\color{red}#1}}}

% \documentclass{article}
%\usepackage[utf8]{inputenc}
%\usepackage[english]{babel}
%\usepackage{listings}
% \usepackage{amsthm}
%\usepackage{hyperref}
% \usepackage[margin=0.5in]{geometry}
%\usepackage{pmboxdraw}

\title{POSIX Regular Expression Matching and Lexing}
\author{Chengsong Tan}
\affil{King's College London\\
London, UK\\
\texttt{chengsong.tan@kcl.ac.uk}}
\authorrunning{Chengsong Tan}
\Copyright{Chengsong Tan}

\newcommand{\dn}{\stackrel{\mbox{\scriptsize def}}{=}}%
\newcommand{\ZERO}{\mbox{\bf 0}}
\newcommand{\ONE}{\mbox{\bf 1}}
\def\lexer{\mathit{lexer}}
\def\mkeps{\mathit{mkeps}}
\def\inj{\mathit{inj}}
\def\Empty{\mathit{Empty}}
\def\Left{\mathit{Left}}
\def\Right{\mathit{Right}}
\def\Stars{\mathit{Stars}}
\def\Char{\mathit{Char}}
\def\Seq{\mathit{Seq}}
\def\Der{\mathit{Der}}
\def\nullable{\mathit{nullable}}
\def\Z{\mathit{Z}}
\def\S{\mathit{S}}

%\theoremstyle{theorem}
%\newtheorem{theorem}{Theorem}
%\theoremstyle{lemma}
%\newtheorem{lemma}{Lemma}
%\newcommand{\lemmaautorefname}{Lemma}
%\theoremstyle{definition}
%\newtheorem{definition}{Definition}
\algnewcommand\algorithmicswitch{\textbf{switch}}
\algnewcommand\algorithmiccase{\textbf{case}}
\algnewcommand\algorithmicassert{\texttt{assert}}
\algnewcommand\Assert[1]{\State \algorithmicassert(#1)}%
% New "environments"
\algdef{SE}[SWITCH]{Switch}{EndSwitch}[1]{\algorithmicswitch\ #1\ \algorithmicdo}{\algorithmicend\ \algorithmicswitch}%
\algdef{SE}[CASE]{Case}{EndCase}[1]{\algorithmiccase\ #1}{\algorithmicend\ \algorithmiccase}%
\algtext*{EndSwitch}%
\algtext*{EndCase}%


\begin{document}

\maketitle

\begin{abstract}
  Brzozowski introduced in 1964 a beautifully simple algorithm for
  regular expression matching based on the notion of derivatives of
  regular expressions. In 2014, Sulzmann and Lu extended this
  algorithm to not just give a YES/NO answer for whether or not a
  regular expression matches a string, but, in case it does, to also
  answer with \emph{how} it matches the string. This is important for
  applications such as lexing (tokenising a string). The problem is to
  make the algorithm by Sulzmann and Lu fast on all inputs without
  breaking its correctness. We have already developed some
  simplification rules for this, but have not yet proved that they
  preserve the correctness of the algorithm. We also have not yet
  looked at extended regular expressions, such as bounded repetitions,
  negation and back-references.
\end{abstract}

\section{Introduction}


This PhD-project is about regular expression matching and
lexing. Given the maturity of this topic, the reader might wonder:
Surely, regular expressions must have already been studied to death?
What could possibly be \emph{not} known in this area? And surely all
implemented algorithms for regular expression matching are blindingly
fast?

Unfortunately, these preconceptions are not supported by evidence: Take
for example the regular expression $(a^*)^*\,b$ and ask whether
strings of the form $aa..a$ match this regular
expression. Obviously this is not the case---the expected $b$ in the last
position is missing. One would expect that modern regular expression
matching engines can find this out very quickly. Alas, if one tries
this example in JavaScript, Python or Java 8 with strings of 28
$a$'s, one discovers that this decision takes around 30 seconds and
takes considerably longer when adding a few more $a$'s, as the graphs
below show:

\begin{center}
\begin{tabular}{@{}c@{\hspace{0mm}}c@{\hspace{0mm}}c@{}}
\begin{tikzpicture}
\begin{axis}[
    xlabel={$n$},
    x label style={at={(1.05,-0.05)}},
    ylabel={time in secs},
    enlargelimits=false,
    xtick={0,5,...,30},
    xmax=33,
    ymax=35,
    ytick={0,5,...,30},
    scaled ticks=false,
    axis lines=left,
    width=5cm,
    height=4cm, 
    legend entries={JavaScript},  
    legend pos=north west,
    legend cell align=left]
\addplot[red,mark=*, mark options={fill=white}] table {re-js.data};
\end{axis}
\end{tikzpicture}
&
\begin{tikzpicture}
\begin{axis}[
    xlabel={$n$},
    x label style={at={(1.05,-0.05)}},
    %ylabel={time in secs},
    enlargelimits=false,
    xtick={0,5,...,30},
    xmax=33,
    ymax=35,
    ytick={0,5,...,30},
    scaled ticks=false,
    axis lines=left,
    width=5cm,
    height=4cm, 
    legend entries={Python},  
    legend pos=north west,
    legend cell align=left]
\addplot[blue,mark=*, mark options={fill=white}] table {re-python2.data};
\end{axis}
\end{tikzpicture}
&
\begin{tikzpicture}
\begin{axis}[
    xlabel={$n$},
    x label style={at={(1.05,-0.05)}},
    %ylabel={time in secs},
    enlargelimits=false,
    xtick={0,5,...,30},
    xmax=33,
    ymax=35,
    ytick={0,5,...,30},
    scaled ticks=false,
    axis lines=left,
    width=5cm,
    height=4cm, 
    legend entries={Java 8},  
    legend pos=north west,
    legend cell align=left]
\addplot[cyan,mark=*, mark options={fill=white}] table {re-java.data};
\end{axis}
\end{tikzpicture}\\
\multicolumn{3}{c}{Graphs: Runtime for matching $(a^*)^*\,b$ with strings 
           of the form $\underbrace{aa..a}_{n}$.}
\end{tabular}    
\end{center}

\noindent These are clearly abysmal and possibly surprising results. One
would expect these systems to do much better than that---after all,
given a DFA and a string, deciding whether the string is matched by this
DFA should be linear in the size of the string.

Admittedly, the regular expression $(a^*)^*\,b$ is carefully chosen to
exhibit this exponential behaviour. But unfortunately, such regular
expressions are not just a few outliers. They are actually frequent
enough to have a separate name created for them---\emph{evil regular
expressions}. In empirical work, Davis et al.\ report that they have
found thousands of such evil regular expressions in the JavaScript and
Python ecosystems \cite{Davis18}.

\comment{Needs to be consistent: either exponential blowup; or quadratic blowup. Maybe you use the terminology superlinear, like in the Davis et al paper}
This exponential blowup in matching algorithms sometimes causes
considerable grief in real life: for example on 20 July 2016 one evil
regular expression brought the webpage
\href{http://stackexchange.com}{Stack Exchange} to its
knees.\footnote{\url{https://stackstatus.net/post/147710624694/outage-postmortem-july-20-2016}}
In this instance, a regular expression intended to just trim white
spaces from the beginning and the end of a line actually consumed
massive amounts of CPU-resources---causing web servers to
grind to a halt. This happened when a post with 20,000 white spaces was
submitted, but importantly the white spaces were neither at the
beginning nor at the end.
As a result, the regular expression matching
engine needed to backtrack over many choices.
In this example, the time needed to process the string was not
exactly the classical exponential case, but rather $O(n^2)$
with respect to the string length. But this was enough to make the
home page of Stack Exchange respond so slowly to the load balancer
that it assumed there must be some
attack and therefore stopped the servers from responding to
requests. This made the whole site unavailable.
Another very recent example is a global outage of all Cloudflare servers
on 2 July 2019. A poorly written regular expression exhibited
exponential behaviour and exhausted the CPUs that serve HTTP traffic.
Although the outage had several causes, at its heart was a regular
expression that was used to monitor network
traffic.\footnote{\url{https://blog.cloudflare.com/details-of-the-cloudflare-outage-on-july-2-2019/}}

The underlying problem is that many ``real life'' regular expression
matching engines do not use DFAs for matching. This is because they
support regular expressions that are not covered by the classical
automata theory, and in this more general setting there are quite a few
research questions still unanswered and fast algorithms still need to be
developed (for example how to treat bounded repetitions, negation and
back-references efficiently).
%question: dfa can have exponential states. isn't this the actual reason why they do not use dfas?
%how do they avoid dfas exponential states if they use them for fast matching?

There is also another under-researched problem to do with regular
expressions and lexing, i.e.~the process of breaking up strings into
sequences of tokens according to some regular expressions. In this
setting one is not just interested in whether or not a regular
expression matches a string, but also in \emph{how}. Consider for
example a regular expression $r_{key}$ for recognising keywords such as
\textit{if}, \textit{then} and so on; and a regular expression $r_{id}$
for recognising identifiers (say, a single character followed by
characters or numbers). One can then form the compound regular
expression $(r_{key} + r_{id})^*$ and use it to tokenise strings. But
then how should the string \textit{iffoo} be tokenised? It could be
tokenised as a keyword followed by an identifier, or the entire string
as a single identifier. Similarly, how should the string \textit{if} be
tokenised? Both regular expressions, $r_{key}$ and $r_{id}$, would
``fire''---so is it an identifier or a keyword? While in applications
there is a well-known strategy to decide these questions, called POSIX
matching, only relatively recently have precise definitions of what POSIX
matching actually means been formalised
\cite{AusafDyckhoffUrban2016,OkuiSuzuki2010,Vansummeren2006}. Such a
definition has also been given by Sulzmann and Lu \cite{Sulzmann2014},
but the corresponding correctness proof turned out to be faulty
\cite{AusafDyckhoffUrban2016}. Roughly, POSIX matching means matching
the longest initial substring. In the case of a tie, the initial
sub-match is chosen according to some priorities attached to the regular
expressions (e.g.~keywords have a higher priority than identifiers).
This sounds rather simple, but according to Grathwohl et al.\ \cite[Page
36]{CrashCourse2014} this is not the case. They wrote:

\begin{quote}
\it{}``The POSIX strategy is more complicated than the greedy because of 
the dependence on information about the length of matched strings in the 
various subexpressions.''
\end{quote}

\noindent
This is also supported by evidence collected by Kuklewicz
\cite{Kuklewicz} who noticed that a number of POSIX regular expression
matchers calculate incorrect results.

Our focus in this project is on an algorithm introduced by Sulzmann and
Lu in 2014 for regular expression matching according to the POSIX
strategy \cite{Sulzmann2014}. Their algorithm is based on an older
algorithm by Brzozowski from 1964 where he introduced the notion of
derivatives of regular expressions~\cite{Brzozowski1964}. We shall
briefly explain this algorithm next.


\section{The Algorithm by Brzozowski based on Derivatives of Regular
  Expressions}

Suppose (basic) regular expressions are given by the following grammar:
\[ r ::= \ZERO \mid \ONE
         \mid c  
         \mid r_1 \cdot r_2
         \mid r_1 + r_2   
         \mid r^*         
\]

\noindent
The intended meaning of the constructors is as follows: $\ZERO$
cannot match any string, $\ONE$ can match the empty string, the
character regular expression $c$ can match the character $c$, and so
on.

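To make this grammar concrete for the code snippets that follow, the
constructors can be written down as a small datatype, for example in
Scala (this is only an illustrative sketch with our own naming, not a
definitive implementation):

\begin{verbatim}
// basic regular expressions as a Scala datatype (sketch)
abstract class Rexp
case object ZERO extends Rexp                    // matches nothing
case object ONE extends Rexp                     // matches the empty string
case class CHAR(c: Char) extends Rexp            // matches the character c
case class ALT(r1: Rexp, r2: Rexp) extends Rexp  // alternative   r1 + r2
case class SEQ(r1: Rexp, r2: Rexp) extends Rexp  // sequence      r1 . r2
case class STAR(r: Rexp) extends Rexp            // star          r*
\end{verbatim}
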
The ingenious contribution by Brzozowski is the notion of
\emph{derivatives} of regular expressions. The idea behind this
notion is as follows: suppose a regular expression $r$ can match a
string of the form $c\!::\! s$ (that is a list of characters starting
with $c$), what does the regular expression look like that can match
just $s$? Brzozowski gave a neat answer to this question. He started
with the definition of $\nullable$:
\begin{center}
\begin{tabular}{lcl}
$\nullable(\ZERO)$     & $\dn$ & $\mathit{false}$ \\  
$\nullable(\ONE)$      & $\dn$ & $\mathit{true}$ \\
$\nullable(c)$ 	       & $\dn$ & $\mathit{false}$ \\
$\nullable(r_1 + r_2)$ & $\dn$ & $\nullable(r_1) \vee \nullable(r_2)$ \\
$\nullable(r_1\cdot r_2)$  & $\dn$ & $\nullable(r_1) \wedge \nullable(r_2)$ \\
$\nullable(r^*)$       & $\dn$ & $\mathit{true}$ \\
\end{tabular}
\end{center}
This function simply tests whether the empty string is in $L(r)$.
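In code, $\nullable$ is a direct recursion over the datatype sketched
earlier (again only a sketch that follows the definition above):

\begin{verbatim}
// tests whether a regular expression can match the empty string
def nullable(r: Rexp): Boolean = r match {
  case ZERO => false
  case ONE => true
  case CHAR(_) => false
  case ALT(r1, r2) => nullable(r1) || nullable(r2)
  case SEQ(r1, r2) => nullable(r1) && nullable(r2)
  case STAR(_) => true
}
\end{verbatim}
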
He then defined
the following operation on regular expressions, written
$r\backslash c$ (the derivative of $r$ w.r.t.~the character $c$):

\begin{center}
\begin{tabular}{lcl}
$\ZERO \backslash c$ & $\dn$ & $\ZERO$\\  
$\ONE \backslash c$  & $\dn$ & $\ZERO$\\
$d \backslash c$     & $\dn$ & 
	$\mathit{if} \;c = d\;\mathit{then}\;\ONE\;\mathit{else}\;\ZERO$\\
$(r_1 + r_2)\backslash c$     & $\dn$ & $r_1 \backslash c \,+\, r_2 \backslash c$\\
$(r_1 \cdot r_2)\backslash c$ & $\dn$ & $\mathit{if} \, \nullable(r_1)$\\
	&   & $\mathit{then}\;(r_1\backslash c) \cdot r_2 \,+\, r_2\backslash c$\\
	&   & $\mathit{else}\;(r_1\backslash c) \cdot r_2$\\
$(r^*)\backslash c$           & $\dn$ & $(r\backslash c) \cdot r^*$\\
\end{tabular}
\end{center}

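Again, this definition translates almost verbatim into a recursive
function (a sketch under the same assumptions as before):

\begin{verbatim}
// the derivative of a regular expression w.r.t. a character
def der(c: Char, r: Rexp): Rexp = r match {
  case ZERO => ZERO
  case ONE => ZERO
  case CHAR(d) => if (c == d) ONE else ZERO
  case ALT(r1, r2) => ALT(der(c, r1), der(c, r2))
  case SEQ(r1, r2) =>
    if (nullable(r1)) ALT(SEQ(der(c, r1), r2), der(c, r2))
    else SEQ(der(c, r1), r2)
  case STAR(r1) => SEQ(der(c, r1), STAR(r1))
}
\end{verbatim}
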
%Assuming the classic notion of a
%\emph{language} of a regular expression, written $L(\_)$, t

\noindent
The main property of the derivative operation is that

\begin{center}
$c\!::\!s \in L(r)$ holds
if and only if $s \in L(r\backslash c)$.
\end{center}

\noindent
For us the main advantage is that derivatives can be
straightforwardly implemented in any functional programming language,
and are easily definable and reasoned about in theorem provers---the
definitions just consist of inductive datatypes and simple recursive
functions. Moreover, the notion of derivatives can be easily
generalised to cover extended regular expression constructors such as
the not-regular expression, written $\neg\,r$, or bounded repetitions
(for example $r^{\{n\}}$ and $r^{\{n..m\}}$), which cannot be so
straightforwardly realised within the classic automata approach.
For the moment, however, we focus only on the usual basic regular expressions.


Now if we want to find out whether a string $s$ matches a regular
expression $r$, we can build the derivatives of $r$ w.r.t.\ (in succession)
all the characters of the string $s$, and finally test whether the
resulting regular expression can match the empty string. If yes, then
$r$ matches $s$; if no, it does not. To implement this idea
we can generalise the derivative operation to strings like this:

\begin{center}
\begin{tabular}{lcl}
$r \backslash (c\!::\!s) $ & $\dn$ & $(r \backslash c) \backslash s$ \\
$r \backslash [\,] $ & $\dn$ & $r$
\end{tabular}
\end{center}

\noindent
and then define the regular-expression matching algorithm as:
\[
\textit{match}\;s\;r \;\dn\; \nullable(r\backslash s)
\]

\noindent |
|
64 | 373 |
This algorithm looks graphically as follows: |
46 | 374 |
\begin{equation}\label{graph:*} |
375 |
\begin{tikzcd} |
|
376 |
r_0 \arrow[r, "\backslash c_0"] & r_1 \arrow[r, "\backslash c_1"] & r_2 \arrow[r, dashed] & r_n \arrow[r,"\textit{nullable}?"] & \;\textrm{YES}/\textrm{NO} |
|
38 | 377 |
\end{tikzcd} |
46 | 378 |
\end{equation} |
40 | 379 |
|
380 |
\noindent |
|
46 | 381 |
where we start with a regular expression $r_0$, build successive |
382 |
derivatives until we exhaust the string and then use \textit{nullable} |
|
383 |
to test whether the result can match the empty string. It can be |
|
384 |
relatively easily shown that this matcher is correct (that is given |
|
64 | 385 |
an $s = c_0...c_{n-1}$ and an $r_0$, it generates YES if and only if $s \in L(r_0)$). |
46 | 386 |
|
387 |
||
388 |
\section{Values and the Algorithm by Sulzmann and Lu} |
|
38 | 389 |
|
77 | 390 |
One limitation of Brzozowski's algorithm is that it only produces a |
391 |
YES/NO answer for whether a string is being matched by a regular |
|
392 |
expression. Sulzmann and Lu~\cite{Sulzmann2014} extended this algorithm |
|
393 |
to allow generation of an actual matching, called a \emph{value} or |
|
81
a0df84875788
updated and added comments
Christian Urban <urbanc@in.tum.de>
parents:
80
diff
changeset
|
394 |
sometimes also \emph{lexical value}. These values and regular |
77 | 395 |
expressions correspond to each other as illustrated in the following |
396 |
table: |
|
46 | 397 |
|
30 | 398 |
|
\begin{center}
\begin{tabular}{c@{\hspace{20mm}}c}
\begin{tabular}{@{}rrl@{}}
\multicolumn{3}{@{}l}{\textbf{Regular Expressions}}\medskip\\
$r$ & $::=$  & $\ZERO$\\
    & $\mid$ & $\ONE$   \\
    & $\mid$ & $c$          \\
    & $\mid$ & $r_1 \cdot r_2$\\
    & $\mid$ & $r_1 + r_2$   \\
\\
    & $\mid$ & $r^*$         \\
\end{tabular}
&
\begin{tabular}{@{\hspace{0mm}}rrl@{}}
\multicolumn{3}{@{}l}{\textbf{Values}}\medskip\\
$v$ & $::=$  & \\
    &        & $\Empty$   \\
    & $\mid$ & $\Char(c)$          \\
    & $\mid$ & $\Seq\,v_1\, v_2$\\
    & $\mid$ & $\Left(v)$   \\
    & $\mid$ & $\Right(v)$  \\
    & $\mid$ & $\Stars\,[v_1,\ldots\,v_n]$ \\
\end{tabular}
\end{tabular}
\end{center}

\noindent
No value corresponds to $\ZERO$; $\Empty$ corresponds to $\ONE$;
$\Char$ to the character regular expression; $\Seq$ to the sequence
regular expression and so on. The idea of values is to encode a kind of
lexical value for how the sub-parts of a regular expression match the
sub-parts of a string. To see this, suppose a \emph{flatten} operation,
written $|v|$, for values. We can use this function to extract the
underlying string of a value $v$. For example, $|\mathit{Seq} \,
(\textit{Char x}) \, (\textit{Char y})|$ is the string $xy$. Using
flatten, we can describe how values encode \comment{Avoid the notion
parse trees! Also later!!}parse trees: $\Seq\,v_1\, v_2$ encodes a tree with two
child nodes that tell how the string $|v_1| @ |v_2|$ matches the
regex $r_1 \cdot r_2$, whereby $r_1$ matches the substring $|v_1|$ and,
respectively, $r_2$ matches the substring $|v_2|$. Exactly how these two
are matched is contained in the child nodes $v_1$ and $v_2$ of the parent
$\textit{Seq}$.
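
Values and the flatten operation can be sketched in code as follows
(the constructors $\Left$ and $\Right$ are renamed to \texttt{Lft} and
\texttt{Rgt} merely to avoid a clash with Scala's built-in
\texttt{Left}/\texttt{Right}; this is only an illustration):

\begin{verbatim}
// values recording how a regular expression matched a string
abstract class Val
case object Empty extends Val                    // for ONE
case class Chr(c: Char) extends Val              // for CHAR(c)
case class Sequ(v1: Val, v2: Val) extends Val    // for SEQ
case class Lft(v: Val) extends Val               // left alternative taken
case class Rgt(v: Val) extends Val               // right alternative taken
case class Stars(vs: List[Val]) extends Val      // iterations of a STAR

// the flatten operation |v|: the underlying string of a value
def flatten(v: Val): String = v match {
  case Empty => ""
  case Chr(c) => c.toString
  case Lft(v1) => flatten(v1)
  case Rgt(v1) => flatten(v1)
  case Sequ(v1, v2) => flatten(v1) + flatten(v2)
  case Stars(vs) => vs.map(flatten).mkString
}
\end{verbatim}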
|
To give a concrete example of how values work, consider the string $xy$
and the regular expression $(x + (y + xy))^*$. We can view this regular
expression as a tree and if the string $xy$ is matched by two Star
``iterations'', then the $x$ is matched by the left-most alternative in
this tree and the $y$ by the right-left alternative. This suggests
recording this matching as

\begin{center}
$\Stars\,[\Left\,(\Char\,x), \Right(\Left(\Char\,y))]$
\end{center}

\noindent
where $\Stars \; [\ldots]$ records all the
iterations; and $\Left$, respectively $\Right$, which
alternative is used. The value for
matching $xy$ in a single ``iteration'', i.e.~the POSIX value,
would look as follows

\begin{center}
$\Stars\,[\Seq\,(\Char\,x)\,(\Char\,y)]$
\end{center}

\noindent
where $\Stars$ has only a single-element list for the single iteration
and $\Seq$ indicates that $xy$ is matched by a sequence regular
expression.

The contribution of Sulzmann and Lu is an extension of Brzozowski's
algorithm by a second phase (the first phase being building successive
derivatives---see \eqref{graph:*}). In this second phase, a POSIX value 
is generated in case the regular expression matches the string. 
Pictorially, the Sulzmann and Lu algorithm is as follows:

\begin{ceqn}
\begin{equation}\label{graph:2}
\begin{tikzcd}
r_0 \arrow[r, "\backslash c_0"]  \arrow[d] & r_1 \arrow[r, "\backslash c_1"] \arrow[d] & r_2 \arrow[r, dashed] \arrow[d] & r_n \arrow[d, "mkeps" description] \\
v_0           & v_1 \arrow[l,"inj_{r_0} c_0"]                & v_2 \arrow[l, "inj_{r_1} c_1"]              & v_n \arrow[l, dashed]         
\end{tikzcd}
\end{equation}
\end{ceqn}

\noindent
For convenience, we shall employ the following notations: the regular
expression we start with is $r_0$, and the given string $s$ is composed
of characters $c_0 c_1 \ldots c_{n-1}$. In the first phase, from
left to right, we build the derivatives $r_1$, $r_2$, \ldots according
to the characters $c_0$, $c_1$ until we exhaust the string and obtain
the derivative $r_n$. We test whether this derivative is
$\textit{nullable}$ or not. If not, we know the string does not match
$r$ and no value needs to be generated. If yes, we start building the
values incrementally by \emph{injecting} back the characters into the
earlier values $v_n, \ldots, v_0$. This is the second phase of the
algorithm, from right to left. For the first value $v_n$, we call the
function $\textit{mkeps}$, which builds the \comment{Avoid}parse tree
for how the empty string has been matched by the (nullable) regular
expression $r_n$. This function is defined as

\begin{center}
\begin{tabular}{lcl}
$\mkeps(\ONE)$ 		& $\dn$ & $\Empty$ \\
$\mkeps(r_{1}+r_{2})$	& $\dn$ & \textit{if} $\nullable(r_{1})$\\ 
                        & & \textit{then} $\Left(\mkeps(r_{1}))$\\ 
                        & & \textit{else} $\Right(\mkeps(r_{2}))$\\
$\mkeps(r_1\cdot r_2)$ 	& $\dn$ & $\Seq\,(\mkeps\,r_1)\,(\mkeps\,r_2)$\\
$\mkeps(r^*)$	        & $\dn$ & $\Stars\,[]$
\end{tabular}
\end{center}

\noindent There are no cases for $\ZERO$ and $c$, since
these regular expressions cannot match the empty string. Note
also that in case of alternatives we give preference to the
regular expression on the left-hand side. This will become
important later on for determining what value is calculated.
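As a code sketch (with the left-preference visible in the alternative
case; again only an illustration of the definition above):

\begin{verbatim}
// builds a value for how a nullable regular expression matches ""
// (partial, like the definition: no cases for ZERO and CHAR)
def mkeps(r: Rexp): Val = r match {
  case ONE => Empty
  case ALT(r1, r2) =>
    if (nullable(r1)) Lft(mkeps(r1)) else Rgt(mkeps(r2))  // prefer the left
  case SEQ(r1, r2) => Sequ(mkeps(r1), mkeps(r2))
  case STAR(_) => Stars(Nil)
}
\end{verbatim}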
|
After the $\mkeps$-call, we inject back the characters one by one in order to build
the parse tree $v_i$ for how the regex $r_i$ matches the string $s_i$
($s_i = c_i \ldots c_{n-1}$) from the previous parse tree $v_{i+1}$.
After injecting back $n$ characters, we get the parse tree for how $r_0$
matches $s$. For this Sulzmann and Lu defined a function that reverses
the ``chopping off'' of characters during the derivative phase. The
corresponding function is called \emph{injection}, written
$\textit{inj}$; it takes three arguments: the first one is a regular
expression ${r_{i-1}}$, before the character is chopped off, the second
is the character ${c_{i-1}}$ we want to inject, and the
third argument is the value ${v_i}$, into which one wants to inject the
character (it corresponds to the regular expression after the character
has been chopped off). The result of this function is a new value. The
definition of $\textit{inj}$ is as follows:

\begin{center}
\begin{tabular}{l@{\hspace{1mm}}c@{\hspace{1mm}}l}
  $\textit{inj}\,(c)\,c\,\Empty$            & $\dn$ & $\Char\,c$\\
  $\textit{inj}\,(r_1 + r_2)\,c\,\Left(v)$ & $\dn$ & $\Left(\textit{inj}\,r_1\,c\,v)$\\
  $\textit{inj}\,(r_1 + r_2)\,c\,\Right(v)$ & $\dn$ & $\Right(\textit{inj}\,r_2\,c\,v)$\\
  $\textit{inj}\,(r_1 \cdot r_2)\,c\,\Seq(v_1,v_2)$ & $\dn$  & $\Seq(\textit{inj}\,r_1\,c\,v_1,v_2)$\\
  $\textit{inj}\,(r_1 \cdot r_2)\,c\,\Left(\Seq(v_1,v_2))$ & $\dn$  & $\Seq(\textit{inj}\,r_1\,c\,v_1,v_2)$\\
  $\textit{inj}\,(r_1 \cdot r_2)\,c\,\Right(v)$ & $\dn$  & $\Seq(\textit{mkeps}(r_1),\textit{inj}\,r_2\,c\,v)$\\
  $\textit{inj}\,(r^*)\,c\,\Seq(v,\Stars\,vs)$         & $\dn$  & $\Stars((\textit{inj}\,r\,c\,v)\,::\,vs)$\\
\end{tabular}
\end{center}

\noindent This definition is by recursion on the ``shape'' of regular
expressions and values. To understand this definition better, consider
the situation when we build the derivative of the regular expression $r_{i-1}$.
For this we chop off a character from $r_{i-1}$ to form $r_i$. This leaves a
``hole'' in $r_i$ and its corresponding value $v_i$.
To calculate $v_{i-1}$, we need to
locate where that hole is and fill it.
We can find this location by
comparing $r_{i-1}$ and $v_i$. For instance, if $r_{i-1}$ is of shape
$r_a \cdot r_b$, and $v_i$ is of shape $\Left(\Seq(v_1,v_2))$, we know immediately that 
%
\[ (r_a \cdot r_b)\backslash c = (r_a\backslash c) \cdot r_b \,+\, r_b\backslash c,\]

\noindent
because otherwise, if $r_a$ were not nullable, we would have
\[ (r_a \cdot r_b)\backslash c = (r_a\backslash c) \cdot r_b,\]

\noindent
and the value $v_i$ would have to be of shape $\Seq(\ldots)$, contradicting the fact that
$v_i$ is actually of shape $\Left(\ldots)$. Furthermore, since $v_i$ is of shape
$\Left(\ldots)$ instead of $\Right(\ldots)$, we know that the left
branch of \[ (r_a \cdot r_b)\backslash c =
\underline{(r_a\backslash c) \cdot r_b} \,+\, r_b\backslash c\] (underlined)
is taken instead of the right one. This means $c$ was chopped off
from $r_a$ rather than $r_b$.
We have therefore found out
that the hole is in $r_a$. So we recursively call $\inj\,
r_a\,c\,v_1$ to fill that hole in $v_1$. After injection, the value
$v_{i-1}$ for $r_{i-1} = r_a \cdot r_b$ should be $\Seq\,(\inj\,r_a\,c\,v_1)\,v_2$.
Other clauses can be understood in a similar way.
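The definition of $\textit{inj}$ can be sketched in code as a pattern
match on the pair of regular expression and value (using the datatypes
sketched earlier; only an illustration, partial like the definition):

\begin{verbatim}
// injects character c back into value v, guided by regular expression r
def inj(r: Rexp, c: Char, v: Val): Val = (r, v) match {
  case (STAR(r1), Sequ(v1, Stars(vs))) => Stars(inj(r1, c, v1) :: vs)
  case (SEQ(r1, _), Sequ(v1, v2)) => Sequ(inj(r1, c, v1), v2)
  case (SEQ(r1, _), Lft(Sequ(v1, v2))) => Sequ(inj(r1, c, v1), v2)
  case (SEQ(r1, r2), Rgt(v2)) => Sequ(mkeps(r1), inj(r2, c, v2))
  case (ALT(r1, _), Lft(v1)) => Lft(inj(r1, c, v1))
  case (ALT(_, r2), Rgt(v2)) => Rgt(inj(r2, c, v2))
  case (CHAR(d), Empty) => Chr(d)
}
\end{verbatim}
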
%\comment{Other word: insight?}
The following example gives an insight into $\textit{inj}$'s effect and
how Sulzmann and Lu's algorithm works as a whole. Suppose we have a
regular expression $((((a+b)+ab)+c)+abc)^*$, and want to match it
against the string $abc$ (when $abc$ is written as a regular expression,
the standard way of expressing it is $a \cdot (b \cdot c)$, but we
usually omit the parentheses and dots here for better readability). This
algorithm returns a POSIX value, which means it will produce the longest
matching. Consequently, it matches the string $abc$ in one star
iteration, using the longest alternative $abc$ in the sub-expression
under the star (we shall use $r$ to denote this sub-expression for
conciseness):

\[r := ((((a+b)+ab)+c)+\underbrace{abc}_{\textit{longest alternative}})\]

\noindent
Before $\textit{inj}$ is called, our lexer first builds derivatives using the
string $abc$ (we simplified some regular expressions like $\ZERO \cdot
b$ to $\ZERO$ for conciseness; we also omit parentheses if they are
clear from the context):

%Similarly, we allow
%$\textit{ALT}$ to take a list of regular expressions as an argument
%instead of just 2 operands to reduce the nested depth of
%$\textit{ALT}$

\begin{center}
\begin{tabular}{lcl}
$r^*$ & $\xrightarrow{\backslash a}$ & $r_1 = (\ONE+\ZERO+\ONE \cdot b + \ZERO + \ONE \cdot b \cdot c) \cdot r^*$\\
      & $\xrightarrow{\backslash b}$ & $r_2 = (\ZERO+\ZERO+\ONE \cdot \ONE + \ZERO + \ONE \cdot \ONE \cdot c) \cdot r^* +(\ZERO+\ONE+\ZERO + \ZERO + \ZERO) \cdot r^*$\\
      & $\xrightarrow{\backslash c}$ & $r_3 = ((\ZERO+\ZERO+\ZERO + \ZERO + \ONE \cdot \ONE \cdot \ONE) \cdot r^* + (\ZERO+\ZERO+\ZERO + \ONE + \ZERO) \cdot r^*) + $\\
      &                              & $\phantom{r_3 = (} ((\ZERO+\ONE+\ZERO + \ZERO + \ZERO) \cdot r^* + (\ZERO+\ZERO+\ZERO + \ONE + \ZERO) \cdot r^* )$
\end{tabular}
\end{center}

\noindent
Since $r_3$ is nullable, we can call $\textit{mkeps}$
to construct a parse tree for how $r_3$ matched the string $abc$.
This function gives the following value $v_3$:

\begin{center}
$\Left(\Left(\Seq(\Right(\Seq(\Empty, \Seq(\Empty,\Empty))), \Stars [])))$
\end{center}

The outer $\Left(\Left(\ldots))$ tells us that the leftmost nullable part of $r_3$
is the underlined sub-expression:

\begin{center}\comment{better layout}
$( \underline{(\ZERO+\ZERO+\ZERO+ \ZERO+ \ONE \cdot \ONE \cdot \ONE) \cdot r^*} + (\ZERO+\ZERO+\ZERO + \ONE + \ZERO)
\cdot r^*) +((\ZERO+\ONE+\ZERO + \ZERO + \ZERO) \cdot r^*+(\ZERO+\ZERO+\ZERO + \ONE + \ZERO) \cdot r^* ).$
\end{center}

\noindent
Note that the leftmost location of the term $(\ZERO+\ZERO+\ZERO + \ZERO + \ONE \cdot \ONE \cdot
\ONE) \cdot r^*$ (which corresponds to the initial sub-match $abc$) allows
$\textit{mkeps}$ to pick it up because $\textit{mkeps}$ is defined to always choose the
left alternative when it is nullable. In the case of this example, $abc$ is
preferred over $a$ or $ab$. This $\Left(\Left(\ldots))$ location is
generated by two applications of the splitting clause

\begin{center}
$(r_1 \cdot r_2)\backslash c \;=\; (r_1\backslash c) \cdot r_2 \,+\, r_2\backslash c \qquad (\text{when } r_1 \text{ is nullable}).$
\end{center}

\noindent
By this clause, we put $(r_1 \backslash c) \cdot r_2 $ at the
\textit{front} and $r_2 \backslash c$ at the \textit{back}. This
allows $\textit{mkeps}$ to always pick, among two matches, the one with the longer
initial sub-match. Removing the outside $\Left(\Left(...))$, the inside
sub-value

\begin{center}
$\Seq(\Right(\Seq(\Empty, \Seq(\Empty, \Empty))), \Stars [])$
\end{center}

\noindent
tells us how the empty string $[]$ is matched with $(\ZERO+\ZERO+\ZERO + \ZERO + \ONE \cdot
\ONE \cdot \ONE) \cdot r^*$. We match $[]$ by a sequence of two nullable regular
expressions. The first one is an alternative, where we take the rightmost
alternative---whose language contains the empty string. The second
nullable regular expression is a Kleene star. $\Stars$ tells us how it
generates the nullable regular expression: by zero iterations to form
$\ONE$. Now $\textit{inj}$ injects characters back and incrementally
builds a parse tree based on $v_3$. Using the value $v_3$, the character
$c$, and the regular expression $r_2$, we can recover how $r_2$ matched
the string $[c]$: $\textit{inj} \; r_2 \; c \; v_3$ gives us
\begin{center}
$v_2 = \Left(\Seq(\Right(\Seq(\Empty, \Seq(\Empty, c))), \Stars [])),$
\end{center}
which tells us how $r_2$ matched $[c]$. After this we inject back the character $b$, and get
\begin{center}
$v_1 = \Seq(\Right(\Seq(\Empty, \Seq(b, c))), \Stars [])$
\end{center}
for how
\begin{center}
$r_1= (\ONE+\ZERO+\ONE \cdot b + \ZERO + \ONE \cdot b \cdot c) \cdot r^*$
\end{center}
matched the string $bc$ before it was split into two pieces.
Finally, after injecting the character $a$ back into $v_1$,
we get the parse tree
\begin{center}
$v_0= \Stars [\Right(\Seq(a, \Seq(b, c)))]$
\end{center}
for how $r^*$ matched $abc$. This completes the algorithm.

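The two phases together can be sketched as one recursive function: the
derivatives are taken until the string is exhausted, and then $\mkeps$
and the $\textit{inj}$-calls run on the way back (again only an
illustration built on the earlier sketches):

\begin{verbatim}
// Sulzmann and Lu's two-phase lexer: derivatives forward, injections backward
def lexer(r: Rexp, s: List[Char]): Option[Val] = s match {
  case Nil => if (nullable(r)) Some(mkeps(r)) else None
  case c :: cs => lexer(der(c, r), cs).map(v => inj(r, c, v))
}
\end{verbatim}
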
%We omit the details of injection function, which is provided by Sulzmann and Lu's paper \cite{Sulzmann2014}.
Readers might have noticed that the parse tree information is actually
already available when doing derivatives. For example, immediately after
the operation $\backslash a$ we know that if we want to match a string
that starts with $a$, we can take the initial match to be either

\begin{center}
\begin{enumerate}
    \item[1)] just $a$ or
    \item[2)] the string $ab$ or 
    \item[3)] the string $abc$.
\end{enumerate}
\end{center}

\noindent
In order to differentiate between these choices, we just need to
remember their positions---$a$ is on the left, $ab$ is in the middle,
and $abc$ is on the right. Which of these alternatives is chosen
later does not affect their relative position because the algorithm does
not change this order. If this parsing information can be determined and
does not change because of later derivatives, there is no point in
traversing this information twice. This leads to an optimisation---if we
store the information for parse trees inside the regular expression,
update it when we take derivatives of them, and collect the information
when finished with derivatives and call $\textit{mkeps}$ for deciding which
branch is POSIX, we can generate the parse tree in one pass, instead of
doing the remaining $n$ injections. This leads to Sulzmann and Lu's novel
idea of using bitcodes in derivatives.

In the next section, we shall focus on the bitcoded algorithm and the
process of simplification of regular expressions. This is needed in
order to obtain \emph{fast} versions of Brzozowski's, and Sulzmann
and Lu's algorithms. This is where the PhD-project aims to advance the
state-of-the-art.

\section{Simplification of Regular Expressions}

Using bitcodes to guide parsing is not a novel idea. It was applied to
context-free grammars and then adapted by Henglein and Nielson for
efficient regular expression \comment{?}parsing using DFAs~\cite{nielson11bcre}.
Sulzmann and Lu took this idea of bitcodes a step further by integrating
bitcodes into derivatives. The reason why we want to use bitcodes in
this project is that we want to introduce more aggressive
simplification rules in order to keep the size of derivatives small
throughout. This is because the main drawback of building successive
derivatives according to Brzozowski's definition is that they can grow
very quickly in size. This is mainly due to the fact that the derivative
operation often generates ``useless'' $\ZERO$s and $\ONE$s in
derivatives. As a result, if implemented naively both algorithms by
Brzozowski and by Sulzmann and Lu are excruciatingly slow. For example,
when starting with the regular expression $(a + aa)^*$ and building 12
successive derivatives w.r.t.~the character $a$, one obtains a
derivative regular expression with more than 8000 nodes (when viewed as
a tree). Operations like $\textit{der}$ and $\nullable$ need to traverse
such trees and consequently the bigger the size of the derivative the
slower the algorithm.

Fortunately, one can simplify regular expressions after each derivative
step. Various simplifications of regular expressions are possible, such
as the simplification of $\ZERO + r$, $r + \ZERO$, $\ONE\cdot r$, $r
\cdot \ONE$, and $r + r$ to just $r$. These simplifications do not
affect the answer for whether a regular expression matches a string or
not, but fortunately also do not affect the POSIX strategy of how
regular expressions match strings---although the latter is much harder
to establish. Some initial results in this regard have been
obtained in \cite{AusafDyckhoffUrban2016}.
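The rules listed above can be sketched as a single bottom-up
simplification pass over plain regular expressions (for the actual
algorithm the simplification has to be performed on annotated regular
expressions, so this is only meant to illustrate the idea):

\begin{verbatim}
// one bottom-up simplification pass for the rules mentioned above
def simp(r: Rexp): Rexp = r match {
  case ALT(r1, r2) => (simp(r1), simp(r2)) match {
    case (ZERO, s2) => s2                                  // 0 + r  ~>  r
    case (s1, ZERO) => s1                                  // r + 0  ~>  r
    case (s1, s2)   => if (s1 == s2) s1 else ALT(s1, s2)   // r + r  ~>  r
  }
  case SEQ(r1, r2) => (simp(r1), simp(r2)) match {
    case (ZERO, _)  => ZERO                                // 0 . r  ~>  0
    case (_, ZERO)  => ZERO                                // r . 0  ~>  0
    case (ONE, s2)  => s2                                  // 1 . r  ~>  r
    case (s1, ONE)  => s1                                  // r . 1  ~>  r
    case (s1, s2)   => SEQ(s1, s2)
  }
  case r1 => r1
}
\end{verbatim}
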
Unfortunately, the simplification rules outlined above are not
sufficient to prevent a size explosion in all cases. We
believe a tighter bound can be achieved that prevents an explosion in
\emph{all} cases. Such a tighter bound is suggested by work of Antimirov who
proved that (partial) derivatives can be bounded by the number of
characters contained in the initial regular expression
\cite{Antimirov95}. He defined the \emph{partial derivatives} of regular
expressions as follows:

\begin{center}
\begin{tabular}{lcl}
$\textit{pder} \; c \; \ZERO$ & $\dn$ & $\emptyset$\\
$\textit{pder} \; c \; \ONE$ & $\dn$ & $\emptyset$ \\
$\textit{pder} \; c \; d$ & $\dn$ & $\textit{if} \; c \,=\, d \; \{ \ONE \} \; \textit{else} \; \emptyset$ \\ 
$\textit{pder} \; c \; r_1+r_2$ & $\dn$ & $\textit{pder} \; c \; r_1 \cup \textit{pder} \; c \; r_2$ \\
$\textit{pder} \; c \; r_1 \cdot r_2$ & $\dn$ & $\textit{if} \; \nullable \; r_1 $\\
     & & $\textit{then} \; \{ r \cdot r_2 \mid r \in \textit{pder} \; c \; r_1 \} \cup \textit{pder} \; c \; r_2 \;$\\
     & & $\textit{else} \; \{ r \cdot r_2 \mid r \in \textit{pder} \; c \; r_1 \} $ \\ 
$\textit{pder} \; c \; r^*$ & $\dn$ & $ \{ r' \cdot r^* \mid r' \in \textit{pder} \; c \; r \} $ \\  
\end{tabular}
\end{center}

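In code, partial derivatives return a \emph{set} of regular expressions
rather than a single one (a sketch, following the definition above):

\begin{verbatim}
// Antimirov's partial derivatives: a set of regular expressions
def pder(c: Char, r: Rexp): Set[Rexp] = r match {
  case ZERO => Set()
  case ONE => Set()
  case CHAR(d) => if (c == d) Set(ONE) else Set()
  case ALT(r1, r2) => pder(c, r1) ++ pder(c, r2)
  case SEQ(r1, r2) =>
    if (nullable(r1)) pder(c, r1).map(p => SEQ(p, r2)) ++ pder(c, r2)
    else pder(c, r1).map(p => SEQ(p, r2))
  case STAR(r1) => pder(c, r1).map(p => SEQ(p, STAR(r1)))
}
\end{verbatim}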
|
\noindent
A partial derivative of a regular expression $r$ is essentially a set of
regular expressions that are either $r$'s children expressions or a
concatenation of them. Antimirov has proved a tight bound on the size of
\emph{all} partial derivatives no matter what the string looks like.
Roughly speaking, the size will be quadruple in the size of the regular
expression.\comment{Are you sure? I have just proved that the sum of
sizes in $pder$ is less or equal $(1 + size\;r)^3$. And this is surely
not the best bound.} If we want the size of derivatives in Sulzmann and
Lu's algorithm to stay at or below this bound, we would need more
aggressive simplifications. Essentially we need to delete useless
$\ZERO$s and $\ONE$s, as well as delete duplicates whenever possible.

|
71 | 782 |
For example, the parentheses in $(a+b) \cdot c + bc$ can be opened up to |
81
a0df84875788
updated and added comments
Christian Urban <urbanc@in.tum.de>
parents:
80
diff
changeset
|
783 |
get $a\cdot c + b \cdot c + b \cdot c$, and then simplified to just $a |
a0df84875788
updated and added comments
Christian Urban <urbanc@in.tum.de>
parents:
80
diff
changeset
|
784 |
\cdot c + b \cdot c$. Another example is simplifying $(a^*+a) + (a^*+ |
a0df84875788
updated and added comments
Christian Urban <urbanc@in.tum.de>
parents:
80
diff
changeset
|
785 |
\ONE) + (a +\ONE)$ to just $a^*+a+\ONE$. Adding these more aggressive |
a0df84875788
updated and added comments
Christian Urban <urbanc@in.tum.de>
parents:
80
diff
changeset
|
786 |
simplification rules helps us to achieve the same size bound as that of |
a0df84875788
updated and added comments
Christian Urban <urbanc@in.tum.de>
parents:
80
diff
changeset
|
787 |
the partial derivatives. |
a0df84875788
updated and added comments
Christian Urban <urbanc@in.tum.de>
parents:
80
diff
changeset
|
788 |
|
a0df84875788
updated and added comments
Christian Urban <urbanc@in.tum.de>
parents:
80
diff
changeset
|
789 |
In order to implement the idea of ``spilling out alternatives'' and to |
a0df84875788
updated and added comments
Christian Urban <urbanc@in.tum.de>
parents:
80
diff
changeset
|
790 |
make them compatible with the $\text{inj}$-mechanism, we use |
a0df84875788
updated and added comments
Christian Urban <urbanc@in.tum.de>
parents:
80
diff
changeset
|
791 |
\emph{bitcodes}. Bits and bitcodes (lists of bits) are just: |
a0df84875788
updated and added comments
Christian Urban <urbanc@in.tum.de>
parents:
80
diff
changeset
|
792 |
|
71 | 793 |
%This allows us to prove a tight |
794 |
%bound on the size of regular expression during the running time of the |
|
795 |
%algorithm if we can establish the connection between our simplification |
|
796 |
%rules and partial derivatives. |
|
35 | 797 |
|
798 |
%We believe, and have generated test |
|
799 |
%data, that a similar bound can be obtained for the derivatives in |
|
800 |
%Sulzmann and Lu's algorithm. Let us give some details about this next. |
|
30 | 801 |
|
72 | 802 |
|
67 | 803 |
\begin{center} |
77 | 804 |
$b ::= S \mid Z \qquad |
43 | 805 |
bs ::= [] \mid b:bs |
67 | 806 |
$ |
807 |
\end{center} |
|
77 | 808 |
|
809 |
\noindent |
|
81
a0df84875788
updated and added comments
Christian Urban <urbanc@in.tum.de>
parents:
80
diff
changeset
|
810 |
The $S$ and $Z$ are arbitrary names for the bits in order to avoid |
77 | 811 |
confusion with the regular expressions $\ZERO$ and $\ONE$. Bitcodes (or |
812 |
bit-lists) can be used to encode values (or incomplete values) in a |
|
813 |
compact form. This can be straightforwardly seen in the following |
|
814 |
coding function from values to bitcodes: |
|
815 |
||
\begin{center}
\begin{tabular}{lcl}
  $\textit{code}(\Empty)$ & $\dn$ & $[]$\\
  $\textit{code}(\Char\,c)$ & $\dn$ & $[]$\\
  $\textit{code}(\Left\,v)$ & $\dn$ & $\Z :: code(v)$\\
  $\textit{code}(\Right\,v)$ & $\dn$ & $\S :: code(v)$\\
  $\textit{code}(\Seq\,v_1\,v_2)$ & $\dn$ & $code(v_1) \,@\, code(v_2)$\\
  $\textit{code}(\Stars\,[])$ & $\dn$ & $[\Z]$\\
  $\textit{code}(\Stars\,(v\!::\!vs))$ & $\dn$ & $\S :: code(v) \;@\;
     code(\Stars\,vs)$
\end{tabular}
\end{center}

\noindent
Here $\textit{code}$ encodes a value into a bitcode by converting
$\Left$ into $\Z$, $\Right$ into $\S$, the start point of a non-empty
star iteration into $\S$, and the border where a local star terminates
into $\Z$. This coding is lossy, as it throws away the information about
characters, and also does not encode the ``boundary'' between two
sequence values. Moreover, with only the bitcode we cannot even tell
whether the $\S$s and $\Z$s are for $\Left/\Right$ or $\Stars$. The
reason for choosing this compact way of storing information is that
bits are small and can be easily manipulated and ``moved around'' in a
regular expression. In order to recover values, we will
need the corresponding regular expression as extra information. This
means the decoding function is defined as:

%\begin{definition}[Bitdecoding of Values]\mbox{}
\begin{center}
\begin{tabular}{@{}l@{\hspace{1mm}}c@{\hspace{1mm}}l@{}}
  $\textit{decode}'\,bs\,(\ONE)$ & $\dn$ & $(\Empty, bs)$\\
  $\textit{decode}'\,bs\,(c)$ & $\dn$ & $(\Char\,c, bs)$\\
  $\textit{decode}'\,(\Z\!::\!bs)\;(r_1 + r_2)$ & $\dn$ &
     $\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r_1\;\textit{in}\;
       (\Left\,v, bs_1)$\\
  $\textit{decode}'\,(\S\!::\!bs)\;(r_1 + r_2)$ & $\dn$ &
     $\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r_2\;\textit{in}\;
       (\Right\,v, bs_1)$\\
  $\textit{decode}'\,bs\;(r_1\cdot r_2)$ & $\dn$ &
     $\textit{let}\,(v_1, bs_1) = \textit{decode}'\,bs\,r_1\;\textit{in}$\\
  & & $\textit{let}\,(v_2, bs_2) = \textit{decode}'\,bs_1\,r_2$\\
  & & \hspace{35mm}$\textit{in}\;(\Seq\,v_1\,v_2, bs_2)$\\
  $\textit{decode}'\,(\Z\!::\!bs)\,(r^*)$ & $\dn$ & $(\Stars\,[], bs)$\\
  $\textit{decode}'\,(\S\!::\!bs)\,(r^*)$ & $\dn$ &
     $\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r\;\textit{in}$\\
  & & $\textit{let}\,(\Stars\,vs, bs_2) = \textit{decode}'\,bs_1\,r^*$\\
  & & \hspace{35mm}$\textit{in}\;(\Stars\,v\!::\!vs, bs_2)$\bigskip\\

  $\textit{decode}\,bs\,r$ & $\dn$ &
     $\textit{let}\,(v, bs') = \textit{decode}'\,bs\,r\;\textit{in}$\\
  & & $\textit{if}\;bs' = []\;\textit{then}\;\textit{Some}\,v\;
       \textit{else}\;\textit{None}$
\end{tabular}
\end{center}
%\end{definition}
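
\noindent
To make the coding functions concrete, here is a rough (and unverified)
Scala sketch of bits, values and the $\textit{code}$/$\textit{decode}$ pair.
The datatype and constructor names (\texttt{Bit}, \texttt{Val}, \texttt{Lft},
\texttt{Rgt} and so on) are our own choice for this sketch and not necessarily
the ones used in our actual implementation; \texttt{decode2} plays the role of
$\textit{decode}'$.

\begin{verbatim}
// bits
sealed trait Bit
case object Z extends Bit
case object S extends Bit

// plain regular expressions (needed by decode)
sealed trait Rexp
case object ZERO extends Rexp
case object ONE extends Rexp
case class CHAR(c: Char) extends Rexp
case class ALT(r1: Rexp, r2: Rexp) extends Rexp
case class SEQ(r1: Rexp, r2: Rexp) extends Rexp
case class STAR(r: Rexp) extends Rexp

// values (Lft/Rgt correspond to Left/Right in the text)
sealed trait Val
case object Empty extends Val
case class Chr(c: Char) extends Val
case class Sequ(v1: Val, v2: Val) extends Val
case class Lft(v: Val) extends Val
case class Rgt(v: Val) extends Val
case class Stars(vs: List[Val]) extends Val

// lossy encoding of a value as a bit-list
def code(v: Val): List[Bit] = v match {
  case Empty           => Nil
  case Chr(_)          => Nil                   // characters are not recorded
  case Lft(v1)         => Z :: code(v1)
  case Rgt(v1)         => S :: code(v1)
  case Sequ(v1, v2)    => code(v1) ::: code(v2) // no marker between the parts
  case Stars(Nil)      => List(Z)               // Z closes a star
  case Stars(v1 :: vs) => S :: code(v1) ::: code(Stars(vs))
}

// decode2 consumes bits guided by the regular expression and returns the
// decoded value together with the remaining bits
// (ill-formed bit-sequences are not handled, just like in decode')
def decode2(bs: List[Bit], r: Rexp): (Val, List[Bit]) = (bs, r) match {
  case (bs1, ONE)             => (Empty, bs1)
  case (bs1, CHAR(c))         => (Chr(c), bs1)
  case (Z :: bs1, ALT(r1, _)) => { val (v, rest) = decode2(bs1, r1); (Lft(v), rest) }
  case (S :: bs1, ALT(_, r2)) => { val (v, rest) = decode2(bs1, r2); (Rgt(v), rest) }
  case (bs1, SEQ(r1, r2))     =>
    val (v1, bs2) = decode2(bs1, r1)
    val (v2, bs3) = decode2(bs2, r2)
    (Sequ(v1, v2), bs3)
  case (Z :: bs1, STAR(_))    => (Stars(Nil), bs1)
  case (S :: bs1, STAR(r1))   =>
    val (v, bs2) = decode2(bs1, r1)
    val (Stars(vs), bs3) = decode2(bs2, STAR(r1))
    (Stars(v :: vs), bs3)
}

def decode(bs: List[Bit], r: Rexp): Option[Val] = decode2(bs, r) match {
  case (v, Nil) => Some(v)
  case _        => None
}
\end{verbatim}
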

Sulzmann and Lu integrated the bitcodes into regular expressions to
create annotated regular expressions \cite{Sulzmann2014}.
\emph{Annotated regular expressions} are defined by the following
grammar:

\begin{center}
\begin{tabular}{lcl}
  $\textit{a}$ & $::=$ & $\textit{ZERO}$\\
  & $\mid$ & $\textit{ONE}\;\;bs$\\
  & $\mid$ & $\textit{CHAR}\;\;bs\,c$\\
  & $\mid$ & $\textit{ALT}\;\;bs\,a_1 \, a_2$\\
  & $\mid$ & $\textit{SEQ}\;\;bs\,a_1\,a_2$\\
  & $\mid$ & $\textit{STAR}\;\;bs\,a$
\end{tabular}
\end{center}
%(in \textit{ALT})

\noindent
where $bs$ stands for bitcodes, and $a$ for $\mathbf{a}$nnotated regular
expressions. We will show that these bitcodes encode information about
the (POSIX) value that should be generated by the Sulzmann and Lu
algorithm.
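
\noindent
As a small sketch, the grammar above corresponds to the following Scala
datatype, reusing the \texttt{Bit} type from the sketch earlier (the
\texttt{A}-prefix on the constructor names is only there to keep them apart
from the un-annotated constructors and is our own naming choice):

\begin{verbatim}
// annotated regular expressions: every constructor except AZERO
// carries a list of bits
sealed trait ARexp
case object AZERO extends ARexp
case class AONE(bs: List[Bit]) extends ARexp
case class ACHAR(bs: List[Bit], c: Char) extends ARexp
case class AALT(bs: List[Bit], a1: ARexp, a2: ARexp) extends ARexp
case class ASEQ(bs: List[Bit], a1: ARexp, a2: ARexp) extends ARexp
case class ASTAR(bs: List[Bit], a: ARexp) extends ARexp
\end{verbatim}
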

To do lexing using annotated regular expressions, we shall first
transform the usual (un-annotated) regular expressions into annotated
regular expressions. This operation is called \emph{internalisation} and
is defined as follows:

%\begin{definition}
\begin{center}
\begin{tabular}{lcl}
  $(\ZERO)^\uparrow$ & $\dn$ & $\textit{ZERO}$\\
  $(\ONE)^\uparrow$ & $\dn$ & $\textit{ONE}\,[]$\\
  $(c)^\uparrow$ & $\dn$ & $\textit{CHAR}\,[]\,c$\\
  $(r_1 + r_2)^\uparrow$ & $\dn$ &
     $\textit{ALT}\;[]\,(\textit{fuse}\,[\Z]\,r_1^\uparrow)\,
                        (\textit{fuse}\,[\S]\,r_2^\uparrow)$\\
  $(r_1\cdot r_2)^\uparrow$ & $\dn$ &
     $\textit{SEQ}\;[]\,r_1^\uparrow\,r_2^\uparrow$\\
  $(r^*)^\uparrow$ & $\dn$ &
     $\textit{STAR}\;[]\,r^\uparrow$\\
\end{tabular}
\end{center}
%\end{definition}

\noindent
We use up arrows here to indicate that the basic un-annotated regular
expressions are ``lifted up'' into something slightly more complex. In the
fourth clause, $\textit{fuse}$ is an auxiliary function that helps to
attach bits to the front of an annotated regular expression. Its
definition is as follows:

\begin{center}
\begin{tabular}{lcl}
  $\textit{fuse}\;bs\,(\textit{ZERO})$ & $\dn$ & $\textit{ZERO}$\\
  $\textit{fuse}\;bs\,(\textit{ONE}\,bs')$ & $\dn$ &
     $\textit{ONE}\,(bs\,@\,bs')$\\
  $\textit{fuse}\;bs\,(\textit{CHAR}\,bs'\,c)$ & $\dn$ &
     $\textit{CHAR}\,(bs\,@\,bs')\,c$\\
  $\textit{fuse}\;bs\,(\textit{ALT}\,bs'\,a_1\,a_2)$ & $\dn$ &
     $\textit{ALT}\,(bs\,@\,bs')\,a_1\,a_2$\\
  $\textit{fuse}\;bs\,(\textit{SEQ}\,bs'\,a_1\,a_2)$ & $\dn$ &
     $\textit{SEQ}\,(bs\,@\,bs')\,a_1\,a_2$\\
  $\textit{fuse}\;bs\,(\textit{STAR}\,bs'\,a)$ & $\dn$ &
     $\textit{STAR}\,(bs\,@\,bs')\,a$
\end{tabular}
\end{center}
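
\noindent
Again as a rough sketch (reusing the datatypes from the sketches above),
$\textit{fuse}$ and internalisation might look as follows in Scala, where
\texttt{internalise} plays the role of the up-arrow operation:

\begin{verbatim}
// prepend the bits bs to the top-most bit-list of an annotated regex
def fuse(bs: List[Bit], a: ARexp): ARexp = a match {
  case AZERO             => AZERO
  case AONE(bs1)         => AONE(bs ::: bs1)
  case ACHAR(bs1, c)     => ACHAR(bs ::: bs1, c)
  case AALT(bs1, a1, a2) => AALT(bs ::: bs1, a1, a2)
  case ASEQ(bs1, a1, a2) => ASEQ(bs ::: bs1, a1, a2)
  case ASTAR(bs1, a1)    => ASTAR(bs ::: bs1, a1)
}

// lift an un-annotated regular expression into an annotated one;
// the Z/S bits record which branch of an alternative was taken
def internalise(r: Rexp): ARexp = r match {
  case ZERO        => AZERO
  case ONE         => AONE(Nil)
  case CHAR(c)     => ACHAR(Nil, c)
  case ALT(r1, r2) => AALT(Nil, fuse(List(Z), internalise(r1)),
                                fuse(List(S), internalise(r2)))
  case SEQ(r1, r2) => ASEQ(Nil, internalise(r1), internalise(r2))
  case STAR(r1)    => ASTAR(Nil, internalise(r1))
}
\end{verbatim}
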
\noindent
After internalising the regular expression, we perform successive
derivative operations on the annotated regular expressions. This
derivative operation is the same as what we had previously for the
basic regular expressions, except that we need to take care of
the bitcodes:\comment{You need to be consistent with ALTS and ALT; ALT
is just an abbreviation; derivations and so on are defined for
ALTS}\comment{no this is not the case, ALT for 2 regexes, ALTS for a
list...\textcolor{blue}{This does not make sense to me. CU}}

%\begin{definition}{bder}
\begin{center}
\begin{tabular}{@{}lcl@{}}
  $(\textit{ZERO})\,\backslash c$ & $\dn$ & $\textit{ZERO}$\\
  $(\textit{ONE}\;bs)\,\backslash c$ & $\dn$ & $\textit{ZERO}$\\
  $(\textit{CHAR}\;bs\,d)\,\backslash c$ & $\dn$ &
     $\textit{if}\;c=d\;\textit{then}\;
      \textit{ONE}\;bs\;\textit{else}\;\textit{ZERO}$\\
  $(\textit{ALT}\;bs\,a_1\,a_2)\,\backslash c$ & $\dn$ &
     $\textit{ALT}\;bs\,(a_1\,\backslash c)\,(a_2\,\backslash c)$\\
  $(\textit{SEQ}\;bs\,a_1\,a_2)\,\backslash c$ & $\dn$ &
     $\textit{if}\;\textit{bnullable}\,a_1$\\
  & &$\textit{then}\;\textit{ALT}\,bs\,(\textit{SEQ}\,[]\,(a_1\,\backslash c)\,a_2)$\\
  & &$\phantom{\textit{then}\;\textit{ALT}\,bs\,}(\textit{fuse}\,(\textit{bmkeps}\,a_1)\,(a_2\,\backslash c))$\\
  & &$\textit{else}\;\textit{SEQ}\,bs\,(a_1\,\backslash c)\,a_2$\\
  $(\textit{STAR}\,bs\,a)\,\backslash c$ & $\dn$ &
     $\textit{SEQ}\;bs\,(\textit{fuse}\, [\S]\, (a\,\backslash c))\,
      (\textit{STAR}\,[]\,a)$
\end{tabular}
\end{center}
%\end{definition}

\noindent
For instance, when we unfold $\textit{STAR} \; bs \; a$ into a sequence,
we need to attach an additional bit $\S$ to the front of $a \backslash c$
to indicate that there is one more star iteration. Also the $\textit{SEQ}$ clause
is more subtle when $a_1$ is $\textit{bnullable}$ (here
\textit{bnullable} is exactly the same as $\textit{nullable}$, except
that it is defined for annotated regular expressions; we therefore omit its
definition). Assume that $\textit{bmkeps}$ correctly extracts the bitcode for how
$a_1$ matches the string prior to character $c$ (more on this later),
then the right branch of the $\textit{ALT}$, which is $\textit{fuse} \;
(\textit{bmkeps}\,a_1)\, (a_2 \backslash c)$, will collapse the regular
expression $a_1$ (as it has already been fully matched) and store the
parsing information at the head of the regular expression $a_2 \backslash c$
by fusing it to the front. The bit sequence $bs$, which was initially
attached to the head of the $\textit{SEQ}$, has now been elevated to the
top-level of the $\textit{ALT}$, as this information will be needed whichever
way the $\textit{SEQ}$ is matched---no matter whether $c$ belongs
to $a_1$ or $a_2$. After building these derivatives and maintaining all
the lexing information, we complete the lexing by collecting the
bitcodes using a generalised version of the $\textit{mkeps}$ function
for annotated regular expressions, called $\textit{bmkeps}$:

%\begin{definition}[\textit{bmkeps}]\mbox{}
\begin{center}
\begin{tabular}{lcl}
  $\textit{bmkeps}\,(\textit{ONE}\;bs)$ & $\dn$ & $bs$\\
  $\textit{bmkeps}\,(\textit{ALT}\;bs\,a_1\,a_2)$ & $\dn$ &
     $\textit{if}\;\textit{bnullable}\,a_1$\\
  & &$\textit{then}\;bs\,@\,\textit{bmkeps}\,a_1$\\
  & &$\textit{else}\;bs\,@\,\textit{bmkeps}\,a_2$\\
  $\textit{bmkeps}\,(\textit{SEQ}\;bs\,a_1\,a_2)$ & $\dn$ &
     $bs \,@\,\textit{bmkeps}\,a_1\,@\, \textit{bmkeps}\,a_2$\\
  $\textit{bmkeps}\,(\textit{STAR}\;bs\,a)$ & $\dn$ &
     $bs \,@\, [\Z]$
\end{tabular}
\end{center}
%\end{definition}

\noindent
This function completes the value information by travelling along the
path of the regular expression that corresponds to a POSIX value and
collecting all the bitcodes, and using $\Z$ to indicate the end of star
iterations. If we take the bitcodes produced by $\textit{bmkeps}$ and
decode them, we get the value we expect. The corresponding lexing
algorithm looks as follows:

\begin{center}
\begin{tabular}{lcl}
  $\textit{blexer}\;r\,s$ & $\dn$ &
     $\textit{let}\;a = (r^\uparrow)\backslash s\;\textit{in}$\\
  & & $\;\;\textit{if}\; \textit{bnullable}(a)$\\
  & & $\;\;\textit{then}\;\textit{decode}\,(\textit{bmkeps}\,a)\,r$\\
  & & $\;\;\textit{else}\;\textit{None}$
\end{tabular}
\end{center}

\noindent
In this definition $\_\backslash s$ is the generalisation of the derivative
operation from characters to strings (just like the derivatives for un-annotated
regular expressions).
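
\noindent
Putting the pieces together, $\textit{bnullable}$, $\textit{bmkeps}$, the
bitcoded derivative and $\textit{blexer}$ might be rendered in Scala roughly
as follows. This is only a sketch that reuses the datatypes and helper
functions from the sketches above (\texttt{fuse}, \texttt{internalise},
\texttt{decode}); it is not meant as a definitive implementation.

\begin{verbatim}
def bnullable(a: ARexp): Boolean = a match {
  case AZERO           => false
  case AONE(_)         => true
  case ACHAR(_, _)     => false
  case AALT(_, a1, a2) => bnullable(a1) || bnullable(a2)
  case ASEQ(_, a1, a2) => bnullable(a1) && bnullable(a2)
  case ASTAR(_, _)     => true
}

// collect the bits along the path of the POSIX value of a nullable regex
def bmkeps(a: ARexp): List[Bit] = a match {
  case AONE(bs)         => bs
  case AALT(bs, a1, a2) =>
    if (bnullable(a1)) bs ::: bmkeps(a1) else bs ::: bmkeps(a2)
  case ASEQ(bs, a1, a2) => bs ::: bmkeps(a1) ::: bmkeps(a2)
  case ASTAR(bs, _)     => bs ::: List(Z)            // Z closes the star
}

// derivative on annotated regular expressions
def bder(c: Char, a: ARexp): ARexp = a match {
  case AZERO            => AZERO
  case AONE(_)          => AZERO
  case ACHAR(bs, d)     => if (c == d) AONE(bs) else AZERO
  case AALT(bs, a1, a2) => AALT(bs, bder(c, a1), bder(c, a2))
  case ASEQ(bs, a1, a2) =>
    if (bnullable(a1))
      AALT(bs, ASEQ(Nil, bder(c, a1), a2),
               fuse(bmkeps(a1), bder(c, a2)))
    else ASEQ(bs, bder(c, a1), a2)
  case ASTAR(bs, a1)    =>
    ASEQ(bs, fuse(List(S), bder(c, a1)), ASTAR(Nil, a1))  // S: one more iteration
}

// derivative w.r.t. a string and the lexer itself
def bders(s: List[Char], a: ARexp): ARexp = s match {
  case Nil     => a
  case c :: cs => bders(cs, bder(c, a))
}

def blexer(r: Rexp, s: String): Option[Val] = {
  val a = bders(s.toList, internalise(r))
  if (bnullable(a)) decode(bmkeps(a), r) else None
}
\end{verbatim}

\noindent
For example, \texttt{blexer(STAR(CHAR('a')), "a")} internalises the regular
expression, builds the derivative for the single character, collects the bits
\texttt{List(S, Z)} and decodes them back into the value
\texttt{Stars(List(Chr('a')))}.
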
|
The main point of the bitcodes and annotated regular expressions is that
we can apply rather aggressive (in terms of size) simplification rules
in order to keep derivatives small. We have developed such
``aggressive'' simplification rules and generated test data that show
that the expected bound can be achieved. Obviously we could only
partially cover the search space as there are infinitely many regular
expressions and strings.

One modification we introduced is to allow a list of annotated regular
expressions in the \textit{ALTS} constructor. This allows us to not just
delete unnecessary $\ZERO$s and $\ONE$s from regular expressions, but
also unnecessary ``copies'' of regular expressions (very similar to
simplifying $r + r$ to just $r$, but in a more general setting). Another
modification is that we use simplification rules inspired by Antimirov's
work on partial derivatives. They maintain the idea that only the first
``copy'' of a regular expression in an alternative contributes to the
calculation of a POSIX value. All subsequent copies can be pruned away from
the regular expression. A recursive definition of our simplification function
that looks somewhat similar to our Scala code is given below:\comment{Use $\ZERO$, $\ONE$ and so on.
Is it $ALT$ or $ALTS$?}\\
|
\begin{center}
\begin{tabular}{@{}lcl@{}}
  $\textit{simp} \; (\textit{SEQ}\;bs\,a_1\,a_2)$ & $\dn$ & $ (\textit{simp} \; a_1, \textit{simp} \; a_2) \; \textit{match} $ \\
  &&$\quad\textit{case} \; (\ZERO, \_) \Rightarrow \ZERO$ \\
  &&$\quad\textit{case} \; (\_, \ZERO) \Rightarrow \ZERO$ \\
  &&$\quad\textit{case} \; (\ONE, a_2') \Rightarrow \textit{fuse} \; bs \; a_2'$ \\
  &&$\quad\textit{case} \; (a_1', \ONE) \Rightarrow \textit{fuse} \; bs \; a_1'$ \\
  &&$\quad\textit{case} \; (a_1', a_2') \Rightarrow \textit{SEQ} \; bs \; a_1' \; a_2'$ \\

  $\textit{simp} \; (\textit{ALTS}\;bs\,as)$ & $\dn$ & $\textit{distinct}( \textit{flatten} ( \textit{map} \; \textit{simp} \; as)) \; \textit{match} $ \\
  &&$\quad\textit{case} \; [] \Rightarrow \ZERO$ \\
  &&$\quad\textit{case} \; a :: [] \Rightarrow \textit{fuse} \; bs \; a$ \\
  &&$\quad\textit{case} \; as' \Rightarrow \textit{ALTS}\;bs\;as'$\\

  $\textit{simp} \; a$ & $\dn$ & $\textit{a} \qquad \textit{otherwise}$
\end{tabular}
\end{center}

\noindent
The simplification function performs a pattern match on the regular expression.
When it detects that the regular expression is an alternative or a
sequence, it tries to simplify its children regular expressions
recursively and then checks whether one of the children turns into $\ZERO$ or
$\ONE$, which might trigger further simplification at the current level.
The most involved part is the $\textit{ALTS}$ clause, where we use two
auxiliary functions $\textit{flatten}$ and $\textit{distinct}$ to open up nested
$\textit{ALTS}$ and reduce as many duplicates as possible. Function
$\textit{distinct}$ keeps only the first occurring copy and removes all
later duplicates. Function $\textit{flatten}$ opens up nested \textit{ALTS}.
Its recursive definition is given below:

\begin{center}
\begin{tabular}{@{}lcl@{}}
  $\textit{flatten} \; ((\textit{ALTS}\;bs\,as) :: as')$ & $\dn$ & $(\textit{map} \;
     (\textit{fuse}\;bs)\; \textit{as}) \; @ \; \textit{flatten} \; as' $ \\
  $\textit{flatten} \; (\textit{ZERO} :: as')$ & $\dn$ & $ \textit{flatten} \; as' $ \\
  $\textit{flatten} \; (a :: as')$ & $\dn$ & $a :: \textit{flatten} \; as'$ \quad(otherwise)
\end{tabular}
\end{center}

\noindent
Here $\textit{flatten}$ behaves like the traditional functional programming flatten
function, except that it also removes $\ZERO$s. Or in terms of regular expressions, it
removes parentheses, for example changing $a+(b+c)$ into $a+b+c$.
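
\noindent
A Scala sketch of this simplification, composable with the earlier sketches,
is given below. We deviate from the definition above in a few small ways,
all of which are our own choices for this sketch: we keep the binary
\texttt{AALT} constructor and re-nest the simplified list of alternatives
instead of introducing a separate list-based \textit{ALTS} constructor, we
also keep the bits of an eliminated $\ONE$, and duplicates are detected with
Scala's structural \texttt{distinct} (which compares the bits as well).

\begin{verbatim}
// open up nested alternatives (given as a list) and drop ZEROs;
// the bits of an inner alternative are fused onto its children
def flats(as: List[ARexp]): List[ARexp] = as match {
  case Nil                      => Nil
  case AZERO :: rest            => flats(rest)
  case AALT(bs, a1, a2) :: rest =>
    flats(List(fuse(bs, a1), fuse(bs, a2))) ::: flats(rest)
  case a :: rest                => a :: flats(rest)
}

// re-nest a list of alternatives into binary AALTs
def alts(bs: List[Bit], as: List[ARexp]): ARexp = as match {
  case Nil       => AZERO
  case a :: Nil  => fuse(bs, a)
  case a :: rest => AALT(bs, a, alts(Nil, rest))
}

def simp(a: ARexp): ARexp = a match {
  case ASEQ(bs, a1, a2) => (simp(a1), simp(a2)) match {
    case (AZERO, _)       => AZERO
    case (_, AZERO)       => AZERO
    case (AONE(bs1), a2s) => fuse(bs ::: bs1, a2s)  // keep the ONE's bits too
    case (a1s, a2s)       => ASEQ(bs, a1s, a2s)
  }
  case AALT(bs, a1, a2) => alts(bs, flats(List(simp(a1), simp(a2))).distinct)
  case a1               => a1
}
\end{verbatim}

\noindent
The case where the right-hand side of a sequence simplifies to $\ONE$ is
omitted in this sketch, because its bits would have to be appended at the
end rather than fused to the front.
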
|
Suppose we apply simplification after each derivative step, and view
these two operations as an atomic one: $a \backslash_{simp}\,c \dn
\textit{simp}(a \backslash c)$. Then we can use the previous natural
extension from derivative w.r.t.~character to derivative
w.r.t.~string:\comment{simp in the [] case?}

\begin{center}
\begin{tabular}{lcl}
  $r \backslash_{simp} (c\!::\!s) $ & $\dn$ & $(r \backslash_{simp}\, c) \backslash_{simp}\, s$ \\
  $r \backslash_{simp} [\,] $ & $\dn$ & $r$
\end{tabular}
\end{center}

\noindent
With this we obtain an optimised version of the algorithm:

\begin{center}
\begin{tabular}{lcl}
  $\textit{blexer\_simp}\;r\,s$ & $\dn$ &
     $\textit{let}\;a = (r^\uparrow)\backslash_{simp}\, s\;\textit{in}$\\
  & & $\;\;\textit{if}\; \textit{bnullable}(a)$\\
  & & $\;\;\textit{then}\;\textit{decode}\,(\textit{bmkeps}\,a)\,r$\\
  & & $\;\;\textit{else}\;\textit{None}$
\end{tabular}
\end{center}

\noindent
This algorithm keeps the regular expression size small; for example,
with this simplification the derivatives of our previous $(a + aa)^*$ example
shrink from 8000 nodes to just 6 nodes and stay constant, no matter how long the
input string is.
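
\noindent
As a sketch, the optimised lexer then only differs from \texttt{blexer} in
applying \texttt{simp} after each derivative step (again assuming the
functions from the earlier sketches):

\begin{verbatim}
// derivative w.r.t. a string, simplifying after every character
def bdersSimp(s: List[Char], a: ARexp): ARexp = s match {
  case Nil     => a
  case c :: cs => bdersSimp(cs, simp(bder(c, a)))
}

def blexerSimp(r: Rexp, s: String): Option[Val] = {
  val a = bdersSimp(s.toList, internalise(r))
  if (bnullable(a)) decode(bmkeps(a), r) else None
}
\end{verbatim}
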
\section{Current Work}

We are currently engaged in two tasks related to this algorithm. The
first task is proving that our simplification rules actually do not
affect the POSIX value that should be generated by the algorithm
according to the specification of a POSIX value, and furthermore
obtaining a much tighter bound on the sizes of derivatives. The result
is that our algorithm should be correct and faster on all inputs. The original
blow-up, as observed in JavaScript, Python and Java, would be excluded
from happening in our algorithm. For this proof we use the theorem prover
Isabelle. Once completed, this result will advance the state-of-the-art:
Sulzmann and Lu wrote in their paper~\cite{Sulzmann2014} about the
bitcoded ``incremental parsing method'' (that is the lexing algorithm
outlined in this section):

\begin{quote}\it
``Correctness Claim: We further claim that the incremental parsing
method in Figure~5 in combination with the simplification steps in
Figure~6 yields POSIX parse trees. We have tested this claim
extensively by using the method in Figure~3 as a reference but yet
have to work out all proof details.''
\end{quote}

\noindent We would like to settle this correctness claim. It is relatively
straightforward to establish that after one simplification step, the part of a
nullable derivative that corresponds to a POSIX value remains intact and can
still be collected. In other words, we can show that\comment{Double-check....I
think this is not the case}
%\comment{If i remember correctly, you have proved this lemma.
%I feel this is indeed not true because you might place arbitrary
%bits on the regex r, however if this is the case, did i remember wrongly that
%you proved something like simplification does not affect $\textit{bmkeps}$ results?
%Anyway, i have amended this a little bit so it does not allow arbitrary bits attached
%to a regex. Maybe it works now.}

\begin{center}
$\textit{bmkeps} \; a = \textit{bmkeps} \; (\textit{bsimp} \; a)$
\quad provided $a$ is $\textit{bnullable}$ and
$\textit{decode}\,(\textit{bmkeps}\,a)\,r$ is a $\textit{POSIX}$ value
\end{center}
\comment{\textcolor{blue}{I proved $bmkeps\,(bsimp\,a) = bmkeps\,a$ provided $a$ is
$\textit{bnullable}$}}

\noindent
as this basically comes down to proving that actions like removing the
additional $r$ in $r+r$ do not delete important POSIX information in
a regular expression. The hard part of this proof is to establish that

\begin{center}
$\textit{bmkeps} \; ((r^\uparrow)\backslash_{simp}\, s) = \textit{bmkeps} \; ((r^\uparrow)\backslash s)$
\end{center}\comment{This is not true either...look at the definition of blexer/blexer-simp}

\noindent That is, if we take the derivative of a regular expression $r$ and then
simplify it, and repeat this process until we exhaust the string, we get a
regular expression $r''$ ($\textit{LHS}$) that provides the POSIX matching
information, which is exactly the same as the result $r'$ ($\textit{RHS}$) of the
normal derivative algorithm that only takes derivatives repeatedly and has no
simplification at all. This might seem at first glance very unintuitive, as
$r'$ is exponentially larger than $r''$, but it can be explained in the
following way: we are pruning away the possible matches that are not POSIX.
Since there are exponentially many non-POSIX matchings and only one POSIX matching, it
is understandable that our $r''$ can be a lot smaller, while we can still provide
the same POSIX value if there is one. This is not as straightforward as the
previous proposition, as the two regular expressions $r'$ and $r''$ might have
become very different regular expressions. The crucial point is to find the
$\textit{POSIX}$ information of a regular expression and how it is modified,
augmented and propagated
during simplification in parallel with the regular expression that
has not been simplified in the subsequent derivative operations. To aid this,
we use the helper function $\textit{retrieve}$ described by Sulzmann and Lu:\\
definition of retrieve TODO\comment{Did not read further}\\
This function assembles the bitcode that corresponds to a parse tree for how
the current derivative matches the suffix of the string (the characters that
have not yet appeared, but will appear as the successive derivatives go on;
how do we get this ``future'' information? By the value $v$, which is
computed by a pass of the algorithm that uses
$\textit{inj}$ as described in the previous section). Sulzmann and Lu used this
to connect the bitcoded algorithm to the older algorithm by the following
equation:

\begin{center}
$\textit{inj} \;r\; c \; v = \textit{decode} \; (\textit{retrieve}\;
  ((\textit{internalise}\; r)\backslash c) \; v)\; r$
\end{center}
A little fact that needs to be stated to help comprehension:
\begin{center}
$r^\uparrow = a$ \quad ($a$ stands for $\textit{annotated}$).
\end{center}
Ausaf and Urban also used this fact to prove the
correctness of the bitcoded algorithm without simplification. Our purpose
of using this, however, is to establish
\begin{center}
$ \textit{retrieve} \;
a \; v \;=\; \textit{retrieve} \; (\textit{simp}\,a) \; v'.$
\end{center}
The idea
is that, using $v'$, a simplified version of $v$ that has possibly gone
through the same simplification steps as $\textit{simp}(a)$, we are
able to extract the bit sequence that gives the same parsing
information as the unsimplified one. After establishing this, we
might be able to finally bridge the gap by proving
\begin{center}
$\textit{retrieve} \; r \backslash s \; v = \;\textit{retrieve} \;
\textit{simp}(r) \backslash s \; v'$
\end{center}
and subsequently
\begin{center}
$\textit{retrieve} \; r \backslash s \; v\; = \; \textit{retrieve} \;
r \backslash_{simp} s \; v'$.
\end{center}
This would prove that our simplified
version of the regular expression still contains all the bitcodes needed.

The second task is to speed up the more aggressive simplification.
Currently it is slower than a naive simplification (the naive version as
implemented in ADU of course can explode in some cases). So it needs to
be explored how to make it faster. One possibility would be to explore
again the connection to DFAs. This is very much work in progress.

\section{Conclusion}

In this PhD-project we are interested in fast algorithms for regular
expression matching. While this seems to be a ``settled'' area, in
fact interesting research questions are popping up as soon as one steps
outside the classic automata theory (for example in terms of what kind
of regular expressions are supported). The reason why it is
interesting for us to look at the derivative approach introduced by
Brzozowski for regular expression matching, and then much further
developed by Sulzmann and Lu, is that derivatives can elegantly deal
with some of the regular expressions that are of interest in ``real
life''. This includes the not-regular expression, written $\neg\,r$
(that is all strings that are not recognised by $r$), but also bounded
regular expressions such as $r^{\{n\}}$ and $r^{\{n..m\}}$. There is
also hope that the derivatives can provide another angle for how to
deal more efficiently with back-references, which are one of the
reasons why regular expression engines in JavaScript, Python and Java
choose to not implement the classic automata approach of transforming
regular expressions into NFAs and then DFAs---because we simply do not
know how such back-references can be represented by DFAs.


\bibliographystyle{plain}
\bibliography{root}


\end{document}