\documentclass[a4paper,UKenglish]{lipics}
\usepackage{graphicx}
\usepackage{pgfplots}
\usepackage{tikz-cd}
\usepackage{algorithm}
\usepackage{amsmath}
\usepackage[noend]{algpseudocode}

% \documentclass{article}
%\usepackage[utf8]{inputenc}
%\usepackage[english]{babel}
%\usepackage{listings}
% \usepackage{amsthm}
% \usepackage{hyperref}
% \usepackage[margin=0.5in]{geometry}
%\usepackage{pmboxdraw}

\title{POSIX Regular Expression Matching and Lexing}
\author{Chengsong Tan}
\affil{King's College London\\
London, UK\\
\texttt{chengsong.tan@kcl.ac.uk}}
\authorrunning{Chengsong Tan}
\Copyright{Chengsong Tan}

\newcommand{\dn}{\stackrel{\mbox{\scriptsize def}}{=}}%
\newcommand{\ZERO}{\mbox{\bf 0}}
\newcommand{\ONE}{\mbox{\bf 1}}
\def\lexer{\mathit{lexer}}
\def\mkeps{\mathit{mkeps}}
\def\inj{\mathit{inj}}
\def\Empty{\mathit{Empty}}
\def\Left{\mathit{Left}}
\def\Right{\mathit{Right}}
\def\Stars{\mathit{Stars}}
\def\Char{\mathit{Char}}
\def\Seq{\mathit{Seq}}
\def\Der{\mathit{Der}}
\def\nullable{\mathit{nullable}}
\def\Z{\mathit{Z}}
\def\S{\mathit{S}}

%\theoremstyle{theorem}
%\newtheorem{theorem}{Theorem}
%\theoremstyle{lemma}
%\newtheorem{lemma}{Lemma}
%\newcommand{\lemmaautorefname}{Lemma}
%\theoremstyle{definition}
%\newtheorem{definition}{Definition}
\algnewcommand\algorithmicswitch{\textbf{switch}}
\algnewcommand\algorithmiccase{\textbf{case}}
\algnewcommand\algorithmicassert{\texttt{assert}}
\algnewcommand\Assert[1]{\State \algorithmicassert(#1)}%
% New "environments"
\algdef{SE}[SWITCH]{Switch}{EndSwitch}[1]{\algorithmicswitch\ #1\ \algorithmicdo}{\algorithmicend\ \algorithmicswitch}%
\algdef{SE}[CASE]{Case}{EndCase}[1]{\algorithmiccase\ #1}{\algorithmicend\ \algorithmiccase}%
\algtext*{EndSwitch}%
\algtext*{EndCase}%

\begin{document}

\maketitle

\begin{abstract}
  Brzozowski introduced in 1964 a beautifully simple algorithm for
  regular expression matching based on the notion of derivatives of
  regular expressions. In 2014, Sulzmann and Lu extended this
  algorithm to not just give a YES/NO answer for whether or not a regular
  expression matches a string, but, in case it matches, also
  \emph{how} it matches the string. This is important for applications
  such as lexing (tokenising a string). The problem is to make the
  algorithm by Sulzmann and Lu fast on all inputs without breaking its
  correctness.
\end{abstract}

\section{Introduction}

This PhD-project is about regular expression matching and
lexing. Given the maturity of this topic, the reader might wonder:
Surely, regular expressions must have already been studied to death?
What could possibly be \emph{not} known in this area? And surely all
implemented algorithms for regular expression matching are blindingly
fast?

Unfortunately these preconceptions are not supported by evidence: take
for example the regular expression $(a^*)^*\,b$ and ask whether
strings of the form $aa..a$ match this regular
expression. Obviously they do not match---the expected $b$ in the last
position is missing. One would expect that modern regular expression
matching engines can find this out very quickly. Alas, if one tries
this example in JavaScript, Python or Java 8 with strings of around 28
$a$'s, one discovers that this decision takes about 30 seconds and
considerably longer when adding a few more $a$'s, as the graphs
below show:

\begin{center}
\begin{tabular}{@{}c@{\hspace{0mm}}c@{\hspace{0mm}}c@{}}
\begin{tikzpicture}
\begin{axis}[
    xlabel={$n$},
    x label style={at={(1.05,-0.05)}},
    ylabel={time in secs},
    enlargelimits=false,
    xtick={0,5,...,30},
    xmax=33,
    ymax=35,
    ytick={0,5,...,30},
    scaled ticks=false,
    axis lines=left,
    width=5cm,
    height=4cm,
    legend entries={JavaScript},
    legend pos=north west,
    legend cell align=left]
\addplot[red,mark=*, mark options={fill=white}] table {re-js.data};
\end{axis}
\end{tikzpicture}
&
\begin{tikzpicture}
\begin{axis}[
    xlabel={$n$},
    x label style={at={(1.05,-0.05)}},
    %ylabel={time in secs},
    enlargelimits=false,
    xtick={0,5,...,30},
    xmax=33,
    ymax=35,
    ytick={0,5,...,30},
    scaled ticks=false,
    axis lines=left,
    width=5cm,
    height=4cm,
    legend entries={Python},
    legend pos=north west,
    legend cell align=left]
\addplot[blue,mark=*, mark options={fill=white}] table {re-python2.data};
\end{axis}
\end{tikzpicture}
&
\begin{tikzpicture}
\begin{axis}[
    xlabel={$n$},
    x label style={at={(1.05,-0.05)}},
    %ylabel={time in secs},
    enlargelimits=false,
    xtick={0,5,...,30},
    xmax=33,
    ymax=35,
    ytick={0,5,...,30},
    scaled ticks=false,
    axis lines=left,
    width=5cm,
    height=4cm,
    legend entries={Java 8},
    legend pos=north west,
    legend cell align=left]
\addplot[cyan,mark=*, mark options={fill=white}] table {re-java.data};
\end{axis}
\end{tikzpicture}\\
\multicolumn{3}{c}{Graphs: Runtime for matching $(a^*)^*\,b$ with strings
           of the form $\underbrace{aa..a}_{n}$.}
\end{tabular}
\end{center}

\noindent These are clearly abysmal and possibly surprising results.
One would expect these systems to do much better than that---after
all, given a DFA and a string, deciding whether the string is matched
by this DFA should take time linear in the length of the string.

Admittedly, the regular expression $(a^*)^*\,b$ is carefully chosen to
exhibit this ``exponential behaviour''. Unfortunately, such regular
expressions are not just a few ``outliers''; they are frequent enough
that a separate name has been coined for
them---\emph{evil regular expressions}. In empirical work, Davis et al.\
report that they have found thousands of such evil regular expressions
in the JavaScript and Python ecosystems \cite{Davis18}.

This exponential blowup sometimes causes real pain in real life:
for example, on 20 July 2016 one evil regular expression brought the
website \href{http://stackexchange.com}{Stack Exchange} to its
knees.\footnote{https://stackstatus.net/post/147710624694/outage-postmortem-july-20-2016}
In this instance, a regular expression intended to just trim white
spaces from the beginning and the end of a line actually consumed
massive amounts of CPU resources and because of this the web servers
ground to a halt. This happened when a post with 20,000 white spaces
was submitted, but importantly the white spaces were neither at the
beginning nor at the end of the line. As a result, the regular
expression matching engine needed to backtrack over many choices.

The underlying problem is that many ``real life'' regular expression
matching engines do not use DFAs for matching. This is because they
support regular expressions that are not covered by the classical
automata theory, and in this more general setting there are quite a
few research questions still unanswered and fast algorithms still need
to be developed.

There is also another under-researched problem to do with regular
expressions and lexing, i.e.~the process of breaking up strings into
sequences of tokens according to some regular expressions. In this
setting one is not just interested in whether or not a regular
expression matches a string, but also in \emph{how} it
matches the string. Consider for example a regular expression
$r_{key}$ for recognising keywords such as \textit{if}, \textit{then}
and so on; and a regular expression $r_{id}$ for recognising
identifiers (say, a single character followed by characters or
numbers). One can then form the compound regular expression
$(r_{key} + r_{id})^*$ and use it to tokenise strings. But then how
should the string \textit{iffoo} be tokenised? It could be tokenised
as a keyword followed by an identifier, or the entire string as a
single identifier. Similarly, how should the string \textit{if} be
tokenised? Both regular expressions, $r_{key}$ and $r_{id}$, would
``fire''---so is it an identifier or a keyword? While in applications
there is a well-known strategy to decide these questions, called POSIX
matching, only relatively recently have precise definitions of what
POSIX matching actually means been formalised
\cite{AusafDyckhoffUrban2016,OkuiSuzuki2010,Vansummeren2006}. Roughly,
POSIX matching means matching the longest initial substring.
In the case of a tie, the initial submatch is chosen according to some
priorities attached to the regular expressions (e.g.~keywords have a
higher priority than identifiers). This sounds rather simple, but
according to Grathwohl et al.\ \cite[Page 36]{CrashCourse2014} this is
not the case. They wrote:

\begin{quote}
\it{}``The POSIX strategy is more complicated than the greedy because of
the dependence on information about the length of matched strings in the
various subexpressions.''
\end{quote}

\noindent
This is also supported by evidence collected by Kuklewicz
\cite{Kuklewicz} who noticed that a number of POSIX regular expression
matchers calculate incorrect results.

Our focus is on an algorithm introduced by Sulzmann and Lu in 2014 for
regular expression matching according to the POSIX strategy
\cite{Sulzmann2014}. Their algorithm is based on an older algorithm by
Brzozowski from 1964 where he introduced the notion of derivatives of
regular expressions \cite{Brzozowski1964}. We shall briefly explain
the algorithms next.

\section{The Algorithms by Brzozowski, and Sulzmann and Lu}

Suppose regular expressions are given by the following grammar:

\begin{center}
$r$ $::=$ $\ZERO$ $\mid$ $\ONE$ $\mid$ $c$ $\mid$ $r_1 \cdot r_2$ $\mid$ $r_1 + r_2$ $\mid$ $r^*$
\end{center}

\noindent
The intended meaning of the regular expressions is as usual: $\ZERO$
cannot match any string, $\ONE$ can match the empty string, the
character regular expression $c$ can match the character $c$, and so
on. The brilliant contribution by Brzozowski is the notion of
\emph{derivatives} of regular expressions. The idea behind this
notion is as follows: suppose a regular expression $r$ can match a
string of the form $c\!::\! s$ (that is a list of characters starting
with $c$), what does the regular expression look like that can match
just $s$? Brzozowski gave a neat answer to this question. He started
with the definition of $\nullable$:
\begin{center}
\begin{tabular}{lcl}
$\nullable(\ZERO)$ & $\dn$ & $\mathit{false}$ \\
$\nullable(\ONE)$  & $\dn$ & $\mathit{true}$ \\
$\nullable(c)$     & $\dn$ & $\mathit{false}$ \\
$\nullable(r_1 + r_2)$ & $\dn$ & $\nullable(r_1) \vee \nullable(r_2)$ \\
$\nullable(r_1\cdot r_2)$ & $\dn$ & $\nullable(r_1) \wedge \nullable(r_2)$ \\
$\nullable(r^*)$ & $\dn$ & $\mathit{true}$ \\
\end{tabular}
\end{center}
This function tests whether the empty string is in $L(r)$. He then
defined the following operation on regular expressions, written
$r\backslash c$ (the derivative of $r$ w.r.t.~the character $c$):

\begin{center}
\begin{tabular}{lcl}
$\ZERO \backslash c$ & $\dn$ & $\ZERO$\\
$\ONE \backslash c$  & $\dn$ & $\ZERO$\\
$d \backslash c$     & $\dn$ &
        $\mathit{if} \;c = d\;\mathit{then}\;\ONE\;\mathit{else}\;\ZERO$\\
$(r_1 + r_2)\backslash c$     & $\dn$ & $r_1 \backslash c \,+\, r_2 \backslash c$\\
$(r_1 \cdot r_2)\backslash c$ & $\dn$ & $\mathit{if} \,\nullable(r_1)$\\
  & & $\mathit{then}\;(r_1\backslash c) \cdot r_2 \,+\, r_2\backslash c$\\
  & & $\mathit{else}\;(r_1\backslash c) \cdot r_2$\\
$(r^*)\backslash c$           & $\dn$ & $(r\backslash c) \cdot r^*$\\
\end{tabular}
\end{center}

\noindent
For us the main advantage is that derivatives can be
straightforwardly implemented in any functional programming language,
and are easily definable and reasoned about in theorem provers---the
definitions just consist of inductive datatypes and simple recursive
functions. Moreover, the notion of derivatives can be easily
generalised to cover extended regular expression constructors such as
the not-regular expression, written $\neg\,r$, or bounded repetitions
(for example $r^{\{n\}}$ and $r^{\{n..m\}}$), which cannot be so
straightforwardly realised within the classic automata approach.

%Assuming the classic notion of a
%\emph{language} of a regular expression, written $L(\_)$, t
The main
property of the derivative operation is that

\begin{center}
$c\!::\!s \in L(r)$ holds
if and only if $s \in L(r\backslash c)$.
\end{center}

\noindent
So if we want to find out whether a string $s$ matches a regular
expression $r$, we build the derivatives of $r$
w.r.t.\ (in succession) all the characters of the string $s$. Finally,
we test whether the resulting regular expression can match the empty
string. If yes, then $r$ matches $s$; otherwise it does not.

We can generalise the derivative operation for strings like this:

\begin{center}
\begin{tabular}{lcl}
$r \backslash (c\!::\!s) $ & $\dn$ & $(r \backslash c) \backslash s$ \\
$r \backslash \epsilon $   & $\dn$ & $r$
\end{tabular}
\end{center}
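
\noindent
To see these definitions in action, consider (as a small worked
example of our own, not taken from the literature) the regular
expression $a \cdot b$ and the string $ab$:

\begin{center}
\begin{tabular}{lcl}
$(a \cdot b)\backslash a$    & $=$ & $\ONE \cdot b$\\
$(\ONE \cdot b)\backslash b$ & $=$ & $(\ZERO \cdot b) + \ONE$\\
\end{tabular}
\end{center}

\noindent
The first step uses the non-nullable sequence case of the derivative,
the second the nullable one. The final derivative is nullable, hence
$ab \in L(a \cdot b)$.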
\noindent
Using the above definition we obtain a simple and elegant regular
expression matching algorithm:
\[
\textit{match}\;s\;r \;\dn\; \nullable(r\backslash s)
\]
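
These definitions can be transcribed almost literally into a
functional-style program. The following Python sketch is our own
illustration (the tuple encoding of regular expressions is an
assumption, not part of the algorithms discussed here):

```python
# A sketch of Brzozowski's matcher, mirroring the definitions above.
# Regular expressions are modelled as nested tuples:
#   ("ZERO",), ("ONE",), ("CHAR", c), ("SEQ", r1, r2), ("ALT", r1, r2), ("STAR", r)
ZERO, ONE = ("ZERO",), ("ONE",)

def CHAR(c): return ("CHAR", c)
def SEQ(r1, r2): return ("SEQ", r1, r2)
def ALT(r1, r2): return ("ALT", r1, r2)
def STAR(r): return ("STAR", r)

def nullable(r):
    tag = r[0]
    if tag == "ZERO": return False
    if tag == "ONE": return True
    if tag == "CHAR": return False
    if tag == "ALT": return nullable(r[1]) or nullable(r[2])
    if tag == "SEQ": return nullable(r[1]) and nullable(r[2])
    if tag == "STAR": return True

def der(c, r):
    tag = r[0]
    if tag in ("ZERO", "ONE"): return ZERO
    if tag == "CHAR": return ONE if r[1] == c else ZERO
    if tag == "ALT": return ALT(der(c, r[1]), der(c, r[2]))
    if tag == "SEQ":
        if nullable(r[1]):
            return ALT(SEQ(der(c, r[1]), r[2]), der(c, r[2]))
        return SEQ(der(c, r[1]), r[2])
    if tag == "STAR": return SEQ(der(c, r[1]), r)

def ders(s, r):
    # derivative w.r.t. a string: successive derivatives by each character
    for c in s:
        r = der(c, r)
    return r

def match(s, r):
    return nullable(ders(s, r))
```

Note that without simplification the derivatives built here grow
quickly; this issue is taken up in the next section.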

One limitation, however, of Brzozowski's algorithm is that it only
produces a YES/NO answer for whether a string is being matched by a
regular expression. Sulzmann and Lu~\cite{Sulzmann2014} extended this
algorithm to allow generation of an actual matching, called a
\emph{value}.

\begin{center}
\begin{tabular}{c@{\hspace{20mm}}c}
\begin{tabular}{@{}rrl@{}}
\multicolumn{3}{@{}l}{\textbf{Regular Expressions}}\medskip\\
$r$ & $::=$  & $\ZERO$\\
    & $\mid$ & $\ONE$ \\
    & $\mid$ & $c$ \\
    & $\mid$ & $r_1 \cdot r_2$\\
    & $\mid$ & $r_1 + r_2$ \\
\\
    & $\mid$ & $r^*$ \\
\end{tabular}
&
\begin{tabular}{@{\hspace{0mm}}rrl@{}}
\multicolumn{3}{@{}l}{\textbf{Values}}\medskip\\
$v$ & $::=$  & \\
    &        & $\Empty$ \\
    & $\mid$ & $\Char(c)$ \\
    & $\mid$ & $\Seq\,v_1\, v_2$\\
    & $\mid$ & $\Left(v)$ \\
    & $\mid$ & $\Right(v)$ \\
    & $\mid$ & $\Stars\,[v_1,\ldots\,v_n]$ \\
\end{tabular}
\end{tabular}
\end{center}

\noindent
Here we have put regular expressions and values with corresponding
shapes on the same row, in order to illustrate the relation between
them. The flatten notation $| v |$ means extracting the characters in
the value $v$ to form a string. For example, $|\mathit{Seq} \,
\mathit{Char(c)} \, \mathit{Char(d)}|$ gives the string $cd$. We omit
this straightforward definition. Values are a way of expressing parse
trees (the tree structure that records how a sub-regular-expression
matches a substring). For example, $\Seq\,v_1\, v_2$ tells us how the
string $|v_1| \cdot |v_2|$ matches the regular expression $r_1 \cdot
r_2$: $r_1$ matches $|v_1|$ and $r_2$ matches $|v_2|$. Exactly how
these two are matched is recorded in the sub-structures of $v_1$ and
$v_2$.

To give a concrete example of how values work, consider the string $xy$
and the regular expression $(x + (y + xy))^*$. We can view this regular
expression as a tree and if the string $xy$ is matched by two Star
``iterations'', then the $x$ is matched by the left-most alternative
in this tree and the $y$ by the left alternative nested inside the
right alternative. This suggests recording this matching as

\begin{center}
$\Stars\,[\Left\,(\Char\,x), \Right(\Left(\Char\,y))]$
\end{center}

\noindent
where $\Stars$ records how many
iterations were used; and $\Left$, respectively $\Right$, which
alternative is used. The value for
matching $xy$ in a single ``iteration'', i.e.~the POSIX value,
would look as follows

\begin{center}
$\Stars\,[\Seq\,(\Char\,x)\,(\Char\,y)]$
\end{center}

\noindent
where $\Stars$ has only a single-element list for the single iteration
and $\Seq$ indicates that $xy$ is matched by a sequence regular
expression.
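
The values and the flatten function $|\cdot|$ can be sketched with a
few lines of Python (our own illustration; the tuple encoding is an
assumption):

```python
# Values mirror the grammar above; flatten implements |v|,
# extracting the underlying string from a value.
def Empty(): return ("Empty",)
def Char(c): return ("Char", c)
def Seq(v1, v2): return ("Seq", v1, v2)
def Left(v): return ("Left", v)
def Right(v): return ("Right", v)
def Stars(vs): return ("Stars", vs)

def flatten(v):
    tag = v[0]
    if tag == "Empty": return ""
    if tag == "Char": return v[1]
    if tag == "Seq": return flatten(v[1]) + flatten(v[2])
    if tag in ("Left", "Right"): return flatten(v[1])
    if tag == "Stars": return "".join(flatten(u) for u in v[1])

# The two example values for (x + (y + xy))* both flatten to "xy";
# they differ only in *how* the string is matched.
v_two_iters = Stars([Left(Char("x")), Right(Left(Char("y")))])
v_one_iter  = Stars([Seq(Char("x"), Char("y"))])
```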

The contribution of Sulzmann and Lu is an extension of Brzozowski's
algorithm by a second phase (the first phase being building successive
derivatives). In this second phase, for every successful match the
corresponding POSIX value is computed. The whole process looks like
the following diagram (the workflow of the simple matching algorithm
that only gives a YES/NO answer is shown on the right for
comparison):\\
\begin{tikzcd}
r_0 \arrow[r, "\backslash c_0"] \arrow[d] & r_1 \arrow[r, "\backslash c_1"] \arrow[d] & r_2 \arrow[r, dashed] \arrow[d] & r_n \arrow[d, "mkeps" description] \\
v_0 & v_1 \arrow[l,"inj_{r_0} c_0"] & v_2 \arrow[l, "inj_{r_1} c_1"] & v_n \arrow[l, dashed]
\end{tikzcd}
\begin{tikzcd}
r_0 \arrow[r, "\backslash c_0"] & r_1 \arrow[r, "\backslash c_1"] & r_2 \arrow[r, dashed] & r_n
\end{tikzcd}

We shall briefly explain this process. For convenience we fix the
following notation: the regular expression $r$ used for matching is
also called $r_0$ and the string $s$ is composed of $n$ characters
$c_0 c_1 \ldots c_{n-1}$.
First, we build the successive derivatives $r_1$, $r_2$, \ldots\ using
the characters $c_0$, $c_1$, \ldots\ until we obtain the final
derivative $r_n$. We then test whether $r_n$ is $\nullable$ or not. If
not, we know immediately that the string does not match the regular
expression. If yes, we start building the value incrementally. We
first call $\mkeps$ (which stands for ``make epsilon''---it constructs
the value for how the final derivative $r_n$ matches the empty
string).

After this, we inject back the characters one by one, in reverse order
as they were chopped off, and build the value $v_i$ for how the
regular expression $r_i$ matches the string $s_i$ ($s_i$ being the
string $s$ with the first $i$ characters chopped off) from the
previous value $v_{i+1}$. After $n$ injections, we get the value for
how $r_0$ matches $s$, exactly as we wanted. An inductive proof of the
correctness of this process can be routinely established. We omit the
details of the injection function, which can be found in Sulzmann and
Lu's paper \cite{Sulzmann2014}. Rather, we shall focus next on the
process of simplification of regular expressions, which is needed in
order to obtain \emph{fast} versions of the Brzozowski, and Sulzmann
and Lu algorithms. This is where the PhD-project hopes to advance
the state-of-the-art.
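
The two phases can be sketched in Python as follows. The clauses of
$\mkeps$ and the injection function follow Sulzmann and Lu's
definitions \cite{Sulzmann2014} (as also formalised in
\cite{AusafDyckhoffUrban2016}); the tuple encodings of regular
expressions and values are our own assumption:

```python
# Two-phase lexing: derivatives forward, injections backward.
ZERO, ONE = ("ZERO",), ("ONE",)
def CHAR(c): return ("CHAR", c)
def SEQ(r1, r2): return ("SEQ", r1, r2)
def ALT(r1, r2): return ("ALT", r1, r2)
def STAR(r): return ("STAR", r)

def nullable(r):
    tag = r[0]
    if tag in ("ZERO", "CHAR"): return False
    if tag in ("ONE", "STAR"): return True
    if tag == "ALT": return nullable(r[1]) or nullable(r[2])
    if tag == "SEQ": return nullable(r[1]) and nullable(r[2])

def der(c, r):
    tag = r[0]
    if tag in ("ZERO", "ONE"): return ZERO
    if tag == "CHAR": return ONE if r[1] == c else ZERO
    if tag == "ALT": return ALT(der(c, r[1]), der(c, r[2]))
    if tag == "SEQ":
        s = SEQ(der(c, r[1]), r[2])
        return ALT(s, der(c, r[2])) if nullable(r[1]) else s
    if tag == "STAR": return SEQ(der(c, r[1]), r)

def mkeps(r):
    # value for how a nullable regex matches the empty string
    tag = r[0]
    if tag == "ONE": return ("Empty",)
    if tag == "ALT":
        return ("Left", mkeps(r[1])) if nullable(r[1]) else ("Right", mkeps(r[2]))
    if tag == "SEQ": return ("Seq", mkeps(r[1]), mkeps(r[2]))
    if tag == "STAR": return ("Stars", [])

def inj(r, c, v):
    # transform a value for (r \ c) into a value for r, putting c back
    rt, vt = r[0], v[0]
    if rt == "CHAR" and vt == "Empty": return ("Char", r[1])
    if rt == "ALT":
        return ("Left", inj(r[1], c, v[1])) if vt == "Left" \
               else ("Right", inj(r[2], c, v[1]))
    if rt == "SEQ":
        if vt == "Seq": return ("Seq", inj(r[1], c, v[1]), v[2])
        if vt == "Left":   # v = Left(Seq(v1, v2)) from the nullable case
            return ("Seq", inj(r[1], c, v[1][1]), v[1][2])
        if vt == "Right":  # c went into r2; r1 matched the empty string
            return ("Seq", mkeps(r[1]), inj(r[2], c, v[1]))
    if rt == "STAR":       # v = Seq(v1, Stars vs)
        return ("Stars", [inj(r[1], c, v[1])] + v[2][1])

def lexer(r, s):
    if not s:
        return mkeps(r) if nullable(r) else None
    v = lexer(der(s[0], r), s[1:])
    return None if v is None else inj(r, s[0], v)
```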


\section{Simplification of Regular Expressions}

The main drawback of building successive derivatives according to
Brzozowski's definition is that they can grow very quickly in size.
This is mainly due to the fact that the derivative operation often
generates ``useless'' $\ZERO$s and $\ONE$s in derivatives. As a result,
if implemented naively both algorithms by Brzozowski and by Sulzmann
and Lu are excruciatingly slow. For example when starting with the
regular expression $(a + aa)^*$ and building 12 successive derivatives
w.r.t.~the character $a$, one obtains a derivative regular expression
with more than 8000 nodes (when viewed as a tree). Operations like
derivative and $\nullable$ need to traverse such trees and
consequently the bigger the size of the derivative the slower the
algorithm. Fortunately, one can simplify regular expressions after
each derivative step. Various simplifications of regular expressions
are possible, such as the simplifications of $\ZERO + r$,
$r + \ZERO$, $\ONE\cdot r$, $r \cdot \ONE$, and $r + r$ to just
$r$. These simplifications do not affect the answer for whether a
regular expression matches a string or not, but fortunately also do
not affect the POSIX strategy of how regular expressions match
strings---although the latter is much harder to establish. Some
initial results in this regard have been obtained in
\cite{AusafDyckhoffUrban2016}. However, what has not been achieved yet
is a very tight bound for the size. Such a tight bound is suggested by
work of Antimirov who proved that (partial) derivatives can be bounded
by the number of characters contained in the initial regular
expression \cite{Antimirov95}.

Antimirov defined the ``partial derivatives'' of regular expressions as follows:
%TODO definition of partial derivatives

Essentially, the partial derivative of a regular expression is a set
of regular expressions that come from the sub-structure of the
original regular expression.
Antimirov proved a nice bound on the size of partial derivatives:
roughly speaking, their size will not exceed the fourth power of the
number of nodes in the original regular expression. Interestingly, we
observed in experiments that after the simplification step our regular
expressions have the same size as, or are smaller than, the partial
derivatives. This would allow us to prove a tight bound on the size of
the regular expressions during the running of the algorithm, provided
we can establish the connection between our simplification rules and
partial derivatives.
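
For illustration, here is a sketch of Antimirov's partial derivative
function in Python. The clauses follow the definition in
\cite{Antimirov95}; the hashable tuple encoding of regular expressions
is our own assumption:

```python
# Antimirov's partial derivatives: instead of a single derivative
# regular expression, a *set* of regular expressions is computed,
# which avoids the duplication introduced by the ALT case of "der".
ZERO, ONE = ("ZERO",), ("ONE",)
def CHAR(c): return ("CHAR", c)
def SEQ(r1, r2): return ("SEQ", r1, r2)
def ALT(r1, r2): return ("ALT", r1, r2)
def STAR(r): return ("STAR", r)

def nullable(r):
    tag = r[0]
    return (tag in ("ONE", "STAR")
            or (tag == "ALT" and (nullable(r[1]) or nullable(r[2])))
            or (tag == "SEQ" and nullable(r[1]) and nullable(r[2])))

def pder(c, r):
    tag = r[0]
    if tag in ("ZERO", "ONE"):
        return set()
    if tag == "CHAR":
        return {ONE} if r[1] == c else set()
    if tag == "ALT":
        return pder(c, r[1]) | pder(c, r[2])
    if tag == "SEQ":
        left = {SEQ(p, r[2]) for p in pder(c, r[1])}
        return left | pder(c, r[2]) if nullable(r[1]) else left
    if tag == "STAR":
        return {SEQ(p, r) for p in pder(c, r[1])}

# For (a + aa)* the partial derivative w.r.t. 'a' has just two
# elements, in contrast to the quickly growing ordinary derivatives.
r = STAR(ALT(CHAR("a"), SEQ(CHAR("a"), CHAR("a"))))
```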

%We believe, and have generated test
%data, that a similar bound can be obtained for the derivatives in
%Sulzmann and Lu's algorithm. Let us give some details about this next.

We first followed Sulzmann and Lu's idea of introducing
\emph{annotated regular expressions}~\cite{Sulzmann2014}. They are
defined by the following grammar:

\begin{center}
\begin{tabular}{lcl}
$\textit{a}$ & $::=$  & $\textit{ZERO}$\\
             & $\mid$ & $\textit{ONE}\;\;bs$\\
             & $\mid$ & $\textit{CHAR}\;\;bs\,c$\\
             & $\mid$ & $\textit{ALTS}\;\;bs\,as$\\
             & $\mid$ & $\textit{SEQ}\;\;bs\,a_1\,a_2$\\
             & $\mid$ & $\textit{STAR}\;\;bs\,a$
\end{tabular}
\end{center}

\noindent
where $bs$ stands for bitsequences, and $as$ (in \textit{ALTS}) stands
for a list of annotated regular expressions. These bitsequences encode
information about the (POSIX) value that should be generated by the
Sulzmann and Lu algorithm. Bitcodes are essentially incomplete values.
This can be straightforwardly seen in the following transformation:

\begin{center}
\begin{tabular}{lcl}
$\textit{code}(\Empty)$ & $\dn$ & $[]$\\
$\textit{code}(\Char\,c)$ & $\dn$ & $[]$\\
$\textit{code}(\Left\,v)$ & $\dn$ & $\Z :: code(v)$\\
$\textit{code}(\Right\,v)$ & $\dn$ & $\S :: code(v)$\\
$\textit{code}(\Seq\,v_1\,v_2)$ & $\dn$ & $code(v_1) \,@\, code(v_2)$\\
$\textit{code}(\Stars\,[])$ & $\dn$ & $[\S]$\\
$\textit{code}(\Stars\,(v\!::\!vs))$ & $\dn$ & $\Z :: code(v) \;@\;
                                                code(\Stars\,vs)$
\end{tabular}
\end{center}
where $\Z$ and $\S$ are arbitrary names for the bits in the
bitsequences.
Here $\textit{code}$ encodes a value into a bitsequence by converting
$\Left$ into $\Z$, $\Right$ into $\S$, the start of a non-empty star
iteration into $\Z$, and the border where a star terminates into $\S$.
This encoding is apparently lossy: it throws away the character
information, and it does not encode the boundary between the two
operands of a sequence. Moreover, from the bitcode alone we cannot
even tell whether the $\S$s and $\Z$s stem from $\Left$/$\Right$ or
from $\Stars$. The reason for choosing this compact way of storing
information is that the relatively small bitsequences can be easily
moved around during the lexing process. In order to recover values
from bitcodes, we need the regular expression as extra information and
decode the bits back into a value:\\
%\begin{definition}[Bitdecoding of Values]\mbox{}
\begin{center}
\begin{tabular}{@{}l@{\hspace{1mm}}c@{\hspace{1mm}}l@{}}
  $\textit{decode}'\,bs\,(\ONE)$ & $\dn$ & $(\Empty, bs)$\\
  $\textit{decode}'\,bs\,(c)$ & $\dn$ & $(\Char\,c, bs)$\\
  $\textit{decode}'\,(\Z\!::\!bs)\;(r_1 + r_2)$ & $\dn$ &
     $\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r_1\;\textit{in}\;
       (\Left\,v, bs_1)$\\
  $\textit{decode}'\,(\S\!::\!bs)\;(r_1 + r_2)$ & $\dn$ &
     $\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r_2\;\textit{in}\;
       (\Right\,v, bs_1)$\\
  $\textit{decode}'\,bs\;(r_1\cdot r_2)$ & $\dn$ &
     $\textit{let}\,(v_1, bs_1) = \textit{decode}'\,bs\,r_1\;\textit{in}$\\
  & &   $\textit{let}\,(v_2, bs_2) = \textit{decode}'\,bs_1\,r_2$\\
  & &   \hspace{35mm}$\textit{in}\;(\Seq\,v_1\,v_2, bs_2)$\\
  $\textit{decode}'\,(\S\!::\!bs)\,(r^*)$ & $\dn$ & $(\Stars\,[], bs)$\\
  $\textit{decode}'\,(\Z\!::\!bs)\,(r^*)$ & $\dn$ &
     $\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r\;\textit{in}$\\
  & &   $\textit{let}\,(\Stars\,vs, bs_2) = \textit{decode}'\,bs_1\,r^*$\\
  & &   \hspace{35mm}$\textit{in}\;(\Stars\,v\!::\!vs, bs_2)$\bigskip\\

  $\textit{decode}\,bs\,r$ & $\dn$ &
     $\textit{let}\,(v, bs') = \textit{decode}'\,bs\,r\;\textit{in}$\\
  & & $\textit{if}\;bs' = []\;\textit{then}\;\textit{Some}\,v\;
       \textit{else}\;\textit{None}$
\end{tabular}
\end{center}
%\end{definition}
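
Both functions can be sketched compactly in Python (our own
illustration; we model $\Z$ and $\S$ as the bits $0$ and $1$). Note
that, for $\textit{decode}$ to invert $\textit{code}$, the bit $\S$
must mark the end of a star, matching
$\textit{code}(\Stars\,[]) = [\S]$:

```python
# A sketch of code/decode: Z and S are modelled as the bits 0 and 1.
Z, S = 0, 1

def code(v):
    tag = v[0]
    if tag in ("Empty", "Char"): return []
    if tag == "Left": return [Z] + code(v[1])
    if tag == "Right": return [S] + code(v[1])
    if tag == "Seq": return code(v[1]) + code(v[2])
    if tag == "Stars":                 # Z starts an iteration, S ends the star
        bs = []
        for u in v[1]:
            bs += [Z] + code(u)
        return bs + [S]

def decode1(bs, r):   # decode': returns (value, remaining bits)
    tag = r[0]
    if tag == "ONE": return ("Empty",), bs
    if tag == "CHAR": return ("Char", r[1]), bs
    if tag == "ALT":
        if bs[0] == Z:
            v, bs1 = decode1(bs[1:], r[1]); return ("Left", v), bs1
        v, bs1 = decode1(bs[1:], r[2]); return ("Right", v), bs1
    if tag == "SEQ":
        v1, bs1 = decode1(bs, r[1])
        v2, bs2 = decode1(bs1, r[2])
        return ("Seq", v1, v2), bs2
    if tag == "STAR":
        if bs[0] == S: return ("Stars", []), bs[1:]
        v, bs1 = decode1(bs[1:], r[1])
        (_, vs), bs2 = decode1(bs1, r)
        return ("Stars", [v] + vs), bs2

def decode(bs, r):
    v, rest = decode1(bs, r)
    return v if rest == [] else None

# Round trip for r = (a + b)* and v = Stars [Left(Char a), Right(Char b)]
r = ("STAR", ("ALT", ("CHAR", "a"), ("CHAR", "b")))
v = ("Stars", [("Left", ("Char", "a")), ("Right", ("Char", "b"))])
```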

To do lexing using annotated regular expressions, we shall first
transform the usual (un-annotated) regular expressions into annotated
regular expressions:\\

%\begin{definition}
\begin{center}
\begin{tabular}{lcl}
  $(\ZERO)^\uparrow$ & $\dn$ & $\textit{ZERO}$\\
  $(\ONE)^\uparrow$ & $\dn$ & $\textit{ONE}\,[]$\\
  $(c)^\uparrow$ & $\dn$ & $\textit{CHAR}\,[]\,c$\\
  $(r_1 + r_2)^\uparrow$ & $\dn$ &
         $\textit{ALT}\;[]\,(\textit{fuse}\,[\Z]\,r_1^\uparrow)\,
                            (\textit{fuse}\,[\S]\,r_2^\uparrow)$\\
  $(r_1\cdot r_2)^\uparrow$ & $\dn$ &
         $\textit{SEQ}\;[]\,r_1^\uparrow\,r_2^\uparrow$\\
  $(r^*)^\uparrow$ & $\dn$ &
         $\textit{STAR}\;[]\,r^\uparrow$\\
\end{tabular}
\end{center}
%\end{definition}

Then we perform successive derivative operations on the annotated
regular expression. This derivative operation is the same as the one
we previously had for simple regular expressions, except that we take
special care of the bits that store the value information:\\

%\begin{definition}{bder}
\begin{center}
\begin{tabular}{@{}lcl@{}}
  $(\textit{ZERO})\backslash c$ & $\dn$ & $\textit{ZERO}$\\
  $(\textit{ONE}\;bs)\backslash c$ & $\dn$ & $\textit{ZERO}$\\
  $(\textit{CHAR}\;bs\,d)\backslash c$ & $\dn$ &
        $\textit{if}\;c=d\;\textit{then}\;
         \textit{ONE}\;bs\;\textit{else}\;\textit{ZERO}$\\
  $(\textit{ALT}\;bs\,a_1\,a_2)\backslash c$ & $\dn$ &
        $\textit{ALT}\,bs\,(a_1\backslash c)\,(a_2\backslash c)$\\
  $(\textit{SEQ}\;bs\,a_1\,a_2)\backslash c$ & $\dn$ &
     $\textit{if}\;\textit{bnullable}\,a_1$\\
  & &$\textit{then}\;\textit{ALT}\,bs\,(\textit{SEQ}\,[]\,(a_1\backslash c)\,a_2)$\\
  & &$\phantom{\textit{then}\;\textit{ALT}\,bs\,}(\textit{fuse}\,(\textit{bmkeps}\,a_1)\,(a_2\backslash c))$\\
  & &$\textit{else}\;\textit{SEQ}\,bs\,(a_1\backslash c)\,a_2$\\
  $(\textit{STAR}\,bs\,a)\backslash c$ & $\dn$ &
      $\textit{SEQ}\;bs\,(\textit{fuse}\, [\Z]\, (a\backslash c))\,
       (\textit{STAR}\,[]\,a)$
\end{tabular}
\end{center}
%\end{definition}

This way, we do not have to use an injection function and a second
phase, but instead only need to collect the bits while running
$\textit{bmkeps}$:

%\begin{definition}[\textit{bmkeps}]\mbox{}
\begin{center}
\begin{tabular}{lcl}
  $\textit{bmkeps}\,(\textit{ONE}\,bs)$ & $\dn$ & $bs$\\
  $\textit{bmkeps}\,(\textit{ALT}\,bs\,a_1\,a_2)$ & $\dn$ &
     $\textit{if}\;\textit{bnullable}\,a_1$\\
  & &$\textit{then}\;bs\,@\,\textit{bmkeps}\,a_1$\\
  & &$\textit{else}\;bs\,@\,\textit{bmkeps}\,a_2$\\
  $\textit{bmkeps}\,(\textit{SEQ}\,bs\,a_1\,a_2)$ & $\dn$ &
     $bs \,@\,\textit{bmkeps}\,a_1\,@\, \textit{bmkeps}\,a_2$\\
  $\textit{bmkeps}\,(\textit{STAR}\,bs\,a)$ & $\dn$ &
     $bs \,@\, [\S]$
\end{tabular}
\end{center}
%\end{definition}
and then decode the bits using the regular expression. After putting
these pieces together, the whole process looks like this:\\
\begin{center}
\begin{tabular}{lcl}
  $\textit{blexer}\;r\,s$ & $\dn$ &
      $\textit{let}\;a = (r^\uparrow)\backslash s\;\textit{in}$\\
  & & $\;\;\textit{if}\; \textit{bnullable}(a)$\\
  & & $\;\;\textit{then}\;\textit{decode}\,(\textit{bmkeps}\,a)\,r$\\
  & & $\;\;\textit{else}\;\textit{None}$
\end{tabular}
\end{center}
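
The bitcoded pieces can be sketched together as follows (our own
Python rendering of $(\cdot)^\uparrow$, $\textit{fuse}$,
$\textit{bder}$ and $\textit{bmkeps}$; for brevity the function below
returns the bitsequence only, leaving the decoding step to
$\textit{decode}$):

```python
# A sketch of the bitcoded lexer. Annotated regular expressions are
# tuples whose second component is the bit-list bs; Z and S are 0 and 1.
Z, S = 0, 1

def fuse(bs, a):
    # prepend bits bs to the annotation of a (ZERO carries no bits)
    if a[0] == "ZERO": return a
    return (a[0], bs + a[1]) + a[2:]

def internalise(r):   # the (.)^ operation
    tag = r[0]
    if tag == "ZERO": return ("ZERO",)
    if tag == "ONE": return ("ONE", [])
    if tag == "CHAR": return ("CHAR", [], r[1])
    if tag == "ALT":
        return ("ALT", [], fuse([Z], internalise(r[1])),
                           fuse([S], internalise(r[2])))
    if tag == "SEQ": return ("SEQ", [], internalise(r[1]), internalise(r[2]))
    if tag == "STAR": return ("STAR", [], internalise(r[1]))

def bnullable(a):
    tag = a[0]
    if tag in ("ZERO", "CHAR"): return False
    if tag in ("ONE", "STAR"): return True
    if tag == "ALT": return bnullable(a[2]) or bnullable(a[3])
    if tag == "SEQ": return bnullable(a[2]) and bnullable(a[3])

def bmkeps(a):
    tag = a[0]
    if tag == "ONE": return a[1]
    if tag == "ALT":
        return a[1] + (bmkeps(a[2]) if bnullable(a[2]) else bmkeps(a[3]))
    if tag == "SEQ": return a[1] + bmkeps(a[2]) + bmkeps(a[3])
    if tag == "STAR": return a[1] + [S]

def bder(c, a):
    tag = a[0]
    if tag in ("ZERO", "ONE"): return ("ZERO",)
    if tag == "CHAR":
        return ("ONE", a[1]) if a[2] == c else ("ZERO",)
    if tag == "ALT": return ("ALT", a[1], bder(c, a[2]), bder(c, a[3]))
    if tag == "SEQ":
        if bnullable(a[2]):
            return ("ALT", a[1], ("SEQ", [], bder(c, a[2]), a[3]),
                                 fuse(bmkeps(a[2]), bder(c, a[3])))
        return ("SEQ", a[1], bder(c, a[2]), a[3])
    if tag == "STAR":
        return ("SEQ", a[1], fuse([Z], bder(c, a[2])), ("STAR", [], a[2]))

def blex(r, s):   # the bitsequence of the POSIX value, or None
    a = internalise(r)
    for c in s:
        a = bder(c, a)
    return bmkeps(a) if bnullable(a) else None
```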

The main point of the bitsequences and annotated regular expressions
is that we can apply rather aggressive (in terms of size)
simplification rules in order to keep derivatives small.

We have
developed such ``aggressive'' simplification rules and generated test
data that show that the expected bound can be achieved. Obviously we
could only partially cover the search space as there are infinitely
many regular expressions and strings. One modification we introduced
is to allow a list of annotated regular expressions in the
\textit{ALTS} constructor. This allows us to not just delete
unnecessary $\ZERO$s and $\ONE$s from regular expressions, but also
unnecessary ``copies'' of regular expressions (very similar to
simplifying $r + r$ to just $r$, but in a more general
setting).
A pseudocode version of our algorithm is given below:\\

\begin{algorithm}
\caption{Simplification of an annotated regular expression}\label{euclid}
\begin{algorithmic}[1]
\Procedure{$Simp$}{$areg$}
\Switch{$areg$}
\Case{$ALTS(bs, rs)$}
\For{\textit{rs[i] in array rs}}
\State $\textit{rs[i]} \gets$ \textit{Simp(rs[i])}
\EndFor
\For{\textit{rs[i] in array rs}}
\If{$rs[i] == ALTS(bs', rs')$}
\State $rs'' \gets$ attach bits $bs'$ to all elements in $rs'$
\State Insert $rs''$ into $rs$ at position $i$ ($rs[i]$ is destroyed, replaced by its list of children regular expressions)
\EndIf
\EndFor
\State Remove all duplicates in $rs$, keeping only the first copy of multiple occurrences of the same regular expression
\State Remove all $\ZERO$s in $rs$
\If{$rs.length == 0$} \Return $ZERO$ \EndIf
\If{$rs.length == 1$} \Return $rs[0]$ \EndIf
\EndCase
\Case{$SEQ(bs, r_1, r_2)$}
\If{$r_1$ or $r_2$ is $ZERO$} \Return $ZERO$ \EndIf
\State update $r_1$ and $r_2$ by attaching $bs$ to their front
\If{$r_1$ or $r_2$ is $ONE(bs')$} \Return $r_2$ or $r_1$ \EndIf
\EndCase
\Case{$Others$}
\Return $areg$ as it is
\EndCase
\EndSwitch
\EndProcedure
\end{algorithmic}
\end{algorithm}
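The simplification step can be rendered concretely as a short Python sketch. The tuple representation (\texttt{('ALTS', bs, rs)} with a list of alternatives, \texttt{('SEQ', bs, r1, r2)}, and so on) is an assumption made for illustration, not the formalised definition:

```python
# Sketch of the simplification of annotated regular expressions,
# represented as tagged tuples: ('ZERO',), ('ONE', bs), ('CHAR', bs, c),
# ('ALTS', bs, rs) with a list rs of alternatives, ('SEQ', bs, r1, r2).

def fuse(bs, r):
    # attach the bits bs to the front of r's own bits
    if r[0] == 'ZERO':
        return r
    return (r[0], bs + r[1]) + r[2:]

def simp(r):
    tag = r[0]
    if tag == 'ALTS':
        bs, rs = r[1], r[2]
        flat = []
        for child in map(simp, rs):
            if child[0] == 'ALTS':
                # spill the children of a nested ALTS, fusing its bits
                flat.extend(fuse(child[1], gc) for gc in child[2])
            elif child[0] != 'ZERO':    # delete unnecessary ZEROs
                flat.append(child)
        dedup = []
        for child in flat:              # keep only the first copy
            if child not in dedup:
                dedup.append(child)
        if not dedup:
            return ('ZERO',)
        if len(dedup) == 1:
            return fuse(bs, dedup[0])
        return ('ALTS', bs, dedup)
    if tag == 'SEQ':
        bs, r1, r2 = r[1], simp(r[2]), simp(r[3])
        if r1[0] == 'ZERO' or r2[0] == 'ZERO':
            return ('ZERO',)
        if r1[0] == 'ONE':              # 1 . r2  simplifies to  r2
            return fuse(bs + r1[1], r2)
        return ('SEQ', bs, r1, r2)
    return r                            # all other cases stay as they are
```

Under this representation, simplifying $a + \ZERO + (a + b)$ flattens the nested alternative, drops the $\ZERO$ and keeps only the first copy of $a$, leaving $a + b$.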

With this simplification, the 8000 nodes of our earlier $(a + aa)^*$ example are reduced to just 6.

Another modification is that we use simplification rules inspired by Antimirov's work on partial derivatives. They maintain the idea that only the first ``copy'' of a regular expression in an alternative contributes to the calculation of a POSIX value. All subsequent copies can be pruned from the regular expression.

We are currently engaged in proving that our simplification rules actually do not affect the POSIX value that should be generated by the algorithm according to the specification of a POSIX value, and furthermore that the derivatives stay small for all inputs. For this proof we use the theorem prover Isabelle. Once completed, this result will advance the state-of-the-art: Sulzmann and Lu wrote in their paper \cite{Sulzmann2014} about the bitcoded ``incremental parsing method'' (that is the matching algorithm outlined in this section):

\begin{quote}\it
``Correctness Claim: We further claim that the incremental parsing method in Figure~5 in combination with the simplification steps in Figure~6 yields POSIX parse trees. We have tested this claim extensively by using the method in Figure~3 as a reference but yet have to work out all proof details.''
\end{quote}

\noindent
We would thereby settle the correctness claim and furthermore obtain a much tighter bound on the sizes of derivatives. As a result, our algorithm should be correct and fast on all inputs; the blow-up observed in JavaScript, Python and Java would be excluded from our algorithm.

\section{Conclusion}

In this PhD-project we are interested in fast algorithms for regular expression matching. While this seems to be a ``settled'' area, interesting research questions in fact pop up as soon as one steps outside the classic automata theory (for example in terms of what kind of regular expressions are supported). The reason why it is interesting for us to look at the derivative approach introduced by Brzozowski for regular expression matching, and then much further developed by Sulzmann and Lu, is that derivatives can elegantly deal with some of the regular expressions that are of interest in ``real life''. This includes the not-regular expression, written $\neg\,r$ (that is, all strings that are not recognised by $r$), but also bounded regular expressions such as $r^{\{n\}}$ and $r^{\{n..m\}}$. There is also hope that derivatives can provide another angle for how to deal more efficiently with back-references, which are one of the reasons why regular expression engines in JavaScript, Python and Java choose not to implement the classic automata approach of transforming regular expressions into NFAs and then DFAs---because we simply do not know how such back-references can be represented by DFAs.

\bibliographystyle{plain}
\bibliography{root}
\end{document}