\documentclass[a4paper,UKenglish]{lipics}
\usepackage{graphicx}
\usepackage{pgfplots}
\usepackage{tikz-cd}
%\usepackage{algorithm}
\usepackage{amsmath}
\usepackage[noend]{algpseudocode}
\usepackage{enumitem}
\usepackage{nccmath}

\definecolor{darkblue}{rgb}{0,0,0.6}
\hypersetup{colorlinks=true,allcolors=darkblue}
\newcommand{\comment}[1]%
{{\color{red}$\Rightarrow$}\marginpar{\raggedright\small{\bf\color{red}#1}}}

% \documentclass{article}
%\usepackage[utf8]{inputenc}
%\usepackage[english]{babel}
%\usepackage{listings}
% \usepackage{amsthm}
%\usepackage{hyperref}
% \usepackage[margin=0.5in]{geometry}
%\usepackage{pmboxdraw}

\title{POSIX Regular Expression Matching and Lexing}
\author{Chengsong Tan}
\affil{King's College London\\
London, UK\\
\texttt{chengsong.tan@kcl.ac.uk}}
\authorrunning{Chengsong Tan}
\Copyright{Chengsong Tan}

\newcommand{\dn}{\stackrel{\mbox{\scriptsize def}}{=}}%
\newcommand{\ZERO}{\mbox{\bf 0}}
\newcommand{\ONE}{\mbox{\bf 1}}
\def\erase{\textit{erase}}
\def\bders{\textit{bders}}
\def\lexer{\mathit{lexer}}
\def\blexer{\textit{blexer}}
\def\blexers{\mathit{blexer\_simp}}
\def\simp{\mathit{simp}}
\def\mkeps{\mathit{mkeps}}
\def\bmkeps{\textit{bmkeps}}
\def\inj{\mathit{inj}}
\def\Empty{\mathit{Empty}}
\def\Left{\mathit{Left}}
\def\Right{\mathit{Right}}
\def\Stars{\mathit{Stars}}
\def\Char{\mathit{Char}}
\def\Seq{\mathit{Seq}}
\def\Der{\mathit{Der}}
\def\nullable{\mathit{nullable}}
\def\Z{\mathit{Z}}
\def\S{\mathit{S}}
\def\flex{\textit{flex}}
\def\rup{r^\uparrow}
\def\retrieve{\textit{retrieve}}
\def\AALTS{\textit{AALTS}}
\def\AONE{\textit{AONE}}
%\theoremstyle{theorem}
%\newtheorem{theorem}{Theorem}
%\theoremstyle{lemma}
%\newtheorem{lemma}{Lemma}
%\newcommand{\lemmaautorefname}{Lemma}
%\theoremstyle{definition}
%\newtheorem{definition}{Definition}
\algnewcommand\algorithmicswitch{\textbf{switch}}
\algnewcommand\algorithmiccase{\textbf{case}}
\algnewcommand\algorithmicassert{\texttt{assert}}
\algnewcommand\Assert[1]{\State \algorithmicassert(#1)}%
% New "environments"
\algdef{SE}[SWITCH]{Switch}{EndSwitch}[1]{\algorithmicswitch\ #1\ \algorithmicdo}{\algorithmicend\ \algorithmicswitch}%
\algdef{SE}[CASE]{Case}{EndCase}[1]{\algorithmiccase\ #1}{\algorithmicend\ \algorithmiccase}%
\algtext*{EndSwitch}%
\algtext*{EndCase}%


\begin{document}

\maketitle
\begin{abstract}
Brzozowski introduced in 1964 a beautifully simple algorithm for
regular expression matching based on the notion of derivatives of
regular expressions. In 2014, Sulzmann and Lu extended this
algorithm to not just give a YES/NO answer for whether or not a
regular expression matches a string, but in case it does to also
answer with \emph{how} it matches the string. This is important for
applications such as lexing (tokenising a string). The problem is to
make the algorithm by Sulzmann and Lu fast on all inputs without
breaking its correctness. Being fast depends on a complete set of
simplification rules, some of which
have been put forward by Sulzmann and Lu. We have extended their
rules in order to obtain a tight bound on the size of regular expressions.
We have tested the correctness of these extended rules, but have not
formally established their correctness. We also have not yet looked
at extended regular expressions, such as bounded repetitions,
negation and back-references.
\end{abstract}

\section{Introduction}

While we believe derivatives of regular expressions are a beautiful
concept (in terms of the ease of implementing them in a functional
programming language and the ease of reasoning about them formally),
they have one major drawback: every derivative step can make regular
expressions grow drastically in size. This in turn has negative effects
on the runtime of the corresponding lexing algorithms. Consider for
example the regular expression $(a+aa)^*$ and the short string
$aaaaaaaaaaaa$. The size of the corresponding derivative is already
8668 nodes, assuming the derivative is represented as a tree. The
reason for the poor runtime of the lexing algorithms is that they need
to traverse such trees over and over again. The solution is to find a
complete set of simplification rules that keep the sizes of derivatives
uniformly small.
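This blow-up can be observed directly with a small implementation. The following Python sketch (an illustration only, not the Isabelle formalisation used in this work) implements the standard Brzozowski derivative without any simplification and measures the tree size of successive derivatives of $(a+aa)^*$; the exact node counts depend on how one counts nodes, so they may differ from the 8668 figure quoted above, but the rapid growth is the same.

```python
# Brzozowski derivatives without simplification: an illustrative sketch
# showing how quickly derivative sizes grow.
# Regular expressions are tuples:
#   ('0',) zero, ('1',) one, ('c', ch) character,
#   ('+', r1, r2) alternative, ('.', r1, r2) sequence, ('*', r) star

def nullable(r):
    tag = r[0]
    if tag == '0': return False
    if tag == '1': return True
    if tag == 'c': return False
    if tag == '+': return nullable(r[1]) or nullable(r[2])
    if tag == '.': return nullable(r[1]) and nullable(r[2])
    if tag == '*': return True

def der(c, r):
    tag = r[0]
    if tag in ('0', '1'): return ('0',)
    if tag == 'c': return ('1',) if r[1] == c else ('0',)
    if tag == '+': return ('+', der(c, r[1]), der(c, r[2]))
    if tag == '.':
        left = ('.', der(c, r[1]), r[2])
        return ('+', left, der(c, r[2])) if nullable(r[1]) else left
    if tag == '*': return ('.', der(c, r[1]), r)

def size(r):
    # number of nodes of the regular expression seen as a tree
    return 1 + sum(size(x) for x in r[1:] if isinstance(x, tuple))

# (a + aa)*
a = ('c', 'a')
r = ('*', ('+', a, ('.', a, a)))

sizes = []
cur = r
for _ in range(13):
    sizes.append(size(cur))
    cur = der('a', cur)
print(sizes)  # sizes grow rapidly without simplification
```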

For reasons beyond this report, it turns out that a complete set of
simplification rules depends on values being encoded as bitsequences.
(Values are the results that the lexing algorithms generate; they
encode how a regular expression matched a string.) We already know that
the lexing algorithm \emph{without} simplification is correct.
Therefore in the past 6 months we were trying to prove that the
algorithm using bitsequences plus our simplification rules is correct.
Formally this amounts to showing that

\begin{equation}\label{mainthm}
\blexers \; r \; s = \blexer \;r\;s
\end{equation}

\noindent
whereby $\blexers$ simplifies (makes derivatives smaller) in each step,
whereas with $\blexer$ the size can grow exponentially. This would be an
important milestone, because we already have a very good idea how to
establish that our set of simplification rules keeps the size below a
relatively tight bound.

In order to prove the main theorem \eqref{mainthm}, we need to prove
that the two functions produce the same output. The definition of these
functions is shown below.

\begin{center}
\begin{tabular}{lcl}
$\textit{blexer}\;r\,s$ & $\dn$ &
$\textit{let}\;a = (r^\uparrow)\backslash s\;\textit{in}$\\
& & $\;\;\textit{if}\; \textit{bnullable}(a)$\\
& & $\;\;\textit{then}\;\textit{decode}\,(\textit{bmkeps}\,a)\,r$\\
& & $\;\;\textit{else}\;\textit{None}$
\end{tabular}
\end{center}

\begin{center}
\begin{tabular}{lcl}
$\blexers \; r \, s$ &$\dn$ &
$\textit{let} \; a = (r^\uparrow)\backslash_{simp}\, s\; \textit{in}$\\
& & $\; \; \textit{if} \; \textit{bnullable}(a)$\\
& & $\; \; \textit{then} \; \textit{decode}\,(\textit{bmkeps}\,a)\,r$\\
& & $\;\; \textit{else}\;\textit{None}$
\end{tabular}
\end{center}
\noindent
In these definitions $(r^\uparrow)$ is a kind of coding function that is
the same in each case; similarly the $\textit{decode}$ and the
$\textit{bmkeps}$ functions. Our main theorem \eqref{mainthm} therefore
boils down to proving the following two propositions (depending on
which branch the if-else clause takes). They establish that the
derivatives \emph{with} simplification do not change the computed
result:

\begin{itemize}
\item{} If a string $s$ is in the language $L(r)$, then \\
$\textit{bmkeps} (r^\uparrow)\backslash_{simp}\,s = \textit{bmkeps} (r^\uparrow)\backslash s$,\\
\item{} If a string $s$ is \emph{not} in the language $L(r)$, then
$\rup \backslash_{simp} \,s$ is not nullable.
\end{itemize}

\noindent
We have already proved the second part in Isabelle. This is actually
not too difficult because we can show that simplification does not
change the language of regular expressions. If we can prove the first
case, that is, that the bitsequence algorithm with simplification
produces the same result as the one without simplification, then we are
done. Unfortunately that part requires more effort, because
simplification not only needs to \emph{not} change the language, but
also must not change the value (the computed result).

For this we have started with looking at the original proof that
established that the bitsequence algorithm produces the same result as
the algorithm not using bitsequences. Formally this proof established

\begin{center}
$\blexer \; r^\uparrow s = \lexer \;r \;s$
\end{center}

\noindent
The proof used two ``tricks''. One is that it defined
a $\flex$-function

\begin{center}
\begin{tabular}{lcl}
$\textit{flex} \;r\; f\; (c\!::\!s) $ & $\dn$ & $\textit{flex} \; (r\backslash c) \;(\lambda v. f (\inj \; r \; c \; v)) \;s$ \\
$\textit{flex} \;r\; f\; [\,] $ & $\dn$ & $f$
\end{tabular}
\end{center}

\noindent
and then proved for the right-hand side of the above equation that

\begin{center}
$\lexer \;r\; s = \flex \;r\;\textit{id} \; s \;(\mkeps \; (r\backslash s))$
\end{center}

\noindent\rule[1.5ex]{\linewidth}{1pt}

\noindent
The $\flex$-function essentially does lexing by
stacking up injection functions while doing derivatives,
explicitly showing the order in which the characters are
injected back in each step.
\noindent
$\flex$ focuses on the injections instead
of the derivatives,
compared
to the original definition of $\lexer$,
which puts an equal amount of emphasis on
the injection and the derivative with respect to each character:
\begin{center}
\begin{tabular}{lcl}
$\textit{lexer} \; r\; (c\!::\!s) $ & $\dn$ & $\textit{case} \; \lexer \; (r\backslash c) \;s \; \textit{of}$ \\
& & $\textit{None} \; \Longrightarrow \; \textit{None}$\\
& & $\textbar \; \textit{Some} \; v \; \Longrightarrow \; \textit{Some} \; (\inj \; r\;c\;v)$\\
$\textit{lexer} \; r\; [\,] $ & $\dn$ & $\textit{if} \; \nullable (r) \; \textit{then} \; \textit{Some} \; (\mkeps (r)) \; \textit{else} \;\textit{None}$
\end{tabular}
\end{center}
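To make the relationship between $\lexer$ and $\flex$ concrete, here is a small Python sketch of the unbitcoded algorithm (an illustration only; the formal development is in Isabelle). It implements $\mkeps$, $\inj$ and $\lexer$ in the standard derivative style, plus a $\flex$ that stacks up injection functions as closures; the final check confirms that both compute the same value.

```python
# A sketch of the derivative-based lexer: nullable/der plus values,
# mkeps, inj, lexer, and the injection-stacking flex function.
# Regexes: ('0',), ('1',), ('c', ch), ('+', r1, r2), ('.', r1, r2), ('*', r)
# Values:  ('Empty',), ('Chr', ch), ('Seq', v1, v2), ('Left', v),
#          ('Right', v), ('Stars', [v1, ..., vn])

def nullable(r):
    tag = r[0]
    if tag == '0' or tag == 'c': return False
    if tag == '1' or tag == '*': return True
    if tag == '+': return nullable(r[1]) or nullable(r[2])
    if tag == '.': return nullable(r[1]) and nullable(r[2])

def der(c, r):
    tag = r[0]
    if tag in ('0', '1'): return ('0',)
    if tag == 'c': return ('1',) if r[1] == c else ('0',)
    if tag == '+': return ('+', der(c, r[1]), der(c, r[2]))
    if tag == '.':
        left = ('.', der(c, r[1]), r[2])
        return ('+', left, der(c, r[2])) if nullable(r[1]) else left
    if tag == '*': return ('.', der(c, r[1]), r)

def mkeps(r):
    tag = r[0]
    if tag == '1': return ('Empty',)
    if tag == '+':
        return ('Left', mkeps(r[1])) if nullable(r[1]) else ('Right', mkeps(r[2]))
    if tag == '.': return ('Seq', mkeps(r[1]), mkeps(r[2]))
    if tag == '*': return ('Stars', [])

def inj(r, c, v):
    tag = r[0]
    if tag == 'c': return ('Chr', r[1])          # v is Empty here
    if tag == '+':
        return ('Left', inj(r[1], c, v[1])) if v[0] == 'Left' \
               else ('Right', inj(r[2], c, v[1]))
    if tag == '.':
        if v[0] == 'Seq':   return ('Seq', inj(r[1], c, v[1]), v[2])
        if v[0] == 'Left':  return ('Seq', inj(r[1], c, v[1][1]), v[1][2])
        if v[0] == 'Right': return ('Seq', mkeps(r[1]), inj(r[2], c, v[1]))
    if tag == '*':                                # v = Seq(v1, Stars vs)
        return ('Stars', [inj(r[1], c, v[1])] + v[2][1])

def lexer(r, s):
    if not s:
        return mkeps(r) if nullable(r) else None
    v = lexer(der(s[0], r), s[1:])
    return None if v is None else inj(r, s[0], v)

def flex(r, f, s):
    # stacks up an injection closure for each character while
    # taking derivatives
    if not s:
        return f
    return flex(der(s[0], r), lambda v: f(inj(r, s[0], v)), s[1:])

def ders(s, r):
    for c in s:
        r = der(c, r)
    return r

# r = (a + b)*,  s = "ab"
r = ('*', ('+', ('c', 'a'), ('c', 'b')))
s = "ab"
v1 = lexer(r, s)
v2 = flex(r, lambda v: v, s)(mkeps(ders(s, r)))
print(v1)
print(v1 == v2)
```

Running the sketch shows the two formulations agreeing on the value for this example, mirroring the equation $\lexer\;r\;s = \flex\;r\;\textit{id}\;s\;(\mkeps\;(r\backslash s))$ above.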
\noindent
Using this feature of $\flex$ we can rewrite lexing
w.r.t.\ $s\,@\,[c]$ in terms of lexing
w.r.t.\ $s$:
\begin{center}
$\flex \; r \; \textit{id} \; (s@[c]) \; v = \flex \; r \; \textit{id} \; s \; (\inj \; (r\backslash s) \; c\; v)$.
\end{center}
\noindent
This allows us to use
the inductive hypothesis to get
\begin{center}
$ \flex \; r\; \textit{id}\; (s@[c])\; v = \textit{decode} \;( \textit{retrieve}\; (\rup \backslash s) \; (\inj \; (r\backslash s) \;c\;v)\;)\; r$
\end{center}
\noindent
By using a property of $\retrieve$, the right-hand side of the above
equality is
$\textit{decode}\,(\retrieve\, (\rup \backslash(s @ [c]))\; v)\, r$, and this gives the
main lemma result:
\begin{center}
$ \flex \;r\; \textit{id} \; (s@[c]) \; v =\textit{decode}\,(\textit{retrieve}\, (\rup \backslash (s@[c])) \;v)\, r$
\end{center}
\noindent
To use this lemma result for our
correctness proof, we simply replace the $v$ in the
right-hand side of the above equality with
$\mkeps\;(r\backslash (s@[c]))$, and apply the lemma that

\begin{center}
$\textit{decode} \; (\bmkeps \; \rup) \; r = \textit{decode} \; (\textit{retrieve} \; \rup \; (\mkeps \; r)) \;r$
\end{center}
\noindent
This gives us the correctness of our bit-coded algorithm:
\begin{center}
$\flex \;r\; \textit{id} \; s \; (\mkeps \; (r\backslash s)) = \textit{decode} \; (\bmkeps \; (\rup\backslash s)) \; r$
\end{center}
\noindent
The bridge between the above chain of equalities
is the use of $\retrieve$.
If we want to use a similar technique for the
simplified version of the algorithm,
we face the problem that in the above
equalities
$\retrieve \; a \; v$ is not always defined.
For example,
$\retrieve \; _0(_1a+_0a) \; \Left(\Empty)$
is defined, but not $\retrieve \; (_{01}a) \;\Left(\Empty)$,
even though we can extract the same POSIX
bits from the two annotated regular expressions.
The latter situation might occur when we try to retrieve from
a simplified regular expression using the same value
as for the unsimplified one.
This is because $\Left(\Empty)$ corresponds to
the regular expression structure $\ONE+r_2$ instead of
$\ONE$.
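The partiality of $\retrieve$ can be seen in a small sketch. Below is an illustrative Python version of $\retrieve$ over annotated regular expressions (bits are rendered as lists of 0s and 1s; this is a simplified rendering, not the Isabelle definition, and the star case is omitted): it succeeds when the value fits the structure of the annotated regular expression and raises an error otherwise.

```python
# retrieve extracts the bitsequence from an annotated regular expression
# guided by a value; it is partial: the value must fit the *structure*
# of the annotated regular expression.
# Annotated regexes: ('AONE', bs), ('ACHAR', bs, ch),
#                    ('AALT', bs, a1, a2), ('ASEQ', bs, a1, a2)
# where bs is a list of bits.

def retrieve(a, v):
    tag = a[0]
    if tag == 'AONE' and v[0] == 'Empty':
        return a[1]
    if tag == 'ACHAR' and v[0] == 'Chr':
        return a[1]
    if tag == 'AALT' and v[0] == 'Left':
        return a[1] + retrieve(a[2], v[1])
    if tag == 'AALT' and v[0] == 'Right':
        return a[1] + retrieve(a[3], v[1])
    if tag == 'ASEQ' and v[0] == 'Seq':
        return a[1] + retrieve(a[2], v[1]) + retrieve(a[3], v[2])
    raise ValueError("value does not fit the annotated regular expression")

# A Left(Empty) value fits an alternative whose left branch is ONE ...
alt = ('AALT', [0], ('AONE', [1]), ('AONE', [0]))
print(retrieve(alt, ('Left', ('Empty',))))

# ... but not a regular expression that is just ONE, as obtained
# after simplification
try:
    retrieve(('AONE', [0, 1]), ('Left', ('Empty',)))
except ValueError as e:
    print("undefined:", e)
```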
That means, if we
want to prove that
\begin{center}
$\textit{decode} \; (\bmkeps \; (\rup\backslash s)) \; r = \textit{decode} \; (\bmkeps \; (\rup\backslash_{simp} s)) \; r$
\end{center}
\noindent
holds by using $\retrieve$,
we probably need to prove an equality like the one below:
\begin{center}
%$\retrieve \; \rup\backslash_{simp} s \; \mkeps(r\backslash_{simp} s)=\textit{retrieve} \; \rup\backslash s \; \mkeps(r\backslash s)$
$\retrieve \; (\rup\backslash_{simp} s) \; (\mkeps(f(r\backslash s)))=\textit{retrieve} \; (\rup\backslash s) \; (\mkeps(r\backslash s))$
\end{center}
\noindent
Here $f$ rectifies $r\backslash s$ so that the value $\mkeps(f(r\backslash s))$ becomes
something simpler,
which makes the retrieve function defined.\\
One way to do this is to prove the following:
\begin{center}
$\retrieve \; (\rup\backslash_{simp} s) \; (\mkeps(\simp(r\backslash s)))=\textit{retrieve} \; (\rup\backslash s) \; (\mkeps(r\backslash s))$
\end{center}
\noindent
The reason why we choose $\simp$ as $f$ is that
$\rup\backslash_{simp} \, s$ and $\simp(\rup\backslash \, s)$
have the same shape:
\begin{center}
$\erase (\rup\backslash_{simp} \, s) = \erase(\simp(\rup\backslash s))$
\end{center}

\noindent
$\erase$ in the above equality means to remove the bit-codes
in an annotated regular expression and keep only the original
regular expression (just like ``erasing'' the bits). Its definition is omitted.
$\rup\backslash_{simp} \, s$ and $\simp(\rup\backslash s)$
are very closely related, but not identical.
For example, let $r$ be the regular expression
$(a+b)(a+a^*)$ and $s$ be the string $aa$, then
both $\erase (\rup\backslash_{simp} \, s)$ and $\erase (\simp (\rup\backslash s))$
are $\ONE + a^*$. However, without $\erase$
\begin{center}
$\rup\backslash_{simp} \, s$ is equal to $_0(_0\ONE +_{11}a^*)$
\end{center}
\noindent
whereas
\begin{center}
$\simp(\rup\backslash s)$ is equal to $(_{00}\ONE +_{011}a^*)$
\end{center}
\noindent
For the sake of visual simplicity, we use numbers to denote the bits
in bitcodes as we have previously defined for annotated
regular expressions: $\S$ is replaced by
subscript $_1$ and $\Z$ by $_0$.

Two ``rules'' might be inferred from the above example.

First, after erasing the bits the two regular expressions
are exactly the same: both become $1+a^*$. Here the
function $\simp$ exhibits the ``one in the end equals many times
at the front''
property: one simplification at the end yields the
same regular expression structure as
successive simplifications done alongside the derivatives.
$\rup\backslash_{simp} \, s$ unfolds to
$\simp((\simp(r\backslash a))\backslash a)$
and $\simp(\rup\backslash s)$ unfolds to
$\simp((r\backslash a)\backslash a)$. The one simplification
in the latter causes the resulting regular expression to
become $1+a^*$, exactly the same as the former with
two simplifications.

Second, the bit-codes are different, but they are essentially
the same: if we push the outermost bits ${\bf_0}(_0\ONE +_{11}a^*)$ of $\rup\backslash_{simp} \, s$
inside, then we get $(_{00}\ONE +_{011}a^*)$, exactly the
same as that of $\rup\backslash \, s$. And this difference
does not matter when we try to apply $\bmkeps$ or $\retrieve$
to it. This seems good news if we want to use $\retrieve$
to prove things.

If we look into the difference above, we can see that the
difference is not fundamental: the bits are just being moved
around in a way that does not hurt the correctness.
During the first derivative operation,
$\rup\backslash a=(_0\ONE + \ZERO)(_0a + _1a^*)$ is
in the form of a sequence regular expression with the first
part being nullable.
Recall the simplification function definition:
\begin{center}
\begin{tabular}{@{}lcl@{}}

$\textit{simp} \; (\textit{SEQ}\;bs\,a_1\,a_2)$ & $\dn$ & $ (\textit{simp} \; a_1, \textit{simp} \; a_2) \; \textit{match} $ \\
&&$\quad\textit{case} \; (\ZERO, \_) \Rightarrow \ZERO$ \\
&&$\quad\textit{case} \; (\_, \ZERO) \Rightarrow \ZERO$ \\
&&$\quad\textit{case} \; (\ONE, a_2') \Rightarrow \textit{fuse} \; bs \; a_2'$ \\
&&$\quad\textit{case} \; (a_1', \ONE) \Rightarrow \textit{fuse} \; bs \; a_1'$ \\
&&$\quad\textit{case} \; (a_1', a_2') \Rightarrow \textit{SEQ} \; bs \; a_1' \; a_2'$ \\

$\textit{simp} \; (\textit{ALTS}\;bs\,as)$ & $\dn$ & $\textit{distinct}( \textit{flatten} ( \textit{map} \; \textit{simp} \; as)) \; \textit{match} $ \\
&&$\quad\textit{case} \; [] \Rightarrow \ZERO$ \\
&&$\quad\textit{case} \; a :: [] \Rightarrow \textit{fuse} \; bs \; a$ \\
&&$\quad\textit{case} \; as' \Rightarrow \textit{ALTS}\;bs\;as'$\\

$\textit{simp} \; a$ & $\dn$ & $\textit{a} \qquad \textit{otherwise}$
\end{tabular}
\end{center}
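A Python rendering of these clauses may help to see them in action (an illustrative sketch, not the formal definition): $\textit{fuse}$ attaches bits to the front of an annotated regular expression, $\textit{flatten}$ removes $\ZERO$s and opens up nested alternatives, and $\textit{distinct}$ removes duplicates (here judged by the erased regular expression). In the $(\ONE, a_2')$ clause the sketch also fuses the $\ONE$'s own bits into $a_2'$, matching the worked example below.

```python
# simp on annotated regular expressions, following the clauses above.
# Annotated regexes: ('AZERO',), ('AONE', bs), ('ACHAR', bs, ch),
#                    ('ASEQ', bs, a1, a2), ('AALTS', bs, [a1, ..., an])

def fuse(bs, a):
    if a[0] == 'AZERO':
        return a
    # prepend the bits bs to a's own bits
    return (a[0], bs + a[1]) + a[2:]

def flatten(lst):
    res = []
    for a in lst:
        if a[0] == 'AZERO':
            continue                                   # drop ZEROs
        elif a[0] == 'AALTS':
            res.extend(fuse(a[1], x) for x in a[2])    # open nested ALTS
        else:
            res.append(a)
    return res

def erase(a):
    # remove the bits, keeping only the structure (used by distinct)
    tag = a[0]
    if tag == 'AZERO': return ('0',)
    if tag == 'AONE': return ('1',)
    if tag == 'ACHAR': return ('c', a[2])
    if tag == 'ASEQ': return ('.', erase(a[2]), erase(a[3]))
    if tag == 'AALTS': return ('+', tuple(erase(x) for x in a[2]))

def distinct(lst):
    seen, res = set(), []
    for a in lst:
        if erase(a) not in seen:
            seen.add(erase(a))
            res.append(a)
    return res

def simp(a):
    tag = a[0]
    if tag == 'ASEQ':
        bs, a1, a2 = a[1], simp(a[2]), simp(a[3])
        if a1[0] == 'AZERO' or a2[0] == 'AZERO':
            return ('AZERO',)
        if a1[0] == 'AONE':
            return fuse(bs + a1[1], a2)   # ONE's bits go to the front of a2
        if a2[0] == 'AONE':
            return fuse(bs, a1)
        return ('ASEQ', bs, a1, a2)
    if tag == 'AALTS':
        as_ = distinct(flatten([simp(x) for x in a[2]]))
        if not as_: return ('AZERO',)
        if len(as_) == 1: return fuse(a[1], as_[0])
        return ('AALTS', a[1], as_)
    return a

# (ONE_0 + ZERO) . (a_0 + b_1)  simplifies to  (a_0 + b_1) with 0 fused
seq_ex = ('ASEQ', [],
          ('AALTS', [], [('AONE', [0]), ('AZERO',)]),
          ('AALTS', [], [('ACHAR', [0], 'a'), ('ACHAR', [1], 'b')]))
print(simp(seq_ex))
```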

\noindent
If we call $\simp$ on $\rup\backslash a$, just as $\backslash_{simp}$
requires, then we go through the third clause of
the sequence case: $\textit{case} \; (\ONE, a_2') \Rightarrow \textit{fuse} \; bs \; a_2'$.
The $\ZERO$ of $(_0\ONE + \ZERO)$ is simplified away and
$_0\ONE$ is merged into $_0a + _1a^*$ by simply
putting its bits ($_0$) to the front of the second component:
${\bf_0}(_0a + _1a^*)$.
After a second derivative operation,
namely, $(_0(_0a + _1a^*))\backslash a$, we get
$
_0(_0 \ONE + _1(_1\ONE \cdot a^*))
$, and this simplifies to $_0(_0 \ONE + _{11} a^*)$
by the third clause of the alternative case:
$\textit{case} \; as' \Rightarrow \textit{ALTS}\;bs\;as'$.
The outermost bit $_0$ remains unchanged and stays with
the outermost regular expression. However, things are a bit
different when it comes to $\simp(\rup\backslash \, s)$, because
without simplification, the first term of the sequence
$\rup\backslash a=(_0\ONE + \ZERO)(_0a + _1a^*)$
is not merged into the second component
and is nullable.
Therefore $((_0\ONE + \ZERO)(_0a + _1a^*))\backslash a$ splits into
$([(\ZERO + \ZERO)\cdot(_0a + _1a^*)] + _0( _0\ONE + _1[_1\ONE \cdot a^*]))$.
After these two successive derivatives without simplification,
we apply $\simp$ to this regular expression, which goes through
the alternative clause, and each component of
$([(\ZERO + \ZERO)\cdot(_0a + _1a^*)] + _0( _0\ONE + _1[_1\ONE \cdot a^*]))$
is simplified, giving us the list $[\ZERO,\; _0(_0\ONE + _{11}a^*)]$. This
list is then flattened: for
$[(\ZERO + \ZERO)\cdot(_0a + _1a^*)]$ it is simplified into $\ZERO$
and then thrown away by $\textit{flatten}$; $ _0( _0\ONE + _1[_1\ONE \cdot a^*])$
becomes $ _{00}\ONE + _{011}a^*$, because $\textit{flatten}$ opens up the alternative
$\ONE + a^*$ and fuses the front bit(s) $_0$ to the front of $_0\ONE$ and $_{11}a^*$,
giving $_{00}$ and $_{011}$.
%CONSTRUCTION SITE
%HERE CONSTRUCTION SITE
The \textit{vsimp} function, defined as follows,
tries to simplify the value in lockstep with
the regular expression:\\

The problem here is that
we used $\retrieve$ for the key induction:
$\textit{decode}\, (\retrieve\, (r\backslash (s @ [c]))\; v)\; r $
$\textit{decode}\, (\retrieve\, (r\backslash s)\; (\inj\, (r\backslash s)\; c\; v))\; r$
Here, $\textit{decode}$ recovers a value that corresponds to a match (possibly partial)
from bits, and the bits are extracted by $\retrieve$,
and the key value $v$ that guides $\retrieve$ is
$\mkeps\, (r\backslash s)$, $\inj \,r\, c\, (\mkeps\, (r\backslash s))$, $\inj\, (\inj\, (v))$, \ldots
The problem is that we
need \textit{vsimp} to make a value that is suitable for decoding:
$\textit{Some}(\flex\; r\; \textit{id}\; (s@[c])\; v) = \textit{Some}(\flex\; r\; \textit{id}\; s\; (\inj\, (r\backslash s)\, c\, v))$
Another way that Christian came up with that might circumvent the
problem of finding a suitable value is by not stating the \textit{vsimp}
function, but by including all possible values that a regex is able to produce in a set,
and proving that both $r$ and $sr$ are able to produce the bits that correspond to the POSIX value
produced by feeding the same initial regular expression $r$ and string $s$ to the
two functions $ders$ and $ders\_simp$.

If we define the equivalence relation $\sim_{m\epsilon}$ between two regular expressions
$r_1$ and $r_2$ as follows:
$r_1 \sim_{m\epsilon} r_2 \iff \bmkeps(r_1)= \bmkeps(r_2)$
(in other words, $r_1$ and $r_2$ produce the same output under the function $\bmkeps$),
then the first goal
might be restated as
$(r^\uparrow)\backslash_{simp}\, s \sim_{m\epsilon} (r^\uparrow)\backslash s$.
I tried to establish an equivalence relation between the regular expressions
like $dddr$, $dddsr$, \ldots,
but right now I am only able to establish $dsr$ and $dr$, using structural induction on $r$.
Those involving multiple derivative operations are harder to prove.
Two attempts have been made:
(1) induction on the number of derivative operations (or, in other words, the length of the string $s$):
the inductive hypothesis was initially specified as
``for an arbitrary regular expression $r$,
for all strings $s$ in the language of $r$ whose length does not exceed
the number $n$, $ders\; s\; r \sim_{m\epsilon} ders\_simp\; s\; r$''
and the proof goal may be stated as
``for an arbitrary regular expression $r$,
for all strings $s$ in the language of $r$ whose length does not exceed
the number $n+1$, $ders\; s\; r \sim_{m\epsilon} ders\_simp\; s\; r$''.
The problem here is that although we can easily break down
a string $s$ of length $n+1$ into $s_1@[c]$, it is not that easy
to use the induction hypothesis as a stepping stone to prove anything, because $s_1$ may
well not be in the language $L(r)$. This inhibits us from obtaining the fact that
$ders\; s_1\; r \sim_{m\epsilon} ders\_simp\; s_1\; r$.
Further exploration is needed to amend this hypothesis so that it includes the
situation when $s_1$ is not nullable.
For example, what information (bits?
values?) can be extracted
from the regular expression $ders(s_1,r)$, so that we can compute or predict the possible
result of $\bmkeps$ after another derivative operation? What function $f$ can be used to
carry out the task? The possible way of exploration can be
more directly perceived through the sketch below:
find a function $f$
such that
$f(\bders\; s_1\; r) = re_1$,
$f(bderss\; s_1\; r) = re_2$,
$\bmkeps(\bders\; s\; r) = g(re_1,c)$,
$\bmkeps(bderssimp\; s\; r) = g(re_2,c)$
and $g(re_1,c) = g(re_2,c)$.
The inductive hypothesis would be
``for all strings $s_1$ of length $\leq n$,
$f(\bders\; s_1\; r) = re_1$ and
$f(bderss\; s_1\; r) = re_2$''.
Proving this would be a lemma for the main proof;
the main proof would be
``$\bmkeps(\bders\; s\; r) = g(re_1,c)$ and
$\bmkeps(bderssimp\; s\; r) = g(re_2,c)$
for $s = s_1@[c]$'',
and $f$ needs to be a recursive property for the lemma to be proved:
it needs to store not only the ``after one char nullable'' info,
but also the ``after two chars nullable'' info,
and so on, so that it is able to predict what $f$ will compute after a derivative operation;
in other words, it needs to be ``infinitely recursive''.\\
To prove the lemma, in other words, to get\\
``for all strings $s_1$ of length $\leq n+1$,
$f(\bders\; s_1\; r) = re_3$ and
$f(bderss\; s_1\; r) = re_4$''\\
from\\
``for all strings $s_1$ of length $\leq n$,
$f(\bders\; s_1\; r) = re_1$ and
$f(bderss\; s_1\; r) = re_2$'',\\
it might be best to construct an auxiliary function $h$ such that\\
$h(re_1, c) = re_3$\\
$h(re_2, c) = re_4$\\
and $re_3 = f(bder\; c\; (\bders\; s_1\; r))$,\\
$re_4 = f(\simp(bder\; c\; (bderss\; s_1\; r)))$.
The key point here is that we are not only interested in what $\bders\; s\; r$ produces under
$\bmkeps$, but also in how it performs after one derivative operation and then $\bmkeps$, after two
derivative operations, and so on. In essence, we are preserving the regular expression
itself under the function $f$, in a less compact way than the regular expression itself: we are
not just recording but also interpreting what the regular expression matches.
In other words, we need to prove properties of $bderss\; s\; r$ beyond the $\bmkeps$ result,
i.e., not just for the nullable ones, but also for those containing remaining characters.\\
(2) We observed the fact that
$\erase\; sdddddr = \erase\; sdsdsdsr$,
that is to say, despite the bits being moved around in the regular expression
(the difference in bits), the structure of the (unannotated) regular expression
after one simplification is exactly the same as after the
same sequence of derivative operations,
regardless of whether we did simplification
along the way.
However, without $\erase$ the above equality does not hold:
for the regular expression
$(a+b)(a+a^*)$,
if we take the derivative with respect to the string $aa$,
we get
%TODO
$sdddddr$ does not always equal $sdsdsdsr$.\\
For example,

This equivalence class method might still have the potential of proving this,
but not yet;
in parallel I tried another method of using $\retrieve$.\\

\noindent\rule[0.5ex]{\linewidth}{1pt}

This PhD-project is about regular expression matching and
lexing. Given the maturity of this topic, the reader might wonder:
Surely, regular expressions must have already been studied to death?
What could possibly be \emph{not} known in this area? And surely all
implemented algorithms for regular expression matching are blindingly
fast?

Unfortunately these preconceptions are not supported by evidence: Take
for example the regular expression $(a^*)^*\,b$ and ask whether
strings of the form $aa..a$ match this regular
expression. Obviously this is not the case---the expected $b$ in the last
position is missing. One would expect that modern regular expression
matching engines can find this out very quickly. Alas, if one tries
this example in JavaScript, Python or Java 8 with strings like 28
$a$'s, one discovers that this decision takes around 30 seconds and
takes considerably longer when adding a few more $a$'s, as the graphs
below show:

\begin{center}
\begin{tabular}{@{}c@{\hspace{0mm}}c@{\hspace{0mm}}c@{}}
\begin{tikzpicture}
\begin{axis}[
xlabel={$n$},
x label style={at={(1.05,-0.05)}},
ylabel={time in secs},
enlargelimits=false,
xtick={0,5,...,30},
xmax=33,
ymax=35,
ytick={0,5,...,30},
scaled ticks=false,
axis lines=left,
width=5cm,
height=4cm,
legend entries={JavaScript},
legend pos=north west,
legend cell align=left]
\addplot[red,mark=*, mark options={fill=white}] table {re-js.data};
\end{axis}
\end{tikzpicture}
&
\begin{tikzpicture}
\begin{axis}[
xlabel={$n$},
x label style={at={(1.05,-0.05)}},
%ylabel={time in secs},
enlargelimits=false,
xtick={0,5,...,30},
xmax=33,
ymax=35,
ytick={0,5,...,30},
scaled ticks=false,
axis lines=left,
width=5cm,
height=4cm,
legend entries={Python},
legend pos=north west,
legend cell align=left]
\addplot[blue,mark=*, mark options={fill=white}] table {re-python2.data};
\end{axis}
\end{tikzpicture}
&
\begin{tikzpicture}
\begin{axis}[
xlabel={$n$},
x label style={at={(1.05,-0.05)}},
%ylabel={time in secs},
enlargelimits=false,
xtick={0,5,...,30},
xmax=33,
ymax=35,
ytick={0,5,...,30},
scaled ticks=false,
axis lines=left,
width=5cm,
height=4cm,
legend entries={Java 8},
legend pos=north west,
legend cell align=left]
\addplot[cyan,mark=*, mark options={fill=white}] table {re-java.data};
\end{axis}
\end{tikzpicture}\\
\multicolumn{3}{c}{Graphs: Runtime for matching $(a^*)^*\,b$ with strings
of the form $\underbrace{aa..a}_{n}$.}
\end{tabular}
\end{center}

\noindent These are clearly abysmal and possibly surprising results. One
would expect these systems to do much better than that---after all,
given a DFA and a string, deciding whether the string is matched by this
DFA should be linear in the size of the regular expression and
the string?
+ − 687
+ − 688
Admittedly, the regular expression $(a^*)^*\,b$ is carefully chosen to
+ − 689
exhibit this super-linear behaviour. But unfortunately, such regular
+ − 690
expressions are not just a few outliers. They are actually
+ − 691
frequent enough to have a separate name created for
+ − 692
them---\emph{evil regular expressions}. In empiric work, Davis et al
+ − 693
report that they have found thousands of such evil regular expressions
+ − 694
in the JavaScript and Python ecosystems \cite{Davis18}. Static analysis
+ − 695
approach that is both sound and complete exists\cite{17Bir}, but the running
+ − 696
time on certain examples in the RegExLib and Snort regular expressions
+ − 697
libraries is unacceptable. Therefore the problem of efficiency still remains.
+ − 698
+ − 699
This superlinear blowup in matching algorithms sometimes causes
+ − 700
considerable grief in real life: for example on 20 July 2016 one evil
+ − 701
regular expression brought the webpage
+ − 702
\href{http://stackexchange.com}{Stack Exchange} to its
+ − 703
knees.\footnote{\url{https://stackstatus.net/post/147710624694/outage-postmortem-july-20-2016}}
+ − 704
In this instance, a regular expression intended to just trim white
+ − 705
spaces from the beginning and the end of a line actually consumed
+ − 706
massive amounts of CPU-resources---causing web servers to grind to a
+ − 707
halt. This happened when a post with 20,000 white spaces was submitted,
+ − 708
but importantly the white spaces were neither at the beginning nor at
+ − 709
the end. As a result, the regular expression matching engine needed to
+ − 710
backtrack over many choices. In this example, the time needed to process
+ − 711
the string was $O(n^2)$ with respect to the string length. This
+ − 712
quadratic overhead was enough for the homepage of Stack Exchange to
+ − 713
respond so slowly that the load balancer assumed there must be some
+ − 714
attack and therefore stopped the servers from responding to any
+ − 715
requests. This made the whole site become unavailable. Another very
+ − 716
recent example is a global outage of all Cloudflare servers on 2 July
+ − 717
2019. A poorly written regular expression exhibited exponential
+ − 718
behaviour and exhausted CPUs that serve HTTP traffic. Although the
+ − 719
outage had several causes, at the heart was a regular expression that
+ − 720
was used to monitor network
+ − 721
traffic.\footnote{\url{https://blog.cloudflare.com/details-of-the-cloudflare-outage-on-july-2-2019/}}
+ − 722
+ − 723
The underlying problem is that many ``real life'' regular expression
+ − 724
matching engines do not use DFAs for matching. This is because they
+ − 725
support regular expressions that are not covered by the classical
+ − 726
automata theory, and in this more general setting there are quite a few
+ − 727
research questions still unanswered and fast algorithms still need to be
+ − 728
developed (for example how to treat efficiently bounded repetitions, negation and
+ − 729
back-references).
+ − 730
%question: dfa can have exponential states. isn't this the actual reason why they do not use dfas?
+ − 731
%how do they avoid dfas exponential states if they use them for fast matching?
+ − 732

There is also another under-researched problem to do with regular
expressions and lexing, i.e.~the process of breaking up strings into
sequences of tokens according to some regular expressions. In this
setting one is not just interested in whether or not a regular
expression matches a string, but also in \emph{how}. Consider for
example a regular expression $r_{key}$ for recognising keywords such as
\textit{if}, \textit{then} and so on; and a regular expression $r_{id}$
for recognising identifiers (say, a single character followed by
characters or numbers). One can then form the compound regular
expression $(r_{key} + r_{id})^*$ and use it to tokenise strings. But
then how should the string \textit{iffoo} be tokenised? It could be
tokenised as a keyword followed by an identifier, or the entire string
as a single identifier. Similarly, how should the string \textit{if} be
tokenised? Both regular expressions, $r_{key}$ and $r_{id}$, would
``fire''---so is it an identifier or a keyword? While in applications
there is a well-known strategy to decide these questions, called POSIX
matching, only relatively recently have precise definitions of what
POSIX matching actually means been formalised
\cite{AusafDyckhoffUrban2016,OkuiSuzuki2010,Vansummeren2006}. Such a
definition has also been given by Sulzmann and Lu \cite{Sulzmann2014},
but the corresponding correctness proof turned out to be faulty
\cite{AusafDyckhoffUrban2016}. Roughly, POSIX matching means matching
the longest initial substring. In the case of a tie, the initial
sub-match is chosen according to some priorities attached to the regular
expressions (e.g.~keywords have a higher priority than identifiers).
This sounds rather simple, but according to Grathwohl et al.\ \cite[Page
36]{CrashCourse2014} this is not the case. They wrote:

\begin{quote}
\it{}``The POSIX strategy is more complicated than the greedy because of
the dependence on information about the length of matched strings in the
various subexpressions.''
\end{quote}

\noindent
This is also supported by evidence collected by Kuklewicz
\cite{Kuklewicz} who noticed that a number of POSIX regular expression
matchers calculate incorrect results.

Our focus in this project is on an algorithm introduced by Sulzmann and
Lu in 2014 for regular expression matching according to the POSIX
strategy \cite{Sulzmann2014}. Their algorithm is based on an older
algorithm by Brzozowski from 1964 where he introduced the notion of
derivatives of regular expressions~\cite{Brzozowski1964}. We shall
briefly explain this algorithm next.

\section{The Algorithm by Brzozowski based on Derivatives of Regular
Expressions}

Suppose (basic) regular expressions are given by the following grammar:
\[ r ::= \ZERO \mid \ONE
  \mid c
  \mid r_1 \cdot r_2
  \mid r_1 + r_2
  \mid r^*
\]

\noindent
The intended meaning of the constructors is as follows: $\ZERO$
cannot match any string, $\ONE$ can match the empty string, the
character regular expression $c$ can match the character $c$, and so
on.

The ingenious contribution by Brzozowski is the notion of
\emph{derivatives} of regular expressions. The idea behind this
notion is as follows: suppose a regular expression $r$ can match a
string of the form $c\!::\! s$ (that is a list of characters starting
with $c$), what does the regular expression look like that can match
just $s$? Brzozowski gave a neat answer to this question. He started
with the definition of $nullable$:
\begin{center}
\begin{tabular}{lcl}
$\nullable(\ZERO)$ & $\dn$ & $\mathit{false}$ \\
$\nullable(\ONE)$ & $\dn$ & $\mathit{true}$ \\
$\nullable(c)$ & $\dn$ & $\mathit{false}$ \\
$\nullable(r_1 + r_2)$ & $\dn$ & $\nullable(r_1) \vee \nullable(r_2)$ \\
$\nullable(r_1\cdot r_2)$ & $\dn$ & $\nullable(r_1) \wedge \nullable(r_2)$ \\
$\nullable(r^*)$ & $\dn$ & $\mathit{true}$ \\
\end{tabular}
\end{center}
This function simply tests whether the empty string is in $L(r)$.
He then defined
the following operation on regular expressions, written
$r\backslash c$ (the derivative of $r$ w.r.t.~the character $c$):

\begin{center}
\begin{tabular}{lcl}
$\ZERO \backslash c$ & $\dn$ & $\ZERO$\\
$\ONE \backslash c$ & $\dn$ & $\ZERO$\\
$d \backslash c$ & $\dn$ &
$\mathit{if} \;c = d\;\mathit{then}\;\ONE\;\mathit{else}\;\ZERO$\\
$(r_1 + r_2)\backslash c$ & $\dn$ & $r_1 \backslash c \,+\, r_2 \backslash c$\\
$(r_1 \cdot r_2)\backslash c$ & $\dn$ & $\mathit{if} \, nullable(r_1)$\\
& & $\mathit{then}\;(r_1\backslash c) \cdot r_2 \,+\, r_2\backslash c$\\
& & $\mathit{else}\;(r_1\backslash c) \cdot r_2$\\
$(r^*)\backslash c$ & $\dn$ & $(r\backslash c) \cdot r^*$\\
\end{tabular}
\end{center}

%Assuming the classic notion of a
%\emph{language} of a regular expression, written $L(\_)$, t

\noindent
The main property of the derivative operation is that

\begin{center}
$c\!::\!s \in L(r)$ holds
if and only if $s \in L(r\backslash c)$.
\end{center}

\noindent
For us the main advantage is that derivatives can be
straightforwardly implemented in any functional programming language,
and are easily definable and reasoned about in theorem provers---the
definitions just consist of inductive datatypes and simple recursive
functions. Moreover, the notion of derivatives can be easily
generalised to cover extended regular expression constructors such as
the not-regular expression, written $\neg\,r$, or bounded repetitions
(for example $r^{\{n\}}$ and $r^{\{n..m\}}$), which cannot be so
straightforwardly realised within the classic automata approach.
For the moment, however, we focus only on the usual basic regular expressions.
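To illustrate how directly these definitions transfer to code, the following small Python sketch implements $\nullable$ and the derivative operation. The tagged-tuple encoding and all names in it are our own illustrative choice, not part of any formalisation discussed in this report.

```python
# A sketch of Brzozowski's definitions in Python (illustrative encoding:
# regular expressions are tagged tuples of our own choosing).
ZERO = ("ZERO",)                      # matches no string
ONE  = ("ONE",)                       # matches only the empty string
def CHAR(c):     return ("CHAR", c)   # matches the single character c
def ALT(r1, r2): return ("ALT", r1, r2)
def SEQ(r1, r2): return ("SEQ", r1, r2)
def STAR(r):     return ("STAR", r)

def nullable(r):
    """Does r match the empty string?"""
    tag = r[0]
    if tag == "ZERO": return False
    if tag == "ONE":  return True
    if tag == "CHAR": return False
    if tag == "ALT":  return nullable(r[1]) or nullable(r[2])
    if tag == "SEQ":  return nullable(r[1]) and nullable(r[2])
    if tag == "STAR": return True

def der(c, r):
    """The derivative of r w.r.t. the character c."""
    tag = r[0]
    if tag in ("ZERO", "ONE"):
        return ZERO
    if tag == "CHAR":
        return ONE if r[1] == c else ZERO
    if tag == "ALT":
        return ALT(der(c, r[1]), der(c, r[2]))
    if tag == "SEQ":
        if nullable(r[1]):   # the split clause: c may go into r1 or r2
            return ALT(SEQ(der(c, r[1]), r[2]), der(c, r[2]))
        return SEQ(der(c, r[1]), r[2])
    if tag == "STAR":
        return SEQ(der(c, r[1]), r)
```

For example, `der('a', SEQ(CHAR('a'), CHAR('b')))` yields the representation of $\ONE \cdot b$, whose derivative by $b$ is in turn nullable.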

Now if we want to find out whether a string $s$ matches with a regular
expression $r$, we can build the derivatives of $r$ w.r.t.\ (in succession)
all the characters of the string $s$, and finally test whether the
resulting regular expression can match the empty string. If it can,
then $r$ matches $s$; otherwise it does not. To implement this idea
we can generalise the derivative operation to strings like this:

\begin{center}
\begin{tabular}{lcl}
$r \backslash (c\!::\!s) $ & $\dn$ & $(r \backslash c) \backslash s$ \\
$r \backslash [\,] $ & $\dn$ & $r$
\end{tabular}
\end{center}

\noindent
and then define the regular-expression matching algorithm as:
\[
match\;s\;r \;\dn\; nullable(r\backslash s)
\]

\noindent
This algorithm looks graphically as follows:
\begin{equation}\label{graph:*}
\begin{tikzcd}
r_0 \arrow[r, "\backslash c_0"] & r_1 \arrow[r, "\backslash c_1"] & r_2 \arrow[r, dashed] & r_n \arrow[r,"\textit{nullable}?"] & \;\textrm{YES}/\textrm{NO}
\end{tikzcd}
\end{equation}

\noindent
where we start with a regular expression $r_0$, build successive
derivatives until we exhaust the string and then use \textit{nullable}
to test whether the result can match the empty string. It can be
relatively easily shown that this matcher is correct (that is, given
an $s = c_0...c_{n-1}$ and an $r_0$, it generates YES if and only if $s \in L(r_0)$).
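Continuing the illustrative Python encoding from the earlier sketch (our own tagged-tuple representation, with compact re-statements of $\nullable$ and the derivative), the string-derivative and the matcher are only a few extra lines:

```python
# Derivative-based matcher (illustrative sketch; same tagged-tuple
# encoding as before: ZERO, ONE, CHAR, ALT, SEQ, STAR).
ZERO, ONE = ("ZERO",), ("ONE",)
def CHAR(c):     return ("CHAR", c)
def ALT(r1, r2): return ("ALT", r1, r2)
def SEQ(r1, r2): return ("SEQ", r1, r2)
def STAR(r):     return ("STAR", r)

def nullable(r):
    t = r[0]
    return (t == "ONE" or t == "STAR"
            or (t == "ALT" and (nullable(r[1]) or nullable(r[2])))
            or (t == "SEQ" and nullable(r[1]) and nullable(r[2])))

def der(c, r):
    t = r[0]
    if t in ("ZERO", "ONE"): return ZERO
    if t == "CHAR":          return ONE if r[1] == c else ZERO
    if t == "ALT":           return ALT(der(c, r[1]), der(c, r[2]))
    if t == "SEQ":
        d = SEQ(der(c, r[1]), r[2])
        return ALT(d, der(c, r[2])) if nullable(r[1]) else d
    if t == "STAR":          return SEQ(der(c, r[1]), r)

def ders(s, r):
    """Derivative w.r.t. a whole string: r\\(c::s) = (r\\c)\\s."""
    for c in s:
        r = der(c, r)
    return r

def matches(s, r):
    """match s r = nullable (r\\s)"""
    return nullable(ders(s, r))
```

With this, `matches("aaab", SEQ(STAR(STAR(CHAR('a'))), CHAR('b')))` correctly decides membership for the evil expression $(a^*)^*\,b$, although without simplification the intermediate derivatives grow quickly.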

\section{Values and the Algorithm by Sulzmann and Lu}

One limitation of Brzozowski's algorithm is that it only produces a
YES/NO answer for whether a string is being matched by a regular
expression. Sulzmann and Lu~\cite{Sulzmann2014} extended this algorithm
to allow generation of an actual matching, called a \emph{value} or
sometimes also \emph{lexical value}. These values and regular
expressions correspond to each other as illustrated in the following
table:


\begin{center}
\begin{tabular}{c@{\hspace{20mm}}c}
\begin{tabular}{@{}rrl@{}}
\multicolumn{3}{@{}l}{\textbf{Regular Expressions}}\medskip\\
$r$ & $::=$ & $\ZERO$\\
& $\mid$ & $\ONE$ \\
& $\mid$ & $c$ \\
& $\mid$ & $r_1 \cdot r_2$\\
& $\mid$ & $r_1 + r_2$ \\
\\
& $\mid$ & $r^*$ \\
\end{tabular}
&
\begin{tabular}{@{\hspace{0mm}}rrl@{}}
\multicolumn{3}{@{}l}{\textbf{Values}}\medskip\\
$v$ & $::=$ & \\
& & $\Empty$ \\
& $\mid$ & $\Char(c)$ \\
& $\mid$ & $\Seq\,v_1\, v_2$\\
& $\mid$ & $\Left(v)$ \\
& $\mid$ & $\Right(v)$ \\
& $\mid$ & $\Stars\,[v_1,\ldots\,v_n]$ \\
\end{tabular}
\end{tabular}
\end{center}

\noindent
No value corresponds to $\ZERO$; $\Empty$ corresponds to $\ONE$;
$\Char$ to the character regular expression; $\Seq$ to the sequence
regular expression and so on. The idea of values is to encode a kind of
lexical value for how the sub-parts of a regular expression match the
sub-parts of a string. To see this, suppose a \emph{flatten} operation,
written $|v|$, for values. We can use this function to extract the
underlying string of a value $v$. For example, $|\mathit{Seq} \,
(\textit{Char x}) \, (\textit{Char y})|$ is the string $xy$. Using
flatten, we can describe how values encode lexical values: $\Seq\,v_1\,
v_2$ encodes a tree with two children nodes that tells how the string
$|v_1| @ |v_2|$ matches the regex $r_1 \cdot r_2$ whereby $r_1$ matches
the substring $|v_1|$ and, respectively, $r_2$ matches the substring
$|v_2|$. Exactly how these two are matched is contained in the children
nodes $v_1$ and $v_2$ of parent $\textit{Seq}$.

To give a concrete example of how values work, consider the string $xy$
and the regular expression $(x + (y + xy))^*$. We can view this regular
expression as a tree and if the string $xy$ is matched by two Star
``iterations'', then the $x$ is matched by the left-most alternative in
this tree and the $y$ by the left alternative inside the right branch. This
suggests recording this matching as

\begin{center}
$\Stars\,[\Left\,(\Char\,x), \Right(\Left(\Char\,y))]$
\end{center}

\noindent
where $\Stars \; [\ldots]$ records all the
iterations; and $\Left$, respectively $\Right$, which
alternative is used. The value for
matching $xy$ in a single ``iteration'', i.e.~the POSIX value,
would look as follows

\begin{center}
$\Stars\,[\Right(\Right(\Seq\,(\Char\,x)\,(\Char\,y)))]$
\end{center}

\noindent
where $\Stars$ has only a single-element list for the single iteration;
the two $\Right$s record that the alternative $xy$ is chosen and $\Seq$
indicates that $xy$ is matched by a sequence regular
expression.

The contribution of Sulzmann and Lu is an extension of Brzozowski's
algorithm by a second phase (the first phase being building successive
derivatives---see \eqref{graph:*}). In this second phase, a POSIX value
is generated in case the regular expression matches the string.
Pictorially, the Sulzmann and Lu algorithm is as follows:

\begin{ceqn}
\begin{equation}\label{graph:2}
\begin{tikzcd}
r_0 \arrow[r, "\backslash c_0"] \arrow[d] & r_1 \arrow[r, "\backslash c_1"] \arrow[d] & r_2 \arrow[r, dashed] \arrow[d] & r_n \arrow[d, "mkeps" description] \\
v_0 & v_1 \arrow[l,"inj_{r_0} c_0"] & v_2 \arrow[l, "inj_{r_1} c_1"] & v_n \arrow[l, dashed]
\end{tikzcd}
\end{equation}
\end{ceqn}

\noindent
For convenience, we shall employ the following notations: the regular
expression we start with is $r_0$, and the given string $s$ is composed
of characters $c_0 c_1 \ldots c_{n-1}$. In the first phase from the
left to right, we build the derivatives $r_1$, $r_2$, \ldots according
to the characters $c_0$, $c_1$ until we exhaust the string and obtain
the derivative $r_n$. We test whether this derivative is
$\textit{nullable}$ or not. If not, we know the string does not match
$r$ and no value needs to be generated. If yes, we start building the
values incrementally by \emph{injecting} back the characters into the
earlier values $v_n, \ldots, v_0$. This is the second phase of the
algorithm from the right to left. For the first value $v_n$, we call the
function $\textit{mkeps}$, which builds the lexical value
for how the empty string has been matched by the (nullable) regular
expression $r_n$. This function is defined as

\begin{center}
\begin{tabular}{lcl}
$\mkeps(\ONE)$ & $\dn$ & $\Empty$ \\
$\mkeps(r_{1}+r_{2})$ & $\dn$
& \textit{if} $\nullable(r_{1})$\\
& & \textit{then} $\Left(\mkeps(r_{1}))$\\
& & \textit{else} $\Right(\mkeps(r_{2}))$\\
$\mkeps(r_1\cdot r_2)$ & $\dn$ & $\Seq\,(\mkeps\,r_1)\,(\mkeps\,r_2)$\\
$\mkeps(r^*)$ & $\dn$ & $\Stars\,[]$
\end{tabular}
\end{center}


\noindent There are no cases for $\ZERO$ and $c$, since
these regular expressions cannot match the empty string. Note
also that in case of alternatives we give preference to the
regular expression on the left-hand side. This will become
important later on for the value that is calculated.
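In the illustrative Python encoding used in the earlier sketches (values are again tagged tuples, mirroring $\Empty$, $\Char$, $\Seq$, $\Left$, $\Right$ and $\Stars$; the encoding is our own choice), $\mkeps$ can be sketched as follows:

```python
# Sketch of mkeps: build the value for how a nullable regular
# expression matches the empty string (tagged-tuple encoding as before).
ZERO, ONE = ("ZERO",), ("ONE",)
def CHAR(c):     return ("CHAR", c)
def ALT(r1, r2): return ("ALT", r1, r2)
def SEQ(r1, r2): return ("SEQ", r1, r2)
def STAR(r):     return ("STAR", r)

def nullable(r):
    t = r[0]
    return (t == "ONE" or t == "STAR"
            or (t == "ALT" and (nullable(r[1]) or nullable(r[2])))
            or (t == "SEQ" and nullable(r[1]) and nullable(r[2])))

def mkeps(r):
    """Only defined on nullable regular expressions."""
    t = r[0]
    if t == "ONE":
        return ("Empty",)
    if t == "ALT":                  # preference for the left alternative
        if nullable(r[1]):
            return ("Left", mkeps(r[1]))
        return ("Right", mkeps(r[2]))
    if t == "SEQ":
        return ("Seq", mkeps(r[1]), mkeps(r[2]))
    if t == "STAR":                 # zero iterations
        return ("Stars", [])
```

Note the left preference in the alternative case: for $(a + \ONE) + \ONE$ the result is `("Left", ("Right", ("Empty",)))`, since the first nullable branch found from the left is the inner $\ONE$.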

After the $\mkeps$-call, we inject back the characters one by one in order to build
the lexical value $v_i$ for how the regex $r_i$ matches the string $s_i$
($s_i = c_i \ldots c_{n-1}$) from the previous lexical value $v_{i+1}$.
After injecting back $n$ characters, we get the lexical value for how $r_0$
matches $s$. For this Sulzmann and Lu defined a function that reverses
the ``chopping off'' of characters during the derivative phase. The
corresponding function is called \emph{injection}, written
$\textit{inj}$; it takes three arguments: the first one is a regular
expression ${r_{i-1}}$, before the character is chopped off; the second
is the character ${c_{i-1}}$ we want to inject; and the third argument
is the value ${v_i}$ into which one wants to inject the character (it
corresponds to the regular expression after the character has been
chopped off). The result of this function is a new value. The
definition of $\textit{inj}$ is as follows:

\begin{center}
\begin{tabular}{l@{\hspace{1mm}}c@{\hspace{1mm}}l}
$\textit{inj}\,(c)\,c\,\Empty$ & $\dn$ & $\Char\,c$\\
$\textit{inj}\,(r_1 + r_2)\,c\,\Left(v)$ & $\dn$ & $\Left(\textit{inj}\,r_1\,c\,v)$\\
$\textit{inj}\,(r_1 + r_2)\,c\,\Right(v)$ & $\dn$ & $\Right(\textit{inj}\,r_2\,c\,v)$\\
$\textit{inj}\,(r_1 \cdot r_2)\,c\,\Seq(v_1,v_2)$ & $\dn$ & $\Seq(\textit{inj}\,r_1\,c\,v_1,v_2)$\\
$\textit{inj}\,(r_1 \cdot r_2)\,c\,\Left(\Seq(v_1,v_2))$ & $\dn$ & $\Seq(\textit{inj}\,r_1\,c\,v_1,v_2)$\\
$\textit{inj}\,(r_1 \cdot r_2)\,c\,\Right(v)$ & $\dn$ & $\Seq(\mkeps(r_1),\textit{inj}\,r_2\,c\,v)$\\
$\textit{inj}\,(r^*)\,c\,\Seq(v,\Stars\,vs)$ & $\dn$ & $\Stars((\textit{inj}\,r\,c\,v)\,::\,vs)$\\
\end{tabular}
\end{center}

\noindent This definition is by recursion on the ``shape'' of regular
expressions and values. To understand this definition better, consider
the situation when we build the derivative of the regular expression $r_{i-1}$.
For this we chop off a character from $r_{i-1}$ to form $r_i$. This leaves a
``hole'' in $r_i$ and its corresponding value $v_i$.
To calculate $v_{i-1}$, we need to
locate where that hole is and fill it.
We can find this location by
comparing $r_{i-1}$ and $v_i$. For instance, if $r_{i-1}$ is of shape
$r_a \cdot r_b$, and $v_i$ is of shape $\Left(\Seq(v_1,v_2))$, we know immediately that
%
\[ (r_a \cdot r_b)\backslash c = (r_a\backslash c) \cdot r_b \,+\, r_b\backslash c,\]

\noindent
because if $r_a$ were not nullable, then
\[ (r_a \cdot r_b)\backslash c = (r_a\backslash c) \cdot r_b\]

\noindent
and the value $v_i$ would be of shape $\Seq(\ldots)$, contradicting the fact that
$v_i$ is actually of shape $\Left(\ldots)$. Furthermore, since $v_i$ is of shape
$\Left(\ldots)$ instead of $\Right(\ldots)$, we know that the left
branch of \[ (r_a \cdot r_b)\backslash c =
\bold{\underline{ (r_a\backslash c) \cdot r_b} }\,+\, r_b\backslash c\] (underlined)
is taken instead of the right one. This means $c$ is chopped off
from $r_a$ rather than $r_b$.
We have therefore found out
that the hole will be in $r_a$. So we recursively call $\inj\,
r_a\,c\,v_a$ to fill that hole in $v_a$. After injection, the value
$v_{i-1}$ for $r_{i-1} = r_a \cdot r_b$ should be $\Seq\,(\inj\,r_a\,c\,v_a)\,v_b$.
Other clauses can be understood in a similar way.
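The clauses above can be transcribed almost one-to-one into the illustrative Python encoding of the earlier sketches (our own tuple representation; the value constructors mirror those in the table of values):

```python
# Sketch of inj: fill the "hole" left by a derivative with the
# chopped-off character (tagged-tuple encoding as before).
def nullable(r):
    t = r[0]
    return (t == "ONE" or t == "STAR"
            or (t == "ALT" and (nullable(r[1]) or nullable(r[2])))
            or (t == "SEQ" and nullable(r[1]) and nullable(r[2])))

def mkeps(r):
    t = r[0]
    if t == "ONE":  return ("Empty",)
    if t == "ALT":
        return ("Left", mkeps(r[1])) if nullable(r[1]) else ("Right", mkeps(r[2]))
    if t == "SEQ":  return ("Seq", mkeps(r[1]), mkeps(r[2]))
    if t == "STAR": return ("Stars", [])

def inj(r, c, v):
    t = r[0]
    if t == "CHAR":                       # inj c c Empty = Char c
        return ("Char", c)
    if t == "ALT":
        if v[0] == "Left":
            return ("Left", inj(r[1], c, v[1]))
        return ("Right", inj(r[2], c, v[1]))
    if t == "SEQ":
        if v[0] == "Seq":                 # c went into r1; r1 not nullable
            return ("Seq", inj(r[1], c, v[1]), v[2])
        if v[0] == "Left":                # left branch of the split: c into r1
            return ("Seq", inj(r[1], c, v[1][1]), v[1][2])
        # right branch of the split: r1 matched [], c went into r2
        return ("Seq", mkeps(r[1]), inj(r[2], c, v[1]))
    if t == "STAR":                       # v = Seq(v1, Stars vs)
        return ("Stars", [inj(r[1], c, v[1])] + v[2][1])
```

For instance `inj(("CHAR", 'a'), 'a', ("Empty",))` gives `("Char", 'a')`: the value $\Empty$ for $\ONE = a\backslash a$ becomes the value $\Char\,a$ for $a$.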

%\comment{Other word: insight?}
The following example gives an insight into $\textit{inj}$'s effect and
how Sulzmann and Lu's algorithm works as a whole. Suppose we have a
regular expression $((((a+b)+ab)+c)+abc)^*$, and want to match it
against the string $abc$ (when $abc$ is written as a regular expression,
the standard way of expressing it is $a \cdot (b \cdot c)$, but we
usually omit the parentheses and dots here for better readability). This
algorithm returns a POSIX value, which means it will produce the longest
matching. Consequently, it matches the string $abc$ in one star
iteration, using the longest alternative $abc$ of the sub-expression (we shall use $r$ to denote this
sub-expression for conciseness):

\[\underbrace{((((a+b)+ab)+c)+abc)}_r\]

\noindent
Before $\textit{inj}$ is called, our lexer first builds derivatives using
the string $abc$ (we simplified some regular expressions like $\ZERO \cdot
b$ to $\ZERO$ for conciseness; we also omit parentheses if they are
clear from the context):

%Similarly, we allow
%$\textit{ALT}$ to take a list of regular expressions as an argument
%instead of just 2 operands to reduce the nested depth of
%$\textit{ALT}$

\begin{center}
\begin{tabular}{lcl}
$r^*$ & $\xrightarrow{\backslash a}$ & $r_1 = (\ONE+\ZERO+\ONE \cdot b + \ZERO + \ONE \cdot b \cdot c) \cdot r^*$\\
& $\xrightarrow{\backslash b}$ & $r_2 = (\ZERO+\ZERO+\ONE \cdot \ONE + \ZERO + \ONE \cdot \ONE \cdot c) \cdot r^* +(\ZERO+\ONE+\ZERO + \ZERO + \ZERO) \cdot r^*$\\
& $\xrightarrow{\backslash c}$ & $r_3 = ((\ZERO+\ZERO+\ZERO + \ZERO + \ONE \cdot \ONE \cdot \ONE) \cdot r^* + (\ZERO+\ZERO+\ZERO + \ONE + \ZERO) \cdot r^*) + $\\
& & $\phantom{r_3 = (} ((\ZERO+\ONE+\ZERO + \ZERO + \ZERO) \cdot r^* + (\ZERO+\ZERO+\ZERO + \ONE + \ZERO) \cdot r^* )$
\end{tabular}
\end{center}

\noindent
Since $r_3$ is nullable, we can call $\textit{mkeps}$
to construct a lexical value for how $r_3$ matched the string $abc$.
This function gives the following value $v_3$:


\begin{center}
$\Left(\Left(\Seq(\Right(\Seq(\Empty, \Seq(\Empty,\Empty))), \Stars [])))$
\end{center}
The outer $\Left(\Left(\ldots))$ tells us the leftmost nullable part of $r_3$ (underlined):

\begin{center}
\begin{tabular}{l@{\hspace{2mm}}l}
& $\big(\underline{(\ZERO+\ZERO+\ZERO+ \ZERO+ \ONE \cdot \ONE \cdot \ONE) \cdot r^*}
\;+\; (\ZERO+\ZERO+\ZERO + \ONE + \ZERO) \cdot r^*\big)$ \smallskip\\
$+$ & $\big((\ZERO+\ONE+\ZERO + \ZERO + \ZERO) \cdot r^*
\;+\; (\ZERO+\ZERO+\ZERO + \ONE + \ZERO) \cdot r^* \big)$
\end{tabular}
\end{center}

\noindent
Note that the leftmost location of term $(\ZERO+\ZERO+\ZERO + \ZERO + \ONE \cdot \ONE \cdot
\ONE) \cdot r^*$ (which corresponds to the initial sub-match $abc$) allows
$\textit{mkeps}$ to pick it up because $\textit{mkeps}$ is defined to always choose the
left one when it is nullable. In the case of this example, $abc$ is
preferred over $a$ or $ab$. This $\Left(\Left(\ldots))$ location is
generated by two applications of the splitting clause

\begin{center}
$(r_1 \cdot r_2)\backslash c \;\;(\textit{when} \; r_1 \; \textit{is nullable}) \, = (r_1\backslash c) \cdot r_2 \,+\, r_2\backslash c.$
\end{center}

\noindent
By this clause, we put $r_1 \backslash c \cdot r_2 $ at the
$\textit{front}$ and $r_2 \backslash c$ at the $\textit{back}$. This
allows $\textit{mkeps}$ to always pick up among two matches the one with a longer
initial sub-match. Removing the outside $\Left(\Left(...))$, the inside
sub-value

\begin{center}
$\Seq(\Right(\Seq(\Empty, \Seq(\Empty, \Empty))), \Stars [])$
\end{center}

\noindent
tells us how the empty string $[]$ is matched with $(\ZERO+\ZERO+\ZERO + \ZERO + \ONE \cdot
\ONE \cdot \ONE) \cdot r^*$. We match $[]$ by a sequence of two nullable regular
expressions. The first one is an alternative; we take the rightmost
branch, whose language contains the empty string. The second
nullable regular expression is a Kleene star. $\Stars$ tells us how it
generates the nullable regular expression: by 0 iterations to form
$\ONE$. Now $\textit{inj}$ injects characters back and incrementally
builds a lexical value based on $v_3$. Using the value $v_3$, the character
$c$, and the regular expression $r_2$, we can recover how $r_2$ matched
the string $[c]$: $\textit{inj} \; r_2 \; c \; v_3$ gives us
\begin{center}
$v_2 = \Left(\Seq(\Right(\Seq(\Empty, \Seq(\Empty, c))), \Stars [])),$
\end{center}
which tells us how $r_2$ matched $[c]$. After this we inject back the character $b$, and get
\begin{center}
$v_1 = \Seq(\Right(\Seq(\Empty, \Seq(b, c))), \Stars [])$
\end{center}
for how
\begin{center}
$r_1= (\ONE+\ZERO+\ONE \cdot b + \ZERO + \ONE \cdot b \cdot c) \cdot r^*$
\end{center}
matched the string $bc$ before it split into two substrings.
Finally, after injecting the character $a$ back into $v_1$,
we get the lexical value tree
\begin{center}
$v_0= \Stars [\Right(\Seq(a, \Seq(b, c)))]$
\end{center}
for how $r$ matched $abc$. This completes the algorithm.
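Putting all the pieces together, the whole two-phase algorithm can be sketched in the illustrative Python encoding of the earlier sketches; the function names \texttt{flat} (for the flatten operation $|v|$) and \texttt{lexer} are our own choice:

```python
# Full two-phase sketch: derivatives forwards, then mkeps and
# injections backwards (tagged-tuple encoding as in earlier sketches).
ZERO, ONE = ("ZERO",), ("ONE",)
def CHAR(c):     return ("CHAR", c)
def ALT(r1, r2): return ("ALT", r1, r2)
def SEQ(r1, r2): return ("SEQ", r1, r2)
def STAR(r):     return ("STAR", r)

def nullable(r):
    t = r[0]
    return (t == "ONE" or t == "STAR"
            or (t == "ALT" and (nullable(r[1]) or nullable(r[2])))
            or (t == "SEQ" and nullable(r[1]) and nullable(r[2])))

def der(c, r):
    t = r[0]
    if t in ("ZERO", "ONE"): return ZERO
    if t == "CHAR":          return ONE if r[1] == c else ZERO
    if t == "ALT":           return ALT(der(c, r[1]), der(c, r[2]))
    if t == "SEQ":
        d = SEQ(der(c, r[1]), r[2])
        return ALT(d, der(c, r[2])) if nullable(r[1]) else d
    if t == "STAR":          return SEQ(der(c, r[1]), r)

def mkeps(r):
    t = r[0]
    if t == "ONE":  return ("Empty",)
    if t == "ALT":
        return ("Left", mkeps(r[1])) if nullable(r[1]) else ("Right", mkeps(r[2]))
    if t == "SEQ":  return ("Seq", mkeps(r[1]), mkeps(r[2]))
    if t == "STAR": return ("Stars", [])

def inj(r, c, v):
    t = r[0]
    if t == "CHAR": return ("Char", c)
    if t == "ALT":
        if v[0] == "Left": return ("Left", inj(r[1], c, v[1]))
        return ("Right", inj(r[2], c, v[1]))
    if t == "SEQ":
        if v[0] == "Seq":  return ("Seq", inj(r[1], c, v[1]), v[2])
        if v[0] == "Left": return ("Seq", inj(r[1], c, v[1][1]), v[1][2])
        return ("Seq", mkeps(r[1]), inj(r[2], c, v[1]))
    if t == "STAR": return ("Stars", [inj(r[1], c, v[1])] + v[2][1])

def flat(v):
    """The flatten operation |v|: the underlying string of a value."""
    t = v[0]
    if t == "Empty": return ""
    if t == "Char":  return v[1]
    if t == "Seq":   return flat(v[1]) + flat(v[2])
    if t in ("Left", "Right"): return flat(v[1])
    if t == "Stars": return "".join(flat(u) for u in v[1])

def lexer(r, s):
    """Return the POSIX value for r matching s, or None on failure."""
    if s == "":
        return mkeps(r) if nullable(r) else None
    v = lexer(der(s[0], r), s[1:])
    return None if v is None else inj(r, s[0], v)
```

Running this on the example, with `r` standing for $((((a+b)+ab)+c)+abc)$, the call `lexer(STAR(r), "abc")` produces a $\Stars$ value with a single iteration that flattens back to the string `"abc"`, in accordance with the POSIX longest-match policy.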

%We omit the details of injection function, which is provided by Sulzmann and Lu's paper \cite{Sulzmann2014}.
Readers might have noticed that the lexical value information is actually
already available when doing derivatives. For example, immediately after
the operation $\backslash a$ we know that if we want to match a string
that starts with $a$, we can either take the initial match to be

\begin{center}
\begin{enumerate}
\item[1)] just $a$ or
\item[2)] the string $ab$ or
\item[3)] the string $abc$.
\end{enumerate}
\end{center}

\noindent
In order to differentiate between these choices, we just need to
remember their positions---$a$ is on the left, $ab$ is in the middle,
and $abc$ is on the right. Which of these alternatives is chosen
later does not affect their relative position because the algorithm does
not change this order. If this parsing information can be determined and
does not change because of later derivatives, there is no point in
traversing this information twice. This leads to an optimisation---if we
store the information for lexical values inside the regular expression,
update it when we do derivatives on them, and collect the information
when finished with derivatives and call $\textit{mkeps}$ for deciding which
branch is POSIX, we can generate the lexical value in one pass, instead of
doing the $n$ injections afterwards. This leads to Sulzmann and Lu's novel
idea of using bitcodes in derivatives.

In the next section, we shall focus on the bitcoded algorithm and the
process of simplification of regular expressions. This is needed in
order to obtain \emph{fast} versions of Brzozowski's, and Sulzmann
and Lu's algorithms. This is where the PhD-project aims to advance the
state-of-the-art.


\section{Simplification of Regular Expressions}

Using bitcodes to guide parsing is not a novel idea. It was applied to
context-free grammars and then adapted by Henglein and Nielsen for
efficient regular expression lexing using DFAs~\cite{nielson11bcre}.
Sulzmann and Lu took this idea of bitcodes a step further by integrating
bitcodes into derivatives. The reason why we want to use bitcodes in
this project is that we want to introduce more aggressive simplification
rules in order to keep the size of derivatives small throughout. This is
because the main drawback of building successive derivatives according
to Brzozowski's definition is that they can grow very quickly in size.
This is mainly due to the fact that the derivative operation often
generates ``useless'' $\ZERO$s and $\ONE$s in derivatives. As a result, if
implemented naively both algorithms by Brzozowski and by Sulzmann and Lu
are excruciatingly slow. For example when starting with the regular
expression $(a + aa)^*$ and building 12 successive derivatives
w.r.t.~the character $a$, one obtains a derivative regular expression
with more than 8000 nodes (when viewed as a tree). Operations like
$\textit{der}$ and $\nullable$ need to traverse such trees and
consequently the bigger the size of the derivative the slower the
algorithm.

Fortunately, one can simplify regular expressions after each derivative
step. Various simplifications of regular expressions are possible, such
as the simplification of $\ZERO + r$, $r + \ZERO$, $\ONE\cdot r$, $r
\cdot \ONE$, and $r + r$ to just $r$. These simplifications do not
affect the answer for whether a regular expression matches a string or
not, but fortunately also do not affect the POSIX strategy of how
regular expressions match strings---although the latter is much harder
to establish. Some initial results in this regard have been
obtained in \cite{AusafDyckhoffUrban2016}.
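These local rules are straightforward to implement. The following illustrative Python sketch (same tagged-tuple encoding as in the earlier sketches; the extra rules $\ZERO \cdot r \to \ZERO$ and $r \cdot \ZERO \to \ZERO$ are our own addition, and are also language-preserving) applies them bottom-up after each derivative step and repeats the $(a + aa)^*$ experiment:

```python
# Sketch of simplification after each derivative step (illustrative;
# tagged-tuple encoding as before). Rules: 0+r, r+0, r+r -> r and
# 1.r, r.1 -> r, plus 0.r, r.0 -> 0.
ZERO, ONE = ("ZERO",), ("ONE",)
def CHAR(c):     return ("CHAR", c)
def ALT(r1, r2): return ("ALT", r1, r2)
def SEQ(r1, r2): return ("SEQ", r1, r2)
def STAR(r):     return ("STAR", r)

def nullable(r):
    t = r[0]
    return (t == "ONE" or t == "STAR"
            or (t == "ALT" and (nullable(r[1]) or nullable(r[2])))
            or (t == "SEQ" and nullable(r[1]) and nullable(r[2])))

def der(c, r):
    t = r[0]
    if t in ("ZERO", "ONE"): return ZERO
    if t == "CHAR":          return ONE if r[1] == c else ZERO
    if t == "ALT":           return ALT(der(c, r[1]), der(c, r[2]))
    if t == "SEQ":
        d = SEQ(der(c, r[1]), r[2])
        return ALT(d, der(c, r[2])) if nullable(r[1]) else d
    if t == "STAR":          return SEQ(der(c, r[1]), r)

def simp(r):
    t = r[0]
    if t == "ALT":
        r1, r2 = simp(r[1]), simp(r[2])
        if r1 == ZERO: return r2
        if r2 == ZERO: return r1
        if r1 == r2:   return r1
        return ALT(r1, r2)
    if t == "SEQ":
        r1, r2 = simp(r[1]), simp(r[2])
        if r1 == ZERO or r2 == ZERO: return ZERO
        if r1 == ONE: return r2
        if r2 == ONE: return r1
        return SEQ(r1, r2)
    return r

def size(r):
    """Number of nodes when the expression is viewed as a tree."""
    return 1 + sum(size(x) for x in r[1:] if isinstance(x, tuple))

# (a + aa)* and 12 successive derivatives w.r.t. 'a'
r = STAR(ALT(CHAR('a'), SEQ(CHAR('a'), CHAR('a'))))
plain = simped = r
for _ in range(12):
    plain  = der('a', plain)
    simped = simp(der('a', simped))
# size(plain) runs into the thousands; size(simped) stays much smaller
```

Even with these local rules, however, the simplified derivatives of some expressions keep growing, which is one reason why this report investigates stronger simplification rules.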

Unfortunately, the simplification rules outlined above are not
sufficient to prevent a size explosion in all cases. We
believe a tighter bound can be achieved that prevents an explosion in
\emph{all} cases. Such a tighter bound is suggested by the work of Antimirov, who
proved that (partial) derivatives can be bounded by the number of
characters contained in the initial regular expression
\cite{Antimirov95}. He defined the \emph{partial derivatives} of regular
expressions as follows:

\begin{center}
\begin{tabular}{lcl}
 $\textit{pder} \; c \; \ZERO$ & $\dn$ & $\emptyset$\\
 $\textit{pder} \; c \; \ONE$ & $\dn$ & $\emptyset$ \\
 $\textit{pder} \; c \; d$ & $\dn$ & $\textit{if} \; c \,=\, d \; \{ \ONE \} \; \textit{else} \; \emptyset$ \\
 $\textit{pder} \; c \; r_1+r_2$ & $\dn$ & $\textit{pder} \; c \; r_1 \cup \textit{pder} \; c \; r_2$ \\
 $\textit{pder} \; c \; r_1 \cdot r_2$ & $\dn$ & $\textit{if} \; \textit{nullable} \; r_1 $\\
 & & $\textit{then} \; \{ r \cdot r_2 \mid r \in \textit{pder} \; c \; r_1 \} \cup \textit{pder} \; c \; r_2 \;$\\
 & & $\textit{else} \; \{ r \cdot r_2 \mid r \in \textit{pder} \; c \; r_1 \} $ \\
 $\textit{pder} \; c \; r^*$ & $\dn$ & $ \{ r' \cdot r^* \mid r' \in \textit{pder} \; c \; r \} $ \\
\end{tabular}
\end{center}

\noindent
A partial derivative of a regular expression $r$ is essentially a set of
regular expressions that are either $r$'s children expressions or a
concatenation of them. Antimirov proved a tight bound on the sum of
the sizes of \emph{all} partial derivatives, no matter what the string
looks like. Roughly speaking, this sum will be at most cubic in the
size of the regular expression.
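
For comparison with the sketch of the unsimplified derivatives, $\textit{pder}$ can be transcribed almost verbatim (again a Python sketch over a home-grown tuple encoding, not the report's Scala code). On the same $(a + aa)^*$ example, the set of partial derivatives reaches a fixed point after one character, and the sum of their sizes stays far below Antimirov's cubic bound:

```python
# Antimirov's partial derivatives, returning a *set* of regular
# expressions; tuple encoding of regexes is our own:
# ("0",), ("1",), ("c", x), ("+", r1, r2), (".", r1, r2), ("*", r)
def nullable(r):
    t = r[0]
    if t in ("0", "c"): return False
    if t in ("1", "*"): return True
    if t == "+": return nullable(r[1]) or nullable(r[2])
    if t == ".": return nullable(r[1]) and nullable(r[2])

def pder(c, r):
    t = r[0]
    if t in ("0", "1"): return set()
    if t == "c": return {("1",)} if r[1] == c else set()
    if t == "+": return pder(c, r[1]) | pder(c, r[2])
    if t == ".":
        s = {(".", p, r[2]) for p in pder(c, r[1])}
        return s | pder(c, r[2]) if nullable(r[1]) else s
    if t == "*": return {(".", p, r) for p in pder(c, r[1])}

def size(r):
    return 1 + sum(size(x) for x in r[1:] if isinstance(x, tuple))

a = ("c", "a")
r = ("*", ("+", a, (".", a, a)))         # (a + aa)*
rs = {r}
for c in "a" * 12:                       # lift pder to sets of regexes
    rs = set().union(*(pder(c, x) for x in rs))
print(sum(size(x) for x in rs))          # 18 -- in stark contrast to 8000+
```

The set stabilises at the two expressions $\ONE\cdot(a+aa)^*$ and $(\ONE\cdot a)\cdot(a+aa)^*$, which is what motivates aiming for the same bound in the simplified bitcoded algorithm.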

If we want the size of derivatives in Sulzmann and Lu's algorithm to
stay below this bound, we would need more aggressive simplifications.
Essentially we need to delete useless $\ZERO$s and $\ONE$s, as well as
delete duplicates whenever possible. For example, the parentheses in
$(a+b) \cdot c + b\cdot c$ can be opened up to get $a\cdot c + b \cdot c + b
\cdot c$, and then simplified to just $a \cdot c + b \cdot c$. Another
example is simplifying $(a^*+a) + (a^*+ \ONE) + (a +\ONE)$ to just
$a^*+a+\ONE$. Adding these more aggressive simplification rules helps us
to achieve the same size bound as that of the partial derivatives.

In order to implement the idea of ``spilling out alternatives'' and to
make them compatible with the $\textit{inj}$-mechanism, we use
\emph{bitcodes}. Bits and bitcodes (lists of bits) are just:

%This allows us to prove a tight
%bound on the size of regular expression during the running time of the
%algorithm if we can establish the connection between our simplification
%rules and partial derivatives.

%We believe, and have generated test
%data, that a similar bound can be obtained for the derivatives in
%Sulzmann and Lu's algorithm. Let us give some details about this next.


\begin{center}
$b ::= \S \mid \Z \qquad
bs ::= [] \mid b:bs
$
\end{center}

\noindent
The $\S$ and $\Z$ are arbitrary names for the bits in order to avoid
confusion with the regular expressions $\ZERO$ and $\ONE$. Bitcodes (or
bit-lists) can be used to encode values (or incomplete values) in a
compact form. This can be straightforwardly seen in the following
coding function from values to bitcodes:

\begin{center}
\begin{tabular}{lcl}
  $\textit{code}(\Empty)$ & $\dn$ & $[]$\\
  $\textit{code}(\Char\,c)$ & $\dn$ & $[]$\\
  $\textit{code}(\Left\,v)$ & $\dn$ & $\Z :: code(v)$\\
  $\textit{code}(\Right\,v)$ & $\dn$ & $\S :: code(v)$\\
  $\textit{code}(\Seq\,v_1\,v_2)$ & $\dn$ & $code(v_1) \,@\, code(v_2)$\\
  $\textit{code}(\Stars\,[])$ & $\dn$ & $[\S]$\\
  $\textit{code}(\Stars\,(v\!::\!vs))$ & $\dn$ & $\Z :: code(v) \;@\;
                                                 code(\Stars\,vs)$
\end{tabular}
\end{center}

\noindent
Here $\textit{code}$ encodes a value into a bitcode by converting
$\Left$ into $\Z$, $\Right$ into $\S$, the start of a non-empty
star iteration into $\Z$, and the border where a local star terminates
into $\S$. This coding is lossy, as it throws away the information about
characters, and also does not encode the ``boundary'' between two
sequence values. Moreover, with only the bitcode we cannot even tell
whether the $\S$s and $\Z$s are for $\Left/\Right$ or $\Stars$. The
reason for choosing this compact way of storing information is that the
relatively small size of bits can be easily manipulated and ``moved
around'' in a regular expression. In order to recover values, we will
need the corresponding regular expression as extra information. This
means the decoding function is defined as:


%\begin{definition}[Bitdecoding of Values]\mbox{}
\begin{center}
\begin{tabular}{@{}l@{\hspace{1mm}}c@{\hspace{1mm}}l@{}}
  $\textit{decode}'\,bs\,(\ONE)$ & $\dn$ & $(\Empty, bs)$\\
  $\textit{decode}'\,bs\,(c)$ & $\dn$ & $(\Char\,c, bs)$\\
  $\textit{decode}'\,(\Z\!::\!bs)\;(r_1 + r_2)$ & $\dn$ &
     $\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r_1\;\textit{in}\;
       (\Left\,v, bs_1)$\\
  $\textit{decode}'\,(\S\!::\!bs)\;(r_1 + r_2)$ & $\dn$ &
     $\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r_2\;\textit{in}\;
       (\Right\,v, bs_1)$\\
  $\textit{decode}'\,bs\;(r_1\cdot r_2)$ & $\dn$ &
     $\textit{let}\,(v_1, bs_1) = \textit{decode}'\,bs\,r_1\;\textit{in}$\\
  & & $\textit{let}\,(v_2, bs_2) = \textit{decode}'\,bs_1\,r_2$\\
  & & \hspace{35mm}$\textit{in}\;(\Seq\,v_1\,v_2, bs_2)$\\
  $\textit{decode}'\,(\S\!::\!bs)\,(r^*)$ & $\dn$ & $(\Stars\,[], bs)$\\
  $\textit{decode}'\,(\Z\!::\!bs)\,(r^*)$ & $\dn$ &
     $\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r\;\textit{in}$\\
  & & $\textit{let}\,(\Stars\,vs, bs_2) = \textit{decode}'\,bs_1\,r^*$\\
  & & \hspace{35mm}$\textit{in}\;(\Stars\,(v\!::\!vs), bs_2)$\bigskip\\

  $\textit{decode}\,bs\,r$ & $\dn$ &
     $\textit{let}\,(v, bs') = \textit{decode}'\,bs\,r\;\textit{in}$\\
  & & $\textit{if}\;bs' = []\;\textit{then}\;\textit{Some}\,v\;
       \textit{else}\;\textit{None}$
\end{tabular}
\end{center}
%\end{definition}
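
The round-trip between $\textit{code}$ and $\textit{decode}$ can be checked on a small example. The following Python sketch uses our own tuple encodings of values and regular expressions (the report's code is in Scala), with $\Z$ marking one more star iteration and $\S$ the end of a star, matching the bits produced by the derivative operation below:

```python
# values: ("Empty",), ("Char", c), ("Left", v), ("Right", v),
# ("Seq", v1, v2), ("Stars", [v, ...]); bits: Z = 0, S = 1 (our encoding)
Z, S = 0, 1

def code(v):
    t = v[0]
    if t in ("Empty", "Char"): return []            # lossy: characters dropped
    if t == "Left":  return [Z] + code(v[1])
    if t == "Right": return [S] + code(v[1])
    if t == "Seq":   return code(v[1]) + code(v[2])
    if t == "Stars":
        if not v[1]: return [S]                     # S ends the iterations
        return [Z] + code(v[1][0]) + code(("Stars", v[1][1:]))

# regexes: ("1",), ("c", x), ("+", r1, r2), (".", r1, r2), ("*", r)
def decode1(bs, r):                                 # decode' in the text
    t = r[0]
    if t == "1": return ("Empty",), bs
    if t == "c": return ("Char", r[1]), bs
    if t == "+":
        v, bs1 = decode1(bs[1:], r[1] if bs[0] == Z else r[2])
        return (("Left", v) if bs[0] == Z else ("Right", v)), bs1
    if t == ".":
        v1, bs1 = decode1(bs, r[1])
        v2, bs2 = decode1(bs1, r[2])
        return ("Seq", v1, v2), bs2
    if t == "*":
        if bs[0] == S: return ("Stars", []), bs[1:]
        v, bs1 = decode1(bs[1:], r[1])
        vs, bs2 = decode1(bs1, r)
        return ("Stars", [v] + vs[1]), bs2

def decode(bs, r):
    v, rest = decode1(bs, r)
    return v if rest == [] else None

r = ("*", ("+", ("c", "a"), ("c", "b")))            # (a + b)*
v = ("Stars", [("Left", ("Char", "a")), ("Right", ("Char", "b"))])
print(code(v))                                      # [0, 0, 0, 1, 1]
print(decode(code(v), r) == v)                      # True
```

Decoding the five bits against a different regular expression would give a different value (or fail), which illustrates why the regular expression is needed as extra information.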

Sulzmann and Lu integrated the bitcodes into regular expressions to
create annotated regular expressions \cite{Sulzmann2014}.
\emph{Annotated regular expressions} are defined by the following
grammar:%\comment{ALTS should have an $as$ in the definitions, not just $a_1$ and $a_2$}

\begin{center}
\begin{tabular}{lcl}
  $\textit{a}$ & $::=$  & $\textit{ZERO}$\\
                  & $\mid$ & $\textit{ONE}\;\;bs$\\
                  & $\mid$ & $\textit{CHAR}\;\;bs\,c$\\
                  & $\mid$ & $\textit{ALTS}\;\;bs\,as$\\
                  & $\mid$ & $\textit{SEQ}\;\;bs\,a_1\,a_2$\\
                  & $\mid$ & $\textit{STAR}\;\;bs\,a$
\end{tabular}
\end{center}
%(in \textit{ALTS})

\noindent
where $bs$ stands for bitcodes, $a$ for $\textbf{a}$nnotated regular
expressions and $as$ for a list of annotated regular expressions.
The alternative constructor ($\textit{ALTS}$) has been generalised to
accept a list of annotated regular expressions rather than just two.
We will show that these bitcodes encode information about
the (POSIX) value that should be generated by the Sulzmann and Lu
algorithm.


To do lexing using annotated regular expressions, we shall first
transform the usual (un-annotated) regular expressions into annotated
regular expressions. This operation is called \emph{internalisation} and
defined as follows:

%\begin{definition}
\begin{center}
\begin{tabular}{lcl}
  $(\ZERO)^\uparrow$ & $\dn$ & $\textit{ZERO}$\\
  $(\ONE)^\uparrow$ & $\dn$ & $\textit{ONE}\,[]$\\
  $(c)^\uparrow$ & $\dn$ & $\textit{CHAR}\,[]\,c$\\
  $(r_1 + r_2)^\uparrow$ & $\dn$ &
  $\textit{ALTS}\;[]\,List((\textit{fuse}\,[\Z]\,r_1^\uparrow),\,
  (\textit{fuse}\,[\S]\,r_2^\uparrow))$\\
  $(r_1\cdot r_2)^\uparrow$ & $\dn$ &
         $\textit{SEQ}\;[]\,r_1^\uparrow\,r_2^\uparrow$\\
  $(r^*)^\uparrow$ & $\dn$ &
         $\textit{STAR}\;[]\,r^\uparrow$
\end{tabular}
\end{center}
%\end{definition}
+ − 1427
+ − 1428
\noindent
+ − 1429
We use up arrows here to indicate that the basic un-annotated regular
+ − 1430
expressions are ``lifted up'' into something slightly more complex. In the
+ − 1431
fourth clause, $\textit{fuse}$ is an auxiliary function that helps to
+ − 1432
attach bits to the front of an annotated regular expression. Its
+ − 1433
definition is as follows:

\begin{center}
\begin{tabular}{lcl}
  $\textit{fuse}\;bs\,(\textit{ZERO})$ & $\dn$ & $\textit{ZERO}$\\
  $\textit{fuse}\;bs\,(\textit{ONE}\,bs')$ & $\dn$ &
     $\textit{ONE}\,(bs\,@\,bs')$\\
  $\textit{fuse}\;bs\,(\textit{CHAR}\,bs'\,c)$ & $\dn$ &
     $\textit{CHAR}\,(bs\,@\,bs')\,c$\\
  $\textit{fuse}\;bs\,(\textit{ALTS}\,bs'\,as)$ & $\dn$ &
     $\textit{ALTS}\,(bs\,@\,bs')\,as$\\
  $\textit{fuse}\;bs\,(\textit{SEQ}\,bs'\,a_1\,a_2)$ & $\dn$ &
     $\textit{SEQ}\,(bs\,@\,bs')\,a_1\,a_2$\\
  $\textit{fuse}\;bs\,(\textit{STAR}\,bs'\,a)$ & $\dn$ &
     $\textit{STAR}\,(bs\,@\,bs')\,a$
\end{tabular}
\end{center}

\noindent
After internalising the regular expression, we perform successive
derivative operations on the annotated regular expressions. This
derivative operation is the same as what we had previously for the
basic regular expressions, except that we need to take care of
the bitcodes:

%\begin{definition}{bder}
\begin{center}
  \begin{tabular}{@{}lcl@{}}
  $(\textit{ZERO})\,\backslash c$ & $\dn$ & $\textit{ZERO}$\\
  $(\textit{ONE}\;bs)\,\backslash c$ & $\dn$ & $\textit{ZERO}$\\
  $(\textit{CHAR}\;bs\,d)\,\backslash c$ & $\dn$ &
        $\textit{if}\;c=d\; \;\textit{then}\;
         \textit{ONE}\;bs\;\textit{else}\;\textit{ZERO}$\\
  $(\textit{ALTS}\;bs\,as)\,\backslash c$ & $\dn$ &
  $\textit{ALTS}\;bs\,(as.map(\backslash c))$\\
  $(\textit{SEQ}\;bs\,a_1\,a_2)\,\backslash c$ & $\dn$ &
     $\textit{if}\;\textit{bnullable}\,a_1$\\
  & &$\textit{then}\;\textit{ALTS}\,bs\,List((\textit{SEQ}\,[]\,(a_1\,\backslash c)\,a_2),$\\
  & &$\phantom{\textit{then}\;\textit{ALTS}\,bs\,}(\textit{fuse}\,(\textit{bmkeps}\,a_1)\,(a_2\,\backslash c)))$\\
  & &$\textit{else}\;\textit{SEQ}\,bs\,(a_1\,\backslash c)\,a_2$\\
  $(\textit{STAR}\,bs\,a)\,\backslash c$ & $\dn$ &
      $\textit{SEQ}\;bs\,(\textit{fuse}\, [\Z]\, (a\,\backslash c))\,
       (\textit{STAR}\,[]\,a)$
  \end{tabular}
\end{center}
%\end{definition}

\noindent
For instance, when we unfold $\textit{STAR} \; bs \; a$ into a sequence,
we need to attach an additional bit $\Z$ to the front of $a \backslash c$
to indicate that there is one more star iteration. Also the $\textit{SEQ}$ clause
is more subtle---when $a_1$ is $\textit{bnullable}$ (here
\textit{bnullable} is exactly the same as $\textit{nullable}$, except
that it applies to annotated regular expressions; we therefore omit the
definition). Assume that $\textit{bmkeps}$ correctly extracts the bitcode for how
$a_1$ matches the string prior to character $c$ (more on this later),
then the right branch of $\textit{ALTS}$, which is $\textit{fuse} \; (\textit{bmkeps} \; a_1) \; (a_2
\backslash c)$, will collapse the regular expression $a_1$ (as it has
already been fully matched) and store the parsing information at the
head of the regular expression $a_2 \backslash c$ by fusing to it. The
bitsequence $bs$, which was initially attached to the head of $\textit{SEQ}$, has
now been elevated to the top-level of $\textit{ALTS}$, as this information will be
needed whichever way the $\textit{SEQ}$ is matched---no matter whether $c$ belongs
to $a_1$ or $a_2$. After building these derivatives and maintaining all
the lexing information, we complete the lexing by collecting the
bitcodes using a generalised version of the $\textit{mkeps}$ function
for annotated regular expressions, called $\textit{bmkeps}$:


%\begin{definition}[\textit{bmkeps}]\mbox{}
\begin{center}
\begin{tabular}{lcl}
  $\textit{bmkeps}\,(\textit{ONE}\;bs)$ & $\dn$ & $bs$\\
  $\textit{bmkeps}\,(\textit{ALTS}\;bs\,(a::as))$ & $\dn$ &
     $\textit{if}\;\textit{bnullable}\,a$\\
  & &$\textit{then}\;bs\,@\,\textit{bmkeps}\,a$\\
  & &$\textit{else}\;\textit{bmkeps}\,(\textit{ALTS}\;bs\,as)$\\
  $\textit{bmkeps}\,(\textit{SEQ}\;bs\,a_1\,a_2)$ & $\dn$ &
     $bs \,@\,\textit{bmkeps}\,a_1\,@\, \textit{bmkeps}\,a_2$\\
  $\textit{bmkeps}\,(\textit{STAR}\;bs\,a)$ & $\dn$ &
     $bs \,@\, [\S]$
\end{tabular}
\end{center}
%\end{definition}
+ − 1517
+ − 1518
\noindent
+ − 1519
This function completes the value information by travelling along the
+ − 1520
path of the regular expression that corresponds to a POSIX value and
+ − 1521
collecting all the bitcodes, and using $S$ to indicate the end of star
+ − 1522
iterations. If we take the bitcodes produced by $\textit{bmkeps}$ and
+ − 1523
decode them, we get the value we expect. The corresponding lexing
+ − 1524
algorithm looks as follows:

\begin{center}
\begin{tabular}{lcl}
  $\textit{blexer}\;r\,s$ & $\dn$ &
      $\textit{let}\;a = (r^\uparrow)\backslash s\;\textit{in}$\\
  & & $\;\;\textit{if}\; \textit{bnullable}(a)$\\
  & & $\;\;\textit{then}\;\textit{decode}\,(\textit{bmkeps}\,a)\,r$\\
  & & $\;\;\textit{else}\;\textit{None}$
\end{tabular}
\end{center}

\noindent
In this definition $\_\backslash s$ is the generalisation of the derivative
operation from characters to strings (just like the derivatives for un-annotated
regular expressions).
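
Putting internalisation, $\textit{fuse}$, the bitcoded derivative, $\textit{bmkeps}$ and $\textit{decode}$ together gives a runnable sketch of $\textit{blexer}$. This is a self-contained Python transcription over our own tuple encodings, not the report's Scala code:

```python
# Annotated regexes: ("ZERO",), ("ONE", bs), ("CHAR", bs, c),
# ("ALTS", bs, [a, ...]), ("SEQ", bs, a1, a2), ("STAR", bs, a).
# Plain regexes: ("0",), ("1",), ("c", x), ("+", ..), (".", ..), ("*", r).
Z, S = 0, 1

def fuse(bs, a):                          # attach bits to the front
    return a if a[0] == "ZERO" else (a[0], bs + a[1]) + a[2:]

def internalise(r):                       # r^up
    t = r[0]
    if t == "0": return ("ZERO",)
    if t == "1": return ("ONE", [])
    if t == "c": return ("CHAR", [], r[1])
    if t == "+": return ("ALTS", [], [fuse([Z], internalise(r[1])),
                                      fuse([S], internalise(r[2]))])
    if t == ".": return ("SEQ", [], internalise(r[1]), internalise(r[2]))
    if t == "*": return ("STAR", [], internalise(r[1]))

def bnullable(a):
    t = a[0]
    if t == "ALTS": return any(bnullable(x) for x in a[2])
    if t == "SEQ": return bnullable(a[2]) and bnullable(a[3])
    return t in ("ONE", "STAR")

def bmkeps(a):                            # bits of the nullable path
    t = a[0]
    if t == "ONE": return a[1]
    if t == "ALTS":
        if bnullable(a[2][0]): return a[1] + bmkeps(a[2][0])
        return bmkeps(("ALTS", a[1], a[2][1:]))
    if t == "SEQ": return a[1] + bmkeps(a[2]) + bmkeps(a[3])
    if t == "STAR": return a[1] + [S]

def bder(c, a):
    t = a[0]
    if t in ("ZERO", "ONE"): return ("ZERO",)
    if t == "CHAR": return ("ONE", a[1]) if a[2] == c else ("ZERO",)
    if t == "ALTS": return ("ALTS", a[1], [bder(c, x) for x in a[2]])
    if t == "SEQ":
        if bnullable(a[2]):
            return ("ALTS", a[1], [("SEQ", [], bder(c, a[2]), a[3]),
                                   fuse(bmkeps(a[2]), bder(c, a[3]))])
        return ("SEQ", a[1], bder(c, a[2]), a[3])
    if t == "STAR":
        return ("SEQ", a[1], fuse([Z], bder(c, a[2])), ("STAR", [], a[2]))

def decode1(bs, r):                       # decode' from the text
    t = r[0]
    if t == "1": return ("Empty",), bs
    if t == "c": return ("Char", r[1]), bs
    if t == "+":
        v, bs1 = decode1(bs[1:], r[1] if bs[0] == Z else r[2])
        return (("Left", v) if bs[0] == Z else ("Right", v)), bs1
    if t == ".":
        v1, bs1 = decode1(bs, r[1])
        v2, bs2 = decode1(bs1, r[2])
        return ("Seq", v1, v2), bs2
    if t == "*":
        if bs[0] == S: return ("Stars", []), bs[1:]
        v, bs1 = decode1(bs[1:], r[1])
        vs, bs2 = decode1(bs1, r)
        return ("Stars", [v] + vs[1]), bs2

def decode(bs, r):
    v, rest = decode1(bs, r)
    return v if rest == [] else None

def blexer(r, s):
    a = internalise(r)
    for c in s:
        a = bder(c, a)
    return decode(bmkeps(a), r) if bnullable(a) else None

print(blexer(("+", ("c", "a"), ("c", "b")), "a"))   # ('Left', ('Char', 'a'))
print(blexer(("*", ("c", "a")), "aa"))              # two star iterations
```

On $a^*$ and the string $aa$, $\textit{bmkeps}$ produces the bits $[\Z,\Z,\S]$, which decode against $a^*$ to the value $\Stars\,[\Char\,a, \Char\,a]$.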

The main point of the bitcodes and annotated regular expressions is that
we can apply rather aggressive (in terms of size) simplification rules
in order to keep derivatives small. We have developed such
``aggressive'' simplification rules and generated test data that show
that the expected bound can be achieved. Obviously we could only
partially cover the search space as there are infinitely many regular
expressions and strings.

One modification we introduced is to allow a list of annotated regular
expressions in the \textit{ALTS} constructor. This allows us to not just
delete unnecessary $\ZERO$s and $\ONE$s from regular expressions, but
also unnecessary ``copies'' of regular expressions (very similar to
simplifying $r + r$ to just $r$, but in a more general setting). Another
modification is that we use simplification rules inspired by Antimirov's
work on partial derivatives. They maintain the idea that only the first
``copy'' of a regular expression in an alternative contributes to the
calculation of a POSIX value. All subsequent copies can be pruned away from
the regular expression. A recursive definition of our simplification function
that looks somewhat similar to our Scala code is given below:
%\comment{Use $\ZERO$, $\ONE$ and so on.
%Is it $ALTS$ or $ALTS$?}\\

\begin{center}
  \begin{tabular}{@{}lcl@{}}

  $\textit{simp} \; (\textit{SEQ}\;bs\,a_1\,a_2)$ & $\dn$ & $ (\textit{simp} \; a_1, \textit{simp}  \; a_2) \; \textit{match} $ \\
   &&$\quad\textit{case} \; (\ZERO, \_) \Rightarrow  \ZERO$ \\
   &&$\quad\textit{case} \; (\_, \ZERO) \Rightarrow  \ZERO$ \\
   &&$\quad\textit{case} \;  (\ONE, a_2') \Rightarrow  \textit{fuse} \; bs \;  a_2'$ \\
   &&$\quad\textit{case} \; (a_1', \ONE) \Rightarrow  \textit{fuse} \; bs \;  a_1'$ \\
   &&$\quad\textit{case} \; (a_1', a_2') \Rightarrow   \textit{SEQ} \; bs \; a_1' \;  a_2'$ \\

  $\textit{simp} \; (\textit{ALTS}\;bs\,as)$ & $\dn$ & $\textit{distinct}( \textit{flatten} ( \textit{map} \; \textit{simp} \; as)) \; \textit{match} $ \\
  &&$\quad\textit{case} \; [] \Rightarrow  \ZERO$ \\
   &&$\quad\textit{case} \; a :: [] \Rightarrow  \textit{fuse}\;bs\;a$ \\
   &&$\quad\textit{case} \;  as' \Rightarrow  \textit{ALTS}\;bs\;as'$\\

   $\textit{simp} \; a$ & $\dn$ & $\textit{a} \qquad \textit{otherwise}$
\end{tabular}
\end{center}

\noindent
The simplification does a pattern match on the regular expression.
When it detects that the regular expression is an alternative or a
sequence, it tries to simplify its children regular expressions
recursively and then checks whether one of the children turns into $\ZERO$ or
$\ONE$, which might trigger further simplification at the current level.
The most involved part is the $\textit{ALTS}$ clause, where we use two
auxiliary functions $\textit{flatten}$ and $\textit{distinct}$ to open up nested
$\textit{ALTS}$ and reduce as many duplicates as possible. Function
$\textit{distinct}$ keeps only the first occurring copy and removes all later
duplicates. Function $\textit{flatten}$ opens up nested \textit{ALTS}.
Its recursive definition is given below:

 \begin{center}
  \begin{tabular}{@{}lcl@{}}
  $\textit{flatten} \; ((\textit{ALTS}\;bs\,as) :: as')$ & $\dn$ & $(\textit{map} \;
     (\textit{fuse}\;bs)\; \textit{as}) \; @ \; \textit{flatten} \; as' $ \\
  $\textit{flatten} \; (\textit{ZERO} :: as')$ & $\dn$ & $ \textit{flatten} \;  as' $ \\
    $\textit{flatten} \; (a :: as')$ & $\dn$ & $a :: \textit{flatten} \; as'$ \quad(otherwise)
\end{tabular}
\end{center}

\noindent
Here $\textit{flatten}$ behaves like the traditional functional programming flatten
function, except that it also removes $\ZERO$s. Or in terms of regular expressions, it
removes parentheses, for example changing $a+(b+c)$ into $a+b+c$.
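
A sketch of $\textit{simp}$, $\textit{flatten}$ and $\textit{distinct}$ in Python over our own tuple encoding of annotated regular expressions (not the report's Scala code). One detail the rules above leave open is how duplicates are compared; here they are detected on the bit-erased regular expression, which is one plausible reading:

```python
# annotated regexes: ("ZERO",), ("ONE", bs), ("CHAR", bs, c),
# ("ALTS", bs, [a, ...]), ("SEQ", bs, a1, a2), ("STAR", bs, a)
def fuse(bs, a):
    return a if a[0] == "ZERO" else (a[0], bs + a[1]) + a[2:]

def erase(a):                       # drop the bits, for duplicate detection
    t = a[0]
    if t == "ZERO": return ("0",)
    if t == "ONE": return ("1",)
    if t == "CHAR": return ("c", a[2])
    if t == "ALTS": return ("+",) + tuple(erase(x) for x in a[2])
    if t == "SEQ": return (".", erase(a[2]), erase(a[3]))
    return ("*", erase(a[2]))

def flatten(xs):                    # open up nested ALTS, drop ZEROs
    out = []
    for a in xs:
        if a[0] == "ALTS": out += [fuse(a[1], x) for x in a[2]]
        elif a[0] != "ZERO": out.append(a)
    return out

def distinct(xs):                   # keep only the first copy of each regex
    seen, out = set(), []
    for a in xs:
        if erase(a) not in seen:
            seen.add(erase(a)); out.append(a)
    return out

def simp(a):
    t = a[0]
    if t == "SEQ":
        a1, a2 = simp(a[2]), simp(a[3])
        if a1[0] == "ZERO" or a2[0] == "ZERO": return ("ZERO",)
        if a1[0] == "ONE": return fuse(a[1] + a1[1], a2)
        if a2[0] == "ONE": return fuse(a[1], a1)   # NB: drops the ONE's bits
        return ("SEQ", a[1], a1, a2)
    if t == "ALTS":
        xs = distinct(flatten([simp(x) for x in a[2]]))
        if xs == []: return ("ZERO",)
        return fuse(a[1], xs[0]) if len(xs) == 1 else ("ALTS", a[1], xs)
    return a

# a + (a + a), with some bits attached, collapses to a single CHAR
a1 = ("CHAR", [0], "a")
nested = ("ALTS", [], [a1, ("ALTS", [1], [a1, ("CHAR", [], "a")])])
print(simp(nested))                 # ('CHAR', [0], 'a')
```

Only the first copy survives, together with its bits, exactly as the $\textit{distinct}$ rule above prescribes.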

Suppose we apply simplification after each derivative step, and view
these two operations as an atomic one: $a \backslash_{simp}\,c \dn
\textit{simp}(a \backslash c)$. Then we can use the previous natural
extension from derivative w.r.t.~character to derivative
w.r.t.~string:%\comment{simp in the [] case?}

\begin{center}
\begin{tabular}{lcl}
$r \backslash_{simp} (c\!::\!s) $ & $\dn$ & $(r \backslash_{simp}\, c) \backslash_{simp}\, s$ \\
$r \backslash_{simp} [\,] $ & $\dn$ & $r$
\end{tabular}
\end{center}

\noindent
and obtain an optimised version of the algorithm:

\begin{center}
\begin{tabular}{lcl}
  $\textit{blexer\_simp}\;r\,s$ & $\dn$ &
      $\textit{let}\;a = (r^\uparrow)\backslash_{simp}\, s\;\textit{in}$\\
  & & $\;\;\textit{if}\; \textit{bnullable}(a)$\\
  & & $\;\;\textit{then}\;\textit{decode}\,(\textit{bmkeps}\,a)\,r$\\
  & & $\;\;\textit{else}\;\textit{None}$
\end{tabular}
\end{center}

\noindent
This algorithm keeps the regular expression size small: with this
simplification, the more than 8000 nodes of our previous $(a + aa)^*$ example
are reduced to just 6, and the size stays constant no matter how long the
input string is.
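
The effect can be checked experimentally. The following self-contained Python sketch (our own tuple encodings; its $\textit{simp}$ is a weaker variant of the report's rules, so the constant size comes out at 17 rather than 6, but the point is the absence of growth) interleaves the bitcoded derivative with simplification on $(a+aa)^*$:

```python
# bder interleaved with a simp of the kind described above; bits Z=0, S=1
Z, S = 0, 1

def fuse(bs, a):
    return a if a[0] == "ZERO" else (a[0], bs + a[1]) + a[2:]

def internalise(r):
    t = r[0]
    if t == "0": return ("ZERO",)
    if t == "1": return ("ONE", [])
    if t == "c": return ("CHAR", [], r[1])
    if t == "+": return ("ALTS", [], [fuse([Z], internalise(r[1])),
                                      fuse([S], internalise(r[2]))])
    if t == ".": return ("SEQ", [], internalise(r[1]), internalise(r[2]))
    if t == "*": return ("STAR", [], internalise(r[1]))

def bnullable(a):
    t = a[0]
    if t == "ALTS": return any(bnullable(x) for x in a[2])
    if t == "SEQ": return bnullable(a[2]) and bnullable(a[3])
    return t in ("ONE", "STAR")

def bmkeps(a):
    t = a[0]
    if t == "ONE": return a[1]
    if t == "ALTS":
        if bnullable(a[2][0]): return a[1] + bmkeps(a[2][0])
        return bmkeps(("ALTS", a[1], a[2][1:]))
    if t == "SEQ": return a[1] + bmkeps(a[2]) + bmkeps(a[3])
    if t == "STAR": return a[1] + [S]

def bder(c, a):
    t = a[0]
    if t in ("ZERO", "ONE"): return ("ZERO",)
    if t == "CHAR": return ("ONE", a[1]) if a[2] == c else ("ZERO",)
    if t == "ALTS": return ("ALTS", a[1], [bder(c, x) for x in a[2]])
    if t == "SEQ":
        if bnullable(a[2]):
            return ("ALTS", a[1], [("SEQ", [], bder(c, a[2]), a[3]),
                                   fuse(bmkeps(a[2]), bder(c, a[3]))])
        return ("SEQ", a[1], bder(c, a[2]), a[3])
    if t == "STAR":
        return ("SEQ", a[1], fuse([Z], bder(c, a[2])), ("STAR", [], a[2]))

def erase(a):                             # drop bits, for comparisons
    t = a[0]
    if t == "ZERO": return ("0",)
    if t == "ONE": return ("1",)
    if t == "CHAR": return ("c", a[2])
    if t == "ALTS": return ("+",) + tuple(erase(x) for x in a[2])
    if t == "SEQ": return (".", erase(a[2]), erase(a[3]))
    return ("*", erase(a[2]))

def simp(a):                              # flatten + distinct, as above
    t = a[0]
    if t == "SEQ":
        a1, a2 = simp(a[2]), simp(a[3])
        if a1[0] == "ZERO" or a2[0] == "ZERO": return ("ZERO",)
        if a1[0] == "ONE": return fuse(a[1] + a1[1], a2)
        return ("SEQ", a[1], a1, a2)
    if t == "ALTS":
        xs, seen = [], set()
        for x in [simp(y) for y in a[2]]:
            ys = [fuse(x[1], w) for w in x[2]] if x[0] == "ALTS" \
                 else ([] if x[0] == "ZERO" else [x])
            for z in ys:
                if erase(z) not in seen:
                    seen.add(erase(z)); xs.append(z)
        if xs == []: return ("ZERO",)
        return fuse(a[1], xs[0]) if len(xs) == 1 else ("ALTS", a[1], xs)
    return a

def size(r):  # nodes of the bit-erased regular expression
    return 1 + sum(size(x) for x in r[1:] if isinstance(x, tuple))

r = ("*", ("+", ("c", "a"), (".", ("c", "a"), ("c", "a"))))   # (a + aa)*
a, sizes = internalise(r), []
for c in "a" * 12:
    a = simp(bder(c, a))
    sizes.append(size(erase(a)))
print(sizes)                              # [10, 17, 17, ...] -- no blow-up
```

The derivatives reach a fixed point after two characters, mirroring the behaviour of Antimirov's partial derivatives.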


\section{Current Work}

We are currently engaged in two tasks related to this algorithm. The
first task is proving that our simplification rules actually do not
affect the POSIX value that should be generated by the algorithm
according to the specification of a POSIX value, and furthermore to obtain a
much tighter bound on the sizes of derivatives. The result is that our
algorithm should be correct and faster on all inputs. The original
blow-up, as observed in JavaScript, Python and Java, would be excluded
from happening in our algorithm. For this proof we use the theorem prover
Isabelle. Once completed, this result will advance the state-of-the-art:
Sulzmann and Lu wrote in their paper~\cite{Sulzmann2014} about the
bitcoded ``incremental parsing method'' (that is the lexing algorithm
outlined in this section):

\begin{quote}\it
  ``Correctness Claim: We further claim that the incremental parsing
  method in Figure~5 in combination with the simplification steps in
  Figure 6 yields POSIX parse tree [our lexical values]. We have tested this claim
  extensively by using the method in Figure~3 as a reference but yet
  have to work out all proof details.''
\end{quote}

\noindent
We would like to settle this correctness claim. It is relatively
straightforward to establish that after one simplification step, the part of a
nullable derivative that corresponds to a POSIX value remains intact and can
still be collected; in other words, we can show that
%\comment{Double-check....I
%think this is not the case}
%\comment{If i remember correctly, you have proved this lemma.
%I feel this is indeed not true because you might place arbitrary
%bits on the regex r, however if this is the case, did i remember wrongly that
%you proved something like simplification does not affect $\textit{bmkeps}$ results?
%Anyway, i have amended this a little bit so it does not allow arbitrary bits attached
%to a regex. Maybe it works now.}

\begin{center}
	$\textit{bmkeps} \; a = \textit{bmkeps} \; (\textit{bsimp} \; a)
	\qquad \textit{provided $a$ is bnullable}$
\end{center}

\noindent
as this basically comes down to proving that actions like removing the
additional $r$ in $r+r$ do not delete important POSIX information in
a regular expression. The hard part of this proof is to establish that

\begin{center}
	$\textit{blexer\_simp}\;r\;s = \textit{blexer}\;r\;s$
\end{center}
%comment{This is not true either...look at the definion blexer/blexer-simp}

\noindent That is, if we take derivatives of the regular expression $r$,
simplify after each step, and repeat this process until we exhaust the string, we obtain a
regular expression $r''$ ($\textit{LHS}$) that provides the POSIX matching
information, which is exactly the same as the result $r'$ ($\textit{RHS}$) of the
normal derivative algorithm that only takes derivatives repeatedly and does no
simplification at all. This might seem at first glance very unintuitive, as
$r'$ could be exponentially larger than $r''$, but it can be explained in the
following way: we are pruning away the possible matches that are not POSIX.
Since there could be exponentially many
non-POSIX matchings and only one POSIX matching, it
is understandable that our $r''$ can be a lot smaller, yet still provide
the same POSIX value if there is one. This is not as straightforward as the
previous proposition, as the two regular expressions $r'$ and $r''$ might have
become very different. The crucial point is to find the
$\textit{POSIX}$ information of a regular expression and how it is modified,
augmented and propagated
during simplification, in parallel with the regular expression that
has not been simplified in the subsequent derivative operations. To aid this,
we use the helper function retrieve described by Sulzmann and Lu:

\begin{center}
\begin{tabular}{@{}l@{\hspace{2mm}}c@{\hspace{2mm}}l@{}}
  $\textit{retrieve}\,(\textit{ONE}\,bs)\,\Empty$ & $\dn$ & $bs$\\
  $\textit{retrieve}\,(\textit{CHAR}\,bs\,c)\,(\Char\,d)$ & $\dn$ & $bs$\\
  $\textit{retrieve}\,(\textit{ALTS}\,bs\,(a::as))\,(\Left\,v)$ & $\dn$ &
     $bs \,@\, \textit{retrieve}\,a\,v$\\
  $\textit{retrieve}\,(\textit{ALTS}\,bs\,(a::as))\,(\Right\,v)$ & $\dn$ &
  $bs \,@\, \textit{retrieve}\,(\textit{ALTS}\,[]\,as)\,v$\\
  $\textit{retrieve}\,(\textit{SEQ}\,bs\,a_1\,a_2)\,(\Seq\,v_1\,v_2)$ & $\dn$ &
     $bs \,@\,\textit{retrieve}\,a_1\,v_1\,@\, \textit{retrieve}\,a_2\,v_2$\\
  $\textit{retrieve}\,(\textit{STAR}\,bs\,a)\,(\Stars\,[])$ & $\dn$ &
     $bs \,@\, [\S]$\\
  $\textit{retrieve}\,(\textit{STAR}\,bs\,a)\,(\Stars\,(v\!::\!vs))$ & $\dn$ &\\
  \multicolumn{3}{l}{
     \hspace{3cm}$bs \,@\, [\Z] \,@\, \textit{retrieve}\,a\,v\,@\,
                    \textit{retrieve}\,(\textit{STAR}\,[]\,a)\,(\Stars\,vs)$}\\
\end{tabular}
\end{center}

%\comment{Did not read further}\\
This function assembles the bitcode
%that corresponds to a lexical value for how
%the current derivative matches the suffix of the string(the characters that
%have not yet appeared, but will appear as the successive derivatives go on.
%How do we get this "future" information? By the value $v$, which is
%computed by a pass of the algorithm that uses
%$inj$ as described in the previous section).
using information from both the derivative regular expression and the
value. Sulzmann and Lu proposed this function, but did not prove
anything about it. Ausaf and Urban used it to connect the bitcoded
algorithm to the older algorithm by the following equation:

 \begin{center}
	 $\textit{inj} \;r\; c \; v = \textit{decode} \; (\textit{retrieve}\;
	 ((r^\uparrow)\backslash c)\;v)\;r$
\end{center}

\noindent
whereby $r^\uparrow$ stands for the internalised version of $r$. Ausaf
and Urban also used this fact to prove the correctness of the bitcoded
algorithm without simplification. Our purpose of using this, however,
is to establish

\begin{center}
$ \textit{retrieve} \;
a \; v \;=\; \textit{retrieve}  \; (\textit{simp}\,a) \; v'.$
\end{center}

The idea is that using $v'$, a simplified version of $v$ that has gone
through the same simplification steps as $\textit{simp}(a)$, we are able
to extract the bitcode that gives the same parsing information as the
unsimplified one. However, we noticed that constructing such a $v'$
from $v$ is not so straightforward. The point of this is that we might
be able to finally bridge the gap by proving

\begin{center}
$\textit{retrieve} \; (r^\uparrow  \backslash  s) \; v = \;\textit{retrieve} \;
(\textit{simp}(r^\uparrow)  \backslash  s) \; v'$
\end{center}

\noindent
and subsequently

\begin{center}
$\textit{retrieve} \; (r^\uparrow \backslash  s) \; v\; = \; \textit{retrieve} \;
(r^\uparrow  \backslash_{simp}  \, s) \; v'$.
\end{center}
+ − 1777
+ − 1778
\noindent
+ − 1779
The $\textit{LHS}$ of the above equation is the bitcode we want. This
+ − 1780
would prove that our simplified version of regular expression still
+ − 1781
contains all the bitcodes needed. The task here is to find a way to
+ − 1782
compute the correct $v'$.
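
The first, easier property above (that $\textit{bmkeps}$ is unaffected by one simplification step) can at least be tested experimentally on small cases. The following self-contained Python sketch (same kind of home-grown tuple encodings as the earlier blexer description, with an erasure-based $\textit{distinct}$; this is evidence, not a proof) compares $\textit{bmkeps}$ on unsimplified derivatives of $(a+aa)^*$ with $\textit{bmkeps}$ after simplification:

```python
# bmkeps a == bmkeps (simp a) on nullable derivatives, tested on (a+aa)*
Z, S = 0, 1

def fuse(bs, a):
    return a if a[0] == "ZERO" else (a[0], bs + a[1]) + a[2:]

def internalise(r):
    t = r[0]
    if t == "0": return ("ZERO",)
    if t == "1": return ("ONE", [])
    if t == "c": return ("CHAR", [], r[1])
    if t == "+": return ("ALTS", [], [fuse([Z], internalise(r[1])),
                                      fuse([S], internalise(r[2]))])
    if t == ".": return ("SEQ", [], internalise(r[1]), internalise(r[2]))
    if t == "*": return ("STAR", [], internalise(r[1]))

def bnullable(a):
    t = a[0]
    if t == "ALTS": return any(bnullable(x) for x in a[2])
    if t == "SEQ": return bnullable(a[2]) and bnullable(a[3])
    return t in ("ONE", "STAR")

def bmkeps(a):
    t = a[0]
    if t == "ONE": return a[1]
    if t == "ALTS":
        if bnullable(a[2][0]): return a[1] + bmkeps(a[2][0])
        return bmkeps(("ALTS", a[1], a[2][1:]))
    if t == "SEQ": return a[1] + bmkeps(a[2]) + bmkeps(a[3])
    if t == "STAR": return a[1] + [S]

def bder(c, a):
    t = a[0]
    if t in ("ZERO", "ONE"): return ("ZERO",)
    if t == "CHAR": return ("ONE", a[1]) if a[2] == c else ("ZERO",)
    if t == "ALTS": return ("ALTS", a[1], [bder(c, x) for x in a[2]])
    if t == "SEQ":
        if bnullable(a[2]):
            return ("ALTS", a[1], [("SEQ", [], bder(c, a[2]), a[3]),
                                   fuse(bmkeps(a[2]), bder(c, a[3]))])
        return ("SEQ", a[1], bder(c, a[2]), a[3])
    if t == "STAR":
        return ("SEQ", a[1], fuse([Z], bder(c, a[2])), ("STAR", [], a[2]))

def erase(a):
    t = a[0]
    if t == "ZERO": return ("0",)
    if t == "ONE": return ("1",)
    if t == "CHAR": return ("c", a[2])
    if t == "ALTS": return ("+",) + tuple(erase(x) for x in a[2])
    if t == "SEQ": return (".", erase(a[2]), erase(a[3]))
    return ("*", erase(a[2]))

def simp(a):
    t = a[0]
    if t == "SEQ":
        a1, a2 = simp(a[2]), simp(a[3])
        if a1[0] == "ZERO" or a2[0] == "ZERO": return ("ZERO",)
        if a1[0] == "ONE": return fuse(a[1] + a1[1], a2)
        return ("SEQ", a[1], a1, a2)
    if t == "ALTS":
        xs, seen = [], set()
        for x in [simp(y) for y in a[2]]:
            ys = [fuse(x[1], w) for w in x[2]] if x[0] == "ALTS" \
                 else ([] if x[0] == "ZERO" else [x])
            for z in ys:
                if erase(z) not in seen:
                    seen.add(erase(z)); xs.append(z)
        if xs == []: return ("ZERO",)
        return fuse(a[1], xs[0]) if len(xs) == 1 else ("ALTS", a[1], xs)
    return a

r = ("*", ("+", ("c", "a"), (".", ("c", "a"), ("c", "a"))))   # (a + aa)*
a, results = internalise(r), []
for c in "aaa":
    a = bder(c, a)                       # derivatives, never simplified
    results.append(bmkeps(a) == bmkeps(simp(a)))
print(results)                           # [True, True, True]
```

Such experiments cannot replace the Isabelle proof, but they quickly falsify wrong variants of the simplification rules.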
+ − 1783
+ − 1784
The second task is to speed up the more aggressive simplification. Currently
+ − 1785
it is slower than the original naive simplification by Ausaf and Urban (the
+ − 1786
naive version as implemented by Ausaf and Urban of course can ``explode'' in
+ − 1787
some cases). It is therefore not surprising that the speed is also much slower
+ − 1788
than regular expression engines in popular programming languages such as Java
+ − 1789
and Python on most inputs that are linear. For example, just by rewriting the
+ − 1790
example regular expression in the beginning of this report $(a^*)^*\,b$ into
+ − 1791
$a^*\,b$ would eliminate the ambiguity in the matching and make the time
+ − 1792
for matching linear with respect to the input string size. This allows the
+ − 1793
DFA approach to become blindingly fast, and dwarf the speed of our current
+ − 1794
implementation. For example, here is a comparison of Java regex engine
+ − 1795
and our implementation on this example.

\begin{center}
\begin{tabular}{@{}c@{\hspace{0mm}}c@{\hspace{0mm}}c@{}}
\begin{tikzpicture}
\begin{axis}[
    xlabel={$n*1000$},
    x label style={at={(1.05,-0.05)}},
    ylabel={time in secs},
    enlargelimits=false,
    xtick={0,5,...,30},
    xmax=33,
    ymax=9,
    scaled ticks=true,
    axis lines=left,
    width=5cm,
    height=4cm,
    legend entries={Bitcoded Algorithm},
    legend pos=north west,
    legend cell align=left]
\addplot[red,mark=*, mark options={fill=white}] table {bad-scala.data};
\end{axis}
\end{tikzpicture}
  &
\begin{tikzpicture}
\begin{axis}[
    xlabel={$n*1000$},
    x label style={at={(1.05,-0.05)}},
    %ylabel={time in secs},
    enlargelimits=false,
    xtick={0,5,...,30},
    xmax=33,
    ymax=9,
    scaled ticks=false,
    axis lines=left,
    width=5cm,
    height=4cm,
    legend entries={Java},
    legend pos=north west,
    legend cell align=left]
\addplot[cyan,mark=*, mark options={fill=white}] table {good-java.data};
\end{axis}
\end{tikzpicture}\\
\multicolumn{3}{c}{Graphs: Runtime for matching $a^*\,b$ with strings
           of the form $\underbrace{aa..a}_{n}$.}
\end{tabular}
\end{center}


The Java regex engine can match strings of thousands of characters in a few milliseconds,
whereas our current algorithm gets excruciatingly slow on input of this size.
The running time is linear in theory, but this does not appear to be the
case in the actual implementation. So it needs to be explored how to
make our algorithm faster on all inputs. It could be the recursive calls that are
needed to manipulate bits that are causing the slow-down. A possible solution
is to rewrite the recursive functions in tail-recursive form.
Another possibility would be to explore
again the connection to DFAs to speed up the algorithm on
subcalls that are small enough. This is very much work in progress.

\section{Conclusion}

In this PhD-project we are interested in fast algorithms for regular
expression matching. While this seems to be a ``settled'' area, in
fact interesting research questions are popping up as soon as one steps
outside the classic automata theory (for example in terms of what kind
of regular expressions are supported). The reason why it is
interesting for us to look at the derivative approach introduced by
Brzozowski for regular expression matching, and then much further
developed by Sulzmann and Lu, is that derivatives can elegantly deal
with some of the regular expressions that are of interest in ``real
life''. This includes the not-regular expression, written $\neg\,r$
(that is all strings that are not recognised by $r$), but also bounded
regular expressions such as $r^{\{n\}}$ and $r^{\{n..m\}}$. There is
also hope that the derivatives can provide another angle for how to
deal more efficiently with back-references, which are one of the
reasons why regular expression engines in JavaScript, Python and Java
choose to not implement the classic automata approach of transforming
regular expressions into NFAs and then DFAs---because we simply do not
know how such back-references can be represented by DFAs.
We also plan to implement the bitcoded algorithm
in some imperative language like C to see if the inefficiency of the
Scala implementation
is language-specific. To make this research more comprehensive we also plan
to contrast our (faster) version of the bitcoded algorithm with the
Symbolic Regex Matcher, RE2, the Rust Regex Engine, and the static
analysis approach by implementing them in the same language and then comparing
their performance.
+ − 1883
+ − 1884
\bibliographystyle{plain}
+ − 1885
\bibliography{root}
+ − 1886
+ − 1887
+ − 1888
\end{document}