\documentclass[a4paper,UKenglish]{lipics}
\usepackage{graphicx}
\usepackage{pgfplots}
\usepackage{tikz-cd}
%\usepackage{algorithm}
\usepackage{amsmath}
\usepackage[noend]{algpseudocode}
\usepackage{enumitem}
\usepackage{nccmath}
\definecolor{darkblue}{rgb}{0,0,0.6}
\hypersetup{colorlinks=true,allcolors=darkblue}
\newcommand{\comment}[1]%
{{\color{red}$\Rightarrow$}\marginpar{\raggedright\small{\bf\color{red}#1}}}
% \documentclass{article}
%\usepackage[utf8]{inputenc}
%\usepackage[english]{babel}
%\usepackage{listings}
% \usepackage{amsthm}
%\usepackage{hyperref}
% \usepackage[margin=0.5in]{geometry}
%\usepackage{pmboxdraw}
\title{POSIX Regular Expression Matching and Lexing}
\author{Chengsong Tan}
\affil{King's College London\\
London, UK\\
\texttt{chengsong.tan@kcl.ac.uk}}
\authorrunning{Chengsong Tan}
\Copyright{Chengsong Tan}
\newcommand{\dn}{\stackrel{\mbox{\scriptsize def}}{=}}%
\newcommand{\ZERO}{\mbox{\bf 0}}
\newcommand{\ONE}{\mbox{\bf 1}}
\def\lexer{\mathit{lexer}}
\def\mkeps{\mathit{mkeps}}
\def\inj{\mathit{inj}}
\def\Empty{\mathit{Empty}}
\def\Left{\mathit{Left}}
\def\Right{\mathit{Right}}
\def\Stars{\mathit{Stars}}
\def\Char{\mathit{Char}}
\def\Seq{\mathit{Seq}}
\def\Der{\mathit{Der}}
\def\nullable{\mathit{nullable}}
\def\Z{\mathit{Z}}
\def\S{\mathit{S}}
%\theoremstyle{theorem}
%\newtheorem{theorem}{Theorem}
%\theoremstyle{lemma}
%\newtheorem{lemma}{Lemma}
%\newcommand{\lemmaautorefname}{Lemma}
%\theoremstyle{definition}
%\newtheorem{definition}{Definition}
\algnewcommand\algorithmicswitch{\textbf{switch}}
\algnewcommand\algorithmiccase{\textbf{case}}
\algnewcommand\algorithmicassert{\texttt{assert}}
\algnewcommand\Assert[1]{\State \algorithmicassert(#1)}%
% New "environments"
\algdef{SE}[SWITCH]{Switch}{EndSwitch}[1]{\algorithmicswitch\ #1\ \algorithmicdo}{\algorithmicend\ \algorithmicswitch}%
\algdef{SE}[CASE]{Case}{EndCase}[1]{\algorithmiccase\ #1}{\algorithmicend\ \algorithmiccase}%
\algtext*{EndSwitch}%
\algtext*{EndCase}%
\begin{document}
\maketitle
\begin{abstract}
Brzozowski introduced in 1964 a beautifully simple algorithm for
regular expression matching based on the notion of derivatives of
regular expressions. In 2014, Sulzmann and Lu extended this
algorithm to not just give a YES/NO answer for whether or not a
regular expression matches a string, but in case it does also
answers with \emph{how} it matches the string. This is important for
applications such as lexing (tokenising a string). The problem is to
make the algorithm by Sulzmann and Lu fast on all inputs without
breaking its correctness. We have already developed some
simplification rules for this, but have not yet proved that they
preserve the correctness of the algorithm. We also have not yet
looked at extended regular expressions, such as bounded repetitions,
negation and back-references.
\end{abstract}
\section{Introduction}
This PhD-project is about regular expression matching and
lexing. Given the maturity of this topic, the reader might wonder:
Surely, regular expressions must have already been studied to death?
What could possibly be \emph{not} known in this area? And surely all
implemented algorithms for regular expression matching are blindingly
fast?
Unfortunately these preconceptions are not supported by evidence: Take
for example the regular expression $(a^*)^*\,b$ and ask whether
strings of the form $aa..a$ match this regular
expression. Obviously not---the expected $b$ in the last
position is missing. One would expect that modern regular expression
matching engines can find this out very quickly. Alas, if one tries
this example in JavaScript, Python or Java 8 with strings like 28
$a$'s, one discovers that this decision takes around 30 seconds and
takes considerably longer when adding a few more $a$'s, as the graphs
below show:
\begin{center}
\begin{tabular}{@{}c@{\hspace{0mm}}c@{\hspace{0mm}}c@{}}
\begin{tikzpicture}
\begin{axis}[
xlabel={$n$},
x label style={at={(1.05,-0.05)}},
ylabel={time in secs},
enlargelimits=false,
xtick={0,5,...,30},
xmax=33,
ymax=35,
ytick={0,5,...,30},
scaled ticks=false,
axis lines=left,
width=5cm,
height=4cm,
legend entries={JavaScript},
legend pos=north west,
legend cell align=left]
\addplot[red,mark=*, mark options={fill=white}] table {re-js.data};
\end{axis}
\end{tikzpicture}
&
\begin{tikzpicture}
\begin{axis}[
xlabel={$n$},
x label style={at={(1.05,-0.05)}},
%ylabel={time in secs},
enlargelimits=false,
xtick={0,5,...,30},
xmax=33,
ymax=35,
ytick={0,5,...,30},
scaled ticks=false,
axis lines=left,
width=5cm,
height=4cm,
legend entries={Python},
legend pos=north west,
legend cell align=left]
\addplot[blue,mark=*, mark options={fill=white}] table {re-python2.data};
\end{axis}
\end{tikzpicture}
&
\begin{tikzpicture}
\begin{axis}[
xlabel={$n$},
x label style={at={(1.05,-0.05)}},
%ylabel={time in secs},
enlargelimits=false,
xtick={0,5,...,30},
xmax=33,
ymax=35,
ytick={0,5,...,30},
scaled ticks=false,
axis lines=left,
width=5cm,
height=4cm,
legend entries={Java 8},
legend pos=north west,
legend cell align=left]
\addplot[cyan,mark=*, mark options={fill=white}] table {re-java.data};
\end{axis}
\end{tikzpicture}\\
\multicolumn{3}{c}{Graphs: Runtime for matching $(a^*)^*\,b$ with strings
of the form $\underbrace{aa..a}_{n}$.}
\end{tabular}
\end{center}
\noindent These are clearly abysmal and possibly surprising results. One
would expect these systems to do much better than that---after all,
given a DFA and a string, deciding whether the string is matched by this
DFA should be linear in the length of the string.
Admittedly, the regular expression $(a^*)^*\,b$ is carefully chosen to
exhibit this exponential behaviour. Unfortunately, such regular
expressions are not just a few outliers. They are actually
frequent enough to have a separate name created for
them---\emph{evil regular expressions}. In empirical work, Davis et al.\
report that they have found thousands of such evil regular expressions
in the JavaScript and Python ecosystems \cite{Davis18}.
This exponential blowup in matching algorithms sometimes causes
considerable grief in real life: for example on 20 July 2016 one evil
regular expression brought the webpage
\href{http://stackexchange.com}{Stack Exchange} to its
knees.\footnote{\url{https://stackstatus.net/post/147710624694/outage-postmortem-july-20-2016}}
In this instance, a regular expression intended to just trim white
spaces from the beginning and the end of a line actually consumed
massive amounts of CPU-resources---causing web servers to
grind to a halt. This happened when a post with 20,000 white spaces was
submitted, but importantly the white spaces were neither at the
beginning nor at the end.
As a result, the regular expression matching
engine needed to backtrack over many choices.
In this example, the time needed to process the string was not
the classical exponential case, but rather $O(n^2)$
with respect to the string length. Nevertheless, this was enough for the
home page of Stack Exchange to respond so slowly to
the load balancer that it assumed there must be some
attack and therefore stopped the servers from responding to
requests. This made the whole site unavailable.
Another, very recent example is the Cloudflare outage on 2 July 2019.
A poorly written
regular expression exhibited exponential behaviour
and exhausted the CPUs that serve HTTP traffic,
resulting in a 27-minute outage. The regular expression contained
adjacent Kleene stars, which are the source of the exponential number
of backtracking possibilities. Although this outage had a series of
causes rooted in different vulnerabilities within Cloudflare's system,
the heart of the problem lay within the regular expression.
Such outages could be prevented if the matching
algorithm were guaranteed to be fast.\footnote{\url{https://blog.cloudflare.com/details-of-the-cloudflare-outage-on-july-2-2019/}}
The underlying problem is
that many ``real life'' regular expression matching engines do not use
DFAs for matching.
This is because they support regular expressions that
are not covered by the classical automata theory, and in this more
general setting there are quite a few research questions still
unanswered and fast algorithms still need to be developed (for example
how to treat bounded repetitions, negation and back-references
efficiently).
%question: dfa can have exponential states. isn't this the actual reason why they do not use dfas?
%how do they avoid dfas exponential states if they use them for fast matching?
There is also another under-researched problem to do with regular
expressions and lexing, i.e.~the process of breaking up strings into
sequences of tokens according to some regular expressions. In this
setting one is not just interested in whether or not a regular
expression matches a string, but also in \emph{how}. Consider for example a regular expression
$r_{key}$ for recognising keywords such as \textit{if}, \textit{then}
and so on; and a regular expression $r_{id}$ for recognising
identifiers (say, a single character followed by characters or
numbers). One can then form the compound regular expression
$(r_{key} + r_{id})^*$ and use it to tokenise strings. But then how
should the string \textit{iffoo} be tokenised? It could be tokenised
as a keyword followed by an identifier, or the entire string as a
single identifier. Similarly, how should the string \textit{if} be
tokenised? Both regular expressions, $r_{key}$ and $r_{id}$, would
``fire''---so is it an identifier or a keyword? While in applications
there is a well-known strategy to decide these questions, called POSIX
matching, only relatively recently have precise definitions of what POSIX
matching actually means been formalised
\cite{AusafDyckhoffUrban2016,OkuiSuzuki2010,Vansummeren2006}.
Such a definition has also been given by Sulzmann and Lu \cite{Sulzmann2014}, but the
corresponding correctness proof turned out to be faulty \cite{AusafDyckhoffUrban2016}.
Roughly, POSIX matching means matching the longest initial substring.
In the case of a tie, the initial sub-match is chosen according to some priorities attached to the
regular expressions (e.g.~keywords have a higher priority than
identifiers). This sounds rather simple, but according to Grathwohl et
al.~\cite[Page 36]{CrashCourse2014} this is not the case. They wrote:
\begin{quote}
\it{}``The POSIX strategy is more complicated than the greedy because of
the dependence on information about the length of matched strings in the
various subexpressions.''
\end{quote}
\noindent
This is also supported by evidence collected by Kuklewicz
\cite{Kuklewicz} who noticed that a number of POSIX regular expression
matchers calculate incorrect results.
Our focus is on an algorithm introduced by Sulzmann and Lu in 2014 for
regular expression matching according to the POSIX strategy
\cite{Sulzmann2014}. Their algorithm is based on an older algorithm by
Brzozowski from 1964 where he introduced the notion of derivatives of
regular expressions~\cite{Brzozowski1964}. We shall briefly explain
this algorithm next.
\section{The Algorithm by Brzozowski based on Derivatives of Regular
Expressions}
Suppose (basic) regular expressions are given by the following grammar:
\[ r ::= \ZERO \mid \ONE
\mid c
\mid r_1 \cdot r_2
\mid r_1 + r_2
\mid r^*
\]
\noindent
The intended meaning of the constructors is as follows: $\ZERO$
cannot match any string, $\ONE$ can match the empty string, the
character regular expression $c$ can match the character $c$, and so
on.
The ingenious contribution by Brzozowski is the notion of
\emph{derivatives} of regular expressions. The idea behind this
notion is as follows: suppose a regular expression $r$ can match a
string of the form $c\!::\! s$ (that is a list of characters starting
with $c$), what does the regular expression look like that can match
just $s$? Brzozowski gave a neat answer to this question. He started
with the definition of $nullable$:
\begin{center}
\begin{tabular}{lcl}
$\nullable(\ZERO)$ & $\dn$ & $\mathit{false}$ \\
$\nullable(\ONE)$ & $\dn$ & $\mathit{true}$ \\
$\nullable(c)$ & $\dn$ & $\mathit{false}$ \\
$\nullable(r_1 + r_2)$ & $\dn$ & $\nullable(r_1) \vee \nullable(r_2)$ \\
$\nullable(r_1\cdot r_2)$ & $\dn$ & $\nullable(r_1) \wedge \nullable(r_2)$ \\
$\nullable(r^*)$ & $\dn$ & $\mathit{true}$ \\
\end{tabular}
\end{center}
This function simply tests whether the empty string is in $L(r)$.
He then defined
the following operation on regular expressions, written
$r\backslash c$ (the derivative of $r$ w.r.t.~the character $c$):
\begin{center}
\begin{tabular}{lcl}
$\ZERO \backslash c$ & $\dn$ & $\ZERO$\\
$\ONE \backslash c$ & $\dn$ & $\ZERO$\\
$d \backslash c$ & $\dn$ &
$\mathit{if} \;c = d\;\mathit{then}\;\ONE\;\mathit{else}\;\ZERO$\\
$(r_1 + r_2)\backslash c$ & $\dn$ & $r_1 \backslash c \,+\, r_2 \backslash c$\\
$(r_1 \cdot r_2)\backslash c$ & $\dn$ & $\mathit{if} \, nullable(r_1)$\\
& & $\mathit{then}\;(r_1\backslash c) \cdot r_2 \,+\, r_2\backslash c$\\
& & $\mathit{else}\;(r_1\backslash c) \cdot r_2$\\
$(r^*)\backslash c$ & $\dn$ & $(r\backslash c) \cdot r^*$\\
\end{tabular}
\end{center}
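
These definitions translate directly into a functional programming
language. The following is a minimal sketch in Scala (the datatype and
function names are our own choice and not prescribed by Brzozowski's
definitions):
\begin{verbatim}
// basic regular expressions as a Scala datatype
abstract class Rexp
case object ZERO extends Rexp
case object ONE extends Rexp
case class CHAR(c: Char) extends Rexp
case class ALT(r1: Rexp, r2: Rexp) extends Rexp
case class SEQ(r1: Rexp, r2: Rexp) extends Rexp
case class STAR(r: Rexp) extends Rexp

// nullable: can r match the empty string?
def nullable(r: Rexp): Boolean = r match {
  case ZERO => false
  case ONE => true
  case CHAR(_) => false
  case ALT(r1, r2) => nullable(r1) || nullable(r2)
  case SEQ(r1, r2) => nullable(r1) && nullable(r2)
  case STAR(_) => true
}

// der: the derivative of r w.r.t. the character c
def der(c: Char, r: Rexp): Rexp = r match {
  case ZERO => ZERO
  case ONE => ZERO
  case CHAR(d) => if (c == d) ONE else ZERO
  case ALT(r1, r2) => ALT(der(c, r1), der(c, r2))
  case SEQ(r1, r2) =>
    if (nullable(r1)) ALT(SEQ(der(c, r1), r2), der(c, r2))
    else SEQ(der(c, r1), r2)
  case STAR(r1) => SEQ(der(c, r1), STAR(r1))
}
\end{verbatim}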
%Assuming the classic notion of a
%\emph{language} of a regular expression, written $L(\_)$, t
\noindent
The main property of the derivative operation is that
\begin{center}
$c\!::\!s \in L(r)$ holds
if and only if $s \in L(r\backslash c)$.
\end{center}
\noindent
For us the main advantage is that derivatives can be
straightforwardly implemented in any functional programming language,
and are easily definable and reasoned about in theorem provers---the
definitions just consist of inductive datatypes and simple recursive
functions. Moreover, the notion of derivatives can be easily
generalised to cover extended regular expression constructors such as
the not-regular expression, written $\neg\,r$, or bounded repetitions
(for example $r^{\{n\}}$ and $r^{\{n..m\}}$), which cannot be so
straightforwardly realised within the classic automata approach.
For the moment however, we focus only on the usual basic regular expressions.
Now, if we want to find out whether a string $s$ matches a regular
expression $r$, we build the derivatives of $r$ w.r.t.\ (in succession)
all the characters of the string $s$. Finally, we test whether the
resulting regular expression can match the empty string. If yes, then
$r$ matches $s$; if no, it does not. To implement this idea
we can generalise the derivative operation to strings like this:
\begin{center}
\begin{tabular}{lcl}
$r \backslash (c\!::\!s) $ & $\dn$ & $(r \backslash c) \backslash s$ \\
$r \backslash [\,] $ & $\dn$ & $r$
\end{tabular}
\end{center}
\noindent
and then define the regular-expression matching algorithm as:
\[
match\;s\;r \;\dn\; nullable(r\backslash s)
\]
\noindent
Graphically, this algorithm can be illustrated as follows:
\begin{equation}\label{graph:*}
\begin{tikzcd}
r_0 \arrow[r, "\backslash c_0"] & r_1 \arrow[r, "\backslash c_1"] & r_2 \arrow[r, dashed] & r_n \arrow[r,"\textit{nullable}?"] & \;\textrm{YES}/\textrm{NO}
\end{tikzcd}
\end{equation}
\noindent
where we start with a regular expression $r_0$, build successive
derivatives until we exhaust the string and then use \textit{nullable}
to test whether the result can match the empty string. It can be
relatively easily shown that this matcher is correct (that is given
an $s = c_0...c_{n-1}$ and an $r_0$, it generates YES if and only if $s \in L(r_0)$).
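
The matcher itself can be sketched in Scala as follows (building on the
Rexp datatype, nullable and der from the sketch above; the evil regular
expression from the introduction serves as a small usage example):
\begin{verbatim}
// derivative w.r.t. a string, and the matcher
def ders(s: List[Char], r: Rexp): Rexp = s match {
  case Nil => r
  case c :: cs => ders(cs, der(c, r))
}

def matcher(r: Rexp, s: String): Boolean =
  nullable(ders(s.toList, r))

// example: (a*)* b from the introduction
val EVIL = SEQ(STAR(STAR(CHAR('a'))), CHAR('b'))
println(matcher(EVIL, "a" * 5))         // false: the final b is missing
println(matcher(EVIL, "a" * 5 + "b"))   // true
\end{verbatim}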
\section{Values and the Algorithm by Sulzmann and Lu}
One limitation, however, of Brzozowski's algorithm is that it only
produces a YES/NO answer for whether a string is being matched by a
regular expression. Sulzmann and Lu~\cite{Sulzmann2014} extended this
algorithm to allow generation of an actual matching, called a
\emph{value} or sometimes also \emph{lexical value}. Values and regular expressions correspond to each
other as illustrated in the following table:
\begin{center}
\begin{tabular}{c@{\hspace{20mm}}c}
\begin{tabular}{@{}rrl@{}}
\multicolumn{3}{@{}l}{\textbf{Regular Expressions}}\medskip\\
$r$ & $::=$ & $\ZERO$\\
& $\mid$ & $\ONE$ \\
& $\mid$ & $c$ \\
& $\mid$ & $r_1 \cdot r_2$\\
& $\mid$ & $r_1 + r_2$ \\
\\
& $\mid$ & $r^*$ \\
\end{tabular}
&
\begin{tabular}{@{\hspace{0mm}}rrl@{}}
\multicolumn{3}{@{}l}{\textbf{Values}}\medskip\\
$v$ & $::=$ & \\
& & $\Empty$ \\
& $\mid$ & $\Char(c)$ \\
& $\mid$ & $\Seq\,v_1\, v_2$\\
& $\mid$ & $\Left(v)$ \\
& $\mid$ & $\Right(v)$ \\
& $\mid$ & $\Stars\,[v_1,\ldots\,v_n]$ \\
\end{tabular}
\end{tabular}
\end{center}
\noindent
No value corresponds to $\ZERO$; $\Empty$ corresponds to
$\ONE$; $\Char$ to the character regular expression; $\Seq$ to the
sequence regular expression and so on. The idea of values is to encode
a kind of lexical value for how the sub-parts of a regular expression match
the sub-parts of a string. To see this, suppose there is a \emph{flatten} operation, written
$|v|$, that extracts the underlying string of a value
$v$. For example, $|\Seq \, (\Char\,x) \, (\Char\,y)|$ is the string $xy$. Using flatten, we can describe how values
encode parse trees: $\Seq\,v_1\, v_2$ encodes a tree with two children nodes
that tells how the string $|v_1| \,@\,
|v_2|$ matches the regex $r_1 \cdot r_2$, whereby $r_1$ matches the
substring $|v_1|$ and, respectively, $r_2$ matches the substring
$|v_2|$. Exactly how these two are matched is contained in the
children nodes $v_1$ and $v_2$ of the parent $\textit{Seq}$.
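
In the Scala sketch from earlier, values and the flatten operation could
be represented as follows (the constructor names, for example Chr and
Sequ, are again our own choice, picked so as not to clash with the Rexp
constructors):
\begin{verbatim}
// values record how a regular expression matched a string
abstract class Val
case object Empty extends Val
case class Chr(c: Char) extends Val
case class Sequ(v1: Val, v2: Val) extends Val
case class Left(v: Val) extends Val
case class Right(v: Val) extends Val
case class Stars(vs: List[Val]) extends Val

// flatten |v|: the underlying string of a value
def flatten(v: Val): String = v match {
  case Empty => ""
  case Chr(c) => c.toString
  case Left(v1) => flatten(v1)
  case Right(v1) => flatten(v1)
  case Sequ(v1, v2) => flatten(v1) ++ flatten(v2)
  case Stars(vs) => vs.map(flatten).mkString
}
\end{verbatim}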
To give a concrete example of how values work, consider the string $xy$
and the regular expression $(x + (y + xy))^*$. We can view this regular
expression as a tree and if the string $xy$ is matched by two Star
``iterations'', then the $x$ is matched by the left-most alternative in
this tree and the $y$ by the left alternative nested inside the right alternative. This suggests
recording this matching as
\begin{center}
$\Stars\,[\Left\,(\Char\,x), \Right(\Left(\Char\,y))]$
\end{center}
\noindent
where $\Stars \; [\ldots]$ records all the
iterations; and $\Left$, respectively $\Right$, which
alternative is used. The value for
matching $xy$ in a single ``iteration'', i.e.~the POSIX value,
would look as follows
\begin{center}
$\Stars\,[\Seq\,(\Char\,x)\,(\Char\,y)]$
\end{center}
\noindent
where $\Stars$ has only a single-element list for the single iteration
and $\Seq$ indicates that $xy$ is matched by a sequence regular
expression.
The contribution of Sulzmann and Lu is an extension of Brzozowski's
algorithm by a second phase (the first phase being building successive
derivatives---see \eqref{graph:*}). In this second phase, a POSIX value
is generated in case the regular expression matches the string.
Pictorially, the Sulzmann and Lu algorithm is as follows:
\begin{ceqn}
\begin{equation}\label{graph:2}
\begin{tikzcd}
r_0 \arrow[r, "\backslash c_0"] \arrow[d] & r_1 \arrow[r, "\backslash c_1"] \arrow[d] & r_2 \arrow[r, dashed] \arrow[d] & r_n \arrow[d, "mkeps" description] \\
v_0 & v_1 \arrow[l,"inj_{r_0} c_0"] & v_2 \arrow[l, "inj_{r_1} c_1"] & v_n \arrow[l, dashed]
\end{tikzcd}
\end{equation}
\end{ceqn}
\noindent
For convenience, we shall employ the following notation: the regular expression we
start with is $r_0$, and the given string $s$ is composed of characters $c_0 c_1
\ldots c_{n-1}$. In the first phase, we build the derivatives $r_1$, $r_2$, \ldots according to
the characters $c_0$, $c_1$, \ldots until we exhaust the string and
obtain the derivative $r_n$. We test whether this derivative is
$\textit{nullable}$ or not. If not, we know the string does not match
$r_0$ and no value needs to be generated. If yes, we start building the
values incrementally by \emph{injecting} back the characters into
the values $v_n, \ldots, v_0$. For the first value $v_n$, we call the function
$\textit{mkeps}$, which builds the parse tree for how the empty string
has been matched by the (nullable) regular expression $r_n$. This function is defined
as
\begin{center}
\begin{tabular}{lcl}
$\mkeps(\ONE)$ & $\dn$ & $\Empty$ \\
$\mkeps(r_{1}+r_{2})$ & $\dn$
& \textit{if} $\nullable(r_{1})$\\
& & \textit{then} $\Left(\mkeps(r_{1}))$\\
& & \textit{else} $\Right(\mkeps(r_{2}))$\\
$\mkeps(r_1\cdot r_2)$ & $\dn$ & $\Seq\,(\mkeps\,r_1)\,(\mkeps\,r_2)$\\
$\mkeps(r^*)$ & $\dn$ & $\Stars\,[]$
\end{tabular}
\end{center}
\noindent There are no cases for $\ZERO$ and $c$, since
these regular expressions cannot match the empty string. Note
also that in case of alternatives we give preference to the
regular expression on the left-hand side. This will become
important later on.
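
In the Scala sketch, mkeps might be written like this (note how the ALT
clause prefers the left alternative):
\begin{verbatim}
// mkeps: how a nullable regular expression matches the empty string
def mkeps(r: Rexp): Val = r match {
  case ONE => Empty
  case ALT(r1, r2) =>
    if (nullable(r1)) Left(mkeps(r1)) else Right(mkeps(r2))
  case SEQ(r1, r2) => Sequ(mkeps(r1), mkeps(r2))
  case STAR(_) => Stars(Nil)
}
\end{verbatim}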
After this, we inject back the characters one by one in order to build
the parse tree $v_i$ for how the regex $r_i$ matches the string $s_i$
($s_i = c_i \ldots c_{n-1}$ ) from the previous parse tree $v_{i+1}$. After
injecting back $n$ characters, we get the parse tree for how $r_0$
matches $s$. For this Sulzmann and Lu defined a function that reverses
the ``chopping off'' of characters during the derivative phase. The
corresponding function is called $\textit{inj}$; it takes three
arguments: the first one is the regular expression ${r_{i-1}}$ before the
character is chopped off, the second is the character ${c_{i-1}}$ that
we want to inject, and the third argument is the value
${v_i}$ into which one wants to inject the character (it
corresponds to the regular expression after the character has been
chopped off). The result of this function is a new value. The definition
of $\textit{inj}$ is as follows:
\begin{center}
\begin{tabular}{l@{\hspace{1mm}}c@{\hspace{1mm}}l}
  $\textit{inj}\,(c)\,c\,\Empty$ & $\dn$ & $\Char\,c$\\
  $\textit{inj}\,(r_1 + r_2)\,c\,\Left(v)$ & $\dn$ & $\Left(\textit{inj}\,r_1\,c\,v)$\\
  $\textit{inj}\,(r_1 + r_2)\,c\,\Right(v)$ & $\dn$ & $\Right(\textit{inj}\,r_2\,c\,v)$\\
  $\textit{inj}\,(r_1 \cdot r_2)\,c\,\Seq(v_1,v_2)$ & $\dn$ & $\Seq(\textit{inj}\,r_1\,c\,v_1,v_2)$\\
  $\textit{inj}\,(r_1 \cdot r_2)\,c\,\Left(\Seq(v_1,v_2))$ & $\dn$ & $\Seq(\textit{inj}\,r_1\,c\,v_1,v_2)$\\
  $\textit{inj}\,(r_1 \cdot r_2)\,c\,\Right(v)$ & $\dn$ & $\Seq(\mkeps(r_1),\textit{inj}\,r_2\,c\,v)$\\
  $\textit{inj}\,(r^*)\,c\,\Seq(v,\Stars\,vs)$ & $\dn$ & $\Stars((\textit{inj}\,r\,c\,v)\,::\,vs)$\\
\end{tabular}
\end{center}
\noindent This definition is by recursion on the ``shape'' of regular
expressions and values. To understand this definition better, consider
the situation when we build the derivative of the regular expression $r_{i-1}$.
For this we chop off a character from $r_{i-1}$ to form $r_i$. This leaves a
``hole'' in $r_i$ and its corresponding value $v_i$.
To calculate $v_{i-1}$, we need to
locate where that hole is and fill it.
We can find this location by
comparing $r_{i-1}$ and $v_i$. For instance, if $r_{i-1}$ is of shape
$r_a \cdot r_b$ and $v_i$ is of shape $\Left(\Seq(v_1,v_2))$, we know immediately that
$r_a$ must be nullable and hence
%
\[ (r_a \cdot r_b)\backslash c = (r_a\backslash c) \cdot r_b \,+\, r_b\backslash c,\]
\noindent
because otherwise, with $r_a$ not nullable,
\[ (r_a \cdot r_b)\backslash c = (r_a\backslash c) \cdot r_b\]
\noindent
and the value $v_i$ would have to be of shape $\Seq(\ldots)$, contradicting the fact that
$v_i$ is actually of shape $\Left(\ldots)$. Furthermore, since $v_i$ is of shape
$\Left(\ldots)$ instead of $\Right(\ldots)$, we know that the left
branch of \[ (r_a \cdot r_b)\backslash c =
\underline{(r_a\backslash c) \cdot r_b} \,+\, r_b\backslash c\] (underlined)
is taken instead of the right one. This means $c$ was chopped off
from $r_a$ rather than $r_b$.
We have therefore found out
that the hole is in $r_a$. So we recursively call $\inj\,
r_a\,c\,v_1$ to fill that hole in $v_1$. After injection, the value
$v_{i-1}$ for $r_{i-1} = r_a \cdot r_b$ is then $\Seq\,(\inj\,r_a\,c\,v_1)\,v_2$.
Other clauses can be understood in a similar way.
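
A Scala rendering of inj, together with the resulting two-phase lexer,
might look as follows (a sketch building on the definitions from the
previous code snippets; the name lex is our own):
\begin{verbatim}
// inj: reverses one derivative step on the level of values
def inj(r: Rexp, c: Char, v: Val): Val = (r, v) match {
  case (CHAR(d), Empty) => Chr(d)
  case (ALT(r1, _), Left(v1)) => Left(inj(r1, c, v1))
  case (ALT(_, r2), Right(v2)) => Right(inj(r2, c, v2))
  case (SEQ(r1, _), Sequ(v1, v2)) => Sequ(inj(r1, c, v1), v2)
  case (SEQ(r1, _), Left(Sequ(v1, v2))) => Sequ(inj(r1, c, v1), v2)
  case (SEQ(r1, r2), Right(v2)) => Sequ(mkeps(r1), inj(r2, c, v2))
  case (STAR(r1), Sequ(v1, Stars(vs))) => Stars(inj(r1, c, v1) :: vs)
}

// the lexer: derivatives first, then injections on the way back
def lex(r: Rexp, s: List[Char]): Option[Val] = s match {
  case Nil => if (nullable(r)) Some(mkeps(r)) else None
  case c :: cs => lex(der(c, r), cs).map(v => inj(r, c, v))
}
\end{verbatim}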
%\comment{Other word: insight?}
The following example gives an insight into $\textit{inj}$'s effect
and how Sulzmann and Lu's algorithm works as a whole.
Suppose we have the
regular expression $((((a+b)+ab)+c)+abc)^*$ and want to match it against
the string $abc$ (when $abc$ is written as a regular expression, the
standard way of expressing it is $a \cdot (b \cdot c)$; we omit
the parentheses and dots here for readability).
This algorithm returns a POSIX value, which means it
will go for the longest matching, i.e.~it should match the string
$abc$ in one star iteration, using the longest alternative $abc$ in the
sub-expression $((((a+b)+ab)+c)+abc)$ (we use $r$ to denote this sub-expression
for conciseness).
Before $\textit{inj}$ comes into play,
our lexer first builds the derivatives w.r.t.~the string $abc$ (we simplify some regular expressions like
$\ZERO \cdot b$ to $\ZERO$ for conciseness; we also omit parentheses if
they are clear from the context):
%Similarly, we allow
%$\textit{ALT}$ to take a list of regular expressions as an argument
%instead of just 2 operands to reduce the nested depth of
%$\textit{ALT}$
\begin{center}
\begin{tabular}{lcl}
$r^*$ & $\xrightarrow{\backslash a}$ & $r_1 = (\ONE+\ZERO+\ONE \cdot b + \ZERO + \ONE \cdot b \cdot c) \cdot r^*$\\
& $\xrightarrow{\backslash b}$ & $r_2 = (\ZERO+\ZERO+\ONE \cdot \ONE + \ZERO + \ONE \cdot \ONE \cdot c) \cdot r^* +(\ZERO+\ONE+\ZERO + \ZERO + \ZERO) \cdot r^*$\\
& $\xrightarrow{\backslash c}$ & $r_3 = ((\ZERO+\ZERO+\ZERO + \ZERO + \ONE \cdot \ONE \cdot \ONE) \cdot r^* + (\ZERO+\ZERO+\ZERO + \ONE + \ZERO) \cdot r^*) + $\\
& & $\phantom{r_3 = (} ((\ZERO+\ONE+\ZERO + \ZERO + \ZERO) \cdot r^* + (\ZERO+\ZERO+\ZERO + \ONE + \ZERO) \cdot r^* )$
\end{tabular}
\end{center}
\noindent
Since $r_3$ is nullable, we can call $\textit{mkeps}$
to construct a parse tree for how $r_3$ matches the empty string.
$\textit{mkeps}$ gives the following value $v_3$:
\begin{center}
$\Left(\Left(\Seq(\Right(\Seq(\Empty, \Seq(\Empty,\Empty))), \Stars [])))$
\end{center}
The outer $\Left(\Left(\ldots))$ tells us the leftmost nullable part of $r_3$ (underlined):
\begin{center}
$( \underline{(\ZERO+\ZERO+\ZERO+ \ZERO+ \ONE \cdot \ONE \cdot \ONE) \cdot r^*} + (\ZERO+\ZERO+\ZERO + \ONE + \ZERO)
\cdot r^*) +((\ZERO+\ONE+\ZERO + \ZERO + \ZERO) \cdot r^*+(\ZERO+\ZERO+\ZERO + \ONE + \ZERO) \cdot r^* ).$
\end{center}
\noindent
Note that the leftmost location of the term $(\ZERO+\ZERO+\ZERO + \ZERO + \ONE \cdot \ONE \cdot
\ONE) \cdot r^*$ (which corresponds to the initial sub-match $abc$) allows
$\textit{mkeps}$ to pick it up, because $\textit{mkeps}$ is defined to always choose the
left alternative when it is nullable. In this example, $abc$ is therefore
preferred over $a$ or $ab$. This $\Left(\Left(\ldots))$ location is
naturally generated by two applications of the splitting clause
\begin{center}
$(r_1 \cdot r_2)\backslash c = (r_1\backslash c) \cdot r_2 \,+\, r_2\backslash c \qquad (\text{when $r_1$ is nullable}).$
\end{center}
\noindent
By this clause, we put $(r_1 \backslash c) \cdot r_2$ at the
front and $r_2 \backslash c$ at the back. This
allows $\textit{mkeps}$ to always pick up, among the two matches, the one with the longer
initial sub-match. Removing the outside $\Left(\Left(...))$, the inside
sub-value
\begin{center}
$\Seq(\Right(\Seq(\Empty, \Seq(\Empty, \Empty))), \Stars [])$
\end{center}
\noindent
tells us how the empty string $[]$ is matched with $(\ZERO+\ZERO+\ZERO + \ZERO + \ONE \cdot
\ONE \cdot \ONE) \cdot r^*$. We match $[]$ by a sequence of two nullable regular
expressions. The first one is an alternative, where we take the rightmost
branch---the one whose language contains the empty string. The second
nullable regular expression is a Kleene star; $\Stars$ tells us that it
matches the empty string by zero iterations.
Now $\textit{inj}$ injects characters back and incrementally
builds a parse tree based on $v_3$. Using the value $v_3$, the character
$c$, and the regular expression $r_2$, we can recover how $r_2$ matched
the string $[c]$: $\textit{inj} \; r_2 \; c \; v_3$ gives us
\begin{center}
$v_2 = \Left(\Seq(\Right(\Seq(\Empty, \Seq(\Empty, \Char\,c))), \Stars\,[])),$
\end{center}
which tells us how $r_2$ matched $[c]$. After this we inject back the character $b$, and get
\begin{center}
$v_1 = \Seq(\Right(\Seq(\Empty, \Seq(\Char\,b, \Char\,c))), \Stars\,[])$
\end{center}
for how
\begin{center}
$r_1= (\ONE+\ZERO+\ONE \cdot b + \ZERO + \ONE \cdot b \cdot c) \cdot r^*$
\end{center}
matched the string $bc$ before it split into two pieces.
Finally, after injecting character $a$ back to $v_1$,
we get the parse tree
\begin{center}
$v_0= \Stars\,[\Right(\Seq(\Char\,a, \Seq(\Char\,b, \Char\,c)))]$
\end{center}
for how $r$ matched $abc$. This completes the algorithm.
%We omit the details of injection function, which is provided by Sulzmann and Lu's paper \cite{Sulzmann2014}.
Readers might have noticed that the parse tree information
is actually already available when doing derivatives.
For example, immediately after the operation $\backslash a$ we know that if we want to match a string that starts with $a$,
we can either take the initial match to be
\begin{center}
\begin{enumerate}
\item[1)] just $a$ or
\item[2)] string $ab$ or
\item[3)] string $abc$.
\end{enumerate}
\end{center}
\noindent
In order to differentiate between these choices, we just need to
remember their positions---$a$ is on the left, $ab$ is in the middle,
and $abc$ is on the right. Which one of these alternatives is chosen
later does not affect their relative position, because our algorithm does
not change this order. If this parsing information can be determined once and
does not change under later derivatives, there is no point in
computing it twice. This leads to an optimisation: if we
store the parse tree information inside the regular expression,
update it when we take a derivative, and collect the information
once we have finished with the derivatives and call $\textit{mkeps}$ to decide which
branch is POSIX, then we can generate the parse tree in one pass, instead of
doing the remaining $n$ injections. This leads to Sulzmann and Lu's novel
idea of using bitcodes in derivatives.
In the next section, we shall focus on the bitcoded algorithm and the
process of simplification of regular expressions. This is needed in
order to obtain \emph{fast} versions of Brzozowski's and Sulzmann
and Lu's algorithms. This is where the PhD-project aims to advance the
state-of-the-art.
\section{Simplification of Regular Expressions}
Using bitcodes to guide parsing is not a novel idea. It was applied to
context-free grammars and then adapted by Nielsen and Henglein for
efficient regular expression parsing using DFAs~\cite{nielson11bcre}.
Sulzmann and Lu took this idea of bitcodes a step further by integrating
bitcodes into derivatives. The reason why we want to use bitcodes in
this project is that we want to introduce more aggressive
simplifications in order to keep the size of derivatives small
throughout. This is because the main drawback of building successive
derivatives according to Brzozowski's definition is that they can grow
very quickly in size. This is mainly due to the fact that the derivative
operation often generates ``useless'' $\ZERO$s and $\ONE$s in
derivatives. As a result, if implemented naively both algorithms by
Brzozowski and by Sulzmann and Lu are excruciatingly slow. For example
when starting with the regular expression $(a + aa)^*$ and building 12
successive derivatives w.r.t.~the character $a$, one obtains a
derivative regular expression with more than 8000 nodes (when viewed as
a tree). Operations like derivative and $\nullable$ need to traverse
such trees and consequently the bigger the size of the derivative the
slower the algorithm.
Fortunately, one can simplify regular expressions after each derivative
step. Various simplifications of regular expressions are possible, such
as the simplifications of $\ZERO + r$, $r + \ZERO$, $\ONE\cdot r$, $r
\cdot \ONE$, and $r + r$ to just $r$. These simplifications do not
affect the answer for whether a regular expression matches a string or
not, but fortunately also do not affect the POSIX strategy of how
regular expressions match strings---although the latter is much harder
to establish. Some initial results in this regard have been
obtained in \cite{AusafDyckhoffUrban2016}.
Unfortunately, the simplification rules outlined above are not
sufficient to prevent an explosion for all regular expressions. We
believe a tighter bound can be achieved that prevents an explosion in
all cases. Such a tighter bound is suggested by work of Antimirov who
proved that (partial) derivatives can be bound by the number of
characters contained in the initial regular expression
\cite{Antimirov95}. He defined the \emph{partial derivatives} of regular
expressions as follows:
\begin{center}
\begin{tabular}{lcl}
 $\textit{pder} \; c \; \ZERO$ & $\dn$ & $\emptyset$\\
 $\textit{pder} \; c \; \ONE$ & $\dn$ & $\emptyset$ \\
 $\textit{pder} \; c \; d$ & $\dn$ & $\textit{if} \; c \,=\, d \; \textit{then} \; \{ \ONE \} \; \textit{else} \; \emptyset$ \\
 $\textit{pder} \; c \; (r_1+r_2)$ & $\dn$ & $\textit{pder} \; c \; r_1 \cup \textit{pder} \; c \; r_2$ \\
 $\textit{pder} \; c \; (r_1 \cdot r_2)$ & $\dn$ & $\textit{if} \; \nullable \; r_1 $\\
 & & $\textit{then} \; \{ r \cdot r_2 \mid r \in \textit{pder} \; c \; r_1 \} \cup \textit{pder} \; c \; r_2 \;$\\
 & & $\textit{else} \; \{ r \cdot r_2 \mid r \in \textit{pder} \; c \; r_1 \} $ \\
 $\textit{pder} \; c \; (r^*)$ & $\dn$ & $ \{ r' \cdot r^* \mid r' \in \textit{pder} \; c \; r \} $ \\
\end{tabular}
\end{center}
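
This definition can be rendered in Scala as a function returning a set of
regular expressions (a sketch that reuses the Rexp datatype and nullable
from the earlier snippets):
\begin{verbatim}
// pder: partial derivative in the style of Antimirov
def pder(c: Char, r: Rexp): Set[Rexp] = r match {
  case ZERO => Set()
  case ONE => Set()
  case CHAR(d) => if (c == d) Set(ONE) else Set()
  case ALT(r1, r2) => pder(c, r1) ++ pder(c, r2)
  case SEQ(r1, r2) =>
    if (nullable(r1)) pder(c, r1).map(SEQ(_, r2)) ++ pder(c, r2)
    else pder(c, r1).map(SEQ(_, r2))
  case STAR(r1) => pder(c, r1).map(SEQ(_, STAR(r1)))
}
\end{verbatim}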
\noindent
A partial derivative of a regular expression $r$ is essentially a set of
regular expressions that are either $r$'s children expressions or a
concatenation of them. Antimirov proved a tight bound on the size of
partial derivatives: roughly
speaking, their size is quadratic in the size of the regular expression.
If we want the size of derivatives
to stay below this bound, we need more aggressive simplifications,
such as opening up alternatives in order to cancel out as many
duplicates as possible.
For example, the parentheses in $(a+b) \cdot c + b\cdot c$ can be opened up to
get $a\cdot c + b \cdot c
+ b \cdot c$, which can then be simplified to $a \cdot c+b \cdot c$. Another example is the simplification of
$(a^*+a) + (a^*+ \ONE) + (a +\ONE)$ to $a^*+a+\ONE$.
Adding these more aggressive simplification rules
helped us to achieve the same size bound as that of the partial derivatives.
To introduce these ``spilling out alternatives'' simplifications
and to make the correctness proof easier,
we use bitcodes.
Bitcodes look like this:
%This allows us to prove a tight
%bound on the size of regular expression during the running time of the
%algorithm if we can establish the connection between our simplification
%rules and partial derivatives.
%We believe, and have generated test
%data, that a similar bound can be obtained for the derivatives in
%Sulzmann and Lu's algorithm. Let us give some details about this next.
\begin{center}
$b ::= \Z \mid \S \qquad
bs ::= [] \mid b::bs
$
\end{center}
A bitcode is just a sequence of bits; the names $\S$ and $\Z$ are quite arbitrary
and could equally be 0 and 1
or any other pair of symbols.
Bitcodes (or bit-sequences) are a compact form of parse trees:
they are essentially values with the character information stripped away.
This can be seen in the following encoding function:
\begin{center}
\begin{tabular}{lcl}
$\textit{code}(\Empty)$ & $\dn$ & $[]$\\
$\textit{code}(\Char\,c)$ & $\dn$ & $[]$\\
$\textit{code}(\Left\,v)$ & $\dn$ & $\Z :: code(v)$\\
$\textit{code}(\Right\,v)$ & $\dn$ & $\S :: code(v)$\\
$\textit{code}(\Seq\,v_1\,v_2)$ & $\dn$ & $code(v_1) \,@\, code(v_2)$\\
  $\textit{code}(\Stars\,[])$ & $\dn$ & $[\S]$\\
  $\textit{code}(\Stars\,(v\!::\!vs))$ & $\dn$ & $\Z :: code(v) \;@\;
                                                    code(\Stars\,vs)$
\end{tabular}
\end{center}
Here \textit{code} encodes a value into a bit-sequence by converting $\Left$ into
$\Z$, $\Right$ into $\S$, the start of a non-empty star iteration
into $\Z$, and the border where a star terminates into $\S$. This
conversion is apparently lossy, as it throws away the character
information and does not record the boundary between the two operands
of the sequence constructor. Moreover, with only the bitcode we cannot
even tell whether the $\S$s and $\Z$s are for $\Left/\Right$ or
for $\Stars$. The reason for choosing this compact way of storing
information is that bit-sequences are small and can be easily
moved around. In order to recover values from bitcodes, we
need the regular expression as extra information and decode the bits back
into a value:\\
%\begin{definition}[Bitdecoding of Values]\mbox{}
\begin{center}
\begin{tabular}{@{}l@{\hspace{1mm}}c@{\hspace{1mm}}l@{}}
$\textit{decode}'\,bs\,(\ONE)$ & $\dn$ & $(\Empty, bs)$\\
$\textit{decode}'\,bs\,(c)$ & $\dn$ & $(\Char\,c, bs)$\\
$\textit{decode}'\,(\Z\!::\!bs)\;(r_1 + r_2)$ & $\dn$ &
$\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r_1\;\textit{in}\;
(\Left\,v, bs_1)$\\
$\textit{decode}'\,(\S\!::\!bs)\;(r_1 + r_2)$ & $\dn$ &
$\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r_2\;\textit{in}\;
(\Right\,v, bs_1)$\\
$\textit{decode}'\,bs\;(r_1\cdot r_2)$ & $\dn$ &
$\textit{let}\,(v_1, bs_1) = \textit{decode}'\,bs\,r_1\;\textit{in}$\\
& & $\textit{let}\,(v_2, bs_2) = \textit{decode}'\,bs_1\,r_2$\\
& & \hspace{35mm}$\textit{in}\;(\Seq\,v_1\,v_2, bs_2)$\\
  $\textit{decode}'\,(\S\!::\!bs)\,(r^*)$ & $\dn$ & $(\Stars\,[], bs)$\\
  $\textit{decode}'\,(\Z\!::\!bs)\,(r^*)$ & $\dn$ & 
$\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r\;\textit{in}$\\
& & $\textit{let}\,(\Stars\,vs, bs_2) = \textit{decode}'\,bs_1\,r^*$\\
& & \hspace{35mm}$\textit{in}\;(\Stars\,v\!::\!vs, bs_2)$\bigskip\\
$\textit{decode}\,bs\,r$ & $\dn$ &
$\textit{let}\,(v, bs') = \textit{decode}'\,bs\,r\;\textit{in}$\\
& & $\textit{if}\;bs' = []\;\textit{then}\;\textit{Some}\,v\;
\textit{else}\;\textit{None}$
\end{tabular}
\end{center}
%\end{definition}
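
A Scala sketch of bits, the encoding function and the decoding functions
(reusing the Val and Rexp datatypes from earlier; decodeAux plays the role
of decode' in the definition above):
\begin{verbatim}
// bits and the encoding of values into bit-sequences
abstract class Bit
case object Z extends Bit
case object S extends Bit

def code(v: Val): List[Bit] = v match {
  case Empty => Nil
  case Chr(_) => Nil
  case Left(v1) => Z :: code(v1)
  case Right(v1) => S :: code(v1)
  case Sequ(v1, v2) => code(v1) ::: code(v2)
  case Stars(Nil) => List(S)
  case Stars(v1 :: vs) => Z :: code(v1) ::: code(Stars(vs))
}

// decoding needs the regular expression as extra information
def decodeAux(bs: List[Bit], r: Rexp): (Val, List[Bit]) = (bs, r) match {
  case (bs, ONE) => (Empty, bs)
  case (bs, CHAR(c)) => (Chr(c), bs)
  case (Z :: bs, ALT(r1, _)) =>
    { val (v, bs1) = decodeAux(bs, r1); (Left(v), bs1) }
  case (S :: bs, ALT(_, r2)) =>
    { val (v, bs1) = decodeAux(bs, r2); (Right(v), bs1) }
  case (bs, SEQ(r1, r2)) =>
    { val (v1, bs1) = decodeAux(bs, r1)
      val (v2, bs2) = decodeAux(bs1, r2)
      (Sequ(v1, v2), bs2) }
  case (S :: bs, STAR(_)) => (Stars(Nil), bs)
  case (Z :: bs, STAR(r1)) =>
    { val (v, bs1) = decodeAux(bs, r1)
      val (Stars(vs), bs2) = decodeAux(bs1, STAR(r1))
      (Stars(v :: vs), bs2) }
}

def decode(bs: List[Bit], r: Rexp): Option[Val] = decodeAux(bs, r) match {
  case (v, Nil) => Some(v)
  case _ => None
}
\end{verbatim}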
Sulzmann and Lu integrated the bitcodes into regular
expressions to create \emph{annotated} regular
expressions~\cite{Sulzmann2014}. This is done by attaching bit-sequences to the head of every
substructure of a regular expression. Annotated regular expressions
are defined by the following
grammar:
\begin{center}
\begin{tabular}{lcl}
$\textit{a}$ & $::=$ & $\textit{ZERO}$\\
& $\mid$ & $\textit{ONE}\;\;bs$\\
& $\mid$ & $\textit{CHAR}\;\;bs\,c$\\
& $\mid$ & $\textit{ALT}\;\;bs\,a_1 \, a_2$\\
& $\mid$ & $\textit{SEQ}\;\;bs\,a_1\,a_2$\\
& $\mid$ & $\textit{STAR}\;\;bs\,a$
\end{tabular}
\end{center}
%(in \textit{ALT})
\noindent
where $bs$ stands for bit-sequences, and $a$ for \textbf{a}nnotated regular expressions. These bit-sequences encode
information about the (POSIX) value that should be generated by the
Sulzmann and Lu algorithm.
To do lexing using annotated regular expressions, we shall first
transform the usual (un-annotated) regular expressions into annotated
regular expressions. This operation is called \emph{internalisation} and
defined as follows:
%\begin{definition}
\begin{center}
\begin{tabular}{lcl}
$(\ZERO)^\uparrow$ & $\dn$ & $\textit{ZERO}$\\
$(\ONE)^\uparrow$ & $\dn$ & $\textit{ONE}\,[]$\\
$(c)^\uparrow$ & $\dn$ & $\textit{CHAR}\,[]\,c$\\
$(r_1 + r_2)^\uparrow$ & $\dn$ &
$\textit{ALT}\;[]\,(\textit{fuse}\,[\Z]\,r_1^\uparrow)\,
(\textit{fuse}\,[\S]\,r_2^\uparrow)$\\
$(r_1\cdot r_2)^\uparrow$ & $\dn$ &
$\textit{SEQ}\;[]\,r_1^\uparrow\,r_2^\uparrow$\\
$(r^*)^\uparrow$ & $\dn$ &
$\textit{STAR}\;[]\,r^\uparrow$\\
\end{tabular}
\end{center}
%\end{definition}
\noindent
We use up arrows here to imply that the basic un-annotated regular expressions
are "lifted up" into something slightly more complex.
In the fourth clause, $\textit{fuse}$ is an auxiliary function that helps to attach bits to the
front of an annotated regular expression. Its definition is as follows:
\begin{center}
\begin{tabular}{lcl}
$\textit{fuse}\,bs\,(\textit{ZERO})$ & $\dn$ & $\textit{ZERO}$\\
$\textit{fuse}\,bs\,(\textit{ONE}\,bs')$ & $\dn$ &
$\textit{ONE}\,(bs\,@\,bs')$\\
$\textit{fuse}\,bs\,(\textit{CHAR}\,bs'\,c)$ & $\dn$ &
$\textit{CHAR}\,(bs\,@\,bs')\,c$\\
$\textit{fuse}\,bs\,(\textit{ALT}\,bs'\,a_1\,a_2)$ & $\dn$ &
$\textit{ALT}\,(bs\,@\,bs')\,a_1\,a_2$\\
$\textit{fuse}\,bs\,(\textit{SEQ}\,bs'\,a_1\,a_2)$ & $\dn$ &
$\textit{SEQ}\,(bs\,@\,bs')\,a_1\,a_2$\\
$\textit{fuse}\,bs\,(\textit{STAR}\,bs'\,a)$ & $\dn$ &
$\textit{STAR}\,(bs\,@\,bs')\,a$
\end{tabular}
\end{center}
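
In our Scala sketch, annotated regular expressions, fuse and internalise
might be written as below (we prefix the constructors with A to avoid
clashes with the plain Rexp constructors; the Bit datatype comes from the
earlier snippet):
\begin{verbatim}
// annotated regular expressions: bit-sequences at every node
abstract class ARexp
case object AZERO extends ARexp
case class AONE(bs: List[Bit]) extends ARexp
case class ACHAR(bs: List[Bit], c: Char) extends ARexp
case class AALT(bs: List[Bit], a1: ARexp, a2: ARexp) extends ARexp
case class ASEQ(bs: List[Bit], a1: ARexp, a2: ARexp) extends ARexp
case class ASTAR(bs: List[Bit], a: ARexp) extends ARexp

// fuse: attach bits to the front of an annotated regular expression
def fuse(bs: List[Bit], a: ARexp): ARexp = a match {
  case AZERO => AZERO
  case AONE(bs1) => AONE(bs ::: bs1)
  case ACHAR(bs1, c) => ACHAR(bs ::: bs1, c)
  case AALT(bs1, a1, a2) => AALT(bs ::: bs1, a1, a2)
  case ASEQ(bs1, a1, a2) => ASEQ(bs ::: bs1, a1, a2)
  case ASTAR(bs1, a1) => ASTAR(bs ::: bs1, a1)
}

// internalise: lift a plain regular expression into an annotated one
def internalise(r: Rexp): ARexp = r match {
  case ZERO => AZERO
  case ONE => AONE(Nil)
  case CHAR(c) => ACHAR(Nil, c)
  case ALT(r1, r2) => AALT(Nil, fuse(List(Z), internalise(r1)),
                                fuse(List(S), internalise(r2)))
  case SEQ(r1, r2) => ASEQ(Nil, internalise(r1), internalise(r2))
  case STAR(r1) => ASTAR(Nil, internalise(r1))
}
\end{verbatim}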
\noindent
After internalisation we perform successive derivative operations on the
annotated regular expression. This derivative operation is the same as
the one we previously defined for the simple regular expressions, except that
we take special care of the bits:\\
%\begin{definition}{bder}
\begin{center}
\begin{tabular}{@{}lcl@{}}
$(\textit{ZERO})\backslash c$ & $\dn$ & $\textit{ZERO}$\\
$(\textit{ONE}\;bs)\backslash c$ & $\dn$ & $\textit{ZERO}$\\
$(\textit{CHAR}\;bs\,d)\backslash c$ & $\dn$ &
$\textit{if}\;c=d\; \;\textit{then}\;
\textit{ONE}\;bs\;\textit{else}\;\textit{ZERO}$\\
$(\textit{ALT}\;bs\,a_1\,a_2)\backslash c$ & $\dn$ &
$\textit{ALT}\,bs\,(a_1\backslash c)\,(a_2\backslash c)$\\
$(\textit{SEQ}\;bs\,a_1\,a_2)\backslash c$ & $\dn$ &
$\textit{if}\;\textit{bnullable}\,a_1$\\
& &$\textit{then}\;\textit{ALT}\,bs\,(\textit{SEQ}\,[]\,(a_1\backslash c)\,a_2)$\\
& &$\phantom{\textit{then}\;\textit{ALT}\,bs\,}(\textit{fuse}\,(\textit{bmkeps}\,a_1)\,(a_2\backslash c))$\\
& &$\textit{else}\;\textit{SEQ}\,bs\,(a_1\backslash c)\,a_2$\\
  $(\textit{STAR}\,bs\,a)\backslash c$ & $\dn$ &
      $\textit{SEQ}\;bs\,(\textit{fuse}\, [\Z]\, (a\backslash c))\,
       (\textit{STAR}\,[]\,a)$
\end{tabular}
\end{center}
%\end{definition}
For instance, when we unfold $\textit{STAR} \; bs \; a$ into a sequence, we
attach an additional bit $\Z$ to the front of $a \backslash c$ to indicate
that there is one more star iteration. The $\textit{SEQ}$
clause is more subtle: it applies when $a_1$ is $\textit{bnullable}$ (here $\textit{bnullable}$ is
exactly the same as $\textit{nullable}$, except that it works on annotated regular
expressions, so we omit its definition). Assuming that $\textit{bmkeps}$
correctly extracts the bitcode for how $a_1$ matches the string prior to
the character $c$ (more on this later), the right branch of the alternative, namely
$\textit{fuse} \; (\textit{bmkeps}\,a_1)\; (a_2 \backslash c)$, collapses the regular
expression $a_1$ (as it has already been fully matched) and stores the
parsing information at the head of the regular expression $a_2
\backslash c$ by fusing it there. The bit-sequence $bs$, which was initially
attached to the head of the $\textit{SEQ}$, has now been elevated to the top level of
the $\textit{ALT}$, as this information will be needed whichever way the $\textit{SEQ}$ is
matched---no matter whether $c$ belongs to $a_1$ or to $a_2$. After carefully
doing these derivatives and maintaining all the parsing information, we
complete the parsing by collecting the bits using a special $\textit{mkeps}$
function for annotated regular expressions, called $\textit{bmkeps}$:
%\begin{definition}[\textit{bmkeps}]\mbox{}
\begin{center}
\begin{tabular}{lcl}
$\textit{bmkeps}\,(\textit{ONE}\,bs)$ & $\dn$ & $bs$\\
$\textit{bmkeps}\,(\textit{ALT}\,bs\,a_1\,a_2)$ & $\dn$ &
$\textit{if}\;\textit{bnullable}\,a_1$\\
& &$\textit{then}\;bs\,@\,\textit{bmkeps}\,a_1$\\
& &$\textit{else}\;bs\,@\,\textit{bmkeps}\,a_2$\\
$\textit{bmkeps}\,(\textit{SEQ}\,bs\,a_1\,a_2)$ & $\dn$ &
$bs \,@\,\textit{bmkeps}\,a_1\,@\, \textit{bmkeps}\,a_2$\\
$\textit{bmkeps}\,(\textit{STAR}\,bs\,a)$ & $\dn$ &
$bs \,@\, [\S]$
\end{tabular}
\end{center}
%\end{definition}
\noindent
This function completes the parse tree information by
travelling along the path in the regular expression that corresponds to a POSIX value, collecting all the bits, and
using $\S$ to indicate the end of star iterations. If we take the bits produced by $\textit{bmkeps}$ and decode them,
we get the parse tree we need. The whole lexer then looks like this:\\
\begin{center}
\begin{tabular}{lcl}
$\textit{blexer}\;r\,s$ & $\dn$ &
$\textit{let}\;a = (r^\uparrow)\backslash s\;\textit{in}$\\
& & $\;\;\textit{if}\; \textit{bnullable}(a)$\\
& & $\;\;\textit{then}\;\textit{decode}\,(\textit{bmkeps}\,a)\,r$\\
& & $\;\;\textit{else}\;\textit{None}$
\end{tabular}
\end{center}
Here $(r^\uparrow)\backslash s$ is the derivative w.r.t.~a string on annotated
regular expressions, defined analogously to $r\backslash s$ above.
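
The annotated derivative, bnullable, bmkeps and the lexer blexer can be
sketched in Scala as follows (again building on the previous snippets;
the helper bders is our own name for the derivative w.r.t.~a string):
\begin{verbatim}
def bnullable(a: ARexp): Boolean = a match {
  case AZERO => false
  case AONE(_) => true
  case ACHAR(_, _) => false
  case AALT(_, a1, a2) => bnullable(a1) || bnullable(a2)
  case ASEQ(_, a1, a2) => bnullable(a1) && bnullable(a2)
  case ASTAR(_, _) => true
}

def bmkeps(a: ARexp): List[Bit] = a match {
  case AONE(bs) => bs
  case AALT(bs, a1, a2) =>
    if (bnullable(a1)) bs ::: bmkeps(a1) else bs ::: bmkeps(a2)
  case ASEQ(bs, a1, a2) => bs ::: bmkeps(a1) ::: bmkeps(a2)
  case ASTAR(bs, _) => bs ::: List(S)
}

// derivative on annotated regular expressions
def bder(c: Char, a: ARexp): ARexp = a match {
  case AZERO => AZERO
  case AONE(_) => AZERO
  case ACHAR(bs, d) => if (c == d) AONE(bs) else AZERO
  case AALT(bs, a1, a2) => AALT(bs, bder(c, a1), bder(c, a2))
  case ASEQ(bs, a1, a2) =>
    if (bnullable(a1))
      AALT(bs, ASEQ(Nil, bder(c, a1), a2),
               fuse(bmkeps(a1), bder(c, a2)))
    else ASEQ(bs, bder(c, a1), a2)
  case ASTAR(bs, a1) => ASEQ(bs, fuse(List(Z), bder(c, a1)), ASTAR(Nil, a1))
}

def bders(s: List[Char], a: ARexp): ARexp = s match {
  case Nil => a
  case c :: cs => bders(cs, bder(c, a))
}

// the bitcoded lexer without simplification
def blexer(r: Rexp, s: String): Option[Val] = {
  val a = bders(s.toList, internalise(r))
  if (bnullable(a)) decode(bmkeps(a), r) else None
}
\end{verbatim}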
The main point of the bit-sequences and annotated regular expressions
is that we can apply rather aggressive (in terms of size)
simplification rules in order to keep derivatives small.
We have
developed such ``aggressive'' simplification rules and generated test
data that show that the expected bound can be achieved. Obviously we
could only partially cover the search space as there are infinitely
many regular expressions and strings. One modification we introduced
is to allow a list of annotated regular expressions in the
\textit{ALTS} constructor. This allows us to not just delete
unnecessary $\ZERO$s and $\ONE$s from regular expressions, but also
unnecessary ``copies'' of regular expressions (very similar to
simplifying $r + r$ to just $r$, but in a more general
setting).
Another modification is that we use simplification rules
inspired by Antimirov's work on partial derivatives. They maintain the
idea that only the first ``copy'' of a regular expression in an
alternative contributes to the calculation of a POSIX value. All
subsequent copies can be pruned from the regular expression.
A recursive definition of our simplification function, in Scala-like pseudocode, is given below:\\
\begin{center}
\begin{tabular}{@{}lcl@{}}
  $\textit{simp} \; a$ & $\dn$ & $a \qquad \textit{if} \; a = (\textit{ONE} \; bs) \;\textit{ or }\; (\textit{CHAR} \, bs \; c) \;\textit{ or }\; (\textit{STAR}\; bs\; a_1)$\\
  $\textit{simp} \; (\textit{SEQ}\;bs\,a_1\,a_2)$ & $\dn$ & $ (\textit{simp} \; a_1, \textit{simp} \; a_2) \; \textit{match} $ \\
   &&$\quad\textit{case} \; (\textit{ZERO}, \_) \Rightarrow \textit{ZERO}$ \\
   &&$\quad\textit{case} \; (\_, \textit{ZERO}) \Rightarrow \textit{ZERO}$ \\
   &&$\quad\textit{case} \; (\textit{ONE}, a_2') \Rightarrow \textit{fuse} \; bs \; a_2'$ \\
   &&$\quad\textit{case} \; (a_1', \textit{ONE}) \Rightarrow \textit{fuse} \; bs \; a_1'$ \\
   &&$\quad\textit{case} \; (a_1', a_2') \Rightarrow \textit{SEQ} \; bs \; a_1' \; a_2'$ \\
  $\textit{simp} \; (\textit{ALTS}\;bs\,as)$ & $\dn$ & $\textit{distinct}(\textit{flatten}(\textit{map} \; \textit{simp} \; as)) \; \textit{match} $ \\
   &&$\quad\textit{case} \; [] \Rightarrow \textit{ZERO}$ \\
   &&$\quad\textit{case} \; a :: [] \Rightarrow \textit{fuse} \; bs \; a$ \\
   &&$\quad\textit{case} \; as' \Rightarrow \textit{ALTS} \; bs \; as'$
\end{tabular}
\end{center}
The simplification function pattern-matches on the shape of the regular expression. When it detects that
the regular expression is an alternative or a sequence,
it first simplifies its children regular expressions
recursively and then checks whether one of the children has turned
into $\ZERO$ or $\ONE$, which might trigger further simplification
at the current level. The most involved part is the $\textit{ALTS}$
clause, where we use the two auxiliary functions
$\textit{flatten}$ and $\textit{distinct}$ to open up nested $\textit{ALTS}$ and
remove as many duplicates as possible.
The function $\textit{distinct}$ keeps only the first occurrence of each regular expression and
removes all later duplicates.
The function $\textit{flatten}$ opens up nested $\textit{ALTS}$. Its recursive
definition is given below:
\begin{center}
\begin{tabular}{@{}lcl@{}}
  $\textit{flatten} \; ((\textit{ALTS}\;bs\,as) :: as')$ & $\dn$ & $(\textit{map} \;
     (\textit{fuse}\;bs)\; \textit{as}) \; @ \; \textit{flatten} \; as' $ \\
  $\textit{flatten} \; (\textit{ZERO} :: as')$ & $\dn$ & $ \textit{flatten} \;  as' $ \\
    $\textit{flatten} \; (a :: as')$ & $\dn$ & $a :: \textit{flatten} \; as'$ \quad(otherwise)
\end{tabular}
\end{center}
\noindent
Here $\textit{flatten}$ behaves like the traditional flatten function from functional programming, except that it also removes $\ZERO$s.
In effect, it removes parentheses, for example changing $a+(b+c)$ into $a+b+c$.
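
A Scala sketch of flatten and the simplification function is given below.
We assume here a list-based AALTS constructor as described above; we only
spell out the main cases of the simplification (in particular we leave
out the symmetric ONE-on-the-right case of the SEQ clause), and note in
comments which clauses of the earlier functions would have to be extended
for AALTS:
\begin{verbatim}
// list-based alternative constructor, as described above
case class AALTS(bs: List[Bit], as: List[ARexp]) extends ARexp

// flatten: open up nested AALTS and remove AZEROs
def flats(as: List[ARexp]): List[ARexp] = as match {
  case Nil => Nil
  case AZERO :: rest => flats(rest)
  case AALTS(bs, as1) :: rest => as1.map(fuse(bs, _)) ::: flats(rest)
  case a :: rest => a :: flats(rest)
}

// simplification applied after a derivative step (main cases only)
def bsimp(a: ARexp): ARexp = a match {
  case ASEQ(bs, a1, a2) => (bsimp(a1), bsimp(a2)) match {
    case (AZERO, _) => AZERO
    case (_, AZERO) => AZERO
    case (AONE(bs1), a2s) => fuse(bs ::: bs1, a2s)
    case (a1s, a2s) => ASEQ(bs, a1s, a2s)
  }
  case AALTS(bs, as) => flats(as.map(bsimp)).distinct match {
    case Nil => AZERO
    case a1 :: Nil => fuse(bs, a1)
    case as1 => AALTS(bs, as1)
  }
  case a1 => a1
}

// bder, bnullable and bmkeps would need corresponding AALTS clauses,
// roughly:  bnullable(AALTS(_, as))  = as.exists(bnullable)
//           bmkeps(AALTS(bs, as))    = bs ::: bmkeps(as.find(bnullable).get)
//           bder(c, AALTS(bs, as))   = AALTS(bs, as.map(bder(c, _)))
\end{verbatim}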
Suppose we apply simplification after each derivative step,
and view these two operations as an atomic one: $a \backslash_{simp} c \dn \textit{simp}(a \backslash c)$.
Then we can use the same natural extension as before from the derivative w.r.t.~a character to the derivative w.r.t.~a string:
\begin{center}
\begin{tabular}{lcl}
$r \backslash_{simp} (c\!::\!s) $ & $\dn$ & $(r \backslash_{simp} c) \backslash_{simp} s$ \\
$r \backslash_{simp} [\,] $ & $\dn$ & $r$
\end{tabular}
\end{center}
With this we obtain an optimised version of the algorithm:
\begin{center}
\begin{tabular}{lcl}
$\textit{blexer\_simp}\;r\,s$ & $\dn$ &
$\textit{let}\;a = (r^\uparrow)\backslash_{simp} s\;\textit{in}$\\
& & $\;\;\textit{if}\; \textit{bnullable}(a)$\\
& & $\;\;\textit{then}\;\textit{decode}\,(\textit{bmkeps}\,a)\,r$\\
& & $\;\;\textit{else}\;\textit{None}$
\end{tabular}
\end{center}
This algorithm keeps the size of the regular expressions small: for example,
with this simplification the more than 8000 nodes in our previous $(a + aa)^*$ example are reduced to only 6, and the size stays constant no matter how long the input string is.
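
Putting the pieces together, the optimised lexer can be sketched like this
(assuming bder, bnullable and bmkeps have been extended with the AALTS
clauses indicated in the previous snippet):
\begin{verbatim}
// derivative and simplification fused into one step, iterated over a string
def bdersSimp(s: List[Char], a: ARexp): ARexp = s match {
  case Nil => a
  case c :: cs => bdersSimp(cs, bsimp(bder(c, a)))
}

// the bitcoded lexer with aggressive simplification
def blexerSimp(r: Rexp, s: String): Option[Val] = {
  val a = bdersSimp(s.toList, internalise(r))
  if (bnullable(a)) decode(bmkeps(a), r) else None
}
\end{verbatim}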
\section{Current Work}
We are currently engaged in two tasks related to this algorithm.
The first task is to prove that our simplification rules
do not affect the POSIX value that should be generated by the
algorithm according to the specification of a POSIX value,
and furthermore to obtain a much
tighter bound on the sizes of derivatives. The result would be that our
algorithm is both correct and fast on all inputs: the original
blow-up, as observed in JavaScript, Python and Java, would be excluded
from happening in our algorithm. For
this proof we use the theorem prover Isabelle. Once completed, this
result will advance the state-of-the-art: Sulzmann and Lu wrote in
their paper \cite{Sulzmann2014} about the bitcoded ``incremental
parsing method'' (that is the matching algorithm outlined in this
section):
\begin{quote}\it
``Correctness Claim: We further claim that the incremental parsing
method in Figure~5 in combination with the simplification steps in
Figure 6 yields POSIX parse trees. We have tested this claim
extensively by using the method in Figure~3 as a reference but yet
have to work out all proof details.''
\end{quote}
\noindent
We would like to settle this correctness claim. It is relatively straightforward
to establish that after one simplification step, the part of a nullable
derivative that corresponds to a POSIX value remains intact and can
still be collected; in other words,
\begin{center}
$\textit{bmkeps} \; r = \textit{bmkeps} \; (\textit{simp} \; r) \qquad (\text{provided $r$ is nullable})$
\end{center}
\noindent
as this basically comes down to proving that actions like removing the
additional $r$ in $r+r$ do not delete important POSIX information from
a regular expression. The hard core of this problem is to prove that
\begin{center}
$\textit{bmkeps} \; (r \backslash s) = \textit{bmkeps} \; (r \backslash_{simp} s)$
\end{center}
\noindent
That is, if we build the derivative of a regular expression $r$ both with and without simplification,
we can still extract the same POSIX value, if there is one.
This is not as straightforward as the previous proposition, as the two regular expressions $r\backslash s$ and $r\backslash_{simp} s$
might become very different after repeated application of $\textit{simp}$ and the derivative.
The crucial point is to identify which information of
a regular expression is indispensable, and how it is kept intact during simplification so that the simplified
regular expression performs
as well as an unsimplified one in subsequent derivative operations.
To aid this proof, we use the helper function $\textit{retrieve}$ described by Sulzmann and Lu,
which can be defined as follows (adapting Sulzmann and Lu's definition to our notation):
\begin{center}
\begin{tabular}{lcl}
  $\textit{retrieve}\,(\textit{ONE}\,bs)\,\Empty$ & $\dn$ & $bs$\\
  $\textit{retrieve}\,(\textit{CHAR}\,bs\,c)\,(\Char\,c)$ & $\dn$ & $bs$\\
  $\textit{retrieve}\,(\textit{ALT}\,bs\,a_1\,a_2)\,(\Left\,v)$ & $\dn$ & $bs \,@\, \textit{retrieve}\,a_1\,v$\\
  $\textit{retrieve}\,(\textit{ALT}\,bs\,a_1\,a_2)\,(\Right\,v)$ & $\dn$ & $bs \,@\, \textit{retrieve}\,a_2\,v$\\
  $\textit{retrieve}\,(\textit{SEQ}\,bs\,a_1\,a_2)\,(\Seq\,v_1\,v_2)$ & $\dn$ & $bs \,@\, \textit{retrieve}\,a_1\,v_1 \,@\, \textit{retrieve}\,a_2\,v_2$\\
  $\textit{retrieve}\,(\textit{STAR}\,bs\,a)\,(\Stars\,[])$ & $\dn$ & $bs \,@\, [\S]$\\
  $\textit{retrieve}\,(\textit{STAR}\,bs\,a)\,(\Stars\,(v\!::\!vs))$ & $\dn$ & $bs \,@\, [\Z] \,@\, \textit{retrieve}\,a\,v \,@\, \textit{retrieve}\,(\textit{STAR}\,[]\,a)\,(\Stars\,vs)$
\end{tabular}
\end{center}
This function assembles the bitcode that corresponds to a parse tree for how the current
derivative matches the suffix of the string (the characters that have not yet been injected back, but are recorded in the value).
Sulzmann and Lu used this to connect the bitcoded algorithm to the older algorithm by the following equation:
\begin{center}
$\textit{Some}\,(\textit{inj} \;r\; c \; v) = \textit{decode} \; (\textit{retrieve}\; ((\textit{internalise}\; r)\backslash c) \; v)\;r$
\end{center}
Note that $a$ and $r^\uparrow$ denote the same thing: $a$ stands for the \textit{annotated}
version of the regular expression $r$, that is $a = r^\uparrow$.
Ausaf and Urban also used this fact to prove the correctness of the bitcoded algorithm without simplification.
Our purpose in using it, however, is to try to establish
\begin{center}
$ \textit{retrieve} \; a \; v \;=\; \textit{retrieve} \; (\textit{simp}\,a) \; v'.$
\end{center}
The idea is that, using $v'$,
a simplified version of $v$ that has gone through the same simplification steps as $\textit{simp}(a)$, we are still able to extract the bit-sequence that gives the same parsing information as the unsimplified one.
After establishing this, we might be able to finally bridge the gap by proving
\begin{center}
$\textit{retrieve} \; (r \backslash s) \; v = \textit{retrieve} \; (\textit{simp}(r) \backslash s) \; v'$
\end{center}
and subsequently
\begin{center}
$\textit{retrieve} \; (r \backslash s) \; v = \textit{retrieve} \; (r \backslash_{simp} s) \; v'.$
\end{center}
This would prove that our simplified version of the regular expression still contains all the bitcodes needed.
The second task is to speed up the more aggressive simplification.
Currently it is slower than the naive simplification shown earlier (the naive version,
as implemented by Ausaf, Dyckhoff and Urban~\cite{AusafDyckhoffUrban2016}, can of course explode in some cases). So we need to
explore how to make it faster. One possibility is to explore
again the connection to DFAs. This is very much work in progress.
\section{Conclusion}
In this PhD-project we are interested in fast algorithms for regular
expression matching. While this seems to be a ``settled'' area, in
fact interesting research questions are popping up as soon as one steps
outside the classic automata theory (for example in terms of what kind
of regular expressions are supported). The reason why it is
interesting for us to look at the derivative approach introduced by
Brzozowski for regular expression matching, and then much further
developed by Sulzmann and Lu, is that derivatives can elegantly deal
with some of the regular expressions that are of interest in ``real
life''. This includes the not-regular expression, written $\neg\,r$
(that is all strings that are not recognised by $r$), but also bounded
regular expressions such as $r^{\{n\}}$ and $r^{\{n..m\}}$. There is
also hope that the derivatives can provide another angle for how to
deal more efficiently with back-references, which are one of the
reasons why regular expression engines in JavaScript, Python and Java
choose to not implement the classic automata approach of transforming
regular expressions into NFAs and then DFAs---because we simply do not
know how such back-references can be represented by DFAs.
\bibliographystyle{plain}
\bibliography{root}
\end{document}