\documentclass[a4paper,UKenglish]{lipics}
\usepackage{graphicx}
\usepackage{pgfplots}
\usepackage{tikz-cd}
\usepackage{algorithm}
\usepackage{amsmath}
\usepackage[noend]{algpseudocode}
\usepackage{enumitem}
% \documentclass{article}
%\usepackage[utf8]{inputenc}
%\usepackage[english]{babel}
%\usepackage{listings}
% \usepackage{amsthm}
% \usepackage{hyperref}
% \usepackage[margin=0.5in]{geometry}
%\usepackage{pmboxdraw}

\title{POSIX Regular Expression Matching and Lexing}
\author{Chengsong Tan}
\affil{King's College London\\
London, UK\\
\texttt{chengsong.tan@kcl.ac.uk}}
\authorrunning{Chengsong Tan}
\Copyright{Chengsong Tan}

\newcommand{\dn}{\stackrel{\mbox{\scriptsize def}}{=}}%
\newcommand{\ZERO}{\mbox{\bf 0}}
\newcommand{\ONE}{\mbox{\bf 1}}
\def\lexer{\mathit{lexer}}
\def\mkeps{\mathit{mkeps}}
\def\inj{\mathit{inj}}
\def\Empty{\mathit{Empty}}
\def\Left{\mathit{Left}}
\def\Right{\mathit{Right}}
\def\Stars{\mathit{Stars}}
\def\Char{\mathit{Char}}
\def\Seq{\mathit{Seq}}
\def\Der{\mathit{Der}}
\def\nullable{\mathit{nullable}}
\def\Z{\mathit{Z}}
\def\S{\mathit{S}}

%\theoremstyle{theorem}
%\newtheorem{theorem}{Theorem}
%\theoremstyle{lemma}
%\newtheorem{lemma}{Lemma}
%\newcommand{\lemmaautorefname}{Lemma}
%\theoremstyle{definition}
%\newtheorem{definition}{Definition}
\algnewcommand\algorithmicswitch{\textbf{switch}}
\algnewcommand\algorithmiccase{\textbf{case}}
\algnewcommand\algorithmicassert{\texttt{assert}}
\algnewcommand\Assert[1]{\State \algorithmicassert(#1)}%
% New "environments"
\algdef{SE}[SWITCH]{Switch}{EndSwitch}[1]{\algorithmicswitch\ #1\ \algorithmicdo}{\algorithmicend\ \algorithmicswitch}%
\algdef{SE}[CASE]{Case}{EndCase}[1]{\algorithmiccase\ #1}{\algorithmicend\ \algorithmiccase}%
\algtext*{EndSwitch}%
\algtext*{EndCase}%

\begin{document}
\maketitle

\begin{abstract}
  Brzozowski introduced in 1964 a beautifully simple algorithm for
  regular expression matching based on the notion of derivatives of
  regular expressions. In 2014, Sulzmann and Lu extended this
  algorithm so that it not only gives a YES/NO answer for whether a
  regular expression matches a string, but, in case it matches, also
  answers \emph{how} it matches the string. This is important for
  applications such as lexing (tokenising a string). The problem is to
  make the algorithm by Sulzmann and Lu fast on all inputs without
  breaking its correctness. We have already developed some
  simplification rules for this, but have not yet proved that they
  preserve the correctness of the algorithm. We also have not yet
  looked at extended regular expressions, such as bounded repetitions,
  negation and back-references.
\end{abstract}

\section{Introduction}

This PhD-project is about regular expression matching and
lexing. Given the maturity of this topic, the reader might wonder:
Surely, regular expressions must have already been studied to death?
What could possibly be \emph{not} known in this area? And surely all
implemented algorithms for regular expression matching are blindingly
fast?

Unfortunately, these preconceptions are not supported by evidence: take
for example the regular expression $(a^*)^*\,b$ and ask whether
strings of the form $aa..a$ match this regular
expression. Obviously they do not match---the expected $b$ in the last
position is missing. One would expect that modern regular expression
matching engines can find this out very quickly.
Alas, if one tries this example in JavaScript, Python or Java 8 with
strings of 28 $a$'s, one discovers that the decision takes around 30
seconds and takes considerably longer when adding a few more $a$'s, as
the graphs below show:

\begin{center}
\begin{tabular}{@{}c@{\hspace{0mm}}c@{\hspace{0mm}}c@{}}
\begin{tikzpicture}
\begin{axis}[
    xlabel={$n$},
    x label style={at={(1.05,-0.05)}},
    ylabel={time in secs},
    enlargelimits=false,
    xtick={0,5,...,30},
    xmax=33,
    ymax=35,
    ytick={0,5,...,30},
    scaled ticks=false,
    axis lines=left,
    width=5cm,
    height=4cm,
    legend entries={JavaScript},
    legend pos=north west,
    legend cell align=left]
\addplot[red,mark=*, mark options={fill=white}] table {re-js.data};
\end{axis}
\end{tikzpicture}
&
\begin{tikzpicture}
\begin{axis}[
    xlabel={$n$},
    x label style={at={(1.05,-0.05)}},
    %ylabel={time in secs},
    enlargelimits=false,
    xtick={0,5,...,30},
    xmax=33,
    ymax=35,
    ytick={0,5,...,30},
    scaled ticks=false,
    axis lines=left,
    width=5cm,
    height=4cm,
    legend entries={Python},
    legend pos=north west,
    legend cell align=left]
\addplot[blue,mark=*, mark options={fill=white}] table {re-python2.data};
\end{axis}
\end{tikzpicture}
&
\begin{tikzpicture}
\begin{axis}[
    xlabel={$n$},
    x label style={at={(1.05,-0.05)}},
    %ylabel={time in secs},
    enlargelimits=false,
    xtick={0,5,...,30},
    xmax=33,
    ymax=35,
    ytick={0,5,...,30},
    scaled ticks=false,
    axis lines=left,
    width=5cm,
    height=4cm,
    legend entries={Java 8},
    legend pos=north west,
    legend cell align=left]
\addplot[cyan,mark=*, mark options={fill=white}] table {re-java.data};
\end{axis}
\end{tikzpicture}\\
\multicolumn{3}{c}{Graphs: Runtime for matching $(a^*)^*\,b$ with strings
  of the form $\underbrace{aa..a}_{n}$.}
\end{tabular}
\end{center}

\noindent These are clearly abysmal and possibly surprising results. One
would expect these systems to do much better than that---after all,
given a DFA and a string, deciding whether the string is matched by the
DFA should take time linear in the length of the string.

Admittedly, the regular expression $(a^*)^*\,b$ is carefully chosen to
exhibit this ``exponential behaviour''. Unfortunately, such regular
expressions are not just a few ``outliers''; they are frequent enough
that a separate name has been coined for them---\emph{evil regular
expressions}. In empirical work, Davis et al.\ report that they have
found thousands of such evil regular expressions in the JavaScript and
Python ecosystems \cite{Davis18}.

This exponential blowup sometimes causes real pain in practice: for
example on 20 July 2016 one evil regular expression brought the webpage
\href{http://stackexchange.com}{Stack Exchange} to its
knees\footnote{https://stackstatus.net/post/147710624694/outage-postmortem-july-20-2016}.
In this instance, a regular expression intended to just trim white
spaces from the beginning and the end of a line actually consumed
massive amounts of CPU resources, and because of this the web servers
ground to a halt. This happened when a post with 20,000 white spaces was
submitted, but importantly the white spaces were neither at the
beginning nor at the end. As a result, the regular expression matching
engine needed to backtrack over many choices. The underlying problem is
that many ``real life'' regular expression matching engines do not use
DFAs for matching.
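\noindent
For readers who want to reproduce these measurements: since Scala runs
on the JVM and therefore exercises the same backtracking
\texttt{java.util.regex} engine as Java 8, a small Scala program along
the following lines suffices (this is only a sketch of our measurement
setup; exact timings are machine-dependent):

\begin{verbatim}
import java.util.regex.Pattern

object EvilRegex {
  // the "evil" regular expression (a*)*b from the graphs above
  val evil = Pattern.compile("(a*)*b")

  def time[T](body: => T): Double = {
    val start = System.nanoTime()
    body
    (System.nanoTime() - start) / 1.0e9
  }

  def main(args: Array[String]): Unit =
    for (n <- 20 to 30) {
      val input = "a" * n   // a string of n a's, with no final b
      println(f"n = $n%2d: ${time(evil.matcher(input).matches())}%.2f secs")
    }
}
\end{verbatim}

\noindent
The runtime roughly doubles with each additional $a$---the tell-tale
sign of the matcher backtracking through exponentially many ways of
splitting the $a$'s between the two stars.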
These engines do not use DFAs because they support regular expressions
that are not covered by classical automata theory, and in this more
general setting there are quite a few research questions still
unanswered and fast algorithms still need to be developed.

There is also another under-researched problem to do with regular
expressions and lexing, i.e.~the process of breaking up strings into
sequences of tokens according to some regular expressions. In this
setting one is not just interested in whether or not a regular
expression matches a string, but, if it matches, also in \emph{how} it
matches the string. Consider for example a regular expression
$r_{key}$ for recognising keywords such as \textit{if}, \textit{then}
and so on; and a regular expression $r_{id}$ for recognising
identifiers (say, a single character followed by characters or
numbers). One can then form the compound regular expression
$(r_{key} + r_{id})^*$ and use it to tokenise strings. But then how
should the string \textit{iffoo} be tokenised? It could be tokenised
as a keyword followed by an identifier, or the entire string as a
single identifier. Similarly, how should the string \textit{if} be
tokenised? Both regular expressions, $r_{key}$ and $r_{id}$, would
``fire''---so is it an identifier or a keyword? While in applications
there is a well-known strategy for deciding these questions, called POSIX
matching, precise definitions of what POSIX matching actually means
have been formalised only relatively recently
\cite{AusafDyckhoffUrban2016,OkuiSuzuki2010,Vansummeren2006}. Roughly,
POSIX matching means matching the longest initial substring. In the
case of a tie, the initial submatch is chosen according to some
priorities attached to the regular expressions (e.g.~keywords have a
higher priority than identifiers). This sounds rather simple, but
according to Grathwohl et al.\ \cite[Page 36]{CrashCourse2014} this is
not the case. They wrote:

\begin{quote}\it{}
``The POSIX strategy is more complicated than the greedy because of
the dependence on information about the length of matched strings in
the various subexpressions.''
\end{quote}

\noindent
This is also supported by evidence collected by Kuklewicz
\cite{Kuklewicz} who noticed that a number of POSIX regular expression
matchers calculate incorrect results.

Our focus is on an algorithm introduced by Sulzmann and Lu in 2014 for
regular expression matching according to the POSIX strategy
\cite{Sulzmann2014}. Their algorithm is based on an older algorithm by
Brzozowski from 1964, where he introduced the notion of derivatives of
regular expressions \cite{Brzozowski1964}. We shall briefly explain
both algorithms next.

\section{The Algorithms by Brzozowski, and Sulzmann and Lu}

Suppose (basic) regular expressions are given by the following grammar:
\[ r ::= \ZERO \mid \ONE \mid c \mid r_1 \cdot r_2 \mid r_1 + r_2 \mid r^* \]

\noindent
The intended meaning of the constructors is as usual: $\ZERO$ cannot
match any string, $\ONE$ can match only the empty string, the character
regular expression $c$ can match the character $c$, and so on.

The brilliant contribution by Brzozowski is the notion of
\emph{derivatives} of regular expressions. The idea behind this notion
is as follows: suppose a regular expression $r$ can match a string of
the form $c\!::\!s$ (that is, a list of characters starting with $c$);
what does a regular expression look like that can match just $s$?
Brzozowski gave a neat answer to this question.
He started with the definition of $\nullable$:

\begin{center}
\begin{tabular}{lcl}
  $\nullable(\ZERO)$ & $\dn$ & $\mathit{false}$ \\
  $\nullable(\ONE)$  & $\dn$ & $\mathit{true}$ \\
  $\nullable(c)$     & $\dn$ & $\mathit{false}$ \\
  $\nullable(r_1 + r_2)$   & $\dn$ & $\nullable(r_1) \vee \nullable(r_2)$ \\
  $\nullable(r_1\cdot r_2)$ & $\dn$ & $\nullable(r_1) \wedge \nullable(r_2)$ \\
  $\nullable(r^*)$   & $\dn$ & $\mathit{true}$ \\
\end{tabular}
\end{center}

\noindent
This function simply tests whether the empty string is in $L(r)$, where
$L(\_)$ stands for the classic notion of the language of a regular
expression. He then defined the following operation on regular
expressions, written $r\backslash c$ (the derivative of $r$
w.r.t.~the character $c$):

\begin{center}
\begin{tabular}{lcl}
  $\ZERO \backslash c$ & $\dn$ & $\ZERO$\\
  $\ONE \backslash c$  & $\dn$ & $\ZERO$\\
  $d \backslash c$     & $\dn$ & $\mathit{if} \;c = d\;\mathit{then}\;\ONE\;\mathit{else}\;\ZERO$\\
  $(r_1 + r_2)\backslash c$     & $\dn$ & $r_1 \backslash c \,+\, r_2 \backslash c$\\
  $(r_1 \cdot r_2)\backslash c$ & $\dn$ & $\mathit{if} \, \nullable(r_1)$\\
  & & $\mathit{then}\;(r_1\backslash c) \cdot r_2 \,+\, r_2\backslash c$\\
  & & $\mathit{else}\;(r_1\backslash c) \cdot r_2$\\
  $(r^*)\backslash c$           & $\dn$ & $(r\backslash c) \cdot r^*$\\
\end{tabular}
\end{center}

\noindent
The main property of the derivative operation is that

\begin{center}
$c\!::\!s \in L(r)$ holds
if and only if $s \in L(r\backslash c)$.
\end{center}

\noindent
For us the main advantage is that derivatives can be straightforwardly
implemented in any functional programming language, and are easily
definable and reasoned about in theorem provers---the definitions just
consist of inductive datatypes and simple recursive functions.
Moreover, the notion of derivatives can be easily generalised to cover
extended regular expression constructors such as the not-regular
expression, written $\neg\,r$, or bounded repetitions (for example
$r^{\{n\}}$ and $r^{\{n..m\}}$), which cannot be so straightforwardly
realised within the classic automata approach. For the moment,
however, we focus only on the usual basic regular expressions.

Now if we want to find out whether a string $s$ matches a regular
expression $r$, we build the derivatives of $r$ w.r.t.\ (in
succession) all the characters of the string $s$. Finally, we test
whether the resulting regular expression can match the empty
string. If yes, then $r$ matches $s$; otherwise it does not. To
implement this idea we can generalise the derivative operation to
strings like this:

\begin{center}
\begin{tabular}{lcl}
$r \backslash (c\!::\!s) $ & $\dn$ & $(r \backslash c) \backslash s$ \\
$r \backslash [\,] $ & $\dn$ & $r$
\end{tabular}
\end{center}

\noindent
Using this definition we obtain a simple and elegant regular
expression matching algorithm:
\[match\;s\;r \;\dn\; \nullable(r\backslash s)\]

\noindent
Pictorially this algorithm can be illustrated as follows:

\begin{center}
\begin{tikzcd}\label{graph:*}
r_0 \arrow[r, "\backslash c_0"]  & r_1 \arrow[r, "\backslash c_1"] & r_2 \arrow[r, dashed]  & r_n  \arrow[r,"nullable?"] & Yes/No
\end{tikzcd}
\end{center}

\noindent
where we start with a regular expression $r_0$, build successive
derivatives until we exhaust the string and then use \textit{nullable}
to test whether the result can match the empty string. It can be
relatively easily shown that this matcher is correct.
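\noindent
Since these definitions consist only of an inductive datatype and
structural recursion, they translate almost verbatim into functional
code. The following Scala sketch is our own rendering of the
definitions above (not code from the cited papers):

\begin{verbatim}
abstract class Rexp
case object ZERO extends Rexp                     // matches nothing
case object ONE extends Rexp                      // matches only the empty string
case class CHAR(c: Char) extends Rexp             // matches exactly the character c
case class ALT(r1: Rexp, r2: Rexp) extends Rexp   // r1 + r2
case class SEQ(r1: Rexp, r2: Rexp) extends Rexp   // r1 . r2
case class STAR(r: Rexp) extends Rexp             // r*

def nullable(r: Rexp): Boolean = r match {
  case ZERO => false
  case ONE => true
  case CHAR(_) => false
  case ALT(r1, r2) => nullable(r1) || nullable(r2)
  case SEQ(r1, r2) => nullable(r1) && nullable(r2)
  case STAR(_) => true
}

def der(c: Char, r: Rexp): Rexp = r match {
  case ZERO => ZERO
  case ONE => ZERO
  case CHAR(d) => if (c == d) ONE else ZERO
  case ALT(r1, r2) => ALT(der(c, r1), der(c, r2))
  case SEQ(r1, r2) =>
    if (nullable(r1)) ALT(SEQ(der(c, r1), r2), der(c, r2))
    else SEQ(der(c, r1), r2)
  case STAR(r1) => SEQ(der(c, r1), STAR(r1))
}

def ders(s: List[Char], r: Rexp): Rexp = s match {
  case Nil => r
  case c :: cs => ders(cs, der(c, r))
}

def matches(r: Rexp, s: String): Boolean = nullable(ders(s.toList, r))
\end{verbatim}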
One limitation, however, of Brzozowski's algorithm is that it only
produces a YES/NO answer for whether a string is matched by a regular
expression. Sulzmann and Lu~\cite{Sulzmann2014} extended this
algorithm so that it also generates an actual witness for \emph{how}
the regular expression matches the string, called a \emph{value}.

\begin{center}
\begin{tabular}{c@{\hspace{20mm}}c}
\begin{tabular}{@{}rrl@{}}
\multicolumn{3}{@{}l}{\textbf{Regular Expressions}}\medskip\\
$r$ & $::=$  & $\ZERO$\\
    & $\mid$ & $\ONE$ \\
    & $\mid$ & $c$ \\
    & $\mid$ & $r_1 \cdot r_2$\\
    & $\mid$ & $r_1 + r_2$ \\
    & $\mid$ & $r^*$ \\
\end{tabular}
&
\begin{tabular}{@{\hspace{0mm}}rrl@{}}
\multicolumn{3}{@{}l}{\textbf{Values}}\medskip\\
$v$ & $::=$  & $\Empty$ \\
    & $\mid$ & $\Char(c)$ \\
    & $\mid$ & $\Seq\,v_1\, v_2$\\
    & $\mid$ & $\Left(v)$ \\
    & $\mid$ & $\Right(v)$ \\
    & $\mid$ & $\Stars\,[v_1,\ldots\,v_n]$ \\
\end{tabular}
\end{tabular}
\end{center}

\noindent
Values and regular expressions correspond to each other, as illustrated
by placing each value form next to the regular expression constructor
it is built for. The idea of values is to express parse trees. Suppose
a flatten operation, written $|v|$, extracts the underlying string of a
value $v$; for example, $|\Seq\,(\Char\,c)\,(\Char\,d)| = cd$. We omit
this straightforward definition. Using this flatten notation, we can
now elaborate how values express parse trees: a value
$\Seq\,v_1\,v_2$ tells us how the string $|v_1| \cdot |v_2|$ matches
the regular expression $r_1 \cdot r_2$: $r_1$ matches $|v_1|$ and,
respectively, $r_2$ matches $|v_2|$. Exactly how these two substrings
are matched is recorded in the sub-structures of $v_1$ and $v_2$.

To give a concrete example of how values work, consider the string $xy$
and the regular expression $(x + (y + xy))^*$. We can view this regular
expression as a tree. If the string $xy$ is matched by two Star
``iterations'', then the $x$ is matched by the left-most alternative in
this tree and the $y$ by the left alternative nested inside the right
alternative. This matching can be recorded as

\begin{center}
$\Stars\,[\Left\,(\Char\,x), \Right(\Left(\Char\,y))]$
\end{center}

\noindent
where $\Stars$ records how many iterations were used; and $\Left$,
respectively $\Right$, which alternative was taken. The value for
matching $xy$ in a single ``iteration'', i.e.~the POSIX value, would
look as follows

\begin{center}
$\Stars\,[\Right(\Right(\Seq\,(\Char\,x)\,(\Char\,y)))]$
\end{center}

\noindent
where $\Stars$ has only a single-element list for the single iteration
and $\Seq$ indicates that $xy$ is matched by the sequence regular
expression in the right-most alternative.
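\noindent
In a functional language, values are again just an inductive
datatype. A sketch in the same style as before, together with the
flatten operation $|v|$ mentioned above:

\begin{verbatim}
abstract class Val
case object Empty extends Val                  // value for ONE
case class Chr(c: Char) extends Val            // value for CHAR(c)
case class Sequ(v1: Val, v2: Val) extends Val  // value for SEQ
case class Left(v: Val) extends Val            // left alternative of ALT
case class Right(v: Val) extends Val           // right alternative of ALT
case class Stars(vs: List[Val]) extends Val    // the iterations of a STAR

// the flatten operation |v|: the underlying string of a value
def flatten(v: Val): String = v match {
  case Empty => ""
  case Chr(c) => c.toString
  case Left(v1) => flatten(v1)
  case Right(v1) => flatten(v1)
  case Sequ(v1, v2) => flatten(v1) + flatten(v2)
  case Stars(vs) => vs.map(flatten).mkString
}
\end{verbatim}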
The contribution of Sulzmann and Lu is an extension of Brzozowski's
algorithm by a second phase (the first phase being building successive
derivatives). In this second phase, for every successful match the
corresponding POSIX value is computed. Pictorially, the Sulzmann and
Lu algorithm works as in the following diagram (the diagram of the
simple matcher that gives just a YES/NO answer was shown at
\ref{graph:*}):\\

\begin{tikzcd}
r_0 \arrow[r, "\backslash c_0"]  \arrow[d] & r_1 \arrow[r, "\backslash c_1"] \arrow[d] & r_2 \arrow[r, dashed] \arrow[d] & r_n \arrow[d, "mkeps" description] \\
v_0           & v_1 \arrow[l,"inj_{r_0} c_0"]                & v_2 \arrow[l, "inj_{r_1} c_1"]              & v_n \arrow[l, dashed]
\end{tikzcd}

We shall briefly explain this process. For convenience we fix the
following notation: the regular expression $r$ used for matching is
also called $r_0$ and the string $s$ is composed of $n$ characters
$c_0 c_1 \ldots c_{n-1}$. First, we build the successive derivatives
$r_1$, $r_2$, \ldots{} using the characters $c_0$, $c_1$, \ldots{}
until we arrive at the final derivative $r_n$. We then test whether
$r_n$ is $\nullable$ or not. If not, we know immediately that the
string does not match the regular expression. If yes, we start
building the parse tree incrementally. We first call $\mkeps$ (short
for ``make epsilon''), which constructs the parse tree $v_n$ for how
the final derivative $r_n$ matches the empty string:

\begin{center}
\begin{tabular}{lcl}
  $\mkeps(\ONE)$ & $\dn$ & $\Empty$\\
  $\mkeps(r_1 + r_2)$ & $\dn$ & $\mathit{if}\;\nullable(r_1)\;\mathit{then}\;\Left(\mkeps(r_1))\;\mathit{else}\;\Right(\mkeps(r_2))$\\
  $\mkeps(r_1 \cdot r_2)$ & $\dn$ & $\Seq\,(\mkeps(r_1))\,(\mkeps(r_2))$\\
  $\mkeps(r^*)$ & $\dn$ & $\Stars\,[]$
\end{tabular}
\end{center}

\noindent
After this, we inject back the characters one by one, in reverse order
as they were chopped off, building the parse tree $v_i$ for how the
regular expression $r_i$ matches the string $s_i$ (where $s_i$ is $s$
with the first $i$ characters removed) from the previous parse tree
$v_{i+1}$. After $n$ such transformations, we get the parse tree for
how $r_0$ matches $s$, exactly as we wanted. An inductive proof of
correctness can be routinely established.
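\noindent
In the Scala sketch, $\mkeps$ and the injection function take the
following form (our own rendering of the definitions in
\cite{Sulzmann2014,AusafDyckhoffUrban2016}):

\begin{verbatim}
def mkeps(r: Rexp): Val = r match {
  case ONE => Empty
  case ALT(r1, r2) =>
    if (nullable(r1)) Left(mkeps(r1)) else Right(mkeps(r2))
  case SEQ(r1, r2) => Sequ(mkeps(r1), mkeps(r2))
  case STAR(_) => Stars(Nil)
}

// inj r c v: transforms a value v for der(c, r) into a value for r
def inj(r: Rexp, c: Char, v: Val): Val = (r, v) match {
  case (STAR(r1), Sequ(v1, Stars(vs))) => Stars(inj(r1, c, v1) :: vs)
  case (SEQ(r1, _), Sequ(v1, v2)) => Sequ(inj(r1, c, v1), v2)
  case (SEQ(r1, _), Left(Sequ(v1, v2))) => Sequ(inj(r1, c, v1), v2)
  case (SEQ(r1, r2), Right(v2)) => Sequ(mkeps(r1), inj(r2, c, v2))
  case (ALT(r1, _), Left(v1)) => Left(inj(r1, c, v1))
  case (ALT(_, r2), Right(v2)) => Right(inj(r2, c, v2))
  case (CHAR(_), Empty) => Chr(c)
}

// derivatives on the way in, injections on the way out
def lex(r: Rexp, s: List[Char]): Val = s match {
  case Nil => if (nullable(r)) mkeps(r) else sys.error("not matched")
  case c :: cs => inj(r, c, lex(der(c, r), cs))
}
\end{verbatim}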
It is instructive to see how these two phases work on a small
example. Suppose we have the regular expression $(a+b+ab+c+abc)^*$ and
we want to match it against the string $abc$. By the POSIX rules the
lexer should go for the longest match, i.e.~it should match the string
$abc$ in one star iteration, using the longest alternative $abc$ in
the sub-expression $a+b+ab+c+abc$ (we use $r$ to denote this
sub-expression for conciseness). Here is how the lexer constructs a
parse tree for this match. First, we build the successive derivatives
until we exhaust the string, as illustrated here (we omit some
parentheses for better readability):

\[ r^* \xrightarrow{\backslash a} r_1 = (1+0+1 \cdot b + 0 + 1 \cdot b \cdot c) \cdot r^* \xrightarrow{\backslash b}\]
\[r_2 = (0+0+1 \cdot 1 + 0 + 1 \cdot 1 \cdot c) \cdot r^* +(0+1+0 + 0 + 0) \cdot r^* \xrightarrow{\backslash c}\]
\[r_3 = ((0+0+0 + 0 + 1 \cdot 1 \cdot 1) \cdot r^* + (0+0+0 + 1 + 0) \cdot r^*) +((0+1+0 + 0 + 0) \cdot r^*+(0+0+0 + 1 + 0) \cdot r^* )\]

Now instead of using $\nullable$ to give a simple yes, we call
$\mkeps$ to construct a parse tree for how $r_3$ matches the empty
string. $\mkeps$ gives the following value $v_3$:\\
$Left(Left(Seq(Right(Right(Right(Seq(Empty, Seq(Empty, Empty))))), Stars\,[])))$\\
This corresponds to the leftmost term
$(0+0+0 + 0 + 1 \cdot 1 \cdot 1) \cdot r^*$ in $r_3$. Note that its
leftmost position allows $\mkeps$ to choose it as the first candidate
that satisfies the requirement of being $\nullable$. This position is
a consequence of the splitting clause of the derivative operation,
\[(r_1 \cdot r_2)\backslash c \;=\; (r_1\backslash c) \cdot r_2 \,+\, r_2\backslash c \qquad \text{(when $r_1$ is nullable),}\]
which puts $(r_1 \backslash c) \cdot r_2$ at the front and
$r_2 \backslash c$ at the back. This allows $\mkeps$ to always pick,
among two possible matches, the one with the longer initial match. The
value $v_3$ tells us how the empty string is matched by the final
regular expression after all the derivative steps: among the
alternatives in
$((0+0+0 + 0 + 1 \cdot 1 \cdot 1) \cdot r^* + (0+0+0 + 1 + 0) \cdot r^*) +((0+1+0 + 0 + 0) \cdot r^*+(0+0+0 + 1 + 0) \cdot r^* )$
we choose the leftmost nullable one, which is composed of a sequence
of a nested alternative and a folded star that iterates zero times; in
that nested alternative we take the rightmost branch.

Using the value $v_3$, the character $c$, and the regular expression
$r_2$, we can recover how $r_2$ matched the string $[c]$: we inject
$c$ back into $v_3$ and get\\
$v_2 = Left(Seq(Right(Right(Right(Seq(Empty, Seq(Empty, Char(c)))))), Stars\,[]))$\\
which tells us how $r_2$ matched $c$. After this we inject back the
character $b$ and get\\
$v_1 = Seq(Right(Right(Right(Seq(Empty, Seq(Char(b), Char(c)))))), Stars\,[])$\\
for how $r_1= (1+0+1 \cdot b + 0 + 1 \cdot b \cdot c) \cdot r^*$
matched the string $bc$ before it was split into two pieces. Finally,
after injecting the character $a$ back into $v_1$, we get the parse
tree $v_0= Stars\,[Right(Right(Right(Seq(Char(a), Seq(Char(b), Char(c))))))]$
for how $r^*$ matched $abc$. The details of the injection function are
given in Sulzmann and Lu's paper \cite{Sulzmann2014}.

Readers might have noticed that the parse tree information is actually
already available when building the derivatives. For example,
immediately after the operation $\backslash a$ we know that if we want
to match a string that starts with $a$, we can take the initial match
to be
\begin{enumerate}
\item[1)] just $a$,
\item[2)] the string $ab$, or
\item[3)] the string $abc$.
\end{enumerate}
In order to differentiate between these choices, we just need to
remember their positions: $a$ is on the left, $ab$ is in the middle,
and $abc$ is on the right. Which one of these alternatives is chosen
later does not affect their relative position, because our algorithm
does not change this order. Hence there is no need to traverse this
information twice. This leads to a new approach to lexing: if we store
the parse tree information inside the corresponding pieces of the
regular expression, update this information when performing derivative
operations, and collect the information when finished with the
derivatives by calling $\mkeps$ to decide which branch is POSIX, then
we can generate the parse tree in one pass, instead of doing an
$n$-step backward transformation. This leads to Sulzmann and Lu's
novel idea of using bit-codes in derivatives.

In the next section, we shall focus on the bit-coded algorithm and the
natural process of simplification of regular expressions using
bit-codes, which is needed in order to obtain \emph{fast} versions of
Brzozowski's, and Sulzmann and Lu's, algorithms. This is where the
PhD-project hopes to advance the state-of-the-art.

\section{Simplification of Regular Expressions}

Using bit-codes to guide parsing is not a new idea. The main drawback
of building successive derivatives according to Brzozowski's
definition is that they can grow very quickly in size. This is mainly
due to the fact that the derivative operation often generates
``useless'' $\ZERO$s and $\ONE$s. As a result, if implemented naively,
both algorithms, by Brzozowski and by Sulzmann and Lu, are
excruciatingly slow. For example, when starting with the regular
expression $(a + aa)^*$ and building 12 successive derivatives
w.r.t.~the character $a$, one obtains a derivative regular expression
with more than 8000 nodes (when viewed as a tree). Operations like
derivative and $\nullable$ need to traverse such trees and
consequently the bigger the size of the derivative, the slower the
algorithm. Fortunately, one can simplify regular expressions after
each derivative step. Various simplifications of regular expressions
are possible, such as the simplifications of $\ZERO + r$,
$r + \ZERO$, $\ONE\cdot r$, $r \cdot \ONE$, and $r + r$ to just
$r$. These simplifications do not affect the answer for whether a
regular expression matches a string or not, but fortunately also do
not affect the POSIX strategy of how regular expressions match
strings---although the latter is much harder to establish. Some
initial results in this regard have been obtained in
\cite{AusafDyckhoffUrban2016}. However, what has not been achieved yet
is a very tight bound for the size.
Such a tight bound is suggested by work of Antimirov, who proved that
(partial) derivatives can be bounded by the number of characters
contained in the initial regular expression \cite{Antimirov95}.
Antimirov defined the \emph{partial derivatives} of regular
expressions, which are essentially sets of regular expressions that
come from the sub-structure of the original regular expression:

\begin{center}
\begin{tabular}{lcl}
  $\textit{pder} \; c \; \ZERO$ & $\dn$ & $\emptyset$\\
  $\textit{pder} \; c \; \ONE$  & $\dn$ & $\emptyset$\\
  $\textit{pder} \; c \; d$     & $\dn$ & $\mathit{if} \; c = d \; \mathit{then} \; \{\ONE\} \; \mathit{else} \; \emptyset$\\
  $\textit{pder} \; c \; (r_1 + r_2)$ & $\dn$ & $\textit{pder} \; c \; r_1 \,\cup\, \textit{pder} \; c \; r_2$\\
  $\textit{pder} \; c \; (r_1 \cdot r_2)$ & $\dn$ & $\mathit{if} \; \nullable(r_1)$\\
  & & $\mathit{then} \; \{ r \cdot r_2 \mid r \in \textit{pder} \; c \; r_1 \} \,\cup\, \textit{pder} \; c \; r_2$\\
  & & $\mathit{else} \; \{ r \cdot r_2 \mid r \in \textit{pder} \; c \; r_1 \}$\\
  $\textit{pder} \; c \; (r^*)$ & $\dn$ & $\{ r' \cdot r^* \mid r' \in \textit{pder} \; c \; r \}$
\end{tabular}
\end{center}

\noindent
Antimirov also proved a nice bound on the size of partial derivatives:
roughly speaking, the size will not exceed the fourth power of the
number of nodes in the initial regular expression. Interestingly, we
observed in experiments that, after our simplification step, our
derivative regular expressions are of the same size as, or smaller
than, the partial derivatives. This would allow us to prove a tight
bound on the size of the regular expressions during the running of the
algorithm, provided we can establish the connection between our
simplification rules and partial derivatives.

We first followed Sulzmann and Lu's idea of introducing
\emph{annotated regular expressions}~\cite{Sulzmann2014}. They are
defined by the following grammar:

\begin{center}
\begin{tabular}{lcl}
  $\textit{a}$ & $::=$  & $\textit{ZERO}$\\
  & $\mid$ & $\textit{ONE}\;\;bs$\\
  & $\mid$ & $\textit{CHAR}\;\;bs\,c$\\
  & $\mid$ & $\textit{ALTS}\;\;bs\,as$\\
  & $\mid$ & $\textit{SEQ}\;\;bs\,a_1\,a_2$\\
  & $\mid$ & $\textit{STAR}\;\;bs\,a$
\end{tabular}
\end{center}

\noindent
where $bs$ stands for bitsequences, and $as$ (in \textit{ALTS}) for a
list of annotated regular expressions. These bitsequences encode
information about the (POSIX) value that should be generated by the
Sulzmann and Lu algorithm. Bitcodes are essentially incomplete
values. This can be straightforwardly seen in the following
transformation of values into bitsequences:

\begin{center}
\begin{tabular}{lcl}
  $\textit{code}(\Empty)$ & $\dn$ & $[]$\\
  $\textit{code}(\Char\,c)$ & $\dn$ & $[]$\\
  $\textit{code}(\Left\,v)$ & $\dn$ & $\Z :: code(v)$\\
  $\textit{code}(\Right\,v)$ & $\dn$ & $\S :: code(v)$\\
  $\textit{code}(\Seq\,v_1\,v_2)$ & $\dn$ & $code(v_1) \,@\, code(v_2)$\\
  $\textit{code}(\Stars\,[])$ & $\dn$ & $[\S]$\\
  $\textit{code}(\Stars\,(v\!::\!vs))$ & $\dn$ & $\Z :: code(v) \;@\; code(\Stars\,vs)$
\end{tabular}
\end{center}

\noindent
where $\Z$ and $\S$ are arbitrary names for the bits in the
bitsequences. The function \textit{code} encodes a value as a
bitsequence by converting $\Left$ into $\Z$, $\Right$ into $\S$, the
start of a non-empty star iteration into $\Z$, and the border where a
star terminates into $\S$. This encoding is deliberately lossy: it
throws away the character information and does not even record the
boundary between the two operands of a sequence. Moreover, from the
bitcode alone we cannot tell whether the $\S$s and $\Z$s stand for
$\Left$/$\Right$ or for $\Stars$. The reason for choosing this compact
way of storing information is that bits are small and can be easily
moved around during the lexing process.
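\noindent
In the Scala sketch, annotated regular expressions and the
\textit{code} function can be rendered as follows (we represent bits
as booleans and use a binary \textit{ALT} constructor; the list-based
\textit{ALTS} is the modification discussed at the end of this
section):

\begin{verbatim}
type Bits = List[Boolean]
val Z = false   // the bit Z
val S = true    // the bit S

abstract class ARexp
case object AZERO extends ARexp
case class AONE(bs: Bits) extends ARexp
case class ACHAR(bs: Bits, c: Char) extends ARexp
case class AALT(bs: Bits, a1: ARexp, a2: ARexp) extends ARexp
case class ASEQ(bs: Bits, a1: ARexp, a2: ARexp) extends ARexp
case class ASTAR(bs: Bits, a: ARexp) extends ARexp

// code: erase a value into the bitsequence that identifies it
def code(v: Val): Bits = v match {
  case Empty => Nil
  case Chr(_) => Nil
  case Left(v1) => Z :: code(v1)
  case Right(v1) => S :: code(v1)
  case Sequ(v1, v2) => code(v1) ++ code(v2)
  case Stars(Nil) => List(S)
  case Stars(v1 :: vs) => Z :: code(v1) ++ code(Stars(vs))
}
\end{verbatim}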
In order to recover a value from a bitcode, we need the regular
expression as additional information; \textit{decode} then turns a
bitsequence back into a value (note that the bits $\Z$ and $\S$ in the
star cases match the definition of \textit{code} above: a $\Z$ starts
a further iteration, an $\S$ terminates the star):

\begin{center}
\begin{tabular}{@{}l@{\hspace{1mm}}c@{\hspace{1mm}}l@{}}
  $\textit{decode}'\,bs\,(\ONE)$ & $\dn$ & $(\Empty, bs)$\\
  $\textit{decode}'\,bs\,(c)$ & $\dn$ & $(\Char\,c, bs)$\\
  $\textit{decode}'\,(\Z\!::\!bs)\;(r_1 + r_2)$ & $\dn$ &
     $\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r_1\;\textit{in}\;
       (\Left\,v, bs_1)$\\
  $\textit{decode}'\,(\S\!::\!bs)\;(r_1 + r_2)$ & $\dn$ &
     $\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r_2\;\textit{in}\;
       (\Right\,v, bs_1)$\\
  $\textit{decode}'\,bs\;(r_1\cdot r_2)$ & $\dn$ &
        $\textit{let}\,(v_1, bs_1) = \textit{decode}'\,bs\,r_1\;\textit{in}$\\
  & &   $\textit{let}\,(v_2, bs_2) = \textit{decode}'\,bs_1\,r_2$\\
  & &   \hspace{35mm}$\textit{in}\;(\Seq\,v_1\,v_2, bs_2)$\\
  $\textit{decode}'\,(\S\!::\!bs)\,(r^*)$ & $\dn$ & $(\Stars\,[], bs)$\\
  $\textit{decode}'\,(\Z\!::\!bs)\,(r^*)$ & $\dn$ &
         $\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r\;\textit{in}$\\
  & &   $\textit{let}\,(\Stars\,vs, bs_2) = \textit{decode}'\,bs_1\,r^*$\\
  & &   \hspace{35mm}$\textit{in}\;(\Stars\,(v\!::\!vs), bs_2)$\bigskip\\
  $\textit{decode}\,bs\,r$ & $\dn$ &
     $\textit{let}\,(v, bs') = \textit{decode}'\,bs\,r\;\textit{in}$\\
  & & $\textit{if}\;bs' = []\;\textit{then}\;\textit{Some}\,v\;
       \textit{else}\;\textit{None}$
\end{tabular}
\end{center}

To do lexing with annotated regular expressions, we first transform
the usual (un-annotated) regular expressions into annotated ones:

\begin{center}
\begin{tabular}{lcl}
  $(\ZERO)^\uparrow$ & $\dn$ & $\textit{ZERO}$\\
  $(\ONE)^\uparrow$ & $\dn$ & $\textit{ONE}\,[]$\\
  $(c)^\uparrow$ & $\dn$ & $\textit{CHAR}\,[]\,c$\\
  $(r_1 + r_2)^\uparrow$ & $\dn$ &
         $\textit{ALT}\;[]\,(\textit{fuse}\,[\Z]\,r_1^\uparrow)\,
                            (\textit{fuse}\,[\S]\,r_2^\uparrow)$\\
  $(r_1\cdot r_2)^\uparrow$ & $\dn$ &
         $\textit{SEQ}\;[]\,r_1^\uparrow\,r_2^\uparrow$\\
  $(r^*)^\uparrow$ & $\dn$ &
         $\textit{STAR}\;[]\,r^\uparrow$\\
\end{tabular}
\end{center}

\noindent
where we write \textit{ALT} for the binary version of \textit{ALTS},
and where the operation $\textit{fuse}\,bs\,a$ attaches the bits $bs$
to the front of the top-most bitsequence in $a$.
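\noindent
In the Scala sketch, \textit{fuse}, the translation into annotated
regular expressions and the decoding function look as follows:

\begin{verbatim}
// fuse: attach bits to the front of the top-most bitsequence
def fuse(bs: Bits, a: ARexp): ARexp = a match {
  case AZERO => AZERO
  case AONE(bs1) => AONE(bs ++ bs1)
  case ACHAR(bs1, c) => ACHAR(bs ++ bs1, c)
  case AALT(bs1, a1, a2) => AALT(bs ++ bs1, a1, a2)
  case ASEQ(bs1, a1, a2) => ASEQ(bs ++ bs1, a1, a2)
  case ASTAR(bs1, a1) => ASTAR(bs ++ bs1, a1)
}

// internalise: the (_)^ translation
def internalise(r: Rexp): ARexp = r match {
  case ZERO => AZERO
  case ONE => AONE(Nil)
  case CHAR(c) => ACHAR(Nil, c)
  case ALT(r1, r2) =>
    AALT(Nil, fuse(List(Z), internalise(r1)), fuse(List(S), internalise(r2)))
  case SEQ(r1, r2) => ASEQ(Nil, internalise(r1), internalise(r2))
  case STAR(r1) => ASTAR(Nil, internalise(r1))
}

// decode' returns the value together with the left-over bits
def decodePrime(bs: Bits, r: Rexp): (Val, Bits) = (r, bs) match {
  case (ONE, _) => (Empty, bs)
  case (CHAR(c), _) => (Chr(c), bs)
  case (ALT(r1, _), Z :: bs1) =>
    val (v, bs2) = decodePrime(bs1, r1); (Left(v), bs2)
  case (ALT(_, r2), S :: bs1) =>
    val (v, bs2) = decodePrime(bs1, r2); (Right(v), bs2)
  case (SEQ(r1, r2), _) =>
    val (v1, bs1) = decodePrime(bs, r1)
    val (v2, bs2) = decodePrime(bs1, r2)
    (Sequ(v1, v2), bs2)
  case (STAR(r1), Z :: bs1) =>
    val (v, bs2) = decodePrime(bs1, r1)
    val (Stars(vs), bs3) = decodePrime(bs2, STAR(r1))
    (Stars(v :: vs), bs3)
  case (STAR(_), S :: bs1) => (Stars(Nil), bs1)
}

def decode(bs: Bits, r: Rexp): Option[Val] = decodePrime(bs, r) match {
  case (v, Nil) => Some(v)
  case _ => None
}
\end{verbatim}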
Then we perform successive derivative operations on the annotated
regular expressions. This derivative operation is the same as the one
for simple regular expressions, except that we take special care of
the bits that store the parse tree information:

\begin{center}
\begin{tabular}{@{}lcl@{}}
  $(\textit{ZERO})\backslash c$ & $\dn$ & $\textit{ZERO}$\\
  $(\textit{ONE}\;bs)\backslash c$ & $\dn$ & $\textit{ZERO}$\\
  $(\textit{CHAR}\;bs\,d)\backslash c$ & $\dn$ &
        $\textit{if}\;c=d\; \;\textit{then}\;
         \textit{ONE}\;bs\;\textit{else}\;\textit{ZERO}$\\
  $(\textit{ALT}\;bs\,a_1\,a_2)\backslash c$ & $\dn$ &
        $\textit{ALT}\,bs\,(a_1\backslash c)\,(a_2\backslash c)$\\
  $(\textit{SEQ}\;bs\,a_1\,a_2)\backslash c$ & $\dn$ &
     $\textit{if}\;\textit{bnullable}\,a_1$\\
  & &$\textit{then}\;\textit{ALT}\,bs\,(\textit{SEQ}\,[]\,(a_1\backslash c)\,a_2)$\\
  & &$\phantom{\textit{then}\;\textit{ALT}\,bs\,}(\textit{fuse}\,(\textit{bmkeps}\,a_1)\,(a_2\backslash c))$\\
  & &$\textit{else}\;\textit{SEQ}\,bs\,(a_1\backslash c)\,a_2$\\
  $(\textit{STAR}\,bs\,a)\backslash c$ & $\dn$ &
      $\textit{SEQ}\;bs\,(\textit{fuse}\, [\Z]\, (a\backslash c))\,
       (\textit{STAR}\,[]\,a)$
\end{tabular}
\end{center}

\noindent
This way, we do not need the injection function and a second phase;
instead we only need to collect the bits while running $mkeps$ on the
final derivative (\textit{bnullable} is $\nullable$ lifted to
annotated regular expressions):

\begin{center}
\begin{tabular}{lcl}
  $\textit{bmkeps}\,(\textit{ONE}\,bs)$ & $\dn$ & $bs$\\
  $\textit{bmkeps}\,(\textit{ALT}\,bs\,a_1\,a_2)$ & $\dn$ &
     $\textit{if}\;\textit{bnullable}\,a_1$\\
  & &$\textit{then}\;bs\,@\,\textit{bmkeps}\,a_1$\\
  & &$\textit{else}\;bs\,@\,\textit{bmkeps}\,a_2$\\
  $\textit{bmkeps}\,(\textit{SEQ}\,bs\,a_1\,a_2)$ & $\dn$ &
     $bs \,@\,\textit{bmkeps}\,a_1\,@\, \textit{bmkeps}\,a_2$\\
  $\textit{bmkeps}\,(\textit{STAR}\,bs\,a)$ & $\dn$ &
     $bs \,@\, [\S]$
\end{tabular}
\end{center}

\noindent
and then decode the resulting bits using the (un-annotated) regular
expression. Putting these pieces together, the whole lexing process
looks like this:

\begin{center}
\begin{tabular}{lcl}
  $\textit{blexer}\;r\,s$ & $\dn$ &
      $\textit{let}\;a = (r^\uparrow)\backslash s\;\textit{in}$\\
  & & $\;\;\textit{if}\; \textit{bnullable}(a)$\\
  & & $\;\;\textit{then}\;\textit{decode}\,(\textit{bmkeps}\,a)\,r$\\
  & & $\;\;\textit{else}\;\textit{None}$
\end{tabular}
\end{center}
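\noindent
In the Scala sketch, the bitcoded derivative, \textit{bmkeps} and the
lexer become:

\begin{verbatim}
def bnullable(a: ARexp): Boolean = a match {
  case AZERO => false
  case AONE(_) => true
  case ACHAR(_, _) => false
  case AALT(_, a1, a2) => bnullable(a1) || bnullable(a2)
  case ASEQ(_, a1, a2) => bnullable(a1) && bnullable(a2)
  case ASTAR(_, _) => true
}

def bmkeps(a: ARexp): Bits = a match {
  case AONE(bs) => bs
  case AALT(bs, a1, a2) =>
    if (bnullable(a1)) bs ++ bmkeps(a1) else bs ++ bmkeps(a2)
  case ASEQ(bs, a1, a2) => bs ++ bmkeps(a1) ++ bmkeps(a2)
  case ASTAR(bs, _) => bs ++ List(S)
}

def bder(c: Char, a: ARexp): ARexp = a match {
  case AZERO => AZERO
  case AONE(_) => AZERO
  case ACHAR(bs, d) => if (c == d) AONE(bs) else AZERO
  case AALT(bs, a1, a2) => AALT(bs, bder(c, a1), bder(c, a2))
  case ASEQ(bs, a1, a2) =>
    if (bnullable(a1))
      AALT(bs, ASEQ(Nil, bder(c, a1), a2), fuse(bmkeps(a1), bder(c, a2)))
    else ASEQ(bs, bder(c, a1), a2)
  case ASTAR(bs, a1) => ASEQ(bs, fuse(List(Z), bder(c, a1)), ASTAR(Nil, a1))
}

def bders(s: List[Char], a: ARexp): ARexp = s match {
  case Nil => a
  case c :: cs => bders(cs, bder(c, a))
}

def blexer(r: Rexp, s: String): Option[Val] = {
  val a = bders(s.toList, internalise(r))
  if (bnullable(a)) decode(bmkeps(a), r) else None
}
\end{verbatim}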
The main point of the bitsequences and annotated regular expressions
is that we can apply rather aggressive (in terms of size)
simplification rules in order to keep derivatives small. We have
developed such ``aggressive'' simplification rules and generated test
data showing that the expected size bound can be achieved. Obviously we
could only partially cover the search space, as there are infinitely
many regular expressions and strings. One modification we introduced
is to allow a list of annotated regular expressions in the
\textit{ALTS} constructor. This allows us to not just delete
unnecessary $\ZERO$s and $\ONE$s from regular expressions, but also
unnecessary ``copies'' of regular expressions (very similar to
simplifying $r + r$ to just $r$, but in a more general setting). A
pseudocode version of our simplification algorithm is given below:

\begin{algorithm}
\caption{Simplification of annotated regular expressions}\label{simp}
\begin{algorithmic}[1]
\Procedure{$Simp$}{$areg$}
\Switch{$areg$}
  \Case{$ALTS(bs, rs)$}
    \For{each $rs[i]$ in array $rs$}
      \State $rs[i] \gets \textit{Simp}(rs[i])$
    \EndFor
    \For{each $rs[i]$ in array $rs$}
      \If{$rs[i] == ALTS(bs', rs')$}
        \State $rs'' \gets$ attach bits $bs'$ to all elements in $rs'$
        \State replace $rs[i]$ by the elements of $rs''$ (the nested
               alternative is flattened into $rs$ at position $i$)
      \EndIf
    \EndFor
    \State remove all duplicates in $rs$, keeping only the first copy
           of multiple occurrences of the same regular expression
    \State remove all $ZERO$s in $rs$
    \If{$rs.length == 0$} \Return $ZERO$ \EndIf
    \If{$rs.length == 1$} \Return $rs[0]$ \EndIf
  \EndCase
  \Case{$SEQ(bs, r_1, r_2)$}
    \If{$r_1$ or $r_2$ is $ZERO$} \Return $ZERO$ \EndIf
    \State update $r_1$ and $r_2$ by attaching $bs$ to their front
    \If{$r_1$ or $r_2$ is $ONE(bs')$} \Return $r_2$ or $r_1$, respectively \EndIf
  \EndCase
  \Case{$Others$}
    \Return $areg$ unchanged
  \EndCase
\EndSwitch
\EndProcedure
\end{algorithmic}
\end{algorithm}

With this simplification, the derivative with more than 8000 nodes in
our previous $(a + aa)^*$ example is reduced to just 6 nodes.
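\noindent
For the binary \textit{ALT} of our Scala sketch, the core of these
simplification rules can be written as follows (the list-based
\textit{ALTS} version in the pseudocode above generalises this to
$n$-ary alternatives; duplicates are detected on the underlying,
erased regular expressions):

\begin{verbatim}
// erase the bit-annotations
def erase(a: ARexp): Rexp = a match {
  case AZERO => ZERO
  case AONE(_) => ONE
  case ACHAR(_, c) => CHAR(c)
  case AALT(_, a1, a2) => ALT(erase(a1), erase(a2))
  case ASEQ(_, a1, a2) => SEQ(erase(a1), erase(a2))
  case ASTAR(_, a1) => STAR(erase(a1))
}

def simp(a: ARexp): ARexp = a match {
  case ASEQ(bs, a1, a2) => (simp(a1), simp(a2)) match {
    case (AZERO, _) => AZERO                     // 0 . r  =>  0
    case (_, AZERO) => AZERO                     // r . 0  =>  0
    case (AONE(bs1), s2) => fuse(bs ++ bs1, s2)  // 1 . r  =>  r
    case (s1, s2) => ASEQ(bs, s1, s2)
  }
  case AALT(bs, a1, a2) => (simp(a1), simp(a2)) match {
    case (AZERO, s2) => fuse(bs, s2)             // 0 + r  =>  r
    case (s1, AZERO) => fuse(bs, s1)             // r + 0  =>  r
    case (s1, s2) =>
      if (erase(s1) == erase(s2)) fuse(bs, s1)   // r + r  =>  r (first copy)
      else AALT(bs, s1, s2)
  }
  case a1 => a1
}

// blexer, but simplifying after each derivative step
def blexerSimp(r: Rexp, s: String): Option[Val] = {
  def go(a: ARexp, cs: List[Char]): ARexp = cs match {
    case Nil => a
    case c :: rest => go(simp(bder(c, a)), rest)
  }
  val a = go(internalise(r), s.toList)
  if (bnullable(a)) decode(bmkeps(a), r) else None
}
\end{verbatim}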
Another modification is that we use simplification rules inspired by
Antimirov's work on partial derivatives. They maintain the idea that
only the first ``copy'' of a regular expression in an alternative
contributes to the calculation of a POSIX value. All subsequent copies
can be pruned from the regular expression.

We are currently engaged in proving that our simplification rules
actually do not affect the POSIX value that should be generated by the
algorithm according to the specification of a POSIX value, and,
furthermore, that the derivatives stay small throughout the run of the
algorithm. For this proof we use the theorem prover Isabelle. Once
completed, this result will advance the state-of-the-art: Sulzmann and
Lu wrote in their paper \cite{Sulzmann2014} about the bitcoded
``incremental parsing method'' (that is the matching algorithm
outlined in this section):

\begin{quote}\it
  ``Correctness Claim: We further claim that the incremental parsing
  method in Figure~5 in combination with the simplification steps in
  Figure 6 yields POSIX parse trees. We have tested this claim
  extensively by using the method in Figure~3 as a reference but yet
  have to work out all proof details.''
\end{quote}

\noindent
We aim to settle this correctness claim and furthermore obtain a much
tighter bound on the sizes of derivatives. The result would be that
our algorithm is both correct and fast on all inputs: the exponential
blow-up, as observed in JavaScript, Python and Java, could then no
longer happen in our algorithm.

\section{Conclusion}

In this PhD-project we are interested in fast algorithms for regular
expression matching. While this seems to be a ``settled'' area, in
fact interesting research questions are popping up as soon as one
steps outside the classic automata theory (for example in terms of
what kind of regular expressions are supported). The reason why it is
interesting for us to look at the derivative approach introduced by
Brzozowski for regular expression matching, and then much further
developed by Sulzmann and Lu, is that derivatives can elegantly deal
with some of the regular expressions that are of interest in ``real
life''. This includes the not-regular expression, written $\neg\,r$
(that is, all strings that are not recognised by $r$), but also bounded
regular expressions such as $r^{\{n\}}$ and $r^{\{n..m\}}$. There is
also hope that the derivatives can provide another angle for how to
deal more efficiently with back-references, which are one of the
reasons why regular expression engines in JavaScript, Python and Java
choose not to implement the classic automata approach of transforming
regular expressions into NFAs and then DFAs---with back-references the
recognised languages are, in general, no longer regular and can
therefore not be captured by DFAs.

\bibliographystyle{plain}
\bibliography{root}

\end{document}