\documentclass[a4paper,UKenglish]{lipics}
\usepackage{graphicx}
\usepackage{pgfplots}
\usepackage{tikz-cd}
\usepackage{amsmath}
\usepackage[noend]{algpseudocode}
\usepackage{enumitem}
\definecolor{darkblue}{rgb}{0,0,0.6}
\hypersetup{colorlinks=true,allcolors=darkblue}

\title{POSIX Regular Expression Matching and Lexing}
\author{Chengsong Tan}
\affil{King's College London\\London, UK\\\texttt{chengsong.tan@kcl.ac.uk}}
\authorrunning{Chengsong Tan}
\Copyright{Chengsong Tan}

\newcommand{\dn}{\stackrel{\mbox{\scriptsize def}}{=}}%
\newcommand{\ZERO}{\mbox{\bf 0}}
\newcommand{\ONE}{\mbox{\bf 1}}
\def\lexer{\mathit{lexer}}
\def\mkeps{\mathit{mkeps}}
\def\inj{\mathit{inj}}
\def\Empty{\mathit{Empty}}
\def\Left{\mathit{Left}}
\def\Right{\mathit{Right}}
\def\Stars{\mathit{Stars}}
\def\Char{\mathit{Char}}
\def\Seq{\mathit{Seq}}
\def\Der{\mathit{Der}}
\def\nullable{\mathit{nullable}}
\def\Z{\mathit{Z}}
\def\S{\mathit{S}}

\algnewcommand\algorithmicswitch{\textbf{switch}}
\algnewcommand\algorithmiccase{\textbf{case}}
\algnewcommand\algorithmicassert{\texttt{assert}}
\algnewcommand\Assert[1]{\State \algorithmicassert(#1)}%
\algdef{SE}[SWITCH]{Switch}{EndSwitch}[1]{\algorithmicswitch\ #1\ \algorithmicdo}{\algorithmicend\ \algorithmicswitch}%
\algdef{SE}[CASE]{Case}{EndCase}[1]{\algorithmiccase\ #1}{\algorithmicend\ \algorithmiccase}%
\algtext*{EndSwitch}%
\algtext*{EndCase}%

\begin{document}
\maketitle

\begin{abstract}
  Brzozowski introduced in 1964 a beautifully simple algorithm for
  regular expression matching based on the notion of derivatives of
  regular expressions. In 2014, Sulzmann and Lu extended this
  algorithm so that it not only gives a YES/NO answer for whether a
  regular expression matches a string, but, in case it does, also
  answers \emph{how} it matches the string. This is important for
  applications such as lexing (tokenising a string). The problem is
  to make the algorithm by Sulzmann and Lu fast on all inputs without
  breaking its correctness. We have already developed some
  simplification rules for this, but have not yet proved that they
  preserve the correctness of the algorithm. We also have not yet
  looked at extended regular expressions, such as bounded repetitions,
  negation and back-references.
\end{abstract}

\section{Introduction}

This PhD-project is about regular expression matching and
lexing. Given the maturity of this topic, the reader might wonder:
Surely, regular expressions must have already been studied to death?
What could possibly be \emph{not} known in this area? And surely all
implemented algorithms for regular expression matching are blindingly
fast?

Unfortunately these preconceptions are not supported by evidence: Take
for example the regular expression $(a^*)^*\,b$ and ask whether
strings of the form $aa..a$ match this regular
expression. Obviously not---the expected $b$ in the last
position is missing. One would expect that modern regular expression
matching engines can find this out very quickly.
Alas, if one tries this example in JavaScript, Python or Java 8 with
strings of, say, 28 $a$'s, one discovers that this decision takes
around 30 seconds, and takes considerably longer when adding a few
more $a$'s, as the graphs below show:

\begin{center}
\begin{tabular}{@{}c@{\hspace{0mm}}c@{\hspace{0mm}}c@{}}
\begin{tikzpicture}
\begin{axis}[
    xlabel={$n$},
    x label style={at={(1.05,-0.05)}},
    ylabel={time in secs},
    enlargelimits=false,
    xtick={0,5,...,30},
    xmax=33,
    ymax=35,
    ytick={0,5,...,30},
    scaled ticks=false,
    axis lines=left,
    width=5cm,
    height=4cm,
    legend entries={JavaScript},
    legend pos=north west,
    legend cell align=left]
\addplot[red,mark=*, mark options={fill=white}] table {re-js.data};
\end{axis}
\end{tikzpicture}
&
\begin{tikzpicture}
\begin{axis}[
    xlabel={$n$},
    x label style={at={(1.05,-0.05)}},
    %ylabel={time in secs},
    enlargelimits=false,
    xtick={0,5,...,30},
    xmax=33,
    ymax=35,
    ytick={0,5,...,30},
    scaled ticks=false,
    axis lines=left,
    width=5cm,
    height=4cm,
    legend entries={Python},
    legend pos=north west,
    legend cell align=left]
\addplot[blue,mark=*, mark options={fill=white}] table {re-python2.data};
\end{axis}
\end{tikzpicture}
&
\begin{tikzpicture}
\begin{axis}[
    xlabel={$n$},
    x label style={at={(1.05,-0.05)}},
    %ylabel={time in secs},
    enlargelimits=false,
    xtick={0,5,...,30},
    xmax=33,
    ymax=35,
    ytick={0,5,...,30},
    scaled ticks=false,
    axis lines=left,
    width=5cm,
    height=4cm,
    legend entries={Java 8},
    legend pos=north west,
    legend cell align=left]
\addplot[cyan,mark=*, mark options={fill=white}] table {re-java.data};
\end{axis}
\end{tikzpicture}\\
\multicolumn{3}{c}{Graphs: Runtime for matching $(a^*)^*\,b$ with strings
           of the form $\underbrace{aa..a}_{n}$.}
\end{tabular}
\end{center}

\noindent
These are clearly abysmal and possibly surprising results. One
would expect these systems to do much better than that---after all,
given a DFA and a string, deciding whether the string is matched by the
DFA should take time linear in the length of the string.

Admittedly, the regular expression $(a^*)^*\,b$ is carefully chosen to
exhibit this exponential behaviour. Unfortunately, such regular
expressions are not just a few outliers. They are frequent enough to
have been given a separate name---\emph{evil regular expressions}. In
empirical work, Davis et al.\ report that they have found thousands of
such evil regular expressions in the JavaScript and Python ecosystems
\cite{Davis18}.

This exponential blowup in matching algorithms sometimes causes
considerable grief in real life: for example on 20 July 2016 one evil
regular expression brought the website
\href{http://stackexchange.com}{Stack Exchange} to its
knees.\footnote{\url{https://stackstatus.net/post/147710624694/outage-postmortem-july-20-2016}}
In this instance, a regular expression intended to just trim white
spaces from the beginning and the end of a line actually consumed
massive amounts of CPU-resources---causing web servers to
grind to a halt. This happened when a post with 20,000 white spaces was
submitted, but, importantly, the white spaces were neither at the
beginning nor at the end. As a result, the regular expression matching
engine needed to backtrack over many choices.
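\noindent
Such catastrophic backtracking is easy to reproduce. As an
illustration (our own, not part of the cited study), the following
Scala snippet exercises the JVM's backtracking \texttt{java.util.regex}
engine, which is what Java~8 uses, with the evil expression from above;
exact timings will differ from the graphs, but the exponential trend
shows clearly:

\begin{verbatim}
import java.util.regex.Pattern

object EvilDemo extends App {
  // (a*)*b forces the backtracker to try exponentially many
  // ways of splitting the a's before reporting "no match"
  val evil = Pattern.compile("(a*)*b")
  for (n <- 20 to 28) {
    val s = "a" * n                 // n a's, no final b
    val t0 = System.nanoTime()
    evil.matcher(s).matches()       // always false
    val t1 = System.nanoTime()
    println(f"n = $n%2d: ${(t1 - t0) / 1e9}%.2f secs")
  }
}
\end{verbatim}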
The underlying problem is that many ``real life'' regular expression
matching engines do not use DFAs for matching. This is because they
support regular expressions that are not covered by the classical
automata theory, and in this more general setting there are quite a few
research questions still unanswered and fast algorithms still need to
be developed (for example how to treat bounded repetitions, negation
and back-references efficiently).

There is also another under-researched problem to do with regular
expressions and lexing, i.e.~the process of breaking up strings into
sequences of tokens according to some regular expressions. In this
setting one is not just interested in whether or not a regular
expression matches a string, but also in \emph{how}. Consider for
example a regular expression $r_{key}$ for recognising keywords such as
\textit{if}, \textit{then} and so on; and a regular expression $r_{id}$
for recognising identifiers (say, a single character followed by
characters or numbers). One can then form the compound regular
expression $(r_{key} + r_{id})^*$ and use it to tokenise strings. But
then how should the string \textit{iffoo} be tokenised? It could be
tokenised as a keyword followed by an identifier, or the entire string
could be a single identifier. Similarly, how should the string
\textit{if} be tokenised? Both regular expressions, $r_{key}$ and
$r_{id}$, would ``fire''---so is it an identifier or a keyword? While
in applications there is a well-known strategy for deciding these
questions, called POSIX matching, precise definitions of what POSIX
matching actually means have been formalised only relatively recently
\cite{AusafDyckhoffUrban2016,OkuiSuzuki2010,Vansummeren2006}. Such a
definition has also been given by Sulzmann and Lu \cite{Sulzmann2014},
but the corresponding correctness proof turned out to be faulty
\cite{AusafDyckhoffUrban2016}. Roughly, POSIX matching means matching
the longest initial substring. In the case of a tie, the initial
sub-match is chosen according to some priorities attached to the
regular expressions (e.g.~keywords have a higher priority than
identifiers). This sounds rather simple, but according to Grathwohl et
al.\ \cite[Page 36]{CrashCourse2014} this is not the case. They wrote:

\begin{quote}\it{}
``The POSIX strategy is more complicated than the greedy because of
the dependence on information about the length of matched strings in
the various subexpressions.''
\end{quote}

\noindent
This is also supported by evidence collected by Kuklewicz
\cite{Kuklewicz} who noticed that a number of POSIX regular expression
matchers calculate incorrect results.

Our focus is on an algorithm introduced by Sulzmann and Lu in 2014 for
regular expression matching according to the POSIX strategy
\cite{Sulzmann2014}. Their algorithm is based on an older algorithm by
Brzozowski from 1964, in which he introduced the notion of derivatives
of regular expressions \cite{Brzozowski1964}. We shall briefly explain
this algorithm next.

\section{The Algorithm by Brzozowski based on Derivatives of Regular
Expressions}

Suppose (basic) regular expressions are given by the following grammar:
\[ r ::= \ZERO \mid \ONE \mid c \mid r_1 \cdot r_2 \mid r_1 + r_2 \mid r^* \]

\noindent
The intended meaning of the constructors is as follows: $\ZERO$
cannot match any string, $\ONE$ can match the empty string, the
character regular expression $c$ can match the character $c$, and so
on.

The ingenious contribution by Brzozowski is the notion of
\emph{derivatives} of regular expressions.
The idea behind this notion is as follows: suppose a regular
expression $r$ can match a string of the form $c\!::\! s$ (that is a
list of characters starting with $c$), what does the regular
expression look like that can match just $s$? Brzozowski gave a neat
answer to this question. He started with the definition of
$\nullable$:

\begin{center}
\begin{tabular}{lcl}
$\nullable(\ZERO)$ & $\dn$ & $\mathit{false}$ \\
$\nullable(\ONE)$ & $\dn$ & $\mathit{true}$ \\
$\nullable(c)$ & $\dn$ & $\mathit{false}$ \\
$\nullable(r_1 + r_2)$ & $\dn$ & $\nullable(r_1) \vee \nullable(r_2)$ \\
$\nullable(r_1\cdot r_2)$ & $\dn$ & $\nullable(r_1) \wedge \nullable(r_2)$ \\
$\nullable(r^*)$ & $\dn$ & $\mathit{true}$ \\
\end{tabular}
\end{center}

\noindent
This function simply tests whether the empty string is in $L(r)$ (we
use the classic notion of the \emph{language} of a regular expression,
written $L(\_)$). He then defined the following operation on regular
expressions, written $r\backslash c$ (the derivative of $r$
w.r.t.~the character $c$):

\begin{center}
\begin{tabular}{lcl}
$\ZERO \backslash c$ & $\dn$ & $\ZERO$\\
$\ONE \backslash c$ & $\dn$ & $\ZERO$\\
$d \backslash c$ & $\dn$ & $\mathit{if} \;c = d\;\mathit{then}\;\ONE\;\mathit{else}\;\ZERO$\\
$(r_1 + r_2)\backslash c$ & $\dn$ & $r_1 \backslash c \,+\, r_2 \backslash c$\\
$(r_1 \cdot r_2)\backslash c$ & $\dn$ & $\mathit{if} \, \nullable(r_1)$\\
  & & $\mathit{then}\;(r_1\backslash c) \cdot r_2 \,+\, r_2\backslash c$\\
  & & $\mathit{else}\;(r_1\backslash c) \cdot r_2$\\
$(r^*)\backslash c$ & $\dn$ & $(r\backslash c) \cdot r^*$\\
\end{tabular}
\end{center}

\noindent
The main property of the derivative operation is that

\begin{center}
$c\!::\!s \in L(r)$ holds
if and only if $s \in L(r\backslash c)$.
\end{center}

\noindent
For us the main advantage is that derivatives can be
straightforwardly implemented in any functional programming language,
and are easily definable and reasoned about in theorem provers---the
definitions just consist of inductive datatypes and simple recursive
functions. Moreover, the notion of derivatives can be easily
generalised to cover extended regular expression constructors such as
the not-regular expression, written $\neg\,r$, or bounded repetitions
(for example $r^{\{n\}}$ and $r^{\{n..m\}}$), which cannot be so
straightforwardly realised within the classic automata approach. For
the moment, however, we focus only on the usual basic regular
expressions.

Now if we want to find out whether a string $s$ matches a regular
expression $r$, we build the derivatives of $r$ w.r.t.\ (in
succession) all the characters of the string $s$. Finally, we test
whether the resulting regular expression can match the empty
string. If yes, then $r$ matches $s$; if no, it does not.
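\noindent
These definitions translate almost verbatim into a functional
programming language. Below is a minimal sketch in Scala (the datatype
and function names are our own choices, not taken from any of the
cited implementations):

\begin{verbatim}
abstract class Rexp
case object ZERO extends Rexp
case object ONE extends Rexp
case class CHAR(c: Char) extends Rexp
case class ALT(r1: Rexp, r2: Rexp) extends Rexp
case class SEQ(r1: Rexp, r2: Rexp) extends Rexp
case class STAR(r: Rexp) extends Rexp

// tests whether the empty string is in L(r)
def nullable(r: Rexp): Boolean = r match {
  case ZERO => false
  case ONE => true
  case CHAR(_) => false
  case ALT(r1, r2) => nullable(r1) || nullable(r2)
  case SEQ(r1, r2) => nullable(r1) && nullable(r2)
  case STAR(_) => true
}

// the derivative of r w.r.t. the character c
def der(c: Char, r: Rexp): Rexp = r match {
  case ZERO => ZERO
  case ONE => ZERO
  case CHAR(d) => if (c == d) ONE else ZERO
  case ALT(r1, r2) => ALT(der(c, r1), der(c, r2))
  case SEQ(r1, r2) =>
    if (nullable(r1)) ALT(SEQ(der(c, r1), r2), der(c, r2))
    else SEQ(der(c, r1), r2)
  case STAR(r1) => SEQ(der(c, r1), STAR(r1))
}
\end{verbatim}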
To implement this idea we can generalise the derivative operation to
strings like this:

\begin{center}
\begin{tabular}{lcl}
$r \backslash (c\!::\!s) $ & $\dn$ & $(r \backslash c) \backslash s$ \\
$r \backslash [\,] $ & $\dn$ & $r$
\end{tabular}
\end{center}

\noindent
and then define the regular-expression matching algorithm as:
\[
\textit{match}\;s\;r \;\dn\; \nullable(r\backslash s)
\]

\noindent
Graphically this algorithm can be illustrated as follows:

\begin{equation}\label{graph:*}
\begin{tikzcd}
r_0 \arrow[r, "\backslash c_0"] & r_1 \arrow[r, "\backslash c_1"] & r_2 \arrow[r, dashed] & r_n \arrow[r,"\textit{nullable}?"] & \;\textrm{YES}/\textrm{NO}
\end{tikzcd}
\end{equation}

\noindent
where we start with a regular expression $r_0$, build successive
derivatives until we exhaust the string and then use \textit{nullable}
to test whether the result can match the empty string. It can be
relatively easily shown that this matcher is correct (that is, given a
string $s = c_0...c_{n-1}$ and a regular expression $r_0$, it generates
YES if and only if $s \in L(r_0)$).

\section{Values and the Algorithm by Sulzmann and Lu}

One limitation of Brzozowski's algorithm, however, is that it only
produces a YES/NO answer for whether a string is matched by a
regular expression. Sulzmann and Lu~\cite{Sulzmann2014} extended this
algorithm to allow the generation of an actual matching, called a
\emph{value}. Values and regular expressions correspond to each other
as illustrated in the following table:

\begin{center}
\begin{tabular}{c@{\hspace{20mm}}c}
\begin{tabular}{@{}rrl@{}}
\multicolumn{3}{@{}l}{\textbf{Regular Expressions}}\medskip\\
$r$ & $::=$ & $\ZERO$\\
    & $\mid$ & $\ONE$ \\
    & $\mid$ & $c$ \\
    & $\mid$ & $r_1 \cdot r_2$\\
    & $\mid$ & $r_1 + r_2$ \\
    & $\mid$ & $r^*$ \\
\end{tabular}
&
\begin{tabular}{@{\hspace{0mm}}rrl@{}}
\multicolumn{3}{@{}l}{\textbf{Values}}\medskip\\
$v$ & $::=$ & $\Empty$ \\
    & $\mid$ & $\Char(c)$ \\
    & $\mid$ & $\Seq\,v_1\, v_2$\\
    & $\mid$ & $\Left(v)$ \\
    & $\mid$ & $\Right(v)$ \\
    & $\mid$ & $\Stars\,[v_1,\ldots\,v_n]$ \\
\end{tabular}
\end{tabular}
\end{center}

\noindent
No value corresponds to $\ZERO$; $\Empty$ corresponds to
$\ONE$; $\Char$ to the character regular expression; $\Seq$ to the
sequence regular expression and so on. The idea of values is to encode
parse trees for how the sub-parts of a regular expression match
the sub-parts of a string. To see this, we define a \emph{flatten}
operation, written $|v|$, which extracts the underlying string of a
value $v$. For example, $|\Seq \, (\Char\,x) \, (\Char\,y)|$ is the
string $xy$. Using flatten, we can describe how values encode parse
trees: $\Seq\,v_1\, v_2$ encodes a tree with two child nodes that
tells us how the string $|v_1| \,@\, |v_2|$ matches the regular
expression $r_1 \cdot r_2$, whereby $r_1$ matches the substring
$|v_1|$ and, respectively, $r_2$ matches the substring $|v_2|$.
Exactly how these two are matched is recorded in the child nodes $v_1$
and $v_2$ of the parent $\Seq$.

To give a concrete example of how values work, consider the string $xy$
and the regular expression $(x + (y + xy))^*$. We can view this regular
expression as a tree, and if the string $xy$ is matched by two Star
``iterations'', then the $x$ is matched by the left-most alternative in
this tree and the $y$ by the left alternative nested inside the right
alternative. This suggests recording the matching as

\begin{center}
$\Stars\,[\Left\,(\Char\,x), \Right(\Left(\Char\,y))]$
\end{center}

\noindent
where $\Stars [\ldots]$ records all the
iterations; and $\Left$, respectively $\Right$, which
alternative is used.
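\noindent
Before considering the POSIX value for this example, note that the
string derivative, the matcher and the values just introduced extend
the Scala sketch directly (our \texttt{Left}/\texttt{Right} shadow
Scala's built-in names, which is harmless within this sketch):

\begin{verbatim}
// derivative w.r.t. a string, and the matcher
def ders(s: List[Char], r: Rexp): Rexp = s match {
  case Nil => r
  case c :: cs => ders(cs, der(c, r))
}
def matcher(r: Rexp, s: String): Boolean =
  nullable(ders(s.toList, r))

// values: parse trees recording how a regex matched a string
abstract class Val
case object Empty extends Val
case class Chr(c: Char) extends Val
case class Sequ(v1: Val, v2: Val) extends Val
case class Left(v: Val) extends Val
case class Right(v: Val) extends Val
case class Stars(vs: List[Val]) extends Val

// flatten |v|: the underlying string of a value
def flatten(v: Val): String = v match {
  case Empty => ""
  case Chr(c) => c.toString
  case Left(v1) => flatten(v1)
  case Right(v1) => flatten(v1)
  case Sequ(v1, v2) => flatten(v1) + flatten(v2)
  case Stars(vs) => vs.map(flatten).mkString
}
\end{verbatim}

\noindent
For instance, \texttt{matcher(SEQ(STAR(STAR(CHAR('a'))), CHAR('b')), "aaab")}
evaluates to \texttt{true}.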
The value for matching $xy$ in a single ``iteration'', i.e.~the POSIX
value, would look as follows

\begin{center}
$\Stars\,[\Seq\,(\Char\,x)\,(\Char\,y)]$
\end{center}

\noindent
where $\Stars$ has only a single-element list for the single iteration
and $\Seq$ indicates that $xy$ is matched by a sequence regular
expression.

The contribution of Sulzmann and Lu is an extension of Brzozowski's
algorithm by a second phase (the first phase being building successive
derivatives---see \eqref{graph:*}). In this second phase, a POSIX value
is generated, assuming the regular expression matches the string.
Pictorially, the algorithm is as follows:

\begin{equation}\label{graph:2}
\begin{tikzcd}
r_0 \arrow[r, "\backslash c_0"] \arrow[d] & r_1 \arrow[r, "\backslash c_1"] \arrow[d] & r_2 \arrow[r, dashed] \arrow[d] & r_n \arrow[d, "mkeps" description] \\
v_0 & v_1 \arrow[l,"inj_{r_0} c_0"] & v_2 \arrow[l, "inj_{r_1} c_1"] & v_n \arrow[l, dashed]
\end{tikzcd}
\end{equation}

\noindent
For convenience, we shall employ the following notations: the regular
expression we start with is $r_0$, and the given string $s$ is composed
of characters $c_0 c_1 \ldots c_{n-1}$. In the first phase, we build
the derivatives $r_1$, $r_2$, \ldots{} according to the characters
$c_0$, $c_1$, \ldots{} until we exhaust the string and obtain the
derivative $r_n$. We test whether this derivative is
$\textit{nullable}$ or not. If not, we know the string does not match
$r_0$ and no value needs to be generated. If yes, we start building the
parse tree incrementally by \emph{injecting} back the characters into
the values $v_n, \ldots, v_0$. For this we first call the function
$\textit{mkeps}$, which builds the parse tree for how the empty string
has been matched by the (nullable) regular expression $r_n$. This
function is defined as

\begin{center}
\begin{tabular}{lcl}
$\mkeps(\ONE)$ & $\dn$ & $\Empty$ \\
$\mkeps(r_{1}+r_{2})$ & $\dn$ & \textit{if} $\nullable(r_{1})$\\
  & & \textit{then} $\Left(\mkeps(r_{1}))$\\
  & & \textit{else} $\Right(\mkeps(r_{2}))$\\
$\mkeps(r_1\cdot r_2)$ & $\dn$ & $\Seq\,(\mkeps\,r_1)\,(\mkeps\,r_2)$\\
$\mkeps(r^*)$ & $\dn$ & $\Stars\,[]$
\end{tabular}
\end{center}

\noindent
There are no cases for $\ZERO$ and $c$, since these regular
expressions cannot match the empty string. Note also that in the case
of alternatives we give preference to the regular expression on the
left-hand side. This will become important later on.

After this, we inject back the characters one by one in order to build
the parse tree $v_i$ for how the regular expression $r_i$ matches the
string $s_i$ ($s_i = c_i \ldots c_{n-1}$) from the previous parse tree
$v_{i+1}$. After injecting back $n$ characters, we get the parse tree
for how $r_0$ matches $s$. For this Sulzmann and Lu defined a function
that reverses the ``chopping off'' of characters during the derivative
phase. The corresponding function is called $\textit{inj}$; it takes
three arguments: the first one is the regular expression $r_{i-1}$
before the character was chopped off; the second is the character
$c_{i-1}$ that we want to inject; and the third argument is the value
$v_i$ into which one wants to inject the character (it corresponds to
the regular expression after the character has been chopped off). The
result of this function is a new value.
The definition of $\textit{inj}$ is as follows:

\begin{center}
\begin{tabular}{l@{\hspace{1mm}}c@{\hspace{1mm}}l}
$\textit{inj}\,(c)\,c\,\Empty$ & $\dn$ & $\Char\,c$\\
$\textit{inj}\,(r_1 + r_2)\,c\,\Left(v)$ & $\dn$ & $\Left(\textit{inj}\,r_1\,c\,v)$\\
$\textit{inj}\,(r_1 + r_2)\,c\,\Right(v)$ & $\dn$ & $\Right(\textit{inj}\,r_2\,c\,v)$\\
$\textit{inj}\,(r_1 \cdot r_2)\,c\,\Seq(v_1,v_2)$ & $\dn$ & $\Seq(\textit{inj}\,r_1\,c\,v_1,v_2)$\\
$\textit{inj}\,(r_1 \cdot r_2)\,c\,\Left(\Seq(v_1,v_2))$ & $\dn$ & $\Seq(\textit{inj}\,r_1\,c\,v_1,v_2)$\\
$\textit{inj}\,(r_1 \cdot r_2)\,c\,\Right(v)$ & $\dn$ & $\Seq(\textit{mkeps}(r_1),\textit{inj}\,r_2\,c\,v)$\\
$\textit{inj}\,(r^*)\,c\,\Seq(v,\Stars\,vs)$ & $\dn$ & $\Stars((\textit{inj}\,r\,c\,v)\,::\,vs)$\\
\end{tabular}
\end{center}

\noindent
This definition is by recursion on the ``shape'' of regular
expressions and values. To understand this definition better, consider
the situation when we build the derivative of the regular expression
$r_{i-1}$. For this we chop off a character from $r_{i-1}$ to form
$r_i$. This leaves a ``hole'' in $r_i$ and its corresponding value
$v_i$. To calculate $v_{i-1}$, we need to locate where that hole is
and fill it. We can find this location by comparing $r_{i-1}$ and
$v_i$. For instance, if $r_{i-1}$ is of shape $r_a \cdot r_b$ and
$v_i$ is of shape $\Left(\Seq(v_1,v_2))$, we know immediately that
\[ (r_a \cdot r_b)\backslash c = (r_a\backslash c) \cdot r_b \,+\, r_b\backslash c,\]

\noindent
that is, $r_a$ must be nullable; for if $r_a$ were not nullable, then
\[ (r_a \cdot r_b)\backslash c = (r_a\backslash c) \cdot r_b\]

\noindent
and the value $v_i$ would have to be of shape $\Seq(\ldots)$,
contradicting the fact that $v_i$ is actually of shape
$\Left(\ldots)$. Furthermore, since $v_i$ is of shape $\Left(\ldots)$
instead of $\Right(\ldots)$, we know that the left branch of
\[ (r_a \cdot r_b)\backslash c = \underline{(r_a\backslash c) \cdot r_b} \,+\, r_b\backslash c\]
(underlined) was taken instead of the right one. This means $c$ was
chopped off from $r_a$ rather than $r_b$. We have therefore found out
that the hole is in $r_a$, and we recursively call
$\inj\, r_a\,c\,v_1$ to fill it. After injection, the value $v_{i-1}$
for $r_{i-1} = r_a \cdot r_b$ is $\Seq\,(\inj\,r_a\,c\,v_1)\,v_2$.
The other clauses can be understood in a similar way.

The following example gives a taste of $\textit{inj}$'s effect
and how Sulzmann and Lu's algorithm works as a whole. Suppose we have a
regular expression $((((a+b)+ab)+c)+abc)^*$, and want to match it
against the string $abc$ (when $abc$ is written as a regular
expression, the most standard way of expressing it is
$a \cdot (b \cdot c)$; we omit the parentheses and dots here for
readability). This algorithm returns a POSIX value, which means it
goes for the longest matching, i.e.~it should match the string
$abc$ in one star iteration, using the longest alternative $abc$ in the
sub-expression $((((a+b)+ab)+c)+abc)$ (we use $r$ to denote this
sub-expression for conciseness).
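\noindent
Before tracing through this example, here are $\textit{mkeps}$,
$\textit{inj}$ and the resulting lexer in the running Scala sketch
(again a sketch: the partial pattern matches mirror the definitions
above and assume well-formed inputs):

\begin{verbatim}
def mkeps(r: Rexp): Val = r match {
  case ONE => Empty
  case ALT(r1, r2) =>
    if (nullable(r1)) Left(mkeps(r1)) else Right(mkeps(r2))
  case SEQ(r1, r2) => Sequ(mkeps(r1), mkeps(r2))
  case STAR(_) => Stars(Nil)
}

def inj(r: Rexp, c: Char, v: Val): Val = (r, v) match {
  case (CHAR(d), Empty) => Chr(c)
  case (ALT(r1, _), Left(v1)) => Left(inj(r1, c, v1))
  case (ALT(_, r2), Right(v2)) => Right(inj(r2, c, v2))
  case (SEQ(r1, _), Sequ(v1, v2)) => Sequ(inj(r1, c, v1), v2)
  case (SEQ(r1, _), Left(Sequ(v1, v2))) => Sequ(inj(r1, c, v1), v2)
  case (SEQ(r1, r2), Right(v2)) => Sequ(mkeps(r1), inj(r2, c, v2))
  case (STAR(r1), Sequ(v1, Stars(vs))) => Stars(inj(r1, c, v1) :: vs)
}

// derivative phase followed by the injection phase
def lexer(r: Rexp, s: List[Char]): Option[Val] = s match {
  case Nil => if (nullable(r)) Some(mkeps(r)) else None
  case c :: cs => lexer(der(c, r), cs).map(v => inj(r, c, v))
}
\end{verbatim}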
Before $\textit{inj}$ comes into play, our lexer first builds the
successive derivatives w.r.t.\ the string $abc$ (we simplify some
regular expressions like $0 \cdot b$ to $0$ for conciseness; we also
omit parentheses if they are clear from the context):

\begin{center}
\begin{tabular}{lcl}
$r^*$ & $\xrightarrow{\backslash a}$ & $r_1 = (1+0+1 \cdot b + 0 + 1 \cdot b \cdot c) \cdot r^*$\\
      & $\xrightarrow{\backslash b}$ & $r_2 = (0+0+1 \cdot 1 + 0 + 1 \cdot 1 \cdot c) \cdot r^* +(0+1+0 + 0 + 0) \cdot r^*$\\
      & $\xrightarrow{\backslash c}$ & $r_3 = ((0+0+0 + 0 + 1 \cdot 1 \cdot 1) \cdot r^* + (0+0+0 + 1 + 0) \cdot r^*) + $\\
      & & $\phantom{r_3 = (} ((0+1+0 + 0 + 0) \cdot r^* + (0+0+0 + 1 + 0) \cdot r^* )$
\end{tabular}
\end{center}

\noindent
Now, since $\nullable$ gives a \textit{yes} on $r_3$, we call $\mkeps$
to construct a parse tree for how $r_3$ matched the empty string.
It gives the following value $v_3$:

\begin{center}
$\Left(\Left(\Seq(\Right(\Seq(\Empty, \Seq(\Empty,\Empty))), \Stars [])))$
\end{center}

The outer $\Left(\Left(\ldots))$ tells us the leftmost nullable part of
$r_3$ (underlined):

\begin{center}
$(\underline{(0+0+0 + 0 + 1 \cdot 1 \cdot 1) \cdot r^*} + (0+0+0 + 1 + 0) \cdot r^*) +
((0+1+0 + 0 + 0) \cdot r^*+(0+0+0 + 1 + 0) \cdot r^* ).$
\end{center}

Note that the leftmost position of the term
$(0+0+0 + 0 + 1 \cdot 1 \cdot 1) \cdot r^*$ (which corresponds to the
initial sub-match $abc$) allows $\mkeps$ to pick it up, because
$\mkeps$ is defined to always choose the left alternative when it is
nullable. In this example, $abc$ is therefore preferred over $a$ or
$ab$. The $\Left(\Left(\ldots))$ location is naturally generated by two
applications of the splitting clause

\begin{center}
$(r_1 \cdot r_2)\backslash c = (r_1\backslash c) \cdot r_2 \,+\, r_2\backslash c$
\qquad (when $r_1$ is nullable).
\end{center}

By this clause, we put $(r_1 \backslash c) \cdot r_2$ at the
\textit{front} and $r_2 \backslash c$ at the \textit{back}. This allows
$\mkeps$ to always pick, among two matches, the one with the longer
initial sub-match. Removing the outer $\Left(\Left(\ldots))$, the inner
sub-value

\begin{center}
$\Seq(\Right(\Seq(\Empty, \Seq(\Empty, \Empty))), \Stars [])$
\end{center}

tells us how the empty string $[]$ is matched with
$(0+0+0 + 0 + 1 \cdot 1 \cdot 1) \cdot r^*$: we match $[]$ by a
sequence of two nullable regular expressions. The first one is an
alternative, where we take the right-most branch ($1 \cdot 1 \cdot 1$),
whose language contains the empty string. The second nullable regular
expression is the Kleene star $r^*$, and $\Stars\,[]$ records that it
matches the empty string by zero iterations.

Now $\textit{inj}$ injects characters back and incrementally builds a
parse tree based on $v_3$. Using the value $v_3$, the character $c$,
and the regular expression $r_2$, we can recover how $r_2$ matched the
string $[c]$: $\textit{inj} \; r_2 \; c \; v_3$ gives us

\begin{center}
$v_2 = \Left(\Seq(\Right(\Seq(\Empty, \Seq(\Empty, \Char\,c))), \Stars [])).$
\end{center}

After this we inject back the character $b$, and get

\begin{center}
$v_1 = \Seq(\Right(\Seq(\Empty, \Seq(\Char\,b, \Char\,c))), \Stars [])$
\end{center}

for how

\begin{center}
$r_1 = (1+0+1 \cdot b + 0 + 1 \cdot b \cdot c) \cdot r^*$
\end{center}

matched the string $bc$ (that is, before the next derivative step split
it into two alternatives).
Finally, after injecting the character $a$ back into $v_1$, we get the
parse tree

\begin{center}
$v_0 = \Stars\,[\Right(\Seq(\Char\,a, \Seq(\Char\,b, \Char\,c)))]$
\end{center}

for how $r^*$ matched $abc$. This completes the algorithm.

Readers might have noticed that the parse tree information is actually
already available when building the derivatives. For example,
immediately after the operation $\backslash a$ we know that if we want
to match a string that starts with $a$, we can take the initial match
to be either

\begin{enumerate}
\item[1)] just $a$, or
\item[2)] the string $ab$, or
\item[3)] the string $abc$.
\end{enumerate}

In order to differentiate between these choices, we just need to
remember their positions---$a$ is on the left, $ab$ is in the middle,
and $abc$ is on the right. Which one of these alternatives is chosen
later does not affect their relative position, because our algorithm
does not change this order. If this parsing information can be
determined once and does not change because of later derivatives, there
is no point in traversing it twice. This leads to an optimisation: if
we store the parse tree information inside the regular expression,
update it when we take derivatives, and collect it when finished with
the derivatives by calling $\mkeps$ to decide which branch is POSIX, we
can generate the parse tree in one pass, instead of performing the $n$
injections afterwards. This leads to Sulzmann and Lu's novel idea of
using bit-codes in derivatives.

In the next section, we shall focus on the bit-coded algorithm and the
process of simplification of regular expressions. This is needed in
order to obtain \emph{fast} versions of Brzozowski's, and Sulzmann
and Lu's algorithms. This is where the PhD-project aims to advance the
state-of-the-art.

\section{Simplification of Regular Expressions}

Using bit-codes to guide parsing is not a novel idea. It was applied to
context-free grammars and then adapted by Henglein and Nielson for
efficient regular expression parsing using DFAs \cite{nielson11bcre}.
Sulzmann and Lu took this a step further by integrating bitcodes into
derivatives. The reason why we want to use bitcodes in this project is
that we want to introduce aggressive simplifications.

The main drawback of building successive derivatives according to
Brzozowski's definition is that they can grow very quickly in size.
This is mainly due to the fact that the derivative operation often
generates ``useless'' $\ZERO$s and $\ONE$s. As a result, if implemented
naively, both algorithms by Brzozowski and by Sulzmann and Lu are
excruciatingly slow. For example, when starting with the regular
expression $(a + aa)^*$ and building 12 successive derivatives
w.r.t.~the character $a$, one obtains a derivative regular expression
with more than 8000 nodes (when viewed as a tree). Operations like
taking derivatives and testing $\nullable$ need to traverse such trees;
consequently, the bigger the size of the derivative, the slower the
algorithm. Fortunately, one can simplify regular expressions after each
derivative step. Various simplifications are possible, such as
simplifying $\ZERO + r$, $r + \ZERO$, $\ONE\cdot r$, $r \cdot \ONE$,
and $r + r$ to just $r$.
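\noindent
For plain regular expressions these rules can be implemented, for
instance, by the following Scala sketch (the bit-coded version
discussed below needs more care):

\begin{verbatim}
// one bottom-up simplification pass implementing the rules
// 0+r, r+0, 1.r, r.1 and r+r  ~>  r
def simple(r: Rexp): Rexp = r match {
  case ALT(r1, r2) => (simple(r1), simple(r2)) match {
    case (ZERO, s2) => s2
    case (s1, ZERO) => s1
    case (s1, s2) => if (s1 == s2) s1 else ALT(s1, s2)
  }
  case SEQ(r1, r2) => (simple(r1), simple(r2)) match {
    case (ZERO, _) => ZERO
    case (_, ZERO) => ZERO
    case (ONE, s2) => s2
    case (s1, ONE) => s1
    case (s1, s2) => SEQ(s1, s2)
  }
  case _ => r
}
\end{verbatim}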
These simplifications do not affect the answer for whether a regular
expression matches a string or not; fortunately, they also do not
affect the POSIX strategy of how regular expressions match
strings---although the latter is much harder to establish. The argument
for complicating the data structures from basic regular expressions to
those with bitcodes is that we can introduce simplification without
breaking the algorithm or making it overly complex to reason about. The
latter is crucial for a correctness proof. Some initial results in this
regard have been obtained in \cite{AusafDyckhoffUrban2016}. However,
what has not been achieved yet is correctness for the bit-coded
algorithm that involves simplifications, together with a very tight
bound on the size. Such a tight bound is suggested by work of
Antimirov, who proved that (partial) derivatives can be bounded by the
number of characters contained in the initial regular expression
\cite{Antimirov95}.

Antimirov defined the \emph{partial derivatives} of regular expressions
as follows:

\begin{center}
\begin{tabular}{lcl}
$\textit{pder} \; c \; \ZERO$ & $\dn$ & $\emptyset$\\
$\textit{pder} \; c \; \ONE$ & $\dn$ & $\emptyset$ \\
$\textit{pder} \; c \; d$ & $\dn$ & $\textit{if} \; c \,=\, d \; \textit{then} \; \{ \ONE \} \; \textit{else} \; \emptyset$ \\
$\textit{pder} \; c \; (r_1+r_2)$ & $\dn$ & $\textit{pder} \; c \; r_1 \,\cup\, \textit{pder} \; c \; r_2$ \\
$\textit{pder} \; c \; (r_1 \cdot r_2)$ & $\dn$ & $\textit{if} \; \nullable \; r_1$\\
  & & $\textit{then} \; \{ r \cdot r_2 \mid r \in \textit{pder} \; c \; r_1 \} \,\cup\, \textit{pder} \; c \; r_2$\\
  & & $\textit{else} \; \{ r \cdot r_2 \mid r \in \textit{pder} \; c \; r_1 \} $ \\
$\textit{pder} \; c \; (r^*)$ & $\dn$ & $ \{ r' \cdot r^* \mid r' \in \textit{pder} \; c \; r \} $ \\
\end{tabular}
\end{center}

A partial derivative of a regular expression $r$ is essentially a set
of regular expressions that are either sub-expressions of $r$ or
concatenations of them. Antimirov proved a nice bound on the size of
partial derivatives: roughly speaking, the size will not exceed the
fourth power of the number of nodes in the initial regular expression.
Interestingly, we observed in experiments that after the simplification
step our regular expressions have the same size as, or are smaller
than, the corresponding partial derivatives. This suggests that we can
prove a tight bound on the size of the regular expressions during the
running of the algorithm, provided we can establish the connection
between our simplification rules and partial derivatives.
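\noindent
As an aside, Antimirov's partial derivatives also fit into the running
Scala sketch, returning sets of regular expressions instead of a single
one (our own transcription of the definition above):

\begin{verbatim}
def pder(c: Char, r: Rexp): Set[Rexp] = r match {
  case ZERO => Set()
  case ONE => Set()
  case CHAR(d) => if (c == d) Set(ONE) else Set()
  case ALT(r1, r2) => pder(c, r1) ++ pder(c, r2)
  case SEQ(r1, r2) =>
    if (nullable(r1)) pder(c, r1).map(SEQ(_, r2)) ++ pder(c, r2)
    else pder(c, r1).map(SEQ(_, r2))
  case STAR(r1) => pder(c, r1).map(SEQ(_, STAR(r1)))
}
\end{verbatim}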
Let us give some details about the bit-coded algorithm next. Bit-codes
are sequences of bits, given by the grammar

\begin{center}
$b ::= \S \mid \Z \qquad bs ::= [] \mid b::bs$
\end{center}

The names $\S$ and $\Z$ are arbitrary; we could equally use 0 and 1 or
any other pair of symbols. Bit-codes are a compact form of parse trees:
they are essentially incomplete values. This can be straightforwardly
seen in the following encoding function:

\begin{center}
\begin{tabular}{lcl}
$\textit{code}(\Empty)$ & $\dn$ & $[]$\\
$\textit{code}(\Char\,c)$ & $\dn$ & $[]$\\
$\textit{code}(\Left\,v)$ & $\dn$ & $\Z :: \textit{code}(v)$\\
$\textit{code}(\Right\,v)$ & $\dn$ & $\S :: \textit{code}(v)$\\
$\textit{code}(\Seq\,v_1\,v_2)$ & $\dn$ & $\textit{code}(v_1) \,@\, \textit{code}(v_2)$\\
$\textit{code}(\Stars\,[])$ & $\dn$ & $[\S]$\\
$\textit{code}(\Stars\,(v\!::\!vs))$ & $\dn$ & $\Z :: \textit{code}(v) \;@\; \textit{code}(\Stars\,vs)$
\end{tabular}
\end{center}

Here $\textit{code}$ encodes a value into a bitsequence by converting
$\Left$ into $\Z$, $\Right$ into $\S$, the start of a non-empty star
iteration into $\Z$, and the border where a star terminates into
$\S$. This conversion is apparently lossy: it throws away the character
information and does not encode the boundary between the two components
of a sequence. Moreover, with only the bitcode we cannot even tell
whether the $\S$s and $\Z$s stand for $\Left/\Right$ or for
$\Stars$. The reason for choosing this compact way of storing
information is that bits can be easily moved around during the lexing
process. In order to recover a value from a bitcode, we need the
regular expression as extra information; the decoding functions are
defined as follows:

\begin{center}
\begin{tabular}{@{}l@{\hspace{1mm}}c@{\hspace{1mm}}l@{}}
$\textit{decode}'\,bs\,(\ONE)$ & $\dn$ & $(\Empty, bs)$\\
$\textit{decode}'\,bs\,(c)$ & $\dn$ & $(\Char\,c, bs)$\\
$\textit{decode}'\,(\Z\!::\!bs)\;(r_1 + r_2)$ & $\dn$ &
   $\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r_1\;\textit{in}\; (\Left\,v, bs_1)$\\
$\textit{decode}'\,(\S\!::\!bs)\;(r_1 + r_2)$ & $\dn$ &
   $\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r_2\;\textit{in}\; (\Right\,v, bs_1)$\\
$\textit{decode}'\,bs\;(r_1\cdot r_2)$ & $\dn$ &
   $\textit{let}\,(v_1, bs_1) = \textit{decode}'\,bs\,r_1\;\textit{in}$\\
& & $\textit{let}\,(v_2, bs_2) = \textit{decode}'\,bs_1\,r_2$\\
& & \hspace{35mm}$\textit{in}\;(\Seq\,v_1\,v_2, bs_2)$\\
$\textit{decode}'\,(\S\!::\!bs)\,(r^*)$ & $\dn$ & $(\Stars\,[], bs)$\\
$\textit{decode}'\,(\Z\!::\!bs)\,(r^*)$ & $\dn$ &
   $\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r\;\textit{in}$\\
& & $\textit{let}\,(\Stars\,vs, bs_2) = \textit{decode}'\,bs_1\,r^*$\\
& & \hspace{35mm}$\textit{in}\;(\Stars\,(v\!::\!vs), bs_2)$\bigskip\\
$\textit{decode}\,bs\,r$ & $\dn$ &
   $\textit{let}\,(v, bs') = \textit{decode}'\,bs\,r\;\textit{in}$\\
& & $\textit{if}\;bs' = []\;\textit{then}\;\textit{Some}\,v\;
    \textit{else}\;\textit{None}$
\end{tabular}
\end{center}

Sulzmann and Lu integrated the bitcodes into regular expressions by
attaching them to the head of every substructure of a regular
expression \cite{Sulzmann2014}. These \emph{annotated regular
expressions} are defined by the following grammar:

\begin{center}
\begin{tabular}{lcl}
$\textit{a}$ & $::=$ & $\textit{ZERO}$\\
  & $\mid$ & $\textit{ONE}\;\;bs$\\
  & $\mid$ & $\textit{CHAR}\;\;bs\,c$\\
  & $\mid$ & $\textit{ALTS}\;\;bs\,as$\\
  & $\mid$ & $\textit{SEQ}\;\;bs\,a_1\,a_2$\\
  & $\mid$ & $\textit{STAR}\;\;bs\,a$
\end{tabular}
\end{center}

\noindent
where $bs$ stands for bitsequences, and $as$ (in \textit{ALTS}) for a
list of annotated regular expressions; we write
$\textit{ALT}\;bs\,a_1\,a_2$ for the binary case.
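\noindent
In the Scala sketch, bits, the $\textit{code}$ function and annotated
regular expressions can be represented as follows (the \texttt{A}
prefix of the constructors is our own naming convention; the
\texttt{type} alias needs an enclosing \texttt{object} when compiled as
a file rather than in the REPL):

\begin{verbatim}
abstract class Bit
case object Z extends Bit
case object S extends Bit
type Bits = List[Bit]

// code: turn a value into its (lossy) bitcode
def code(v: Val): Bits = v match {
  case Empty => Nil
  case Chr(_) => Nil
  case Left(v1) => Z :: code(v1)
  case Right(v1) => S :: code(v1)
  case Sequ(v1, v2) => code(v1) ++ code(v2)
  case Stars(Nil) => List(S)
  case Stars(v1 :: vs) => Z :: (code(v1) ++ code(Stars(vs)))
}

// annotated regular expressions
abstract class ARexp
case object AZERO extends ARexp
case class AONE(bs: Bits) extends ARexp
case class ACHAR(bs: Bits, c: Char) extends ARexp
case class AALTS(bs: Bits, as: List[ARexp]) extends ARexp
case class ASEQ(bs: Bits, a1: ARexp, a2: ARexp) extends ARexp
case class ASTAR(bs: Bits, a: ARexp) extends ARexp
\end{verbatim}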
These bitsequences encode information about the (POSIX) value that
should be generated by the Sulzmann and Lu algorithm. To do lexing with
annotated regular expressions, we first transform the usual
(un-annotated) regular expressions into annotated regular expressions:

\begin{center}
\begin{tabular}{lcl}
$(\ZERO)^\uparrow$ & $\dn$ & $\textit{ZERO}$\\
$(\ONE)^\uparrow$ & $\dn$ & $\textit{ONE}\,[]$\\
$(c)^\uparrow$ & $\dn$ & $\textit{CHAR}\,[]\,c$\\
$(r_1 + r_2)^\uparrow$ & $\dn$ & $\textit{ALT}\;[]\,(\textit{fuse}\,[\Z]\,r_1^\uparrow)\, (\textit{fuse}\,[\S]\,r_2^\uparrow)$\\
$(r_1\cdot r_2)^\uparrow$ & $\dn$ & $\textit{SEQ}\;[]\,r_1^\uparrow\,r_2^\uparrow$\\
$(r^*)^\uparrow$ & $\dn$ & $\textit{STAR}\;[]\,r^\uparrow$\\
\end{tabular}
\end{center}

Here $\textit{fuse}$ is an auxiliary function that attaches bits to the
front of an annotated regular expression; its definition is as follows:

\begin{center}
\begin{tabular}{lcl}
$\textit{fuse}\,bs\,(\textit{ZERO})$ & $\dn$ & $\textit{ZERO}$\\
$\textit{fuse}\,bs\,(\textit{ONE}\,bs')$ & $\dn$ & $\textit{ONE}\,(bs\,@\,bs')$\\
$\textit{fuse}\,bs\,(\textit{CHAR}\,bs'\,c)$ & $\dn$ & $\textit{CHAR}\,(bs\,@\,bs')\,c$\\
$\textit{fuse}\,bs\,(\textit{ALT}\,bs'\,a_1\,a_2)$ & $\dn$ & $\textit{ALT}\,(bs\,@\,bs')\,a_1\,a_2$\\
$\textit{fuse}\,bs\,(\textit{SEQ}\,bs'\,a_1\,a_2)$ & $\dn$ & $\textit{SEQ}\,(bs\,@\,bs')\,a_1\,a_2$\\
$\textit{fuse}\,bs\,(\textit{STAR}\,bs'\,a)$ & $\dn$ & $\textit{STAR}\,(bs\,@\,bs')\,a$
\end{tabular}
\end{center}

After internalising, we perform successive derivative operations on the
annotated regular expressions. This derivative operation is the same as
the one we previously defined for simple regular expressions, except
that we take special care of the bits:

\begin{center}
\begin{tabular}{@{}lcl@{}}
$(\textit{ZERO})\backslash c$ & $\dn$ & $\textit{ZERO}$\\
$(\textit{ONE}\;bs)\backslash c$ & $\dn$ & $\textit{ZERO}$\\
$(\textit{CHAR}\;bs\,d)\backslash c$ & $\dn$ &
  $\textit{if}\;c=d\;\textit{then}\;\textit{ONE}\;bs\;\textit{else}\;\textit{ZERO}$\\
$(\textit{ALT}\;bs\,a_1\,a_2)\backslash c$ & $\dn$ &
  $\textit{ALT}\,bs\,(a_1\backslash c)\,(a_2\backslash c)$\\
$(\textit{SEQ}\;bs\,a_1\,a_2)\backslash c$ & $\dn$ &
  $\textit{if}\;\textit{bnullable}\,a_1$\\
& &$\textit{then}\;\textit{ALT}\,bs\,(\textit{SEQ}\,[]\,(a_1\backslash c)\,a_2)$\\
& &$\phantom{\textit{then}\;\textit{ALT}\,bs\,}(\textit{fuse}\,(\textit{bmkeps}\,a_1)\,(a_2\backslash c))$\\
& &$\textit{else}\;\textit{SEQ}\,bs\,(a_1\backslash c)\,a_2$\\
$(\textit{STAR}\,bs\,a)\backslash c$ & $\dn$ &
  $\textit{SEQ}\;bs\,(\textit{fuse}\, [\Z]\, (a\backslash c))\, (\textit{STAR}\,[]\,a)$
\end{tabular}
\end{center}

For instance, when we unfold $\textit{STAR} \; bs \; a$ into a
sequence, we attach an additional bit $\Z$ to the front of
$a \backslash c$ to indicate that there is one more star iteration. The
$\textit{SEQ}$ clause is more subtle: it checks whether $a_1$ is
$\textit{bnullable}$ (where $\textit{bnullable}$ is exactly the same as
$\nullable$, except that it applies to annotated regular expressions;
we therefore omit its definition). Assuming that $\textit{bmkeps}$
correctly extracts the bitcode for how $a_1$ matches the string prior
to the character $c$ (more on this later), the right branch of the
$\textit{ALT}$, which is
$\textit{fuse} \; (\textit{bmkeps}\,a_1)\, (a_2 \backslash c)$,
collapses the regular expression $a_1$ (as it has already been fully
matched) and stores the parsing information at the head of the regular
expression $a_2 \backslash c$ by fusing it there. The bitsequence $bs$,
which was initially attached to the head of the $\textit{SEQ}$, is
elevated to the top level of the $\textit{ALT}$, as this information is
needed whichever way the $\textit{SEQ}$ is matched---regardless of
whether $c$ belongs to $a_1$ or $a_2$.
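\noindent
In the Scala sketch, $\textit{fuse}$, internalisation and the annotated
derivative become the following (\texttt{bnullable} and \texttt{bmkeps}
are the functions described in the text, given in the next sketch;
\texttt{AALTS} with a two-element list plays the role of the binary
$\textit{ALT}$):

\begin{verbatim}
def fuse(bs: Bits, a: ARexp): ARexp = a match {
  case AZERO => AZERO
  case AONE(bs1) => AONE(bs ++ bs1)
  case ACHAR(bs1, c) => ACHAR(bs ++ bs1, c)
  case AALTS(bs1, as) => AALTS(bs ++ bs1, as)
  case ASEQ(bs1, a1, a2) => ASEQ(bs ++ bs1, a1, a2)
  case ASTAR(bs1, a1) => ASTAR(bs ++ bs1, a1)
}

def internalise(r: Rexp): ARexp = r match {
  case ZERO => AZERO
  case ONE => AONE(Nil)
  case CHAR(c) => ACHAR(Nil, c)
  case ALT(r1, r2) =>
    AALTS(Nil, List(fuse(List(Z), internalise(r1)),
                    fuse(List(S), internalise(r2))))
  case SEQ(r1, r2) => ASEQ(Nil, internalise(r1), internalise(r2))
  case STAR(r1) => ASTAR(Nil, internalise(r1))
}

// derivative on annotated regular expressions
def bder(c: Char, a: ARexp): ARexp = a match {
  case AZERO => AZERO
  case AONE(_) => AZERO
  case ACHAR(bs, d) => if (c == d) AONE(bs) else AZERO
  case AALTS(bs, as) => AALTS(bs, as.map(bder(c, _)))
  case ASEQ(bs, a1, a2) =>
    if (bnullable(a1))
      AALTS(bs, List(ASEQ(Nil, bder(c, a1), a2),
                     fuse(bmkeps(a1), bder(c, a2))))
    else ASEQ(bs, bder(c, a1), a2)
  case ASTAR(bs, a1) =>
    ASEQ(bs, fuse(List(Z), bder(c, a1)), ASTAR(Nil, a1))
}
\end{verbatim}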
After carefully building these derivatives and maintaining all the
parsing information, we complete the parsing by collecting the bits
using a special $\mkeps$ function for annotated regular
expressions---$\textit{bmkeps}$:

\begin{center}
\begin{tabular}{lcl}
$\textit{bmkeps}\,(\textit{ONE}\,bs)$ & $\dn$ & $bs$\\
$\textit{bmkeps}\,(\textit{ALT}\,bs\,a_1\,a_2)$ & $\dn$ &
  $\textit{if}\;\textit{bnullable}\,a_1$\\
& &$\textit{then}\;bs\,@\,\textit{bmkeps}\,a_1$\\
& &$\textit{else}\;bs\,@\,\textit{bmkeps}\,a_2$\\
$\textit{bmkeps}\,(\textit{SEQ}\,bs\,a_1\,a_2)$ & $\dn$ &
  $bs \,@\,\textit{bmkeps}\,a_1\,@\, \textit{bmkeps}\,a_2$\\
$\textit{bmkeps}\,(\textit{STAR}\,bs\,a)$ & $\dn$ &
  $bs \,@\, [\S]$
\end{tabular}
\end{center}

\noindent
This function completes the parse-tree information by travelling along
the path of the regular expression that corresponds to a POSIX value,
collecting all the bits and using $\S$ to indicate the end of star
iterations. If we take the bits produced by $\textit{bmkeps}$ and
decode them, we get the parse tree we need. The overall lexer is:

\begin{center}
\begin{tabular}{lcl}
$\textit{blexer}\;r\,s$ & $\dn$ &
  $\textit{let}\;a = (r^\uparrow)\backslash s\;\textit{in}$\\
& & $\;\;\textit{if}\; \textit{bnullable}(a)$\\
& & $\;\;\textit{then}\;\textit{decode}\,(\textit{bmkeps}\,a)\,r$\\
& & $\;\;\textit{else}\;\textit{None}$
\end{tabular}
\end{center}

Here $(r^\uparrow)\backslash s$ is the string version of the annotated
derivative operation, analogous to what we previously defined for
$r\backslash s$.

The main point of the bitsequences and annotated regular expressions
is that we can apply rather aggressive (in terms of size)
simplification rules in order to keep derivatives small. We have
developed such ``aggressive'' simplification rules and generated test
data that show that the expected bound can be achieved. Obviously we
could only partially cover the search space, as there are infinitely
many regular expressions and strings. One modification we introduced
is to allow a list of annotated regular expressions in the
\textit{ALTS} constructor. This allows us to not just delete
unnecessary $\ZERO$s and $\ONE$s from regular expressions, but also
unnecessary ``copies'' of regular expressions (very similar to
simplifying $r + r$ to just $r$, but in a more general
setting). Another modification is that we use simplification rules
inspired by Antimirov's work on partial derivatives. They maintain the
idea that only the first ``copy'' of a regular expression in an
alternative contributes to the calculation of a POSIX value; all
subsequent copies can be pruned away.
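\noindent
Putting the pieces together, the bit-coded lexer (without
simplification yet) looks as follows in the Scala sketch;
\texttt{decode} is assumed to implement the definition given earlier
and to return an \texttt{Option[Val]}:

\begin{verbatim}
def bnullable(a: ARexp): Boolean = a match {
  case AZERO => false
  case AONE(_) => true
  case ACHAR(_, _) => false
  case AALTS(_, as) => as.exists(bnullable)
  case ASEQ(_, a1, a2) => bnullable(a1) && bnullable(a2)
  case ASTAR(_, _) => true
}

def bmkeps(a: ARexp): Bits = a match {
  case AONE(bs) => bs
  case AALTS(bs, as) =>
    bs ++ bmkeps(as.find(bnullable).get)  // first nullable branch
  case ASEQ(bs, a1, a2) => bs ++ bmkeps(a1) ++ bmkeps(a2)
  case ASTAR(bs, _) => bs ++ List(S)
}

def bders(s: List[Char], a: ARexp): ARexp = s match {
  case Nil => a
  case c :: cs => bders(cs, bder(c, a))
}

def blexer(r: Rexp, s: String): Option[Val] = {
  val a = bders(s.toList, internalise(r))
  if (bnullable(a)) decode(bmkeps(a), r) else None
}
\end{verbatim}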
A recursive definition of our simplification function, in Scala-like
pseudocode, is given below:

\begin{center}
\begin{tabular}{@{}lcl@{}}
$\textit{simp} \; a$ & $\dn$ & $a$ \qquad $\textit{if} \; a = \textit{ONE} \; bs$, $\textit{CHAR} \, bs \; c$ or $\textit{STAR}\; bs\; a_1$\\
$\textit{simp} \; (\textit{SEQ}\;bs\,a_1\,a_2)$ & $\dn$ & $(\textit{simp} \; a_1, \textit{simp} \; a_2) \; \textit{match}$ \\
&&$\quad\textit{case} \; (\textit{ZERO}, \_) \Rightarrow \textit{ZERO}$ \\
&&$\quad\textit{case} \; (\_, \textit{ZERO}) \Rightarrow \textit{ZERO}$ \\
&&$\quad\textit{case} \; (\textit{ONE}, a_2') \Rightarrow \textit{fuse} \; bs \; a_2'$ \\
&&$\quad\textit{case} \; (a_1', \textit{ONE}) \Rightarrow \textit{fuse} \; bs \; a_1'$ \\
&&$\quad\textit{case} \; (a_1', a_2') \Rightarrow \textit{SEQ} \; bs \; a_1' \; a_2'$ \\
$\textit{simp} \; (\textit{ALTS}\;bs\,as)$ & $\dn$ & $\textit{distinct}( \textit{flatten} ( \textit{map}\;\textit{simp}\;as)) \; \textit{match}$ \\
&&$\quad\textit{case} \; [] \Rightarrow \textit{ZERO}$ \\
&&$\quad\textit{case} \; a :: [] \Rightarrow \textit{fuse}\;bs\;a$ \\
&&$\quad\textit{case} \; as' \Rightarrow \textit{ALTS}\;bs\;as'$
\end{tabular}
\end{center}

The simplification does a pattern match on the regular expression. When
it detects that the regular expression is an alternative or a sequence,
it first simplifies its children recursively and then checks whether
one of the children has turned into $\textit{ZERO}$ or $\textit{ONE}$,
which might trigger further simplification at the current level. The
most involved part is the \textit{ALTS} clause, where we use two
auxiliary functions $\textit{flatten}$ and $\textit{distinct}$ to open
up nested alternatives and to remove as many duplicates as possible.
The function $\textit{distinct}$ keeps only the first occurring copy
and removes all later duplicates. The function $\textit{flatten}$ opens
up nested \textit{ALTS}s; its recursive definition is given below:

\begin{center}
\begin{tabular}{@{}lcl@{}}
$\textit{flatten} \; ((\textit{ALTS}\;bs\,as) :: as')$ & $\dn$ & $(\textit{map}\,(\textit{fuse}\;bs)\,as) \,@\, \textit{flatten} \; as' $ \\
$\textit{flatten} \; (\textit{ZERO} :: as')$ & $\dn$ & $ \textit{flatten} \; as' $ \\
$\textit{flatten} \; (a :: as')$ & $\dn$ & $a :: \textit{flatten} \; as'$ \quad (otherwise) \\
$\textit{flatten} \; []$ & $\dn$ & $[]$
\end{tabular}
\end{center}

Here $\textit{flatten}$ behaves like the traditional flatten function
in functional programming: it essentially removes parentheses, for
example changing $a+(b+c)$ into $a+b+c$.

Suppose we apply simplification after each derivative step, and view
these two operations as an atomic one:
$a \backslash_{simp}\, c \dn \textit{simp}(a \backslash c)$. Then we
can use the previous natural extension from the derivative w.r.t.~a
character to the derivative w.r.t.~a string:

\begin{center}
\begin{tabular}{lcl}
$r \backslash_{simp} (c\!::\!s) $ & $\dn$ & $(r \backslash_{simp}\, c) \backslash_{simp}\, s$ \\
$r \backslash_{simp} [\,] $ & $\dn$ & $r$
\end{tabular}
\end{center}

and obtain an optimised version of the algorithm:

\begin{center}
\begin{tabular}{lcl}
$\textit{blexer\_simp}\;r\,s$ & $\dn$ &
  $\textit{let}\;a = (r^\uparrow)\backslash_{simp}\, s\;\textit{in}$\\
& & $\;\;\textit{if}\; \textit{bnullable}(a)$\\
& & $\;\;\textit{then}\;\textit{decode}\,(\textit{bmkeps}\,a)\,r$\\
& & $\;\;\textit{else}\;\textit{None}$
\end{tabular}
\end{center}

This algorithm effectively keeps the derivatives small: with this
simplification, the more than 8000 nodes in our previous $(a + aa)^*$
example are reduced to just 6, and the size stays constant no matter
how long the input string is.
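\noindent
The simplification phase and its interleaving with the derivatives can
be transcribed into the Scala sketch as follows (Scala's
\texttt{distinct} on lists keeps exactly the first occurrence of each
duplicate; we leave out the $(a_1', \textit{ONE})$ case of the
pseudocode, since fusing bits from the right needs extra care, and in
the \texttt{AONE} case we fuse both the outer bits and the bits stored
inside the \texttt{AONE}, which the pseudocode above elides):

\begin{verbatim}
// open up nested alternatives, dropping ZEROs
def flats(as: List[ARexp]): List[ARexp] = as match {
  case Nil => Nil
  case AZERO :: rest => flats(rest)
  case AALTS(bs, as1) :: rest => as1.map(fuse(bs, _)) ++ flats(rest)
  case a :: rest => a :: flats(rest)
}

def simp(a: ARexp): ARexp = a match {
  case ASEQ(bs, a1, a2) => (simp(a1), simp(a2)) match {
    case (AZERO, _) => AZERO
    case (_, AZERO) => AZERO
    case (AONE(bs1), s2) => fuse(bs ++ bs1, s2)
    case (s1, s2) => ASEQ(bs, s1, s2)
  }
  case AALTS(bs, as) => flats(as.map(simp)).distinct match {
    case Nil => AZERO
    case a1 :: Nil => fuse(bs, a1)
    case as1 => AALTS(bs, as1)
  }
  case _ => a  // ONE, CHAR, STAR (and ZERO) are left unchanged
}

// derivatives with interleaved simplification
def bdersSimp(s: List[Char], a: ARexp): ARexp = s match {
  case Nil => a
  case c :: cs => bdersSimp(cs, simp(bder(c, a)))
}

def blexerSimp(r: Rexp, s: String): Option[Val] = {
  val a = bdersSimp(s.toList, internalise(r))
  if (bnullable(a)) decode(bmkeps(a), r) else None
}
\end{verbatim}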
We are currently engaged in two tasks related to this algorithm. The
first is to prove that our simplification rules actually do not affect
the POSIX value that should be generated by the algorithm according to
the specification of a POSIX value, and furthermore to obtain a much
tighter bound on the sizes of derivatives. The result would be an
algorithm that is correct and fast on all inputs: the original blow-up,
as observed in JavaScript, Python and Java, would be excluded from
happening in our algorithm. For this proof we use the theorem prover
Isabelle. Once completed, this result will advance the
state-of-the-art: Sulzmann and Lu wrote in their paper
\cite{Sulzmann2014} about the bitcoded ``incremental parsing method''
(that is the matching algorithm outlined in this section):

\begin{quote}\it
``Correctness Claim: We further claim that the incremental parsing
method in Figure~5 in combination with the simplification steps in
Figure~6 yields POSIX parse trees. We have tested this claim
extensively by using the method in Figure~3 as a reference but yet
have to work out all proof details.''
\end{quote}

\noindent
We aim to settle this correctness claim. It is relatively
straightforward to establish that after one simplification step, the
part of a derivative that corresponds to a POSIX value remains intact
and can still be collected; in other words,
\[ \textit{bmkeps}\; a = \textit{bmkeps}\; (\textit{simp}\; a) \]

\noindent
as this basically comes down to proving that actions like removing the
duplicate $r$ in $r+r$ do not delete important POSIX information from a
regular expression. The hard core of this problem is to prove that
\[ \textit{bmkeps}\; (a \backslash s) = \textit{bmkeps}\; (a \backslash_{simp}\, s). \]

\noindent
That is, if we build derivatives of an annotated regular expression
with and without interleaved simplification, we can still extract from
them the same POSIX value, if there is one. This is not as
straightforward as the previous proposition, as the two regular
expressions $a \backslash s$ and $a \backslash_{simp} s$ can become
very different after repeated applications of $\textit{simp}$ and the
derivative operation. The crucial point is to find the ``gene'' of a
regular expression and how it is kept intact during simplification. To
aid this, we use the auxiliary function $\textit{retrieve}$ described
by Sulzmann and Lu (a transcription into our Scala sketch is shown
below). This function assembles the bitcode that corresponds to a parse
tree for how the current derivative matches the suffix of the string
(the characters that have not yet been injected back, but are stored in
the value). Sulzmann and Lu used $\textit{retrieve}$ to connect the
bit-coded algorithm to the older algorithm by the following equation:
\[ \inj \;r\; c \; v = \textit{decode} \; (\textit{retrieve}\; ((r^\uparrow)\backslash c)\; v)\; r \]

\noindent
where $r^\uparrow$ stands for the internalised (annotated) version of
$r$. Ausaf and Urban also used this fact to prove the correctness of
the bit-coded algorithm without simplification
\cite{AusafDyckhoffUrban2016}. Our purpose in using it, however, is to
try to establish
\[ \textit{retrieve} \; a \; v \;=\; \textit{retrieve} \; (\textit{simp}\,a) \; v'. \]

\noindent
The idea is that, using $v'$, a simplified version of $v$ that has gone
through the same simplification steps as $\textit{simp}(a)$, we are
still able to extract the bitsequence that gives the same parsing
information as the unsimplified one.
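\noindent
For reference, here is $\textit{retrieve}$ transcribed into the running
Scala sketch, following our reading of Sulzmann and Lu's definition
(the two-element \texttt{AALTS} corresponds to the binary alternatives
produced by \texttt{internalise}; the value \texttt{v} is assumed to
``fit'' the erased version of \texttt{a}):

\begin{verbatim}
def retrieve(a: ARexp, v: Val): Bits = (a, v) match {
  case (AONE(bs), Empty) => bs
  case (ACHAR(bs, _), Chr(_)) => bs
  case (AALTS(bs, List(a1, _)), Left(v1)) => bs ++ retrieve(a1, v1)
  case (AALTS(bs, List(_, a2)), Right(v2)) => bs ++ retrieve(a2, v2)
  case (ASEQ(bs, a1, a2), Sequ(v1, v2)) =>
    bs ++ retrieve(a1, v1) ++ retrieve(a2, v2)
  case (ASTAR(bs, _), Stars(Nil)) => bs ++ List(S)
  case (ASTAR(bs, a1), Stars(v1 :: vs)) =>
    bs ++ List(Z) ++ retrieve(a1, v1) ++
      retrieve(ASTAR(Nil, a1), Stars(vs))
}
\end{verbatim}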
After establishing this equation, we might be able to finally bridge
the gap and prove
\[ \textit{retrieve} \; (r^\uparrow \backslash s) \; v =
   \textit{retrieve} \; ((\textit{simp}\,r^\uparrow) \backslash s) \; v' \]
and subsequently
\[ \textit{retrieve} \; (r^\uparrow \backslash s) \; v =
   \textit{retrieve} \; (r^\uparrow \backslash_{simp}\, s) \; v'. \]

\noindent
This would show that our simplified regular expressions still contain
all the bitcodes needed.

The second task is to speed up the more aggressive simplification.
Currently it is slower than the naive simplification (the naive
version, as implemented by Ausaf, Dyckhoff and Urban
\cite{AusafDyckhoffUrban2016}, can of course explode in some cases), so
we need to explore how to make our version faster. One possibility is
to explore again the connection to DFAs. This is very much work in
progress.

\section{Conclusion}

In this PhD-project we are interested in fast algorithms for regular
expression matching. While this seems to be a ``settled'' area, in
fact interesting research questions are popping up as soon as one steps
outside the classic automata theory (for example in terms of what kind
of regular expressions are supported). The reason why it is
interesting for us to look at the derivative approach introduced by
Brzozowski for regular expression matching, and then much further
developed by Sulzmann and Lu, is that derivatives can elegantly deal
with some of the regular expressions that are of interest in ``real
life''. This includes the not-regular expression, written $\neg\,r$
(that is, all strings that are not recognised by $r$), but also bounded
regular expressions such as $r^{\{n\}}$ and $r^{\{n..m\}}$. There is
also hope that the derivatives can provide another angle for how to
deal more efficiently with back-references, which are one of the
reasons why regular expression engines in JavaScript, Python and Java
choose not to implement the classic automata approach of transforming
regular expressions into NFAs and then DFAs---because we simply do not
know how such back-references can be represented by DFAs.

\bibliographystyle{plain}
\bibliography{root}

\end{document}