
\documentclass[a4paper,UKenglish]{lipics}
\usepackage{graphicx}
\usepackage{tikz-cd}
\usepackage{tikz}

%\usetikzlibrary{graphs}
%\usetikzlibrary{graphdrawing}
%\usegdlibrary{trees}

%\usepackage{algorithm}
\usepackage{amsmath}
\usepackage{xcolor}
\usepackage[noend]{algpseudocode}
\usepackage{enumitem}
\usepackage{nccmath}
\usepackage{soul}

\definecolor{darkblue}{rgb}{0,0,0.6}
\hypersetup{colorlinks=true,allcolors=darkblue}
\newcommand{\comment}[1]%
{{\color{red}$\Rightarrow$}\marginpar{\raggedright\small{\bf\color{red}#1}}}

% \documentclass{article}
%\usepackage[utf8]{inputenc}
%\usepackage[english]{babel}
%\usepackage{listings}
% \usepackage{amsthm}
%\usepackage{hyperref}
% \usepackage[margin=0.5in]{geometry}
%\usepackage{pmboxdraw}
 
\title{POSIX Regular Expression Matching and Lexing}
\author{Chengsong Tan}
\affil{King's College London\\
London, UK\\
\texttt{chengsong.tan@kcl.ac.uk}}
\authorrunning{Chengsong Tan}
\Copyright{Chengsong Tan}

\newcommand{\dn}{\stackrel{\mbox{\scriptsize def}}{=}}%
\newcommand{\ZERO}{\mbox{\bf 0}}
\newcommand{\ONE}{\mbox{\bf 1}}
\def\erase{\textit{erase}}
\def\bders{\textit{bders}}
\def\lexer{\mathit{lexer}}
\def\blexer{\textit{blexer}}
\def\fuse{\textit{fuse}}
\def\flatten{\textit{flatten}}
\def\map{\textit{map}}
\def\blexers{\mathit{blexer\_simp}}
\def\simp{\mathit{simp}}
\def\mkeps{\mathit{mkeps}}
\def\bmkeps{\textit{bmkeps}}
\def\inj{\mathit{inj}}
\def\Empty{\mathit{Empty}}
\def\Left{\mathit{Left}}
\def\Right{\mathit{Right}}
\def\Stars{\mathit{Stars}}
\def\Char{\mathit{Char}}
\def\Seq{\mathit{Seq}}
\def\Der{\mathit{Der}}
\def\nullable{\mathit{nullable}}
\def\Z{\mathit{Z}}
\def\S{\mathit{S}}
\def\flex{\textit{flex}}
\def\rup{r^\uparrow}
\def\retrieve{\textit{retrieve}}
\def\AALTS{\textit{AALTS}}
\def\AONE{\textit{AONE}}
%\theoremstyle{theorem}
%\newtheorem{theorem}{Theorem}
%\theoremstyle{lemma}
%\newtheorem{lemma}{Lemma}
%\newcommand{\lemmaautorefname}{Lemma}
%\theoremstyle{definition}
%\newtheorem{definition}{Definition}
\algnewcommand\algorithmicswitch{\textbf{switch}}
\algnewcommand\algorithmiccase{\textbf{case}}
\algnewcommand\algorithmicassert{\texttt{assert}}
\algnewcommand\Assert[1]{\State \algorithmicassert(#1)}%
% New "environments"
\algdef{SE}[SWITCH]{Switch}{EndSwitch}[1]{\algorithmicswitch\ #1\ \algorithmicdo}{\algorithmicend\ \algorithmicswitch}%
\algdef{SE}[CASE]{Case}{EndCase}[1]{\algorithmiccase\ #1}{\algorithmicend\ \algorithmiccase}%
\algtext*{EndSwitch}%
\algtext*{EndCase}%


\begin{document}

\maketitle

\begin{abstract}
Regular expressions, a concept initially designed for specifying
string patterns, need fast matching algorithms.
If we do matching by naively converting
the regular expression to a
Nondeterministic Finite Automaton (NFA) and simulating it,
we can run into trouble on certain evil pairs of regular expressions
and strings.
This motivates a
method based on derivatives of regular expressions\cite{Brzozowski1964}.
With the variant introduced by Sulzmann and Lu\cite{Sulzmann2014},
it elegantly circumvents the problem of
catastrophic backtracking that plagues
backtracking regex engines.
We believe that such a method can help to address
the status quo where badly written code for
regular expression matching can cause software grief.
We have come up with a set of simplification rules
that keeps the space and time requirements of the algorithm
below a tight bound.
We have proved certain properties of the algorithm
and of the simplification rules.
We have not yet worked out a proof of correctness
for the simplified algorithm, although test data suggest that it holds.
\end{abstract}



\section{Introduction}
The following brief introduction gives the background on 
derivatives: how they were originally defined and the 
variant\cite{Sulzmann2014} that is more suitable for lexing.
Regular expressions can be formalised as follows:
\begin{center}
	$r ::= \ZERO \mid \ONE \mid c \mid r_1 + r_2 \mid r_1 \cdot r_2 \mid r^*$
\end{center}
The derivative of a regular expression w.r.t.\ a character can be 
seen as the regular expression that corresponds to
the language obtained by computing
the left quotient of the original regular language
w.r.t.\ that character.
The derivative of a regular expression is again a 
regular expression because 
the left quotient of a regular language is also regular.
This guarantees that such a function exists; we do not have
to worry that it might be undefined.
In fact, derivatives of regular expressions
are not just well-defined but also simple and elegant.
They are computed by two recursive functions,
and can be easily explained by intuition.
The function
$\nullable$, defined below,
tests whether a regular expression can match
the empty string.

\begin{center}
		\begin{tabular}{lcl}
			$\nullable(\ZERO)$     & $\dn$ & $\mathit{false}$ \\  
			$\nullable(\ONE)$      & $\dn$ & $\mathit{true}$ \\
			$\nullable(c)$ 	       & $\dn$ & $\mathit{false}$ \\
			$\nullable(r_1 + r_2)$ & $\dn$ & $\nullable(r_1) \vee \nullable(r_2)$ \\
			$\nullable(r_1\cdot r_2)$  & $\dn$ & $\nullable(r_1) \wedge \nullable(r_2)$ \\
			$\nullable(r^*)$       & $\dn$ & $\mathit{true}$ \\
		\end{tabular}
\end{center}
\noindent
The empty set does not contain any string and
therefore not the empty string. The empty-string 
regular expression contains the empty string
by definition. The character regular expression
is the singleton language containing just that character
and therefore does not contain the empty string.
The alternative regular expression (or ``or'' expression)
is nullable if at least one of its children is nullable.
The sequence regular expression
requires both children to contain the empty string
in order to compose the empty string, and the Kleene star
naturally includes the empty string (zero iterations).
All nice and easy.
This definition is used in the derivative operation
as a condition:

\begin{center}
\begin{tabular}{lcl}
		$\ZERO \backslash c$ & $\dn$ & $\ZERO$\\  
		$\ONE \backslash c$  & $\dn$ & $\ZERO$\\
		$d \backslash c$     & $\dn$ & 
		$\mathit{if} \;c = d\;\mathit{then}\;\ONE\;\mathit{else}\;\ZERO$\\
$(r_1 + r_2)\backslash c$     & $\dn$ & $r_1 \backslash c \,+\, r_2 \backslash c$\\
$(r_1 \cdot r_2)\backslash c$ & $\dn$ & $\mathit{if} \, \nullable(r_1)$\\
	&   & $\mathit{then}\;(r_1\backslash c) \cdot r_2 \,+\, r_2\backslash c$\\
	&   & $\mathit{else}\;(r_1\backslash c) \cdot r_2$\\
	$(r^*)\backslash c$           & $\dn$ & $(r\backslash c) \cdot r^*$\\
\end{tabular}
\end{center}
\noindent
The derivative function, written $\backslash c$, 
defines how a regular expression evolves into
a new regular expression after the head character $c$ is
chopped off all the strings it matches.
The most involved cases are the sequence 
and star cases.
The sequence case says that if the first regular expression
contains the empty string, then the second component of the sequence
may also be the one whose head character is chopped off.
The star case unwraps one iteration of the
regular expression and attaches the star regular expression
to its back again, to make sure that zero or more iterations
can follow this unfolded iteration.
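
\noindent
To make these definitions concrete, the following is a minimal sketch in
Scala (the language of our implementation); the constructor and function
names are ours and may differ from the actual code:

\begin{verbatim}
// plain regular expressions
abstract class Rexp
case object ZERO extends Rexp
case object ONE extends Rexp
case class CHAR(c: Char) extends Rexp
case class ALT(r1: Rexp, r2: Rexp) extends Rexp
case class SEQ(r1: Rexp, r2: Rexp) extends Rexp
case class STAR(r: Rexp) extends Rexp

// can r match the empty string?
def nullable(r: Rexp): Boolean = r match {
  case ZERO => false
  case ONE => true
  case CHAR(_) => false
  case ALT(r1, r2) => nullable(r1) || nullable(r2)
  case SEQ(r1, r2) => nullable(r1) && nullable(r2)
  case STAR(_) => true
}

// derivative of r w.r.t. the character c
def der(c: Char, r: Rexp): Rexp = r match {
  case ZERO => ZERO
  case ONE => ZERO
  case CHAR(d) => if (c == d) ONE else ZERO
  case ALT(r1, r2) => ALT(der(c, r1), der(c, r2))
  case SEQ(r1, r2) =>
    if (nullable(r1)) ALT(SEQ(der(c, r1), r2), der(c, r2))
    else SEQ(der(c, r1), r2)
  case STAR(r1) => SEQ(der(c, r1), STAR(r1))
}
\end{verbatim}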


The main property of the derivative operation
that enables us to reason about the correctness of
an algorithm using derivatives is 

\begin{center}
$c\!::\!s \in L(r)$ holds
if and only if $s \in L(r\backslash c)$.
\end{center}

\noindent
We can generalise the derivative operation shown above for single characters
to strings as follows:

\begin{center}
\begin{tabular}{lcl}
$r \backslash (c\!::\!s) $ & $\dn$ & $(r \backslash c) \backslash s$ \\
$r \backslash [\,] $ & $\dn$ & $r$
\end{tabular}
\end{center}

\noindent
and then define Brzozowski's  regular-expression matching algorithm as:

\[
match\;s\;r \;\dn\; nullable(r\backslash s)
\]

\noindent
Assuming that a string is given as a sequence of characters, say $c_0c_1\ldots c_{n-1}$, 
this algorithm presented graphically is as follows:

\begin{equation}\label{graph:*}
\begin{tikzcd}
r_0 \arrow[r, "\backslash c_0"]  & r_1 \arrow[r, "\backslash c_1"] & r_2 \arrow[r, dashed]  & r_n  \arrow[r,"\textit{nullable}?"] & \;\textrm{YES}/\textrm{NO}
\end{tikzcd}
\end{equation}

\noindent
where we start with  a regular expression  $r_0$, build successive
derivatives until we exhaust the string and then use \textit{nullable}
to test whether the result can match the empty string. It can  be
relatively  easily shown that this matcher is correct  (that is given
an $s = c_0...c_{n-1}$ and an $r_0$, it generates YES if and only if $s \in L(r_0)$).

Beautiful and simple definition.
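
In Scala, the string derivative and the matcher might look as follows,
reusing the datatype and functions from the sketch above:

\begin{verbatim}
// derivative w.r.t. a string
def ders(s: List[Char], r: Rexp): Rexp = s match {
  case Nil => r
  case c :: cs => ders(cs, der(c, r))
}

// the matcher
def matches(s: String, r: Rexp): Boolean =
  nullable(ders(s.toList, r))
\end{verbatim}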

If we implement the above algorithm naively, however,
the algorithm can be excruciatingly slow. For example, when starting with the regular
expression $(a + aa)^*$ and building 12 successive derivatives
w.r.t.~the character $a$, one obtains a derivative regular expression
with more than 8000 nodes (when viewed as a tree). Operations like
$\backslash$ and $\nullable$ need to traverse such trees and
consequently the bigger the size of the derivative the slower the
algorithm. 

Brzozowski was quick to notice that during this process a lot of useless
$\ONE$s and $\ZERO$s are generated, making the derivatives far from optimal.
He therefore introduced some ``similarity rules'', such
as $P+(Q+R) = (P+Q)+R$, to merge syntactically 
different but language-equivalent sub-regexes and further decrease the size
of the intermediate regexes. 

More simplifications are possible, such as deleting duplicates
and opening up nested alternatives to trigger even more simplifications.
Suppose we apply simplification after each derivative step, and compose
these two operations together as an atomic one: $a \backslash_{simp}\,c \dn
\textit{simp}(a \backslash c)$. Then we can build
a matcher without the derivatives becoming cumbersome.


If we want the size of derivatives in the algorithm to
stay even lower, we would need more aggressive simplifications.
Essentially we need to delete useless $\ZERO$s and $\ONE$s, as well as
deleting duplicates whenever possible. For example, the parentheses in
$(a+b) \cdot c + b\cdot c$ can be opened up to get $a\cdot c + b \cdot c + b
\cdot c$, and then simplified to just $a \cdot c + b \cdot c$. Another
example is simplifying $(a^*+a) + (a^*+ \ONE) + (a +\ONE)$ to just
$a^*+a+\ONE$. Adding these more aggressive simplification rules helps us
to achieve a very tight size bound, namely,
 the same size bound as that of the \emph{partial derivatives}. 
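
As a rough sketch, such aggressive simplification might be implemented on
plain regular expressions as follows (the bit-coded version used by the
algorithm follows later); \texttt{flts} opens up nested alternatives
and removes $\ZERO$s, while \texttt{distinct} removes duplicates:

\begin{verbatim}
// flatten nested alternatives into a list, dropping ZEROs
def flts(rs: List[Rexp]): List[Rexp] = rs match {
  case Nil => Nil
  case ZERO :: tl => flts(tl)
  case ALT(r1, r2) :: tl => flts(List(r1, r2)) ::: flts(tl)
  case r :: tl => r :: flts(tl)
}

def simp(r: Rexp): Rexp = r match {
  case SEQ(r1, r2) => (simp(r1), simp(r2)) match {
    case (ZERO, _) => ZERO
    case (_, ZERO) => ZERO
    case (ONE, s2) => s2
    case (s1, ONE) => s1
    case (s1, s2) => SEQ(s1, s2)
  }
  case ALT(r1, r2) => flts(List(simp(r1), simp(r2))).distinct match {
    case Nil => ZERO
    case s :: Nil => s
    case s :: ss => ss.foldLeft(s)(ALT(_, _)) // rebuild alternative
  }
  case _ => r
}
\end{verbatim}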

Building derivatives and then simplifying them: so far so good.
But what if we want to 
do lexing instead of just getting a YES/NO answer?
This requires us to go back again to the world 
without simplification first for a moment.
Sulzmann and Lu~\cite{Sulzmann2014} first came up with a nice and 
elegant (arguably as beautiful as the original
definition of derivatives) solution for this.

\subsection*{Values and the Lexing Algorithm by Sulzmann and Lu}


They first defined the datatype for storing the 
lexing information, called a \emph{value} or
sometimes also a \emph{lexical value}.  Values and regular
expressions correspond to each other as illustrated in the following
table:

\begin{center}
	\begin{tabular}{c@{\hspace{20mm}}c}
		\begin{tabular}{@{}rrl@{}}
			\multicolumn{3}{@{}l}{\textbf{Regular Expressions}}\medskip\\
			$r$ & $::=$  & $\ZERO$\\
			& $\mid$ & $\ONE$   \\
			& $\mid$ & $c$          \\
			& $\mid$ & $r_1 \cdot r_2$\\
			& $\mid$ & $r_1 + r_2$   \\
			\\
			& $\mid$ & $r^*$         \\
		\end{tabular}
		&
		\begin{tabular}{@{\hspace{0mm}}rrl@{}}
			\multicolumn{3}{@{}l}{\textbf{Values}}\medskip\\
			$v$ & $::=$  & \\
			&        & $\Empty$   \\
			& $\mid$ & $\Char(c)$          \\
			& $\mid$ & $\Seq\,v_1\, v_2$\\
			& $\mid$ & $\Left(v)$   \\
			& $\mid$ & $\Right(v)$  \\
			& $\mid$ & $\Stars\,[v_1,\ldots\,v_n]$ \\
		\end{tabular}
	\end{tabular}
\end{center}

\noindent
One regular expression can have multiple lexical values. For example,
the regular expression $(a+b)^*$ has an infinite list of
values corresponding to it: $\Stars\,[]$, $\Stars\,[\Left(\Char(a))]$,
$\Stars\,[\Right(\Char(b))]$, $\Stars\,[\Left(\Char(a)),\,\Right(\Char(b))]$,
$\ldots$.
Even when matching one specific string, there can
still be more than one value corresponding to the match.
Take the example where $r= (a^*\cdot a^*)^*$ and the string 
$s=\underbrace{aa\ldots a}_\text{n \textit{a}s}$.
The number of different ways of matching, 
without allowing any value under a star to be flattened
to the empty string, is given by the recurrence:
\begin{center}
	$C_n = (n+1)+n C_1+\ldots + 2 C_{n-1}$
\end{center}
and a closed form can be calculated as
\begin{equation}
	C_n =\frac{(2+\sqrt{2})^n - (2-\sqrt{2})^n}{4\sqrt{2}}
\end{equation}
which clearly grows exponentially.
A lexer that aims to produce all possible values has an exponential
worst-case runtime. Therefore it is impractical to try to generate
all possible matches in a run. In practice, we are usually 
interested in the POSIX value, which by intuition always
matches the leftmost regular expression when there is a choice
and always matches a subpart of the string as much as possible before
proceeding to the next token. For example, the above example has the POSIX value
$ \Stars\,[\Seq(\Stars\,[\underbrace{\Char(a),\ldots,\Char(a)}_\text{n iterations}], \Stars\,[])]$.
The output we want from a lexing algorithm is such a POSIX match,
encoded as a value.
The contribution of Sulzmann and Lu is an extension of Brzozowski's
algorithm by a second phase (the first phase being building successive
derivatives---see \eqref{graph:*}). In this second phase, a POSIX value 
is generated in case the regular expression matches  the string. 
Pictorially, the Sulzmann and Lu algorithm is as follows:

\begin{ceqn}
\begin{equation}\label{graph:2}
\begin{tikzcd}
r_0 \arrow[r, "\backslash c_0"]  \arrow[d] & r_1 \arrow[r, "\backslash c_1"] \arrow[d] & r_2 \arrow[r, dashed] \arrow[d] & r_n \arrow[d, "mkeps" description] \\
v_0           & v_1 \arrow[l,"inj_{r_0} c_0"]                & v_2 \arrow[l, "inj_{r_1} c_1"]              & v_n \arrow[l, dashed]         
\end{tikzcd}
\end{equation}
\end{ceqn}

\noindent
For convenience, we shall employ the following notations: the regular
expression we start with is $r_0$, and the given string $s$ is composed
of characters $c_0 c_1 \ldots c_{n-1}$. In  the first phase from the
left to right, we build the derivatives $r_1$, $r_2$, \ldots  according
to the characters $c_0$, $c_1$  until we exhaust the string and obtain
the derivative $r_n$. We test whether this derivative is
$\textit{nullable}$ or not. If not, we know the string does not match
$r$ and no value needs to be generated. If yes, we start building the
values incrementally by \emph{injecting} back the characters into the
earlier values $v_n, \ldots, v_0$. This is the second phase of the
algorithm from the right to left. For the first value $v_n$, we call the
function $\textit{mkeps}$, which builds a POSIX lexical value
for how the empty string has been matched by the (nullable) regular
expression $r_n$. This function is defined as

	\begin{center}
		\begin{tabular}{lcl}
			$\mkeps(\ONE)$ 		& $\dn$ & $\Empty$ \\
			$\mkeps(r_{1}+r_{2})$	& $\dn$ 
			& \textit{if} $\nullable(r_{1})$\\ 
			& & \textit{then} $\Left(\mkeps(r_{1}))$\\ 
			& & \textit{else} $\Right(\mkeps(r_{2}))$\\
			$\mkeps(r_1\cdot r_2)$ 	& $\dn$ & $\Seq\,(\mkeps\,r_1)\,(\mkeps\,r_2)$\\
			$\mkeps(r^*)$	        & $\dn$ & $\Stars\,[]$
		\end{tabular}
	\end{center}
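
\noindent
The values and the $\mkeps$ function can be rendered in Scala roughly as
follows (a sketch; the constructor names are ours):

\begin{verbatim}
// values record how a regex matched a string
abstract class Val
case object Empty extends Val
case class Chr(c: Char) extends Val
case class Sequ(v1: Val, v2: Val) extends Val
case class Left(v: Val) extends Val   // shadows scala.Left here
case class Right(v: Val) extends Val  // shadows scala.Right here
case class Stars(vs: List[Val]) extends Val

// a POSIX value for how a nullable regex matches the empty string
def mkeps(r: Rexp): Val = r match {
  case ONE => Empty
  case ALT(r1, r2) =>
    if (nullable(r1)) Left(mkeps(r1)) else Right(mkeps(r2))
  case SEQ(r1, r2) => Sequ(mkeps(r1), mkeps(r2))
  case STAR(_) => Stars(Nil)
}
\end{verbatim}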


\noindent 
After the $\mkeps$-call, we inject back the characters one by one in order to build
the lexical value $v_i$ for how the regex $r_i$ matches the string $s_i$
($s_i = c_i \ldots c_{n-1}$ ) from the previous lexical value $v_{i+1}$.
After injecting back $n$ characters, we get the lexical value for how $r_0$
matches $s$. The POSIX value is maintained throughout this process.
For this Sulzmann and Lu defined a function that reverses
the ``chopping off'' of characters during the derivative phase. The
corresponding function is called \emph{injection}, written
$\textit{inj}$; it takes three arguments: the first one is a regular
expression ${r_{i-1}}$, before the character is chopped off, the second
is a character ${c_{i-1}}$, the character we want to inject and the
third argument is the value ${v_i}$, into which one wants to inject the
character (it corresponds to the regular expression after the character
has been chopped off). The result of this function is a new value. The
definition of $\textit{inj}$ is as follows: 

\begin{center}
\begin{tabular}{l@{\hspace{1mm}}c@{\hspace{1mm}}l}
  $\textit{inj}\,(c)\,c\,\Empty$            & $\dn$ & $\Char\,c$\\
  $\textit{inj}\,(r_1 + r_2)\,c\,\Left(v)$ & $\dn$ & $\Left(\textit{inj}\,r_1\,c\,v)$\\
  $\textit{inj}\,(r_1 + r_2)\,c\,\Right(v)$ & $\dn$ & $\Right(\textit{inj}\,r_2\,c\,v)$\\
  $\textit{inj}\,(r_1 \cdot r_2)\,c\,\Seq(v_1,v_2)$ & $\dn$  & $\Seq(\textit{inj}\,r_1\,c\,v_1,v_2)$\\
  $\textit{inj}\,(r_1 \cdot r_2)\,c\,\Left(\Seq(v_1,v_2))$ & $\dn$  & $\Seq(\textit{inj}\,r_1\,c\,v_1,v_2)$\\
  $\textit{inj}\,(r_1 \cdot r_2)\,c\,\Right(v)$ & $\dn$  & $\Seq(\textit{mkeps}(r_1),\textit{inj}\,r_2\,c\,v)$\\
  $\textit{inj}\,(r^*)\,c\,\Seq(v,\Stars\,vs)$         & $\dn$  & $\Stars((\textit{inj}\,r\,c\,v)\,::\,vs)$\\
\end{tabular}
\end{center}

\noindent This definition is by recursion on the ``shape'' of regular
expressions and values. 
The clauses basically do one thing: they identify the ``hole'' in the
value into which the character needs to be injected back.
For instance, consider the last clause, for injecting back into a value
that corresponds to a star. We know the value must be a sequence value,
whose first component corresponds to the child regex of the star
with the first character chopped off, that is, the iteration of the star
that has just been unfolded. This value is followed by the already
matched star iterations we collected before. So we inject the character 
back into the first value and form a new value, with this new iteration
added to the front of the previous list of iterations, all under the $\Stars$
constructor at the top level.
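
In Scala, a sketch of $\textit{inj}$ following the clauses above:

\begin{verbatim}
// inject the character c back into the value v for r\c
def inj(r: Rexp, c: Char, v: Val): Val = (r, v) match {
  case (CHAR(d), Empty) => Chr(d)
  case (ALT(r1, _), Left(v1)) => Left(inj(r1, c, v1))
  case (ALT(_, r2), Right(v2)) => Right(inj(r2, c, v2))
  case (SEQ(r1, _), Sequ(v1, v2)) => Sequ(inj(r1, c, v1), v2)
  case (SEQ(r1, _), Left(Sequ(v1, v2))) => Sequ(inj(r1, c, v1), v2)
  case (SEQ(r1, r2), Right(v2)) => Sequ(mkeps(r1), inj(r2, c, v2))
  case (STAR(r1), Sequ(v1, Stars(vs))) => Stars(inj(r1, c, v1) :: vs)
}
\end{verbatim}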

We have mentioned before that derivatives without simplification 
can get clumsy, and this is true for values as well: by definition
they reflect the size of the regular expressions.

One can introduce simplification on regular expressions and values, but one
has to be careful not to break the correctness, as the injection 
function relies heavily on the structure of the regular expressions and values
matching each other.
This can be achieved by recording some extra rectification functions
during the derivative steps, and applying these rectifications 
during the injection phase.
One can then prove that the POSIX value computed for how a
regular expression matches a string is not affected, although this is much harder
to establish. Some initial results in this regard have been
obtained in \cite{AusafDyckhoffUrban2016}. 

%Brzozowski, after giving the derivatives and simplification,
%did not explore lexing with simplification or he may well be 
%stuck on an efficient simplificaiton with a proof.
%He went on to explore the use of derivatives together with 
%automaton, and did not try lexing using derivatives.

We want to get rid of the complex and fragile rectification of values.
Can we avoid creating the intermediate values $v_1,\ldots, v_n$ altogether
and extract the lexing information, which is already there, in one pass
while doing derivatives, without a second phase of injection?
And can we at the same time make sure that simplifications
are easily handled without breaking the correctness of the algorithm?

Sulzmann and Lu solved this problem by
introducing additional information into the 
regular expressions, called \emph{bitcodes}.

\subsection*{Bit-coded Algorithm}
Bits and bitcodes (lists of bits) are defined as:

\begin{center}
		$b ::=   1 \mid  0 \qquad
bs ::= [] \mid b::bs    
$
\end{center}

\noindent
The $1$ and $0$ are not in bold in order to avoid 
confusion with the regular expressions $\ZERO$ and $\ONE$. Bitcodes (or
bit-lists) can be used to encode values (or potentially incomplete values) in a
compact form. This can be straightforwardly seen in the following
coding function from values to bitcodes: 

\begin{center}
\begin{tabular}{lcl}
  $\textit{code}(\Empty)$ & $\dn$ & $[]$\\
  $\textit{code}(\Char\,c)$ & $\dn$ & $[]$\\
  $\textit{code}(\Left\,v)$ & $\dn$ & $0 :: code(v)$\\
  $\textit{code}(\Right\,v)$ & $\dn$ & $1 :: code(v)$\\
  $\textit{code}(\Seq\,v_1\,v_2)$ & $\dn$ & $code(v_1) \,@\, code(v_2)$\\
  $\textit{code}(\Stars\,[])$ & $\dn$ & $[0]$\\
  $\textit{code}(\Stars\,(v\!::\!vs))$ & $\dn$ & $1 :: code(v) \;@\;
                                                 code(\Stars\,vs)$
\end{tabular}    
\end{center} 

\noindent
Here $\textit{code}$ encodes a value into a bitcode by converting
$\Left$ into $0$, $\Right$ into $1$, and marking the start of a non-empty
star iteration by $1$. The border where a star terminates
is marked by $0$. This coding is lossy, as it throws away the information about
characters, and it also does not encode the ``boundary'' between two
sequence values. Moreover, with only the bitcode we cannot even tell
whether the $1$s and $0$s are for $\Left/\Right$ or for $\Stars$. The
reason for choosing this compact way of storing information is that the
relatively small bits can be easily manipulated and ``moved
around'' in a regular expression. In order to recover values, we 
need the corresponding regular expression as extra information. This
means the decoding function is defined as:


%\begin{definition}[Bitdecoding of Values]\mbox{}
\begin{center}
\begin{tabular}{@{}l@{\hspace{1mm}}c@{\hspace{1mm}}l@{}}
  $\textit{decode}'\,bs\,(\ONE)$ & $\dn$ & $(\Empty, bs)$\\
  $\textit{decode}'\,bs\,(c)$ & $\dn$ & $(\Char\,c, bs)$\\
  $\textit{decode}'\,(0\!::\!bs)\;(r_1 + r_2)$ & $\dn$ &
     $\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r_1\;\textit{in}\;
       (\Left\,v, bs_1)$\\
  $\textit{decode}'\,(1\!::\!bs)\;(r_1 + r_2)$ & $\dn$ &
     $\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r_2\;\textit{in}\;
       (\Right\,v, bs_1)$\\                           
  $\textit{decode}'\,bs\;(r_1\cdot r_2)$ & $\dn$ &
        $\textit{let}\,(v_1, bs_1) = \textit{decode}'\,bs\,r_1\;\textit{in}$\\
  & &   $\textit{let}\,(v_2, bs_2) = \textit{decode}'\,bs_1\,r_2$\\
  & &   \hspace{35mm}$\textit{in}\;(\Seq\,v_1\,v_2, bs_2)$\\
  $\textit{decode}'\,(0\!::\!bs)\,(r^*)$ & $\dn$ & $(\Stars\,[], bs)$\\
  $\textit{decode}'\,(1\!::\!bs)\,(r^*)$ & $\dn$ & 
         $\textit{let}\,(v, bs_1) = \textit{decode}'\,bs\,r\;\textit{in}$\\
  & &   $\textit{let}\,(\Stars\,vs, bs_2) = \textit{decode}'\,bs_1\,r^*$\\
  & &   \hspace{35mm}$\textit{in}\;(\Stars\,v\!::\!vs, bs_2)$\bigskip\\
  
  $\textit{decode}\,bs\,r$ & $\dn$ &
     $\textit{let}\,(v, bs') = \textit{decode}'\,bs\,r\;\textit{in}$\\
  & & $\textit{if}\;bs' = []\;\textit{then}\;\textit{Some}\,v\;
       \textit{else}\;\textit{None}$                       
\end{tabular}    
\end{center}    
%\end{definition}
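
The $\textit{code}$ and $\textit{decode}$ functions can be sketched in
Scala as follows (here bits are simply \texttt{Int}s, and the auxiliary
function raises an exception on an invalid bitcode):

\begin{verbatim}
type Bits = List[Int]  // the bits 0 and 1, not the regexes ZERO/ONE

def code(v: Val): Bits = v match {
  case Empty => Nil
  case Chr(_) => Nil
  case Left(v1) => 0 :: code(v1)
  case Right(v1) => 1 :: code(v1)
  case Sequ(v1, v2) => code(v1) ::: code(v2)
  case Stars(Nil) => List(0)
  case Stars(v1 :: vs) => 1 :: code(v1) ::: code(Stars(vs))
}

// returns the decoded value and the leftover bits
def decode_aux(r: Rexp, bs: Bits): (Val, Bits) = (r, bs) match {
  case (ONE, bs) => (Empty, bs)
  case (CHAR(c), bs) => (Chr(c), bs)
  case (ALT(r1, _), 0 :: bs) =>
    val (v, bs1) = decode_aux(r1, bs); (Left(v), bs1)
  case (ALT(_, r2), 1 :: bs) =>
    val (v, bs1) = decode_aux(r2, bs); (Right(v), bs1)
  case (SEQ(r1, r2), bs) =>
    val (v1, bs1) = decode_aux(r1, bs)
    val (v2, bs2) = decode_aux(r2, bs1)
    (Sequ(v1, v2), bs2)
  case (STAR(_), 0 :: bs) => (Stars(Nil), bs)
  case (STAR(r1), 1 :: bs) =>
    val (v, bs1) = decode_aux(r1, bs)
    decode_aux(STAR(r1), bs1) match {
      case (Stars(vs), bs2) => (Stars(v :: vs), bs2)
      case _ => throw new Exception("invalid bitcode")
    }
}

def decode(bs: Bits, r: Rexp): Option[Val] = decode_aux(r, bs) match {
  case (v, Nil) => Some(v)
  case _ => None
}
\end{verbatim}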

Sulzmann and Lu integrated the bitcodes into regular expressions to
create annotated regular expressions \cite{Sulzmann2014}.
\emph{Annotated regular expressions} are defined by the following
grammar:

\begin{center}
\begin{tabular}{lcl}
  $\textit{a}$ & $::=$  & $\ZERO$\\
                  & $\mid$ & $_{bs}\ONE$\\
                  & $\mid$ & $_{bs}{\bf c}$\\
                  & $\mid$ & $_{bs}\sum\,as$\\
                  & $\mid$ & $_{bs}a_1\cdot a_2$\\
                  & $\mid$ & $_{bs}a^*$
\end{tabular}    
\end{center}  
%(in \textit{ALTS})

\noindent
where $bs$ stands for bitcodes, $a$  for $\bold{a}$nnotated regular
expressions and $as$ for a list of annotated regular expressions.
The alternative constructor($\sum$) has been generalized to 
accept a list of annotated regular expressions rather than just 2.
We will show that these bitcodes encode information about
the (POSIX) value that should be generated by the Sulzmann and Lu
algorithm.


To do lexing using annotated regular expressions, we shall first
transform the usual (un-annotated) regular expressions into annotated
regular expressions. This operation is called \emph{internalisation} and
defined as follows:

%\begin{definition}
\begin{center}
\begin{tabular}{lcl}
  $(\ZERO)^\uparrow$ & $\dn$ & $\ZERO$\\
  $(\ONE)^\uparrow$ & $\dn$ & $_{[]}\ONE$\\
  $(c)^\uparrow$ & $\dn$ & $_{[]}{\bf c}$\\
  $(r_1 + r_2)^\uparrow$ & $\dn$ &
  $_{[]}\sum[\textit{fuse}\,[0]\,r_1^\uparrow,\,
  \textit{fuse}\,[1]\,r_2^\uparrow]$\\
  $(r_1\cdot r_2)^\uparrow$ & $\dn$ &
         $_{[]}r_1^\uparrow \cdot r_2^\uparrow$\\
  $(r^*)^\uparrow$ & $\dn$ &
         $_{[]}(r^\uparrow)^*$\\
\end{tabular}    
\end{center}    
%\end{definition}

\noindent
We use up arrows here to indicate that the basic un-annotated regular
expressions are ``lifted up'' into something slightly more complex. In the
fourth clause, $\textit{fuse}$ is an auxiliary function that helps to
attach bits to the front of an annotated regular expression. Its
definition is as follows:

\begin{center}
\begin{tabular}{lcl}
  $\textit{fuse}\;bs \; \ZERO$ & $\dn$ & $\ZERO$\\
  $\textit{fuse}\;bs\; _{bs'}\ONE$ & $\dn$ &
     $_{bs @ bs'}\ONE$\\
  $\textit{fuse}\;bs\;_{bs'}{\bf c}$ & $\dn$ &
     $_{bs@bs'}{\bf c}$\\
  $\textit{fuse}\;bs\,_{bs'}\sum\textit{as}$ & $\dn$ &
     $_{bs@bs'}\sum\textit{as}$\\
  $\textit{fuse}\;bs\; _{bs'}a_1\cdot a_2$ & $\dn$ &
     $_{bs@bs'}a_1 \cdot a_2$\\
  $\textit{fuse}\;bs\,_{bs'}a^*$ & $\dn$ &
     $_{bs @ bs'}a^*$
\end{tabular}    
\end{center}  
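
\noindent
In Scala, the annotated regular expressions, $\textit{fuse}$ and
internalisation can be sketched as follows (constructor names such as
\texttt{AZERO} and \texttt{AONE} are ours):

\begin{verbatim}
// annotated regular expressions
abstract class ARexp
case object AZERO extends ARexp
case class AONE(bs: Bits) extends ARexp
case class ACHAR(bs: Bits, c: Char) extends ARexp
case class AALTS(bs: Bits, as: List[ARexp]) extends ARexp
case class ASEQ(bs: Bits, a1: ARexp, a2: ARexp) extends ARexp
case class ASTAR(bs: Bits, a: ARexp) extends ARexp

// attach bits to the front of an annotated regex
def fuse(bs: Bits, a: ARexp): ARexp = a match {
  case AZERO => AZERO
  case AONE(bs1) => AONE(bs ::: bs1)
  case ACHAR(bs1, c) => ACHAR(bs ::: bs1, c)
  case AALTS(bs1, as) => AALTS(bs ::: bs1, as)
  case ASEQ(bs1, a1, a2) => ASEQ(bs ::: bs1, a1, a2)
  case ASTAR(bs1, a1) => ASTAR(bs ::: bs1, a1)
}

// lift a plain regex into an annotated one
def internalise(r: Rexp): ARexp = r match {
  case ZERO => AZERO
  case ONE => AONE(Nil)
  case CHAR(c) => ACHAR(Nil, c)
  case ALT(r1, r2) =>
    AALTS(Nil, List(fuse(List(0), internalise(r1)),
                    fuse(List(1), internalise(r2))))
  case SEQ(r1, r2) => ASEQ(Nil, internalise(r1), internalise(r2))
  case STAR(r1) => ASTAR(Nil, internalise(r1))
}
\end{verbatim}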

\noindent
After internalising the regular expression, we perform successive
derivative operations on the annotated regular expression. This
derivative operation is the same as what we had previously for the
basic regular expressions, except that we need to take care of
the bitcodes:


\iffalse
 %\begin{definition}{bder}
\begin{center}
  \begin{tabular}{@{}lcl@{}}
  $(\textit{ZERO})\,\backslash c$ & $\dn$ & $\textit{ZERO}$\\  
  $(\textit{ONE}\;bs)\,\backslash c$ & $\dn$ & $\textit{ZERO}$\\  
  $(\textit{CHAR}\;bs\,d)\,\backslash c$ & $\dn$ &
        $\textit{if}\;c=d\; \;\textit{then}\;
         \textit{ONE}\;bs\;\textit{else}\;\textit{ZERO}$\\  
  $(\textit{ALTS}\;bs\,as)\,\backslash c$ & $\dn$ &
  $\textit{ALTS}\;bs\,(as.map(\backslash c))$\\
  $(\textit{SEQ}\;bs\,a_1\,a_2)\,\backslash c$ & $\dn$ &
     $\textit{if}\;\textit{bnullable}\,a_1$\\
					       & &$\textit{then}\;\textit{ALTS}\,bs\,List((\textit{SEQ}\,[]\,(a_1\,\backslash c)\,a_2),$\\
					       & &$\phantom{\textit{then}\;\textit{ALTS}\,bs\,}(\textit{fuse}\,(\textit{bmkeps}\,a_1)\,(a_2\,\backslash c)))$\\
  & &$\textit{else}\;\textit{SEQ}\,bs\,(a_1\,\backslash c)\,a_2$\\
  $(\textit{STAR}\,bs\,a)\,\backslash c$ & $\dn$ &
      $\textit{SEQ}\;bs\,(\textit{fuse}\, [\Z] (r\,\backslash c))\,
       (\textit{STAR}\,[]\,r)$
\end{tabular}    
\end{center}    
%\end{definition}

\begin{center}
  \begin{tabular}{@{}lcl@{}}
  $(\textit{ZERO})\,\backslash c$ & $\dn$ & $\textit{ZERO}$\\  
  $(_{bs}\textit{ONE})\,\backslash c$ & $\dn$ & $\textit{ZERO}$\\  
  $(_{bs}\textit{CHAR}\;d)\,\backslash c$ & $\dn$ &
        $\textit{if}\;c=d\; \;\textit{then}\;
         _{bs}\textit{ONE}\;\textit{else}\;\textit{ZERO}$\\  
  $(_{bs}\textit{ALTS}\;\textit{as})\,\backslash c$ & $\dn$ &
  $_{bs}\textit{ALTS}\;(\textit{as}.\textit{map}(\backslash c))$\\
  $(_{bs}\textit{SEQ}\;a_1\,a_2)\,\backslash c$ & $\dn$ &
     $\textit{if}\;\textit{bnullable}\,a_1$\\
					       & &$\textit{then}\;_{bs}\textit{ALTS}\,List((_{[]}\textit{SEQ}\,(a_1\,\backslash c)\,a_2),$\\
					       & &$\phantom{\textit{then}\;_{bs}\textit{ALTS}\,}(\textit{fuse}\,(\textit{bmkeps}\,a_1)\,(a_2\,\backslash c)))$\\
  & &$\textit{else}\;_{bs}\textit{SEQ}\,(a_1\,\backslash c)\,a_2$\\
  $(_{bs}\textit{STAR}\,a)\,\backslash c$ & $\dn$ &
      $_{bs}\textit{SEQ}\;(\textit{fuse}\, [0] \; r\,\backslash c )\,
       (_{bs}\textit{STAR}\,[]\,r)$
\end{tabular}    
\end{center}    
%\end{definition}
\fi

\begin{center}
  \begin{tabular}{@{}lcl@{}}
  $(\ZERO)\,\backslash c$ & $\dn$ & $\ZERO$\\  
  $(_{bs}\ONE)\,\backslash c$ & $\dn$ & $\ZERO$\\  
  $(_{bs}{\bf d})\,\backslash c$ & $\dn$ &
        $\textit{if}\;c=d\; \;\textit{then}\;
         _{bs}\ONE\;\textit{else}\;\ZERO$\\  
  $(_{bs}\sum \;\textit{as})\,\backslash c$ & $\dn$ &
  $_{bs}\sum\;(\textit{as.map}(\backslash c))$\\
  $(_{bs}\;a_1\cdot a_2)\,\backslash c$ & $\dn$ &
     $\textit{if}\;\textit{bnullable}\,a_1$\\
					       & &$\textit{then}\;_{bs}\sum\,[(_{[]}\,(a_1\,\backslash c)\cdot\,a_2),$\\
					       & &$\phantom{\textit{then},\;_{bs}\sum\,}(\textit{fuse}\,(\textit{bmkeps}\,a_1)\,(a_2\,\backslash c))]$\\
  & &$\textit{else}\;_{bs}\,(a_1\,\backslash c)\cdot a_2$\\
  $(_{bs}a^*)\,\backslash c$ & $\dn$ &
      $_{bs}(\textit{fuse}\, [0] \; (a\,\backslash c))\cdot
       (_{[]}a^*)$
\end{tabular}    
\end{center}    

%\end{definition}
\noindent
For instance, when we take the derivative of $_{bs}a^*$ with respect to $c$,
we need to unfold it into a sequence,
and attach an additional bit $0$ to the front of $a \backslash c$
to indicate that there is one more star iteration. The sequence clause
is also more subtle (here
\textit{bnullable} is exactly the same as $\textit{nullable}$, except
that it works on annotated regular expressions; we therefore omit its
definition). Assume that $\textit{bmkeps}$ correctly extracts the bitcode for how
$a_1$ matches the string prior to the character $c$ (more on this later).
Then the right branch of the alternative, which is $\textit{fuse} \; (\bmkeps \;  a_1)\, (a_2
\backslash c)$, collapses the regular expression $a_1$ (as it has
already been fully matched) and stores the lexing information at the
head of the regular expression $a_2 \backslash c$ by fusing it to the front. The
bitsequence $\textit{bs}$, which was initially attached to the
first element of the sequence $a_1 \cdot a_2$, has
now been elevated to the top level of $\sum$, as this information will be
needed whichever way the sequence is matched, no matter whether $c$ belongs
to $a_1$ or $a_2$. After building these derivatives and maintaining all
the lexing information, we complete the lexing by collecting the
bitcodes using a generalised version of the $\textit{mkeps}$ function
for annotated regular expressions, called $\textit{bmkeps}$:


%\begin{definition}[\textit{bmkeps}]\mbox{}
\begin{center}
\begin{tabular}{lcl}
  $\textit{bmkeps}\,(_{bs}\ONE)$ & $\dn$ & $bs$\\
  $\textit{bmkeps}\,(_{bs}\sum a::\textit{as})$ & $\dn$ &
     $\textit{if}\;\textit{bnullable}\,a$\\
  & &$\textit{then}\;bs\,@\,\textit{bmkeps}\,a$\\
  & &$\textit{else}\;\textit{bmkeps}\,(_{bs}\sum \textit{as})$\\
  $\textit{bmkeps}\,(_{bs} a_1 \cdot a_2)$ & $\dn$ &
     $bs \,@\,\textit{bmkeps}\,a_1\,@\, \textit{bmkeps}\,a_2$\\
  $\textit{bmkeps}\,(_{bs}a^*)$ & $\dn$ &
     $bs \,@\, [0]$
\end{tabular}    
\end{center}    
%\end{definition}

\noindent
This function completes the value information by travelling along the
path of the regular expression that corresponds to a POSIX value and
collecting all the bitcodes, and using $S$ to indicate the end of star
iterations. If we take the bitcodes produced by $\textit{bmkeps}$ and
decode them, we get the value we expect. The corresponding lexing
algorithm looks as follows:

\begin{center}
\begin{tabular}{lcl}
  $\textit{blexer}\;r\,s$ & $\dn$ &
      $\textit{let}\;a = (r^\uparrow)\backslash s\;\textit{in}$\\                
  & & $\;\;\textit{if}\; \textit{bnullable}(a)$\\
  & & $\;\;\textit{then}\;\textit{decode}\,(\textit{bmkeps}\,a)\,r$\\
  & & $\;\;\textit{else}\;\textit{None}$
\end{tabular}
\end{center}

\noindent
In this definition $\_\backslash s$ is the  generalisation  of the derivative
operation from characters to strings (just like the derivatives for un-annotated
regular expressions).
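
Putting the pieces together, a sketch in Scala of the bit-coded
derivative, $\textit{bmkeps}$ and the lexer, reusing the earlier
sketches (again, the names are ours):

\begin{verbatim}
def bnullable(a: ARexp): Boolean = a match {
  case AZERO => false
  case AONE(_) => true
  case ACHAR(_, _) => false
  case AALTS(_, as) => as.exists(bnullable)
  case ASEQ(_, a1, a2) => bnullable(a1) && bnullable(a2)
  case ASTAR(_, _) => true
}

def bmkeps(a: ARexp): Bits = a match {
  case AONE(bs) => bs
  case AALTS(bs, a1 :: as) =>
    if (bnullable(a1)) bs ::: bmkeps(a1) else bmkeps(AALTS(bs, as))
  case ASEQ(bs, a1, a2) => bs ::: bmkeps(a1) ::: bmkeps(a2)
  case ASTAR(bs, _) => bs ::: List(0)
}

def bder(c: Char, a: ARexp): ARexp = a match {
  case AZERO => AZERO
  case AONE(_) => AZERO
  case ACHAR(bs, d) => if (c == d) AONE(bs) else AZERO
  case AALTS(bs, as) => AALTS(bs, as.map(bder(c, _)))
  case ASEQ(bs, a1, a2) =>
    if (bnullable(a1))
      AALTS(bs, List(ASEQ(Nil, bder(c, a1), a2),
                     fuse(bmkeps(a1), bder(c, a2))))
    else ASEQ(bs, bder(c, a1), a2)
  case ASTAR(bs, a1) =>
    ASEQ(bs, fuse(List(0), bder(c, a1)), ASTAR(Nil, a1))
}

def bders(s: List[Char], a: ARexp): ARexp = s match {
  case Nil => a
  case c :: cs => bders(cs, bder(c, a))
}

def blexer(r: Rexp, s: String): Option[Val] = {
  val a = bders(s.toList, internalise(r))
  if (bnullable(a)) decode(bmkeps(a), r) else None
}
\end{verbatim}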

Remember that one of the important reasons for introducing bitcodes
is that they can make simplification more structured, and therefore
easier to prove correct.

\subsection*{Our Simplification Rules}

In this section we introduce aggressive (in terms of size) simplification rules
on annotated regular expressions
in order to keep derivatives small. Such simplifications are promising,
as we have
generated test data showing
that a good tight bound can be achieved. Obviously we can only
partially cover the search space, as there are infinitely many regular
expressions and strings. 

One modification we introduced is to allow a list of annotated regular
expressions in the $\sum$ constructor. This allows us to not just
delete unnecessary $\ZERO$s and $\ONE$s from regular expressions, but
also unnecessary ``copies'' of regular expressions (very similar to
simplifying $r + r$ to just $r$, but in a more general setting). Another
modification is that we use simplification rules inspired by Antimirov's
work on partial derivatives. They maintain the idea that only the first
``copy'' of a regular expression in an alternative contributes to the
calculation of a POSIX value. All subsequent copies can be pruned away from
the regular expression. A recursive definition of our  simplification function 
that looks somewhat similar to our Scala code is given below:
%\comment{Use $\ZERO$, $\ONE$ and so on. 
%Is it $ALTS$ or $ALTS$?}\\

\begin{center}
  \begin{tabular}{@{}lcl@{}}
   
  $\textit{simp} \; (_{bs}a_1\cdot a_2)$ & $\dn$ & $ (\textit{simp} \; a_1, \textit{simp}  \; a_2) \; \textit{match} $ \\
   &&$\quad\textit{case} \; (\ZERO, \_) \Rightarrow  \ZERO$ \\
   &&$\quad\textit{case} \; (\_, \ZERO) \Rightarrow  \ZERO$ \\
   &&$\quad\textit{case} \;  (\ONE, a_2') \Rightarrow  \textit{fuse} \; bs \;  a_2'$ \\
   &&$\quad\textit{case} \; (a_1', \ONE) \Rightarrow  \textit{fuse} \; bs \;  a_1'$ \\
   &&$\quad\textit{case} \; (a_1', a_2') \Rightarrow   _{bs}a_1' \cdot a_2'$ \\

  $\textit{simp} \; (_{bs}\sum \textit{as})$ & $\dn$ & $\textit{distinct}( \textit{flatten} ( \textit{as.map(simp)})) \; \textit{match} $ \\
  &&$\quad\textit{case} \; [] \Rightarrow  \ZERO$ \\
   &&$\quad\textit{case} \; a :: [] \Rightarrow  \textit{fuse bs a}$ \\
   &&$\quad\textit{case} \;  as' \Rightarrow _{bs}\sum \textit{as'}$\\ 

   $\textit{simp} \; a$ & $\dn$ & $\textit{a} \qquad \textit{otherwise}$   
\end{tabular}    
\end{center}    

\noindent
The simplification function pattern-matches on the regular expression.
When it detects that the regular expression is an alternative or a
sequence, it first simplifies its children regular expressions
recursively and then checks whether one of the children has turned into $\ZERO$ or
$\ONE$, which might trigger further simplification at the current level.
The most involved part is the $\sum$ clause, where we use two
auxiliary functions $\textit{flatten}$ and $\textit{distinct}$ to open up nested
alternatives and to remove as many duplicates as possible. Function
$\textit{distinct}$ keeps only the first occurring copy and removes all later
duplicates. Function $\textit{flatten}$ opens up nested $\sum$s.
Its recursive definition is given below:

 \begin{center}
  \begin{tabular}{@{}lcl@{}}
  $\textit{flatten} \; ((_{bs}\sum \textit{as}) :: \textit{as}')$ & $\dn$ & $(\textit{map} \;
     (\textit{fuse}\;bs)\; \textit{as}) \; @ \; \textit{flatten} \; \textit{as}' $ \\
  $\textit{flatten} \; (\ZERO :: \textit{as}')$ & $\dn$ & $ \textit{flatten} \;  \textit{as}' $ \\
  $\textit{flatten} \; (a :: \textit{as}')$ & $\dn$ & $a :: \textit{flatten} \; \textit{as}'$ \quad(otherwise)\\
  $\textit{flatten} \; [\,]$ & $\dn$ & $[\,]$
\end{tabular}    
\end{center}  

\noindent
Here $\textit{flatten}$ behaves like the traditional functional programming flatten
function, except that it also removes $\ZERO$s. Or in terms of regular expressions, it
removes parentheses, for example changing $a+(b+c)$ into $a+b+c$.
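
A Scala rendering of this simplification, close to but not necessarily
identical with our actual code (for simplicity we only contract a
$\ONE$ on the left of a sequence, where its bits can safely be fused
to the front of the other component):

\begin{verbatim}
// open up nested alternatives, drop AZEROs, fuse the bits inside
def flatten(as: List[ARexp]): List[ARexp] = as match {
  case Nil => Nil
  case AZERO :: tl => flatten(tl)
  case AALTS(bs, as1) :: tl => as1.map(fuse(bs, _)) ::: flatten(tl)
  case a :: tl => a :: flatten(tl)
}

def simp(a: ARexp): ARexp = a match {
  case ASEQ(bs, a1, a2) => (simp(a1), simp(a2)) match {
    case (AZERO, _) => AZERO
    case (_, AZERO) => AZERO
    case (AONE(bs1), s2) => fuse(bs ::: bs1, s2)
    case (s1, s2) => ASEQ(bs, s1, s2)
  }
  case AALTS(bs, as) => flatten(as.map(simp)).distinct match {
    case Nil => AZERO
    case s :: Nil => fuse(bs, s)
    case as1 => AALTS(bs, as1)
  }
  case _ => a
}
\end{verbatim}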

Having defined the $\simp$ function,
we can, as before, extend the derivative w.r.t.~a character to the
derivative w.r.t.~a string:

\begin{center}
\begin{tabular}{lcl}
$r \backslash_{simp} (c\!::\!s) $ & $\dn$ & $(r \backslash_{simp}\, c) \backslash_{simp}\, s$ \\
$r \backslash_{simp} [\,] $ & $\dn$ & $r$
\end{tabular}
\end{center}

\noindent
to obtain an optimised version of the algorithm:

 \begin{center}
\begin{tabular}{lcl}
  $\textit{blexer\_simp}\;r\,s$ & $\dn$ &
      $\textit{let}\;a = (r^\uparrow)\backslash_{simp}\, s\;\textit{in}$\\                
  & & $\;\;\textit{if}\; \textit{bnullable}(a)$\\
  & & $\;\;\textit{then}\;\textit{decode}\,(\textit{bmkeps}\,a)\,r$\\
  & & $\;\;\textit{else}\;\textit{None}$
\end{tabular}
\end{center}

\noindent
This algorithm keeps the regular expression size small. For example,
with this simplification the derivative of our previous $(a + aa)^*$ example,
which previously had more than 8000 nodes, is reduced to just 6 nodes and
stays constant, no matter how long the input string is.
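
A sketch of the simplified lexer in Scala, interleaving \texttt{simp}
with the derivative steps:

\begin{verbatim}
def bders_simp(s: List[Char], a: ARexp): ARexp = s match {
  case Nil => a
  case c :: cs => bders_simp(cs, simp(bder(c, a)))
}

def blexer_simp(r: Rexp, s: String): Option[Val] = {
  val a = bders_simp(s.toList, internalise(r))
  if (bnullable(a)) decode(bmkeps(a), r) else None
}
\end{verbatim}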



\section{Current Work and Progress}
For reasons beyond this report, it turns out that a complete set of
simplification rules depends on values being encoded as
bitsequences.\footnote{Values are the results the lexing algorithms
  generate; they encode how a regular expression matched a string.} We
already know that the lexing algorithm using bitsequences but
\emph{without} simplification is correct, albeit horribly
slow. Therefore in the past six months I have been trying to prove that the
algorithm using bitsequences plus our simplification rules is
also correct. Formally this amounts to showing that

\begin{equation}\label{mainthm}
\blexers \; r \; s = \blexer \;r\;s
\end{equation}

\noindent
whereby $\blexers$ simplifies (makes derivatives smaller) in each
step. This
would be an important milestone for my thesis, because we already
have a very good idea how to establish that our set of simplification
rules keeps the size of derivatives below a relatively tight bound.

In order to prove the main theorem in \eqref{mainthm}, we need to prove that the
two functions produce the same output. The definition of these two  functions 
is shown below.

\begin{center}
\begin{tabular}{lcl}
  $\textit{blexer}\;r\,s$ & $\dn$ &
      $\textit{let}\;a = (r^\uparrow)\backslash s\;\textit{in}$\\                
  & & $\;\;\textit{if}\; \textit{bnullable}(a)$\\
  & & $\;\;\textit{then}\;\textit{decode}\,(\textit{bmkeps}\,a)\,r$\\
  & & $\;\;\textit{else}\;\textit{None}$
\end{tabular}
\end{center}

\begin{center}
\begin{tabular}{lcl}
  $\blexers \; r \, s$ &$\dn$ &
    $\textit{let} \; a = (r^\uparrow)\backslash_{simp}\, s\; \textit{in}$\\
  & & $\; \; \textit{if} \; \textit{bnullable}(a)$\\
  & & $\; \; \textit{then} \; \textit{decode}\,(\textit{bmkeps}\,a)\,r$\\
  & & $\;\;   \textit{else}\;\textit{None}$
\end{tabular}
\end{center}
\noindent
In these definitions $(r^\uparrow)$ is the internalisation (coding)
function, which is the same in both cases; similarly \textit{decode} and
\textit{bmkeps} are the same functions in both cases. Our main
theorem~\eqref{mainthm} therefore boils down to proving the following
two propositions (depending on which branch the if-else clause
takes). They establish that the derivatives \emph{with} simplification
do not change the computed result:

\begin{itemize}
\item{(a)} If a string $s$ is in the language $L(r)$, then \\
$\textit{bmkeps}\, ((r^\uparrow)\backslash_{simp}\,s) = \textit{bmkeps}\, ((r^\uparrow)\backslash s)$,\\
\item{(b)} If a string $s$ is not in the language $L(r)$, then 
$\rup \backslash_{simp} \,s$ is not nullable.
\end{itemize}

\noindent
We have already proved the second part in Isabelle. This is actually
not too difficult because we can show that simplification does not
change the language of a regular expression.

If we can prove the first part, that is the bitsequence algorithm with
simplification produces the same result as the one without
simplification, then we are done.  Unfortunately that part requires
more effort, because simplification does not only need to \emph{not}
change the language, but also not change the value (that is the
computed result).

%\bigskip\noindent\rule[1.5ex]{\linewidth}{5pt}
%Do you want to keep this? You essentially want to say that the old
%method used retrieve, which unfortunately cannot be adopted to 
%the simplification rules. You could just say that and give an example.
%However you have to think about how you give the example....nobody knows
%about AZERO etc yet. Maybe it might be better to use normal regexes
%like $a + aa$, but annotate bitsequences as subscript like $_1(_0a + _1aa)$.

%\bigskip\noindent\rule[1.5ex]{\linewidth}{5pt}
%REPLY:\\
%Yes, I am essentially saying that the old method
%cannot be adopted without adjustments.
%But this does not mean we should skip
%the proof of the bit-coded algorithm
%as it is still the main direction we are looking into
%to prove things. We are trying to modify
%the old proof to suit our needs, but not give 
%up it totally, that is why i believe the old 
%proof is fundamental in understanding
%what we are doing in the past 6 months.
%\bigskip\noindent\rule[1.5ex]{\linewidth}{5pt}

\subsubsection*{Existing Proof}

For this we started by looking at the proof of
\begin{equation}\label{lexer}
\blexer \; (r^\uparrow)  s = \lexer \;r \;s,
\end{equation}

%\noindent
%might provide us insight into proving 
%\begin{center}
%$\blexer \; r^\uparrow \;s = \blexers \; r^\uparrow \;s$
%\end{center}

\noindent
which established that the bit-sequence algorithm produces the same
result as the original algorithm, which does not use 
bit-sequences.
The proof uses two ``tricks''. One is that it uses a \flex-function

\begin{center}
\begin{tabular}{lcl}
$\textit{flex} \;r\; f\; (c\!::\!s) $ & $\dn$ & $\textit{flex} \;  (r\backslash c) \;(\lambda v. f (inj \; r \; c \; v)) \;s$ \\
$\textit{flex} \;r\; f\;  [\,] $ & $\dn$ & $f$
\end{tabular}
\end{center}

\noindent
to prove for the right-hand side in \eqref{lexer}

\begin{center}
$\lexer \;r\; s = \flex \;r\;\textit{id}\; s \;(\mkeps \; (r\backslash s))$.
\end{center}



\noindent
This property links $\flex$ and $\lexer$:
$\flex$ essentially does lexing by
stacking up injection functions while doing derivatives,
explicitly showing the order in which characters are
injected back in each step.
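
In Scala, a sketch of $\flex$ building on the earlier definitions:

\begin{verbatim}
// stack up the injection functions while doing derivatives
def flex(r: Rexp, f: Val => Val, s: List[Char]): Val => Val = s match {
  case Nil => f
  case c :: cs => flex(der(c, r), (v: Val) => f(inj(r, c, v)), cs)
}

// e.g. lexer r s = flex(r, identity, s.toList)(mkeps(ders(s.toList, r)))
// whenever ders(s.toList, r) is nullable
\end{verbatim}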

\noindent
The other trick, which is the crux in the existing proof, 
is the use of the $\retrieve$-function by Sulzmann and Lu:
\begin{center}
\begin{tabular}{@{}l@{\hspace{2mm}}c@{\hspace{2mm}}l@{}}
  $\textit{retrieve}\,(_{bs}\ONE)\,\Empty$ & $\dn$ & $bs$\\
  $\textit{retrieve}\,(_{bs}{\bf c})\,(\Char\,d)$ & $\dn$ & $bs$\\
  $\textit{retrieve}\,(_{bs}\sum a::as)\,(\Left\,v)$ & $\dn$ &
     $bs \,@\, \textit{retrieve}\,a\,v$\\
  $\textit{retrieve}\,(_{bs}\sum a::as)\,(\Right\,v)$ & $\dn$ &
  $\textit{bs} \,@\, \textit{retrieve}\,(_{[]}\sum as)\,v$\\
  $\textit{retrieve}\,(_{bs}a_1\cdot a_2)\,(\Seq\,v_1\,v_2)$ & $\dn$ &
     $bs \,@\,\textit{retrieve}\,a_1\,v_1\,@\, \textit{retrieve}\,a_2\,v_2$\\
  $\textit{retrieve}\,(_{bs}a^*)\,(\Stars\,[])$ & $\dn$ &
     $bs \,@\, [0]$\\
  $\textit{retrieve}\,(_{bs}a^*)\,(\Stars\,(v\!::\!vs))$ & $\dn$ &\\
  \multicolumn{3}{l}{
     \hspace{3cm}$bs \,@\, [1] \,@\, \textit{retrieve}\,a\,v\,@\,
                    \textit{retrieve}\,(_{[]}a^*)\,(\Stars\,vs)$}\\
\end{tabular}
\end{center}

\noindent
This function assembles the bitcode 
using information from both the derivative regular expression and the
value. Sulzmann and Lu proposed this function, but did not prove
anything about it. Ausaf and Urban made use of the
following fact about $\retrieve$ in their proof:
 
 \begin{center} 
 $\retrieve\; (\rup \backslash c) \; v = \retrieve \;  \rup \; (\inj \;r \;c \; v)$
 \end{center} 

\noindent
whereby $r^\uparrow$ stands for the internalised version of $r$.
This fact, together with the following fact about how $\flex$ relates to injection:

\begin{equation}\label{flex}
\flex \; r \; id \; (s@[c]) \; v = \flex \;  r \; id \; s \; (inj \; (r\backslash s) \; c\; v).
\end{equation}

\noindent
can be used to prove what we want:
\begin{center} 
$ \flex \; r\; id\; s\; v = \textit{decode} \;( \textit{bmkeps}\; (\rup \backslash s) ) r$
\end{center}
\noindent
If we state the inductive hypothesis to be
\begin{center} 
$ \flex \; r\; id\; s\; v = \textit{decode} \;( \textit{retrieve}\; (\rup \backslash s) \; v \;) r$
\end{center}
\noindent
where $\mkeps(r\backslash s) $ is denoted using $v$,
then a reverse induction  
helps with using 
the fact~\eqref{flex} in proving the $n+1$ step:
\begin{center} 
$ \flex \; r\; id\; (s@[c])\; v = \textit{decode} \;( \textit{retrieve}\; (\rup \backslash s) \; (\inj \; (r\backslash s) \;c\;v)\;) r$
\end{center}
Using a lemma stating that
\begin{center}
$\textit{retrieve}\; (\rup \backslash s) \; (\inj \; (r\backslash s) \;c\;v)\; = \textit{retrieve}\; (\rup \backslash (s@[c]))  \; v\; $
\end{center}
we get
\begin{center} 
$  \textit{decode} \;( \textit{retrieve}\; (\rup \backslash s) \; (\inj \; (r\backslash s) \;c\;v)\;) r = \textit{decode} \;( \textit{retrieve}\; (\rup \backslash (s@[c])) \; v\;) r $
\end{center}
and the inductive step is done because
\begin{center} 
$ \textit{retrieve}\; (\rup \backslash (s@[c])) \; (\mkeps(r\backslash (s@[c]))) =  \bmkeps \;(\rup \backslash (s@[c]))$.
\end{center}

\noindent
To use 
$ \flex \; r\; id\; s\; v = \textit{decode} \;( \textit{retrieve}\; (\rup \backslash s) \; v \;) r$
 for our 
correctness proof, simply replace the $v$ with
$\mkeps\;(r\backslash s)$, and apply the lemma that
\begin{center}
$ \bmkeps \; \rup =  \textit{retrieve} \; \rup \; (\mkeps(r))$
\end{center}
\noindent
We get the correctness of our bit-coded algorithm:
\begin{center}
$\flex \;r\;  id \; s \; (\mkeps \; (r\backslash s)) = \textit{decode} \; (\bmkeps \; (\rup\backslash s)) \; r$.
\end{center}
\noindent



\subsubsection{Using the Retrieve Function in a New Setting}
Ausaf and Urban used $\retrieve$ to prove the correctness of the bitcoded
algorithm without simplification. Our purpose in using it, however,
is to establish 

\begin{center}
$ \textit{retrieve} \;
a \; v \;=\; \textit{retrieve}  \; (\textit{simp}\,a) \; v'.$
\end{center}
The idea is that using $v'$, a simplified version of $v$ that has gone
through the same simplification steps as $\textit{simp}(a)$, we are able
to extract the bitcode that gives the same lexing information as the
unsimplified one. 
If we want to use a technique similar to
that of the existing proof,
we face the problem that in the above 
equality
$\retrieve \; a \; v$ is not always defined.
For example,
$\retrieve \; _0(_1a+_0a) \; \Left(\Empty)$
is defined, but not $\retrieve \; (_{01}a) \;\Left(\Empty)$,
though we can extract the same POSIX
bits from the two annotated regular expressions.
The latter might occur when we try to retrieve from 
a simplified regular expression using the same value
underlying the unsimplified one.
This is because $\Left(\Empty)$ corresponds to
the regular expression structure $\ONE+r_2$ instead of
$\ONE$.
That means, if we 
want to prove that 
\begin{center}
$\bmkeps \; \rup\backslash s = \bmkeps \; \rup\backslash_{simp} s$
\end{center}
\noindent
holds by using $\retrieve$,
we probably need to prove an equality like the one below:
\begin{center}
%$\retrieve \; \rup\backslash_{simp} s \; \mkeps(r\backslash_{simp} s)=\textit{retrieve} \; \rup\backslash s \; \mkeps(r\backslash s)$
$\retrieve \; \rup\backslash_{simp} s \; \mkeps(f(r\backslash s))=\textit{retrieve} \; \rup\backslash s \; \mkeps(r\backslash s)$
\end{center}
\noindent
$f$ rectifies $r\backslash s$ so the value $\mkeps(f(r\backslash s))$ becomes
 something simpler
to make the retrieve function defined.\\
\subsubsection{Ways to Rectify Values}
One way to do this is to prove the following:
\begin{center}
$\retrieve \; \rup\backslash_{simp} s \; \mkeps(\simp(r\backslash s))=\textit{retrieve} \; \rup\backslash s \; \mkeps(r\backslash s)$
\end{center}
\noindent
The reason why we choose $\simp$ as $f$ is that
$\rup\backslash_{simp} \, s$ and $\simp(\rup\backslash \, s)$
have the same shape:
\begin{center}
$\erase (\rup\backslash_{simp} \, s) = \erase(\simp(\rup\backslash s))$
\end{center}

\noindent
$\erase$ in the above equality means to remove the bit-codes
in an annotated regular expression and only keep the original
regular expression(just like "erasing" the bits). Its definition is omitted.
$\rup\backslash_{simp} \, s$ and $\simp(\rup\backslash s)$
are very closely related, but not identical.
\subsubsection{Example for $\rup\backslash_{simp} \, s \neq \simp(\rup\backslash s)$}
For example, let $r$ be the regular expression
$(a+b)(a+a^*)$ and $s$ be the string $aa$; then
both $\erase (\rup\backslash_{simp} \, s)$ and $\erase (\simp (\rup\backslash s))$
are $\ONE + a^*$. However, without $\erase$,
\begin{center}
$\rup\backslash_{simp} \, s$ is equal to $_0(_0\ONE +_{11}a^*)$
\end{center}
\noindent
whereas
\begin{center}
$\simp(\rup\backslash  s)$ is equal to $(_{00}\ONE +_{011}a^*)$
\end{center}

What makes the difference?

%Two "rules" might be inferred from the above example.

%First, after erasing the bits the two regular expressions
%are exactly the same: both become $1+a^*$. Here the 
%function $\simp$ exhibits the "one in the end equals many times
%at the front"
%property: one simplification in the end causes the 
%same regular expression structure as
%successive simplifications done alongside derivatives.
%$\rup\backslash_{simp} \, s$ unfolds to 
%$\simp((\simp(r\backslash a))\backslash a)$
%and $\simp(\rup\backslash s)$ unfolds to 
%$\simp((r\backslash a)\backslash a)$. The one simplification
%in the latter causes the resulting regular expression to 
%become $1+a^*$, exactly the same as the former with
%two simplifications.

%Second, the bit-codes are different, but they are essentially
%the same: if we push the outmost bits ${\bf_0}(_0\ONE +_{11}a^*)$ of $\rup\backslash_{simp} \, s$
%inside then we get $(_{00}\ONE +_{011}a^*)$, exactly the 
%same as that of $\rup\backslash \, s$. And this difference 
%does not matter when we try to apply $\bmkeps$ or $\retrieve$
%to it. This seems a good news if we want to use $\retrieve$
%to prove things.

%If we look into the difference above, we could see that the
%difference is not fundamental: the bits are just being moved
%around in a way that does not hurt the correctness.
During the first derivative operation, 
$\rup\backslash a=(_0\ONE  + \ZERO)(_0a  +  _1a^*)$  is
a sequence regular expression with
two components, the first
one, $_0\ONE + \ZERO$, being nullable. 
Recall the definition of the simplification function:
\begin{center}
  \begin{tabular}{@{}lcl@{}}
   
  $\textit{simp} \; (_{bs}a_1\cdot a_2)$ & $\dn$ & $ (\textit{simp} \; a_1, \textit{simp}  \; a_2) \; \textit{match} $ \\
   &&$\quad\textit{case} \; (\ZERO, \_) \Rightarrow  \ZERO$ \\
   &&$\quad\textit{case} \; (\_, \ZERO) \Rightarrow  \ZERO$ \\
   &&$\quad\textit{case} \;  (\ONE, a_2') \Rightarrow  \textit{fuse} \; bs \;  a_2'$ \\
   &&$\quad\textit{case} \; (a_1', \ONE) \Rightarrow  \textit{fuse} \; bs \;  a_1'$ \\
   &&$\quad\textit{case} \; (a_1', a_2') \Rightarrow  \; _{bs}a_1' \cdot a_2'$ \\

  $\textit{simp} \; (_{bs}\sum \textit{as})$ & $\dn$ & $\textit{distinct}( \textit{flatten} ( \textit{as.map(simp)})) \; \textit{match} $ \\
  &&$\quad\textit{case} \; [] \Rightarrow  \ZERO$ \\
   &&$\quad\textit{case} \; a :: [] \Rightarrow  \textit{fuse bs a}$ \\
   &&$\quad\textit{case} \;  as' \Rightarrow  \; _{bs}\sum as'$\\ 

   $\textit{simp} \; a$ & $\dn$ & $\textit{a} \qquad \textit{otherwise}$   
\end{center}    

\noindent

and the definition of $\flatten$:
 \begin{center}
 \begin{tabular}{c c c}
 $\flatten \; []$ & $\dn$ & $[]$\\
 $\flatten \; \ZERO::rs$ & $\dn$ & $rs$\\
 $\flatten \;(_{\textit{bs}_1}\sum \textit{rs}_1 ::rs)$ & $\dn$ & $(\map \, (\fuse \, \textit{bs}_1) \,\textit{rs}_1) ::: \flatten(rs)$\\
 $\flatten \; r :: rs$ & $\dn$ & $r::\flatten(rs)$
 \end{tabular}
 \end{center}
 
 \noindent
If we call $\simp$ on $\rup\backslash a$, just as $\backslash_{simp}$
requires, then we go through the third clause of 
the sequence case: $\textit{case} \;  (\ONE, a_2') \Rightarrow  \textit{fuse} \; bs \;  a_2'$.
The $\ZERO$ in $(_0\ONE  + \ZERO)$ is thrown away 
by $\flatten$, and 
$_0\ONE$ is merged into $(_0a  +  _1a^*)$ by simply
putting its bits ($_0$) to the front of the second component,
giving ${\bf_0}(_0a  +  _1a^*)$. 
 After a second derivative operation,
 namely, $(_0(_0a  +  _1a^*))\backslash a$, we get 
 $
 _0(_0 \ONE  +  _1(_1\ONE \cdot a^*))
 $, and this simplifies to $_0(_0 \ONE  +  _{11} a^*)$
 by the third clause of the alternative case:
 \begin{center}
 $\quad\textit{case} \;  as' \Rightarrow  _{bs}\sum{as'}$.
 \end{center}
 
 \noindent
The outermost bit $_0$ stays with 
the outermost regular expression, rather than being fused to
its child regular expressions, as we will later see happen
to $\simp(\rup\backslash \, s)$.
If we choose not to simplify in the midst of derivative operations,
but only do it at the end after the string has been exhausted, 
namely, $\simp(\rup\backslash \, s)=\simp((\rup\backslash a)\backslash a)$,
then at the {\bf second} derivative of 
$(\rup\backslash a)\bf{\backslash a}$
we go through the sequence clause of $\backslash$:
\begin{center}
\begin{tabular}{lcl}
$(_{bs}a_1\cdot a_2)\,\backslash c$ & $\dn$ &
     $(\textit{when} \; \textit{bnullable}\,a_1)$\\
					       & &$_{bs}\sum\,[_{[]}((a_1\,\backslash c) \cdot \,a_2),$\\
					       & &$\phantom{_{bs}\sum\,[}(\textit{fuse}\,(\textit{bmkeps}\,a_1)\,(a_2\,\backslash c))]$\\
\end{tabular}
\end{center}
\noindent
because
$\rup\backslash a = (_0\ONE  + \ZERO)(_0a  +  _1a^*)$  
is a sequence
whose first component is nullable
(unsimplified, unlike in the first round of running $\backslash_{simp}$).
Therefore $((_0\ONE  + \ZERO)(_0a  +  _1a^*))\backslash a$ splits into
$([(\ZERO + \ZERO)\cdot(_0a  +  _1a^*)] + _0( _0\ONE  + _1[_1\ONE \cdot a^*]))$.
After these two successive derivatives without simplification,
we apply $\simp$ to this regular expression. It goes through
the alternative clause, and each component of 
$([(\ZERO + \ZERO)\cdot(_0a  +  _1a^*)] + _0( _0\ONE  + _1[_1\ONE \cdot a^*]))$ 
is simplified, giving us the list $[\ZERO, _0(_0\ONE  + _{11}a^*)]$.
This list is then ``flattened'': $\ZERO$ is
thrown away by $\textit{flatten}$, while $ _0(_0\ONE  + _{11}a^*)$
is opened up into a list consisting of two separate elements 
$_{00}\ONE$ and $_{011}a^*$; note that $\flatten$ 
$\fuse$s the bit $_0$ to the front of $_0\ONE $ and $_{11}a^*$.
The order of simplification, which impacts the order in which alternatives
are opened up, causes
the bits to be moved differently.
 
 \subsubsection{A Failed Attempt To Remedy the Problem Above}
A simple class of pairs $(r, s)$ of regular expressions and strings
that trigger the difference between 
$\rup\backslash_{simp} \, s$
and  $\simp(\rup\backslash s)$
can be deduced from the above example:
\begin{center}
\begin{tabular}{lcl}
$D =\{ (r_1 \cdot r_2,\; [c_1c_2]) \mid $ & $\simp(r_2) = r_2, \simp(r_1 \backslash c_1) = \ONE,$\\
 $r_1 \; \textit{not} \; \nullable, c_2 \in L(r_2),$ & $\exists \textit{rs},\textit{bs}.\;  r_2 \backslash c_2 = \;_{bs}{\sum rs},$\\
$\exists \textit{rs}_1. \; \simp(r_2 \backslash c_2) = \;_{bs}{\sum \textit{rs}_1},$ &  $\textit{and} \;\simp(r_1 \backslash [c_1c_2]) = \ZERO\}$\\
\end{tabular}
\end{center}
We take a pair $(r, \;s)$ from the set $D$.

Now we compute $ \rup \backslash_{simp} s$:
\begin{center}
\begin{tabular}{lcl}
$(r_1\cdot r_2)\backslash_{simp} \, [c_1c_2]$ & $= \simp\left[ \big(\simp\left[ \left( r_1\cdot r_2 \right) \backslash c_1\right] \big)\backslash c_2\right]$\\
								      & $= \simp\left[ \big(\simp \left[  \left(r_1 \backslash c_1\right) \cdot r_2 \right] \big) \backslash c_2 \right]$\\
								      & $= \simp \left[  (\fuse \; \bmkeps(r_1\backslash c_1) \; \simp(r_2) ) \backslash c_2 \right]$,\\
								      & $= \simp \left[  (\fuse \; \bmkeps(r_1\backslash c_1) \; r_2 ) \backslash c_2 \right]$,
\end{tabular}
\end{center}
\noindent
from the definition of $D$ we know that $r_1 \backslash c_1$ is nullable; therefore
$\bmkeps(r_1\backslash c_1)$ returns a bitcode, which we shall call
 $\textit{bs}_2$. 
The above term can be rewritten as
\begin{center}
$ \simp \left[  \fuse \; \textit{bs}_2\; r_2  \backslash c_2 \right]$,
\end{center}
which is equal to 
\begin{center}
$\simp \left[ \fuse \; \textit{bs}_2 \; _{bs}{\sum rs} \right]$\\
$=\simp \left[  \; _{bs_2++bs}{\sum rs} \right]$\\
$=  \; _{bs_2++bs}{\sum \textit{rs}_1} $
\end{center}
\noindent
by using the properties of the set $D$ again
and again. (The reason why we impose so many conditions 
on the pair $(r,s)$ is that they allow us to
rewrite the terms easily and construct the difference.)

Now we compute ${\bf \simp(\rup \backslash s)}$:
\begin{center}
$\simp \big[(r_1\cdot r_2) \backslash [c_1c_2] \big]= \simp \left[ ((r_1 \cdot r_2 )\backslash c_1) \backslash c_2 \right]$
\end{center}
\noindent
Again, using the properties above, we obtain
the following chain of equalities:
\begin{center}
$\simp(\rup \backslash s)= \simp \left[ ((r_1 \cdot r_2 )\backslash c_1) \backslash c_2 \right]= \simp\left[    \big( \left(r_1 \backslash c_1\right) \cdot r_2  \big) \backslash c_2 \right]$\\
$= \simp \left[ \sum[\big( \left(r_1 \backslash c_1\right) \backslash c_2 \big) \cdot r_2 \; , \; \fuse \; (\bmkeps \,(r_1\backslash c_1)) \; (r_2 \backslash c_2) ] \right]$,
\end{center}
\noindent
as before, we write $\textit{bs}_2$ for the bitcode returned by
$\bmkeps(r_1\backslash c_1)$.
Also, $\simp(r_2 \backslash c_2)$ is
$_{bs}\sum \textit{rs}_1$,
and $\big( \left(r_1 \backslash c_1\right) \backslash c_2 \big)  \cdot r_2$
simplifies to $\ZERO$,
so the above term can be expanded as
\begin{center}
\begin{tabular}{l}
$\textit{distinct}(\flatten[\ZERO\;, \; _{\textit{bs}_2++\textit{bs}}\sum \textit{rs}_1] ) \; \textit{match} $ \\
  $\textit{case} \; [] \Rightarrow  \ZERO$ \\
   $\textit{case} \; a :: [] \Rightarrow  \fuse \; \textit{bs} \; a$ \\
    $\textit{case} \;  as' \Rightarrow  _{[]}\sum as'$\\ 
\end{tabular}
\end{center}
\noindent
Applying the definition of $\flatten$, we get
\begin{center}
$_{[]}\sum (\textit{map} \; (\fuse \,(\textit{bs}_2 ++ \textit{bs})) \; \textit{rs}_1)$
\end{center}
\noindent
compared to the result of $\rup \backslash_{simp}\, s$,
\begin{center}
$ \; _{bs_2++bs}{\sum \textit{rs}_1} $
\end{center}
\noindent
Note how these two regular expressions
differ only in the position of the bits
$\textit{bs}_2++\textit{bs}$; they are otherwise the same.
What caused this difference?
The culprit is the $\flatten$ function, which spills
out the bitcodes of inner alternatives whenever
there is an outer alternative.
Note how the absence of intermediate simplification
caused $\simp(\rup \backslash s)$ to
generate the nested alternatives structure
\begin{center}
$  \sum[\ZERO \;, \; _{bs}\sum \textit{rs} ]$
\end{center}
and this will always trigger $\flatten$ to
spill out the inner alternative's bitcode $\textit{bs}$,
whereas when
simplification is done along the way,
the structure of nested alternatives is never created (we can
actually prove that the simplification function never allows nested
alternatives to arise; more on this later).

What if we do not allow the function $\simp$
to fuse the bits inwards when it is unnecessary?
For instance, for the above regular expression, we might
just delete the outer layer of the alternative
\begin{center}
\st{$ {\sum[\ZERO \;,}$} $_{bs}\sum \textit{rs}$ \st{$]$}
\end{center}
and get $_{bs}\sum \textit{rs}$ instead, without
fusing the bits $\textit{bs}$ into every element
of $\textit{rs}$.
This idea can be realized by making the following
changes to the $\simp$-function:
\begin{center}
  \begin{tabular}{@{}lcl@{}}
   
  $\textit{simp}' \; (_{\textit{bs}}(a_1 \cdot a_2))$ & $\dn$ & $\textit{as} \; \simp \; \textit{was} \; \textit{before} $ \\

  $\textit{simp}' \; (_{bs}\sum as)$ & $\dn$ & \st{$\textit{distinct}( \textit{flatten} ( \textit{map simp as})) \; \textit{match} $} \\
  &&\st{$\quad\textit{case} \; [] \Rightarrow  \ZERO$} \\
   &&\st{$\quad\textit{case} \; a :: [] \Rightarrow  \textit{fuse bs a}$} \\
   &&\st{$\quad\textit{case} \;  as' \Rightarrow  \textit{ALTS}\;bs\;as'$}\\ 
   &&$\textit{if}(\textit{hollowAlternatives}( \textit{map \; simp \; as}))$\\
   &&$\textit{then} \; \fuse  \; \textit{bs}\; \textit{extractAlt}(\textit{map} \; \simp \; \textit{as} )$\\
   &&$\textit{else} \; \simp(_{bs} \sum \textit{as})$\\
   

   $\textit{simp}' \; a$ & $\dn$ & $\textit{a} \qquad \textit{otherwise}$   
\end{tabular}    
\end{center}    

\noindent
given the following definitions of $\textit{hollowAlternatives}$ and  $\textit{extractAlt}$:
\begin{center}
$\textit{hollowAlternatives}( \textit{rs}) \dn
\exists r = (_{\textit{bs}_1}\sum \textit{rs}_1)  \in \textit{rs}.\;  \forall r' \in \textit{rs}, \;
\textit{either} \; r' = \ZERO \; \textit{or} \; r' = r $\\[1ex]
$\textit{extractAlt}( \textit{rs}) \dn \textit{if}\big(
\exists r = (_{\textit{bs}_1}\sum \textit{rs}_1)  \in \textit{rs}.\;  \forall r' \in \textit{rs}, \;
\textit{either} \; r' = \ZERO \; \textit{or} \; r' = r \big)\; \textit{then} \; \textit{return} \; r$
\end{center}
\noindent
Basically, $\textit{hollowAlternatives}$ captures lists of
regular expressions of the shape
\begin{center}
$  \sum[\ZERO \;, \; _{bs}\sum \textit{rs} ]$
\end{center}
and in this case we can simply elevate the
inner regular expression $_{bs}\sum \textit{rs}$
to the outermost level
and throw away the useless $\ZERO$s and
the outer $\sum$ wrapper.
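As a concrete sketch, the amended function could look as follows in Scala,
reusing the datatype and $\fuse$ from the earlier sketch. The body of
\texttt{simp} is our reconstruction of the old simplification from the
struck-out clauses above, and \texttt{hollowAlternatives}/\texttt{extractAlt}
are the hypothetical helpers just defined:
\begin{verbatim}
// the old simp: spill inner alternatives and fuse their bits
def flats(rs: List[ARexp]): List[ARexp] = rs.flatMap {
  case AZERO          => Nil                    // drop ZEROs
  case AALTS(bs, rs1) => rs1.map(fuse(bs, _))   // spill + fuse
  case r              => List(r)
}

def simp(r: ARexp): ARexp = r match {
  case ASEQ(bs, r1, r2) => (simp(r1), simp(r2)) match {
    case (AZERO, _)      => AZERO
    case (_, AZERO)      => AZERO
    case (AONE(bs1), s2) => fuse(bs ++ bs1, s2)
    case (s1, s2)        => ASEQ(bs, s1, s2)
  }
  case AALTS(bs, rs) => flats(rs.map(simp)).distinct match {
    case Nil      => AZERO
    case List(r1) => fuse(bs, r1)
    case rs1      => AALTS(bs, rs1)
  }
  case _ => r
}

// does rs consist of exactly one alternative plus some ZEROs?
def hollowAlternatives(rs: List[ARexp]): Boolean = {
  val alts = rs.collect { case a: AALTS => a }.distinct
  alts.length == 1 && rs.forall(r => r == AZERO || r == alts.head)
}

def extractAlt(rs: List[ARexp]): ARexp =
  rs.collect { case a: AALTS => a }.head

// the amended clause: keep the inner alternative intact and
// do not fuse bs into its elements
def simpPrime(r: ARexp): ARexp = r match {
  case AALTS(bs, as) =>
    val simped = as.map(simp)
    if (hollowAlternatives(simped)) fuse(bs, extractAlt(simped))
    else simp(AALTS(bs, as))            // otherwise as before
  case ASEQ(_, _, _) => simp(r)         // "as simp was before"
  case _             => r
}
\end{verbatim}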
Using this new definition of $\simp$,
for the example where  $r$ is the regular expression
$(a+b)(a+a^*)$ and $s$  is the string $aa$,
the problem of $\rup\backslash_{simp} \, s \neq \simp(\rup\backslash s)$
is resolved.

Unfortunately this causes new problems:
consider the counterexample where
\begin{center}
$r$ is the regular expression
$(ab+(a^*+aa))$ and $s$  is the string $aa$
\end{center}

\noindent
For this pair, $\rup\backslash_{simp} \, s$ is equal to
$ _1(_{011}a^* +  _1\ONE)  $, whereas
$ \simp(\rup\backslash s) = (_{1011}a^* +  _{11}\ONE)$.
This discrepancy did not appear with the old
version of $\simp$.

Why?

We shall again illustrate in detail what happens in each recursive call of
$\backslash$ and $\backslash_{simp}$.
During the first derivative operation, 
\begin{center}
$\rup\backslash a=(    _0[ \ONE\cdot {\bf b}] + _1( _0[ _1\ONE \cdot {\bf a}^*] + _1[ \ONE \cdot {\bf a}])      )$,
\end{center}
\noindent
 the second derivative gives us
 \begin{center}
$(\rup\backslash a)\backslash a=(_0( [\ZERO\cdot {\bf b}] + \ZERO) + _1( _0( [\ZERO\cdot {\bf a}^*] + _1[ _1\ONE \cdot {\bf a}^*]) + _1( [\ZERO \cdot {\bf a}] + \ONE)  ))$,
\end{center}

\noindent
and this simplifies to
\begin{center}
$ _1(_{011}{\bf a}^* +  _1\ONE)  $ 
\end{center}
because when  $(\rup\backslash a)\backslash a$ goes
through simplification, according to our new $\simp$
clause,
each component of the list
$[_0( [\ZERO\cdot {\bf b}] + \ZERO) ,\; _1( _0( [\ZERO\cdot {\bf a}^*] + _1[ _1\ONE \cdot {\bf a}^*]) + _1( [\ZERO \cdot {\bf a}] + \ONE)  )]$
is simplified, giving
$[\ZERO,\; _1(_{011}{\bf a}^* +  _1\ONE) ]$.
This fits the definition of $\textit{hollowAlternatives}$,
so the structure of the annotated regular expression
\begin{center}
$_1(_{011}{\bf a}^* +  _1\ONE) $
\end{center}
is preserved, in the sense that the outside bit $_1$
is not fused inside.
If, instead, we apply simplification after the first derivative, we get
$(_0{\bf b}  + _{101}{\bf  a}^* + _{11}{\bf a}  )$;
doing another derivative gives
$(\ZERO  + (_{101}(\ONE \cdot {}_1{\bf a}^*)+_{11}\ONE))$,
which simplifies to 
\begin{center}
$ (_{1011}a^* +  _{11}\ONE)  $ 
\end{center}





We have changed the algorithm to avoid the old
counterexample, but this gives rise to new counterexamples.
This dilemma means that the amendment is not a successful
attempt at making $\rup\backslash_{simp} \, s = \simp(\rup\backslash s)$
hold for every possible regular expression and string.
\subsection{Properties of the Function $\simp$}

We have proved in Isabelle quite a few properties
of the $\simp$-function, which help the proof go forward;
we list them here to aid comprehension.

To start, we need a bit of auxiliary notation,
which is quite basic and only written down here
for clarity.

$\textit{sub}(r)$ computes the set of
sub-regular expressions of $r$ (we show the clauses for $\ONE$,
sequence and alternative; the remaining clauses are analogous):
\begin{center}
$\textit{sub}(\ONE) \dn \{\ONE\}$\\
$\textit{sub}(r_1 \cdot r_2) \dn \textit{sub}(r_1) \cup \textit{sub}(r_2) \cup \{r_1 \cdot r_2\}$\\
$\textit{sub}(r_1 + r_2) \dn \textit{sub}(r_1) \cup \textit{sub}(r_2) \cup \{r_1+r_2\}$\\
\end{center}
$\textit{good}(r) \dn r \neq \ZERO \,\land\,
\forall r' \in \textit{sub}(r),\; \textit{if} \; r' = \;_{bs_1}\sum \textit{rs}_1, \;
\textit{then} \; \nexists r'' \in \textit{rs}_1 \; s.t.\;
r'' = \;_{bs_2}\sum \textit{rs}_2 $
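As a rough executable rendering of $\textit{good}$ (for the annotated
datatype from the earlier sketches; the treatment of the constructors not
covered by the clauses of $\textit{sub}$ above is our own guess):
\begin{verbatim}
// no alternative may sit directly inside another alternative
def noNestedAlts(r: ARexp): Boolean = r match {
  case AALTS(_, rs) =>
    rs.forall {
      case AALTS(_, _) => false
      case _           => true
    } && rs.forall(noNestedAlts)
  case ASEQ(_, r1, r2) => noNestedAlts(r1) && noNestedAlts(r2)
  case ASTAR(_, r1)    => noNestedAlts(r1)   // guessed clause
  case _               => true
}

def good(r: ARexp): Boolean = r != AZERO && noNestedAlts(r)
\end{verbatim}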

The properties are mainly the ones below:
\begin{itemize}
\item
\begin{center}
$\simp(\simp(r)) = \simp(r)$
\end{center}
\item
\begin{center}
$\textit{if}\; r = \simp(r') \;\textit{then} \; \textit{good}(r) $
\end{center}
\end{itemize}
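As a quick sanity check (not a proof), both properties can be tested on
sample regular expressions using the \texttt{simp} and \texttt{good}
sketches above; the \texttt{AZERO} escape below is our guard for the
degenerate case where simplification collapses everything to $\ZERO$:
\begin{verbatim}
// check idempotence of simp and goodness of its results
def checkSimpProps(samples: List[ARexp]): Boolean =
  samples.forall { r =>
    val s = simp(r)
    simp(s) == s && (s == AZERO || good(s))
  }
\end{verbatim}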
\subsection{The Contains Relation}
$\retrieve$ is too strong a relation in that
it extracts only one bitcode instead of a set of them.
Therefore we define another relation (a predicate)
to capture the fact that a regular expression may have its bits
moved around while still retaining all the bits needed.
The contains relation, written $\gg$, takes
two arguments in infix form
and returns a truth value.


In other words, from the set of regular expression and
bitcode pairs
$\textit{RV} = \{(r, \textit{bs}) \mid r \;\text{is a regular expression},\; \textit{bs} \;\text{is a bitcode}\}$,
those that satisfy the following requirements are in the set
$\textit{RV}_{\textit{Contains}}$.
Unlike the $\retrieve$
function, which takes two arguments $r$ and $v$ and
produces a single answer $\textit{bs}$, the relation can be seen as taking only
one argument $r$ and returning the set of bitcodes that
can be generated by $r$.
\begin{center}
\begin{tabular}{llclll}
& & & $_{bs}\ONE$ & $\gg$ & $\textit{bs}$\\
& & & $_{bs}{\bf c}$ & $\gg$ & $\textit{bs}$\\
$\textit{if} \; r_1 \gg \textit{bs}_1$ & $r_2 \; \gg \textit{bs}_2$ 
& $\textit{then}$  &
 $_{bs}{r_1 \cdot r_2}$ & 
 $\gg$ & 
 $\textit{bs} @ \textit{bs}_1 @ \textit{bs}_2$\\

 $\textit{if} \; r \gg \textit{bs}_1$ &  & $\textit{then}$  &
 $_{bs}{\sum(r :: \textit{rs}})$ & 
 $\gg$ & 
 $\textit{bs} @ \textit{bs}_1 $\\ 

 $\textit{if} \; _{bs}(\sum \textit{rs}) \gg \textit{bs} @ \textit{bs}_1$ &  & $\textit{then}$  &
 $_{bs}{\sum(r :: \textit{rs}})$ & 
 $\gg$ & 
 $\textit{bs} @ \textit{bs}_1 $\\  

 $\textit{if} \; r \gg \textit{bs}_1\; \textit{and}$ &  $_{bs}r^* \gg \textit{bs} @ \textit{bs}_2$ & $\textit{then}$  &
 $_{bs}r^* $ & 
 $\gg$ & 
 $\textit{bs} @ [0] @ \textit{bs}_1@ \textit{bs}_2 $\\
 
 & & & $_{bs}r^*$ & $\gg$ & $\textit{bs} @ [1]$\\
\end{tabular}
\end{center}
It is a predicate in the sense that, given
a regular expression and a bitcode, it
returns true or false, depending on whether
or not the regular expression can actually produce that
bitcode. If the predicate returns true, then
we say that the regular expression $r$ contains
the bitcode $\textit{bs}$, written
$r \gg \textit{bs}$.
For a fixed regular expression $r$, the relation
$\gg$ may also be seen as defining a
regular language by itself over the alphabet
$\Sigma = \{0,1\}$.
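For experimenting with $\gg$, the rules above can be turned into a naive
(exponential) Scala predicate over the earlier datatype; this is merely a
testing aid we find convenient, not part of the formal development:
\begin{verbatim}
// all ways of splitting bs into a prefix and a suffix
def splits(bs: Bits): List[(Bits, Bits)] =
  (0 to bs.length).toList.map(bs.splitAt)

// r >> bs : can r produce the bit-sequence bs?
def gg(r: ARexp, bs: Bits): Boolean = r match {
  case AZERO         => false
  case AONE(bs1)     => bs == bs1
  case ACHAR(bs1, _) => bs == bs1
  case AALTS(bs1, rs) =>
    bs.startsWith(bs1) &&
      rs.exists(r1 => gg(r1, bs.drop(bs1.length)))
  case ASEQ(bs1, r1, r2) =>
    bs.startsWith(bs1) &&
      splits(bs.drop(bs1.length)).exists {
        case (u, v) => gg(r1, u) && gg(r2, v)
      }
  case ASTAR(bs1, r1) =>
    // after bs1: either the stop bit [1], or 0 :: u ++ w
    // with r1 >> u and the star producing w
    def star(rest: Bits): Boolean = rest match {
      case List(1) => true
      case 0 :: v  => splits(v).exists {
        case (u, w) => gg(r1, u) && star(w)
      }
      case _ => false
    }
    bs.startsWith(bs1) && star(bs.drop(bs1.length))
}
\end{verbatim}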
Since the contains relation
is defined inductively, similarly to
regular expressions, it might not be surprising
that the set it denotes contains essentially
everything a regular expression can
produce during the derivative and lexing process.
This can be seen in the subsequent lemmas we have
proved about contains:
\begin{itemize}
\item
\begin{center}
\begin{equation}\label{contains1}
\textit{if}\; \models v:r \; \textit{then} \; \rup \gg \textit{code}(v)
\end{equation}
\end{center}
This lemma states that the set
$\{\textit{bs}\; \mid \rup \gg \textit{bs} \}$
``contains'' all the underlying values $v$ of $r$
in their coded form.
These values include the ones created during the
lexing process; for example, when the regular
expression $r$ is nullable, we have:
\item
\begin{center}
$r \gg \textit{bmkeps}(r)$
\end{center}
This can be seen as a corollary of the previous lemma,
because $\models \textit{mkeps}(r\downarrow):(r\downarrow)$
and $\textit{code}(\mkeps(r\downarrow)) = \bmkeps(r)$.
Another corollary of \eqref{contains1} is:
\item
\begin{center}
$\textit{if}\; \models v:r \; \textit{then} \; \rup \gg \textit{retrieve} \; \rup \; v$
\end{center}
as $\textit{retrieve} \; \rup \; v = \textit{code}(v)$.
It says that if you can extract a bitsequence using
retrieve guided by $v$, then that bitsequence is already in the set
$\{\textit{bs}\; \mid \rup \gg \textit{bs} \}$.
The next lemma has a slightly different form:
\item
\begin{center}
$\textit{if}\; \models v:a\downarrow \; \textit{then} \; a \gg \textit{retrieve} \; a \; v$
\end{center}
This is almost identical to the previous lemma, except that
this time arbitrary bits may be attached
anywhere in the annotated regular expression $a$:
$a$ can be any ``made up'' annotated regular expression
that does not belong to the ``natural'' ones created by
internalising an unannotated regular expression.
For example, the regular expression $r = (a+b)$ after internalisation
becomes $\rup = (_0a+_1b)$. For the underlying value $v = \Left(\Char(a))$
we have $\retrieve \; (_0a+_1b) \;v = 0$ and $(_0a+_1b) \gg 0$. We could
attach arbitrary bits to the regular expression $\rup$
without breaking its structure,
 for example making up $a = \;_{0100111}(_{1011}a+_1b)$,
 and we would still have $\models v:a\downarrow$ and
 therefore  $a \gg \retrieve \; a \; v$, this time with the bitsequence
 being $01001111011$.
This shows that the inductive clauses defining $\gg$
simulate what $\retrieve$ does when guided by different
values.
The set $\{\textit{bs}\; \mid \rup \gg \textit{bs} \}$ thus contains
a wide range of values coded as bitsequences;
the following property can be routinely established
from the previous lemma:
\item
\begin{center}
$r \gg \retrieve \; r \; (\inj \; (r\downarrow) \; c \; v) \;\;\; \textit{if} \; \models v: \textit{der} \; c \; (\erase(r))$
\end{center}
because $\inj \; (r\downarrow)\; c\; v$ is a value
underlying $r$.
Using this we can get something that looks much
less obvious:
\item

\begin{center}
\begin{tabular}{c}
$\textit{if} \;\models v: \erase(r)\backslash c \; \textit{then}$\\
$r\backslash c \gg \retrieve \; r \; (\inj \; (r\downarrow) \; c \; v) \; \textit{and}$\\
$r \gg \retrieve \; r \; (\inj \; (r\downarrow) \; c \; v)$\\
\end{tabular}
\end{center}
It says that the derivative operation $\backslash c$ essentially
does not change the bits an annotated regular
expression is able to produce: both
$r\backslash c$ and $r$ can produce
the bitsequence $\retrieve \; r \; (\inj \; (r\downarrow) \; c \;v)$.
This invariance with respect to derivative can be
further extended to a more surprising property:
\item
\begin{center}
\begin{tabular}{c}
$\textit{if} \;\models v: \erase(r) \backslash s \;\textit{then}$\\
$r\backslash s \gg \retrieve \; r \; (\flex \; (r\downarrow) \; \textit{id} \; s \; v) \; \textit{and}$\\
$r \gg \retrieve \; r \; (\flex \; (r\downarrow) \; \textit{id} \; s \; v)$\\
\end{tabular}
\end{center}
Here $\gg$ behaves almost like an NFA in the sense that
it simulates the lexing process with respect to different strings.

Our hope is that using $\gg$ we can prove that no bit
information is lost when we simplify a regular expression.
For this we need to relate $\gg$ with simplification; for example,
one of the lemmas we have proved about $\gg$ is
\item
\begin{center}
$\simp \; a \gg \textit{bs} \iff  a \gg \textit{bs}$
\end{center}
This could be a step towards our goal, as
it assures that after simplification none of the
bitsequences that can be created by
the original annotated regular expression
are lost.
If we could prove the following, that would be
another step towards our proof:

\item
\begin{center}
$(\simp \;a) \backslash s \gg \textit{bs} \iff  a\backslash s \gg \textit{bs}$
\end{center}

as it says that
the simplified regular expression, after derivatives,
still has the full capacity of producing bitsequences
that the unsimplified one has, which is pretty much
the intuition we are trying to establish.
And if we could prove
\item
\begin{center}
$a \backslash s \gg \textit{bs} \iff  a\backslash_\textit{simp} s \gg \textit{bs}$
\end{center}
then we would be just a stone's throw away
from $\blexer \; r \; s = \blexers \; r \; s$.

\end{itemize}
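Since the lemmas above constantly compare $\gg$ with $\retrieve$, we also
include a rough Scala rendering of $\retrieve$ for reference, with a
hypothetical value datatype mirroring $\Empty$, $\Char$, $\Left$, $\Right$,
$\Seq$ and $\Stars$; again this is a sketch, not our exact implementation:
\begin{verbatim}
abstract class Val
case object Empty extends Val
case class Chr(c: Char) extends Val
case class Left(v: Val) extends Val      // shadows scala.Left
case class Right(v: Val) extends Val     // shadows scala.Right
case class Sequ(v1: Val, v2: Val) extends Val
case class Stars(vs: List[Val]) extends Val

// extract the bitcode of r guided by the value v
def retrieve(r: ARexp, v: Val): Bits = (r, v) match {
  case (AONE(bs), Empty)      => bs
  case (ACHAR(bs, _), Chr(_)) => bs
  case (AALTS(bs, r1 :: rs), Left(v1))  => bs ++ retrieve(r1, v1)
  case (AALTS(bs, r1 :: rs), Right(v2)) =>
    bs ++ retrieve(AALTS(Nil, rs), v2)
  case (ASEQ(bs, r1, r2), Sequ(v1, v2)) =>
    bs ++ retrieve(r1, v1) ++ retrieve(r2, v2)
  case (ASTAR(bs, _), Stars(Nil)) => bs ++ List(1)
  case (ASTAR(bs, r1), Stars(v1 :: vs)) =>
    bs ++ List(0) ++ retrieve(r1, v1) ++
      retrieve(ASTAR(Nil, r1), Stars(vs))
}
\end{verbatim}
For instance, \texttt{retrieve} on the internalised $(_0a+_1b)$ guided by
\texttt{Left(Chr('a'))} gives \texttt{List(0)}, matching the example
$(_0a+_1b) \gg 0$ above.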
What to do after we work out
the proofs of the above lemmas
is still not clear; it is one of the next steps we need to
work on.

\subsection{The $\textit{ders}_2$ Function}
If we want to prove the result
\begin{center}
	$ \textit{blexer}\_{simp}(r, \; s) =  \textit{blexer}(r, \; s)$
\end{center}
by induction
on the structure of the regular expression,
then we need to deal with the case $r_1 \cdot r_2$,
and it would help if we could express $(r_1 \cdot r_2) \backslash s$
in terms of $r_1 \backslash s_1$ and $r_2 \backslash s_2$,
where $s_1$ and $s_2$ are substrings of $s$.
For this we introduce the $\textit{ders}_2$ function,
which performs a ``convolution'' of $r_1$ and $r_2$ along the string
$s$. We omit the bits here as they do not affect the
structure of the regular expression, which is what we
mainly focus on here.
It is based on the observation that the derivative of $r_1 \cdot r_2$
with respect to a string $s$ can actually be written in an ``explicit form''
composed of the derivatives of $r_1$ and $r_2$.
For example, we can look at how $r_1\cdot r_2$ expands
when derived with respect to a two-character string:
\begin{center}
\begin{tabular}{lcl}
	$ (r_1 \cdot r_2) \backslash [c_1c_2]$ & $=$ & $ (\textit{if} \; \nullable(r_1)\;  \textit{then} \; ((r_1 \backslash c_1) \cdot r_2 + r_2 \backslash c_1) \; \textit{else} \; (r_1\backslash c_1) \cdot r_2) \backslash c_2$\\
	& $=$ & $\textit{if} \; \textit{nullable}(r_1) \;\textit{and} \; \nullable(r_1\backslash c_1) \; \textit{then} \;
	(((r_1\backslash c_1c_2)\cdot r_2 + r_2\backslash c_2 )+ r_2 \backslash c_1c_2)$\\
	&& $\textit{else if} \; \nullable(r_1) \textit{and} \;\textit{not} \; \nullable(r_1 \backslash c_1)\; \textit{then} \;
	((r_1\backslash c_1c_2)\cdot r_2 + r_2 \backslash c_1c_2)$\\
	&& $\textit{else} \;(r_1\backslash c_1c_2) \cdot r_2$
\end{tabular}
\end{center}
which can also be written in a ``convoluted sum''
format if we ignore the order in which the alternatives
are nested:
\begin{center}
\begin{tabular}{lcl}
	$(r_1 \cdot r_2) \backslash [c_1c_2] $ & $=$ & $\textit{if} \; \textit{nullable}(r_1) \;\textit{and} \; \nullable(r_1\backslash c_1) \; \textit{then} \;
	(r_1 \backslash c_1c_2) \cdot r_2 + r_2 \backslash c_2 + r_2 \backslash c_1c_2$\\
	&& $\textit{else if} \; \nullable(r_1) \textit{and} \;\textit{not} \; \nullable(r_1 \backslash c_1)\; \textit{then} \;
	((r_1\backslash c_1c_2)\cdot r_2 + r_2 \backslash c_1c_2)$\\
	&& $\textit{else} \;(r_1\backslash c_1c_2) \cdot r_2$\\
	& $=$ & $(r_1\backslash [c_1c_2]) \cdot r_2 + \sum\limits_{s_j \in \{c_2,\, c_1c_2\} }{r_2 \backslash s_j}, \; \text{where} \; s_i @ s_j = [c_1c_2] \; \text{and} \; \nullable(r_1\backslash s_i)$
\end{tabular}
\end{center}
In a more general form, 
\begin{center}
\begin{tabular}{lcl}
	$(r_1 \cdot r_2) \backslash s $ & $=$ & $(r_1\backslash s) \cdot r_2 + \sum\limits_{s_i @ s_j = s}{r_2 \backslash s_j}, \; \text{where} \; s_i \; \text{is a proper prefix of} \; s \;\text{and} \; \nullable(r_1\backslash s_i)$
\end{tabular}
\end{center}
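A Scala sketch of this convolution on plain, unannotated regular expressions
may make the formula more concrete; the datatype with \texttt{der} and
\texttt{nullable} is the standard one, while \texttt{ders2Seq} is our
hypothetical rendering of the $r_1 \cdot r_2$ case of $\textit{ders}_2$
(equal to $(r_1\cdot r_2)\backslash s$ only up to how the alternatives are
nested):
\begin{verbatim}
abstract class Rexp
case object ZERO extends Rexp
case object ONE extends Rexp
case class CHAR(c: Char) extends Rexp
case class ALT(r1: Rexp, r2: Rexp) extends Rexp
case class SEQ(r1: Rexp, r2: Rexp) extends Rexp
case class STAR(r: Rexp) extends Rexp

def nullable(r: Rexp): Boolean = r match {
  case ZERO        => false
  case ONE         => true
  case CHAR(_)     => false
  case ALT(r1, r2) => nullable(r1) || nullable(r2)
  case SEQ(r1, r2) => nullable(r1) && nullable(r2)
  case STAR(_)     => true
}

def der(c: Char, r: Rexp): Rexp = r match {
  case ZERO        => ZERO
  case ONE         => ZERO
  case CHAR(d)     => if (c == d) ONE else ZERO
  case ALT(r1, r2) => ALT(der(c, r1), der(c, r2))
  case SEQ(r1, r2) =>
    if (nullable(r1)) ALT(SEQ(der(c, r1), r2), der(c, r2))
    else SEQ(der(c, r1), r2)
  case STAR(r1)    => SEQ(der(c, r1), STAR(r1))
}

def ders(s: List[Char], r: Rexp): Rexp =
  s.foldLeft(r)((r1, c) => der(c, r1))

// (r1 . r2) \ s as a sum: (r1 \ s) . r2 plus r2 \ s_j for
// every split s_i @ s_j = s where s_i is a proper prefix
// of s and r1 \ s_i is nullable
def ders2Seq(s: List[Char], r1: Rexp, r2: Rexp): Rexp = {
  val main: Rexp = SEQ(ders(s, r1), r2)
  val sums = for {
    i <- (0 until s.length).toList      // s_i = s.take(i)
    if nullable(ders(s.take(i), r1))
  } yield ders(s.drop(i), r2)           // r2 \ s_j
  sums.foldLeft(main)((acc, r) => ALT(acc, r))
}
\end{verbatim}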
We have formalised this alternative definition of the derivative,
proved its correctness, and call it $\textit{ders}_2$ to
distinguish it from the $\textit{ders}$-function.
Note that it differs from the lexing algorithm in the sense that
it calculates the results $r_1\backslash s_i$ and $r_2 \backslash s_j$ first
and then glues them together
into nested alternatives,
whereas $\lexer$ can only produce the elements of the list
in the resulting alternatives regular expression
all together in the last derivative step.
In other words, $\lexer$ does lexing in a ``breadth first'' manner,
generating all the children nodes simultaneously,
whereas
$\textit{ders}_2$ does it in a ``depth first'' manner.
Using this intuition we can also define the annotated regular expression
version of the derivative, call it $\textit{bders}_2$, and prove its equivalence with $\textit{bders}$.
Our hope is to use this alternative definition as a guide
for our induction.
With $\textit{bders}_2$ we have a clearer idea
of what $r\backslash s$ and $\simp(r\backslash s)$ look like.
\section{Conclusion}
Based on exhaustive tests we believe the main
result holds, yet a proof still seems elusive.
We have tried out different approaches and
established many properties of the function $\simp$.
The counterexamples where $\rup\backslash_{simp} \, s \neq \simp(\rup\backslash s)$
are also valuable in the sense that
they show us why the two sides are not equal and what
the subtle differences are between a
regular expression simplified at each step and a
regular expression simplified only at the final moment.
We are almost there, but a last step is needed to make the proof work.
Hopefully in the next few weeks we will be able to find one.
This would be an important milestone for my dissertation.


\section{Future Plans}

Before the proof comes out, a plan is needed to make
sure steady progress is being made.
We plan to produce
a practical implementation of our current
method, which would involve extending the set of allowed syntax
to include common regular expression constructs
such as bounded repetitions, negation (complement)
and character ranges, or even Boolean functions.
Adding these concise ways of expressing regular expressions
while keeping the correctness of the simplification
makes the work more practical.
For example, using the added constructs we could
demonstrate that our implementation can actually
help the Cloudflare web-application firewall
to run smoothly even when fed with the
evil regular expression
$.*(?:.*=.*)))$
and strings like $x=xxxxxxxxxx....$.
This would help show that our algorithm is not just
nice theory work but is actually competent in practice,
as this regular expression and string pair made the Cloudflare
servers grind
to a halt on July 2nd, 2019.
After all, the ``basic'' regular expressions
we gave in the introduction are really so basic that
even Brzozowski's 1964 paper included more
varieties than that (Boolean functions instead of
just alternatives)\cite{Brzozowski1964}.
A matcher implementing only the basic constructs
would be of little practical interest.
Making the algorithm practical also means making it fast.
Right now the Scala implementation
is slower than the naive algorithm without bitcodes, even with the
drastic simplification measures, which is not what the theory suggests.
We might need to implement it in other languages such as C
to see if the problem is language-specific.
This is about 3 months' work.

We believe that there
is still potential for more simplification, as there are many
rules for regular expression similarity; however,
these need to be explored and checked.
Looking for maximal simplification and the
best size bound, and proving them, would take around 3 months to complete.

Context-free languages are a bit harder to deal with by derivatives
than regular expressions.
The main reason is that a child node of a context-free grammar
can be the grammar itself--this left-recursion makes
direct computation of derivatives impossible.
An example would be the following\cite{might2011cfgder}:
\begin{center}
$L = L \cdot x \mid \ONE$
\end{center}
then a derivative would give us the following:
\begin{center}
$L\backslash x = (L \backslash x)\cdot x \mid \ONE$
\end{center}
which is essentially the same equation as before.
This recursion can go on forever if we do not use some
other mechanism, or manually decide that the two equations
are actually equivalent, i.e., that $L$ and $L\backslash x$
denote the same language, and then give a mathematical proof
that this is actually the case.
This seems well beyond what any mechanical algorithm can
be expected to do.
People have tried various ways to deal with the problem
of looping when computing derivatives of context-free grammars,
such as memoization and laziness, with some degree of success\cite{might2011cfgder}.
However, this field seems largely unexplored and further
optimisations seem possible. It would be great
if we could find a simple augmentation of the current
derivative method so that the subset of context-free languages
of interest can be handled.
This may take longer than the two previous tasks, but we still
allocate 3 months to it and see how it goes.



 


\bibliographystyle{plain}
\bibliography{root}


\end{document}