
% Chapter Template

% Main chapter title
\chapter{Correctness of Bit-coded Algorithm with Simplification}

\label{Bitcoded2} % Change X to a consecutive number; for referencing this chapter elsewhere, use \ref{ChapterX}
%Then we illustrate how the algorithm without bitcodes falls short for such aggressive 
%simplifications and therefore introduce our version of the bitcoded algorithm and 
%its correctness proof in 
%Chapter 3\ref{Chapter3}. 



In this chapter we introduce simplifications
for annotated regular expressions that can be applied to 
each intermediate derivative result. This allows
us to make $\blexer$ much more efficient.
We contrast our simplification function 
with Sulzmann and Lu's original
simplifications, pointing out the simplicity of our algorithm and
the improvements we made, thereby demonstrating
the usefulness and reliability of formal proofs on algorithms.
These ``aggressive'' simplifications would not be possible in the injection-based 
lexing we introduced in chapter \ref{Inj}.
We then go on to prove the correctness of the improved version of 
$\blexer$, called $\blexersimp$, by establishing 
$\blexer \; r \; s= \blexersimp \; r \; s$ using a term rewriting system.

\section{Simplifications by Sulzmann and Lu}
The first thing we notice in the fast growth of examples such as the derivatives
of $(a^*a^*)^*$ and $(a^* + (aa)^*)^*$ is that a lot of duplicated sub-patterns
are scattered around different levels, and therefore require 
de-duplication at different levels:
\begin{center}
	$(a^*a^*)^* \stackrel{\backslash a}{\longrightarrow} (a^*a^* + a^*)\cdot(a^*a^*)^* \stackrel{\backslash a}{\longrightarrow} $\\
	$((a^*a^* + a^*) + a^*)\cdot(a^*a^*)^* + (a^*a^* + a^*)\cdot(a^*a^*)^* \stackrel{\backslash a}{\longrightarrow} \ldots$
\end{center}
\noindent
As we have already mentioned in \ref{eqn:growth2},
a simple-minded simplification function cannot simplify
the third regular expression in the above chain of derivative
regular expressions:
\begin{center}
$((a^*a^* + a^*) + a^*)\cdot(a^*a^*)^* + (a^*a^* + a^*)\cdot(a^*a^*)^*$
\end{center}
One would expect a better simplification function to work in the 
following way:
\begin{gather*}
	((a^*a^* + \underbrace{a^*}_\text{A})+\underbrace{a^*}_\text{duplicate of A})\cdot(a^*a^*)^* + 
	\underbrace{(a^*a^* + a^*)\cdot(a^*a^*)^*}_\text{further simp removes this}.\\
	\bigg\downarrow \\
	(a^*a^* + a^* 
	\color{gray} + a^* \color{black})\cdot(a^*a^*)^* + 
	\underbrace{(a^*a^* + a^*)\cdot(a^*a^*)^*}_\text{further simp removes this} \\
	\bigg\downarrow \\
	(a^*a^* + a^* 
	)\cdot(a^*a^*)^*  
	\color{gray} + (a^*a^* + a^*) \cdot(a^*a^*)^*\\
	\bigg\downarrow \\
	(a^*a^* + a^* 
	)\cdot(a^*a^*)^*  
\end{gather*}
\noindent
This motivating example came from testing Sulzmann and Lu's 
algorithm: their simplification does 
not work!
We quote their $\textit{simp}$ function verbatim here:
\begin{center}
	\begin{tabular}{lcl}
		$\simpsulz \; _{bs}(_{bs'}\ONE \cdot r)$ & $\dn$ & 
		$\textit{if} \; (\textit{zeroable} \; r)\; \textit{then} \;\; \ZERO$\\
						   & &$\textit{else}\;\; \fuse \; (bs@ bs') \; r$\\
		$\simpsulz \;(_{bs}r_1\cdot r_2)$ & $\dn$ & $\textit{if} 
		\; (\textit{zeroable} \; r_1 \; \textit{or} \; \textit{zeroable}\; r_2)\;
		\textit{then} \;\; \ZERO$\\
					     & & $\textit{else}\;\;_{bs}((\simpsulz \;r_1)\cdot
					     (\simpsulz \; r_2))$\\
		$\simpsulz  \; _{bs}\sum []$ & $\dn$ & $\ZERO$\\
		$\simpsulz  \; _{bs}\sum ((_{bs'}\sum rs_1) :: rs_2)$ & $\dn$ &
		$_{bs}\sum ((\map \; (\fuse \; bs')\; rs_1) @ rs_2)$\\
		$\simpsulz  \; _{bs}\sum[r]$ & $\dn$ & $\fuse \; bs \; (\simpsulz  \; r)$\\
		$\simpsulz  \; _{bs}\sum(r::rs)$ & $\dn$ & $_{bs}\sum 
		(\nub \; (\filter \; (\not \circ \zeroable)\;((\simpsulz  \; r) :: \map \; \simpsulz  \; rs)))$\\ 
		
	\end{tabular}
\end{center}
\noindent
The $\textit{zeroable}$ predicate,
which tests whether a regular expression
is equivalent to $\ZERO$,
is defined as:
\begin{center}
	\begin{tabular}{lcl}
		$\zeroable \; _{bs}\sum (r::rs)$ & $\dn$ & $\zeroable \; r\;\; \land \;\;
		\zeroable \;_{[]}\sum\;rs $\\
		$\zeroable\;_{bs}(r_1 \cdot r_2)$ & $\dn$ & $\zeroable\; r_1 \;\; \lor \;\; \zeroable \; r_2$\\
		$\zeroable\;_{bs}r^*$ & $\dn$ & $\textit{false}$ \\
		$\zeroable\;_{bs}c$ & $\dn$ & $\textit{false}$\\
		$\zeroable\;_{bs}\ONE$ & $\dn$ & $\textit{false}$\\
		$\zeroable\;_{bs}\ZERO$ & $\dn$ & $\textit{true}$
	\end{tabular}
\end{center}
\noindent
They suggested that the $\simpsulz $ function should be
applied repeatedly until a fixpoint is reached.
We call this construction $\textit{sulzSimp}$:
\begin{center}
	\begin{tabular}{lcl}
		$\textit{sulzSimp} \; r$ & $\dn$ & 
		$\textit{while}((\simpsulz  \; r)\; \cancel{=} \; r)$ \\
		& & $\quad r := \simpsulz  \; r$\\
		& & $\textit{return} \; r$
	\end{tabular}
\end{center}
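The iteration until a fixpoint can be written down concretely; below is a minimal Scala sketch (the generic function \textit{fixpoint} and its name are ours, not Sulzmann and Lu's code):
\begin{lstlisting}[language=Scala]
// Generic fixpoint iteration: apply f until the result no longer
// changes. Instantiated with a Scala version of Sulzmann and Lu's
// simp, this yields the sulzSimp construction above.
@annotation.tailrec
def fixpoint[A](f: A => A)(a: A): A = {
  val a2 = f(a)
  if (a2 == a) a else fixpoint(f)(a2)
}
\end{lstlisting}
\noindent
Note that termination of such a loop is itself a property that needs to be
established for the simplification function at hand.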
We call the operation of alternately 
applying derivatives and simplifications
(until the string is exhausted) the Sulz-simp-derivative,
written $\backslash_{sulzSimp}$:
\begin{center}
\begin{tabular}{lcl}
	$r \backslash_{sulzSimp} (c\!::\!s) $ & $\dn$ & $(\textit{sulzSimp} \; (r \backslash c)) \backslash_{sulzSimp}\, s$ \\
$r \backslash_{sulzSimp} [\,] $ & $\dn$ & $r$
\end{tabular}
\end{center}
\noindent
After the derivatives have been taken, the bitcodes
are extracted and decoded in the same manner
as $\blexer$:
\begin{center}
\begin{tabular}{lcl}
  $\textit{blexer\_sulzSimp}\;r\,s$ & $\dn$ &
      $\textit{let}\;a = (r^\uparrow)\backslash_{sulzSimp}\, s\;\textit{in}$\\                
  & & $\;\;\textit{if}\; \textit{bnullable}(a)$\\
  & & $\;\;\textit{then}\;\textit{decode}\,(\textit{bmkeps}\,a)\,r$\\
  & & $\;\;\textit{else}\;\textit{None}$
\end{tabular}
\end{center}
\noindent
We implemented this lexing algorithm in Scala, 
and found that the final derivative regular expression
size grows exponentially fast:
\begin{figure}[H]
	\centering
\begin{tikzpicture}
\begin{axis}[
    xlabel={$n$},
    ylabel={size},
    ymode = log,
    legend entries={Final Derivative Size},  
    legend pos=north west,
    legend cell align=left]
\addplot[red,mark=*, mark options={fill=white}] table {SulzmannLuLexer.data};
\end{axis}
\end{tikzpicture} 
\caption{Lexing the regular expression $(a^*a^*)^*$ against strings of the form
$\protect\underbrace{aa\ldots a}_\text{n \textit{a}s}
$ using Sulzmann and Lu's lexer}\label{SulzmannLuLexer}
\end{figure}
\noindent
At $n= 20$ we already get an out-of-memory error with Scala's default 
JVM heap-size settings.
In fact their simplification does not improve over
the simple-minded simplifications we have shown in figure \ref{fig:BetterWaterloo}.
The time required also grows exponentially:
\begin{figure}[H]
	\centering
\begin{tikzpicture}
\begin{axis}[
    xlabel={$n$},
    ylabel={time},
    ymode = log,
    legend entries={time in secs},  
    legend pos=north west,
    legend cell align=left]
\addplot[red,mark=*, mark options={fill=white}] table {SulzmannLuLexerTime.data};
\end{axis}
\end{tikzpicture} 
\caption{Lexing the regular expression $(a^*a^*)^*$ against strings of the form
$\protect\underbrace{aa\ldots a}_\text{n \textit{a}s}
$ using Sulzmann and Lu's lexer}\label{SulzmannLuLexerTime}
\end{figure}
\noindent
This seems to be a counterexample to 
their linear-time complexity claim:
\begin{quote}\it
Linear-Time Complexity Claim \\It is easy to see that each call of one of the functions/operations:
simp, fuse, mkEpsBC and isPhi leads to subcalls whose number is bound by the size of the regular expression involved. We claim that thanks to aggressively applying simp this size remains finite. Hence, we can argue that the above mentioned functions/operations have constant time complexity which implies that we can incrementally compute bit-coded parse trees in linear time in the size of the input. 
\end{quote}
\noindent
The assumption that the size of the regular expressions
in the algorithm
would stay below a finite constant is not true.
In addition, even if the regular expression sizes
do stay finite, one has to take into account that
the $\simpsulz$ function is applied many times
in each derivative step, and that number is not necessarily
a constant with respect to the size of the regular expression.
To not get ``caught off guard'' by
such counterexamples,
one needs to be more careful when designing the
simplification function and making claims about it.

\section{Our $\textit{Simp}$ Function}
We will now introduce our simplification function,
by contrasting it with $\simpsulz$.
We describe
the ideas behind the components of their algorithm 
and why they fail to achieve the desired effect, followed
by our solutions. These solutions come with correctness
statements that are backed up by formal proofs.
\subsection{Flattening Nested Alternatives}
The idea behind the 
\begin{center}
$\simpsulz  \; _{bs}\sum ((_{bs'}\sum rs_1) :: rs_2) \quad \dn \quad
	       _{bs}\sum ((\map \; (\fuse \; bs')\; rs_1) @ rs_2)$
\end{center}
clause is that it allows
duplicate removal of regular expressions at different
levels.
For example, this would help with the
following simplification:

\begin{center}
$(a+r)+r \longrightarrow a+r$
\end{center}
The problem here is that only the head element
is ``spilled out'',
whereas we would want to flatten
the entire list, to open up possibilities for further simplifications.\\
Not flattening the rest of the elements also means that
the later de-duplication process 
does not fully remove apparent duplicates.
For example,
using $\simpsulz$ we could not 
simplify
\begin{center}
$((a^* a^*)+ (a^* + a^*))\cdot (a^*a^*)^*+
((a^*a^*)+a^*)\cdot (a^*a^*)^*$
\end{center}
because the nested alternative $(a^* + a^*)$ is not the first element
of the enclosing alternative and is therefore not flattened.\\
We define a flatten operation that flattens not only 
the first regular expression of an alternative,
but the entire list: 
 \begin{center}
  \begin{tabular}{@{}lcl@{}}
  $\textit{flts} \; ((_{bs}\sum \textit{as}) :: \textit{as'})$ & $\dn$ & $(\textit{map} \;
     (\textit{fuse}\;bs)\; \textit{as}) \; @ \; \textit{flts} \; \textit{as'} $ \\
  $\textit{flts} \; (\ZERO :: \textit{as'})$ & $\dn$ & $ \textit{flts} \;  \textit{as'} $ \\
    $\textit{flts} \; (a :: \textit{as'})$ & $\dn$ & $a :: \textit{flts} \; \textit{as'}$ \quad(otherwise) 
\end{tabular}    
\end{center}  
\noindent
Our $\flts$ operation 
also throws away $\ZERO$s
as they do not contribute to a lexing result.
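A Scala rendering of $\flts$ might look as follows; the ADT \textit{ARexp} and the
helper \textit{fuse} are our assumptions for the sketches in this chapter, mirroring
(but not necessarily identical to) the datatypes of our implementation:
\begin{lstlisting}[language=Scala]
// A minimal annotated-regex ADT for the sketches in this chapter;
// bits are represented as Ints (0 for Z, 1 for S).
sealed trait ARexp
case object AZERO extends ARexp
case class AONE(bs: List[Int]) extends ARexp
case class ACHAR(bs: List[Int], c: Char) extends ARexp
case class ASEQ(bs: List[Int], r1: ARexp, r2: ARexp) extends ARexp
case class AALTS(bs: List[Int], rs: List[ARexp]) extends ARexp
case class ASTAR(bs: List[Int], r: ARexp) extends ARexp

// fuse prepends bits onto the annotation of a regular expression
// (bits fused onto AZERO are discarded, as AZERO carries none).
def fuse(bs: List[Int], r: ARexp): ARexp = r match {
  case AZERO             => AZERO
  case AONE(bs2)         => AONE(bs ++ bs2)
  case ACHAR(bs2, c)     => ACHAR(bs ++ bs2, c)
  case ASEQ(bs2, r1, r2) => ASEQ(bs ++ bs2, r1, r2)
  case AALTS(bs2, rs)    => AALTS(bs ++ bs2, rs)
  case ASTAR(bs2, r1)    => ASTAR(bs ++ bs2, r1)
}

// flts spills the children of every nested alternative, fusing the
// nested bits onto them, and drops AZEROs along the way.
def flts(rs: List[ARexp]): List[ARexp] = rs match {
  case Nil                  => Nil
  case AZERO :: tl          => flts(tl)
  case AALTS(bs, rs1) :: tl => rs1.map(fuse(bs, _)) ++ flts(tl)
  case r :: tl              => r :: flts(tl)
}
\end{lstlisting}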
\subsection{Duplicate Removal}
After flattening is done, we are ready to de-duplicate.
The de-duplication function is called $\distinctBy$,
and it is where we make our second improvement over
Sulzmann and Lu's simplification.
The process goes as follows:
\begin{center}
$rs \stackrel{\textit{flts}}{\longrightarrow} 
rs_{flat} 
\xrightarrow{\distinctBy \; 
rs_{flat} \; \rerases\; \varnothing} rs_{distinct}$
%\stackrel{\distinctBy \; 
%rs_{flat} \; \erase\; \varnothing}{\longrightarrow} \; rs_{distinct}$
\end{center}
where the $\distinctBy$ function is defined as:
\begin{center}
	\begin{tabular}{@{}lcl@{}}
		$\distinctBy \; [] \; f\; acc $ & $\dn$ & $ []$\\
		$\distinctBy \; (x :: xs) \; f \; acc$ & $\dn$ & $\quad \textit{if} \; (f \; x \in acc)\;\; \textit{then} \;\; \distinctBy \; xs \; f \; acc$\\
						       & & $\quad \textit{else}\;\; x :: (\distinctBy \; xs \; f \; (\{f \; x\} \cup acc))$ 
	\end{tabular}
\end{center}
\noindent
The reason we define the de-duplication function with respect to a mapping $f$ is that
we want to eliminate regular expressions that are syntactically the same,
but have different bit-codes.
For example, we can remove the second $a^*a^*$ from
$_{ZSZ}a^*a^* + _{SZZ}a^*a^*$, because it
represents a match with a shorter initial sub-match 
(and therefore is definitely not POSIX),
and will be discarded by $\bmkeps$ later.
\begin{center}
	$_{ZSZ}\underbrace{a^*}_{ZS:\; \textit{match once}\quad}\underbrace{a^*}_{Z: \;\textit{match once}} + 
	_{SZZ}\underbrace{a^*}_{S: \; \textit{match zero times}\quad}\underbrace{a^*}_{ZZ: \; \textit{match twice}}
	$
\end{center}
%$_{bs1} r_1 + _{bs2} r_2 \text{where} (r_1)_{\downarrow} = (r_2)_{\downarrow}$
Due to the way our algorithm works,
the matches that conform to the POSIX standard 
will always be placed further to the left. When we 
traverse the list from left to right,
regular expressions we have already seen
will definitely not contribute to a POSIX value,
even if they are annotated with different bitcodes.
These duplicates therefore need to be removed.
To achieve this, we use $\rerases$ as the function $f$ during 
de-duplication.\\
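A direct Scala sketch of $\distinctBy$ is given below; it is generic in the mapping
$f$, which in our simplification would be instantiated with a Scala version of $\rerases$:
\begin{lstlisting}[language=Scala]
// Keep only the first occurrence of each element, where sameness
// is decided by the image under f; acc records the images seen.
def distinctBy[A, B](xs: List[A], f: A => B,
                     acc: Set[B] = Set[B]()): List[A] = xs match {
  case Nil => Nil
  case x :: tl =>
    if (acc.contains(f(x))) distinctBy(tl, f, acc)  // seen: drop
    else x :: distinctBy(tl, f, acc + f(x))         // new: keep
}
\end{lstlisting}
\noindent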
$\rerases$ is very similar to $\erase$, except that it preserves the structure
when erasing an alternative regular expression:
whereas $\erase$ turns an alternative back into a binary structure,
$\rerases$ keeps the list of children intact.
Not having to mess with the structure 
greatly simplifies the finiteness proof in chapter \ref{Finite}.
We give the definition of $\rerases$ here together with
the new datatype it maps into (as our plain
regular expression datatype does not allow non-binary alternatives),
and explain in detail
why we want it in the next chapter.
For the moment the reader can just think of 
$\rerases$ as $\erase$ and $\rrexp$ as plain regular expressions.
\[			\rrexp ::=   \RZERO \mid  \RONE
			 \mid  \RCHAR{c}  
			 \mid  \RSEQ{r_1}{r_2}
			 \mid  \RALTS{rs}
			 \mid \RSTAR{r}        
\]
The notation of $\rerases$ follows that of $\erase$:
it is a postfix operator written as a subscript,
except that it has an \emph{r} attached to distinguish it from $\erase$:
\begin{center}
\begin{tabular}{lcl}
$\rerase{\ZERO}$ & $\dn$ & $\RZERO$\\
$\rerase{_{bs}\ONE}$ & $\dn$ & $\RONE$\\
	$\rerase{_{bs}\mathbf{c}}$ & $\dn$ & $\RCHAR{c}$\\
$\rerase{_{bs}r_1\cdot r_2}$ & $\dn$ & $\RSEQ{\rerase{r_1}}{\rerase{r_2}}$\\
$\rerase{_{bs}\sum as}$ & $\dn$ & $\RALTS{\map \; \rerase{\_} \; as}$\\
$\rerase{_{bs} a ^*}$ & $\dn$ & $\RSTAR{\rerase{a}}$
\end{tabular}
\end{center}
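For illustration, a Scala sketch of $\rrexp$ and $\rerases$ is given below, reusing
the \textit{ARexp} constructors from the sketch above (the names are again ours):
\begin{lstlisting}[language=Scala]
// RALTS keeps the list of children, so the alternative structure
// survives the erasure, unlike the binary ALT of plain regexes.
sealed trait RRexp
case object RZERO extends RRexp
case object RONE extends RRexp
case class RCHAR(c: Char) extends RRexp
case class RSEQ(r1: RRexp, r2: RRexp) extends RRexp
case class RALTS(rs: List[RRexp]) extends RRexp
case class RSTAR(r: RRexp) extends RRexp

// rerase drops the bitcodes but keeps the list of alternatives
def rerase(a: ARexp): RRexp = a match {
  case AZERO           => RZERO
  case AONE(_)         => RONE
  case ACHAR(_, c)     => RCHAR(c)
  case ASEQ(_, a1, a2) => RSEQ(rerase(a1), rerase(a2))
  case AALTS(_, as)    => RALTS(as.map(rerase))
  case ASTAR(_, a1)    => RSTAR(rerase(a1))
}
\end{lstlisting}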

\subsection{Putting Things Together}
A recursive definition of our  simplification function 
is given below:
%that looks somewhat similar to our Scala code is 
\begin{center}
  \begin{tabular}{@{}lcl@{}}
   
	  $\textit{bsimp} \; (_{bs}a_1\cdot a_2)$ & $\dn$ & $ \textit{bsimp}_{ASEQ} \; bs \;(\textit{bsimp} \; a_1) \; (\textit{bsimp}  \; a_2)  $ \\
	  $\textit{bsimp} \; (_{bs}\sum \textit{as})$ & $\dn$ & $\textit{bsimp}_{AALTS} \; \textit{bs} \; (\textit{distinctBy} \; ( \flts \; ( \textit{map} \; \textit{bsimp} \; as)) \; \rerases \; \varnothing) $ \\
   $\textit{bsimp} \; a$ & $\dn$ & $\textit{a} \qquad \textit{otherwise}$   
\end{tabular}    
\end{center}    

\noindent
The simplification function (named $\textit{bsimp}$ for \emph{b}it-coded) 
pattern-matches on the regular expression.
When it detects that the regular expression is an alternative or a
sequence, it first simplifies its children regular expressions
recursively and then checks whether one of the children has turned into $\ZERO$ or
$\ONE$, which might trigger further simplification at the current level.
Current-level simplifications for sequences are handled by the function $\textit{bsimp}_{ASEQ}$,
using rules such as  $\ZERO \cdot r \rightarrow \ZERO$ and $\ONE \cdot r \rightarrow r$.
\begin{center}
	\begin{tabular}{@{}lcl@{}}
		$\textit{bsimp}_{ASEQ} \; bs\; a \; b$ & $\dn$ & $ (a,\; b) \textit{match}$\\
   &&$\quad\textit{case} \; (\ZERO, \_) \Rightarrow  \ZERO$ \\
   &&$\quad\textit{case} \; (\_, \ZERO) \Rightarrow  \ZERO$ \\
   &&$\quad\textit{case} \;  (_{bs_1}\ONE, a_2') \Rightarrow  \textit{fuse} \; (bs@bs_1) \;  a_2'$ \\
   &&$\quad\textit{case} \; (a_1', a_2') \Rightarrow   _{bs}a_1' \cdot a_2'$ 
	\end{tabular}
\end{center}
\noindent
The most involved part is the $\sum$ clause, where we first call $\flts$ on
the list of simplified children $\textit{map}\; \textit{bsimp}\; \textit{as}$,
and then call $\distinctBy$ on that list; the predicate determining whether two 
elements are the same is $\rerases \; r_1 = \rerases\; r_2$.
Finally, depending on whether the regular expression list $as'$ has turned into a
singleton or an empty list after $\flts$ and $\distinctBy$, $\textit{bsimp}_{AALTS}$
decides whether to keep the current-level constructor $\sum$,
removing it when there are fewer than two elements:
\begin{center}
	\begin{tabular}{lcl}
		$\textit{bsimp}_{AALTS} \; bs \; as'$ & $ \dn$ & $ as' \; \textit{match}$\\		
  &&$\quad\textit{case} \; [] \Rightarrow  \ZERO$ \\
   &&$\quad\textit{case} \; a :: [] \Rightarrow  \fuse \; bs \; a$ \\
   &&$\quad\textit{case} \;  as' \Rightarrow _{bs}\sum \textit{as'}$\\ 
	\end{tabular}
	
\end{center}
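Putting the clauses together, the whole simplification can be sketched in Scala as
below, reusing \textit{fuse}, \textit{flts}, \textit{distinctBy} and \textit{rerase}
from the sketches above:
\begin{lstlisting}[language=Scala]
def bsimpASEQ(bs: List[Int], a1: ARexp, a2: ARexp): ARexp =
  (a1, a2) match {
    case (AZERO, _)       => AZERO                 // 0 . r -> 0
    case (_, AZERO)       => AZERO                 // r . 0 -> 0
    case (AONE(bs1), a2p) => fuse(bs ++ bs1, a2p)  // 1 . r -> r
    case (a1p, a2p)       => ASEQ(bs, a1p, a2p)
  }

def bsimpAALTS(bs: List[Int], rs: List[ARexp]): ARexp = rs match {
  case Nil      => AZERO        // empty alternative matches nothing
  case a :: Nil => fuse(bs, a)  // singleton: the sum is removed
  case rsp      => AALTS(bs, rsp)
}

def bsimp(a: ARexp): ARexp = a match {
  case ASEQ(bs, a1, a2) => bsimpASEQ(bs, bsimp(a1), bsimp(a2))
  case AALTS(bs, as)    =>
    bsimpAALTS(bs, distinctBy(flts(as.map(bsimp)), rerase))
  case a                => a
}
\end{lstlisting}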
Having defined the $\bsimp$ function,
we add it as a phase after each derivative is taken,
so that the intermediate results stay small:
\begin{center}
	\begin{tabular}{lcl}
		$r \backslash_{bsimp} s$ & $\dn$ & $\textit{bsimp}(r \backslash s)$
	\end{tabular}
\end{center}
%Following previous notations
%when extending from derivatives w.r.t.~character to derivative
%w.r.t.~string, we define the derivative that nests simplifications 
%with derivatives:%\comment{simp in  the [] case?}
We extend this from character to string:
\begin{center}
\begin{tabular}{lcl}
$r \backslash_{bsimps} (c\!::\!s) $ & $\dn$ & $(r \backslash_{bsimp}\, c) \backslash_{bsimps}\, s$ \\
$r \backslash_{bsimps} [\,] $ & $\dn$ & $r$
\end{tabular}
\end{center}

\noindent
The lexer that applies our simplification function after each derivative,
and then extracts and decodes the bitcodes,
is called $\blexersimp$:
\begin{center}
\begin{tabular}{lcl}
  $\textit{blexer\_simp}\;r\,s$ & $\dn$ &
      $\textit{let}\;a = (r^\uparrow)\backslash_{bsimps}\, s\;\textit{in}$\\                
  & & $\;\;\textit{if}\; \textit{bnullable}(a)$\\
  & & $\;\;\textit{then}\;\textit{decode}\,(\textit{bmkeps}\,a)\,r$\\
  & & $\;\;\textit{else}\;\textit{None}$
\end{tabular}
\end{center}

\noindent
This algorithm keeps the regular expression size small.
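A Scala sketch of this lexer is given below; \textit{bder} (the derivative),
\textit{bnullable}, \textit{bmkeps}, \textit{decode} and \textit{internalise}
(the $r^\uparrow$ operation) are assumed from the bitcoded lexer of the previous
chapter, under names of our own choosing:
\begin{lstlisting}[language=Scala]
// derivative w.r.t. a string, simplifying after every step
def bdersSimp(a: ARexp, s: List[Char]): ARexp = s match {
  case Nil     => a
  case c :: cs => bdersSimp(bsimp(bder(c, a)), cs)
}

// blexer_simp: internalise, derive with simplification, then
// extract and decode the bits (if the last derivative is nullable)
def blexerSimp(r: Rexp, s: List[Char]): Option[Val] = {
  val a = bdersSimp(internalise(r), s)
  if (bnullable(a)) Some(decode(bmkeps(a), r)) else None
}
\end{lstlisting}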


\subsection{$(a+aa)^*$ and $(a^*\cdot a^*)^*$  against 
$\protect\underbrace{aa\ldots a}_\text{n \textit{a}s}$ After Simplification}
For example,
with our simplification the
previous $(a^*a^*)^*$ example,
where $\simpsulz$ could not
stop the fast growth (over
3 million nodes at input length just below $20$),
is reduced to just 15 nodes, and the size stays constant no matter how long the
input string is.
This is demonstrated in the graphs below.
\begin{figure}[H]
\begin{center}
\begin{tabular}{ll}
\begin{tikzpicture}
\begin{axis}[
    xlabel={$n$},
    ylabel={derivative size},
        width=7cm,
    height=4cm, 
    legend entries={Lexer with $\textit{bsimp}$},  
    legend pos=  south east,
    legend cell align=left]
\addplot[red,mark=*, mark options={fill=white}] table {BitcodedLexer.data};
\end{axis}
\end{tikzpicture} %\label{fig:BitcodedLexer}
&
\begin{tikzpicture}
\begin{axis}[
    xlabel={$n$},
    ylabel={derivative size},
    width = 7cm,
    height = 4cm,
    legend entries={Lexer with $\simpsulz$},  
    legend pos=  north west,
    legend cell align=left]
\addplot[red,mark=*, mark options={fill=white}] table {BetterWaterloo.data};
\end{axis}
\end{tikzpicture} 
\end{tabular}
\end{center}
\caption{Our Improvement over Sulzmann and Lu's in terms of size}
\end{figure}
\noindent
Given the size difference, it is not
surprising that our $\blexersimp$ significantly outperforms
$\textit{blexer\_sulzSimp}$.
In the next section we establish the
first important property of our lexer: correctness.
%----------------------------------------------------------------------------------------
%	SECTION rewrite relation
%----------------------------------------------------------------------------------------
\section{Correctness of $\blexersimp$}
In this section we give details
of the correctness proof of $\blexersimp$,
an important contribution of this thesis.\\
We first introduce the rewriting relation \emph{rrewrite}
($\rrewrite$) between two regular expressions,
which expresses an atomic
simplification step from the left-hand-side
to the right-hand-side.
We then prove properties about
this rewriting relation and its reflexive transitive closure.
Finally we leverage these properties to show
an equivalence between the internal data structures of 
$\blexer$ and $\blexersimp$.

\subsection{The Rewriting Relation $\rrewrite$($\rightsquigarrow$)}
In $\blexer$'s correctness proof, we
did not directly derive the fact that $\blexer$ yields the POSIX value,
but first proved that $\blexer$ is linked with $\lexer$.
Then we re-used
the correctness of $\lexer$
to obtain
\begin{center}
	$(r, s) \rightarrow v \;\; \textit{iff} \;\; \blexer \; r \;s = v$.
\end{center}
Here we apply this
modularised technique again
by first proving that
$\blexersimp \; r \; s $ 
produces the same output as $\blexer \; r\; s$,
and then piecing it together with 
$\blexer$'s correctness to achieve our main
theorem:\footnote{The case where 
$s$ is not in $L \; r$ is routine to establish.}
\begin{center}
	$(r, s) \rightarrow v \; \;   \textit{iff} \;\;  \blexersimp \; r \; s = v$
\end{center}
\noindent
The overall idea for the proof
of $\blexer \;r \;s = \blexersimp \; r \;s$ 
is that the transition from $r$ to $\textit{bsimp}\; r$ can be
broken down into finitely many rewrite steps:
\begin{center}
	$r \rightsquigarrow^* \textit{bsimp} \; r$
\end{center}
where each rewrite step, written $\rightsquigarrow$,
is an ``atomic'' simplification that
cannot be broken down any further:
\begin{figure}[H]
\begin{mathpar}
	\inferrule * [Right = $S\ZERO_l$]{\vspace{0em}}{_{bs} \ZERO \cdot r_2 \rightsquigarrow \ZERO\\}

	\inferrule * [Right = $S\ZERO_r$]{\vspace{0em}}{_{bs} r_1 \cdot \ZERO \rightsquigarrow \ZERO\\}

	\inferrule * [Right = $S_1$]{\vspace{0em}}{_{bs_1} ((_{bs_2} \ONE) \cdot r) \rightsquigarrow \fuse \; (bs_1 @ bs_2) \; r\\}\\

	
	
	\inferrule * [Right = $SL$] {\\ r_1 \rightsquigarrow r_2}{_{bs} r_1 \cdot r_3 \rightsquigarrow _{bs} r_2 \cdot r_3\\}

	\inferrule * [Right = $SR$] {\\ r_3 \rightsquigarrow r_4}{_{bs} r_1 \cdot r_3 \rightsquigarrow _{bs} r_1 \cdot r_4\\}\\

	\inferrule * [Right = $A0$] {\vspace{0em}}{ _{bs}\sum [] \rightsquigarrow \ZERO}

	\inferrule * [Right = $A1$] {\vspace{0em}}{ _{bs}\sum [a] \rightsquigarrow \fuse \; bs \; a}

	\inferrule * [Right = $AL$] {\\ rs_1 \stackrel{s}{\rightsquigarrow} rs_2}{_{bs}\sum rs_1 \rightsquigarrow \; _{bs}\sum rs_2}

	\inferrule * [Right = $LE$] {\vspace{0em}}{ [] \stackrel{s}{\rightsquigarrow} []}

	\inferrule * [Right = $LT$] {rs_1 \stackrel{s}{\rightsquigarrow} rs_2}{ r :: rs_1 \stackrel{s}{\rightsquigarrow} r :: rs_2 }

	\inferrule * [Right = $LH$] {r_1 \rightsquigarrow r_2}{ r_1 :: rs \stackrel{s}{\rightsquigarrow} r_2 :: rs}

	\inferrule * [Right = $L\ZERO$] {\vspace{0em}}{\ZERO :: rs \stackrel{s}{\rightsquigarrow} rs}

	\inferrule * [Right = $LS$] {\vspace{0em}}{(_{bs} \sum rs_1) :: rs_b \stackrel{s}{\rightsquigarrow} (\map \; (\fuse \; bs) \; rs_1) @ rs_b }

	\inferrule * [Right = $LD$] {\\ \rerase{a_1} = \rerase{a_2}}{rs_a @ [a_1] @ rs_b @ [a_2] @ rs_c \stackrel{s}{\rightsquigarrow} rs_a @ [a_1] @ rs_b @ rs_c}

\end{mathpar}
\caption{
The rewrite rules that generate simplified regular expressions 
in small steps: $r_1 \rightsquigarrow r_2$ is for bitcoded regular expressions 
and $rs_1 \stackrel{s}{\rightsquigarrow} rs_2$ for 
lists of bitcoded regular expressions. 
Of particular interest is the $LD$ rule, which allows copies of regular expressions 
to be removed provided a regular expression 
earlier in the list can match the same strings.
}\label{rrewriteRules}
\end{figure}
\noindent
Rules such as $LT$ and $LH$ relate two regular expression lists
in which one regular expression
in the left-hand-side list can be rewritten in one step
to the regular expression at the same position in the right-hand-side list.
This helps with defining the ``context rules'' such as $AL$.\\
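To see the list rules in action, consider the following small example, where the
side-condition of the $LD$ rule is satisfied because $\rerase{_{Z}\ONE} = \rerase{_{S}\ONE}$:
\begin{center}
	\begin{tabular}{lcll}
		$_{bs}\sum \, [\ZERO,\; _{Z}\ONE,\; _{S}\ONE]$ & $\rightsquigarrow$ & $_{bs}\sum \, [_{Z}\ONE,\; _{S}\ONE]$ & by $AL$ with $L\ZERO$\\
		& $\rightsquigarrow$ & $_{bs}\sum \, [_{Z}\ONE]$ & by $AL$ with $LD$\\
		& $\rightsquigarrow$ & $\fuse \; bs \; (_{Z}\ONE)$ & by $A1$\\
	\end{tabular}
\end{center}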
The reflexive transitive closures of $\rightsquigarrow$ and $\stackrel{s}{\rightsquigarrow}$
are defined in the usual way, including the embedding of single rewrite steps:
\begin{figure}[H]
	\centering
\begin{mathpar}
	\inferrule{\vspace{0em}}{ r \rightsquigarrow^* r \\}

	\inferrule{\vspace{0em}}{rs \stackrel{s*}{\rightsquigarrow} rs \\}

	\inferrule{r_1 \rightsquigarrow r_2}{r_1 \rightsquigarrow^* r_2\\}

	\inferrule{rs_1 \stackrel{s}{\rightsquigarrow} rs_2}{rs_1 \stackrel{s*}{\rightsquigarrow} rs_2\\}

	\inferrule{r_1 \rightsquigarrow^*  r_2 \land \; r_2 \rightsquigarrow^* r_3}{r_1 \rightsquigarrow^* r_3\\}
	
	\inferrule{rs_1 \stackrel{s*}{\rightsquigarrow}  rs_2 \land \; rs_2 \stackrel{s*}{\rightsquigarrow} rs_3}{rs_1 \stackrel{s*}{\rightsquigarrow} rs_3}
\end{mathpar}
\caption{The Reflexive Transitive Closure of 
$\rightsquigarrow$ and $\stackrel{s}{\rightsquigarrow}$}\label{transClosure}
\end{figure}
If one term rewrites to another, the two terms remain rewritable to each other
even after a derivative is taken:
\begin{center}
	$r_1 \rightsquigarrow r_2 \implies (r_1 \backslash c) \rightsquigarrow^* (r_2 \backslash c)$
\end{center}
And finally, if two terms are rewritable to each other
and are nullable, then they produce the same bitcodes:
\begin{center}
	$r \rightsquigarrow^* r' \;\;\textit{and}\;\; \bnullable \; r \;\; \textit{implies} \; \; \bmkeps \; r = \bmkeps \; r'$
\end{center}
The decoding phases of $\blexer$ and $\blexersimp$
are the same, which means that if the two lexers obtain the same
bitcodes before the decoding phase,
they return the same value after decoding.
We will prove the three properties 
we mentioned above in the next sub-section.
\subsection{Important Properties of $\rightsquigarrow$}
First we prove some basic facts 
about $\rightsquigarrow$, $\stackrel{s}{\rightsquigarrow}$, 
$\rightsquigarrow^*$ and $\stackrel{s*}{\rightsquigarrow}$,
which will be needed later.\\
The inference rules in figure \ref{rrewriteRules}, which we 
gave in the previous section, 
have their ``many-step versions'':

\begin{lemma}\label{squig1}
	\hspace{0em}
	\begin{itemize}
		\item
			$rs_1 \stackrel{s*}{\rightsquigarrow} rs_2 \implies _{bs} \sum rs_1 \stackrel{*}{\rightsquigarrow} _{bs} \sum rs_2$
		\item
			$r \rightsquigarrow^* r' \implies _{bs} \sum (r :: rs)\; \rightsquigarrow^*\;  _{bs} \sum (r' :: rs)$

		\item
			The rewriting-in-many-steps property is composable 
			in terms of the sequence constructor:\\
			$r_1 \rightsquigarrow^* r_2 
			\implies _{bs} r_1 \cdot r_3 \rightsquigarrow^* \;  
			_{bs} r_2 \cdot r_3 \quad $ 
			and 
			$\quad r_3 \rightsquigarrow^* r_4 
			\implies _{bs} r_1 \cdot r_3 \rightsquigarrow^* _{bs} \; r_1 \cdot r_4$
		\item
			The rewriting-in-many-steps properties 
			$\stackrel{*}{\rightsquigarrow}$ and $\stackrel{s*}{\rightsquigarrow}$ 
			are preserved under the function $\fuse$:\\
				$r_1 \rightsquigarrow^* r_2 
				\implies \fuse \; bs \; r_1 \rightsquigarrow^* \; 
				\fuse \; bs \; r_2 \quad  $ and 
				$rs_1 \stackrel{s}{\rightsquigarrow} rs_2 
				\implies \map \; (\fuse \; bs) \; rs_1 
				\stackrel{s*}{\rightsquigarrow} \map \; (\fuse \; bs) \; rs_2$
	\end{itemize}
\end{lemma}
\begin{proof}
	By induction on 
	the inductive cases of $\stackrel{s*}{\rightsquigarrow}$ and $\rightsquigarrow^*$ respectively.
	The third and fourth points are 
	by the properties $r_1 \rightsquigarrow r_2 \implies \fuse \; bs \; r_1 \rightsquigarrow^* \fuse \; bs \; r_2$ and
	$rs_2 \stackrel{s}{\rightsquigarrow} rs_3 
	\implies \map \; (\fuse \; bs)\; rs_2 \stackrel{s*}{\rightsquigarrow} \map \; (\fuse \; bs)\; rs_3$,
	which can be inductively proven over the inductive cases of $\rightsquigarrow$ and 
	$\stackrel{s}{\rightsquigarrow}$.
\end{proof}
\noindent
The inference rules of $\stackrel{s}{\rightsquigarrow}$
are defined in terms of the list cons operation; here
we establish that the 
$\stackrel{s}{\rightsquigarrow}$ and $\stackrel{s*}{\rightsquigarrow}$ 
relations are also preserved w.r.t.\ appending and prepending of a list.
In addition, we
also prove some relations 
between $\rightsquigarrow^*$ and $\stackrel{s*}{\rightsquigarrow}$.
\begin{lemma}\label{ssgqTossgs}
	\hspace{0em}
	\begin{itemize}
		\item
			$rs_1 \stackrel{s}{\rightsquigarrow} rs_2 \implies rs @ rs_1 \stackrel{s}{\rightsquigarrow} rs @ rs_2$

		\item
			$rs_1 \stackrel{s*}{\rightsquigarrow} rs_2 \implies 
			rs @ rs_1 \stackrel{s*}{\rightsquigarrow} rs @ rs_2 \; \;
			\textit{and} \; \;
			rs_1 @ rs \stackrel{s*}{\rightsquigarrow} rs_2 @ rs$
			
		\item
			The $\stackrel{s}{\rightsquigarrow} $ relation after appending 
			a list becomes $\stackrel{s*}{\rightsquigarrow}$:\\
			$rs_1 \stackrel{s}{\rightsquigarrow} rs_2 
			\implies rs_1 @ rs \stackrel{s*}{\rightsquigarrow} rs_2 @ rs$
		\item
		
			$r_1 \rightsquigarrow^* r_2 \implies [r_1] \stackrel{s*}{\rightsquigarrow} [r_2]$
		\item
		
			$rs_3 \stackrel{s*}{\rightsquigarrow} rs_4 \land r_1 \rightsquigarrow^* r_2 \implies
			r_1 :: rs_3 \stackrel{s*}{\rightsquigarrow} r_2 :: rs_4$
		\item			
			If we could rewrite a regular expression 
			in many steps to $\ZERO$, then 
			we could also rewrite any sequence containing it to $\ZERO$:\\
			$r_1 \rightsquigarrow^* \ZERO 
			\implies _{bs}r_1\cdot r_2 \rightsquigarrow^* \ZERO$
	\end{itemize}
\end{lemma}
\begin{proof}
	The first part is by induction on the list $rs$.
	The second part is by induction on the inductive cases of $\stackrel{s*}{\rightsquigarrow}$.
	The third part is 
	by rule induction of $\stackrel{s}{\rightsquigarrow}$.
	The fourth sub-lemma is 
	by rule induction of 
	$\stackrel{s*}{\rightsquigarrow}$ and using parts one to three. 
	The fifth part is a corollary of part four.
	The last part is proven by rule induction again on $\rightsquigarrow^*$.
\end{proof}
\noindent
Now we are ready to give the proofs of the following properties:
\begin{itemize}
	\item
		$(r \rightsquigarrow^* r'\land \bnullable \; r) 
		\implies \bmkeps \; r = \bmkeps \; r'$. \\
	\item
		$r \rightsquigarrow^* \textit{bsimp} \;r$.\\
	\item
		$r \rightsquigarrow r' \implies r \backslash c \rightsquigarrow^* r'\backslash c$.\\
\end{itemize}
These properties would work together towards the correctness theorem.
\subsubsection{Property 1: $(r \rightsquigarrow^* r'\land \bnullable \; r) 
		\implies \bmkeps \; r = \bmkeps \; r'$}
Intuitively, this property says we can 
extract the same bitcodes using $\bmkeps$ from the nullable
components of two regular expressions $r$ and $r'$,
if we can rewrite from one to the other in finitely
many steps.\\
For convenience, 
we define a predicate for a list of regular expressions
containing at least one nullable regular expression:
\begin{center}
	$\textit{bnullables} \; rs \quad \dn \quad \exists r \in rs. \;\; \bnullable \; r$
\end{center}
\noindent
The rewriting relation $\rightsquigarrow$ preserves nullability:
\begin{lemma}\label{rewritesBnullable}
	\hspace{0em}
	\begin{itemize}
		\item
			$\text{If} \; r_1 \rightsquigarrow r_2, \; 
			\text{then} \; \bnullable \; r_1 = \bnullable \; r_2$
		\item 	
			$\text{If} \; rs_1 \stackrel{s}{\rightsquigarrow} rs_2 \;
			\text{then} \; \textit{bnullables} \; rs_1 = \textit{bnullables} \; rs_2$
		\item
			$r_1 \rightsquigarrow^* r_2 
			\implies \bnullable \; r_1 = \bnullable \; r_2$
	\end{itemize}
\end{lemma}
\begin{proof}
	By rule induction of $\rightsquigarrow$ and $\stackrel{s}{\rightsquigarrow}$.
	The third point is a corollary of the second.
\end{proof}
\noindent
Again for convenience,
we define $\bmkepss$ on a list $rs$,
which extracts the bit-codes of the first $\bnullable$ element in $rs$:
\begin{center}
	\begin{tabular}{lcl}
		$\bmkepss \; [] $ & $\dn$ & $[]$\\
		$\bmkepss \; (r :: rs)$ & $\dn$ & $\textit{if} \;(\bnullable \; r) \;\; 
		\textit{then} \;\; \bmkeps \; r \; \textit{else} \;\; \bmkepss \; rs$
	\end{tabular}
\end{center}
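In Scala, $\bmkepss$ could be sketched as follows, with \textit{bnullable} and
\textit{bmkeps} assumed from the bitcoded lexer of the previous chapter:
\begin{lstlisting}[language=Scala]
// bits of the first bnullable element of the list (a sketch)
def bmkepss(rs: List[ARexp]): List[Int] = rs match {
  case Nil     => Nil
  case r :: tl => if (bnullable(r)) bmkeps(r) else bmkepss(tl)
}
\end{lstlisting}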
\noindent
If both regular expressions in a rewriting relation are nullable, then they 
produce the same bitcodes:
\begin{lemma}\label{rewriteBmkepsAux}
	\hspace{0em}
	\begin{itemize}
		\item
			$r_1 \rightsquigarrow r_2 \implies 
			(\bnullable \; r_1 \land \bnullable \; r_2 \implies \bmkeps \; r_1 = 
			\bmkeps \; r_2)$ 
		\item
			$rs_1 \stackrel{s}{\rightsquigarrow} rs_2 
			\implies (\bnullables \; rs_1 \land \bnullables \; rs_2 \implies 
			\bmkepss \; rs_1 = \bmkepss \; rs_2)$
	\end{itemize}
\end{lemma}
\begin{proof}
	By rule induction over the cases that lead to $r_1 \rightsquigarrow r_2$.
\end{proof}
\noindent
With lemma \ref{rewriteBmkepsAux} we are ready to prove its
many-step version: 
\begin{lemma}
	$\text{If} \;\; r \stackrel{*}{\rightsquigarrow} r' \;\; \text{and} \;\; \bnullable \; r, \;\;\; \text{then} \;\; \bmkeps \; r = \bmkeps \; r'$
\end{lemma}
\begin{proof}
	By rule induction of $\stackrel{*}{\rightsquigarrow}$.
	Lemma \ref{rewritesBnullable} tells us both $r$ and $r'$ are nullable.
	Lemma \ref{rewriteBmkepsAux} solves the inductive case.
\end{proof}

\subsubsection{Property 2: $r \stackrel{*}{\rightsquigarrow} \bsimp{r}$}
Now we get to the ``meaty'' part of the proof, 
which says that our simplification's helper functions 
such as $\distinctBy$ and $\flts$ conform to 
the $\stackrel{s*}{\rightsquigarrow}$ and 
$\rightsquigarrow^* $ rewriting relations.\\
The first lemma we prove is a more general version of 
$rs \stackrel{s*}{\rightsquigarrow} \distinctBy \; rs \; \rerases \; \varnothing$:
\begin{lemma}
	$rs_1 @ rs_2 \stackrel{s*}{\rightsquigarrow} 
	rs_1 @ (\distinctBy \; rs_2 \; \rerases \; (\map\; \rerases \; rs_1))$
\end{lemma}
\noindent
It says that for a list made of two parts $rs_1 @ rs_2$, 
one can throw away the duplicate
elements in $rs_2$, as well as those that have already appeared in $rs_1$.
\begin{proof}
	By induction on $rs_2$, where $rs_1$ is allowed to be arbitrary.
\end{proof}
\noindent
Setting $rs_1$ to be the empty list,
we get the corollary
\begin{corollary}\label{dBPreserves}
	$rs \stackrel{s*}{\rightsquigarrow} \distinctBy \; rs \; \rerases \; \varnothing$.
\end{corollary}
\noindent
The flatten function $\flts$ conforms to
$\stackrel{s*}{\rightsquigarrow}$ as well:

\begin{lemma}\label{fltsPreserves}
	$rs \stackrel{s*}{\rightsquigarrow} \flts \; rs$
\end{lemma}
\begin{proof}
	By an induction on $rs$.
\end{proof}
\noindent
The function $\bsimpalts$ preserves rewritability:
\begin{lemma}\label{bsimpaltsPreserves}
	$_{bs} \sum rs \stackrel{*}{\rightsquigarrow} \bsimpalts \; bs \; rs$
\end{lemma}
\noindent
The simplification function
$\textit{bsimp}$ only transforms the regular expression $r$ using steps specified by 
$\rightsquigarrow^*$ and nothing else.
\begin{lemma}
	$r \stackrel{*}{\rightsquigarrow} \bsimp{r}$
\end{lemma}
\begin{proof}
	By an induction on $r$.
	The most involved case is the alternative, 
	where we use lemmas \ref{bsimpaltsPreserves},
	\ref{fltsPreserves} and \ref{dBPreserves} to do a series of rewrites:\\
	\begin{center}
		\begin{tabular}{lcl}
			$rs$ &  $\stackrel{s*}{\rightsquigarrow}$ & $ \map \; \textit{bsimp} \; rs$\\
			     &  $\stackrel{s*}{\rightsquigarrow}$ & $ \flts \; (\map \; \textit{bsimp} \; rs)$\\
			     &  $\stackrel{s*}{\rightsquigarrow}$ & $ \distinctBy \; 
			(\flts \; (\map \; \textit{bsimp}\; rs)) \; \rerases \; \varnothing$\\
		\end{tabular}
	\end{center}
	Using this we derive the following rewrite sequence:\\
	\begin{center}
		\begin{tabular}{lcl}
			$r$ & $=$ & $_{bs}\sum rs$\\[1.5ex]
			    & $\rightsquigarrow^*$ & $\bsimpalts \; bs \; rs$ \\[1.5ex]
			    & $\rightsquigarrow^*$ & $\ldots$ \\ [1.5ex]
			    & $\rightsquigarrow^*$ & $\bsimpalts \; bs \; 
			    (\distinctBy \; (\flts \; (\map \; \textit{bsimp}\; rs)) 
			    \; \rerases \; \varnothing)$\\[1.5ex]
			    %& $\rightsquigarrow^*$ & $ _{bs} \sum (\distinctBy \; 
				%(\flts \; (\map \; \textit{bsimp}\; rs)) \; \;
				%\rerases \; \;\phi) $\\[1.5ex]
			    & $\rightsquigarrow^*$ & $\textit{bsimp} \; r$\\[1.5ex]
		\end{tabular}
	\end{center}	
\end{proof}
\subsubsection{Property 3: $r_1 \stackrel{*}{\rightsquigarrow}  r_2 \implies r_1 \backslash c \stackrel{*}{\rightsquigarrow} r_2 \backslash c$}
The rewrite relation 
$\rightsquigarrow$ is preserved under derivatives; the only
difference is that we might need multiple steps 
where originally one step sufficed:
\begin{lemma}\label{rewriteBder}
	\hspace{0em}
	\begin{itemize}
		\item
			If $r_1 \rightsquigarrow r_2$, then $r_1 \backslash c 
			\rightsquigarrow^*  r_2 \backslash c$ 
		\item	
			If $rs_1 \stackrel{s}{\rightsquigarrow} rs_2$, then $ 
			\map \; (\_\backslash c) \; rs_1 
			\stackrel{s*}{\rightsquigarrow} \map \; (\_ \backslash c) \; rs_2$
	\end{itemize}
\end{lemma}
\begin{proof}
	By induction on $\rightsquigarrow$ 
	and $\stackrel{s}{\rightsquigarrow}$, using a number of the previous lemmas.
\end{proof}
\noindent
Now we can prove property 3 as an immediate corollary:
\begin{corollary}\label{rewritesBder}
	$r_1 \rightsquigarrow^* r_2 \implies r_1 \backslash c \rightsquigarrow^*   
	r_2 \backslash c$
\end{corollary}
\begin{proof}
	By rule induction of $\stackrel{*}{\rightsquigarrow} $ and using the previous lemma \ref{rewriteBder}.
\end{proof}
\noindent
This can be extended and combined with $r \rightsquigarrow^* \textit{bsimp} \; r$
to obtain the rewritability between
$\blexer$'s and $\blexersimp$'s intermediate
derivative regular expressions:
\begin{lemma}\label{bderBderssimp}
	$a \backslash s \rightsquigarrow^* \bderssimp{a}{s} $
\end{lemma}
\begin{proof}
	By an induction on $s$.
\end{proof}
\subsection{Main Theorem}
Now with lemma \ref{bderBderssimp} we are ready for the main theorem.
\begin{theorem}
	$\blexer \; r \; s = \blexersimp{r}{s}$
\end{theorem}
\noindent
\begin{proof}
	By lemma \ref{bderBderssimp}, one can rewrite in many steps from the original lexer's 
	derivative regular expressions to those of the 
	lexer with simplification applied:
	\begin{center}
		$a \backslash s \stackrel{*}{\rightsquigarrow} \bderssimp{a}{s} $.
	\end{center}
	From this we know that they produce the same bits, if the lexing result is a match:
	\begin{center}
		$\bnullable \; (a \backslash s) 
		\implies \bmkeps \; (a \backslash s) = \bmkeps \; (\bderssimp{a}{s})$
	\end{center}
	Since they produce the same bits, they also give the same value after decoding:
	\begin{center}
		$\bnullable \; (a \backslash s) 
		\implies \decode \; r \; (\bmkeps \; (a \backslash s)) = 
		\decode \; r \; (\bmkeps \; (\bderssimp{a}{s}))$
	\end{center}
	This is equivalent to our proof goal:
	\begin{center}
		$\blexer \; r \; s = \blexersimp \; r \; s$.
	\end{center}	
\end{proof}
\noindent
As a corollary,
we link this result with the lemma we proved earlier that 
\begin{center}
	$(r, s) \rightarrow v \implies \blexer \; r \; s = v$
\end{center}
and obtain the corollary that the bit-coded lexer with simplification
indeed outputs the POSIX lexing result, if such a result exists.
\begin{corollary}
	$(r, s) \rightarrow v \implies \blexersimp{r}{s} = v$
\end{corollary}

\subsection{Comments on the Proof Techniques Used}
The non-trivial part of proving the correctness of the algorithm with simplification,
compared with the algorithm without simplification, is that we can no longer use the argument 
in \cref{flex_retrieve}.
The function \retrieve needs the cumbersome structure of the (unsimplified)
annotated regular expression to 
agree with the structure of the value, but simplification will always mess with the 
structure.

We also tried to prove $\bsimp{\bderssimp{a}{s}} = \bsimp{a\backslash s}$,
but this turns out to be not true. A counterexample is
\[ a = (\ONE+c)\cdot (aa \cdot (\ONE+c)) \quad \textit{and} \quad s = aa.
\]

Then we would have $\bsimp{a \backslash s}$ being 
$_{[]}(_{ZZ}\ONE +  _{ZS}c ) $
whereas $\bsimp{\bderssimp{a}{s}}$ would be 
$_{Z}(_{Z} \ONE + _{S} c)$.
Unfortunately, if we apply $\textit{bsimp}$ at different
stages we will always have such discrepancies:
whether the $\map \; (\fuse\; bs) \; as$ operation in $\textit{bsimp}$
is performed at certain points depends on when the simplification 
takes place, namely on whether there is a larger alternative structure
surrounding the alternative being simplified.
The good thing about $\stackrel{*}{\rightsquigarrow}$ is that it allows
us not to specify how exactly the ``atomic'' simplification steps $\rightsquigarrow$
are taken, but simply to say that they can be taken to make two similar 
regular expressions equal, and that they can be interleaved with derivatives.


Having the correctness property is good, 
but we would also like the lexer to be efficient in 
some sense, for example, not grinding to a halt on certain inputs.
In the next chapter we shall prove that for a given $r$, the size of its internal
derivatives is always finitely bounded by a constant.