ninems/ninems.tex
changeset 72 83b021fc7d29
parent 71 0573615e41a3
child 74 9e791ef6022f
child 75 24d9d64c2a95
--- a/ninems/ninems.tex	Mon Jul 08 21:21:54 2019 +0100
+++ b/ninems/ninems.tex	Wed Jul 10 23:16:14 2019 +0100
@@ -251,7 +251,7 @@
 regular expression matching according to the POSIX strategy
 \cite{Sulzmann2014}. Their algorithm is based on an older algorithm by
 Brzozowski from 1964 where he introduced the notion of derivatives of
-regular expressions \cite{Brzozowski1964}. We shall briefly explain
+regular expressions~\cite{Brzozowski1964}. We shall briefly explain
 this algorithm next.
 
 \section{The Algorithm by Brzozowski based on Derivatives of Regular
@@ -373,7 +373,7 @@
 produces a YES/NO answer for whether a string is being matched by a
 regular expression.  Sulzmann and Lu~\cite{Sulzmann2014} extended this
 algorithm to allow generation of an actual matching, called a
-\emph{value}. Values and regular expressions correspond to each 
+\emph{value} or sometimes a \emph{lexical value}.  These values and regular expressions correspond to each 
 other as illustrated in the following table:
 
 
@@ -407,9 +407,9 @@
 No value  corresponds to $\ZERO$; $\Empty$ corresponds to
 $\ONE$; $\Char$ to the character regular expression; $\Seq$ to the
 sequence regular expression and so on. The idea of values is to encode
-parse trees for how the sub-parts of a regular expression matches
+a kind of lexical value for how the sub-parts of a regular expression match
 the sub-parts of a string. To see this, suppose a \emph{flatten} operation, written
-$|v|$. We use this function to extract the underlying string of a value
+$|v|$ for values. We can use this function to extract the underlying string of a value
 $v$. For example, $|\mathit{Seq} \, (\textit{Char x}) \, (\textit{Char
 y})|$ is the string $xy$.  Using flatten, we can describe how values
 encode parse trees: $\Seq\,v_1\, v_2$ encodes a tree with two child nodes
@@ -419,7 +419,7 @@
 $|v_2|$. Exactly how these two are matched is contained in the
 child nodes $v_1$ and $v_2$ of the parent $\textit{Seq}$. 
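+
+For reference, the clauses of this flatten operation can be sketched
+roughly as follows (writing $@$ for string concatenation):
+
+\begin{center}
+\begin{tabular}{lcl}
+  $|\Empty|$ & $\dn$ & $[]$\\
+  $|\Char\,c|$ & $\dn$ & $[c]$\\
+  $|\Left\,v|$ & $\dn$ & $|v|$\\
+  $|\Right\,v|$ & $\dn$ & $|v|$\\
+  $|\Seq\,v_1\,v_2|$ & $\dn$ & $|v_1| \,@\, |v_2|$\\
+  $|\Stars\,[v_1,\ldots,v_n]|$ & $\dn$ & $|v_1| \,@\, \ldots \,@\, |v_n|$
+\end{tabular}
+\end{center}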
 
- To give a concrete example of how value works, consider the string $xy$
+ To give a concrete example of how values work, consider the string $xy$
 and the regular expression $(x + (y + xy))^*$. We can view this regular
 expression as a tree and if the string $xy$ is matched by two Star
 ``iterations'', then the $x$ is matched by the left-most alternative in
@@ -431,7 +431,7 @@
 \end{center}
 
 \noindent
-where $\Stars [\ldots]$ records all the
+where $\Stars \; [\ldots]$ records all the
 iterations; and $\Left$, respectively $\Right$, which
 alternative is used. The value for
 matching $xy$ in a single ``iteration'', i.e.~the POSIX value,
@@ -449,8 +449,8 @@
 The contribution of Sulzmann and Lu is an extension of Brzozowski's
 algorithm by a second phase (the first phase being building successive
 derivatives---see \eqref{graph:*}). In this second phase, a POSIX value 
-is generated assuming the regular expression matches  the string. 
-Pictorially, the algorithm is as follows:
+is generated in case the regular expression matches  the string. 
+Pictorially, the Sulzmann and Lu algorithm is as follows:
 
 \begin{ceqn}
 \begin{equation}\label{graph:2}
@@ -465,14 +465,14 @@
 For convenience, we shall employ the following notations: the regular expression we
 start with is $r_0$, and the given string $s$ is composed of characters $c_0 c_1
 \ldots c_{n-1}$. In  the first phase, we build the derivatives $r_1$, $r_2$, \ldots  according to
-the characters $c_0$, $c_1$,\ldots  until we exhaust the string and
-obtain at the derivative $r_n$. We test whether this derivative is
+the characters $c_0$, $c_1$, \ldots, $c_{n-1}$ until we exhaust the string and
+obtain the derivative $r_n$. We test whether this derivative is
 $\textit{nullable}$ or not. If not, we know the string does not match
 $r$ and no value needs to be generated. If yes, we start building the
-parse tree incrementally by \emph{injecting} back the characters into
-the values $v_n, \ldots, v_0$. For this we first call the function
+values $v_n, \ldots, v_0$ incrementally by \emph{injecting} back the
+characters. For the first value, $v_n$, we call the function
 $\textit{mkeps}$, which builds the parse tree for how the empty string
-has matched the (nullable) regular expression $r_n$. This function is defined
+has been matched by the (nullable) regular expression $r_n$. This function is defined
 as
 
 	\begin{center}
@@ -525,8 +525,8 @@
 expressions and values. To understand this definition better, consider
 the situation when we build the derivative of the regular expression $r_{i-1}$.
 For this we chop off a character from $r_{i-1}$ to form $r_i$. This leaves a
-``hole'' in $r_i$ and its corresponding value $v_i$
-. To calculate $v_{i-1}$, we need to
+``hole'' in $r_i$ and its corresponding value $v_i$. 
+To calculate $v_{i-1}$, we need to
 locate where that hole is and fill it. 
 We can find this location by
 comparing $r_{i-1}$ and $v_i$. For instance, if $r_{i-1}$ is of shape
@@ -575,31 +575,31 @@
 %$\textit{ALT}$
 \begin{center}
 \begin{tabular}{lcl}
-$r^*$ & $\xrightarrow{\backslash a}$ & $r_1 = (1+0+1 \cdot b + 0 + 1 \cdot b \cdot c) \cdot r^*$\\
-      & $\xrightarrow{\backslash b}$ & $r_2 = (0+0+1 \cdot 1 + 0 + 1 \cdot 1 \cdot c) \cdot r^* +(0+1+0  + 0 + 0) \cdot r^*$\\
-      & $\xrightarrow{\backslash c}$ & $r_3 = ((0+0+0 + 0 + 1 \cdot 1 \cdot 1) \cdot r^* + (0+0+0  + 1 + 0) \cdot r^*) + $\\ 
-      &                              & $\phantom{r_3 = (} ((0+1+0  + 0 + 0) \cdot r^* + (0+0+0  + 1 + 0) \cdot r^* )$
+$r^*$ & $\xrightarrow{\backslash a}$ & $r_1 = (\ONE+\ZERO+\ONE \cdot b + \ZERO + \ONE \cdot b \cdot c) \cdot r^*$\\
+      & $\xrightarrow{\backslash b}$ & $r_2 = (\ZERO+\ZERO+\ONE \cdot \ONE + \ZERO + \ONE \cdot \ONE \cdot c) \cdot r^* +(\ZERO+\ONE+\ZERO  + \ZERO + \ZERO) \cdot r^*$\\
+      & $\xrightarrow{\backslash c}$ & $r_3 = ((\ZERO+\ZERO+\ZERO + \ZERO + \ONE \cdot \ONE \cdot \ONE) \cdot r^* + (\ZERO+\ZERO+\ZERO  + \ONE + \ZERO) \cdot r^*) + $\\ 
+      &                              & $\phantom{r_3 = (} ((\ZERO+\ONE+\ZERO  + \ZERO + \ZERO) \cdot r^* + (\ZERO+\ZERO+\ZERO  + \ONE + \ZERO) \cdot r^* )$
 \end{tabular}
 \end{center}
 
 \noindent
-In  case $r_3$ is nullable, we can call $mkeps$ 
+Since $r_3$ is nullable, we can call $\textit{mkeps}$ 
 to construct a parse tree for how $r_3$ matched the string $abc$. 
-$mkeps$ gives the following value $v_3$: 
+$\textit{mkeps}$ gives the following value $v_3$: 
 \begin{center}
 $\Left(\Left(\Seq(\Right(\Seq(\Empty, \Seq(\Empty,\Empty))), \Stars [])))$
 \end{center}
 The outer $\Left(\Left(\ldots))$ tells us the leftmost nullable part of $r_3$ (underlined):
 
 \begin{center}
-   $( \underline{(0+0+0 + 0 + 1 \cdot 1 \cdot 1) \cdot r^*} + (0+0+0  + 1 + 0)
-  \cdot r^*) +((0+1+0  + 0 + 0) \cdot r^*+(0+0+0  + 1 + 0) \cdot r^* ).$
+   $( \underline{(\ZERO+\ZERO+\ZERO+ \ZERO+ \ONE \cdot \ONE \cdot \ONE) \cdot r^*} + (\ZERO+\ZERO+\ZERO + \ONE + \ZERO)
+  \cdot r^*) +((\ZERO+\ONE+\ZERO  + \ZERO + \ZERO) \cdot r^*+(\ZERO+\ZERO+\ZERO  + \ONE + \ZERO) \cdot r^* ).$
  \end{center}
 
 \noindent
- Note that the leftmost location of term $((0+0+0 + 0 + 1 \cdot 1 \cdot
- 1) \cdot r^*$ (which corresponds to the initial sub-match $abc$) allows
- $mkeps$ to pick it up because $mkeps$ is defined to always choose the
+ Note that the leftmost position of the term $(\ZERO+\ZERO+\ZERO + \ZERO + \ONE \cdot \ONE \cdot
+ \ONE) \cdot r^*$ (which corresponds to the initial sub-match $abc$) allows
+ $\textit{mkeps}$ to pick it up, because $\textit{mkeps}$ is defined to always choose the
  left one when it is nullable. In the case of this example, $abc$ is
  preferred over $a$ or $ab$. This $\Left(\Left(\ldots))$ location is
  naturally generated by two applications of the splitting clause
@@ -611,7 +611,7 @@
 \noindent
 By this clause, we put $(r_1 \backslash c) \cdot r_2 $ at the
 $\textit{front}$ and $r_2 \backslash c$ at the $\textit{back}$. This
-allows $mkeps$ to always pick up among two matches the one with a longer
+allows $\textit{mkeps}$ to always pick, among two matches, the one with the longer
 initial sub-match. Removing the outside $\Left(\Left(...))$, the inside
 sub-value 
  
@@ -620,13 +620,13 @@
 \end{center}
 
 \noindent
-tells us how the empty string $[]$ is matched with $(0+0+0 + 0 + 1 \cdot
-1 \cdot 1) \cdot r^*$. We match $[]$ by a sequence of 2 nullable regular
+tells us how the empty string $[]$ is matched with $(\ZERO+\ZERO+\ZERO + \ZERO + \ONE \cdot
+\ONE \cdot \ONE) \cdot r^*$. We match $[]$ by a sequence of two nullable regular
 expressions. The first one is an alternative; we take its rightmost
 branch---the one whose language contains the empty string. The second
 nullable regular expression is a Kleene star. $\Stars\;[]$ tells us how it
 matches the empty string: by zero iterations, which corresponds to
-$\epsilon$. Now $\textit{inj}$ injects characters back and incrementally
+$\ONE$. Now $\textit{inj}$ injects characters back and incrementally
 builds a parse tree based on $v_3$. Using the value $v_3$, the character
 $c$, and the regular expression $r_2$, we can recover how $r_2$ matched
 the string $[c]$: $\textit{inj} \; r_2 \; c \; v_3$ gives us
@@ -639,7 +639,7 @@
 \end{center}
  for how 
  \begin{center}
- $r_1= (1+0+1 \cdot b + 0 + 1 \cdot b \cdot c) \cdot r*$
+ $r_1= (\ONE+\ZERO+\ONE \cdot b + \ZERO + \ONE \cdot b \cdot c) \cdot r^*$
  \end{center}
  matched the string $bc$ before it split into two pieces. 
   Finally, after injecting character $a$ back to $v_1$, 
@@ -672,12 +672,12 @@
 traversing this information twice. This leads to an optimisation---if we
 store the information for parse trees inside the regular expression,
 update it when we take derivatives of it, and collect the information
-when finished with derivatives and call $mkeps$ for deciding which
+when finished with derivatives and call $\textit{mkeps}$ for deciding which
 branch is POSIX, we can generate the parse tree in one pass, instead of
 doing the $n$ injections afterwards. This leads to Sulzmann and Lu's novel
 idea of using bitcodes in derivatives.
 
-In the next section, we shall focus on the bit-coded algorithm and the
+In the next section, we shall focus on the bitcoded algorithm and the
 process of simplification of regular expressions. This is needed in
 order to obtain \emph{fast} versions of Brzozowski's, and Sulzmann
 and Lu's algorithms.  This is where the PhD-project aims to advance the
@@ -743,21 +743,21 @@
 regular expressions that are either $r$'s child expressions or a
 concatenation of them. Antimirov has proved a tight bound on the size of
 partial derivatives. Roughly
-speaking the size will not exceed $t^2$, t is the number of characters 
-appearing in that regular expression. If we want the size of derivatives 
+speaking the size will be at most quadratic in the size of the regular expression.
+ If we want the size of derivatives 
 to stay below this bound, we would need more aggressive simplifications
 such as opening up alternatives to achieve the maximal cancellation
 of duplicates.
 For example, the parentheses in $(a+b) \cdot c + bc$ can be opened up to
 get $a\cdot c +b \cdot c
 + b \cdot c$, and then simplified to $a \cdot c+b \cdot c$. Another example is from
-$(a^*+a) + (a^*+ 1) + (a +1)$ to $a^*+a+1$.
+$(a^*+a) + (a^*+ \ONE) + (a +\ONE)$ to $a^*+a+\ONE$.
 Adding these more aggressive simplification rules
  helped us to achieve the same size bound as that of the partial derivatives.
 To introduce these ``spilling out alternatives'' simplifications
  and make the correctness proof easier,
 we used bitcodes. 
-
+Bitcodes look like this:
 %This allows us to prove a tight
 %bound on the size of regular expression during the running time of the
 %algorithm if we can establish the connection between our simplification
@@ -767,7 +767,7 @@
 %data, that a similar bound can be obtained for the derivatives in
 %Sulzmann and Lu's algorithm. Let us give some details about this next.
 
-Bitcodes look like this:
+
 \begin{center}
 		$b ::=   S \mid  Z \; \;\;
 bs ::= [] \mid b:bs    
@@ -847,15 +847,14 @@
   $\textit{a}$ & $::=$  & $\textit{ZERO}$\\
                   & $\mid$ & $\textit{ONE}\;\;bs$\\
                   & $\mid$ & $\textit{CHAR}\;\;bs\,c$\\
-                  & $\mid$ & $\textit{ALT}\;\;bs\,as$\\
+                  & $\mid$ & $\textit{ALT}\;\;bs\,a_1 \, a_2$\\
                   & $\mid$ & $\textit{SEQ}\;\;bs\,a_1\,a_2$\\
                   & $\mid$ & $\textit{STAR}\;\;bs\,a$
 \end{tabular}    
 \end{center}  
-
+%(in \textit{ALT})
 \noindent
-where $bs$ stands for bit-sequences, and $as$ (in \textit{ALTS}) for a
-list of annotated regular expressions. These bit-sequences encode
+where $bs$ stands for bit-sequences, and $a$ for \textbf{a}nnotated regular expressions. These bit-sequences encode
 information about the (POSIX) value that should be generated by the
 Sulzmann and Lu algorithm. 
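+For instance, assuming the convention that a bit $Z$ records a left and a
+bit $S$ a right choice, the regular expression $a + b$ could be represented
+in annotated form as
+
+\begin{center}
+$\textit{ALT}\;[]\;(\textit{CHAR}\;[Z]\;a)\;(\textit{CHAR}\;[S]\;b)$
+\end{center}
+
+\noindent
+where the bit-sequences attached to the two characters record which
+alternative a matching value would take.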
 
@@ -882,7 +881,9 @@
 %\end{definition}
 
 \noindent
-In the fourth clause, $fuse$ is an auxiliary function that helps to attach bits to the
+We use up arrows here to indicate that the basic un-annotated regular expressions
+are ``lifted up'' into something slightly more complex.
+In the fourth clause, $\textit{fuse}$ is an auxiliary function that helps to attach bits to the
 front of an annotated regular expression. Its definition is as follows:
 
 \begin{center}
@@ -928,11 +929,31 @@
 \end{tabular}    
 \end{center}    
 %\end{definition}
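+The simplest of these clauses is perhaps the one for characters: assuming
+that derivatives of annotated regular expressions are again written with
+$\backslash$, we have
+
+\begin{center}
+$(\textit{CHAR}\;bs\;c) \backslash c \;=\; \textit{ONE}\;bs$
+\end{center}
+
+\noindent
+that is, the bit-sequence $bs$ is kept, so none of the information
+collected so far is lost.
+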
-For instance, when we unfold $STAR \; bs \; a$ into a sequence, we attach an additional bit Z to the front of $r \backslash c$ to indicate that there is one more star iteration. 
-The other example, the $SEQ$ clause is more subtle-- when $a_1$ is $bnullable$(here bnullable is exactly the same as nullable, except that it is for annotated regular expressions, therefore we omit the definition).
-Assume that $bmkeps$ correctly extracts the bitcode for how $a_1$ matches the string prior to character c(more on this later), then the right branch of $ALTS$, which is $fuse \; bmkeps \;  a_1 (a_2 \backslash c)$ will collapse the regular expression $a_1$(as it has already been fully matched) and store the parsing information at the head of the regular expression $a_2 \backslash c$ by fusing to it. The bitsequence $bs$, which was initially attached to the head of $SEQ$, has now been elevated to the top-level of ALT,
-as this information will be needed whichever way the $SEQ$ is matched--no matter whether c belongs to $a_1$ or $ a_2$.
-After carefully doing these derivatives and maintaining all the parsing information, we complete the parsing by collecting the bits using a special $mkeps$ function for annotated regular expressions--$bmkeps$:
+For instance, when we unfold $\textit{STAR} \; bs \; a$ into a sequence
+in the last clause, we attach an additional bit $\Z$ to the front of
+$r \backslash c$ to indicate that there is one more star iteration.
+A more subtle treatment of the bits lies in the $\textit{SEQ}$ clause
+when $a_1$ is $\textit{bnullable}$ ($\textit{bnullable}$ is exactly
+the same as $\textit{nullable}$, except that it works on annotated
+regular expressions; we therefore omit its definition).
+$\textit{bmkeps} \, a_1$ extracts the bitcode accumulated in $a_1$; this
+bitcode describes how the predecessor of $a_1$ matched the prefix of the
+input string up to character $c$ (the part that was chopped off to give $a_1$).
+Then $\textit{fuse}$ stores the information 
+extracted by $\textit{bmkeps}$ at the head of the 
+regular expression $a_2 \backslash c$. 
+The bit-sequence $bs$, which was initially 
+attached to the head of $\textit{SEQ}$, has 
+now been elevated to the top-level $\textit{ALT}$,
+as this information will be needed 
+whichever way the $\textit{SEQ}\,bs \,a_1\,a_2$ is 
+matched, no matter whether $c$ belongs to $a_1$ or $a_2$.
+After carefully doing these derivatives and 
+maintaining all the parsing information, 
+we complete the parsing by collecting the bits using 
+a special $\textit{mkeps}$ function for annotated 
+regular expressions, namely $\textit{bmkeps}$:
 
 
 %\begin{definition}[\textit{bmkeps}]\mbox{}
@@ -954,7 +975,7 @@
 \noindent
 This function completes the parse tree information by 
 travelling along the path of the regular expression that corresponds to a POSIX value, collecting all the bits, and
-using S to indicate the end of star iterations. If we take the bitsproduced by $bmkeps$ and decode it, 
+using $S$ to indicate the end of star iterations. If we take the bits produced by $\textit{bmkeps}$ and decode them, 
 we get the parse tree we need; the workflow looks like this:\\
 \begin{center}
 \begin{tabular}{lcl}
@@ -1001,7 +1022,7 @@
    &&$ \textit{case} \; (a_1', 1) \Rightarrow  \textit{fuse} \; bs \;  a_1'$ \\
    &&$ \textit{case} \; (a_1', a_2') \Rightarrow  \textit{SEQ} \; bs \; a_1' \;  a_2'$ \\
 
-  $\textit{simp} \; \textit{ALT}\;bs\,as$ & $\dn$ & $\textit{ distinct}( \textit{flatten} ( \textit{map simp as})) \; \textit{match} $ \\
+  $\textit{simp} \; \textit{ALTS}\;bs\,as$ & $\dn$ & $\textit{distinct}( \textit{flatten} ( \textit{map} \; \textit{simp} \; as)) \; \textit{match} $ \\
  &&$\textit{case} \; [] \Rightarrow  0$ \\
   &&$ \textit{case} \; a :: [] \Rightarrow  \textit{fuse} \; bs \; a$ \\
   &&$ \textit{case} \;  as' \Rightarrow  \textit{ALTS} \; bs \; as'$ 
@@ -1009,12 +1030,18 @@
 \end{center}    
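+
+\noindent
+To see the effect of these clauses, consider the following small example
+(this assumes that $\textit{distinct}$ treats two annotated regular
+expressions as duplicates whenever they are equal after erasing their
+bit-sequences):
+
+\begin{center}
+$\textit{simp} \; (\textit{ALTS}\;bs\;[\textit{CHAR}\;bs_1\;a,\;
+\textit{ALTS}\;bs_2\;[\textit{CHAR}\;bs_3\;a,\;\textit{CHAR}\;bs_4\;b]])
+\;=\;
+\textit{ALTS}\;bs\;[\textit{CHAR}\;bs_1\;a,\;\textit{CHAR}\;(bs_2\,@\,bs_4)\;b]$
+\end{center}
+
+\noindent
+The nested alternative is opened up (with $bs_2$ fused onto its children)
+and the later copy of $a$ is removed, while all remaining bits are kept.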
 
 The simplification does a pattern matching on the regular expression. When it detects that
-the regular expression is an alternative or sequence, it will try to simplify its children regular expressions
-recursively and then see if one of the children turn into 0 or 1, which might trigger further simplification
- at the current level. The most involved part is the ALTS clause, where we use two auxiliary functions 
- flatten and distinct to open up nested ALT and reduce as many duplicates as possible.
- Function distinct  keeps the first occurring copy only and remove all later ones when detected duplicates.
- Function flatten opens up nested ALT. Its recursive definition is given below:
+the regular expression is an alternative or a sequence, 
+it will try to simplify its child regular expressions
+recursively and then see if one of the children turns 
+into $\ZERO$ or $\ONE$, which might trigger further simplification
+at the current level. The most involved part is the $\textit{ALTS}$
+clause, where we use two auxiliary functions, 
+$\textit{flatten}$ and $\textit{distinct}$, to open up nested $\textit{ALTS}$ and 
+remove as many duplicates as possible.
+Function $\textit{distinct}$ keeps only the first occurring copy and 
+removes all later duplicates.
+Function $\textit{flatten}$ opens up nested $\textit{ALTS}$. Its recursive
+definition is given below:
  \begin{center}
   \begin{tabular}{@{}lcl@{}}
  $\textit{flatten} \; ((\textit{ALTS}\;bs\,as) :: as')$ & $\dn$ & $(\textit{map} \;
@@ -1059,8 +1086,7 @@
 
 We are currently engaged in two tasks related to this algorithm. 
 
-\begin{itemize}
-\item  
+
 The first one is proving that our simplification rules
 actually do not affect the POSIX value that should be generated by the
 algorithm according to the specification of a POSIX value
@@ -1108,13 +1134,17 @@
 as good as a regular expression that has not been simplified in the subsequent derivative operations.
 To aid this, we use the helper function $\textit{retrieve}$ described by Sulzmann and Lu.
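+ Roughly (in the notation used here, and writing $@$ for appending
+ bit-sequences), its clauses can be sketched as follows:
+ \begin{center}
+ \begin{tabular}{@{}lcl@{}}
+ $\textit{retrieve}\,(\textit{ONE}\,bs)\,\Empty$ & $\dn$ & $bs$\\
+ $\textit{retrieve}\,(\textit{CHAR}\,bs\,c)\,(\Char\,c)$ & $\dn$ & $bs$\\
+ $\textit{retrieve}\,(\textit{ALT}\,bs\,a_1\,a_2)\,(\Left\,v)$ & $\dn$ & $bs \,@\, \textit{retrieve}\,a_1\,v$\\
+ $\textit{retrieve}\,(\textit{ALT}\,bs\,a_1\,a_2)\,(\Right\,v)$ & $\dn$ & $bs \,@\, \textit{retrieve}\,a_2\,v$\\
+ $\textit{retrieve}\,(\textit{SEQ}\,bs\,a_1\,a_2)\,(\Seq\,v_1\,v_2)$ & $\dn$ & $bs \,@\, \textit{retrieve}\,a_1\,v_1 \,@\, \textit{retrieve}\,a_2\,v_2$\\
+ $\textit{retrieve}\,(\textit{STAR}\,bs\,a)\,(\Stars\,[])$ & $\dn$ & $bs \,@\, [S]$\\
+ $\textit{retrieve}\,(\textit{STAR}\,bs\,a)\,(\Stars\,(v::vs))$ & $\dn$ & $bs \,@\, [Z] \,@\, \textit{retrieve}\,a\,v \,@\, \textit{retrieve}\,(\textit{STAR}\,[]\,a)\,(\Stars\,vs)$
+ \end{tabular}
+ \end{center}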
- This function assembles the bitcode that corresponds to a parse tree for how the current 
+ This function assembles the bitcode that corresponds to a parse tree for how the current
 derivative matches the suffix of the string (the characters that have not yet appeared, but are stored in the value).
- Sulzmann and Lu used this to connect the bit-coded algorithm to the older algorithm by the following equation:\\
- $inj \;a\; c \; v = \textit{decode} \; (\textit{retrieve}\; ((\textit{internalise}\; r)\backslash_{simp} c) v)$\\
- A little fact that needs to be stated to help comprehension:\\
- $r^\uparrow = a$($a$ stands for $annotated$).\\
- Fahad and Christian also used this fact to prove  the correctness of bit-coded algorithm without simplification.
+ Sulzmann and Lu used this to connect the bitcoded algorithm to the older algorithm by the following equation:
+ \begin{center}
+ $\textit{inj} \;r\; c \; v = \textit{decode} \; (\textit{retrieve}\; ((\textit{internalise}\; r)\backslash_{simp} c) \; v)$
+ \end{center}
+ A little fact that needs to be stated to help comprehension:
+ \begin{center}
+ $r^\uparrow = a$ ($a$ stands for \textit{annotated}).
+ \end{center}
+ Ausaf and Urban also used this fact to prove the correctness of the bitcoded algorithm without simplification.
 Our purpose in using this, however, is to try to establish \\
 $ \textit{retrieve} \; a \; v \;=\; \textit{retrieve}  \; \textit{simp}(a) \; v'.$\\
  The idea is that using $v'$,
@@ -1125,13 +1155,12 @@
 $\textit{retrieve} \; (r \backslash s) \; v\; = \; \textit{retrieve} \; (r \backslash_{simp} s) \; v'$.\\
 This proves that our simplified version of the regular expression still contains all the bitcodes needed.
 
-\item
+
 The second task is to speed up the more aggressive simplification.
 Currently it is slower than a naive simplification (the naive version as
 implemented in ADU can, of course, explode in some cases). So it still needs to
 be explored how to make it faster. One possibility would be to explore
 again the connection to DFAs. This is very much work in progress.
-\end{itemize}
 
 \section{Conclusion}