--- a/ChengsongTanPhdThesis/Chapters/Inj.tex Mon Jun 06 23:17:45 2022 +0100
+++ b/ChengsongTanPhdThesis/Chapters/Inj.tex Thu Jun 09 12:57:53 2022 +0100
@@ -8,43 +8,34 @@
%and then give the algorithm and its variant, and discuss
%why more aggressive simplifications are needed.
+In this chapter, we define the basic notions
+for regular languages and regular expressions.
+We also define what $\POSIX$ lexing means.
-\section{Basic Concepts and Notations for Strings, Languages, and Regular Expressions}
-We have a primitive datatype char, denoting characters.
-\[ char ::= a
- \mid b
- \mid c
- \mid \ldots
- \mid z
-\]
-(which one is better?)
-\begin{center}
-\begin{tabular}{lcl}
-$\textit{char}$ & $\dn$ & $a | b | c | \ldots$\\
-\end{tabular}
-\end{center}
-They can form strings by lists:
+\section{Basic Concepts}
+In formal language theory, one usually fixes an alphabet,
+that is, a set of characters.
+Here we use the datatype of characters from Isabelle/HOL,
+which roughly corresponds to the ASCII characters.
+Using the usual bracket notation for lists,
+we can define strings as lists of characters:
\begin{center}
\begin{tabular}{lcl}
$\textit{string}$ & $\dn$ & $[] \mid c :: cs$\\
& & (where $c$ has type \textit{char})
\end{tabular}
\end{center}
-And strings can be concatenated to form longer strings:
-\begin{center}
-\begin{tabular}{lcl}
-$[] @ s_2$ & $\dn$ & $s_2$\\
-$(c :: s_1) @ s_2$ & $\dn$ & $c :: (s_1 @ s_2)$
-\end{tabular}
-\end{center}
-
-A set of strings can operate with another set of strings:
+Strings can be concatenated to form longer strings,
+in the same way as we concatenate two lists;
+we denote this operation by $@$ and omit the precise
+recursive definition here.
+We overload the concatenation operator to sets of strings:
\begin{center}
\begin{tabular}{lcl}
$A @ B $ & $\dn$ & $\{s_A @ s_B \mid s_A \in A; s_B \in B \}$\\
\end{tabular}
\end{center}
-We also call the above "language concatenation".
+We also call the above \emph{language concatenation}.
The power of a language is defined recursively, using the
concatenation operator $@$:
\begin{center}
@@ -54,38 +45,43 @@
\end{tabular}
\end{center}
The union of all the natural number powers of a language
-is denoted by the Kleene star operator:
+is called the Kleene star of that language:
\begin{center}
\begin{tabular}{lcl}
$A*$ & $\dn$ & $\bigcup_{i \geq 0} A^i$ \\
\end{tabular}
\end{center}
-To get an induction principle in Isabelle/HOL,
+\noindent
+However, to obtain a convenient induction principle
+in Isabelle/HOL,
we instead define the Kleene star
as an inductive set:
+
\begin{center}
-\begin{tabular}{lcl}
-$[] \in A*$ & &\\
-$s_1 \in A \land \; s_2 \in A* $ & $\implies$ & $s_1 @ s_2 \in A*$\\
-\end{tabular}
+\begin{mathpar}
+\inferrule{}{[] \in A*}

+\inferrule{s_1 \in A \\ s_2 \in A*}{s_1 @ s_2 \in A*}
+\end{mathpar}
\end{center}
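+
+\noindent
+To make these operations concrete, the following Scala sketch (our own
+illustration, not part of the Isabelle formalisation) implements language
+concatenation and language powers for \emph{finite} languages, represented
+as \texttt{Set[String]}; the Kleene star itself is an infinite union and
+can only be approximated by a bounded union of powers:
+\begin{verbatim}
+// illustration only: finite languages as sets of strings
+def concat(a: Set[String], b: Set[String]): Set[String] =
+  for (s1 <- a; s2 <- b) yield s1 + s2               // A @ B
+
+def pow(a: Set[String], n: Int): Set[String] =
+  if (n == 0) Set("") else concat(a, pow(a, n - 1))  // A^n
+
+// bounded approximation of A*: the union of A^0 .. A^k
+def starUpTo(a: Set[String], k: Int): Set[String] =
+  (0 to k).toSet.flatMap((i: Int) => pow(a, i))
+\end{verbatim}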
-\subsection{Language Derivatives}
-We also define an operation of chopping off a character from
-a language:
+
+We also define an operation of ``chopping off'' a character from
+a language, which we call $\Der$, the \emph{derivative} of a language:
\begin{center}
\begin{tabular}{lcl}
$\textit{Der} \;c \;A$ & $\dn$ & $\{ s \mid c :: s \in A \}$\\
\end{tabular}
\end{center}
-
-This can be generalised to chopping off a string from all strings within set $A$:
+\noindent
+This can be generalised to ``chopping off'' a string from all strings within the set $A$,
+with the help of the concatenation operator:
\begin{center}
\begin{tabular}{lcl}
$\textit{Ders} \;w \;A$ & $\dn$ & $\{ s \mid w@s \in A \}$\\
\end{tabular}
\end{center}
-
+\noindent
which is essentially the left quotient $A \backslash L'$ of $A$ against
the singleton language $L' = \{w\}$
in formal language theory.
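+
+\noindent
+As a small illustration (again only for finite languages and with names of
+our own choosing), $\Der$ and $\textit{Ders}$ can be written in Scala as:
+\begin{verbatim}
+// Der c A = { s | c::s in A }
+def Der(c: Char, a: Set[String]): Set[String] =
+  for (s <- a if s.nonEmpty && s.head == c) yield s.tail
+
+// Ders w A = { s | w @ s in A }
+def Ders(w: String, a: Set[String]): Set[String] =
+  for (s <- a if s.startsWith(w)) yield s.drop(w.length)
+\end{verbatim}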
@@ -99,50 +95,48 @@
\[
\Der \; c \; (A @ B) =
\begin{cases}
- ((\Der \; c \; A) @ B ) \cup \Der \; c\; B, & \text{if} \; [] \in A \\
- (\Der \; c \; A) @ B, & \text{otherwise}
+ ((\Der \; c \; A) \, @ \, B ) \cup (\Der \; c\; B) , & \text{if} \; [] \in A \\
+ (\Der \; c \; A) \, @ \, B, & \text{otherwise}
\end{cases}
\]
\end{lemma}
\noindent
This lemma states that if $A$ contains the empty string, $\Der$ can ``pierce'' through it
and get to $B$.
-
The derivative of the language $A*$ can be described using the language derivative
of $A$:
\begin{lemma}
-$\textit{Der} \;c \;A* = (\textit{Der}\; c A) @ (A*)$\\
+$\textit{Der} \;c \;(A*) = (\textit{Der}\; c\; A) \, @ \, (A*)$
\end{lemma}
\begin{proof}
\begin{itemize}
\item{$\subseteq$}
+\noindent
The set
\[ \{s \mid c :: s \in A*\} \]
is included in the set
\[ \{s_1 @ s_2 \mid s_1 \, s_2. s_1 \in \{s \mid c :: s \in A\} \land s_2 \in A* \} \]
because whenever a string starts with a character
-in the language of a Kleene star $A*$, then that character together with some sub-string
-immediately after it will form the first iteration, and the rest of the string will
+in the language of a Kleene star $A*$,
+then that character together with some substring
+immediately after it forms the first iteration,
+and the rest of the string will
still be in $A*$.
\item{$\supseteq$}
Note that
-\[ \Der \; c \; A* = \Der \; c \; (\{ [] \} \cup (A @ A*) ) \]
+\[ \Der \; c \; (A*) = \Der \; c \; (\{ [] \} \cup (A @ A*) ) \]
and
\[ \Der \; c \; (\{ [] \} \cup (A @ A*) ) = \Der\; c \; (A @ A*) \]
where the right-hand side of the above equation can be rewritten
as \[ ((\Der \; c\; A) \,@\, (A*)) \cup A' ,\] where $A'$ is $\Der \; c\; (A*)$ if $[] \in A$ and the empty set otherwise; in either case the set contains $(\Der \; c\; A) \,@\, (A*)$, which gives the inclusion.
\end{itemize}
\end{proof}
+
+\noindent
Before we can define the counterparts of $\textit{Der}$ and $\textit{Ders}$
for regular expressions, we first need to define regular expressions themselves.
\subsection{Regular Expressions and Their Meaning}
-Suppose we have an alphabet $\Sigma$, the strings whose characters
-are from $\Sigma$
-can be expressed as $\Sigma^*$.
-
-We use patterns to define a set of strings concisely. Regular expressions
-are one of such patterns systems:
The basic regular expressions are defined inductively
by the following grammar:
\[ r ::= \ZERO \mid \ONE
@@ -151,6 +145,16 @@
\mid r_1 + r_2
\mid r^*
\]
+\noindent
+We call them basic because we might introduce
+additional constructs later, such as negation
+and bounded repetitions.
+We use $\ZERO$ for the regular expression that
+matches no string at all; note that some authors
+denote it by $\phi$ or $\emptyset$.
+Similarly, the regular expression matching only the
+empty string $[]$ is sometimes
+denoted by $\epsilon$; we use $\ONE$ here.
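+
+As a concrete rendering of this grammar, one could define the following
+datatype in Scala (a sketch of our own; the constructor names are chosen to
+mirror the grammar above):
+\begin{verbatim}
+abstract class Rexp
+case object ZERO extends Rexp                    // matches no string
+case object ONE extends Rexp                     // matches only []
+case class CHAR(c: Char) extends Rexp            // matches the character c
+case class ALT(r1: Rexp, r2: Rexp) extends Rexp  // r1 + r2
+case class SEQ(r1: Rexp, r2: Rexp) extends Rexp  // r1 . r2
+case class STAR(r: Rexp) extends Rexp            // r*
+\end{verbatim}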
The language, or set of strings, denoted
by a regular expression is defined as
@@ -172,13 +176,24 @@
Now that we have the semantic derivative of a language, together with regular expressions and
their language interpretation, we are ready to define derivatives on regular expressions.
\subsection{Brzozowski Derivatives and a Regular Expression Matcher}
+
+\ChristianComment{Hi this part I want to keep the ordering as is, so that it keeps the
+readers engaged with a story how we got to the definition of $\backslash$, rather
+than first "overwhelming" them with the definition of $\nullable$.}
+
The language derivative acts on a set of strings and chops off a character from
all strings in that set. We want to define a derivative operation on regular expressions
so that the language of the derivative, $L(r\backslash c)$,
looks as if it was obtained by taking the language derivative of $L(r)$:
+\begin{center}
\[
-L(r \backslash c) = \Der \; c \; L(r)
+r\backslash c \dn ?
\]
+so that
+\[
+L(r \backslash c) = \Der \; c \; L(r) ?
+\]
+\end{center}
So we mimic the equalities we have for $\Der$ on language concatenation
\[
@@ -213,7 +228,7 @@
$d \backslash c$ & $\dn$ &
$\mathit{if} \;c = d\;\mathit{then}\;\ONE\;\mathit{else}\;\ZERO$\\
$(r_1 + r_2)\backslash c$ & $\dn$ & $r_1 \backslash c \,+\, r_2 \backslash c$\\
-$(r_1 \cdot r_2)\backslash c$ & $\dn$ & $\mathit{if} \, nullable(r_1)$\\
+$(r_1 \cdot r_2)\backslash c$ & $\dn$ & $\mathit{if} \, [] \in L(r_1)$\\
& & $\mathit{then}\;(r_1\backslash c) \cdot r_2 \,+\, r_2\backslash c$\\
& & $\mathit{else}\;(r_1\backslash c) \cdot r_2$\\
$(r^*)\backslash c$ & $\dn$ & $(r\backslash c) \cdot r^*$\\
@@ -236,7 +251,8 @@
for possibly more iterations in later phases of lexing.
-The $\nullable$ function tests whether the empty string $""$
+To test whether $[] \in L(r_1)$, we need the $\nullable$ function,
+which tests whether the empty string $[]$
is in the language of $r$:
@@ -302,7 +318,7 @@
and then define Brzozowski's regular-expression matching algorithm as:
\begin{definition}
-$match\;s\;r \;\dn\; nullable(r\backslash s)$
+$\textit{match}\;s\;r \;\dn\; \nullable(r\backslash s)$
\end{definition}
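+
+\noindent
+The following Scala sketch shows how the matcher can be implemented over the
+\texttt{Rexp} datatype from above, following the clauses of $\backslash$ and
+the standard definition of $\nullable$; it is an illustration only, not the
+Isabelle formalisation used in this thesis:
+\begin{verbatim}
+def nullable(r: Rexp): Boolean = r match {
+  case ZERO        => false
+  case ONE         => true
+  case CHAR(_)     => false
+  case ALT(r1, r2) => nullable(r1) || nullable(r2)
+  case SEQ(r1, r2) => nullable(r1) && nullable(r2)
+  case STAR(_)     => true
+}
+
+def der(c: Char, r: Rexp): Rexp = r match {
+  case ZERO        => ZERO
+  case ONE         => ZERO
+  case CHAR(d)     => if (c == d) ONE else ZERO
+  case ALT(r1, r2) => ALT(der(c, r1), der(c, r2))
+  case SEQ(r1, r2) =>
+    if (nullable(r1)) ALT(SEQ(der(c, r1), r2), der(c, r2))
+    else SEQ(der(c, r1), r2)
+  case STAR(r1)    => SEQ(der(c, r1), STAR(r1))
+}
+
+// r \ s : fold the single-character derivative over the string s
+def ders(s: List[Char], r: Rexp): Rexp = s match {
+  case Nil     => r
+  case c :: cs => ders(cs, der(c, r))
+}
+
+def matcher(s: String, r: Rexp): Boolean = nullable(ders(s.toList, r))
+\end{verbatim}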
\noindent
@@ -316,6 +332,362 @@
\end{equation}
\noindent
+
+
+The algorithm builds derivatives and then tests whether the empty string
+is in the language of the resulting regular expression.
+So far, so good. But what if we want to
+do lexing instead of just getting a YES/NO answer?
+Sulzmann and Lu~\cite{Sulzmann2014} came up with a nice and
+elegant solution for this (arguably as beautiful as the definition of the derivative itself).
+
+\subsection*{Values and the Lexing Algorithm by Sulzmann and Lu}
+Here we present the two phases of the regular expression lexing
+algorithm by Sulzmann and Lu, which uses the function $\inj$.
+They first defined a datatype for storing the
+lexing information, called a \emph{value} or
+sometimes also a \emph{lexical value}. Values and regular
+expressions correspond to each other as illustrated in the following
+table:
+
+\begin{center}
+ \begin{tabular}{c@{\hspace{20mm}}c}
+ \begin{tabular}{@{}rrl@{}}
+ \multicolumn{3}{@{}l}{\textbf{Regular Expressions}}\medskip\\
+ $r$ & $::=$ & $\ZERO$\\
+ & $\mid$ & $\ONE$ \\
+ & $\mid$ & $c$ \\
+ & $\mid$ & $r_1 \cdot r_2$\\
+ & $\mid$ & $r_1 + r_2$ \\
+ \\
+ & $\mid$ & $r^*$ \\
+ \end{tabular}
+ &
+ \begin{tabular}{@{\hspace{0mm}}rrl@{}}
+ \multicolumn{3}{@{}l}{\textbf{Values}}\medskip\\
+ $v$ & $::=$ & \\
+ & & $\Empty$ \\
+ & $\mid$ & $\Char(c)$ \\
+ & $\mid$ & $\Seq\,v_1\, v_2$\\
+ & $\mid$ & $\Left(v)$ \\
+ & $\mid$ & $\Right(v)$ \\
+ & $\mid$ & $\Stars\,[v_1,\ldots\,v_n]$ \\
+ \end{tabular}
+ \end{tabular}
+\end{center}
+
+\noindent
+There is a formal binary relation, written $\vdash v : r$, for telling whether the structure
+of a value $v$ agrees with the regular expression $r$ (we show only some of the rules here):
+\begin{mathpar}
+\inferrule{}{\vdash \Char(c) : \mathbf{c}} \hspace{2em}
+\inferrule{}{\vdash \Empty : \ONE} \hspace{2em}
+\inferrule{\vdash v_1 : r_1 \\ \vdash v_2 : r_2 }{\vdash \Seq(v_1, v_2) : (r_1 \cdot r_2)}
+\end{mathpar}
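+
+\noindent
+In the same Scala sketch as before, values can be represented by the
+following datatype (again our own rendering; \texttt{Sequ} and \texttt{Chr}
+are spelled this way only to avoid clashes with \texttt{Seq} and
+\texttt{Char} from the Scala library):
+\begin{verbatim}
+abstract class Val
+case object Empty extends Val
+case class Chr(c: Char) extends Val
+case class Sequ(v1: Val, v2: Val) extends Val
+case class Left(v: Val) extends Val    // shadows Scala's Left in this sketch
+case class Right(v: Val) extends Val   // shadows Scala's Right in this sketch
+case class Stars(vs: List[Val]) extends Val
+\end{verbatim}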
+
+Building on top of Sulzmann and Lu's attempt to formalise the
+notion of POSIX lexing rules \parencite{Sulzmann2014},
+Ausaf and Urban \parencite{AusafDyckhoffUrban2016} modelled
+POSIX matching as a ternary relation recursively defined in a
+natural deduction style.
+The formal definition of a $\POSIX$ value $v$ for a regular expression
+$r$ and string $s$, denoted as $(s, r) \rightarrow v$, can be specified
+in the following set of rules:
+\ChristianComment{Will complete later}
+\newcommand*{\inference}[3][t]{%
+ \begingroup
+ \def\and{\\}%
+ \begin{tabular}[#1]{@{\enspace}c@{\enspace}}
+ #2 \\
+ \hline
+ #3
+ \end{tabular}%
+ \endgroup
+}
+\begin{center}
+\inference{$s_1 @ s_2 = s$ \and $(\nexists s_3 s_4 s_5. s_1 @ s_5 = s_3 \land s_5 \neq [] \land s_3 @ s_4 = s \land (s_3, r_1) \rightarrow v_3 \land (s_4, r_2) \rightarrow v_4)$ \and $(s_1, r_1) \rightarrow v_1$ \and $(s_2, r_2) \rightarrow v_2$ }{$(s, r_1 \cdot r_2) \rightarrow \Seq(v_1, v_2)$ }
+\end{center}
+\noindent
+The above $\POSIX$ rules can be explained intuitively as follows:
+\begin{itemize}
+\item
+match the leftmost alternative when multiple options for matching
+are available, and
+\item
+always match a subpart as much as possible before proceeding
+to the next token.
+\end{itemize}
+
+The reason why we are interested in $\POSIX$ values is that they can
+be used in practice in the lexing phase of a compiler front end.
+For instance, when lexing a code snippet
+$\textit{iffoo} = 3$ with the regular expression $\textit{keyword} + \textit{identifier}$, we want $\textit{iffoo}$ to be recognized
+as an identifier rather than a keyword.
+
+A good property of $\POSIX$ values is that,
+given the same regular expression $r$ and string $s$,
+the $\POSIX$ value for them is uniquely determined:
+\begin{lemma}
+$\textit{if} \,(s, r) \rightarrow v_1 \land (s, r) \rightarrow v_2\quad \textit{then} \; v_1 = v_2$
+\end{lemma}
+Now that we know what a $\POSIX$ value is, the problem is how to
+compute such a value in a lexing algorithm using derivatives.
+
+\subsection{Sulzmann and Lu's Injection-based Lexing Algorithm}
+
+The contribution of Sulzmann and Lu is an extension of Brzozowski's
+algorithm by a second phase (the first phase being building successive
+derivatives---see \eqref{graph:successive_ders}). In this second phase, a POSIX value
+is generated if the regular expression matches the string.
+Two functions are involved: $\inj$ and $\mkeps$.
+The function $\mkeps$ constructs a value for the last
+of the successive derivatives:
+\begin{ceqn}
+\begin{equation}\label{graph:mkeps}
+\begin{tikzcd}
+r_0 \arrow[r, "\backslash c_0"] & r_1 \arrow[r, "\backslash c_1"] & r_2 \arrow[r, dashed] & r_n \arrow[d, "mkeps" description] \\
+ & & & v_n
+\end{tikzcd}
+\end{equation}
+\end{ceqn}
+
+It tells us how the empty string can be matched by a
+(nullable) regular expression, in a $\POSIX$ way:
+
+ \begin{center}
+ \begin{tabular}{lcl}
+ $\mkeps(\ONE)$ & $\dn$ & $\Empty$ \\
+ $\mkeps(r_{1}+r_{2})$ & $\dn$
+ & \textit{if} $\nullable(r_{1})$\\
+ & & \textit{then} $\Left(\mkeps(r_{1}))$\\
+ & & \textit{else} $\Right(\mkeps(r_{2}))$\\
+ $\mkeps(r_1\cdot r_2)$ & $\dn$ & $\Seq\,(\mkeps\,r_1)\,(\mkeps\,r_2)$\\
+ $\mkeps(r^*)$ & $\dn$ & $\Stars\,[]$
+ \end{tabular}
+ \end{center}
+
+
+\noindent
+We favour the left alternative if it can match the empty string.
+When a star matches the empty string,
+we give the $\Stars$ constructor an empty list, meaning
+no iteration is taken.
+The result of a call to $\mkeps$ on a $\nullable$ regular expression $r$
+is a $\POSIX$ value for $r$ and the empty string:
+\begin{lemma}\label{mePosix}
+$\nullable(r) \implies (r, []) \rightarrow (\mkeps\; r)$
+\end{lemma}
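+
+\noindent
+In the Scala sketch, $\mkeps$ becomes the following function (it is only
+ever called on nullable regular expressions, hence the missing clauses):
+\begin{verbatim}
+def mkeps(r: Rexp): Val = r match {
+  case ONE         => Empty
+  case ALT(r1, r2) =>
+    if (nullable(r1)) Left(mkeps(r1)) else Right(mkeps(r2))
+  case SEQ(r1, r2) => Sequ(mkeps(r1), mkeps(r2))
+  case STAR(_)     => Stars(Nil)
+}
+\end{verbatim}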
+
+
+After the $\mkeps$-call, we inject back the characters one by one in order to build
+the lexical value $v_i$ for how the regex $r_i$ matches the string $s_i$
+($s_i = c_i \ldots c_{n-1}$ ) from the previous lexical value $v_{i+1}$.
+After injecting back $n$ characters, we get the lexical value for how $r_0$
+matches $s$.
+To do this, Sulzmann and Lu defined a function that reverses
+the ``chopping off'' of characters during the derivative phase. The
+corresponding function is called \emph{injection}, written
+$\textit{inj}$; it takes three arguments: the first one is a regular
+expression ${r_{i-1}}$, before the character is chopped off, the second
+is a character ${c_{i-1}}$, the character we want to inject and the
+third argument is the value ${v_i}$, into which one wants to inject the
+character (it corresponds to the regular expression after the character
+has been chopped off). The result of this function is a new value.
+\begin{ceqn}
+\begin{equation}\label{graph:inj}
+\begin{tikzcd}
+r_1 \arrow[r, dashed] \arrow[d]& r_i \arrow[r, "\backslash c_i"] \arrow[d] & r_{i+1} \arrow[r, dashed] \arrow[d] & r_n \arrow[d, "mkeps" description] \\
+v_1 \arrow[u] & v_i \arrow[l, dashed] & v_{i+1} \arrow[l,"inj_{r_i} c_i"] & v_n \arrow[l, dashed]
+\end{tikzcd}
+\end{equation}
+\end{ceqn}
+
+
+\noindent
+The
+definition of $\textit{inj}$ is as follows:
+
+\begin{center}
+\begin{tabular}{l@{\hspace{1mm}}c@{\hspace{1mm}}l}
+ $\textit{inj}\,(c)\,c\,\Empty$ & $\dn$ & $\Char\,c$\\
+ $\textit{inj}\,(r_1 + r_2)\,c\,\Left(v)$ & $\dn$ & $\Left(\textit{inj}\,r_1\,c\,v)$\\
+ $\textit{inj}\,(r_1 + r_2)\,c\,\Right(v)$ & $\dn$ & $\Right(\textit{inj}\,r_2\,c\,v)$\\
+ $\textit{inj}\,(r_1 \cdot r_2)\,c\,\Seq(v_1,v_2)$ & $\dn$ & $\Seq(\textit{inj}\,r_1\,c\,v_1,v_2)$\\
+ $\textit{inj}\,(r_1 \cdot r_2)\,c\,\Left(\Seq(v_1,v_2))$ & $\dn$ & $\Seq(\textit{inj}\,r_1\,c\,v_1,v_2)$\\
+ $\textit{inj}\,(r_1 \cdot r_2)\,c\,\Right(v)$ & $\dn$ & $\Seq(\mkeps(r_1),\textit{inj}\,r_2\,c\,v)$\\
+ $\textit{inj}\,(r^*)\,c\,\Seq(v,\Stars\,vs)$ & $\dn$ & $\Stars((\textit{inj}\,r\,c\,v)\,::\,vs)$\\
+\end{tabular}
+\end{center}
+
+\noindent
+This definition is by recursion on the ``shape'' of regular
+expressions and values.
+Each clause does one thing: it identifies the ``hole'' in the
+value into which the character is injected back.
+Consider, for instance, the last clause, which injects a character back into
+a value for a star. The value must be a sequence value:
+its first component corresponds to the child regular expression of the star
+with the first character chopped off, that is, the iteration of the star
+that has just been unfolded, while its second component collects the already
+matched star iterations. So we inject the character
+back into the first component and form a new value with this latest iteration
+added to the front of the list of iterations, all under the $\Stars$
+constructor at the top level.
+Importantly, the $\POSIX$ property is maintained by each injection step:
+\begin{lemma}\label{injPosix}
+$(r \backslash c, s) \rightarrow v \implies (r, c :: s) \rightarrow (\inj\; r \; c\; v)$
+\end{lemma}
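+
+\noindent
+Translated into the Scala sketch, the clauses of $\inj$ read as follows
+(assuming, as the mathematical definition does, that the value is
+well-formed for the given regular expression):
+\begin{verbatim}
+def inj(r: Rexp, c: Char, v: Val): Val = (r, v) match {
+  case (CHAR(_), Empty)                 => Chr(c)
+  case (ALT(r1, _), Left(v1))           => Left(inj(r1, c, v1))
+  case (ALT(_, r2), Right(v2))          => Right(inj(r2, c, v2))
+  case (SEQ(r1, _), Sequ(v1, v2))       => Sequ(inj(r1, c, v1), v2)
+  case (SEQ(r1, _), Left(Sequ(v1, v2))) => Sequ(inj(r1, c, v1), v2)
+  case (SEQ(r1, r2), Right(v2))         => Sequ(mkeps(r1), inj(r2, c, v2))
+  case (STAR(r1), Sequ(v1, Stars(vs)))  => Stars(inj(r1, c, v1) :: vs)
+}
+\end{verbatim}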
+
+
+Putting all the functions $\inj$, $\mkeps$, $\backslash$ together,
+and taking into consideration the possibility of a non-match,
+we have a lexer with the following recursive definition:
+\begin{center}
+\begin{tabular}{lcl}
+$\lexer \; r \; [] $ & $\dn$ & $\textit{if}\; (\nullable \; r)\; \textit{then}\; \Some(\mkeps \; r) \; \textit{else} \; \None$\\
+$\lexer \; r \;(c::s)$ & $\dn$ & $\textit{case}\; \lexer \; (r\backslash c)\; s \;\textit{of}$\\
+& & $\quad \None \Rightarrow \None$\\
+& & $\quad \mid \Some(v) \Rightarrow \Some(\inj \; r\; c\; v)$
+\end{tabular}
+\end{center}
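+
+\noindent
+In the Scala sketch this becomes:
+\begin{verbatim}
+def lexer(r: Rexp, s: List[Char]): Option[Val] = s match {
+  case Nil => if (nullable(r)) Some(mkeps(r)) else None
+  case c :: cs => lexer(der(c, r), cs) match {
+    case None    => None
+    case Some(v) => Some(inj(r, c, v))
+  }
+}
+\end{verbatim}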
+ \noindent
+ The central property of the $\lexer$ is that it gives the correct result by
+ $\POSIX$ standards:
+ \begin{lemma}
+ \begin{tabular}{l}
+ $s \in L(r) \Longleftrightarrow (\exists v. \; \lexer \; r \; s = \Some(v) \land (r, \; s) \rightarrow v)$\\
+ $s \notin L(r) \Longleftrightarrow (\lexer \; r\; s = \None)$
+ \end{tabular}
+ \end{lemma}
+
+
+ \begin{proof}
+ By induction on $s$. $r$ is allowed to be an arbitrary regular expression.
+ The $[]$ case is proven by lemma \ref{mePosix}, and the inductive case
+ by lemma \ref{injPosix}.
+ \end{proof}
+
+For convenience, we shall employ the following notations: the regular
+expression we start with is $r_0$, and the given string $s$ is composed
+of characters $c_0 c_1 \ldots c_{n-1}$. In the first phase from the
+left to right, we build the derivatives $r_1$, $r_2$, \ldots according
+to the characters $c_0$, $c_1$ until we exhaust the string and obtain
+the derivative $r_n$. We test whether this derivative is
+$\textit{nullable}$ or not. If not, we know the string does not match
+$r$, and no value needs to be generated. If yes, we start building the
+values incrementally by \emph{injecting} back the characters into the
+earlier values $v_n, \ldots, v_0$.
+Pictorially, the algorithm is as follows:
+
+\begin{ceqn}
+\begin{equation}\label{graph:2}
+\begin{tikzcd}
+r_0 \arrow[r, "\backslash c_0"] \arrow[d] & r_1 \arrow[r, "\backslash c_1"] \arrow[d] & r_2 \arrow[r, dashed] \arrow[d] & r_n \arrow[d, "mkeps" description] \\
+v_0 & v_1 \arrow[l,"inj_{r_0} c_0"] & v_2 \arrow[l, "inj_{r_1} c_1"] & v_n \arrow[l, dashed]
+\end{tikzcd}
+\end{equation}
+\end{ceqn}
+
+
+\noindent
+This is the second phase of the
+algorithm, which runs from right to left. For the first value $v_n$, we call the
+function $\textit{mkeps}$, which, as defined above, builds a POSIX lexical value
+for how the empty string is matched by the (nullable) regular
+expression $r_n$.
+
+
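+As a small sanity check of the sketches above, consider the regular expression
+$(a + a\cdot b)\cdot(b + \ONE)$ and the string $ab$ (this particular example is
+our own choice and not part of the formal development):
+\begin{verbatim}
+// lexing "ab" with (a + ab)(b + 1)
+val r = SEQ(ALT(CHAR('a'), SEQ(CHAR('a'), CHAR('b'))),
+            ALT(CHAR('b'), ONE))
+val v = lexer(r, "ab".toList)
+// expected POSIX result:
+//   Some(Sequ(Right(Sequ(Chr(a),Chr(b))), Right(Empty)))
+// i.e. the longer alternative "ab" is preferred for the first component
+\end{verbatim}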
+
+We have mentioned before that derivatives without simplification
+can get clumsy, and this is true for values as well: by construction they
+reflect the size of the regular expression.
+
+One can introduce simplification on the regular expressions and values, but one has to
+be careful not to break correctness, as the injection
+function relies heavily on the structure of the regular expressions and values
+being correct and matching each other.
+This can be achieved by recording some extra rectification functions
+during the derivative steps and applying these rectifications
+during the injection phase.
+One can then prove that the POSIX value of how
+regular expressions match strings is not affected, although this is considerably harder
+to establish.
+Some initial results in this regard have been
+obtained in \cite{AusafDyckhoffUrban2016}.
+
+
+
+%Brzozowski, after giving the derivatives and simplification,
+%did not explore lexing with simplification, or he may well be
+%stuck on an efficient simplification with proof.
+%He went on to examine the use of derivatives together with
+%automaton, and did not try lexing using products.
+
+We want to get rid of the complex and fragile rectification of values.
+Can we avoid creating the intermediate values $v_1,\ldots, v_n$ altogether,
+and obtain the lexing information, which is already there, while
+doing derivatives in one pass, without a second injection phase?
+At the same time, can we make sure that simplifications
+are easily handled without breaking the correctness of the algorithm?
+
+Sulzmann and Lu solved this problem by
+adding additional information, called \emph{bitcodes},
+to the regular expressions.
+
+
+
+
+
+With the formally-specified rules for what a POSIX match is,
+Ausaf and Urban proved in Isabelle/HOL that the injection-based algorithm gives correct results.
+But having a correct result is still not enough:
+we also want at least some degree of \emph{efficiency}.
+
+
+A pair of a regular expression and a string can have multiple lexical values.
+Take the example where $r= (a^*\cdot a^*)^*$ and the string
+$s=\underbrace{aa\ldots a}_\text{n \textit{a}s}$.
+If we do not allow any empty iterations in its lexical values,
+there are $n - 1$ ``splitting points'' in $s$, each of which we can choose to
+split at or not, so that the sub-strings
+between the chosen splitting points form different iterations:
+\begin{center}
+\begin{tabular}{lcr}
+$a \mid aaa $ & $\rightarrow$ & $\Stars\, [v_{iteration \,a},\, v_{iteration \,aaa}]$\\
+$aa \mid aa $ & $\rightarrow$ & $\Stars\, [v_{iteration \, aa},\, v_{iteration \, aa}]$\\
+$a \mid aa\mid a $ & $\rightarrow$ & $\Stars\, [v_{iteration \, a},\, v_{iteration \, aa}, \, v_{iteration \, a}]$\\
+ & $\textit{etc}.$ &
+ \end{tabular}
+\end{center}
+\noindent
+For each iteration, there are in addition multiple ways to split the matched
+substring between the two $a^*$s.
+It is therefore not surprising that there are exponentially many distinct lexical values
+for the pair $r= (a^*\cdot a^*)^*$ and
+$s=\underbrace{aa\ldots a}_\text{n \textit{a}s}$.
+A lexer that keeps all the possible values will naturally
+have an exponential runtime on such ambiguous regular expressions.
+With just $\inj$ and $\mkeps$ and no simplification, the derivatives
+(and the values mirroring their size) can grow exponentially,
+so Sulzmann and Lu's injection-based algorithm is by nature exponential in the worst case.
+Somehow one has to decide which lexical value to keep and
+output in a lexing algorithm.
+
+
+ For example, the above pair $r= (a^*\cdot a^*)^*$ and
+$s=\underbrace{aa\ldots a}_\text{n \textit{a}s}$ has the POSIX value
+$ \Stars\,[\Seq(\Stars\,[\underbrace{\Char(a),\ldots,\Char(a)}_\text{n iterations}], \Stars\,[])]$.
+The output we want from a lexing algorithm is such a POSIX match,
+encoded as a value.
+
+
+
+
+
+%kind of redundant material
+
+
where we start with a regular expression $r_0$, build successive
derivatives until we exhaust the string and then use \textit{nullable}
to test whether the result can match the empty string. It can be
@@ -405,314 +777,5 @@
a very tight size bound, possibly as low
as that of the \emph{partial derivatives}~\parencite{Antimirov1995}.
-Building derivatives and then simplifying them.
-So far, so good. But what if we want to
-do lexing instead of just getting a YES/NO answer?
-This requires us to go back again to the world
-without simplification first for a moment.
-Sulzmann and Lu~\cite{Sulzmann2014} first came up with a nice and
-elegant(arguably as beautiful as the original
-derivatives definition) solution for this.
-
-\subsection*{Values and the Lexing Algorithm by Sulzmann and Lu}
-
-
-They first defined the datatypes for storing the
-lexing information called a \emph{value} or
-sometimes also \emph{lexical value}. These values and regular
-expressions correspond to each other as illustrated in the following
-table:
-
-\begin{center}
- \begin{tabular}{c@{\hspace{20mm}}c}
- \begin{tabular}{@{}rrl@{}}
- \multicolumn{3}{@{}l}{\textbf{Regular Expressions}}\medskip\\
- $r$ & $::=$ & $\ZERO$\\
- & $\mid$ & $\ONE$ \\
- & $\mid$ & $c$ \\
- & $\mid$ & $r_1 \cdot r_2$\\
- & $\mid$ & $r_1 + r_2$ \\
- \\
- & $\mid$ & $r^*$ \\
- \end{tabular}
- &
- \begin{tabular}{@{\hspace{0mm}}rrl@{}}
- \multicolumn{3}{@{}l}{\textbf{Values}}\medskip\\
- $v$ & $::=$ & \\
- & & $\Empty$ \\
- & $\mid$ & $\Char(c)$ \\
- & $\mid$ & $\Seq\,v_1\, v_2$\\
- & $\mid$ & $\Left(v)$ \\
- & $\mid$ & $\Right(v)$ \\
- & $\mid$ & $\Stars\,[v_1,\ldots\,v_n]$ \\
- \end{tabular}
- \end{tabular}
-\end{center}
-
-\noindent
-
-Building on top of Sulzmann and Lu's attempt to formalise the
-notion of POSIX lexing rules \parencite{Sulzmann2014},
-Ausaf and Urban\parencite{AusafDyckhoffUrban2016} modelled
-POSIX matching as a ternary relation recursively defined in a
-natural deduction style.
-With the formally-specified rules for what a POSIX matching is,
-they proved in Isabelle/HOL that the algorithm gives correct results.
-
-But having a correct result is still not enough,
-we want at least some degree of $\mathbf{efficiency}$.
-
-
-
-One regular expression can have multiple lexical values. For example
-for the regular expression $(a+b)^*$, it has a infinite list of
-values corresponding to it: $\Stars\,[]$, $\Stars\,[\Left(Char(a))]$,
-$\Stars\,[\Right(Char(b))]$, $\Stars\,[\Left(Char(a),\,\Right(Char(b))]$,
-$\ldots$, and vice versa.
-Even for the regular expression matching a particular string, there could
-be more than one value corresponding to it.
-Take the example where $r= (a^*\cdot a^*)^*$ and the string
-$s=\underbrace{aa\ldots a}_\text{n \textit{a}s}$.
-If we do not allow any empty iterations in its lexical values,
-there will be $n - 1$ "splitting points" on $s$ we can choose to
-split or not so that each sub-string
-segmented by those chosen splitting points will form different iterations:
-\begin{center}
-\begin{tabular}{lcr}
-$a \mid aaa $ & $\rightarrow$ & $\Stars\, [v_{iteration \,a},\, v_{iteration \,aaa}]$\\
-$aa \mid aa $ & $\rightarrow$ & $\Stars\, [v_{iteration \, aa},\, v_{iteration \, aa}]$\\
-$a \mid aa\mid a $ & $\rightarrow$ & $\Stars\, [v_{iteration \, a},\, v_{iteration \, aa}, \, v_{iteration \, a}]$\\
- & $\textit{etc}.$ &
- \end{tabular}
-\end{center}
-
-And for each iteration, there are still multiple ways to split
-between the two $a^*$s.
-It is not surprising there are exponentially many lexical values
-that are distinct for the regex and string pair $r= (a^*\cdot a^*)^*$ and
-$s=\underbrace{aa\ldots a}_\text{n \textit{a}s}$.
-
-A lexer to keep all the possible values will naturally
-have an exponential runtime on ambiguous regular expressions.
-Somehow one has to decide which lexical value to keep and
-output in a lexing algorithm.
-In practice, we are usually
-interested in POSIX values, which by intuition always
-\begin{itemize}
-\item
-match the leftmost regular expression when multiple options of matching
-are available
-\item
-always match a subpart as much as possible before proceeding
-to the next token.
-\end{itemize}
-The formal definition of a $\POSIX$ value $v$ for a regular expression
-$r$ and string $s$, denoted as $(s, r) \rightarrow v$, can be specified
-in the following set of rules:
-(TODO: write the entire set of inference rules for POSIX )
-\newcommand*{\inference}[3][t]{%
- \begingroup
- \def\and{\\}%
- \begin{tabular}[#1]{@{\enspace}c@{\enspace}}
- #2 \\
- \hline
- #3
- \end{tabular}%
- \endgroup
-}
-\begin{center}
-\inference{$s_1 @ s_2 = s$ \and $(\nexists s_3 s_4 s_5. s_1 @ s_5 = s_3 \land s_5 \neq [] \land s_3 @ s_4 = s \land (s_3, r_1) \rightarrow v_3 \land (s_4, r_2) \rightarrow v_4)$ \and $(s_1, r_1) \rightarrow v_1$ \and $(s_2, r_2) \rightarrow v_2$ }{$(s, r_1 \cdot r_2) \rightarrow \Seq(v_1, v_2)$ }
-\end{center}
-
-The reason why we are interested in $\POSIX$ values is that they can
-be practically used in the lexing phase of a compiler front end.
-For instance, when lexing a code snippet
-$\textit{iffoo} = 3$ with the regular expression $\textit{keyword} + \textit{identifier}$, we want $\textit{iffoo}$ to be recognized
-as an identifier rather than a keyword.
-
- For example, the above $r= (a^*\cdot a^*)^*$ and
-$s=\underbrace{aa\ldots a}_\text{n \textit{a}s}$ example has the POSIX value
-$ \Stars\,[\Seq(Stars\,[\underbrace{\Char(a),\ldots,\Char(a)}_\text{n iterations}], Stars\,[])]$.
-The output of an algorithm we want would be a POSIX matching
-encoded as a value.
-
-
-
-
-The contribution of Sulzmann and Lu is an extension of Brzozowski's
-algorithm by a second phase (the first phase being building successive
-derivatives---see \eqref{graph:successive_ders}). In this second phase, a POSIX value
-is generated if the regular expression matches the string.
-How can we construct a value out of regular expressions and character
-sequences only?
-Two functions are involved: $\inj$ and $\mkeps$.
-The function $\mkeps$ constructs a value from the last
-one of all the successive derivatives:
-\begin{ceqn}
-\begin{equation}\label{graph:mkeps}
-\begin{tikzcd}
-r_0 \arrow[r, "\backslash c_0"] & r_1 \arrow[r, "\backslash c_1"] & r_2 \arrow[r, dashed] & r_n \arrow[d, "mkeps" description] \\
- & & & v_n
-\end{tikzcd}
-\end{equation}
-\end{ceqn}
-
-It tells us how can an empty string be matched by a
-regular expression, in a $\POSIX$ way:
-
- \begin{center}
- \begin{tabular}{lcl}
- $\mkeps(\ONE)$ & $\dn$ & $\Empty$ \\
- $\mkeps(r_{1}+r_{2})$ & $\dn$
- & \textit{if} $\nullable(r_{1})$\\
- & & \textit{then} $\Left(\mkeps(r_{1}))$\\
- & & \textit{else} $\Right(\mkeps(r_{2}))$\\
- $\mkeps(r_1\cdot r_2)$ & $\dn$ & $\Seq\,(\mkeps\,r_1)\,(\mkeps\,r_2)$\\
- $mkeps(r^*)$ & $\dn$ & $\Stars\,[]$
- \end{tabular}
- \end{center}
-
-
-\noindent
-We favour the left to match an empty string if there is a choice.
-When there is a star for us to match the empty string,
-we give the $\Stars$ constructor an empty list, meaning
-no iterations are taken.
-
-
-After the $\mkeps$-call, we inject back the characters one by one in order to build
-the lexical value $v_i$ for how the regex $r_i$ matches the string $s_i$
-($s_i = c_i \ldots c_{n-1}$ ) from the previous lexical value $v_{i+1}$.
-After injecting back $n$ characters, we get the lexical value for how $r_0$
-matches $s$. The POSIX value is maintained throughout the process.
-For this Sulzmann and Lu defined a function that reverses
-the ``chopping off'' of characters during the derivative phase. The
-corresponding function is called \emph{injection}, written
-$\textit{inj}$; it takes three arguments: the first one is a regular
-expression ${r_{i-1}}$, before the character is chopped off, the second
-is a character ${c_{i-1}}$, the character we want to inject and the
-third argument is the value ${v_i}$, into which one wants to inject the
-character (it corresponds to the regular expression after the character
-has been chopped off). The result of this function is a new value.
-\begin{ceqn}
-\begin{equation}\label{graph:inj}
-\begin{tikzcd}
-r_1 \arrow[r, dashed] \arrow[d]& r_i \arrow[r, "\backslash c_i"] \arrow[d] & r_{i+1} \arrow[r, dashed] \arrow[d] & r_n \arrow[d, "mkeps" description] \\
-v_1 \arrow[u] & v_i \arrow[l, dashed] & v_{i+1} \arrow[l,"inj_{r_i} c_i"] & v_n \arrow[l, dashed]
-\end{tikzcd}
-\end{equation}
-\end{ceqn}
-
-
-\noindent
-The
-definition of $\textit{inj}$ is as follows:
-
-\begin{center}
-\begin{tabular}{l@{\hspace{1mm}}c@{\hspace{1mm}}l}
- $\textit{inj}\,(c)\,c\,Empty$ & $\dn$ & $Char\,c$\\
- $\textit{inj}\,(r_1 + r_2)\,c\,\Left(v)$ & $\dn$ & $\Left(\textit{inj}\,r_1\,c\,v)$\\
- $\textit{inj}\,(r_1 + r_2)\,c\,Right(v)$ & $\dn$ & $Right(\textit{inj}\,r_2\,c\,v)$\\
- $\textit{inj}\,(r_1 \cdot r_2)\,c\,Seq(v_1,v_2)$ & $\dn$ & $Seq(\textit{inj}\,r_1\,c\,v_1,v_2)$\\
- $\textit{inj}\,(r_1 \cdot r_2)\,c\,\Left(Seq(v_1,v_2))$ & $\dn$ & $Seq(\textit{inj}\,r_1\,c\,v_1,v_2)$\\
- $\textit{inj}\,(r_1 \cdot r_2)\,c\,Right(v)$ & $\dn$ & $Seq(\textit{mkeps}(r_1),\textit{inj}\,r_2\,c\,v)$\\
- $\textit{inj}\,(r^*)\,c\,Seq(v,Stars\,vs)$ & $\dn$ & $Stars((\textit{inj}\,r\,c\,v)\,::\,vs)$\\
-\end{tabular}
-\end{center}
-
-\noindent This definition is by recursion on the ``shape'' of regular
-expressions and values.
-The clauses do one thing--identifying the ``hole'' on a
-value to inject the character back into.
-For instance, in the last clause for injecting back to a value
-that would turn into a new star value that corresponds to a star,
-we know it must be a sequence value. And we know that the first
-value of that sequence corresponds to the child regex of the star
-with the first character being chopped off--an iteration of the star
-that had just been unfolded. This value is followed by the already
-matched star iterations we collected before. So we inject the character
-back to the first value and form a new value with this latest iteration
-being added to the previous list of iterations, all under the $\Stars$
-top level.
-
-Putting all the functions $\inj$, $\mkeps$, $\backslash$ together,
-we have a lexer with the following recursive definition:
-\begin{center}
-\begin{tabular}{lcr}
-$\lexer \; r \; [] $ & $=$ & $\mkeps \; r$\\
-$\lexer \; r \;c::s$ & $=$ & $\inj \; r \; c (\lexer (r\backslash c) s)$
-\end{tabular}
-\end{center}
-
-Pictorially, the algorithm is as follows:
-
-\begin{ceqn}
-\begin{equation}\label{graph:2}
-\begin{tikzcd}
-r_0 \arrow[r, "\backslash c_0"] \arrow[d] & r_1 \arrow[r, "\backslash c_1"] \arrow[d] & r_2 \arrow[r, dashed] \arrow[d] & r_n \arrow[d, "mkeps" description] \\
-v_0 & v_1 \arrow[l,"inj_{r_0} c_0"] & v_2 \arrow[l, "inj_{r_1} c_1"] & v_n \arrow[l, dashed]
-\end{tikzcd}
-\end{equation}
-\end{ceqn}
-
-
-\noindent
-For convenience, we shall employ the following notations: the regular
-expression we start with is $r_0$, and the given string $s$ is composed
-of characters $c_0 c_1 \ldots c_{n-1}$. In the first phase from the
-left to right, we build the derivatives $r_1$, $r_2$, \ldots according
-to the characters $c_0$, $c_1$ until we exhaust the string and obtain
-the derivative $r_n$. We test whether this derivative is
-$\textit{nullable}$ or not. If not, we know the string does not match
-$r$, and no value needs to be generated. If yes, we start building the
-values incrementally by \emph{injecting} back the characters into the
-earlier values $v_n, \ldots, v_0$. This is the second phase of the
-algorithm from right to left. For the first value $v_n$, we call the
-function $\textit{mkeps}$, which builds a POSIX lexical value
-for how the empty string has been matched by the (nullable) regular
-expression $r_n$. This function is defined as
-
-
-
-We have mentioned before that derivatives without simplification
-can get clumsy, and this is true for values as well--they reflect
-the size of the regular expression by definition.
-
-One can introduce simplification on the regex and values but have to
-be careful not to break the correctness, as the injection
-function heavily relies on the structure of the regexes and values
-being correct and matching each other.
-It can be achieved by recording some extra rectification functions
-during the derivatives step, and applying these rectifications in
-each run during the injection phase.
-And we can prove that the POSIX value of how
-regular expressions match strings will not be affected---although it is much harder
-to establish.
-Some initial results in this regard have been
-obtained in \cite{AusafDyckhoffUrban2016}.
-
-
-
-%Brzozowski, after giving the derivatives and simplification,
-%did not explore lexing with simplification, or he may well be
-%stuck on an efficient simplification with proof.
-%He went on to examine the use of derivatives together with
-%automaton, and did not try lexing using products.
-
-We want to get rid of the complex and fragile rectification of values.
-Can we not create those intermediate values $v_1,\ldots v_n$,
-and get the lexing information that should be already there while
-doing derivatives in one pass, without a second injection phase?
-In the meantime, can we make sure that simplifications
-are easily handled without breaking the correctness of the algorithm?
-
-Sulzmann and Lu solved this problem by
-introducing additional information to the
-regular expressions called \emph{bitcodes}.
-
-
-
-
-