thys/Paper/Paper.thy
changeset 173 80fe81a28a52
parent 172 cdc0bdcfba3f
child 174 4e3778f4a802
--- a/thys/Paper/Paper.thy	Mon May 09 12:09:56 2016 +0100
+++ b/thys/Paper/Paper.thy	Mon May 09 12:53:12 2016 +0100
@@ -110,15 +110,15 @@
 of the work by Krauss and Nipkow \cite{Krauss2011}. And another one in Coq is given
 by Coquand and Siles \cite{Coquand2012}.
 
-One limitation of Brzozowski's matcher is that it only generates a YES/NO
-answer for whether a string is being matched by a regular expression.
-Sulzmann and Lu \cite{Sulzmann2014} extended this matcher to allow
-generation not just of a YES/NO answer but of an actual matching, called a
-[lexical] {\em value}. They give a simple algorithm to calculate a value
-that appears to be the value associated with POSIX matching
-\cite{Kuklewicz,Vansummeren2006}. The challenge then is to specify that
-value, in an algorithm-independent fashion, and to show that Sulzmann and
-Lu's derivative-based algorithm does indeed calculate a value that is
+One limitation of Brzozowski's matcher is that it only generates a
+YES/NO answer for whether a string is being matched by a regular
+expression.  Sulzmann and Lu \cite{Sulzmann2014} extended this matcher
+to allow generation not just of a YES/NO answer but of an actual
+matching, called a [lexical] {\em value}. They give a simple algorithm
+to calculate a value that appears to be the value associated with
+POSIX matching. The challenge then is to specify that value, in an
+algorithm-independent fashion, and to show that Sulzmann and Lu's
+derivative-based algorithm does indeed calculate a value that is
 correct according to the specification.
 
 The answer given by Sulzmann and Lu \cite{Sulzmann2014} is to define a
@@ -305,11 +305,13 @@
   @{thm (lhs) nullable.simps(4)[of "r\<^sub>1" "r\<^sub>2"]} & $\dn$ & @{thm (rhs) nullable.simps(4)[of "r\<^sub>1" "r\<^sub>2"]}\\
   @{thm (lhs) nullable.simps(5)[of "r\<^sub>1" "r\<^sub>2"]} & $\dn$ & @{thm (rhs) nullable.simps(5)[of "r\<^sub>1" "r\<^sub>2"]}\\
   @{thm (lhs) nullable.simps(6)} & $\dn$ & @{thm (rhs) nullable.simps(6)}\medskip\\
-  \end{tabular}
-  \end{center}
+  
+  %\end{tabular}
+  %\end{center}
 
-  \begin{center}
-  \begin{tabular}{lcl}
+  %\begin{center}
+  %\begin{tabular}{lcl}
+  
   @{thm (lhs) der.simps(1)} & $\dn$ & @{thm (rhs) der.simps(1)}\\
   @{thm (lhs) der.simps(2)} & $\dn$ & @{thm (rhs) der.simps(2)}\\
   @{thm (lhs) der.simps(3)} & $\dn$ & @{thm (rhs) der.simps(3)}\\
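
For readers who want an executable picture of the two functions displayed in this table, the following is a minimal, self-contained Isabelle/HOL sketch of Brzozowski's nullability test and derivative operation, together with the standard matcher built from them. The theory name and the constructor names (ZERO, ONE, CH, SEQ, ALT, STAR) are illustrative only and need not coincide with the ones used in this formalisation.

    theory Derivatives_Sketch
      imports Main
    begin

    datatype rexp = ZERO | ONE | CH char | SEQ rexp rexp | ALT rexp rexp | STAR rexp

    fun nullable :: "rexp \<Rightarrow> bool" where
      "nullable ZERO = False"
    | "nullable ONE = True"
    | "nullable (CH c) = False"
    | "nullable (SEQ r1 r2) = (nullable r1 \<and> nullable r2)"
    | "nullable (ALT r1 r2) = (nullable r1 \<or> nullable r2)"
    | "nullable (STAR r) = True"

    fun der :: "char \<Rightarrow> rexp \<Rightarrow> rexp" where
      "der c ZERO = ZERO"
    | "der c ONE = ZERO"
    | "der c (CH d) = (if c = d then ONE else ZERO)"
    | "der c (SEQ r1 r2) =
         (if nullable r1 then ALT (SEQ (der c r1) r2) (der c r2)
          else SEQ (der c r1) r2)"
    | "der c (ALT r1 r2) = ALT (der c r1) (der c r2)"
    | "der c (STAR r) = SEQ (der c r) (STAR r)"

    (* Matching: take derivatives character by character, then test nullability. *)
    fun ders :: "char list \<Rightarrow> rexp \<Rightarrow> rexp" where
      "ders [] r = r"
    | "ders (c # s) r = ders s (der c r)"

    definition matches :: "rexp \<Rightarrow> char list \<Rightarrow> bool" where
      "matches r s \<longleftrightarrow> nullable (ders s r)"

    end

In this sketch a regular expression r matches a string s precisely when the derivative of r with respect to s is nullable, which is what the definition of matches expresses.
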
@@ -608,9 +610,8 @@
 
   The well-known idea of POSIX matching is informally defined by the longest
   match and priority rule (see Introduction); as correctly argued in \cite{Sulzmann2014}, this
-  needs formal specification. Sulzmann and Lu define a \emph{dominance}
-  relation\footnote{Sulzmann and Lu call it an ordering relation, but
-  without giving evidence that it is transitive.} between values and argue
+  needs formal specification. Sulzmann and Lu define an ``ordering
+  relation'' between values and argue
   that there is a maximum value, as given by the derivative-based algorithm.
   In contrast, we shall introduce a simple inductive definition that
   specifies directly what a \emph{POSIX value} is, incorporating the
@@ -856,7 +857,7 @@
   calculation algorithm by Sulzmann and Lu: if we build a derivative regular
   expression and then simplify it, we will calculate a POSIX value for this
   simplified derivative regular expression, \emph{not} for the original (unsimplified)
-  derivative regular expression. Sulzmann and Lu overcome this obstacle by
+  derivative regular expression. Sulzmann and Lu \cite{Sulzmann2014} overcome this obstacle by
   not just calculating a simplified regular expression, but also calculating
   a \emph{rectification function} that ``repairs'' the incorrect value.
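
As a rough illustration of this rectification idea, here is a small, hedged Isabelle/HOL sketch (all names are hypothetical, and only alternatives whose children simplify to ZERO are treated; this is not the definition from \cite{Sulzmann2014} nor the one used in this formalisation): the simplification step returns, next to a simplified regular expression, a function that maps any value for the simplified expression back to a value for the original one.

    theory Rectification_Sketch
      imports Main
    begin

    datatype rexp = ZERO | ONE | CH char | SEQ rexp rexp | ALT rexp rexp | STAR rexp

    (* Values record how a regular expression has matched a string. *)
    datatype val = Void | Chr char | Seqv val val | Lft val | Rgt val | Stars "val list"

    (* Combine two already simplified alternatives: drop a ZERO child and record,
       in the returned function, how a value for the result is turned back into
       a value for the original alternative. *)
    fun alt_rect :: "rexp \<times> (val \<Rightarrow> val) \<Rightarrow> rexp \<times> (val \<Rightarrow> val) \<Rightarrow> rexp \<times> (val \<Rightarrow> val)" where
      "alt_rect (ZERO, f1) (r2, f2) = (r2, \<lambda>v. Rgt (f2 v))"
    | "alt_rect (r1, f1) (ZERO, f2) = (r1, \<lambda>v. Lft (f1 v))"
    | "alt_rect (r1, f1) (r2, f2) =
         (ALT r1 r2, \<lambda>v. case v of Lft v1 \<Rightarrow> Lft (f1 v1) | Rgt v2 \<Rightarrow> Rgt (f2 v2))"

    fun simp_rect :: "rexp \<Rightarrow> rexp \<times> (val \<Rightarrow> val)" where
      "simp_rect (ALT r1 r2) = alt_rect (simp_rect r1) (simp_rect r2)"
    | "simp_rect r = (r, \<lambda>v. v)"

    end

In such a setup the lexer would apply the returned rectification function to the value calculated for the simplified derivative, thereby ``repairing'' it into a value for the original, unsimplified derivative.
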
   
@@ -1142,10 +1143,10 @@
   Having proved the correctness of the POSIX lexing algorithm in
   \cite{Sulzmann2014}, which lessons have we learned? Well, this is a
   perfect example for the importance of the \emph{right} definitions. We
-  have (on and off) banged our heads on doors as soon as as first versions
+  have (on and off) banged our heads on doors ever since first versions
   of \cite{Sulzmann2014} appeared, but have made little progress with
   turning the relatively detailed proof sketch in \cite{Sulzmann2014} into a
-  formalisable proof. Having seen \cite{Vansummeren2006} and adapting the
+  formalisable proof. Having seen \cite{Vansummeren2006} and adapted the
   POSIX definition given there for the algorithm by Sulzmann and Lu made all
   the difference: the proofs, as said, are nearly straightforward. The
   question remains whether the original proof idea of \cite{Sulzmann2014},