handouts/ho04.tex
changeset 936 0b5f06539a84
parent 874 ffe02fd574a5
child 941 66adcae6c762
--- a/handouts/ho04.tex	Sun Oct 01 15:25:22 2023 +0100
+++ b/handouts/ho04.tex	Mon Oct 02 23:10:56 2023 +0100
@@ -17,21 +17,21 @@
 Answering this question will also help us with the problem we
 are after, namely tokenising an input string. 
 
-The algorithm we will be looking at in this lecture was designed by Sulzmann
-\& Lu in a rather recent research paper (from 2014). A link to it is
-provided on KEATS, in case you are interested.\footnote{In my
-humble opinion this is an interesting instance of the research
-literature: it contains a very neat idea, but its presentation
-is rather sloppy. In earlier versions of this paper, a King's
-undergraduate student and I found several rather annoying typos in the
-examples and definitions.} My former PhD student Fahad Ausaf and I even more recently 
-wrote a paper where we build on their result: we provided a 
-mathematical proof that their algorithm is really correct---the proof 
-Sulzmann \& Lu had originally given contained major flaws. Such correctness
-proofs are important: Kuklewicz maintains a unit-test library
-for the kind of algorithms we are interested in here and he showed 
-that many implementations in the ``wild'' are buggy, that is not
-satisfy his unit tests:
+The algorithm we will be looking at in this lecture was designed by
+Sulzmann \& Lu in a rather recent research paper (from 2014). A link
+to it is provided on KEATS, in case you are interested.\footnote{In my
+  humble opinion this is an interesting instance of the research
+  literature: it contains very clever ideas, but its presentation is
+  rather sloppy. In earlier versions of this paper, students and I
+  found several rather annoying typos in the examples and definitions;
+  we even found downright errors in their work.} Together with my former PhD
+students Fahad Ausaf and Chengsong Tan, I wrote several papers in which
+we build on their result: we provided a mathematical proof that their
+algorithm is really correct---the proof Sulzmann \& Lu had originally
+given contained major flaws. Such correctness proofs are important:
+Kuklewicz maintains a unit-test library for the kind of algorithms we
+are interested in here and he showed that many implementations in the
+``wild'' are buggy, that is, they do not satisfy his unit tests:
 
 \begin{center}
 \url{http://www.haskell.org/haskellwiki/Regex_Posix}
@@ -433,7 +433,7 @@
 a gigantic problem with the described algorithm so far: it is very
 slow. To make it faster, we have to include in all this the simplification 
 from Lecture 2\ldots{}and what rotten luck: simplification messes things 
-up and we need to rectify the mess. This is what we shall do next.
+up and we need to \emph{rectify} the mess. This is what we shall do next.
 
 
 \subsubsection*{Simplification}