--- a/handouts/ho03.tex Wed May 10 17:03:21 2017 +0100
+++ b/handouts/ho03.tex Sun May 21 00:43:02 2017 +0100
@@ -416,9 +416,9 @@
\stackrel{\epsilon}{\longrightarrow}\ldots\stackrel{\epsilon}{\longrightarrow} q'
\]
-\noindent and replace them with $q \stackrel{a}{\longrightarrow}
-q'$. Doing this to the $\epsilon$NFA on the right-hand side above gives
-the NFA
+\noindent where somewhere in the ``middle'' there is an
+$a$-transition. We replace each such sequence with $q
+\stackrel{a}{\longrightarrow} q'$. Doing this to the $\epsilon$NFA
+on the right-hand side above gives the NFA
\begin{center}
\begin{tikzpicture}[>=stealth',very thick,
@@ -624,11 +624,11 @@
\node[state, accepting] (c_2) [above=of c_1] {$\mbox{}$};
\node[state, accepting] (c_3) [below=of c_1] {$\mbox{}$};
\path[->] (t_1) edge (A_01);
-\path[->] (t_2) edge node [above] {$\epsilon$} (A_01);
+\path[->] (t_2) edge node [above] {$\epsilon$s} (A_01);
\path[->] (t_3) edge (A_01);
\path[->] (t_1) edge (A_02);
\path[->] (t_2) edge (A_02);
-\path[->] (t_3) edge node [below] {$\epsilon$} (A_02);
+\path[->] (t_3) edge node [below] {$\epsilon$s} (A_02);
\begin{pgfonlayer}{background}
\node (3) [rounded corners, inner sep=1mm, thick,
@@ -1077,14 +1077,14 @@
\noindent
You might want to spend some quality time tinkering with this code and
-time to ponder. Then you will probably notice it is actually
-silly. The whole point of translating the NFA into a DFA via the
+some time to ponder it. Then you will probably notice that it is actually
+a bit silly. The whole point of translating the NFA into a DFA via the
subset construction is to make the decision of whether a string is
accepted or not faster. Given the code above, the generated DFA will
be exactly as fast, or as slow, as the NFA we started with (actually
it will even be a tiny bit slower). The reason is that we just re-use
-the \texttt{nexts} function from the NFA. This fucntion implements the
-non-deterministic breadth-first search. You might be thinking: That
+the \texttt{nexts} function from the NFA. This function implements the
+non-deterministic breadth-first search. You might be thinking: This
is cheating! \ldots{} Well, not quite, as you will see later, but in
terms of speed we still need to work a bit in order to get
sometimes(!) a faster DFA. Let's do this next.
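One way to make the DFA actually fast is to run the non-deterministic search once per reachable state, recording the results in a table. The sketch below is only an illustration of this tabulation idea, not the handout's actual code: the tiny hard-wired NFA and the names \texttt{nexts}, \texttt{tabulate} and \texttt{accepts} are made up for the example.

```scala
// A sketch (not the handout's actual code) of how the subset
// construction can tabulate all DFA transitions once into a Map,
// so that running the DFA is a sequence of Map lookups instead of
// re-running the NFA's breadth-first search on every character.

type State = Int
type States = Set[State]

// a tiny hard-wired example NFA over {a, b}:
//   0 --a--> 0,  0 --a--> 1,  1 --b--> 2,  accepting state: 2
def nexts(qs: States, c: Char): States =
  qs.flatMap(q => (q, c) match {
    case (0, 'a') => Set(0, 1)
    case (1, 'b') => Set(2)
    case _        => Set[State]()
  })

val alphabet = List('a', 'b')

// explore every reachable subset once, recording all transitions
def tabulate(start: States): Map[(States, Char), States] = {
  def explore(todo: List[States], seen: Set[States],
              acc: Map[(States, Char), States]): Map[(States, Char), States] =
    todo match {
      case Nil => acc
      case qs :: rest if seen(qs) => explore(rest, seen, acc)
      case qs :: rest =>
        val moves = alphabet.map(c => (qs, c) -> nexts(qs, c))
        explore(moves.map(_._2) ++ rest, seen + qs, acc ++ moves)
    }
  explore(List(start), Set(), Map())
}

val table = tabulate(Set(0))

// the DFA run: one Map lookup per input character
def accepts(s: String): Boolean =
  s.foldLeft(Set(0): States)((qs, c) => table.getOrElse((qs, c), Set()))
    .contains(2)
```

Note that \texttt{nexts} is called only while building \texttt{table}; afterwards \texttt{accepts} never touches the NFA again.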
@@ -1242,20 +1242,29 @@
\end{center}
+By the way, we are not bothering to implement the above
+minimisation algorithm: while up to now all the transformations used
+some clever composition of functions, the minimisation algorithm
+cannot be implemented by just composing functions. For this we
+would require a more concrete representation of the transition
+function (for example as maps). If we did this, however, then many
+advantages of the functional representation would be thrown away. So
+the compromise is not being able to (easily) minimise our DFAs.
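Just to indicate what such a concrete representation would buy us, here is a sketch, under the assumption that the transition function is given as a map, of a minimisation by partition refinement. The example DFA is made up for illustration; its states 2 and 3 accept the same language and should end up merged into one block.

```scala
// A sketch of minimisation by partition refinement, assuming a
// concrete Map-based transition function (the very representation
// the text says our function-composition approach does not have).

type Q = Int
val qs    = Set(0, 1, 2, 3)
val fins  = Set(2, 3)                  // accepting states
val sigma = List('a', 'b')
val delta = Map(                       // hypothetical example DFA
  (0, 'a') -> 1, (0, 'b') -> 0,
  (1, 'a') -> 2, (1, 'b') -> 0,
  (2, 'a') -> 3, (2, 'b') -> 0,
  (3, 'a') -> 2, (3, 'b') -> 0)

// start with the partition {accepting, non-accepting} and split
// blocks until no character can distinguish two states of a block
def minimise(): Set[Set[Q]] = {
  def refine(p: Set[Set[Q]]): Set[Set[Q]] = {
    def blockOf(q: Q): Set[Q] = p.find(_(q)).get
    val p2 = p.flatMap(block =>
      block.groupBy(q => sigma.map(c => blockOf(delta((q, c))))).values)
    if (p2 == p) p else refine(p2)
  }
  refine(Set(fins, qs -- fins))
}
```

The resulting blocks are the states of the minimal DFA; here states 2 and 3 fall into one block, while 0 and 1 are separated because an \texttt{a} distinguishes them.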
+
\subsection*{Brzozowski's Method}
I know this is already a long, long rant, but after all it is a topic
that has been researched for more than 60 years. If you reflect on
-what you have read so far, the story you can take a regular
+what you have read so far, the story is that you can take a regular
expression, translate it via the Thompson Construction into an
$\epsilon$NFA, then translate it into a NFA by removing all
$\epsilon$-transitions, and then via the subset construction obtain a
DFA. In all steps we made sure the language, or which strings can be
recognised, stays the same. After the last section, we can even
-minimise the DFA. But again we made sure the same language is
-recognised. You might be wondering: Can we go into the other
-direction? Can we go from a DFA and obtain a regular expression that
-can recognise the same language as the DFA?\medskip
+minimise the DFA (maybe not in code). But again we made sure the same
+language is recognised. You might be wondering: Can we go in the
+other direction? Can we go from a DFA and obtain a regular expression
+that can recognise the same language as the DFA?\medskip
\noindent
The answer is yes. Again there are several methods for calculating a
@@ -1508,6 +1517,19 @@
that would lead us too far afield for what we want to do in this
module.
+
+\subsection*{Where Have Derivatives Gone?}
+
+By now you are probably fed up with this text. It is now way too long
+for one lecture, but there is still one aspect of the
+automata connection I would like to highlight for you. Perhaps by now
+you are asking yourself: Where have the derivatives gone? Did we just
+forget them? Well, they have a place in the picture of calculating a
+DFA from the regular expression.
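To indicate the idea (this is a sketch in the standard textbook presentation, not necessarily how this section will eventually be filled in): the derivatives themselves can serve as the states of a DFA, with the derivative \texttt{der(c, r)} giving the transition for character \texttt{c}, and a state being accepting precisely when its regular expression is nullable.

```scala
// Sketch: derivatives as DFA states. Taking successive
// derivatives of a regular expression corresponds to following
// transitions in a DFA; a string is accepted when the final
// derivative is nullable.

abstract class Rexp
case object ZERO extends Rexp
case object ONE extends Rexp
case class CHAR(c: Char) extends Rexp
case class ALT(r1: Rexp, r2: Rexp) extends Rexp
case class SEQ(r1: Rexp, r2: Rexp) extends Rexp
case class STAR(r: Rexp) extends Rexp

// can r match the empty string?  (= is this "state" accepting?)
def nullable(r: Rexp): Boolean = r match {
  case ZERO        => false
  case ONE         => true
  case CHAR(_)     => false
  case ALT(r1, r2) => nullable(r1) || nullable(r2)
  case SEQ(r1, r2) => nullable(r1) && nullable(r2)
  case STAR(_)     => true
}

// the transition function: from "state" r, character c leads to der(c, r)
def der(c: Char, r: Rexp): Rexp = r match {
  case ZERO     => ZERO
  case ONE      => ZERO
  case CHAR(d)  => if (c == d) ONE else ZERO
  case ALT(r1, r2) => ALT(der(c, r1), der(c, r2))
  case SEQ(r1, r2) =>
    if (nullable(r1)) ALT(SEQ(der(c, r1), r2), der(c, r2))
    else SEQ(der(c, r1), r2)
  case STAR(r1) => SEQ(der(c, r1), STAR(r1))
}

// the "DFA run": follow one transition per character, then test
// whether the state we end up in is accepting
def matches(r: Rexp, s: String): Boolean =
  nullable(s.foldLeft(r)((r1, c) => der(c, r1)))
```

With suitable simplification of the intermediate derivatives, only finitely many states arise, which is what makes this a genuine DFA construction.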
+
+To be done
+
+
%\section*{Further Reading}
%Compare what a ``human expert'' would create as an automaton for the