Binary file hws/hw01.pdf has changed
Binary file hws/hw02.pdf has changed
Binary file hws/hw03.pdf has changed
--- a/hws/hw03.tex Wed Oct 16 02:07:02 2013 +0100
+++ b/hws/hw03.tex Mon Oct 21 15:02:54 2013 +0100
@@ -12,7 +12,7 @@
\item What is a regular language?
\item Assume you have an alphabet consisting of the letters $a$, $b$ and $c$ only.
-(a) Find a regular expression that recognises the two strings $ab$ and $ac$. (b)
+(1) Find a regular expression that recognises the two strings $ab$ and $ac$. (2)
Find a regular expression that matches all strings \emph{except} these two strings.
Note, you can only use regular expressions of the form
\begin{center}
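One way to check a candidate answer for (1) mechanically is to run it against the strings $ab$ and $ac$ with a Brzozowski-derivative matcher. The following Scala sketch is only an illustration under that assumption; the constructor names are made up here and are not necessarily the ones used in the course's .scala files.

abstract class Rexp
case object ZERO extends Rexp
case object ONE extends Rexp
case class CHAR(c: Char) extends Rexp
case class ALT(r1: Rexp, r2: Rexp) extends Rexp
case class SEQ(r1: Rexp, r2: Rexp) extends Rexp
case class STAR(r: Rexp) extends Rexp

// can the regular expression match the empty string?
def nullable(r: Rexp): Boolean = r match {
  case ZERO => false
  case ONE => true
  case CHAR(_) => false
  case ALT(r1, r2) => nullable(r1) || nullable(r2)
  case SEQ(r1, r2) => nullable(r1) && nullable(r2)
  case STAR(_) => true
}

// Brzozowski derivative of r with respect to the character c
def der(c: Char, r: Rexp): Rexp = r match {
  case ZERO => ZERO
  case ONE => ZERO
  case CHAR(d) => if (c == d) ONE else ZERO
  case ALT(r1, r2) => ALT(der(c, r1), der(c, r2))
  case SEQ(r1, r2) =>
    if (nullable(r1)) ALT(SEQ(der(c, r1), r2), der(c, r2))
    else SEQ(der(c, r1), r2)
  case STAR(r1) => SEQ(der(c, r1), STAR(r1))
}

// a regular expression matches s if the iterated derivative is nullable
def matches(r: Rexp, s: String): Boolean =
  nullable(s.toList.foldLeft(r)((r, c) => der(c, r)))

// e.g. matches(SEQ(CHAR('a'), ALT(CHAR('b'), CHAR('c'))), "ac") gives true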
Binary file hws/hw04.pdf has changed
--- a/hws/hw04.tex Wed Oct 16 02:07:02 2013 +0100
+++ b/hws/hw04.tex Mon Oct 21 15:02:54 2013 +0100
@@ -3,6 +3,9 @@
\usepackage{hyperref}
\usepackage{amssymb}
\usepackage{amsmath}
+\usepackage{tikz}
+\usetikzlibrary{automata}
+
\newcommand{\dn}{\stackrel{\mbox{\scriptsize def}}{=}}% for definitions
@@ -78,18 +81,44 @@
What is the language recognised by this automaton?
-\item Give a deterministic finite automaton that can recognise
-the language $L(a^*\cdot b\cdot b^*)$.
+\item Give a non-deterministic finite automaton that can recognise
+the language $L(a\cdot (a + b)^* \cdot c)$.
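As a small illustration of how a non-deterministic automaton can be written down and simulated, here is a Scala sketch; the transition relation below is only a placeholder NFA (it accepts the strings of $a^+\cdot b$), not the answer to this exercise.

// the transition relation maps a state and a character to a set of successor states
type State = Int
val delta: Map[(State, Char), Set[State]] =
  Map((0, 'a') -> Set(0, 1), (1, 'b') -> Set(2)).withDefaultValue(Set[State]())
val start: Set[State] = Set(0)
val accepting: Set[State] = Set(2)

// simulate all runs at once by tracking the set of currently reachable states
def accepts(s: String): Boolean = {
  val reachable = s.toList.foldLeft(start)((qs, c) => qs.flatMap(q => delta((q, c))))
  (reachable & accepting).nonEmpty
}

Tracking the whole set of reachable states is the same idea as the subset construction, only performed on the fly while reading the input.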
-\item (Optional) The tokenizer in \texttt{regexp3.scala} takes as
-argument a string and a list of rules. The result is a list of tokens. Improve this tokenizer so
-that it filters out all comments and whitespace from the result.
+\item Given the following deterministic finite automaton over the alphabet $\{0, 1\}$,
+find the corresponding minimal automaton. If any states can be merged,
+state clearly which ones (see the sketch after the exercise list).
-\item (Optional) Modify the tokenizer in \texttt{regexp2.scala} so that it
-implements the \texttt{findAll} function. This function takes a regular
-expressions and a string, and returns all substrings in this string that
-match the regular expression.
+\begin{center}
+\begin{tikzpicture}[scale=3, line width=0.7mm]
+ \node[state, initial] (q0) at ( 0,1) {$q_0$};
+ \node[state] (q1) at ( 1,1) {$q_1$};
+ \node[state, accepting] (q4) at ( 2,1) {$q_4$};
+ \node[state] (q2) at (0.5,0) {$q_2$};
+ \node[state] (q3) at (1.5,0) {$q_3$};
+ \path[->] (q0) edge node[above] {$0$} (q1)
+ (q0) edge node[right] {$1$} (q2)
+ (q1) edge node[above] {$0$} (q4)
+ (q1) edge node[right] {$1$} (q2)
+ (q2) edge node[above] {$0$} (q3)
+ (q2) edge [loop below] node {$1$} ()
+ (q3) edge node[left] {$0$} (q4)
+ (q3) edge [bend left=95, looseness = 2.2] node [left=2mm] {$1$} (q0)
+ (q4) edge [loop right] node {$0, 1$} ()
+ ;
+\end{tikzpicture}
+\end{center}
+
+
+%\item (Optional) The tokenizer in \texttt{regexp3.scala} takes as
+%argument a string and a list of rules. The result is a list of tokens. Improve this tokenizer so
+%that it filters out all comments and whitespace from the result.
+
+%\item (Optional) Modify the tokenizer in \texttt{regexp2.scala} so that it
+%implements the \texttt{findAll} function. This function takes a regular
+%expression and a string, and returns all substrings in this string that
+%match the regular expression.
\end{enumerate}
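For deciding which states of the automaton above can be merged, one standard technique is the table-filling algorithm, which repeatedly marks pairs of states that some input string can tell apart. The following Scala sketch encodes the transition function read off the picture; the state names and variable names are illustrative only.

type State = String
val states = Set("q0", "q1", "q2", "q3", "q4")
val accepting = Set("q4")

// transition function taken from the picture above
val delta: Map[(State, Char), State] = Map(
  ("q0", '0') -> "q1", ("q0", '1') -> "q2",
  ("q1", '0') -> "q4", ("q1", '1') -> "q2",
  ("q2", '0') -> "q3", ("q2", '1') -> "q2",
  ("q3", '0') -> "q4", ("q3", '1') -> "q0",
  ("q4", '0') -> "q4", ("q4", '1') -> "q4")

// all unordered pairs of distinct states
val pairs = for (p <- states; q <- states if p < q) yield (p, q)

// iteratively mark pairs of states that some string distinguishes
def distinguishable: Set[(State, State)] = {
  var marked = pairs.filter { case (p, q) => accepting(p) != accepting(q) }
  var changed = true
  while (changed) {
    val more = pairs.filter { case (p, q) =>
      !marked((p, q)) && "01".exists { c =>
        val pc = delta((p, c)); val qc = delta((q, c))
        marked((pc, qc)) || marked((qc, pc))
      }
    }
    changed = more.nonEmpty
    marked = marked ++ more
  }
  marked
}

// unmarked pairs of states accept exactly the same strings and can be merged
val mergeable = pairs -- distinguishable

The pairs left unmarked at the end are equivalent in the Myhill-Nerode sense, which is the usual justification for merging them in the minimal automaton.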
% explain what is a context-free grammar and the language it generates
Binary file slides/slides04.pdf has changed