--- a/coursework/cw02.tex Tue Jan 12 02:18:58 2016 +0000
+++ b/coursework/cw02.tex Tue Jan 12 02:49:26 2016 +0000
@@ -69,7 +69,9 @@
\item whitespaces are either \texttt{" "} (one or more) or \texttt{$\backslash$n}
\item identifiers are letters followed by underscores \texttt{\_\!\_}, letters
or digits
-\item numbers are \texttt{0}, \text{1}, \ldots
+\item numbers are \pcode{0}, \pcode{1}, \ldots; give a regular
+expression that can recognise \pcode{0}, but not numbers with
+leading zeroes, such as \pcode{001}
\end{enumerate}
\noindent
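As a quick sanity check (not the required answer, which must be given as a regular expression in the notation of the lectures), the intended number language can be sketched with a Java-style pattern: either the single digit `0`, or a non-zero digit followed by arbitrary digits. The helper name `isNum` is made up for this sketch.

```scala
// Sketch only: "0|[1-9][0-9]*" -- either the single digit 0, or a
// non-zero digit followed by further digits; the character ranges
// keep the expression small. `isNum` is a made-up helper name.
val num = "0|[1-9][0-9]*".r

def isNum(s: String): Boolean = num.pattern.matcher(s).matches()

println(isNum("0"))    // true
println(isNum("42"))   // true
println(isNum("001"))  // false -- leading zero is rejected
```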
@@ -91,8 +93,9 @@
\end{tabular}
\end{center}
-\noindent
-Try to design your regular expressions to be as small as possible.
+\noindent Try to design your regular expressions to be as
+small as possible. For example, you should use character
+ranges for identifiers and numbers.
\subsection*{Question 2 (marked with 3\%)}
@@ -117,11 +120,27 @@
\end{center}
\noindent where $inj$ takes three arguments: a regular
-expression, a character and a value. Also add the record
-regular expression from the lectures to your tokeniser and
-implement a function, say \pcode{env}, that returns all
-assignments from a value (such that you can extract easily the
-tokens from a value).\medskip
+expression, a character and a value. Test your lexer code
+with at least the two small examples below:
+
+\begin{center}
+\begin{tabular}{ll}
+regex: & string:\smallskip\\
+$a^{\{3\}}$ & $aaa$\\
+$(a + \epsilon)^{\{3\}}$ & $aa$
+\end{tabular}
+\end{center}
+
+
+\noindent Both strings should be successfully lexed by the
+respective regular expression, that is, in both examples the
+lexer should return a value.
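The two test cases above can be run against a rough sketch of the derivative-based lexer from the lectures (Sulzmann \& Lu style). All names below (`NTIMES`, `Nt`, `mkeps`, `inj`, `flatten`) are assumptions about how one might set up the code, restricted to the constructors the two examples need; $r^{\{n\}}$ is given its own `NTIMES` constructor with `Nt` as the corresponding value.

```scala
// A rough sketch of the Sulzmann & Lu lexer, restricted to the
// constructors needed for the two test cases; names are assumptions.
abstract class Rexp
case object ZERO extends Rexp
case object ONE extends Rexp
case class CHAR(c: Char) extends Rexp
case class ALT(r1: Rexp, r2: Rexp) extends Rexp
case class SEQ(r1: Rexp, r2: Rexp) extends Rexp
case class NTIMES(r: Rexp, n: Int) extends Rexp     // r{n}

abstract class Val
case object Empty extends Val
case class Chr(c: Char) extends Val
case class Sequ(v1: Val, v2: Val) extends Val
case class Left(v: Val) extends Val
case class Right(v: Val) extends Val
case class Nt(vs: List[Val]) extends Val            // value for r{n}

def nullable(r: Rexp): Boolean = r match {
  case ZERO => false
  case ONE => true
  case CHAR(_) => false
  case ALT(r1, r2) => nullable(r1) || nullable(r2)
  case SEQ(r1, r2) => nullable(r1) && nullable(r2)
  case NTIMES(r1, n) => n == 0 || nullable(r1)
}

def der(c: Char, r: Rexp): Rexp = r match {
  case ZERO => ZERO
  case ONE => ZERO
  case CHAR(d) => if (c == d) ONE else ZERO
  case ALT(r1, r2) => ALT(der(c, r1), der(c, r2))
  case SEQ(r1, r2) =>
    if (nullable(r1)) ALT(SEQ(der(c, r1), r2), der(c, r2))
    else SEQ(der(c, r1), r2)
  case NTIMES(r1, n) =>
    if (n == 0) ZERO else SEQ(der(c, r1), NTIMES(r1, n - 1))
}

// mkeps: how a nullable regex matches the empty string
def mkeps(r: Rexp): Val = r match {
  case ONE => Empty
  case ALT(r1, r2) => if (nullable(r1)) Left(mkeps(r1)) else Right(mkeps(r2))
  case SEQ(r1, r2) => Sequ(mkeps(r1), mkeps(r2))
  case NTIMES(r1, n) => Nt(List.fill(n)(mkeps(r1)))
}

// inj reverses one derivative step: it turns a value for der(c, r)
// into a value for r
def inj(r: Rexp, c: Char, v: Val): Val = (r, v) match {
  case (CHAR(_), Empty) => Chr(c)
  case (ALT(r1, _), Left(v1)) => Left(inj(r1, c, v1))
  case (ALT(_, r2), Right(v2)) => Right(inj(r2, c, v2))
  case (SEQ(r1, _), Sequ(v1, v2)) => Sequ(inj(r1, c, v1), v2)
  case (SEQ(r1, _), Left(Sequ(v1, v2))) => Sequ(inj(r1, c, v1), v2)
  case (SEQ(r1, r2), Right(v2)) => Sequ(mkeps(r1), inj(r2, c, v2))
  case (NTIMES(r1, _), Sequ(v1, Nt(vs))) => Nt(inj(r1, c, v1) :: vs)
}

def lex(r: Rexp, s: List[Char]): Val = s match {
  case Nil => if (nullable(r)) mkeps(r) else sys.error("lexing error")
  case c :: cs => inj(r, c, lex(der(c, r), cs))
}

def flatten(v: Val): String = v match {
  case Empty => ""
  case Chr(c) => c.toString
  case Sequ(v1, v2) => flatten(v1) + flatten(v2)
  case Left(v1) => flatten(v1)
  case Right(v1) => flatten(v1)
  case Nt(vs) => vs.map(flatten).mkString
}

// the two test cases: a{3} on "aaa" and (a + empty){3} on "aa"
val r1 = NTIMES(CHAR('a'), 3)
val r2 = NTIMES(ALT(CHAR('a'), ONE), 3)
println(flatten(lex(r1, "aaa".toList)))  // aaa
println(flatten(lex(r2, "aa".toList)))   // aa
```

In both cases `lex` terminates with a value rather than a lexing error, which is exactly what the question asks you to check.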
+
+
+Also add the record regular expression from the
+lectures to your tokeniser and implement a function, say
+\pcode{env}, that returns all assignments from a value (so
+that you can easily extract the tokens from a value).\medskip
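One possible shape for \pcode{env} (the name is from the text; the `Rec` constructor for record values and the `flatten` helper are assumptions based on the lectures) is a traversal of the value that emits a (label, matched string) pair at every record node:

```scala
// Sketch of env: collect all (label, matched-string) assignments
// from a value; Rec is assumed to be the value constructor for
// the record regular expression x : r.
abstract class Val
case object Empty extends Val
case class Chr(c: Char) extends Val
case class Sequ(v1: Val, v2: Val) extends Val
case class Left(v: Val) extends Val
case class Right(v: Val) extends Val
case class Stars(vs: List[Val]) extends Val
case class Rec(x: String, v: Val) extends Val

// flatten: the string a value stands for
def flatten(v: Val): String = v match {
  case Empty => ""
  case Chr(c) => c.toString
  case Sequ(v1, v2) => flatten(v1) + flatten(v2)
  case Left(v1) => flatten(v1)
  case Right(v1) => flatten(v1)
  case Stars(vs) => vs.map(flatten).mkString
  case Rec(_, v1) => flatten(v1)
}

// env: all assignments recorded in a value, in left-to-right order
def env(v: Val): List[(String, String)] = v match {
  case Empty | Chr(_) => Nil
  case Sequ(v1, v2) => env(v1) ::: env(v2)
  case Left(v1) => env(v1)
  case Right(v1) => env(v1)
  case Stars(vs) => vs.flatMap(env)
  case Rec(x, v1) => (x, flatten(v1)) :: env(v1)
}

// a hand-built value: an identifier token followed by a number token
val v = Sequ(Rec("id", Sequ(Chr('a'), Chr('b'))), Rec("num", Chr('7')))
println(env(v))  // List((id,ab), (num,7))
```

The pairs returned by \pcode{env} are then easy to map to the tokens of the language.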
\noindent
Finally give the tokens for your regular expressions from Q1 and the