--- a/cws/cw02.tex Fri Sep 26 23:10:52 2025 +0100
+++ b/cws/cw02.tex Sun Sep 28 14:03:59 2025 +0100
@@ -35,11 +35,13 @@
\subsection*{Disclaimer\alert}
It should be understood that the work you submit represents
-your own effort. You have not copied from anyone else
-including CoPilot, ChatGPT \& Co. An
+your own effort.
+%You have not copied from anyone else
+%including CoPilot, ChatGPT \& Co.
+An
exception is the Scala code from KEATS and the code I showed
-during the lectures, which you can both freely use. You can
-also use your own code from the CW~1.
+during the lectures, both of which you can freely use. You can
+also use your own code from CW~1.
%But do not
%be tempted to ask Github Copilot for help or do any other
%shenanigans like this!
@@ -104,7 +106,7 @@
\texttt{$\backslash$t} or \texttt{$\backslash$r}
\item identifiers are letters followed by underscores \texttt{\_\!\_}, letters
or digits
-\item numbers for numbers give
+\item for numbers give
a regular expression that can recognise \pcode{0}, but not numbers
with leading zeroes, such as \pcode{001}
-\item strings are enclosed by double quotes, like \texttt{"\ldots"}, and consisting of
+\item strings are enclosed by double quotes, like \texttt{"\ldots"}, and consist of
@@ -152,7 +154,65 @@
this you need to implement the functions $nullable$ and $der$
(you can use your code from CW~1), as well as $mkeps$ and
$inj$. These functions need to be appropriately extended for
-the extended regular expressions from Q1. The definitions
+the extended regular expressions from Task 1.
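+
+As a reminder, here is a minimal sketch along the lines of the
+code shown in the lectures, which you may freely use: the
+datatypes for regular expressions and values, together with
+$nullable$, $mkeps$ and $inj$ for the basic constructors only
+(the clauses for the extended regular expressions from Task 1
+are deliberately left out):
+
+\begin{verbatim}
+abstract class Rexp
+case object ZERO extends Rexp
+case object ONE extends Rexp
+case class CHAR(c: Char) extends Rexp
+case class ALT(r1: Rexp, r2: Rexp) extends Rexp
+case class SEQ(r1: Rexp, r2: Rexp) extends Rexp
+case class STAR(r: Rexp) extends Rexp
+
+abstract class Val
+case object Empty extends Val
+case class Chr(c: Char) extends Val
+case class Sequ(v1: Val, v2: Val) extends Val
+case class Left(v: Val) extends Val
+case class Right(v: Val) extends Val
+case class Stars(vs: List[Val]) extends Val
+
+def nullable(r: Rexp): Boolean = r match {
+  case ZERO => false
+  case ONE => true
+  case CHAR(_) => false
+  case ALT(r1, r2) => nullable(r1) || nullable(r2)
+  case SEQ(r1, r2) => nullable(r1) && nullable(r2)
+  case STAR(_) => true
+}
+
+// mkeps: the value showing how a nullable regular
+// expression matches the empty string
+def mkeps(r: Rexp): Val = r match {
+  case ONE => Empty
+  case ALT(r1, r2) =>
+    if (nullable(r1)) Left(mkeps(r1)) else Right(mkeps(r2))
+  case SEQ(r1, r2) => Sequ(mkeps(r1), mkeps(r2))
+  case STAR(_) => Stars(Nil)
+}
+
+// inj: reverses one derivative step by injecting the
+// character c back into the value v
+def inj(r: Rexp, c: Char, v: Val): Val = (r, v) match {
+  case (STAR(r1), Sequ(v1, Stars(vs))) => Stars(inj(r1, c, v1) :: vs)
+  case (SEQ(r1, _), Sequ(v1, v2)) => Sequ(inj(r1, c, v1), v2)
+  case (SEQ(r1, _), Left(Sequ(v1, v2))) => Sequ(inj(r1, c, v1), v2)
+  case (SEQ(r1, r2), Right(v2)) => Sequ(mkeps(r1), inj(r2, c, v2))
+  case (ALT(r1, _), Left(v1)) => Left(inj(r1, c, v1))
+  case (ALT(_, r2), Right(v2)) => Right(inj(r2, c, v2))
+  case (CHAR(_), Empty) => Chr(c)
+}
+\end{verbatim}
+
+The definitions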
you need to create are:
@@ -195,7 +255,7 @@
\noindent
-Finally make that the function \texttt{lexing\_simp} generates
-with the regular expression from Q1 for the string
+Finally make sure that the function \texttt{lexing\_simp} generates
+with the regular expression from Task 1 for the string
\begin{center}
\code{"read n;"}
@@ -214,7 +274,61 @@
\subsection*{Task 3}
-Make sure your lexer from Q2 also simplifies regular expressions after
+Make sure your lexer from Task 2 also simplifies regular expressions after
each derivation step and rectifies the computed values after each
-injection. Use this lexer to tokenise the six WHILE programs
+injection.
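+
+For the simplification and the rectification of the computed
+values, here is a sketch along the lines of the code shown in
+the lectures, building on the datatypes sketched in Task 2:
+\texttt{simp} returns the simplified regular expression together
+with a rectification function that repairs the value computed
+for the simplified version:
+
+\begin{verbatim}
+// rectification functions
+def F_ID(v: Val): Val = v
+def F_RIGHT(f: Val => Val) = (v: Val) => Right(f(v))
+def F_LEFT(f: Val => Val) = (v: Val) => Left(f(v))
+def F_ALT(f1: Val => Val, f2: Val => Val) = (v: Val) => v match {
+  case Right(v) => Right(f2(v))
+  case Left(v) => Left(f1(v))
+}
+def F_SEQ(f1: Val => Val, f2: Val => Val) = (v: Val) => v match {
+  case Sequ(v1, v2) => Sequ(f1(v1), f2(v2))
+}
+def F_SEQ_Empty1(f1: Val => Val, f2: Val => Val) =
+  (v: Val) => Sequ(f1(Empty), f2(v))
+def F_SEQ_Empty2(f1: Val => Val, f2: Val => Val) =
+  (v: Val) => Sequ(f1(v), f2(Empty))
+
+// simplification of ALTs and SEQs, returning also a
+// rectification function for the computed value
+def simp(r: Rexp): (Rexp, Val => Val) = r match {
+  case ALT(r1, r2) => {
+    val (r1s, f1s) = simp(r1)
+    val (r2s, f2s) = simp(r2)
+    (r1s, r2s) match {
+      case (ZERO, _) => (r2s, F_RIGHT(f2s))
+      case (_, ZERO) => (r1s, F_LEFT(f1s))
+      case _ => if (r1s == r2s) (r1s, F_LEFT(f1s))
+                else (ALT(r1s, r2s), F_ALT(f1s, f2s))
+    }
+  }
+  case SEQ(r1, r2) => {
+    val (r1s, f1s) = simp(r1)
+    val (r2s, f2s) = simp(r2)
+    (r1s, r2s) match {
+      case (ZERO, _) => (ZERO, F_ID)
+      case (_, ZERO) => (ZERO, F_ID)
+      case (ONE, _) => (r2s, F_SEQ_Empty1(f1s, f2s))
+      case (_, ONE) => (r1s, F_SEQ_Empty2(f1s, f2s))
+      case _ => (SEQ(r1s, r2s), F_SEQ(f1s, f2s))
+    }
+  }
+  case r => (r, F_ID)
+}
+\end{verbatim}
+
+Use this lexer to tokenise the six WHILE programs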
in the \texttt{examples} directory. Make sure that the \texttt{tokenise}