--- a/cws/cw05.tex Sat Sep 04 14:09:26 2021 +0100
+++ b/cws/cw05.tex Sun Sep 05 23:51:37 2021 +0100
@@ -6,11 +6,11 @@
\begin{document}
-\section*{Coursework 5\footnote{\today}}
+\section*{Coursework 5}
-\noindent This coursework is worth 12\% and is due on \cwFIVE{} at
+\noindent This coursework is worth 25\% and is due on \cwFIVE{} at
18:00. You are asked to implement a compiler targeting the LLVM-IR.
Be aware that this CW requires some material about the LLVM-IR
that has not been covered in the lectures, and your own experiments
@@ -27,8 +27,8 @@
you generated the LLVM-IR files, otherwise a mark of 0\% will be
awarded. You should use the lexer and parser from the previous
courseworks, but you need to make some modifications to them for the
-`typed' fun-language. I will award up to 4\% if a lexer and parser are
-implemented. At the end, please package everything(!) in a zip-file
+`typed' fun-language. I will award up to 5\% if a lexer and a parser are
+correctly implemented. At the end, please package everything(!) in a zip-file
that creates a directory with the name \texttt{YournameYourFamilyname}
on my end.
@@ -214,7 +214,9 @@
with simple first-order functions; nothing on the scale
of the `Hindley-Milner' typing-algorithm is needed. I suggest
just looking at what data is available and generating all
- missing information by simple means.
+ missing information by ``simple means''\ldots rather than
+ looking at the literature, which solves the problem
+ with much heavier machinery (see the sketch below).
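+
+  To make ``simple means'' a bit more concrete, here is a minimal
+  sketch in Scala (purely illustrative, not part of the required
+  solution; the AST constructors \texttt{Num}, \texttt{FNum},
+  \texttt{Var}, \texttt{Aop} and \texttt{Call}, as well as the two
+  result types, are assumptions, not the actual definitions of the
+  typed fun-language):
+
+\begin{verbatim}
+// purely illustrative sketch; the names are assumptions,
+// not the coursework's actual AST
+object TypingSketch {
+  sealed trait Ty
+  case object TInt extends Ty
+  case object TDouble extends Ty
+
+  sealed trait Exp
+  case class Num(i: Int) extends Exp               // integer literal
+  case class FNum(d: Double) extends Exp           // double literal
+  case class Var(x: String) extends Exp            // variable
+  case class Aop(op: String, a: Exp, b: Exp) extends Exp  // arithmetic op
+  case class Call(f: String, args: List[Exp]) extends Exp // function call
+
+  // signatures of the (first-order) top-level functions:
+  // name -> (argument types, return type)
+  type Sigs = Map[String, (List[Ty], Ty)]
+
+  // bottom-up: the type of an expression follows directly from the
+  // types of the local variables and the function signatures
+  def typ(e: Exp, vars: Map[String, Ty], sigs: Sigs): Ty = e match {
+    case Num(_)  => TInt
+    case FNum(_) => TDouble
+    case Var(x)  => vars(x)
+    case Aop(_, a, b) =>
+      // an arithmetic operation is Double as soon as one operand is
+      if (typ(a, vars, sigs) == TDouble ||
+          typ(b, vars, sigs) == TDouble) TDouble else TInt
+    case Call(f, _) => sigs(f)._2  // first-order: just look up the result
+  }
+}
+\end{verbatim}
+
+  With a pass like this, the result type of a function can be filled in
+  from its body once the argument types are known, without any
+  unification machinery.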
\item \textbf{Built-In Functions}: The `prelude' comes
with several built-in functions: \texttt{new\_line()},