updated
authorChristian Urban <urbanc@in.tum.de>
Thu, 15 Nov 2018 14:23:55 +0000
changeset 203 eb188f9ac038
parent 202 f7bcb27d1940
child 204 9b45dd24271b
updated
README
cws/cw02.pdf
cws/cw02.tex
solutions2/docdiff.scala
templates2/danube.jar
templates2/danube.scala
templates2/docdiff.jar
templates2/docdiff.scala
--- a/README	Thu Nov 15 03:35:38 2018 +0000
+++ b/README	Thu Nov 15 14:23:55 2018 +0000
@@ -1,3 +1,7 @@
+scalac -Ydelambdafy:inline -d docdiff.jar docdiff.scala
+
+
+
 given two lists, is one a sublist of the other except one
 element
 
Binary file cws/cw02.pdf has changed
--- a/cws/cw02.tex	Thu Nov 15 03:35:38 2018 +0000
+++ b/cws/cw02.tex	Thu Nov 15 14:23:55 2018 +0000
@@ -6,12 +6,12 @@
 \begin{document}
 
 
-\section*{Coursework 7 (DocDiff and Danube.org)}
+\section*{Coursework 7 (Scala)}
 
-This coursework is worth 10\%. The first part and second part are due
+This coursework is worth 10\%. The first and second parts are due
 on 22 November at 11pm; the third, more advanced part, is due on 21
 December at 11pm. You are asked to implement Scala programs for
-measuring similarity in texts and for recommending movies
+measuring similarity in texts, and for recommending movies
 according to a ratings list.  Note the second part might include
 material you have not yet seen in the first two lectures. \bigskip
 
@@ -36,7 +36,7 @@
 any test cases on your own computer. For example you can call Scala on
 the command line with the option \texttt{-cp docdiff.jar} and then
 query any function from the template file. Say you want to find out
-what the function \texttt{occurences} produces: for this you just need
+what the function \texttt{occurrences} produces: for this you just need
 to prefix it with the object name \texttt{CW7a} (and \texttt{CW7b}
 respectively for \texttt{danube.jar}).  If you want to find out what
 these functions produce for the list \texttt{List("a", "b", "b")},
@@ -45,76 +45,101 @@
 \begin{lstlisting}[language={},numbers=none,basicstyle=\ttfamily\small]
 $ scala -cp docdiff.jar
   
-scala> CW7a.occurences(List("a", "b", "b"))
+scala> CW7a.occurrences(List("a", "b", "b"))
 ...
 \end{lstlisting}%$
 
 \subsection*{Hints}
 
+\noindent
+\textbf{For Part 1:} a useful operation involving regular
+expressions is
+\[\texttt{reg.findAllIn(s).toList}\]
+\noindent which finds all
+substrings in \texttt{s} that match the regular expression
+\texttt{reg}; useful list operations are: \texttt{.distinct} for
+removing duplicates from a list, \texttt{.count} for counting the number of
+elements in a list that satisfy some condition, \texttt{.toMap} for
+turning a list of pairs into a Map, \texttt{.sum} for adding up a list of
+integers, and \texttt{.max} for calculating the maximum of a list.\bigskip
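+
+\noindent
+Purely as an illustration (the snippet below is not part of the
+template and uses a made-up sample list), these operations behave as
+follows:
+
+\begin{lstlisting}[numbers=none]
+val lst = List("a", "b", "b")
+("""\w+""".r).findAllIn("a b, b!").toList  // List(a, b, b)
+lst.distinct                               // List(a, b)
+lst.count(_ == "b")                        // 2
+List(("a", 1), ("b", 2)).toMap             // Map(a -> 1, b -> 2)
+List(1, 2, 3).sum                          // 6
+List(1, 2, 3).max                          // 3
+\end{lstlisting}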
 
+\noindent
+\textbf{For Parts 2 + 3:} use \texttt{.split(",").toList} for splitting
+strings at commas (similarly at $\backslash$\texttt{n});
+\texttt{.getOrElse(..,..)} queries a Map, but also gives a
+default value if the Map is not defined at the given key; a Map can be `updated' by
+using \texttt{+}; \texttt{.contains} tests whether
+an element is included in a list, and \texttt{.filter} selects the elements
+of a list that satisfy some condition;
+\texttt{.sortBy(\_.\_2)} sorts a list of pairs according to the second
+elements in the pairs, from smallest to largest;
+\texttt{.take(n)} takes the first \texttt{n} elements of a list (or fewer if the list
+contains fewer than \texttt{n} elements).
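+
+\noindent
+Again purely as an illustration (the values below are made up), these
+operations can be tried out as follows:
+
+\begin{lstlisting}[numbers=none]
+"1,31,4.0".split(",").toList            // List(1, 31, 4.0)
+val m = Map("a" -> 1, "b" -> 2)
+m.getOrElse("c", 0)                     // 0
+m + ("c" -> 3)                          // Map(a -> 1, b -> 2, c -> 3)
+List(("x", 3), ("y", 1)).sortBy(_._2)   // List((y,1), (x,3))
+List(1, 2, 3).take(2)                   // List(1, 2)
+\end{lstlisting}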
 
 
 \newpage
 \subsection*{Part 1 (4 Marks, file docdiff.scala)}
 
-It seems source code plagiarism---stealing someone else's code---is a
-serious problem at other universities.\footnote{Surely, King's
-  students, after all their instructions and warnings, would never
-  commit such an offence.} Dedecting such plagiarism is time-consuming
-and disheartening. To aid the poor lecturers at other universities,
-let's implement a program that determines the similarity between two
-documents (be they code or English texts). A document will be
-represented as a list of strings.
+It seems source code plagiarism---stealing and submitting someone
+else's code---is a serious problem at other
+universities.\footnote{Surely, King's students, after all their
+  instructions and warnings, would never commit such an offence. Yes?}
+Detecting such plagiarism is time-consuming and disheartening for
+lecturers at those universities. To aid these poor souls, let's
+implement in this part a program that determines the similarity
+between two documents (be they source code or texts in English). A
+document will be represented as a list of strings.
 
 
 \subsection*{Tasks}
 
 \begin{itemize}
-\item[(1)] Implement a function that cleans a string by finding all
-  words in this string. For this use the regular expression
-  \texttt{"$\backslash$w+"} and the library function
-  \texttt{findAllIn}. The function should return a list of
-  strings.\\
+\item[(1)] Implement a function that `cleans' a string by finding all
+  (proper) words in this string. For this use the regular expression
+  \texttt{$\backslash$w+} for recognising word characters and the
+  library function \texttt{findAllIn}. The function should return a
+  document (a list of
+  strings).\\
   \mbox{}\hfill [1 Mark]
 
-\item[(2)] In order to compute the similarity between two documents, we
+\item[(2)] In order to compute the overlap between two documents, we
   associate each document with a \texttt{Map}. This Map represents the
-  strings in a document and how many times these strings occur in a
+  strings in a document and how many times these strings occur in the
   document. A simple (though slightly inefficient) method for counting
-  the number of string-occurences in a document is as follows: remove
+  the number of string-occurrences in a document is as follows: remove
   all duplicates from the document; for each of these (unique)
   strings, count how many times they occur in the original document.
-  Return a Map from strings to occurences. For example
+  Return a Map associating strings with occurrences. For example
 
   \begin{center}
-  \pcode{occurences(List("a", "b", "b", "c", "d"))}
+  \pcode{occurrences(List("a", "b", "b", "c", "d"))}
   \end{center}
 
   produces \pcode{Map(a -> 1, b -> 2, c -> 1, d -> 1)} and
 
   \begin{center}
-  \pcode{occurences(List("d", "b", "d", "b", "d"))}
+  \pcode{occurrences(List("d", "b", "d", "b", "d"))}
   \end{center}
 
   produces \pcode{Map(d -> 3, b -> 2)}.\hfill[1 Mark]
 
-\item[(3)] You can think of the Maps calculated under (2) as efficient
+\item[(3)] You can think of the Maps calculated under (2) as memory-efficient
   representations of sparse ``vectors''. In this subtask you need to
-  implement the \emph{product} of two vectors, sometimes also called
-  \emph{dot product}.\footnote{\url{https://en.wikipedia.org/wiki/Dot_product}}
+  implement the \emph{product} of two such vectors, sometimes also called
+  the \emph{dot product} of two vectors.\footnote{\url{https://en.wikipedia.org/wiki/Dot_product}}
 
-  For this implement a function that takes two documents
+  For this dot product, implement a function that takes two documents
   (\texttt{List[String]}) as arguments. The function first calculates
   the (unique) strings in both. For each string, it multiplies the
-  occurences in each document. If a string does not occur in one of the
-  documents, then the product is zero. At the end you
-  sum all products. For the two documents in (2) the dot product is 7:
+  corresponding occurrences in each document. If a string does not
+  occur in one of the documents, then the product for this string is zero. At the end
+  you need to add up all products. For the two documents in (2) the dot
+  product is 7, because
 
   \[
     \underbrace{1 * 0}_{"a"} \;\;+\;\;
     \underbrace{2 * 2}_{"b"} \;\;+\;\;
     \underbrace{1 * 0}_{"c"} \;\;+\;\;
-    \underbrace{1 * 3}_{"d"}
+    \underbrace{1 * 3}_{"d"} \qquad = 7
   \]  
   
   \hfill\mbox{[1 Mark]}
@@ -126,34 +151,134 @@
   \texttt{overlap}(d_1, d_2) = \frac{d_1 \cdot d_2}{max(d_1^2, d_2^2)}  
   \]
 
-  This function should return a \texttt{Double} between 0 and 1. The
+  You can expect this function to return a \texttt{Double} between 0 and 1. The
   overlap between the lists in (2) is $0.5384615384615384$.
 
-  Second implement a function that calculates the similarity of
-  two strings, by first extracting the strings using the function from (1)
-  and then calculating the overlap.
-  \hfill\mbox{[1 Mark]}
-\end{itemize}
+  Second, implement a function that calculates the similarity of
+  two strings by first extracting the words using the clean
+  function from (1)
+  and then calculating the overlap of the resulting documents.\\
+  \mbox{}\hfill\mbox{[1 Mark]}
+\end{itemize}\bigskip
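+
+\noindent
+To see where the overlap value in (4) comes from: for the two documents
+in (2) we have $d_1^2 = 1 + 4 + 1 + 1 = 7$, $d_2^2 = 9 + 4 = 13$ and
+$d_1 \cdot d_2 = 7$, and therefore
+
+\[
+  \texttt{overlap}(d_1, d_2) = \frac{7}{max(7, 13)} = \frac{7}{13} =
+  0.5384615384615384
+\]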
+
+
+\subsection*{Part 2 (2 Marks, file danube.scala)}
+
+You are creating Danube.co.uk, which you hope will be the next big thing
+in online movie renting. You know that you can save money by
+anticipating what movies people will rent; you will pass these savings
+on to your users by offering a discount if they rent movies that
+Danube.co.uk recommends.  
 
+Your task is to generate \emph{two} movie recommendations for every
+movie a user rents. To do this, you calculate what other
+renters who also watched this movie suggest through their positive ratings.
+Of course, some suggestions are more popular than others. You need to find
+the two most frequently suggested movies. Return fewer recommendations
+if fewer movies are suggested.
+
+The calculations will be based on the small datasets which the research lab
+GroupLens provides for education and development purposes.
+
+\begin{center}
+\url{https://grouplens.org/datasets/movielens/}
+\end{center}
+
+\noindent
+The slightly adapted CSV-files should be downloaded from within your
+Scala program from the following URLs:
 
 
-\newpage
-You are creating Danube.org, which you hope will be the next big thing
-in online movie provider. You know that you can save money by
-anticipating what movies people will rent; you will pass these savings
-on to your users by offering a discount if they rent movies that Danube.org
-recommends.  This assignment is meant to calculate 
+\begin{center}
+\begin{tabular}{ll}  
+  \url{https://nms.kcl.ac.uk/christian.urban/ratings.csv} & (940 KByte)\\
+  \url{https://nms.kcl.ac.uk/christian.urban/movies.csv}  & (280 KByte)\\
+\end{tabular}
+\end{center}
+
+\noindent
+The ratings.csv file is organised as userID, 
+movieID, and rating (which is between 0 and 5, with \emph{positive} ratings
+being 4 and 5). The file movies.csv is organised as
+movieID and full movie name. Both files still contain the usual
+CSV-file header (first line). In this part you are asked
+to implement functions that process these files. If bandwidth
+is an issue for you, download the files locally, but in the submitted
+version use \texttt{Source.fromURL} instead of \texttt{Source.fromFile}.
+
+\subsection*{Tasks}
 
+\begin{itemize}
+\item[(1)] Implement the function \pcode{get_csv_url} which takes a
+  URL-string as argument and requests the corresponding file. The two
+  URLs of interest are \pcode{ratings_url} and \pcode{movies_url},
+  which correspond to the CSV-files mentioned above.  The function should
+  return the CSV-file appropriately broken up into lines, and the
+  first line should be dropped (that is, omit the header of the CSV-file).
+  The result is a list of strings (the lines in the file). In case
+  the URL does not produce a file, return the empty list.\\
+  \mbox{}\hfill [1 Mark]
+
+\item[(2)] Implement two functions that process the (broken up)
+  CSV-files from (1). The \pcode{process_ratings} function filters out all
+  ratings below 4 and returns a list of (userID, movieID) pairs. The
+  \pcode{process_movies} function returns a list of (movieID, title) pairs.\\
+  \mbox{}\hfill [1 Mark]
+\end{itemize}  
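+
+\noindent
+For the error case in (1), one possible (though not the only) approach
+is to wrap the download in a \texttt{Try} from \texttt{scala.util}
+(already imported in the template) and fall back to a default, for
+example
+
+\begin{lstlisting}[numbers=none]
+Try(Source.fromURL(url).mkString).getOrElse("")
+\end{lstlisting}
+
+\noindent
+where \texttt{url} stands for the URL-string argument; splitting the
+result into lines and dropping the header is then done as described
+in (1).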
+  
+
+\subsection*{Part 3 (4 Marks, file danube.scala)}
+
+\subsection*{Tasks}
 
-To do this, you offer an incentive for people to upload their lists of
-recommended books. From their lists, you can establish suggested
-pairs. A pair of books is a suggested pair if both books appear on one
-person’s recommendation list. Of course, some suggested pairs are more
-popular than others. Also, any given book is paired with some books
-much more frequently than with others.
+\begin{itemize}
+\item[(3)] Implement a kind of grouping function that calculates a Map
+  containing the userIDs and all the corresponding recommendations for
+  each user (a list of movieIDs). This should be implemented in a tail
+  recursive fashion using a Map as accumulator. This Map is set to
+  \pcode{Map()} at the beginning of the calculation. For example
+
+\begin{lstlisting}[numbers=none]
+val lst = List(("1", "a"), ("1", "b"),
+               ("2", "x"), ("3", "a"),
+               ("2", "y"), ("3", "c"))
+groupById(lst, Map())
+\end{lstlisting}
+
+returns the ratings map
+
+\begin{center}
+  \pcode{Map(1 -> List(b, a), 2 -> List(y, x), 3 -> List(c, a))}.
+\end{center}
+
+\noindent
+The order in which the elements appear in these lists is unimportant.\\
+\mbox{}\hfill [1 Mark]
 
+\item[(4)] Implement a function that takes a ratings map and a movieID
+  as arguments.  The function calculates all suggestions containing the
+  given movie in their recommendations. It returns a list of all these
+  recommendations (each of them is a list and needs to have the given
+  movie deleted, otherwise it might happen that we recommend the same movie
+  ``back''). For example, for the Map from above and the movie
+  \pcode{"y"} we obtain \pcode{List(List("x"))}, and for the movie
+  \pcode{"a"} we get \pcode{List(List("b"), List("c"))}.\\
+  \mbox{}\hfill [1 Mark]
 
+\item[(5)] Implement a suggestions function which takes a ratings map
+  and a movieID as arguments. It calculates all the recommended movies,
+  sorted so that the most frequently suggested movie(s) come
+  first. This function returns \emph{all} suggested movieIDs as a list of
+  strings.\\
+  \mbox{}\hfill [1 Mark]
 
+\item[(6)]  
+  Then implement a recommendation function which generates a maximum
+  of two of the most-suggested movies (as calculated above), but returns
+  the actual movie names, not the movieIDs. If fewer movies are recommended,
+  then return fewer than two movie names.\\
+  \mbox{}\hfill [1 Mark]
+\end{itemize}
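+
+\noindent
+The accumulator pattern asked for in (3) has, in general, the following
+shape. It is shown here for a much simpler problem (summing a list of
+integers), purely as an illustration and not as part of the solution:
+
+\begin{lstlisting}[numbers=none]
+import scala.annotation.tailrec
+
+@tailrec
+def sumAll(xs: List[Int], acc: Int) : Int = xs match {
+  case Nil => acc
+  case x :: rest => sumAll(rest, acc + x)
+}
+
+sumAll(List(1, 2, 3), 0)   // 6
+\end{lstlisting}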
 
 \end{document}
 
--- a/solutions2/docdiff.scala	Thu Nov 15 03:35:38 2018 +0000
+++ b/solutions2/docdiff.scala	Thu Nov 15 14:23:55 2018 +0000
@@ -10,29 +10,29 @@
 //
 //         some_regex.findAllIn(some_string)
 //
-//    The words should be Returned as a lsit of strings.
+//    The words should be Returned as a list of strings.
 
 def clean(s: String) : List[String] = 
   ("""\w+""".r).findAllIn(s).toList
 
 
-//(2) The function occurences calculates the number of times  
-//    strings occur in a list of strings. These occurences should 
+//(2) The function occurrences calculates the number of times  
+//    strings occur in a list of strings. These occurrences should 
 //    be calculated as a Map from strings to integers.
 
-def occurences(xs: List[String]): Map[String, Int] =
+def occurrences(xs: List[String]): Map[String, Int] =
   (for (x <- xs.distinct) yield (x, xs.count(_ == x))).toMap
 
 //(3) This functions calculates the dot-product of two documents
-//    (list of strings). For this it calcualtes the occurence
-//    maps from (2) and then multiplies the corresponding occurences. 
+//    (list of strings). For this it calculates the occurrence
+//    maps from (2) and then multiplies the corresponding occurrences. 
 //    If a string does not occur in a document, the product is zero.
 //    The function finally sums up all products. 
 
 def prod(lst1: List[String], lst2: List[String]) : Int = {
     val words = (lst1 ::: lst2).distinct
-    val occs1 = occurences(lst1)
-    val occs2 = occurences(lst2)
+    val occs1 = occurrences(lst1)
+    val occs2 = occurrences(lst2)
     words.map{ w => occs1.getOrElse(w, 0) * occs2.getOrElse(w, 0) }.sum
 }
 
@@ -57,8 +57,8 @@
 val list1 = List("a", "b", "b", "c", "d") 
 val list2 = List("d", "b", "d", "b", "d")
 
-occurences(List("a", "b", "b", "c", "d"))   // Map(a -> 1, b -> 2, c -> 1, d -> 1)
-occurences(List("d", "b", "d", "b", "d"))   // Map(d -> 3, b -> 2)
+occurrences(List("a", "b", "b", "c", "d"))   // Map(a -> 1, b -> 2, c -> 1, d -> 1)
+occurrences(List("d", "b", "d", "b", "d"))   // Map(d -> 3, b -> 2)
 
 prod(list1,list2) // 7 
 
Binary file templates2/danube.jar has changed
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/templates2/danube.scala	Thu Nov 15 14:23:55 2018 +0000
@@ -0,0 +1,176 @@
+// Part 2 and 3 about Movie Recommendations 
+// at Danube.co.uk
+//===========================================
+
+import io.Source
+import scala.util._
+
+// (1) Implement the function get_csv_url which takes a URL-string
+//     as argument and requests the corresponding file. The two URLs
+//     of interest are ratings_url and movies_url, which correspond 
+//     to CSV-files.
+//
+//     The function should ReTurn the CSV-file appropriately broken
+//     up into lines, and the first line should be dropped (that is without
+//     the header of the CSV-file). The result is a list of strings (lines
+//     in the file).
+
+//def get_csv_url(url: String) : List[String] = ...
+
+
+val ratings_url = """https://nms.kcl.ac.uk/christian.urban/ratings.csv"""
+val movies_url = """https://nms.kcl.ac.uk/christian.urban/movies.csv"""
+
+// testcases
+//-----------
+//val ratings = get_csv_url(ratings_url)
+//val movies = get_csv_url(movies_url)
+
+//ratings.length  // 87313
+//movies.length   // 9742
+
+
+
+// (2) Implement two functions that process the CSV-files from (1). The ratings
+//     function filters out all ratings below 4 and ReTurns a list of 
+//     (userID, movieID) pairs. The movies function just ReTurns a list 
+//     of (movieID, title) pairs.
+
+
+//def process_ratings(lines: List[String]) : List[(String, String)] = ...
+
+//def process_movies(lines: List[String]) : List[(String, String)] = ...
+
+
+// testcases
+//-----------
+//val good_ratings = process_ratings(ratings)
+//val movie_names = process_movies(movies)
+
+//good_ratings.length   //48580
+//movie_names.length    // 9742
+
+
+
+//==============================================
+// Do not change anything below, unless you want 
+// to submit the file for the advanced part 3!
+//==============================================
+
+
+
+// (3) Implement a grouping function that calculates a Map
+//     containing the userIDs and all the corresponding recommendations 
+//     (list of movieIDs). This  should be implemented in a tail
+//     recursive fashion, using a Map m as accumulator. This Map m
+//     is set to Map() at the beginning of the calculation.
+
+//def groupById(ratings: List[(String, String)], 
+//              m: Map[String, List[String]]) : Map[String, List[String]] = ...
+
+
+// testcases
+//-----------
+//val ratings_map = groupById(good_ratings, Map())
+//val movies_map = movie_names.toMap
+
+//ratings_map.get("414").get.map(movies_map.get(_)) 
+//    => most prolific recommender with 1227 positive ratings
+
+//ratings_map.get("474").get.map(movies_map.get(_)) 
+//    => second-most prolific recommender with 787 positive ratings
+
+//ratings_map.get("214").get.map(movies_map.get(_)) 
+//    => least prolific recommender with only 1 positive rating
+
+
+
+// (4) Implement a function that takes a ratings map and a movie_name as argument.
+//     The function calculates all suggestions containing
+//     the movie in its recommendations. It ReTurns a list of all these
+//     recommendations (each of them is a list and needs to have the movie deleted, 
+//     otherwise it might happen that we recommend the same movie back).
+
+
+//def favourites(m: Map[String, List[String]], mov: String) : List[List[String]] = ...
+
+
+// testcases
+//-----------
+// movie ID "912" -> Casablanca (1942)
+//          "858" -> Godfather
+//          "260" -> Star Wars: Episode IV - A New Hope (1977)
+
+//favourites(ratings_map, "912").length  // => 80
+
+// That means there are 80 users that recommend the movie with ID 912.
+// Of these 80  users, 55 gave a good rating to movie 858 and
+// 52 a good rating to movies 260, 318, 593.
+
+
+
+// (5) Implement a suggestions function which takes a rating
+//     map and a movie_name as arguments. It calculates all the recommended
+//     movies sorted according to the most frequently suggested movie(s) first.
+
+//def suggestions(recs: Map[String, List[String]], 
+//                mov_name: String) : List[String] = ...
+
+
+// testcases
+//-----------
+
+//suggestions(ratings_map, "912")
+//suggestions(ratings_map, "912").length  
+// => 4110 suggestions with List(858, 260, 318, 593, ...)
+//    being the most frequently suggested movies
+
+
+
+// (6) Implement a recommendations function which generates at most
+//     *two* of the most frequently suggested movies. It ReTurns the 
+//     actual movie names, not the movieIDs.
+
+
+//def recommendations(recs: Map[String, List[String]],
+//                    movs: Map[String, String],
+//                    mov_name: String) : List[String] = ...
+
+
+
+// testcases
+//-----------
+// recommendations(ratings_map, movies_map, "912")
+//   => List(Godfather, Star Wars: Episode IV - A New Hope (1977))
+
+//recommendations(ratings_map, movies_map, "260")
+//   => List(Star Wars: Episode V - The Empire Strikes Back (1980), 
+//           Star Wars: Episode VI - Return of the Jedi (1983))
+
+// recommendations(ratings_map, movies_map, "2")
+//   => List(Lion King, Jurassic Park (1993))
+
+// recommendations(ratings_map, movies_map, "0")
+//   => Nil
+
+// recommendations(ratings_map, movies_map, "1")
+//   => List(Shawshank Redemption, Forrest Gump (1994))
+
+// recommendations(ratings_map, movies_map, "4")
+//   => Nil  (there are three ratings for this movie in ratings.csv but they are not positive)     
+
+
+// If you want to calculate the recommendations for all movies,
+// then use this code (it will take a few seconds calculation time).
+
+//val all = for (name <- movie_names.map(_._1)) yield {
+//  recommendations(ratings_map, movies_map, name)
+//}
+
+// helper functions
+//List().take(2)
+//List(1).take(2)
+//List(1,2).take(2)
+//List(1,2,3).take(2)
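+
+// Further illustrations (not needed for the tasks) of two list
+// operations mentioned in the hints of the coursework description:
+//List(("x", 3), ("y", 1)).sortBy(_._2)   // List((y,1), (x,3))
+//Map("a" -> 1).getOrElse("b", 0)         // 0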
+
+
Binary file templates2/docdiff.jar has changed
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/templates2/docdiff.scala	Thu Nov 15 14:23:55 2018 +0000
@@ -0,0 +1,110 @@
+// Part 1 about Code Similarity
+//==============================
+
+//(1) Complete the clean function below. It should find
+//    all words in a string using the regular expression
+//    \w+  and the library function 
+//
+//         some_regex.findAllIn(some_string)
+//
+//    The words should be Returned as a list of strings.
+
+
+//def clean(s: String) : List[String] = ...
+  
+
+
+//(2) The function occurrences calculates the number of times  
+//    strings occur in a list of strings. These occurrences should 
+//    be calculated as a Map from strings to integers.
+
+
+//def occurrences(xs: List[String]): Map[String, Int] = ..
+
+
+//(3) This function calculates the dot-product of two documents
+//    (list of strings). For this it calculates the occurrence
+//    maps from (2) and then multiplies the corresponding occurrences. 
+//    If a string does not occur in a document, the product is zero.
+//    The function finally sums up all products. 
+
+
+//def prod(lst1: List[String], lst2: List[String]) : Int = ..
+
+
+//(4) Complete the functions overlap and similarity. The overlap of
+//    two documents is calculated by the formula given in the assignment
+//    description. The similarity of two strings is given by the overlap
+//    of the cleaned strings (see (1)).  
+
+
+//def overlap(lst1: List[String], lst2: List[String]) : Double = ...
+
+//def similarity(s1: String, s2: String) : Double = ...
+
+
+
+
+/* Test cases
+
+
+val list1 = List("a", "b", "b", "c", "d") 
+val list2 = List("d", "b", "d", "b", "d")
+
+occurrences(List("a", "b", "b", "c", "d"))   // Map(a -> 1, b -> 2, c -> 1, d -> 1)
+occurrences(List("d", "b", "d", "b", "d"))   // Map(d -> 3, b -> 2)
+
+prod(list1,list2) // 7 
+
+overlap(list1, list2)   // 0.5384615384615384
+overlap(list2, list1)   // 0.5384615384615384
+overlap(list1, list1)   // 1.0
+overlap(list2, list2)   // 1.0
+
+// Plagiarism examples from 
+// https://desales.libguides.com/avoidingplagiarism/examples
+
+val orig1 = """There is a strong market demand for eco-tourism in
+Australia. Its rich and diverse natural heritage ensures Australia's
+capacity to attract international ecotourists and gives Australia a
+comparative advantage in the highly competitive tourism industry."""
+
+val plag1 = """There is a high market demand for eco-tourism in
+Australia. Australia has a comparative advantage in the highly
+competitive tourism industry due to its rich and varied natural
+heritage which ensures Australia's capacity to attract international
+ecotourists."""
+
+similarity(orig1, plag1) // 0.8679245283018868
+
+
+// Plagiarism examples from 
+// https://www.utc.edu/library/help/tutorials/plagiarism/examples-of-plagiarism.php
+
+val orig2 = """No oil spill is entirely benign. Depending on timing and
+location, even a relatively minor spill can cause significant harm to
+individual organisms and entire populations. Oil spills can cause
+impacts over a range of time scales, from days to years, or even
+decades for certain spills. Impacts are typically divided into acute
+(short-term) and chronic (long-term) effects. Both types are part of a
+complicated and often controversial equation that is addressed after
+an oil spill: ecosystem recovery."""
+
+val plag2 = """There is no such thing as a "good" oil spill. If the
+time and place are just right, even a small oil spill can cause damage
+to sensitive ecosystems. Further, spills can cause harm days, months,
+years, or even decades after they occur. Because of this, spills are
+usually broken into short-term (acute) and long-term (chronic)
+effects. Both of these types of harm must be addressed in ecosystem
+recovery: a controversial tactic that is often implemented immediately
+following an oil spill."""
+
+overlap(clean(orig2), clean(plag2))  // 0.728
+similarity(orig2, plag2)             // 0.728
+
+
+ 
+// The punchline: everything above 0.6 looks suspicious and 
+// should be investigated by staff.
+
+*/