# HG changeset patch
# User Christian Urban <urbanc@in.tum.de>
# Date 1323161138 0
# Node ID e0d36fd0a8fde63df9878099721784b827b00b6b
# Parent  87dc0c744ab2bc123059b9a9e77fdb4799fa1822
tuned

diff -r 87dc0c744ab2 -r e0d36fd0a8fd projects.html
--- a/projects.html	Sat Dec 03 01:52:30 2011 +0000
+++ b/projects.html	Tue Dec 06 08:45:38 2011 +0000
@@ -30,8 +30,9 @@
 <H2>2011/12 MSc Individual Projects</H2>
 <H4>Supervisor: Christian Urban</H4>
-<H4>Email: @kcl Office: Strand Building S6.30</H4>
-<H4>If you are interested in a project, please send me email and we can discuss details.</H4>
+<H4>Email: christian dot urban at kcl dot ac dot uk, Office: Strand Building S6.30</H4>
+<H4>If you are interested in a project, please send me an email and we can discuss details. Please include
+a short description of your programming and computer science background in your first email. Thanks.</H4>
 
 <ul class="striped">
 <li>
 <H4>[CU1] Implementing a SAT-Solver in a Functional Programming Language</H4>
@@ -199,14 +200,14 @@
   Lexing and parsing are usually done using automated tools, like
   <A HREF="http://en.wikipedia.org/wiki/Lex_programming_tool">lex</A> and
   <A HREF="http://en.wikipedia.org/wiki/Yacc">yacc</A>. The problem
-  with them is that they "work when they work", but if not, they are
+  with them is that they "work when they work", but if they do not, then they are
   <A HREF="http://en.wikipedia.org/wiki/Black_box">black boxes</A>
   which are difficult to debug and change. They are really quite
-  clumsy, to the point that Might wrote a paper titled
+  clumsy, to the point that Might and Darais wrote a paper titled
   "<A HREF="http://arxiv.org/pdf/1010.5023v1">Yacc is dead</A>".</p>
 
   <p>
-  There is simple algorithm for regular expression matching (that is lexing).
+  There is a simple algorithm for regular expression matching (that is lexing).
   This algorithm was introduced by
   <A HREF="http://en.wikipedia.org/wiki/Janusz_Brzozowski_(computer_scientist)">Brzozowski</A>
   in 1964. It is based on the notion of derivatives of regular expressions and
@@ -239,14 +240,18 @@
   <B>Description:</B> Solving the problem of deciding equivalence of regular
   expressions can be used to decide a number of problems in automated reasoning.
   Therefore one likes to
-  have a method for equivalence checking that is as fast as possible.
+  have a method for equivalence checking that is as fast as possible. There have
+  been a number of algorithms proposed in the past, but one based on a method
+  by Antimirov and Mosses seems relatively simple and easy to implement.
   </p>
 
   <p>
   <B>Tasks:</B> The task is to implement the algorithm by Antimirov and Mosses
   and compare it to other methods. Hopefully the algorithm can be tuned to be
   faster than other
-  methods.
+  methods. The project can be carried out in almost all programming languages, but
+  as usual functional programming languages such as Scala, ML and Haskell have an edge
+  for this kind of problem.
   </p>
 
   <p>
@@ -254,6 +259,8 @@
   Central to this project is the paper
   <A HREF="http://www.dcc.fc.up.pt/~nam/publica/ijcs08.pdf">here</A>. Other
   methods have been described, for example,
   <A HREF="http://www4.informatik.tu-muenchen.de/~krauss/papers/rexp.pdf">here</A>.
+  A relatively complicated method, based on automata, is described
+  <A HREF="http://sardes.inrialpes.fr/~braibant/atbr/">here</A>.
   </p>
 </ul>
@@ -263,7 +270,7 @@
 
 <P><!-- Created: Tue Mar 4 00:23:25 GMT 1997 -->
 <!-- hhmts start -->
-Last modified: Fri Dec 2 03:26:32 GMT 2011
+Last modified: Tue Dec 6 08:41:27 GMT 2011
 <!-- hhmts end -->
 <a href="http://validator.w3.org/check/referer">[Validate this page.]</a>
 </BODY>
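
The regular expression matching project above rests on Brzozowski's
derivatives, and the whole algorithm needs only two recursive functions:
one testing whether a regex can match the empty string, and one building
the derivative with respect to a character. The following minimal Scala
sketch (Scala being one of the languages suggested above) is my own
illustration of the idea; the datatype and function names are invented,
not taken from any of the cited papers.

  // Regular expressions as a plain algebraic datatype.
  abstract class Rexp
  case object ZERO extends Rexp                     // matches no string
  case object ONE extends Rexp                      // matches the empty string
  case class CHR(c: Char) extends Rexp              // matches exactly the character c
  case class ALT(r1: Rexp, r2: Rexp) extends Rexp   // alternative  r1 + r2
  case class SEQ(r1: Rexp, r2: Rexp) extends Rexp   // sequence     r1 r2
  case class STAR(r: Rexp) extends Rexp             // iteration    r*

  // Can r match the empty string?
  def nullable(r: Rexp): Boolean = r match {
    case ZERO        => false
    case ONE         => true
    case CHR(_)      => false
    case ALT(r1, r2) => nullable(r1) || nullable(r2)
    case SEQ(r1, r2) => nullable(r1) && nullable(r2)
    case STAR(_)     => true
  }

  // Brzozowski derivative: the regex for what r matches after seeing c.
  def der(c: Char, r: Rexp): Rexp = r match {
    case ZERO        => ZERO
    case ONE         => ZERO
    case CHR(d)      => if (c == d) ONE else ZERO
    case ALT(r1, r2) => ALT(der(c, r1), der(c, r2))
    case SEQ(r1, r2) =>
      if (nullable(r1)) ALT(SEQ(der(c, r1), r2), der(c, r2))
      else SEQ(der(c, r1), r2)
    case STAR(s)     => SEQ(der(c, s), STAR(s))
  }

  // r matches s iff the derivative w.r.t. all of s is nullable.
  def matches(r: Rexp, s: String): Boolean =
    nullable(s.foldLeft(r)((r1, c) => der(c, r1)))

  // Example: (a + b)* matches "abba".
  // matches(STAR(ALT(CHR('a'), CHR('b'))), "abba")  gives true

In practice the derivatives should be simplified after each step (for
instance ALT(ZERO, r) to r), otherwise they grow quickly; this is where
the tuning mentioned in the project descriptions starts.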
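
For the equivalence-checking project, the flavour of the Antimirov-style
method can be sketched on top of the definitions above. Antimirov's
partial derivatives return a set of regular expressions, and every regex
has only finitely many of them, so a bisimulation-style loop over pairs
of derivative sets terminates without any clever normalisation. This is
only my reading of the general idea, with invented helper names, not the
algorithm exactly as given in the Antimirov and Mosses paper.

  // Antimirov partial derivative of r with respect to c.
  def pder(c: Char, r: Rexp): Set[Rexp] = r match {
    case ZERO | ONE  => Set()
    case CHR(d)      => if (c == d) Set(ONE) else Set()
    case ALT(r1, r2) => pder(c, r1) ++ pder(c, r2)
    case SEQ(r1, r2) =>
      val ds = pder(c, r1).map(d => SEQ(d, r2): Rexp)
      if (nullable(r1)) ds ++ pder(c, r2) else ds
    case STAR(s)     => pder(c, s).map(d => SEQ(d, r): Rexp)
  }

  // Lift to sets of regexes, read as big alternatives.
  def nullables(rs: Set[Rexp]): Boolean = rs.exists(nullable)
  def pders(c: Char, rs: Set[Rexp]): Set[Rexp] = rs.flatMap(pder(c, _))

  // r1 and r2 are equivalent iff no reachable pair of derivative sets
  // disagrees on nullability; alphabet must cover the letters of r1, r2.
  def equiv(r1: Rexp, r2: Rexp, alphabet: Set[Char]): Boolean = {
    var seen = Set[(Set[Rexp], Set[Rexp])]()
    var todo = List((Set[Rexp](r1), Set[Rexp](r2)))
    while (todo.nonEmpty) {
      val (s, t) = todo.head
      todo = todo.tail
      if (!seen((s, t))) {
        if (nullables(s) != nullables(t)) return false
        seen += ((s, t))
        for (c <- alphabet) todo = (pders(c, s), pders(c, t)) :: todo
      }
    }
    true
  }

  // Example: (ab)* is equivalent to 1 + a(ba)*b.
  // equiv(STAR(SEQ(CHR('a'), CHR('b'))),
  //       ALT(ONE, SEQ(CHR('a'), SEQ(STAR(SEQ(CHR('b'), CHR('a'))), CHR('b')))),
  //       Set('a', 'b'))  gives true

The comparison against the methods in the cited papers, as the task
description asks, would then mainly concern how the pairs and the
derivative sets are stored and simplified.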