# HG changeset patch # User Christian Urban # Date 1483800746 0 # Node ID 9476086849adc12cef5247637e1a3327bec0b337 # Parent 1f4e81950ab4e7d7c13d33bbadcd49ad7944f85f updated diff -r 1f4e81950ab4 -r 9476086849ad handouts/ho01.pdf Binary file handouts/ho01.pdf has changed diff -r 1f4e81950ab4 -r 9476086849ad handouts/ho01.tex --- a/handouts/ho01.tex Mon Nov 14 15:50:42 2016 +0000 +++ b/handouts/ho01.tex Sat Jan 07 14:52:26 2017 +0000 @@ -32,26 +32,27 @@ \begin{document} -\fnote{\copyright{} Christian Urban, King's College London, 2014, 2015, 2016} +\fnote{\copyright{} Christian Urban, King's College London, 2014, 2015, 2016, 2017} \section*{Handout 1} This module is about text processing, be it for web-crawlers, -compilers, dictionaries, DNA-data and so on. When looking for -a particular string, like $abc$ in a large text we can use the -Knuth-Morris-Pratt algorithm, which is currently the most -efficient general string search algorithm. But often we do -\emph{not} just look for a particular string, but for string -patterns. For example in program code we need to identify what -are the keywords, what are the identifiers etc. A pattern for -identifiers could be stated as: they start with a letter, -followed by zero or more letters, numbers and underscores. -Also often we face the problem that we are given a string (for -example some user input) and want to know whether it matches a -particular pattern---be it an email address, for example. In -this way we can exclude user input that would otherwise have -nasty effects on our program (crashing it or making it go into -an infinite loop, if not worse).\smallskip +compilers, dictionaries, DNA-data and so on. When looking for a +particular string, like $abc$ in a large text we can use the +Knuth-Morris-Pratt algorithm, which is currently the most efficient +general string search algorithm. But often we do \emph{not} just look +for a particular string, but for string patterns. For example in +program code we need to identify what are the keywords (if, then, +while, etc), what are the identifiers (variable names). A pattern for +identifiers could be stated as: they start with a letter, followed by +zero or more letters, numbers and underscores. Also often we face the +problem that we are given a string (for example some user input) and +want to know whether it matches a particular pattern---be it an email +address, for example. In this way we can exclude user input that would +otherwise have nasty effects on our program (crashing it or making it +go into an infinite loop, if not worse). The point is that the fast +Knuth-Morris-Pratt algorithm for strings is not good enough for such +string patterns.\smallskip \defn{Regular expressions} help with conveniently specifying such patterns. The idea behind regular expressions is that @@ -72,7 +73,7 @@ \noindent where the first part, the user name, matches one or more lowercase letters (\pcode{a-z}), digits (\pcode{0-9}), underscores, dots and hyphens. The \pcode{+} at the end of the brackets ensures -the ``one or more''. Then comes the \pcode{@}-sign, followed +the ``one or more''. Then comes the email \pcode{@}-sign, followed by the domain name which must be one or more lowercase letters, digits, underscores, dots or hyphens. Note there cannot be an underscore in the domain name. Finally there must @@ -96,11 +97,10 @@ disposable.style.email.with+symbol@example.com \end{lstlisting} -\noindent according to the regular expression we specified in -\eqref{email}. 
Whether this is intended or not is a different -question (the second email above is actually an acceptable -email address acording to the RFC 5322 standard for email -addresses). +\noindent according to the regular expression we specified in line +\eqref{email} above. Whether this is intended or not is a different +question (the second email above is actually an acceptable email +address acording to the RFC 5322 standard for email addresses). As mentioned above, identifiers, or variables, in program code are often required to satisfy the constraints that they start @@ -178,14 +178,14 @@ \subsection*{Why Study Regular Expressions?} -Regular expressions were introduced by Kleene in the 1950ies -and they have been object of intense study since then. They -are nowadays pretty much ubiquitous in computer science. There -are many libraries implementing regular expressions. I am sure -you have come across them before (remember PRA?). Why on earth -then is there any interest in studying them again in depth in -this module? Well, one answer is in the following two graphs about -regular expression matching in Python, Ruby and Java. +Regular expressions were introduced by Kleene in the 1950ies and they +have been object of intense study since then. They are nowadays pretty +much ubiquitous in computer science. There are many libraries +implementing regular expressions. I am sure you have come across them +before (remember the PRA module?). Why on earth then is there any +interest in studying them again in depth in this module? Well, one +answer is in the following two graphs about regular expression +matching in Python, Ruby and Java. \begin{center} \begin{tabular}{@{\hspace{-1mm}}c@{\hspace{-1mm}}c@{}} @@ -423,9 +423,9 @@ \] \noindent -which is much less readable than \eqref{email}. Similarly for -the regular expression that matches the string $hello$ we -should write +which is much less readable than the regular expression in +\eqref{email}. Similarly for the regular expression that matches the +string $hello$ we should write \[ h \cdot e \cdot l \cdot l \cdot o @@ -460,8 +460,8 @@ \begin{center} \begin{tabular}{rcl} -$r+$ & $\mapsto$ & $r\cdot r^*$\\ -$r?$ & $\mapsto$ & $\ONE + r$\\ +$r^+$ & $\mapsto$ & $r\cdot r^*$\\ +$r^?$ & $\mapsto$ & $\ONE + r$\\ $\backslash d$ & $\mapsto$ & $0 + 1 + 2 + \ldots + 9$\\ $[\text{\it a - z}]$ & $\mapsto$ & $a + b + \ldots + z$\\ \end{tabular} @@ -569,52 +569,49 @@ \subsection*{My Fascination for Regular Expressions} -Up until a few years ago I was not really interested in -regular expressions. They have been studied for the last 60 -years (by smarter people than me)---surely nothing new can be -found out about them. I even have the vague recollection that -I did not quite understand them during my undergraduate study. If I remember -correctly,\footnote{That was really a long time ago.} I got -utterly confused about $\ONE$ (which my lecturer wrote as -$\epsilon$) and the empty string.\footnote{Obviously the -lecturer must have been bad.} Since my then, I have used -regular expressions for implementing lexers and parsers as I -have always been interested in all kinds of programming -languages and compilers, which invariably need regular -expressions in some form or shape. +Up until a few years ago I was not really interested in regular +expressions. They have been studied for the last 60 years (by smarter +people than me)---surely nothing new can be found out about them. 
I
+even have the vague recollection that I did not quite understand them
+during my undergraduate study. If I remember correctly,\footnote{That
+  was really a long time ago.} I got utterly confused about $\ONE$
+(which my lecturer wrote as $\epsilon$) and the empty string (which he
+also wrote as $\epsilon$).\footnote{Obviously the lecturer must have
+  been bad ;o)} Since then, I have used regular expressions for
+implementing lexers and parsers as I have always been interested in
+all kinds of programming languages and compilers, which invariably
+need regular expressions in some form or shape.
 
 To understand my fascination \emph{nowadays} with regular
 expressions, you need to know that my main scientific interest
-for the last 14 years has been with theorem provers. I am a
+for the last 17 years has been with theorem provers. I am a
 core developer of one of
 them.\footnote{\url{http://isabelle.in.tum.de}} Theorem provers are
 systems in which you can formally reason about mathematical concepts,
 but also about programs. In this way
-theorem prover can help with the manacing problem of writing bug-free code. Theorem provers have
+theorem provers can help with the menacing problem of writing bug-free code. Theorem provers have
 proved already their value in a number of cases (even in terms
 of hard cash), but they are still clunky and difficult to use
 by average programmers.
 
-Anyway, in about 2011 I came across the notion of
-\defn{derivatives of regular expressions}. This notion allows
-one to do almost all calculations in regular language theory
-on the level of regular expressions, not needing any automata (you will
-see we only touch briefly on automata in lecture 3).
-This is crucial because automata are graphs and it is rather
-difficult to reason about graphs in theorem provers. In
-contrast, reasoning about regular expressions is easy-peasy in
-theorem provers. Is this important? I think yes, because
-according to Kuklewicz nearly all POSIX-based regular
-expression matchers are
+Anyway, in about 2011 I came across the notion of \defn{derivatives of
+  regular expressions}. This notion allows one to do almost all
+calculations with regular expressions on the level of regular
+expressions, not needing any automata (you will see we only touch
+briefly on automata in lecture 3). Automata are usually the main
+object of study in formal language courses. The avoidance of automata
+is crucial because automata are graphs and it is rather difficult to
+reason about graphs in theorem provers. In contrast, reasoning about
+regular expressions is easy-peasy in theorem provers. Is this
+important? I think yes, because according to Kuklewicz nearly all
+POSIX-based regular expression matchers are
 buggy.\footnote{\url{http://www.haskell.org/haskellwiki/Regex_Posix}}
-With my PhD student Fahad Ausaf I proved
-the correctness for one such matcher that was
-proposed by Sulzmann and Lu in
-2014.\footnote{\url{http://goo.gl/bz0eHp}} Hopefully we can
-prove that the machine code(!)
-that implements this code efficiently is correct also. Writing
-programs in this way does not leave any room for potential
-errors or bugs. How nice!
+With my PhD student Fahad Ausaf I proved the correctness for one such
+matcher that was proposed by Sulzmann and Lu in
+2014.\footnote{\url{http://goo.gl/bz0eHp}} Hopefully we can prove that
+the machine code(!) that implements this matcher efficiently is also
+correct. Writing programs in this way does not leave any room for
+potential errors or bugs. How nice!
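+
+To make the notion of derivatives more concrete, here is a minimal
+sketch of such a derivative-based matcher in Scala (a condensed version
+of the code in \texttt{progs/re1.scala} in this repository; the
+simplification step used in the later versions to keep derivatives
+small is omitted):
+
+{\small\begin{lstlisting}[language={},keywordstyle=\color{black},numbers=none]
+abstract class Rexp
+case object ZERO extends Rexp                    // matches nothing
+case object ONE extends Rexp                     // matches the empty string
+case class CHAR(c: Char) extends Rexp            // matches the character c
+case class ALT(r1: Rexp, r2: Rexp) extends Rexp  // alternative
+case class SEQ(r1: Rexp, r2: Rexp) extends Rexp  // sequence
+case class STAR(r: Rexp) extends Rexp            // star
+
+// can r match the empty string?
+def nullable(r: Rexp): Boolean = r match {
+  case ZERO => false
+  case ONE => true
+  case CHAR(_) => false
+  case ALT(r1, r2) => nullable(r1) || nullable(r2)
+  case SEQ(r1, r2) => nullable(r1) && nullable(r2)
+  case STAR(_) => true
+}
+
+// derivative of r with respect to the character c
+def der(c: Char, r: Rexp): Rexp = r match {
+  case ZERO => ZERO
+  case ONE => ZERO
+  case CHAR(d) => if (c == d) ONE else ZERO
+  case ALT(r1, r2) => ALT(der(c, r1), der(c, r2))
+  case SEQ(r1, r2) =>
+    if (nullable(r1)) ALT(SEQ(der(c, r1), r2), der(c, r2))
+    else SEQ(der(c, r1), r2)
+  case STAR(r1) => SEQ(der(c, r1), STAR(r1))
+}
+
+// a string is matched by r iff, after taking the derivative of r
+// w.r.t. every character of the string, the result can match ""
+def matches(r: Rexp, s: String): Boolean =
+  nullable(s.foldLeft(r)((r1, c) => der(c, r1)))
+\end{lstlisting}}
+
+\noindent For example, \texttt{matches(SEQ(CHAR('a'), STAR(CHAR('b'))), "abb")}
+evaluates to \texttt{true}, whereas the same call with the string
+\texttt{"ba"} evaluates to \texttt{false}.
+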
 
 What also helped with my fascination with regular
 expressions is that we could indeed find out new things about them that
@@ -629,42 +626,43 @@
 book\footnote{\url{http://goo.gl/fD0eHx}} on regular
 exprssions did not know better. Well, we showed it can also be done
 with regular expressions
 only.\footnote{\url{http://www.inf.kcl.ac.uk/staff/urbanc/Publications/rexp.pdf}}
-What a feeling if you are an outsider to the subject!
+What a feeling when you are an outsider to the subject!
 
-To conclude: Despite my early ignorance about regular
-expressions, I find them now very interesting. They have a
-beautiful mathematical theory behind them, which can be
-sometimes quite deep and which sometimes contains hidden snares. They have
-practical importance (remember the shocking runtime of the
-regular expression matchers in Python, Ruby and Java in some
-instances and the problems in Stack Exchange and the Atom editor).
-People who are not very familiar with the
-mathematical background of regular expressions get them
-consistently wrong. The hope is that we can do better in the
-future---for example by proving that the algorithms actually
-satisfy their specification and that the corresponding
-implementations do not contain any bugs. We are close, but not
-yet quite there.
+To conclude: Despite my early ignorance about regular expressions, I
+find them now very interesting. They have a beautiful mathematical
+theory behind them, which can sometimes be quite deep and which
+sometimes contains hidden snares. They have practical importance
+(remember the shocking runtime of the regular expression matchers in
+Python, Ruby and Java in some instances and the problems in Stack
+Exchange and the Atom editor). People who are not very familiar with
+the mathematical background of regular expressions get them
+consistently wrong (surprising given they are supposed to be a core
+skill for computer scientists). The hope is that we can do better in
+the future---for example by proving that the algorithms actually
+satisfy their specification and that the corresponding implementations
+do not contain any bugs. We are close, but not yet quite there.
 
 Notwithstanding my fascination, I am also happy to admit that regular
 expressions have their shortcomings. There are some well-known
-``theoretical'' shortcomings, for example recognising strings
-of the form $a^{n}b^{n}$. I am not so bothered by them. What I
-am bothered about is when regular expressions are in the way
-of practical programming. For example, it turns out that the
-regular expression for email addresses shown in \eqref{email}
-is hopelessly inadequate for recognising all of them (despite
-being touted as something every computer scientist should know
-about). The W3 Consortium (which standardises the Web)
-proposes to use the following, already more complicated
-regular expressions for email addresses:
+``theoretical'' shortcomings, for example recognising strings of the
+form $a^{n}b^{n}$ is not possible with regular expressions. This means,
+for example, that recognising whether parentheses are well-nested
+is impossible with (basic) regular expressions. I am not so bothered
+by these shortcomings. What I am bothered about is when regular
+expressions are in the way of practical programming. For example, it
+turns out that the regular expression for email addresses shown in
+\eqref{email} is hopelessly inadequate for recognising all of them
+(despite being touted as something every computer scientist should
+know about).
The W3 Consortium (which standardises the Web) proposes +to use the following, already more complicated regular expressions for +email addresses: {\small\begin{lstlisting}[language={},keywordstyle=\color{black},numbers=none] [a-zA-Z0-9.!#$%&'*+/=?^_`{|}~-]+@[a-zA-Z0-9-]+(?:\.[a-zA-Z0-9-]+)* \end{lstlisting}} \noindent But they admit that by using this regular expression -they wilfully violate the RFC 5322 standard which specifies +they wilfully violate the RFC 5322 standard, which specifies the syntax of email addresses. With their proposed regular expression they are too strict in some cases and too lax in others. Not a good situation to be in. A regular expression @@ -683,7 +681,8 @@ \noindent which explains some of the crazier parts of email addresses. Still it is good to know that some tasks in text processing just cannot be achieved by using regular -expressions. +expressions. But for what we want to use them (lexing) they are +pretty good. \begin{figure}[p] diff -r 1f4e81950ab4 -r 9476086849ad handouts/ho02.pdf Binary file handouts/ho02.pdf has changed diff -r 1f4e81950ab4 -r 9476086849ad handouts/ho02.tex --- a/handouts/ho02.tex Mon Nov 14 15:50:42 2016 +0000 +++ b/handouts/ho02.tex Sat Jan 07 14:52:26 2017 +0000 @@ -6,7 +6,7 @@ \begin{document} -\fnote{\copyright{} Christian Urban, King's College London, 2014, 2015, 2016} +\fnote{\copyright{} Christian Urban, King's College London, 2014, 2015, 2016, 2017} \section*{Handout 2 (Regular Expression Matching)} diff -r 1f4e81950ab4 -r 9476086849ad handouts/ho03.pdf Binary file handouts/ho03.pdf has changed diff -r 1f4e81950ab4 -r 9476086849ad handouts/ho03.tex --- a/handouts/ho03.tex Mon Nov 14 15:50:42 2016 +0000 +++ b/handouts/ho03.tex Sat Jan 07 14:52:26 2017 +0000 @@ -924,7 +924,7 @@ \section*{Further Reading} -Compare what a ``human expert'' would create as automaton for the +Compare what a ``human expert'' would create as an automaton for the regular expression $a (b + c)^*$ and what the Thomson algorithm generates. diff -r 1f4e81950ab4 -r 9476086849ad handouts/scala-ho.pdf Binary file handouts/scala-ho.pdf has changed diff -r 1f4e81950ab4 -r 9476086849ad handouts/scala-ho.tex --- a/handouts/scala-ho.tex Mon Nov 14 15:50:42 2016 +0000 +++ b/handouts/scala-ho.tex Sat Jan 07 14:52:26 2017 +0000 @@ -40,7 +40,7 @@ \end{quote} \noindent -A ready-mad bundle with the Eclipse IDE is at +A ready-made bundle with the Eclipse IDE is at \begin{quote} \url{http://scala-ide.org/download/sdk.html} diff -r 1f4e81950ab4 -r 9476086849ad hws/hw02.pdf Binary file hws/hw02.pdf has changed diff -r 1f4e81950ab4 -r 9476086849ad hws/hw02.tex --- a/hws/hw02.tex Mon Nov 14 15:50:42 2016 +0000 +++ b/hws/hw02.tex Sat Jan 07 14:52:26 2017 +0000 @@ -96,7 +96,7 @@ zeroable(\sim r) \dn \neg(zeroable(r)) \] - Find out why? + Find a counter example? 
\item Give a regular expressions that can recognise all strings from the language $\{a^n\;|\;\exists k.\; n = 3 k diff -r 1f4e81950ab4 -r 9476086849ad progs/LOOP.j --- a/progs/LOOP.j Mon Nov 14 15:50:42 2016 +0000 +++ b/progs/LOOP.j Sat Jan 07 14:52:26 2017 +0000 @@ -23,7 +23,7 @@ .limit locals 200 .limit stack 200 -ldc 500000 +ldc 500000 istore 0 iload 0 istore 1 diff -r 1f4e81950ab4 -r 9476086849ad progs/comb2.scala --- a/progs/comb2.scala Mon Nov 14 15:50:42 2016 +0000 +++ b/progs/comb2.scala Sat Jan 07 14:52:26 2017 +0000 @@ -1,3 +1,6 @@ +// A parser and evaluator for the while language +// + import scala.language.implicitConversions import scala.language.reflectiveCalls @@ -125,12 +128,12 @@ // boolean expressions lazy val BExp: Parser[String, BExp] = - ((AExp ~ "=" ~ AExp) ==> { case ((x, y), z) => Bop("=", x, z): BExp } || - (AExp ~ "!=" ~ AExp) ==> { case ((x, y), z) => Bop("!=", x, z): BExp } || - (AExp ~ "<" ~ AExp) ==> { case ((x, y), z) => Bop("<", x, z): BExp } || - (AExp ~ ">" ~ AExp) ==> { case ((x, y), z) => Bop(">", x, z): BExp } || - ("true" ==> ((_) => True: BExp)) || - ("false" ==> ((_) => False: BExp)) || + ((AExp ~ "=" ~ AExp) ==> { case ((x, y), z) => Bop("=", x, z):BExp } || + (AExp ~ "!=" ~ AExp) ==> { case ((x, y), z) => Bop("!=", x, z):BExp } || + (AExp ~ "<" ~ AExp) ==> { case ((x, y), z) => Bop("<", x, z):BExp } || + (AExp ~ ">" ~ AExp) ==> { case ((x, y), z) => Bop(">", x, z):BExp } || + ("true" ==> ((_) => True:BExp )) || + ("false" ==> ((_) => False:BExp )) || ("(" ~ BExp ~ ")") ==> { case ((x, y), z) => y}) lazy val Stmt: Parser[String, Stmt] = @@ -149,7 +152,7 @@ (Stmt ==> ((s) => List(s)))) -Block.parse_all("x2:=5") +Stmt.parse_all("x2:=5") Block.parse_all("{x:=5;y:=8}") Block.parse_all("if(false)then{x:=5}else{x:=10}") @@ -193,4 +196,4 @@ def eval(bl: Block) : Env = eval_bl(bl, Map()) -eval(Block.parse_all(fib).head)("result") +println(eval(Block.parse_all(fib).head)("result")) diff -r 1f4e81950ab4 -r 9476086849ad progs/compile.scala --- a/progs/compile.scala Mon Nov 14 15:50:42 2016 +0000 +++ b/progs/compile.scala Sat Jan 07 14:52:26 2017 +0000 @@ -1,5 +1,5 @@ // A Small Compiler for the WHILE Language -// +// (it does not use a parser and lexer) // the abstract syntax trees abstract class Stmt @@ -94,6 +94,9 @@ .end method """ +println("starting") + + // for generating new labels var counter = -1 @@ -221,7 +224,25 @@ // // java fib/fib +import scala.util._ +import scala.sys.process._ +import scala.io + +def compile_tofile(bl: Block, class_name: String) = { + val output = compile(bl, class_name) + val fw = new java.io.FileWriter(class_name + ".j") + fw.write(output) + fw.close() +} + +def compile_and_run(bl: Block, class_name: String) : Unit = { + compile_tofile(bl, class_name) + println("compiled ") + val test = ("java -jar jvm/jasmin-2.4/jasmin.jar " + class_name + ".j").!! + println("assembled ") + ("java " + class_name + "/" + class_name).!! 
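+  // ".!!" comes from scala.sys.process (imported above): it runs the
+  // command, waits for it to finish and returns its standard output as
+  // a String (throwing an exception if the command fails); the output
+  // of running the class is not used further here.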
+} +compile_and_run(fib_test, "fib") - diff -r 1f4e81950ab4 -r 9476086849ad progs/fib.j --- a/progs/fib.j Mon Nov 14 15:50:42 2016 +0000 +++ b/progs/fib.j Sat Jan 07 14:52:26 2017 +0000 @@ -64,11 +64,11 @@ ldc 0 istore 3 -Loop_begin_0: +Loop_begin_2: ldc 0 iload 0 -if_icmpge Loop_end_1 +if_icmpge Loop_end_3 iload 2 istore 3 iload 1 @@ -81,9 +81,9 @@ ldc 1 isub istore 0 -goto Loop_begin_0 +goto Loop_begin_2 -Loop_end_1: +Loop_end_3: iload 1 invokestatic fib/fib/write(I)V @@ -92,4 +92,3 @@ return .end method - diff -r 1f4e81950ab4 -r 9476086849ad progs/pow.scala --- a/progs/pow.scala Mon Nov 14 15:50:42 2016 +0000 +++ b/progs/pow.scala Sat Jan 07 14:52:26 2017 +0000 @@ -12,6 +12,8 @@ val B = Set("a", "b", "c", "") pow(B, 4).size // -> 121 + + val B2 = Set("a", "b", "c", "") pow(B2, 3).size // -> 40 diff -r 1f4e81950ab4 -r 9476086849ad progs/re1.scala --- a/progs/re1.scala Mon Nov 14 15:50:42 2016 +0000 +++ b/progs/re1.scala Sat Jan 07 14:52:26 2017 +0000 @@ -1,13 +1,13 @@ abstract class Rexp -case object ZERO extends Rexp -case object ONE extends Rexp -case class CHAR(c: Char) extends Rexp -case class ALT(r1: Rexp, r2: Rexp) extends Rexp -case class SEQ(r1: Rexp, r2: Rexp) extends Rexp -case class STAR(r: Rexp) extends Rexp +case object ZERO extends Rexp // matches nothing +case object ONE extends Rexp // matches the empty string +case class CHAR(c: Char) extends Rexp // matches a character c +case class ALT(r1: Rexp, r2: Rexp) extends Rexp // alternative +case class SEQ(r1: Rexp, r2: Rexp) extends Rexp // sequence +case class STAR(r: Rexp) extends Rexp // star -// nullable function: tests whether the regular +// nullable function: tests whether a regular // expression can recognise the empty string def nullable (r: Rexp) : Boolean = r match { case ZERO => false @@ -46,10 +46,10 @@ der('b', r) der('c', r) -//optional: one or zero times +//optional (one or zero times) def OPT(r: Rexp) = ALT(r, ONE) -//n-times +//n-times (explicitly expanded) def NTIMES(r: Rexp, n: Int) : Rexp = n match { case 0 => ONE case 1 => r diff -r 1f4e81950ab4 -r 9476086849ad progs/re2.scala --- a/progs/re2.scala Mon Nov 14 15:50:42 2016 +0000 +++ b/progs/re2.scala Sat Jan 07 14:52:26 2017 +0000 @@ -1,4 +1,5 @@ -// version with explicit n-times regular expression +// version with explicit an n-times regular expression +// this keeps the regular expression small abstract class Rexp case object ZERO extends Rexp diff -r 1f4e81950ab4 -r 9476086849ad progs/re3a.scala --- a/progs/re3a.scala Mon Nov 14 15:50:42 2016 +0000 +++ b/progs/re3a.scala Sat Jan 07 14:52:26 2017 +0000 @@ -74,7 +74,7 @@ //evil regular expressions def EVIL1(n: Int) = SEQ(NTIMES(OPT(CHAR('a')), n), NTIMES(CHAR('a'), n)) val EVIL2 = SEQ(STAR(STAR(CHAR('a'))), CHAR('b')) - +val EVIL3 = SEQ(STAR(ALT(CHAR('a'), SEQ(CHAR('a'),CHAR('a')))), CHAR('b')) def time_needed[T](i: Int, code: => T) = { val start = System.nanoTime() @@ -94,11 +94,25 @@ } //test: (a*)* b -for (i <- 1 to 7000001 by 500000) { +for (i <- 1 to 5000001 by 500000) { + println(i + " " + "%.5f".format(time_needed(2, matcher(EVIL2, "a" * i)))) +} + +for (i <- 1 to 5000001 by 500000) { println(i + " " + "%.5f".format(time_needed(2, matcher(EVIL2, "a" * i)))) } -for (i <- 1 to 7000001 by 500000) { - println(i + " " + "%.5f".format(time_needed(2, matcher(EVIL2, "a" * i)))) + +val r0 = simp(der('a', EVIL3)) +val r1 = simp(der('a', r0)) +val r2 = simp(der('a', r1)) +val r3 = simp(der('a', r2)) +val r4 = simp(der('a', r3)) +val r5 = simp(der('a', r4)) +val r6 = simp(der('a', r5)) + +//test: (a|aa)* b 
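+// (r0 .. r6 above are the successive simplified derivatives of EVIL3
+// w.r.t. 'a'; presumably they are there so one can inspect how the
+// regular expression develops under repeated derivatives)
+// The loop below times the matcher on strings of a's followed by a
+// final 'c'; since such strings do not end in 'b', they cannot match
+// (a|aa)* b and have to be rejected.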
+for (i <- 1 to 7001 by 500) { + println(i + " " + "%.5f".format(time_needed(2, matcher(EVIL3, "a" * i ++ "c")))) } diff -r 1f4e81950ab4 -r 9476086849ad progs/while.scala --- a/progs/while.scala Mon Nov 14 15:50:42 2016 +0000 +++ b/progs/while.scala Sat Jan 07 14:52:26 2017 +0000 @@ -1,4 +1,4 @@ -// A parser and evaluator for teh while language +// A parser and evaluator for the while language // import matcher._ import parser._ diff -r 1f4e81950ab4 -r 9476086849ad progs/while1.scala --- a/progs/while1.scala Mon Nov 14 15:50:42 2016 +0000 +++ b/progs/while1.scala Sat Jan 07 14:52:26 2017 +0000 @@ -1,7 +1,8 @@ // A parser and evaluator for the WHILE language // -import matcher._ -import parser._ + + + // some regular expressions diff -r 1f4e81950ab4 -r 9476086849ad slides/slides06.pdf Binary file slides/slides06.pdf has changed diff -r 1f4e81950ab4 -r 9476086849ad slides/slides11.pdf Binary file slides/slides11.pdf has changed diff -r 1f4e81950ab4 -r 9476086849ad slides/slides11.tex --- a/slides/slides11.tex Mon Nov 14 15:50:42 2016 +0000 +++ b/slides/slides11.tex Sat Jan 07 14:52:26 2017 +0000 @@ -4,12 +4,35 @@ \usepackage{../data} \usepackage{../graphics} \usepackage{soul} - +\usepackage{proof} % beamer stuff \renewcommand{\slidecaption}{CFL, King's College London} \newcommand{\bl}[1]{\textcolor{blue}{#1}} +\newcommand\grid[1]{% + \begin{tikzpicture}[baseline=(char.base)] + \path[use as bounding box] + (0,0) rectangle (1em,1em); + \draw[red!50, fill=red!20] + (0,0) rectangle (1em,1em); + \node[inner sep=1pt,anchor=base west] + (char) at (0em,\gridraiseamount) {#1}; + \end{tikzpicture}} +\newcommand\gridraiseamount{0.12em} + +\makeatletter +\newcommand\Grid[1]{% + \@tfor\z:=#1\do{\grid{\z}}} +\makeatother + +\newcommand\Vspace[1][.3em]{% + \mbox{\kern.06em\vrule height.3ex}% + \vbox{\hrule width#1}% + \hbox{\vrule height.3ex}} + +\def\VS{\Vspace[0.6em]} + \begin{document} @@ -35,6 +58,326 @@ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% +\begin{frame}[c] +\frametitle{Compilers \& Boeings 777} + +First flight in 1994. They want to achieve triple redundancy in hardware +faults.\bigskip + +They compile 1 Ada program to\medskip + +\begin{itemize} +\item Intel 80486 +\item Motorola 68040 (old Macintosh's) +\item AMD 29050 (RISC chips used often in laser printers) +\end{itemize}\medskip + +using 3 independent compilers.\bigskip\pause + +\small Airbus uses C and static analysers. Recently started using CompCert. + +\end{frame} +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% +\begin{frame}[c] +\frametitle{seL4 / Isabelle} + +\begin{itemize} +\item verified a microkernel operating system ($\approx$8000 lines of C code)\bigskip +\item US DoD has competitions to hack into drones; they found that the + isolation guarantees of seL4 hold up\bigskip +\item CompCert and seL4 sell their code +\end{itemize} + +\end{frame} +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% +\begin{frame}[c] +\frametitle{POSIX Matchers} + +\begin{itemize} +\item Longest match rule (``maximal munch rule''): The +longest initial substring matched by any regular expression +is taken as the next token. 
+ +\begin{center} +\bl{$\texttt{\Grid{iffoo\VS bla}}$} +\end{center}\medskip + +\item Rule priority: +For a particular longest initial substring, the first regular +expression that can match determines the token. + +\begin{center} +\bl{$\texttt{\Grid{if\VS bla}}$} +\end{center} +\end{itemize}\bigskip\pause + +\small +\hfill Kuklewicz: most POSIX matchers are buggy\\ +\footnotesize +\hfill \url{http://www.haskell.org/haskellwiki/Regex_Posix} + +\end{frame} +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% +\begin{frame}[c] +\mbox{}\\[-14mm]\mbox{} +\small +\bl{ +\begin{center} +\begin{tabular}{lcl} +$\textit{der}\;c\;(\ZERO)$ & $\dn$ & $\ZERO$\\ +$\textit{der}\;c\;(\ONE)$ & $\dn$ & $\ZERO$\\ +$\textit{der}\;c\;(d)$ & $\dn$ & $\textit{if}\; c = d\;\textit{then} \;\ONE \; \textit{else} \;\ZERO$\\ +$\textit{der}\;c\;(r_1 + r_2)$ & $\dn$ & $(\textit{der}\;c\;r_1) + (\textit{der}\;c\;r_2)$\\ +$\textit{der}\;c\;(r_1 \cdot r_2)$ & $\dn$ & $\textit{if}\;\textit{nullable}(r_1)$\\ + & & $\textit{then}\;((\textit{der}\;c\;r_1)\cdot r_2) + (\textit{der}\;c\;r_2)$\\ + & & $\textit{else}\;(\textit{der}\;c\;r_1)\cdot r_2$\\ +$\textit{der}\;c\;(r^*)$ & $\dn$ & $(\textit{der}\;c\;r)\cdot (r^*)$\\ + $\textit{der}\;c\;(r^{\{n\}})$ & $\dn$ & \textit{if} $n=0$ \textit{then} $\ZERO$\\ + & & \textit{else if} $\textit{nullable}(r)$ \textit{then} $(\textit{der}\;c\;r)\cdot (r^{\{\uparrow n-1\}})$\\ + & & \textit{else} $(\textit{der}\;c\;r)\cdot (r^{\{n-1\}})$\\ + $\textit{der}\;c\;(r^{\{\uparrow n\}})$ & $\dn$ & \textit{if} $n=0$ \textit{then} $\ZERO$\\ + & & \textit{else} + $(\textit{der}\;c\;r)\cdot (r^{\{\uparrow n-1\}})$\\ +\end{tabular} +\end{center}} + +\end{frame} +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% +\begin{frame}[t] +\frametitle{Proofs about Rexps} + +Remember their inductive definition: + + \begin{center} + \begin{tabular}{@ {}rrl} + \bl{$r$} & \bl{$::=$} & \bl{$\ZERO$}\\ + & \bl{$\mid$} & \bl{$\ONE$} \\ + & \bl{$\mid$} & \bl{$c$} \\ + & \bl{$\mid$} & \bl{$r_1 \cdot r_2$}\\ + & \bl{$\mid$} & \bl{$r_1 + r_2$} \\ + & \bl{$\mid$} & \bl{$r^*$} \\ + & \bl{$\mid$} & \bl{$r^{\{n\}}$} \\ + & \bl{$\mid$} & \bl{$r^{\{\uparrow n\}}$} \\ + \end{tabular} + \end{center} + +If we want to prove something, say a property \bl{$P(r)$}, for all regular expressions \bl{$r$} then \ldots + +\end{frame} +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% +\begin{frame}[c] +\frametitle{Proofs about Rexp (2)} + +\begin{itemize} +\item \bl{$P$} holds for \bl{$\ZERO$}, \bl{$\ONE$} and \bl{c}\bigskip +\item \bl{$P$} holds for \bl{$r_1 + r_2$} under the assumption that \bl{$P$} already +holds for \bl{$r_1$} and \bl{$r_2$}.\bigskip +\item \bl{$P$} holds for \bl{$r_1 \cdot r_2$} under the assumption that \bl{$P$} already +holds for \bl{$r_1$} and \bl{$r_2$}.\bigskip +\item \bl{$P$} holds for \bl{$r^*$} under the assumption that \bl{$P$} already + holds for \bl{$r$}. 
+\item \ldots +\end{itemize} + +\end{frame} +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% +\begin{frame}[c] +\frametitle{Proofs about Strings} + +If we want to prove something, say a property \bl{$P(s)$}, for all +strings \bl{$s$} then \ldots\bigskip + +\begin{itemize} +\item \bl{$P$} holds for the empty string, and\medskip +\item \bl{$P$} holds for the string \bl{$c\!::\!s$} under the assumption that \bl{$P$} +already holds for \bl{$s$} +\end{itemize} +\end{frame} +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + + +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% +%\begin{frame}[c] +% +%\bl{\begin{center} +%\begin{tabular}{@ {}l@ {\hspace{2mm}}c@ {\hspace{2mm}}l@ {}} +%$zeroable(\varnothing)$ & $\dn$ & \textit{true}\\ +%$zeroable(\epsilon)$ & $\dn$ & \textit{false}\\ +%$zeroable (c)$ & $\dn$ & \textit{false}\\ +%$zeroable (r_1 + r_2)$ & $\dn$ & $zeroable(r_1) \wedge zeroable(r_2)$ \\ +%$zeroable (r_1 \cdot r_2)$ & $\dn$ & $zeroable(r_1) \vee zeroable(r_2)$ \\ +%$zeroable (r^*)$ & $\dn$ & \textit{false}\\ +%\end{tabular} +%\end{center}} + +%\begin{center} +%\bl{$zeroable(r)$} if and only if \bl{$L(r) = \{\}$} +%\end{center} + +%\end{frame} +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% +\begin{frame}[c] +\frametitle{Correctness of the Matcher} + +\begin{itemize} +\item We want to prove\medskip +\begin{center} +\bl{$matches\;r\;s$} if and only if \bl{$s\in L(r)$} +\end{center}\bigskip + +where \bl{$matches\;r\;s \dn nullable(ders\;s\;r)$} +\bigskip\pause + +\item We can do this, if we know\medskip +\begin{center} +\bl{$L(der\;c\;r) = Der\;c\;(L(r))$} +\end{center} +\end{itemize} +\end{frame} +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + + +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% +\begin{frame}[c] +\frametitle{Some Lemmas} + +\begin{itemize} +\item \bl{$Der\;c\;(A\cup B) = +(Der\;c\;A)\cup(Der\;c\;B)$}\bigskip +\item If \bl{$[] \in A$} then +\begin{center} +\bl{$Der\;c\;(A\,@\,B) = (Der\;c\;A)\,@\,B \;\cup\; (Der\;c\;B)$} +\end{center}\bigskip +\item If \bl{$[] \not\in A$} then +\begin{center} +\bl{$Der\;c\;(A\,@\,B) = (Der\;c\;A)\,@\,B$} +\end{center}\bigskip +\item \bl{$Der\;c\;(A^*) = (Der\;c\;A)\,@\,A^*$}\\ +\small\mbox{}\hfill (interesting case)\\ +\end{itemize} + +\end{frame} +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + + +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% +\begin{frame}[c] +\frametitle{Why?} + +Why does \bl{$Der\;c\;(A^*) = (Der\;c\;A)\,@\,A^*$} hold? 
+\bigskip + + +\begin{center} +\begin{tabular}{lcl} +\bl{$Der\;c\;(A^*)$} & \bl{$=$} & \bl{$Der\;c\;(A^* - \{[]\})$}\medskip\\ +& \bl{$=$} & \bl{$Der\;c\;((A - \{[]\})\,@\,A^*)$}\medskip\\ +& \bl{$=$} & \bl{$(Der\;c\;(A - \{[]\}))\,@\,A^*$}\medskip\\ +& \bl{$=$} & \bl{$(Der\;c\;A)\,@\,A^*$}\medskip\\ +\end{tabular} +\end{center}\bigskip\bigskip + +\small +using the facts \bl{$Der\;c\;A = Der\;c\;(A - \{[]\})$} and\\ +\mbox{}\hfill\bl{$(A - \{[]\}) \,@\, A^* = A^* - \{[]\}$} +\end{frame} +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + + +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% +\begin{frame}[c] +\frametitle{POSIX Spec} + +\begin{center} +\bl{\infer{[] \in \ONE \to Empty}{}}\hspace{15mm} +\bl{\infer{c \in c \to Char(c)}{}}\bigskip\medskip + +\bl{\infer{s \in r_1 + r_2 \to Left(v)} + {s \in r_1 \to v}}\hspace{10mm} +\bl{\infer{s \in r_1 + r_2 \to Right(v)} + {s \in r_2 \to v & s \not\in L(r_1)}}\bigskip\medskip + +\bl{\infer{s_1 @ s_2 \in r_1 \cdot r_2 \to Seq(v_1, v_2)} + {\small\begin{array}{l} + s_1 \in r_1 \to v_1 \\ + s_2 \in r_2 \to v_2 \\ + \neg(\exists s_3\,s_4.\; s_3 \not= [] + \wedge s_3 @ s_4 = s_2 \wedge + s_1 @ s_3 \in L(r_1) \wedge + s_4 \in L(r_2)) + \end{array}}} + +\bl{\ldots} +\end{center} + + +\end{frame} +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% +\begin{frame}[t,squeeze] +\frametitle{Sulzmann \& Lu Paper} + +\begin{itemize} +\item I have no doubt the algorithm is correct --- + the problem is I do not believe their proof. + + \begin{center} + \begin{bubble}[10cm]\small + ``How could I miss this? Well, I was rather careless when + stating this Lemma :)\smallskip + + Great example how formal machine checked proofs (and + proof assistants) can help to spot flawed reasoning steps.'' + \end{bubble} + \end{center}\pause + + %\begin{center} + %\begin{bubble}[10cm]\small + %``Well, I don't think there's any flaw. The issue is how to + %come up with a mechanical proof. 
In my world mathematical + %proof $=$ mechanical proof doesn't necessarily hold.'' + %\end{bubble} + %\end{center}\pause + +\end{itemize} + + \only<3>{% + \begin{textblock}{11}(1,4.4) + \begin{center} + \begin{bubble}[10.9cm]\small\centering + \includegraphics[scale=0.37]{msbug.png} + \end{bubble} + \end{center} + \end{textblock}} + + +\end{frame} +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + +\end{document} + +%%% Local Variables: +%%% mode: latex +%%% TeX-master: t +%%% End: + + +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[t] \frametitle{2nd CW} @@ -93,111 +436,6 @@ \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% - - -%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -\begin{frame}[c] -\frametitle{Compilers in Boeings 777} - -They want to achieve triple redundancy in hardware -faults.\bigskip - -They compile 1 Ada program to - -\begin{itemize} -\item Intel 80486 -\item Motorola 68040 (old Macintosh's) -\item AMD 29050 (RISC chips used often in laser printers) -\end{itemize} - -\end{frame} -%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% - - - - -%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -\begin{frame}[t] -\frametitle{Proofs about Rexps} - -Remember their inductive definition: - - \begin{center} - \begin{tabular}{@ {}rrl} - \bl{$r$} & \bl{$::=$} & \bl{$\varnothing$}\\ - & \bl{$\mid$} & \bl{$\epsilon$} \\ - & \bl{$\mid$} & \bl{$c$} \\ - & \bl{$\mid$} & \bl{$r_1 \cdot r_2$}\\ - & \bl{$\mid$} & \bl{$r_1 + r_2$} \\ - & \bl{$\mid$} & \bl{$r^*$} \\ - \end{tabular} - \end{center} - -If we want to prove something, say a property \bl{$P(r)$}, for all regular expressions \bl{$r$} then \ldots - -\end{frame} -%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% - -%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -\begin{frame}[c] -\frametitle{Proofs about Rexp (2)} - -\begin{itemize} -\item \bl{$P$} holds for \bl{$\varnothing$}, \bl{$\epsilon$} and \bl{c}\bigskip -\item \bl{$P$} holds for \bl{$r_1 + r_2$} under the assumption that \bl{$P$} already -holds for \bl{$r_1$} and \bl{$r_2$}.\bigskip -\item \bl{$P$} holds for \bl{$r_1 \cdot r_2$} under the assumption that \bl{$P$} already -holds for \bl{$r_1$} and \bl{$r_2$}.\bigskip -\item \bl{$P$} holds for \bl{$r^*$} under the assumption that \bl{$P$} already -holds for \bl{$r$}. 
-\end{itemize} - -\end{frame} -%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% - - -%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -\begin{frame}[c] - -\bl{\begin{center} -\begin{tabular}{@ {}l@ {\hspace{2mm}}c@ {\hspace{2mm}}l@ {}} -$zeroable(\varnothing)$ & $\dn$ & \textit{true}\\ -$zeroable(\epsilon)$ & $\dn$ & \textit{false}\\ -$zeroable (c)$ & $\dn$ & \textit{false}\\ -$zeroable (r_1 + r_2)$ & $\dn$ & $zeroable(r_1) \wedge zeroable(r_2)$ \\ -$zeroable (r_1 \cdot r_2)$ & $\dn$ & $zeroable(r_1) \vee zeroable(r_2)$ \\ -$zeroable (r^*)$ & $\dn$ & \textit{false}\\ -\end{tabular} -\end{center}} - -\begin{center} -\bl{$zeroable(r)$} if and only if \bl{$L(r) = \{\}$} -\end{center} - -\end{frame} -%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% - -%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -\begin{frame}[c] -\frametitle{Correctness of the Matcher} - -\begin{itemize} -\item We want to prove\medskip -\begin{center} -\bl{$matches\;r\;s$} if and only if \bl{$s\in L(r)$} -\end{center}\bigskip - -where \bl{$matches\;r\;s \dn nullable(ders\;s\;r)$} -\bigskip\pause - -\item We can do this, if we know\medskip -\begin{center} -\bl{$L(der\;c\;r) = Der\;c\;(L(r))$} -\end{center} -\end{itemize} -\end{frame} -%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% - %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[c] \frametitle{Induction over Strings} @@ -258,60 +496,3 @@ \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% - - -%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -\begin{frame}[c] -\frametitle{Some Lemmas} - -\begin{itemize} -\item \bl{$Der\;c\;(A\cup B) = -(Der\;c\;A)\cup(Der\;c\;B)$}\bigskip -\item If \bl{$[] \in A$} then -\begin{center} -\bl{$Der\;c\;(A\,@\,B) = (Der\;c\;A)\,@\,B \;\cup\; (Der\;c\;B)$} -\end{center}\bigskip -\item If \bl{$[] \not\in A$} then -\begin{center} -\bl{$Der\;c\;(A\,@\,B) = (Der\;c\;A)\,@\,B$} -\end{center}\bigskip -\item \bl{$Der\;c\;(A^*) = (Der\;c\;A)\,@\,A^*$}\\ -\small\mbox{}\hfill (interesting case)\\ -\end{itemize} - -\end{frame} -%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% - - -%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -\begin{frame}[c] -\frametitle{Why?} - -Why does \bl{$Der\;c\;(A^*) = (Der\;c\;A)\,@\,A^*$} hold? -\bigskip - - -\begin{center} -\begin{tabular}{lcl} -\bl{$Der\;c\;(A^*)$} & \bl{$=$} & \bl{$Der\;c\;(A^* - \{[]\})$}\medskip\\ -& \bl{$=$} & \bl{$Der\;c\;((A - \{[]\})\,@\,A^*)$}\medskip\\ -& \bl{$=$} & \bl{$(Der\;c\;(A - \{[]\}))\,@\,A^*$}\medskip\\ -& \bl{$=$} & \bl{$(Der\;c\;A)\,@\,A^*$}\medskip\\ -\end{tabular} -\end{center}\bigskip\bigskip - -\small -using the facts \bl{$Der\;c\;A = Der\;c\;(A - \{[]\})$} and\\ -\mbox{}\hfill\bl{$(A - \{[]\}) \,@\, A^* = A^* - \{[]\}$} -\end{frame} -%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% - - - -\end{document} - -%%% Local Variables: -%%% mode: latex -%%% TeX-master: t -%%% End: -