\documentclass{article}
\usepackage{../style}
\usepackage{../langs}
\usepackage{../graphics}
\usepackage{../data}

\begin{document}
\fnote{\copyright{} Christian Urban, King's College London, 2014, 2015, 2016}

\section*{Handout 2 (Regular Expression Matching)}

This lecture is about implementing a more efficient regular expression
matcher (the plots on the right)---more efficient than the matchers
from regular expression libraries in Ruby, Python and Java (the plots
on the left). The first pair of plots show the running time for the
regular expressions $a^?{}^{\{n\}}\cdot a^{\{n\}}$ and strings composed
of $n$ \pcode{a}s. The second pair of plots show the running time
for the regular expression $(a^*)^*\cdot b$ and also strings composed
of $n$ \pcode{a}s (meaning this regular expression actually does not
match the strings). To see the substantial differences in the left
and right plots below, note the different scales of the $x$-axes.

\begin{center}
\begin{tabular}{@{}cc@{}}
\begin{tikzpicture}
\begin{axis}[
 xlabel={\small $a^{?\{n\}} \cdot a^{\{n\}}$ and strings $\underbrace{a\ldots a}_{n}$},
 ylabel={\small time in secs},
 enlargelimits=false,
 xtick={0,5,...,30},
 xmax=33,
 ymax=35,
 ytick={0,5,...,30},
 scaled ticks=false,
 axis lines=left,
 width=5cm,
 height=5cm,
 legend entries={Python,Ruby},
 legend pos=north west,
 legend cell align=left]
\addplot[blue,mark=*, mark options={fill=white}] table {re-python.data};
\addplot[brown,mark=triangle*, mark options={fill=white}] table {re-ruby.data};
\end{axis}
\end{tikzpicture}
&
\begin{tikzpicture}
\begin{axis}[
 xlabel={\small $a^{?\{n\}} \cdot a^{\{n\}}$ and strings $\underbrace{a\ldots a}_{n}$},
 ylabel={\small time in secs},
 enlargelimits=false,
 xtick={0,3000,...,12000},
 xmax=12500,
 ymax=35,
 ytick={0,5,...,30},
 scaled ticks=false,
 axis lines=left,
 width=6.5cm,
 height=5cm]
\addplot[green,mark=square*,mark options={fill=white}] table {re2b.data};
\addplot[black,mark=square*,mark options={fill=white}] table {re3.data};
\end{axis}
\end{tikzpicture}
\end{tabular}
\end{center}

\begin{center}
\begin{tabular}{@{}cc@{}}
\begin{tikzpicture}
\begin{axis}[
 xlabel={$(a^*)^* \cdot b$ and strings $\underbrace{a\ldots a}_{n}$},
 ylabel={time in secs},
 enlargelimits=false,
 xtick={0,5,...,30},
 xmax=33,
 ymax=35,
 ytick={0,5,...,30},
 scaled ticks=false,
 axis lines=left,
 width=5cm,
 height=5cm,
 legend entries={Java},
 legend pos=north west,
 legend cell align=left]
\addplot[blue,mark=*, mark options={fill=white}] table {re-java.data};
\end{axis}
\end{tikzpicture}
&
\begin{tikzpicture}
\begin{axis}[
 xlabel={$(a^*)^* \cdot b$ and strings $\underbrace{a\ldots a}_{n}$},
 ylabel={time in secs},
 enlargelimits=false,
 xtick={0,3000,...,12000},
 xmax=12500,
 ymax=35,
 ytick={0,5,...,30},
 scaled ticks=false,
 axis lines=left,
 width=6.5cm,
 height=5cm]
\addplot[green,mark=square*,mark options={fill=white}] table {re2b.data};
\addplot[black,mark=square*,mark options={fill=white}] table {re3.data};
\end{axis}
\end{tikzpicture}
\end{tabular}
\end{center}
\medskip

\noindent
We will use these regular expressions and strings as running examples.
Having specified in the previous lecture what problem our regular
expression matcher is supposed to solve, namely for any given regular
expression $r$ and string $s$ to answer \textit{true} if and only if

\[s \in L(r)\]

\noindent we can look at an algorithm to solve this problem.
Clearly we cannot use the function $L$ directly for this, because in general
the set of strings $L$ returns is infinite (recall what $L(a^*)$ is). In such
cases there is no way we can implement an exhaustive test for whether a string
is a member of this set or not. In contrast, our matching algorithm will
operate only on the regular expression $r$ and the string $s$, which are both
finite objects. Before we explain the matching algorithm, however, let us have
a closer look at what it means when two regular expressions are equivalent.

\subsection*{Regular Expression Equivalences}

We already defined in Handout 1 what it means for two regular
expressions to be equivalent, namely that their meaning is the
same language:

\[r_1 \equiv r_2 \;\dn\; L(r_1) = L(r_2)\]

\noindent
It is relatively easy to verify that some concrete equivalences
hold, for example

\begin{center}
\begin{tabular}{rcl}
$(a + b) + c$ & $\equiv$ & $a + (b + c)$\\
$a + a$ & $\equiv$ & $a$\\
$a + b$ & $\equiv$ & $b + a$\\
$(a \cdot b) \cdot c$ & $\equiv$ & $a \cdot (b \cdot c)$\\
$c \cdot (a + b)$ & $\equiv$ & $(c \cdot a) + (c \cdot b)$\\
\end{tabular}
\end{center}

\noindent
but also easy to verify that the following regular expressions
are \emph{not} equivalent

\begin{center}
\begin{tabular}{rcl}
$a \cdot a$ & $\not\equiv$ & $a$\\
$a + (b \cdot c)$ & $\not\equiv$ & $(a + b) \cdot (a + c)$\\
\end{tabular}
\end{center}

\noindent I leave it to you to verify these equivalences and
non-equivalences. It is also interesting to look at some
corner cases involving $\ONE$ and $\ZERO$:

\begin{center}
\begin{tabular}{rcl}
$a \cdot \ZERO$ & $\not\equiv$ & $a$\\
$a + \ONE$ & $\not\equiv$ & $a$\\
$\ONE$ & $\equiv$ & $\ZERO^*$\\
$\ONE^*$ & $\equiv$ & $\ONE$\\
$\ZERO^*$ & $\not\equiv$ & $\ZERO$\\
\end{tabular}
\end{center}

\noindent Again I leave it to you to make sure you agree
with these equivalences and non-equivalences.

For our matching algorithm, however, the following seven
equivalences will play an important role:

\begin{center}
\begin{tabular}{rcl}
$r + \ZERO$ & $\equiv$ & $r$\\
$\ZERO + r$ & $\equiv$ & $r$\\
$r \cdot \ONE$ & $\equiv$ & $r$\\
$\ONE \cdot r$ & $\equiv$ & $r$\\
$r \cdot \ZERO$ & $\equiv$ & $\ZERO$\\
$\ZERO \cdot r$ & $\equiv$ & $\ZERO$\\
$r + r$ & $\equiv$ & $r$
\end{tabular}
\end{center}

\noindent which always hold no matter what the regular expression $r$
looks like. The first two are easy to verify since $L(\ZERO)$ is the
empty set. The next two are also easy to verify since $L(\ONE) = \{[]\}$
and appending the empty string to every string of another set
leaves the set unchanged. Be careful to fully comprehend the fifth and
sixth equivalence: if you concatenate two sets of strings and one is
the empty set, then the concatenation will also be the empty set. To
see this, check the definition of $\_ @ \_$ for sets. The last
equivalence is again trivial.

What will be important later on is that we can orient these
equivalences and read them from left to right. In this way we
can view them as \emph{simplification rules}. Consider for example
the regular expression

\begin{equation}
(r_1 + \ZERO) \cdot \ONE + ((\ONE + r_2) + r_3) \cdot (r_4 \cdot \ZERO)
\label{big}
\end{equation}

\noindent If we can find an equivalent regular expression that is
simpler (smaller for example), then this might potentially make our
matching algorithm run faster. We can safely look for such a simpler
regular expression $r'$, because asking whether a string $s$ is in
$L(r)$ or in $L(r')$ with $r\equiv r'$ will always give the same
answer. In the example above you will see that the regular expression
is equivalent to just $r_1$. You can verify this by iteratively
applying the simplification rules from above:

\begin{center}
\begin{tabular}{ll}
 & $(r_1 + \ZERO) \cdot \ONE + ((\ONE + r_2) + r_3) \cdot (\underline{r_4 \cdot \ZERO})$\smallskip\\
$\equiv$ & $(r_1 + \ZERO) \cdot \ONE + \underline{((\ONE + r_2) + r_3) \cdot \ZERO}$\smallskip\\
$\equiv$ & $\underline{(r_1 + \ZERO) \cdot \ONE} + \ZERO$\smallskip\\
$\equiv$ & $(\underline{r_1 + \ZERO}) + \ZERO$\smallskip\\
$\equiv$ & $\underline{r_1 + \ZERO}$\smallskip\\
$\equiv$ & $r_1$\\
\end{tabular}
\end{center}

\noindent In each step, I underlined where a simplification
rule is applied. Our matching algorithm in the next section
will often generate such ``useless'' $\ONE$s and
$\ZERO$s; therefore simplifying them away will make the
algorithm quite a bit faster.
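
Read from left to right, the seven equivalences are also easy to turn into
code. Below is a minimal sketch of such a simplification function in Scala,
assuming the case-class representation of regular expressions from Handout 1
(with constructors \code{ZERO}, \code{ONE}, \code{CHAR}, \code{ALT},
\code{SEQ} and \code{STAR}); applied to the regular expression in \eqref{big},
with $r_1,\ldots,r_4$ instantiated by single characters, it yields just $r_1$.
A more complete version, which also deals with the $n$-times regular
expressions introduced later, is given in Figure~\ref{scala2}.

\begin{lstlisting}[numbers=none]
// a sketch: the seven simplification rules applied recursively
// (assumes the Rexp case classes from Handout 1)
def simp(r: Rexp): Rexp = r match {
  case ALT(r1, r2) => (simp(r1), simp(r2)) match {
    case (ZERO, r2s) => r2s                       // ZERO + r  =>  r
    case (r1s, ZERO) => r1s                       // r + ZERO  =>  r
    case (r1s, r2s)  =>                           // r + r     =>  r
      if (r1s == r2s) r1s else ALT(r1s, r2s)
  }
  case SEQ(r1, r2) => (simp(r1), simp(r2)) match {
    case (ZERO, _)   => ZERO                      // ZERO . r  =>  ZERO
    case (_, ZERO)   => ZERO                      // r . ZERO  =>  ZERO
    case (ONE, r2s)  => r2s                       // ONE . r   =>  r
    case (r1s, ONE)  => r1s                       // r . ONE   =>  r
    case (r1s, r2s)  => SEQ(r1s, r2s)
  }
  case rest => rest   // leave CHAR, ONE, ZERO and STAR alone
}

// the example regular expression, with r1..r4 chosen to be
// single characters just for illustration
val example = ALT(SEQ(ALT(CHAR('a'), ZERO), ONE),
                  SEQ(ALT(ALT(ONE, CHAR('b')), CHAR('c')),
                      SEQ(CHAR('d'), ZERO)))

simp(example)       // ==> CHAR('a'), that is just r1
\end{lstlisting}
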
\subsection*{The Matching Algorithm}

The algorithm we will define below consists of two parts. One is the
function $\textit{nullable}$ which takes a regular expression as its
argument and decides whether it can match the empty string (this means
it returns a boolean in Scala). This can be easily defined recursively
as follows:

\begin{center}
\begin{tabular}{@ {}l@ {\hspace{2mm}}c@ {\hspace{2mm}}l@ {}}
$\textit{nullable}(\ZERO)$ & $\dn$ & $\textit{false}$\\
$\textit{nullable}(\ONE)$ & $\dn$ & $\textit{true}$\\
$\textit{nullable}(c)$ & $\dn$ & $\textit{false}$\\
$\textit{nullable}(r_1 + r_2)$ & $\dn$ & $\textit{nullable}(r_1) \vee \textit{nullable}(r_2)$\\
$\textit{nullable}(r_1 \cdot r_2)$ & $\dn$ & $\textit{nullable}(r_1) \wedge \textit{nullable}(r_2)$\\
$\textit{nullable}(r^*)$ & $\dn$ & $\textit{true}$ \\
\end{tabular}
\end{center}

\noindent The idea behind this function is that the following
property holds:

\[\textit{nullable}(r) \;\;\text{if and only if}\;\; []\in L(r)\]

\noindent Note that on the left-hand side of the if-and-only-if we
have a function we can implement; on the right we have its
specification (which we cannot implement in a programming
language).

The other function of our matching algorithm calculates a
\emph{derivative} of a regular expression. This is a function
which takes a regular expression, say $r$, and a
character, say $c$, as arguments and returns a new regular
expression. Be aware that the intuition behind this function
is not so easy to grasp on a first reading. Essentially this
function solves the following problem: if $r$ can match a
string of the form $c\!::\!s$, what does the regular
expression look like that can match just $s$? The definition
of this function is as follows:

\begin{center}
\begin{tabular}{l@ {\hspace{2mm}}c@ {\hspace{2mm}}l}
 $der\, c\, (\ZERO)$ & $\dn$ & $\ZERO$\\
 $der\, c\, (\ONE)$ & $\dn$ & $\ZERO$ \\
 $der\, c\, (d)$ & $\dn$ & if $c = d$ then $\ONE$ else $\ZERO$\\
 $der\, c\, (r_1 + r_2)$ & $\dn$ & $der\, c\, r_1 + der\, c\, r_2$\\
 $der\, c\, (r_1 \cdot r_2)$ & $\dn$ & if $\textit{nullable} (r_1)$\\
 & & then $(der\,c\,r_1) \cdot r_2 + der\, c\, r_2$\\
 & & else $(der\, c\, r_1) \cdot r_2$\\
 $der\, c\, (r^*)$ & $\dn$ & $(der\,c\,r) \cdot (r^*)$
\end{tabular}
\end{center}

\noindent The first two clauses can be rationalised as
follows: recall that $der$ should calculate a regular
expression such that, given that the ``input'' regular expression can
match a string of the form $c\!::\!s$, we obtain a regular
expression for $s$. Since neither $\ZERO$ nor $\ONE$
can match a string of the form $c\!::\!s$, we return
$\ZERO$. In the third case we have to make a
case-distinction: in case the regular expression is $c$, then
clearly it can recognise a string of the form $c\!::\!s$, just
that $s$ is the empty string. Therefore we return the
$\ONE$-regular expression. In the other case we again
return $\ZERO$ since no string of the form $c\!::\!s$ can be
matched. Next come the recursive cases, which are a bit more
involved. Fortunately, the $+$-case is still relatively
straightforward: all strings of the form $c\!::\!s$ are either
matched by the regular expression $r_1$ or by $r_2$. So we just
have to recursively call $der$ with these two regular
expressions and compose the results again with $+$. Makes
sense? The $\cdot$-case is more complicated: if $r_1\cdot r_2$
matches a string of the form $c\!::\!s$, then the first part
must be matched by $r_1$. Consequently, it makes sense to
construct the regular expression for $s$ by calling $der$ with
$r_1$ and ``appending'' $r_2$. There is however one exception
to this simple rule: if $r_1$ can match the empty string, then
all of $c\!::\!s$ is matched by $r_2$. So in case $r_1$ is
nullable (that is, can match the empty string) we have to allow
the choice $der\,c\,r_2$ for calculating the regular
expression that can match $s$. Therefore we have to add the
regular expression $der\,c\,r_2$ to the result. The $*$-case
is again simple: if $r^*$ matches a string of the form
$c\!::\!s$, then the first part must be ``matched'' by a
single copy of $r$. Therefore we recursively call $der\,c\,r$
and ``append'' $r^*$ in order to match the rest of $s$.
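
These two definitions translate almost literally into Scala. Here is a
sketch of both functions, again assuming the \code{Rexp} case classes
from Handout 1; the full program we will actually run is given later in
Figure~\ref{scala1}.

\begin{lstlisting}[numbers=none]
// a sketch of nullable and der, mirroring the clauses above
def nullable(r: Rexp): Boolean = r match {
  case ZERO        => false
  case ONE         => true
  case CHAR(_)     => false
  case ALT(r1, r2) => nullable(r1) || nullable(r2)
  case SEQ(r1, r2) => nullable(r1) && nullable(r2)
  case STAR(_)     => true
}

def der(c: Char, r: Rexp): Rexp = r match {
  case ZERO        => ZERO
  case ONE         => ZERO
  case CHAR(d)     => if (c == d) ONE else ZERO
  case ALT(r1, r2) => ALT(der(c, r1), der(c, r2))
  case SEQ(r1, r2) =>
    if (nullable(r1)) ALT(SEQ(der(c, r1), r2), der(c, r2))
    else SEQ(der(c, r1), r2)
  case STAR(r1)    => SEQ(der(c, r1), STAR(r1))
}
\end{lstlisting}
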
If this did not make sense yet, here is another way to rationalise
the definition of $der$ by considering the following operation
on sets:

\begin{equation}\label{Der}
Der\,c\,A\;\dn\;\{s\,|\,c\!::\!s \in A\}
\end{equation}

\noindent This operation essentially transforms a set of
strings $A$ by filtering out all strings that do not start
with $c$ and then stripping off the $c$ from all the remaining
strings. For example, suppose $A = \{f\!oo, bar, f\!rak\}$; then

\[ Der\,f\,A = \{oo, rak\}\quad,\quad Der\,b\,A = \{ar\} \quad \text{and} \quad Der\,a\,A = \{\} \]

\noindent
Note that in the last case $Der$ is empty, because no string in $A$
starts with $a$. With this operation we can state the following
property about $der$:

\[L(der\,c\,r) = Der\,c\,(L(r))\]

\noindent
This property clarifies what regular expression $der$ calculates,
namely take the set of strings that $r$ can match (that is $L(r)$),
filter out all strings not starting with $c$ and strip off the $c$
from the remaining strings---this is exactly the language that
$der\,c\,r$ can match.
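
The operation $Der$ is easy to write down in Scala as well---not because
the matcher needs it, but because it makes examples like the one above
concrete. A small sketch:

\begin{lstlisting}[numbers=none]
// a sketch of the set operation Der: keep the strings that
// start with c and strip off that c
def Der(c: Char, A: Set[String]): Set[String] =
  for (s <- A if s != "" && s.head == c) yield s.tail

Der('f', Set("foo", "bar", "frak"))   // ==> Set("oo", "rak")
Der('b', Set("foo", "bar", "frak"))   // ==> Set("ar")
Der('a', Set("foo", "bar", "frak"))   // ==> Set()
\end{lstlisting}

\noindent Remember, though, that $Der$ works on sets of strings, which
can be infinite; the whole point of $der$ is to achieve the same effect
while only ever manipulating finite regular expressions.
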
If we want to find out whether the string $abc$ is matched by
the regular expression $r_1$, then we can iteratively apply $der$
as follows

\begin{center}
\begin{tabular}{rll}
Input: $r_1$, $abc$\medskip\\
Step 1: & build derivative of $a$ and $r_1$ & $(r_2 = der\,a\,r_1)$\smallskip\\
Step 2: & build derivative of $b$ and $r_2$ & $(r_3 = der\,b\,r_2)$\smallskip\\
Step 3: & build derivative of $c$ and $r_3$ & $(r_4 = der\,c\,r_3)$\smallskip\\
Step 4: & the string is exhausted; test & ($\textit{nullable}(r_4)$)\\
 & whether $r_4$ can recognise the\\
 & empty string\smallskip\\
Output: & result of this test $\Rightarrow \textit{true} \,\text{or}\, \textit{false}$\\
\end{tabular}
\end{center}

\noindent Again the operation $Der$ might help to rationalise
this algorithm. We want to know whether $abc \in L(r_1)$. We
do not know yet---but let us assume it is. Then $Der\,a\,(L(r_1))$
builds the set where all the strings not starting with $a$ are
filtered out. Of the remaining strings, the $a$ is stripped
off. So we should still have $bc$ in the set.
Then we continue with filtering out all strings not
starting with $b$ and stripping off the $b$ from the remaining
strings, that means we build $Der\,b\,(Der\,a\,(L(r_1)))$.
Finally we filter out all strings not starting with $c$ and
strip off $c$ from the remaining strings. This is
$Der\,c\,(Der\,b\,(Der\,a\,(L(r_1))))$. Now if $abc$ was in the
original set ($L(r_1)$), then $Der\,c\,(Der\,b\,(Der\,a\,(L(r_1))))$
must contain the empty string. If not, then $abc$ was not in the
language we started with.

Our matching algorithm using $der$ and $\textit{nullable}$ works
similarly, just using regular expressions instead of sets. For
this we need to extend the notion of derivatives from single
characters to strings. This can be done using the following
function, taking a string and a regular expression as input and
returning a regular expression as output.

\begin{center}
\begin{tabular}{@ {}l@ {\hspace{2mm}}c@ {\hspace{2mm}}l@ {\hspace{-10mm}}l@ {}}
 $\textit{ders}\, []\, r$ & $\dn$ & $r$ & \\
 $\textit{ders}\, (c\!::\!s)\, r$ & $\dn$ & $\textit{ders}\,s\,(der\,c\,r)$ & \\
\end{tabular}
\end{center}

\noindent This function iterates $der$, taking one character at
a time from the original string until it is exhausted.
Having $ders$ in place, we can finally define our matching
algorithm:

\[matches\,s\,r \dn \textit{nullable}(ders\,s\,r)\]

\noindent
and we can claim that

\[matches\,s\,r\quad\text{if and only if}\quad s\in L(r)\]

\noindent holds, which means our algorithm satisfies the
specification. Of course we can claim many things\ldots{}
whether the claim holds any water is a different question,
which for example is the point of the Strand-2 Coursework.
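
In Scala the two remaining definitions are again one-liners. Here is a
sketch (the program we will actually use follows in Figure~\ref{scala1}),
together with the $abc$ example from above, where I instantiate $r_1$,
just for illustration, as $a\cdot b\cdot c$; it assumes the
\code{nullable} and \code{der} functions sketched earlier.

\begin{lstlisting}[numbers=none]
// a sketch of ders and matches
def ders(s: List[Char], r: Rexp): Rexp = s match {
  case Nil     => r
  case c :: cs => ders(cs, der(c, r))
}

def matches(s: String, r: Rexp): Boolean =
  nullable(ders(s.toList, r))

// the example above, with r1 chosen to be a . b . c
val r1 = SEQ(CHAR('a'), SEQ(CHAR('b'), CHAR('c')))

matches("abc", r1)    // ==> true
matches("abd", r1)    // ==> false
\end{lstlisting}
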
This algorithm was introduced by Janusz Brzozowski in 1964. Its
main attractions are simplicity and being fast, as well as
being easily extendable to other regular expressions such as
$r^{\{n\}}$, $r^?$, $\sim{}r$ and so on (this is the subject of
Strand-1 Coursework 1).

\subsection*{The Matching Algorithm in Scala}

Another attraction of the algorithm is that it can be easily
implemented in a functional programming language, like Scala.
Given the implementation of regular expressions in Scala shown
in the first lecture and handout, the functions and subfunctions
for \pcode{matches} are shown in Figure~\ref{scala1}.

\begin{figure}[p]
\lstinputlisting{../progs/app5.scala}
\caption{Scala implementation of the \textit{nullable} and derivative functions. These functions are easy to implement in functional languages, because their built-in pattern matching and recursion allow us to mimic the mathematical definitions very closely.\label{scala1}}
\end{figure}

For running the algorithm with our favourite example, the evil
regular expression $a^?{}^{\{n\}}\cdot a^{\{n\}}$, we need to implement
the optional regular expression and the exactly $n$-times
regular expression. This can be done with the translations

\lstinputlisting[numbers=none]{../progs/app51.scala}

\noindent Running the matcher with the example, we find it is
slightly worse than the matchers in Ruby and Python.
Ooops\ldots

\begin{center}
\begin{tikzpicture}
\begin{axis}[
 xlabel={\pcode{a}s},
 ylabel={time in secs},
 enlargelimits=false,
 xtick={0,5,...,30},
 xmax=30,
 ytick={0,5,...,30},
 scaled ticks=false,
 axis lines=left,
 width=6cm,
 height=5cm,
 legend entries={Python,Ruby,Scala V1},
 legend pos=outer north east,
 legend cell align=left ]
\addplot[blue,mark=*, mark options={fill=white}] table {re-python.data};
\addplot[brown,mark=pentagon*, mark options={fill=white}] table {re-ruby.data};
\addplot[red,mark=triangle*,mark options={fill=white}] table {re1.data};
\end{axis}
\end{tikzpicture}
\end{center}

\noindent Analysing this failure, we notice that for
$a^{\{n\}}$ we generate quite big regular expressions:

\begin{center}
\begin{tabular}{rl}
1: & $a$\\
2: & $a\cdot a$\\
3: & $a\cdot a\cdot a$\\
& \ldots\\
13: & $a\cdot a\cdot a\cdot a\cdot a\cdot a\cdot a\cdot a\cdot a\cdot a\cdot a\cdot a\cdot a$\\
& \ldots
\end{tabular}
\end{center}

\noindent Our algorithm traverses such regular expressions at
least once every time a derivative is calculated. So having
large regular expressions will cause problems. This problem
is aggravated by $a^?$ being represented as $a + \ONE$.

We can, however, fix this by having an explicit constructor for
$r^{\{n\}}$. In Scala we would introduce a constructor like

\begin{center}
\code{case class NTIMES(r: Rexp, n: Int) extends Rexp}
\end{center}

\noindent With this fix we have a constant ``size'' regular
expression for our running example no matter how large $n$ is.
This means we also have to add cases for \pcode{NTIMES} in the
functions $\textit{nullable}$ and $der$.
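
These additional cases might, for example, look as follows (a sketch;
only the new cases are shown, the surrounding \code{nullable} and
\code{der} functions stay as before). For $n > 0$ the idea is simply to
treat $r^{\{n\}}$ like $r \cdot r^{\{n-1\}}$, without ever building
that big regular expression.

\begin{lstlisting}[numbers=none]
// extra case for nullable: r{n} is nullable if n == 0,
// otherwise exactly when r is nullable
  case NTIMES(r1, n) => if (n == 0) true else nullable(r1)

// extra case for der: for n > 0 think of r{n} as r . r{n-1}
  case NTIMES(r1, n) =>
    if (n == 0) ZERO else SEQ(der(c, r1), NTIMES(r1, n - 1))
\end{lstlisting}
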
Does the change have any effect?

\begin{center}
\begin{tikzpicture}
\begin{axis}[
 xlabel={\pcode{a}s},
 ylabel={time in secs},
 enlargelimits=false,
 xtick={0,100,...,1000},
 xmax=1000,
 ytick={0,5,...,30},
 scaled ticks=false,
 axis lines=left,
 width=9.5cm,
 height=5cm,
 legend entries={Python,Ruby,Scala V1,Scala V2},
 legend pos=outer north east,
 legend cell align=left ]
\addplot[blue,mark=*, mark options={fill=white}] table {re-python.data};
\addplot[brown,mark=pentagon*, mark options={fill=white}] table {re-ruby.data};
\addplot[red,mark=triangle*,mark options={fill=white}] table {re1.data};
\addplot[green,mark=square*,mark options={fill=white}] table {re2b.data};
\end{axis}
\end{tikzpicture}
\end{center}

\noindent Now we are talking business! Within 30 seconds the
modified matcher can handle regular expressions up to
$n = 950$ before a StackOverflow is raised. Python and Ruby
(and our first version) could only handle $n = 27$ or so in
30 seconds.

SECOND EXAMPLE

The moral is that our algorithm is rather sensitive to the
size of regular expressions it needs to handle. This is of
course obvious because both $\textit{nullable}$ and $der$ frequently
need to traverse the whole regular expression. There seems,
however, to be one more issue for making the algorithm run faster.
The derivative function often produces ``useless''
$\ZERO$s and $\ONE$s. To see this, consider $r = ((a\cdot b) + b)^*$
and the following three derivatives

\begin{center}
\begin{tabular}{l}
$der\,a\,r = ((\ONE \cdot b) + \ZERO) \cdot r$\\
$der\,b\,r = ((\ZERO \cdot b) + \ONE)\cdot r$\\
$der\,c\,r = ((\ZERO \cdot b) + \ZERO)\cdot r$
\end{tabular}
\end{center}

\noindent If we simplify them according to the simple rules from the
beginning, we can replace the right-hand sides by the smaller
equivalent regular expressions

\begin{center}
\begin{tabular}{l}
$der\,a\,r \equiv b \cdot r$\\
$der\,b\,r \equiv r$\\
$der\,c\,r \equiv \ZERO$
\end{tabular}
\end{center}

\noindent I leave it to you to contemplate whether such a
simplification can have any impact on the correctness of our
algorithm (will it change any answers?).
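
As a quick sanity check, a recursive simplification function along the
lines of the sketch from the beginning of this handout indeed collapses
the three derivatives to the regular expressions claimed above:

\begin{lstlisting}[numbers=none]
// checking the simplified derivatives of r = ((a.b) + b)*
// (assumes Rexp, der and the simp sketch from earlier)
val r = STAR(ALT(SEQ(CHAR('a'), CHAR('b')), CHAR('b')))

simp(der('a', r)) == SEQ(CHAR('b'), r)   // ==> true, that is b . r
simp(der('b', r)) == r                   // ==> true
simp(der('c', r)) == ZERO                // ==> true
\end{lstlisting}
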
Figure~\ref{scala2}
gives a simplification function that recursively traverses a
regular expression and simplifies it according to the rules
given at the beginning. There are only rules for $+$, $\cdot$
and $n$-times (the latter because we added it in the second
version of our matcher). There is no rule for a star, because
empirical data and also a little thought showed that
simplifying under a star is a waste of computation time. The
simplification function will be called after every derivative
is built. This additional step removes all the ``junk'' the
derivative function introduced. Does this improve the speed?
You bet!!

\begin{figure}[p]
\lstinputlisting{../progs/app6.scala}
\caption{The simplification function and modified \texttt{ders}-function; this function now calls \texttt{der} first, but then simplifies the resulting derivative regular expressions before building the next derivative, see Line~\ref{simpline}.\label{scala2}}
\end{figure}

\begin{center}
\begin{tikzpicture}
\begin{axis}[
 xlabel={\pcode{a}s},
 ylabel={time in secs},
 enlargelimits=false,
 xtick={0,2000,...,12000},
 xmax=12000,
 ytick={0,5,...,30},
 scaled ticks=false,
 axis lines=left,
 width=9cm,
 height=5cm,
 legend entries={Scala V2,Scala V3}]
\addplot[green,mark=square*,mark options={fill=white}] table {re2b.data};
\addplot[black,mark=square*,mark options={fill=white}] table {re3.data};
\end{axis}
\end{tikzpicture}
\end{center}

SECOND EXAMPLE

\section*{Proofs}

You might not like doing proofs. But they serve a very
important purpose in Computer Science: How can we be sure that
our algorithm matches its specification? We can try to test
the algorithm, but that often overlooks corner cases and
exhaustive testing is impossible (since there are infinitely
many inputs). Proofs allow us to ensure that an algorithm
really meets its specification.

For the programs we look at in this module, the proofs will
mostly be by some form of induction. Remember that regular
expressions are defined as

\begin{center}
\begin{tabular}{r@{\hspace{1mm}}r@{\hspace{1mm}}l@{\hspace{13mm}}l}
 $r$ & $::=$ & $\ZERO$ & null language\\
 & $\mid$ & $\ONE$ & empty string / \texttt{""} / []\\
 & $\mid$ & $c$ & single character\\
 & $\mid$ & $r_1 + r_2$ & alternative / choice\\
 & $\mid$ & $r_1 \cdot r_2$ & sequence\\
 & $\mid$ & $r^*$ & star (zero or more)\\
\end{tabular}
\end{center}

\noindent If you want to show a property $P(r)$ for all regular
expressions $r$, then you have to follow essentially the following recipe:

\begin{itemize}
\item $P$ has to hold for $\ZERO$, $\ONE$ and $c$ (these are the base cases).
\item $P$ has to hold for $r_1 + r_2$ under the assumption that $P$ already holds for $r_1$ and $r_2$.
\item $P$ has to hold for $r_1 \cdot r_2$ under the assumption that $P$ already holds for $r_1$ and $r_2$.
\item $P$ has to hold for $r^*$ under the assumption that $P$ already holds for $r$.
\end{itemize}

\noindent A simple proof is for example showing the following
property:

\begin{equation}
\textit{nullable}(r) \;\;\text{if and only if}\;\; []\in L(r)
\label{nullableprop}
\end{equation}

\noindent
Let us say that this property is $P(r)$; then the first case
we need to check is whether $P(\ZERO)$ holds (see recipe above).
So we have to show that

\[\textit{nullable}(\ZERO) \;\;\text{if and only if}\;\; []\in L(\ZERO)\]

\noindent where $\textit{nullable}(\ZERO)$ is by definition of
the function $\textit{nullable}$ always $\textit{false}$. We also have
that $L(\ZERO)$ is by definition $\{\}$. It is
impossible that the empty string $[]$ is in the empty set.
Therefore the right-hand side is also false. Consequently we have
verified this case: both sides are false. We would still need
to do this for $P(\ONE)$ and $P(c)$. I leave this to
you to verify.

Next we need to check the inductive cases, for example
$P(r_1 + r_2)$, which is

\begin{equation}
\textit{nullable}(r_1 + r_2) \;\;\text{if and only if}\;\; []\in L(r_1 + r_2)
\label{propalt}
\end{equation}

\noindent The difference to the base cases is that in this
case we can already assume we have proved

\begin{center}
\begin{tabular}{l}
$\textit{nullable}(r_1) \;\;\text{if and only if}\;\; []\in L(r_1)$ and\\
$\textit{nullable}(r_2) \;\;\text{if and only if}\;\; []\in L(r_2)$\\
\end{tabular}
\end{center}

\noindent These are the induction hypotheses. To check this
case, we can start from $\textit{nullable}(r_1 + r_2)$, which by
definition is

\[\textit{nullable}(r_1) \vee \textit{nullable}(r_2)\]

\noindent Using the two induction hypotheses from above,
we can transform this into

\[[] \in L(r_1) \vee []\in L(r_2)\]

\noindent We just replaced the $\textit{nullable}(\ldots)$ parts by
the equivalent $[] \in L(\ldots)$ from the induction
hypotheses. A bit of thinking convinces you that if
$[] \in L(r_1) \vee []\in L(r_2)$ then the empty string
must be in the union $L(r_1)\cup L(r_2)$, that is

\[[] \in L(r_1)\cup L(r_2)\]

\noindent but this is by definition of $L$ exactly $[] \in
L(r_1 + r_2)$, which we needed to establish according to
\eqref{propalt}. What we have shown is that starting from
$\textit{nullable}(r_1 + r_2)$ we have done equivalent transformations
to end up with $[] \in L(r_1 + r_2)$. Consequently we have
established that $P(r_1 + r_2)$ holds.

In order to complete the proof we would now need to look at
the cases \mbox{$P(r_1\cdot r_2)$} and $P(r^*)$. Again I let you
check the details.

You might also have to do induction proofs over strings.
That means you want to establish a property $P(s)$ for all
strings $s$. For this, remember that strings are lists of
characters. These lists can be either the empty list or a
list of the form $c::s$. If you want to perform an induction
proof for strings you need to consider the following cases

\begin{itemize}
\item $P$ has to hold for $[]$ (this is the base case).
\item $P$ has to hold for $c::s$ under the assumption that $P$ already holds for $s$.
\end{itemize}

\noindent
Given this recipe, I let you show

\begin{equation}
Ders\,s\,(L(r)) = L(ders\,s\,r)
\label{dersprop}
\end{equation}

\noindent by induction on $s$. Recall that $Der$ is defined for
characters---see \eqref{Der}; $Ders$ is similar, but for strings:

\[Ders\,s\,A\;\dn\;\{s'\,|\,s @ s' \in A\}\]

\noindent In this proof you can assume the following property
for $der$ and $Der$ has already been proved, that is you can
assume

\[L(der\,c\,r) = Der\,c\,(L(r))\]

\noindent holds (this would of course be a property that
needs to be proved in a side-lemma by induction on $r$).

To sum up, using reasoning like the one shown above allows us
to show the correctness of our algorithm. To see this,
start from the specification

\[s \in L(r)\]

\noindent That is the problem we want to solve. Thinking a
little, you will see that this problem is equivalent to the
following problem

\begin{equation}
[] \in Ders\,s\,(L(r))
\label{dersstep}
\end{equation}

\noindent But we have shown above in \eqref{dersprop} that
the $Ders$ can be replaced by $L(ders\ldots)$. That means
\eqref{dersstep} is equivalent to

\begin{equation}
[] \in L(ders\,s\,r)
\label{prefinalstep}
\end{equation}

\noindent We have also shown that testing whether the empty
string is in a language is equivalent to the $\textit{nullable}$
function; see \eqref{nullableprop}. That means
\eqref{prefinalstep} is equivalent to

\[\textit{nullable}(ders\,s\,r)\]

\noindent But this is just the definition of $matches$

\[matches\,s\,r \dn \textit{nullable}(ders\,s\,r)\]

\noindent In effect we have shown

\[matches\,s\,r\;\;\text{if and only if}\;\;s\in L(r)\]

\noindent which is the property we set out to prove:
our algorithm meets its specification.
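
Put together as one chain of equivalences, the argument is:

\[
\begin{array}{lcl}
s \in L(r) & \Longleftrightarrow & [] \in Ders\,s\,(L(r))\\
           & \Longleftrightarrow & [] \in L(ders\,s\,r)\\
           & \Longleftrightarrow & \textit{nullable}(ders\,s\,r)\\
           & \Longleftrightarrow & matches\,s\,r\\
\end{array}
\]

\noindent where the first step restates the problem as in
\eqref{dersstep}, the second step uses \eqref{dersprop}, the third
uses \eqref{nullableprop}, and the last one is just the definition
of $matches$.
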
Establishing all this requires a few induction proofs about strings
and regular expressions. Following the recipes is already a big step
in performing these proofs.

\end{document}

%%% Local Variables: 
%%% mode: latex
%%% TeX-master: t
%%% End: