\documentclass[11pt]{article}
\usepackage[left]{lineno}
\usepackage{amsmath}
\usepackage{stmaryrd}
\begin{document}
%%%\linenumbers

\noindent We already proved that
\[\text{If}\;nullable(r)\;\text{then}\;POSIX\;(mkeps\; r)\;r\]
\noindent holds. This is essentially the ``base case'' for the
correctness proof of the algorithm. For the ``induction case'' we need
the following main theorem, which we are currently after:
\begin{center}
\begin{tabular}{lll}
If   & (*) & $POSIX\;v\;(der\;c\;r)$ and $\vdash v : der\;c\;r$\\
then &     & $POSIX\;(inj\;r\;c\;v)\;r$
\end{tabular}
\end{center}
\noindent That means a POSIX value $v$ is still $POSIX$ after injection.
I am not sure whether this theorem is actually true in this full
generality. Maybe it requires some restrictions. If we unfold the
$POSIX$ definition in the then-part, we arrive at
\[\forall v'.\;\text{if}\;\vdash v' : r\; \text{and} \;|inj\;r\;c\;v| = |v'|\;\text{then}\; inj\;r\;c\;v \succ_r v' \]
\noindent which is what we need to prove assuming the if-part (*) in the
theorem above. Since this is a universally quantified formula, we just
need to fix a $v'$. We can then prove the implication by assuming
\[\text{(a)}\;\;\vdash v' : r\;\; \text{and} \;\;\text{(b)}\;\;|inj\;r\;c\;v| = |v'|\]
\noindent and our goal is
\[\text{(goal)}\;\;inj\;r\;c\;v \succ_r v'\]
\noindent There are already two lemmas proved that can transform the
assumptions (a) and (b) into
\[\text{(a*)}\;\;\vdash proj\;r\;c\;v' : der\;c\;r\;\; \text{and} \;\;\text{(b*)}\;\;c\,\#\,|v| = |v'|\]
\noindent Another lemma shows that
\[|v'| = c\,\#\,|proj\;r\;c\;v'|\]
\noindent Using (b*) we can therefore infer
\[\text{(b**)}\;\;|v| = |proj\;r\;c\;v'|\]
\noindent The main idea of the proof is now a simple instantiation of
the assumption $POSIX\;v\;(der\;c\;r)$. If we unfold the $POSIX$
definition, we get
\[\forall v'.\;\text{if}\;\vdash v' : der\;c\;r\; \text{and} \;|v| = |v'|\;\text{then}\; v \succ_{der\;c\;r}\; v' \]
\noindent We can instantiate the universally quantified $v'$ in this
formula with $proj\;r\;c\;v'$ and use (a*) and (b**) in order to infer
\[v \succ_{der\;c\;r}\; proj\;r\;c\;v'\]
\noindent The point of the side-lemma below is that we can ``add'' an
$inj$ to both sides to obtain
\[inj\;r\;c\;v \succ_r\; inj\;r\;c\;(proj\;r\;c\;v')\]
\noindent Finally, there is already a lemma proved that shows that
injection after projection is the identity, meaning
\[inj\;r\;c\;(proj\;r\;c\;v') = v'\]
\noindent With this we have shown our goal (pending a proof of the
side-lemma next).

\subsection*{Side-Lemma}

A side-lemma needed for the theorem above, which might be true but
might also be false, is as follows:
\begin{center}
\begin{tabular}{lll}
If   & (1) & $v_1 \succ_{der\;c\;r} v_2$,\\
     & (2) & $\vdash v_1 : der\;c\;r$, and\\
     & (3) & $\vdash v_2 : der\;c\;r$ hold,\\
then &     & $inj\;r\;c\;v_1 \succ_r inj\;r\;c\;v_2$ also holds.
\end{tabular}
\end{center}
\noindent It essentially states that if one value $v_1$ is bigger than
another value $v_2$, then this ordering is preserved under injection.
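As a small sanity check, here is a tiny instance of this statement (the
instance is not part of the notes; it uses the standard defining
clauses of $der$ and $inj$, writing $\mathbf{1}$ for the regular
expression that matches only the empty string, $Empty$ for its value,
and $Char(a)$ for the value of the character regular expression $a$).
Take $r = a + a$ and $c = a$. Then $der\;a\;r = \mathbf{1} + \mathbf{1}$
and, by the rule for $Left$ versus $Right$ values (shown, instantiated,
in Case LR below), we have
\[Left(Empty) \succ_{\mathbf{1} + \mathbf{1}} Right(Empty)\]
\noindent since $len\,|Empty| \geq len\,|Empty|$. Injecting $a$ on both
sides gives
\begin{center}
\begin{tabular}{lcl}
$inj\;(a + a)\;a\;(Left(Empty))$  & $=$ & $Left(Char(a))$\\
$inj\;(a + a)\;a\;(Right(Empty))$ & $=$ & $Right(Char(a))$
\end{tabular}
\end{center}
\noindent and indeed $Left(Char(a)) \succ_{a + a} Right(Char(a))$ holds
by the same rule, since $len\,|Char(a)| \geq len\,|Char(a)|$. So the
side-lemma holds at least in this tiny instance.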
The side-lemma is proved by induction (on the definition of
$der$\ldots this is very similar to an induction on $r$).

\bigskip
\noindent The case that is still unproved is the sequence case, where
we assume $r = r_1\cdot r_2$ and also that $r_1$ is nullable. The
derivative $der\;c\;r$ is then
\begin{center}
$der\;c\;r = ((der\;c\;r_1) \cdot r_2) + (der\;c\;r_2)$
\end{center}
\noindent or without the parentheses
\begin{center}
$der\;c\;r = (der\;c\;r_1) \cdot r_2 + der\;c\;r_2$
\end{center}
\noindent In this case the assumptions are
\begin{center}
\begin{tabular}{ll}
(a) & $v_1 \succ_{(der\;c\;r_1) \cdot r_2 + der\;c\;r_2} v_2$\\
(b) & $\vdash v_1 : (der\;c\;r_1) \cdot r_2 + der\;c\;r_2$\\
(c) & $\vdash v_2 : (der\;c\;r_1) \cdot r_2 + der\;c\;r_2$\\
(d) & $nullable(r_1)$
\end{tabular}
\end{center}
\noindent The induction hypotheses are
\begin{center}
\begin{tabular}{ll}
(IH1) & $\forall v_1 v_2.\;v_1 \succ_{der\;c\;r_1} v_2\;\wedge\; \vdash v_1 : der\;c\;r_1 \;\wedge\; \vdash v_2 : der\;c\;r_1\qquad$\\
      & $\hfill\longrightarrow inj\;r_1\;c\;v_1 \succ_{r_1} \;inj\;r_1\;c\;v_2$\smallskip\\
(IH2) & $\forall v_1 v_2.\;v_1 \succ_{der\;c\;r_2} v_2\;\wedge\; \vdash v_1 : der\;c\;r_2 \;\wedge\; \vdash v_2 : der\;c\;r_2\qquad$\\
      & $\hfill\longrightarrow inj\;r_2\;c\;v_1 \succ_{r_2} \;inj\;r_2\;c\;v_2$\\
\end{tabular}
\end{center}
\noindent The goal is
\[\text{(goal)}\qquad inj\; (r_1 \cdot r_2)\;c\;v_1 \succ_{r_1 \cdot r_2} inj\; (r_1 \cdot r_2)\;c\;v_2\]
\noindent If we analyse how (a) could have arisen (that is, make a case
distinction), then we will find four cases:
\begin{center}
\begin{tabular}{ll}
LL & $v_1 = Left(w_1)$, $v_2 = Left(w_2)$\\
LR & $v_1 = Left(w_1)$, $v_2 = Right(w_2)$\\
RL & $v_1 = Right(w_1)$, $v_2 = Left(w_2)$\\
RR & $v_1 = Right(w_1)$, $v_2 = Right(w_2)$\\
\end{tabular}
\end{center}
\noindent We have to establish our goal in all four cases.

\subsubsection*{Case LR}

The corresponding rule (instantiated) is:
\begin{center}
\begin{tabular}{c}
$len\,|w_1| \geq len\,|w_2|$\\
\hline
$Left(w_1) \succ_{(der\;c\;r_1) \cdot r_2 + der\;c\;r_2} Right(w_2)$
\end{tabular}
\end{center}
\noindent This means we can also assume in this case
\[\text{(e)}\quad len\,|w_1| \geq len\,|w_2|\]
\noindent which is the premise of the rule above. Instantiating $v_1$
and $v_2$ in the assumptions (b) and (c) gives us
\begin{center}
\begin{tabular}{ll}
(b*) & $\vdash Left(w_1) : (der\;c\;r_1) \cdot r_2 + der\;c\;r_2$\\
(c*) & $\vdash Right(w_2) : (der\;c\;r_1) \cdot r_2 + der\;c\;r_2$\\
\end{tabular}
\end{center}
\noindent Since these are assumptions, we can further analyse how they
could have arisen according to the rules of $\vdash\_ : \_\,$. This
gives us two new assumptions
\begin{center}
\begin{tabular}{ll}
(b**) & $\vdash w_1 : (der\;c\;r_1) \cdot r_2$\\
(c**) & $\vdash w_2 : der\;c\;r_2$\\
\end{tabular}
\end{center}
\noindent Looking at (b**) we can further analyse how this judgement
could have arisen. This tells us that $w_1$ must have been a sequence,
say $u_1\cdot u_2$, with
\begin{center}
\begin{tabular}{ll}
(b***) & $\vdash u_1 : der\;c\;r_1$\\
       & $\vdash u_2 : r_2$\\
\end{tabular}
\end{center}
\noindent Instantiating the goal means we need to prove
\[inj\; (r_1 \cdot r_2)\;c\;(Left(u_1\cdot u_2)) \succ_{r_1 \cdot r_2} inj\; (r_1 \cdot r_2)\;c\;(Right(w_2))\]
\noindent We can simplify this according to the rules of $inj$:
\[(inj\; r_1\;c\;u_1)\cdot u_2 \succ_{r_1 \cdot r_2} (mkeps\;r_1) \cdot (inj\; r_2\;c\;w_2)\]
\noindent This is what we need to prove.
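For reference, this simplification uses the following two clauses of
$inj$ (these are the standard clauses for the sequence case with $r_1$
nullable; they are quoted here on that assumption, since the notes do
not spell out the definition of $inj$):
\begin{center}
\begin{tabular}{lcl}
$inj\;(r_1 \cdot r_2)\;c\;(Left(v_1\cdot v_2))$ & $=$ & $(inj\;r_1\;c\;v_1)\cdot v_2$\\
$inj\;(r_1 \cdot r_2)\;c\;(Right(v))$           & $=$ & $(mkeps\;r_1)\cdot (inj\;r_2\;c\;v)$
\end{tabular}
\end{center}
\noindent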
There are only two rules that can be used to prove this judgement:
\begin{center}
\begin{tabular}{cc}
\begin{tabular}{c}
$v_1 = v_1'$\qquad $v_2 \succ_{r_2} v_2'$\\
\hline
$v_1\cdot v_2 \succ_{r_1\cdot r_2} v_1'\cdot v_2'$
\end{tabular} &
\begin{tabular}{c}
$v_1 \succ_{r_1} v_1'$\\
\hline
$v_1\cdot v_2 \succ_{r_1\cdot r_2} v_1'\cdot v_2'$
\end{tabular}
\end{tabular}
\end{center}
\noindent Using the left rule would mean we need to show that
\[inj\; r_1\;c\;u_1 = mkeps\;r_1\]
\noindent but this can never be the case.\footnote{Actually Isabelle
found this out after analysing its argument. ;o)} Let us assume it were
true; then, if we flatten each side, it must also hold that
\[|inj\; r_1\;c\;u_1| = |mkeps\;r_1|\]
\noindent But this leads to a contradiction: the right-hand side is
equal to the empty list (or empty string), because we assumed
$nullable(r_1)$ and there is a lemma called \texttt{mkeps\_flat} which
shows this. On the other hand, we know by assumption (b***) and lemma
\texttt{v4} that the left-hand side must be a string starting with $c$
(since we inject $c$ into $u_1$). The empty string can never be equal
to a string starting with $c$\ldots therefore we have a contradiction.
That means we can only use the rule on the right-hand side to prove our
goal. This implies we need to prove
\[inj\; r_1\;c\;u_1 \succ_{r_1} mkeps\;r_1\]

\subsubsection*{Case RL}

The corresponding rule (instantiated) is:
\begin{center}
\begin{tabular}{c}
$len\,|w_1| > len\,|w_2|$\\
\hline
$Right(w_1) \succ_{(der\;c\;r_1) \cdot r_2 + der\;c\;r_2} Left(w_2)$
\end{tabular}
\end{center}

\subsection*{Test Proof}

We want to prove that
\[nullable(r) \;\text{implies}\; POSIX\;(mkeps\; r)\; r\]
\noindent We prove this by induction on $r$. There are 5 subcases, and
only the $r_1 + r_2$-case is interesting. In this case we know the
induction hypotheses are
\begin{center}
\begin{tabular}{ll}
(IMP1) & $nullable(r_1) \;\text{implies}\; POSIX\;(mkeps\; r_1)\; r_1$ \\
(IMP2) & $nullable(r_2) \;\text{implies}\; POSIX\;(mkeps\; r_2)\; r_2$
\end{tabular}
\end{center}
\noindent and we know that $nullable(r_1 + r_2)$ holds. From this we
know that either $nullable(r_1)$ or $nullable(r_2)$ holds. Let us
consider the first case where we know $nullable(r_1)$.

\subsection*{Problems in the paper proof}

I cannot verify\ldots

\newpage
\section*{Isabelle Cheat-Sheet}

\begin{itemize}
\item The main notion in Isabelle is a \emph{theorem}. Definitions,
  inductive predicates and recursive functions all have underlying
  theorems. If a definition is called \texttt{foo}, then the
  corresponding theorem will be called \texttt{foo\_def}. A recursive
  function, say \texttt{bar}, will have a theorem called
  \texttt{bar.simps}, which is added to the simplifier; that means the
  simplifier will automatically apply these equations when it
  simplifies a goal. The introduction rules of an inductive predicate
  called \texttt{baz} will be called \texttt{baz.intros}. For inductive
  predicates there are also the theorems \texttt{baz.induct} and
  \texttt{baz.cases}.
\item A \emph{goal-state} consists of one or more subgoals. If Isabelle
  reports \texttt{No more subgoals!}, then the theorem is proved. Each
  subgoal is of the form
  \[ \llbracket \ldots{}premises\ldots \rrbracket \Longrightarrow conclusion \]
  \noindent where $premises$ and $conclusion$ are formulas of type
  \texttt{bool}.
\item There are three low-level methods for applying one or more
  theorems to a subgoal, called \texttt{rule}, \texttt{drule} and
  \texttt{erule}. The first applies a theorem to the conclusion of a
  subgoal.
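  As a toy illustration (this example is not from the notes, it just
  shows the standard behaviour of \texttt{rule}): applying
  \[\texttt{apply}(\texttt{rule}\;conjI)\]
  to the subgoal $\llbracket P;\, Q \rrbracket \Longrightarrow P \wedge Q$
  replaces it by the two subgoals
  $\llbracket P;\, Q \rrbracket \Longrightarrow P$ and
  $\llbracket P;\, Q \rrbracket \Longrightarrow Q$, each of which can
  then be closed by \texttt{apply}(\texttt{assumption}).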
  In general the method is invoked as
  \[\texttt{apply}(\texttt{rule}\;thm)\]
  where the appropriate $thm$ depends on the form of the conclusion;
  for example:
  \begin{center}
  \begin{tabular}{lcl}
  $\_ \wedge \_$          & $\Rightarrow$ & $conjI$\\
  $\_ \longrightarrow \_$ & $\Rightarrow$ & $impI$\\
  $\forall\,x.\_$         & $\Rightarrow$ & $allI$
  \end{tabular}
  \end{center}
  Many such rules are called intro-rules; their names end with an
  ``$I$'' or, in the case of inductive predicates, they are collected
  in $\_.intros$.
\end{itemize}
\end{document}