--- a/ProgTutorial/Tactical.thy Tue Oct 27 15:43:21 2009 +0100
+++ b/ProgTutorial/Tactical.thy Thu Oct 29 16:02:15 2009 +0100
@@ -13,15 +13,14 @@
chapter {* Tactical Reasoning\label{chp:tactical} *}
text {*
- One of the main reason for descending to the ML-level of Isabelle is to be able to
- implement automatic proof procedures. Such proof procedures usually lessen
- considerably the burden of manual reasoning, for example, when introducing
- new definitions. These proof procedures are centred around refining a goal
- state using tactics. This is similar to the \isacommand{apply}-style
- reasoning at the user-level, where goals are modified in a sequence of proof
- steps until all of them are solved. However, there are also more structured
- operations available on the ML-level that help with the handling of
- variables and assumptions.
+ One of the main reasons for descending to the ML-level of Isabelle is to be
+ able to implement automatic proof procedures. Such proof procedures can
+ considerably lessen the burden of manual reasoning. They are centred around
+ the idea of refining a goal state using tactics. This is similar to the
+ \isacommand{apply}-style reasoning at the user-level, where goals are
+ modified in a sequence of proof steps until all of them are discharged.
+ In this chapter we will explain simple tactics and how to combine them using
+ tactic combinators. We also briefly explain the simplifier and conversions.
*}
@@ -32,7 +31,8 @@
into ML. Suppose the following proof.
*}
-lemma disj_swap: "P \<or> Q \<Longrightarrow> Q \<or> P"
+lemma disj_swap:
+ shows "P \<or> Q \<Longrightarrow> Q \<or> P"
apply(erule disjE)
apply(rule disjI2)
apply(assumption)
@@ -57,17 +57,18 @@
THEN atac 1)
end" "?P \<or> ?Q \<Longrightarrow> ?Q \<or> ?P"}
- To start the proof, the function @{ML_ind "Goal.prove"}~@{text "ctxt xs As C
- tac"} sets up a goal state for proving the goal @{text C} (that is @{prop "P
- \<or> Q \<Longrightarrow> Q \<or> P"} in the proof at hand) under the assumptions @{text As}
- (happens to be empty) with the variables @{text xs} that will be generalised
- once the goal is proved (in our case @{text P} and @{text
- Q}).\footnote{FIXME: explain prove earlier} The @{text "tac"} is the tactic
- that proves the goal; it can make use of the local assumptions (there are
- none in this example). The tactics @{ML_ind etac}, @{ML_ind rtac} and
- @{ML_ind atac} in the code above correspond roughly to @{text erule}, @{text
- rule} and @{text assumption}, respectively. The operator @{ML_ind THEN}
- strings the tactics together.
+ To start the proof, the function @{ML_ind prove in Goal} sets up a goal
+ state for proving the goal @{prop "P \<or> Q \<Longrightarrow> Q \<or> P"}. We can give this
+ function some assumptions in the third argument (there are no assumptions in
+ the proof at hand). The second argument stands for a list of variables
+ (given as strings). This list contains the variables that will be turned
+ into schematic variables once the goal is proved (in our case @{text P} and
+ @{text Q}). The last argument is the tactic that proves the goal. This
+ tactic can make use of the local assumptions (there are none in this
+ example). The tactics @{ML_ind etac}, @{ML_ind rtac} and @{ML_ind atac} in
+ the code above correspond roughly to @{text erule}, @{text rule} and @{text
+ assumption}, respectively. The combinator @{ML_ind THEN} strings the tactics
+ together.
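+
+ The tactic given to @{ML prove in Goal} receives the local assumptions as
+ theorems via the @{text prems} field of its record argument. As a small
+ sketch of how these assumptions can be used (this example is only an
+ illustration and not needed for the proof above), we can prove
+ @{text "P \<and> P"} from the assumption @{text "P"} as follows:
+
+ @{ML_response_fake [display,gray]
+"let
+  val assm = @{prop \"P\"}
+  val goal = @{prop \"P \<and> P\"}
+in
+  Goal.prove @{context} [\"P\"] [assm] goal
+   (fn {prems, ...} => rtac @{thm conjI} 1
+                       THEN rtac (hd prems) 1
+                       THEN rtac (hd prems) 1)
+end"
+"?P \<Longrightarrow> ?P \<and> ?P"}
+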
\begin{readmore}
To learn more about the function @{ML_ind prove in Goal} see
@@ -79,15 +80,29 @@
\end{readmore}
Note that in the code above we use antiquotations for referencing the
- theorems. Many theorems also have ML-bindings with the same name. Therefore,
- we could also just have written @{ML "etac disjE 1"}, or in case where there
- is no ML-binding obtain the theorem dynamically using the function @{ML
- thm}; for example \mbox{@{ML "etac (thm \"disjE\") 1"}}. Both ways however
- are considered bad style! The reason is that the binding for @{ML disjE} can
- be re-assigned by the user and thus one does not have complete control over
- which theorem is actually applied. This problem is nicely prevented by using
- antiquotations, because then the theorems are fixed statically at
- compile-time.
+ theorems. We could also just have written @{ML "etac disjE 1"}. The reason
+ is that many of the basic theorems have a corresponding ML-binding:
+
+ @{ML_response_fake [gray,display]
+ "disjE"
+ "\<lbrakk>?P \<or> ?Q; ?P \<Longrightarrow> ?R; ?Q \<Longrightarrow> ?R\<rbrakk> \<Longrightarrow> ?R"}
+
+ In cases where there is no ML-binding, theorems can also be obtained dynamically using
+ the function @{ML thm} and the (string) name of the theorem; for example:
+
+ @{ML_response_fake [gray,display]
+ "thm \"disjE\""
+ "\<lbrakk>?P \<or> ?Q; ?P \<Longrightarrow> ?R; ?Q \<Longrightarrow> ?R\<rbrakk> \<Longrightarrow> ?R"}
+
+ Both ways, however, are considered bad style! The reason is that the binding
+ for @{ML disjE} can be re-assigned by the user and thus one does not have
+ complete control over which theorem is actually applied. A similar problem
+ arises with the string @{text [quotes] "disjE"}: although theorems in the
+ theorem database must have a unique name, the string can also stand for a
+ dynamically updated theorem list, in which case we again cannot be sure
+ which theorem is applied.
+ These problems can be nicely prevented by using antiquotations, because then
+ the theorems are fixed statically at compile-time.
+
During the development of automatic proof procedures, you will often find it
necessary to test a tactic on examples. This can be conveniently done with
@@ -105,7 +120,8 @@
text {* and the Isabelle proof: *}
-lemma "P \<or> Q \<Longrightarrow> Q \<or> P"
+lemma
+ shows "P \<or> Q \<Longrightarrow> Q \<or> P"
apply(tactic {* foo_tac *})
done
@@ -130,37 +146,39 @@
THEN' atac)*}text_raw{*\label{tac:footacprime}*}
text {*
- where @{ML_ind THEN'} is used instead of @{ML THEN}. With @{ML foo_tac'} you can give
- the number for the subgoal explicitly when the tactic is
- called. So in the next proof you can first discharge the second subgoal, and
- subsequently the first.
+ where @{ML_ind THEN'} is used instead of @{ML THEN}. (For most combinators
+ that combine tactics---@{ML THEN} is only one such combinator---a ``primed''
+ version exists.) With @{ML foo_tac'} you can give the number for the
+ subgoal explicitly when the tactic is called. So in the next proof you can
+ first discharge the second subgoal, and subsequently the first.
*}
-lemma "P1 \<or> Q1 \<Longrightarrow> Q1 \<or> P1"
- and "P2 \<or> Q2 \<Longrightarrow> Q2 \<or> P2"
+lemma
+ shows "P1 \<or> Q1 \<Longrightarrow> Q1 \<or> P1"
+ and "P2 \<or> Q2 \<Longrightarrow> Q2 \<or> P2"
apply(tactic {* foo_tac' 2 *})
apply(tactic {* foo_tac' 1 *})
done
text {*
This kind of addressing is more difficult to achieve when the goal is
- hard-coded inside the tactic. For most operators that combine tactics
- (@{ML THEN} is only one such operator) a ``primed'' version exists.
-
- The tactics @{ML foo_tac} and @{ML foo_tac'} are very specific for
- analysing goals being only of the form @{prop "P \<or> Q \<Longrightarrow> Q \<or> P"}. If the goal is not
- of this form, then these tactics return the error message:
+ hard-coded inside the tactic.
+
+ The tactics @{ML foo_tac} and @{ML foo_tac'} are very specific: they can
+ only analyse goals of the form @{prop "P \<or> Q \<Longrightarrow> Q \<or> P"}. If the goal is not of
+ this form, then these tactics return the error message:\footnote{To be
+ precise, tactics do not produce this error message; it originates from the
+ \isacommand{apply} wrapper.}
+
\begin{isabelle}
@{text "*** empty result sequence -- proof command failed"}\\
@{text "*** At command \"apply\"."}
\end{isabelle}
- This means they failed.\footnote{To be precise, tactics do not produce this error
- message, it originates from the \isacommand{apply} wrapper.} The reason for this
- error message is that tactics
- are functions mapping a goal state to a (lazy) sequence of successor states.
- Hence the type of a tactic is:
+ This means they failed. The reason for this error message is that tactics
+ are functions mapping a goal state to a (lazy) sequence of successor
+ states. Hence the type of a tactic is:
*}
ML{*type tactic = thm -> thm Seq.seq*}
@@ -194,7 +212,8 @@
@{ML foo_tac'}: either using the first assumption or the second.
*}
-lemma "\<lbrakk>P \<or> Q; P \<or> Q\<rbrakk> \<Longrightarrow> Q \<or> P"
+lemma
+ shows "\<lbrakk>P \<or> Q; P \<or> Q\<rbrakk> \<Longrightarrow> Q \<or> P"
apply(tactic {* foo_tac' 1 *})
back
done
@@ -235,7 +254,8 @@
*}
notation (output) "prop" ("#_" [1000] 1000)
-lemma shows "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"
+lemma
+ shows "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"
apply(tactic {* my_print_tac @{context} *})
txt{* \begin{minipage}{\textwidth}
@@ -302,21 +322,22 @@
text_raw {*
\end{isabelle}
\end{boxedminipage}
- \caption{The figure shows a proof where each intermediate goal state is
- printed by the Isabelle system and by @{ML my_print_tac}. The latter shows
- the goal state as represented internally (highlighted boxes). This
- tactic shows that every goal state in Isabelle is represented by a theorem:
- when you start the proof of \mbox{@{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"}} the theorem is
- @{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B) \<Longrightarrow> #(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}; when you finish the proof the
- theorem is @{text "#(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}.\label{fig:goalstates}}
+ \caption{The figure shows a snippet of an Isabelle proof where each
+ intermediate goal state is printed by the Isabelle system and by @{ML
+ my_print_tac}. The latter shows the goal state as represented internally
+ (highlighted boxes). This tactic shows that every goal state in Isabelle is
+ represented by a theorem: when you start the proof of \mbox{@{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow>
+ A \<and> B"}} the theorem is @{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B) \<Longrightarrow> #(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}; when
+ you finish the proof the theorem is @{text "#(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and>
+ B)"}.\label{fig:goalstates}}
\end{figure}
*}
text {*
which prints out the given theorem (using the string-function defined in
Section~\ref{sec:printing}) and then behaves like @{ML all_tac}. With this
- tactic we are in the position to inspect every goal state in a
- proof. Consider now the proof in Figure~\ref{fig:goalstates}: as can be seen,
+ tactic we are in a position to inspect every goal state in a proof. For
+ this consider the proof in Figure~\ref{fig:goalstates}: as can be seen,
internally every goal state is an implication of the form
@{text[display] "A\<^isub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^isub>n \<Longrightarrow> #C"}
@@ -331,8 +352,8 @@
misinterpreted as open subgoals. While tactics can operate on the subgoals
(the @{text "A\<^isub>i"} above), they are expected to leave the conclusion
@{term C} intact, with the exception of possibly instantiating schematic
- variables. The instantiations of schematic variables can even be observed
- on the user level. For this consider the following definition and proof.
+ variables. Such instantiations of schematic variables can be observed
+ on the user level. Have a look at the following definition and proof.
*}
definition
@@ -349,14 +370,16 @@
Although Isabelle issues a warning message when stating goals involving
meta-variables, it is possible to prove this theorem. The reason for the warning
message is that the proved theorem is not @{text "EQ_TRUE ?X"}, as one might
- expect, but @{thm test}:
+ expect, but @{thm test}. We can test this with:
\begin{isabelle}
\isacommand{thm}~@{thm [source] test}\\
@{text ">"}~@{thm test}
\end{isabelle}
- As can be seen, the schematic variable @{text "?X"} has been instantiated inside the proof.
+ The reason for this result is that the schematic variable @{text "?X"}
+ is instantiated inside the proof and the instantiation is then propagated
+ to the ``outside''.
\begin{readmore}
For more information about the internals of goals see \isccite{sec:tactical-goals}.
@@ -367,13 +390,15 @@
section {* Simple Tactics\label{sec:simpletacs} *}
text {*
- Let us start with explaining the simple tactic @{ML_ind print_tac}, which is quite useful
+ In this section we will describe more of the simple tactics in Isabelle. The
+ first one is @{ML_ind print_tac}, which is quite useful
for low-level debugging of tactics. It just prints out a message and the current
goal state. Unlike @{ML my_print_tac} shown earlier, it prints the goal state
as the user would see it. For example, processing the proof
*}
-lemma shows "False \<Longrightarrow> True"
+lemma
+ shows "False \<Longrightarrow> True"
apply(tactic {* print_tac "foo message" *})
txt{*gives:\medskip
@@ -387,12 +412,13 @@
text {*
A simple tactic for easy discharge of any proof obligations is
- @{ML_ind cheat_tac in Skip_Proof}. This tactic corresponds to
- the Isabelle command \isacommand{sorry} and is sometimes useful
- during the development of tactics.
+ @{ML_ind cheat_tac in Skip_Proof} in the structure @{ML_struct Skip_Proof}.
+ This tactic corresponds to the Isabelle command \isacommand{sorry} and is
+ sometimes useful during the development of tactics.
*}
-lemma shows "False" and "Goldbach_conjecture"
+lemma
+ shows "False" and "Goldbach_conjecture"
apply(tactic {* Skip_Proof.cheat_tac @{theory} *})
txt{*\begin{minipage}{\textwidth}
@{subgoals [display]}
@@ -401,13 +427,15 @@
text {*
This tactic works however only if the quick-and-dirty mode of Isabelle
- is switched on.
-
- Another simple tactic is the function @{ML_ind atac}, which, as shown in the previous
- section, corresponds to the assumption command.
+ is switched on. This is done automatically in the ``interactive
+ mode'' of Isabelle, but must be done manually in the ``batch mode''.
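+
+ In the latter case the flag can be switched on, for example, with the
+ following assignment at the ML-level (this assumes that the flag is still
+ represented by the mutable ML value @{text quick_and_dirty} in your version
+ of Isabelle):
+
+ \begin{isabelle}
+ @{text "quick_and_dirty := true"}
+ \end{isabelle}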
+
+ Another simple tactic is the function @{ML_ind atac}, which, as shown
+ earlier, corresponds to the assumption method.
*}
-lemma shows "P \<Longrightarrow> P"
+lemma
+ shows "P \<Longrightarrow> P"
apply(tactic {* atac 1 *})
txt{*\begin{minipage}{\textwidth}
@{subgoals [display]}
@@ -422,21 +450,24 @@
apply it to a goal. Below are three self-explanatory examples.
*}
-lemma shows "P \<and> Q"
+lemma
+ shows "P \<and> Q"
apply(tactic {* rtac @{thm conjI} 1 *})
txt{*\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}*}
(*<*)oops(*>*)
-lemma shows "P \<and> Q \<Longrightarrow> False"
+lemma
+ shows "P \<and> Q \<Longrightarrow> False"
apply(tactic {* etac @{thm conjE} 1 *})
txt{*\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}*}
(*<*)oops(*>*)
-lemma shows "False \<and> True \<Longrightarrow> False"
+lemma
+ shows "False \<and> True \<Longrightarrow> False"
apply(tactic {* dtac @{thm conjunct2} 1 *})
txt{*\begin{minipage}{\textwidth}
@{subgoals [display]}
@@ -457,7 +488,9 @@
implication is analysed and then an outermost conjunction.
*}
-lemma shows "C \<longrightarrow> (A \<and> B)" and "(A \<longrightarrow> B) \<and> C"
+lemma
+ shows "C \<longrightarrow> (A \<and> B)"
+ and "(A \<longrightarrow> B) \<and> C"
apply(tactic {* resolve_xmp_tac 1 *})
apply(tactic {* resolve_xmp_tac 2 *})
txt{*\begin{minipage}{\textwidth}
@@ -472,10 +505,12 @@
Another simple tactic is @{ML_ind cut_facts_tac}. It inserts a list of theorems
- into the assumptions of the current goal state. For example
+ into the assumptions of the current goal state. Below we will insert the
+ definitions of the constants @{term True} and @{term False}:
*}
-lemma shows "True \<noteq> False"
+lemma
+ shows "True \<noteq> False"
apply(tactic {* cut_facts_tac [@{thm True_def}, @{thm False_def}] 1 *})
txt{*produces the goal state\medskip
@@ -485,103 +520,38 @@
(*<*)oops(*>*)
text {*
- Since rules are applied using higher-order unification, an automatic proof
- procedure might become too fragile, if it just applies inference rules as
- shown above. The reason is that a number of rules introduce meta-variables
- into the goal state. Consider for example the proof
-*}
-
-lemma shows "\<forall>x \<in> A. P x \<Longrightarrow> Q x"
-apply(tactic {* dtac @{thm bspec} 1 *})
-txt{*\begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage}*}
-(*<*)oops(*>*)
-
-text {*
- where the application of rule @{text bspec} generates two subgoals involving the
- meta-variable @{text "?x"}. Now, if you are not careful, tactics
- applied to the first subgoal might instantiate this meta-variable in such a
- way that the second subgoal becomes unprovable. If it is clear what the @{text "?x"}
- should be, then this situation can be avoided by introducing a more
- constrained version of the @{text bspec}-rule. Such constraints can be given by
- pre-instantiating theorems with other theorems. One function to do this is
- @{ML_ind "RS"}
-
- @{ML_response_fake [display,gray]
- "@{thm disjI1} RS @{thm conjI}" "\<lbrakk>?P1; ?Q\<rbrakk> \<Longrightarrow> (?P1 \<or> ?Q1) \<and> ?Q"}
-
- which in the example instantiates the first premise of the @{text conjI}-rule
- with the rule @{text disjI1}. If the instantiation is impossible, as in the
- case of
-
- @{ML_response_fake_both [display,gray]
- "@{thm conjI} RS @{thm mp}"
-"*** Exception- THM (\"RSN: no unifiers\", 1,
-[\"\<lbrakk>?P; ?Q\<rbrakk> \<Longrightarrow> ?P \<and> ?Q\", \"\<lbrakk>?P \<longrightarrow> ?Q; ?P\<rbrakk> \<Longrightarrow> ?Q\"]) raised"}
-
- then the function raises an exception. The function @{ML_ind RSN} is similar to @{ML RS}, but
- takes an additional number as argument that makes explicit which premise
- should be instantiated.
-
- To improve readability of the theorems we shall produce below, we will use the
- function @{ML no_vars} from Section~\ref{sec:printing}, which transforms
- schematic variables into free ones. Using this function for the first @{ML
- RS}-expression above produces the more readable result:
-
- @{ML_response_fake [display,gray]
- "no_vars @{context} (@{thm disjI1} RS @{thm conjI})" "\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> (P \<or> Qa) \<and> Q"}
-
- If you want to instantiate more than one premise of a theorem, you can use
- the function @{ML_ind MRS}:
-
- @{ML_response_fake [display,gray]
- "no_vars @{context} ([@{thm disjI1}, @{thm disjI2}] MRS @{thm conjI})"
- "\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> (P \<or> Qa) \<and> (Pa \<or> Q)"}
-
- If you need to instantiate lists of theorems, you can use the
- functions @{ML RL} and @{ML_ind MRL}. For example in the code below,
- every theorem in the second list is instantiated with every
- theorem in the first.
-
- @{ML_response_fake [display,gray]
-"map (no_vars @{context})
- ([@{thm impI}, @{thm disjI2}] RL [@{thm conjI}, @{thm disjI1}])"
-"[\<lbrakk>P \<Longrightarrow> Q; Qa\<rbrakk> \<Longrightarrow> (P \<longrightarrow> Q) \<and> Qa,
- \<lbrakk>Q; Qa\<rbrakk> \<Longrightarrow> (P \<or> Q) \<and> Qa,
- (P \<Longrightarrow> Q) \<Longrightarrow> (P \<longrightarrow> Q) \<or> Qa,
- Q \<Longrightarrow> (P \<or> Q) \<or> Qa]"}
-
- \begin{readmore}
- The combinators for instantiating theorems are defined in @{ML_file "Pure/drule.ML"}.
- \end{readmore}
-
Often proofs on the ML-level involve elaborate operations on assumptions and
@{text "\<And>"}-quantified variables. To do such operations using the basic tactics
shown so far is very unwieldy and brittle. Some convenience and
safety is provided by the functions @{ML_ind FOCUS in Subgoal} and
@{ML_ind SUBPROOF}. These tactics fix the parameters
and bind the various components of a goal state to a record.
- To see what happens, use the function defined in Figure~\ref{fig:sptac}, which
+ To see what happens, suppose we have the function defined in Figure~\ref{fig:sptac}, which
takes a record and just prints out the contents of this record. Consider
now the proof:
*}
+
text_raw{*
\begin{figure}[t]
\begin{minipage}{\textwidth}
\begin{isabelle}
*}
-
ML{*fun foc_tac {prems, params, asms, concl, context, schematics} =
let
- val string_of_params = string_of_cterms context (map snd params)
+ fun pairs1 f1 f2 xs =
+ commas (map (fn (x, y) => f1 x ^ ":=" ^ f2 y) xs)
+
+ fun pairs2 f xs = pairs1 f f xs
+
+ val string_of_params = pairs1 I (string_of_cterm context) params
val string_of_asms = string_of_cterms context asms
val string_of_concl = string_of_cterm context concl
val string_of_prems = string_of_thms_no_vars context prems
- val string_of_schms = string_of_cterms context (map fst (snd schematics))
-
+ val string_of_schms = pairs2 (string_of_cterm context) (snd schematics)
+
val strs = ["params: " ^ string_of_params,
"schematics: " ^ string_of_schms,
"assumptions: " ^ string_of_asms,
@@ -601,7 +571,8 @@
\end{figure}
*}
-lemma shows "\<And>x y. A x y \<Longrightarrow> B y x \<longrightarrow> C (?z y) x"
+lemma
+ shows "\<And>x y. A x y \<Longrightarrow> B y x \<longrightarrow> C (?z y) x"
apply(tactic {* Subgoal.FOCUS foc_tac @{context} 1 *})
txt {*
@@ -609,17 +580,17 @@
\begin{quote}\small
\begin{tabular}{ll}
- params: & @{term x}, @{term y}\\
- schematics: & @{text ?z}\\
+ params: & @{text "x:=x"}, @{text "y:=y"}\\
+ schematics: & @{text "?z:=z"}\\
assumptions: & @{term "A x y"}\\
conclusion: & @{term "B y x \<longrightarrow> C (z y) x"}\\
premises: & @{term "A x y"}
\end{tabular}
\end{quote}
- (FIXME: Find out how nowadays the schematics are handled)
-
- Notice in the actual output the brown colour of the variables @{term x},
+ The @{text params} and @{text schematics} stand for lists of pairs where the left-hand
+ side of @{text ":="} is replaced by the right-hand side inside the tactic.
+ Notice in the actual output the brown colour of the variables @{term x}
and @{term y}. Although they are parameters in the original goal, they are fixed inside
the tactic. By convention these fixed variables are printed in brown colour.
Similarly the schematic variable @{text ?z}. The assumption, or premise,
@@ -637,8 +608,8 @@
\begin{quote}\small
\begin{tabular}{ll}
- params: & @{term x}, @{term y}\\
- schematics: & @{text ?z}\\
+ params: & @{text "x:=x"}, @{text "y:=y"}\\
+ schematics: & @{text "?z:=z"}\\
assumptions: & @{term "A x y"}, @{term "B y x"}\\
conclusion: & @{term "C (z y) x"}\\
premises: & @{term "A x y"}, @{term "B y x"}
@@ -650,9 +621,9 @@
text {*
Now also @{term "B y x"} is an assumption bound to @{text asms} and @{text prems}.
- The difference between the tactics @{ML SUBPROOF} and @{ML FOCUS in Subgoal}
+ One difference between the tactics @{ML SUBPROOF} and @{ML FOCUS in Subgoal}
is that the former expects that the goal is solved completely, which the
- latter does not. @{ML SUBPROOF} can also not instantiate an schematic
+ latter does not. Another is that @{ML SUBPROOF} cannot instantiate any schematic
variables.
One convenience of both @{ML FOCUS in Subgoal} and @{ML SUBPROOF} is that we
@@ -667,7 +638,8 @@
If you apply @{ML atac'} to the next lemma
*}
-lemma shows "\<lbrakk>B x y; A x y; C x y\<rbrakk> \<Longrightarrow> A x y"
+lemma
+ shows "\<lbrakk>B x y; A x y; C x y\<rbrakk> \<Longrightarrow> A x y"
apply(tactic {* atac' @{context} 1 *})
txt{* it will produce
@{subgoals [display]} *}
@@ -684,7 +656,9 @@
*}
-lemma shows "True" and "\<lbrakk>B x y; A x y; C x y\<rbrakk> \<Longrightarrow> A x y"
+lemma
+ shows "True"
+ and "\<lbrakk>B x y; A x y; C x y\<rbrakk> \<Longrightarrow> A x y"
apply(tactic {* atac' @{context} 2 *})
apply(rule TrueI)
done
@@ -739,7 +713,8 @@
*}
-lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
+lemma
+ shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
apply(tactic {* SUBGOAL select_tac 4 *})
apply(tactic {* SUBGOAL select_tac 3 *})
apply(tactic {* SUBGOAL select_tac 2 *})
@@ -756,7 +731,8 @@
produced by the rule. If we had applied it in the other order
*}
-lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
+lemma
+ shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
apply(tactic {* SUBGOAL select_tac 1 *})
apply(tactic {* SUBGOAL select_tac 3 *})
apply(tactic {* SUBGOAL select_tac 4 *})
@@ -772,8 +748,177 @@
contrived: there are much simpler methods available in Isabelle for
implementing a tactic analysing a goal according to its topmost
connective. These simpler methods use tactic combinators, which we will
- explain in the next section.
-
+ explain in the next section, but before that we will show how
+ tactic application can be constrained.
+
+ Since rules are applied using higher-order unification, an automatic proof
+ procedure might become too fragile if it just applies inference rules as
+ shown above. The reason is that a number of rules introduce meta-variables
+ into the goal state. Consider for example the proof
+*}
+
+lemma
+ shows "\<forall>x \<in> A. P x \<Longrightarrow> Q x"
+apply(tactic {* dtac @{thm bspec} 1 *})
+txt{*\begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage}*}
+(*<*)oops(*>*)
+
+text {*
+ where the application of rule @{text bspec} generates two subgoals involving the
+ meta-variable @{text "?x"}. Now, if you are not careful, tactics
+ applied to the first subgoal might instantiate this meta-variable in such a
+ way that the second subgoal becomes unprovable. If it is clear what the @{text "?x"}
+ should be, then this situation can be avoided by introducing a more
+ constrained version of the @{text bspec}-rule. One way to give such
+ constraints is by pre-instantiating theorems with other theorems.
+ The function @{ML_ind "RS"}, for example, does this.
+
+ @{ML_response_fake [display,gray]
+ "@{thm disjI1} RS @{thm conjI}" "\<lbrakk>?P1; ?Q\<rbrakk> \<Longrightarrow> (?P1 \<or> ?Q1) \<and> ?Q"}
+
+ In this example it instantiates the first premise of the @{text conjI}-rule
+ with the rule @{text disjI1}. If the instantiation is impossible, as in the
+ case of
+
+ @{ML_response_fake_both [display,gray]
+ "@{thm conjI} RS @{thm mp}"
+"*** Exception- THM (\"RSN: no unifiers\", 1,
+[\"\<lbrakk>?P; ?Q\<rbrakk> \<Longrightarrow> ?P \<and> ?Q\", \"\<lbrakk>?P \<longrightarrow> ?Q; ?P\<rbrakk> \<Longrightarrow> ?Q\"]) raised"}
+
+ then the function raises an exception. The function @{ML_ind RSN} is similar to @{ML RS}, but
+ takes an additional number as argument that makes explicit which premise
+ should be instantiated.
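+
+ For example, resolving @{thm [source] disjI1} against the \emph{second}
+ premise of @{thm [source] conjI} can be written as follows (the exact
+ renaming of the schematic variables in the result may differ):
+
+ @{ML_response_fake [display,gray]
+ "@{thm disjI1} RSN (2, @{thm conjI})" "\<lbrakk>?P; ?P1\<rbrakk> \<Longrightarrow> ?P \<and> (?P1 \<or> ?Q1)"}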
+
+ To improve readability of the theorems we shall produce below, we will use the
+ function @{ML no_vars} from Section~\ref{sec:printing}, which transforms
+ schematic variables into free ones. Using this function for the first @{ML
+ RS}-expression above produces the more readable result:
+
+ @{ML_response_fake [display,gray]
+ "no_vars @{context} (@{thm disjI1} RS @{thm conjI})" "\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> (P \<or> Qa) \<and> Q"}
+
+ If you want to instantiate more than one premise of a theorem, you can use
+ the function @{ML_ind MRS}:
+
+ @{ML_response_fake [display,gray]
+ "no_vars @{context} ([@{thm disjI1}, @{thm disjI2}] MRS @{thm conjI})"
+ "\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> (P \<or> Qa) \<and> (Pa \<or> Q)"}
+
+ If you need to instantiate lists of theorems, you can use the
+ functions @{ML RL} and @{ML_ind MRL}. For example in the code below,
+ every theorem in the second list is instantiated with every
+ theorem in the first.
+
+ @{ML_response_fake [display,gray]
+"map (no_vars @{context})
+ ([@{thm impI}, @{thm disjI2}] RL [@{thm conjI}, @{thm disjI1}])"
+"[\<lbrakk>P \<Longrightarrow> Q; Qa\<rbrakk> \<Longrightarrow> (P \<longrightarrow> Q) \<and> Qa,
+ \<lbrakk>Q; Qa\<rbrakk> \<Longrightarrow> (P \<or> Q) \<and> Qa,
+ (P \<Longrightarrow> Q) \<Longrightarrow> (P \<longrightarrow> Q) \<or> Qa,
+ Q \<Longrightarrow> (P \<or> Q) \<or> Qa]"}
+
+ \begin{readmore}
+ The combinators for instantiating theorems are defined in @{ML_file "Pure/drule.ML"}.
+ \end{readmore}
+
+ Higher-order unification also often gets in the way when applying certain
+ congruence theorems. For example, we would expect that the theorem
+ @{thm [source] cong}
+
+ \begin{isabelle}
+ \isacommand{thm}~@{thm [source] cong}\\
+ @{text "> "}~@{thm cong}
+ \end{isabelle}
+
+ is applicable in the following proof producing the subgoals
+ @{term "t x = s u"} and @{term "y = w"}.
+*}
+
+lemma
+ fixes x y u w::"'a"
+ shows "t x y = s u w"
+apply(rule cong)
+txt{*\begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage}*}
+(*<*)oops(*>*)
+
+text {*
+ As you can see this is unfortunately \emph{not} the case. The problem is
+ that higher-order unification produces an instantiation that is not the
+ intended one. While we can use \isacommand{back} to interactively find the
+ intended instantiation, this is not an option for an automatic proof
+ procedure. Fortunately, the tactic @{ML_ind cong_tac in Cong_Tac} helps
+ with applying congruence rules and finding the intended instantiation.
+*}
+
+lemma
+ fixes x y u w::"'a"
+ shows "t x y = s u w"
+apply(tactic {* Cong_Tac.cong_tac @{thm cong} 1 *})
+txt{*\begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage}*}
+(*<*)oops(*>*)
+
+text {*
+ However, sometimes it is necessary to explicitly match a theorem against a
+ proof state and in doing so find an appropriate instantiation manually. Imagine
+ you have the theorem
+*}
+
+lemma rule:
+ shows "\<lbrakk>f = g; x = y\<rbrakk> \<Longrightarrow> R (f x) (g y)"
+sorry
+
+text {*
+ and you want to apply it to the goal @{term "(t\<^isub>1 t\<^isub>2) \<le>
+ (s\<^isub>1 s\<^isub>2)"}. Since in this theorem all variables are
+ schematic, we have a nasty higher-order unification problem and @{text rtac}
+ will not be able to apply this rule in the way we want. However, in the
+ proof below we are only interested in the case where @{term R} is
+ instantiated to a constant and similarly the instantiation for the other
+ variables is ``obvious'' from the proof state. To use this rule we
+ essentially match the conclusion of @{thm [source] rule} against the goal
+ state, reading off an instantiation for @{thm [source] rule}. For this the
+ function @{ML_ind first_order_match in Thm} matches two @{ML_type cterm}s
+ and produces, in the successful case, a matcher that can be used to
+ instantiate @{thm [source] rule}. The instantiation
+ can be done with the function @{ML_ind instantiate in Drule}. To automate
+ this we implement the following function.
+*}
+
+ML{*fun fo_rtac thm =
+ Subgoal.FOCUS (fn {concl, ...} =>
+ let
+ val concl_pat = Drule.strip_imp_concl (cprop_of thm)
+ val insts = Thm.first_order_match (concl_pat, concl)
+ in
+ rtac (Drule.instantiate insts thm) 1
+ end)*}
+
+text {*
+ Note that we use @{ML FOCUS in Subgoal} because it gives us direct access
+ to the conclusion of the goal state and also because it takes care of
+ correctly handling any parameters that might be present in the goal
+ state. An example for @{ML fo_rtac} is as follows.
+*}
+
+lemma
+ shows "\<And>t\<^isub>1 s\<^isub>1 (t\<^isub>2::'a) (s\<^isub>2::'a). (t\<^isub>1 t\<^isub>2) \<le> (s\<^isub>1 s\<^isub>2)"
+apply(tactic {* fo_rtac @{thm rule} @{context} 1 *})
+txt{*\begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage}*}
+(*<*)oops(*>*)
+
+text {*
+ We obtain the intended subgoals and the parameters are also correctly
+ introduced in both of them. Such manual instantiations are quite frequently
+ necessary in order to constrain the application of inference rules. Otherwise
+ one ends up with ``wild'' higher-order unification problems that make
+ automatic proofs fail.
*}
section {* Tactic Combinators *}
@@ -784,7 +929,8 @@
just strings together two tactics in a sequence. For example:
*}
-lemma shows "(Foo \<and> Bar) \<and> False"
+lemma
+ shows "(Foo \<and> Bar) \<and> False"
apply(tactic {* rtac @{thm conjI} 1 THEN rtac @{thm conjI} 1 *})
txt {* \begin{minipage}{\textwidth}
@{subgoals [display]}
@@ -797,7 +943,8 @@
the ``primed'' version of @{ML THEN}. For example:
*}
-lemma shows "(Foo \<and> Bar) \<and> False"
+lemma
+ shows "(Foo \<and> Bar) \<and> False"
apply(tactic {* (rtac @{thm conjI} THEN' rtac @{thm conjI}) 1 *})
txt {* \begin{minipage}{\textwidth}
@{subgoals [display]}
@@ -848,7 +995,8 @@
will try @{text conjI}. To see this consider the proof
*}
-lemma shows "True \<and> False" and "Foo \<or> Bar"
+lemma
+ shows "True \<and> False" and "Foo \<or> Bar"
apply(tactic {* orelse_xmp_tac 2 *})
apply(tactic {* orelse_xmp_tac 1 *})
txt {* which results in the goal state
@@ -876,7 +1024,8 @@
test the tactic on the same goals:
*}
-lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
+lemma
+ shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
apply(tactic {* select_tac' 4 *})
apply(tactic {* select_tac' 3 *})
apply(tactic {* select_tac' 2 *})
@@ -892,7 +1041,8 @@
@{ML_ind ALLGOALS} that simplifies this. Using this combinator you can simply
write: *}
-lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
+lemma
+ shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
apply(tactic {* ALLGOALS select_tac' *})
txt{* \begin{minipage}{\textwidth}
@{subgoals [display]}
@@ -918,7 +1068,8 @@
*}
-lemma shows "E \<Longrightarrow> F"
+lemma
+ shows "E \<Longrightarrow> F"
apply(tactic {* select_tac' 1 *})
txt{* \begin{minipage}{\textwidth}
@{subgoals [display]}
@@ -940,7 +1091,8 @@
*}
-lemma shows "E \<Longrightarrow> F"
+lemma
+ shows "E \<Longrightarrow> F"
apply(tactic {* CHANGED (select_tac' 1) *})(*<*)?(*>*)
txt{* gives the error message:
\begin{isabelle}
@@ -961,7 +1113,8 @@
text {* which applied to the proof *}
-lemma shows "((\<not>A) \<and> (\<forall>x. B x)) \<and> (C \<longrightarrow> D)"
+lemma
+ shows "((\<not>A) \<and> (\<forall>x. B x)) \<and> (C \<longrightarrow> D)"
apply(tactic {* repeat_xmp_tac *})
txt{* produces
@@ -1000,7 +1153,8 @@
you see that the following goal
*}
-lemma shows "((\<not>A) \<and> (\<forall>x. B x)) \<and> (C \<longrightarrow> D)"
+lemma
+ shows "((\<not>A) \<and> (\<forall>x. B x)) \<and> (C \<longrightarrow> D)"
apply(tactic {* repeat_all_new_xmp_tac 1 *})
txt{* \begin{minipage}{\textwidth}
@{subgoals [display]}
@@ -1016,7 +1170,8 @@
*}
-lemma "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
+lemma
+ shows "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
apply(tactic {* etac @{thm disjE} 1 *})
txt{* applies the rule to the first assumption yielding the goal state:\smallskip
@@ -1028,7 +1183,8 @@
*}
(*<*)
oops
-lemma "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
+lemma
+ shows "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
apply(tactic {* etac @{thm disjE} 1 *})
(*>*)
back
@@ -1046,7 +1202,8 @@
combinator @{ML_ind DETERM}.
*}
-lemma "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
+lemma
+ shows "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
apply(tactic {* DETERM (etac @{thm disjE} 1) *})
txt {*\begin{minipage}{\textwidth}
@{subgoals [display]}
@@ -1188,7 +1345,8 @@
to specify the goal to be analysed). So the proof
*}
-lemma "Suc (1 + 2) < 3 + 2"
+lemma
+ shows "Suc (1 + 2) < 3 + 2"
apply(simp)
done
@@ -1196,7 +1354,8 @@
corresponds on the ML-level to the tactic
*}
-lemma "Suc (1 + 2) < 3 + 2"
+lemma
+ shows "Suc (1 + 2) < 3 + 2"
apply(tactic {* asm_full_simp_tac @{simpset} 1 *})
done
@@ -1411,7 +1570,8 @@
then we can use this tactic to unfold the definition of the constant.
*}
-lemma shows "MyTrue \<Longrightarrow> True"
+lemma
+ shows "MyTrue \<Longrightarrow> True"
apply(tactic {* rewrite_goal_tac @{thms MyTrue_def} 1 *})
txt{* producing the goal state
@@ -1460,18 +1620,18 @@
end
lemma perm_append[simp]:
-fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
-shows "((pi\<^isub>1@pi\<^isub>2)\<bullet>c) = (pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c))"
+ fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+ shows "((pi\<^isub>1@pi\<^isub>2)\<bullet>c) = (pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c))"
by (induct pi\<^isub>1) (auto)
lemma perm_bij[simp]:
-fixes c d::"nat" and pi::"prm"
-shows "(pi\<bullet>c = pi\<bullet>d) = (c = d)"
+ fixes c d::"nat" and pi::"prm"
+ shows "(pi\<bullet>c = pi\<bullet>d) = (c = d)"
by (induct pi) (auto)
lemma perm_rev[simp]:
-fixes c::"nat" and pi::"prm"
-shows "pi\<bullet>((rev pi)\<bullet>c) = c"
+ fixes c::"nat" and pi::"prm"
+ shows "pi\<bullet>((rev pi)\<bullet>c) = c"
by (induct pi arbitrary: c) (auto)
lemma perm_compose:
@@ -1506,7 +1666,7 @@
lemma
fixes c d::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
-shows "pi\<^isub>1\<bullet>(c, pi\<^isub>2\<bullet>((rev pi\<^isub>1)\<bullet>d)) = (pi\<^isub>1\<bullet>c, (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>d)"
+ shows "pi\<^isub>1\<bullet>(c, pi\<^isub>2\<bullet>((rev pi\<^isub>1)\<bullet>d)) = (pi\<^isub>1\<bullet>c, (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>d)"
apply(simp)
apply(rule trans)
apply(rule perm_compose)
@@ -1528,13 +1688,13 @@
text {* Now the two lemmas *}
lemma perm_aux_expand:
-fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
-shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c) = pi\<^isub>1 \<bullet>aux (pi\<^isub>2\<bullet>c)"
+ fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+ shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c) = pi\<^isub>1 \<bullet>aux (pi\<^isub>2\<bullet>c)"
unfolding perm_aux_def by (rule refl)
lemma perm_compose_aux:
-fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
-shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>aux c) = (pi\<^isub>1\<bullet>pi\<^isub>2) \<bullet>aux (pi\<^isub>1\<bullet>c)"
+ fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+ shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>aux c) = (pi\<^isub>1\<bullet>pi\<^isub>2) \<bullet>aux (pi\<^isub>1\<bullet>c)"
unfolding perm_aux_def by (rule perm_compose)
text {*
@@ -1572,8 +1732,8 @@
*}
lemma
-fixes c d::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
-shows "pi\<^isub>1\<bullet>(c, pi\<^isub>2\<bullet>((rev pi\<^isub>1)\<bullet>d)) = (pi\<^isub>1\<bullet>c, (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>d)"
+ fixes c d::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+ shows "pi\<^isub>1\<bullet>(c, pi\<^isub>2\<bullet>((rev pi\<^isub>1)\<bullet>d)) = (pi\<^isub>1\<bullet>c, (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>d)"
apply(tactic {* perm_simp_tac 1 *})
done
@@ -1640,7 +1800,8 @@
it fires on the lemma:
*}
-lemma shows "Suc 0 = 1"
+lemma
+ shows "Suc 0 = 1"
apply(simp)
(*<*)oops(*>*)
@@ -1668,7 +1829,8 @@
as follows:
*}
-lemma shows "Suc 0 = 1"
+lemma
+ shows "Suc 0 = 1"
apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [@{simproc fail}]) 1*})
(*<*)oops(*>*)
@@ -1718,7 +1880,8 @@
see this in the proof
*}
-lemma shows "Suc (Suc 0) = (Suc 1)"
+lemma
+ shows "Suc (Suc 0) = (Suc 1)"
apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [fail']) 1*})
(*<*)oops(*>*)
@@ -1770,7 +1933,8 @@
in the following proof
*}
-lemma shows "P (Suc (Suc (Suc 0))) (Suc 0)"
+lemma
+ shows "P (Suc (Suc (Suc 0))) (Suc 0)"
apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [plus_one]) 1*})
txt{*
where the simproc produces the goal state
@@ -1875,7 +2039,8 @@
Now in the lemma
*}
-lemma "P (Suc (Suc 2)) (Suc 99) (0::nat) (Suc 4 + Suc 0) (Suc (0 + 0))"
+lemma
+ shows "P (Suc (Suc 2)) (Suc 99) (0::nat) (Suc 4 + Suc 0) (Suc (0 + 0))"
apply(tactic {* simp_tac (HOL_ss addsimprocs [nat_number]) 1*})
txt {*
you obtain the more legible goal state
@@ -1977,7 +2142,8 @@
*}
-lemma true_conj1: "True \<and> P \<equiv> P" by simp
+lemma true_conj1:
+ shows "True \<and> P \<equiv> P" by simp
text {*
It can be used for example to rewrite @{term "True \<and> (Foo \<longrightarrow> Bar)"}
@@ -2158,7 +2324,8 @@
consider the conversion @{ML all_true1_conv} and the lemma:
*}
-lemma foo_test: "P \<or> (True \<and> \<not>P)" by simp
+lemma foo_test:
+ shows "P \<or> (True \<and> \<not>P)" by simp
text {*
Using the conversion @{ML all_true1_conv} you can transform this theorem into a