theory Tactical
imports Base FirstSteps
begin
chapter {* Tactical Reasoning\label{chp:tactical} *}
text {*
The main reason for descending to the ML-level of Isabelle is to be able to
implement automatic proof procedures. Such proof procedures usually lessen
considerably the burden of manual reasoning, for example, when introducing
new definitions. These proof procedures are centred around refining a goal
state using tactics. This is similar to the \isacommand{apply}-style
reasoning at the user level, where goals are modified in a sequence of proof
steps until all of them are solved. However, there are also more structured
operations available on the ML-level that help with the handling of
variables and assumptions.
*}
section {* Basics of Reasoning with Tactics *}
text {*
To see how tactics work, let us first transcribe a simple \isacommand{apply}-style proof
into ML. Consider the following proof.
*}
lemma disj_swap: "P \<or> Q \<Longrightarrow> Q \<or> P"
apply(erule disjE)
apply(rule disjI2)
apply(assumption)
apply(rule disjI1)
apply(assumption)
done
text {*
This proof translates to the following ML-code.
@{ML_response_fake [display,gray]
"let
  val ctxt = @{context}
  val goal = @{prop \"P \<or> Q \<Longrightarrow> Q \<or> P\"}
in
  Goal.prove ctxt [\"P\", \"Q\"] [] goal
   (fn _ =>
      etac @{thm disjE} 1
      THEN rtac @{thm disjI2} 1
      THEN atac 1
      THEN rtac @{thm disjI1} 1
      THEN atac 1)
end" "?P \<or> ?Q \<Longrightarrow> ?Q \<or> ?P"}
To start the proof, the function @{ML "Goal.prove"}~@{text "ctxt xs As C
tac"} sets up a goal state for proving the goal @{text C}
(that is @{prop "P \<or> Q \<Longrightarrow> Q \<or> P"} in the proof at hand) under the
assumptions @{text As} (which happen to be empty here) with the variables
@{text xs} that will be generalised once the goal is proved (in our case
@{text P} and @{text Q}). The @{text "tac"} is the tactic that proves the goal;
it can make use of the local assumptions (there are none in this example).
The functions @{ML etac}, @{ML rtac} and @{ML atac} correspond to
@{text erule}, @{text rule} and @{text assumption}, respectively.
The operator @{ML THEN} strings the tactics together.
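
  To illustrate the role of the variables @{text xs} and the assumptions
  @{text As}, here is a variation of the example above (only a sketch for
  illustration purposes): the assumption list is now non-empty and the
  tactic first inserts the local assumption, available in the @{text prems}
  field of the record given to the tactic, into the goal using
  @{ML cut_facts_tac} (a tactic described later in this chapter).

  @{ML_response_fake [display,gray]
"let
  val ctxt = @{context}
  val assm = @{prop \"P \<or> Q\"}
  val goal = @{prop \"Q \<or> P\"}
in
  Goal.prove ctxt [\"P\", \"Q\"] [assm] goal
    (fn {prems, ...} =>
       cut_facts_tac prems 1
       THEN etac @{thm disjE} 1
       THEN rtac @{thm disjI2} 1
       THEN atac 1
       THEN rtac @{thm disjI1} 1
       THEN atac 1)
end" "?P \<or> ?Q \<Longrightarrow> ?Q \<or> ?P"}

  The assumption then shows up as a premise of the final theorem, because
  @{ML Goal.prove} discharges the local assumptions once the proof is
  finished.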
\begin{readmore}
To learn more about the function @{ML Goal.prove} see \isccite{sec:results}
and the file @{ML_file "Pure/goal.ML"}. See @{ML_file
"Pure/tactic.ML"} and @{ML_file "Pure/tctical.ML"} for the code of basic
tactics and tactic combinators; see also Chapters 3 and 4 in the old
Isabelle Reference Manual.
\end{readmore}
Note that in the code above we used antiquotations for referencing the theorems. Many theorems
also have ML-bindings with the same name. Therefore, we could also just have
written @{ML "etac disjE 1"}, or, in case there is no ML-binding, obtained
the theorem dynamically using the function @{ML thm}; for example
@{ML "etac (thm \"disjE\") 1"}. Both ways, however, are considered bad style!
The reason
is that the binding for @{ML disjE} can be re-assigned by the user and thus
one does not have complete control over which theorem is actually
applied. This problem is nicely prevented by using antiquotations, because
then the theorems are fixed statically at compile-time.
During the development of automatic proof procedures, you will often find it
necessary to test a tactic on examples. This can be conveniently
done with the command \isacommand{apply}@{text "(tactic \<verbopen> \<dots> \<verbclose>)"}.
Consider the following sequence of tactics
*}
ML{*val foo_tac =
      (etac @{thm disjE} 1
       THEN rtac @{thm disjI2} 1
       THEN atac 1
       THEN rtac @{thm disjI1} 1
       THEN atac 1)*}
text {* and the Isabelle proof: *}
lemma "P \<or> Q \<Longrightarrow> Q \<or> P"
apply(tactic {* foo_tac *})
done
text {*
By using @{text "tactic \<verbopen> \<dots> \<verbclose>"} you can call from the
user level of Isabelle the tactic @{ML foo_tac} or
any other function that returns a tactic.
The tactic @{ML foo_tac} is just a sequence of simple tactics stringed
together by @{ML THEN}. As can be seen, each simple tactic in @{ML foo_tac}
has a hard-coded number that stands for the subgoal analysed by the
tactic (@{text "1"} stands for the first, or top-most, subgoal). This hard-coding
of goals is sometimes wanted, but usually it is not. To avoid the explicit numbering,
you can write\label{tac:footacprime}
*}
ML{*val foo_tac' =
      (etac @{thm disjE}
       THEN' rtac @{thm disjI2}
       THEN' atac
       THEN' rtac @{thm disjI1}
       THEN' atac)*}
text {*
and then give the number for the subgoal explicitly when the tactic is
called. So in the next proof you can first discharge the second subgoal, and
after that the first.
*}
lemma "P1 \<or> Q1 \<Longrightarrow> Q1 \<or> P1"
and "P2 \<or> Q2 \<Longrightarrow> Q2 \<or> P2"
apply(tactic {* foo_tac' 2 *})
apply(tactic {* foo_tac' 1 *})
done
text {*
This kind of addressing is more difficult to achieve when the goal is
hard-coded inside the tactic. For most operators that combine tactics
(@{ML THEN} is only one such operator) a ``primed'' version exists.
The tactics @{ML foo_tac} and @{ML foo_tac'} are very specific for
analysing goals being only of the form @{prop "P \<or> Q \<Longrightarrow> Q \<or> P"}. If the goal is not
of this form, then they throw the error message:
\begin{isabelle}
@{text "*** empty result sequence -- proof command failed"}\\
@{text "*** At command \"apply\"."}
\end{isabelle}
This means the tactics failed. The reason for this error message is that tactics
are functions that map a goal state to a (lazy) sequence of successor states,
hence the type of a tactic is:
@{text [display, gray] "type tactic = thm -> thm Seq.seq"}
It is customary that a failing tactic returns the empty sequence; therefore
your own tactics should not raise exceptions willy-nilly. Only
in very grave failure situations should a tactic raise the exception
@{text THM}.
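
  As a small illustration of this failure convention (just a sketch, not a
  tactic from the Isabelle sources), the following tactic applies a given
  tactic only if the goal state still contains subgoals, and otherwise fails
  by returning the empty sequence.
*}

ML{*(* fails, that is returns Seq.empty, if no subgoal is left *)
fun if_subgoal_tac tac thm =
  if Thm.nprems_of thm > 0 then tac thm else Seq.empty*}

text {*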
The simplest tactics are @{ML no_tac} and @{ML all_tac}. The first returns
the empty sequence and is defined as
*}
ML{*fun no_tac thm = Seq.empty*}
text {*
which means @{ML no_tac} always fails. The second returns the given theorem wrapped
as a single member sequence; @{ML all_tac} is defined as
*}
ML{*fun all_tac thm = Seq.single thm*}
text {*
which means it always succeeds (but also does not make any progress
with the proof).
The lazy list of possible successor states shows through at the user-level
of Isabelle when using the command \isacommand{back}. For instance in the
following proof there are two possibilities for how to apply
@{ML foo_tac'}: either using the first assumption or the second.
*}
lemma "\<lbrakk>P \<or> Q; P \<or> Q\<rbrakk> \<Longrightarrow> Q \<or> P"
apply(tactic {* foo_tac' 1 *})
back
done
text {*
By using \isacommand{back}, we construct the proof that uses the
second assumption. In more interesting situations, only by exploring
different possibilities might one be able to find a proof.
\begin{readmore}
See @{ML_file "Pure/General/seq.ML"} for the implementation of lazy
sequences. However in day-to-day Isabelle programming, one rarely
constructs sequences explicitly, but uses the predefined functions
instead.
\end{readmore}
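
  For example, if you only want to inspect the first goal state a tactic
  produces, you can pull once on the lazy sequence it returns (a sketch;
  @{ML "Seq.pull"} yields @{text "NONE"} when the sequence is empty, that is
  when the tactic fails).
*}

ML{*(* returns NONE if the tactic fails, otherwise the first successor state *)
fun first_result tac thm =
  case Seq.pull (tac thm) of
    NONE => NONE
  | SOME (thm', _) => SOME thm'*}

text {*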
It might be surprising that tactics, which transform
one proof state to the next, are functions from theorems to theorem
(sequences). The surprise resolves once you know that every
goal state is indeed a theorem. To shed more light on this,
let us modify the code of @{ML all_tac} to obtain the following
tactic
*}
ML{*fun my_print_tac ctxt thm =
  let
    val _ = warning (str_of_thm ctxt thm)
  in
    Seq.single thm
  end*}
text {*
which prints out the given theorem (using the string-function defined
in Section~\ref{sec:printing}) and then behaves like @{ML all_tac}. We
are now in the position to inspect every goal state in a proof. Consider
the proof below: on the left-hand side we show the goal state as shown
by Isabelle, on the right-hand side the print out from @{ML my_print_tac}.
*}
lemma shows "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"
apply(tactic {* my_print_tac @{context} *})
txt{* \small
\begin{tabularstar}{\textwidth}{@ {}l@ {\extracolsep{\fill}}l@ {}}
\begin{minipage}[t]{0.3\textwidth}
@{subgoals [display]}
\end{minipage} &
@{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}
\end{tabularstar}
*}
apply(rule conjI)
apply(tactic {* my_print_tac @{context} *})
txt{* \small
\begin{tabularstar}{\textwidth}{@ {}l@ {\extracolsep{\fill}}l@ {}}
\begin{minipage}[t]{0.26\textwidth}
@{subgoals [display]}
\end{minipage} &
@{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> B) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}
\end{tabularstar}
*}
apply(assumption)
apply(tactic {* my_print_tac @{context} *})
txt{* \small
\begin{tabularstar}{\textwidth}{@ {}l@ {\extracolsep{\fill}}l@ {}}
\begin{minipage}[t]{0.3\textwidth}
@{subgoals [display]}
\end{minipage} &
@{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> B) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}
\end{tabularstar}
*}
apply(assumption)
apply(tactic {* my_print_tac @{context} *})
txt{* \small
\begin{tabularstar}{\textwidth}{@ {}l@ {\extracolsep{\fill}}l@ {}}
\begin{minipage}[t]{0.3\textwidth}
@{subgoals [display]}
\end{minipage} &
@{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"}
\end{tabularstar}
*}
done
text {*
As can be seen, internally every goal state is an implication of the form
@{text[display] "A\<^isub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^isub>n \<Longrightarrow> (C)"}
where @{term C} is the goal to be proved and the @{term "A\<^isub>i"} are the
subgoals. So after setting up the lemma, the goal state is always of the form
@{text "C \<Longrightarrow> (C)"}; when the proof is finished we are left with @{text "(C)"}.
Since the goal @{term C} can potentially be an implication,
there is a ``protector'' wrapped around it (in form of an outermost constant
@{text "Const (\"prop\", bool \<Rightarrow> bool)"} applied to each goal;
however this constant is invisible in the print out above). This
prevents the premises of @{text C} from being mis-interpreted as open subgoals.
While tactics can operate on the subgoals (the @{text "A\<^isub>i"} above), they
are expected to leave the conclusion @{term C} intact, with the
exception of possibly instantiating schematic variables. If you use
the predefined tactics, this will always be the case.
\begin{readmore}
For more information about the internals of goals see \isccite{sec:tactical-goals}.
\end{readmore}
*}
section {* Simple Tactics *}
text {*
A simple tactic is @{ML print_tac}, which is quite useful for low-level debugging of tactics.
It just prints out a message and the current goal state.
*}
lemma shows "False \<Longrightarrow> True"
apply(tactic {* print_tac "foo message" *})
txt{*\begin{minipage}{\textwidth}\small
@{text "foo message"}\\[3mm]
@{prop "False \<Longrightarrow> True"}\\
@{text " 1. False \<Longrightarrow> True"}\\
\end{minipage}
*}
(*<*)oops(*>*)
text {*
Another simple tactic is the function @{ML atac}, which, as shown in the previous
section, corresponds to the assumption command.
*}
lemma shows "P \<Longrightarrow> P"
apply(tactic {* atac 1 *})
done
text {*
Similarly, @{ML rtac}, @{ML dtac}, @{ML etac} and @{ML ftac} correspond
to @{text rule}, @{text drule}, @{text erule} and @{text frule},
respectively. Each of them takes a theorem as argument. Below are three
examples with the resulting goal state. How
they work should be self-explanatory.
*}
lemma shows "P \<and> Q"
apply(tactic {* rtac @{thm conjI} 1 *})
txt{*\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}*}
(*<*)oops(*>*)
lemma shows "P \<and> Q \<Longrightarrow> False"
apply(tactic {* etac @{thm conjE} 1 *})
txt{*\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}*}
(*<*)oops(*>*)
lemma shows "False \<and> True \<Longrightarrow> False"
apply(tactic {* dtac @{thm conjunct2} 1 *})
txt{*\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}*}
(*<*)oops(*>*)
text {*
As mentioned in the previous section, most basic tactics take a number as
argument, which addresses the subgoal they are analysing. In the proof below,
we analyse the second subgoal first.
*}
lemma shows "Foo" and "P \<and> Q"
apply(tactic {* rtac @{thm conjI} 2 *})
txt {*\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}*}
(*<*)oops(*>*)
text {*
The function @{ML resolve_tac} is similar to @{ML rtac}, except that it
expects a list of theorems as arguments. From this list it will apply the
first applicable theorem (later theorems that are also applicable can be
explored via the lazy sequences mechanism). Given the code
*}
ML{*val resolve_tac_xmp = resolve_tac [@{thm impI}, @{thm conjI}]*}
text {*
an example for @{ML resolve_tac} is the following proof where first an outermost
implication is analysed and then an outermost conjunction.
*}
lemma shows "C \<longrightarrow> (A \<and> B)" and "(A \<longrightarrow> B) \<and> C"
apply(tactic {* resolve_tac_xmp 1 *})
apply(tactic {* resolve_tac_xmp 2 *})
txt{*\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}*}
(*<*)oops(*>*)
text {*
Similar versions exist for @{ML atac} (@{ML assume_tac}), @{ML etac}
(@{ML eresolve_tac}) and so on.
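
  For example, analogous to @{ML resolve_tac_xmp} above, one can define
  (just a sketch for illustration)
*}

ML{*val eresolve_tac_xmp = eresolve_tac [@{thm disjE}, @{thm conjE}]*}

text {*
  which applies elim-resolution with the first rule from the list that fits
  the chosen subgoal.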
Another simple tactic is @{ML cut_facts_tac}. It inserts a list of theorems
into the assumptions of the current goal state. For example:
*}
lemma shows "True \<noteq> False"
apply(tactic {* cut_facts_tac [@{thm True_def}, @{thm False_def}] 1 *})
txt{*\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}*}
(*<*)oops(*>*)
text {*
Since rules are applied using higher-order unification, an automatic proof
procedure might become too fragile if it just applies inference rules as
shown above. The reason is that a number of rules introduce meta-variables
into the goal state. Consider for example the proof
*}
lemma shows "\<forall>x\<in>A. P x \<Longrightarrow> Q x"
apply(drule bspec)
txt{*\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}*}
(*<*)oops(*>*)
text {*
where the application of @{text bspec} results in two subgoals involving the
meta-variable @{text "?x"}. The point is that if you are not careful, tactics
applied to the first subgoal might instantiate this meta-variable in such a
way that the second subgoal becomes unprovable. If it is clear what the @{text "?x"}
should be, then this situation can be avoided by introducing a more
constrained version of the @{text bspec}-rule. Such constraints can be enforced by
pre-instantiating theorems with other theorems, for example by using the
function @{ML RS}
@{ML_response_fake [display,gray]
"@{thm disjI1} RS @{thm conjI}" "\<lbrakk>?P1; ?Q\<rbrakk> \<Longrightarrow> (?P1 \<or> ?Q1) \<and> ?Q"}
which, for instance, instantiates the first premise of the @{text conjI}-rule
with the rule @{text disjI1}. If this is impossible, as in the case of
@{ML_response_fake_both [display,gray]
"@{thm conjI} RS @{thm mp}"
"*** Exception- THM (\"RSN: no unifiers\", 1,
[\"\<lbrakk>?P; ?Q\<rbrakk> \<Longrightarrow> ?P \<and> ?Q\", \"\<lbrakk>?P \<longrightarrow> ?Q; ?P\<rbrakk> \<Longrightarrow> ?Q\"]) raised"}
the function raises an exception. The function @{ML RSN} is similar, but
takes a number as argument and thus you can make explicit which premise
should be instantiated.
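
  For example, using @{ML RSN} with the number @{text "2"} we can
  instantiate the second premise of the @{text conjI}-rule with
  @{text disjI1} (a sketch analogous to the @{ML RS}-example above):

  @{ML_response_fake [display,gray]
  "@{thm disjI1} RSN (2, @{thm conjI})" "\<lbrakk>?P; ?P1\<rbrakk> \<Longrightarrow> ?P \<and> (?P1 \<or> ?Q1)"}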
To improve readability of the theorems we produce below, we shall use
the following function
*}
ML{*fun no_vars ctxt thm =
  let
    val ((_, [thm']), _) = Variable.import_thms true [thm] ctxt
  in
    thm'
  end*}
text {*
that transforms the schematic variables of a theorem into free variables.
This means for the first @{ML RS}-expression above:
@{ML_response_fake [display,gray]
"no_vars @{context} (@{thm disjI1} RS @{thm conjI})" "\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> (P \<or> Qa) \<and> Q"}
If you want to instantiate more than one premise of a theorem, you can use
the function @{ML MRS}:
@{ML_response_fake [display,gray]
"no_vars @{context} ([@{thm disjI1}, @{thm disjI2}] MRS @{thm conjI})"
"\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> (P \<or> Qa) \<and> (Pa \<or> Q)"}
If you need to instantiate lists of theorems, you can use the
functions @{ML RL} and @{ML MRL}. For example in the code below
every theorem in the first list is instantiated against every
theorem in the second.
@{ML_response_fake [display,gray]
"[@{thm impI}, @{thm disjI2}] RL [@{thm conjI}, @{thm disjI1}]"
"[\<lbrakk>P \<Longrightarrow> Q; Qa\<rbrakk> \<Longrightarrow> (P \<longrightarrow> Q) \<and> Qa,
\<lbrakk>Q; Qa\<rbrakk> \<Longrightarrow> (P \<or> Q) \<and> Qa,
(P \<Longrightarrow> Q) \<Longrightarrow> (P \<longrightarrow> Q) \<or> Qa,
Q \<Longrightarrow> (P \<or> Q) \<or> Qa]"}
\begin{readmore}
The combinators for instantiating theorems are defined in @{ML_file "Pure/drule.ML"}.
\end{readmore}
Often proofs involve elaborate operations on assumptions and
@{text "\<And>"}-quantified variables. To do such operations on the ML-level
using the basic tactics is very unwieldy and brittle. Some convenience and
safety is provided by the tactic @{ML SUBPROOF}. This tactic fixes the parameters
and binds the various components of a proof state into a record.
To see what happens, consider the function defined in Figure~\ref{fig:sptac}, which
takes a record as argument and just prints out the content of this record (using the
string transformation functions defined in Section~\ref{sec:printing}). Consider
now the proof
*}
text_raw{*
\begin{figure}
\begin{isabelle}
*}
ML{*fun sp_tac {prems, params, asms, concl, context, schematics} =
  let
    val str_of_params = str_of_cterms context params
    val str_of_asms = str_of_cterms context asms
    val str_of_concl = str_of_cterm context concl
    val str_of_prems = str_of_thms context prems
    val str_of_schms = str_of_cterms context (snd schematics)

    val _ = (warning ("params: " ^ str_of_params);
             warning ("schematics: " ^ str_of_schms);
             warning ("assumptions: " ^ str_of_asms);
             warning ("conclusion: " ^ str_of_concl);
             warning ("premises: " ^ str_of_prems))
  in
    no_tac
  end*}
text_raw{*
\end{isabelle}
\caption{A function that prints out the various parameters provided by the tactic
@{ML SUBPROOF}. It uses the functions defined in Section~\ref{sec:printing} for
extracting strings from @{ML_type cterm}s and @{ML_type thm}s.\label{fig:sptac}}
\end{figure}
*}
lemma shows "\<And>x y. A x y \<Longrightarrow> B y x \<longrightarrow> C (?z y) x"
apply(tactic {* SUBPROOF sp_tac @{context} 1 *})?
txt {*
which gives the printout:
\begin{quote}\small
\begin{tabular}{ll}
params: & @{term x}, @{term y}\\
schematics: & @{term z}\\
assumptions: & @{term "A x y"}\\
conclusion: & @{term "B y x \<longrightarrow> C (z y) x"}\\
premises: & @{term "A x y"}
\end{tabular}
\end{quote}
Note in the actual output the brown colour of the variables @{term x} and
@{term y}. Although they are parameters in the original goal, they are fixed inside
the subproof. Similarly the schematic variable @{term z}. The assumption
@{prop "A x y"} is bound as @{ML_type cterm} to the record-variable
@{text asms} but also as @{ML_type thm} to @{text prems}.
Notice also that we had to append @{text "?"} to \isacommand{apply}. The
reason is that @{ML SUBPROOF} normally expects that the subgoal is solved completely.
Since in the function @{ML sp_tac} we returned the tactic @{ML no_tac}, the subproof
obviously fails. The question-mark allows us to recover from this failure
in a graceful manner so that the warning messages are not overwritten
by an error message.
If we continue the proof script by applying the @{text impI}-rule
*}
apply(rule impI)
apply(tactic {* SUBPROOF sp_tac @{context} 1 *})?
txt {*
then @{ML sp_tac} prints out
\begin{quote}\small
\begin{tabular}{ll}
params: & @{term x}, @{term y}\\
schematics: & @{term z}\\
assumptions: & @{term "A x y"}, @{term "B y x"}\\
conclusion: & @{term "C (z y) x"}\\
premises: & @{term "A x y"}, @{term "B y x"}
\end{tabular}
\end{quote}
*}
(*<*)oops(*>*)
text {*
where we now also have @{term "B y x"} as an assumption.
One convenience of @{ML SUBPROOF} is that we can apply the assumptions
using the usual tactics, because the parameter @{text prems}
contains them as theorems. With this we can easily
implement a tactic that almost behaves like @{ML atac}, namely:
*}
ML{*val atac' = SUBPROOF (fn {prems, ...} => resolve_tac prems 1)*}
text {*
If we apply it to the next lemma
*}
lemma shows "\<And>x y. \<lbrakk>B x y; A x y; C x y\<rbrakk> \<Longrightarrow> A x y"
apply(tactic {* atac' @{context} 1 *})
txt{* we get
@{subgoals [display]} *}
(*<*)oops(*>*)
text {*
The restriction in this tactic is that it cannot instantiate any
schematic variable. This might be seen as a defect, but it is actually
an advantage in the situations for which @{ML SUBPROOF} was designed:
the reason is that instantiation of schematic variables can affect
several goals and can render them unprovable. @{ML SUBPROOF} is meant
to avoid this.
Notice that @{ML atac'} calls @{ML resolve_tac} with the subgoal
number @{text "1"} and also the ``outer'' call to @{ML SUBPROOF} in
the \isacommand{apply}-step uses @{text "1"}. Another advantage
of @{ML SUBPROOF} is that the addressing inside it is completely
local to the subproof. It is therefore possible to also apply
@{ML atac'} to the second goal by just writing:
*}
lemma shows "True" and "\<And>x y. \<lbrakk>B x y; A x y; C x y\<rbrakk> \<Longrightarrow> A x y"
apply(tactic {* atac' @{context} 2 *})
apply(rule TrueI)
done
text {*
\begin{readmore}
The function @{ML SUBPROOF} is defined in @{ML_file "Pure/subgoal.ML"} and
also described in \isccite{sec:results}.
\end{readmore}
A similar but less powerful function than @{ML SUBPROOF} is @{ML SUBGOAL}.
It allows you to inspect a given subgoal. With this you can implement
a tactic that applies a rule according to the topmost connective in the
subgoal (to illustrate this we only analyse a few connectives). The code
of this tactic is as follows.\label{tac:selecttac}
*}
ML %linenumbers{*fun select_tac (t,i) =
  case t of
     @{term "Trueprop"} $ t' => select_tac (t',i)
   | @{term "op \<and>"} $ _ $ _ => rtac @{thm conjI} i
   | @{term "op \<longrightarrow>"} $ _ $ _ => rtac @{thm impI} i
   | @{term "Not"} $ _ => rtac @{thm notI} i
   | Const (@{const_name "All"}, _) $ _ => rtac @{thm allI} i
   | _ => all_tac*}
text {*
In line 3 you need to descend under the outermost @{term "Trueprop"} in order
to get to the connective you want to analyse. Otherwise goals like @{prop "A \<and> B"}
are not properly analysed. While for the first four patterns we can use the
@{text "@term"}-antiquotation, the pattern in Line 7 cannot be constructed
in this way. The reason is that an antiquotation would fix the type of the
quantified variable. So you really have to construct the pattern
using the term-constructors. This is not necessary in other cases, because
their type is always something involving @{typ bool}. The final pattern is
for the case where the goal does not fall into any of the categories before.
In this case we chose to just return @{ML all_tac} (i.e., @{ML select_tac}
never fails).
Let us now see how to apply this tactic. Consider the four goals:
*}
lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
apply(tactic {* SUBGOAL select_tac 4 *})
apply(tactic {* SUBGOAL select_tac 3 *})
apply(tactic {* SUBGOAL select_tac 2 *})
apply(tactic {* SUBGOAL select_tac 1 *})
txt{* \begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage} *}
(*<*)oops(*>*)
text {*
where in all but the last case the tactic applied an introduction rule.
Note that we applied the tactic to the subgoals in ``reverse'' order.
This is a trick in order to be independent of which subgoals are
produced by the rule. If we had applied it in the other order
*}
lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
apply(tactic {* SUBGOAL select_tac 1 *})
apply(tactic {* SUBGOAL select_tac 3 *})
apply(tactic {* SUBGOAL select_tac 4 *})
apply(tactic {* SUBGOAL select_tac 5 *})
(*<*)oops(*>*)
text {*
then we would have to be careful not to apply the tactic to the two subgoals
produced by the first goal. Doing this can result in quite messy code. In contrast,
the ``reverse application'' is easy to implement.
However, this example is very contrived: there are much simpler methods to implement
such a proof procedure analysing a goal according to its topmost
connective. These simpler methods use tactic combinators which will be explained
in the next section.
*}
section {* Tactic Combinators *}
text {*
The purpose of tactic combinators is to build powerful tactics out of
smaller components. In the previous section we already used @{ML THEN}, which
strings two tactics together in sequence. For example:
*}
lemma shows "(Foo \<and> Bar) \<and> False"
apply(tactic {* rtac @{thm conjI} 1 THEN rtac @{thm conjI} 1 *})
txt {* \begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage} *}
(*<*)oops(*>*)
text {*
If you want to avoid the hard-coded subgoal addressing, then you can use
the ``primed'' version of @{ML THEN}. For example:
*}
lemma shows "(Foo \<and> Bar) \<and> False"
apply(tactic {* (rtac @{thm conjI} THEN' rtac @{thm conjI}) 1 *})
txt {* \begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage} *}
(*<*)oops(*>*)
text {*
For most tactic combinators such a ``primed'' version exists and
in what follows we will usually prefer it over the ``unprimed'' one.
If there is a list of tactics that should all be tried out in
sequence, you can use the combinator @{ML EVERY'}. For example
the function @{ML foo_tac'} from page~\ref{tac:footacprime} can also
be written as:
*}
ML{*val foo_tac'' = EVERY' [etac @{thm disjE}, rtac @{thm disjI2},
                            atac, rtac @{thm disjI1}, atac]*}
text {*
There is even another way: in automatic proof procedures (in contrast to
tactics that might be called by the user) there are often long lists of
tactics that are applied to the first subgoal. Instead of writing the code
above and then calling @{ML "foo_tac'' 1"}, you can also just write
*}
ML{*val foo_tac1 = EVERY1 [etac @{thm disjE}, rtac @{thm disjI2},
                           atac, rtac @{thm disjI1}, atac]*}
text {*
With the combinators @{ML THEN'}, @{ML EVERY'} and @{ML EVERY1} it must be
guaranteed that all component tactics successfully apply; otherwise the
whole tactic will fail. If instead you want to try out a number of tactics,
then you can use the combinator @{ML ORELSE'} for two tactics, and @{ML FIRST'}
(or @{ML FIRST1}) for a list of tactics. For example, the tactic
*}
ML{*val orelse_xmp = (rtac @{thm disjI1} ORELSE' rtac @{thm conjI})*}
text {*
will first try out whether the rule @{text disjI1} applies and, in case of
failure, whether @{text conjI} does. To see this consider the proof:
*}
lemma shows "True \<and> False" and "Foo \<or> Bar"
apply(tactic {* orelse_xmp 2 *})
apply(tactic {* orelse_xmp 1 *})
txt {* which results in the goal state
\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}
*}
(*<*)oops(*>*)
text {*
Using @{ML FIRST'} we can write our @{ML select_tac} from Page~\ref{tac:selecttac}
simply as follows:
*}
ML{*val select_tac' = FIRST' [rtac @{thm conjI}, rtac @{thm impI},
                              rtac @{thm notI}, rtac @{thm allI}, K all_tac]*}
text {*
Since we want to mimic the behaviour of @{ML select_tac} as closely as possible,
we must include @{ML all_tac} at the end of the list (@{ML all_tac} must also
be ``wrapped up'' using the @{ML K}-combinator, as it does not take a subgoal
number as argument). We can test the tactic on the same proof:
*}
lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
apply(tactic {* select_tac' 4 *})
apply(tactic {* select_tac' 3 *})
apply(tactic {* select_tac' 2 *})
apply(tactic {* select_tac' 1 *})
txt{* \begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage} *}
(*<*)oops(*>*)
text {*
Because such a repeated application of a tactic to \emph{all} subgoals in
reverse order is quite common, there is the tactic combinator
@{ML ALLGOALS} that simplifies this. Using this combinator we can simply
write: *}
lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
apply(tactic {* ALLGOALS select_tac' *})
txt{* \begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage} *}
(*<*)oops(*>*)
text {*
Remember that we chose to write @{ML select_tac'} in such a way that it always
succeeds. This can be potentially very confusing for the user, for example,
in cases where the goal is of the form
*}
lemma shows "E \<Longrightarrow> F"
apply(tactic {* select_tac' 1 *})
txt{* \begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage} *}
(*<*)oops(*>*)
text {*
where no rule applies. The reason is that the user has little chance
to see whether progress in the proof has been made or not. We could simply
delete the @{ML "K all_tac"} from the end of the list. Then @{ML select_tac'}
would only succeed on goals where it can make progress. But for the sake of
argument, let us suppose that this deletion is not an option. In such cases, you
can use the combinator @{ML CHANGED} to make sure the subgoal has been
changed by the tactic. Because now
*}
lemma shows "E \<Longrightarrow> F"
apply(tactic {* CHANGED (select_tac' 1) *})(*<*)?(*>*)
txt{* results in the error message:
\begin{isabelle}
@{text "*** empty result sequence -- proof command failed"}\\
@{text "*** At command \"apply\"."}
\end{isabelle}
*}(*<*)oops(*>*)
text {*
This means the tactic failed.
We can extend @{ML select_tac'} so that it not just applies to the top-most
connective, but also to the ones immediately ``underneath''. For this you can use the
tactic combinator @{ML REPEAT}. For example suppose the following tactic
*}
ML{*val repeat_xmp = REPEAT (CHANGED (select_tac' 1)) *}
text {* and the proof *}
lemma shows "((\<not>A) \<and> (\<forall>x. B x)) \<and> (C \<longrightarrow> D)"
apply(tactic {* repeat_xmp *})
txt{* \begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage} *}
(*<*)oops(*>*)
text {*
Here it is crucial that @{ML select_tac'} is prefixed with @{ML CHANGED},
because otherwise @{ML REPEAT} runs into an infinite loop. The function
@{ML REPEAT1} is similar, but runs the tactic at least once (failing if
this is not possible).
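
  For instance (a sketch analogous to @{ML repeat_xmp}), the tactic
*}

ML{*val repeat1_xmp = REPEAT1 (CHANGED (select_tac' 1))*}

text {*
  fails on goals where @{ML select_tac'} cannot make any progress, whereas
  @{ML repeat_xmp} just succeeds without changing anything.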
If you are after the ``primed'' version of @{ML repeat_xmp} then it
needs to be coded as
*}
ML{*val repeat_xmp' = REPEAT o CHANGED o select_tac'*}
text {*
since there are no ``primed'' versions of @{ML REPEAT} and @{ML CHANGED}.
If you look closely at the goal state above, the tactics @{ML repeat_xmp}
and @{ML repeat_xmp'} are not yet quite what we are after: the problem is
that goals 2 and 3 are not yet analysed. This is because both tactics
apply repeatedly only to the first subgoal. To analyse also all
resulting subgoals, you can use the function @{ML REPEAT_ALL_NEW}.
Suppose the tactic
*}
ML{*val repeat_all_new_xmp = REPEAT_ALL_NEW (CHANGED o select_tac')*}
text {*
which analyses the topmost connectives also in all resulting
subgoals.
*}
lemma shows "((\<not>A) \<and> (\<forall>x. B x)) \<and> (C \<longrightarrow> D)"
apply(tactic {* repeat_all_new_xmp 1 *})
txt{* \begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage} *}
(*<*)oops(*>*)
text {*
The last tactic combinator we describe here is @{ML DETERM}. Recall
that tactics produce a lazy sequence of successor goal states. These
states can be explored using the command \isacommand{back}. For example
*}
lemma "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
apply(tactic {* etac @{thm disjE} 1 *})
txt{* applies the rule to the first assumption
\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage} *}
(*<*)
oops
lemma "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
apply(tactic {* etac @{thm disjE} 1 *})
(*>*)
back
txt{* whereas it is now applied to the second
\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage} *}
(*<*)oops(*>*)
text {*
Sometimes this leads to confusing behaviour of tactics and also has
the potential to explode the search space for tactics built on top of it.
This can be avoided by prefixing the tactic with @{ML DETERM}.
*}
lemma "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
apply(tactic {* DETERM (etac @{thm disjE} 1) *})
txt {*\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage} *}
(*<*)oops(*>*)
text {*
This will prune the search space to just the first possibility.
Attempting to apply \isacommand{back} in this goal state gives the
error message:
\begin{isabelle}
@{text "*** back: no alternatives"}\\
@{text "*** At command \"back\"."}
\end{isabelle}
\begin{readmore}
Most tactic combinators are defined in @{ML_file "Pure/tctical.ML"}.
\end{readmore}
*}
section {* Rewriting and Simplifier Tactics *}
text {*
@{ML rewrite_goals_tac}
@{ML ObjectLogic.full_atomize_tac}
@{ML ObjectLogic.rulify_tac}
*}
section {* Structured Proofs *}
lemma True
proof
  {
    fix A B C
    assume r: "A & B \<Longrightarrow> C"
    assume A B
    then have "A & B" ..
    then have C by (rule r)
  }
  {
    fix A B C
    assume r: "A & B \<Longrightarrow> C"
    assume A B
    note conjI [OF this]
    note r [OF this]
  }
oops
ML {* fun prop ctxt s =
  Thm.cterm_of (ProofContext.theory_of ctxt) (Syntax.read_prop ctxt s) *}
ML {*
val ctxt0 = @{context};
val ctxt = ctxt0;
val (_, ctxt) = Variable.add_fixes ["A", "B", "C"] ctxt;
val ([r], ctxt) = Assumption.add_assumes [prop ctxt "A & B \<Longrightarrow> C"] ctxt;
val (this, ctxt) = Assumption.add_assumes [prop ctxt "A", prop ctxt "B"] ctxt;
val this = [@{thm conjI} OF this];
val this = r OF this;
val this = Assumption.export false ctxt ctxt0 this
val this = Variable.export ctxt ctxt0 [this]
*}
end