theory Tactical
imports Base First_Steps
begin
chapter {* Tactical Reasoning\label{chp:tactical} *}
text {*
\begin{flushright}
{\em
``The first thing I would say is that when you write a program, think of\\
it primarily as a work of literature. You're trying to write something\\
that human beings are going to read. Don't think of it primarily as\\
something a computer is going to follow. The more effective you are\\
at making your program readable, the more effective it's going to\\
be: You'll understand it today, you'll understand it next week, and\\
your successors who are going to maintain and modify it will\\
understand it.''}\\[1ex]
Donald E.~Knuth, from an interview in Dr.~Dobb's Journal, 1996.
\end{flushright}
One of the main reasons for descending to the ML-level of Isabelle is to be
able to implement automatic proof procedures. Such proof procedures can
considerably lessen the burden of manual reasoning. They are centred around
the idea of refining a goal state using tactics. This is similar to the
\isacommand{apply}-style reasoning at the user-level, where goals are
modified in a sequence of proof steps until all of them are discharged.
In this chapter we will explain simple tactics and how to combine them using
tactic combinators. We also describe the simplifier, simprocs and conversions.
*}
section {* Basics of Reasoning with Tactics\label{sec:basictactics}*}
text {*
To see how tactics work, let us first transcribe a simple \isacommand{apply}-style proof
into ML. Consider the following proof.
*}
lemma disj_swap:
shows "P \<or> Q \<Longrightarrow> Q \<or> P"
apply(erule disjE)
apply(rule disjI2)
apply(assumption)
apply(rule disjI1)
apply(assumption)
done
text {*
This proof translates to the following ML-code.
@{ML_response_fake [display,gray]
"let
val ctxt = @{context}
val goal = @{prop \"P \<or> Q \<Longrightarrow> Q \<or> P\"}
in
Goal.prove ctxt [\"P\", \"Q\"] [] goal
(fn _ => etac @{thm disjE} 1
THEN rtac @{thm disjI2} 1
THEN atac 1
THEN rtac @{thm disjI1} 1
THEN atac 1)
end" "?P \<or> ?Q \<Longrightarrow> ?Q \<or> ?P"}
To start the proof, the function @{ML_ind prove in Goal} sets up a goal
state for proving the goal @{prop "P \<or> Q \<Longrightarrow> Q \<or> P"}. We can give this
function some assumptions in the third argument (there are no assumptions in
the proof at hand). The second argument stands for a list of variables
(given as strings). This list contains the variables that will be turned
into schematic variables once the goal is proved (in our case @{text P} and
@{text Q}). The last argument is the tactic that proves the goal. This
tactic can make use of the local assumptions (there are none in this
example). The tactics @{ML_ind etac in Tactic}, @{ML_ind rtac in Tactic} and
@{ML_ind atac in Tactic} in the code above correspond roughly to @{text
erule}, @{text rule} and @{text assumption}, respectively. The combinator
@{ML THEN} strings the tactics together.
Roughly speaking, the function @{ML_ind prove_multi in Goal} is similar,
except that it takes a list of goals, which are proved simultaneously by
the given tactic; it returns the corresponding list of theorems.
\begin{readmore}
To learn more about the function @{ML_ind prove in Goal} see
\isccite{sec:results} and the file @{ML_file "Pure/goal.ML"}. See @{ML_file
"Pure/tactic.ML"} and @{ML_file "Pure/tactical.ML"} for the code of basic
tactics and tactic combinators; see also Chapters 3 and 4 in the old
Isabelle Reference Manual, and Chapter 3 in the Isabelle/Isar Implementation
Manual.
\end{readmore}
\index{tactic@ {\tt\slshape{}tactic}}
\index{raw\_tactic@ {\tt\slshape{}raw\_tactic}}
During the development of automatic proof procedures, you will often find it
necessary to test a tactic on examples. This can be conveniently done with
the command \isacommand{apply}@{text "(tactic \<verbopen> \<dots> \<verbclose>)"}.
Consider the following sequence of tactics
*}
ML %grayML{*val foo_tac =
  (etac @{thm disjE} 1
   THEN rtac @{thm disjI2} 1
   THEN atac 1
   THEN rtac @{thm disjI1} 1
   THEN atac 1)*}
text {* and the Isabelle proof: *}
lemma
shows "P \<or> Q \<Longrightarrow> Q \<or> P"
apply(tactic {* foo_tac *})
done
text {*
By using @{text "tactic \<verbopen> \<dots> \<verbclose>"} you can call from the
user-level of Isabelle the tactic @{ML foo_tac} or
any other function that returns a tactic. There are some goal transformations
that are performed by @{text "tactic"}. They can be avoided by using
@{text "raw_tactic"} instead.
The tactic @{ML foo_tac} is just a sequence of simple tactics stringed
together by @{ML THEN}. As can be seen, each simple tactic in @{ML foo_tac}
has a hard-coded number that stands for the subgoal analysed by the
tactic (@{text "1"} stands for the first, or top-most, subgoal). This hard-coding
of goals is sometimes wanted, but usually it is not. To avoid the explicit numbering,
you can write
*}
ML %grayML{*val foo_tac' =
  (etac @{thm disjE}
   THEN' rtac @{thm disjI2}
   THEN' atac
   THEN' rtac @{thm disjI1}
   THEN' atac)*}text_raw{*\label{tac:footacprime}*}
text {*
where @{ML_ind THEN' in Tactical} is used instead of @{ML THEN in
Tactical}. (For most combinators that combine tactics---@{ML THEN} is only
one such combinator---a ``primed'' version exists.) With @{ML foo_tac'} you
can give the number for the subgoal explicitly when the tactic is called. So
in the next proof you can first discharge the second subgoal, and
subsequently the first.
*}
lemma
shows "P1 \<or> Q1 \<Longrightarrow> Q1 \<or> P1"
and "P2 \<or> Q2 \<Longrightarrow> Q2 \<or> P2"
apply(tactic {* foo_tac' 2 *})
apply(tactic {* foo_tac' 1 *})
done
text {*
This kind of addressing is more difficult to achieve when the goal is
hard-coded inside the tactic.
The tactics @{ML foo_tac} and @{ML foo_tac'} are very specific for analysing
goals being only of the form @{prop "P \<or> Q \<Longrightarrow> Q \<or> P"}. If the goal is not of
this form, then these tactics return the error message:\footnote{To be
precise, tactics do not produce this error message; the message originates from the
\isacommand{apply} wrapper that calls the tactic.}
\begin{isabelle}
@{text "*** empty result sequence -- proof command failed"}\\
@{text "*** At command \"apply\"."}
\end{isabelle}
This means they failed. The reason for this error message is that tactics
are functions mapping a goal state to a (lazy) sequence of successor
states. Hence the type of a tactic is:
*}
ML %grayML{*type tactic = thm -> thm Seq.seq*}
text {*
By convention, if a tactic fails, then it should return the empty sequence.
Therefore, if you write your own tactics, they should not raise exceptions
willy-nilly; only in very grave failure situations should a tactic raise the
exception @{text THM}.
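To comply with this convention in your own code, you can, for example, wrap
up a tactic so that a @{text THM}-exception is turned into failure. A minimal
sketch (the function name is our own):
*}

ML %grayML{*(* returns the empty sequence whenever tac raises a THM-exception *)
fun allow_thm_exn tac = fn thm => tac thm handle THM _ => Seq.empty*}

text {*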
The simplest tactics are @{ML_ind no_tac in Tactical} and
@{ML_ind all_tac in Tactical}. The first returns the empty sequence and
is defined as
*}
ML %grayML{*fun no_tac thm = Seq.empty*}
text {*
which means @{ML no_tac} always fails. The second returns the given theorem wrapped
in a single-member sequence; it is defined as
*}
ML %grayML{*fun all_tac thm = Seq.single thm*}
text {*
which means @{ML all_tac} always succeeds, but also does not make any progress
with the proof.
The lazy list of possible successor goal states shows through at the user-level
of Isabelle when using the command \isacommand{back}. For instance in the
following proof there are two possibilities for how to apply
@{ML foo_tac'}: either using the first assumption or the second.
*}
lemma
shows "\<lbrakk>P \<or> Q; P \<or> Q\<rbrakk> \<Longrightarrow> Q \<or> P"
apply(tactic {* foo_tac' 1 *})
back
done
text {*
By using \isacommand{back}, we construct the proof that uses the
second assumption. While in the proof above it does not really matter which
assumption is used, in more interesting cases provability might depend
on exploring different possibilities.
\begin{readmore}
See @{ML_file "Pure/General/seq.ML"} for the implementation of lazy
sequences. In day-to-day Isabelle programming, however, one rarely
constructs sequences explicitly, but uses the predefined tactics and
tactic combinators instead.
\end{readmore}
It might be surprising that tactics, which transform
one goal state to the next, are functions from theorems to sequences
of theorems. The surprise is resolved by the fact that every
goal state is indeed a theorem. To shed more light on this,
let us modify the code of @{ML all_tac} to obtain the following
tactic
*}
ML %grayML{*fun my_print_tac ctxt thm =
let
  val _ = tracing (Pretty.string_of (pretty_thm_no_vars ctxt thm))
in
  Seq.single thm
end*}
text {*
which prints out the given theorem (using the string-function defined in
Section~\ref{sec:printing}) and then behaves like @{ML all_tac}. With this
tactic we are in the position to inspect every goal state in a proof. For
this consider the proof in Figure~\ref{fig:goalstates}: as can be seen,
internally every goal state is an implication of the form
@{text[display] "A\<^isub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^isub>n \<Longrightarrow> #C"}
where @{term C} is the goal to be proved and the @{term "A\<^isub>i"} are
the subgoals. So after setting up the lemma, the goal state is always of the
form @{text "C \<Longrightarrow> #C"}; when the proof is finished we are left with @{text
"#C"}. Since the goal @{term C} can potentially be an implication, there is a
``protector'' wrapped around it (the wrapper is the outermost constant
@{text "Const (\"prop\", bool \<Rightarrow> bool)"}; in the figure we make it visible
as a @{text #}). This wrapper prevents that premises of @{text C} are
misinterpreted as open subgoals. While tactics can operate on the subgoals
(the @{text "A\<^isub>i"} above), they are expected to leave the conclusion
@{term C} intact, with the exception of possibly instantiating schematic
variables. Such instantiations of schematic variables can be observed
on the user level. Have a look at the following definition and proof.
*}
text_raw {*
\begin{figure}[p]
\begin{boxedminipage}{\textwidth}
\begin{isabelle}
*}
notation (output) "prop" ("#_" [1000] 1000)
lemma
shows "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"
apply(tactic {* my_print_tac @{context} *})
txt{* \begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}\medskip
\begin{minipage}{\textwidth}
\small\colorbox{gray!20}{
\begin{tabular}{@ {}l@ {}}
internal goal state:\\
@{raw_goal_state}
\end{tabular}}
\end{minipage}\medskip
*}
apply(rule conjI)
apply(tactic {* my_print_tac @{context} *})
txt{* \begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}\medskip
\begin{minipage}{\textwidth}
\small\colorbox{gray!20}{
\begin{tabular}{@ {}l@ {}}
internal goal state:\\
@{raw_goal_state}
\end{tabular}}
\end{minipage}\medskip
*}
apply(assumption)
apply(tactic {* my_print_tac @{context} *})
txt{* \begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}\medskip
\begin{minipage}{\textwidth}
\small\colorbox{gray!20}{
\begin{tabular}{@ {}l@ {}}
internal goal state:\\
@{raw_goal_state}
\end{tabular}}
\end{minipage}\medskip
*}
apply(assumption)
apply(tactic {* my_print_tac @{context} *})
txt{* \begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}\medskip
\begin{minipage}{\textwidth}
\small\colorbox{gray!20}{
\begin{tabular}{@ {}l@ {}}
internal goal state:\\
@{raw_goal_state}
\end{tabular}}
\end{minipage}\medskip
*}
(*<*)oops(*>*)
text_raw {*
\end{isabelle}
\end{boxedminipage}
\caption{The figure shows an Isabelle snippet of a proof where each
intermediate goal state is printed by the Isabelle system and by @{ML
my_print_tac}. The latter shows the goal state as represented internally
(highlighted boxes). This tactic shows that every goal state in Isabelle is
represented by a theorem: when you start the proof of \mbox{@{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow>
A \<and> B"}} the theorem is @{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B) \<Longrightarrow> #(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}; when
you finish the proof the theorem is @{text "#(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and>
B)"}.\label{fig:goalstates}}
\end{figure}
*}
definition
EQ_TRUE
where
"EQ_TRUE X \<equiv> (X = True)"
schematic_lemma test:
shows "EQ_TRUE ?X"
unfolding EQ_TRUE_def
by (rule refl)
text {*
By using \isacommand{schematic\_lemma} it is possible to prove lemmas including
meta-variables on the user level. However, the proved theorem is not @{text "EQ_TRUE ?X"},
as one might expect, but @{thm test}. We can test this with:
\begin{isabelle}
\isacommand{thm}~@{thm [source] test}\\
@{text ">"}~@{thm test}
\end{isabelle}
The reason for this result is that the schematic variable @{text "?X"}
is instantiated inside the proof to be @{term True} and the
instantiation is then propagated to the ``outside''.
\begin{readmore}
For more information about the internals of goals see \isccite{sec:tactical-goals}.
\end{readmore}
*}
section {* Simple Tactics\label{sec:simpletacs} *}
text {*
In this section we will introduce more of the simple tactics in Isabelle. The
first one is @{ML_ind print_tac in Tactical}, which is quite useful
for low-level debugging of tactics. It just prints out a message and the current
goal state. Unlike @{ML my_print_tac} shown earlier, it prints the goal state
as the user would see it. For example, processing the proof
*}
lemma
shows "False \<Longrightarrow> True"
apply(tactic {* print_tac "foo message" *})
txt{*gives:
\begin{isabelle}
@{text "foo message"}\\[3mm]
@{prop "False \<Longrightarrow> True"}\\
@{text " 1. False \<Longrightarrow> True"}\\
\end{isabelle}
*}
(*<*)oops(*>*)
text {*
A simple tactic for easy discharge of any proof obligations, even difficult ones, is
@{ML_ind cheat_tac in Skip_Proof} in the structure @{ML_struct Skip_Proof}.
This tactic corresponds to the Isabelle command \isacommand{sorry} and is
sometimes useful during the development of tactics.
*}
lemma
shows "False" and "Goldbach_conjecture"
apply(tactic {* Skip_Proof.cheat_tac @{theory} *})
txt{*\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}*}
(*<*)oops(*>*)
text {*
Another simple tactic is the function @{ML_ind atac in Tactic}, which, as shown
earlier, corresponds to the assumption method.
*}
lemma
shows "P \<Longrightarrow> P"
apply(tactic {* atac 1 *})
txt{*\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}*}
(*<*)oops(*>*)
text {*
Similarly, @{ML_ind rtac in Tactic}, @{ML_ind dtac in Tactic}, @{ML_ind etac
in Tactic} and @{ML_ind ftac in Tactic} correspond (roughly) to @{text
rule}, @{text drule}, @{text erule} and @{text frule}, respectively. Each of
them takes a theorem as argument and attempts to apply it to a goal. Below
are three self-explanatory examples.
*}
lemma
shows "P \<and> Q"
apply(tactic {* rtac @{thm conjI} 1 *})
txt{*\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}*}
(*<*)oops(*>*)
lemma
shows "P \<and> Q \<Longrightarrow> False"
apply(tactic {* etac @{thm conjE} 1 *})
txt{*\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}*}
(*<*)oops(*>*)
lemma
shows "False \<and> True \<Longrightarrow> False"
apply(tactic {* dtac @{thm conjunct2} 1 *})
txt{*\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}*}
(*<*)oops(*>*)
text {*
The function @{ML_ind resolve_tac in Tactic} is similar to @{ML rtac}, except that it
expects a list of theorems as argument. From this list it will apply the
first applicable theorem (later theorems that are also applicable can be
explored via the lazy sequences mechanism). Given the code
*}
ML %grayML{*val resolve_xmp_tac = resolve_tac [@{thm impI}, @{thm conjI}]*}
text {*
An example for @{ML resolve_tac} is the following proof, where first an outermost
implication is analysed and then an outermost conjunction.
*}
lemma
shows "C \<longrightarrow> (A \<and> B)"
and "(A \<longrightarrow> B) \<and> C"
apply(tactic {* resolve_xmp_tac 1 *})
apply(tactic {* resolve_xmp_tac 2 *})
txt{*\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}*}
(*<*)oops(*>*)
text {*
Similar versions taking a list of theorems exist for the tactics
@{ML dtac} (@{ML_ind dresolve_tac in Tactic}), @{ML etac}
(@{ML_ind eresolve_tac in Tactic}) and so on.
Another simple tactic is @{ML_ind cut_facts_tac in Tactic}. It inserts a
list of theorems into the assumptions of the current goal state. Below we
will insert the definitions for the constants @{term True} and @{term
False}. So
*}
lemma
shows "True \<noteq> False"
apply(tactic {* cut_facts_tac [@{thm True_def}, @{thm False_def}] 1 *})
txt{*produces the goal state
\begin{isabelle}
@{subgoals [display]}
\end{isabelle}*}
(*<*)oops(*>*)
text {*
Often proofs on the ML-level involve elaborate operations on assumptions and
@{text "\<And>"}-quantified variables. To do such operations using the basic tactics
shown so far is very unwieldy and brittle. Some convenience and
safety is provided by the functions @{ML_ind FOCUS in Subgoal} and
@{ML_ind SUBPROOF in Subgoal}. These tactics fix the parameters
and bind the various components of a goal state to a record.
To see what happens, consider the function defined in Figure~\ref{fig:sptac}, which
takes a record and just prints out the contents of this record. Then consider
the proof:
*}
text_raw{*
\begin{figure}[t]
\begin{minipage}{\textwidth}
\begin{isabelle}
*}
ML %grayML{*fun foc_tac {prems, params, asms, concl, context, schematics} =
let
  fun assgn1 f1 f2 xs =
    let
      val out = map (fn (x, y) => Pretty.enum ":=" "" "" [f1 x, f2 y]) xs
    in
      Pretty.list "" "" out
    end

  fun assgn2 f xs = assgn1 f f xs

  val pps = map (fn (s, pp) => Pretty.block [Pretty.str s, pp])
    [("params: ", assgn1 Pretty.str (pretty_cterm context) params),
     ("assumptions: ", pretty_cterms context asms),
     ("conclusion: ", pretty_cterm context concl),
     ("premises: ", pretty_thms_no_vars context prems),
     ("schematics: ", assgn2 (pretty_cterm context) (snd schematics))]
in
  tracing (Pretty.string_of (Pretty.chunks pps)); all_tac
end*}
text_raw{*
\end{isabelle}
\end{minipage}
\caption{A function that prints out the various parameters provided by
@{ML FOCUS in Subgoal} and @{ML SUBPROOF}. It uses the functions defined
in Section~\ref{sec:printing} for extracting strings from @{ML_type cterm}s
and @{ML_type thm}s.\label{fig:sptac}}
\end{figure}
*}
schematic_lemma
shows "\<And>x y. A x y \<Longrightarrow> B y x \<longrightarrow> C (?z y) x"
apply(tactic {* Subgoal.FOCUS foc_tac @{context} 1 *})
txt {*
The tactic produces the following printout:
\begin{quote}\small
\begin{tabular}{ll}
params: & @{text "x:= x"}, @{text "y:= y"}\\
assumptions: & @{term "A x y"}\\
conclusion: & @{term "B y x \<longrightarrow> C (z y) x"}\\
premises: & @{term "A x y"}\\
schematics: & @{text "?z:=z"}
\end{tabular}
\end{quote}
The @{text params} and @{text schematics} stand for lists of pairs where the
left-hand side of @{text ":="} is replaced by the right-hand side inside the
tactic. Notice that in the actual output the variables @{term x} and @{term
y} have a brown colour. Although they are parameters in the original goal,
they are fixed inside the tactic. By convention these fixed variables are
printed in brown colour. The same applies to the schematic variable @{text ?z}. The
assumption, or premise, @{prop "A x y"} is bound as @{ML_type cterm} to the
record-variable @{text asms}, but also as @{ML_type thm} to @{text prems}.
If we continue the proof script by applying the @{text impI}-rule
*}
apply(rule impI)
apply(tactic {* Subgoal.FOCUS foc_tac @{context} 1 *})
txt {*
then the tactic prints out:
\begin{quote}\small
\begin{tabular}{ll}
params: & @{text "x:= x"}, @{text "y:= y"}\\
assumptions: & @{term "A x y"}, @{term "B y x"}\\
conclusion: & @{term "C (z y) x"}\\
premises: & @{term "A x y"}, @{term "B y x"}\\
schematics: & @{text "?z:=z"}
\end{tabular}
\end{quote}
*}
(*<*)oops(*>*)
text {*
Now also @{term "B y x"} is an assumption bound to @{text asms} and @{text prems}.
One difference between the tactics @{ML SUBPROOF} and @{ML FOCUS in Subgoal}
is that the former expects the goal to be solved completely, whereas the
latter does not. Another is that @{ML SUBPROOF} cannot instantiate any schematic
variables.
Observe that inside @{ML FOCUS in Subgoal} and @{ML SUBPROOF}, we are in a goal
state where there is only a conclusion. This means the tactics @{ML dtac} and
the like are of no use for manipulating the goal state. The assumptions inside
@{ML FOCUS in Subgoal} and @{ML SUBPROOF} are given as cterms and theorems in
the arguments @{text "asms"} and @{text "prems"}, respectively. This
means we can apply them using the usual assumption tactics. With this you can
for example easily implement a tactic that behaves almost like @{ML atac}:
*}
ML %grayML{*val atac' = Subgoal.FOCUS (fn {prems, ...} => resolve_tac prems 1)*}
text {*
If you apply @{ML atac'} to the next lemma
*}
lemma
shows "\<lbrakk>B x y; A x y; C x y\<rbrakk> \<Longrightarrow> A x y"
apply(tactic {* atac' @{context} 1 *})
txt{* it will produce
@{subgoals [display]} *}
(*<*)oops(*>*)
text {*
Notice that @{ML atac'} inside @{ML FOCUS in Subgoal} calls @{ML
resolve_tac} with the subgoal number @{text "1"} and also the outer call to
@{ML FOCUS in Subgoal} in the \isacommand{apply}-step uses @{text "1"}. This
is another advantage of @{ML FOCUS in Subgoal} and @{ML SUBPROOF}: the
addressing inside it is completely local to the tactic inside the
subproof. It is therefore possible to also apply @{ML atac'} to the second
goal by just writing:
*}
lemma
shows "True"
and "\<lbrakk>B x y; A x y; C x y\<rbrakk> \<Longrightarrow> A x y"
apply(tactic {* atac' @{context} 2 *})
apply(rule TrueI)
done
text {*
To sum up, both @{ML FOCUS in Subgoal} and @{ML SUBPROOF} are rather
convenient, but can impose a considerable run-time penalty in automatic
proofs. If speed is of the essence, then maybe the two lower level combinators
described next are more appropriate.
\begin{readmore}
The functions @{ML FOCUS in Subgoal} and @{ML SUBPROOF} are defined in
@{ML_file "Pure/subgoal.ML"} and also described in
\isccite{sec:results}.
\end{readmore}
Similar but less powerful functions than @{ML FOCUS in Subgoal},
respectively @{ML SUBPROOF}, are @{ML_ind SUBGOAL in Tactical} and @{ML_ind
CSUBGOAL in Tactical}. They allow you to inspect a given subgoal (the former
presents the subgoal as a @{ML_type term}, while the latter as a @{ML_type
cterm}). With them you can implement a tactic that applies a rule according
to the topmost logic connective in the subgoal (to illustrate this we only
analyse a few connectives). The code of the tactic is as follows.
*}
ML %linenosgray{*fun select_tac (t, i) =
  case t of
     @{term "Trueprop"} $ t' => select_tac (t', i)
   | @{term "op \<Longrightarrow>"} $ _ $ t' => select_tac (t', i)
   | @{term "op \<and>"} $ _ $ _ => rtac @{thm conjI} i
   | @{term "op \<longrightarrow>"} $ _ $ _ => rtac @{thm impI} i
   | @{term "Not"} $ _ => rtac @{thm notI} i
   | Const (@{const_name "All"}, _) $ _ => rtac @{thm allI} i
   | _ => all_tac*}text_raw{*\label{tac:selecttac}*}
text {*
The input of the function is a term representing the subgoal and a number
specifying the subgoal of interest. In Line 3 you need to descend under the
outermost @{term "Trueprop"} in order to get to the connective you want to
analyse. Otherwise goals like @{prop "A \<and> B"} are not properly
analysed. Similarly with meta-implications in the next line. While for the
first five patterns we can use the @{text "@term"}-antiquotation to
construct the patterns, the pattern in Line 8 cannot be constructed in this
way. The reason is that an antiquotation would fix the type of the
quantified variable. So you really have to construct this pattern using the
basic term-constructors. This is not necessary in the other cases, because their
type is always fixed to function types involving only the type @{typ
bool}. (See Section \ref{sec:terms_types_manually} about constructing terms
manually.) For the catch-all pattern, we chose to just return @{ML all_tac}.
Consequently, @{ML select_tac} never fails.
Let us now see how to apply this tactic. Consider the four goals:
*}
lemma
shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
apply(tactic {* SUBGOAL select_tac 4 *})
apply(tactic {* SUBGOAL select_tac 3 *})
apply(tactic {* SUBGOAL select_tac 2 *})
apply(tactic {* SUBGOAL select_tac 1 *})
txt{* \begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage} *}
(*<*)oops(*>*)
text {*
where in all but the last the tactic applies an introduction rule.
Note that we applied the tactic to the goals in ``reverse'' order.
This is a trick in order to be independent of the subgoals that are
produced by the rule. If we had applied it in the other order
*}
lemma
shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
apply(tactic {* SUBGOAL select_tac 1 *})
apply(tactic {* SUBGOAL select_tac 3 *})
apply(tactic {* SUBGOAL select_tac 4 *})
apply(tactic {* SUBGOAL select_tac 5 *})
(*<*)oops(*>*)
text {*
then we have to be careful not to apply the tactic to the two subgoals produced by
the first goal. Doing so can result in quite messy code. In contrast,
the ``reverse application'' is easy to implement.
Of course, this example is
contrived: there are much simpler methods available in Isabelle for
implementing a tactic analysing a goal according to its topmost
connective. These simpler methods use tactic combinators, which we will
explain in the next section. But before that we will show how
tactic application can be constrained.
\begin{readmore}
The functions @{ML SUBGOAL} and @{ML CSUBGOAL} are defined in @{ML_file "Pure/tactical.ML"}.
\end{readmore}
Since @{ML_ind rtac in Tactic} and the like use higher-order unification, an
automatic proof procedure based on them might become too fragile if it just
applies theorems as shown above. The reason is that a number of theorems
introduce schematic variables into the goal state. Consider for example the
proof
*}
lemma
shows "\<forall>x \<in> A. P x \<Longrightarrow> Q x"
apply(tactic {* dtac @{thm bspec} 1 *})
txt{*\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}*}
(*<*)oops(*>*)
text {*
where the application of theorem @{text bspec} generates two subgoals involving the
new schematic variable @{text "?x"}. Now, if you are not careful, tactics
applied to the first subgoal might instantiate this schematic variable in such a
way that the second subgoal becomes unprovable. If it is clear what the @{text "?x"}
should be, then this situation can be avoided by introducing a more
constrained version of the @{text bspec}-theorem. One way to give such
constraints is by pre-instantiating theorems with other theorems.
The function @{ML_ind RS in Drule}, for example, does this.
@{ML_response_fake [display,gray]
"@{thm disjI1} RS @{thm conjI}" "\<lbrakk>?P1; ?Q\<rbrakk> \<Longrightarrow> (?P1 \<or> ?Q1) \<and> ?Q"}
In this example it instantiates the first premise of the @{text conjI}-theorem
with the theorem @{text disjI1}. If the instantiation is impossible, as in the
case of
@{ML_response_fake_both [display,gray]
"@{thm conjI} RS @{thm mp}"
"*** Exception- THM (\"RSN: no unifiers\", 1,
[\"\<lbrakk>?P; ?Q\<rbrakk> \<Longrightarrow> ?P \<and> ?Q\", \"\<lbrakk>?P \<longrightarrow> ?Q; ?P\<rbrakk> \<Longrightarrow> ?Q\"]) raised"}
then the function raises an exception. The function @{ML_ind RSN in Drule}
is similar to @{ML RS}, but takes an additional number as argument. This
number makes explicit which premise should be instantiated.
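For example (the exact names of the schematic variables in the result may
differ):
@{ML_response_fake [display,gray]
"@{thm disjI1} RSN (2, @{thm conjI})"
"\<lbrakk>?P; ?P1\<rbrakk> \<Longrightarrow> ?P \<and> (?P1 \<or> ?Q1)"}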
If you want to instantiate more than one premise of a theorem, you can use
the function @{ML_ind MRS in Drule}:
@{ML_response_fake [display,gray]
"[@{thm disjI1}, @{thm disjI2}] MRS @{thm conjI}"
"\<lbrakk>?P2; ?Q1\<rbrakk> \<Longrightarrow> (?P2 \<or> ?Q2) \<and> (?P1 \<or> ?Q1)"}
If you need to instantiate lists of theorems, you can use the
functions @{ML_ind RL in Drule} and @{ML_ind OF in Drule}. For
example in the code below, every theorem in the second list is
instantiated with every theorem in the first.
@{ML_response_fake [display,gray]
"let
val list1 = [@{thm impI}, @{thm disjI2}]
val list2 = [@{thm conjI}, @{thm disjI1}]
in
list1 RL list2
end"
"[\<lbrakk>?P1 \<Longrightarrow> ?Q1; ?Q\<rbrakk> \<Longrightarrow> (?P1 \<longrightarrow> ?Q1) \<and> ?Q,
\<lbrakk>?Q1; ?Q\<rbrakk> \<Longrightarrow> (?P1 \<or> ?Q1) \<and> ?Q,
(?P1 \<Longrightarrow> ?Q1) \<Longrightarrow> (?P1 \<longrightarrow> ?Q1) \<or> ?Q,
?Q1 \<Longrightarrow> (?P1 \<or> ?Q1) \<or> ?Q]"}
\begin{readmore}
The combinators for instantiating theorems with other theorems are
defined in @{ML_file "Pure/drule.ML"}.
\end{readmore}
Higher-order unification is also often in the way when applying certain
congruence theorems. For example we would expect that the theorem
@{thm [source] cong}
\begin{isabelle}
\isacommand{thm}~@{thm [source] cong}\\
@{text "> "}~@{thm cong}
\end{isabelle}
is applicable in the following proof producing the subgoals
@{term "t x = s u"} and @{term "y = w"}.
*}
lemma
fixes x y u w::"'a"
shows "t x y = s u w"
apply(rule cong)
txt{*\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}*}
(*<*)oops(*>*)
text {*
As you can see this is unfortunately \emph{not} the case if we apply @{thm [source]
cong} with the method @{text rule}. The problem is
that higher-order unification produces an instantiation that is not the
intended one. While we can use \isacommand{back} to interactively find the
intended instantiation, this is not an option for an automatic proof
procedure. Fortunately, the tactic @{ML_ind cong_tac in Cong_Tac} helps
with applying congruence theorems and finding the intended instantiation.
For example
*}
lemma
fixes x y u w::"'a"
shows "t x y = s u w"
apply(tactic {* Cong_Tac.cong_tac @{thm cong} 1 *})
txt{*\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}*}
(*<*)oops(*>*)
text {*
However, frequently it is necessary to explicitly match a theorem against a goal
state and, in doing so, construct an appropriate instantiation manually. Imagine
you have the theorem
*}
lemma rule:
shows "\<lbrakk>f = g; x = y\<rbrakk> \<Longrightarrow> R (f x) (g y)"
sorry
text {*
and you want to apply it to the goal @{term "(t\<^isub>1 t\<^isub>2) \<le>
(s\<^isub>1 s\<^isub>2)"}. Since in the theorem all variables are
schematic, we have a nasty higher-order unification problem and @{text rtac}
will not be able to apply this rule in the way we want. For the goal at hand
we want to use it so that @{term R} is instantiated to the constant @{text \<le>} and
similarly ``obvious'' instantiations for the other variables. To achieve this
we need to match the conclusion of
@{thm [source] rule} against the goal, reading off an instantiation for
@{thm [source] rule}. For this the function @{ML_ind first_order_match in Thm}
matches two @{ML_type cterm}s and produces, in the successful case, a matcher
that can be used to instantiate the theorem. The instantiation
can be done with the function @{ML_ind instantiate_normalize in Drule}. To automate
this we implement the following function.
*}
ML %linenosgray{*fun fo_rtac thm = Subgoal.FOCUS (fn {concl, ...} =>
  let
    val concl_pat = Drule.strip_imp_concl (cprop_of thm)
    val insts = Thm.first_order_match (concl_pat, concl)
  in
    rtac (Drule.instantiate_normalize insts thm) 1
  end)*}
text {*
Note that we use @{ML FOCUS in Subgoal} because it gives us direct access
to the conclusion of the goal state, but also because this function
takes care of correctly handling parameters that might be present
in the goal state. In Line 3 we use the function @{ML_ind strip_imp_concl in Drule}
for calculating the conclusion of a theorem (produced as @{ML_type cterm}).
An example of @{ML fo_rtac} is as follows.
*}
lemma
shows "\<And>t\<^isub>1 s\<^isub>1 (t\<^isub>2::'a) (s\<^isub>2::'a). (t\<^isub>1 t\<^isub>2) \<le> (s\<^isub>1 s\<^isub>2)"
apply(tactic {* fo_rtac @{thm rule} @{context} 1 *})
txt{*\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}*}
(*<*)oops(*>*)
text {*
We obtain the intended subgoals, and the parameters are also correctly
introduced in both of them. Such manual instantiations are quite frequently
necessary in order to appropriately constrain the application of theorems.
Otherwise one can end up with ``wild'' higher-order unification problems
that make automatic proofs fail.
\begin{readmore}
Functions for matching @{ML_type cterm}s are defined in @{ML_file "Pure/thm.ML"}.
Functions for instantiating schematic variables in theorems are defined
in @{ML_file "Pure/drule.ML"}.
\end{readmore}
*}
section {* Tactic Combinators *}
text {*
The purpose of tactic combinators is to build compound tactics out of
smaller tactics. In the previous section we already used @{ML_ind THEN in Tactical},
which just strings together two tactics in a sequence. For example:
*}
lemma
shows "(Foo \<and> Bar) \<and> False"
apply(tactic {* rtac @{thm conjI} 1 THEN rtac @{thm conjI} 1 *})
txt {* \begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage} *}
(*<*)oops(*>*)
text {*
If you want to avoid the hard-coded subgoal addressing in each component,
then, as seen earlier, you can use the ``primed'' version of @{ML THEN}.
For example:
*}
lemma
shows "(Foo \<and> Bar) \<and> False"
apply(tactic {* (rtac @{thm conjI} THEN' rtac @{thm conjI}) 1 *})
txt {* \begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage} *}
(*<*)oops(*>*)
text {*
Here you have to specify the subgoal of interest only once and it is
consistently applied to all components. For most tactic combinators such a
``primed'' version exists and in what follows we will usually prefer it over
the ``unprimed'' one.
The tactic combinator @{ML_ind RANGE in Tactical} takes a list of, say,
@{text n} tactics and applies them, one each, to the first @{text n} subgoals.
For example below we first apply the introduction rule for conjunctions
and then apply a tactic to each of the two subgoals.
*}
lemma
shows "A \<Longrightarrow> True \<and> A"
apply(tactic {* (rtac @{thm conjI}
THEN' RANGE [rtac @{thm TrueI}, atac]) 1 *})
txt {* \begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage} *}
(*<*)oops(*>*)
text {*
If there is a list of tactics that should all be tried out in sequence on
one specified subgoal, you can use the combinator @{ML_ind EVERY' in
Tactical}. For example the function @{ML foo_tac'} from
page~\pageref{tac:footacprime} can also be written as:
*}
ML %grayML{*val foo_tac'' = EVERY' [etac @{thm disjE}, rtac @{thm disjI2},
                        atac, rtac @{thm disjI1}, atac]*}
text {*
There is even another way of implementing this tactic: in automatic proof
procedures (in contrast to tactics that might be called by the user) there
are often long lists of tactics that are applied to the first
subgoal. Instead of writing the code above and then calling @{ML "foo_tac'' 1"},
you can also just write
*}
ML %grayML{*val foo_tac1 = EVERY1 [etac @{thm disjE}, rtac @{thm disjI2},
                       atac, rtac @{thm disjI1}, atac]*}
text {*
and call @{ML foo_tac1}.
With the combinators @{ML THEN'}, @{ML EVERY'} and @{ML_ind EVERY1 in Tactical} it must be
guaranteed that all component tactics successfully apply; otherwise the
whole tactic will fail. If you rather want to try out a number of tactics,
then you can use the combinator @{ML_ind ORELSE' in Tactical} for two tactics, and @{ML_ind
FIRST' in Tactical} (or @{ML_ind FIRST1 in Tactical}) for a list of tactics. For example, the tactic
*}
ML %grayML{*val orelse_xmp_tac = rtac @{thm disjI1} ORELSE' rtac @{thm conjI}*}
text {*
will first try out whether theorem @{text disjI1} applies and in case of failure
will try @{text conjI}. To see this consider the proof
*}
lemma
shows "True \<and> False" and "Foo \<or> Bar"
apply(tactic {* orelse_xmp_tac 2 *})
apply(tactic {* orelse_xmp_tac 1 *})
txt {* which results in the goal state
\begin{isabelle}
@{subgoals [display]}
\end{isabelle}
*}
(*<*)oops(*>*)
text {*
Using @{ML FIRST'} we can simplify our @{ML select_tac} from Page~\pageref{tac:selecttac}
as follows:
*}
ML %grayML{*val select_tac' = FIRST' [rtac @{thm conjI}, rtac @{thm impI},
                                rtac @{thm notI}, rtac @{thm allI}, K all_tac]*}text_raw{*\label{tac:selectprime}*}
text {*
Since we like to mimic the behaviour of @{ML select_tac} as closely as possible,
we must include @{ML all_tac} at the end of the list, otherwise the tactic will
fail if no theorem applies (we also have to wrap @{ML all_tac} using the
@{ML K}-combinator, because it does not take a subgoal number as argument). You can
test the tactic on the same goals:
*}
lemma
shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
apply(tactic {* select_tac' 4 *})
apply(tactic {* select_tac' 3 *})
apply(tactic {* select_tac' 2 *})
apply(tactic {* select_tac' 1 *})
txt{* \begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage} *}
(*<*)oops(*>*)
text {*
Since such repeated application of a tactic to \emph{all} subgoals in
reverse order is quite common, there is the tactic combinator
@{ML_ind ALLGOALS in Tactical} that simplifies this. Using this combinator you can simply
write: *}
lemma
shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
apply(tactic {* ALLGOALS select_tac' *})
txt{* \begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage} *}
(*<*)oops(*>*)
text {*
Remember that we chose to implement @{ML select_tac'} so that it
always succeeds by virtue of having @{ML all_tac} at the end of the tactic
list. The same can be achieved with the tactic combinator @{ML_ind TRY in Tactical}.
For example:
*}
ML %grayML{*val select_tac'' = TRY o FIRST' [rtac @{thm conjI}, rtac @{thm impI},
                                       rtac @{thm notI}, rtac @{thm allI}]*}
text_raw{*\label{tac:selectprimeprime}*}
text {*
This tactic behaves in the same way as @{ML select_tac'}: it tries out
one of the given tactics and if none applies leaves the goal state
unchanged. This, however, can be potentially very confusing when visible to
the user, for example, in cases where the goal is of the form
*}
lemma
shows "E \<Longrightarrow> F"
apply(tactic {* select_tac' 1 *})
txt{* \begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage} *}
(*<*)oops(*>*)
text {*
In this case no theorem applies. But because we wrapped the tactic in a @{ML
TRY}, it does not fail anymore. The problem is that the user then has
little chance of seeing whether any progress in the proof has been made. By
convention therefore, tactics visible to the user should either change
something or fail.
To comply with this convention, we could simply delete the @{ML "K all_tac"}
in @{ML select_tac'} or delete @{ML TRY} from @{ML select_tac''}. But for
the sake of argument, let us suppose that this deletion is \emph{not} an
option. In such cases, you can use the combinator @{ML_ind CHANGED in
Tactical} to make sure the subgoal has been changed by the tactic. Because
now
*}
lemma
shows "E \<Longrightarrow> F"
apply(tactic {* CHANGED (select_tac' 1) *})(*<*)?(*>*)
txt{* gives the error message:
\begin{isabelle}
@{text "*** empty result sequence -- proof command failed"}\\
@{text "*** At command \"apply\"."}
\end{isabelle}
*}(*<*)oops(*>*)
text {*
We can further extend the @{ML select_tac}s so that they not just apply to the topmost
connective, but also to the ones immediately ``underneath'', i.e.~analyse the goal
completely. For this you can use the tactic combinator @{ML_ind REPEAT in Tactical}. As an example
consider the following tactic
*}
ML %grayML{*val repeat_xmp_tac = REPEAT (CHANGED (select_tac' 1)) *}
text {* which applied to the proof *}
lemma
shows "((\<not>A) \<and> (\<forall>x. B x)) \<and> (C \<longrightarrow> D)"
apply(tactic {* repeat_xmp_tac *})
txt{* produces
\begin{isabelle}
@{subgoals [display]}
\end{isabelle} *}
(*<*)oops(*>*)
text {*
Here it is crucial that @{ML select_tac'} is prefixed with @{ML CHANGED},
because otherwise @{ML REPEAT} runs into an infinite loop (it applies the
tactic as long as it succeeds). The function @{ML_ind REPEAT1 in Tactical}
is similar, but runs the tactic at least once (failing if this is not
possible).
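For example, a version of @{ML repeat_xmp_tac} defined in terms of
@{ML REPEAT1}, as sketched below, fails on goals such as @{text "E \<Longrightarrow> F"},
where @{ML select_tac'} cannot make any progress.
*}

ML %grayML{*val repeat1_xmp_tac = REPEAT1 (CHANGED (select_tac' 1))*}

text {*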
If you are after the ``primed'' version of @{ML repeat_xmp_tac}, then you
can implement it as
*}
ML %grayML{*val repeat_xmp_tac' = REPEAT o CHANGED o select_tac'*}
text {*
since there are no ``primed'' versions of @{ML REPEAT} and @{ML CHANGED}.
If you look closely at the goal state above, then you see the tactics @{ML repeat_xmp_tac}
and @{ML repeat_xmp_tac'} are not yet quite what we are after: the problem is
that goals 2 and 3 are not analysed. This is because the tactic
is applied repeatedly only to the first subgoal. To analyse also all
resulting subgoals, you can use the tactic combinator @{ML_ind REPEAT_ALL_NEW in Tactical}.
Given the tactic
*}
ML %grayML{*val repeat_all_new_xmp_tac = REPEAT_ALL_NEW (CHANGED o select_tac')*}
text {*
you can see that the following goal
*}
lemma
shows "((\<not>A) \<and> (\<forall>x. B x)) \<and> (C \<longrightarrow> D)"
apply(tactic {* repeat_all_new_xmp_tac 1 *})
txt{* \begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage} *}
(*<*)oops(*>*)
text {*
is completely analysed according to the theorems we chose to
include in @{ML select_tac'}.
Recall that tactics produce a lazy sequence of successor goal states. These
states can be explored using the command \isacommand{back}. For example
*}
lemma
shows "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
apply(tactic {* etac @{thm disjE} 1 *})
txt{* applies the rule to the first assumption yielding the goal state:
\begin{isabelle}
@{subgoals [display]}
\end{isabelle}
After typing
*}
(*<*)
oops
lemma
shows "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
apply(tactic {* etac @{thm disjE} 1 *})
(*>*)
back
txt{* the rule now applies to the second assumption.
\begin{isabelle}
@{subgoals [display]}
\end{isabelle} *}
(*<*)oops(*>*)
text {*
Sometimes this leads to confusing behaviour of tactics and also has
the potential to explode the search space for tactics.
These problems can be avoided by prefixing the tactic with the tactic
combinator @{ML_ind DETERM in Tactical}.
*}
lemma
shows "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
apply(tactic {* DETERM (etac @{thm disjE} 1) *})
txt {*\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage} *}
(*<*)oops(*>*)
text {*
This combinator will prune the search space to just the first successful application.
Attempting to apply \isacommand{back} in this goal state gives the
error message:
\begin{isabelle}
@{text "*** back: no alternatives"}\\
@{text "*** At command \"back\"."}
\end{isabelle}
\footnote{\bf FIXME: PARALLEL-CHOICE PARALLEL-GOALS}
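The combinator @{ML_ind COND in Tactical} lets a tactic branch on a test of
the goal state. The following sketch (the auxiliary function is our own)
applies @{thm [source] conjI} if the first subgoal is a conjunction without
parameters and premises, and fails otherwise:
*}

ML %grayML{*fun first_subgoal_is_conj st =
  (case Thm.prems_of st of
     (@{term "Trueprop"} $ (@{term "op \<and>"} $ _ $ _)) :: _ => true
   | _ => false)

val cond_xmp_tac = COND first_subgoal_is_conj (rtac @{thm conjI} 1) no_tac*}

text {*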
\begin{readmore}
Most tactic combinators described in this section are defined in @{ML_file "Pure/tactical.ML"}.
Some combinators for the purpose of proof search are implemented in @{ML_file "Pure/search.ML"}.
\end{readmore}
*}
text {*
\begin{exercise}\label{ex:dyckhoff}
Dyckhoff presents in \cite{Dyckhoff92} inference rules of a sequent
calculus, called G4ip, for intuitionistic propositional logic. The
interesting feature of this calculus is that no contraction rule is needed
in order for it to be complete. As a result the rules can be applied exhaustively, which
in turn leads to a simple decision procedure for intuitionistic propositional logic.
The task is to implement this decision procedure as a tactic. His rules are
\begin{center}
\def\arraystretch{2.3}
\begin{tabular}{cc}
\infer[Ax]{A,\varGamma \Rightarrow A}{} &
\infer[False]{False,\varGamma \Rightarrow G}{}\\
\infer[\wedge_L]{A \wedge B, \varGamma \Rightarrow G}{A, B, \varGamma \Rightarrow G} &
\infer[\wedge_R]
{\varGamma \Rightarrow A\wedge B}{\varGamma \Rightarrow A & \varGamma \Rightarrow B}\\
\infer[\vee_L]
{A\vee B, \varGamma \Rightarrow G}{A,\varGamma \Rightarrow G & B,\varGamma \Rightarrow G} &
\infer[\vee_{R_1}]
{\varGamma \Rightarrow A \vee B}{\varGamma \Rightarrow A} \hspace{3mm}
\infer[\vee_{R_2}]
{\varGamma \Rightarrow A \vee B}{\varGamma \Rightarrow B}\\
\infer[\longrightarrow_{L_1}]
{A\longrightarrow B, A, \varGamma \Rightarrow G}{B, A, \varGamma \Rightarrow G} &
\infer[\longrightarrow_R]
{\varGamma \Rightarrow A\longrightarrow B}{A,\varGamma \Rightarrow B}\\
\infer[\longrightarrow_{L_2}]
{(C \wedge D)\longrightarrow B, \varGamma \Rightarrow G}
{C\longrightarrow (D \longrightarrow B), \varGamma \Rightarrow G} &
\infer[\longrightarrow_{L_3}]
{(C \vee D)\longrightarrow B, \varGamma \Rightarrow G}
{C\longrightarrow B, D\longrightarrow B, \varGamma \Rightarrow G}\\
\multicolumn{2}{c}{
\infer[\longrightarrow_{L_4}]
{(C \longrightarrow D)\longrightarrow B, \varGamma \Rightarrow G}
{D\longrightarrow B, \varGamma \Rightarrow C \longrightarrow D &
B, \varGamma \Rightarrow G}}\\
\end{tabular}
\end{center}
Note that in Isabelle the right rules need to be implemented as
introduction rules, the left rules as elimination rules. You have to
prove separate theorems corresponding to $\longrightarrow_{L_{1..4}}$. The
tactic should explore all possibilities of applying these rules to a
propositional formula until a goal state is reached in which all subgoals
are discharged. For this you can use the tactic combinator @{ML_ind
DEPTH_SOLVE in Search} in the structure @{ML_struct Search}.
\end{exercise}
\begin{exercise}
Add to the sequent calculus from the previous exercise also rules for
equality and run your tactic on the de Bruijn formulae discussed
in Exercise~\ref{ex:debruijn}.
\end{exercise}
*}
section {* Simplifier Tactics\label{sec:simplifier} *}
text {*
A lot of convenience in reasoning with Isabelle derives from its
powerful simplifier. We will describe it in this section. However,
due to its complexity, we can mostly only scratch the surface.
The power of the simplifier is a strength and a weakness at the same time,
because you can easily make the simplifier run into a loop and in general
its behaviour can be difficult to predict. There is also a multitude of
options that you can configure to change the behaviour of the
simplifier. There are the following five main tactics behind the simplifier
(in parentheses is their user-level counterpart):
\begin{isabelle}
\begin{center}
\begin{tabular}{l@ {\hspace{2cm}}l}
@{ML_ind simp_tac in Simplifier} & @{text "(simp (no_asm))"} \\
@{ML_ind asm_simp_tac in Simplifier} & @{text "(simp (no_asm_simp))"} \\
@{ML_ind full_simp_tac in Simplifier} & @{text "(simp (no_asm_use))"} \\
@{ML_ind asm_lr_simp_tac in Simplifier} & @{text "(simp (asm_lr))"} \\
@{ML_ind asm_full_simp_tac in Simplifier} & @{text "(simp)"}
\end{tabular}
\end{center}
\end{isabelle}
All these tactics take a simpset and an integer as argument (the latter as usual
to specify the goal to be analysed). So the proof
*}
lemma
shows "Suc (1 + 2) < 3 + 2"
apply(simp)
done
text {*
corresponds on the ML-level to the tactic
*}
lemma
shows "Suc (1 + 2) < 3 + 2"
apply(tactic {* asm_full_simp_tac @{simpset} 1 *})
done
text {*
If the simplifier cannot make any progress, then it leaves the goal unchanged,
i.e., does not raise any error message. That means if you use it to unfold a
definition for a constant and this constant is not present in the goal state,
you can still safely apply the simplifier.
\footnote{\bf FIXME: show rewriting of cterms}
There is one restriction you have to keep in mind when using the simplifier:
it can only deal with rewriting rules whose left-hand sides are higher-order
patterns (see Section \ref{sec:univ} on unification). Whether a term is a pattern
or not can be tested with the function @{ML_ind pattern in Pattern} from
the structure @{ML_struct Pattern}. If a rule is not a pattern and you want
to use it for rewriting, then you have to use simprocs or conversions, both
of which we shall describe in the next section.
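For example, a term in which one schematic variable is applied to another is
not a higher-order pattern. We can check this by constructing such a term by
hand:
@{ML_response_fake [display,gray]
"Pattern.pattern
  (Var ((\"X\", 0), @{typ \"nat \<Rightarrow> bool\"}) $ Var ((\"Y\", 0), @{typ nat}))"
"false"}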
When using the simplifier, the crucial information you have to provide is
the simpset. If this information is not handled with care, then, as
mentioned above, the simplifier can easily run into a loop. Therefore a good
rule of thumb is to use simpsets that are as minimal as possible. It might
be surprising that a simpset is more complex than just a simple collection
of theorems. One reason for the complexity is that the simplifier must be
able to rewrite inside terms and should also be able to rewrite according to
theorems that have premises.
The rewriting inside terms requires congruence theorems, which
are typically meta-equalities of the form
\begin{isabelle}
\begin{center}
\mbox{\inferrule{@{text "t\<^isub>1 \<equiv> s\<^isub>1 \<dots> t\<^isub>n \<equiv> s\<^isub>n"}}
{@{text "constr t\<^isub>1\<dots>t\<^isub>n \<equiv> constr s\<^isub>1\<dots>s\<^isub>n"}}}
\end{center}
\end{isabelle}
with @{text "constr"} being a constant, like @{const "If"}, @{const "Let"}
and so on. Every simpset contains only one congruence rule for each
term-constructor, which however can be overwritten. The user can declare
lemmas to be congruence rules using the attribute @{text "[cong]"}. Note that
in HOL these congruence theorems are usually stated as equations, which are
then internally transformed into meta-equations.
The rewriting with theorems involving premises requires what is in Isabelle
called a subgoaler, a solver and a looper. These can be arbitrary tactics
that can be installed in a simpset and which are executed at various stages
during simplification.
Simpsets can also include simprocs, which produce rewrite rules on
demand according to a pattern (see next section for a detailed description
of simpsets). Another component are split-rules, which can simplify for
example the ``then'' and ``else'' branches of if-statements under the
corresponding preconditions.
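For example, assuming the standard theorem @{thm [source] split_if} for
if-statements, such a split rule can be added to a simpset with the function
@{ML_ind add_split in Splitter} (a sketch):
*}

ML %grayML{*val ss_with_if_split = Splitter.add_split @{thm split_if} HOL_basic_ss*}

text {*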
\begin{readmore}
For more information about the simplifier see @{ML_file "Pure/raw_simplifier.ML"}
and @{ML_file "Pure/simplifier.ML"}. The generic splitter is implemented in
@{ML_file "Provers/splitter.ML"}.
\end{readmore}
\footnote{\bf FIXME: Find the right place to mention this: Discrimination
nets are implemented in @{ML_file "Pure/net.ML"}.}
The most common combinators for modifying simpsets are:
\begin{isabelle}
\begin{tabular}{l@ {\hspace{10mm}}l}
@{ML_ind addsimps in Raw_Simplifier} & @{ML_ind delsimps in Raw_Simplifier}\\
@{ML_ind addsimprocs in Raw_Simplifier} & @{ML_ind delsimprocs in Raw_Simplifier}\\
@{ML_ind add_cong in Raw_Simplifier} & @{ML_ind del_cong in Raw_Simplifier}\\
\end{tabular}
\end{isabelle}
*}
text_raw {*
\begin{figure}[t]
\begin{minipage}{\textwidth}
\begin{isabelle}*}
ML %grayML{*fun print_ss ctxt ss =
let
  val {simps, congs, procs, ...} = Raw_Simplifier.dest_ss ss

  fun name_thm (nm, thm) =
    Pretty.enclose (nm ^ ": ") "" [pretty_thm_no_vars ctxt thm]
  fun name_ctrm (nm, ctrm) =
    Pretty.enclose (nm ^ ": ") "" [pretty_cterms ctxt ctrm]

  val pps = [Pretty.big_list "Simplification rules:" (map name_thm simps),
             Pretty.big_list "Congruence rules:" (map name_thm congs),
             Pretty.big_list "Simproc patterns:" (map name_ctrm procs)]
in
  pps |> Pretty.chunks
      |> pwriteln
end*}
text_raw {*
\end{isabelle}
\end{minipage}
\caption{The function @{ML_ind dest_ss in Raw_Simplifier} returns a record containing
all printable information stored in a simpset. We are here only interested in the
simplification rules, congruence rules and simprocs.\label{fig:printss}}
\end{figure} *}
text {*
To see how they work, consider the function in Figure~\ref{fig:printss}, which
prints out some parts of a simpset. If you use it to print out the components of the
empty simpset, i.e., @{ML_ind empty_ss in Raw_Simplifier}
@{ML_response_fake [display,gray]
"print_ss @{context} empty_ss"
"Simplification rules:
Congruence rules:
Simproc patterns:"}
you can see it contains nothing. This simpset is usually not useful, except as a
building block to build bigger simpsets. For example you can add to @{ML empty_ss}
the simplification rule @{thm [source] Diff_Int} as follows:
*}
ML %grayML{*val ss1 = empty_ss addsimps [@{thm Diff_Int} RS @{thm eq_reflection}] *}
text {*
Printing out the components of the simpset then gives:
@{ML_response_fake [display,gray]
"print_ss @{context} ss1"
"Simplification rules:
  ??.unknown: A - B \<inter> C \<equiv> A - B \<union> (A - C)
Congruence rules:
Simproc patterns:"}
\footnote{\bf FIXME: Why does it print out ??.unknown}
Adding also the congruence rule @{thm [source] UN_cong}
*}
ML %grayML{*val ss2 = Simplifier.add_cong (@{thm UN_cong} RS @{thm eq_reflection}) ss1 *}
text {*
gives
@{ML_response_fake [display,gray]
"print_ss @{context} ss2"
"Simplification rules:
  ??.unknown: A - B \<inter> C \<equiv> A - B \<union> (A - C)
Congruence rules:
  UNION: \<lbrakk>A = B; \<And>x. x \<in> B \<Longrightarrow> C x = D x\<rbrakk> \<Longrightarrow> \<Union>x\<in>A. C x \<equiv> \<Union>x\<in>B. D x
Simproc patterns:"}
Notice that we had to add these lemmas as meta-equations. The @{ML empty_ss}
expects this form of the simplification and congruence rules. This is
different if we use, for example, the simpset @{ML HOL_basic_ss} (see below),
where rules are usually added as equations. However, even
when adding these lemmas to @{ML empty_ss} we do not end up with anything useful yet.
In the context of HOL, the first really useful simpset is @{ML_ind
HOL_basic_ss in Simpdata}. While printing out the components of this simpset
@{ML_response_fake [display,gray]
"print_ss @{context} HOL_basic_ss"
"Simplification rules:
Congruence rules:
Simproc patterns:"}
also produces ``nothing'', the printout is somewhat misleading. In fact
the @{ML HOL_basic_ss} is setup so that it can solve goals of the
form
\begin{isabelle}
@{thm TrueI}, @{thm refl[no_vars]}, @{term "t \<equiv> t"} and @{thm FalseE[no_vars]};
\end{isabelle}
and also resolve with assumptions. For example:
*}
lemma
shows "True"
and "t = t"
and "t \<equiv> t"
and "False \<Longrightarrow> Foo"
and "\<lbrakk>A; B; C\<rbrakk> \<Longrightarrow> A"
apply(tactic {* ALLGOALS (simp_tac HOL_basic_ss) *})
done
text {*
This behaviour is not due to simplification rules, but to how the subgoaler, solver
and looper are set up in @{ML HOL_basic_ss}.
The simpset @{ML_ind HOL_ss} is an extension of @{ML HOL_basic_ss} containing
already many useful simplification and congruence rules for the logical
connectives in HOL.
@{ML_response_fake [display,gray]
"print_ss @{context} HOL_ss"
"Simplification rules:
  Pure.triv_forall_equality: (\<And>x. PROP V) \<equiv> PROP V
  HOL.the_eq_trivial: THE x. x = y \<equiv> y
  HOL.the_sym_eq_trivial: THE ya. y = ya \<equiv> y
  \<dots>
Congruence rules:
  HOL.simp_implies: \<dots>
    \<Longrightarrow> (PROP P =simp=> PROP Q) \<equiv> (PROP P' =simp=> PROP Q')
  op -->: \<lbrakk>P \<equiv> P'; P' \<Longrightarrow> Q \<equiv> Q'\<rbrakk> \<Longrightarrow> P \<longrightarrow> Q \<equiv> P' \<longrightarrow> Q'
Simproc patterns:
  \<dots>"}
\begin{readmore}
The simplifier for HOL is set up in @{ML_file "HOL/Tools/simpdata.ML"}.
The simpset @{ML HOL_ss} is implemented in @{ML_file "HOL/HOL.thy"}.
\end{readmore}
The simplifier is often used to unfold definitions in a proof. For this the
simplifier implements the tactic @{ML_ind rewrite_goal_tac in Raw_Simplifier}.\footnote{\bf FIXME:
see LocalDefs infrastructure.} Suppose for example the
definition
*}
definition "MyTrue \<equiv> True"
text {*
then we can use this tactic to unfold the definition of this constant.
*}
lemma
shows "MyTrue \<Longrightarrow> True"
apply(tactic {* rewrite_goal_tac @{thms MyTrue_def} 1 *})
txt{* producing the goal state
\begin{isabelle}
@{subgoals [display]}
\end{isabelle} *}
(*<*)oops(*>*)
text {*
If you want to unfold definitions in \emph{all} subgoals, not just one,
then use the tactic @{ML_ind rewrite_goals_tac in Raw_Simplifier}.
*}
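text {*
For instance, a tactic that unfolds the definition of @{term MyTrue} in all
subgoals at once could be sketched as follows (the name is made up):
*}

ML %grayML{*val unfold_MyTrue_tac = rewrite_goals_tac @{thms MyTrue_def}*}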
text_raw {*
\begin{figure}[p]
\begin{boxedminipage}{\textwidth}
\begin{isabelle} *}
type_synonym prm = "(nat \<times> nat) list"
consts perm :: "prm \<Rightarrow> 'a \<Rightarrow> 'a" ("_ \<bullet> _" [80,80] 80)
overloading
perm_nat \<equiv> "perm :: prm \<Rightarrow> nat \<Rightarrow> nat"
perm_prod \<equiv> "perm :: prm \<Rightarrow> ('a\<times>'b) \<Rightarrow> ('a\<times>'b)"
perm_list \<equiv> "perm :: prm \<Rightarrow> 'a list \<Rightarrow> 'a list"
begin
fun swap::"nat \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> nat"
where
"swap a b c = (if c=a then b else (if c=b then a else c))"
primrec perm_nat
where
"perm_nat [] c = c"
| "perm_nat (ab#pi) c = swap (fst ab) (snd ab) (perm_nat pi c)"
fun perm_prod
where
"perm_prod pi (x, y) = (pi\<bullet>x, pi\<bullet>y)"
primrec perm_list
where
"perm_list pi [] = []"
| "perm_list pi (x#xs) = (pi\<bullet>x)#(perm_list pi xs)"
end
lemma perm_append[simp]:
fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
shows "((pi\<^isub>1@pi\<^isub>2)\<bullet>c) = (pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c))"
by (induct pi\<^isub>1) (auto)
lemma perm_bij[simp]:
fixes c d::"nat" and pi::"prm"
shows "(pi\<bullet>c = pi\<bullet>d) = (c = d)"
by (induct pi) (auto)
lemma perm_rev[simp]:
fixes c::"nat" and pi::"prm"
shows "pi\<bullet>((rev pi)\<bullet>c) = c"
by (induct pi arbitrary: c) (auto)
lemma perm_compose:
fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c) = (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>(pi\<^isub>1\<bullet>c)"
by (induct pi\<^isub>2) (auto)
text_raw {*
\end{isabelle}
\end{boxedminipage}
\caption{A simple theory about permutations over @{typ nat}s. The point is that the
lemma @{thm [source] perm_compose} cannot be directly added to the simplifier, as
it would cause the simplifier to loop. It can still be used as a simplification
rule if the permutation in the right-hand side is sufficiently protected.
\label{fig:perms}}
\end{figure} *}
text {*
The simplifier is often used to bring terms into a normal form.
Unfortunately, the situation frequently arises that the corresponding
simplification rules cause the simplifier to run into an infinite
loop. Consider for example the simple theory about permutations over natural
numbers shown in Figure~\ref{fig:perms}. The purpose of the lemmas is to
push permutations as far inside a term as possible, where they might disappear by
lemma @{thm [source] perm_rev}. However, to fully normalise all instances,
it would be desirable also to add the lemma @{thm [source] perm_compose} to
the simplifier, in order to push permutations over other permutations. Unfortunately,
the right-hand side of this lemma is again an instance of the left-hand side
and so would cause an infinite loop. There seems to be no easy way to reformulate
this rule, and so one ends up with clunky proofs like:
*}
lemma
fixes c d::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
shows "pi\<^isub>1\<bullet>(c, pi\<^isub>2\<bullet>((rev pi\<^isub>1)\<bullet>d)) = (pi\<^isub>1\<bullet>c, (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>d)"
apply(simp)
apply(rule trans)
apply(rule perm_compose)
apply(simp)
done
text {*
It is, however, possible to create a single simplifier tactic that solves
such goals. The trick is to introduce an auxiliary constant for permutations
and to split the simplification into two phases (below actually three). Let
us assume the auxiliary constant is
*}
definition
perm_aux :: "prm \<Rightarrow> 'a \<Rightarrow> 'a" ("_ \<bullet>aux _" [80,80] 80)
where
"pi \<bullet>aux c \<equiv> pi \<bullet> c"
text {* Now the two lemmas *}
lemma perm_aux_expand:
fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c) = pi\<^isub>1 \<bullet>aux (pi\<^isub>2\<bullet>c)"
unfolding perm_aux_def by (rule refl)
lemma perm_compose_aux:
fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>aux c) = (pi\<^isub>1\<bullet>pi\<^isub>2) \<bullet>aux (pi\<^isub>1\<bullet>c)"
unfolding perm_aux_def by (rule perm_compose)
text {*
are simple consequences of the definition and @{thm [source] perm_compose}.
More importantly, the lemma @{thm [source] perm_compose_aux} can be safely
added to the simplifier, because now the right-hand side is no longer an
instance of the left-hand side. In a sense it freezes all redexes of permutation compositions
after one step. In this way, we can split simplification of permutations
into three phases without the user noticing anything about the auxiliary
constant. We first freeze any instance of permutation compositions in the term using
lemma @{thm [source] "perm_aux_expand"} (Line 9);
then simplify all other permutations including pushing permutations over
other permutations by rule @{thm [source] perm_compose_aux} (Line 10); and
finally ``unfreeze'' all instances of permutation compositions by unfolding
the definition of the auxiliary constant.
*}
ML %linenosgray{*val perm_simp_tac =
let
val thms1 = [@{thm perm_aux_expand}]
val thms2 = [@{thm perm_append}, @{thm perm_bij}, @{thm perm_rev},
@{thm perm_compose_aux}] @ @{thms perm_prod.simps} @
@{thms perm_list.simps} @ @{thms perm_nat.simps}
val thms3 = [@{thm perm_aux_def}]
in
simp_tac (HOL_basic_ss addsimps thms1)
THEN' simp_tac (HOL_basic_ss addsimps thms2)
THEN' simp_tac (HOL_basic_ss addsimps thms3)
end*}
text {*
For all three phases we have to build simpsets by adding the specific lemmas. Since
@{ML HOL_basic_ss} is sufficient for our purposes here, we add the lemmas to it in order to obtain
the desired behaviour. Now we can solve the following lemma
*}
lemma
fixes c d::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
shows "pi\<^isub>1\<bullet>(c, pi\<^isub>2\<bullet>((rev pi\<^isub>1)\<bullet>d)) = (pi\<^isub>1\<bullet>c, (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>d)"
apply(tactic {* perm_simp_tac 1 *})
done
text {*
in one step. This tactic can deal with most instances of normalising permutations.
In order to solve all cases, however, we have to deal with corner cases such as the
goal being an exact instance of the permutation composition lemma. This can
often be done more easily by implementing a simproc or a conversion. Both will be
explained in the next two sections.
(FIXME: Is it interesting to say something about @{term "op =simp=>"}?)
(FIXME: What are the second components of the congruence rules---something to
do with weak congruence constants?)
(FIXME: Anything interesting to say about @{ML Simplifier.clear_ss}?)
(FIXME: what are @{ML mksimps_pairs}; used in Nominal.thy)
(FIXME: explain @{ML simplify} and @{ML "Simplifier.rewrite_rule"} etc.)
*}
section {* Simprocs *}
text {*
In Isabelle you can also implement custom simplification procedures, called
\emph{simprocs}. Simprocs can be triggered by the simplifier on a specified
term-pattern and rewrite a term according to a theorem. They are useful in
cases where a rewriting rule must be produced on ``demand'' or when
rewriting by simplification is too unpredictable and potentially loops.
To see how simprocs work, let us first write a simproc that just prints out
the redex that triggers it and otherwise does nothing. For this
you can use the function:
*}
ML %linenosgray{*fun fail_simproc simpset redex =
let
val ctxt = Simplifier.the_context simpset
val _ = pwriteln (Pretty.block [Pretty.str "The redex: ", pretty_cterm ctxt redex])
in
NONE
end*}
text {*
This function takes a simpset and a redex (a @{ML_type cterm}) as
arguments. In Lines 3 and~4, we first extract the context from the given
simpset and then print out a message containing the redex. The function
returns @{ML NONE} (standing for an optional @{ML_type thm}) since at the
moment we are \emph{not} interested in actually rewriting anything. We want
the simproc to be triggered by the pattern @{term "Suc n"}. This can be
done by adding the simproc to the current simpset as follows
*}
simproc_setup %gray fail ("Suc n") = {* K fail_simproc *}
text {*
where the second argument specifies the pattern and the right-hand side
contains the code of the simproc (we have to use @{ML K} since we are ignoring
an argument about morphisms).
After this, the simplifier is aware of the simproc and you can test whether
it fires on the lemma:
*}
lemma
shows "Suc 0 = 1"
apply(simp)
txt{*
\begin{minipage}{\textwidth}
\small@{text "> The redex: Suc 0"}\\
@{text "> The redex: Suc 0"}\\
\end{minipage}*}(*<*)oops(*>*)
text {*
This will print out the message twice: once for the left-hand side and
once for the right-hand side. The reason is that during simplification the
simplifier will at some point reduce the term @{term "1::nat"} to @{term "Suc
0"}, and then the simproc ``fires'' also on that term.
We can add the simproc to, or delete it from, the current simpset with the usual
\isacommand{declare}-statement. For example, the simproc is deleted
with the declaration
*}
declare [[simproc del: fail]]
text {*
If you want to see what happens with just \emph{this} simproc, without any
interference from other rewrite rules, you can call @{text fail}
as follows:
*}
lemma
shows "Suc 0 = 1"
apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [@{simproc fail}]) 1*})
(*<*)oops(*>*)
text {*
Now the message shows up only once since the term @{term "1::nat"} is
left unchanged.
Setting up a simproc using the command \isacommand{simproc\_setup} will
always automatically add the simproc to the current simpset. If you do not
want this, then you have to use a slightly different method for setting
up the simproc. First the function @{ML fail_simproc} needs to be modified
to
*}
ML %grayML{*fun fail_simproc' simpset redex =
let
val ctxt = Simplifier.the_context simpset
val _ = pwriteln (Pretty.block [Pretty.str "The redex:", pretty_term ctxt redex])
in
NONE
end*}
text {*
Here the redex is given as a @{ML_type term}, instead of a @{ML_type cterm}
(therefore we print it out using the function @{ML pretty_term in Pretty}).
We can turn this function into a proper simproc using the function
@{ML Simplifier.simproc_global_i}:
*}
ML %grayML{*val fail' =
let
val thy = @{theory}
val pat = [@{term "Suc n"}]
in
Simplifier.simproc_global_i thy "fail_simproc'" pat (K fail_simproc')
end*}
text {*
Here the pattern is given as @{ML_type term} (instead of @{ML_type cterm}).
The function also takes a list of patterns that can trigger the simproc.
Now the simproc is set up and can be explicitly added using
@{ML_ind addsimprocs in Raw_Simplifier} to a simpset whenever
needed.
Simprocs are applied from inside to outside and from left to right. You can
see this in the proof
*}
lemma
shows "Suc (Suc 0) = (Suc 1)"
apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [fail']) 1*})
(*<*)oops(*>*)
text {*
The simproc @{ML fail'} prints out the sequence
@{text [display]
"> Suc 0
> Suc (Suc 0)
> Suc 1"}
To see how a simproc applies a theorem, let us implement a simproc that
rewrites terms according to the equation:
*}
lemma plus_one:
shows "Suc n \<equiv> n + 1" by simp
text {*
Simprocs expect that the given equation is a meta-equation; however, the
equation can contain preconditions (the simproc will then only fire if the
preconditions can be solved). To see that one has relatively precise control over
the rewriting with simprocs, let us further assume we want that the simproc
only rewrites terms ``greater'' than @{term "Suc 0"}. For this we can write
*}
ML %grayML{*fun plus_one_simproc ss redex =
case redex of
@{term "Suc 0"} => NONE
| _ => SOME @{thm plus_one}*}
text {*
and set up the simproc as follows.
*}
ML %grayML{*val plus_one =
let
val thy = @{theory}
val pat = [@{term "Suc n"}]
in
Simplifier.simproc_global_i thy "sproc +1" pat (K plus_one_simproc)
end*}
text {*
Now the simproc is set up so that it is triggered by terms
of the form @{term "Suc n"}, but inside the simproc we only produce
a theorem if the term is not @{term "Suc 0"}. You can see the result
in the following proof
*}
lemma
shows "P (Suc (Suc (Suc 0))) (Suc 0)"
apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [plus_one]) 1*})
txt{*
where the simproc produces the goal state
\begin{isabelle}
@{subgoals[display]}
\end{isabelle}
*}
(*<*)oops(*>*)
text {*
As usual with rewriting you have to worry about looping: you already have
a loop with @{ML plus_one} if you apply it with the default simpset (because
the default simpset contains a rule that does just the opposite of @{ML plus_one},
namely rewriting @{text [quotes] "+ 1"} to a successor). So you have to be careful
in choosing the right simpset to which you add a simproc.
Next let us implement a simproc that replaces terms of the form @{term "Suc n"}
with the number @{text n} increased by one. First we implement a function that
takes a term and produces the corresponding integer value.
*}
ML %grayML{*fun dest_suc_trm ((Const (@{const_name "Suc"}, _)) $ t) = 1 + dest_suc_trm t
| dest_suc_trm t = snd (HOLogic.dest_number t)*}
text {*
It uses the library function @{ML_ind dest_number in HOLogic} that transforms
(Isabelle) terms, like @{term "0::nat"}, @{term "1::nat"}, @{term "2::nat"} and so
on, into integer values. This function raises the exception @{ML TERM} if
the term is not a number. The next function expects a pair consisting of a term
@{text t} (containing @{term Suc}s) and the corresponding integer value @{text n}.
*}
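text {*
For example, a quick check of this function (the result is a plain ML integer):
@{ML_response [display,gray]
"dest_suc_trm @{term \"Suc (Suc 2)\"}"
"4"}
*}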
ML %linenosgray{*fun get_thm ctxt (t, n) =
let
val num = HOLogic.mk_number @{typ "nat"} n
val goal = Logic.mk_equals (t, num)
in
Goal.prove ctxt [] [] goal (K (Arith_Data.arith_tac ctxt 1))
end*}
text {*
From the integer value it generates the corresponding number term, called
@{text num} (Line 3), and then generates the meta-equation @{text "t \<equiv> num"}
(Line 4), which it proves by the arithmetic tactic in Line 6.
For our purpose at the moment, proving the meta-equation using @{ML
arith_tac in Arith_Data} is fine, but there is also an alternative employing
the simplifier with a special simpset. For the kind of lemmas we
want to prove here, the simpset @{text "num_ss"} should suffice.
*}
ML %grayML{*fun get_thm_alt ctxt (t, n) =
let
val num = HOLogic.mk_number @{typ "nat"} n
val goal = Logic.mk_equals (t, num)
val num_ss = HOL_ss addsimps @{thms semiring_norm}
in
Goal.prove ctxt [] [] goal (K (simp_tac num_ss 1))
end*}
text {*
The advantage of @{ML get_thm_alt} is that it leaves very little room for
something to go wrong; in contrast it is much more difficult to predict
what happens with @{ML arith_tac in Arith_Data}, especially in more complicated
circumstances. The disadvantage of @{ML get_thm_alt} is that one has to find a simpset
that is sufficiently powerful to solve every instance of the lemmas
we want to prove. This requires careful tuning, but is often necessary in
``production code''.\footnote{It would be of great help if there were another
way than tracing the simplifier to obtain the lemmas that are successfully
applied during simplification. Alas, there is none.}
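For example, invoking @{ML get_thm_alt} directly produces the expected
meta-equation (a small check):
@{ML_response_fake [display,gray]
"get_thm_alt @{context} (@{term \"Suc (Suc 0)\"}, 2)"
"Suc (Suc 0) \<equiv> 2"}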
Anyway, either version can be used in the function that produces the actual
theorem for the simproc.
*}
ML %grayML{*fun nat_number_simproc ss t =
let
val ctxt = Simplifier.the_context ss
in
SOME (get_thm_alt ctxt (t, dest_suc_trm t))
handle TERM _ => NONE
end*}
text {*
This function uses the fact that @{ML dest_suc_trm} might raise an exception
@{ML TERM}. In this case there is nothing that can be rewritten and therefore no
theorem is produced (i.e.~the function returns @{ML NONE}). To try out the simproc
on an example, you can set it up as follows:
*}
ML %grayML{*val nat_number =
let
val thy = @{theory}
val pat = [@{term "Suc n"}]
in
Simplifier.simproc_global_i thy "nat_number" pat (K nat_number_simproc)
end*}
text {*
Now in the lemma
*}
lemma
shows "P (Suc (Suc 2)) (Suc 99) (0::nat) (Suc 4 + Suc 0) (Suc (0 + 0))"
apply(tactic {* simp_tac (HOL_ss addsimprocs [nat_number]) 1*})
txt {*
you obtain the more legible goal state
\begin{isabelle}
@{subgoals [display]}
\end{isabelle}*}
(*<*)oops(*>*)
text {*
where the simproc rewrites all @{term "Suc"}s except in the last argument. There it cannot
rewrite anything, because it does not know how to transform the term @{term "Suc (0 + 0)"}
into a number. To solve this problem, have a look at the next exercise.
\begin{exercise}\label{ex:addsimproc}
Write a simproc that replaces terms of the form @{term "t\<^isub>1 + t\<^isub>2"} by their
result. You can assume the terms are ``proper'' numbers, that is of the form
@{term "0::nat"}, @{term "1::nat"}, @{term "2::nat"} and so on.
\end{exercise}
(FIXME: We did not do anything with morphisms. Anything interesting
one can say about them?)
*}
section {* Conversions\label{sec:conversion} *}
text {*
Conversions are a thin layer on top of Isabelle's inference kernel, and can
be viewed as a controllable, bare-bone version of Isabelle's simplifier.
The purpose of conversions is to manipulate @{ML_type cterm}s. However,
we will also show in this section how conversions can be applied to theorems
and to goal states. The type of conversions is
*}
ML %grayML{*type conv = cterm -> thm*}
text {*
whereby the produced theorem is always a meta-equality. A simple conversion
is the function @{ML_ind all_conv in Conv}, which maps a @{ML_type cterm} to an
instance of the (meta)reflexivity theorem. For example:
@{ML_response_fake [display,gray]
"Conv.all_conv @{cterm \"Foo \<or> Bar\"}"
"Foo \<or> Bar \<equiv> Foo \<or> Bar"}
Another simple conversion is @{ML_ind no_conv in Conv} which always raises the
exception @{ML CTERM}.
@{ML_response_fake [display,gray]
"Conv.no_conv @{cterm True}"
"*** Exception- CTERM (\"no conversion\", []) raised"}
A more interesting conversion is the function @{ML_ind beta_conversion in Thm}: it
produces a meta-equation between a term and its beta-normal form. For example
@{ML_response_fake [display,gray]
"let
val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
val two = @{cterm \"2::nat\"}
val ten = @{cterm \"10::nat\"}
val ctrm = Thm.apply (Thm.apply add two) ten
in
Thm.beta_conversion true ctrm
end"
"((\<lambda>x y. x + y) 2) 10 \<equiv> 2 + 10"}
If you run this example, you will notice that the actual response is the
seemingly nonsensical @{term
"2 + 10 \<equiv> 2 + (10::nat)"}. The reason is that the pretty-printer for
@{ML_type cterm}s eta-normalises (sic) terms and therefore produces this output.
If we get hold of the ``raw'' representation of the produced theorem,
we obtain the expected result.
@{ML_response [display,gray]
"let
val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
val two = @{cterm \"2::nat\"}
val ten = @{cterm \"10::nat\"}
val ctrm = Thm.apply (Thm.apply add two) ten
in
Thm.prop_of (Thm.beta_conversion true ctrm)
end"
"Const (\"==\",\<dots>) $
(Abs (\"x\",\<dots>,Abs (\"y\",\<dots>,\<dots>)) $\<dots>$\<dots>) $
(Const (\"Groups.plus_class.plus\",\<dots>) $ \<dots> $ \<dots>)"}
or in the pretty-printed form
@{ML_response_fake [display,gray]
"let
val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
val two = @{cterm \"2::nat\"}
val ten = @{cterm \"10::nat\"}
val ctrm = Thm.apply (Thm.apply add two) ten
val ctxt = @{context}
|> Config.put eta_contract false
|> Config.put show_abbrevs false
in
Thm.prop_of (Thm.beta_conversion true ctrm)
|> pretty_term ctxt
|> pwriteln
end"
"(\<lambda>x y. x + y) 2 10 \<equiv> 2 + 10"}
The argument @{ML true} in @{ML beta_conversion in Thm} indicates that
the right-hand side should be fully beta-normalised. If instead
@{ML false} is given, then only a single beta-reduction is performed
on the outer-most level.
The main point of conversions is that they can be used for rewriting
@{ML_type cterm}s. One example is the function
@{ML_ind rewr_conv in Conv}, which expects a meta-equation as an
argument. Suppose the following meta-equation.
*}
lemma true_conj1:
shows "True \<and> P \<equiv> P" by simp
text {*
It can be used for example to rewrite @{term "True \<and> (Foo \<longrightarrow> Bar)"}
to @{term "Foo \<longrightarrow> Bar"}. The code is as follows.
@{ML_response_fake [display,gray]
"let
val ctrm = @{cterm \"True \<and> (Foo \<longrightarrow> Bar)\"}
in
Conv.rewr_conv @{thm true_conj1} ctrm
end"
"True \<and> (Foo \<longrightarrow> Bar) \<equiv> Foo \<longrightarrow> Bar"}
Note, however, that the function @{ML_ind rewr_conv in Conv} only rewrites the
outer-most level of the @{ML_type cterm}. If the given @{ML_type cterm} does not match
exactly the
left-hand side of the theorem, then @{ML_ind rewr_conv in Conv} fails, raising
the exception @{ML CTERM}.
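For example (the exact exception payload may differ):
@{ML_response_fake [display,gray]
"Conv.rewr_conv @{thm true_conj1} @{cterm \"P \<and> (True \<and> Q)\"}"
"*** Exception- CTERM (\"rewr_conv\", \<dots>) raised"}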
This very primitive way of rewriting can be made more powerful by
combining several conversions into one. For this you can use conversion
combinators. The simplest conversion combinator is @{ML_ind then_conv in Conv},
which applies one conversion after another. For example
@{ML_response_fake [display,gray]
"let
val conv1 = Thm.beta_conversion false
val conv2 = Conv.rewr_conv @{thm true_conj1}
val ctrm = Thm.apply @{cterm \"\<lambda>x. x \<and> False\"} @{cterm \"True\"}
in
(conv1 then_conv conv2) ctrm
end"
"(\<lambda>x. x \<and> False) True \<equiv> False"}
where we first beta-reduce the term and then rewrite according to
@{thm [source] true_conj1}. (When running this example recall the
problem with the pretty-printer normalising all terms.)
The conversion combinator @{ML_ind else_conv in Conv} tries out the
first one, and if it does not apply, tries the second. For example
@{ML_response_fake [display,gray]
"let
val conv = Conv.rewr_conv @{thm true_conj1} else_conv Conv.all_conv
val ctrm1 = @{cterm \"True \<and> Q\"}
val ctrm2 = @{cterm \"P \<or> (True \<and> Q)\"}
in
(conv ctrm1, conv ctrm2)
end"
"(True \<and> Q \<equiv> Q, P \<or> True \<and> Q \<equiv> P \<or> True \<and> Q)"}
Here the conversion based on @{thm [source] true_conj1} only applies
in the first case, but fails in the second. The whole conversion
does not fail, however, because the combinator @{ML else_conv in Conv} will then
try out @{ML all_conv in Conv}, which always succeeds. The same
behaviour can also be achieved with conversion combinator @{ML_ind try_conv in Conv}.
For example
@{ML_response_fake [display,gray]
"let
val conv = Conv.try_conv (Conv.rewr_conv @{thm true_conj1})
val ctrm = @{cterm \"True \<or> P\"}
in
conv ctrm
end"
"True \<or> P \<equiv> True \<or> P"}
Rewriting with more than one theorem can be done using the function
@{ML_ind rewrs_conv in Conv} from the structure @{ML_struct Conv}.
Apart from the function @{ML beta_conversion in Thm}, which is able to fully
beta-normalise a term, the conversions so far are restricted in that they
only apply to the outer-most level of a @{ML_type cterm}. In what follows we
will lift this restriction. The combinators @{ML_ind fun_conv in Conv}
and @{ML_ind arg_conv in Conv} will apply
a conversion to the first, respectively second, argument of an application.
For example
@{ML_response_fake [display,gray]
"let
val conv = Conv.arg_conv (Conv.rewr_conv @{thm true_conj1})
val ctrm = @{cterm \"P \<or> (True \<and> Q)\"}
in
conv ctrm
end"
"P \<or> (True \<and> Q) \<equiv> P \<or> Q"}
The reason for this behaviour is that @{text "(op \<or>)"} expects two
arguments. Therefore the term must be of the form @{text "(Const \<dots> $ t1) $ t2"}. The
conversion is then applied to @{text "t2"}, which in the example above
stands for @{term "True \<and> Q"}. The function @{ML fun_conv in Conv} would apply
the conversion to the term @{text "(Const \<dots> $ t1)"}.
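For example, to rewrite inside the first argument of a disjunction, the two
combinators can be composed (a small sketch):
@{ML_response_fake [display,gray]
"let
val conv = Conv.fun_conv (Conv.arg_conv (Conv.rewr_conv @{thm true_conj1}))
val ctrm = @{cterm \"(True \<and> P) \<or> Q\"}
in
conv ctrm
end"
"(True \<and> P) \<or> Q \<equiv> P \<or> Q"}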
The function @{ML_ind abs_conv in Conv} applies a conversion under an
abstraction. For example:
@{ML_response_fake [display,gray]
"let
val conv = Conv.rewr_conv @{thm true_conj1}
val ctrm = @{cterm \"\<lambda>P. True \<and> (P \<and> Foo)\"}
in
Conv.abs_conv (K conv) @{context} ctrm
end"
"\<lambda>P. True \<and> (P \<and> Foo) \<equiv> \<lambda>P. P \<and> Foo"}
Note that this conversion needs a context as an argument. We also give the
conversion as @{text "(K conv)"}, which is a function that ignores its
argument (the argument being a sufficiently freshened version of the
variable that is abstracted and a context). The conversion that goes under
an application is @{ML_ind combination_conv in Conv}. It expects two
conversions as arguments, each of which is applied to the corresponding
``branch'' of the application. An abbreviation for this conversion is the
function @{ML_ind comb_conv in Conv}, which applies the same conversion
to both branches.
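For example (a small sketch; the @{ML try_conv in Conv} is necessary because
the rewrite fails on the function part of the application):
@{ML_response_fake [display,gray]
"let
val conv = Conv.comb_conv (Conv.try_conv (Conv.rewr_conv @{thm true_conj1}))
val ctrm = Thm.apply @{cterm \"\<lambda>x::bool. x\"} @{cterm \"True \<and> P\"}
in
conv ctrm
end"
"(\<lambda>x. x) (True \<and> P) \<equiv> (\<lambda>x. x) P"}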
We can now apply all these functions in a conversion that recursively
descends a term and applies a ``@{thm [source] true_conj1}''-conversion
in all possible positions.
*}
ML %linenosgray{*fun true_conj1_conv ctxt ctrm =
case (Thm.term_of ctrm) of
@{term "op \<and>"} $ @{term True} $ _ =>
(Conv.arg_conv (true_conj1_conv ctxt) then_conv
Conv.rewr_conv @{thm true_conj1}) ctrm
| _ $ _ => Conv.comb_conv (true_conj1_conv ctxt) ctrm
| Abs _ => Conv.abs_conv (fn (_, ctxt) => true_conj1_conv ctxt) ctxt ctrm
| _ => Conv.all_conv ctrm*}
text {*
This function ``fires'' if the term is of the form @{text "(True \<and> \<dots>)"}.
It descends under applications (Line 6) and abstractions
(Line 7); otherwise it leaves the term unchanged (Line 8). In Line 2
we need to transform the @{ML_type cterm} into a @{ML_type term} in order
to be able to pattern-match the term. To see this
conversion in action, consider the following example:
@{ML_response_fake [display,gray]
"let
val conv = true_conj1_conv @{context}
val ctrm = @{cterm \"distinct [1::nat, x] \<longrightarrow> True \<and> 1 \<noteq> x\"}
in
conv ctrm
end"
"distinct [1, x] \<longrightarrow> True \<and> 1 \<noteq> x \<equiv> distinct [1, x] \<longrightarrow> 1 \<noteq> x"}
*}
text {*
Conversions that traverse terms, like @{ML true_conj1_conv} above, can be
implemented more succinctly with the combinators @{ML_ind bottom_conv in
Conv} and @{ML_ind top_conv in Conv}. For example:
*}
ML %grayML{*fun true_conj1_conv ctxt =
let
val conv = Conv.try_conv (Conv.rewr_conv @{thm true_conj1})
in
Conv.bottom_conv (K conv) ctxt
end*}
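text {*
As a quick check, this more succinct version produces the same result as the
recursive one on the example from above:
@{ML_response_fake [display,gray]
"true_conj1_conv @{context} @{cterm \"distinct [1::nat, x] \<longrightarrow> True \<and> 1 \<noteq> x\"}"
"distinct [1, x] \<longrightarrow> True \<and> 1 \<noteq> x \<equiv> distinct [1, x] \<longrightarrow> 1 \<noteq> x"}
*}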
text {*
If we regard terms as trees with variables and constants on the top, then
@{ML bottom_conv in Conv} first traverses the term and applies the
conversion on the ``way down'', whereas @{ML top_conv in
Conv} applies the conversion on the ``way up''. To see the difference,
assume the following two meta-equations
*}
lemma conj_assoc:
fixes A B C::bool
shows "A \<and> (B \<and> C) \<equiv> (A \<and> B) \<and> C"
and "(A \<and> B) \<and> C \<equiv> A \<and> (B \<and> C)"
by simp_all
text {*
and the @{ML_type cterm} @{text "(a \<and> (b \<and> c)) \<and> d"}. Here you should
pause for a moment to be convinced that rewriting top-down and bottom-up
according to the two meta-equations produces two different results. Below these
two results are calculated.
@{ML_response_fake [display, gray]
"let
val ctxt = @{context}
val conv = Conv.try_conv (Conv.rewrs_conv @{thms conj_assoc})
val conv_top = Conv.top_conv (K conv) ctxt
val conv_bot = Conv.bottom_conv (K conv) ctxt
val ctrm = @{cterm \"(a \<and> (b \<and> c)) \<and> d\"}
in
(conv_top ctrm, conv_bot ctrm)
end"
"(\"(a \<and> (b \<and> c)) \<and> d \<equiv> a \<and> (b \<and> (c \<and> d))\",
\"(a \<and> (b \<and> c)) \<and> d \<equiv> (a \<and> b) \<and> (c \<and> d)\")"}
To see how much control you have over rewriting with conversions, let us
make the task a bit more complicated by rewriting according to the rule
@{text true_conj1}, but only in the first arguments of @{term If}s. Then
the conversion should be as follows.
*}
ML %grayML{*fun if_true1_simple_conv ctxt ctrm =
case Thm.term_of ctrm of
Const (@{const_name If}, _) $ _ =>
Conv.arg_conv (true_conj1_conv ctxt) ctrm
| _ => Conv.no_conv ctrm
val if_true1_conv = Conv.top_sweep_conv if_true1_simple_conv*}
text {*
In the first function we only treat the top-most redex and also only the
success case. By default we return @{ML no_conv in Conv}. To apply this
``simple'' conversion inside a term, we use in the last line the combinator @{ML_ind
top_sweep_conv in Conv}, which traverses the term starting from the
root and applies the conversion to all the redexes on the way, but stops in each branch as
soon as it has found one redex. Here is an example for this conversion:
@{ML_response_fake [display,gray]
"let
val ctrm = @{cterm \"if P (True \<and> 1 \<noteq> (2::nat))
then True \<and> True else True \<and> False\"}
in
if_true1_conv @{context} ctrm
end"
"if P (True \<and> 1 \<noteq> 2) then True \<and> True else True \<and> False
\<equiv> if P (1 \<noteq> 2) then True \<and> True else True \<and> False"}
*}
text {*
So far we only applied conversions to @{ML_type cterm}s. Conversions can, however,
also work on theorems using the function @{ML_ind fconv_rule in Conv}. As an example,
consider again the conversion @{ML true_conj1_conv} and the lemma:
*}
lemma foo_test:
shows "P \<or> (True \<and> \<not>P)" by simp
text {*
Using the conversion you can transform this theorem into a
new theorem as follows
@{ML_response_fake [display,gray]
"let
val conv = Conv.fconv_rule (true_conj1_conv @{context})
val thm = @{thm foo_test}
in
conv thm
end"
"?P \<or> \<not> ?P"}
Finally, Isabelle provides the function @{ML_ind CONVERSION in Tactical}
for turning conversions into tactics. This needs some predefined conversion
combinators that traverse a goal
state and can selectively apply the conversion. The combinators for
the goal state are:
\begin{itemize}
\item @{ML_ind params_conv in Conv} for converting under parameters
(i.e.~where a goal state is of the form @{text "\<And>x. P x \<Longrightarrow> Q x"})
\item @{ML_ind prems_conv in Conv} for applying a conversion to
premises of a goal state, and
\item @{ML_ind concl_conv in Conv} for applying a conversion to the
conclusion of a goal state.
\end{itemize}
Assume we want to apply @{ML true_conj1_conv} only in the conclusion
of the goal, and @{ML if_true1_conv} should only apply to the premises.
Here is a tactic doing exactly that:
*}
ML %grayML{*fun true1_tac ctxt =
CONVERSION
(Conv.params_conv ~1 (fn ctxt =>
(Conv.prems_conv ~1 (if_true1_conv ctxt) then_conv
Conv.concl_conv ~1 (true_conj1_conv ctxt))) ctxt)*}
text {*
We call the conversions with the argument @{ML "~1"}. By this we
analyse all parameters, all premises and the conclusion of a goal
state. If we call @{ML concl_conv in Conv} with
a non-negative number, say @{text n}, then this conversion will
skip the first @{text n} premises and apply the conversion to the
``rest'' (similarly for parameters and premises). To test the
tactic, consider the proof
*}
lemma
"if True \<and> P then P else True \<and> False \<Longrightarrow>
(if True \<and> Q then True \<and> Q else P) \<longrightarrow> True \<and> (True \<and> Q)"
apply(tactic {* true1_tac @{context} 1 *})
txt {* where the tactic yields the goal state
\begin{isabelle}
@{subgoals [display]}
\end{isabelle}*}
(*<*)oops(*>*)
text {*
As you can see, the premises are rewritten according to @{ML if_true1_conv}, while
the conclusion according to @{ML true_conj1_conv}. If only one
conversion should be applied uniformly to the whole goal state, we
can simplify @{ML true1_tac} using @{ML_ind top_conv in Conv}, as sketched below.
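*}
text {*
A minimal sketch of such a uniform tactic (assuming the single
@{thm [source] true_conj1}-rewrite should simply be tried on every
subterm of the goal state) is:
*}
ML %grayML{*fun true1_tac' ctxt =
CONVERSION
(Conv.top_conv (K (Conv.try_conv (Conv.rewr_conv @{thm true_conj1}))) ctxt)*}
text {*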
Conversions are also helpful for constructing meta-equality theorems.
Such theorems are often definitions. As an example consider the following
two ways of defining the identity function in Isabelle.
*}
definition id1::"'a \<Rightarrow> 'a"
where "id1 x \<equiv> x"
definition id2::"'a \<Rightarrow> 'a"
where "id2 \<equiv> \<lambda>x. x"
text {*
Although both definitions define the same function, the theorems @{thm
[source] id1_def} and @{thm [source] id2_def} are different meta-equations. However, it is
easy to transform one into the other. The function @{ML_ind abs_def
in Drule} from the structure @{ML_struct Drule} can automatically transform
theorem @{thm [source] id1_def}
into @{thm [source] id2_def} by abstracting all variables on the
left-hand side in the right-hand side.
@{ML_response_fake [display,gray]
"Drule.abs_def @{thm id1_def}"
"id1 \<equiv> \<lambda>x. x"}
Unfortunately, Isabelle has no built-in function that transforms the
theorems in the other direction. We can implement one using
conversions. The feature of conversions we are using is that applying a
conversion to a @{ML_type cterm} yields a meta-equality theorem (recall
that the type of conversions is an abbreviation for
@{ML_type "cterm -> thm"}). The code of the transformation is below.
*}
ML %linenosgray{*fun unabs_def ctxt def =
let
val (lhs, rhs) = Thm.dest_equals (cprop_of def)
val xs = strip_abs_vars (term_of rhs)
val (_, ctxt') = Variable.add_fixes (map fst xs) ctxt

val thy = Proof_Context.theory_of ctxt'
val cxs = map (cterm_of thy o Free) xs
val new_lhs = Drule.list_comb (lhs, cxs)

fun get_conv [] = Conv.rewr_conv def
| get_conv (_::xs) = Conv.fun_conv (get_conv xs)
in
get_conv xs new_lhs |>
singleton (Proof_Context.export ctxt' ctxt)
end*}
text {*
In Line 3 we destruct the meta-equality into the @{ML_type cterm}s
corresponding to the left-hand and right-hand side of the meta-equality. The
assumption in @{ML unabs_def} is that the right-hand side is an
abstraction. We compute the possibly empty list of abstracted variables in
Line 4 using the function @{ML_ind strip_abs_vars in Term}. For this we have to
first transform the @{ML_type cterm} into a @{ML_type term}. In Line 5 we
import these variables into the context @{text "ctxt'"}, in order to
export them again in Line 15. In Line 8 we certify the list of variables,
which in turn we apply to the @{ML_text lhs} of the definition using the
function @{ML_ind list_comb in Drule}. In Line 11 and 12 we generate the
conversion according to the length of the list of (abstracted) variables. If
there are none, then we just return the conversion corresponding to the
original definition. If there are variables, then we have to prefix this
conversion with @{ML_ind fun_conv in Conv}. Now in Line 14 we only have to
apply the new left-hand side to the generated conversion and obtain the
theorem we want to construct. As mentioned above, in Line 15 we only have to
export the context @{text "ctxt'"} back to @{text "ctxt"} in order to
produce meta-variables for the theorem. An example is as follows.
@{ML_response_fake [display, gray]
"unabs_def @{context} @{thm id2_def}"
"id2 ?x1 \<equiv> ?x1"}
*}
text {*
To sum up this section, conversions are more general than the simplifier
or simprocs, but you have to do more work yourself. Also conversions are
often much less efficient than the simplifier. The advantage of conversions,
however, is that they provide much less room for non-termination.
\begin{exercise}\label{ex:addconversion}
Write a tactic that does the same as the simproc in exercise
\ref{ex:addsimproc}, but is based on conversions. You can make
the same assumptions as in \ref{ex:addsimproc}.
\end{exercise}
\begin{exercise}\label{ex:compare}
Compare your solutions of Exercises~\ref{ex:addsimproc} and \ref{ex:addconversion},
and try to determine which way of rewriting such terms is faster. For this you might
have to construct quite large terms. Also see Recipe \ref{rec:timing} for information
about timing.
\end{exercise}
\begin{readmore}
See @{ML_file "Pure/conv.ML"}
for more information about conversion combinators.
Some basic conversions are defined in @{ML_file "Pure/thm.ML"},
@{ML_file "Pure/drule.ML"} and @{ML_file "Pure/raw_simplifier.ML"}.
\end{readmore}
*}
text {*
(FIXME: check whether @{ML Pattern.match_rew} and @{ML Pattern.rewrite_term}
are of any use/efficient)
*}
section {* Declarations (TBD) *}
section {* Structured Proofs\label{sec:structured} (TBD) *}
text {* TBD *}
lemma True
proof
{
fix A B C
assume r: "A & B \<Longrightarrow> C"
assume A B
then have "A & B" ..
then have C by (rule r)
}
{
fix A B C
assume r: "A & B \<Longrightarrow> C"
assume A B
note conjI [OF this]
note r [OF this]
}
oops
ML {*
val ctxt0 = @{context};
val ctxt = ctxt0;
val (_, ctxt) = Variable.add_fixes ["A", "B", "C"] ctxt;
val ([r], ctxt) = Assumption.add_assumes [@{cprop "A & B \<Longrightarrow> C"}] ctxt
val (this, ctxt) = Assumption.add_assumes [@{cprop "A"}, @{cprop "B"}] ctxt;
val this = [@{thm conjI} OF this];
val this = r OF this;
val this = Assumption.export false ctxt ctxt0 this
val this = Variable.export ctxt ctxt0 [this]
*}
section {* Summary *}
text {*
\begin{conventions}
\begin{itemize}
\item Reference theorems with the antiquotation @{text "@{thm \<dots>}"}.
\item If a tactic is supposed to fail, return the empty sequence.
\item If you apply a tactic to a sequence of subgoals, apply it
in reverse order (i.e.~start with the last subgoal).
\item Use simpsets that are as small as possible.
\end{itemize}
\end{conventions}
*}
end