theory Tactical
imports Base FirstSteps
begin

chapter {* Tactical Reasoning\label{chp:tactical} *}

text {*
  One of the main reasons for descending to the ML-level of Isabelle is to be
  able to implement automatic proof procedures. Such proof procedures can
  considerably lessen the burden of manual reasoning, for example, when
  introducing new definitions. These proof procedures are centred around
  refining a goal state using tactics. This is similar to the
  \isacommand{apply}-style reasoning at the user-level, where goals are
  modified in a sequence of proof steps until all of them are solved.
  However, there are also more structured operations available on the
  ML-level that help with the handling of variables and assumptions.
*}


section {* Basics of Reasoning with Tactics *}

text {*
  To see how tactics work, let us first transcribe a simple
  \isacommand{apply}-style proof into ML. Consider the following proof.
*}

lemma disj_swap: "P \<or> Q \<Longrightarrow> Q \<or> P"
apply(erule disjE)
apply(rule disjI2)
apply(assumption)
apply(rule disjI1)
apply(assumption)
done

text {*
  This proof translates to the following ML-code.

  @{ML_response_fake [display,gray]
"let
  val ctxt = @{context}
  val goal = @{prop \"P \<or> Q \<Longrightarrow> Q \<or> P\"}
in
  Goal.prove ctxt [\"P\", \"Q\"] [] goal
    (fn _ =>
       etac @{thm disjE} 1
       THEN rtac @{thm disjI2} 1
       THEN atac 1
       THEN rtac @{thm disjI1} 1
       THEN atac 1)
end" "?P \<or> ?Q \<Longrightarrow> ?Q \<or> ?P"}

  To start the proof, the function @{ML_ind "Goal.prove"}~@{text "ctxt xs As C
  tac"} sets up a goal state for proving the goal @{text C} (that is @{prop "P
  \<or> Q \<Longrightarrow> Q \<or> P"} in the proof at hand) under the assumptions @{text As}
  (which happen to be empty in this example) with the variables @{text xs}
  that will be generalised once the goal is proved (in our case @{text P} and
  @{text Q}).\footnote{FIXME: explain prove earlier} The @{text "tac"} is the
  tactic that proves the goal; it can make use of the local assumptions
  (there are none in this example). The tactics @{ML_ind etac}, @{ML_ind
  rtac} and @{ML_ind atac} in the code above correspond roughly to @{text
  erule}, @{text rule} and @{text assumption}, respectively. The operator
  @{ML_ind THEN} strings the tactics together.

  \begin{readmore}
  To learn more about the function @{ML_ind prove in Goal} see
  \isccite{sec:results} and the file @{ML_file "Pure/goal.ML"}. See @{ML_file
  "Pure/tactic.ML"} and @{ML_file "Pure/tactical.ML"} for the code of basic
  tactics and tactic combinators; see also Chapters 3 and 4 in the old
  Isabelle Reference Manual, and Chapter 3 in the Isabelle/Isar Implementation
  Manual.
  \end{readmore}

  Note that in the code above we use antiquotations for referencing the
  theorems. Many theorems also have ML-bindings with the same name.
  Therefore, we could also just have written @{ML "etac disjE 1"}, or, in
  cases where there is no ML-binding, obtain the theorem dynamically using
  the function @{ML thm}; for example \mbox{@{ML "etac (thm \"disjE\") 1"}}.
  Both ways, however, are considered bad style! The reason is that the
  binding for @{ML disjE} can be re-assigned by the user and thus one does
  not have complete control over which theorem is actually applied. This
  problem is nicely prevented by using antiquotations, because then the
  theorems are fixed statically at compile-time.

  During the development of automatic proof procedures, you will often find
  it necessary to test a tactic on examples. This can be conveniently done
  with the command \isacommand{apply}@{text "(tactic \<verbopen> \<dots> \<verbclose>)"}.
  Consider the following sequence of tactics
*}

ML{*val foo_tac =
  (etac @{thm disjE} 1
   THEN rtac @{thm disjI2} 1
   THEN atac 1
   THEN rtac @{thm disjI1} 1
   THEN atac 1)*}

text {* and the Isabelle proof: *}

lemma "P \<or> Q \<Longrightarrow> Q \<or> P"
apply(tactic {* foo_tac *})
done

text {*
  By using @{text "tactic \<verbopen> \<dots> \<verbclose>"} you can call the tactic
  @{ML foo_tac}, or any other function that returns a tactic, from the
  user-level of Isabelle.

  The tactic @{ML foo_tac} is just a sequence of simple tactics stringed
  together by @{ML THEN}. As can be seen, each simple tactic in @{ML foo_tac}
  has a hard-coded number that stands for the subgoal analysed by the
  tactic (@{text "1"} stands for the first, or top-most, subgoal). This
  hard-coding of subgoals is sometimes wanted, but usually it is not. To
  avoid the explicit numbering, you can write
*}

ML{*val foo_tac' =
  (etac @{thm disjE}
   THEN' rtac @{thm disjI2}
   THEN' atac
   THEN' rtac @{thm disjI1}
   THEN' atac)*}text_raw{*\label{tac:footacprime}*}

text {*
  where @{ML_ind THEN'} is used instead of @{ML THEN}. With @{ML foo_tac'}
  you can give the number for the subgoal explicitly when the tactic is
  called. So in the next proof you can first discharge the second subgoal,
  and subsequently the first.
*}

lemma "P1 \<or> Q1 \<Longrightarrow> Q1 \<or> P1"
  and "P2 \<or> Q2 \<Longrightarrow> Q2 \<or> P2"
apply(tactic {* foo_tac' 2 *})
apply(tactic {* foo_tac' 1 *})
done

text {*
  This kind of addressing is more difficult to achieve when the subgoal is
  hard-coded inside the tactic. For most operators that combine tactics
  (@{ML THEN} is only one such operator) a ``primed'' version exists.

  The tactics @{ML foo_tac} and @{ML foo_tac'} are very specific for
  analysing only goals of the form @{prop "P \<or> Q \<Longrightarrow> Q \<or> P"}. If the goal is
  not of this form, then these tactics return the error message:

  \begin{isabelle}
  @{text "*** empty result sequence -- proof command failed"}\\
  @{text "*** At command \"apply\"."}
  \end{isabelle}

  This means they failed.\footnote{To be precise, tactics do not produce
  this error message; it originates from the \isacommand{apply} wrapper.}
  The reason for this error message is that tactics are functions mapping a
  goal state to a (lazy) sequence of successor states. Hence the type of a
  tactic is:
*}

ML{*type tactic = thm -> thm Seq.seq*}

text {*
  By convention, if a tactic fails, then it should return the empty
  sequence. Therefore, if you write your own tactics, they should not raise
  exceptions willy-nilly; only in very grave failure situations should a
  tactic raise the exception @{text THM}.
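
  For example, an exception-raising tactic can be turned into a well-behaved
  one by handing back the empty sequence whenever an exception occurs. The
  function below is a minimal sketch of this idea (the name @{text
  no_exn_tac} is made up for illustration):
*}

ML{*(* return the empty sequence, instead of raising THM, on failure *)
fun no_exn_tac tac thm = tac thm handle THM _ => Seq.empty*}

text {*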

  The simplest tactics are @{ML_ind no_tac} and @{ML_ind all_tac}. The
  first returns the empty sequence and is defined as
*}

ML{*fun no_tac thm = Seq.empty*}

text {*
  which means @{ML no_tac} always fails. The second returns the given
  theorem wrapped in a single-member sequence; it is defined as
*}

ML{*fun all_tac thm = Seq.single thm*}

text {*
  which means @{ML all_tac} always succeeds, but also does not make any
  progress with the proof.

  The lazy list of possible successor goal states shows through at the
  user-level of Isabelle when using the command \isacommand{back}. For
  instance in the following proof there are two possibilities for how to
  apply @{ML foo_tac'}: either using the first assumption or the second.
*}

lemma "\<lbrakk>P \<or> Q; P \<or> Q\<rbrakk> \<Longrightarrow> Q \<or> P"
apply(tactic {* foo_tac' 1 *})
back
done

text {*
  By using \isacommand{back}, we construct the proof that uses the second
  assumption. While it does not really matter in the proof above which
  assumption is used, in more interesting cases provability might depend on
  exploring different possibilities.

  \begin{readmore}
  See @{ML_file "Pure/General/seq.ML"} for the implementation of lazy
  sequences. In day-to-day Isabelle programming, however, one rarely
  constructs sequences explicitly, but uses the predefined tactics and
  tactic combinators instead.
  \end{readmore}
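
  Occasionally it is nevertheless useful to look at the result sequence of a
  tactic directly, for example by pulling off its first element. A minimal
  sketch of such a function (the name @{text first_result} is made up for
  illustration) is:
*}

ML{*(* return the first successor goal state produced by a tactic *)
fun first_result tac goal =
  case Seq.pull (tac goal) of
    NONE => error "tactic failed"
  | SOME (thm, _) => thm*}

text {*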

  It might be surprising that tactics, which transform one goal state to the
  next, are functions from theorems to sequences of theorems. The surprise
  dissolves once one knows that every goal state is indeed a theorem. To
  shed more light on this, let us modify the code of @{ML all_tac} to obtain
  the following tactic
*}

ML{*fun my_print_tac ctxt thm =
let
  val _ = tracing (string_of_thm_no_vars ctxt thm)
in
  Seq.single thm
end*}

text_raw {*
\begin{figure}[p]
\begin{boxedminipage}{\textwidth}
\begin{isabelle}
*}
notation (output) "prop" ("#_" [1000] 1000)

lemma shows "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"
apply(tactic {* my_print_tac @{context} *})

txt{* \begin{minipage}{\textwidth}
      @{subgoals [display]}
      \end{minipage}\medskip

      \begin{minipage}{\textwidth}
      \small\colorbox{gray!20}{
      \begin{tabular}{@ {}l@ {}}
      internal goal state:\\
      @{raw_goal_state}
      \end{tabular}}
      \end{minipage}\medskip
*}

apply(rule conjI)
apply(tactic {* my_print_tac @{context} *})

txt{* \begin{minipage}{\textwidth}
      @{subgoals [display]}
      \end{minipage}\medskip

      \begin{minipage}{\textwidth}
      \small\colorbox{gray!20}{
      \begin{tabular}{@ {}l@ {}}
      internal goal state:\\
      @{raw_goal_state}
      \end{tabular}}
      \end{minipage}\medskip
*}

apply(assumption)
apply(tactic {* my_print_tac @{context} *})

txt{* \begin{minipage}{\textwidth}
      @{subgoals [display]}
      \end{minipage}\medskip

      \begin{minipage}{\textwidth}
      \small\colorbox{gray!20}{
      \begin{tabular}{@ {}l@ {}}
      internal goal state:\\
      @{raw_goal_state}
      \end{tabular}}
      \end{minipage}\medskip
*}

apply(assumption)
apply(tactic {* my_print_tac @{context} *})

txt{* \begin{minipage}{\textwidth}
      @{subgoals [display]}
      \end{minipage}\medskip

      \begin{minipage}{\textwidth}
      \small\colorbox{gray!20}{
      \begin{tabular}{@ {}l@ {}}
      internal goal state:\\
      @{raw_goal_state}
      \end{tabular}}
      \end{minipage}\medskip
*}
(*<*)oops(*>*)
text_raw {*
\end{isabelle}
\end{boxedminipage}
\caption{The figure shows a proof where each intermediate goal state is
printed by the Isabelle system and by @{ML my_print_tac}. The latter shows
the goal state as represented internally (highlighted boxes). This
tactic shows that every goal state in Isabelle is represented by a theorem:
when you start the proof of \mbox{@{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"}} the theorem is
@{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B) \<Longrightarrow> #(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}; when you finish the proof
the theorem is @{text "#(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}.\label{fig:goalstates}}
\end{figure}
*}

text {*
  which prints out the given theorem (using the string-function defined in
  Section~\ref{sec:printing}) and then behaves like @{ML all_tac}. With this
  tactic we are in the position to inspect every goal state in a proof.
  Consider now the proof in Figure~\ref{fig:goalstates}: as can be seen,
  internally every goal state is an implication of the form

  @{text[display] "A\<^isub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^isub>n \<Longrightarrow> #C"}

  where @{term C} is the goal to be proved and the @{term "A\<^isub>i"} are
  the subgoals. So after setting up the lemma, the goal state is always of
  the form @{text "C \<Longrightarrow> #C"}; when the proof is finished we are left with
  @{text "#C"}. Since the goal @{term C} can potentially be an implication,
  there is a ``protector'' wrapped around it (the wrapper is the outermost
  constant @{text "Const (\"prop\", bool \<Rightarrow> bool)"}; in the figure we make it
  visible as a @{text #}). This wrapper prevents the premises of @{text C}
  from being misinterpreted as open subgoals. While tactics can operate on
  the subgoals (the @{text "A\<^isub>i"} above), they are expected to leave
  the conclusion @{term C} intact, with the exception of possibly
  instantiating schematic variables. If you use the predefined tactics,
  which we describe in the next section, this will always be the case.

  \begin{readmore}
  For more information about the internals of goals see
  \isccite{sec:tactical-goals}.
  \end{readmore}
*}

section {* Simple Tactics\label{sec:simpletacs} *}

text {*
  Let us start with explaining the simple tactic @{ML_ind print_tac}, which
  is quite useful for low-level debugging of tactics. It just prints out a
  message and the current goal state. Unlike @{ML my_print_tac} shown
  earlier, it prints the goal state as the user would see it. For example,
  processing the proof
*}

lemma shows "False \<Longrightarrow> True"
apply(tactic {* print_tac "foo message" *})
txt{* gives:\medskip

      \begin{minipage}{\textwidth}\small
      @{text "foo message"}\\[3mm]
      @{prop "False \<Longrightarrow> True"}\\
      @{text " 1. False \<Longrightarrow> True"}\\
      \end{minipage}
*}
(*<*)oops(*>*)

text {*
  A simple tactic for easy discharge of any proof obligations is @{ML_ind
  cheat_tac in SkipProof}. This tactic corresponds to the Isabelle command
  \isacommand{sorry} and is sometimes useful during the development of
  tactics.
*}

lemma shows "False" and "Goldbach_conjecture"
apply(tactic {* SkipProof.cheat_tac @{theory} *})
txt{*\begin{minipage}{\textwidth}
     @{subgoals [display]}
     \end{minipage}*}
(*<*)oops(*>*)

text {*
  This tactic works, however, only if the quick-and-dirty mode of Isabelle
  is switched on.
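
  In the version of Isabelle described here, this mode is an ML flag, so it
  can, for example, be switched on as in the following sketch (the exact
  interface differs between Isabelle versions):
*}

ML{*(* enable the quick-and-dirty mode; needed for cheat_tac *)
val _ = quick_and_dirty := true*}

text {*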

  Another simple tactic is the function @{ML_ind atac}, which, as shown in
  the previous section, corresponds to @{text assumption}.
*}

lemma shows "P \<Longrightarrow> P"
apply(tactic {* atac 1 *})
txt{*\begin{minipage}{\textwidth}
     @{subgoals [display]}
     \end{minipage}*}
(*<*)oops(*>*)

text {*
  Similarly, @{ML_ind rtac}, @{ML_ind dtac}, @{ML_ind etac} and
  @{ML_ind ftac} correspond (roughly) to @{text rule}, @{text drule}, @{text
  erule} and @{text frule}, respectively. Each of them takes a theorem as
  argument and attempts to apply it to a goal. Below are three
  self-explanatory examples.
*}

lemma shows "P \<and> Q"
apply(tactic {* rtac @{thm conjI} 1 *})
txt{*\begin{minipage}{\textwidth}
     @{subgoals [display]}
     \end{minipage}*}
(*<*)oops(*>*)

lemma shows "P \<and> Q \<Longrightarrow> False"
apply(tactic {* etac @{thm conjE} 1 *})
txt{*\begin{minipage}{\textwidth}
     @{subgoals [display]}
     \end{minipage}*}
(*<*)oops(*>*)

lemma shows "False \<and> True \<Longrightarrow> False"
apply(tactic {* dtac @{thm conjunct2} 1 *})
txt{*\begin{minipage}{\textwidth}
     @{subgoals [display]}
     \end{minipage}*}
(*<*)oops(*>*)

text {*
  The function @{ML_ind resolve_tac} is similar to @{ML_ind rtac}, except
  that it expects a list of theorems as argument. From this list it will
  apply the first applicable theorem (later theorems that are also
  applicable can be explored via the lazy sequences mechanism). Given the
  code
*}

ML{*val resolve_xmp_tac = resolve_tac [@{thm impI}, @{thm conjI}]*}

text {*
  an example for @{ML resolve_tac} is the following proof, where first an
  outermost implication is analysed and then an outermost conjunction.
*}

lemma shows "C \<longrightarrow> (A \<and> B)" and "(A \<longrightarrow> B) \<and> C"
apply(tactic {* resolve_xmp_tac 1 *})
apply(tactic {* resolve_xmp_tac 2 *})
txt{*\begin{minipage}{\textwidth}
     @{subgoals [display]}
     \end{minipage}*}
(*<*)oops(*>*)

text {*
  Similar versions taking a list of theorems exist for the tactics
  @{ML dtac} (@{ML_ind dresolve_tac}), @{ML etac}
  (@{ML_ind eresolve_tac}) and so on.
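
  For example, an elimination analogue of @{ML resolve_xmp_tac} might be
  written as follows (a sketch for illustration):
*}

ML{*(* eliminate an outermost disjunction or conjunction in the assumptions *)
val eresolve_xmp_tac = eresolve_tac [@{thm disjE}, @{thm conjE}]*}

text {*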


  Another simple tactic is @{ML_ind cut_facts_tac}. It inserts a list of
  theorems into the assumptions of the current goal state. For example
*}

lemma shows "True \<noteq> False"
apply(tactic {* cut_facts_tac [@{thm True_def}, @{thm False_def}] 1 *})
txt{* produces the goal state\medskip

      \begin{minipage}{\textwidth}
      @{subgoals [display]}
      \end{minipage}*}
(*<*)oops(*>*)

text {*
  Since rules are applied using higher-order unification, an automatic proof
  procedure might become too fragile if it just applies inference rules as
  shown above. The reason is that a number of rules introduce meta-variables
  into the goal state. Consider for example the proof
*}

lemma shows "\<forall>x \<in> A. P x \<Longrightarrow> Q x"
apply(tactic {* dtac @{thm bspec} 1 *})
txt{*\begin{minipage}{\textwidth}
     @{subgoals [display]}
     \end{minipage}*}
(*<*)oops(*>*)

text {*
  where the application of rule @{text bspec} generates two subgoals
  involving the meta-variable @{text "?x"}. Now, if you are not careful,
  tactics applied to the first subgoal might instantiate this meta-variable
  in such a way that the second subgoal becomes unprovable. If it is clear
  what the @{text "?x"} should be, then this situation can be avoided by
  introducing a more constrained version of the @{text bspec}-rule. Such
  constraints can be given by pre-instantiating theorems with other
  theorems. One function to do this is @{ML_ind "RS"}

  @{ML_response_fake [display,gray]
  "@{thm disjI1} RS @{thm conjI}" "\<lbrakk>?P1; ?Q\<rbrakk> \<Longrightarrow> (?P1 \<or> ?Q1) \<and> ?Q"}

  which in the example instantiates the first premise of the @{text
  conjI}-rule with the rule @{text disjI1}. If the instantiation is
  impossible, as in the case of

  @{ML_response_fake_both [display,gray]
  "@{thm conjI} RS @{thm mp}"
"*** Exception- THM (\"RSN: no unifiers\", 1,
[\"\<lbrakk>?P; ?Q\<rbrakk> \<Longrightarrow> ?P \<and> ?Q\", \"\<lbrakk>?P \<longrightarrow> ?Q; ?P\<rbrakk> \<Longrightarrow> ?Q\"]) raised"}

  then the function raises an exception. The function @{ML_ind RSN} is
  similar to @{ML RS}, but takes an additional number as argument that makes
  explicit which premise should be instantiated.
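
  For example, instantiating the second premise of @{text conjI} instead of
  the first gives (the response is a sketch of what one should expect):

  @{ML_response_fake [display,gray]
  "@{thm disjI1} RSN (2, @{thm conjI})" "\<lbrakk>?P; ?P1\<rbrakk> \<Longrightarrow> ?P \<and> (?P1 \<or> ?Q1)"}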

  To improve readability of the theorems we shall produce below, we will use
  the function @{ML no_vars} from Section~\ref{sec:printing}, which
  transforms schematic variables into free ones. Using this function for the
  first @{ML RS}-expression above produces the more readable result:

  @{ML_response_fake [display,gray]
  "no_vars @{context} (@{thm disjI1} RS @{thm conjI})" "\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> (P \<or> Qa) \<and> Q"}

  If you want to instantiate more than one premise of a theorem, you can use
  the function @{ML_ind MRS}:

  @{ML_response_fake [display,gray]
  "no_vars @{context} ([@{thm disjI1}, @{thm disjI2}] MRS @{thm conjI})"
  "\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> (P \<or> Qa) \<and> (Pa \<or> Q)"}

  If you need to instantiate lists of theorems, you can use the
  functions @{ML RL} and @{ML_ind MRL}. For example in the code below,
  every theorem in the second list is instantiated with every
  theorem in the first.

  @{ML_response_fake [display,gray]
"map (no_vars @{context})
  ([@{thm impI}, @{thm disjI2}] RL [@{thm conjI}, @{thm disjI1}])"
"[\<lbrakk>P \<Longrightarrow> Q; Qa\<rbrakk> \<Longrightarrow> (P \<longrightarrow> Q) \<and> Qa,
 \<lbrakk>Q; Qa\<rbrakk> \<Longrightarrow> (P \<or> Q) \<and> Qa,
 (P \<Longrightarrow> Q) \<Longrightarrow> (P \<longrightarrow> Q) \<or> Qa,
 Q \<Longrightarrow> (P \<or> Q) \<or> Qa]"}

  \begin{readmore}
  The combinators for instantiating theorems are defined in @{ML_file
  "Pure/drule.ML"}.
  \end{readmore}

  Often proofs on the ML-level involve elaborate operations on assumptions
  and @{text "\<And>"}-quantified variables. To do such operations using the
  basic tactics shown so far is very unwieldy and brittle. Some convenience
  and safety is provided by the functions @{ML_ind FOCUS in Subgoal} and
  @{ML_ind SUBPROOF}. These tactics fix the parameters
  and bind the various components of a goal state to a record.
  To see what happens, use the function defined in Figure~\ref{fig:sptac},
  which takes a record and just prints out its contents. Consider
  now the proof:
*}

text_raw{*
\begin{figure}[t]
\begin{minipage}{\textwidth}
\begin{isabelle}
*}

ML{*fun foc_tac {prems, params, asms, concl, context, schematics} =
let
  val string_of_params = string_of_cterms context (map snd params)
  val string_of_asms = string_of_cterms context asms
  val string_of_concl = string_of_cterm context concl
  val string_of_prems = string_of_thms_no_vars context prems
  val string_of_schms = string_of_cterms context (map fst (snd schematics))

  val strs = ["params: " ^ string_of_params,
              "schematics: " ^ string_of_schms,
              "assumptions: " ^ string_of_asms,
              "conclusion: " ^ string_of_concl,
              "premises: " ^ string_of_prems]
in
  tracing (cat_lines strs); all_tac
end*}

text_raw{*
\end{isabelle}
\end{minipage}
\caption{A function that prints out the various parameters provided by
@{ML FOCUS in Subgoal} and @{ML SUBPROOF}. It uses the functions defined
in Section~\ref{sec:printing} for extracting strings from @{ML_type cterm}s
and @{ML_type thm}s.\label{fig:sptac}}
\end{figure}
*}

lemma shows "\<And>x y. A x y \<Longrightarrow> B y x \<longrightarrow> C (?z y) x"
apply(tactic {* Subgoal.FOCUS foc_tac @{context} 1 *})

txt {*
  The tactic produces the following printout:

  \begin{quote}\small
  \begin{tabular}{ll}
  params: & @{term x}, @{term y}\\
  schematics: & @{text ?z}\\
  assumptions: & @{term "A x y"}\\
  conclusion: & @{term "B y x \<longrightarrow> C (z y) x"}\\
  premises: & @{term "A x y"}
  \end{tabular}
  \end{quote}

  (FIXME: Find out how nowadays the schematics are handled)

  Notice in the actual output the brown colour of the variables @{term x}
  and @{term y}. Although they are parameters in the original goal, they are
  fixed inside the tactic. By convention these fixed variables are printed
  in brown colour. The same holds for the schematic variable @{text ?z}. The
  assumption, or premise, @{prop "A x y"} is bound as @{ML_type cterm} to
  the record-variable @{text asms}, but also as @{ML_type thm} to @{text
  prems}.

  If we continue the proof script by applying the @{text impI}-rule
*}

apply(rule impI)
apply(tactic {* Subgoal.FOCUS foc_tac @{context} 1 *})

txt {*
  then the tactic prints out:

  \begin{quote}\small
  \begin{tabular}{ll}
  params: & @{term x}, @{term y}\\
  schematics: & @{text ?z}\\
  assumptions: & @{term "A x y"}, @{term "B y x"}\\
  conclusion: & @{term "C (z y) x"}\\
  premises: & @{term "A x y"}, @{term "B y x"}
  \end{tabular}
  \end{quote}
*}
(*<*)oops(*>*)

text {*
  Now also @{term "B y x"} is an assumption bound to @{text asms} and
  @{text prems}.

  The difference between the tactics @{ML SUBPROOF} and @{ML FOCUS in
  Subgoal} is that the former expects that the goal is solved completely,
  which the latter does not. Also, @{ML SUBPROOF} cannot instantiate any
  schematic variables.

  One convenience of both @{ML FOCUS in Subgoal} and @{ML SUBPROOF} is that
  we can apply the assumptions using the usual tactics, because the
  parameter @{text prems} contains them as theorems. With this you can
  easily implement a tactic that behaves almost like @{ML atac}:
*}

ML{*val atac' = Subgoal.FOCUS (fn {prems, ...} => resolve_tac prems 1)*}

text {*
  If you apply @{ML atac'} to the next lemma
*}

lemma shows "\<lbrakk>B x y; A x y; C x y\<rbrakk> \<Longrightarrow> A x y"
apply(tactic {* atac' @{context} 1 *})
txt{* it will produce
      @{subgoals [display]} *}
(*<*)oops(*>*)

text {*
  Notice that @{ML atac'} inside @{ML FOCUS in Subgoal} calls @{ML
  resolve_tac} with the subgoal number @{text "1"} and also the outer call
  to @{ML FOCUS in Subgoal} in the \isacommand{apply}-step uses @{text "1"}.
  This is another advantage of @{ML FOCUS in Subgoal} and @{ML SUBPROOF}:
  the addressing inside it is completely local to the tactic inside the
  subproof. It is therefore possible to also apply @{ML atac'} to the second
  goal by just writing:
*}

lemma shows "True" and "\<lbrakk>B x y; A x y; C x y\<rbrakk> \<Longrightarrow> A x y"
apply(tactic {* atac' @{context} 2 *})
apply(rule TrueI)
done


text {*
  \begin{readmore}
  The functions @{ML FOCUS in Subgoal} and @{ML SUBPROOF} are defined in
  @{ML_file "Pure/subgoal.ML"} and also described in
  \isccite{sec:results}.
  \end{readmore}

  Similar but less powerful functions than @{ML FOCUS in Subgoal},
  respectively @{ML SUBPROOF}, are @{ML_ind SUBGOAL} and @{ML_ind CSUBGOAL}.
  They allow you to inspect a given subgoal (the former presents the subgoal
  as a @{ML_type term}, while the latter as a @{ML_type cterm}). With this
  you can implement a tactic that applies a rule according to the topmost
  logic connective in the subgoal (to illustrate this we only analyse a few
  connectives). The code of the tactic is as follows.
*}

ML %linenosgray{*fun select_tac (t, i) =
  case t of
    @{term "Trueprop"} $ t' => select_tac (t', i)
  | @{term "op \<Longrightarrow>"} $ _ $ t' => select_tac (t', i)
  | @{term "op \<and>"} $ _ $ _ => rtac @{thm conjI} i
  | @{term "op \<longrightarrow>"} $ _ $ _ => rtac @{thm impI} i
  | @{term "Not"} $ _ => rtac @{thm notI} i
  | Const (@{const_name "All"}, _) $ _ => rtac @{thm allI} i
  | _ => all_tac*}text_raw{*\label{tac:selecttac}*}

text {*
  The input of the function is a term representing the subgoal and a number
  specifying the subgoal of interest. In Line 3 you need to descend under
  the outermost @{term "Trueprop"} in order to get to the connective you
  like to analyse. Otherwise goals like @{prop "A \<and> B"} are not properly
  analysed. Similarly with meta-implications in the next line. While for the
  first five patterns we can use the @{text "@term"}-antiquotation to
  construct the patterns, the pattern in Line 8 cannot be constructed in
  this way. The reason is that an antiquotation would fix the type of the
  quantified variable. So you really have to construct this pattern using
  the basic term-constructors. This is not necessary in the other cases,
  because their type is always fixed to function types involving only the
  type @{typ bool}. (See Section \ref{sec:terms_types_manually} about
  constructing terms manually.) For the catch-all pattern, we chose to just
  return @{ML all_tac}. Consequently, @{ML select_tac} never fails.


  Let us now see how to apply this tactic. Consider the four goals:
*}


lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow> C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
apply(tactic {* SUBGOAL select_tac 4 *})
apply(tactic {* SUBGOAL select_tac 3 *})
apply(tactic {* SUBGOAL select_tac 2 *})
apply(tactic {* SUBGOAL select_tac 1 *})
txt{* \begin{minipage}{\textwidth}
      @{subgoals [display]}
      \end{minipage} *}
(*<*)oops(*>*)

text {*
  where in all but the last the tactic applied an introduction rule.
  Note that we applied the tactic to the goals in ``reverse'' order.
  This is a trick in order to be independent from the subgoals that are
  produced by the rule. If we had applied it in the other order
*}

lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow> C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
apply(tactic {* SUBGOAL select_tac 1 *})
apply(tactic {* SUBGOAL select_tac 3 *})
apply(tactic {* SUBGOAL select_tac 4 *})
apply(tactic {* SUBGOAL select_tac 5 *})
(*<*)oops(*>*)

text {*
  then we have to be careful to not apply the tactic to the two subgoals
  produced by the first goal. Doing so can result in quite messy code. In
  contrast, the ``reverse application'' is easy to implement.

  Of course, this example is contrived: there are much simpler methods
  available in Isabelle for implementing a tactic analysing a goal according
  to its topmost connective. These simpler methods use tactic combinators,
  which we will explain in the next section.
*}

section {* Tactic Combinators *}

text {*
  The purpose of tactic combinators is to build compound tactics out of
  smaller tactics. In the previous section we already used @{ML THEN}, which
  just strings together two tactics in a sequence. For example:
*}

lemma shows "(Foo \<and> Bar) \<and> False"
apply(tactic {* rtac @{thm conjI} 1 THEN rtac @{thm conjI} 1 *})
txt {* \begin{minipage}{\textwidth}
       @{subgoals [display]}
       \end{minipage} *}
(*<*)oops(*>*)

text {*
  If you want to avoid the hard-coded subgoal addressing, then, as
  seen earlier, you can use the ``primed'' version of @{ML THEN}.
  For example:
*}

lemma shows "(Foo \<and> Bar) \<and> False"
apply(tactic {* (rtac @{thm conjI} THEN' rtac @{thm conjI}) 1 *})
txt {* \begin{minipage}{\textwidth}
       @{subgoals [display]}
       \end{minipage} *}
(*<*)oops(*>*)

text {*
  Here you have to specify the subgoal of interest only once and
  it is consistently applied to the component tactics.
  For most tactic combinators such a ``primed'' version exists and
  in what follows we will usually prefer it over the ``unprimed'' one.

  If there is a list of tactics that should all be tried out in
  sequence, you can use the combinator @{ML_ind EVERY'}. For example
  the function @{ML foo_tac'} from page~\pageref{tac:footacprime} can also
  be written as:
*}

ML{*val foo_tac'' = EVERY' [etac @{thm disjE}, rtac @{thm disjI2},
                            atac, rtac @{thm disjI1}, atac]*}

text {*
  There is even another way of implementing this tactic: in automatic proof
  procedures (in contrast to tactics that might be called by the user) there
  are often long lists of tactics that are applied to the first
  subgoal. Instead of writing the code above and then calling
  @{ML "foo_tac'' 1"}, you can also just write
*}

ML{*val foo_tac1 = EVERY1 [etac @{thm disjE}, rtac @{thm disjI2},
                           atac, rtac @{thm disjI1}, atac]*}

text {*
  and call @{ML foo_tac1}.
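  For example, it proves the disjunction goal from the beginning of this
  chapter in one step:
*}

lemma shows "P \<or> Q \<Longrightarrow> Q \<or> P"
apply(tactic {* foo_tac1 *})
done

text {*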

  With the combinators @{ML THEN'}, @{ML EVERY'} and @{ML_ind EVERY1} it
  must be guaranteed that all component tactics successfully apply;
  otherwise the whole tactic will fail. If you rather want to try out a
  number of tactics, then you can use the combinator @{ML_ind ORELSE'} for
  two tactics, and @{ML_ind FIRST'} (or @{ML_ind FIRST1}) for a list of
  tactics. For example, the tactic
*}

ML{*val orelse_xmp_tac = rtac @{thm disjI1} ORELSE' rtac @{thm conjI}*}

text {*
  will first try out whether rule @{text disjI1} applies and, in case of
  failure, will try @{text conjI}. To see this consider the proof
*}

lemma shows "True \<and> False" and "Foo \<or> Bar"
apply(tactic {* orelse_xmp_tac 2 *})
apply(tactic {* orelse_xmp_tac 1 *})
txt {* which results in the goal state

       \begin{minipage}{\textwidth}
       @{subgoals [display]}
       \end{minipage}
*}
(*<*)oops(*>*)


text {*
  Using @{ML FIRST'} we can simplify our @{ML select_tac} from
  Page~\pageref{tac:selecttac} as follows:
*}

ML{*val select_tac' = FIRST' [rtac @{thm conjI}, rtac @{thm impI},
                              rtac @{thm notI}, rtac @{thm allI},
                              K all_tac]*}text_raw{*\label{tac:selectprime}*}

text {*
  Since we like to mimic the behaviour of @{ML select_tac} as closely as
  possible, we must include @{ML all_tac} at the end of the list, otherwise
  the tactic will fail if no rule applies (we also have to wrap @{ML
  all_tac} using the @{ML K}-combinator, because it does not take a subgoal
  number as argument). You can test the tactic on the same goals:
*}

lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow> C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
apply(tactic {* select_tac' 4 *})
apply(tactic {* select_tac' 3 *})
apply(tactic {* select_tac' 2 *})
apply(tactic {* select_tac' 1 *})
txt{* \begin{minipage}{\textwidth}
      @{subgoals [display]}
      \end{minipage} *}
(*<*)oops(*>*)

text {*
  Since such repeated applications of a tactic to the reverse order of
  \emph{all} subgoals is quite common, there is the tactic combinator
  @{ML_ind ALLGOALS} that simplifies this. Using this combinator you can
  simply write: *}

lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow> C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
apply(tactic {* ALLGOALS select_tac' *})
txt{* \begin{minipage}{\textwidth}
      @{subgoals [display]}
      \end{minipage} *}
(*<*)oops(*>*)

text {*
  Remember that we chose to implement @{ML select_tac'} so that it
  always succeeds by adding @{ML all_tac} at the end of the tactic
  list. The same can be achieved with the tactic combinator @{ML_ind TRY}.
  For example:
*}

ML{*val select_tac'' = TRY o FIRST' [rtac @{thm conjI}, rtac @{thm impI},
                                     rtac @{thm notI}, rtac @{thm allI}]*}
text_raw{*\label{tac:selectprimeprime}*}

text {*
  This tactic behaves in the same way as @{ML select_tac'}: it tries out
  one of the given tactics and if none applies leaves the goal state
  unchanged. This, however, can be potentially very confusing when visible
  to the user, for example, in cases where the goal is of the form
*}

lemma shows "E \<Longrightarrow> F"
apply(tactic {* select_tac' 1 *})
txt{* \begin{minipage}{\textwidth}
      @{subgoals [display]}
      \end{minipage} *}
(*<*)oops(*>*)

text {*
  In this case no rule applies, but because of @{ML TRY} or the inclusion
  of @{ML all_tac} the tactics do not fail. The problem with this is that
  for the user there is little chance to see whether or not progress in the
  proof has been made. By convention therefore, tactics visible to the user
  should either change something or fail.

  To comply with this convention, we could simply delete the @{ML "K
  all_tac"} from the end of the theorem list. As a result @{ML select_tac'}
  would only succeed on goals where it can make progress. But for the sake
  of argument, let us suppose that this deletion is \emph{not} an option. In
  such cases, you can use the combinator @{ML_ind CHANGED} to make sure the
  subgoal has been changed by the tactic. Because now
*}

lemma shows "E \<Longrightarrow> F"
apply(tactic {* CHANGED (select_tac' 1) *})(*<*)?(*>*)
txt{* gives the error message:
      \begin{isabelle}
      @{text "*** empty result sequence -- proof command failed"}\\
      @{text "*** At command \"apply\"."}
      \end{isabelle}
*}(*<*)oops(*>*)


text {*
  We can further extend @{ML select_tac'} so that it is applied not just to
  the topmost connective, but also to the ones immediately ``underneath'',
  i.e.~so that the goal is analysed completely. For this you can use the
  tactic combinator @{ML_ind REPEAT}. As an example suppose the following
  tactic
*}

ML{*val repeat_xmp_tac = REPEAT (CHANGED (select_tac' 1))*}

text {* which applied to the proof *}

lemma shows "((\<not>A) \<and> (\<forall>x. B x)) \<and> (C \<longrightarrow> D)"
apply(tactic {* repeat_xmp_tac *})
txt{* produces

      \begin{minipage}{\textwidth}
      @{subgoals [display]}
      \end{minipage} *}
(*<*)oops(*>*)

text {*
  Here it is crucial that @{ML select_tac'} is prefixed with @{ML CHANGED},
  because otherwise @{ML REPEAT} runs into an infinite loop (it applies the
  tactic as long as it succeeds). The function @{ML_ind REPEAT1} is
  similar, but runs the tactic at least once (failing if this is not
  possible).

  If you are after the ``primed'' version of @{ML repeat_xmp_tac}, then you
  can implement it as
*}

ML{*val repeat_xmp_tac' = REPEAT o CHANGED o select_tac'*}

text {*
  since there are no ``primed'' versions of @{ML REPEAT} and @{ML CHANGED}.
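  Applied to the goal from above, it behaves, as one would expect, like
  @{ML repeat_xmp_tac}:
*}

lemma shows "((\<not>A) \<and> (\<forall>x. B x)) \<and> (C \<longrightarrow> D)"
apply(tactic {* repeat_xmp_tac' 1 *})
txt{* \begin{minipage}{\textwidth}
      @{subgoals [display]}
      \end{minipage} *}
(*<*)oops(*>*)

text {*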

  If you look closely at the goal state above, then you see the tactics
  @{ML repeat_xmp_tac} and @{ML repeat_xmp_tac'} are not yet quite what we
  are after: the problem is that goals 2 and 3 are not analysed. This is
  because the tactic is applied repeatedly only to the first subgoal. To
  analyse also all resulting subgoals, you can use the tactic combinator
  @{ML_ind REPEAT_ALL_NEW}. Given the tactic
*}

ML{*val repeat_all_new_xmp_tac = REPEAT_ALL_NEW (CHANGED o select_tac')*}

text {*
  you see that the following goal
*}

lemma shows "((\<not>A) \<and> (\<forall>x. B x)) \<and> (C \<longrightarrow> D)"
apply(tactic {* repeat_all_new_xmp_tac 1 *})
txt{* \begin{minipage}{\textwidth}
      @{subgoals [display]}
      \end{minipage} *}
(*<*)oops(*>*)

text {*
  is completely analysed according to the theorems we chose to
  include in @{ML select_tac'}.

  Recall that tactics produce a lazy sequence of successor goal states.
  These states can be explored using the command \isacommand{back}. For
  example
*}

lemma "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
apply(tactic {* etac @{thm disjE} 1 *})
txt{* applies the rule to the first assumption yielding the goal
      state:\smallskip

      \begin{minipage}{\textwidth}
      @{subgoals [display]}
      \end{minipage}\smallskip

      After typing
*}
(*<*)
oops
lemma "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
apply(tactic {* etac @{thm disjE} 1 *})
(*>*)
back
txt{* the rule now applies to the second assumption.\smallskip

      \begin{minipage}{\textwidth}
      @{subgoals [display]}
      \end{minipage} *}
(*<*)oops(*>*)

text {*
  Sometimes this leads to confusing behaviour of tactics and also has
  the potential to explode the search space for tactics.
  These problems can be avoided by prefixing the tactic with the tactic
  combinator @{ML_ind DETERM}.
*}

lemma "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
apply(tactic {* DETERM (etac @{thm disjE} 1) *})
txt {*\begin{minipage}{\textwidth}
      @{subgoals [display]}
      \end{minipage} *}
(*<*)oops(*>*)
text {*
  This combinator will prune the search space to just the first successful
  application. Attempting to apply \isacommand{back} in this goal state
  gives the error message:

  \begin{isabelle}
  @{text "*** back: no alternatives"}\\
  @{text "*** At command \"back\"."}
  \end{isabelle}

  Recall that we implemented @{ML select_tac'} on
  Page~\pageref{tac:selectprime} specifically so that it always succeeds.
  We achieved this by adding at the end the tactic @{ML all_tac}. We can
  achieve this also by using the combinator @{ML TRY}. Suppose, for
  example, the tactic
*}

ML{*val select_tac'' = FIRST' [rtac @{thm conjI}, rtac @{thm impI},
                               rtac @{thm notI}, rtac @{thm allI}]*}

text {*
  which will fail if none of the rules applies. However, if you prefix it
  as follows
*}

ML{*val select_tac''' = TRY o select_tac''*}

text {*
  then the tactic @{ML select_tac''} will be tried out and any failure is
  caught. We again have to use the construction with \mbox{@{text "TRY o
  ..."}} since there is no primed version of @{ML_ind TRY}. The tactic
  combinator @{ML_ind TRYALL} will try out a tactic on all subgoals. For
  example the tactic
*}

ML{*val triv_tac = TRYALL (rtac @{thm TrueI} ORELSE' etac @{thm FalseE})*}

text {*
  will solve all trivial subgoals involving @{term True} or @{term "False"}.
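  For example, in the proof below it solves the first two subgoals and
  leaves the third one untouched (a sketch):
*}

lemma shows "True" and "False \<Longrightarrow> Foo" and "P \<longrightarrow> P"
apply(tactic {* triv_tac *})
txt{* \begin{minipage}{\textwidth}
      @{subgoals [display]}
      \end{minipage} *}
(*<*)oops(*>*)

text {*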

  (FIXME: say something about @{ML_ind COND} and COND')

  (FIXME: PARALLEL-CHOICE PARALLEL-GOALS)

  \begin{readmore}
  Most tactic combinators described in this section are defined in
  @{ML_file "Pure/tactical.ML"}. Some combinators for the purpose of proof
  search are implemented in @{ML_file "Pure/search.ML"}.
  \end{readmore}
*}

text {*
  \begin{exercise}\label{ex:dyckhoff}
  Dyckhoff presents in \cite{Dyckhoff92} inference rules for a sequent
  calculus, called G4ip, in which no contraction rule is needed in order to
  be complete. As a result the rules applied in any order give a simple
  decision procedure for propositional intuitionistic logic. His rules are

  \begin{center}
  \def\arraystretch{2.3}
  \begin{tabular}{cc}
  \infer[Ax]{A,\varGamma \Rightarrow A}{} &
  \infer[False]{False,\varGamma \Rightarrow G}{}\\

  \infer[\wedge_L]{A \wedge B, \varGamma \Rightarrow G}{A, B, \varGamma \Rightarrow G} &
  \infer[\wedge_R]
  {\varGamma \Rightarrow A\wedge B}{\varGamma \Rightarrow A & \varGamma \Rightarrow B}\\

  \infer[\vee_L]
  {A\vee B, \varGamma \Rightarrow G}{A,\varGamma \Rightarrow G & B,\varGamma \Rightarrow G} &
  \infer[\vee_{R_1}]
  {\varGamma \Rightarrow A \vee B}{\varGamma \Rightarrow A} \hspace{3mm}
  \infer[\vee_{R_2}]
  {\varGamma \Rightarrow A \vee B}{\varGamma \Rightarrow B}\\

  \infer[\longrightarrow_{L_1}]
  {A\longrightarrow B, A, \varGamma \Rightarrow G}{B, A, \varGamma \Rightarrow G} &
  \infer[\longrightarrow_R]
  {\varGamma \Rightarrow A\longrightarrow B}{A,\varGamma \Rightarrow B}\\

  \infer[\longrightarrow_{L_2}]
  {(C \wedge D)\longrightarrow B, \varGamma \Rightarrow G}
  {C\longrightarrow (D \longrightarrow B), \varGamma \Rightarrow G} &

  \infer[\longrightarrow_{L_3}]
  {(C \vee D)\longrightarrow B, \varGamma \Rightarrow G}
  {C\longrightarrow B, D\longrightarrow B, \varGamma \Rightarrow G}\\

  \multicolumn{2}{c}{
  \infer[\longrightarrow_{L_4}]
  {(C \longrightarrow D)\longrightarrow B, \varGamma \Rightarrow G}
  {D\longrightarrow B, \varGamma \Rightarrow C \longrightarrow D &
   B, \varGamma \Rightarrow G}}\\
  \end{tabular}
  \end{center}

  Implement a tactic that explores all possibilities of applying these rules
  to a propositional formula until a goal state is reached in which all
  subgoals are discharged. Note that in Isabelle the left-rules need to be
  implemented as elimination rules. You need to prove separate lemmas
  corresponding to $\longrightarrow_{L_{1..4}}$. In order to explore all
  possibilities of applying the rules, use the tactic combinator @{ML_ind
  DEPTH_SOLVE}, which searches for a state in which all subgoals are
  solved. Add also rules for equality and run your tactic on the de Bruijn
  formulae discussed in Exercise~\ref{ex:debruijn}.
  \end{exercise}

  \footnote{\bf FIXME: explain @{ML_ind Cong_Tac.cong_tac}}
*}

section {* Simplifier Tactics *}

text {*
  A lot of convenience in reasoning with Isabelle derives from its
  powerful simplifier. The power of the simplifier is a strength and a
  weakness at the same time, because you can easily make the simplifier
  run into a loop and in general its behaviour can be difficult to
  predict. There is also a multitude of options that you can configure to
  control the behaviour of the simplifier. We describe some of them in this
  and the next section.

  There are the following five main tactics behind
  the simplifier (in parentheses is their user-level counterpart):

  \begin{isabelle}
  \begin{center}
  \begin{tabular}{l@ {\hspace{2cm}}l}
  @{ML_ind simp_tac} & @{text "(simp (no_asm))"} \\
  @{ML_ind asm_simp_tac} & @{text "(simp (no_asm_simp))"} \\
  @{ML_ind full_simp_tac} & @{text "(simp (no_asm_use))"} \\
  @{ML_ind asm_lr_simp_tac} & @{text "(simp (asm_lr))"} \\
  @{ML_ind asm_full_simp_tac} & @{text "(simp)"}
  \end{tabular}
  \end{center}
  \end{isabelle}

  All of the tactics take a simpset and an integer as argument (the latter
  as usual to specify the goal to be analysed). So the proof
*}

lemma "Suc (1 + 2) < 3 + 2"
apply(simp)
done

text {*
  corresponds on the ML-level to the tactic
*}

lemma "Suc (1 + 2) < 3 + 2"
apply(tactic {* asm_full_simp_tac @{simpset} 1 *})
done

text {*
  If the simplifier cannot make any progress, then it leaves the goal
  unchanged, i.e., does not raise any error message. That means if you use
  it to unfold a definition for a constant and this constant is not present
  in the goal state, you can still safely apply the simplifier.

  (FIXME: show rewriting of cterms)
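
  One way to rewrite a @{ML_type cterm} directly is the conversion
  @{ML_ind rewrite in Simplifier}, which rewrites with respect to a given
  simpset and returns the result as a meta-equation. The response below is
  only a sketch of what one should expect:

  @{ML_response_fake [display,gray]
  "Simplifier.rewrite @{simpset} @{cterm \"Suc (1 + 2) < 3 + 2\"}"
  "Suc (1 + 2) < 3 + 2 \<equiv> True"}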


  When using the simplifier, the crucial information you have to provide is
  the simpset. If this information is not handled with care, then the
  simplifier can easily run into a loop. Therefore a good rule of thumb is
  to use simpsets that are as minimal as possible. It might be surprising
  that a simpset is more complex than just a simple collection of theorems
  used as simplification rules. One reason for the complexity is that the
  simplifier must be able to rewrite inside terms and should also be able
  to rewrite according to rules that have preconditions.

  The rewriting inside terms requires congruence rules, which
  are meta-equalities typically of the form

  \begin{isabelle}
  \begin{center}
  \mbox{\inferrule{@{text "t\<^isub>1 \<equiv> s\<^isub>1 \<dots> t\<^isub>n \<equiv> s\<^isub>n"}}
  {@{text "constr t\<^isub>1\<dots>t\<^isub>n \<equiv> constr s\<^isub>1\<dots>s\<^isub>n"}}}
  \end{center}
  \end{isabelle}

  with @{text "constr"} being a constant, like @{const "If"} or @{const
  "Let"}. Every simpset contains only one congruence rule for each
  term-constructor, which however can be overwritten. The user can declare
  lemmas to be congruence rules using the attribute @{text "[cong]"}. In
  HOL, the user usually states these lemmas as equations, which are then
  internally transformed into meta-equations.

  The rewriting with rules involving preconditions requires what is in
  Isabelle called a subgoaler, a solver and a looper. These can be
  arbitrary tactics that can be installed in a simpset and which are called
  at various stages during simplification. However, simpsets also include
  simprocs, which can produce rewrite rules on demand (see next
  section). Another component are split-rules, which can simplify for
  example the ``then'' and ``else'' branches of if-statements under the
  corresponding preconditions.

  \begin{readmore}
  For more information about the simplifier see @{ML_file
  "Pure/meta_simplifier.ML"} and @{ML_file "Pure/simplifier.ML"}. The
  simplifier for HOL is set up in @{ML_file "HOL/Tools/simpdata.ML"}. The
  generic splitter is implemented in @{ML_file "Provers/splitter.ML"}.
  \end{readmore}

  \begin{readmore}
  FIXME: Find the right place: Discrimination nets are implemented
  in @{ML_file "Pure/net.ML"}.
  \end{readmore}

  The most common combinators to modify simpsets are:

  \begin{isabelle}
  \begin{tabular}{ll}
  @{ML_ind addsimps} & @{ML_ind delsimps}\\
  @{ML_ind addcongs} & @{ML_ind delcongs}\\
  @{ML_ind addsimprocs} & @{ML_ind delsimprocs}\\
  \end{tabular}
  \end{isabelle}

  (FIXME: What about splitters? @{ML addsplits}, @{ML delsplits})
*}
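
text {*
  For example, a small simpset containing only the rules for conditionals
  can be built and used as follows. This is a sketch: @{thm [source]
  if_True} and @{thm [source] if_False} are standard HOL lemmas, and the
  name @{text if_ss} is made up for illustration.
*}

ML{*val if_ss = HOL_basic_ss addsimps [@{thm if_True}, @{thm if_False}]*}

lemma shows "(if True then A else B) = A"
apply(tactic {* simp_tac if_ss 1 *})
done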

text_raw {*
\begin{figure}[t]
\begin{minipage}{\textwidth}
\begin{isabelle}*}
ML{*fun print_ss ctxt ss =
let
  val {simps, congs, procs, ...} = MetaSimplifier.dest_ss ss

  fun name_thm (nm, thm) =
    " " ^ nm ^ ": " ^ (string_of_thm_no_vars ctxt thm)
  fun name_ctrm (nm, ctrm) =
    " " ^ nm ^ ": " ^ (string_of_cterms ctxt ctrm)

  val s = ["Simplification rules:"] @ map name_thm simps @
          ["Congruences rules:"] @ map name_thm congs @
          ["Simproc patterns:"] @ map name_ctrm procs
in
  s |> cat_lines
    |> tracing
end*}
text_raw {*
\end{isabelle}
\end{minipage}
\caption{The function @{ML_ind dest_ss in MetaSimplifier} returns a record
containing all printable information stored in a simpset. We are here only
interested in the simplification rules, congruence rules and
simprocs.\label{fig:printss}}
\end{figure} *}


text {*
  To see how they work, consider the function in Figure~\ref{fig:printss},
  which prints out some parts of a simpset. If you use it to print out the
  components of the empty simpset, i.e., @{ML_ind empty_ss}

  @{ML_response_fake [display,gray]
  "print_ss @{context} empty_ss"
"Simplification rules:
Congruences rules:
Simproc patterns:"}

  you can see it contains nothing. This simpset is usually not useful,
  except as a building block to build bigger simpsets. For example you can
  add to @{ML empty_ss} the simplification rule @{thm [source] Diff_Int} as
  follows:
*}

ML{*val ss1 = empty_ss addsimps [@{thm Diff_Int} RS @{thm eq_reflection}]*}

text {*
  Printing out the components of this simpset then gives:

  @{ML_response_fake [display,gray]
  "print_ss @{context} ss1"
"Simplification rules:
  ??.unknown: A - B \<inter> C \<equiv> A - B \<union> (A - C)
Congruences rules:
Simproc patterns:"}

  (FIXME: Why does it print out ??.unknown)

  Adding also the congruence rule @{thm [source] UN_cong}
*}

ML{*val ss2 = ss1 addcongs [@{thm UN_cong} RS @{thm eq_reflection}]*}

text {*
  gives

  @{ML_response_fake [display,gray]
  "print_ss @{context} ss2"
"Simplification rules:
  ??.unknown: A - B \<inter> C \<equiv> A - B \<union> (A - C)
Congruences rules:
  UNION: \<lbrakk>A = B; \<And>x. x \<in> B \<Longrightarrow> C x = D x\<rbrakk> \<Longrightarrow> \<Union>x\<in>A. C x \<equiv> \<Union>x\<in>B. D x
Simproc patterns:"}

  Notice that we had to add these lemmas as meta-equations. The @{ML
  empty_ss} expects this form of the simplification and congruence rules.
  However, even when adding these lemmas to @{ML empty_ss} we do not end up
  with anything useful yet.

  In the context of HOL, the first really useful simpset is @{ML_ind
  HOL_basic_ss}. While printing out the components of this simpset

  @{ML_response_fake [display,gray]
  "print_ss @{context} HOL_basic_ss"
"Simplification rules:
Congruences rules:
Simproc patterns:"}

  also produces ``nothing'', the printout is misleading. In fact
  the @{ML HOL_basic_ss} is set up so that it can solve goals of the
  form

  \begin{isabelle}
  @{thm TrueI}, @{thm refl[no_vars]}, @{term "t \<equiv> t"} and @{thm FalseE[no_vars]};
  \end{isabelle}

  and also resolve with assumptions. For example:
*}

lemma
  "True" and "t = t" and "t \<equiv> t" and "False \<Longrightarrow> Foo" and "\<lbrakk>A; B; C\<rbrakk> \<Longrightarrow> A"
apply(tactic {* ALLGOALS (simp_tac HOL_basic_ss) *})
done

text {*
  This behaviour is not caused by any simplification rule, but by the way
  the subgoaler, solver and looper are set up in @{ML_ind HOL_basic_ss}.

  The simpset @{ML_ind HOL_ss} is an extension of @{ML HOL_basic_ss}
  containing already many useful simplification and congruence rules for
  the logical connectives in HOL.

  @{ML_response_fake [display,gray]
  "print_ss @{context} HOL_ss"
"Simplification rules:
  Pure.triv_forall_equality: (\<And>x. PROP V) \<equiv> PROP V
  HOL.the_eq_trivial: THE x. x = y \<equiv> y
  HOL.the_sym_eq_trivial: THE ya. y = ya \<equiv> y
  \<dots>
Congruences rules:
  HOL.simp_implies: \<dots>
    \<Longrightarrow> (PROP P =simp=> PROP Q) \<equiv> (PROP P' =simp=> PROP Q')
  op -->: \<lbrakk>P \<equiv> P'; P' \<Longrightarrow> Q \<equiv> Q'\<rbrakk> \<Longrightarrow> P \<longrightarrow> Q \<equiv> P' \<longrightarrow> Q'
Simproc patterns:
  \<dots>"}


  The simplifier is often used to unfold definitions in a proof. For this
  the simplifier implements the tactic @{ML_ind
  rewrite_goal_tac}.\footnote{\bf FIXME: see LocalDefs infrastructure.}
  Suppose for example the definition
*}

definition "MyTrue \<equiv> True"

text {*
  then we can use this tactic to unfold the definition of the constant.
*}

lemma shows "MyTrue \<Longrightarrow> True"
apply(tactic {* rewrite_goal_tac @{thms MyTrue_def} 1 *})
txt{* producing the goal state

      \begin{minipage}{\textwidth}
      @{subgoals [display]}
      \end{minipage} *}
(*<*)oops(*>*)

text {*
  If you want to unfold definitions in \emph{all} subgoals, then use the
  tactic @{ML_ind rewrite_goals_tac}.
*}
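
text {*
  For example, continuing with the constant @{text MyTrue} from above
  (a small sketch):
*}

lemma shows "MyTrue \<Longrightarrow> True \<and> True"
apply(rule conjI)
apply(tactic {* rewrite_goals_tac @{thms MyTrue_def} *})
txt{* where the definition is unfolded in both subgoals:

      \begin{minipage}{\textwidth}
      @{subgoals [display]}
      \end{minipage} *}
(*<*)oops(*>*)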


text_raw {*
\begin{figure}[p]
\begin{boxedminipage}{\textwidth}
\begin{isabelle} *}
types prm = "(nat \<times> nat) list"
consts perm :: "prm \<Rightarrow> 'a \<Rightarrow> 'a" ("_ \<bullet> _" [80,80] 80)

overloading
  perm_nat \<equiv> "perm :: prm \<Rightarrow> nat \<Rightarrow> nat"
  perm_prod \<equiv> "perm :: prm \<Rightarrow> ('a\<times>'b) \<Rightarrow> ('a\<times>'b)"
  perm_list \<equiv> "perm :: prm \<Rightarrow> 'a list \<Rightarrow> 'a list"
begin

fun swap :: "nat \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> nat"
where
  "swap a b c = (if c = a then b else (if c = b then a else c))"

primrec perm_nat
where
  "perm_nat [] c = c"
| "perm_nat (ab#pi) c = swap (fst ab) (snd ab) (perm_nat pi c)"

fun perm_prod
where
  "perm_prod pi (x, y) = (pi\<bullet>x, pi\<bullet>y)"

primrec perm_list
where
  "perm_list pi [] = []"
| "perm_list pi (x#xs) = (pi\<bullet>x)#(perm_list pi xs)"

end

lemma perm_append[simp]:
  fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
  shows "((pi\<^isub>1@pi\<^isub>2)\<bullet>c) = (pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c))"
by (induct pi\<^isub>1) (auto)

lemma perm_bij[simp]:
  fixes c d::"nat" and pi::"prm"
  shows "(pi\<bullet>c = pi\<bullet>d) = (c = d)"
by (induct pi) (auto)

lemma perm_rev[simp]:
  fixes c::"nat" and pi::"prm"
  shows "pi\<bullet>((rev pi)\<bullet>c) = c"
by (induct pi arbitrary: c) (auto)

lemma perm_compose:
  fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
  shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c) = (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>(pi\<^isub>1\<bullet>c)"
by (induct pi\<^isub>2) (auto)
text_raw {*
\end{isabelle}
\end{boxedminipage}
\caption{A simple theory about permutations over @{typ nat}s. The point is
that the lemma @{thm [source] perm_compose} cannot be directly added to the
simplifier, as it would cause the simplifier to loop. It can still be used
as a simplification rule if the permutation in the right-hand side is
sufficiently protected.\label{fig:perms}}
\end{figure} *}


text {*
  The simplifier is often used in order to bring terms into a normal form.
  Unfortunately, often the situation arises that the corresponding
  simplification rules will cause the simplifier to run into an infinite
  loop. Consider for example the simple theory about permutations over
  natural numbers shown in Figure~\ref{fig:perms}. The purpose of the
  lemmas is to push permutations as far inside as possible, where they
  might disappear by Lemma @{thm [source] perm_rev}. However, to fully
  normalise all instances, it would be desirable to add also the lemma
  @{thm [source] perm_compose} to the simplifier for pushing permutations
  over other permutations. Unfortunately, the right-hand side of this lemma
  is again an instance of the left-hand side and so causes an infinite
  loop. There seems to be no easy way to reformulate this rule and so one
  ends up with clunky proofs like:
*}

lemma
  fixes c d::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
  shows "pi\<^isub>1\<bullet>(c, pi\<^isub>2\<bullet>((rev pi\<^isub>1)\<bullet>d)) = (pi\<^isub>1\<bullet>c, (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>d)"
apply(simp)
apply(rule trans)
apply(rule perm_compose)
apply(simp)
done

text {*+ −
It is however possible to create a single simplifier tactic that solves+ −
such proofs. The trick is to introduce an auxiliary constant for permutations + −
and split the simplification into two phases (below actually three). Let + −
assume the auxiliary constant is+ −
*}+ −
+ −
definition+ −
perm_aux :: "prm \<Rightarrow> 'a \<Rightarrow> 'a" ("_ \<bullet>aux _" [80,80] 80)+ −
where+ −
"pi \<bullet>aux c \<equiv> pi \<bullet> c"+ −
+ −
text {* Now the two lemmas *}+ −
+ −
lemma perm_aux_expand:+ −
fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"+ −
shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c) = pi\<^isub>1 \<bullet>aux (pi\<^isub>2\<bullet>c)" + −
unfolding perm_aux_def by (rule refl)+ −
+ −
lemma perm_compose_aux:+ −
fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"+ −
shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>aux c) = (pi\<^isub>1\<bullet>pi\<^isub>2) \<bullet>aux (pi\<^isub>1\<bullet>c)" + −
unfolding perm_aux_def by (rule perm_compose)+ −
+ −
text {* + −
  are simple consequences of the definition and @{thm [source] perm_compose}. 
  More importantly, the lemma @{thm [source] perm_compose_aux} can be safely 
  added to the simplifier, because now the right-hand side is no longer an instance 
  of the left-hand side. In a sense it freezes all redexes of permutation compositions
  after one step. In this way, we can split the simplification of permutations
  into three phases without the user noticing anything about the auxiliary 
  constant. We first freeze any instance of permutation compositions in the term using 
  lemma @{thm [source] "perm_aux_expand"} (Line 9);
  then simplify all other permutations, including pushing permutations over
  other permutations, by rule @{thm [source] perm_compose_aux} (Line 10); and
finally ``unfreeze'' all instances of permutation compositions by unfolding + −
the definition of the auxiliary constant. + −
*}+ −
+ −
ML %linenosgray{*val perm_simp_tac =+ −
let+ −
val thms1 = [@{thm perm_aux_expand}]+ −
val thms2 = [@{thm perm_append}, @{thm perm_bij}, @{thm perm_rev}, + −
@{thm perm_compose_aux}] @ @{thms perm_prod.simps} @ + −
@{thms perm_list.simps} @ @{thms perm_nat.simps}+ −
val thms3 = [@{thm perm_aux_def}]+ −
in+ −
simp_tac (HOL_basic_ss addsimps thms1)+ −
THEN' simp_tac (HOL_basic_ss addsimps thms2)+ −
THEN' simp_tac (HOL_basic_ss addsimps thms3)+ −
end*}+ −
+ −
text {*+ −
  For all three phases we have to build simpsets by adding the specific lemmas. For
  our purposes here, it is sufficient to add these lemmas to @{ML HOL_basic_ss} in order
  to obtain the desired results. Now we can solve the following lemma
*}+ −
+ −
lemma + −
fixes c d::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"+ −
shows "pi\<^isub>1\<bullet>(c, pi\<^isub>2\<bullet>((rev pi\<^isub>1)\<bullet>d)) = (pi\<^isub>1\<bullet>c, (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>d)"+ −
apply(tactic {* perm_simp_tac 1 *})+ −
done+ −
+ −
+ −
text {*+ −
  in one step. This tactic can deal with most instances of normalising permutations.
  In order to solve all cases, however, we have to deal with corner-cases such as the
  goal being an exact instance of the permutation composition lemma. This can
  often be done more easily by implementing a simproc or a conversion. Both will be 
  explained in the next two sections.
+ −
(FIXME: Is it interesting to say something about @{term "op =simp=>"}?)+ −
+ −
(FIXME: What are the second components of the congruence rules---something to+ −
do with weak congruence constants?)+ −
+ −
(FIXME: Anything interesting to say about @{ML Simplifier.clear_ss}?)+ −
+ −
(FIXME: @{ML ObjectLogic.full_atomize_tac}, + −
@{ML ObjectLogic.rulify_tac})+ −
+ −
(FIXME: what are @{ML mksimps_pairs}; used in Nominal.thy)+ −
+ −
(FIXME: explain @{ML simplify} and @{ML "Simplifier.rewrite_rule"} etc.)+ −
+ −
*}+ −
+ −
section {* Simprocs *}+ −
+ −
text {*+ −
In Isabelle you can also implement custom simplification procedures, called+ −
\emph{simprocs}. Simprocs can be triggered by the simplifier on a specified+ −
term-pattern and rewrite a term according to a theorem. They are useful in+ −
cases where a rewriting rule must be produced on ``demand'' or when+ −
rewriting by simplification is too unpredictable and potentially loops.+ −
+ −
To see how simprocs work, let us first write a simproc that just prints out+ −
the pattern which triggers it and otherwise does nothing. For this+ −
you can use the function:+ −
*}+ −
+ −
ML %linenosgray{*fun fail_simproc simpset redex = + −
let+ −
val ctxt = Simplifier.the_context simpset+ −
val _ = tracing ("The redex: " ^ (string_of_cterm ctxt redex))+ −
in+ −
NONE+ −
end*}+ −
+ −
text {*+ −
This function takes a simpset and a redex (a @{ML_type cterm}) as+ −
arguments. In Lines 3 and~4, we first extract the context from the given+ −
simpset and then print out a message containing the redex. The function+ −
returns @{ML NONE} (standing for an optional @{ML_type thm}) since at the+ −
  moment we are \emph{not} interested in actually rewriting anything. We want
  the simproc to be triggered by the pattern @{term "Suc n"}. This can be
  done by adding the simproc to the current simpset as follows
*}+ −
+ −
simproc_setup %gray fail ("Suc n") = {* K fail_simproc *}+ −
+ −
text {*+ −
where the second argument specifies the pattern and the right-hand side+ −
  contains the code of the simproc (we have to use @{ML K} since we are ignoring
  an argument about morphisms). 
After this, the simplifier is aware of the simproc and you can test whether + −
it fires on the lemma:+ −
*}+ −
+ −
lemma shows "Suc 0 = 1"+ −
apply(simp)+ −
(*<*)oops(*>*)+ −
+ −
text {*+ −
\begin{isabelle}+ −
@{text "> The redex: Suc 0"}\\+ −
@{text "> The redex: Suc 0"}\\+ −
\end{isabelle}+ −
+ −
This will print out the message twice: once for the left-hand side and+ −
once for the right-hand side. The reason is that during simplification the+ −
simplifier will at some point reduce the term @{term "1::nat"} to @{term "Suc+ −
0"}, and then the simproc ``fires'' also on that term. + −
+ −
  We can add the simproc to, or delete it from, the current simpset by the usual 
  \isacommand{declare}-statement. For example, the simproc will be deleted
  with the declaration
*}+ −
+ −
declare [[simproc del: fail]]+ −
+ −
text {*+ −
If you want to see what happens with just \emph{this} simproc, without any + −
interference from other rewrite rules, you can call @{text fail} + −
as follows:+ −
*}+ −
+ −
lemma shows "Suc 0 = 1"+ −
apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [@{simproc fail}]) 1*})+ −
(*<*)oops(*>*)+ −
+ −
text {*+ −
Now the message shows up only once since the term @{term "1::nat"} is + −
left unchanged. + −
+ −
  Setting up a simproc using the command \isacommand{simproc\_setup} will
  always automatically add the simproc to the current simpset. If you do not
want this, then you have to use a slightly different method for setting + −
up the simproc. First the function @{ML fail_simproc} needs to be modified+ −
to+ −
*}+ −
+ −
ML{*fun fail_simproc' simpset redex = + −
let+ −
val ctxt = Simplifier.the_context simpset+ −
val _ = tracing ("The redex: " ^ (string_of_term ctxt redex))+ −
in+ −
NONE+ −
end*}+ −
+ −
text {*+ −
Here the redex is given as a @{ML_type term}, instead of a @{ML_type cterm}+ −
  (therefore we print it out using the function @{ML string_of_term in Syntax}).
We can turn this function into a proper simproc using the function + −
@{ML Simplifier.simproc_i}:+ −
*}+ −
+ −
+ −
ML{*val fail' = + −
let + −
val thy = @{theory}+ −
val pat = [@{term "Suc n"}]+ −
in+ −
Simplifier.simproc_i thy "fail_simproc'" pat (K fail_simproc')+ −
end*}+ −
+ −
text {*+ −
Here the pattern is given as @{ML_type term} (instead of @{ML_type cterm}).+ −
The function also takes a list of patterns that can trigger the simproc.+ −
Now the simproc is set up and can be explicitly added using+ −
@{ML_ind addsimprocs} to a simpset whenever+ −
needed. + −
+ −
Simprocs are applied from inside to outside and from left to right. You can+ −
see this in the proof+ −
*}+ −
+ −
lemma shows "Suc (Suc 0) = (Suc 1)"+ −
apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [fail']) 1*})+ −
(*<*)oops(*>*)+ −
+ −
text {*+ −
The simproc @{ML fail'} prints out the sequence + −
+ −
@{text [display]+ −
"> Suc 0+ −
> Suc (Suc 0) + −
> Suc 1"}+ −
+ −
To see how a simproc applies a theorem, let us implement a simproc that+ −
rewrites terms according to the equation:+ −
*}+ −
+ −
lemma plus_one: + −
shows "Suc n \<equiv> n + 1" by simp+ −
+ −
text {*+ −
  Simprocs expect the given equation to be a meta-equation; however, the
  equation can contain preconditions (the simproc will then only fire if the
  preconditions can be solved). To see that one has relatively precise control over
  the rewriting with simprocs, let us further assume we want the simproc to
  rewrite only terms ``greater'' than @{term "Suc 0"}. For this we can write 
*}+ −
+ −
+ −
ML{*fun plus_one_simproc ss redex =+ −
case redex of+ −
@{term "Suc 0"} => NONE+ −
| _ => SOME @{thm plus_one}*}+ −
+ −
text {*+ −
and set up the simproc as follows.+ −
*}+ −
+ −
ML{*val plus_one =+ −
let+ −
val thy = @{theory}+ −
val pat = [@{term "Suc n"}] + −
in+ −
Simplifier.simproc_i thy "sproc +1" pat (K plus_one_simproc)+ −
end*}+ −
+ −
text {*+ −
Now the simproc is set up so that it is triggered by terms+ −
of the form @{term "Suc n"}, but inside the simproc we only produce+ −
  a theorem if the term is not @{term "Suc 0"}. You can see the result
  in the following proof
*}+ −
+ −
lemma shows "P (Suc (Suc (Suc 0))) (Suc 0)"+ −
apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [plus_one]) 1*})+ −
txt{*+ −
where the simproc produces the goal state+ −
+ −
\begin{minipage}{\textwidth}+ −
@{subgoals[display]}+ −
\end{minipage}+ −
*} + −
(*<*)oops(*>*)+ −
+ −
text {*+ −
  As usual with rewriting you have to worry about looping: you already have
  a loop with @{ML plus_one} if you apply it with the default simpset (because
  the default simpset contains a rule which does just the opposite of @{ML plus_one},
namely rewriting @{text [quotes] "+ 1"} to a successor). So you have to be careful + −
in choosing the right simpset to which you add a simproc. + −
+ −
Next let us implement a simproc that replaces terms of the form @{term "Suc n"}+ −
with the number @{text n} increased by one. First we implement a function that+ −
takes a term and produces the corresponding integer value.+ −
*}+ −
+ −
ML{*fun dest_suc_trm ((Const (@{const_name "Suc"}, _)) $ t) = 1 + dest_suc_trm t+ −
| dest_suc_trm t = snd (HOLogic.dest_number t)*}+ −
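
text {*
  As a quick check of this function, we would expect the following
  response (assuming the definition above):

  @{ML_response_fake [display,gray]
  "dest_suc_trm @{term \"Suc (Suc (Suc 0))\"}"
  "3"}
*}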
+ −
text {* + −
It uses the library function @{ML_ind dest_number in HOLogic} that transforms+ −
(Isabelle) terms, like @{term "0::nat"}, @{term "1::nat"}, @{term "2::nat"} and so+ −
  on, into integer values. This function raises the exception @{ML TERM} if
the term is not a number. The next function expects a pair consisting of a term+ −
@{text t} (containing @{term Suc}s) and the corresponding integer value @{text n}. + −
*}+ −
+ −
ML %linenosgray{*fun get_thm ctxt (t, n) =+ −
let+ −
val num = HOLogic.mk_number @{typ "nat"} n+ −
val goal = Logic.mk_equals (t, num)+ −
in+ −
Goal.prove ctxt [] [] goal (K (Arith_Data.arith_tac ctxt 1))+ −
end*}+ −
+ −
text {*+ −
From the integer value it generates the corresponding number term, called + −
@{text num} (Line 3), and then generates the meta-equation @{text "t \<equiv> num"} + −
(Line 4), which it proves by the arithmetic tactic in Line 6. + −
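
  As a quick check, we would expect the following response (assuming the
  function above):

  @{ML_response_fake [display,gray]
  "get_thm @{context} (@{term \"Suc (Suc 0)\"}, 2)"
  "Suc (Suc 0) \<equiv> 2"}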
+ −
For our purpose at the moment, proving the meta-equation using @{ML+ −
arith_tac in Arith_Data} is fine, but there is also an alternative employing+ −
the simplifier with a special simpset. For the kind of lemmas we+ −
want to prove here, the simpset @{text "num_ss"} should suffice.+ −
*}+ −
+ −
ML{*fun get_thm_alt ctxt (t, n) =+ −
let+ −
val num = HOLogic.mk_number @{typ "nat"} n+ −
val goal = Logic.mk_equals (t, num)+ −
val num_ss = HOL_ss addsimps [@{thm One_nat_def}, @{thm Let_def}] @ + −
@{thms nat_number} @ @{thms neg_simps} @ @{thms plus_nat.simps}+ −
in+ −
Goal.prove ctxt [] [] goal (K (simp_tac num_ss 1))+ −
end*}+ −
+ −
text {*+ −
The advantage of @{ML get_thm_alt} is that it leaves very little room for + −
something to go wrong; in contrast it is much more difficult to predict + −
what happens with @{ML arith_tac in Arith_Data}, especially in more complicated + −
  circumstances. The disadvantage of @{ML get_thm_alt} is that you have to find a simpset
  that is sufficiently powerful to solve every instance of the lemmas
  we like to prove. This requires careful tuning, but is often necessary in 
  ``production code''.\footnote{It would be of great help if there were another
  way than tracing the simplifier to obtain the lemmas that are successfully 
applied during simplification. Alas, there is none.} + −
+ −
Anyway, either version can be used in the function that produces the actual + −
theorem for the simproc.+ −
*}+ −
+ −
ML{*fun nat_number_simproc ss t =+ −
let + −
val ctxt = Simplifier.the_context ss+ −
in+ −
SOME (get_thm ctxt (t, dest_suc_trm t))+ −
handle TERM _ => NONE+ −
end*}+ −
+ −
text {*+ −
This function uses the fact that @{ML dest_suc_trm} might raise an exception + −
@{ML TERM}. In this case there is nothing that can be rewritten and therefore no+ −
theorem is produced (i.e.~the function returns @{ML NONE}). To try out the simproc + −
on an example, you can set it up as follows:+ −
*}+ −
+ −
ML{*val nat_number =+ −
let+ −
val thy = @{theory}+ −
val pat = [@{term "Suc n"}]+ −
in + −
Simplifier.simproc_i thy "nat_number" pat (K nat_number_simproc) + −
end*}+ −
+ −
text {* + −
Now in the lemma+ −
*}+ −
+ −
lemma "P (Suc (Suc 2)) (Suc 99) (0::nat) (Suc 4 + Suc 0) (Suc (0 + 0))"+ −
apply(tactic {* simp_tac (HOL_ss addsimprocs [nat_number]) 1*})+ −
txt {* + −
you obtain the more legible goal state+ −
+ −
\begin{minipage}{\textwidth}+ −
@{subgoals [display]}+ −
\end{minipage}+ −
*}+ −
(*<*)oops(*>*)+ −
+ −
text {*+ −
where the simproc rewrites all @{term "Suc"}s except in the last argument. There it cannot + −
rewrite anything, because it does not know how to transform the term @{term "Suc (0 + 0)"}+ −
  into a number. To solve this problem, have a look at the next exercise. 
+ −
\begin{exercise}\label{ex:addsimproc}+ −
Write a simproc that replaces terms of the form @{term "t\<^isub>1 + t\<^isub>2"} by their + −
result. You can assume the terms are ``proper'' numbers, that is of the form+ −
@{term "0::nat"}, @{term "1::nat"}, @{term "2::nat"} and so on.+ −
\end{exercise}+ −
+ −
(FIXME: We did not do anything with morphisms. Anything interesting+ −
one can say about them?)+ −
*}+ −
+ −
section {* Conversions\label{sec:conversion} *}+ −
+ −
text {*+ −
Conversions are a thin layer on top of Isabelle's inference kernel, and + −
can be viewed as a controllable, bare-bone version of Isabelle's simplifier.+ −
One difference between conversions and the simplifier is that the former+ −
act on @{ML_type cterm}s while the latter acts on @{ML_type thm}s. + −
However, we will also show in this section how conversions can be applied+ −
to theorems via tactics. The type for conversions is+ −
*}+ −
+ −
ML{*type conv = cterm -> thm*}+ −
+ −
text {*+ −
whereby the produced theorem is always a meta-equality. A simple conversion+ −
is the function @{ML_ind all_conv in Conv}, which maps a @{ML_type cterm} to an+ −
instance of the (meta)reflexivity theorem. For example:+ −
+ −
@{ML_response_fake [display,gray]+ −
"Conv.all_conv @{cterm \"Foo \<or> Bar\"}"+ −
"Foo \<or> Bar \<equiv> Foo \<or> Bar"}+ −
+ −
Another simple conversion is @{ML_ind no_conv in Conv} which always raises the + −
exception @{ML CTERM}.+ −
+ −
@{ML_response_fake [display,gray]+ −
"Conv.no_conv @{cterm True}" + −
"*** Exception- CTERM (\"no conversion\", []) raised"}+ −
+ −
A more interesting conversion is the function @{ML_ind beta_conversion in Thm}: it + −
produces a meta-equation between a term and its beta-normal form. For example+ −
+ −
@{ML_response_fake [display,gray]+ −
"let+ −
val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}+ −
val two = @{cterm \"2::nat\"}+ −
val ten = @{cterm \"10::nat\"}+ −
val ctrm = Thm.capply (Thm.capply add two) ten+ −
in+ −
Thm.beta_conversion true ctrm+ −
end"+ −
"((\<lambda>x y. x + y) 2) 10 \<equiv> 2 + 10"}+ −
+ −
If you run this example, you will notice that the actual response is the + −
seemingly nonsensical @{term+ −
"2 + 10 \<equiv> 2 + (10::nat)"}. The reason is that the pretty-printer for+ −
@{ML_type cterm}s eta-normalises terms and therefore produces this output.+ −
If we get hold of the ``raw'' representation of the produced theorem, + −
we obtain the expected result.+ −
+ −
+ −
@{ML_response [display,gray]+ −
"let+ −
val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}+ −
val two = @{cterm \"2::nat\"}+ −
val ten = @{cterm \"10::nat\"}+ −
val ctrm = Thm.capply (Thm.capply add two) ten+ −
in+ −
Thm.prop_of (Thm.beta_conversion true ctrm)+ −
end"+ −
"Const (\"==\",\<dots>) $ + −
(Abs (\"x\",\<dots>,Abs (\"y\",\<dots>,\<dots>)) $\<dots>$\<dots>) $ + −
(Const (\"HOL.plus_class.plus\",\<dots>) $ \<dots> $ \<dots>)"} + −
+ −
The argument @{ML true} in @{ML beta_conversion in Thm} indicates that + −
the right-hand side should be fully beta-normalised. If instead + −
@{ML false} is given, then only a single beta-reduction is performed + −
on the outer-most level. + −
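
  For example, reducing only the single outer-most redex of a partial
  application would give (the response below is what we would expect):

  @{ML_response_fake [display,gray]
  "Thm.beta_conversion false 
  (Thm.capply @{cterm \"\<lambda>x y. x + (y::nat)\"} @{cterm \"2::nat\"})"
  "(\<lambda>x y. x + y) 2 \<equiv> \<lambda>y. 2 + y"}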
+ −
The main point of conversions is that they can be used for rewriting+ −
@{ML_type cterm}s. One example is the function + −
@{ML_ind rewr_conv in Conv}, which expects a meta-equation as an + −
argument. Suppose the following meta-equation.+ −
+ −
*}+ −
+ −
lemma true_conj1: "True \<and> P \<equiv> P" by simp+ −
+ −
text {* + −
It can be used for example to rewrite @{term "True \<and> (Foo \<longrightarrow> Bar)"} + −
to @{term "Foo \<longrightarrow> Bar"}. The code is as follows.+ −
+ −
@{ML_response_fake [display,gray]+ −
"let + −
val ctrm = @{cterm \"True \<and> (Foo \<longrightarrow> Bar)\"}+ −
in+ −
Conv.rewr_conv @{thm true_conj1} ctrm+ −
end"+ −
"True \<and> (Foo \<longrightarrow> Bar) \<equiv> Foo \<longrightarrow> Bar"}+ −
+ −
Note, however, that the function @{ML_ind rewr_conv in Conv} only rewrites the + −
outer-most level of the @{ML_type cterm}. If the given @{ML_type cterm} does not match + −
exactly the + −
left-hand side of the theorem, then @{ML_ind rewr_conv in Conv} fails, raising + −
the exception @{ML CTERM}.+ −
+ −
This very primitive way of rewriting can be made more powerful by+ −
combining several conversions into one. For this you can use conversion+ −
combinators. The simplest conversion combinator is @{ML_ind then_conv}, + −
which applies one conversion after another. For example+ −
+ −
@{ML_response_fake [display,gray]+ −
"let+ −
val conv1 = Thm.beta_conversion false+ −
val conv2 = Conv.rewr_conv @{thm true_conj1}+ −
val ctrm = Thm.capply @{cterm \"\<lambda>x. x \<and> False\"} @{cterm \"True\"}+ −
in+ −
(conv1 then_conv conv2) ctrm+ −
end"+ −
"(\<lambda>x. x \<and> False) True \<equiv> False"}+ −
+ −
where we first beta-reduce the term and then rewrite according to+ −
@{thm [source] true_conj1}. (When running this example recall the + −
problem with the pretty-printer normalising all terms.)+ −
+ −
The conversion combinator @{ML_ind else_conv} tries out the + −
first one, and if it does not apply, tries the second. For example + −
+ −
@{ML_response_fake [display,gray]+ −
"let+ −
val conv = Conv.rewr_conv @{thm true_conj1} else_conv Conv.all_conv+ −
val ctrm1 = @{cterm \"True \<and> Q\"}+ −
val ctrm2 = @{cterm \"P \<or> (True \<and> Q)\"}+ −
in+ −
(conv ctrm1, conv ctrm2)+ −
end"+ −
"(True \<and> Q \<equiv> Q, P \<or> True \<and> Q \<equiv> P \<or> True \<and> Q)"}+ −
+ −
Here the conversion of @{thm [source] true_conj1} only applies+ −
in the first case, but fails in the second. The whole conversion+ −
does not fail, however, because the combinator @{ML else_conv in Conv} will then + −
try out @{ML all_conv in Conv}, which always succeeds.+ −
+ −
The conversion combinator @{ML_ind try_conv in Conv} constructs a conversion + −
which is tried out on a term, but in case of failure just does nothing.+ −
For example+ −
+ −
@{ML_response_fake [display,gray]+ −
"let+ −
val conv = Conv.try_conv (Conv.rewr_conv @{thm true_conj1})+ −
val ctrm = @{cterm \"True \<or> P\"}+ −
in+ −
conv ctrm+ −
end"+ −
"True \<or> P \<equiv> True \<or> P"}+ −
+ −
Apart from the function @{ML beta_conversion in Thm}, which is able to fully+ −
beta-normalise a term, the conversions so far are restricted in that they+ −
only apply to the outer-most level of a @{ML_type cterm}. In what follows we+ −
will lift this restriction. The combinators @{ML_ind fun_conv in Conv} + −
and @{ML_ind arg_conv in Conv} will apply+ −
a conversion to the first, respectively second, argument of an application. + −
For example+ −
+ −
@{ML_response_fake [display,gray]+ −
"let + −
val conv = Conv.arg_conv (Conv.rewr_conv @{thm true_conj1})+ −
val ctrm = @{cterm \"P \<or> (True \<and> Q)\"}+ −
in+ −
conv ctrm+ −
end"+ −
"P \<or> (True \<and> Q) \<equiv> P \<or> Q"}+ −
+ −
The reason for this behaviour is that @{text "(op \<or>)"} expects two+ −
arguments. Therefore the term must be of the form @{text "(Const \<dots> $ t1) $ t2"}. The+ −
conversion is then applied to @{text "t2"}, which in the example above+ −
stands for @{term "True \<and> Q"}. The function @{ML fun_conv in Conv} would apply+ −
the conversion to the term @{text "(Const \<dots> $ t1)"}.+ −
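
  For example, to rewrite inside the \emph{first} argument of the disjunction,
  we can combine the two combinators (the response is what we would expect):

  @{ML_response_fake [display,gray]
  "let 
  val conv = Conv.fun_conv (Conv.arg_conv (Conv.rewr_conv @{thm true_conj1}))
  val ctrm = @{cterm \"(True \<and> P) \<or> Q\"}
in
  conv ctrm
end"
  "(True \<and> P) \<or> Q \<equiv> P \<or> Q"}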
+ −
The function @{ML_ind abs_conv in Conv} applies a conversion under an + −
abstraction. For example:+ −
+ −
@{ML_response_fake [display,gray]+ −
"let + −
val conv = Conv.rewr_conv @{thm true_conj1} + −
val ctrm = @{cterm \"\<lambda>P. True \<and> (P \<and> Foo)\"}+ −
in+ −
Conv.abs_conv (K conv) @{context} ctrm+ −
end"+ −
"\<lambda>P. True \<and> (P \<and> Foo) \<equiv> \<lambda>P. P \<and> Foo"}+ −
+ −
Note that this conversion needs a context as an argument. We also give the+ −
conversion as @{text "(K conv)"}, which is a function that ignores its+ −
argument (the argument being a sufficiently freshened version of the+ −
variable that is abstracted and a context). The conversion that goes under+ −
an application is @{ML_ind combination_conv in Conv}. It expects two+ −
conversions as arguments, each of which is applied to the corresponding+ −
``branch'' of the application. An abbreviation for this conversion is the+ −
function @{ML_ind comb_conv in Conv}, which applies the same conversion+ −
to both branches.+ −
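
  For example, combined with @{ML try_conv in Conv} from above, we can apply
  a conversion to both branches of an application (the response is what we
  would expect; the function branch @{text f} is left unchanged):

  @{ML_response_fake [display,gray]
  "let
  val conv = Conv.try_conv (Conv.rewr_conv @{thm true_conj1})
  val ctrm = @{cterm \"f (True \<and> P)\"}
in
  Conv.comb_conv conv ctrm
end"
  "f (True \<and> P) \<equiv> f P"}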
+ −
We can now apply all these functions in a conversion that recursively+ −
descends a term and applies a ``@{thm [source] true_conj1}''-conversion + −
in all possible positions.+ −
*}+ −
+ −
ML %linenosgray{*fun all_true1_conv ctxt ctrm =+ −
case (Thm.term_of ctrm) of+ −
@{term "op \<and>"} $ @{term True} $ _ => + −
(Conv.arg_conv (all_true1_conv ctxt) then_conv+ −
Conv.rewr_conv @{thm true_conj1}) ctrm+ −
| _ $ _ => Conv.comb_conv (all_true1_conv ctxt) ctrm+ −
| Abs _ => Conv.abs_conv (fn (_, ctxt) => all_true1_conv ctxt) ctxt ctrm+ −
| _ => Conv.all_conv ctrm*}+ −
+ −
text {*+ −
  This function ``fires'' if the term is of the form @{text "(True \<and> \<dots>)"}. 
  It descends under applications (Line 6) and abstractions 
  (Line 7); otherwise it leaves the term unchanged (Line 8). In Line 2
we need to transform the @{ML_type cterm} into a @{ML_type term} in order+ −
to be able to pattern-match the term. To see this + −
conversion in action, consider the following example:+ −
+ −
@{ML_response_fake [display,gray]+ −
"let+ −
val conv = all_true1_conv @{context}+ −
val ctrm = @{cterm \"distinct [1, x] \<longrightarrow> True \<and> 1 \<noteq> x\"}+ −
in+ −
conv ctrm+ −
end"+ −
"distinct [1, x] \<longrightarrow> True \<and> 1 \<noteq> x \<equiv> distinct [1, x] \<longrightarrow> 1 \<noteq> x"}+ −
+ −
  To see how much control you have over rewriting by using conversions, let us 
make the task a bit more complicated by rewriting according to the rule+ −
@{text true_conj1}, but only in the first arguments of @{term If}s. Then + −
the conversion should be as follows.+ −
*}+ −
+ −
ML{*fun if_true1_conv ctxt ctrm =+ −
case Thm.term_of ctrm of+ −
Const (@{const_name If}, _) $ _ =>+ −
Conv.arg_conv (all_true1_conv ctxt) ctrm+ −
| _ $ _ => Conv.comb_conv (if_true1_conv ctxt) ctrm+ −
| Abs _ => Conv.abs_conv (fn (_, ctxt) => if_true1_conv ctxt) ctxt ctrm+ −
| _ => Conv.all_conv ctrm *}+ −
+ −
text {*+ −
Here is an example for this conversion:+ −
+ −
@{ML_response_fake [display,gray]+ −
"let+ −
val conv = if_true1_conv @{context}+ −
val ctrm = + −
@{cterm \"if P (True \<and> 1 \<noteq> 2) then True \<and> True else True \<and> False\"}+ −
in+ −
conv ctrm+ −
end"+ −
"if P (True \<and> 1 \<noteq> 2) then True \<and> True else True \<and> False + −
\<equiv> if P (1 \<noteq> 2) then True \<and> True else True \<and> False"}+ −
*}+ −
+ −
text {*+ −
So far we only applied conversions to @{ML_type cterm}s. Conversions can, however, + −
also work on theorems using the function @{ML_ind fconv_rule in Conv}. As an example, + −
consider the conversion @{ML all_true1_conv} and the lemma:+ −
*}+ −
+ −
lemma foo_test: "P \<or> (True \<and> \<not>P)" by simp+ −
+ −
text {*+ −
Using the conversion @{ML all_true1_conv} you can transform this theorem into a + −
new theorem as follows+ −
+ −
@{ML_response_fake [display,gray]+ −
"let+ −
val conv = Conv.fconv_rule (all_true1_conv @{context}) + −
val thm = @{thm foo_test}+ −
in+ −
conv thm+ −
end" + −
"?P \<or> \<not> ?P"}+ −
+ −
Finally, conversions can also be turned into tactics and then applied to+ −
goal states. This can be done with the help of the function @{ML_ind CONVERSION},+ −
and also some predefined conversion combinators that traverse a goal+ −
state. The combinators for the goal state are: + −
+ −
\begin{itemize}+ −
\item @{ML_ind params_conv in Conv} for converting under parameters+ −
(i.e.~where goals are of the form @{text "\<And>x. P x \<Longrightarrow> Q x"})+ −
+ −
\item @{ML_ind prems_conv in Conv} for applying a conversion to all+ −
premises of a goal, and+ −
+ −
\item @{ML_ind concl_conv in Conv} for applying a conversion to the+ −
conclusion of a goal.+ −
\end{itemize}+ −
+ −
Assume we want to apply @{ML all_true1_conv} only in the conclusion+ −
of the goal, and @{ML if_true1_conv} should only apply to the premises.+ −
Here is a tactic doing exactly that:+ −
*}+ −
+ −
ML{*fun true1_tac ctxt =+ −
CONVERSION + −
(Conv.params_conv ~1 (fn ctxt =>+ −
(Conv.prems_conv ~1 (if_true1_conv ctxt) then_conv+ −
Conv.concl_conv ~1 (all_true1_conv ctxt))) ctxt)*}+ −
+ −
text {* + −
  We call the conversions with the argument @{ML "~1"}. This means they 
  are applied to all parameters, premises and conclusions. If we call them with 
  a non-negative number, say @{text n}, then these conversions will 
  only be applied to @{text n} premises (similarly for parameters and
conclusions). To test the tactic, consider the proof+ −
*}+ −
+ −
lemma+ −
"if True \<and> P then P else True \<and> False \<Longrightarrow>+ −
(if True \<and> Q then True \<and> Q else P) \<longrightarrow> True \<and> (True \<and> Q)"+ −
apply(tactic {* true1_tac @{context} 1 *})+ −
txt {* where the tactic yields the goal state+ −
+ −
\begin{minipage}{\textwidth}+ −
@{subgoals [display]}+ −
\end{minipage}*}+ −
(*<*)oops(*>*)+ −
+ −
text {*+ −
As you can see, the premises are rewritten according to @{ML if_true1_conv}, while+ −
the conclusion according to @{ML all_true1_conv}.+ −
+ −
Conversions can also be helpful for constructing meta-equality theorems.+ −
Such theorems are often definitions. As an example consider the following+ −
two ways of defining the identity function in Isabelle. + −
*}+ −
+ −
definition id1::"'a \<Rightarrow> 'a" + −
where "id1 x \<equiv> x"+ −
+ −
definition id2::"'a \<Rightarrow> 'a"+ −
where "id2 \<equiv> \<lambda>x. x"+ −
+ −
text {*+ −
Although both definitions define the same function, the theorems @{thm+ −
  [source] id1_def} and @{thm [source] id2_def} are different. However, it is
  easy to transform one theorem into the other. The function @{ML_ind abs_def
  in Drule} can automatically transform theorem @{thm [source] id1_def}
  into @{thm [source] id2_def} by abstracting in the right-hand side all 
  variables that occur on the left-hand side.
+ −
@{ML_response_fake [display,gray]+ −
"Drule.abs_def @{thm id1_def}"+ −
"id1 \<equiv> \<lambda>x. x"}+ −
+ −
  Unfortunately, Isabelle has no built-in function that transforms the
  theorems in the other direction. We can conveniently implement one using
  conversions. The feature of conversions we are using is that applying a
  conversion to a @{ML_type cterm} yields a meta-equality theorem (recall
  that the type of conversions is an abbreviation for 
  @{ML_type "cterm -> thm"}). The code of the transformation is below.
*}+ −
+ −
ML %linenosgray{*fun unabs_def ctxt def =+ −
let+ −
val (lhs, rhs) = Thm.dest_equals (cprop_of def)+ −
val xs = strip_abs_vars (term_of rhs)+ −
val (_, ctxt') = Variable.add_fixes (map fst xs) ctxt+ −
+ −
val thy = ProofContext.theory_of ctxt'+ −
val cxs = map (cterm_of thy o Free) xs+ −
val new_lhs = Drule.list_comb (lhs, cxs)+ −
+ −
fun get_conv [] = Conv.rewr_conv def+ −
| get_conv (_::xs) = Conv.fun_conv (get_conv xs)+ −
in+ −
get_conv xs new_lhs |> + −
singleton (ProofContext.export ctxt' ctxt)+ −
end*}+ −
+ −
text {*+ −
  In Line 3 we destruct the meta-equality into the @{ML_type cterm}s
  corresponding to the left-hand and right-hand side of the meta-equality. The
  assumption in @{ML unabs_def} is that the right-hand side is an
  abstraction. We compute the possibly empty list of abstracted variables in
  Line 4 using the function @{ML_ind strip_abs_vars}. For this we have to
  transform the @{ML_type cterm} into a @{ML_type term}. In Line 5 we
  import these variables into the context @{ML_text ctxt'}, in order to
  export them again in Line 15. In Line 8 we certify the list of variables,
  which in turn we apply to the @{ML_text lhs} of the definition using the
  function @{ML_ind list_comb in Drule}. In Lines 11 and 12 we generate the
  conversion according to the length of the list of (abstracted) variables. If
  there are none, then we just return the conversion corresponding to the
  original definition. If there are variables, then we have to prefix this
  conversion with @{ML_ind fun_conv in Conv}. Now in Line 14 we only have to
  apply the generated conversion to the new left-hand side in order to obtain the
  theorem we want to construct. As mentioned above, in Line 15 we only have to
  export the theorem from the context @{ML_text ctxt'} in order to produce 
  meta-variables for the theorem. An example is as follows.
+ −
@{ML_response_fake [display, gray]+ −
"unabs_def @{context} @{thm id2_def}"+ −
"id2 ?x1 \<equiv> ?x1"}+ −
*}+ −
+ −
text {*+ −
To sum up this section, conversions are more general than the simplifier+ −
  or simprocs, but you have to do more work yourself. Conversions are also
  often much less efficient than the simplifier. The advantage of conversions, 
  however, is that they leave much less room for non-termination.
+ −
\begin{exercise}\label{ex:addconversion}+ −
Write a tactic that does the same as the simproc in exercise+ −
\ref{ex:addsimproc}, but is based on conversions. You can make+ −
the same assumptions as in \ref{ex:addsimproc}. + −
\end{exercise}+ −
+ −
\begin{exercise}\label{ex:compare}+ −
Compare your solutions of Exercises~\ref{ex:addsimproc} and \ref{ex:addconversion},+ −
and try to determine which way of rewriting such terms is faster. For this you might + −
have to construct quite large terms. Also see Recipe \ref{rec:timing} for information + −
about timing.+ −
\end{exercise}+ −
+ −
\begin{readmore}+ −
See @{ML_file "Pure/conv.ML"} for more information about conversion combinators. + −
Some basic conversions are defined in @{ML_file "Pure/thm.ML"},+ −
@{ML_file "Pure/drule.ML"} and @{ML_file "Pure/meta_simplifier.ML"}.+ −
\end{readmore}+ −
+ −
*}+ −
+ −
text {*+ −
(FIXME: check whether @{ML Pattern.match_rew} and @{ML Pattern.rewrite_term}+ −
are of any use/efficient)+ −
*}+ −
+ −
+ −
section {* Declarations (TBD) *}+ −
+ −
section {* Structured Proofs (TBD) *}+ −
+ −
text {* TBD *}+ −
+ −
lemma True+ −
proof+ −
+ −
{+ −
fix A B C+ −
assume r: "A & B \<Longrightarrow> C"+ −
assume A B+ −
then have "A & B" ..+ −
then have C by (rule r)+ −
}+ −
+ −
{+ −
fix A B C+ −
assume r: "A & B \<Longrightarrow> C"+ −
assume A B+ −
note conjI [OF this]+ −
note r [OF this]+ −
}+ −
oops+ −
+ −
ML {* 
val ctxt0 = @{context};
val ctxt = ctxt0;
(* fix the variables A, B and C *)
val (_, ctxt) = Variable.add_fixes ["A", "B", "C"] ctxt;
(* assume the rule A & B ==> C, then the assumptions A and B *)
val ([r], ctxt) = Assumption.add_assumes [@{cprop "A & B \<Longrightarrow> C"}] ctxt
val (this, ctxt) = Assumption.add_assumes [@{cprop "A"}, @{cprop "B"}] ctxt;
(* derive A & B by conjI, then C by the rule r *)
val this = [@{thm conjI} OF this]; 
val this = r OF this;
(* discharge the assumptions and generalise the fixed variables *)
val this = Assumption.export false ctxt ctxt0 this 
val this = Variable.export ctxt ctxt0 [this] 
*}
+ −
+ −
+ −
end+ −