--- a/ProgTutorial/Tactical.thy Sat May 30 11:12:46 2009 +0200
+++ b/ProgTutorial/Tactical.thy Sat May 30 17:40:20 2009 +0200
@@ -56,12 +56,12 @@
@{text xs} that will be generalised once the goal is proved (in our case
@{text P} and @{text Q}). The @{text "tac"} is the tactic that proves the goal;
it can make use of the local assumptions (there are none in this example).
- The tactics @{ML etac}, @{ML rtac} and @{ML atac} in the code above correspond
+ The tactics @{ML [index] etac}, @{ML [index] rtac} and @{ML [index] atac} in the code above correspond
roughly to @{text erule}, @{text rule} and @{text assumption}, respectively.
- The operator @{ML THEN} strings the tactics together.
+ The operator @{ML [index] THEN} strings the tactics together.
\begin{readmore}
- To learn more about the function @{ML Goal.prove} see \isccite{sec:results}
+ To learn more about the function @{ML [index] prove in Goal} see \isccite{sec:results}
and the file @{ML_file "Pure/goal.ML"}. See @{ML_file
"Pure/tactic.ML"} and @{ML_file "Pure/tctical.ML"} for the code of basic
tactics and tactic combinators; see also Chapters 3 and 4 in the old
@@ -119,7 +119,7 @@
THEN' atac)*}text_raw{*\label{tac:footacprime}*}
text {*
- where @{ML THEN'} is used instead of @{ML THEN}. With @{ML foo_tac'} you can give
+ where @{ML [index] THEN'} is used instead of @{ML THEN}. With @{ML foo_tac'} you can give
the number for the subgoal explicitly when the tactic is
called. So in the next proof you can first discharge the second subgoal, and
subsequently the first.
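  A proof in this style might look as follows (the lemma statement is only an
  illustration; @{ML foo_tac'} is the tactic defined above):

(* illustrative lemma *)
lemma shows "P1 \<or> Q1 \<Longrightarrow> Q1 \<or> P1"
  and   "P2 \<or> Q2 \<Longrightarrow> Q2 \<or> P2"
apply(tactic {* foo_tac' 2 *})
apply(tactic {* foo_tac' 1 *})
(*<*)oops(*>*)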
@@ -160,7 +160,7 @@
willy-nilly; only in very grave failure situations should a tactic raise the
exception @{text THM}.
- The simplest tactics are @{ML no_tac} and @{ML all_tac}. The first returns
+ The simplest tactics are @{ML [index] no_tac} and @{ML [index] all_tac}. The first returns
the empty sequence and is defined as
*}
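  Roughly, in terms of lazy sequences of goal states (a sketch):

ML{*fun no_tac thm = Seq.empty*}

  while @{ML all_tac} returns its goal state as a singleton sequence.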
@@ -333,7 +333,7 @@
section {* Simple Tactics\label{sec:simpletacs} *}
text {*
- Let us start with explaining the simple tactic @{ML print_tac}, which is quite useful
+ Let us start by explaining the simple tactic @{ML [index] print_tac}, which is quite useful
for low-level debugging of tactics. It just prints out a message and the current
goal state. Unlike @{ML my_print_tac} shown earlier, it prints the goal state
as the user would see it. For example, processing the proof
@@ -353,7 +353,7 @@
text {*
A simple tactic for easy discharge of any proof obligations is
- @{ML SkipProof.cheat_tac}. This tactic corresponds to
+ @{ML [index] cheat_tac in SkipProof}. This tactic corresponds to
the Isabelle command \isacommand{sorry} and is sometimes useful
during the development of tactics.
*}
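  For example, the following (purely illustrative) lemma can be ``discharged''
  this way; here we assume @{ML cheat_tac in SkipProof} is given the current
  theory as argument:

(* illustrative lemma *)
lemma shows "False"
apply(tactic {* SkipProof.cheat_tac @{theory} *})
(*<*)oops(*>*)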
@@ -369,7 +369,7 @@
This tactic, however, works only if the quick-and-dirty mode of Isabelle
is switched on.
- Another simple tactic is the function @{ML atac}, which, as shown in the previous
+ Another simple tactic is the function @{ML [index] atac}, which, as shown in the previous
section, corresponds to the assumption command.
*}
@@ -381,7 +381,8 @@
(*<*)oops(*>*)
text {*
- Similarly, @{ML rtac}, @{ML dtac}, @{ML etac} and @{ML ftac} correspond (roughly)
+ Similarly, @{ML [index] rtac}, @{ML [index] dtac}, @{ML [index] etac} and
+ @{ML [index] ftac} correspond (roughly)
to @{text rule}, @{text drule}, @{text erule} and @{text frule},
respectively. Each of them takes a theorem as argument and attempts to
apply it to a goal. Below are three self-explanatory examples.
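  One of them might be of the following form (the goal is only an illustration):

(* illustrative goal *)
lemma shows "P \<and> Q"
apply(tactic {* rtac @{thm conjI} 1 *})
(*<*)oops(*>*)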
@@ -409,7 +410,7 @@
(*<*)oops(*>*)
text {*
- The function @{ML resolve_tac} is similar to @{ML rtac}, except that it
+ The function @{ML [index] resolve_tac} is similar to @{ML [index] rtac}, except that it
expects a list of theorems as arguments. From this list it will apply the
first applicable theorem (later theorems that are also applicable can be
explored via the lazy sequences mechanism). Given the code
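  which might, for instance, be of the following shape (the name
  @{text resolve_xmp_tac} is chosen only for illustration):

ML{*(* name chosen for illustration *)
val resolve_xmp_tac = resolve_tac [@{thm impI}, @{thm conjI}]*}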
@@ -432,10 +433,11 @@
text {*
Similar versions taking a list of theorems exist for the tactics
- @{ML dtac} (@{ML dresolve_tac}), @{ML etac} (@{ML eresolve_tac}) and so on.
+ @{ML dtac} (@{ML [index] dresolve_tac}), @{ML etac}
+ (@{ML [index] eresolve_tac}) and so on.
- Another simple tactic is @{ML cut_facts_tac}. It inserts a list of theorems
+ Another simple tactic is @{ML [index] cut_facts_tac}. It inserts a list of theorems
into the assumptions of the current goal state. For example
*}
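  the following inserts two (arbitrarily chosen) definitions as assumptions
  into the goal state:

(* illustrative goal and theorems *)
lemma shows "True \<noteq> False"
apply(tactic {* cut_facts_tac [@{thm True_def}, @{thm False_def}] 1 *})
(*<*)oops(*>*)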
@@ -470,7 +472,7 @@
should be, then this situation can be avoided by introducing a more
constrained version of the @{text bspec}-rule. Such constraints can be given by
pre-instantiating theorems with other theorems. One function to do this is
- @{ML RS}
+ @{ML [index] RS}
@{ML_response_fake [display,gray]
"@{thm disjI1} RS @{thm conjI}" "\<lbrakk>?P1; ?Q\<rbrakk> \<Longrightarrow> (?P1 \<or> ?Q1) \<and> ?Q"}
@@ -484,7 +486,7 @@
"*** Exception- THM (\"RSN: no unifiers\", 1,
[\"\<lbrakk>?P; ?Q\<rbrakk> \<Longrightarrow> ?P \<and> ?Q\", \"\<lbrakk>?P \<longrightarrow> ?Q; ?P\<rbrakk> \<Longrightarrow> ?Q\"]) raised"}
- then the function raises an exception. The function @{ML RSN} is similar to @{ML RS}, but
+ then the function raises an exception. The function @{ML [index] RSN} is similar to @{ML RS}, but
takes an additional number as argument that makes explicit which premise
should be instantiated.
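  For example, instantiating the second premise of @{thm [source] conjI} with
  @{thm [source] disjI1} can be written as follows (the exact renaming of the
  schematic variables in the result may differ):

  @{ML_response_fake [display,gray]
  "@{thm disjI1} RSN (2, @{thm conjI})"
  "\<lbrakk>?P; ?P1\<rbrakk> \<Longrightarrow> ?P \<and> (?P1 \<or> ?Q1)"}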
@@ -497,14 +499,14 @@
"no_vars @{context} (@{thm disjI1} RS @{thm conjI})" "\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> (P \<or> Qa) \<and> Q"}
If you want to instantiate more than one premise of a theorem, you can use
- the function @{ML MRS}:
+ the function @{ML [index] MRS}:
@{ML_response_fake [display,gray]
"no_vars @{context} ([@{thm disjI1}, @{thm disjI2}] MRS @{thm conjI})"
"\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> (P \<or> Qa) \<and> (Pa \<or> Q)"}
If you need to instantiate lists of theorems, you can use the
- functions @{ML RL} and @{ML MRL}. For example in the code below,
+ functions @{ML RL} and @{ML [index] MRL}. For example in the code below,
every theorem in the second list is instantiated with every
theorem in the first.
@@ -523,7 +525,7 @@
Often proofs on the ML-level involve elaborate operations on assumptions and
@{text "\<And>"}-quantified variables. To do such operations using the basic tactics
shown so far is very unwieldy and brittle. Some convenience and
- safety is provided by @{ML SUBPROOF}. This tactic fixes the parameters
+ safety is provided by @{ML [index] SUBPROOF}. This tactic fixes the parameters
and binds the various components of a goal state to a record.
To see what happens, assume the function defined in Figure~\ref{fig:sptac}, which
takes a record and just prints out the content of this record (using the
@@ -586,7 +588,7 @@
@{text asms}, but also as @{ML_type thm} to @{text prems}.
Notice also that we had to append @{text [quotes] "?"} to the
- \isacommand{apply}-command. The reason is that @{ML SUBPROOF} normally
+ \isacommand{apply}-command. The reason is that @{ML [index] SUBPROOF} normally
expects that the subgoal is solved completely. Since in the function @{ML
sp_tac} we returned the tactic @{ML no_tac}, the subproof obviously
fails. The question-mark allows us to recover from this failure in a
@@ -663,8 +665,8 @@
also described in \isccite{sec:results}.
\end{readmore}
- Similar but less powerful functions than @{ML SUBPROOF} are @{ML SUBGOAL}
- and @{ML CSUBGOAL}. They allow you to inspect a given subgoal (the former
+ Similar but less powerful functions than @{ML SUBPROOF} are @{ML [index] SUBGOAL}
+ and @{ML [index] CSUBGOAL}. They allow you to inspect a given subgoal (the former
presents the subgoal as a @{ML_type term}, while the latter presents it as a @{ML_type
cterm}). With this you can implement a tactic that applies a rule according
to the topmost logic connective in the subgoal (to illustrate this we only
@@ -775,7 +777,7 @@
in what follows we will usually prefer it over the ``unprimed'' one.
If there is a list of tactics that should all be tried out in
- sequence, you can use the combinator @{ML EVERY'}. For example
+ sequence, you can use the combinator @{ML [index] EVERY'}. For example
the function @{ML foo_tac'} from page~\pageref{tac:footacprime} can also
be written as:
*}
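  namely by giving the component tactics as a list (the name @{text foo_tac''}
  is only illustrative):

ML{*(* name chosen for illustration *)
val foo_tac'' = EVERY' [etac @{thm disjE}, rtac @{thm disjI2},
                        atac, rtac @{thm disjI1}, atac]*}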
@@ -797,11 +799,11 @@
text {*
and call @{ML foo_tac1}.
- With the combinators @{ML THEN'}, @{ML EVERY'} and @{ML EVERY1} it must be
+ With the combinators @{ML THEN'}, @{ML EVERY'} and @{ML [index] EVERY1} it must be
guaranteed that all component tactics successfully apply; otherwise the
whole tactic will fail. If you rather want to try out a number of tactics,
- then you can use the combinator @{ML ORELSE'} for two tactics, and @{ML
- FIRST'} (or @{ML FIRST1}) for a list of tactics. For example, the tactic
+ then you can use the combinator @{ML [index] ORELSE'} for two tactics, and @{ML
+ [index] FIRST'} (or @{ML [index] FIRST1}) for a list of tactics. For example, the tactic
*}
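  might be of the following shape (the name is only illustrative):

ML{*(* name chosen for illustration *)
val orelse_xmp_tac = rtac @{thm disjI1} ORELSE' rtac @{thm conjI}*}

  which applies @{thm [source] disjI1} where possible and otherwise falls back
  to @{thm [source] conjI}.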
@@ -853,7 +855,7 @@
text {*
Since such repeated application of a tactic to \emph{all} subgoals in reverse
order is quite common, there is the tactic combinator
- @{ML ALLGOALS} that simplifies this. Using this combinator you can simply
+ @{ML [index] ALLGOALS} that simplifies this. Using this combinator you can simply
write: *}
lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
@@ -866,7 +868,7 @@
text {*
Remember that we chose to implement @{ML select_tac'} so that it
always succeeds by adding @{ML all_tac} at the end of the tactic
- list. The same can be achieved with the tactic combinator @{ML TRY}.
+ list. The same can be achieved with the tactic combinator @{ML [index] TRY}.
For example:
*}
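  a sketch of such a definition, assuming the same introduction rules as in
  @{ML select_tac'}, is:

ML{*(* body is only a sketch *)
val select_tac'' = TRY o FIRST' [rtac @{thm conjI}, rtac @{thm impI},
                                 rtac @{thm notI}, rtac @{thm allI}]*}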
@@ -899,7 +901,7 @@
from the end of the theorem list. As a result @{ML select_tac'} would only
succeed on goals where it can make progress. But for the sake of argument,
let us suppose that this deletion is \emph{not} an option. In such cases, you can
- use the combinator @{ML CHANGED} to make sure the subgoal has been changed
+ use the combinator @{ML [index] CHANGED} to make sure the subgoal has been changed
by the tactic. Because now
*}
@@ -917,7 +919,7 @@
text {*
We can further extend @{ML select_tac'} so that it applies not just to the topmost
connective, but also to the ones immediately ``underneath'', i.e.~analyses the goal
- completely. For this you can use the tactic combinator @{ML REPEAT}. As an example
+ completely. For this you can use the tactic combinator @{ML [index] REPEAT}. As an example
suppose the following tactic
*}
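  which could be defined along the following lines (the body is only a sketch):

ML{*(* body is only a sketch *)
val repeat_xmp_tac = REPEAT (CHANGED (select_tac' 1))*}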
@@ -938,7 +940,7 @@
Here it is crucial that @{ML select_tac'} is prefixed with @{ML CHANGED},
because otherwise @{ML REPEAT} runs into an infinite loop (it applies the
tactic as long as it succeeds). The function
- @{ML REPEAT1} is similar, but runs the tactic at least once (failing if
+ @{ML [index] REPEAT1} is similar, but runs the tactic at least once (failing if
this is not possible).
If you are after the ``primed'' version of @{ML repeat_xmp_tac}, then you
@@ -954,7 +956,7 @@
and @{ML repeat_xmp_tac'} are not yet quite what we are after: the problem is
that goals 2 and 3 are not analysed. This is because the tactic
is applied repeatedly only to the first subgoal. To analyse also all
- resulting subgoals, you can use the tactic combinator @{ML REPEAT_ALL_NEW}.
+ resulting subgoals, you can use the tactic combinator @{ML [index] REPEAT_ALL_NEW}.
Suppose the tactic
*}
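  which might be of the following form (the name is only illustrative):

ML{*(* name chosen for illustration *)
val repeat_all_new_xmp_tac = REPEAT_ALL_NEW (CHANGED o select_tac')*}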
@@ -1007,7 +1009,7 @@
Sometimes this leads to confusing behaviour of tactics and also has
the potential to explode the search space for tactics.
These problems can be avoided by prefixing the tactic with the tactic
- combinator @{ML DETERM}.
+ combinator @{ML [index] DETERM}.
*}
lemma "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
@@ -1044,7 +1046,7 @@
text {*
then the tactic @{ML select_tac''} will be tried out and any failure is caught.
We again have to use the construction with \mbox{@{text "TRY o ..."}} since there is
- no primed version of @{ML TRY}. The tactic combinator @{ML TRYALL} will try out
+ no primed version of @{ML [index] TRY}. The tactic combinator @{ML [index] TRYALL} will try out
a tactic on all subgoals. For example the tactic
*}
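  might look as follows (the name is only illustrative):

ML{*(* name chosen for illustration *)
val triv_tac = TRYALL (rtac @{thm TrueI} ORELSE' etac @{thm FalseE})*}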
@@ -1053,7 +1055,7 @@
text {*
will solve all trivial subgoals involving @{term True} or @{term "False"}.
- (FIXME: say something about @{ML COND} and COND')
+ (FIXME: say something about @{ML [index] COND} and COND')
\begin{readmore}
Most tactic combinators described in this section are defined in @{ML_file "Pure/tctical.ML"}.
@@ -1079,11 +1081,11 @@
\begin{isabelle}
\begin{center}
\begin{tabular}{l@ {\hspace{2cm}}l}
- @{ML simp_tac} & @{text "(simp (no_asm))"} \\
- @{ML asm_simp_tac} & @{text "(simp (no_asm_simp))"} \\
- @{ML full_simp_tac} & @{text "(simp (no_asm_use))"} \\
- @{ML asm_lr_simp_tac} & @{text "(simp (asm_lr))"} \\
- @{ML asm_full_simp_tac} & @{text "(simp)"}
+ @{ML [index] simp_tac} & @{text "(simp (no_asm))"} \\
+ @{ML [index] asm_simp_tac} & @{text "(simp (no_asm_simp))"} \\
+ @{ML [index] full_simp_tac} & @{text "(simp (no_asm_use))"} \\
+ @{ML [index] asm_lr_simp_tac} & @{text "(simp (asm_lr))"} \\
+ @{ML [index] asm_full_simp_tac} & @{text "(simp)"}
\end{tabular}
\end{center}
\end{isabelle}
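  Each of these tactics expects a simpset and a subgoal number as arguments.
  A minimal invocation, using the simpset of the current context, might look
  as follows (the lemma is only an illustration):

(* illustrative lemma *)
lemma shows "Suc (1 + 2) < 3 + 2"
apply(tactic {* simp_tac @{simpset} 1 *})
(*<*)oops(*>*)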
@@ -1164,9 +1166,9 @@
\begin{isabelle}
\begin{tabular}{ll}
- @{ML addsimps} & @{ML delsimps}\\
- @{ML addcongs} & @{ML delcongs}\\
- @{ML addsimprocs} & @{ML delsimprocs}\\
+ @{ML [index] addsimps} & @{ML [index] delsimps}\\
+ @{ML [index] addcongs} & @{ML [index] delcongs}\\
+ @{ML [index] addsimprocs} & @{ML [index] delsimprocs}\\
\end{tabular}
\end{isabelle}
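  These are infix operations that take a simpset and a list of theorems (or,
  for @{ML addsimprocs} and @{ML delsimprocs}, a list of simprocs) and return
  a new simpset. A small sketch, using the simpset of the current context:

ML{*(* a sketch; the theorems are chosen only for illustration *)
val my_ss = @{simpset} addsimps [@{thm Let_def}] delsimps [@{thm if_True}]*}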
@@ -1196,7 +1198,7 @@
text_raw {*
\end{isabelle}
\end{minipage}
-\caption{The function @{ML Simplifier.dest_ss} returns a record containing
+\caption{The function @{ML [index] dest_ss in Simplifier} returns a record containing
all printable information stored in a simpset. We are here only interested in the
simplification rules, congruence rules and simprocs.\label{fig:printss}}
\end{figure} *}
@@ -1204,7 +1206,7 @@
text {*
To see how they work, consider the function in Figure~\ref{fig:printss}, which
prints out some parts of a simpset. If you use it to print out the components of the
- empty simpset, i.e., @{ML empty_ss}
+ empty simpset, i.e., @{ML [index] empty_ss}
@{ML_response_fake [display,gray]
"print_ss @{context} empty_ss"
@@ -1251,7 +1253,7 @@
expects this form of the simplification and congruence rules. However, even
when adding these lemmas to @{ML empty_ss} we do not end up with anything useful yet.
- In the context of HOL, the first really useful simpset is @{ML HOL_basic_ss}. While
+ In the context of HOL, the first really useful simpset is @{ML [index] HOL_basic_ss}. While
printing out the components of this simpset
@{ML_response_fake [display,gray]
@@ -1278,9 +1280,9 @@
text {*
This behaviour is not due to the simplification rules, but to how the subgoaler, solver
- and looper are set up in @{ML HOL_basic_ss}.
+ and looper are set up in @{ML [index] HOL_basic_ss}.
- The simpset @{ML HOL_ss} is an extension of @{ML HOL_basic_ss} containing
+ The simpset @{ML [index] HOL_ss} is an extension of @{ML HOL_basic_ss} containing
already many useful simplification and congruence rules for the logical
connectives in HOL.
@@ -1300,7 +1302,7 @@
The simplifier is often used to unfold definitions in a proof. For this the
- simplifier implements the tactic @{ML rewrite_goals_tac}.\footnote{FIXME:
+ simplifier implements the tactic @{ML [index] rewrite_goals_tac}.\footnote{FIXME:
see LocalDefs infrastructure.} Suppose for example the
definition
*}
@@ -1614,7 +1616,7 @@
Here the pattern is given as @{ML_type term} (instead of @{ML_type cterm}).
The function also takes a list of patterns that can trigger the simproc.
Now the simproc is set up and can be explicitly added using
- @{ML addsimprocs} to a simpset whenever
+ @{ML [index] addsimprocs} to a simpset whenever
needed.
Simprocs are applied from inside to outside and from left to right. You can
@@ -1700,7 +1702,7 @@
| dest_suc_trm t = snd (HOLogic.dest_number t)*}
text {*
- It uses the library function @{ML dest_number in HOLogic} that transforms
+ It uses the library function @{ML [index] dest_number in HOLogic} that transforms
(Isabelle) terms, like @{term "0::nat"}, @{term "1::nat"}, @{term "2::nat"} and so
on, into integer values. This function raises the exception @{ML TERM} if
the term is not a number. The next function expects a pair consisting of a term
@@ -1820,21 +1822,21 @@
text {*
whereby the produced theorem is always a meta-equality. A simple conversion
- is the function @{ML "Conv.all_conv"}, which maps a @{ML_type cterm} to an
+ is the function @{ML [index] all_conv in Conv}, which maps a @{ML_type cterm} to an
instance of the (meta)reflexivity theorem. For example:
@{ML_response_fake [display,gray]
"Conv.all_conv @{cterm \"Foo \<or> Bar\"}"
"Foo \<or> Bar \<equiv> Foo \<or> Bar"}
- Another simple conversion is @{ML Conv.no_conv} which always raises the
+ Another simple conversion is @{ML [index] no_conv in Conv} which always raises the
exception @{ML CTERM}.
@{ML_response_fake [display,gray]
"Conv.no_conv @{cterm True}"
"*** Exception- CTERM (\"no conversion\", []) raised"}
- A more interesting conversion is the function @{ML "Thm.beta_conversion"}: it
+ A more interesting conversion is the function @{ML [index] beta_conversion in Thm}: it
produces a meta-equation between a term and its beta-normal form. For example
@{ML_response_fake [display,gray]
@@ -1850,7 +1852,7 @@
Note that the actual response in this example is @{term "2 + 10 \<equiv> 2 + (10::nat)"},
since the pretty-printer for @{ML_type cterm}s eta-normalises terms.
But the way we constructed the term (using the function
- @{ML Thm.capply}, which is the application @{ML $} for @{ML_type cterm}s)
+ @{ML [index] capply in Thm}, which is the application @{ML $} for @{ML_type cterm}s)
ensures that the left-hand side must contain beta-redexes. Indeed
if we obtain the ``raw'' representation of the produced theorem, we
can see the difference:
@@ -1904,15 +1906,15 @@
end"
"True \<and> (Foo \<longrightarrow> Bar) \<equiv> Foo \<longrightarrow> Bar"}
- Note, however, that the function @{ML Conv.rewr_conv} only rewrites the
+ Note, however, that the function @{ML [index] rewr_conv in Conv} only rewrites the
outer-most level of the @{ML_type cterm}. If the given @{ML_type cterm} does not match
exactly the
- left-hand side of the theorem, then @{ML Conv.rewr_conv} fails by raising
+ left-hand side of the theorem, then @{ML [index] rewr_conv in Conv} fails by raising
the exception @{ML CTERM}.
This very primitive way of rewriting can be made more powerful by
combining several conversions into one. For this you can use conversion
- combinators. The simplest conversion combinator is @{ML then_conv},
+ combinators. The simplest conversion combinator is @{ML [index] then_conv},
which applies one conversion after another. For example
@{ML_response_fake [display,gray]
@@ -1929,7 +1931,7 @@
@{thm [source] true_conj1}. (Recall the problem with the pretty-printer
normalising all terms.)
- The conversion combinator @{ML else_conv} tries out the
+ The conversion combinator @{ML [index] else_conv} tries out the
first one, and if it does not apply, tries the second. For example
@{ML_response_fake [display,gray]
@@ -1944,10 +1946,10 @@
Here the conversion of @{thm [source] true_conj1} only applies
in the first case, but fails in the second. The whole conversion
- does not fail, however, because the combinator @{ML Conv.else_conv} will then
- try out @{ML Conv.all_conv}, which always succeeds.
+ does not fail, however, because the combinator @{ML else_conv in Conv} will then
+ try out @{ML all_conv in Conv}, which always succeeds.
- The conversion combinator @{ML Conv.try_conv} constructs a conversion
+ The conversion combinator @{ML [index] try_conv in Conv} constructs a conversion
which is tried out on a term, but in case of failure just does nothing.
For example
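  rewriting with @{thm [source] true_conj1} on a term where it does not apply
  simply returns the input unchanged:

  @{ML_response_fake [display,gray]
  "Conv.try_conv (Conv.rewr_conv @{thm true_conj1}) @{cterm \"True \<or> P\"}"
  "True \<or> P \<equiv> True \<or> P"}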
@@ -1958,7 +1960,7 @@
Apart from the function @{ML beta_conversion in Thm}, which is able to fully
beta-normalise a term, the conversions so far are restricted in that they
only apply to the outer-most level of a @{ML_type cterm}. In what follows we
- will lift this restriction. The combinator @{ML Conv.arg_conv} will apply
+ will lift this restriction. The combinator @{ML [index] arg_conv in Conv} will apply
the conversion to the argument of an application, that is the term
must be of the form @{ML "t1 $ t2" for t1 t2} and the conversion is applied
to @{text t2}. For example
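  assuming @{thm [source] true_conj1} rewrites @{text "True \<and> Q"} to
  @{text "Q"}, we obtain roughly:

  @{ML_response_fake [display,gray]
  "Conv.arg_conv (Conv.rewr_conv @{thm true_conj1}) @{cterm \"P \<or> (True \<and> Q)\"}"
  "P \<or> (True \<and> Q) \<equiv> P \<or> Q"}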
@@ -1975,10 +1977,10 @@
The reason for this behaviour is that @{text "(op \<or>)"} expects two
arguments. Therefore the term must be of the form @{text "(Const \<dots> $ t1) $ t2"}. The
conversion is then applied to @{text "t2"} which in the example above
- stands for @{term "True \<and> Q"}. The function @{ML Conv.fun_conv} applies
+ stands for @{term "True \<and> Q"}. The function @{ML [index] fun_conv in Conv} applies
the conversion to the function part of an application, that is to @{text "t1"} in a term of the form @{ML "t1 $ t2" for t1 t2}.
- The function @{ML Conv.abs_conv} applies a conversion under an abstraction.
+ The function @{ML [index] abs_conv in Conv} applies a conversion under an abstraction.
For example:
@{ML_response_fake [display,gray]
@@ -1991,7 +1993,7 @@
"\<lambda>P. True \<and> P \<and> Foo \<equiv> \<lambda>P. P \<and> Foo"}
Note that this conversion needs a context as an argument. The conversion that
- goes under an application is @{ML Conv.combination_conv}. It expects two conversions
+ goes under an application is @{ML [index] combination_conv in Conv}. It expects two conversions
as arguments, each of which is applied to the corresponding ``branch'' of the
application.
@@ -2059,7 +2061,7 @@
text {*
So far we only applied conversions to @{ML_type cterm}s. Conversions can, however,
- also work on theorems using the function @{ML "Conv.fconv_rule"}. As an example,
+ also work on theorems using the function @{ML [index] fconv_rule in Conv}. As an example,
consider the conversion @{ML all_true1_conv} and the lemma:
*}
@@ -2074,12 +2076,12 @@
"?P \<or> \<not> ?P"}
Finally, conversions can also be turned into tactics and then applied to
- goal states. This can be done with the help of the function @{ML CONVERSION},
+ goal states. This can be done with the help of the function @{ML [index] CONVERSION},
and also some predefined conversion combinators that traverse a goal
- state. The combinators for the goal state are: @{ML Conv.params_conv} for
+ state. The combinators for the goal state are: @{ML [index] params_conv in Conv} for
converting under parameters (i.e.~where goals are of the form @{text "\<And>x. P \<Longrightarrow>
- Q"}); the function @{ML Conv.prems_conv} for applying a conversion to all
- premises of a goal, and @{ML Conv.concl_conv} for applying a conversion to
+ Q"}); the function @{ML [index] prems_conv in Conv} for applying a conversion to all
+ premises of a goal, and @{ML [index] concl_conv in Conv} for applying a conversion to
the conclusion of a goal.
Assume we want to apply @{ML all_true1_conv} only in the conclusion