ProgTutorial/Tactical.thy
changeset 189 069d525f8f1d
parent 186 371e4375c994
child 190 ca0ac2e75f6d
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Tactical.thy	Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,2121 @@
+theory Tactical
+imports Base FirstSteps
+begin
+
+chapter {* Tactical Reasoning\label{chp:tactical} *}
+
+text {*
+  The main reason for descending to the ML-level of Isabelle is to be able to
+  implement automatic proof procedures. Such proof procedures usually lessen 
+  the burden of manual reasoning considerably, for example, when introducing
+  new definitions. These proof procedures are centred around refining a goal
+  state using tactics. This is similar to the \isacommand{apply}-style
+  reasoning at the user-level, where goals are modified in a sequence of proof
+  steps until all of them are solved. However, there are also more structured
+  operations available on the ML-level that help with the handling of
+  variables and assumptions.
+
+*}
+
+section {* Basics of Reasoning with Tactics*}
+
+text {*
+  To see how tactics work, let us first transcribe a simple \isacommand{apply}-style proof 
+  into ML. Consider the following proof:
+*}
+
+lemma disj_swap: "P \<or> Q \<Longrightarrow> Q \<or> P"
+apply(erule disjE)
+apply(rule disjI2)
+apply(assumption)
+apply(rule disjI1)
+apply(assumption)
+done
+
+text {*
+  This proof translates to the following ML-code.
+  
+@{ML_response_fake [display,gray]
+"let
+  val ctxt = @{context}
+  val goal = @{prop \"P \<or> Q \<Longrightarrow> Q \<or> P\"}
+in
+  Goal.prove ctxt [\"P\", \"Q\"] [] goal 
+   (fn _ => 
+      etac @{thm disjE} 1
+      THEN rtac @{thm disjI2} 1
+      THEN atac 1
+      THEN rtac @{thm disjI1} 1
+      THEN atac 1)
+end" "?P \<or> ?Q \<Longrightarrow> ?Q \<or> ?P"}
+  
+  To start the proof, the function @{ML "Goal.prove"}~@{text "ctxt xs As C
+  tac"} sets up a goal state for proving the goal @{text C} 
+  (that is @{prop "P \<or> Q \<Longrightarrow> Q \<or> P"} in the proof at hand) under the
+  assumptions @{text As} (which happen to be empty here) with the variables
+  @{text xs} that will be generalised once the goal is proved (in our case
+  @{text P} and @{text Q}). The @{text "tac"} is the tactic that proves the goal;
+  it can make use of the local assumptions (there are none in this example). 
+  The functions @{ML etac}, @{ML rtac} and @{ML atac} in the code above correspond to 
+  @{text erule}, @{text rule} and @{text assumption}, respectively. 
+  The operator @{ML THEN} strings the tactics together. 
+
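+  The assumption list of @{ML Goal.prove} can also be non-empty; the tactic 
+  then receives the assumptions as theorems through the record passed to it. 
+  A small sketch of our own, proving @{text "P \<or> Q"} from the local 
+  assumption @{text "P"} (the printed result is what we would expect):
+
+@{ML_response_fake [display,gray]
+"let
+  val ctxt = @{context}
+  val assm = @{prop \"P\"}
+  val goal = @{prop \"P \<or> Q\"}
+in
+  Goal.prove ctxt [\"P\", \"Q\"] [assm] goal 
+   (fn {prems, ...} => 
+      rtac @{thm disjI1} 1 THEN rtac (hd prems) 1)
+end" "?P \<Longrightarrow> ?P \<or> ?Q"}
+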
+  \begin{readmore}
+  To learn more about the function @{ML Goal.prove} see \isccite{sec:results}
+  and the file @{ML_file "Pure/goal.ML"}.  See @{ML_file
+  "Pure/tactic.ML"} and @{ML_file "Pure/tctical.ML"} for the code of basic
+  tactics and tactic combinators; see also Chapters 3 and 4 in the old
+  Isabelle Reference Manual, and Chapter 3 in the Isabelle/Isar Implementation Manual. 
+  \end{readmore}
+
+  Note that in the code above we use antiquotations for referencing the theorems. Many theorems
+  also have ML-bindings with the same name. Therefore, we could also just have
+  written @{ML "etac disjE 1"}, or in case where there are no ML-binding obtain
+  the theorem dynamically using the function @{ML thm}; for example 
+  \mbox{@{ML  "etac (thm \"disjE\") 1"}}. Both ways however are considered bad style! 
+  The reason
+  is that the binding for @{ML disjE} can be re-assigned by the user and thus
+  one does not have complete control over which theorem is actually
+  applied. This problem is nicely prevented by using antiquotations, because
+  then the theorems are fixed statically at compile-time.
+
+  During the development of automatic proof procedures, you will often find it 
+  necessary to test a tactic on examples. This can be conveniently 
+  done with the command \isacommand{apply}@{text "(tactic \<verbopen> \<dots> \<verbclose>)"}. 
+  Consider the following sequence of tactics
+*}
+
+ML{*val foo_tac = 
+      (etac @{thm disjE} 1
+       THEN rtac @{thm disjI2} 1
+       THEN atac 1
+       THEN rtac @{thm disjI1} 1
+       THEN atac 1)*}
+
+text {* and the Isabelle proof: *}
+
+lemma "P \<or> Q \<Longrightarrow> Q \<or> P"
+apply(tactic {* foo_tac *})
+done
+
+text {*
+  By using @{text "tactic \<verbopen> \<dots> \<verbclose>"} you can call from the 
+  user-level of Isabelle the tactic @{ML foo_tac} or 
+  any other function that returns a tactic.
+
+  The tactic @{ML foo_tac} is just a sequence of simple tactics stringed
+  together by @{ML THEN}. As can be seen, each simple tactic in @{ML foo_tac}
+  has a hard-coded number that stands for the subgoal analysed by the
+  tactic (@{text "1"} stands for the first, or top-most, subgoal). This hard-coding
+  of goals is sometimes wanted, but usually it is not. To avoid the explicit numbering, 
+  you can write\label{tac:footacprime}
+*}
+
+ML{*val foo_tac' = 
+      (etac @{thm disjE} 
+       THEN' rtac @{thm disjI2} 
+       THEN' atac 
+       THEN' rtac @{thm disjI1} 
+       THEN' atac)*}
+
+text {* 
+  and then give the number for the subgoal explicitly when the tactic is
+  called. So in the next proof you can first discharge the second subgoal, and
+  subsequently the first.
+*}
+
+lemma "P1 \<or> Q1 \<Longrightarrow> Q1 \<or> P1"
+   and "P2 \<or> Q2 \<Longrightarrow> Q2 \<or> P2"
+apply(tactic {* foo_tac' 2 *})
+apply(tactic {* foo_tac' 1 *})
+done
+
+text {*
+  This kind of addressing is more difficult to achieve when the goal is 
+  hard-coded inside the tactic. For most operators that combine tactics 
+  (@{ML THEN} is only one such operator) a ``primed'' version exists.
+
+  The tactics @{ML foo_tac} and @{ML foo_tac'} are very specific for
+  analysing goals of exactly the form @{prop "P \<or> Q \<Longrightarrow> Q \<or> P"}. If the goal is not
+  of this form, then these tactics return the error message:
+
+  \begin{isabelle}
+  @{text "*** empty result sequence -- proof command failed"}\\
+  @{text "*** At command \"apply\"."}
+  \end{isabelle}
+
+  This means they failed. The reason for this error message is that tactics 
+  are functions mapping a goal state to a (lazy) sequence of successor states. 
+  Hence the type of a tactic is:
+*}
+  
+ML{*type tactic = thm -> thm Seq.seq*}
+
+text {*
+  By convention, if a tactic fails, then it should return the empty sequence. 
+  Therefore, if you write your own tactics, they should not raise exceptions 
+  willy-nilly; only in very grave failure situations should a tactic raise the 
+  exception @{text THM}.
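+
+  A rough way to comply with this convention is to wrap a given tactic so 
+  that a @{text THM}-exception is turned into the empty sequence (a minimal 
+  sketch of our own; the name @{text try_thm_tac} is not part of Isabelle):
+*}
+
+ML{*fun try_thm_tac tac thm = 
+      tac thm handle THM _ => Seq.empty*}
+
+text {*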
+
+  The simplest tactics are @{ML no_tac} and @{ML all_tac}. The first returns
+  the empty sequence and is defined as
+*}
+
+ML{*fun no_tac thm = Seq.empty*}
+
+text {*
+  which means @{ML no_tac} always fails. The second returns the given theorem wrapped 
+  in a single member sequence; it is defined as
+*}
+
+ML{*fun all_tac thm = Seq.single thm*}
+
+text {*
+  which means @{ML all_tac} always succeeds, but also does not make any progress 
+  with the proof.
+
+  The lazy list of possible successor goal states shows through at the user-level 
+  of Isabelle when using the command \isacommand{back}. For instance in the 
+  following proof there are two possibilities for how to apply 
+  @{ML foo_tac'}: either using the first assumption or the second.
+*}
+
+lemma "\<lbrakk>P \<or> Q; P \<or> Q\<rbrakk> \<Longrightarrow> Q \<or> P"
+apply(tactic {* foo_tac' 1 *})
+back
+done
+
+
+text {*
+  By using \isacommand{back}, we construct the proof that uses the
+  second assumption. While in the proof above, it does not really matter which 
+  assumption is used, in more interesting cases provability might depend
+  on exploring different possibilities.
+  
+  \begin{readmore}
+  See @{ML_file "Pure/General/seq.ML"} for the implementation of lazy
+  sequences. In day-to-day Isabelle programming, however, one rarely 
+  constructs sequences explicitly, but uses the predefined tactics and 
+  tactic combinators instead.
+  \end{readmore}
+
+  It might be surprising that tactics, which transform
+  one goal state to the next, are functions from theorems to 
+  (sequences of) theorems. The surprise dissolves once one knows that every 
+  goal state is indeed a theorem. To shed more light on this,
+  let us modify the code of @{ML all_tac} to obtain the following
+  tactic
+*}
+
+ML{*fun my_print_tac ctxt thm =
+let
+  val _ = warning (str_of_thm ctxt thm)
+in 
+  Seq.single thm
+end*}
+
+text_raw {*
+  \begin{figure}[p]
+  \begin{boxedminipage}{\textwidth}
+  \begin{isabelle}
+*}
+lemma shows "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B" 
+apply(tactic {* my_print_tac @{context} *})
+
+txt{* \begin{minipage}{\textwidth}
+      @{subgoals [display]} 
+      \end{minipage}\medskip      
+
+      \begin{minipage}{\textwidth}
+      \small\colorbox{gray!20}{
+      \begin{tabular}{@ {}l@ {}}
+      internal goal state:\\
+      @{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}
+      \end{tabular}}
+      \end{minipage}\medskip
+*}
+
+apply(rule conjI)
+apply(tactic {* my_print_tac @{context} *})
+
+txt{* \begin{minipage}{\textwidth}
+      @{subgoals [display]} 
+      \end{minipage}\medskip
+
+      \begin{minipage}{\textwidth}
+      \small\colorbox{gray!20}{
+      \begin{tabular}{@ {}l@ {}}
+      internal goal state:\\
+      @{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> B) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}
+      \end{tabular}}
+      \end{minipage}\medskip
+*}
+
+apply(assumption)
+apply(tactic {* my_print_tac @{context} *})
+
+txt{* \begin{minipage}{\textwidth}
+      @{subgoals [display]} 
+      \end{minipage}\medskip
+
+      \begin{minipage}{\textwidth}
+      \small\colorbox{gray!20}{
+      \begin{tabular}{@ {}l@ {}}
+      internal goal state:\\
+      @{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> B) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}
+      \end{tabular}}
+      \end{minipage}\medskip
+*}
+
+apply(assumption)
+apply(tactic {* my_print_tac @{context} *})
+
+txt{* \begin{minipage}{\textwidth}
+      @{subgoals [display]} 
+      \end{minipage}\medskip 
+  
+      \begin{minipage}{\textwidth}
+      \small\colorbox{gray!20}{
+      \begin{tabular}{@ {}l@ {}}
+      internal goal state:\\
+      @{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"}
+      \end{tabular}}
+      \end{minipage}\medskip   
+   *}
+done
+text_raw {*  
+  \end{isabelle}
+  \end{boxedminipage}
+  \caption{The figure shows a proof where each intermediate goal state is
+  printed by the Isabelle system and by @{ML my_print_tac}. The latter shows
+  the goal state as represented internally (highlighted boxes). This
+  tactic shows that every goal state in Isabelle is represented by a theorem:
+  when you start the proof of \mbox{@{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"}} the theorem is
+  @{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}; when you finish the proof the
+  theorem is @{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"}.\label{fig:goalstates}}
+  \end{figure}
+*}
+
+
+text {*
+  which prints out the given theorem (using the string-function defined in
+  Section~\ref{sec:printing}) and then behaves like @{ML all_tac}. With this
+  tactic we are in the position to inspect every goal state in a
+  proof. Consider now the proof in Figure~\ref{fig:goalstates}: as can be seen, 
+  internally every goal state is an implication of the form
+
+  @{text[display] "A\<^isub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^isub>n \<Longrightarrow> (C)"}
+
+  where @{term C} is the goal to be proved and the @{term "A\<^isub>i"} are
+  the subgoals. So after setting up the lemma, the goal state is always of the
+  form @{text "C \<Longrightarrow> (C)"}; when the proof is finished we are left with @{text
+  "(C)"}. Since the goal @{term C} can potentially be an implication, there is
+  a ``protector'' wrapped around it (in the form of an outermost constant @{text
+  "Const (\"prop\", prop \<Rightarrow> prop)"}; however this constant
+  is invisible in the figure). This wrapper prevents the premises of @{text C} from being
+  mis-interpreted as open subgoals. While tactics can operate on the subgoals
+  (the @{text "A\<^isub>i"} above), they are expected to leave the conclusion
+  @{term C} intact, with the exception of possibly instantiating schematic
+  variables. If you use the predefined tactics, which we describe in the next
+  section, this will always be the case.
+ 
+  \begin{readmore}
+  For more information about the internals of goals see \isccite{sec:tactical-goals}.
+  \end{readmore}
+
+*}
+
+section {* Simple Tactics *}
+
+text {*
+  Let us start with explaining the simple tactic @{ML print_tac}, which is quite useful 
+  for low-level debugging of tactics. It just prints out a message and the current 
+  goal state. Unlike @{ML my_print_tac} shown earlier, it prints the goal state 
+  as the user would see it.  For example, processing the proof
+*}
+ 
+lemma shows "False \<Longrightarrow> True"
+apply(tactic {* print_tac "foo message" *})
+txt{*gives:\medskip
+
+     \begin{minipage}{\textwidth}\small
+     @{text "foo message"}\\[3mm] 
+     @{prop "False \<Longrightarrow> True"}\\
+     @{text " 1. False \<Longrightarrow> True"}\\
+     \end{minipage}
+   *}
+(*<*)oops(*>*)
+
+text {*
+  Another simple tactic is the function @{ML atac}, which, as shown in the previous
+  section, corresponds to the assumption command.
+*}
+
+lemma shows "P \<Longrightarrow> P"
+apply(tactic {* atac 1 *})
+txt{*\begin{minipage}{\textwidth}
+     @{subgoals [display]}
+     \end{minipage}*}
+(*<*)oops(*>*)
+
+text {*
+  Similarly, @{ML rtac}, @{ML dtac}, @{ML etac} and @{ML ftac} correspond
+  to @{text rule}, @{text drule}, @{text erule} and @{text frule}, 
+  respectively. Each of them takes a theorem as argument and attempts to 
+  apply it to a goal. Below are three self-explanatory examples.
+*}
+
+lemma shows "P \<and> Q"
+apply(tactic {* rtac @{thm conjI} 1 *})
+txt{*\begin{minipage}{\textwidth}
+     @{subgoals [display]}
+     \end{minipage}*}
+(*<*)oops(*>*)
+
+lemma shows "P \<and> Q \<Longrightarrow> False"
+apply(tactic {* etac @{thm conjE} 1 *})
+txt{*\begin{minipage}{\textwidth}
+     @{subgoals [display]}
+     \end{minipage}*}
+(*<*)oops(*>*)
+
+lemma shows "False \<and> True \<Longrightarrow> False"
+apply(tactic {* dtac @{thm conjunct2} 1 *})
+txt{*\begin{minipage}{\textwidth}
+     @{subgoals [display]}
+     \end{minipage}*}
+(*<*)oops(*>*)
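+
+text {*
+  The tactic @{ML ftac} behaves like @{ML dtac}, except that it leaves the 
+  original assumption in place. A small example of our own:
+*}
+
+lemma shows "False \<and> True \<Longrightarrow> False"
+apply(tactic {* ftac @{thm conjunct2} 1 *})
+txt{*\begin{minipage}{\textwidth}
+     @{subgoals [display]}
+     \end{minipage}*}
+(*<*)oops(*>*)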
+
+text {*
+  Note the number in each tactic call. Also as mentioned in the previous section, most 
+  basic tactics take such a number as argument: this argument addresses the subgoal 
+  the tactics are analysing. In the proof below, we first split up the conjunction in 
+  the second subgoal by focusing on this subgoal first.
+*}
+
+lemma shows "Foo" and "P \<and> Q"
+apply(tactic {* rtac @{thm conjI} 2 *})
+txt {*\begin{minipage}{\textwidth}
+      @{subgoals [display]}
+      \end{minipage}*}
+(*<*)oops(*>*)
+
+text {*
+  The function @{ML resolve_tac} is similar to @{ML rtac}, except that it
+  expects a list of theorems as arguments. From this list it will apply the
+  first applicable theorem (later theorems that are also applicable can be
+  explored via the lazy sequences mechanism). Given the code
+*}
+
+ML{*val resolve_tac_xmp = resolve_tac [@{thm impI}, @{thm conjI}]*}
+
+text {*
+  an example for @{ML resolve_tac} is the following proof where first an outermost 
+  implication is analysed and then an outermost conjunction.
+*}
+
+lemma shows "C \<longrightarrow> (A \<and> B)" and "(A \<longrightarrow> B) \<and> C"
+apply(tactic {* resolve_tac_xmp 1 *})
+apply(tactic {* resolve_tac_xmp 2 *})
+txt{*\begin{minipage}{\textwidth}
+     @{subgoals [display]} 
+     \end{minipage}*}
+(*<*)oops(*>*)
+
+text {* 
+  Similar versions taking a list of theorems exist for the tactics 
+  @{ML dtac} (@{ML dresolve_tac}), @{ML etac} (@{ML eresolve_tac}) and so on.
+
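+  For example, an elimination-rule analogue of @{ML resolve_tac_xmp} might 
+  be written as (our own example):
+*}
+
+ML{*val eresolve_tac_xmp = eresolve_tac [@{thm disjE}, @{thm conjE}]*}
+
+text {*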
+
+  Another simple tactic is @{ML cut_facts_tac}. It inserts a list of theorems
+  into the assumptions of the current goal state. For example
+*}
+
+lemma shows "True \<noteq> False"
+apply(tactic {* cut_facts_tac [@{thm True_def}, @{thm False_def}] 1 *})
+txt{*produces the goal state\medskip
+
+     \begin{minipage}{\textwidth}
+     @{subgoals [display]} 
+     \end{minipage}*}
+(*<*)oops(*>*)
+
+text {*
+  Since rules are applied using higher-order unification, an automatic proof
+  procedure might become too fragile if it just applies inference rules as 
+  shown above. The reason is that a number of rules introduce meta-variables 
+  into the goal state. Consider for example the proof
+*}
+
+lemma shows "\<forall>x\<in>A. P x \<Longrightarrow> Q x"
+apply(tactic {* dtac @{thm bspec} 1 *})
+txt{*\begin{minipage}{\textwidth}
+     @{subgoals [display]} 
+     \end{minipage}*}
+(*<*)oops(*>*)
+
+text {*
+  where the application of rule @{text bspec} generates two subgoals involving the
+  meta-variable @{text "?x"}. Now, if you are not careful, tactics 
+  applied to the first subgoal might instantiate this meta-variable in such a 
+  way that the second subgoal becomes unprovable. If it is clear what the @{text "?x"}
+  should be, then this situation can be avoided by introducing a more
+  constrained version of the @{text bspec}-rule. Such constraints can be given by
+  pre-instantiating theorems with other theorems. One function to do this is
+  @{ML RS}
+  
+  @{ML_response_fake [display,gray]
+  "@{thm disjI1} RS @{thm conjI}" "\<lbrakk>?P1; ?Q\<rbrakk> \<Longrightarrow> (?P1 \<or> ?Q1) \<and> ?Q"}
+
+  which in the example instantiates the first premise of the @{text conjI}-rule 
+  with the rule @{text disjI1}. If the instantiation is impossible, as in the 
+  case of
+
+  @{ML_response_fake_both [display,gray]
+  "@{thm conjI} RS @{thm mp}" 
+"*** Exception- THM (\"RSN: no unifiers\", 1, 
+[\"\<lbrakk>?P; ?Q\<rbrakk> \<Longrightarrow> ?P \<and> ?Q\", \"\<lbrakk>?P \<longrightarrow> ?Q; ?P\<rbrakk> \<Longrightarrow> ?Q\"]) raised"}
+
+  then the function raises an exception. The function @{ML RSN} is similar to @{ML RS}, but 
+  takes an additional number as argument that makes explicit which premise 
+  should be instantiated. 
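+  For example, instantiating the second premise of @{thm [source] conjI} 
+  with @{thm [source] disjI1} gives (our own example; the names of the 
+  schematic variables may differ):
+
+  @{ML_response_fake [display,gray]
+  "@{thm disjI1} RSN (2, @{thm conjI})" "\<lbrakk>?P; ?P1\<rbrakk> \<Longrightarrow> ?P \<and> (?P1 \<or> ?Q1)"}
+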
+
+  To improve readability of the theorems we produce below, we shall use the
+  function @{ML no_vars} from Section~\ref{sec:printing}, which transforms
+  schematic variables into free ones.  Using this function for the first @{ML
+  RS}-expression above produces the more readable result:
+
+  @{ML_response_fake [display,gray]
+  "no_vars @{context} (@{thm disjI1} RS @{thm conjI})" "\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> (P \<or> Qa) \<and> Q"}
+
+  If you want to instantiate more than one premise of a theorem, you can use 
+  the function @{ML MRS}:
+
+  @{ML_response_fake [display,gray]
+  "no_vars @{context} ([@{thm disjI1}, @{thm disjI2}] MRS @{thm conjI})" 
+  "\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> (P \<or> Qa) \<and> (Pa \<or> Q)"}
+
+  If you need to instantiate lists of theorems, you can use the
+  functions @{ML RL} and @{ML MRL}. For example in the code below,
+  every theorem in the second list is instantiated with every 
+  theorem in the first.
+
+  @{ML_response_fake [display,gray]
+   "[@{thm impI}, @{thm disjI2}] RL [@{thm conjI}, @{thm disjI1}]" 
+"[\<lbrakk>P \<Longrightarrow> Q; Qa\<rbrakk> \<Longrightarrow> (P \<longrightarrow> Q) \<and> Qa,
+ \<lbrakk>Q; Qa\<rbrakk> \<Longrightarrow> (P \<or> Q) \<and> Qa,
+ (P \<Longrightarrow> Q) \<Longrightarrow> (P \<longrightarrow> Q) \<or> Qa,
+ Q \<Longrightarrow> (P \<or> Q) \<or> Qa]"}
+
+  \begin{readmore}
+  The combinators for instantiating theorems are defined in @{ML_file "Pure/drule.ML"}.
+  \end{readmore}
+
+  Often proofs on the ML-level involve elaborate operations on assumptions and 
+  @{text "\<And>"}-quantified variables. To do such operations using the basic tactics 
+  shown so far is very unwieldy and brittle. Some convenience and
+  safety is provided by the tactic @{ML SUBPROOF}. This tactic fixes the parameters 
+  and binds the various components of a goal state to a record. 
+  To see what happens, we use the function defined in Figure~\ref{fig:sptac}, which
+  takes a record and just prints out its content (using the 
+  string transformation functions from Section~\ref{sec:printing}). Consider
+  now the proof:
+*}
+
+text_raw{*
+\begin{figure}[t]
+\begin{minipage}{\textwidth}
+\begin{isabelle}
+*}
+ML{*fun sp_tac {prems, params, asms, concl, context, schematics} = 
+let 
+  val str_of_params = str_of_cterms context params
+  val str_of_asms = str_of_cterms context asms
+  val str_of_concl = str_of_cterm context concl
+  val str_of_prems = str_of_thms context prems   
+  val str_of_schms = str_of_cterms context (snd schematics)    
+ 
+  val _ = (warning ("params: " ^ str_of_params);
+           warning ("schematics: " ^ str_of_schms);
+           warning ("assumptions: " ^ str_of_asms);
+           warning ("conclusion: " ^ str_of_concl);
+           warning ("premises: " ^ str_of_prems))
+in
+  no_tac 
+end*}
+text_raw{*
+\end{isabelle}
+\end{minipage}
+\caption{A function that prints out the various parameters provided by the tactic
+ @{ML SUBPROOF}. It uses the functions defined in Section~\ref{sec:printing} for
+  extracting strings from @{ML_type cterm}s and @{ML_type thm}s.\label{fig:sptac}}
+\end{figure}
+*}
+
+
+lemma shows "\<And>x y. A x y \<Longrightarrow> B y x \<longrightarrow> C (?z y) x"
+apply(tactic {* SUBPROOF sp_tac @{context} 1 *})?
+
+txt {*
+  The tactic produces the following printout:
+
+  \begin{quote}\small
+  \begin{tabular}{ll}
+  params:      & @{term x}, @{term y}\\
+  schematics:  & @{term z}\\
+  assumptions: & @{term "A x y"}\\
+  conclusion:  & @{term "B y x \<longrightarrow> C (z y) x"}\\
+  premises:    & @{term "A x y"}
+  \end{tabular}
+  \end{quote}
+
+  Notice in the actual output the brown colour of the variables @{term x} and 
+  @{term y}. Although they are parameters in the original goal, they are fixed inside
+  the subproof. By convention these fixed variables are printed in brown colour.
+  The same holds for the schematic variable @{term z}. The assumption, or premise, 
+  @{prop "A x y"} is bound as @{ML_type cterm} to the record-variable 
+  @{text asms}, but also as @{ML_type thm} to @{text prems}.
+
+  Notice also that we had to append @{text [quotes] "?"} to the
+  \isacommand{apply}-command. The reason is that @{ML SUBPROOF} normally
+  expects that the subgoal is solved completely.  Since in the function @{ML
+  sp_tac} we returned the tactic @{ML no_tac}, the subproof obviously
+  fails. The question-mark allows us to recover from this failure in a
+  graceful manner so that the warning messages are not overwritten by an 
+  ``empty sequence'' error message.
+
+  If we continue the proof script by applying the @{text impI}-rule
+*}
+
+apply(rule impI)
+apply(tactic {* SUBPROOF sp_tac @{context} 1 *})?
+
+txt {*
+  then the tactic prints out: 
+
+  \begin{quote}\small
+  \begin{tabular}{ll}
+  params:      & @{term x}, @{term y}\\
+  schematics:  & @{term z}\\
+  assumptions: & @{term "A x y"}, @{term "B y x"}\\
+  conclusion:  & @{term "C (z y) x"}\\
+  premises:    & @{term "A x y"}, @{term "B y x"}
+  \end{tabular}
+  \end{quote}
+*}
+(*<*)oops(*>*)
+
+text {*
+  Now also @{term "B y x"} is an assumption bound to @{text asms} and @{text prems}.
+
+  One convenience of @{ML SUBPROOF} is that we can apply the assumptions
+  using the usual tactics, because the parameter @{text prems} 
+  contains them as theorems. With this you can easily 
+  implement a tactic that behaves almost like @{ML atac}:
+*}
+
+ML{*val atac' = SUBPROOF (fn {prems, ...} => resolve_tac prems 1)*}
+
+text {*
+  If you apply @{ML atac'} to the next lemma
+*}
+
+lemma shows "\<lbrakk>B x y; A x y; C x y\<rbrakk> \<Longrightarrow> A x y"
+apply(tactic {* atac' @{context} 1 *})
+txt{* it will produce
+      @{subgoals [display]} *}
+(*<*)oops(*>*)
+
+text {*
+  The restriction in this tactic, which is not present in @{ML atac}, is 
+  that it cannot instantiate any
+  schematic variable. This might be seen as a defect, but it is actually
+  an advantage in the situations for which @{ML SUBPROOF} was designed: 
+  the reason is that, as mentioned before, instantiation of schematic variables can affect 
+  several goals and can render them unprovable. @{ML SUBPROOF} is meant 
+  to avoid this.
+
+  Notice that @{ML atac'} inside @{ML SUBPROOF} calls @{ML resolve_tac} with 
+  the subgoal number @{text "1"} and also the outer call to @{ML SUBPROOF} in 
+  the \isacommand{apply}-step uses @{text "1"}. This is another advantage 
+  of @{ML SUBPROOF}: the addressing  inside it is completely 
+  local to the tactic inside the subproof. It is therefore possible to also apply 
+  @{ML atac'} to the second goal by just writing:
+*}
+
+lemma shows "True" and "\<lbrakk>B x y; A x y; C x y\<rbrakk> \<Longrightarrow> A x y"
+apply(tactic {* atac' @{context} 2 *})
+apply(rule TrueI)
+done
+
+
+text {*
+  \begin{readmore}
+  The function @{ML SUBPROOF} is defined in @{ML_file "Pure/subgoal.ML"} and
+  also described in \isccite{sec:results}. 
+  \end{readmore}
+
+  Similar but less powerful functions than @{ML SUBPROOF} are @{ML SUBGOAL}
+  and @{ML CSUBGOAL}. They allow you to inspect a given subgoal (the former
+  presents the subgoal as a @{ML_type term}, while the latter as a @{ML_type
+  cterm}). With this you can implement a tactic that applies a rule according
+  to the topmost logic connective in the subgoal (to illustrate this we only
+  analyse a few connectives). The code of the tactic is as
+  follows.\label{tac:selecttac}
+
+*}
+
+ML %linenosgray{*fun select_tac (t, i) =
+  case t of
+     @{term "Trueprop"} $ t' => select_tac (t', i)
+   | @{term "op \<Longrightarrow>"} $ _ $ t' => select_tac (t', i)
+   | @{term "op \<and>"} $ _ $ _ => rtac @{thm conjI} i
+   | @{term "op \<longrightarrow>"} $ _ $ _ => rtac @{thm impI} i
+   | @{term "Not"} $ _ => rtac @{thm notI} i
+   | Const (@{const_name "All"}, _) $ _ => rtac @{thm allI} i
+   | _ => all_tac*}
+
+text {*
+  The input of the function is a term representing the subgoal and a number
+  specifying the subgoal of interest. In Line 3 you need to descend under the
+  outermost @{term "Trueprop"} in order to get to the connective you like to
+  analyse. Otherwise goals like @{prop "A \<and> B"} are not properly
+  analysed. Similarly with meta-implications in the next line.  While for the
+  first five patterns we can use the @{text "@term"}-antiquotation to
+  construct the patterns, the pattern in Line 8 cannot be constructed in this
+  way. The reason is that an antiquotation would fix the type of the
+  quantified variable. So you really have to construct the pattern using the
+  basic term-constructors. This is not necessary in other cases, because their
+  type is always fixed to function types involving only the type @{typ
+  bool}. (See Section \ref{sec:terms_types_manually} about constructing terms
+  manually.) For the catch-all pattern, we chose to just return @{ML all_tac}. 
+  Consequently, @{ML select_tac} never fails.
+
+
+  Let us now see how to apply this tactic. Consider the four goals:
+*}
+
+
+lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
+apply(tactic {* SUBGOAL select_tac 4 *})
+apply(tactic {* SUBGOAL select_tac 3 *})
+apply(tactic {* SUBGOAL select_tac 2 *})
+apply(tactic {* SUBGOAL select_tac 1 *})
+txt{* \begin{minipage}{\textwidth}
+      @{subgoals [display]}
+      \end{minipage} *}
+(*<*)oops(*>*)
+
+text {*
+  where in all but the last the tactic applied an introduction rule. 
+  Note that we applied the tactic to the goals in ``reverse'' order. 
+  This is a trick in order to be independent of the subgoals that are 
+  produced by the rule. If we had applied it in the other order 
+*}
+
+lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
+apply(tactic {* SUBGOAL select_tac 1 *})
+apply(tactic {* SUBGOAL select_tac 3 *})
+apply(tactic {* SUBGOAL select_tac 4 *})
+apply(tactic {* SUBGOAL select_tac 5 *})
+(*<*)oops(*>*)
+
+text {*
+  then we have to be careful not to apply the tactic to the two subgoals produced by 
+  the first goal. Doing so can result in quite messy code. In contrast, 
+  the ``reverse application'' is easy to implement.
+
+  Of course, this example is
+  contrived: there are much simpler methods available in Isabelle for
+  implementing a proof procedure analysing a goal according to its topmost
+  connective. These simpler methods use tactic combinators, which we will
+  explain in the next section.
+
+*}
+
+section {* Tactic Combinators *}
+
+text {* 
+  The purpose of tactic combinators is to build compound tactics out of
+  smaller tactics. In the previous section we already used @{ML THEN}, which
+  just strings together two tactics in a sequence. For example:
+*}
+
+lemma shows "(Foo \<and> Bar) \<and> False"
+apply(tactic {* rtac @{thm conjI} 1 THEN rtac @{thm conjI} 1 *})
+txt {* \begin{minipage}{\textwidth}
+       @{subgoals [display]} 
+       \end{minipage} *}
+(*<*)oops(*>*)
+
+text {*
+  If you want to avoid the hard-coded subgoal addressing, then you can use
+  the ``primed'' version of @{ML THEN}. For example:
+*}
+
+lemma shows "(Foo \<and> Bar) \<and> False"
+apply(tactic {* (rtac @{thm conjI} THEN' rtac @{thm conjI}) 1 *})
+txt {* \begin{minipage}{\textwidth}
+       @{subgoals [display]} 
+       \end{minipage} *}
+(*<*)oops(*>*)
+
+text {* 
+  Here you have to specify the subgoal of interest only once and
+  it is consistently applied to the component tactics.
+  For most tactic combinators such a ``primed'' version exists and
+  in what follows we will usually prefer it over the ``unprimed'' one. 
+
+  If there is a list of tactics that should all be tried out in 
+  sequence, you can use the combinator @{ML EVERY'}. For example
+  the function @{ML foo_tac'} from page~\pageref{tac:footacprime} can also 
+  be written as:
+*}
+
+ML{*val foo_tac'' = EVERY' [etac @{thm disjE}, rtac @{thm disjI2}, 
+                        atac, rtac @{thm disjI1}, atac]*}
+
+text {*
+  There is even another way of implementing this tactic: in automatic proof
+  procedures (in contrast to tactics that might be called by the user) there
+  are often long lists of tactics that are applied to the first
+  subgoal. Instead of writing the code above and then calling @{ML "foo_tac'' 1"}, 
+  you can also just write
+*}
+
+ML{*val foo_tac1 = EVERY1 [etac @{thm disjE}, rtac @{thm disjI2}, 
+                       atac, rtac @{thm disjI1}, atac]*}
+
+text {*
+  and call @{ML foo_tac1}.  
+
+  With the combinators @{ML THEN'}, @{ML EVERY'} and @{ML EVERY1} it must be
+  guaranteed that all component tactics successfully apply; otherwise the
+  whole tactic will fail. If you rather want to try out a number of tactics,
+  then you can use the combinator @{ML ORELSE'} for two tactics, and @{ML
+  FIRST'} (or @{ML FIRST1}) for a list of tactics. For example, the tactic
+
+*}
+
+ML{*val orelse_xmp = rtac @{thm disjI1} ORELSE' rtac @{thm conjI}*}
+
+text {*
+  will first try out whether rule @{text disjI1} applies and after that 
+  @{text conjI}. To see this consider the proof
+*}
+
+lemma shows "True \<and> False" and "Foo \<or> Bar"
+apply(tactic {* orelse_xmp 2 *})
+apply(tactic {* orelse_xmp 1 *})
+txt {* which results in the goal state
+
+       \begin{minipage}{\textwidth}
+       @{subgoals [display]} 
+       \end{minipage}
+*}
+(*<*)oops(*>*)
+
+
+text {*
+  Using @{ML FIRST'} we can simplify our @{ML select_tac} from Page~\pageref{tac:selecttac} 
+  as follows:
+*}
+
+ML{*val select_tac' = FIRST' [rtac @{thm conjI}, rtac @{thm impI}, 
+                          rtac @{thm notI}, rtac @{thm allI}, K all_tac]*}
+
+text {*
+  Since we want to mimic the behaviour of @{ML select_tac} as closely as possible, 
+  we must include @{ML all_tac} at the end of the list, otherwise the tactic will
+  fail if no rule applies (we also have to wrap @{ML all_tac} using the 
+  @{ML K}-combinator, because it does not take a subgoal number as argument). You can 
+  test the tactic on the same goals:
+*}
+
+lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
+apply(tactic {* select_tac' 4 *})
+apply(tactic {* select_tac' 3 *})
+apply(tactic {* select_tac' 2 *})
+apply(tactic {* select_tac' 1 *})
+txt{* \begin{minipage}{\textwidth}
+      @{subgoals [display]}
+      \end{minipage} *}
+(*<*)oops(*>*)
+
+text {* 
+  Since such a repeated application of a tactic to \emph{all} subgoals in 
+  reverse order is quite common, there is the tactic combinator 
+  @{ML ALLGOALS} that simplifies this. Using this combinator you can simply 
+  write: *}
+
+lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
+apply(tactic {* ALLGOALS select_tac' *})
+txt{* \begin{minipage}{\textwidth}
+      @{subgoals [display]}
+      \end{minipage} *}
+(*<*)oops(*>*)
+
+text {*
+  Remember that we chose to implement @{ML select_tac'} so that it 
+  always succeeds. This can be potentially very confusing for the user, 
+  for example, in cases where the goal is of the form
+*}
+
+lemma shows "E \<Longrightarrow> F"
+apply(tactic {* select_tac' 1 *})
+txt{* \begin{minipage}{\textwidth}
+      @{subgoals [display]}
+      \end{minipage} *}
+(*<*)oops(*>*)
+
+text {*
+  In this case no rule applies. The problem for the user is that there is little 
+  chance to see whether or not progress in the proof has been made. By convention
+  therefore, tactics visible to the user should either change something or fail.
+  
+  To comply with this convention, we could simply delete the @{ML "K all_tac"}
+  from the end of the theorem list. As a result @{ML select_tac'} would only
+  succeed on goals where it can make progress. But for the sake of argument,
+  let us suppose that this deletion is \emph{not} an option. In such cases, you can
+  use the combinator @{ML CHANGED} to make sure the subgoal has been changed
+  by the tactic. Because now
+
+*}
+
+lemma shows "E \<Longrightarrow> F"
+apply(tactic {* CHANGED (select_tac' 1) *})(*<*)?(*>*)
+txt{* gives the error message:
+  \begin{isabelle}
+  @{text "*** empty result sequence -- proof command failed"}\\
+  @{text "*** At command \"apply\"."}
+  \end{isabelle} 
+*}(*<*)oops(*>*)
+
+
+text {*
+  We can further extend @{ML select_tac'} so that it applies not just to the topmost
+  connective, but also to the ones immediately ``underneath'', i.e.~analyses the goal 
+  completely. For this you can use the tactic combinator @{ML REPEAT}. As an example 
+  consider the following tactic
+*}
+
+ML{*val repeat_xmp = REPEAT (CHANGED (select_tac' 1)) *}
+
+text {* which applied to the proof *}
+
+lemma shows "((\<not>A) \<and> (\<forall>x. B x)) \<and> (C \<longrightarrow> D)"
+apply(tactic {* repeat_xmp *})
+txt{* produces
+
+      \begin{minipage}{\textwidth}
+      @{subgoals [display]}
+      \end{minipage} *}
+(*<*)oops(*>*)
+
+text {*
+  Here it is crucial that @{ML select_tac'} is prefixed with @{ML CHANGED}, 
+  because otherwise @{ML REPEAT} runs into an infinite loop (it applies the
+  tactic as long as it succeeds). The function
+  @{ML REPEAT1} is similar, but runs the tactic at least once (failing if 
+  this is not possible).
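+
+  For example, a variant of @{ML repeat_xmp} that fails on goals where 
+  @{ML select_tac'} cannot make any progress might be written as (our own 
+  variant):
+*}
+
+ML{*val repeat1_xmp = REPEAT1 (CHANGED (select_tac' 1))*}
+
+text {*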
+
+  If you are after the ``primed'' version of @{ML repeat_xmp}, then you 
+  need to implement it as
+*}
+
+ML{*val repeat_xmp' = REPEAT o CHANGED o select_tac'*}
+
+text {*
+  since there are no ``primed'' versions of @{ML REPEAT} and @{ML CHANGED}.
+
+  If you look closely at the goal state above, the tactics @{ML repeat_xmp}
+  and @{ML repeat_xmp'} are not yet quite what we are after: the problem is
+  that goals 2 and 3 are not analysed. This is because the tactic
+  is applied repeatedly only to the first subgoal. To analyse also all
+  resulting subgoals, you can use the tactic combinator @{ML REPEAT_ALL_NEW}. 
+  Suppose the tactic
+*}
+
+ML{*val repeat_all_new_xmp = REPEAT_ALL_NEW (CHANGED o select_tac')*}
+
+text {* 
+  then you can see that the following goal
+*}
+
+lemma shows "((\<not>A) \<and> (\<forall>x. B x)) \<and> (C \<longrightarrow> D)"
+apply(tactic {* repeat_all_new_xmp 1 *})
+txt{* \begin{minipage}{\textwidth}
+      @{subgoals [display]}
+      \end{minipage} *}
+(*<*)oops(*>*)
+
+text {*
+  is completely analysed according to the theorems we chose to
+  include in @{ML select_tac'}. 
+
+  Recall that tactics produce a lazy sequence of successor goal states. These
+  states can be explored using the command \isacommand{back}. For example
+
+*}
+
+lemma "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
+apply(tactic {* etac @{thm disjE} 1 *})
+txt{* applies the rule to the first assumption yielding the goal state:\smallskip
+      
+      \begin{minipage}{\textwidth}
+      @{subgoals [display]}
+      \end{minipage}\smallskip 
+
+      After typing
+  *}
+(*<*)
+oops
+lemma "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
+apply(tactic {* etac @{thm disjE} 1 *})
+(*>*)
+back
+txt{* the rule now applies to the second assumption.\smallskip
+
+      \begin{minipage}{\textwidth}
+      @{subgoals [display]}
+      \end{minipage} *}
+(*<*)oops(*>*)
+
+text {*
+  Sometimes this leads to confusing behaviour of tactics and also has
+  the potential to explode the search space for tactics.
+  These problems can be avoided by prefixing the tactic with the tactic 
+  combinator @{ML DETERM}.
+*}
+
+lemma "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
+apply(tactic {* DETERM (etac @{thm disjE} 1) *})
+txt {*\begin{minipage}{\textwidth}
+      @{subgoals [display]}
+      \end{minipage} *}
+(*<*)oops(*>*)
+text {*
+  This combinator will prune the search space to just the first successful application.
+  Attempting to apply \isacommand{back} in this goal state gives the
+  error message:
+
+  \begin{isabelle}
+  @{text "*** back: no alternatives"}\\
+  @{text "*** At command \"back\"."}
+  \end{isabelle}
+
+  \begin{readmore}
+  Most tactic combinators described in this section are defined in @{ML_file "Pure/tctical.ML"}.
+  \end{readmore}
+
+*}
+
+section {* Simplifier Tactics *}
+
+text {*
+  A lot of convenience in reasoning with Isabelle derives from its
+  powerful simplifier. This power is, however, a strength and a weakness at
+  the same time, because you can easily make the simplifier run into a loop and its
+  behaviour can be difficult to predict. There is also a multitude
+  of options that you can configure to control the behaviour of the simplifier. 
+  We describe some of them in this and the next section.
+
+  There are the following five main tactics behind 
+  the simplifier (in parentheses is their user-level counterpart):
+
+  \begin{isabelle}
+  \begin{center}
+  \begin{tabular}{l@ {\hspace{2cm}}l}
+   @{ML simp_tac}            & @{text "(simp (no_asm))"} \\
+   @{ML asm_simp_tac}        & @{text "(simp (no_asm_simp))"} \\
+   @{ML full_simp_tac}       & @{text "(simp (no_asm_use))"} \\
+   @{ML asm_lr_simp_tac}     & @{text "(simp (asm_lr))"} \\
+   @{ML asm_full_simp_tac}   & @{text "(simp)"}
+  \end{tabular}
+  \end{center}
+  \end{isabelle}
+
+  All of the tactics take a simpset and an integer as argument (the latter as usual 
+  to specify the goal to be analysed). So the proof
+*}
+
+lemma "Suc (1 + 2) < 3 + 2"
+apply(simp)
+done
+
+text {*
+  corresponds on the ML-level to the tactic
+*}
+
+lemma "Suc (1 + 2) < 3 + 2"
+apply(tactic {* asm_full_simp_tac @{simpset} 1 *})
+done
+
+text {*
+  If the simplifier cannot make any progress, then it leaves the goal unchanged,
+  i.e.~does not raise any error message. That means if you use it to unfold a 
+  definition for a constant and this constant is not present in the goal state, 
+  you can still safely apply the simplifier.
+
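+  For example, rewriting with the definition of @{term True} in a goal 
+  that does not mention @{term True} leaves the goal unchanged, but does 
+  not fail (our own example):
+*}
+
+lemma shows "Foo \<Longrightarrow> Bar"
+apply(tactic {* simp_tac (@{simpset} addsimps [@{thm True_def}]) 1 *})
+txt{*\begin{minipage}{\textwidth}
+     @{subgoals [display]}
+     \end{minipage}*}
+(*<*)oops(*>*)
+
+text {*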
+  When using the simplifier, the crucial information you have to provide is
+  the simpset. If this information is not handled with care, then the
+  simplifier can easily run into a loop. Therefore a good rule of thumb is to
+  use simpsets that are as minimal as possible. It might be surprising that a
+  simpset is more complex than just a simple collection of theorems used as
+  simplification rules. One reason for the complexity is that the simplifier
+  must be able to rewrite inside terms and should also be able to rewrite
+  according to rules that have preconditions.
+
+
+  The rewriting inside terms requires congruence rules, which
+  are meta-equalities, typically of the form
+
+  \begin{isabelle}
+  \begin{center}
+  \mbox{\inferrule{@{text "t\<^isub>1 \<equiv> s\<^isub>1 \<dots> t\<^isub>n \<equiv> s\<^isub>n"}}
+                  {@{text "constr t\<^isub>1\<dots>t\<^isub>n \<equiv> constr s\<^isub>1\<dots>s\<^isub>n"}}}
+  \end{center}
+  \end{isabelle}
+
+  with @{text "constr"} being a term-constructor, like @{const "If"} or @{const "Let"}. 
+  Every simpset contains only
+  one congruence rule for each term-constructor, which however can be
+  overwritten. The user can declare lemmas to be congruence rules using the
+  attribute @{text "[cong]"}. In HOL, the user usually states these lemmas as
+  equations, which are then internally transformed into meta-equations.
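+
+  For example, in HOL the congruence rule for @{const If} is the lemma
+  @{thm [source] if_cong} (shown here up to renaming of the schematic 
+  variables):
+
+  @{ML_response_fake [display,gray]
+  "@{thm if_cong}"
+"\<lbrakk>?b = ?c; ?c \<Longrightarrow> ?x = ?u; \<not> ?c \<Longrightarrow> ?y = ?v\<rbrakk>
+ \<Longrightarrow> (if ?b then ?x else ?y) = (if ?c then ?u else ?v)"}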
+
+
+  The rewriting with rules involving preconditions requires what is in
+  Isabelle called a subgoaler, a solver and a looper. These can be arbitrary
+  tactics that can be installed in a simpset and which are called during
+  various stages during simplification. However, simpsets also include
+  simprocs, which can produce rewrite rules on demand (see next
+  section). Another component are split-rules, which can simplify for example
+  the ``then'' and ``else'' branches of if-statements under the corresponding
+  preconditions.
+
+
+  \begin{readmore}
+  For more information about the simplifier see @{ML_file "Pure/meta_simplifier.ML"}
+  and @{ML_file "Pure/simplifier.ML"}. The simplifier for HOL is set up in 
+  @{ML_file "HOL/Tools/simpdata.ML"}. Generic splitters are implemented in 
+  @{ML_file  "Provers/splitter.ML"}.
+  \end{readmore}
+
+  \begin{readmore}
+  FIXME: Find the right place. Discrimination nets are implemented
+  in @{ML_file "Pure/net.ML"}.
+  \end{readmore}
+
+  The most common combinators to modify simpsets are
+
+  \begin{isabelle}
+  \begin{tabular}{ll}
+  @{ML addsimps}   & @{ML delsimps}\\
+  @{ML addcongs}   & @{ML delcongs}\\
+  @{ML addsimprocs} & @{ML delsimprocs}\\
+  \end{tabular}
+  \end{isabelle}
+
+  (FIXME: What about splitters? @{ML addsplits}, @{ML delsplits})
+*}
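+
+text {*
+  For example, split-rules can be deleted from (and, analogously, added to) 
+  a simpset as follows; a small sketch of our own using the split-rule for 
+  if-statements, which is called @{thm [source] split_if} in this version 
+  of Isabelle:
+*}
+
+ML{*val ss_no_split = @{simpset} delsplits [@{thm split_if}]*}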
+
+text_raw {*
+\begin{figure}[t]
+\begin{minipage}{\textwidth}
+\begin{isabelle}*}
+ML{*fun print_ss ctxt ss =
+let
+  val {simps, congs, procs, ...} = MetaSimplifier.dest_ss ss
+
+  fun name_thm (nm, thm) =
+      "  " ^ nm ^ ": " ^ (str_of_thm ctxt thm)
+  fun name_ctrm (nm, ctrm) =
+      "  " ^ nm ^ ": " ^ (str_of_cterms ctxt ctrm)
+
+  val s1 = ["Simplification rules:"]
+  val s2 = map name_thm simps
+  val s3 = ["Congruences rules:"]
+  val s4 = map name_thm congs
+  val s5 = ["Simproc patterns:"]
+  val s6 = map name_ctrm procs
+in
+  (s1 @ s2 @ s3 @ s4 @ s5 @ s6) 
+     |> separate "\n"
+     |> implode
+     |> warning
+end*}
+text_raw {* 
+\end{isabelle}
+\end{minipage}
+\caption{The function @{ML MetaSimplifier.dest_ss} returns a record containing
+  all printable information stored in a simpset. We are here only interested in the 
+  simplification rules, congruence rules and simprocs.\label{fig:printss}}
+\end{figure} *}
+
+text {* 
+  To see how they work, consider the function in Figure~\ref{fig:printss}, which
+  prints out some parts of a simpset. If you use it to print out the components of the
+  empty simpset, i.e.~@{ML empty_ss}
+  
+  @{ML_response_fake [display,gray]
+  "print_ss @{context} empty_ss"
+"Simplification rules:
+Congruences rules:
+Simproc patterns:"}
+
+  you can see it contains nothing. This simpset is usually not useful, except as a 
+  building block to build bigger simpsets. For example you can add to @{ML empty_ss} 
+  the simplification rule @{thm [source] Diff_Int} as follows:
+*}
+
+ML{*val ss1 = empty_ss addsimps [@{thm Diff_Int} RS @{thm eq_reflection}] *}
+
+text {*
+  Printing out the components of this simpset then gives:
+
+  @{ML_response_fake [display,gray]
+  "print_ss @{context} ss1"
+"Simplification rules:
+  ??.unknown: A - B \<inter> C \<equiv> A - B \<union> (A - C)
+Congruences rules:
+Simproc patterns:"}
+
+  (FIXME: Why does it print out ??.unknown)
+
+  Adding also the congruence rule @{thm [source] UN_cong} 
+*}
+
+ML{*val ss2 = ss1 addcongs [@{thm UN_cong} RS @{thm eq_reflection}] *}
+
+text {*
+  gives
+
+  @{ML_response_fake [display,gray]
+  "print_ss @{context} ss2"
+"Simplification rules:
+  ??.unknown: A - B \<inter> C \<equiv> A - B \<union> (A - C)
+Congruences rules:
+  UNION: \<lbrakk>A = B; \<And>x. x \<in> B \<Longrightarrow> C x = D x\<rbrakk> \<Longrightarrow> \<Union>x\<in>A. C x \<equiv> \<Union>x\<in>B. D x
+Simproc patterns:"}
+
+  Notice that we had to add these lemmas as meta-equations. The @{ML empty_ss} 
+  expects this form of the simplification and congruence rules. However, even 
+  when adding these lemmas to @{ML empty_ss} we do not end up with anything useful yet.
+
+  In the context of HOL, the first really useful simpset is @{ML HOL_basic_ss}. While
+  printing out the components of this simpset
+
+  @{ML_response_fake [display,gray]
+  "print_ss @{context} HOL_basic_ss"
+"Simplification rules:
+Congruences rules:
+Simproc patterns:"}
+
+  also produces ``nothing'', the printout is misleading. In fact
+  the @{ML HOL_basic_ss} is set up so that it can solve goals of the
+  form  
+
+  \begin{isabelle}
+  @{thm TrueI}, @{thm refl[no_vars]}, @{term "t \<equiv> t"} and @{thm FalseE[no_vars]}; 
+  \end{isabelle}
+
+  and also resolve with assumptions. For example:
+*}
+
+lemma 
+ "True" and "t = t" and "t \<equiv> t" and "False \<Longrightarrow> Foo" and "\<lbrakk>A; B; C\<rbrakk> \<Longrightarrow> A"
+apply(tactic {* ALLGOALS (simp_tac HOL_basic_ss) *})
+done
+
+text {*
+  This behaviour is not because of simplification rules, but because of how the subgoaler, solver 
+  and looper are set up in @{ML HOL_basic_ss}.
+
+  The simpset @{ML HOL_ss} is an extension of @{ML HOL_basic_ss} containing 
+  already many useful simplification and congruence rules for the logical 
+  connectives in HOL. 
+
+  @{ML_response_fake [display,gray]
+  "print_ss @{context} HOL_ss"
+"Simplification rules:
+  Pure.triv_forall_equality: (\<And>x. PROP V) \<equiv> PROP V
+  HOL.the_eq_trivial: THE x. x = y \<equiv> y
+  HOL.the_sym_eq_trivial: THE ya. y = ya \<equiv> y
+  \<dots>
+Congruences rules:
+  HOL.simp_implies: \<dots>
+    \<Longrightarrow> (PROP P =simp=> PROP Q) \<equiv> (PROP P' =simp=> PROP Q')
+  op -->: \<lbrakk>P \<equiv> P'; P' \<Longrightarrow> Q \<equiv> Q'\<rbrakk> \<Longrightarrow> P \<longrightarrow> Q \<equiv> P' \<longrightarrow> Q'
+Simproc patterns:
+  \<dots>"}
+
+  
+  The simplifier is often used to unfold definitions in a proof. For this the
+  simplifier provides the tactic @{ML rewrite_goals_tac}. Suppose for example the
+  definition
+*}
+
+definition "MyTrue \<equiv> True"
+
+text {*
+  then in the following proof we can unfold this constant
+*}
+
+lemma shows "MyTrue \<Longrightarrow> True \<and> True"
+apply(rule conjI)
+apply(tactic {* rewrite_goals_tac [@{thm MyTrue_def}] *})
+txt{* producing the goal state
+
+      \begin{minipage}{\textwidth}
+      @{subgoals [display]}
+      \end{minipage} *}
+(*<*)oops(*>*)
+
+text {*
+  As you can see, the tactic unfolds the definitions in all subgoals.
+*}
+
+
+text_raw {*
+\begin{figure}[p]
+\begin{boxedminipage}{\textwidth}
+\begin{isabelle} *}
+types  prm = "(nat \<times> nat) list"
+consts perm :: "prm \<Rightarrow> 'a \<Rightarrow> 'a"  ("_ \<bullet> _" [80,80] 80)
+
+primrec (perm_nat)
+ "[]\<bullet>c = c"
+ "(ab#pi)\<bullet>c = (if (pi\<bullet>c)=fst ab then snd ab 
+               else (if (pi\<bullet>c)=snd ab then fst ab else (pi\<bullet>c)))" 
+
+primrec (perm_prod)
+ "pi\<bullet>(x, y) = (pi\<bullet>x, pi\<bullet>y)"
+
+primrec (perm_list)
+ "pi\<bullet>[] = []"
+ "pi\<bullet>(x#xs) = (pi\<bullet>x)#(pi\<bullet>xs)"
+
+lemma perm_append[simp]:
+  fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+  shows "((pi\<^isub>1@pi\<^isub>2)\<bullet>c) = (pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c))"
+by (induct pi\<^isub>1) (auto) 
+
+lemma perm_eq[simp]:
+  fixes c::"nat" and pi::"prm"
+  shows "(pi\<bullet>c = pi\<bullet>d) = (c = d)" 
+by (induct pi) (auto)
+
+lemma perm_rev[simp]:
+  fixes c::"nat" and pi::"prm"
+  shows "pi\<bullet>((rev pi)\<bullet>c) = c"
+by (induct pi arbitrary: c) (auto)
+
+lemma perm_compose:
+  fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+  shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c) = (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>(pi\<^isub>1\<bullet>c)" 
+by (induct pi\<^isub>2) (auto)
+text_raw {*
+\end{isabelle}
+\end{boxedminipage}
+\caption{A simple theory about permutations over @{typ nat}. The point is that the
+  lemma @{thm [source] perm_compose} cannot be directly added to the simplifier, as
+  it would cause the simplifier to loop. It can still be used as a simplification 
+  rule if the permutation is sufficiently protected.\label{fig:perms}
+  (FIXME: Uses old primrec.)}
+\end{figure} *}
+
+
+text {*
+  The simplifier is often used in order to bring terms into a normal form.
+  Unfortunately, often the situation arises that the corresponding
+  simplification rules will cause the simplifier to run into an infinite
+  loop. Consider for example the simple theory about permutations over natural
+  numbers shown in Figure~\ref{fig:perms}. The purpose of the lemmas is to
+  push permutations as far inside as possible, where they might disappear by
+  Lemma @{thm [source] perm_rev}. However, to fully normalise all instances,
+  it would be desirable to add also the lemma @{thm [source] perm_compose} to
+  the simplifier for pushing permutations over other permutations. Unfortunately, 
+  the right-hand side of this lemma is again an instance of the left-hand side 
+  and so causes an infinite loop. There seems to be no easy way to reformulate 
+  this rule and so one ends up with clunky proofs like:
+*}
+
+lemma 
+  fixes c d::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+  shows "pi\<^isub>1\<bullet>(c, pi\<^isub>2\<bullet>((rev pi\<^isub>1)\<bullet>d)) = (pi\<^isub>1\<bullet>c, (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>d)"
+apply(simp)
+apply(rule trans)
+apply(rule perm_compose)
+apply(simp)
+done 
+
+text {*
+  It is however possible to create a single simplifier tactic that solves
+  such proofs. The trick is to introduce an auxiliary constant for permutations 
+  and split the simplification into two phases (below actually three). Let 
+  us assume the auxiliary constant is
+*}
+
+definition
+  perm_aux :: "prm \<Rightarrow> 'a \<Rightarrow> 'a" ("_ \<bullet>aux _" [80,80] 80)
+where
+  "pi \<bullet>aux c \<equiv> pi \<bullet> c"
+
+text {* Now the two lemmas *}
+
+lemma perm_aux_expand:
+  fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+  shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c) = pi\<^isub>1 \<bullet>aux (pi\<^isub>2\<bullet>c)" 
+unfolding perm_aux_def by (rule refl)
+
+lemma perm_compose_aux:
+  fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+  shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>aux c) = (pi\<^isub>1\<bullet>pi\<^isub>2) \<bullet>aux (pi\<^isub>1\<bullet>c)" 
+unfolding perm_aux_def by (rule perm_compose)
+
+text {* 
+  are simple consequences of the definition and @{thm [source] perm_compose}. 
+  More importantly, the lemma @{thm [source] perm_compose_aux} can be safely 
+  added to the simplifier, because now the right-hand side is no longer an instance 
+  of the left-hand side. In a sense it freezes all redexes of permutation compositions
+  after one step. In this way, we can split the simplification of permutations
+  into three phases without the user noticing anything about the auxiliary 
+  constant. We first freeze any instance of permutation compositions in the term using 
+  lemma @{thm [source] "perm_aux_expand"} (Line 9);
+  then simplify all other permutations, including pushing permutations over
+  other permutations, by rule @{thm [source] perm_compose_aux} (Line 10); and
+  finally ``unfreeze'' all instances of permutation compositions by unfolding 
+  the definition of the auxiliary constant. 
+*}
+
+ML %linenosgray{*val perm_simp_tac =
+let
+  val thms1 = [@{thm perm_aux_expand}]
+  val thms2 = [@{thm perm_append}, @{thm perm_eq}, @{thm perm_rev}, 
+               @{thm perm_compose_aux}] @ @{thms perm_prod.simps} @ 
+               @{thms perm_list.simps} @ @{thms perm_nat.simps}
+  val thms3 = [@{thm perm_aux_def}]
+in
+  simp_tac (HOL_basic_ss addsimps thms1)
+  THEN' simp_tac (HOL_basic_ss addsimps thms2)
+  THEN' simp_tac (HOL_basic_ss addsimps thms3)
+end*}
+
+text {*
+  For all three phases we have to build simpsets by adding specific lemmas. Since this is 
+  sufficient for our purposes here, we can add these lemmas to @{ML HOL_basic_ss} in order 
+  to obtain the desired results. Now we can solve the following lemma
+*}
+
+lemma 
+  fixes c d::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+  shows "pi\<^isub>1\<bullet>(c, pi\<^isub>2\<bullet>((rev pi\<^isub>1)\<bullet>d)) = (pi\<^isub>1\<bullet>c, (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>d)"
+apply(tactic {* perm_simp_tac 1 *})
+done
+
+
+text {*
+  in one step. This tactic can deal with most instances of normalising permutations;
+  in order to solve all cases we have to deal with corner-cases such as the
+  lemma being an exact instance of the permutation composition lemma. This can
+  often be done more easily by implementing a simproc or a conversion. Both will be 
+  explained in the next two chapters.
+
+  (FIXME: Is it interesting to say something about @{term "op =simp=>"}?)
+
+  (FIXME: What are the second components of the congruence rules---something to
+  do with weak congruence constants?)
+
+  (FIXME: Anything interesting to say about @{ML Simplifier.clear_ss}?)
+
+  (FIXME: @{ML ObjectLogic.full_atomize_tac}, 
+  @{ML ObjectLogic.rulify_tac})
+
+*}
+
+section {* Simprocs *}
+
+text {*
+  In Isabelle you can also implement custom simplification procedures, called
+  \emph{simprocs}. Simprocs can be triggered by the simplifier on a specified
+  term-pattern and rewrite a term according to a theorem. They are useful in
+  cases where a rewriting rule must be produced on ``demand'' or when
+  rewriting by simplification is too unpredictable and potentially loops.
+
+  To see how simprocs work, let us first write a simproc that just prints out
+  the pattern which triggers it and otherwise does nothing. For this
+  you can use the function:
+*}
+
+ML %linenosgray{*fun fail_sp_aux simpset redex = 
+let
+  val ctxt = Simplifier.the_context simpset
+  val _ = warning ("The redex: " ^ (str_of_cterm ctxt redex))
+in
+  NONE
+end*}
+
+text {*
+  This function takes a simpset and a redex (a @{ML_type cterm}) as
+  arguments. In Lines 3 and~4, we first extract the context from the given
+  simpset and then print out a message containing the redex.  The function
+  returns @{ML NONE} (standing for an optional @{ML_type thm}) since at the
+  moment we are \emph{not} interested in actually rewriting anything. We want
+  that the simproc is triggered by the pattern @{term "Suc n"}. This can be
+  done by adding the simproc to the current simpset as follows
+*}
+
+simproc_setup %gray fail_sp ("Suc n") = {* K fail_sp_aux *}
+
+text {*
+  where the second argument specifies the pattern and the right-hand side
+  contains the code of the simproc (we have to use @{ML K} since we are ignoring
+  an argument about morphisms\footnote{FIXME: what does the morphism do?}). 
+  After this, the simplifier is aware of the simproc and you can test whether 
+  it fires on the lemma:
+*}
+
+lemma shows "Suc 0 = 1"
+apply(simp)
+(*<*)oops(*>*)
+
+text {*
+  This will print out the message twice: once for the left-hand side and
+  once for the right-hand side. The reason is that during simplification the
+  simplifier will at some point reduce the term @{term "1::nat"} to @{term "Suc
+  0"}, and then the simproc ``fires'' also on that term. 
+
+  We can add the simproc to, or delete it from, the current simpset with the usual 
+  \isacommand{declare}-statement. For example, the simproc will be deleted
+  with the declaration
+*}
+
+declare [[simproc del: fail_sp]]
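+
+text {*
+  Adding the simproc back works analogously with the @{text add} option of
+  this attribute; for example (we delete it again right away, so that it does
+  not interfere with the examples below):
+*}
+
+declare [[simproc add: fail_sp]]
+declare [[simproc del: fail_sp]]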
+
+text {*
+  If you want to see what happens with just \emph{this} simproc, without any 
+  interference from other rewrite rules, you can call @{text fail_sp} 
+  as follows:
+*}
+
+lemma shows "Suc 0 = 1"
+apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [@{simproc fail_sp}]) 1*})
+(*<*)oops(*>*)
+
+text {*
+  Now the message shows up only once since the term @{term "1::nat"} is 
+  left unchanged. 
+
+  Setting up a simproc using the command \isacommand{simproc\_setup} will
+  always automatically add the simproc to the current simpset. If you do not
+  want this, then you have to use a slightly different method for setting 
+  up the simproc. First the function @{ML fail_sp_aux} needs to be modified
+  to
+*}
+
+ML{*fun fail_sp_aux' simpset redex = 
+let
+  val ctxt = Simplifier.the_context simpset
+  val _ = warning ("The redex: " ^ (Syntax.string_of_term ctxt redex))
+in
+  NONE
+end*}
+
+text {*
+  Here the redex is given as a @{ML_type term}, instead of a @{ML_type cterm}
+  (therefore we print it out using the function @{ML string_of_term in Syntax}).
+  We can turn this function into a proper simproc using the function 
+  @{ML Simplifier.simproc_i}:
+*}
+
+
+ML{*val fail_sp' = 
+let 
+  val thy = @{theory}
+  val pat = [@{term "Suc n"}]
+in
+  Simplifier.simproc_i thy "fail_sp'" pat (K fail_sp_aux')
+end*}
+
+text {*
+  Here the pattern is given as @{ML_type term} (instead of @{ML_type cterm}).
+  The function also takes a list of patterns that can trigger the simproc.
+  Now the simproc is set up and can be explicitly added using
+  @{ML addsimprocs} to a simpset whenever needed. 
+
+  Simprocs are applied from inside to outside and from left to right. You can
+  see this in the proof
+*}
+
+lemma shows "Suc (Suc 0) = (Suc 1)"
+apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [fail_sp']) 1*})
+(*<*)oops(*>*)
+
+text {*
+  The simproc @{ML fail_sp'} prints out the sequence 
+
+@{text [display]
+"> Suc 0
+> Suc (Suc 0) 
+> Suc 1"}
+
+  To see how a simproc applies a theorem,  let us implement a simproc that
+  rewrites terms according to the equation:
+*}
+
+lemma plus_one: 
+  shows "Suc n \<equiv> n + 1" by simp
+
+text {*
+  Simprocs expect that the given equation is a meta-equation; however, the
+  equation can contain preconditions (the simproc will then only fire if the
+  preconditions can be solved). To see that one has relatively precise control over
+  the rewriting with simprocs, let us further assume we want the simproc to
+  rewrite only terms ``greater'' than @{term "Suc 0"}. For this we can write 
+*}
+
+
+ML{*fun plus_one_sp_aux ss redex =
+  case redex of
+    @{term "Suc 0"} => NONE
+  | _ => SOME @{thm plus_one}*}
+
+text {*
+  and set up the simproc as follows.
+*}
+
+ML{*val plus_one_sp =
+let
+  val thy = @{theory}
+  val pat = [@{term "Suc n"}] 
+in
+  Simplifier.simproc_i thy "sproc +1" pat (K plus_one_sp_aux)
+end*}
+
+text {*
+  Now the simproc is set up so that it is triggered by terms
+  of the form @{term "Suc n"}, but inside the simproc we only produce
+  a theorem if the term is not @{term "Suc 0"}. You can see the result
+  in the following proof
+*}
+
+lemma shows "P (Suc (Suc (Suc 0))) (Suc 0)"
+apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [plus_one_sp]) 1*})
+txt{*
+  where the simproc produces the goal state
+  
+  \begin{minipage}{\textwidth}
+  @{subgoals[display]}
+  \end{minipage}
+*}  
+(*<*)oops(*>*)
+
+text {*
+  As usual with rewriting you have to worry about looping: you already have
+  a loop with @{ML plus_one_sp} if you apply it with the default simpset (because
+  the default simpset contains a rule which does just the opposite of @{ML plus_one_sp},
+  namely rewriting @{text [quotes] "+ 1"} to a successor). So you have to be careful 
+  in choosing the right simpset to which you add a simproc. 
+
+  Next let us implement a simproc that replaces terms of the form @{term "Suc n"}
+  with the number @{text n} increased by one. First we implement a function that
+  takes a term and produces the corresponding integer value.
+*}
+
+ML{*fun dest_suc_trm ((Const (@{const_name "Suc"}, _)) $ t) = 1 + dest_suc_trm t
+  | dest_suc_trm t = snd (HOLogic.dest_number t)*}
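+
+text {*
+  As a quick check, applying @{ML dest_suc_trm} to a term that contains both
+  @{term Suc}s and a number gives the expected value:
+
+  @{ML_response [display,gray]
+  "dest_suc_trm @{term \"Suc (Suc 2)\"}"
+  "4"}
+*}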
+
+text {* 
+  The function @{ML dest_suc_trm} uses the library function 
+  @{ML dest_number in HOLogic} that transforms (Isabelle) terms, like 
+  @{term "0::nat"}, @{term "1::nat"}, @{term "2::nat"} and so on, into integer 
+  values. This function raises the exception @{ML TERM} if the term is not a 
+  number. The next function expects a pair consisting of a term
+  @{text t} (containing @{term Suc}s) and the corresponding integer value @{text n}. 
+*}
+
+ML %linenosgray{*fun get_thm ctxt (t, n) =
+let
+  val num = HOLogic.mk_number @{typ "nat"} n
+  val goal = Logic.mk_equals (t, num)
+in
+  Goal.prove ctxt [] [] goal (K (arith_tac ctxt 1))
+end*}
+
+text {*
+  From the integer value it generates the corresponding number term, called 
+  @{text num} (Line 3), and then generates the meta-equation @{text "t \<equiv> num"} 
+  (Line 4), which it proves by the arithmetic tactic in Line 6. 
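+
+  For example, for the term @{term "Suc (Suc 2)"} and the integer value 
+  @{text "4"} we obtain the expected meta-equation:
+
+  @{ML_response_fake [display,gray]
+  "get_thm @{context} (@{term \"Suc (Suc 2)\"}, 4)"
+  "Suc (Suc 2) \<equiv> 4"}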
+
+  For our purpose at the moment, proving the meta-equation using @{ML arith_tac} is
+  fine, but there is also an alternative employing the simplifier with a very
+  restricted simpset. For the kind of lemmas we want to prove, the simpset
+  @{text "num_ss"} in the code will suffice.
+*}
+
+ML{*fun get_thm_alt ctxt (t, n) =
+let
+  val num = HOLogic.mk_number @{typ "nat"} n
+  val goal = Logic.mk_equals (t, num)
+  val num_ss = HOL_ss addsimps [@{thm One_nat_def}, @{thm Let_def}] @ 
+          @{thms nat_number} @ @{thms neg_simps} @ @{thms plus_nat.simps}
+in
+  Goal.prove ctxt [] [] goal (K (simp_tac num_ss 1))
+end*}
+
+text {*
+  The advantage of @{ML get_thm_alt} is that it leaves very little room for 
+  something to go wrong; in contrast it is much more difficult to predict 
+  what happens with @{ML arith_tac}, especially in more complicated 
+  circumstances. The disadvantage of @{ML get_thm_alt} is that you have to find 
+  a simpset that is sufficiently powerful to solve every instance of the lemmas
+  we like to prove. This requires careful tuning, but is often necessary in 
+  ``production code''.\footnote{It would be of great help if there were another
+  way than tracing the simplifier to obtain the lemmas that are successfully 
+  applied during simplification. Alas, there is none.} 
+
+  Anyway, either version can be used in the function that produces the actual 
+  theorem for the simproc.
+*}
+
+ML{*fun nat_number_sp_aux ss t =
+let 
+  val ctxt = Simplifier.the_context ss
+in
+  SOME (get_thm ctxt (t, dest_suc_trm t))
+  handle TERM _ => NONE
+end*}
+
+text {*
+  This function uses the fact that @{ML dest_suc_trm} might raise the exception 
+  @{ML TERM}. In this case there is nothing that can be rewritten and therefore no
+  theorem is produced (i.e.~the function returns @{ML NONE}). To try out the simproc 
+  on an example, you can set it up as follows:
+*}
+
+ML{*val nat_number_sp =
+let
+  val thy = @{theory}
+  val pat = [@{term "Suc n"}]
+in 
+  Simplifier.simproc_i thy "nat_number" pat (K nat_number_sp_aux) 
+end*}
+
+text {* 
+  Now in the lemma
+  *}
+
+lemma "P (Suc (Suc 2)) (Suc 99) (0::nat) (Suc 4 + Suc 0) (Suc (0 + 0))"
+apply(tactic {* simp_tac (HOL_ss addsimprocs [nat_number_sp]) 1*})
+txt {* 
+  you obtain the more legible goal state
+  
+  \begin{minipage}{\textwidth}
+  @{subgoals [display]}
+  \end{minipage}
+  *}
+(*<*)oops(*>*)
+
+text {*
+  where the simproc rewrites all @{term "Suc"}s except in the last argument. There it cannot 
+  rewrite anything, because it does not know how to transform the term @{term "Suc (0 + 0)"}
+  into a number. To solve this problem, have a look at the next exercise. 
+
+  \begin{exercise}\label{ex:addsimproc}
+  Write a simproc that replaces terms of the form @{term "t\<^isub>1 + t\<^isub>2"} by their 
+  result. You can assume the terms are ``proper'' numbers, that is of the form
+  @{term "0::nat"}, @{term "1::nat"}, @{term "2::nat"} and so on.
+  \end{exercise}
+
+  (FIXME: We did not do anything with morphisms. Anything interesting
+  one can say about them?)
+*}
+
+section {* Conversions\label{sec:conversion} *}
+
+text {*
+
+  Conversions are a thin layer on top of Isabelle's inference kernel, and 
+  can be viewed as a controllable, bare-bones version of Isabelle's simplifier.
+  One difference between conversions and the simplifier is that the former
+  act on @{ML_type cterm}s while the latter acts on @{ML_type thm}s. 
+  However, we will also show in this section how conversions can be applied
+  to theorems via tactics. The type for conversions is
+*}
+
+ML{*type conv = cterm -> thm*}
+
+text {*
+  whereby the produced theorem is always a meta-equality. A simple conversion
+  is the function @{ML "Conv.all_conv"}, which maps a @{ML_type cterm} to an
+  instance of the (meta)reflexivity theorem. For example:
+
+  @{ML_response_fake [display,gray]
+  "Conv.all_conv @{cterm \"Foo \<or> Bar\"}"
+  "Foo \<or> Bar \<equiv> Foo \<or> Bar"}
+
+  Another simple conversion is @{ML Conv.no_conv} which always raises the 
+  exception @{ML CTERM}.
+
+  @{ML_response_fake [display,gray]
+  "Conv.no_conv @{cterm True}" 
+  "*** Exception- CTERM (\"no conversion\", []) raised"}
+
+  A more interesting conversion is the function @{ML "Thm.beta_conversion"}: it 
+  produces a meta-equation between a term and its beta-normal form. For example
+
+  @{ML_response_fake [display,gray]
+  "let
+  val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
+  val two = @{cterm \"2::nat\"}
+  val ten = @{cterm \"10::nat\"}
+in
+  Thm.beta_conversion true (Thm.capply (Thm.capply add two) ten)
+end"
+  "((\<lambda>x y. x + y) 2) 10 \<equiv> 2 + 10"}
+
+  Note that the actual response in this example is @{term "2 + 10 \<equiv> 2 + (10::nat)"}, 
+  since the pretty-printer for @{ML_type cterm}s already beta-normalises terms.
+  But the way we constructed the term (using the function 
+  @{ML Thm.capply}, which is the application @{ML $} for @{ML_type cterm}s)
+  ensures that the left-hand side must contain beta-redexes. Indeed,
+  if we obtain the ``raw'' representation of the produced theorem, we
+  can see the difference:
+
+  @{ML_response [display,gray]
+"let
+  val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
+  val two = @{cterm \"2::nat\"}
+  val ten = @{cterm \"10::nat\"}
+  val thm = Thm.beta_conversion true (Thm.capply (Thm.capply add two) ten)
+in
+  #prop (rep_thm thm)
+end"
+"Const (\"==\",\<dots>) $ 
+  (Abs (\"x\",\<dots>,Abs (\"y\",\<dots>,\<dots>)) $\<dots>$\<dots>) $ 
+    (Const (\"HOL.plus_class.plus\",\<dots>) $ \<dots> $ \<dots>)"} 
+
+  The argument @{ML true} in @{ML Thm.beta_conversion} indicates that 
+  the right-hand side will be fully beta-normalised. If instead 
+  @{ML false} is given, then only a single beta-reduction is performed 
+  on the outer-most level. For example
+
+  @{ML_response_fake [display,gray]
+  "let
+  val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
+  val two = @{cterm \"2::nat\"}
+in
+  Thm.beta_conversion false (Thm.capply add two)
+end"
+  "((\<lambda>x y. x + y) 2) \<equiv> \<lambda>y. 2 + y"} 
+
+  Again, what we actually see as output is only the fully normalised term 
+  @{text "\<lambda>y. 2 + y"}.
+
+  The main point of conversions is that they can be used for rewriting
+  @{ML_type cterm}s. To do this you can use the function @{ML
+  "Conv.rewr_conv"}, which expects a meta-equation as an argument. Suppose we
+  want to rewrite a @{ML_type cterm} according to the meta-equation:
+*}
+
+lemma true_conj1: "True \<and> P \<equiv> P" by simp
+
+text {* 
+  You can see how this function works in the example rewriting 
+  @{term "True \<and> (Foo \<longrightarrow> Bar)"} to @{term "Foo \<longrightarrow> Bar"}.
+
+  @{ML_response_fake [display,gray]
+"let 
+  val ctrm = @{cterm \"True \<and> (Foo \<longrightarrow> Bar)\"}
+in
+  Conv.rewr_conv @{thm true_conj1} ctrm
+end"
+  "True \<and> (Foo \<longrightarrow> Bar) \<equiv> Foo \<longrightarrow> Bar"}
+
+  Note, however, that the function @{ML Conv.rewr_conv} only rewrites the 
+  outer-most level of the @{ML_type cterm}. If the given @{ML_type cterm} does 
+  not match the left-hand side of the theorem exactly, then @{ML Conv.rewr_conv} 
+  raises the exception @{ML CTERM}.
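+
+  For example, the following call fails, because @{term "P \<and> Q"} is not an
+  instance of the left-hand side of @{thm [source] true_conj1} (we elide the
+  components of the exception):
+
+  @{ML_response_fake [display,gray]
+  "Conv.rewr_conv @{thm true_conj1} @{cterm \"P \<and> Q\"}"
+  "*** Exception- CTERM \<dots> raised"}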
+
+  This very primitive way of rewriting can be made more powerful by
+  combining several conversions into one. For this you can use conversion
+  combinators. The simplest conversion combinator is @{ML then_conv}, 
+  which applies one conversion after another. For example
+
+  @{ML_response_fake [display,gray]
+"let
+  val conv1 = Thm.beta_conversion false
+  val conv2 = Conv.rewr_conv @{thm true_conj1}
+  val ctrm = Thm.capply @{cterm \"\<lambda>x. x \<and> False\"} @{cterm \"True\"}
+in
+  (conv1 then_conv conv2) ctrm
+end"
+  "(\<lambda>x. x \<and> False) True \<equiv> False"}
+
+  where we first beta-reduce the term and then rewrite according to
+  @{thm [source] true_conj1}. (Recall the problem with the pretty-printer
+  normalising all terms.)
+
+  The conversion combinator @{ML else_conv} tries out the 
+  first one, and if it does not apply, tries the second. For example 
+
+  @{ML_response_fake [display,gray]
+"let
+  val conv = Conv.rewr_conv @{thm true_conj1} else_conv Conv.all_conv
+  val ctrm1 = @{cterm \"True \<and> Q\"}
+  val ctrm2 = @{cterm \"P \<or> (True \<and> Q)\"}
+in
+  (conv ctrm1, conv ctrm2)
+end"
+"(True \<and> Q \<equiv> Q, P \<or> True \<and> Q \<equiv> P \<or> True \<and> Q)"}
+
+  Here the conversion of @{thm [source] true_conj1} only applies
+  in the first case, but fails in the second. The whole conversion
+  does not fail, however, because the combinator @{ML Conv.else_conv} will then 
+  try out @{ML Conv.all_conv}, which always succeeds.
+
+  The conversion combinator @{ML Conv.try_conv} constructs a conversion 
+  which is tried out on a term, but in case of failure just does nothing.
+  For example
+  
+  @{ML_response_fake [display,gray]
+  "Conv.try_conv (Conv.rewr_conv @{thm true_conj1}) @{cterm \"True \<or> P\"}"
+  "True \<or> P \<equiv> True \<or> P"}
+
+  Apart from the function @{ML beta_conversion in Thm}, which is able to fully
+  beta-normalise a term, the conversions so far are restricted in that they
+  only apply to the outer-most level of a @{ML_type cterm}. In what follows we
+  will lift this restriction. The combinator @{ML Conv.arg_conv} will apply
+  the conversion to the argument of an application, that is, the term
+  must be of the form @{ML "t1 $ t2" for t1 t2} and the conversion is applied
+  to @{text t2}.  For example
+
+  @{ML_response_fake [display,gray]
+"let 
+  val conv = Conv.rewr_conv @{thm true_conj1}
+  val ctrm = @{cterm \"P \<or> (True \<and> Q)\"}
+in
+  Conv.arg_conv conv ctrm
+end"
+"P \<or> (True \<and> Q) \<equiv> P \<or> Q"}
+
+  The reason for this behaviour is that @{text "(op \<or>)"} expects two
+  arguments.  Therefore the term must be of the form @{text "(Const \<dots> $ t1) $ t2"}. The
+  conversion is then applied to @{text "t2"}, which in the example above
+  stands for @{term "True \<and> Q"}. The function @{ML Conv.fun_conv} applies
+  the conversion to the function part of an application, that is @{text "t1"} 
+  in a term of the form @{text "t1 $ t2"}.
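+
+  For example, by combining @{ML Conv.fun_conv} with @{ML Conv.arg_conv} we
+  can reach the first argument of @{text "(op \<or>)"} (a sketch that uses only
+  the combinators introduced so far):
+
+  @{ML_response_fake [display,gray]
+"let 
+  val conv = Conv.rewr_conv @{thm true_conj1}
+  val ctrm = @{cterm \"(True \<and> P) \<or> Q\"}
+in
+  Conv.fun_conv (Conv.arg_conv conv) ctrm
+end"
+"(True \<and> P) \<or> Q \<equiv> P \<or> Q"}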
+
+  The function @{ML Conv.abs_conv} applies a conversion under an abstraction.
+  For example:
+
+  @{ML_response_fake [display,gray]
+"let 
+  val conv = K (Conv.rewr_conv @{thm true_conj1}) 
+  val ctrm = @{cterm \"\<lambda>P. True \<and> P \<and> Foo\"}
+in
+  Conv.abs_conv conv @{context} ctrm
+end"
+  "\<lambda>P. True \<and> P \<and> Foo \<equiv> \<lambda>P. P \<and> Foo"}
+  
+  Note that this conversion needs a context as an argument. The conversion that 
+  goes under an application is @{ML Conv.combination_conv}. It expects two conversions 
+  as arguments, each of which is applied to the corresponding ``branch'' of the
+  application. 
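+
+  For example, we can pair @{ML Conv.all_conv} for the function part with a
+  rewriting conversion for the argument (here @{text f} is just an arbitrary
+  free variable):
+
+  @{ML_response_fake [display,gray]
+"let 
+  val conv = Conv.rewr_conv @{thm true_conj1}
+  val ctrm = @{cterm \"f (True \<and> P)\"}
+in
+  Conv.combination_conv Conv.all_conv conv ctrm
+end"
+"f (True \<and> P) \<equiv> f P"}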
+
+  We can now apply all these functions in a conversion that recursively
+  descends a term and applies a ``@{thm [source] true_conj1}''-conversion 
+  in all possible positions.
+*}
+
+ML %linenosgray{*fun all_true1_conv ctxt ctrm =
+  case (Thm.term_of ctrm) of
+    @{term "op \<and>"} $ @{term True} $ _ => 
+      (Conv.arg_conv (all_true1_conv ctxt) then_conv
+         Conv.rewr_conv @{thm true_conj1}) ctrm
+  | _ $ _ => Conv.combination_conv 
+               (all_true1_conv ctxt) (all_true1_conv ctxt) ctrm
+  | Abs _ => Conv.abs_conv (fn (_, ctxt) => all_true1_conv ctxt) ctxt ctrm
+  | _ => Conv.all_conv ctrm*}
+
+text {*
+  This function ``fires'' if the term is of the form @{text "True \<and> \<dots>"}; 
+  it descends under applications (Lines 6 and 7) and abstractions 
+  (Line 8); otherwise it leaves the term unchanged (Line 9). In Line 2
+  we need to transform the @{ML_type cterm} into a @{ML_type term} in order
+  to be able to pattern-match the term. To see this 
+  conversion in action, consider the following example:
+
+@{ML_response_fake [display,gray]
+"let
+  val ctxt = @{context}
+  val ctrm = @{cterm \"distinct [1, x] \<longrightarrow> True \<and> 1 \<noteq> x\"}
+in
+  all_true1_conv ctxt ctrm
+end"
+  "distinct [1, x] \<longrightarrow> True \<and> 1 \<noteq> x \<equiv> distinct [1, x] \<longrightarrow> 1 \<noteq> x"}
+
+  To see how much control you have over rewriting with conversions, let us 
+  make the task a bit more complicated by rewriting according to the rule
+  @{text true_conj1}, but only in the first arguments of @{term If}s. Then 
+  the conversion should be as follows.
+*}
+
+ML{*fun if_true1_conv ctxt ctrm =
+  case Thm.term_of ctrm of
+    Const (@{const_name If}, _) $ _ =>
+      Conv.arg_conv (all_true1_conv ctxt) ctrm
+  | _ $ _ => Conv.combination_conv 
+                (if_true1_conv ctxt) (if_true1_conv ctxt) ctrm
+  | Abs _ => Conv.abs_conv (fn (_, ctxt) => if_true1_conv ctxt) ctxt ctrm
+  | _ => Conv.all_conv ctrm *}
+
+text {*
+  Here is an example for this conversion:
+
+  @{ML_response_fake [display,gray]
+"let
+  val ctxt = @{context}
+  val ctrm = 
+       @{cterm \"if P (True \<and> 1 \<noteq> 2) then True \<and> True else True \<and> False\"}
+in
+  if_true1_conv ctxt ctrm
+end"
+"if P (True \<and> 1 \<noteq> 2) then True \<and> True else True \<and> False 
+\<equiv> if P (1 \<noteq> 2) then True \<and> True else True \<and> False"}
+*}
+
+text {*
+  So far we only applied conversions to @{ML_type cterm}s. Conversions can, however, 
+  also work on theorems using the function @{ML "Conv.fconv_rule"}. As an example, 
+  consider the conversion @{ML all_true1_conv} and the lemma:
+*}
+
+lemma foo_test: "P \<or> (True \<and> \<not>P)" by simp
+
+text {*
+  Using the conversion you can transform this theorem into a new theorem
+  as follows
+
+  @{ML_response_fake [display,gray]
+  "Conv.fconv_rule (all_true1_conv @{context}) @{thm foo_test}" 
+  "?P \<or> \<not> ?P"}
+
+  Finally, conversions can also be turned into tactics and then applied to
+  goal states. This can be done with the help of the function @{ML CONVERSION},
+  and also some predefined conversion combinators that traverse a goal
+  state. The combinators for the goal state are: @{ML Conv.params_conv} for
+  converting under parameters (i.e.~where goals are of the form @{text "\<And>x. P \<Longrightarrow>
+  Q"}); the function @{ML Conv.prems_conv} for applying a conversion to all
+  premises of a goal, and @{ML Conv.concl_conv} for applying a conversion to
+  the conclusion of a goal.
+
+  Assume we want to apply @{ML all_true1_conv} only in the conclusion
+  of the goal, and @{ML if_true1_conv} should only apply to the premises.
+  Here is a tactic doing exactly that:
+*}
+
+ML{*fun true1_tac ctxt = CSUBGOAL (fn (goal, i) =>
+  CONVERSION 
+    (Conv.params_conv ~1 (fn ctxt =>
+       (Conv.prems_conv ~1 (if_true1_conv ctxt) then_conv
+          Conv.concl_conv ~1 (all_true1_conv ctxt))) ctxt) i)*}
+
+text {* 
+  We call the conversions with the argument @{ML "~1"}; this means that 
+  all parameters, premises and conclusions are analysed. If we call them with 
+  a non-negative number, say @{text n}, then these conversions will 
+  only be called on @{text n} premises (similarly for parameters and
+  conclusions). To test the tactic, consider the proof
+*}
+
+lemma
+  "if True \<and> P then P else True \<and> False \<Longrightarrow>
+  (if True \<and> Q then True \<and> Q else P) \<longrightarrow> True \<and> (True \<and> Q)"
+apply(tactic {* true1_tac @{context} 1 *})
+txt {* where the tactic yields the goal state
+
+  \begin{minipage}{\textwidth}
+  @{subgoals [display]}
+  \end{minipage}*}
+(*<*)oops(*>*)
+
+text {*
+  As you can see, the premises are rewritten according to @{ML if_true1_conv}, while
+  the conclusion is rewritten according to @{ML all_true1_conv}.
+
+  To sum up this section, conversions are not as powerful as the simplifier
+  and simprocs; the advantage of conversions, however, is that you never have
+  to worry about non-termination.
+
+  \begin{exercise}\label{ex:addconversion}
+  Write a tactic that does the same as the simproc in exercise
+  \ref{ex:addsimproc}, but is based on conversions. That means it should replace
+  terms of the form @{term "t\<^isub>1 + t\<^isub>2"} by their result. You can make
+  the same assumptions as in \ref{ex:addsimproc}. 
+  \end{exercise}
+
+  \begin{exercise}\label{ex:compare}
+  Compare your solutions of Exercises~\ref{ex:addsimproc} and \ref{ex:addconversion},
+  and try to determine which way of rewriting such terms is faster. For this you might 
+  have to construct quite large terms. Also see Recipe \ref{rec:timing} for information 
+  about timing.
+  \end{exercise}
+
+  \begin{readmore}
+  See @{ML_file "Pure/conv.ML"} for more information about conversion combinators. 
+  Further conversions are defined in  @{ML_file "Pure/thm.ML"},
+  @{ML_file "Pure/drule.ML"} and @{ML_file "Pure/meta_simplifier.ML"}.
+  \end{readmore}
+
+*}
+
+text {*
+  (FIXME: check whether @{ML Pattern.match_rew} and @{ML Pattern.rewrite_term}
+  are of any use/efficient)
+*}
+
+
+section {* Structured Proofs (TBD) *}
+
+text {* TBD *}
+
+lemma True
+proof
+
+  {
+    fix A B C
+    assume r: "A & B \<Longrightarrow> C"
+    assume A B
+    then have "A & B" ..
+    then have C by (rule r)
+  }
+
+  {
+    fix A B C
+    assume r: "A & B \<Longrightarrow> C"
+    assume A B
+    note conjI [OF this]
+    note r [OF this]
+  }
+oops
+
+ML {* fun prop ctxt s =
+  Thm.cterm_of (ProofContext.theory_of ctxt) (Syntax.read_prop ctxt s) *}
+
+ML {* 
+  val ctxt0 = @{context};
+  val ctxt = ctxt0;
+  (* fix the variables A, B and C *)
+  val (_, ctxt) = Variable.add_fixes ["A", "B", "C"] ctxt;
+  (* assume the rule r and the two assumptions *)
+  val ([r], ctxt) = Assumption.add_assumes [prop ctxt "A & B \<Longrightarrow> C"] ctxt;
+  val (this, ctxt) = Assumption.add_assumes [prop ctxt "A", prop ctxt "B"] ctxt;
+  (* derive A & B and then C, as in the Isar snippet above *)
+  val this = [@{thm conjI} OF this]; 
+  val this = r OF this;
+  (* export the result back into the original context *)
+  val this = Assumption.export false ctxt ctxt0 this 
+  val this = Variable.export ctxt ctxt0 [this] 
+*}
+
+
+
+end