ProgTutorial/Essential.thy
changeset 395 2c392f61f400
parent 394 0019ebf76e10
child 396 a2e49e0771b3
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Essential.thy	Thu Nov 19 17:48:44 2009 +0100
@@ -0,0 +1,2359 @@
+theory Essential
+imports Base FirstSteps
+begin
+
+(*<*)
+setup{*
+open_file_with_prelude 
+  "Esseantial_Code.thy"
+  ["theory General", "imports Base FirstSteps", "begin"]
+*}
+(*>*)
+
+
+chapter {* Isabelle Essentials *}
+
+text {*
+  Isabelle is built around a few central ideas. One central idea is the
+  LCF-approach to theorem proving, where there is a small trusted core and
+  everything else is built on top of this trusted core 
+  \cite{GordonMilnerWadsworth79}. The fundamental data
+  structures involved in this core are certified terms and certified types, 
+  as well as theorems.
+*}
+
+
+section {* Terms and Types *}
+
+text {*
+  In Isabelle, there are certified terms and uncertified terms (and analogously types). 
+  Uncertified terms are often just called terms. One way to construct them is by 
+  using the antiquotation \mbox{@{text "@{term \<dots>}"}}. For example
+
+  @{ML_response [display,gray] 
+"@{term \"(a::nat) + b = c\"}" 
+"Const (\"op =\", \<dots>) $ 
+  (Const (\"HOL.plus_class.plus\", \<dots>) $ \<dots> $ \<dots>) $ \<dots>"}
+
+  constructs the term @{term "(a::nat) + b = c"}. The resulting term is printed using 
+  the internal representation corresponding to the datatype @{ML_type_ind "term"}, 
+  which is defined as follows: 
+*}  
+
+ML_val %linenosgray{*datatype term =
+  Const of string * typ 
+| Free of string * typ 
+| Var of indexname * typ 
+| Bound of int 
+| Abs of string * typ * term 
+| $ of term * term *}
+
+text {*
+  This datatype implements Church-style lambda-terms, where types are
+  explicitly recorded in variables, constants and abstractions.  As can be
+  seen in Line 5, terms use the usual de Bruijn index mechanism for
+  representing bound variables.  For example in
+
+  @{ML_response_fake [display, gray]
+  "@{term \"\<lambda>x y. x y\"}"
+  "Abs (\"x\", \"'a \<Rightarrow> 'b\", Abs (\"y\", \"'a\", Bound 1 $ Bound 0))"}
+
+  the indices refer to the number of Abstractions (@{ML Abs}) that we need to
+  skip until we hit the @{ML Abs} that binds the corresponding
+  variable. Constructing a term with dangling de Bruijn indices is possible,
+  but will be flagged as ill-formed when you try to typecheck or certify it
+  (see Section~\ref{sec:typechecking}). Note that the names of bound variables
+  are kept at abstractions for printing purposes, and so should be treated
+  only as ``comments''.  Application in Isabelle is realised with the
+  term-constructor @{ML $}.
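+
+  For example, the term @{text "f x"}, with @{text "f"} and @{text "x"} free, 
+  can be constructed by applying the constructor @{ML $} directly:
+
+  @{ML_response_fake [display,gray]
+  "Free (\"f\", @{typ \"nat \<Rightarrow> nat\"}) $ Free (\"x\", @{typ nat})"
+  "Free (\"f\", \"nat \<Rightarrow> nat\") $ Free (\"x\", \"nat\")"}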
+
+  Isabelle makes a distinction between \emph{free} variables (term-constructor
+  @{ML Free} and written on the user level in blue colour) and
+  \emph{schematic} variables (term-constructor @{ML Var} and written with a
+  leading question mark). Consider the following two examples
+  
+  @{ML_response_fake [display, gray]
+"let
+  val v1 = Var ((\"x\", 3), @{typ bool})
+  val v2 = Var ((\"x1\", 3), @{typ bool})
+  val v3 = Free (\"x\", @{typ bool})
+in
+  string_of_terms @{context} [v1, v2, v3]
+  |> tracing
+end"
+"?x3, ?x1.3, x"}
+
+  When constructing terms, you are usually concerned with free variables (as
+  mentioned earlier, you cannot construct schematic variables using the
+  antiquotation @{text "@{term \<dots>}"}). If you deal with theorems, however, you
+  have to observe the distinction. The reason is that only schematic
+  variables can be instantiated with terms when a theorem is applied. A
+  similar distinction between free and schematic variables holds for types
+  (see below).
+
+  \begin{readmore}
+  Terms and types are described in detail in \isccite{sec:terms}. Their
+  definition and many useful operations are implemented in @{ML_file "Pure/term.ML"}.
+  For constructing terms involving HOL constants, many helper functions are defined
+  in @{ML_file "HOL/Tools/hologic.ML"}.
+  \end{readmore}
+  
+  Constructing terms via antiquotations has the advantage that only typable
+  terms can be constructed. For example
+
+  @{ML_response_fake_both [display,gray]
+  "@{term \"x x\"}"
+  "Type unification failed: Occurs check!"}
+
+  raises a typing error, while it is perfectly ok to construct the term
+
+  @{ML_response_fake [display,gray] 
+"let
+  val omega = Free (\"x\", @{typ \"nat \<Rightarrow> nat\"}) $ Free (\"x\", @{typ nat})
+in 
+  tracing (string_of_term @{context} omega)
+end"
+  "x x"}
+
+  with the raw ML-constructors.
+  
+  Sometimes the internal representation of terms can be surprisingly different
+  from what you see at the user-level, because the layers of
+  parsing/type-checking/pretty printing can be quite elaborate. 
+
+  \begin{exercise}
+  Look at the internal term representation of the following terms, and
+  find out why they are represented like this:
+
+  \begin{itemize}
+  \item @{term "case x of 0 \<Rightarrow> 0 | Suc y \<Rightarrow> y"}  
+  \item @{term "\<lambda>(x,y). P y x"}  
+  \item @{term "{ [x::int] | x. x \<le> -2 }"}  
+  \end{itemize}
+
+  Hint: The third term is already quite big, and the pretty printer
+  may omit parts of it by default. If you want to see all of it, you
+  can use the following ML-function to set the printing depth to a higher 
+  value:
+
+  @{ML [display,gray] "print_depth 50"}
+  \end{exercise}
+
+  The antiquotation @{text "@{prop \<dots>}"} constructs terms by inserting the 
+  usually invisible @{text "Trueprop"}-coercions whenever necessary. 
+  Consider for example the pairs
+
+@{ML_response [display,gray] "(@{term \"P x\"}, @{prop \"P x\"})" 
+"(Free (\"P\", \<dots>) $ Free (\"x\", \<dots>),
+ Const (\"Trueprop\", \<dots>) $ (Free (\"P\", \<dots>) $ Free (\"x\", \<dots>)))"}
+ 
+  where a coercion is inserted in the second component and 
+
+  @{ML_response [display,gray] "(@{term \"P x \<Longrightarrow> Q x\"}, @{prop \"P x \<Longrightarrow> Q x\"})" 
+  "(Const (\"==>\", \<dots>) $ \<dots> $ \<dots>, 
+ Const (\"==>\", \<dots>) $ \<dots> $ \<dots>)"}
+
+  where it is not (since the term is already constructed with a meta-implication). 
+  The purpose of the @{text "Trueprop"}-coercion is to embed formulae of
+  an object logic, for example HOL, into the meta-logic of Isabelle. The coercion
+  is needed whenever a term is constructed that will be proved as a theorem. 
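+
+  On the ML-level this coercion can be inserted with the function 
+  @{ML_ind mk_Trueprop in HOLogic}. For example
+
+  @{ML_response_fake [display,gray]
+  "HOLogic.mk_Trueprop @{term \"P (x::nat)\"}"
+  "Const (\"Trueprop\", \<dots>) $ (Free (\"P\", \<dots>) $ Free (\"x\", \<dots>))"}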
+
+  As already seen above, types can be constructed using the antiquotation 
+  @{text "@{typ \<dots>}"}. For example:
+
+  @{ML_response_fake [display,gray] "@{typ \"bool \<Rightarrow> nat\"}" "bool \<Rightarrow> nat"}
+
+  The corresponding datatype is
+*}
+  
+ML_val{*datatype typ =
+  Type  of string * typ list 
+| TFree of string * sort 
+| TVar  of indexname * sort *}
+
+text {*
+  As with terms, there is the distinction between free type
+  variables (type-constructor @{ML "TFree"}) and schematic
+  type variables (type-constructor @{ML "TVar"}, printed with
+  a leading question mark). Type constants,
+  like @{typ "int"} or @{typ bool}, are types with an empty list
+  of argument types. However, it needs a bit of effort to show an
+  example, because Isabelle always pretty prints types (unlike terms).
+  Using just the antiquotation @{text "@{typ \"bool\"}"} we only see
+
+  @{ML_response [display, gray]
+  "@{typ \"bool\"}"
+  "bool"}
+
+  which is the pretty printed version of @{text "bool"}. However, in PolyML
+  (version 5.3) it is easy to install your own pretty printer. With the
+  function below we mimic the behaviour of the usual pretty printer for
+  datatypes (it uses pretty-printing functions which will be explained in more
+  detail in Section~\ref{sec:pretty}).
+*}
+
+ML{*local
+  fun pp_pair (x, y) = Pretty.list "(" ")" [x, y]
+  fun pp_list xs = Pretty.list "[" "]" xs
+  fun pp_str s   = Pretty.str s
+  fun pp_qstr s  = Pretty.quote (pp_str s)
+  fun pp_int i   = pp_str (string_of_int i)
+  fun pp_sort S  = pp_list (map pp_qstr S)
+  fun pp_constr a args = Pretty.block [pp_str a, Pretty.brk 1, args]
+in
+fun raw_pp_typ (TVar ((a, i), S)) = 
+      pp_constr "TVar" (pp_pair (pp_pair (pp_qstr a, pp_int i), pp_sort S))
+  | raw_pp_typ (TFree (a, S)) = 
+      pp_constr "TFree" (pp_pair (pp_qstr a, pp_sort S))
+  | raw_pp_typ (Type (a, tys)) = 
+      pp_constr "Type" (pp_pair (pp_qstr a, pp_list (map raw_pp_typ tys)))
+end*}
+
+text {*
+  We can install this pretty printer with the function 
+  @{ML_ind addPrettyPrinter in PolyML} as follows.
+*}
+
+ML{*PolyML.addPrettyPrinter 
+  (fn _ => fn _ => ml_pretty o Pretty.to_ML o raw_pp_typ)*}
+
+text {*
+  Now the type bool is printed out in full detail.
+
+  @{ML_response [display,gray]
+  "@{typ \"bool\"}"
+  "Type (\"bool\", [])"}
+
+  When printing out a list-type
+  
+  @{ML_response [display,gray]
+  "@{typ \"'a list\"}"
+  "Type (\"List.list\", [TFree (\"'a\", [\"HOL.type\"])])"}
+
+  we can see that the full name of the type is actually @{text "List.list"}, indicating
+  that it is defined in the theory @{text "List"}. However, one has to be 
+  careful with names of types, because even though
+  @{text "bool"} and @{text "nat"} are defined in the 
+  theories @{text "HOL"} and @{text "Nat"}, respectively, they are 
+  still represented by their unqualified names (the same holds for the 
+  fundamental type constructor @{text "fun"}).
+
+   @{ML_response [display,gray]
+  "@{typ \"bool \<Rightarrow> nat\"}"
+  "Type (\"fun\", [Type (\"bool\", []), Type (\"nat\", [])])"}
+
+  We can restore the usual behaviour of Isabelle's pretty printer
+  with the code
+*}
+
+ML{*PolyML.addPrettyPrinter 
+  (fn _ => fn _ => ml_pretty o Pretty.to_ML o Proof_Display.pp_typ Pure.thy)*}
+
+text {*
+  After that the types for booleans, lists and so on are printed out again 
+  in the standard Isabelle way.
+
+  @{ML_response_fake [display, gray]
+  "@{typ \"bool\"};
+@{typ \"'a list\"}"
+  "\"bool\"
+\"'a List.list\""}
+
+  \begin{readmore}
+  Types are described in detail in \isccite{sec:types}. Their
+  definition and many useful operations are implemented 
+  in @{ML_file "Pure/type.ML"}.
+  \end{readmore}
+*}
+
+section {* Constructing Terms and Types Manually\label{sec:terms_types_manually} *} 
+
+text {*
+  While antiquotations are very convenient for constructing terms, they can
+  only construct fixed terms (remember they are ``linked'' at compile-time).
+  However, you often need to construct terms manually. For example, a
+  function that returns the implication @{text "\<And>(x::nat). P x \<Longrightarrow> Q x"} taking
+  @{term P} and @{term Q} as arguments can only be written as:
+
+*}
+
+ML{*fun make_imp P Q =
+let
+  val x = Free ("x", @{typ nat})
+in 
+  Logic.all x (Logic.mk_implies (P $ x, Q $ x))
+end *}
+
+text {*
+  The reason is that you cannot pass the arguments @{term P} and @{term Q} 
+  into an antiquotation.\footnote{At least not at the moment.} For example 
+  the following does \emph{not} work.
+*}
+
+ML{*fun make_wrong_imp P Q = @{prop "\<And>(x::nat). P x \<Longrightarrow> Q x"} *}
+
+text {*
+  To see this, apply @{text "@{term S}"} and @{text "@{term T}"} 
+  to both functions. With @{ML make_imp} you obtain the intended term involving 
+  the given arguments
+
+  @{ML_response [display,gray] "make_imp @{term S} @{term T}" 
+"Const \<dots> $ 
+  Abs (\"x\", Type (\"nat\",[]),
+    Const \<dots> $ (Free (\"S\",\<dots>) $ \<dots>) $ (Free (\"T\",\<dots>) $ \<dots>))"}
+
+  whereas with @{ML make_wrong_imp} you obtain a term involving the @{text "P"} 
+  and @{text "Q"} from the antiquotation.
+
+  @{ML_response [display,gray] "make_wrong_imp @{term S} @{term T}" 
+"Const \<dots> $ 
+  Abs (\"x\", \<dots>,
+    Const \<dots> $ (Const \<dots> $ (Free (\"P\",\<dots>) $ \<dots>)) $ 
+                (Const \<dots> $ (Free (\"Q\",\<dots>) $ \<dots>)))"}
+
+  There are a number of handy functions that are frequently used for
+  constructing terms. One is the function @{ML_ind list_comb in Term}, which
+  takes as argument a term and a list of terms, and produces as output the
+  term applied to the list of terms. For example
+
+@{ML_response_fake [display,gray]
+"let
+  val trm = @{term \"P::nat\"}
+  val args = [@{term \"True\"}, @{term \"False\"}]
+in
+  list_comb (trm, args)
+end"
+"Free (\"P\", \"nat\") $ Const (\"True\", \"bool\") $ Const (\"False\", \"bool\")"}
+
+  Another handy function is @{ML_ind lambda in Term}, which abstracts a variable 
+  in a term. For example
+
+  @{ML_response_fake [display,gray]
+"let
+  val x_nat = @{term \"x::nat\"}
+  val trm = @{term \"(P::nat \<Rightarrow> bool) x\"}
+in
+  lambda x_nat trm
+end"
+  "Abs (\"x\", \"nat\", Free (\"P\", \"bool \<Rightarrow> bool\") $ Bound 0)"}
+
+  In this example, @{ML lambda} produces a de Bruijn index (i.e.~@{ML "Bound 0"}) 
+  and an abstraction, where it also records the type of the abstracted
+  variable and, for printing purposes, also its name.  Note that because of the
+  typing annotation on @{text "P"}, the variable @{text "x"} in @{text "P x"}
+  is of the same type as the abstracted variable. If it is of a different type,
+  as in
+
+  @{ML_response_fake [display,gray]
+"let
+  val x_int = @{term \"x::int\"}
+  val trm = @{term \"(P::nat \<Rightarrow> bool) x\"}
+in
+  lambda x_int trm
+end"
+  "Abs (\"x\", \"int\", Free (\"P\", \"nat \<Rightarrow> bool\") $ Free (\"x\", \"nat\"))"}
+
+  then the variable @{text "Free (\"x\", \"int\")"} is \emph{not} abstracted. 
+  This is a fundamental principle
+  of Church-style typing, where variables with the same name still differ if they 
+  have different types.
+
+  There is also the function @{ML_ind subst_free in Term} with which terms can be
+  replaced by other terms. For example below, we will replace in @{term
+  "(f::nat \<Rightarrow> nat \<Rightarrow> nat) 0 x"} the subterm @{term "(f::nat \<Rightarrow> nat \<Rightarrow> nat) 0"} by
+  @{term y}, and @{term x} by @{term True}.
+
+  @{ML_response_fake [display,gray]
+"let
+  val sub1 = (@{term \"(f::nat \<Rightarrow> nat \<Rightarrow> nat) 0\"}, @{term \"y::nat \<Rightarrow> nat\"})
+  val sub2 = (@{term \"x::nat\"}, @{term \"True\"})
+  val trm = @{term \"((f::nat \<Rightarrow> nat \<Rightarrow> nat) 0) x\"}
+in
+  subst_free [sub1, sub2] trm
+end"
+  "Free (\"y\", \"nat \<Rightarrow> nat\") $ Const (\"True\", \"bool\")"}
+
+  As can be seen, @{ML subst_free} does not take typability into account.
+  However, it does take alpha-equivalence into account:
+
+  @{ML_response_fake [display, gray]
+"let
+  val sub = (@{term \"(\<lambda>y::nat. y)\"}, @{term \"x::nat\"})
+  val trm = @{term \"(\<lambda>x::nat. x)\"}
+in
+  subst_free [sub] trm
+end"
+  "Free (\"x\", \"nat\")"}
+
+  Similarly, the function @{ML_ind subst_bounds in Term} replaces loose bound 
+  variables with terms. To see how this function works, let us implement a
+  function that strips off the outermost quantifiers in a term.
+*}
+
+ML{*fun strip_alls t =
+let 
+  fun aux (x, T, t) = strip_alls t |>> cons (Free (x, T))
+in
+  case t of
+    Const ("All", _) $ Abs body => aux body
+  | _ => ([], t)
+end*}
+
+text {*
+  The function returns a pair consisting of the stripped off variables and
+  the body of the universal quantification. For example
+
+  @{ML_response_fake [display, gray]
+  "strip_alls @{term \"\<forall>x y. x = (y::bool)\"}"
+"([Free (\"x\", \"bool\"), Free (\"y\", \"bool\")],
+  Const (\"op =\", \<dots>) $ Bound 1 $ Bound 0)"}
+
+  After calling @{ML strip_alls}, you obtain a term with loose bound variables. With
+  the function @{ML subst_bounds}, you can replace these loose @{ML_ind 
+  Bound in Term}s with the stripped off variables.
+
+  @{ML_response_fake [display, gray, linenos]
+  "let
+  val (vrs, trm) = strip_alls @{term \"\<forall>x y. x = (y::bool)\"}
+in 
+  subst_bounds (rev vrs, trm) 
+  |> string_of_term @{context}
+  |> tracing
+end"
+  "x = y"}
+
+  Note that in Line 4 we had to reverse the list of variables that @{ML
+  strip_alls} returned. The reason is that the head of the list that @{ML
+  subst_bounds} takes is the replacement for @{ML "Bound 0"}, the next
+  element the replacement for @{ML "Bound 1"} and so on. 
+
+  Notice also that this function might introduce name clashes, since we
+  substitute just a variable with the name recorded in an abstraction. This
+  name is by no means unique. If clashes need to be avoided, then we should
+  use the function @{ML_ind dest_abs in Term}, which returns the body where
+  the loose de Bruijn index is replaced by a unique free variable. For example
+
+  @{ML_response_fake [display,gray]
+  "let
+  val body = Bound 0 $ Free (\"x\", @{typ nat})
+in
+  Term.dest_abs (\"x\", @{typ \"nat \<Rightarrow> bool\"}, body)
+end"
+  "(\"xa\", Free (\"xa\", \"nat \<Rightarrow> bool\") $ Free (\"x\", \"nat\"))"}
+
+  There are also many convenient functions that construct specific HOL-terms
+  in the structure @{ML_struct HOLogic}. For
+  example @{ML_ind mk_eq in HOLogic} constructs an equality out of two terms.
+  The types needed in this equality are calculated from the type of the
+  arguments. For example
+
+@{ML_response_fake [gray,display]
+"let
+  val eq = HOLogic.mk_eq (@{term \"True\"}, @{term \"False\"})
+in
+  string_of_term @{context} eq
+  |> tracing
+end"
+  "True = False"}
+*}
+
+text {*
+  \begin{readmore}
+  There are many functions in @{ML_file "Pure/term.ML"}, @{ML_file
+  "Pure/logic.ML"} and @{ML_file "HOL/Tools/hologic.ML"} that make manual
+  constructions of terms and types easier.
+  \end{readmore}
+
+  When constructing terms manually, there are a few subtle issues with
+  constants. They usually crop up when pattern matching terms or types, or
+  when constructing them. While it is perfectly ok to write the function
+  @{text is_true} as follows
+*}
+
+ML{*fun is_true @{term True} = true
+  | is_true _ = false*}
+
+text {*
+  this does not work for picking out @{text "\<forall>"}-quantified terms. Because
+  the function 
+*}
+
+ML{*fun is_all (@{term All} $ _) = true
+  | is_all _ = false*}
+
+text {* 
+  will not correctly match the formula @{prop[source] "\<forall>x::nat. P x"}: 
+
+  @{ML_response [display,gray] "is_all @{term \"\<forall>x::nat. P x\"}" "false"}
+
+  The problem is that the @{text "@{term \<dots>}"}-antiquotation in the pattern 
+  fixes the type of the constant @{term "All"} to be @{typ "('a \<Rightarrow> bool) \<Rightarrow> bool"} for 
+  an arbitrary, but fixed type @{typ "'a"}. A properly working alternative 
+  for this function is
+*}
+
+ML{*fun is_all (Const ("All", _) $ _) = true
+  | is_all _ = false*}
+
+text {* 
+  because now
+
+@{ML_response [display,gray] "is_all @{term \"\<forall>x::nat. P x\"}" "true"}
+
+  matches correctly (the first wildcard in the pattern matches any type and the
+  second any term).
+
+  However, there is still a problem: consider the similar function that
+  attempts to pick out @{text "Nil"}-terms:
+*}
+
+ML{*fun is_nil (Const ("Nil", _)) = true
+  | is_nil _ = false *}
+
+text {* 
+  Unfortunately, this function also does \emph{not} work as expected, since
+
+  @{ML_response [display,gray] "is_nil @{term \"Nil\"}" "false"}
+
+  The problem is that on the ML-level the name of a constant is more
+  subtle than you might expect. The function @{ML is_all} worked correctly,
+  because @{term "All"} is such a fundamental constant, which can be referenced
+  by @{ML "Const (\"All\", some_type)" for some_type}. However, if you look at
+
+  @{ML_response [display,gray] "@{term \"Nil\"}" "Const (\"List.list.Nil\", \<dots>)"}
+
+  the name of the constant @{text "Nil"} depends on the theory in which the
+  term constructor is defined (@{text "List"}) and also in which datatype
+  (@{text "list"}). Even worse, some constants have a name involving
+  type-classes. Consider for example the constants for @{term "zero"} and
+  \mbox{@{text "(op *)"}}:
+
+  @{ML_response [display,gray] "(@{term \"0::nat\"}, @{term \"(op *)\"})" 
+ "(Const (\"HOL.zero_class.zero\", \<dots>), 
+ Const (\"HOL.times_class.times\", \<dots>))"}
+
+  While you could use the complete name, for example 
+  @{ML "Const (\"List.list.Nil\", some_type)" for some_type}, for referring to or
+  matching against @{text "Nil"}, this would make the code rather brittle. 
+  The reason is that the theory and the name of the datatype can easily change. 
+  To make the code more robust, it is better to use the antiquotation 
+  @{text "@{const_name \<dots>}"}. With this antiquotation you can harness the 
+  variable parts of the constant's name. Therefore a function for 
+  matching against constants that have a polymorphic type should 
+  be written as follows.
+*}
+
+ML{*fun is_nil_or_all (Const (@{const_name "Nil"}, _)) = true
+  | is_nil_or_all (Const (@{const_name "All"}, _) $ _) = true
+  | is_nil_or_all _ = false *}
+
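+text {*
+  To see what the antiquotation expands to, we can evaluate it directly; it
+  produces the full internal name of the constant as a string.
+
+  @{ML_response [display,gray]
+  "@{const_name \"Nil\"}" "\"List.list.Nil\""}
+*}
+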
+text {*
+  The antiquotation for properly referencing type constants is @{text "@{type_name \<dots>}"}.
+  For example
+
+  @{ML_response [display,gray]
+  "@{type_name \"list\"}" "\"List.list\""}
+
+  \footnote{\bf FIXME: Explain the following better; maybe put in a separate
+  section and link with the comment in the antiquotation section.}
+
+  Occasionally you have to calculate what the ``base'' name of a given
+  constant is. For this you can use the function @{ML_ind "Sign.extern_const"} or
+  @{ML_ind  Long_Name.base_name}. For example:
+
+  @{ML_response [display,gray] "Sign.extern_const @{theory} \"List.list.Nil\"" "\"Nil\""}
+
+  The difference between the two functions is that @{ML extern_const in Sign} returns
+  the shortest name that is still unique, whereas @{ML base_name in Long_Name} always
+  strips off all qualifiers.
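+
+  For example
+
+  @{ML_response [display,gray] 
+  "Long_Name.base_name \"List.list.Nil\"" "\"Nil\""}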
+
+  \begin{readmore}
+  Functions about naming are implemented in @{ML_file "Pure/General/name_space.ML"};
+  functions about signatures in @{ML_file "Pure/sign.ML"}.
+  \end{readmore}
+
+  Although types of terms can often be inferred, there are many
+  situations where you need to construct types manually, especially  
+  when defining constants. For example, a function that constructs a function 
+  type can be written as follows:
+
+*} 
+
+ML{*fun make_fun_type ty1 ty2 = Type ("fun", [ty1, ty2]) *}
+
+text {* This can be equally written with the combinator @{ML_ind "-->" in Term} as: *}
+
+ML{*fun make_fun_type ty1 ty2 = ty1 --> ty2 *}
+
+text {*
+  If you want to construct a function type with more than one argument
+  type, then you can use @{ML_ind "--->" in Term}.
+*}
+
+ML{*fun make_fun_types tys ty = tys ---> ty *}
+
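+text {*
+  For example, applied to a list of two argument types this produces the
+  iterated function type:
+
+  @{ML_response_fake [display,gray]
+  "make_fun_types [@{typ nat}, @{typ bool}] @{typ nat}"
+  "nat \<Rightarrow> bool \<Rightarrow> nat"}
+*}
+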
+text {*
+  A handy function for manipulating terms is @{ML_ind map_types in Term}: it takes a 
+  function and applies it to every type in a term. You can, for example,
+  change every @{typ nat} in a term into an @{typ int} using the function:
+*}
+
+ML{*fun nat_to_int ty =
+  (case ty of
+     @{typ nat} => @{typ int}
+   | Type (s, tys) => Type (s, map nat_to_int tys)
+   | _ => ty)*}
+
+text {*
+  Here is an example:
+
+@{ML_response_fake [display,gray] 
+"map_types nat_to_int @{term \"a = (1::nat)\"}" 
+"Const (\"op =\", \"int \<Rightarrow> int \<Rightarrow> bool\")
+           $ Free (\"a\", \"int\") $ Const (\"HOL.one_class.one\", \"int\")"}
+
+  If you want to obtain the list of free type-variables of a term, you
+  can use the function @{ML_ind  add_tfrees in Term} 
+  (similarly @{ML_ind  add_tvars in Term} for the schematic type-variables). 
+  One would expect that such functions
+  take a term as input and return a list of types. But their type is actually 
+
+  @{text[display] "Term.term -> (string * Term.sort) list -> (string * Term.sort) list"}
+
+  that is, they take, besides a term, also a list of type-variables as input. 
+  So in order to obtain the list of type-variables of a term you have to 
+  call them as follows
+
+  @{ML_response [gray,display]
+  "Term.add_tfrees @{term \"(a, b)\"} []"
+  "[(\"'b\", [\"HOL.type\"]), (\"'a\", [\"HOL.type\"])]"}
+
+  The reason for this definition is that @{ML add_tfrees in Term} can
+  be easily folded over a list of terms. Similarly for all functions
+  named @{text "add_*"} in @{ML_file "Pure/term.ML"}.
+
+  \begin{exercise}\label{fun:revsum}
+  Write a function @{text "rev_sum : term -> term"} that takes a
+  term of the form @{text "t\<^isub>1 + t\<^isub>2 + \<dots> + t\<^isub>n"} (whereby @{text "n"} might be one)
+  and returns the reversed sum @{text "t\<^isub>n + \<dots> + t\<^isub>2 + t\<^isub>1"}. Assume
+  the @{text "t\<^isub>i"} can be arbitrary expressions and also note that @{text "+"} 
+  associates to the left. Try your function on some examples. 
+  \end{exercise}
+
+  \begin{exercise}\label{fun:makesum}
+  Write a function that takes two terms representing natural numbers
+  in unary notation (like @{term "Suc (Suc (Suc 0))"}), and produces the
+  number representing their sum.
+  \end{exercise}
+
+  \begin{exercise}\label{ex:debruijn}
+  Implement a function, called deBruijn below, that takes a natural
+  number n$>$0 and constructs terms of the form:
+  
+  \begin{center}
+  \begin{tabular}{r@ {\hspace{2mm}}c@ {\hspace{2mm}}l}
+  {\it rhs n} & $\dn$ & {\large$\bigwedge$}{\it i=1\ldots n.  P\,i}\\
+  {\it lhs n} & $\dn$ & {\large$\bigwedge$}{\it i=1\ldots n. P\,i = P (i + 1 mod n)}
+                        $\longrightarrow$ {\it rhs n}\\
+  {\it deBruijn n} & $\dn$ & {\it lhs n} $\longrightarrow$ {\it rhs n}\\
+  \end{tabular}
+  \end{center}
+
+  This function returns for n=3 the term
+
+  \begin{center}
+  \begin{tabular}{l}
+  (P 1 = P 2 $\longrightarrow$ P 1 $\wedge$ P 2 $\wedge$ P 3) $\wedge$\\
+  (P 2 = P 3 $\longrightarrow$ P 1 $\wedge$ P 2 $\wedge$ P 3) $\wedge$\\ 
+  (P 3 = P 1 $\longrightarrow$ P 1 $\wedge$ P 2 $\wedge$ P 3) $\longrightarrow$ P 1 $\wedge$ P 2 $\wedge$ P 3
+  \end{tabular}
+  \end{center}
+
+  Make sure you use the functions defined in @{ML_file "HOL/Tools/hologic.ML"}
+  for constructing the terms for the logical connectives.\footnote{Thanks to Roy
+  Dyckhoff for suggesting this exercise and working out the details.} 
+  \end{exercise}
+*}
+
+section {* Unification and Matching *}
+
+text {* 
+  As seen earlier, Isabelle's terms and types may contain schematic term variables
+  (term-constructor @{ML Var}) and schematic type variables (type-constructor
+  @{ML TVar}). These variables stand for unknown entities, which can be made
+  more concrete by instantiations. Such instantiations might be a result of
+  unification or matching. While in the case of types unification and matching are
+  relatively straightforward, in the case of terms the algorithms are
+  substantially more complicated, because terms need higher-order versions of
+  the unification and matching algorithms. Below we shall use the
+  antiquotations @{text "@{typ_pat \<dots>}"} and @{text "@{term_pat \<dots>}"} from
+  Section~\ref{sec:antiquote} in order to construct examples involving
+  schematic variables.
+
+  Let us begin with describing the unification and matching function for
+  types.  Both return type environments (ML-type @{ML_type "Type.tyenv"})
+  which map schematic type variables to types and sorts. Below we use the
+  function @{ML_ind typ_unify in Sign} from the structure @{ML_struct Sign}
+  for unifying the types @{text "?'a * ?'b"} and @{text "?'b list *
+  nat"}. This will produce the mapping, or type environment, @{text "[?'a :=
+  ?'b list, ?'b := nat]"}.
+*}
+
+ML %linenosgray{*val (tyenv_unif, _) = let
+  val ty1 = @{typ_pat "?'a * ?'b"}
+  val ty2 = @{typ_pat "?'b list * nat"}
+in
+  Sign.typ_unify @{theory} (ty1, ty2) (Vartab.empty, 0) 
+end*}
+
+text {* 
+  The environment @{ML_ind "Vartab.empty"} in line 5 stands for the empty type
+  environment, which is needed for starting the unification without any
+  (pre)instantiations. The @{text 0} is an integer index that will be explained
+  below. In case of failure @{ML typ_unify in Sign} will throw the exception
+  @{text TUNIFY}.  We can print out the resulting type environment bound to 
+  @{ML tyenv_unif} with the built-in function @{ML_ind dest in Vartab} from the
+  structure @{ML_struct Vartab}.
+
+  @{ML_response_fake [display,gray]
+  "Vartab.dest tyenv_unif"
+  "[((\"'a\", 0), ([\"HOL.type\"], \"?'b List.list\")), 
+ ((\"'b\", 0), ([\"HOL.type\"], \"nat\"))]"} 
+*}
+
+text_raw {*
+  \begin{figure}[t]
+  \begin{minipage}{\textwidth}
+  \begin{isabelle}*}
+ML{*fun pretty_helper aux env =
+  env |> Vartab.dest  
+      |> map ((fn (s1, s2) => s1 ^ " := " ^ s2) o aux) 
+      |> commas 
+      |> enclose "[" "]" 
+      |> tracing
+
+fun pretty_tyenv ctxt tyenv =
+let
+  fun get_typs (v, (s, T)) = (TVar (v, s), T)
+  val print = pairself (Syntax.string_of_typ ctxt)
+in 
+  pretty_helper (print o get_typs) tyenv
+end*}
+text_raw {*
+  \end{isabelle}
+  \end{minipage}
+  \caption{A pretty printing function for type environments, which are 
+  produced by unification and matching.\label{fig:prettyenv}}
+  \end{figure}
+*}
+
+text {*
+  The first component in each pair stands for a schematic type variable and
+  the second for the associated sort and type. In this example the sort is
+  the default sort @{text "HOL.type"}. Instead of @{ML "Vartab.dest"}, we will
+  use in what follows our own pretty-printing function from
+  Figure~\ref{fig:prettyenv} for @{ML_type "Type.tyenv"}s. For the type
+  environment in the example this function prints out the more legible form:
+
+  @{ML_response_fake [display, gray]
+  "pretty_tyenv @{context} tyenv_unif"
+  "[?'a := ?'b list, ?'b := nat]"}
+
+  The way the unification function @{ML typ_unify in Sign} is implemented 
+  using an initial type environment and initial index makes it easy to
+  unify more than one pair of types. For example 
+*}
+
+ML %linenosgray{*val (tyenvs, _) = let
+  val tys1 = (@{typ_pat "?'a"}, @{typ_pat "?'b list"})
+  val tys2 = (@{typ_pat "?'b"}, @{typ_pat "nat"})
+in
+  fold (Sign.typ_unify @{theory}) [tys1, tys2] (Vartab.empty, 0) 
+end*}
+
+text {*
+  The index @{text 0} in Line 5 is the maximal index of the schematic type
+  variables occurring in @{text tys1} and @{text tys2}. This index will be
+  increased whenever a new schematic type variable is introduced during
+  unification.  This is for example the case when two schematic type variables
+  have different, incomparable sorts. Then a new schematic type variable is
+  introduced with the combined sorts. To show this let us assume two sorts,
+  say @{text "s1"} and @{text "s2"}, which we attach to the schematic type
+  variables @{text "?'a"} and @{text "?'b"}. Since we do not make any
+  assumption about the sorts, they are incomparable.
+*}
+
+ML{*val (tyenv, index) = let
+  val ty1 = @{typ_pat "?'a::s1"}
+  val ty2 = @{typ_pat "?'b::s2"}
+in
+  Sign.typ_unify @{theory} (ty1, ty2) (Vartab.empty, 0) 
+end*}
+
+text {*
+  To print out the result type environment we switch on the printing 
+  of sort information by setting @{ML_ind show_sorts in Syntax} to 
+  true. This allows us to inspect the typing environment.
+
+  @{ML_response_fake [display,gray]
+  "pretty_tyenv @{context} tyenv"
+  "[?'a::s1 := ?'a1::{s1, s2}, ?'b::s2 := ?'a1::{s1, s2}]"}
+
+  As can be seen, the type variables @{text "?'a"} and @{text "?'b"} are instantiated
+  with a new type variable @{text "?'a1"} with sort @{text "{s1, s2}"}. Since a new
+  type variable has been introduced the @{ML index}, originally being @{text 0}, 
+  has been increased to @{text 1}.
+
+  @{ML_response [display,gray]
+  "index"
+  "1"}
+
+  Let us now return to the unification problem @{text "?'a * ?'b"} and 
+  @{text "?'b list * nat"} from the beginning of this section, and the 
+  calculated type environment @{ML tyenv_unif}:
+
+  @{ML_response_fake [display, gray]
+  "pretty_tyenv @{context} tyenv_unif"
+  "[?'a := ?'b list, ?'b := nat]"}
+
+  Observe that the type environment which the function @{ML typ_unify in
+  Sign} returns is \emph{not} an instantiation in fully solved form: while @{text
+  "?'b"} is instantiated to @{typ nat}, this is not propagated to the
+  instantiation for @{text "?'a"}.  In unification theory, this is often
+  called an instantiation in \emph{triangular form}. These triangular 
+  instantiations, or triangular type environments, are used for 
+  performance reasons. To apply such a type environment to a type, say @{text "?'a *
+  ?'b"}, you should use the function @{ML_ind norm_type in Envir}:
+
+  @{ML_response_fake [display, gray]
+  "Envir.norm_type tyenv_unif @{typ_pat \"?'a * ?'b\"}"
+  "nat list * nat"}
+
+  Matching of types can be done with the function @{ML_ind typ_match in Sign}
+  also from the structure @{ML_struct Sign}. This function returns a @{ML_type
+  Type.tyenv} as well, but might raise the exception @{text TYPE_MATCH} in case
+  of failure. For example
+*}
+
+ML{*val tyenv_match = let
+  val pat = @{typ_pat "?'a * ?'b"}
+  and ty  = @{typ_pat "bool list * nat"}
+in
+  Sign.typ_match @{theory} (pat, ty) Vartab.empty 
+end*}
+
+text {*
+  Printing out the calculated matcher gives
+
+  @{ML_response_fake [display,gray]
+  "pretty_tyenv @{context} tyenv_match"
+  "[?'a := bool list, ?'b := nat]"}
+  
+  Unlike unification, which uses the function @{ML norm_type in Envir}, 
+  applying the matcher to a type needs to be done with the function
+  @{ML_ind subst_type in Envir}. For example
+
+  @{ML_response_fake [display, gray]
+  "Envir.subst_type tyenv_match @{typ_pat \"?'a * ?'b\"}"
+  "bool list * nat"}
+
+  Be careful to observe the difference: always use 
+  @{ML subst_type in Envir} for matchers and @{ML norm_type in Envir} 
+  for unifiers. To show the difference, let us calculate the 
+  following matcher:
+*}
+
+ML{*val tyenv_match' = let
+  val pat = @{typ_pat "?'a * ?'b"}
+  and ty  = @{typ_pat "?'b list * nat"}
+in
+  Sign.typ_match @{theory} (pat, ty) Vartab.empty 
+end*}
+
+text {*
+  Now @{ML tyenv_unif} is equal to @{ML tyenv_match'}. If we apply 
+  @{ML norm_type in Envir} to the type @{text "?'a * ?'b"} we obtain
+
+  @{ML_response_fake [display, gray]
+  "Envir.norm_type tyenv_match' @{typ_pat \"?'a * ?'b\"}"
+  "nat list * nat"}
+
+  which does not solve the matching problem, and if
+  we apply @{ML subst_type in Envir} to the same type we obtain
+
+  @{ML_response_fake [display, gray]
+  "Envir.subst_type tyenv_unif @{typ_pat \"?'a * ?'b\"}"
+  "?'b list * nat"}
+  
+  which does not solve the unification problem.
+
+  \begin{readmore}
+  Unification and matching for types is implemented
+  in @{ML_file "Pure/type.ML"}. The ``interface'' functions for them
+  are in @{ML_file "Pure/sign.ML"}. Matching and unification produce type environments
+  as results. These are implemented in @{ML_file "Pure/envir.ML"}.
+  This file also includes the substitution and normalisation functions,
+  which apply a type environment to a type. Type environments are lookup 
+  tables which are implemented in @{ML_file "Pure/term_ord.ML"}.
+  \end{readmore}
+*}
+
+text {*
+  Unification and matching of terms is substantially more complicated than in the
+  type-case. The reason is that terms have abstractions and, in this context,  
+  unification or matching modulo plain equality is often not meaningful. 
+  Nevertheless, Isabelle implements the function @{ML_ind
+  first_order_match in Pattern} for terms.  This matching function returns a
+  type environment and a term environment.  To pretty print the latter we use
+  the function @{text "pretty_env"}:
+*}
+
+ML{*fun pretty_env ctxt env =
+let
+  fun get_trms (v, (T, t)) = (Var (v, T), t) 
+  val print = pairself (string_of_term ctxt)
+in
+  pretty_helper (print o get_trms) env 
+end*}
+
+text {*
+  As can be seen from the @{text "get_trms"}-function, a term environment associates 
+  a schematic term variable with a type and a term.  An example of a first-order 
+  matching problem is the term @{term "P (\<lambda>a b. Q b a)"} and the pattern 
+  @{text "?X ?Y"}.
+*}
+
+ML{*val (_, fo_env) = let
+  val fo_pat = @{term_pat "(?X::(nat\<Rightarrow>nat\<Rightarrow>nat)\<Rightarrow>bool) ?Y"}
+  val trm_a = @{term "P::(nat\<Rightarrow>nat\<Rightarrow>nat)\<Rightarrow>bool"} 
+  val trm_b = @{term "\<lambda>a b. (Q::nat\<Rightarrow>nat\<Rightarrow>nat) b a"}
+  val init = (Vartab.empty, Vartab.empty) 
+in
+  Pattern.first_order_match @{theory} (fo_pat, trm_a $ trm_b) init
+end *}
+
+text {*
+  In this example we annotated the types explicitly because then 
+  the type environment is empty and can be ignored. The 
+  resulting term environment is
+
+  @{ML_response_fake [display, gray]
+  "pretty_env @{context} fo_env"
+  "[?X := P, ?Y := \<lambda>a b. Q b a]"}
+
+  The matcher can be applied to a term using the function @{ML_ind subst_term
+  in Envir} (remember the same convention for types applies to terms: @{ML
+  subst_term in Envir} is for matchers and @{ML norm_term in Envir} for
+  unifiers). The function @{ML subst_term in Envir} expects a type environment,
+  which is set to empty in the example below, and a term environment.
+
+  @{ML_response_fake [display, gray]
+  "let
+  val trm = @{term_pat \"(?X::(nat\<Rightarrow>nat\<Rightarrow>nat)\<Rightarrow>bool) ?Y\"}
+in
+  Envir.subst_term (Vartab.empty, fo_env) trm
+  |> string_of_term @{context}
+  |> tracing
+end"
+  "P (\<lambda>a b. Q b a)"}
+
+  First-order matching is useful for matching against applications and
+  variables. It can also deal with abstractions and a limited form of
+  alpha-equivalence, but this kind of matching should be used with care, since
+  it is not clear whether the result is meaningful. A meaningful example is
+  matching @{text "\<lambda>x. P x"} against the pattern @{text "\<lambda>y. ?X y"}. In this
+  case, first-order matching produces @{text "[?X := P]"}.
+
+  @{ML_response_fake [display, gray]
+  "let
+  val fo_pat = @{term_pat \"\<lambda>y. (?X::nat\<Rightarrow>bool) y\"}
+  val trm = @{term \"\<lambda>x. (P::nat\<Rightarrow>bool) x\"} 
+  val init = (Vartab.empty, Vartab.empty) 
+in
+  Pattern.first_order_match @{theory} (fo_pat, trm) init
+  |> snd 
+  |> pretty_env @{context} 
+end"
+  "[?X := P]"}
+*}
+
+text {*
+  Unification of abstractions is more thoroughly studied in the context
+  of higher-order pattern unification and higher-order pattern matching.  A
+  \emph{\index*{pattern}} is an abstraction term whose ``head symbol'' (that is the
+  first symbol under an abstraction) is either a constant, a schematic or a free
+  variable. If it is a schematic variable, then it may only be applied to
+  distinct bound variables. This excludes terms where a schematic variable is an
+  argument of another one and where a schematic variable is applied
+  twice to the same bound variable. The function @{ML_ind pattern in Pattern}
+  in the structure @{ML_struct Pattern} tests whether a term satisfies these
+  restrictions.
+
+  @{ML_response [display, gray]
+  "let
+  val trm_list = 
+        [@{term_pat \"?X\"},              @{term_pat \"a\"},
+         @{term_pat \"\<lambda>a b. ?X a b\"},    @{term_pat \"\<lambda>a b. (op +) a b\"},
+         @{term_pat \"\<lambda>a. (op +) a ?Y\"}, @{term_pat \"?X ?Y\"},
+         @{term_pat \"\<lambda>a b. ?X a b ?Y\"}, @{term_pat \"\<lambda>a. ?X a a\"}] 
+in
+  map Pattern.pattern trm_list
+end"
+  "[true, true, true, true, true, false, false, false]"}
+
+  The point of the restriction to patterns is that unification and matching 
+  are decidable and produce most general unifiers, respectively matchers. 
+  In this way, matching and unification can be implemented as functions that 
+  produce a type and term environment (unification actually returns a 
+  record of type @{ML_type Envir.env} containing a maxind, a type environment 
+  and a term environment). The corresponding functions are @{ML_ind match in Pattern},
+  and @{ML_ind unify in Pattern} both implemented in the structure
+  @{ML_struct Pattern}. An example for higher-order pattern unification is
+
+  @{ML_response_fake [display, gray]
+  "let
+  val trm1 = @{term_pat \"\<lambda>x y. g (?X y x) (f (?Y x))\"}
+  val trm2 = @{term_pat \"\<lambda>u v. g u (f u)\"}
+  val init = Envir.empty 0
+  val env = Pattern.unify @{theory} (trm1, trm2) init
+in
+  pretty_env @{context} (Envir.term_env env)
+end"
+  "[?X := \<lambda>y x. x, ?Y := \<lambda>x. x]"}
+
+  The function @{ML_ind "Envir.empty"} generates a record with a specified
+  max-index for the schematic variables (in the example the index is @{text
+  0}) and empty type and term environments. The function @{ML_ind
+  "Envir.term_env"} pulls out the term environment from the result record. The
+  function for the type environment is @{ML_ind "Envir.type_env"}. An assumption of
+  the unification function is that the terms to be unified already have the same type. In
+  case of failure, the exceptions that are raised are either @{text Pattern},
+  @{text MATCH} or @{text Unif}.
+
+  As mentioned before, unrestricted higher-order unification, respectively
+  higher-order matching, is in general undecidable and might also not possess a
+  single most general solution. Therefore Isabelle implements the unification
+  function @{ML_ind unifiers in Unify} so that it returns a potentially infinite
+  lazy list of unifiers.  An example is as follows
+*}
+
+ML{*val uni_seq =
+let 
+  val trm1 = @{term_pat "?X ?Y"}
+  val trm2 = @{term "f a"}
+  val init = Envir.empty 0
+in
+  Unify.unifiers (@{theory}, init, [(trm1, trm2)])
+end *}
+
+text {*
+  The unifiers can be extracted from the lazy sequence using the 
+  function @{ML_ind "Seq.pull"}. In the example we obtain three 
+  unifiers @{text "un1\<dots>un3"}.
+*}
+
+ML{*val SOME ((un1, _), next1) = Seq.pull uni_seq;
+val SOME ((un2, _), next2) = Seq.pull next1;
+val SOME ((un3, _), next3) = Seq.pull next2;
+val NONE = Seq.pull next3 *}
+
+text {*
+  \footnote{\bf FIXME: what is the list of term pairs in the unifier: flex-flex pairs?}
+
+  We can print them out as follows.
+
+  @{ML_response_fake [display, gray]
+  "pretty_env @{context} (Envir.term_env un1);
+pretty_env @{context} (Envir.term_env un2);
+pretty_env @{context} (Envir.term_env un3)"
+  "[?X := \<lambda>a. a, ?Y := f a]
+[?X := f, ?Y := a]
+[?X := \<lambda>b. f a]"}
+
+  In case of failure the function @{ML_ind unifiers in Unify} does not raise
+  an exception, rather returns the empty sequence. For example
+
+  @{ML_response [display, gray]
+"let 
+  val trm1 = @{term \"a\"}
+  val trm2 = @{term \"b\"}
+  val init = Envir.empty 0
+in
+  Unify.unifiers (@{theory}, init, [(trm1, trm2)])
+  |> Seq.pull
+end"
+"NONE"}
+
+  In order to find a
+  reasonable solution for a unification problem, Isabelle first tries to
+  solve the problem by higher-order pattern unification. 
+
+  For higher-order
+  matching the function is called @{ML_ind matchers in Unify}, implemented
+  in the structure @{ML_struct Unify}. This
+  function also returns sequences with possibly more than one matcher.
+  Like @{ML unifiers in Unify}, this function does not raise an exception
+  in case of failure, but returns an empty sequence. It also first tries 
+  out whether the matching problem can be solved by first-order matching.
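+
+  For example, a small sketch that checks whether the pattern @{text "?X ?Y"}
+  matches the term @{text "f a"}, by testing whether the sequence of 
+  matchers is non-empty:
+
+  @{ML_response_fake [display,gray]
+"let 
+  val pat = @{term_pat \"?X ?Y\"}
+  val trm = @{term \"f a\"}
+in
+  Unify.matchers @{theory} [(pat, trm)]
+  |> Seq.pull
+  |> is_some
+end"
+"true"}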
+
+  \begin{readmore}
+  Unification and matching of higher-order patterns is implemented in 
+  @{ML_file "Pure/pattern.ML"}. This file also contains a first-order matcher
+  for terms. Full higher-order unification is implemented
+  in @{ML_file "Pure/unify.ML"}. It uses lazy sequences which are implemented
+  in @{ML_file "Pure/General/seq.ML"}.
+  \end{readmore}
+*}
+
+section {* Sorts (TBD) *}
+
+
+section {* Type-Checking\label{sec:typechecking} *}
+
+text {* 
+  Remember Isabelle follows Church-style typing for terms, i.e., a term contains 
+  enough typing information (constants, free variables and abstractions all have typing 
+  information) so that it is always clear what the type of a term is. 
+  Given a well-typed term, the function @{ML_ind type_of in Term} returns the 
+  type of a term. Consider for example:
+
+  @{ML_response [display,gray] 
+  "type_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::nat\"})" "bool"}
+
+  To calculate the type, this function traverses the whole term and will
+  detect any typing inconsistency. For example changing the type of the variable 
+  @{term "x"} from @{typ "nat"} to @{typ "int"} will result in the error message: 
+
+  @{ML_response_fake [display,gray] 
+  "type_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::int\"})" 
+  "*** Exception- TYPE (\"type_of: type mismatch in application\" \<dots>"}
+
+  Since the complete traversal might sometimes be too costly and
+  not necessary, there is the function @{ML_ind fastype_of in Term}, which 
+  also returns the type of a term.
+
+  @{ML_response [display,gray] 
+  "fastype_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::nat\"})" "bool"}
+
+  However, efficiency is gained at the expense of skipping some tests. You 
+  can see this in the following example
+
+   @{ML_response [display,gray] 
+  "fastype_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::int\"})" "bool"}
+
+  where no error is detected.
+
+  Sometimes it is a bit inconvenient to construct a term with 
+  complete typing annotations, especially in cases where the typing 
+  information is redundant. A short-cut is to use the ``place-holder'' 
+  type @{ML_ind dummyT in Term} and then let type-inference figure out the 
+  complete type. An example is as follows:
+
+  @{ML_response_fake [display,gray] 
+"let
+  val c = Const (@{const_name \"plus\"}, dummyT) 
+  val o = @{term \"1::nat\"} 
+  val v = Free (\"x\", dummyT)
+in   
+  Syntax.check_term @{context} (c $ o $ v)
+end"
+"Const (\"HOL.plus_class.plus\", \"nat \<Rightarrow> nat \<Rightarrow> nat\") $
+  Const (\"HOL.one_class.one\", \"nat\") $ Free (\"x\", \"nat\")"}
+
+  Instead of explicitly giving the type for the constant @{text "plus"} and the free 
+  variable @{text "x"}, type-inference fills in the missing information.
+
+  \begin{readmore}
+  See @{ML_file "Pure/Syntax/syntax.ML"} where more functions about reading,
+  checking and pretty-printing of terms are defined. Functions related to
+  type-inference are implemented in @{ML_file "Pure/type.ML"} and 
+  @{ML_file "Pure/type_infer.ML"}. 
+  \end{readmore}
+
+  \footnote{\bf FIXME: say something about sorts.}
+  \footnote{\bf FIXME: give a ``readmore''.}
+
+  \begin{exercise}
+  Check that the function defined in Exercise~\ref{fun:revsum} returns a
+  result that type-checks. See what happens to the  solutions of this 
+  exercise given in Appendix \ref{ch:solutions} when they receive an 
+  ill-typed term as input.
+  \end{exercise}
+*}
+
+section {* Certified Terms and Certified Types *}
+
+text {*
+  You can freely construct and manipulate @{ML_type "term"}s and @{ML_type
+  typ}es, since they are just arbitrary unchecked trees. However, you
+  eventually want to see if a term is well-formed, or type-checks, relative to
+  a theory.  Type-checking is done via the function @{ML_ind cterm_of in Thm}, which
+  converts a @{ML_type term} into a @{ML_type cterm}, a \emph{certified}
+  term. Unlike @{ML_type term}s, which are just trees, @{ML_type "cterm"}s are
+  abstract objects that are guaranteed to be type-correct, and they can only
+  be constructed via ``official interfaces''.
+
+  Certification is always relative to a theory context. For example you can 
+  write:
+
+  @{ML_response_fake [display,gray] 
+  "cterm_of @{theory} @{term \"(a::nat) + b = c\"}" 
+  "a + b = c"}
+
+  This can also be written with an antiquotation:
+
+  @{ML_response_fake [display,gray] 
+  "@{cterm \"(a::nat) + b = c\"}" 
+  "a + b = c"}
+
+  Attempting to obtain the certified term for
+
+  @{ML_response_fake_both [display,gray] 
+  "@{cterm \"1 + True\"}" 
+  "Type unification failed \<dots>"}
+
+  yields an error (since the term is not typable). A slightly more elaborate
+  example that type-checks is:
+
+@{ML_response_fake [display,gray] 
+"let
+  val natT = @{typ \"nat\"}
+  val zero = @{term \"0::nat\"}
+  val plus = Const (@{const_name plus}, [natT, natT] ---> natT)
+in
+  cterm_of @{theory} (plus $ zero $ zero)
+end" 
+  "0 + 0"}
+
+  In Isabelle not just terms need to be certified, but also types. For example, 
+  you obtain the certified type for the Isabelle type @{typ "nat \<Rightarrow> bool"} on 
+  the ML-level as follows:
+
+  @{ML_response_fake [display,gray]
+  "ctyp_of @{theory} (@{typ nat} --> @{typ bool})"
+  "nat \<Rightarrow> bool"}
+
+  or with the antiquotation:
+
+  @{ML_response_fake [display,gray]
+  "@{ctyp \"nat \<Rightarrow> bool\"}"
+  "nat \<Rightarrow> bool"}
+
+  Since certified terms are, unlike terms, abstract objects, we cannot
+  pattern-match against them. However, we can construct them. For example
+  the function @{ML_ind capply in Thm} produces a certified application.
+
+  @{ML_response_fake [display,gray]
+  "Thm.capply @{cterm \"P::nat \<Rightarrow> bool\"} @{cterm \"3::nat\"}"
+  "P 3"}
+
+  Similarly the function @{ML_ind list_comb in Drule} from the structure @{ML_struct Drule}
+  applies a list of @{ML_type cterm}s.
+
+  @{ML_response_fake [display,gray]
+  "let
+  val chead = @{cterm \"P::unit \<Rightarrow> nat \<Rightarrow> bool\"}
+  val cargs = [@{cterm \"()\"}, @{cterm \"3::nat\"}]
+in
+  Drule.list_comb (chead, cargs)
+end"
+  "P () 3"}
+
+  \begin{readmore}
+  For functions related to @{ML_type cterm}s and @{ML_type ctyp}s see 
+  the files @{ML_file "Pure/thm.ML"}, @{ML_file "Pure/more_thm.ML"} and
+  @{ML_file "Pure/drule.ML"}.
+  \end{readmore}
+*}
+
+section {* Theorems *}
+
+text {*
+  Just like @{ML_type cterm}s, theorems are abstract objects of type @{ML_type thm} 
+  that can only be built by going through interfaces. As a consequence, every proof 
+  in Isabelle is correct by construction. This follows the tradition of the LCF-approach.
+
+  To see theorems in ``action'', let us give a proof on the ML-level for the following 
+  statement:
+*}
+
+  lemma 
+    assumes assm\<^isub>1: "\<And>(x::nat). P x \<Longrightarrow> Q x" 
+    and     assm\<^isub>2: "P t"
+    shows "Q t"(*<*)oops(*>*) 
+
+text {*
+  The corresponding ML-code is as follows:
+*}
+
+ML{*val my_thm = 
+let
+  val assm1 = @{cprop "\<And>(x::nat). P x \<Longrightarrow> Q x"}
+  val assm2 = @{cprop "(P::nat \<Rightarrow> bool) t"}
+
+  val Pt_implies_Qt = 
+        assume assm1
+        |> forall_elim @{cterm "t::nat"}
+  
+  val Qt = implies_elim Pt_implies_Qt (assume assm2)
+in
+  Qt 
+  |> implies_intr assm2
+  |> implies_intr assm1
+end*}
+
+text {* 
+  If we print out the value of @{ML my_thm} then we see only the 
+  final statement of the theorem.
+
+  @{ML_response_fake [display, gray]
+  "tracing (string_of_thm @{context} my_thm)"
+  "\<lbrakk>\<And>x. P x \<Longrightarrow> Q x; P t\<rbrakk> \<Longrightarrow> Q t"}
+
+  However, internally the code-snippet constructs the following 
+  proof.
+
+  \[
+  \infer[(@{text "\<Longrightarrow>"}$-$intro)]{\vdash @{prop "(\<And>x. P x \<Longrightarrow> Q x) \<Longrightarrow> P t \<Longrightarrow> Q t"}}
+    {\infer[(@{text "\<Longrightarrow>"}$-$intro)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"} \vdash @{prop "P t \<Longrightarrow> Q t"}}
+       {\infer[(@{text "\<Longrightarrow>"}$-$elim)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"}, @{prop "P t"} \vdash @{prop "Q t"}}
+          {\infer[(@{text "\<And>"}$-$elim)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"} \vdash @{prop "P t \<Longrightarrow> Q t"}}
+                 {\infer[(assume)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"} \vdash @{prop "\<And>x. P x \<Longrightarrow> Q x"}}{}}
+                 &
+           \infer[(assume)]{@{prop "P t"} \vdash @{prop "P t"}}{}
+          }
+       }
+    }
+  \]
+
+  While we obtained a theorem as result, this theorem is not
+  yet stored in Isabelle's theorem database. Consequently, it cannot be 
+  referenced on the user level. One way to store it in the theorem database is
+  by using the function @{ML_ind note in Local_Theory}.\footnote{\bf FIXME: make sure a pointer
+  to the section about local-setup is given earlier.}
+*}
+
+local_setup %gray {*
+  Local_Theory.note ((@{binding "my_thm"}, []), [my_thm]) #> snd *}
+
+text {*
+  The function @{ML note in Local_Theory} takes as first argument a pair
+  consisting of the name under which we store the theorem or theorems, and a
+  list of theorem attributes, which we will explain in detail in
+  Section~\ref{sec:attributes}. The second argument is the list of theorems
+  we want to store under this name. We can store more than one theorem 
+  under a single name. Below we just use one such attribute for
+  adding the theorem to the simpset:
+*}
+
+local_setup %gray {*
+  Local_Theory.note ((@{binding "my_thm_simp"}, 
+       [Attrib.internal (K Simplifier.simp_add)]), [my_thm]) #> snd *}
+
+text {*
+  Note that we have to use another name under which the theorem is stored,
+  since Isabelle does not allow us to call  @{ML_ind note in Local_Theory} twice
+  with the same name. The attribute needs to be wrapped inside the function @{ML_ind
+  internal in Attrib} from the structure @{ML_struct Attrib}. If we use the function 
+  @{ML get_thm_names_from_ss} from
+  the previous chapter, we can check whether the theorem has actually been
+  added.
+
+  @{ML_response [display,gray]
+  "let
+  fun pred s = match_string \"my_thm_simp\" s
+in
+  exists pred (get_thm_names_from_ss @{simpset})
+end"
+  "true"}
+
+  The main point of storing the theorems @{thm [source] my_thm} and @{thm
+  [source] my_thm_simp} is that they can now also be referenced with the
+  \isacommand{thm}-command on the user-level of Isabelle
+
+  \begin{isabelle}
+  \isacommand{thm}~@{text "my_thm"}\isanewline
+   @{text ">"}~@{prop "\<lbrakk>\<And>x. P x \<Longrightarrow> Q x; P t\<rbrakk> \<Longrightarrow> Q t"}
+  \end{isabelle}
+
+  or with the @{text "@{thm \<dots>}"}-antiquotation on the ML-level. Otherwise the 
+  user has no access to these theorems. 
+
+  Recall that Isabelle does not let you call @{ML note in Local_Theory} twice
+  with the same theorem name. In effect, once a theorem is stored under a name, 
+  this association is fixed. While this is a ``safety-net'' to make sure a
+  theorem name refers to a particular theorem or collection of theorems, it is 
+  also a bit too restrictive in cases where a theorem name should refer to a 
+  dynamically expanding list of theorems (like a simpset). Therefore Isabelle 
+  also implements a mechanism where a theorem name can refer to a custom theorem 
+  list. For this you can use the function @{ML_ind add_thms_dynamic in PureThy}. 
+  To see how it works let us assume we defined our own theorem list @{text MyThmList}.
+*}
+
+ML{*structure MyThmList = Generic_Data
+  (type T = thm list
+   val empty = []
+   val extend = I
+   val merge = merge Thm.eq_thm_prop)
+
+fun update thm = Context.theory_map (MyThmList.map (Thm.add_thm thm))*}
+
+text {*
+  \footnote{\bf explain @{ML_ind add_thm in Thm} and @{ML_ind eq_thm_prop in Thm}.}
+
+  The function @{ML update} allows us to update the theorem list, for example
+  by adding the theorem @{thm [source] TrueI}.
+*}
+
+setup %gray {* update @{thm TrueI} *}
+ 
+text {*
+  We can now install the theorem list so that it is visible to the user and 
+  can be referred to by a theorem name. For this we need to call 
+  @{ML_ind add_thms_dynamic in PureThy}
+*}
+
+setup %gray {* 
+  PureThy.add_thms_dynamic (@{binding "mythmlist"}, MyThmList.get) 
+*}
+
+text {*
+  with a name and a function that accesses the theorem list. Now if the
+  user issues the command
+
+  \begin{isabelle}
+  \isacommand{thm}~@{text "mythmlist"}\\
+  @{text "> True"}
+  \end{isabelle}
+
+  the current content of the theorem list is displayed. If more theorems are stored in 
+  the list, say
+*}
+
+setup %gray {* update @{thm FalseE} *}
+
+text {*
+  then the same command produces
+  
+  \begin{isabelle}
+  \isacommand{thm}~@{text "mythmlist"}\\
+  @{text "> False \<Longrightarrow> ?P"}\\
+  @{text "> True"}
+  \end{isabelle}
+
+  There is a multitude of functions in the structures @{ML_struct Thm} and @{ML_struct Drule} 
+  for managing or manipulating theorems. For example 
+  we can test theorems for alpha equality. Suppose you proved the following three 
+  theorems.
+*}
+
+lemma
+  shows thm1: "\<forall>x. P x" 
+  and   thm2: "\<forall>y. P y" 
+  and   thm3: "\<forall>y. Q y" sorry
+
+text {*
+  Testing them for alpha equality using the function @{ML_ind eq_thm in Thm} produces:
+
+  @{ML_response [display,gray]
+"(Thm.eq_thm (@{thm thm1}, @{thm thm2}),
+ Thm.eq_thm (@{thm thm2}, @{thm thm3}))"
+"(true, false)"}
+
+  Many functions destruct theorems into @{ML_type cterm}s. For example
+  the functions @{ML_ind lhs_of in Thm} and @{ML_ind rhs_of in Thm} return 
+  the left and right-hand side, respectively, of a meta-equality.
+
+  @{ML_response_fake [display,gray]
+  "let
+  val eq = @{thm True_def}
+in
+  (Thm.lhs_of eq, Thm.rhs_of eq) 
+  |> pairself (string_of_cterm @{context})
+end"
+  "(True, (\<lambda>x. x) = (\<lambda>x. x))"}
+
+  Other functions produce terms that can be pattern-matched. 
+  Suppose the following two theorems.
+*}
+
+lemma  
+  shows foo_test1: "A \<Longrightarrow> B \<Longrightarrow> C" 
+  and   foo_test2: "A \<longrightarrow> B \<longrightarrow> C" sorry
+
+text {*
+  We can destruct them into premises and conclusions as follows. 
+
+ @{ML_response_fake [display,gray]
+"let
+  val ctxt = @{context}
+  fun prems_and_concl thm =
+     [\"Premises: \"   ^ (string_of_terms ctxt (Thm.prems_of thm))] @ 
+     [\"Conclusion: \" ^ (string_of_term ctxt (Thm.concl_of thm))]
+     |> cat_lines
+     |> tracing
+in
+  prems_and_concl @{thm foo_test1};
+  prems_and_concl @{thm foo_test2}
+end"
+"Premises: ?A, ?B
+Conclusion: ?C
+Premises: 
+Conclusion: ?A \<longrightarrow> ?B \<longrightarrow> ?C"}
+
+  Note that in the second case there are no premises; the implications are
+  part of the conclusion.
+
+  \begin{readmore}
+  The basic functions for theorems are defined in
+  @{ML_file "Pure/thm.ML"}, @{ML_file "Pure/more_thm.ML"} and @{ML_file "Pure/drule.ML"}. 
+  \end{readmore}
+
+  Although we will explain the simplifier in more detail as a tactic in 
+  Section~\ref{sec:simplifier}, the simplifier 
+  can be used to work directly over theorems, for example to unfold definitions. To show
+  this, we build the theorem @{term "True \<equiv> True"} (Line 1) and then 
+  unfold the constant @{term "True"} according to its definition (Line 2).
+
+  @{ML_response_fake [display,gray,linenos]
+  "Thm.reflexive @{cterm \"True\"}
+  |> Simplifier.rewrite_rule [@{thm True_def}]
+  |> string_of_thm @{context}
+  |> tracing"
+  "(\<lambda>x. x) = (\<lambda>x. x) \<equiv> (\<lambda>x. x) = (\<lambda>x. x)"}
+
+  Often it is necessary to transform theorems to and from the object 
+  logic, that is replacing all @{text "\<longrightarrow>"} and @{text "\<forall>"} by @{text "\<Longrightarrow>"} 
+  and @{text "\<And>"}, or the other way around.  One reason for such a transformation 
+  is that definitions can only be 
+  stated using object logic connectives, while theorems using the connectives 
+  from the meta logic are more convenient for reasoning. Therefore there are
+  some built-in functions which help with these transformations. The function 
+  @{ML_ind rulify in ObjectLogic} 
+  replaces all object connectives by equivalents in the meta logic. For example
+
+  @{ML_response_fake [display, gray]
+  "ObjectLogic.rulify @{thm foo_test2}"
+  "\<lbrakk>?A; ?B\<rbrakk> \<Longrightarrow> ?C"}
+
+  The transformation in the other direction can be achieved with the function
+  @{ML_ind atomize in ObjectLogic} and the following code.
+
+  @{ML_response_fake [display,gray]
+  "let
+  val thm = @{thm foo_test1}
+  val meta_eq = ObjectLogic.atomize (cprop_of thm)
+in
+  MetaSimplifier.rewrite_rule [meta_eq] thm
+end"
+  "?A \<longrightarrow> ?B \<longrightarrow> ?C"}
+
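+  The intermediate value @{text meta_eq} in this code is the following
+  meta-equation (a quick check; the parentheses might be printed slightly
+  differently):
+
+  @{ML_response_fake [display,gray]
+  "ObjectLogic.atomize (cprop_of @{thm foo_test1})"
+  "(?A \<Longrightarrow> ?B \<Longrightarrow> ?C) \<equiv> ?A \<longrightarrow> ?B \<longrightarrow> ?C"}
+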
+  As can be seen, the function @{ML atomize in ObjectLogic} produces 
+  a meta-equation between the given theorem and the theorem transformed
+  into the object logic. Rewriting with this meta-equation yields the theorem 
+  with object logic connectives. However, in order to completely transform a theorem
+  involving meta variables, such as @{thm [source] list.induct}, which 
+  is of the form 
+
+  @{thm [display] list.induct}
+
+  we have to first abstract over the meta variables @{text "?P"} and 
+  @{text "?list"}. For this we can use the function 
+  @{ML_ind forall_intr_vars in Drule}. This allows us to implement the
+  following function for atomizing a theorem.
+*}
+
+ML{*fun atomize_thm thm =
+let
+  val thm' = forall_intr_vars thm
+  val thm'' = ObjectLogic.atomize (cprop_of thm')
+in
+  MetaSimplifier.rewrite_rule [thm''] thm'
+end*}
+
+text {*
+  This function produces for the theorem @{thm [source] list.induct}
+
+  @{ML_response_fake [display, gray]
+  "atomize_thm @{thm list.induct}"
+  "\<forall>P list. P [] \<longrightarrow> (\<forall>a list. P list \<longrightarrow> P (a # list)) \<longrightarrow> P list"}
+
+  \footnote{\bf FIXME: say something about @{ML_ind standard in Drule}.}
+
+  Theorems can also be produced from terms by giving an explicit proof. 
+  One way to achieve this is by using the function @{ML_ind prove in Goal}
+  in the structure @{ML_struct Goal}. For example below we use this function
+  to prove the term @{term "P \<Longrightarrow> P"}.
+  
+  @{ML_response_fake [display,gray]
+  "let
+  val trm = @{term \"P \<Longrightarrow> P::bool\"}
+  val tac = K (atac 1)
+in
+  Goal.prove @{context} [\"P\"] [] trm tac
+end"
+  "?P \<Longrightarrow> ?P"}
+
+  This function takes first a context and second a list of strings. This list
+  specifies which variables should be turned into schematic variables once the term
+  is proved. The third argument is a list of premises that may be used in the
+  proof; it is empty in this example. The fourth argument is the term to be
+  proved. The fifth is a corresponding proof given in the form of a tactic (we
+  explain tactics in Chapter~\ref{chp:tactical}). In the code above, the tactic
+  proves the theorem by assumption. As before this code will produce a theorem,
+  but the theorem is not yet stored in the theorem database. 
+
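+  The premises given in the third argument are handed to the tactic in the
+  @{text prems} field. A minimal sketch of their use (proving @{text "Q"} by
+  resolving with the premise @{text "Q"}; the assumption ends up as a premise
+  of the final theorem):
+
+  @{ML_response_fake [display,gray]
+  "let
+  val assm = @{prop \"Q\"}
+  fun tac {prems, ...} = resolve_tac prems 1
+in
+  Goal.prove @{context} [\"Q\"] [assm] @{prop \"Q\"} tac
+end"
+  "?Q \<Longrightarrow> ?Q"}
+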
+  While the LCF-approach of going through interfaces ensures soundness in Isabelle, there
+  is the function @{ML_ind make_thm in Skip_Proof} in the structure 
+  @{ML_struct Skip_Proof} that allows us to turn any proposition into a theorem,
+  potentially making the system unsound. This is sometimes useful for development 
+  purposes, or when explicit proof construction should be omitted for performance 
+  reasons. An example of this function is as follows:
+
+  @{ML_response_fake [display, gray]
+  "Skip_Proof.make_thm @{theory} @{prop \"True = False\"}"
+  "True = False"}
+
+  The function @{ML make_thm in Skip_Proof}, however, only works if 
+  the ``quick-and-dirty'' mode is switched on. 
+
+  Theorems also contain auxiliary data, such as the name of the theorem, its
+  kind, the names for cases and so on. This data is stored in a string-string
+  list and can be retrieved with the function @{ML_ind get_tags in
+  Thm}. Assume you prove the following lemma.
+*}
+
+lemma foo_data: 
+  shows "P \<Longrightarrow> P \<Longrightarrow> P" by assumption
+
+text {*
+  The auxiliary data of this lemma can be retrieved using the function 
+  @{ML_ind get_tags in Thm}. So far the auxiliary data of this lemma is 
+
+  @{ML_response_fake [display,gray]
+  "Thm.get_tags @{thm foo_data}"
+  "[(\"name\", \"General.foo_data\"), (\"kind\", \"lemma\")]"}
+
+  consisting of a name and a kind.  When we store lemmas in the theorem database, 
+  we might want to explicitly extend this data by attaching case names to the 
+  two premises of the lemma.  This can be done with the function @{ML_ind name in Rule_Cases}
+  from the structure @{ML_struct Rule_Cases}.
+*}
+
+local_setup %gray {*
+  Local_Theory.note ((@{binding "foo_data'"}, []), 
+    [(Rule_Cases.name ["foo_case_one", "foo_case_two"] 
+       @{thm foo_data})]) #> snd *}
+
+text {*
+  The data of the theorem @{thm [source] foo_data'} is then as follows:
+
+  @{ML_response_fake [display,gray]
+  "Thm.get_tags @{thm foo_data'}"
+  "[(\"name\", \"General.foo_data'\"), (\"kind\", \"lemma\"), 
+ (\"case_names\", \"foo_case_one;foo_case_two\")]"}
+
+  You can observe the case names of this lemma on the user level when using 
+  the proof methods @{text cases} and @{text induct}. In the proof below
+*}
+
+lemma
+  shows "Q \<Longrightarrow> Q \<Longrightarrow> Q"
+proof (cases rule: foo_data')
+
+(*<*)oops(*>*)
+text_raw{*
+\begin{tabular}{@ {}l}
+\isacommand{print\_cases}\\
+@{text "> cases:"}\\
+@{text ">   foo_case_one:"}\\
+@{text ">     let \"?case\" = \"?P\""}\\
+@{text ">   foo_case_two:"}\\
+@{text ">     let \"?case\" = \"?P\""}
+\end{tabular}*}
+
+text {*
+  we can proceed by analysing the cases @{text "foo_case_one"} and 
+  @{text "foo_case_two"}. If the theorem has no case names, then
+  the cases get the standard names @{text 1}, @{text 2} and so 
+  on. This can be seen in the proof below.
+*}
+
+lemma
+  shows "Q \<Longrightarrow> Q \<Longrightarrow> Q"
+proof (cases rule: foo_data)
+
+(*<*)oops(*>*)
+text_raw{*
+\begin{tabular}{@ {}l}
+\isacommand{print\_cases}\\
+@{text "> cases:"}\\
+@{text ">   1:"}\\
+@{text ">     let \"?case\" = \"?P\""}\\
+@{text ">   2:"}\\
+@{text ">     let \"?case\" = \"?P\""}
+\end{tabular}*}
+
+
+text {*
+  One great feature of Isabelle is its document preparation system, where
+  proved theorems can be quoted in documents, directly referencing their
+  formalisation. This helps tremendously to minimise cut-and-paste errors. However,
+  sometimes the verbatim quoting is not what is wanted or what can be shown to
+  readers. For such situations Isabelle allows the installation of \emph{\index*{theorem 
+  styles}}. These are, roughly speaking, functions from terms to terms. The input 
+  term stands for the theorem to be presented; the output can be constructed to
+  ones wishes.  Let us, for example, assume we want to quote theorems without
+  leading @{text \<forall>}-quantifiers. For this we can implement the following function 
+  that strips off @{text "\<forall>"}s.
+*}
+
+ML %linenosgray{*fun strip_allq (Const (@{const_name "All"}, _) $ Abs body) = 
+      Term.dest_abs body |> snd |> strip_allq
+  | strip_allq (Const (@{const_name "Trueprop"}, _) $ t) = 
+      strip_allq t
+  | strip_allq t = t*}
+
+text {*
+  We use in Line 2 the function @{ML_ind dest_abs in Term} for deconstructing abstractions,
+  since this function deals correctly with potential name clashes. This function produces
+  a pair consisting of the variable and the body of the abstraction. We are only interested
+  in the body, which we feed into the recursive call. In Lines 3 and 4, we also
+  have to explicitly strip off the outermost @{term Trueprop}-coercion. Now we can 
+  install this function as the theorem style named @{text "my_strip_allq"}. 
+*}
+
+setup %gray {*
+  Term_Style.setup "my_strip_allq" (Scan.succeed (K strip_allq)) 
+*}
+
+text {*
+  We can test this theorem style with the following theorem
+*}
+
+theorem style_test:
+  shows "\<forall>x y z. (x, x) = (y, z)" sorry
+
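+text {*
+  Applying @{ML strip_allq} to the proposition of this theorem should yield
+  the body of the quantified formula (a quick check on the ML-level; the
+  variable names may differ, since @{ML dest_abs in Term} invents fresh names
+  to avoid clashes):
+
+  @{ML_response_fake [display,gray]
+  "strip_allq (prop_of @{thm style_test})
+ |> string_of_term @{context}
+ |> tracing"
+  "(x, x) = (y, z)"}
+*}
+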
+text {*
+  Now printing out the theorem @{thm [source] style_test} in a document in the
+  normal way using @{text "@{thm \<dots>}"} produces
+
+  \begin{isabelle}
+  @{text "@{thm style_test}"}\\
+  @{text ">"}~@{thm style_test}
+  \end{isabelle}
+
+  as expected. But with the theorem style @{text "@{thm (my_strip_allq) \<dots>}"} 
+  we obtain
+
+  \begin{isabelle}
+  @{text "@{thm (my_strip_allq) style_test}"}\\
+  @{text ">"}~@{thm (my_strip_allq) style_test}\\
+  \end{isabelle}
+  
+  without the leading quantifiers. We can improve this theorem style by explicitly 
+  giving a list of strings that should be used for the replacement of the
+  variables. For this we implement a function that takes a list of strings
+  and uses them as names for the variables in the outermost abstractions.
+*}
+
+ML{*fun rename_allq [] t = t
+  | rename_allq (x::xs) (Const (@{const_name "All"}, U) $ Abs (_, T, t)) = 
+      Const (@{const_name "All"}, U) $ Abs (x, T, rename_allq xs t)
+  | rename_allq xs (Const (@{const_name "Trueprop"}, U) $ t) =
+      rename_allq xs t
+  | rename_allq _ t = t*}
+
+text {*
+  We can now install the modified theorem style as follows
+*}
+
+setup %gray {* let
+  val parser = Scan.repeat Args.name
+  fun action xs = K (rename_allq xs #> strip_allq)
+in
+  Term_Style.setup "my_strip_allq2" (parser >> action)
+end *}
+
+text {*
+  The parser reads a list of names. In the function @{text action} we first
+  call @{ML rename_allq} with the parsed list, then we call @{ML strip_allq}
+  on the resulting term. We can now, for example, supply two names to be
+  used for the variables bound by the first two @{text \<forall>}-quantifiers.
+
+  \begin{isabelle}
+  @{text "@{thm (my_strip_allq2 x' x'') style_test}"}\\
+  @{text ">"}~@{thm (my_strip_allq2 x' x'') style_test}
+  \end{isabelle}
+
+  Such theorem styles allow one to print out theorems in documents formatted to
+  one's heart's content. Next we explain theorem attributes, which are another
+  mechanism for dealing with theorems.
+
+  \begin{readmore}
+  Theorem styles are implemented in @{ML_file "Pure/Thy/term_style.ML"}.
+  \end{readmore}
+*}
+
+section {* Theorem Attributes\label{sec:attributes} *}
+
+text {*
+  Theorem attributes are @{text "[symmetric]"}, @{text "[THEN \<dots>]"}, @{text
+  "[simp]"} and so on. Such attributes are \emph{neither} tags \emph{nor} flags
+  annotated to theorems, but functions that do further processing of 
+  theorems. In particular, it is not possible to find out
+  what are all theorems that have a given attribute in common, unless of course
+  the function behind the attribute stores the theorems in a retrievable 
+  data structure. 
+
+  If you want to print out all currently known attributes a theorem can have, 
+  you can use the Isabelle command
+
+  \begin{isabelle}
+  \isacommand{print\_attributes}\\
+  @{text "> COMP:  direct composition with rules (no lifting)"}\\
+  @{text "> HOL.dest:  declaration of Classical destruction rule"}\\
+  @{text "> HOL.elim:  declaration of Classical elimination rule"}\\
+  @{text "> \<dots>"}
+  \end{isabelle}
+  
+  The theorem attributes fall roughly into two categories: the first category manipulates
+  theorems (for example @{text "[symmetric]"} and @{text "[THEN \<dots>]"}), and the second
+  stores theorems somewhere as data (for example @{text "[simp]"}, which adds
+  theorems to the current simpset).
+
+  To explain how to write your own attribute, let us start with an extremely simple 
+  version of the attribute @{text "[symmetric]"}. The purpose of this attribute is
+  to produce the ``symmetric'' version of an equation. The main function behind 
+  this attribute is
+*}
+
+ML{*val my_symmetric = Thm.rule_attribute (fn _ => fn thm => thm RS @{thm sym})*}
+
+text {* 
+  where the function @{ML_ind  rule_attribute in Thm} expects a function taking a 
+  context (which we ignore in the code above) and a theorem (@{text thm}), and 
+  returns another theorem (namely @{text thm} resolved with the theorem 
+  @{thm [source] sym}: @{thm sym[no_vars]}; the function @{ML_ind RS in Drule} 
+  is explained in Section~\ref{sec:simpletacs}). The function 
+  @{ML rule_attribute in Thm} then returns  an attribute.
+
+  Before we can use the attribute, we need to set it up. This can be done
+  using the Isabelle command \isacommand{attribute\_setup} as follows:
+*}
+
+attribute_setup %gray my_sym = 
+  {* Scan.succeed my_symmetric *} "applying the sym rule"
+
+text {*
+  Inside the @{text "\<verbopen> \<dots> \<verbclose>"}, we have to specify a parser
+  for the theorem attribute. Since the attribute does not expect any further 
+  arguments (unlike @{text "[THEN \<dots>]"}, for instance), we use the parser @{ML
+  Scan.succeed}. An example for the attribute @{text "[my_sym]"} is the proof
+*} 
+
+lemma test[my_sym]: "2 = Suc (Suc 0)" by simp
+
+text {*
+  which stores the theorem @{thm test} under the name @{thm [source] test}. You
+  can see this if you query the lemma: 
+
+  \begin{isabelle}
+  \isacommand{thm}~@{text "test"}\\
+  @{text "> "}~@{thm test}
+  \end{isabelle}
+
+  We can also use the attribute when referring to this theorem:
+
+  \begin{isabelle}
+  \isacommand{thm}~@{text "test[my_sym]"}\\
+  @{text "> "}~@{thm test[my_sym]}
+  \end{isabelle}
+
+  An alternative for setting up an attribute is the function @{ML_ind  setup in Attrib}.
+  So instead of using \isacommand{attribute\_setup}, you can also set up the
+  attribute as follows:
+*}
+
+ML{*Attrib.setup @{binding "my_sym"} (Scan.succeed my_symmetric)
+  "applying the sym rule" *}
+
+text {*
+  This gives a function of type @{ML_type "theory -> theory"}, which
+  can be used for example with \isacommand{setup} or with
+  @{ML "Context.>> o Context.map_theory"}. The combinator @{ML "Context.>>"}
+  applies a function to the implicit generic context of an ML-file, and
+  @{ML "Context.map_theory"} lifts a theory transformation to such generic
+  contexts.
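+  A minimal sketch of the latter usage, registering the same attribute under
+  the hypothetical name @{text "my_sym2"}, is
+*}
+
+ML{*(Context.>> o Context.map_theory)
+  (Attrib.setup @{binding "my_sym2"} (Scan.succeed my_symmetric)
+    "applying the sym rule")*}
+
+text {*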
+
+  As an example of a slightly more complicated theorem attribute, we implement 
+  our own version of @{text "[THEN \<dots>]"}. This attribute will take a list of theorems
+  as argument and resolve the theorem with this list (one theorem 
+  after another). The code for this attribute is
+*}
+
+ML{*fun MY_THEN thms = 
+  Thm.rule_attribute 
+    (fn _ => fn thm => fold (curry ((op RS) o swap)) thms thm)*}
+
+text {* 
+  where @{ML swap} swaps the components of a pair, so that @{ML fold} resolves
+  the theorem with the given theorems one after another. The setup of this theorem
+  attribute uses the parser @{ML thms in Attrib}, which parses a list of
+  theorems. 
+*}
+
+attribute_setup %gray MY_THEN = {* Attrib.thms >> MY_THEN *} 
+  "resolving the list of theorems with the theorem"
+
+text {* 
+  You can, for example, use this theorem attribute to turn an equation into a 
+  meta-equation:
+
+  \begin{isabelle}
+  \isacommand{thm}~@{text "test[MY_THEN eq_reflection]"}\\
+  @{text "> "}~@{thm test[MY_THEN eq_reflection]}
+  \end{isabelle}
+
+  If you need the symmetric version as a meta-equation, you can write
+
+  \begin{isabelle}
+  \isacommand{thm}~@{text "test[MY_THEN sym eq_reflection]"}\\
+  @{text "> "}~@{thm test[MY_THEN sym eq_reflection]}
+  \end{isabelle}
+
+  It is also possible to combine different theorem attributes, as in:
+  
+  \begin{isabelle}
+  \isacommand{thm}~@{text "test[my_sym, MY_THEN eq_reflection]"}\\
+  @{text "> "}~@{thm test[my_sym, MY_THEN eq_reflection]}
+  \end{isabelle}
+  
+  However, here a weakness of the concept
+  of theorem attributes also shows through: since theorem attributes can be
+  arbitrary functions, they do not commute in general. If you try
+
+  \begin{isabelle}
+  \isacommand{thm}~@{text "test[MY_THEN eq_reflection, my_sym]"}\\
+  @{text "> "}~@{text "exception THM 1 raised: RSN: no unifiers"}
+  \end{isabelle}
+
+  you get an exception indicating that the theorem @{thm [source] sym}
+  does not resolve with meta-equations. 
+
+  The purpose of @{ML_ind rule_attribute in Thm} is to directly manipulate
+  theorems.  Another usage of theorem attributes is to add and delete theorems
+  from stored data.  For example the theorem attribute @{text "[simp]"} adds
+  or deletes a theorem from the current simpset. For these applications, you
+  can use @{ML_ind declaration_attribute in Thm}. To illustrate this function,
+  let us introduce a theorem list.
+*}
+
+ML{*structure MyThms = Named_Thms
+  (val name = "attr_thms" 
+   val description = "Theorems for an Attribute") *}
+
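+text {*
+  Among other things, this declaration provides a getter for the theorem
+  list, @{ML MyThms.get}. Initially the list is empty:
+
+  @{ML_response_fake [display,gray]
+  "MyThms.get @{context}"
+  "[]"}
+*}
+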
+text {* 
+  We are going to modify this list by adding and deleting theorems.
+  For this we use the two functions @{ML MyThms.add_thm} and
+  @{ML MyThms.del_thm}. You can turn them into attributes 
+  with the code
+*}
+
+ML{*val my_add = Thm.declaration_attribute MyThms.add_thm
+val my_del = Thm.declaration_attribute MyThms.del_thm *}
+
+text {* 
+  and set up the attributes as follows
+*}
+
+attribute_setup %gray my_thms = {* Attrib.add_del my_add my_del *} 
+  "maintaining a list of my_thms" 
+
+text {*
+  The parser @{ML_ind  add_del in Attrib} is a predefined parser for 
+  adding and deleting lemmas. Now if you prove the next lemma 
+  and attach to it the attribute @{text "[my_thms]"}
+*}
+
+lemma trueI_2[my_thms]: "True" by simp
+
+text {*
+  then you can see it is added to the initially empty list.
+
+  @{ML_response_fake [display,gray]
+  "MyThms.get @{context}" 
+  "[\"True\"]"}
+
+  You can also add theorems using the command \isacommand{declare}.
+*}
+
+declare test[my_thms] trueI_2[my_thms add]
+
+text {*
+  With this attribute, the @{text "add"} operation is the default and does 
+  not need to be explicitly given. These three declarations will cause the 
+  theorem list to be updated as:
+
+  @{ML_response_fake [display,gray]
+  "MyThms.get @{context}"
+  "[\"True\", \"Suc (Suc 0) = 2\"]"}
+
+  The theorem @{thm [source] trueI_2} only appears once, since the 
+  function @{ML_ind  add_thm in Thm} tests for duplicates before extending
+  the list. Deletion from the list works as follows:
+*}
+
+declare test[my_thms del]
+
+text {* After this, the theorem list is again: 
+
+  @{ML_response_fake [display,gray]
+  "MyThms.get @{context}"
+  "[\"True\"]"}
+
+  In this example we used two functions declared with @{ML_ind
+  declaration_attribute in Thm}, but there can be any number of them. We just
+  have to change the parser that reads the arguments accordingly.
+
+  \footnote{\bf FIXME What are: @{text "theory_attributes"}, @{text "proof_attributes"}?}
+
+  \begin{readmore}
+  FIXME: @{ML_file "Pure/more_thm.ML"}; parsers for attributes is in 
+  @{ML_file "Pure/Isar/attrib.ML"}...also explained in the chapter about
+  parsing.
+  \end{readmore}
+*}
+
+section {* Pretty-Printing\label{sec:pretty} *}
+
+text {*
+  So far we printed out only plain strings without any formatting except for
+  occasional explicit line breaks using @{text [quotes] "\\n"}. This is
+  sufficient for ``quick-and-dirty'' printouts. For something more
+  sophisticated, Isabelle includes an infrastructure for properly formatting
+  text. Most of its functions do not operate on @{ML_type string}s, but on
+  instances of the pretty type:
+
+  @{ML_type_ind [display, gray] "Pretty.T"}
+
+  The function @{ML str in Pretty} transforms a (plain) string into such a pretty 
+  type. For example
+
+  @{ML_response_fake [display,gray]
+  "Pretty.str \"test\"" "String (\"test\", 4)"}
+
+  where the result indicates that we transformed a string with length 4. Once
+  you have a pretty type, you can, for example, control where linebreaks may
+  occur in case the text wraps over a line, or with how much indentation a
+  text should be printed.  However, if you want to actually output the
+  formatted text, you have to transform the pretty type back into a @{ML_type
+  string}. This can be done with the function @{ML_ind  string_of in Pretty}. In what
+  follows we will use the following wrapper function for printing a pretty
+  type:
+*}
+
+ML{*fun pprint prt = tracing (Pretty.string_of prt)*}
+
+text {*
+  The point of the pretty-printing infrastructure is to give hints about how to
+  layout text and let Isabelle do the actual layout. Let us first explain
+  how you can insert places where a line break can occur. For this assume the
+  following function that replicates a string n times:
+*}
+
+ML{*fun rep n str = implode (replicate n str) *}
+
+text {*
+  and suppose we want to print out the string:
+*}
+
+ML{*val test_str = rep 8 "fooooooooooooooobaaaaaaaaaaaar "*}
+
+text {*
+  We deliberately chose a large string so that it spans over more than one line. 
+  If we print out the string using the usual ``quick-and-dirty'' method, then
+  we obtain the ugly output:
+
+@{ML_response_fake [display,gray]
+"tracing test_str"
+"fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar foooooooooo
+ooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaa
+aaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fo
+oooooooooooooobaaaaaaaaaaaar"}
+
+  We obtain the same if we just use the function @{ML pprint}.
+
+@{ML_response_fake [display,gray]
+"pprint (Pretty.str test_str)"
+"fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar foooooooooo
+ooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaa
+aaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fo
+oooooooooooooobaaaaaaaaaaaar"}
+
+  However, by using pretty types you have the ability to indicate possible
+  linebreaks, for example at each whitespace. You can achieve this with the
+  function @{ML_ind breaks in Pretty}, which expects a list of pretty types
+  and inserts a possible line break in between every two elements in this
+  list. To print this list of pretty types as a single string, we concatenate
+  them with the function @{ML_ind blk in Pretty} as follows:
+
+@{ML_response_fake [display,gray]
+"let
+  val ptrs = map Pretty.str (space_explode \" \" test_str)
+in
+  pprint (Pretty.blk (0, Pretty.breaks ptrs))
+end"
+"fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar 
+fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar 
+fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar
+fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar"}
+
+  Here the layout of @{ML test_str} is much more pleasing to the 
+  eye. The @{ML "0"} in @{ML_ind  blk in Pretty} stands for no hanging 
+  indentation of the printed string. You can increase the indentation 
+  and obtain
+  
+@{ML_response_fake [display,gray]
+"let
+  val ptrs = map Pretty.str (space_explode \" \" test_str)
+in
+  pprint (Pretty.blk (3, Pretty.breaks ptrs))
+end"
+"fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar 
+   fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar 
+   fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar
+   fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar"}
+
+  where starting from the second line the indent is 3. If you want
+  every line to start with the same indent, you can use the
+  function @{ML_ind  indent in Pretty} as follows:
+
+@{ML_response_fake [display,gray]
+"let
+  val ptrs = map Pretty.str (space_explode \" \" test_str)
+in
+  pprint (Pretty.indent 10 (Pretty.blk (0, Pretty.breaks ptrs)))
+end"
+"          fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar 
+          fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar 
+          fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar
+          fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar"}
+
+  If you want to print out a list of items separated by commas and 
+  have the linebreaks handled properly, you can use the function 
+  @{ML_ind  commas in Pretty}. For example
+
+@{ML_response_fake [display,gray]
+"let
+  val ptrs = map (Pretty.str o string_of_int) (99998 upto 100020)
+in
+  pprint (Pretty.blk (0, Pretty.commas ptrs))
+end"
+"99998, 99999, 100000, 100001, 100002, 100003, 100004, 100005, 100006, 
+100007, 100008, 100009, 100010, 100011, 100012, 100013, 100014, 100015, 
+100016, 100017, 100018, 100019, 100020"}
+
+  where @{ML upto} generates a list of integers. You can print out this
+  list as a ``set'', that is, enclosed inside @{text [quotes] "{"} and
+  @{text [quotes] "}"}, and separated by commas using the function
+  @{ML_ind  enum in Pretty}. For example
+
+@{ML_response_fake [display,gray]
+"let
+  val ptrs = map (Pretty.str o string_of_int) (99998 upto 100020)
+in
+  pprint (Pretty.enum \",\" \"{\" \"}\" ptrs)
+end"
+"{99998, 99999, 100000, 100001, 100002, 100003, 100004, 100005, 100006, 
+  100007, 100008, 100009, 100010, 100011, 100012, 100013, 100014, 100015, 
+  100016, 100017, 100018, 100019, 100020}"}
+
+  As can be seen, this function prints out the ``set'' so that, starting
+  from the second line, each new line has an indentation of 2.
+  
+  If you print out something that goes beyond the capabilities of the
+  standard functions, you can relatively easily do the formatting
+  yourself. Assume you want to print out a list of items where, as in English,
+  the last two items are separated by @{text [quotes] "and"}. For this you can
+  write the function
+
+*}
+
+ML %linenosgray{*fun and_list [] = []
+  | and_list [x] = [x]
+  | and_list xs = 
+      let 
+        val (front, last) = split_last xs
+      in
+        (Pretty.commas front) @ 
+        [Pretty.brk 1, Pretty.str "and", Pretty.brk 1, last]
+      end *}
+
+text {*
+  where Line 7 prints the beginning of the list and Line
+  8 the last item. We have to use @{ML "Pretty.brk 1"} in order
+  to insert a space (of length 1) before the 
+  @{text [quotes] "and"}. This space is also a place where a line break 
+  can occur. We do the same after the @{text [quotes] "and"}. This gives you
+  for example
+
+@{ML_response_fake [display,gray]
+"let
+  val ptrs1 = map (Pretty.str o string_of_int) (1 upto 22)
+  val ptrs2 = map (Pretty.str o string_of_int) (10 upto 28)
+in
+  pprint (Pretty.blk (0, and_list ptrs1));
+  pprint (Pretty.blk (0, and_list ptrs2))
+end"
+"1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21 
+and 22
+10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27 and
+28"}
+
+  Next we would like to pretty-print a term and its type. For this we use the
+  function @{text "tell_type"}.
+*}
+
+ML %linenosgray{*fun tell_type ctxt t = 
+let
+  fun pstr s = Pretty.breaks (map Pretty.str (space_explode " " s))
+  val ptrm = Pretty.quote (Syntax.pretty_term ctxt t)
+  val pty  = Pretty.quote (Syntax.pretty_typ ctxt (fastype_of t))
+in
+  pprint (Pretty.blk (0, 
+    (pstr "The term " @ [ptrm] @ pstr " has type " 
+      @ [pty, Pretty.str "."])))
+end*}
+
+text {*
+  In Line 3 we define a function that inserts possible linebreaks wherever
+  there is a space. In Lines 4 and 5 we pretty-print the term and its type
+  using the functions @{ML_ind  pretty_term in Syntax} and @{ML_ind 
+  pretty_typ in Syntax}. We also use the function @{ML_ind quote in
+  Pretty} in order to enclose the term and type inside quotation marks. In
+  Line 9 we add a period right after the type without the possibility of a
+  line break, because we do not want a line break to occur there.
+  Now you can write
+
+  @{ML_response_fake [display,gray]
+  "tell_type @{context} @{term \"min (Suc 0)\"}"
+  "The term \"min (Suc 0)\" has type \"nat \<Rightarrow> nat\"."}
+  
+  To see the proper line breaking, you can try out the function on a bigger term 
+  and type. For example:
+
+  @{ML_response_fake [display,gray]
+  "tell_type @{context} @{term \"op = (op = (op = (op = (op = op =))))\"}"
+  "The term \"op = (op = (op = (op = (op = op =))))\" has type 
+\"((((('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> bool) \<Rightarrow> bool) \<Rightarrow> bool) \<Rightarrow> bool) \<Rightarrow> bool\"."}
+
+  The function @{ML_ind big_list in Pretty} helps with printing long lists.
+  It is used for example in the command \isacommand{print\_theorems}. An
+  example is as follows.
+
+  @{ML_response_fake [display,gray]
+  "let
+  val pstrs = map (Pretty.str o string_of_int) (4 upto 10)
+in
+  pprint (Pretty.big_list \"header\" pstrs)
+end"
+  "header
+  4
+  5
+  6
+  7
+  8
+  9
+  10"}
+
+  Like @{ML blk in Pretty}, the function @{ML_ind chunks in Pretty} prints out 
+  a list of items, but automatically inserts a forced break between any two items.
+  Compare
+
+  @{ML_response_fake [display,gray]
+  "let
+  val a_and_b = [Pretty.str \"a\", Pretty.str \"b\"]
+in
+  pprint (Pretty.blk (0, a_and_b));
+  pprint (Pretty.chunks a_and_b)
+end"
+"ab
+a
+b"}
+  
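+  Such pretty types compose. For example, we can place two
+  @{ML big_list in Pretty} blocks under each other using @{ML chunks in Pretty}
+  (a small sketch; the exact layout might differ slightly):
+
+  @{ML_response_fake [display,gray]
+"let
+  val l1 = map (Pretty.str o string_of_int) (1 upto 3)
+  val l2 = map (Pretty.str o string_of_int) (4 upto 6)
+in
+  pprint (Pretty.chunks 
+    [Pretty.big_list \"from one:\" l1, Pretty.big_list \"from four:\" l2])
+end"
+"from one:
+  1
+  2
+  3
+from four:
+  4
+  5
+  6"}
+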
+  \footnote{\bf FIXME: What happens with printing big formulae?}
+
+  \begin{readmore}
+  The general infrastructure for pretty-printing is implemented in the file
+  @{ML_file "Pure/General/pretty.ML"}. The file @{ML_file "Pure/Syntax/syntax.ML"}
+  contains pretty-printing functions for terms, types, theorems and so on.
+  
+  The file @{ML_file "Pure/General/markup.ML"} defines the markup that can
+  be attached to the output.
+  \end{readmore}
+*}
+
+(*
+text {*
+  printing into the goal buffer as part of the proof state
+*}
+
+text {* writing into the goal buffer *}
+
+ML {* fun my_hook interactive state =
+         (interactive ? Proof.goal_message (fn () => Pretty.str  
+"foo")) state
+*}
+
+setup %gray {* Context.theory_map (Specification.add_theorem_hook my_hook) *}
+
+lemma "False"
+oops
+*)
+
+(*
+ML {*
+fun setmp_show_all_types f =
+   setmp show_all_types
+         (! show_types orelse ! show_sorts orelse ! show_all_types) f;
+
+val ctxt = @{context};
+val t1 = @{term "undefined::nat"};
+val t2 = @{term "Suc y"};
+val pty =        Pretty.block (Pretty.breaks
+             [(setmp show_question_marks false o setmp_show_all_types)
+                  (Syntax.pretty_term ctxt) t1,
+              Pretty.str "=", Syntax.pretty_term ctxt t2]);
+pty |> Pretty.string_of  
+*}
+
+text {* the infrastructure of Syntax-pretty-term makes sure it is printed nicely  *}
+
+
+ML {* Pretty.mark Markup.hilite (Pretty.str "foo") |> Pretty.string_of  |> tracing *}
+ML {* (Pretty.str "bar") |> Pretty.string_of |> tracing *}
+*)
+
+section {* Summary *}
+
+text {*
+  \begin{conventions}
+  \begin{itemize}
+  \item Start with a proper context and then pass it around 
+  through all your functions.
+  \end{itemize}
+  \end{conventions}
+*}
+
+end