ProgTutorial/FirstSteps.thy
changeset 189 069d525f8f1d
parent 188 8939b8fd8603
child 190 ca0ac2e75f6d
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/FirstSteps.thy	Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,1302 @@
+theory FirstSteps
+imports Base
+begin
+
+chapter {* First Steps *}
+
+text {*
+
+  Isabelle programming is done in ML.  Just like lemmas and proofs, ML-code
+  in Isabelle is part of a theory. If you want to follow the code given in
+  this chapter, we assume you are working inside a theory starting with
+
+  \begin{center}
+  \begin{tabular}{@ {}l}
+  \isacommand{theory} FirstSteps\\
+  \isacommand{imports} Main\\
+  \isacommand{begin}\\
+  \ldots
+  \end{tabular}
+  \end{center}
+
+  We also generally assume you are working with HOL. The given examples might
+  need to be adapted slightly if you work in a different logic.
+*}
+
+section {* Including ML-Code *}
+
+
+
+text {*
+  The easiest and quickest way to include code in a theory is
+  by using the \isacommand{ML}-command. For example:
+
+\begin{isabelle}
+\begin{graybox}
+\isacommand{ML}~@{text "\<verbopen>"}\isanewline
+\hspace{5mm}@{ML "3 + 4"}\isanewline
+@{text "\<verbclose>"}\isanewline
+@{text "> 7"}\smallskip
+\end{graybox}
+\end{isabelle}
+
+  Like normal Isabelle proof scripts, \isacommand{ML}-commands can be
+  evaluated by using the advance and undo buttons of your Isabelle
+  environment. The code inside the \isacommand{ML}-command can also contain
+  value and function bindings, and even those can be undone when the proof
+  script is retracted. As mentioned in the Introduction, we will drop the
+  \isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"} scaffolding whenever we
+  show code. The lines prefixed with @{text [quotes] ">"} are not part of the
+  code; rather, they indicate the response when the code is evaluated.
+
+  Once a portion of code is relatively stable, you usually want to export it
+  to a separate ML-file. Such files can then be included in a theory by using
+  the \isacommand{uses}-command in the header of the theory, like:
+
+  \begin{center}
+  \begin{tabular}{@ {}l}
+  \isacommand{theory} FirstSteps\\
+  \isacommand{imports} Main\\
+  \isacommand{uses} @{text "\"file_to_be_included.ML\""} @{text "\<dots>"}\\
+  \isacommand{begin}\\
+  \ldots
+  \end{tabular}
+  \end{center}
+  
+*}
+
+section {* Debugging and Printing\label{sec:printing} *}
+
+text {*
+
+  During development you might find it necessary to inspect some data
+  in your code. This can be done in a ``quick-and-dirty'' fashion using 
+  the function @{ML "warning"}. For example 
+
+  @{ML_response_fake [display,gray] "warning \"any string\"" "\"any string\""}
+
+  will print out @{text [quotes] "any string"} inside the response buffer
+  of Isabelle.  This function expects a string as argument. If you develop under PolyML,
+  then there is a convenient, though again ``quick-and-dirty'', method for
+  converting values into strings, namely the function @{ML makestring}:
+
+  @{ML_response_fake [display,gray] "warning (makestring 1)" "\"1\""} 
+
+  However, @{ML makestring} only works if the type of the converted value is 
+  monomorphic and not a function type.
+
+  The function @{ML "warning"} should only be used for testing purposes, because any
+  output this function generates will be overwritten as soon as an error is
+  raised. For printing anything more serious and elaborate, the
+  function @{ML tracing} is more appropriate. This function writes all output into
+  a separate tracing buffer. For example:
+
+  @{ML_response_fake [display,gray] "tracing \"foo\"" "\"foo\""}
+
+  It is also possible to redirect the ``channel'' where the string @{text "foo"} is 
+  printed to a separate file, e.g.~to prevent ProofGeneral from choking on massive 
+  amounts of trace output. This redirection can be achieved with the code:
+*}
+
+ML{*val strip_specials =
+let
+  fun strip ("\^A" :: _ :: cs) = strip cs
+    | strip (c :: cs) = c :: strip cs
+    | strip [] = [];
+in implode o strip o explode end;
+
+fun redirect_tracing stream =
+ Output.tracing_fn := (fn s =>
+    (TextIO.output (stream, (strip_specials s));
+     TextIO.output (stream, "\n");
+     TextIO.flushOut stream)) *}
+
+text {*
+  Calling @{ML "redirect_tracing"} with @{ML "(TextIO.openOut \"foo.bar\")"} 
+  will cause all tracing information to be printed into the file @{text "foo.bar"}.
+
+  You can print out error messages with the function @{ML error}; for example:
+
+@{ML_response_fake [display,gray] "if 0=1 then true else (error \"foo\")" 
+"Exception- ERROR \"foo\" raised
+At command \"ML\"."}
+
+  Most often you want to inspect data of type @{ML_type term}, @{ML_type cterm} 
+  or @{ML_type thm}. Isabelle contains elaborate pretty-printing functions for printing them, 
+  but  for quick-and-dirty solutions they are far too unwieldy. A simple way to transform 
+  a term into a string is to use the function @{ML Syntax.string_of_term}.
+
+  @{ML_response_fake [display,gray]
+  "Syntax.string_of_term @{context} @{term \"1::nat\"}"
+  "\"\\^E\\^Fterm\\^E\\^E\\^Fconst\\^Fname=HOL.one_class.one\\^E1\\^E\\^F\\^E\\^E\\^F\\^E\""}
+
+  This produces a string with some additional information encoded in it. The string
+  can be properly printed by using the function @{ML warning}.
+
+  @{ML_response_fake [display,gray]
+  "warning (Syntax.string_of_term @{context} @{term \"1::nat\"})"
+  "\"1\""}
+
+  A @{ML_type cterm} can be transformed into a string by the following function.
+*}
+
+ML{*fun str_of_cterm ctxt t =  
+   Syntax.string_of_term ctxt (term_of t)*}
+
+text {*
+  In this example the function @{ML term_of} extracts the @{ML_type term} from
+  a @{ML_type cterm}.  If more than one @{ML_type cterm} is to be
+  printed, you can use the function @{ML commas} to separate them.
+*} 
+
+ML{*fun str_of_cterms ctxt ts =  
+   commas (map (str_of_cterm ctxt) ts)*}
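+
+text {*
+  For example, a hypothetical invocation might look as follows (the output
+  shown is only a sketch):
+
+  @{ML_response_fake [display,gray]
+  "warning (str_of_cterms @{context} [@{cterm \"1::nat\"}, @{cterm \"2::nat\"}])"
+  "\"1, 2\""}
+*}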
+
+text {*
+  The easiest way to get the string of a theorem is to transform it
+  into a @{ML_type cterm} using the function @{ML crep_thm}. Theorems
+  often contain schematic variables, such as @{text "?P"}, @{text "?Q"}
+  and so on. In order to improve the readability of theorems we convert
+  these schematic variables into free variables using the 
+  function @{ML Variable.import_thms}.
+*}
+
+ML{*fun no_vars ctxt thm =
+let 
+  val ((_, [thm']), _) = Variable.import_thms true [thm] ctxt
+in
+  thm'
+end
+
+fun str_of_thm ctxt thm =
+  str_of_cterm ctxt (#prop (crep_thm (no_vars ctxt thm)))*}
+
+text {* 
+  Again the function @{ML commas} helps with printing more than one theorem. 
+*}
+
+ML{*fun str_of_thms ctxt thms =  
+  commas (map (str_of_thm ctxt) thms)*}
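+
+text {*
+  For example, for the theorem @{text "conjI"} one might obtain (again only
+  a sketch; the exact output depends on the context):
+
+  @{ML_response_fake [display,gray]
+  "warning (str_of_thm @{context} @{thm conjI})"
+  "\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> P \<and> Q"}
+*}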
+
+text {*
+(FIXME @{ML Toplevel.debug} @{ML Toplevel.profiling})
+*}
+
+section {* Combinators\label{sec:combinators} *}
+
+text {*
+  For beginners, perhaps the most puzzling parts of the existing Isabelle code are
+  the combinators. At first they seem to greatly obstruct the
+  comprehension of the code, but once you are familiar with them, they
+  actually ease both the understanding and the programming.
+
+  The simplest combinator is @{ML I}, which is just the identity function defined as
+*}
+
+ML{*fun I x = x*}
+
+text {* Another simple combinator is @{ML K}, defined as *}
+
+ML{*fun K x = fn _ => x*}
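+
+text {*
+  For example, @{ML K} is handy whenever an argument should be ignored:
+
+  @{ML_response [display,gray] "map (K 0) [1, 2, 3]" "[0, 0, 0]"}
+*}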
+
+text {*
+  @{ML K} ``wraps'' a function around the argument @{text "x"}. However, this 
+  function ignores its argument. As a result, @{ML K} defines a constant function 
+  always returning @{text x}.
+
+  The next combinator is reverse application, @{ML "|>"}, defined as: 
+*}
+
+ML{*fun x |> f = f x*}
+
+text {* While just syntactic sugar for the usual function application,
+  the purpose of this combinator is to implement functions in a  
+  ``waterfall fashion''. Consider for example the function *}
+
+ML %linenosgray{*fun inc_by_five x =
+  x |> (fn x => x + 1)
+    |> (fn x => (x, x))
+    |> fst
+    |> (fn x => x + 4)*}
+
+text {*
+  which increments its argument @{text x} by 5. It does this by first incrementing 
+  the argument by 1 (Line 2); then storing the result in a pair (Line 3); taking 
+  the first component of the pair (Line 4) and finally incrementing the first 
+  component by 4 (Line 5). This kind of cascading manipulations of values is quite
+  common when dealing with theories (for example by adding a definition, followed by
+  lemmas and so on). The reverse application allows you to read what happens in 
+  a top-down manner. This kind of coding should also be familiar 
+  if you have been exposed to Haskell's do-notation. Writing the function @{ML inc_by_five} using 
+  the reverse application is much clearer than writing
+*}
+
+ML{*fun inc_by_five x = fst ((fn x => (x, x)) (x + 1)) + 4*}
+
+text {* or *}
+
+ML{*fun inc_by_five x = 
+       ((fn x => x + 4) o fst o (fn x => (x, x)) o (fn x => x + 1)) x*}
+
+text {* and typographically more economical than *}
+
+ML{*fun inc_by_five x =
+   let val y1 = x + 1
+       val y2 = (y1, y1)
+       val y3 = fst y2
+       val y4 = y3 + 4
+   in y4 end*}
+
+text {* 
+  Another reason why the let-bindings in the code above are best
+  avoided: it is all too easy to get the intermediate values wrong, not to 
+  mention the maintenance nightmares such code causes!
+
+  In the context of Isabelle, a ``real world'' example of a function written in 
+  the waterfall fashion might be the following code:
+*}
+
+ML %linenosgray{*fun apply_fresh_args pred ctxt =
+  pred |> fastype_of
+       |> binder_types 
+       |> map (pair "z")
+       |> Variable.variant_frees ctxt [pred]
+       |> map Free  
+       |> (curry list_comb) pred *}
+
+text {*
+  This code extracts the argument types of a given
+  predicate and then generates for each argument type a distinct variable; finally it
+  applies the generated variables to the predicate. For example
+
+  @{ML_response_fake [display,gray]
+"apply_fresh_args @{term \"P::nat \<Rightarrow> int \<Rightarrow> unit \<Rightarrow> bool\"} @{context} 
+ |> Syntax.string_of_term @{context}
+ |> warning"
+  "P z za zb"}
+
+  You can read off this behaviour from how @{ML apply_fresh_args} is
+  coded: in Line 2, the function @{ML fastype_of} calculates the type of the
+  predicate; @{ML binder_types} in the next line produces the list of argument
+  types (in the case above the list @{text "[nat, int, unit]"}); Line 4 
+  pairs up each type with the string  @{text "z"}; the
+  function @{ML variant_frees in Variable} generates for each @{text "z"} a
+  unique name avoiding the given @{text pred}; the list of name-type pairs is turned
+  into a list of variable terms in Line 6, which in the last line is applied
+  by the function @{ML list_comb} to the predicate. We have to use the
+  function @{ML curry}, because @{ML list_comb} expects the predicate and the
+  variables list as a pair.
+  
+  The combinator @{ML "#>"} is reverse function composition. It can be
+  used to define the following function
+*}
+
+ML{*val inc_by_six = 
+      (fn x => x + 1)
+   #> (fn x => x + 2)
+   #> (fn x => x + 3)*}
+
+text {* 
+  which is the function composed of first the increment-by-one function and then
+  increment-by-two, followed by increment-by-three. Again, the reverse function 
+  composition allows you to read the code top-down.
+
+  The remaining combinators described in this section add convenience for the
+  ``waterfall method'' of writing functions. The combinator @{ML tap} allows
+  you to get hold of an intermediate result (to do some side-calculations for
+  instance). The function
+
+ *}
+
+ML %linenosgray{*fun inc_by_three x =
+     x |> (fn x => x + 1)
+       |> tap (fn x => tracing (makestring x))
+       |> (fn x => x + 2)*}
+
+text {* 
+  increments the argument first by @{text "1"} and then by @{text "2"}. In the
+  middle (Line 3), however, it uses @{ML tap} for printing the ``plus-one''
+  intermediate result inside the tracing buffer. The function @{ML tap} can
+  only be used for side-calculations, because any value that is computed
+  cannot be merged back into the ``main waterfall''. To do this, you can use
+  the next combinator.
+
+  The combinator @{ML "`"} is similar to @{ML tap}, but applies a function to the value
+  and returns the result together with the value (as a pair). For example
+  the function 
+*}
+
+ML{*fun inc_as_pair x =
+     x |> `(fn x => x + 1)
+       |> (fn (x, y) => (x, y + 1))*}
+
+text {*
+  takes @{text x} as argument, and then increments @{text x}, but also keeps
+  @{text x}. The intermediate result is therefore the pair @{ML "(x + 1, x)"
+  for x}. After that, the function increments the right-hand component of the
+  pair. So finally the result will be @{ML "(x + 1, x + 1)" for x}.
+
+  The combinators @{ML "|>>"} and @{ML "||>"} are defined for 
+  functions manipulating pairs. The first applies the function to
+  the first component of the pair, defined as
+*}
+
+ML{*fun (x, y) |>> f = (f x, y)*}
+
+text {*
+  and the second combinator to the second component, defined as
+*}
+
+ML{*fun (x, y) ||> f = (x, f y)*}
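+
+text {*
+  For example, both combinators can be chained:
+
+  @{ML_response [display,gray] "(3, 4) |>> (fn x => x + 1) ||> (fn y => y * 2)" "(4, 8)"}
+*}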
+
+text {*
+  With the combinator @{ML "|->"} you can re-combine the elements from a pair.
+  This combinator is defined as
+*}
+
+ML{*fun (x, y) |-> f = f x y*}
+
+text {* and can be used to write the following roundabout version 
+  of the @{text double} function: 
+*}
+
+ML{*fun double x =
+      x |>  (fn x => (x, x))
+        |-> (fn x => fn y => x + y)*}
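+
+text {*
+  For example:
+
+  @{ML_response [display,gray] "double 3" "6"}
+*}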
+
+text {*
+  Recall that @{ML "|>"} is reverse function application. Recall also that the related 
+  reverse function composition is @{ML "#>"}. In fact all the combinators @{ML "|->"},
+  @{ML "|>>"} and @{ML "||>"} described above have related combinators for function
+  composition, namely @{ML "#->"}, @{ML "#>>"} and @{ML "##>"}. Using @{ML "#->"}, 
+  for example, the function @{text double} can also be written as:
+*}
+
+ML{*val double =
+            (fn x => (x, x))
+        #-> (fn x => fn y => x + y)*}
+
+text {*
+  
+  (FIXME: find a good exercise for combinators)
+
+  \begin{readmore}
+  The most frequently used combinators are defined in the files @{ML_file "Pure/library.ML"}
+  and @{ML_file "Pure/General/basics.ML"}. Also \isccite{sec:ML-linear-trans} 
+  contains further information about combinators.
+  \end{readmore}
+ 
+*}
+
+
+section {* Antiquotations *}
+
+text {*
+  The main advantage of embedding all code in a theory is that the code can
+  contain references to entities defined on the logical level of Isabelle. By
+  this we mean definitions, theorems, terms and so on. This kind of reference is
+  realised with antiquotations.  For example, one can print out the name of the current
+  theory by typing
+
+  
+  @{ML_response [display,gray] "Context.theory_name @{theory}" "\"FirstSteps\""}
+ 
+  where @{text "@{theory}"} is an antiquotation that is substituted with the
+  current theory (remember that we assumed we are inside the theory 
+  @{text FirstSteps}). The name of this theory can be extracted using
+  the function @{ML "Context.theory_name"}. 
+
+  Note, however, that antiquotations are statically linked, that is, their value is
+  determined at ``compile-time'', not ``run-time''. For example the function
+*}
+
+ML{*fun not_current_thyname () = Context.theory_name @{theory} *}
+
+text {*
+
+  does \emph{not} return the name of the current theory, if it is run in a 
+  different theory. Instead, the code above defines the constant function 
+  that always returns the string @{text [quotes] "FirstSteps"}, no matter where the
+  function is called. Operationally speaking,  the antiquotation @{text "@{theory}"} is 
+  \emph{not} replaced with code that will look up the current theory in 
+  some data structure and return it. Instead, it is literally
+  replaced with the value representing the theory name.
+
+  In a similar way you can use antiquotations to refer to proved theorems: 
+  @{text "@{thm \<dots>}"} for a single theorem
+
+  @{ML_response_fake [display,gray] "@{thm allI}" "(\<And>x. ?P x) \<Longrightarrow> \<forall>x. ?P x"}
+
+  and @{text "@{thms \<dots>}"} for more than one
+
+@{ML_response_fake [display,gray] "@{thms conj_ac}" 
+"(?P \<and> ?Q) = (?Q \<and> ?P)
+(?P \<and> ?Q \<and> ?R) = (?Q \<and> ?P \<and> ?R)
+((?P \<and> ?Q) \<and> ?R) = (?P \<and> ?Q \<and> ?R)"}  
+
+  You can also refer to the current simpset. To illustrate this we implement a
+  function that extracts the theorem names stored in a simpset.
+*}
+
+ML{*fun get_thm_names_from_ss simpset =
+let
+  val {simps,...} = MetaSimplifier.dest_ss simpset
+in
+  map #1 simps
+end*}
+
+text {*
+  The function @{ML dest_ss in MetaSimplifier} returns a record containing all
+  information stored in the simpset. We are only interested in the names of the
+  simp-rules. You can now feed the current simpset into this function. 
+  The simpset can be referred to using the antiquotation @{text "@{simpset}"}.
+
+  @{ML_response_fake [display,gray] 
+  "get_thm_names_from_ss @{simpset}" 
+  "[\"Nat.of_nat_eq_id\", \"Int.of_int_eq_id\", \"Nat.One_nat_def\", \<dots>]"}
+
+  Again, this way of referencing simpsets makes you independent of additions
+  of lemmas to the simpset by the user, which could potentially cause loops.
+
+  While antiquotations have many applications, they were originally introduced in order 
+  to avoid explicit bindings for theorems such as:
+*}
+
+ML{*val allI = thm "allI" *}
+
+text {*
+  These bindings are difficult to maintain and also can be accidentally
+  overwritten by the user. This often broke Isabelle
+  packages. Antiquotations solve this problem, since they are ``linked''
+  statically at compile-time.  However, this static linkage also limits their
+  usefulness in cases where data needs to be built up dynamically. In the
+  course of this chapter you will learn more about these antiquotations:
+  they can simplify Isabelle programming since one can directly access all
+  kinds of logical elements from the ML-level.
+*}
+
+text {*
+  (FIXME: say something about @{text "@{binding \<dots>}"})
+*}
+
+section {* Terms and Types *}
+
+text {*
+  One way to construct terms of Isabelle is by using the antiquotation 
+  \mbox{@{text "@{term \<dots>}"}}. For example:
+
+  @{ML_response [display,gray] 
+"@{term \"(a::nat) + b = c\"}" 
+"Const (\"op =\", \<dots>) $ 
+  (Const (\"HOL.plus_class.plus\", \<dots>) $ \<dots> $ \<dots>) $ \<dots>"}
+
+  This will show the term @{term "(a::nat) + b = c"}, but printed using the internal
+  representation of this term. This internal representation corresponds to the 
+  datatype @{ML_type "term"}.
+  
+  The internal representation of terms uses the usual de Bruijn index mechanism where bound 
+  variables are represented by the constructor @{ML Bound}. The index in @{ML Bound} refers to
+  the number of Abstractions (@{ML Abs}) we have to skip until we hit the @{ML Abs} that
+  binds the corresponding variable. However, in Isabelle the names of bound variables are 
+  kept at abstractions for printing purposes, and so should be treated only as ``comments''. 
+  Application in Isabelle is realised with the term-constructor @{ML $}.
+
+  \begin{readmore}
+  Terms are described in detail in \isccite{sec:terms}. Their
+  definition and many useful operations are implemented in @{ML_file "Pure/term.ML"}.
+  \end{readmore}
+
+  Sometimes the internal representation of terms can be surprisingly different
+  from what you see at the user-level, because the layers of
+  parsing/type-checking/pretty printing can be quite elaborate. 
+
+  \begin{exercise}
+  Look at the internal term representation of the following terms, and
+  find out why they are represented like this:
+
+  \begin{itemize}
+  \item @{term "case x of 0 \<Rightarrow> 0 | Suc y \<Rightarrow> y"}  
+  \item @{term "\<lambda>(x,y). P y x"}  
+  \item @{term "{ [x::int] | x. x \<le> -2 }"}  
+  \end{itemize}
+
+  Hint: The third term is already quite big, and the pretty printer
+  may omit parts of it by default. If you want to see all of it, you
+  can use the following ML-function to set the printing depth to a higher 
+  value:
+
+  @{ML [display,gray] "print_depth 50"}
+  \end{exercise}
+
+  The antiquotation @{text "@{prop \<dots>}"} constructs terms of propositional type, 
+  inserting the invisible @{text "Trueprop"}-coercions whenever necessary. 
+  Consider for example the pairs
+
+@{ML_response [display,gray] "(@{term \"P x\"}, @{prop \"P x\"})" 
+"(Free (\"P\", \<dots>) $ Free (\"x\", \<dots>),
+ Const (\"Trueprop\", \<dots>) $ (Free (\"P\", \<dots>) $ Free (\"x\", \<dots>)))"}
+ 
+  where a coercion is inserted in the second component and 
+
+  @{ML_response [display,gray] "(@{term \"P x \<Longrightarrow> Q x\"}, @{prop \"P x \<Longrightarrow> Q x\"})" 
+  "(Const (\"==>\", \<dots>) $ \<dots> $ \<dots>, Const (\"==>\", \<dots>) $ \<dots> $ \<dots>)"}
+
+  where it is not (since it is already constructed by a meta-implication). 
+
+  Types can be constructed using the antiquotation @{text "@{typ \<dots>}"}. For example:
+
+  @{ML_response_fake [display,gray] "@{typ \"bool \<Rightarrow> nat\"}" "bool \<Rightarrow> nat"}
+
+  \begin{readmore}
+  Types are described in detail in \isccite{sec:types}. Their
+  definition and many useful operations are implemented 
+  in @{ML_file "Pure/type.ML"}.
+  \end{readmore}
+*}
+
+
+section {* Constructing Terms and Types Manually\label{sec:terms_types_manually} *} 
+
+text {*
+  While antiquotations are very convenient for constructing terms, they can
+  only construct fixed terms (remember they are ``linked'' at compile-time).
+  However, you often need to construct terms dynamically. For example, a
+  function that returns the implication @{text "\<And>(x::nat). P x \<Longrightarrow> Q x"} taking
+  @{term P} and @{term Q} as arguments can only be written as:
+
+*}
+
+ML{*fun make_imp P Q =
+let
+  val x = Free ("x", @{typ nat})
+in 
+  Logic.all x (Logic.mk_implies (P $ x, Q $ x))
+end *}
+
+text {*
+  The reason is that you cannot pass the arguments @{term P} and @{term Q} 
+  into an antiquotation. For example, the following does \emph{not} work:
+*}
+
+ML{*fun make_wrong_imp P Q = @{prop "\<And>(x::nat). P x \<Longrightarrow> Q x"} *}
+
+text {*
+  To see this, apply @{text "@{term S}"} and @{text "@{term T}"} 
+  to both functions. With @{ML make_imp} we obtain the intended term involving 
+  the given arguments
+
+  @{ML_response [display,gray] "make_imp @{term S} @{term T}" 
+"Const \<dots> $ 
+  Abs (\"x\", Type (\"nat\",[]),
+    Const \<dots> $ (Free (\"S\",\<dots>) $ \<dots>) $ (Free (\"T\",\<dots>) $ \<dots>))"}
+
+  whereas with @{ML make_wrong_imp} we obtain a term involving the @{text "P"} 
+  and @{text "Q"} from the antiquotation.
+
+  @{ML_response [display,gray] "make_wrong_imp @{term S} @{term T}" 
+"Const \<dots> $ 
+  Abs (\"x\", \<dots>,
+    Const \<dots> $ (Const \<dots> $ (Free (\"P\",\<dots>) $ \<dots>)) $ 
+               (Const \<dots> $ (Free (\"Q\",\<dots>) $ \<dots>)))"}
+
+  (FIXME: handy functions for constructing terms: @{ML list_comb}, @{ML lambda})
+*}
+
+
+text {*
+  \begin{readmore}
+  There are many functions in @{ML_file "Pure/term.ML"}, @{ML_file "Pure/logic.ML"} and
+  @{ML_file "HOL/Tools/hologic.ML"} that make such manual constructions of terms 
+  and types easier.\end{readmore}
+
+  Have a look at these files and try to solve the following two exercises:
+*}
+
+text {*
+
+  \begin{exercise}\label{fun:revsum}
+  Write a function @{text "rev_sum : term -> term"} that takes a
+  term of the form @{text "t\<^isub>1 + t\<^isub>2 + \<dots> + t\<^isub>n"} (where @{text "n"} might be zero)
+  and returns the reversed sum @{text "t\<^isub>n + \<dots> + t\<^isub>2 + t\<^isub>1"}. Assume
+  the @{text "t\<^isub>i"} can be arbitrary expressions and also note that @{text "+"} 
+  associates to the left. Try your function on some examples. 
+  \end{exercise}
+
+  \begin{exercise}\label{fun:makesum}
+  Write a function that takes two terms representing natural numbers
+  in unary notation (like @{term "Suc (Suc (Suc 0))"}), and produces the
+  term representing their sum.
+  \end{exercise}
+
+  There are a few subtle issues with constants. They usually crop up when
+  pattern matching terms or types, or when constructing them. While it is perfectly ok
+  to write the function @{text is_true} as follows
+*}
+
+ML{*fun is_true @{term True} = true
+  | is_true _ = false*}
+
+text {*
+  this does not work for picking out @{text "\<forall>"}-quantified terms. The
+  function 
+*}
+
+ML{*fun is_all (@{term All} $ _) = true
+  | is_all _ = false*}
+
+text {* 
+  will not correctly match the formula @{prop "\<forall>x::nat. P x"}: 
+
+  @{ML_response [display,gray] "is_all @{term \"\<forall>x::nat. P x\"}" "false"}
+
+  The problem is that the @{text "@{term \<dots>}"}-antiquotation in the pattern 
+  fixes the type of the constant @{term "All"} to be @{typ "('a \<Rightarrow> bool) \<Rightarrow> bool"} for 
+  an arbitrary but fixed type @{typ "'a"}. A properly working alternative 
+  for this function is
+*}
+
+ML{*fun is_all (Const ("All", _) $ _) = true
+  | is_all _ = false*}
+
+text {* 
+  because now
+
+@{ML_response [display,gray] "is_all @{term \"\<forall>x::nat. P x\"}" "true"}
+
+  matches correctly (the first wildcard in the pattern matches any type and the
+  second any term).
+
+  However, there is still a problem: consider the similar function that
+  attempts to pick out @{text "Nil"}-terms:
+*}
+
+ML{*fun is_nil (Const ("Nil", _)) = true
+  | is_nil _ = false *}
+
+text {* 
+  Unfortunately, this function also does \emph{not} work as expected, since
+
+  @{ML_response [display,gray] "is_nil @{term \"Nil\"}" "false"}
+
+  The problem is that on the ML-level the name of a constant is more
+  subtle than you might expect. The function @{ML is_all} worked correctly,
+  because @{term "All"} is such a fundamental constant, which can be referenced
+  by @{ML "Const (\"All\", some_type)" for some_type}. However, if you look at
+
+  @{ML_response [display,gray] "@{term \"Nil\"}" "Const (\"List.list.Nil\", \<dots>)"}
+
+  the name of the constant @{text "Nil"} depends on the theory in which the
+  term constructor is defined (@{text "List"}) and also in which datatype
+  (@{text "list"}). Even worse, some constants have a name involving
+  type-classes. Consider for example the constants for @{term "zero"} and
+  \mbox{@{text "(op *)"}}:
+
+  @{ML_response [display,gray] "(@{term \"0::nat\"}, @{term \"op *\"})" 
+ "(Const (\"HOL.zero_class.zero\", \<dots>), 
+ Const (\"HOL.times_class.times\", \<dots>))"}
+
+  While you could use the complete name, for example 
+  @{ML "Const (\"List.list.Nil\", some_type)" for some_type}, for referring to or
+  matching against @{text "Nil"}, this would make the code rather brittle. 
+  The reason is that the theory and the name of the datatype can easily change. 
+  To make the code more robust, it is better to use the antiquotation 
+  @{text "@{const_name \<dots>}"}. With this antiquotation you can abstract over the 
+  variable parts of the constant's name. Therefore functions for 
+  matching against constants that have a polymorphic type should 
+  be written as follows.
+*}
+
+ML{*fun is_nil_or_all (Const (@{const_name "Nil"}, _)) = true
+  | is_nil_or_all (Const (@{const_name "All"}, _) $ _) = true
+  | is_nil_or_all _ = false *}
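+
+text {*
+  Now both kinds of constants are matched correctly:
+
+  @{ML_response [display,gray] 
+  "(is_nil_or_all @{term \"Nil\"}, is_nil_or_all @{term \"\<forall>x::nat. P x\"})" 
+  "(true, true)"}
+*}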
+
+text {*
+  Occasionally you have to calculate what the ``base'' name of a given
+  constant is. For this you can use the function @{ML Sign.extern_const} or
+  @{ML Long_Name.base_name}. For example:
+
+  @{ML_response [display,gray] "Sign.extern_const @{theory} \"List.list.Nil\"" "\"Nil\""}
+
+  The difference between the two functions is that @{ML extern_const in Sign} returns
+  the smallest name that is still unique, whereas @{ML base_name in Long_Name} always
+  strips off all qualifiers.
+
+  \begin{readmore}
+  Functions about naming are implemented in @{ML_file "Pure/General/name_space.ML"};
+  functions about signatures in @{ML_file "Pure/sign.ML"}.
+  \end{readmore}
+
+  Although types of terms can often be inferred, there are many
+  situations where you need to construct types manually, especially  
+  when defining constants. For example, a function returning a function 
+  type can be written as follows:
+
+*} 
+
+ML{*fun make_fun_type tau1 tau2 = Type ("fun", [tau1, tau2]) *}
+
+text {* This can be equally written with the combinator @{ML "-->"} as: *}
+
+ML{*fun make_fun_type tau1 tau2 = tau1 --> tau2 *}
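+
+text {*
+  For function types with several arguments there is also the combinator 
+  @{ML "--->"}, which takes a list of argument types:
+
+  @{ML_response_fake [display,gray] 
+  "[@{typ nat}, @{typ bool}] ---> @{typ bool}" 
+  "nat \<Rightarrow> bool \<Rightarrow> bool"}
+*}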
+
+text {*
+  A handy function for manipulating terms is @{ML map_types}: it takes a 
+  function and applies it to every type in a term. You can, for example,
+  change every @{typ nat} in a term into an @{typ int} using the function:
+*}
+
+ML{*fun nat_to_int t =
+  (case t of
+     @{typ nat} => @{typ int}
+   | Type (s, ts) => Type (s, map nat_to_int ts)
+   | _ => t)*}
+
+text {*
+  For example:
+
+@{ML_response_fake [display,gray] 
+"map_types nat_to_int @{term \"a = (1::nat)\"}" 
+"Const (\"op =\", \"int \<Rightarrow> int \<Rightarrow> bool\")
+           $ Free (\"a\", \"int\") $ Const (\"HOL.one_class.one\", \"int\")"}
+
+  (FIXME: readmore about types)
+*}
+
+
+section {* Type-Checking *}
+
+text {* 
+  
+  You can freely construct and manipulate @{ML_type "term"}s and @{ML_type
+  typ}es, since they are just arbitrary unchecked trees. However, you
+  eventually want to see if a term is well-formed, or type-checks, relative to
+  a theory.  Type-checking is done via the function @{ML cterm_of}, which
+  converts a @{ML_type term} into a @{ML_type cterm}, a \emph{certified}
+  term. Unlike @{ML_type term}s, which are just trees, @{ML_type "cterm"}s are
+  abstract objects that are guaranteed to be type-correct, and they can only
+  be constructed via ``official interfaces''.
+
+
+  Type-checking is always relative to a theory context. For now we use
+  the @{ML "@{theory}"} antiquotation to get hold of the current theory.
+  For example you can write:
+
+  @{ML_response_fake [display,gray] "cterm_of @{theory} @{term \"(a::nat) + b = c\"}" "a + b = c"}
+
+  This can also be written with an antiquotation:
+
+  @{ML_response_fake [display,gray] "@{cterm \"(a::nat) + b = c\"}" "a + b = c"}
+
+  Attempting to obtain the certified term for
+
+  @{ML_response_fake_both [display,gray] "@{cterm \"1 + True\"}" "Type unification failed \<dots>"}
+
+  yields an error (since the term is not typable). A slightly more elaborate
+  example that type-checks is:
+
+@{ML_response_fake [display,gray] 
+"let
+  val natT = @{typ \"nat\"}
+  val zero = @{term \"0::nat\"}
+in
+  cterm_of @{theory} 
+      (Const (@{const_name plus}, natT --> natT --> natT) $ zero $ zero)
+end" "0 + 0"}
+
+  In Isabelle, not just terms but also types need to be certified. For example, 
+  you obtain the certified type for the Isabelle type @{typ "nat \<Rightarrow> bool"} on 
+  the ML-level as follows:
+
+  @{ML_response_fake [display,gray]
+  "ctyp_of @{theory} (@{typ nat} --> @{typ bool})"
+  "nat \<Rightarrow> bool"}
+
+  \begin{readmore}
+  For functions related to @{ML_type cterm}s and @{ML_type ctyp}s see 
+  the file @{ML_file "Pure/thm.ML"}.
+  \end{readmore}
+
+  \begin{exercise}
+  Check that the function defined in Exercise~\ref{fun:revsum} returns a
+  result that type-checks.
+  \end{exercise}
+
+  Remember that Isabelle follows the Church-style typing for terms, i.e.~a term contains 
+  enough typing information (constants, free variables and abstractions all carry typing 
+  information) so that it is always clear what the type of a term is. 
+  Given a well-typed term, the function @{ML type_of} returns its type. 
+  Consider for example:
+
+  @{ML_response [display,gray] 
+  "type_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::nat\"})" "bool"}
+
+  To calculate the type, this function traverses the whole term and will
+  detect any typing inconsistency. For example, changing the type of the variable 
+  @{term "x"} from @{typ "nat"} to @{typ "int"} will result in the error message: 
+
+  @{ML_response_fake [display,gray] 
+  "type_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::int\"})" 
+  "*** Exception- TYPE (\"type_of: type mismatch in application\" \<dots>"}
+
+  Since the complete traversal might sometimes be too costly and
+  not necessary, there is the function @{ML fastype_of}, which 
+  also returns the type of a term.
+
+  @{ML_response [display,gray] 
+  "fastype_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::nat\"})" "bool"}
+
+  However, efficiency is gained at the expense of skipping some tests. You 
+  can see this in the following example
+
+  @{ML_response [display,gray] 
+  "fastype_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::int\"})" "bool"}
+
+  where no error is detected.
+
+  Sometimes it is a bit inconvenient to construct a term with 
+  complete typing annotations, especially in cases where the typing 
+  information is redundant. A short-cut is to use the ``place-holder'' 
+  type @{ML "dummyT"} and then let type-inference figure out the 
+  complete type. An example is as follows:
+
+  @{ML_response_fake [display,gray] 
+"let
+  val c = Const (@{const_name \"plus\"}, dummyT) 
+  val one = @{term \"1::nat\"} 
+  val v = Free (\"x\", dummyT)
+in   
+  Syntax.check_term @{context} (c $ one $ v)
+end"
+"Const (\"HOL.plus_class.plus\", \"nat \<Rightarrow> nat \<Rightarrow> nat\") $
+  Const (\"HOL.one_class.one\", \"nat\") $ Free (\"x\", \"nat\")"}
+
+  Instead of explicitly giving the type for the constant @{text "plus"} and the free 
+  variable @{text "x"}, the type-inference fills in the missing information.
+
+  \begin{readmore}
+  See @{ML_file "Pure/Syntax/syntax.ML"} where more functions for reading,
+  checking and pretty-printing of terms are defined. Functions related to 
+  type inference are implemented in @{ML_file "Pure/type.ML"} and 
+  @{ML_file "Pure/type_infer.ML"}. 
+  \end{readmore}
+
+  (FIXME: say something about sorts)
+*}
+
+
+section {* Theorems *}
+
+text {*
+  Just like @{ML_type cterm}s, theorems are abstract objects of type @{ML_type thm} 
+  that can only be built by going through interfaces. As a consequence, every proof 
+  in Isabelle is correct by construction. This follows the tradition of the LCF approach
+  \cite{GordonMilnerWadsworth79}.
+
+
+  To see theorems in ``action'', let us give a proof on the ML-level for the following 
+  statement:
+*}
+
+  lemma 
+   assumes assm\<^isub>1: "\<And>(x::nat). P x \<Longrightarrow> Q x" 
+   and     assm\<^isub>2: "P t"
+   shows "Q t" (*<*)oops(*>*) 
+
+text {*
+  The corresponding ML-code is as follows:
+
+@{ML_response_fake [display,gray]
+"let
+  val assm1 = @{cprop \"\<And>(x::nat). P x \<Longrightarrow> Q x\"}
+  val assm2 = @{cprop \"(P::nat\<Rightarrow>bool) t\"}
+
+  val Pt_implies_Qt = 
+        assume assm1
+        |> forall_elim @{cterm \"t::nat\"};
+  
+  val Qt = implies_elim Pt_implies_Qt (assume assm2);
+in
+  Qt 
+  |> implies_intr assm2
+  |> implies_intr assm1
+end" "\<lbrakk>\<And>x. P x \<Longrightarrow> Q x; P t\<rbrakk> \<Longrightarrow> Q t"}
+
+  This code-snippet constructs the following proof:
+
+  \[
+  \infer[(@{text "\<Longrightarrow>"}$-$intro)]{\vdash @{prop "(\<And>x. P x \<Longrightarrow> Q x) \<Longrightarrow> P t \<Longrightarrow> Q t"}}
+    {\infer[(@{text "\<Longrightarrow>"}$-$intro)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"} \vdash @{prop "P t \<Longrightarrow> Q t"}}
+       {\infer[(@{text "\<Longrightarrow>"}$-$elim)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"}, @{prop "P t"} \vdash @{prop "Q t"}}
+          {\infer[(@{text "\<And>"}$-$elim)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"} \vdash @{prop "P t \<Longrightarrow> Q t"}}
+                 {\infer[(assume)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"} \vdash @{prop "\<And>x. P x \<Longrightarrow> Q x"}}{}}
+                 &
+           \infer[(assume)]{@{prop "P t"} \vdash @{prop "P t"}}{}
+          }
+       }
+    }
+  \]
+
+  However, while we obtained a theorem as result, this theorem is not
+  yet stored in Isabelle's theorem database. So it cannot be referenced later
+  on. How to store theorems will be explained in Section~\ref{sec:storing}.
+
+  \begin{readmore}
+  For the functions @{text "assume"}, @{text "forall_elim"} and so on, 
+  see \isccite{sec:thms}. The basic functions for theorems are defined in
+  @{ML_file "Pure/thm.ML"}. 
+  \end{readmore}
+
+  (FIXME: how to add case-names to goal states - maybe in the next section)
+*}
+
+section {* Theorem Attributes *}
+
+text {*
+  Theorem attributes are @{text "[simp]"}, @{text "[OF \<dots>]"}, @{text
+  "[symmetric]"} and so on. Such attributes are \emph{neither} tags \emph{nor} flags
+  annotated to theorems, but functions that do further processing once a
+  theorem is proven. In particular, it is not possible to find out
+  all theorems that share a given attribute, unless of course
+  the function behind the attribute stores the theorems in a retrievable 
+  data structure. 
+
+  If you want to print out all currently known attributes a theorem 
+  can have, you can use the function:
+
+  @{ML_response_fake [display,gray] "Attrib.print_attributes @{theory}" 
+"COMP:  direct composition with rules (no lifting)
+HOL.dest:  declaration of Classical destruction rule
+HOL.elim:  declaration of Classical elimination rule 
+\<dots>"}
+
+  To explain how to write your own attribute, let us start with an extremely simple 
+  version of the attribute @{text "[symmetric]"}. The purpose of this attribute is
+  to produce the ``symmetric'' version of an equation. The main function behind 
+  this attribute is
+*}
+
+ML{*val my_symmetric = Thm.rule_attribute (fn _ => fn thm => thm RS @{thm sym})*}
+
+text {* 
+  where the function @{ML "Thm.rule_attribute"} expects a function taking a 
+  context (which we ignore in the code above) and a theorem (@{text thm}), and 
+  returns another theorem (namely @{text thm} resolved with the lemma 
+  @{thm [source] sym}: @{thm sym[no_vars]}). The function @{ML "Thm.rule_attribute"} then returns 
+  an attribute.
+
+  Before we can use the attribute, we need to set it up. This can be done
+  using the function @{ML Attrib.setup} as follows.
+*}
+
+setup %gray {* Attrib.setup @{binding "my_sym"} 
+  (Scan.succeed my_symmetric) "applying the sym rule"*}
+
+text {*
+  The attribute does not expect any further arguments (unlike @{text "[OF
+  \<dots>]"}, for example, which can take a list of theorems as argument). Therefore
+  we use the parser @{ML Scan.succeed}. Later on we will also consider
+  attributes taking further arguments. An example for the attribute @{text
+  "[my_sym]"} is the proof
+*} 
+
+lemma test[my_sym]: "2 = Suc (Suc 0)" by simp
+
+text {*
+  which stores the theorem @{thm test} under the name @{thm [source] test}. We 
+  can also use the attribute when referring to this theorem.
+
+  \begin{isabelle}
+  \isacommand{thm}~@{text "test[my_sym]"}\\
+  @{text "> "}~@{thm test[my_sym]}
+  \end{isabelle}
+
+  The purpose of @{ML Thm.rule_attribute} is to directly manipulate theorems.
+  Another usage of attributes is to add and delete theorems from stored data.
+  For example the attribute @{text "[simp]"} adds or deletes a theorem from the
+  current simpset. For these applications, you can use @{ML Thm.declaration_attribute}. 
+  To illustrate this function, let us introduce a reference containing a list
+  of theorems.
+*}
+
+ML{*val my_thms = ref ([]:thm list)*}
+
+text {* 
+  A word of warning: such references must not be used in any code that
+  is meant to be more than just for testing purposes! Here it is only used 
+  to illustrate matters. We will show later how to store data properly without 
+  using references. The function @{ML Thm.declaration_attribute} expects us to 
+  provide two functions that add and delete theorems from this list. 
+  For this we use the two functions:
+*}
+
+ML{*fun my_thms_add thm ctxt =
+  (my_thms := Thm.add_thm thm (!my_thms); ctxt)
+
+fun my_thms_del thm ctxt =
+  (my_thms := Thm.del_thm thm (!my_thms); ctxt)*}
+
+text {*
+  These functions take a theorem and a context; for what we are explaining
+  here it is sufficient that they just return the context unchanged. They do,
+  however, change the reference @{ML my_thms}, whereby the function @{ML Thm.add_thm}
+  adds a theorem if it is not already included in the list, and @{ML
+  Thm.del_thm} deletes one. Both functions use the predicate @{ML
+  Thm.eq_thm_prop}, which compares theorems according to their proved
+  propositions (modulo alpha-equivalence).
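+
+  For example, the following check (just a small sketch to illustrate the 
+  predicate) compares the reflexivity theorem with itself:
+
+  @{ML_response [display,gray]
+  "Thm.eq_thm_prop (@{thm refl}, @{thm refl})" "true"}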
+
+
+  You can turn both functions into attributes using
+*}
+
+ML{*val my_add = Thm.declaration_attribute my_thms_add
+val my_del = Thm.declaration_attribute my_thms_del *}
+
+text {* 
+  and set up the attributes as follows
+*}
+
+setup %gray {* Attrib.setup @{binding "my_thms"}
+  (Attrib.add_del my_add my_del) "maintaining a list of my_thms" *}
+
+text {*
+  Now if you prove the lemma attaching the attribute @{text "[my_thms]"}
+*}
+
+lemma trueI_2[my_thms]: "True" by simp
+
+text {*
+  then you can see the lemma is added to the initially empty list.
+
+  @{ML_response_fake [display,gray]
+  "!my_thms" "[\"True\"]"}
+
+  You can also add theorems using the command \isacommand{declare}.
+*}
+
+declare test[my_thms] trueI_2[my_thms add]
+
+text {*
+  The @{text "add"} is the default operation and does not need
+  to be given. This declaration will cause the theorem list to be 
+  updated as follows.
+
+  @{ML_response_fake [display,gray]
+  "!my_thms"
+  "[\"True\", \"Suc (Suc 0) = 2\"]"}
+
+  The theorem @{thm [source] trueI_2} only appears once, since the 
+  function @{ML Thm.add_thm} tests for duplicates before extending
+  the list. Deletion from the list works as follows:
+*}
+
+declare test[my_thms del]
+
+text {* After this, the theorem list is again: 
+
+  @{ML_response_fake [display,gray]
+  "!my_thms"
+  "[\"True\"]"}
+
+  In this example we used two functions declared as @{ML Thm.declaration_attribute}, 
+  but there can be any number of them. We just have to change the parser for reading
+  the arguments accordingly. 
+
+  However, as said at the beginning of this example, using references for storing theorems is
+  \emph{not} the received way of doing such things. The received way is to 
+  start a ``data slot'' in a context by using the functor @{text GenericDataFun}:
+*}
+
+ML {*structure Data = GenericDataFun
+ (type T = thm list
+  val empty = []
+  val extend = I
+  fun merge _ = Thm.merge_thms) *}
+
+text {*
+  To use this data slot, you only have to change @{ML my_thms_add} and
+  @{ML my_thms_del} to:
+*}
+
+ML{*val thm_add = Data.map o Thm.add_thm
+val thm_del = Data.map o Thm.del_thm*}
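+
+text {*
+  The theorems stored in the data slot can be read back with @{ML Data.get},
+  which expects a generic context; a proof context can be lifted to one with 
+  @{ML Context.Proof}. The function name @{text get_my_thms} below is our 
+  own choice, just for illustration:
+*}
+
+ML{*fun get_my_thms ctxt = Data.get (Context.Proof ctxt)*}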
+
+text {*
+  where @{ML Data.map} updates the data appropriately in the context. Since storing
+  theorems in a special list is such a common task, there is the functor @{text NamedThmsFun},
+  which does most of the work for you. To obtain such a named theorem list, you just
+  declare 
+*}
+
+ML{*structure FooRules = NamedThmsFun 
+ (val name = "foo" 
+  val description = "Rules for foo"); *}
+
+text {*
+  and set up the @{ML_struct FooRules} with the command
+*}
+
+setup %gray {* FooRules.setup *}
+
+text {*
+  This code declares a data slot where the theorems are stored,
+  an attribute @{text foo} (with the @{text add} and @{text del} options
+  for adding and deleting theorems) and an internal ML interface to retrieve and 
+  modify the theorems.
+
+  Furthermore, the facts are made available on the user-level under the dynamic 
+  fact name @{text foo}. For example you can declare three lemmas to be of the kind
+  @{text foo} by:
+*}
+
+lemma rule1[foo]: "A" sorry
+lemma rule2[foo]: "B" sorry
+lemma rule3[foo]: "C" sorry
+
+text {* and undeclare the first one by: *}
+
+declare rule1[foo del]
+
+text {* and query the remaining ones with:
+
+  \begin{isabelle}
+  \isacommand{thm}~@{text "foo"}\\
+  @{text "> ?C"}\\
+  @{text "> ?B"}
+  \end{isabelle}
+
+  On the ML-level the rules marked with @{text "foo"} can be retrieved
+  using the function @{ML FooRules.get}:
+
+  @{ML_response_fake [display,gray] "FooRules.get @{context}" "[\"?C\",\"?B\"]"}
+
+  \begin{readmore}
+  For more information see @{ML_file "Pure/Tools/named_thms.ML"} and also
+  the recipe in Section~\ref{recipe:storingdata} about storing arbitrary
+  data.
+  \end{readmore}
+
+  (FIXME What are: @{text "theory_attributes"}, @{text "proof_attributes"}?)
+
+
+  \begin{readmore}
+  FIXME: @{ML_file "Pure/more_thm.ML"} @{ML_file "Pure/Isar/attrib.ML"}
+  \end{readmore}
+*}
+
+
+section {* Setups (TBD) *}
+
+text {*
+  In the previous section we used \isacommand{setup} in order to make
+  a theorem attribute known to Isabelle. What happens behind the scenes
+  is that \isacommand{setup} expects a function of type 
+  @{ML_type "theory -> theory"}: the input theory is the current theory and the 
+  output is the theory where the theory attribute has been stored.
+  
+  This is a fundamental principle in Isabelle. A similar situation occurs 
+  for example with declaring constants. The function that declares a 
+  constant on the ML-level is @{ML Sign.add_consts_i}. 
+  If you write\footnote{Recall that ML-code  needs to be 
+  enclosed in \isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"}.} 
+*}  
+
+ML{*Sign.add_consts_i [(@{binding "BAR"}, @{typ "nat"}, NoSyn)] @{theory} *}
+
+text {*
+  for declaring the constant @{text "BAR"} with type @{typ nat} and 
+  run the code, then you indeed obtain a theory as a result. But if you 
+  query the constant on the Isabelle level using the command \isacommand{term}
+
+  \begin{isabelle}
+  \isacommand{term}~@{text [quotes] "BAR"}\\
+  @{text "> \"BAR\" :: \"'a\""}
+  \end{isabelle}
+
+  you do not obtain a constant of type @{typ nat}, but a free variable (printed in 
+  blue) of polymorphic type. The problem is that the ML-expression above did 
+  not register the declaration with the current theory. This is what the command
+  \isacommand{setup} is for. The constant is properly declared with
+*}
+
+setup %gray {* Sign.add_consts_i [(@{binding "BAR"}, @{typ "nat"}, NoSyn)] *}
+
+text {* 
+  Now 
+  
+  \begin{isabelle}
+  \isacommand{term}~@{text [quotes] "BAR"}\\
+  @{text "> \"BAR\" :: \"nat\""}
+  \end{isabelle}
+
+  returns a (black) constant with the type @{typ nat}.
+
+  A similar command is \isacommand{local\_setup}, which expects a function
+  of type @{ML_type "local_theory -> local_theory"}. Later on we will also
+  use the commands \isacommand{method\_setup} for installing methods in the
+  current theory and \isacommand{simproc\_setup} for adding new simprocs to
+  the current simpset.
+*}
+
+section {* Theories, Contexts and Local Theories (TBD) *}
+
+text {*
+  There are theories, proof contexts and local theories (in this order, if you
+  want to order them). 
+
+  In contrast to an ordinary theory, which simply consists of a type
+  signature, as well as tables for constants, axioms and theorems, a local
+  theory also contains additional context information, such as locally fixed
+  variables and local assumptions that may be used by the package. The type
+  @{ML_type local_theory} is identical to the type of \emph{proof contexts}
+  @{ML_type "Proof.context"}, although not every proof context constitutes a
+  valid local theory.
+*}
+
+section {* Storing Theorems\label{sec:storing} (TBD) *}
+
+text {* @{ML PureThy.add_thms_dynamic} *}
+
+
+
+(* FIXME: some code below *)
+
+(*<*)
+(*
+setup {*
+ Sign.add_consts_i [(@{binding "bar"}, @{typ "nat"}, NoSyn)] 
+*}
+*)
+lemma "bar = (1::nat)"
+  oops
+
+(*
+setup {*   
+  Sign.add_consts_i [("foo", @{typ "nat"},NoSyn)] 
+ #> PureThy.add_defs false [((@{binding "foo_def"}, 
+       Logic.mk_equals (Const ("FirstSteps.foo", @{typ "nat"}), @{term "1::nat"})), [])] 
+ #> snd
+*}
+*)
+(*
+lemma "foo = (1::nat)"
+  apply(simp add: foo_def)
+  done
+
+thm foo_def
+*)
+(*>*)
+
+section {* Pretty-Printing (TBD) *}
+
+text {*
+  @{ML Pretty.big_list},
+  @{ML Pretty.brk},
+  @{ML Pretty.block},
+  @{ML Pretty.chunks}
+*}
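+
+text {*
+  As a first taste of these functions, the following snippet (only a 
+  sketch) builds a small pretty-tree with a breakable space and prints it:
+*}
+
+ML{*Pretty.writeln 
+  (Pretty.block [Pretty.str "Hello,", Pretty.brk 1, Pretty.str "World!"])*}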
+
+section {* Misc (TBD) *}
+
+ML {*DatatypePackage.get_datatype @{theory} "List.list"*}
+
+end
\ No newline at end of file