theory Generalimports Base FirstStepsbeginchapter {* Isabelle -- The Good, the Bad and the Ugly *}text {* Isabelle is built around a few central ideas. One is the LCF-approach to theorem proving where there is a small trusted core and everything else is built on top of this trusted core.*}section {* Terms and Types *}text {* One way to construct Isabelle terms is by using the antiquotation \mbox{@{text "@{term \<dots>}"}}. For example @{ML_response [display,gray] "@{term \"(a::nat) + b = c\"}" "Const (\"op =\", \<dots>) $ (Const (\"HOL.plus_class.plus\", \<dots>) $ \<dots> $ \<dots>) $ \<dots>"} will show the term @{term "(a::nat) + b = c"}, but printed using the internal representation corresponding to the datatype @{ML_type "term"} defined as follows: *} ML_val %linenosgray{*datatype term = Const of string * typ | Free of string * typ | Var of indexname * typ | Bound of int | Abs of string * typ * term | $ of term * term *}text {* This datatype implements lambda-terms typed in Church-style. As can be seen in Line 5, terms use the usual de Bruijn index mechanism for representing bound variables. For example in @{ML_response_fake [display, gray] "@{term \"\<lambda>x y. x y\"}" "Abs (\"x\", \"'a \<Rightarrow> 'b\", Abs (\"y\", \"'a\", Bound 1 $ Bound 0))"} the indices refer to the number of Abstractions (@{ML Abs}) that we need to skip until we hit the @{ML Abs} that binds the corresponding variable. Constructing a term with dangling de Bruijn indices is possible, but will be flagged as ill-formed when you try to typecheck or certify it (see Section~\ref{sec:typechecking}). Note that the names of bound variables are kept at abstractions for printing purposes, and so should be treated only as ``comments''. Application in Isabelle is realised with the term-constructor @{ML $}. Isabelle makes a distinction between \emph{free} variables (term-constructor @{ML Free} and written on the user level in blue colour) and \emph{schematic} variables (term-constructor @{ML Var} and written with a leading question mark). Consider the following two examples @{ML_response_fake [display, gray]"let val v1 = Var ((\"x\", 3), @{typ bool}) val v2 = Var ((\"x1\", 3), @{typ bool}) val v3 = Free (\"x\", @{typ bool})in string_of_terms @{context} [v1, v2, v3] |> tracingend""?x3, ?x1.3, x"} When constructing terms, you are usually concerned with free variables (as mentioned earlier, you cannot construct schematic variables using the antiquotation @{text "@{term \<dots>}"}). If you deal with theorems, however, you have to observe the distinction. The reason is that only schematic variables can be instantiated with terms when a theorem is applied. A similar distinction between free and schematic variables holds for types (see below). \begin{readmore} Terms and types are described in detail in \isccite{sec:terms}. Their definition and many useful operations are implemented in @{ML_file "Pure/term.ML"}. For constructing terms involving HOL constants, many helper functions are defined in @{ML_file "HOL/Tools/hologic.ML"}. \end{readmore} Constructing terms via antiquotations has the advantage that only typable terms can be constructed. For example @{ML_response_fake_both [display,gray] "@{term \"x x\"}" "Type unification failed: Occurs check!"} raises a typing error, while it is perfectly ok to construct the term @{ML_response_fake [display,gray] "let val omega = Free (\"x\", @{typ nat}) $ Free (\"x\", @{typ nat})in tracing (string_of_term @{context} omega)end" "x x"} with the raw ML-constructors. 
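To get a feel for the raw constructors, here is a small additional sketch (the value @{text "raw_id"} is just an illustrative name, not a library function) that assembles the identity function on natural numbers by hand, using @{ML Abs} and a de Bruijn index for the bound variable, and then applies it to a free variable:*}

ML{*(* an illustrative example: the term %x::nat. x built directly from the
   constructors above; Bound 0 refers to the enclosing abstraction *)
val raw_id = Abs ("x", @{typ nat}, Bound 0)

(* applying it to a free variable and printing the result *)
val _ = tracing (string_of_term @{context} (raw_id $ Free ("y", @{typ nat})))*}

text {* 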
Sometimes the internal representation of terms can be surprisingly different from what you see at the user-level, because the layers of parsing/type-checking/pretty printing can be quite elaborate. \begin{exercise} Look at the internal term representation of the following terms, and find out why they are represented like this: \begin{itemize} \item @{term "case x of 0 \<Rightarrow> 0 | Suc y \<Rightarrow> y"} \item @{term "\<lambda>(x,y). P y x"} \item @{term "{ [x::int] | x. x \<le> -2 }"} \end{itemize} Hint: The third term is already quite big, and the pretty printer may omit parts of it by default. If you want to see all of it, you can use the following ML-function to set the printing depth to a higher value: @{ML [display,gray] "print_depth 50"} \end{exercise} The antiquotation @{text "@{prop \<dots>}"} constructs terms by inserting the usually invisible @{text "Trueprop"}-coercions whenever necessary. Consider for example the pairs@{ML_response [display,gray] "(@{term \"P x\"}, @{prop \"P x\"})" "(Free (\"P\", \<dots>) $ Free (\"x\", \<dots>), Const (\"Trueprop\", \<dots>) $ (Free (\"P\", \<dots>) $ Free (\"x\", \<dots>)))"} where a coercion is inserted in the second component and @{ML_response [display,gray] "(@{term \"P x \<Longrightarrow> Q x\"}, @{prop \"P x \<Longrightarrow> Q x\"})" "(Const (\"==>\", \<dots>) $ \<dots> $ \<dots>, Const (\"==>\", \<dots>) $ \<dots> $ \<dots>)"} where it is not (since it is already constructed by a meta-implication). The purpose of the @{text "Trueprop"}-coercion is to embed formulae of an object logic, for example HOL, into the meta-logic of Isabelle. It is needed whenever a term is constructed that will be proved as a theorem. As already seen above, types can be constructed using the antiquotation @{text "@{typ \<dots>}"}. For example: @{ML_response_fake [display,gray] "@{typ \"bool \<Rightarrow> nat\"}" "bool \<Rightarrow> nat"} The corresponding datatype is*}ML_val{*datatype typ = Type of string * typ list | TFree of string * sort | TVar of indexname * sort *}text {* As with terms, there is a distinction between free type variables (term-constructor @{ML "TFree"}) and schematic type variables (term-constructor @{ML "TVar"}). Type constants, like @{typ "int"} or @{typ bool}, are types with an empty list of argument types. However, it is a bit difficult to show an example, because Isabelle always pretty-prints types (unlike terms). Here is a contrived example: @{ML_response [display, gray] "if Type (\"bool\", []) = @{typ \"bool\"} then true else false" "true"} \begin{readmore} Types are described in detail in \isccite{sec:types}. Their definition and many useful operations are implemented in @{ML_file "Pure/type.ML"}. \end{readmore}*}section {* Constructing Terms and Types Manually\label{sec:terms_types_manually} *} text {* While antiquotations are very convenient for constructing terms, they can only construct fixed terms (remember they are ``linked'' at compile-time). However, you often need to construct terms dynamically. For example, a function that returns the implication @{text "\<And>(x::nat). 
P x \<Longrightarrow> Q x"} taking @{term P} and @{term Q} as arguments can only be written as:*}ML{*fun make_imp P Q =let val x = Free ("x", @{typ nat})in Logic.all x (Logic.mk_implies (P $ x, Q $ x))end *}text {* The reason is that you cannot pass the arguments @{term P} and @{term Q} into an antiquotation.\footnote{At least not at the moment.} For example the following does \emph{not} work.*}ML{*fun make_wrong_imp P Q = @{prop "\<And>(x::nat). P x \<Longrightarrow> Q x"} *}text {* To see this, apply @{text "@{term S}"} and @{text "@{term T}"} to both functions. With @{ML make_imp} you obtain the intended term involving the given arguments @{ML_response [display,gray] "make_imp @{term S} @{term T}" "Const \<dots> $ Abs (\"x\", Type (\"nat\",[]), Const \<dots> $ (Free (\"S\",\<dots>) $ \<dots>) $ (Free (\"T\",\<dots>) $ \<dots>))"} whereas with @{ML make_wrong_imp} you obtain a term involving the @{term "P"} and @{text "Q"} from the antiquotation. @{ML_response [display,gray] "make_wrong_imp @{term S} @{term T}" "Const \<dots> $ Abs (\"x\", \<dots>, Const \<dots> $ (Const \<dots> $ (Free (\"P\",\<dots>) $ \<dots>)) $ (Const \<dots> $ (Free (\"Q\",\<dots>) $ \<dots>)))"} There are a number of handy functions that are frequently used for constructing terms. One is the function @{ML_ind list_comb}, which takes a term and a list of terms as arguments, and produces as output the term list applied to the term. For example@{ML_response_fake [display,gray]"let val trm = @{term \"P::nat\"} val args = [@{term \"True\"}, @{term \"False\"}]in list_comb (trm, args)end""Free (\"P\", \"nat\") $ Const (\"True\", \"bool\") $ Const (\"False\", \"bool\")"} Another handy function is @{ML_ind lambda}, which abstracts a variable in a term. For example @{ML_response_fake [display,gray]"let val x_nat = @{term \"x::nat\"} val trm = @{term \"(P::nat \<Rightarrow> bool) x\"}in lambda x_nat trmend" "Abs (\"x\", \"nat\", Free (\"P\", \"bool \<Rightarrow> bool\") $ Bound 0)"} In this example, @{ML lambda} produces a de Bruijn index (i.e.~@{ML "Bound 0"}), and an abstraction. It also records the type of the abstracted variable and for printing purposes also its name. Note that because of the typing annotation on @{text "P"}, the variable @{text "x"} in @{text "P x"} is of the same type as the abstracted variable. If it is of different type, as in @{ML_response_fake [display,gray]"let val x_int = @{term \"x::int\"} val trm = @{term \"(P::nat \<Rightarrow> bool) x\"}in lambda x_int trmend" "Abs (\"x\", \"int\", Free (\"P\", \"nat \<Rightarrow> bool\") $ Free (\"x\", \"nat\"))"} then the variable @{text "Free (\"x\", \"int\")"} is \emph{not} abstracted. This is a fundamental principle of Church-style typing, where variables with the same name still differ, if they have different type. There is also the function @{ML_ind subst_free} with which terms can be replaced by other terms. For example below, we will replace in @{term "(f::nat \<Rightarrow> nat \<Rightarrow> nat) 0 x"} the subterm @{term "(f::nat \<Rightarrow> nat \<Rightarrow> nat) 0"} by @{term y}, and @{term x} by @{term True}. 
@{ML_response_fake [display,gray]"let val sub1 = (@{term \"(f::nat \<Rightarrow> nat \<Rightarrow> nat) 0\"}, @{term \"y::nat \<Rightarrow> nat\"}) val sub2 = (@{term \"x::nat\"}, @{term \"True\"}) val trm = @{term \"((f::nat \<Rightarrow> nat \<Rightarrow> nat) 0) x\"}in subst_free [sub1, sub2] trmend" "Free (\"y\", \"nat \<Rightarrow> nat\") $ Const (\"True\", \"bool\")"} As can be seen, @{ML subst_free} does not take typability into account. However, it takes alpha-equivalence into account: @{ML_response_fake [display, gray]"let val sub = (@{term \"(\<lambda>y::nat. y)\"}, @{term \"x::nat\"}) val trm = @{term \"(\<lambda>x::nat. x)\"}in subst_free [sub] trmend" "Free (\"x\", \"nat\")"} Similarly, the function @{ML_ind subst_bounds} replaces loose bound variables with terms. To see how this function works, let us implement a function that strips off the outermost quantifiers in a term.*}ML{*fun strip_alls (Const ("All", _) $ Abs (n, T, t)) = strip_alls t |>> cons (Free (n, T)) | strip_alls t = ([], t) *}text {* The function returns a pair consisting of the stripped off variables and the body of the universal quantifications. For example @{ML_response_fake [display, gray] "strip_alls @{term \"\<forall>x y. x = (y::bool)\"}""([Free (\"x\", \"bool\"), Free (\"y\", \"bool\")], Const (\"op =\", \<dots>) $ Bound 1 $ Bound 0)"} After calling @{ML strip_alls}, you obtain a term with loose bound variables. With the function @{ML subst_bounds}, you can replace these loose @{ML_ind Bound}s with the stripped off variables. @{ML_response_fake [display, gray, linenos] "let val (vrs, trm) = strip_alls @{term \"\<forall>x y. x = (y::bool)\"}in subst_bounds (rev vrs, trm) |> string_of_term @{context} |> tracingend" "x = y"} Note that in Line 4 we had to reverse the list of variables that @{ML strip_alls} returned. The reason is that the head of the list the function @{ML subst_bounds} takes is the replacement for @{ML "Bound 0"}, the next element for @{ML "Bound 1"} and so on. There are many convenient functions that construct specific HOL-terms. For example @{ML_ind mk_eq in HOLogic} constructs an equality out of two terms. The types needed in this equality are calculated from the type of the arguments. For example@{ML_response_fake [gray,display]"let val eq = HOLogic.mk_eq (@{term \"True\"}, @{term \"False\"})in string_of_term @{context} eq |> tracingend" "True = False"}*}text {* \begin{readmore} There are many functions in @{ML_file "Pure/term.ML"}, @{ML_file "Pure/logic.ML"} and @{ML_file "HOL/Tools/hologic.ML"} that make such manual constructions of terms and types easier. \end{readmore} When constructing terms manually, there are a few subtle issues with constants. They usually crop up when pattern matching terms or types, or when constructing them. While it is perfectly ok to write the function @{text is_true} as follows*}ML{*fun is_true @{term True} = true | is_true _ = false*}text {* this does not work for picking out @{text "\<forall>"}-quantified terms. Because the function *}ML{*fun is_all (@{term All} $ _) = true | is_all _ = false*}text {* will not correctly match the formula @{prop[source] "\<forall>x::nat. P x"}: @{ML_response [display,gray] "is_all @{term \"\<forall>x::nat. P x\"}" "false"} The problem is that the @{text "@term"}-antiquotation in the pattern fixes the type of the constant @{term "All"} to be @{typ "('a \<Rightarrow> bool) \<Rightarrow> bool"} for an arbitrary, but fixed type @{typ "'a"}. 
A properly working alternative for this function is*}ML{*fun is_all (Const ("All", _) $ _) = true | is_all _ = false*}text {* because now@{ML_response [display,gray] "is_all @{term \"\<forall>x::nat. P x\"}" "true"} matches correctly (the first wildcard in the pattern matches any type and the second any term). However there is still a problem: consider the similar function that attempts to pick out @{text "Nil"}-terms:*}ML{*fun is_nil (Const ("Nil", _)) = true | is_nil _ = false *}text {* Unfortunately, also this function does \emph{not} work as expected, since @{ML_response [display,gray] "is_nil @{term \"Nil\"}" "false"} The problem is that on the ML-level the name of a constant is more subtle than you might expect. The function @{ML is_all} worked correctly, because @{term "All"} is such a fundamental constant, which can be referenced by @{ML "Const (\"All\", some_type)" for some_type}. However, if you look at @{ML_response [display,gray] "@{term \"Nil\"}" "Const (\"List.list.Nil\", \<dots>)"} the name of the constant @{text "Nil"} depends on the theory in which the term constructor is defined (@{text "List"}) and also in which datatype (@{text "list"}). Even worse, some constants have a name involving type-classes. Consider for example the constants for @{term "zero"} and \mbox{@{text "(op *)"}}: @{ML_response [display,gray] "(@{term \"0::nat\"}, @{term \"(op *)\"})" "(Const (\"HOL.zero_class.zero\", \<dots>), Const (\"HOL.times_class.times\", \<dots>))"} While you could use the complete name, for example @{ML "Const (\"List.list.Nil\", some_type)" for some_type}, for referring to or matching against @{text "Nil"}, this would make the code rather brittle. The reason is that the theory and the name of the datatype can easily change. To make the code more robust, it is better to use the antiquotation @{text "@{const_name \<dots>}"}. With this antiquotation you can harness the variable parts of the constant's name. Therefore a function for matching against constants that have a polymorphic type should be written as follows.*}ML{*fun is_nil_or_all (Const (@{const_name "Nil"}, _)) = true | is_nil_or_all (Const (@{const_name "All"}, _) $ _) = true | is_nil_or_all _ = false *}text {* The antiquotation for properly referencing type constants is @{text "@{type_name \<dots>}"}. For example @{ML_response [display,gray] "@{type_name \"list\"}" "\"List.list\""} (FIXME: Explain the following better.) Occasionally you have to calculate what the ``base'' name of a given constant is. For this you can use the function @{ML_ind "Sign.extern_const"} or @{ML_ind Long_Name.base_name}. For example: @{ML_response [display,gray] "Sign.extern_const @{theory} \"List.list.Nil\"" "\"Nil\""} The difference between both functions is that @{ML extern_const in Sign} returns the smallest name that is still unique, whereas @{ML base_name in Long_Name} always strips off all qualifiers. \begin{readmore} Functions about naming are implemented in @{ML_file "Pure/General/name_space.ML"}; functions about signatures in @{ML_file "Pure/sign.ML"}. \end{readmore} Although types of terms can often be inferred, there are many situations where you need to construct types manually, especially when defining constants. 
For example the function returning a function type is as follows:*} ML{*fun make_fun_type ty1 ty2 = Type ("fun", [ty1, ty2]) *}text {* This can be equally written with the combinator @{ML_ind "-->"} as: *}ML{*fun make_fun_type ty1 ty2 = ty1 --> ty2 *}text {* If you want to construct a function type with more than one argument type, then you can use @{ML_ind "--->"}.*}ML{*fun make_fun_types tys ty = tys ---> ty *}text {* A handy function for manipulating terms is @{ML_ind map_types}: it takes a function and applies it to every type in a term. You can, for example, change every @{typ nat} in a term into an @{typ int} using the function:*}ML{*fun nat_to_int ty = (case ty of @{typ nat} => @{typ int} | Type (s, tys) => Type (s, map nat_to_int tys) | _ => ty)*}text {* Here is an example:@{ML_response_fake [display,gray] "map_types nat_to_int @{term \"a = (1::nat)\"}" "Const (\"op =\", \"int \<Rightarrow> int \<Rightarrow> bool\") $ Free (\"a\", \"int\") $ Const (\"HOL.one_class.one\", \"int\")"} If you want to obtain the list of free type-variables of a term, you can use the function @{ML_ind add_tfrees in Term} (similarly @{ML_ind add_tvars in Term} for the schematic type-variables). One would expect that such functions take a term as input and return a list of types. But their type is actually @{text[display] "Term.term -> (string * Term.sort) list -> (string * Term.sort) list"} that is they take, besides a term, also a list of type-variables as input. So in order to obtain the list of type-variables of a term you have to call them as follows @{ML_response [gray,display] "Term.add_tfrees @{term \"(a, b)\"} []" "[(\"'b\", [\"HOL.type\"]), (\"'a\", [\"HOL.type\"])]"} The reason for this definition is that @{ML add_tfrees in Term} can be easily folded over a list of terms. Similarly for all functions named @{text "add_*"} in @{ML_file "Pure/term.ML"}. \begin{exercise}\label{fun:revsum} Write a function @{text "rev_sum : term -> term"} that takes a term of the form @{text "t\<^isub>1 + t\<^isub>2 + \<dots> + t\<^isub>n"} (whereby @{text "n"} might be one) and returns the reversed sum @{text "t\<^isub>n + \<dots> + t\<^isub>2 + t\<^isub>1"}. Assume the @{text "t\<^isub>i"} can be arbitrary expressions and also note that @{text "+"} associates to the left. Try your function on some examples. \end{exercise} \begin{exercise}\label{fun:makesum} Write a function which takes two terms representing natural numbers in unary notation (like @{term "Suc (Suc (Suc 0))"}), and produces the number representing their sum. \end{exercise} \begin{exercise}\footnote{Personal communication of de Bruijn to Dyckhoff.}\label{ex:debruijn} Implement the function, which we below name deBruijn, that depends on a natural number n$>$0 and constructs terms of the form: \begin{center} \begin{tabular}{r@ {\hspace{2mm}}c@ {\hspace{2mm}}l} {\it rhs n} & $\dn$ & {\large$\bigwedge$}{\it i=1\ldots n. P\,i}\\ {\it lhs n} & $\dn$ & {\large$\bigwedge$}{\it i=1\ldots n. 
P\,i = P (i + 1 mod n)} $\longrightarrow$ {\it rhs n}\\ {\it deBruijn n} & $\dn$ & {\it lhs n} $\longrightarrow$ {\it rhs n}\\ \end{tabular} \end{center} For n=3 this function returns the term \begin{center} \begin{tabular}{l} (P 1 = P 2 $\longrightarrow$ P 1 $\wedge$ P 2 $\wedge$ P 3) $\wedge$\\ (P 2 = P 3 $\longrightarrow$ P 1 $\wedge$ P 2 $\wedge$ P 3) $\wedge$\\ (P 3 = P 1 $\longrightarrow$ P 1 $\wedge$ P 2 $\wedge$ P 3) $\longrightarrow$ P 1 $\wedge$ P 2 $\wedge$ P 3 \end{tabular} \end{center} Make sure you use the functions defined in @{ML_file "HOL/Tools/hologic.ML"} for constructing the terms for the logical connectives. \end{exercise}*}section {* Type-Checking\label{sec:typechecking} *}text {* You can freely construct and manipulate @{ML_type "term"}s and @{ML_type typ}es, since they are just arbitrary unchecked trees. However, you eventually want to see if a term is well-formed, or type-checks, relative to a theory. Type-checking is done via the function @{ML_ind cterm_of}, which converts a @{ML_type term} into a @{ML_type cterm}, a \emph{certified} term. Unlike @{ML_type term}s, which are just trees, @{ML_type "cterm"}s are abstract objects that are guaranteed to be type-correct, and they can only be constructed via ``official interfaces''. Type-checking is always relative to a theory context. For now we use the @{ML "@{theory}"} antiquotation to get hold of the current theory. For example you can write: @{ML_response_fake [display,gray] "cterm_of @{theory} @{term \"(a::nat) + b = c\"}" "a + b = c"} This can also be written with an antiquotation: @{ML_response_fake [display,gray] "@{cterm \"(a::nat) + b = c\"}" "a + b = c"} Attempting to obtain the certified term for @{ML_response_fake_both [display,gray] "@{cterm \"1 + True\"}" "Type unification failed \<dots>"} yields an error (since the term is not typable). A slightly more elaborate example that type-checks is:@{ML_response_fake [display,gray] "let val natT = @{typ \"nat\"} val zero = @{term \"0::nat\"}in cterm_of @{theory} (Const (@{const_name plus}, natT --> natT --> natT) $ zero $ zero)end" "0 + 0"} In Isabelle not just terms need to be certified, but also types. For example, you obtain the certified type for the Isabelle type @{typ "nat \<Rightarrow> bool"} on the ML-level as follows: @{ML_response_fake [display,gray] "ctyp_of @{theory} (@{typ nat} --> @{typ bool})" "nat \<Rightarrow> bool"} or with the antiquotation: @{ML_response_fake [display,gray] "@{ctyp \"nat \<Rightarrow> bool\"}" "nat \<Rightarrow> bool"} \begin{readmore} For functions related to @{ML_type cterm}s and @{ML_type ctyp}s see the file @{ML_file "Pure/thm.ML"}. \end{readmore} Remember Isabelle follows the Church-style typing for terms, i.e., a term contains enough typing information (constants, free variables and abstractions all have typing information) so that it is always clear what the type of a term is. Given a well-typed term, the function @{ML_ind type_of} returns the type of a term. Consider for example: @{ML_response [display,gray] "type_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::nat\"})" "bool"} To calculate the type, this function traverses the whole term and will detect any typing inconsistency. 
For example changing the type of the variable @{term "x"} from @{typ "nat"} to @{typ "int"} will result in the error message: @{ML_response_fake [display,gray] "type_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::int\"})" "*** Exception- TYPE (\"type_of: type mismatch in application\" \<dots>"} Since the complete traversal might sometimes be too costly and not necessary, there is the function @{ML_ind fastype_of}, which also returns the type of a term. @{ML_response [display,gray] "fastype_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::nat\"})" "bool"} However, efficiency is gained at the expense of skipping some tests. You can see this in the following example @{ML_response [display,gray] "fastype_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::int\"})" "bool"} where no error is detected. Sometimes it is a bit inconvenient to construct a term with complete typing annotations, especially in cases where the typing information is redundant. A short-cut is to use the ``place-holder'' type @{ML_ind dummyT} and then let type-inference figure out the complete type. An example is as follows: @{ML_response_fake [display,gray] "let val c = Const (@{const_name \"plus\"}, dummyT) val one = @{term \"1::nat\"} val v = Free (\"x\", dummyT)in Syntax.check_term @{context} (c $ one $ v)end""Const (\"HOL.plus_class.plus\", \"nat \<Rightarrow> nat \<Rightarrow> nat\") $ Const (\"HOL.one_class.one\", \"nat\") $ Free (\"x\", \"nat\")"} Instead of explicitly giving the type for the constant @{text "plus"} and the free variable @{text "x"}, type-inference fills in the missing information. \begin{readmore} See @{ML_file "Pure/Syntax/syntax.ML"} where more functions about reading, checking and pretty-printing of terms are defined. Functions related to type-inference are implemented in @{ML_file "Pure/type.ML"} and @{ML_file "Pure/type_infer.ML"}. \end{readmore} (FIXME: say something about sorts) \begin{exercise} Check that the function defined in Exercise~\ref{fun:revsum} returns a result that type-checks. See what happens to the solutions of this exercise given in \ref{ch:solutions} when they receive an ill-typed term as input. \end{exercise}*}section {* Theorems *}text {* Just like @{ML_type cterm}s, theorems are abstract objects of type @{ML_type thm} that can only be built by going through interfaces. As a consequence, every proof in Isabelle is correct by construction. This follows the tradition of the LCF approach \cite{GordonMilnerWadsworth79}. To see theorems in ``action'', let us give a proof on the ML-level for the following statement:*} lemma assumes assm\<^isub>1: "\<And>(x::nat). P x \<Longrightarrow> Q x" and assm\<^isub>2: "P t" shows "Q t" (*<*)oops(*>*) text {* The corresponding ML-code is as follows:@{ML_response_fake [display,gray]"let val assm1 = @{cprop \"\<And>(x::nat). P x \<Longrightarrow> Q x\"} val assm2 = @{cprop \"(P::nat\<Rightarrow>bool) t\"} val Pt_implies_Qt = assume assm1 |> forall_elim @{cterm \"t::nat\"}; val Qt = implies_elim Pt_implies_Qt (assume assm2);in Qt |> implies_intr assm2 |> implies_intr assm1end" "\<lbrakk>\<And>x. P x \<Longrightarrow> Q x; P t\<rbrakk> \<Longrightarrow> Q t"} This code-snippet constructs the following proof: \[ \infer[(@{text "\<Longrightarrow>"}$-$intro)]{\vdash @{prop "(\<And>x. P x \<Longrightarrow> Q x) \<Longrightarrow> P t \<Longrightarrow> Q t"}} {\infer[(@{text "\<Longrightarrow>"}$-$intro)]{@{prop "\<And>x. 
P x \<Longrightarrow> Q x"} \vdash @{prop "P t \<Longrightarrow> Q t"}} {\infer[(@{text "\<Longrightarrow>"}$-$elim)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"}, @{prop "P t"} \vdash @{prop "Q t"}} {\infer[(@{text "\<And>"}$-$elim)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"} \vdash @{prop "P t \<Longrightarrow> Q t"}} {\infer[(assume)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"} \vdash @{prop "\<And>x. P x \<Longrightarrow> Q x"}}{}} & \infer[(assume)]{@{prop "P t"} \vdash @{prop "P t"}}{} } } } \] However, while we obtained a theorem as result, this theorem is not yet stored in Isabelle's theorem database. So it cannot be referenced later on. How to store theorems will be explained in Section~\ref{sec:storing}. \begin{readmore} For the functions @{text "assume"}, @{text "forall_elim"} etc see \isccite{sec:thms}. The basic functions for theorems are defined in @{ML_file "Pure/thm.ML"}. \end{readmore} (FIXME: handy functions working on theorems, like @{ML_ind rulify in ObjectLogic} and so on) (FIXME: @{ML_ind prove in Goal}) (FIXME: how to add case-names to goal states - maybe in the next section) (FIXME: example for how to add theorem styles)*}ML {*fun strip_assums_all (params, Const("all",_) $ Abs(a, T, t)) = strip_assums_all ((a, T)::params, t) | strip_assums_all (params, B) = (params, B)fun style_parm_premise i ctxt t = let val prems = Logic.strip_imp_prems t in if i <= length prems then let val (params,t) = strip_assums_all([], nth prems (i - 1)) in subst_bounds(map Free params, t) end else error ("Not enough premises for prem" ^ string_of_int i ^ " in propositon: " ^ string_of_term ctxt t) end;*}ML {*strip_assums_all ([], @{term "\<And>x y. A x y"})*}setup %gray {* TermStyle.add_style "no_all_prem1" (style_parm_premise 1) #> TermStyle.add_style "no_all_prem2" (style_parm_premise 2)*}lemma shows "A \<Longrightarrow> B" and "C \<Longrightarrow> D"oopssection {* Setups (TBD) *}text {* In the previous section we used \isacommand{setup} in order to make a theorem attribute known to Isabelle. What happens behind the scenes is that \isacommand{setup} expects a function of type @{ML_type "theory -> theory"}: the input theory is the current theory and the output the theory where the theory attribute has been stored. This is a fundamental principle in Isabelle. A similar situation occurs for example with declaring constants. The function that declares a constant on the ML-level is @{ML_ind add_consts_i in Sign}. If you write\footnote{Recall that ML-code needs to be enclosed in \isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"}.} *} ML{*Sign.add_consts_i [(@{binding "BAR"}, @{typ "nat"}, NoSyn)] @{theory} *}text {* for declaring the constant @{text "BAR"} with type @{typ nat} and run the code, then you indeed obtain a theory as result. But if you query the constant on the Isabelle level using the command \isacommand{term} \begin{isabelle} \isacommand{term}~@{text [quotes] "BAR"}\\ @{text "> \"BAR\" :: \"'a\""} \end{isabelle} you do not obtain a constant of type @{typ nat}, but a free variable (printed in blue) of polymorphic type. The problem is that the ML-expression above did not register the declaration with the current theory. This is what the command \isacommand{setup} is for. The constant is properly declared with*}setup %gray {* Sign.add_consts_i [(@{binding "BAR"}, @{typ "nat"}, NoSyn)] *}text {* Now \begin{isabelle} \isacommand{term}~@{text [quotes] "BAR"}\\ @{text "> \"BAR\" :: \"nat\""} \end{isabelle} returns a (black) constant with the type @{typ nat}. 
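Since \isacommand{setup} merely expects a @{ML_type "theory -> theory"} function, you can also chain several such declarations into a single invocation with the composition combinator @{text "#>"}. The following sketch, which declares two further dummy constants (their names are of course arbitrary), illustrates this:*}

setup %gray {* 
  Sign.add_consts_i [(@{binding "BAZ1"}, @{typ "nat"}, NoSyn)] #> 
  Sign.add_consts_i [(@{binding "BAZ2"}, @{typ "nat \<Rightarrow> nat"}, NoSyn)] 
*}

text {* 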
A similar command is \isacommand{local\_setup}, which expects a function of type @{ML_type "local_theory -> local_theory"}. Later on we will also use the commands \isacommand{method\_setup} for installing methods in the current theory and \isacommand{simproc\_setup} for adding new simprocs to the current simpset.*}section {* Theorem Attributes\label{sec:attributes} *}text {* Theorem attributes are @{text "[symmetric]"}, @{text "[THEN \<dots>]"}, @{text "[simp]"} and so on. Such attributes are \emph{neither} tags \emph{nor} flags annotated to theorems, but functions that do further processing once a theorem is proved. In particular, it is not possible to find out what are all theorems that have a given attribute in common, unless of course the function behind the attribute stores the theorems in a retrievable data structure. If you want to print out all currently known attributes a theorem can have, you can use the Isabelle command \begin{isabelle} \isacommand{print\_attributes}\\ @{text "> COMP: direct composition with rules (no lifting)"}\\ @{text "> HOL.dest: declaration of Classical destruction rule"}\\ @{text "> HOL.elim: declaration of Classical elimination rule"}\\ @{text "> \<dots>"} \end{isabelle} The theorem attributes fall roughly into two categories: the first category manipulates the proved theorem (for example @{text "[symmetric]"} and @{text "[THEN \<dots>]"}), and the second stores the proved theorem somewhere as data (for example @{text "[simp]"}, which adds the theorem to the current simpset). To explain how to write your own attribute, let us start with an extremely simple version of the attribute @{text "[symmetric]"}. The purpose of this attribute is to produce the ``symmetric'' version of an equation. The main function behind this attribute is*}ML{*val my_symmetric = Thm.rule_attribute (fn _ => fn thm => thm RS @{thm sym})*}text {* where the function @{ML_ind rule_attribute in Thm} expects a function taking a context (which we ignore in the code above) and a theorem (@{text thm}), and returns another theorem (namely @{text thm} resolved with the theorem @{thm [source] sym}: @{thm sym[no_vars]}; the function @{ML_ind "RS"} is explained in Section~\ref{sec:simpletacs}). The function @{ML rule_attribute in Thm} then returns an attribute. Before we can use the attribute, we need to set it up. This can be done using the Isabelle command \isacommand{attribute\_setup} as follows:*}attribute_setup %gray my_sym = {* Scan.succeed my_symmetric *} "applying the sym rule"text {* Inside the @{text "\<verbopen> \<dots> \<verbclose>"}, we have to specify a parser for the theorem attribute. Since the attribute does not expect any further arguments (unlike @{text "[THEN \<dots>]"}, for example), we use the parser @{ML Scan.succeed}. Later on we will also consider attributes taking further arguments. An example for the attribute @{text "[my_sym]"} is the proof*} lemma test[my_sym]: "2 = Suc (Suc 0)" by simptext {* which stores the theorem @{thm test} under the name @{thm [source] test}. You can see this, if you query the lemma: \begin{isabelle} \isacommand{thm}~@{text "test"}\\ @{text "> "}~@{thm test} \end{isabelle} We can also use the attribute when referring to this theorem: \begin{isabelle} \isacommand{thm}~@{text "test[my_sym]"}\\ @{text "> "}~@{thm test[my_sym]} \end{isabelle} An alternative for setting up an attribute is the function @{ML_ind setup in Attrib}. 
So instead of using \isacommand{attribute\_setup}, you can also set up the attribute as follows:*}ML{*Attrib.setup @{binding "my_sym"} (Scan.succeed my_symmetric) "applying the sym rule" *}text {* This gives a function from @{ML_type "Context.theory -> Context.theory"}, which can be used for example with \isacommand{setup}. As an example of a slightly more complicated theorem attribute, we implement our own version of @{text "[THEN \<dots>]"}. This attribute will take a list of theorems as argument and resolve the proved theorem with this list (one theorem after another). The code for this attribute is*}ML{*fun MY_THEN thms = Thm.rule_attribute (fn _ => fn thm => foldl ((op RS) o swap) thm thms)*}text {* where @{ML swap} swaps the components of a pair. The setup of this theorem attribute uses the parser @{ML thms in Attrib}, which parses a list of theorems. *}attribute_setup %gray MY_THEN = {* Attrib.thms >> MY_THEN *} "resolving the list of theorems with the proved theorem"text {* You can, for example, use this theorem attribute to turn an equation into a meta-equation: \begin{isabelle} \isacommand{thm}~@{text "test[MY_THEN eq_reflection]"}\\ @{text "> "}~@{thm test[MY_THEN eq_reflection]} \end{isabelle} If you need the symmetric version as a meta-equation, you can write \begin{isabelle} \isacommand{thm}~@{text "test[MY_THEN sym eq_reflection]"}\\ @{text "> "}~@{thm test[MY_THEN sym eq_reflection]} \end{isabelle} It is also possible to combine different theorem attributes, as in: \begin{isabelle} \isacommand{thm}~@{text "test[my_sym, MY_THEN eq_reflection]"}\\ @{text "> "}~@{thm test[my_sym, MY_THEN eq_reflection]} \end{isabelle} However, here also a weakness of the concept of theorem attributes shows through: since theorem attributes can be arbitrary functions, they do not in general commute. If you try \begin{isabelle} \isacommand{thm}~@{text "test[MY_THEN eq_reflection, my_sym]"}\\ @{text "> "}~@{text "exception THM 1 raised: RSN: no unifiers"} \end{isabelle} you get an exception indicating that the theorem @{thm [source] sym} does not resolve with meta-equations. The purpose of @{ML_ind rule_attribute in Thm} is to directly manipulate theorems. Another usage of theorem attributes is to add and delete theorems from stored data. For example the theorem attribute @{text "[simp]"} adds or deletes a theorem from the current simpset. For these applications, you can use @{ML_ind declaration_attribute in Thm}. To illustrate this function, let us introduce a reference containing a list of theorems.*}ML{*val my_thms = Unsynchronized.ref ([] : thm list)*}text {* The purpose of this reference is to store a list of theorems. We are going to modify it by adding and deleting theorems. However, a word of warning: such references must not be used in any code that is meant to be more than just for testing purposes! Here it is only used to illustrate matters. We will show later how to store data properly without using references. We need to provide two functions that add and delete theorems from this list. For this we use the two functions:*}ML{*fun my_thm_add thm ctxt = (my_thms := Thm.add_thm thm (!my_thms); ctxt)fun my_thm_del thm ctxt = (my_thms := Thm.del_thm thm (!my_thms); ctxt)*}text {* These functions take a theorem and a context and, for what we are explaining here it is sufficient that they just return the context unchanged. 
They change however the reference @{ML my_thms}, whereby the function @{ML_ind add_thm in Thm} adds a theorem if it is not already included in the list, and @{ML_ind del_thm in Thm} deletes one (both functions use the predicate @{ML_ind eq_thm_prop in Thm}, which compares theorems according to their proved propositions modulo alpha-equivalence). You can turn functions @{ML my_thm_add} and @{ML my_thm_del} into attributes with the code*}ML{*val my_add = Thm.declaration_attribute my_thm_addval my_del = Thm.declaration_attribute my_thm_del *}text {* and set up the attributes as follows*}attribute_setup %gray my_thms = {* Attrib.add_del my_add my_del *} "maintaining a list of my_thms - rough test only!" text {* The parser @{ML_ind add_del in Attrib} is a predefined parser for adding and deleting lemmas. Now if you prove the next lemma and attach to it the attribute @{text "[my_thms]"}*}lemma trueI_2[my_thms]: "True" by simptext {* then you can see it is added to the initially empty list. @{ML_response_fake [display,gray] "!my_thms" "[\"True\"]"} You can also add theorems using the command \isacommand{declare}.*}declare test[my_thms] trueI_2[my_thms add] text {* With this attribute, the @{text "add"} operation is the default and does not need to be explicitly given. These three declarations will cause the theorem list to be updated as: @{ML_response_fake [display,gray] "!my_thms" "[\"True\", \"Suc (Suc 0) = 2\"]"} The theorem @{thm [source] trueI_2} only appears once, since the function @{ML_ind add_thm in Thm} tests for duplicates, before extending the list. Deletion from the list works as follows:*}declare test[my_thms del]text {* After this, the theorem list is again: @{ML_response_fake [display,gray] "!my_thms" "[\"True\"]"} We used in this example two functions declared as @{ML_ind declaration_attribute in Thm}, but there can be any number of them. We just have to change the parser for reading the arguments accordingly. However, as said at the beginning of this example, using references for storing theorems is \emph{not} the received way of doing such things. The received way is to start a ``data slot'', below called @{text MyThmsData}, generated by the functor @{text GenericDataFun}:*}ML{*structure MyThmsData = GenericDataFun (type T = thm list val empty = [] val extend = I fun merge _ = Thm.merge_thms) *}text {* The type @{text "T"} of this data slot is @{ML_type "thm list"}.\footnote{FIXME: give a pointer to where data slots are explained properly.} To use this data slot, you only have to change @{ML my_thm_add} and @{ML my_thm_del} to:*}ML{*val my_thm_add = MyThmsData.map o Thm.add_thmval my_thm_del = MyThmsData.map o Thm.del_thm*}text {* where @{ML MyThmsData.map} updates the data appropriately. The corresponding theorem attributes are*}ML{*val my_add = Thm.declaration_attribute my_thm_addval my_del = Thm.declaration_attribute my_thm_del *}text {* and the setup is as follows*}attribute_setup %gray my_thms2 = {* Attrib.add_del my_add my_del *} "properly maintaining a list of my_thms"text {* Initially, the data slot is empty @{ML_response_fake [display,gray] "MyThmsData.get (Context.Proof @{context})" "[]"} but if you prove*}lemma three[my_thms2]: "3 = Suc (Suc (Suc 0))" by simptext {* then the lemma is recorded. @{ML_response_fake [display,gray] "MyThmsData.get (Context.Proof @{context})" "[\"3 = Suc (Suc (Suc 0))\"]"} With theorem attribute @{text my_thms2} you can also nicely see why it is important to store data in a ``data slot'' and \emph{not} in a reference. 
Backtrack to the point just before the lemma @{thm [source] three} was proved and check the content of @{ML_struct MyThmsData}: it should be empty. The addition has been properly retracted. Now consider the proof:*}lemma four[my_thms]: "4 = Suc (Suc (Suc (Suc 0)))" by simptext {* Checking the content of @{ML my_thms} gives @{ML_response_fake [display,gray] "!my_thms" "[\"4 = Suc (Suc (Suc (Suc 0)))\", \"True\"]"} as expected, but if you backtrack before the lemma @{thm [source] four}, the content of @{ML my_thms} is unchanged. The backtracking mechanism of Isabelle is completely oblivious about what to do with references, but properly treats ``data slots''! Since storing theorems in a list is such a common task, there is the special functor @{ML_functor Named_Thms}, which does most of the work for you. To obtain a named theorem list, you just declare*}ML{*structure FooRules = Named_Thms (val name = "foo" val description = "Rules for foo") *}text {* and set up the @{ML_struct FooRules} with the command*}setup %gray {* FooRules.setup *}text {* This code declares a data slot where the theorems are stored, an attribute @{text foo} (with the @{text add} and @{text del} options for adding and deleting theorems) and an internal ML interface to retrieve and modify the theorems. Furthermore, the facts are made available on the user-level under the dynamic fact name @{text foo}. For example you can declare three lemmas to be of the kind @{text foo} by:*}lemma rule1[foo]: "A" sorrylemma rule2[foo]: "B" sorrylemma rule3[foo]: "C" sorrytext {* and undeclare the first one by: *}declare rule1[foo del]text {* and query the remaining ones with: \begin{isabelle} \isacommand{thm}~@{text "foo"}\\ @{text "> ?C"}\\ @{text "> ?B"} \end{isabelle} On the ML-level the rules marked with @{text "foo"} can be retrieved using the function @{ML FooRules.get}: @{ML_response_fake [display,gray] "FooRules.get @{context}" "[\"?C\",\"?B\"]"} \begin{readmore} For more information see @{ML_file "Pure/Tools/named_thms.ML"}. \end{readmore} (FIXME What are: @{text "theory_attributes"}, @{text "proof_attributes"}?) \begin{readmore} FIXME: @{ML_file "Pure/more_thm.ML"}; parsers for attributes are in @{ML_file "Pure/Isar/attrib.ML"}...also explained in the chapter about parsing. \end{readmore}*}section {* Theories, Contexts and Local Theories (TBD) *}text {* There are theories, proof contexts and local theories (in this order, if you want to order them). In contrast to an ordinary theory, which simply consists of a type signature, as well as tables for constants, axioms and theorems, a local theory contains additional context information, such as locally fixed variables and local assumptions that may be used by the package. 
The type @{ML_type local_theory} is identical to the type of \emph{proof contexts} @{ML_type "Proof.context"}, although not every proof context constitutes a valid local theory.*}section {* Storing Theorems\label{sec:storing} (TBD) *}text {* @{ML_ind add_thms_dynamic in PureThy} *}local_setup %gray {* LocalTheory.note Thm.theoremK ((@{binding "allI_alt"}, []), [@{thm allI}]) #> snd *}(* FIXME: some code below *)(*<*)(*setup {* Sign.add_consts_i [(Binding"bar", @{typ "nat"},NoSyn)] *}*)lemma "bar = (1::nat)" oops(*setup {* Sign.add_consts_i [("foo", @{typ "nat"},NoSyn)] #> PureThy.add_defs false [((@{binding "foo_def"}, Logic.mk_equals (Const ("FirstSteps.foo", @{typ "nat"}), @{term "1::nat"})), [])] #> snd*}*)(*lemma "foo = (1::nat)" apply(simp add: foo_def) donethm foo_def*)(*>*)section {* Pretty-Printing\label{sec:pretty} *}text {* So far we printed out only plain strings without any formatting except for occasional explicit line breaks using @{text [quotes] "\\n"}. This is sufficient for ``quick-and-dirty'' printouts. For something more sophisticated, Isabelle includes an infrastructure for properly formatting text. This infrastructure is loosely based on a paper by Oppen~\cite{Oppen80}. Most of its functions do not operate on @{ML_type string}s, but on instances of the type: @{ML_type_ind [display, gray] "Pretty.T"} The function @{ML str in Pretty} transforms a (plain) string into such a pretty type. For example @{ML_response_fake [display,gray] "Pretty.str \"test\"" "String (\"test\", 4)"} where the result indicates that we transformed a string with length 4. Once you have a pretty type, you can, for example, control where linebreaks may occur in case the text wraps over a line, or with how much indentation a text should be printed. However, if you want to actually output the formatted text, you have to transform the pretty type back into a @{ML_type string}. This can be done with the function @{ML_ind string_of in Pretty}. In what follows we will use the following wrapper function for printing a pretty type:*}ML{*fun pprint prt = tracing (Pretty.string_of prt)*}text {* The point of the pretty-printing infrastructure is to give hints about how to layout text and let Isabelle do the actual layout. Let us first explain how you can insert places where a line break can occur. For this assume the following function that replicates a string n times:*}ML{*fun rep n str = implode (replicate n str) *}text {* and suppose we want to print out the string:*}ML{*val test_str = rep 8 "fooooooooooooooobaaaaaaaaaaaar "*}text {* We deliberately chose a large string so that it spans over more than one line. If we print out the string using the usual ``quick-and-dirty'' method, then we obtain the ugly output:@{ML_response_fake [display,gray]"tracing test_str""fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar"} We obtain the same if we just use@{ML_response_fake [display,gray]"pprint (Pretty.str test_str)""fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar"} However by using pretty types you have the ability to indicate a possible line break for example at each space. 
You can achieve this with the function @{ML_ind breaks in Pretty}, which expects a list of pretty types and inserts a possible line break in between every two elements in this list. To print this list of pretty types as a single string, we concatenate them with the function @{ML_ind blk in Pretty} as follows:@{ML_response_fake [display,gray]"let val ptrs = map Pretty.str (space_explode \" \" test_str)in pprint (Pretty.blk (0, Pretty.breaks ptrs))end""fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaarfooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar"} Here the layout of @{ML test_str} is much more pleasing to the eye. The @{ML "0"} in @{ML_ind blk in Pretty} stands for no indentation of the printed string. You can increase the indentation and obtain@{ML_response_fake [display,gray]"let val ptrs = map Pretty.str (space_explode \" \" test_str)in pprint (Pretty.blk (3, Pretty.breaks ptrs))end""fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar"} where starting from the second line the indent is 3. If you want that every line starts with the same indent, you can use the function @{ML_ind indent in Pretty} as follows:@{ML_response_fake [display,gray]"let val ptrs = map Pretty.str (space_explode \" \" test_str)in pprint (Pretty.indent 10 (Pretty.blk (0, Pretty.breaks ptrs)))end"" fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar"} If you want to print out a list of items separated by commas and have the linebreaks handled properly, you can use the function @{ML_ind commas in Pretty}. For example@{ML_response_fake [display,gray]"let val ptrs = map (Pretty.str o string_of_int) (99998 upto 100020)in pprint (Pretty.blk (0, Pretty.commas ptrs))end""99998, 99999, 100000, 100001, 100002, 100003, 100004, 100005, 100006, 100007, 100008, 100009, 100010, 100011, 100012, 100013, 100014, 100015, 100016, 100017, 100018, 100019, 100020"} where @{ML upto} generates a list of integers. You can print out this list as a ``set'', that means enclosed inside @{text [quotes] "{"} and @{text [quotes] "}"}, and separated by commas using the function @{ML_ind enum in Pretty}. For example*}text {*@{ML_response_fake [display,gray]"let val ptrs = map (Pretty.str o string_of_int) (99998 upto 100020)in pprint (Pretty.enum \",\" \"{\" \"}\" ptrs)end""{99998, 99999, 100000, 100001, 100002, 100003, 100004, 100005, 100006, 100007, 100008, 100009, 100010, 100011, 100012, 100013, 100014, 100015, 100016, 100017, 100018, 100019, 100020}"} As can be seen, this function prints out the ``set'' so that starting from the second, each new line as an indentation of 2. If you print out something that goes beyond the capabilities of the standard functions, you can do relatively easily the formatting yourself. Assume you want to print out a list of items where like in ``English'' the last two items are separated by @{text [quotes] "and"}. 
For this you can write the function*}ML %linenosgray{*fun and_list [] = [] | and_list [x] = [x] | and_list xs = let val (front, last) = split_last xs in (Pretty.commas front) @ [Pretty.brk 1, Pretty.str "and", Pretty.brk 1, last] end *}text {* where Line 7 prints the beginning of the list and Line 8 the last item. We have to use @{ML "Pretty.brk 1"} in order to insert a space (of length 1) before the @{text [quotes] "and"}. This space is also a place where a line break can occur. We do the same after the @{text [quotes] "and"}. This gives you for example@{ML_response_fake [display,gray]"let val ptrs = map (Pretty.str o string_of_int) (1 upto 22)in pprint (Pretty.blk (0, and_list ptrs))end""1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21 and 22"} Next we like to pretty-print a term and its type. For this we use the function @{text "tell_type"}.*}ML %linenosgray{*fun tell_type ctxt t = let fun pstr s = Pretty.breaks (map Pretty.str (space_explode " " s)) val ptrm = Pretty.quote (Syntax.pretty_term ctxt t) val pty = Pretty.quote (Syntax.pretty_typ ctxt (fastype_of t))in pprint (Pretty.blk (0, (pstr "The term " @ [ptrm] @ pstr " has type " @ [pty, Pretty.str "."])))end*}text {* In Line 3 we define a function that inserts possible linebreaks in places where a space is. In Lines 4 and 5 we pretty-print the term and its type using the functions @{ML_ind pretty_term in Syntax} and @{ML_ind pretty_typ in Syntax}. We also use the function @{ML_ind quote in Pretty} in order to enclose the term and type inside quotation marks. In Line 9 we add a period right after the type without the possibility of a line break, because we do not want that a line break occurs there. Now you can write @{ML_response_fake [display,gray] "tell_type @{context} @{term \"min (Suc 0)\"}" "The term \"min (Suc 0)\" has type \"nat \<Rightarrow> nat\"."} To see the proper line breaking, you can try out the function on a bigger term and type. For example: @{ML_response_fake [display,gray] "tell_type @{context} @{term \"op = (op = (op = (op = (op = op =))))\"}" "The term \"op = (op = (op = (op = (op = op =))))\" has type \"((((('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> bool) \<Rightarrow> bool) \<Rightarrow> bool) \<Rightarrow> bool) \<Rightarrow> bool\"."} FIXME: TBD below*}ML {* pprint (Pretty.big_list "header" (map (Pretty.str o string_of_int) (4 upto 10))) *}text {*chunks inserts forced breaks (unlike blk where you have to do this yourself)*}ML {* (Pretty.chunks [Pretty.str "a", Pretty.str "b"], Pretty.blk (0, [Pretty.str "a", Pretty.str "b"])) *}ML {*fun setmp_show_all_types f = setmp show_all_types (! show_types orelse ! show_sorts orelse ! show_all_types) f;val ctxt = @{context};val t1 = @{term "undefined::nat"};val t2 = @{term "Suc y"};val pty = Pretty.block (Pretty.breaks [(setmp show_question_marks false o setmp_show_all_types) (Syntax.pretty_term ctxt) t1, Pretty.str "=", Syntax.pretty_term ctxt t2]);pty |> Pretty.string_of |> priority*}text {* the infrastructure of Syntax-pretty-term makes sure it is printed nicely *}ML {* Pretty.mark Markup.hilite (Pretty.str "foo") |> Pretty.string_of |> tracing *}ML {* (Pretty.str "bar") |> Pretty.string_of |> tracing *}ML {* Pretty.mark Markup.subgoal (Pretty.str "foo") |> Pretty.string_of |> tracing *}ML {* (Pretty.str "bar") |> Pretty.string_of |> tracing *}text {* Still to be done: What happens with big formulae? \begin{readmore} The general infrastructure for pretty-printing is implemented in the file @{ML_file "Pure/General/pretty.ML"}. 
The file @{ML_file "Pure/Syntax/syntax.ML"} contains pretty-printing functions for terms, types, theorems and so on. @{ML_file "Pure/General/markup.ML"} \end{readmore}*}text {* printing into the goal buffer as part of the proof state*}ML {* Pretty.mark Markup.hilite (Pretty.str "foo") |> Pretty.string_of |> tracing *}ML {* (Pretty.str "bar") |> Pretty.string_of |> tracing *}text {* writing into the goal buffer *}ML {* fun my_hook interactive state = (interactive ? Proof.goal_message (fn () => Pretty.str "foo")) state*}setup %gray {* Context.theory_map (Specification.add_theorem_hook my_hook) *}lemma "False"oopssection {* Misc (TBD) *}ML {*Datatype.get_info @{theory} "List.list"*}text {* FIXME: association lists:@{ML_file "Pure/General/alist.ML"}FIXME: calling the ML-compiler*}section {* Name Space Issues (TBD) *}end