tuned ML-antiquotations; added intro portions.
author: Norbert Schirmer <norbert.schirmer@web.de>
date:   Thu, 16 May 2019 19:56:12 +0200
changeset 567 f7c97e64cc2a
parent 566 6103b0eadbf2
child 568 be23597e81db
ProgTutorial/Advanced.thy
ProgTutorial/Base.thy
ProgTutorial/Essential.thy
ProgTutorial/First_Steps.thy
ProgTutorial/Intro.thy
ProgTutorial/Package/Ind_Code.thy
ProgTutorial/Package/Ind_Interface.thy
ProgTutorial/Parsing.thy
ProgTutorial/Readme.thy
ProgTutorial/Recipes/ExternalSolver.thy
ProgTutorial/Recipes/Introspection.thy
ProgTutorial/Recipes/Oracle.thy
ProgTutorial/Recipes/Sat.thy
ProgTutorial/Recipes/TimeLimit.thy
ProgTutorial/Solutions.thy
ProgTutorial/Tactical.thy
ProgTutorial/antiquote_setup.ML
ProgTutorial/document/root.bib
ProgTutorial/document/root.tex
ROOT
--- a/ProgTutorial/Advanced.thy	Tue May 14 17:45:13 2019 +0200
+++ b/ProgTutorial/Advanced.thy	Thu May 16 19:56:12 2019 +0200
@@ -59,7 +59,7 @@
 
   Functions acting on theories often end with the suffix \<open>_global\<close>,
   for example the function @{ML read_term_global in Syntax} in the structure
-  @{ML_struct Syntax}. The reason is to set them syntactically apart from
+  @{ML_structure Syntax}. The reason is to set them syntactically apart from
   functions acting on contexts or local theories, which will be discussed in
   the next sections. There is a tendency amongst Isabelle developers to prefer
   ``non-global'' operations, because they have some advantages, as we will also
@@ -74,7 +74,7 @@
   the theory where the attribute has been registered or the theorem has been
   stored.  This is a fundamental principle in Isabelle. A similar situation
   arises with declaring a constant, which can be done on the ML-level with 
-  function @{ML_ind declare_const in Sign} from the structure @{ML_struct
+  function @{ML_ind declare_const in Sign} from the structure @{ML_structure
   Sign}. To see how \isacommand{setup} works, consider the following code:
 
 \<close>  
@@ -176,7 +176,7 @@
   the variables are fixed. For this remember that ML-code inside a proof
   needs to be enclosed inside \isacommand{ML\_prf}~\<open>\<verbopen> \<dots> \<verbclose>\<close>,
   not \isacommand{ML}~\<open>\<verbopen> \<dots> \<verbclose>\<close>. The function 
-  @{ML_ind dest_fixes in Variable} from the structure @{ML_struct Variable} takes 
+  @{ML_ind dest_fixes in Variable} from the structure @{ML_structure Variable} takes 
   a context and returns all its currently fixed variables (names). That 
   means a context has a dataslot containing information about fixed variables.
   The ML-antiquotation \<open>@{context}\<close> points to the context that is
@@ -252,16 +252,16 @@
   of fixed variables is actually quite useful. For example it prevents us from 
   fixing a variable twice
 
-  @{ML_response_fake [gray, display]
+  @{ML_matchresult_fake [gray, display]
   "@{context}
 |> Variable.add_fixes [\"x\", \"x\"]" 
   "ERROR: Duplicate fixed variable(s): \"x\""}
 
   More importantly it also allows us to easily create fresh names for
   fixed variables.  For this you have to use the function @{ML_ind
-  variant_fixes in Variable} from the structure @{ML_struct Variable}.
+  variant_fixes in Variable} from the structure @{ML_structure Variable}.
 
-  @{ML_response_fake [gray, display]
+  @{ML_matchresult_fake [gray, display]
   "@{context}
 |> Variable.variant_fixes [\"y\", \"y\", \"z\"]" 
   "([\"y\", \"ya\", \"z\"], ...)"}
@@ -273,7 +273,7 @@
   create two fresh variables of type @{typ nat} as variants of the
   string @{text [quotes] "x"} (Lines 6 and 7).
 
-  @{ML_response_fake [display, gray, linenos]
+  @{ML_matchresult_fake [display, gray, linenos]
   "let
   val ctxt0 = @{context}
   val (_, ctxt1) = Variable.add_fixes [\"x\"] ctxt0
@@ -291,9 +291,9 @@
 
   Often one has the problem that, given some terms, one needs to create fresh variables
   avoiding any variable occurring in those terms. For this you can use the
-  function @{ML_ind declare_term in Variable} from the structure @{ML_struct Variable}.
+  function @{ML_ind declare_term in Variable} from the structure @{ML_structure Variable}.
 
-  @{ML_response_fake [gray, display]
+  @{ML_matchresult_fake [gray, display]
   "let
   val ctxt1 = Variable.declare_term @{term \"(x, xa)\"} @{context}
   val frees = replicate 2 (\"x\", @{typ nat})
@@ -327,9 +327,9 @@
   All variables are highlighted, indicating that they are not
   fixed. However, declaring a term is helpful when parsing terms using
   the function @{ML_ind read_term in Syntax} from the structure
-  @{ML_struct Syntax}. Consider the following code:
+  @{ML_structure Syntax}. Consider the following code:
 
-  @{ML_response_fake [gray, display]
+  @{ML_matchresult_fake [gray, display]
   "let
   val ctxt0 = @{context}
   val ctxt1 = Variable.declare_term @{term \"x::nat\"} ctxt0
@@ -345,7 +345,7 @@
   type the parsed term receives depends upon the last declaration that
   is made, as the next example illustrates.
 
-  @{ML_response_fake [gray, display]
+  @{ML_matchresult_fake [gray, display]
   "let
   val ctxt1 = Variable.declare_term @{term \"x::nat\"} @{context}
   val ctxt2 = Variable.declare_term @{term \"x::int\"} ctxt1
@@ -379,7 +379,7 @@
 
   In Line 3 we fix the variables @{term x}, @{term y} and @{term z} in
   context \<open>ctxt1\<close>.  The function @{ML_ind export_terms in
-  Variable} from the structure @{ML_struct Variable} can be used to transfer
+  Variable} from the structure @{ML_structure Variable} can be used to transfer
   terms between contexts. Transferring means to turn all (free)
   variables that are fixed in one context, but not in the other, into
   schematic variables. In our example, we are transferring the term
@@ -396,11 +396,11 @@
   that the generalisation of fixed variables to schematic variables is
   not trivial if done manually. For illustration purposes we use in the 
   following code the function @{ML_ind make_thm in Skip_Proof} from the 
-  structure @{ML_struct Skip_Proof}. This function will turn an arbitray 
+  structure @{ML_structure Skip_Proof}. This function will turn an arbitrary 
   term, in our case @{term "P x y z x y z"}, into a theorem (disregarding 
   whether it is actually provable).
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "let
   val thy = @{theory}
   val ctxt0 = @{context}
@@ -416,9 +416,9 @@
   that exporting from one to another is not just restricted to
   variables, but also works with assumptions. For this we can use the
   function @{ML_ind export in Assumption} from the structure
-  @{ML_struct Assumption}. Consider the following code.
+  @{ML_structure Assumption}. Consider the following code.
 
-  @{ML_response_fake [display, gray, linenos]
+  @{ML_matchresult_fake [display, gray, linenos]
   "let
   val ctxt0 = @{context}
   val ([eq], ctxt1) = Assumption.add_assumes [@{cprop \"x \<equiv> y\"}] ctxt0
@@ -429,11 +429,11 @@
   "x \<equiv> y \<Longrightarrow> y \<equiv> x"}
   
   The function @{ML_ind add_assumes in Assumption} from the structure
-  @{ML_struct Assumption} adds the assumption \mbox{\<open>x \<equiv> y\<close>}
+  @{ML_structure Assumption} adds the assumption \mbox{\<open>x \<equiv> y\<close>}
   to the context \<open>ctxt1\<close> (Line 3). This function expects a list
   of @{ML_type cterm}s and returns them as theorems, together with the
   new context in which they are ``assumed''. In Line 4 we use the
-  function @{ML_ind symmetric in Thm} from the structure @{ML_struct
+  function @{ML_ind symmetric in Thm} from the structure @{ML_structure
   Thm} in order to obtain the symmetric version of the assumed
   meta-equality. Now exporting the theorem \<open>eq'\<close> from \<open>ctxt1\<close> to \<open>ctxt0\<close> means @{term "y \<equiv> x"} will be prefixed with
   the assumed theorem. The boolean flag in @{ML_ind export in
@@ -446,11 +446,11 @@
   Section~\ref{sec:structured}.
 
   The function @{ML_ind export in Proof_Context} from the structure 
-  @{ML_struct Proof_Context} combines both export functions from 
-  @{ML_struct Variable} and @{ML_struct Assumption}. This can be seen
+  @{ML_structure Proof_Context} combines both export functions from 
+  @{ML_structure Variable} and @{ML_structure Assumption}. This can be seen
   in the following example.
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "let
   val ctxt0 = @{context}
   val ((fvs, [eq]), ctxt1) = ctxt0
@@ -589,7 +589,7 @@
   information and qualifiers. Such qualified names can be generated with the
   antiquotation \<open>@{binding \<dots>}\<close>. For example
 
-  @{ML_response [display,gray]
+  @{ML_matchresult [display,gray]
   "@{binding \"name\"}"
   "name"}
 
@@ -634,7 +634,7 @@
   Occasionally you have to calculate what the ``base'' name of a given
   constant is. For this you can use the function @{ML_ind  Long_Name.base_name}. For example:
 
-  @{ML_response [display,gray] "Long_Name.base_name \"List.list.Nil\"" "\"Nil\""}
+  @{ML_matchresult [display,gray] "Long_Name.base_name \"List.list.Nil\"" "\"Nil\""}
 
   \begin{readmore}
   Functions about naming are implemented in @{ML_file "Pure/General/name_space.ML"};
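
  A short sketch of the naming helpers touched in this hunk; Long_Name.base_name
  is taken from the text above, while Long_Name.qualifier is an assumed companion
  that keeps the qualifying prefix instead:

  ML \<open>
    val base = Long_Name.base_name "List.list.Nil"   (* "Nil", as in the example *)
    (* assumed companion function: drops the base name and keeps the qualifier *)
    val qual = Long_Name.qualifier "List.list.Nil"   (* expected: "List.list" *)
  \<close>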
--- a/ProgTutorial/Base.thy	Tue May 14 17:45:13 2019 +0200
+++ b/ProgTutorial/Base.thy	Thu May 16 19:56:12 2019 +0200
@@ -5,10 +5,10 @@
 
 print_ML_antiquotations
 
-text \<open>Hallo ML Antiquotation:
-@{ML \<open>2 + 3\<close>}
+text \<open>
+Why is Base not printed?
+@{cite "isa-imp"}
 \<close>
-
 notation (latex output)
   Cons ("_ # _" [66,65] 65)
 
@@ -33,8 +33,4 @@
 
 print_ML_antiquotations
 
-text \<open>Hallo ML Antiquotation:
-@{ML \<open>2 + 3\<close> }
-\<close>
-
 end
\ No newline at end of file
--- a/ProgTutorial/Essential.thy	Tue May 14 17:45:13 2019 +0200
+++ b/ProgTutorial/Essential.thy	Thu May 16 19:56:12 2019 +0200
@@ -27,7 +27,7 @@
   Uncertified terms are often just called terms. One way to construct them is by 
   using the antiquotation \mbox{\<open>@{term \<dots>}\<close>}. For example
 
-  @{ML_response [display,gray] 
+  @{ML_matchresult [display,gray] 
 "@{term \"(a::nat) + b = c\"}" 
 "Const (\"HOL.eq\", _) $ 
   (Const (\"Groups.plus_class.plus\", _) $ _ $ _) $ _"}
@@ -53,7 +53,7 @@
   terms use the usual de Bruijn index mechanism for representing bound
   variables.  For example in
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "@{term \"\<lambda>x y. x y\"}"
   "Abs (\"x\", \"'a \<Rightarrow> 'b\", Abs (\"y\", \"'a\", Bound 1 $ Bound 0))"}
 
@@ -69,7 +69,7 @@
   Be careful if you pretty-print terms. Consider pretty-printing the abstraction
   term shown above:
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
 "@{term \"\<lambda>x y. x y\"}
   |> pretty_term @{context}
   |> pwriteln"
@@ -79,7 +79,7 @@
   tacitly eta-contracted the output. You obtain a similar result 
   with beta-redexes
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
 "let 
   val redex = @{term \"(\<lambda>(x::int) (y::int). x)\"} 
   val arg1 = @{term \"1::int\"} 
@@ -97,7 +97,7 @@
   value @{ML_ind show_abbrevs in Syntax}. For example
 
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "let 
   val redex = @{term \"(\<lambda>(x::int) (y::int). x)\"} 
   val arg1 = @{term \"1::int\"} 
@@ -114,7 +114,7 @@
   \emph{schematic} variables (term-constructor @{ML Var} and written with a
   leading question mark). Consider the following two examples
   
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
 "let
   val v1 = Var ((\"x\", 3), @{typ bool})
   val v2 = Var ((\"x1\", 3), @{typ bool})
@@ -144,14 +144,14 @@
   Constructing terms via antiquotations has the advantage that only typable
   terms can be constructed. For example
 
-  @{ML_response_fake_both [display,gray]
+  @{ML_matchresult_fake_both [display,gray]
   "@{term \"x x\"}"
   "Type unification failed: Occurs check!"}
 
   raises a typing error, while it is perfectly ok to construct the term
   with the raw ML-constructors:
 
-  @{ML_response_fake [display,gray] 
+  @{ML_matchresult_fake [display,gray] 
 "let
   val omega = Free (\"x\", @{typ \"nat \<Rightarrow> nat\"}) $ Free (\"x\", @{typ nat})
 in 
@@ -186,13 +186,13 @@
   usually invisible \<open>Trueprop\<close>-coercions whenever necessary. 
   Consider for example the pairs
 
-@{ML_response [display,gray] "(@{term \"P x\"}, @{prop \"P x\"})" 
+@{ML_matchresult [display,gray] "(@{term \"P x\"}, @{prop \"P x\"})" 
 "(Free (\"P\", _) $ Free (\"x\", _),
  Const (\"HOL.Trueprop\", _) $ (Free (\"P\", _) $ Free (\"x\", _)))"}
 
   where a coercion is inserted in the second component and 
 
-  @{ML_response [display,gray] "(@{term \"P x \<Longrightarrow> Q x\"}, @{prop \"P x \<Longrightarrow> Q x\"})" 
+  @{ML_matchresult [display,gray] "(@{term \"P x \<Longrightarrow> Q x\"}, @{prop \"P x \<Longrightarrow> Q x\"})" 
   "(Const (\"Pure.imp\", _) $ _ $ _, 
  Const (\"Pure.imp\", _) $ _ $ _)"}
 
@@ -204,7 +204,7 @@
   As already seen above, types can be constructed using the antiquotation 
   \<open>@{typ _}\<close>. For example:
 
-  @{ML_response_fake [display,gray] "@{typ \"bool \<Rightarrow> nat\"}" "bool \<Rightarrow> nat"}
+  @{ML_matchresult_fake [display,gray] "@{typ \"bool \<Rightarrow> nat\"}" "bool \<Rightarrow> nat"}
 
   The corresponding datatype is
 \<close>
@@ -224,7 +224,7 @@
   example, because Isabelle always pretty prints types (unlike terms).
   Using just the antiquotation \<open>@{typ "bool"}\<close> we only see
 
-  @{ML_response [display, gray]
+  @{ML_matchresult [display, gray]
   "@{typ \"bool\"}"
   "bool"}
   which is the pretty printed version of \<open>bool\<close>. However, in PolyML
@@ -263,13 +263,13 @@
 text \<open>
   Now the type bool is printed out in full detail.
 
-  @{ML_response [display,gray]
+  @{ML_matchresult [display,gray]
   "@{typ \"bool\"}"
   "Type (\"HOL.bool\", [])"}
 
   When printing out a list-type
   
-  @{ML_response [display,gray]
+  @{ML_matchresult [display,gray]
   "@{typ \"'a list\"}"
   "Type (\"List.list\", [TFree (\"'a\", [\"HOL.type\"])])"}
 
@@ -279,7 +279,7 @@
   \<open>fun\<close> is defined in the theory \<open>HOL\<close>, it is  
   still represented by its simple name.
 
-   @{ML_response [display,gray]
+   @{ML_matchresult [display,gray]
   "@{typ \"bool \<Rightarrow> nat\"}"
   "Type (\"fun\", [Type (\"HOL.bool\", []), Type (\"Nat.nat\", [])])"}
 
@@ -294,7 +294,7 @@
   After that the types for booleans, lists and so on are printed out again 
   in the standard Isabelle way.
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "@{typ \"bool\"};
 @{typ \"'a list\"}"
   "\"bool\"
@@ -338,7 +338,7 @@
   to both functions. With @{ML make_imp} you obtain the intended term involving 
   the given arguments
 
-  @{ML_response [display,gray] "make_imp @{term S} @{term T}" 
+  @{ML_matchresult [display,gray] "make_imp @{term S} @{term T}" 
 "Const _ $ 
   Abs (\"x\", Type (\"Nat.nat\",[]),
     Const _ $ (Free (\"S\",_) $ _) $ (Free (\"T\",_) $ _))"}
@@ -346,7 +346,7 @@
   whereas with @{ML make_wrong_imp} you obtain a term involving the @{term "P"} 
   and \<open>Q\<close> from the antiquotation.
 
-  @{ML_response [display,gray] "make_wrong_imp @{term S} @{term T}" 
+  @{ML_matchresult [display,gray] "make_wrong_imp @{term S} @{term T}" 
 "Const _ $ 
   Abs (\"x\", _,
     Const _ $ (Const _ $ (Free (\"P\",_) $ _)) $ 
@@ -358,7 +358,7 @@
   term list applied to the term. For example
 
 
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "let
   val trm = @{term \"P::bool \<Rightarrow> bool \<Rightarrow> bool\"}
   val args = [@{term \"True\"}, @{term \"False\"}]
@@ -371,7 +371,7 @@
   Another handy function is @{ML_ind lambda in Term}, which abstracts a variable 
   in a term. For example
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let
   val x_nat = @{term \"x::nat\"}
   val trm = @{term \"(P::nat \<Rightarrow> bool) x\"}
@@ -387,7 +387,7 @@
   is of the same type as the abstracted variable. If it is of a different type,
   as in
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let
   val x_int = @{term \"x::int\"}
   val trm = @{term \"(P::nat \<Rightarrow> bool) x\"}
@@ -406,7 +406,7 @@
   "(f::nat \<Rightarrow> nat \<Rightarrow> nat) 0 x"} the subterm @{term "(f::nat \<Rightarrow> nat \<Rightarrow> nat) 0"} by
   @{term y}, and @{term x} by @{term True}.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let
   val sub1 = (@{term \"(f::nat \<Rightarrow> nat \<Rightarrow> nat) 0\"}, @{term \"y::nat \<Rightarrow> nat\"})
   val sub2 = (@{term \"x::nat\"}, @{term \"True\"})
@@ -419,7 +419,7 @@
   As can be seen, @{ML subst_free} does not take typability into account.
   However it takes alpha-equivalence into account:
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
 "let
   val sub = (@{term \"(\<lambda>y::nat. y)\"}, @{term \"x::nat\"})
   val trm = @{term \"(\<lambda>x::nat. x)\"}
@@ -446,7 +446,7 @@
   The function returns a pair consisting of the stripped off variables and
   the body of the universal quantification. For example
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "strip_alls @{term \"\<forall>x y. x = (y::bool)\"}"
 "([Free (\"x\", \"bool\"), Free (\"y\", \"bool\")],
   Const (\"op =\", _) $ Bound 1 $ Bound 0)"}
@@ -466,7 +466,7 @@
   bound variables. With the function @{ML subst_bounds}, you can replace these
   loose @{ML_ind Bound in Term}s with the stripped off variables.
 
-  @{ML_response_fake [display, gray, linenos]
+  @{ML_matchresult_fake [display, gray, linenos]
   "let
   val (vrs, trm) = strip_alls @{term \"\<forall>x y. x = (y::bool)\"}
 in 
@@ -488,7 +488,7 @@
   the loose de Bruijn index is replaced by a unique free variable. For example
 
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "let
   val body = Bound 0 $ Free (\"x\", @{typ nat})
 in
@@ -501,7 +501,7 @@
   @{ML_ind incr_boundvars in Term}, increases by an integer the indices 
   of the loose bound variables in a term. In the code below
 
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "@{term \"\<forall>x y z u. z = u\"}
   |> strip_alls
   ||> incr_boundvars 2
@@ -519,7 +519,7 @@
   The second function, @{ML_ind loose_bvar1 in Text}, tests whether a term
   contains a loose bound of a certain index. For example
 
-  @{ML_response_fake [gray,display]
+  @{ML_matchresult_fake [gray,display]
 "let
   val body = snd (strip_alls @{term \"\<forall>x y. x = (y::bool)\"})
 in
@@ -530,11 +530,11 @@
   "[true, true, false]"}
 
   There are also many convenient functions that construct specific HOL-terms
-  in the structure @{ML_struct HOLogic}. For example @{ML_ind mk_eq in
+  in the structure @{ML_structure HOLogic}. For example @{ML_ind mk_eq in
   HOLogic} constructs an equality out of two terms.  The types needed in this
   equality are calculated from the type of the arguments. For example
 
-@{ML_response_fake [gray,display]
+@{ML_matchresult_fake [gray,display]
 "let
   val eq = HOLogic.mk_eq (@{term \"True\"}, @{term \"False\"})
 in
@@ -569,7 +569,7 @@
 text \<open>
   will not correctly match the formula @{prop[source] "\<forall>x::nat. P x"}: 
 
-  @{ML_response [display,gray] "is_all @{term \"\<forall>x::nat. P x\"}" "false"}
+  @{ML_matchresult [display,gray] "is_all @{term \"\<forall>x::nat. P x\"}" "false"}
 
   The problem is that the \<open>@term\<close>-antiquotation in the pattern 
   fixes the type of the constant @{term "All"} to be @{typ "('a \<Rightarrow> bool) \<Rightarrow> bool"} for 
@@ -583,7 +583,7 @@
 text \<open>
   because now
 
-  @{ML_response [display,gray] "is_all @{term \"\<forall>x::nat. P x\"}" "true"}
+  @{ML_matchresult [display,gray] "is_all @{term \"\<forall>x::nat. P x\"}" "true"}
 
   matches correctly (the first wildcard in the pattern matches any type and the
   second any term).
@@ -598,14 +598,14 @@
 text \<open>
   Unfortunately, this function also does \emph{not} work as expected, since
 
-  @{ML_response [display,gray] "is_nil @{term \"Nil\"}" "false"}
+  @{ML_matchresult [display,gray] "is_nil @{term \"Nil\"}" "false"}
 
   The problem is that on the ML-level the name of a constant is more
   subtle than you might expect. The function @{ML is_all} worked correctly,
   because @{term "All"} is such a fundamental constant, which can be referenced
   by @{ML "Const (\"All\", some_type)" for some_type}. However, if you look at
 
-  @{ML_response [display,gray] "@{term \"Nil\"}" "Const (\"List.list.Nil\", _)"}
+  @{ML_matchresult [display,gray] "@{term \"Nil\"}" "Const (\"List.list.Nil\", _)"}
 
   the name of the constant \<open>Nil\<close> depends on the theory in which the
   term constructor is defined (\<open>List\<close>) and also in which datatype
@@ -613,7 +613,7 @@
   type-classes. Consider for example the constants for @{term "zero"} and
   \mbox{@{term "times"}}:
 
-  @{ML_response [display,gray] "(@{term \"0::nat\"}, @{term \"times\"})" 
+  @{ML_matchresult [display,gray] "(@{term \"0::nat\"}, @{term \"times\"})" 
  "(Const (\"Groups.zero_class.zero\", _), 
  Const (\"Groups.times_class.times\", _))"}
 
@@ -636,7 +636,7 @@
   The antiquotation for properly referencing type constants is \<open>@{type_name \<dots>}\<close>.
   For example
 
-  @{ML_response [display,gray]
+  @{ML_matchresult [display,gray]
   "@{type_name \"list\"}" "\"List.list\""}
 
   Although types of terms can often be inferred, there are many
@@ -674,7 +674,7 @@
 text \<open>
   Here is an example:
 
-@{ML_response_fake [display,gray] 
+@{ML_matchresult_fake [display,gray] 
 "map_types nat_to_int @{term \"a = (1::nat)\"}" 
 "Const (\"op =\", \"int \<Rightarrow> int \<Rightarrow> bool\")
            $ Free (\"a\", \"int\") $ Const (\"HOL.one_class.one\", \"int\")"}
@@ -691,7 +691,7 @@
   So in order to obtain the list of type-variables of a term you have to 
   call them as follows
 
-  @{ML_response [gray,display]
+  @{ML_matchresult [gray,display]
   "Term.add_tfrees @{term \"(a, b)\"} []"
   "[(\"'b\", [\"HOL.type\"]), (\"'a\", [\"HOL.type\"])]"}
 
@@ -776,7 +776,7 @@
   Let us begin with describing the unification and matching functions for
   types.  Both return type environments (ML-type @{ML_type "Type.tyenv"})
   which map schematic type variables to types and sorts. Below we use the
-  function @{ML_ind typ_unify in Sign} from the structure @{ML_struct Sign}
+  function @{ML_ind typ_unify in Sign} from the structure @{ML_structure Sign}
   for unifying the types \<open>?'a * ?'b\<close> and \<open>?'b list *
   nat\<close>. This will produce the mapping, or type environment, \<open>[?'a :=
   ?'b list, ?'b := nat]\<close>.
@@ -796,9 +796,9 @@
   below. In case of failure, @{ML typ_unify in Sign} will raise the exception
   \<open>TUNIFY\<close>.  We can print out the resulting type environment bound to 
   @{ML tyenv_unif} with the built-in function @{ML_ind dest in Vartab} from the
-  structure @{ML_struct Vartab}.
-
-  @{ML_response_fake [display,gray]
+  structure @{ML_structure Vartab}.
+
+  @{ML_matchresult_fake [display,gray]
   "Vartab.dest tyenv_unif"
   "[((\"'a\", 0), ([\"HOL.type\"], \"?'b List.list\")), 
  ((\"'b\", 0), ([\"HOL.type\"], \"nat\"))]"} 
@@ -839,7 +839,7 @@
   environment in the example this function prints out the more legible:
 
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "pretty_tyenv @{context} tyenv_unif"
   "[?'a := ?'b list, ?'b := nat]"}
 
@@ -882,7 +882,7 @@
   of sort information by setting @{ML_ind show_sorts in Syntax} to 
   true. This allows us to inspect the typing environment.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "pretty_tyenv @{context} tyenv"
   "[?'a::s1 := ?'a1::{s1, s2}, ?'b::s2 := ?'a1::{s1, s2}]"}
 
@@ -891,7 +891,7 @@
   type variable has been introduced the @{ML index}, originally being \<open>0\<close>, 
   has been increased to \<open>1\<close>.
 
-  @{ML_response [display,gray]
+  @{ML_matchresult [display,gray]
   "index"
   "1"}
 
@@ -899,7 +899,7 @@
   \<open>?'b list * nat\<close> from the beginning of this section, and the 
   calculated type environment @{ML tyenv_unif}:
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "pretty_tyenv @{context} tyenv_unif"
   "[?'a := ?'b list, ?'b := nat]"}
 
@@ -911,12 +911,12 @@
   performance reasons. To apply such a type environment to a type, say \<open>?'a *
   ?'b\<close>, you should use the function @{ML_ind norm_type in Envir}:
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "Envir.norm_type tyenv_unif @{typ_pat \"?'a * ?'b\"}"
   "nat list * nat"}
 
   Matching of types can be done with the function @{ML_ind typ_match in Sign}
-  also from the structure @{ML_struct Sign}. This function returns a @{ML_type
+  also from the structure @{ML_structure Sign}. This function returns a @{ML_type
   Type.tyenv} as well, but might raise the exception \<open>TYPE_MATCH\<close> in case
   of failure. For example
 \<close>
@@ -931,7 +931,7 @@
 text \<open>
   Printing out the calculated matcher gives
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "pretty_tyenv @{context} tyenv_match"
   "[?'a := bool list, ?'b := nat]"}
   
@@ -939,7 +939,7 @@
   applying the matcher to a type needs to be done with the function
   @{ML_ind subst_type in Envir}. For example
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "Envir.subst_type tyenv_match @{typ_pat \"?'a * ?'b\"}"
   "bool list * nat"}
 
@@ -960,14 +960,14 @@
   Now @{ML tyenv_unif} is equal to @{ML tyenv_match'}. If we apply 
   @{ML norm_type in Envir} to the type \<open>?'a * ?'b\<close> we obtain
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "Envir.norm_type tyenv_match' @{typ_pat \"?'a * ?'b\"}"
   "nat list * nat"}
 
   which does not solve the matching problem, and if
   we apply @{ML subst_type in Envir} to the same type we obtain
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "Envir.subst_type tyenv_unif @{typ_pat \"?'a * ?'b\"}"
   "?'b list * nat"}
   
@@ -1023,7 +1023,7 @@
   the type environment is empty and can be ignored. The 
   resulting term environment is
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "pretty_env @{context} fo_env"
   "[?X := P, ?Y := \<lambda>a b. Q b a]"}
 
@@ -1033,7 +1033,7 @@
   unifiers). The function @{ML subst_term in Envir} expects a type environment,
   which is set to empty in the example below, and a term environment.
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "let
   val trm = @{term_pat \"(?X::(nat\<Rightarrow>nat\<Rightarrow>nat)\<Rightarrow>bool) ?Y\"}
 in
@@ -1050,7 +1050,7 @@
   matching \<open>\<lambda>x. P x\<close> against the pattern \<open>\<lambda>y. ?X y\<close>. In this
   case, first-order matching produces \<open>[?X := P]\<close>.
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "let
   val fo_pat = @{term_pat \"\<lambda>y. (?X::nat\<Rightarrow>bool) y\"}
   val trm = @{term \"\<lambda>x. (P::nat\<Rightarrow>bool) x\"} 
@@ -1071,11 +1071,11 @@
   In particular this excludes terms where a
   schematic variable is an argument of another one and where a schematic
   variable is applied twice with the same bound variable. The function
-  @{ML_ind pattern in Pattern} in the structure @{ML_struct Pattern} tests
+  @{ML_ind pattern in Pattern} in the structure @{ML_structure Pattern} tests
   whether a term satisfies these restrictions under the assumptions
   that it is beta-normal, well-typed and has no loose bound variables.
 
-  @{ML_response [display, gray]
+  @{ML_matchresult [display, gray]
   "let
   val trm_list = 
         [@{term_pat \"?X\"}, @{term_pat \"a\"},
@@ -1099,9 +1099,9 @@
   record of type @{ML_type Envir.env} containing a max-index, a type environment 
   and a term environment). The corresponding functions are @{ML_ind match in Pattern}
   and @{ML_ind unify in Pattern}, both implemented in the structure
-  @{ML_struct Pattern}. An example for higher-order pattern unification is
-
-  @{ML_response_fake [display, gray]
+  @{ML_structure Pattern}. An example for higher-order pattern unification is
+
+  @{ML_matchresult_fake [display, gray]
   "let
   val trm1 = @{term_pat \"\<lambda>x y. g (?X y x) (f (?Y x))\"}
   val trm2 = @{term_pat \"\<lambda>u v. g u (f u)\"}
@@ -1152,7 +1152,7 @@
 
   We can print them out as follows.
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "pretty_env @{context} (Envir.term_env un1);
 pretty_env @{context} (Envir.term_env un2);
 pretty_env @{context} (Envir.term_env un3)"
@@ -1164,7 +1164,7 @@
   In case of failure the function @{ML_ind unifiers in Unify} does not raise
   an exception, rather returns the empty sequence. For example
 
-  @{ML_response [display, gray]
+  @{ML_matchresult [display, gray]
 "let 
   val trm1 = @{term \"a\"}
   val trm2 = @{term \"b\"}
@@ -1181,7 +1181,7 @@
   called. This function has a built-in bound, which can be accessed and
   manipulated as a configuration value. For example
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "Config.get_global @{theory} (Unify.search_bound)"
   "Int 60"}
 
@@ -1192,7 +1192,7 @@
 
 
   For higher-order matching the function is called @{ML_ind matchers in Unify}
-  implemented in the structure @{ML_struct Unify}. Also this function returns
+  implemented in the structure @{ML_structure Unify}. Also this function returns
   sequences with possibly more than one matcher.  Like @{ML unifiers in
   Unify}, this function does not raise an exception in case of failure, but
   returns an empty sequence. It also first tries out whether the matching
@@ -1213,7 +1213,7 @@
   this problem is to instantiate the schematic variables \<open>?P\<close> and
   \<open>?x\<close>.  The instantiation function for theorems is 
   @{ML_ind instantiate_normalize in Drule} from the structure 
-  @{ML_struct Drule}. One problem, however, is
+  @{ML_structure Drule}. One problem, however, is
   that this function expects the instantiations as lists of @{ML_type "((indexname * sort) * ctyp)"}
   and @{ML_type "(indexname * typ) * cterm"}, respectively:
 
@@ -1266,7 +1266,7 @@
   @{term "Q True"}. 
  
 
-  @{ML_response_fake [gray,display,linenos] 
+  @{ML_matchresult_fake [gray,display,linenos] 
   "let  
   val pat = Logic.strip_imp_concl (Thm.prop_of @{thm spec})
   val trm = @{term \"Trueprop (Q True)\"}
@@ -1354,14 +1354,14 @@
   Given a well-typed term, the function @{ML_ind type_of in Term} returns the 
   type of a term. Consider for example:
 
-  @{ML_response [display,gray] 
+  @{ML_matchresult [display,gray] 
   "type_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::nat\"})" "bool"}
 
   To calculate the type, this function traverses the whole term and will
   detect any typing inconsistency. For example changing the type of the variable 
   @{term "x"} from @{typ "nat"} to @{typ "int"} will result in the error message: 
 
-  @{ML_response_fake [display,gray] 
+  @{ML_matchresult_fake [display,gray] 
   "type_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::int\"})" 
   "*** Exception- TYPE (\"type_of: type mismatch in application\" \<dots>"}
 
@@ -1369,13 +1369,13 @@
   not necessary, there is the function @{ML_ind fastype_of in Term}, which 
   also returns the type of a term.
 
-  @{ML_response [display,gray] 
+  @{ML_matchresult [display,gray] 
   "fastype_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::nat\"})" "bool"}
 
   However, efficiency is gained at the expense of skipping some tests. You 
   can see this in the following example
 
-   @{ML_response [display,gray] 
+   @{ML_matchresult [display,gray] 
   "fastype_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::int\"})" "bool"}
 
   where no error is detected.
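
  A small sketch contrasting the two functions on the ill-typed application
  discussed above (illustration only):

  ML \<open>
    val trm = @{term "f::nat \<Rightarrow> bool"} $ @{term "x::int"}
    (* fastype_of only inspects the function's result type, so the bad argument slips through *)
    val T = fastype_of trm   (* bool *)
    (* type_of trm would instead raise the TYPE exception shown above *)
  \<close>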
@@ -1387,7 +1387,7 @@
   complete type. The type inference can be invoked with the function
   @{ML_ind check_term in Syntax}. An example is as follows:
 
-  @{ML_response_fake [display,gray] 
+  @{ML_matchresult_fake [display,gray] 
 "let
   val c = Const (@{const_name \"plus\"}, dummyT) 
   val o = @{term \"1::nat\"} 
@@ -1432,26 +1432,26 @@
   Certification is always relative to a context. For example you can 
   write:
 
-  @{ML_response_fake [display,gray] 
+  @{ML_matchresult_fake [display,gray] 
   "Thm.cterm_of @{context} @{term \"(a::nat) + b = c\"}" 
   "a + b = c"}
 
   This can also be written with an antiquotation:
 
-  @{ML_response_fake [display,gray] 
+  @{ML_matchresult_fake [display,gray] 
   "@{cterm \"(a::nat) + b = c\"}" 
   "a + b = c"}
 
   Attempting to obtain the certified term for
 
-  @{ML_response_fake_both [display,gray] 
+  @{ML_matchresult_fake_both [display,gray] 
   "@{cterm \"1 + True\"}" 
   "Type unification failed \<dots>"}
 
   yields an error (since the term is not typable). A slightly more elaborate
   example that type-checks is:
 
-@{ML_response_fake [display,gray] 
+@{ML_matchresult_fake [display,gray] 
 "let
   val natT = @{typ \"nat\"}
   val zero = @{term \"0::nat\"}
@@ -1465,13 +1465,13 @@
   you obtain the certified type for the Isabelle type @{typ "nat \<Rightarrow> bool"} on 
   the ML-level as follows:
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "Thm.ctyp_of @{context} (@{typ nat} --> @{typ bool})"
   "nat \<Rightarrow> bool"}
 
   or with the antiquotation:
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "@{ctyp \"nat \<Rightarrow> bool\"}"
   "nat \<Rightarrow> bool"}
 
@@ -1479,14 +1479,14 @@
   pattern-match against them. However, we can construct them. For example
   the function @{ML_ind apply in Thm} produces a certified application.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "Thm.apply @{cterm \"P::nat \<Rightarrow> bool\"} @{cterm \"3::nat\"}"
   "P 3"}
 
-  Similarly the function @{ML_ind list_comb in Drule} from the structure @{ML_struct Drule}
+  Similarly the function @{ML_ind list_comb in Drule} from the structure @{ML_structure Drule}
   applies a list of @{ML_type cterm}s.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "let
   val chead = @{cterm \"P::unit \<Rightarrow> nat \<Rightarrow> bool\"}
   val cargs = [@{cterm \"()\"}, @{cterm \"3::nat\"}]
@@ -1546,7 +1546,7 @@
   If we print out the value of @{ML my_thm} then we see only the 
   final statement of the theorem.
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "pwriteln (pretty_thm @{context} my_thm)"
   "\<lbrakk>\<And>x. P x \<Longrightarrow> Q x; P t\<rbrakk> \<Longrightarrow> Q t"}
 
@@ -1570,7 +1570,7 @@
   yet stored in Isabelle's theorem database. Consequently, it cannot be 
   referenced on the user level. One way to store it in the theorem database is
   by using the function @{ML_ind note in Local_Theory} from the structure 
-  @{ML_struct Local_Theory} (the Isabelle command
+  @{ML_structure Local_Theory} (the Isabelle command
   \isacommand{local\_setup} will be explained in more detail in 
   Section~\ref{sec:local}).
 \<close>
@@ -1606,13 +1606,13 @@
   Note that we have to use another name under which the theorem is stored,
   since Isabelle does not allow us to call  @{ML_ind note in Local_Theory} twice
   with the same name. The attribute needs to be wrapped inside the function @{ML_ind
-  internal in Attrib} from the structure @{ML_struct Attrib}. If we use the function 
+  internal in Attrib} from the structure @{ML_structure Attrib}. If we use the function 
   @{ML get_thm_names_from_ss} from
   the previous chapter, we can check whether the theorem has actually been
   added.
 
 
-  @{ML_response [display,gray]
+  @{ML_matchresult [display,gray]
   "let
   fun pred s = match_string \"my_thm_simp\" s
 in
@@ -1716,7 +1716,7 @@
 text \<open>
   Testing them for alpha equality produces:
 
-  @{ML_response [display,gray]
+  @{ML_matchresult [display,gray]
 "(Thm.eq_thm_prop (@{thm thm1}, @{thm thm2}),
  Thm.eq_thm_prop (@{thm thm2}, @{thm thm3}))"
 "(true, false)"}
@@ -1725,7 +1725,7 @@
   the functions @{ML_ind lhs_of in Thm} and @{ML_ind rhs_of in Thm} return 
   the left and right-hand side, respectively, of a meta-equality.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "let
   val eq = @{thm True_def}
 in
@@ -1745,7 +1745,7 @@
 text \<open>
   We can destruct them into premises and conclusions as follows. 
 
- @{ML_response_fake [display,gray]
+ @{ML_matchresult_fake [display,gray]
 "let
   val ctxt = @{context}
   fun prems_and_concl thm =
@@ -1778,7 +1778,7 @@
   this, we build the theorem @{term "True \<equiv> True"} (Line 1) and then 
   unfold the constant @{term "True"} according to its definition (Line 2).
 
-  @{ML_response_fake [display,gray,linenos]
+  @{ML_matchresult_fake [display,gray,linenos]
   "Thm.reflexive @{cterm \"True\"}
   |> Simplifier.rewrite_rule @{context} [@{thm True_def}]
   |> pretty_thm @{context}
@@ -1795,14 +1795,14 @@
   @{ML_ind rulify in Object_Logic} 
   replaces all object connectives by equivalents in the meta logic. For example
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "Object_Logic.rulify @{context} @{thm foo_test2}"
   "\<lbrakk>?A; ?B\<rbrakk> \<Longrightarrow> ?C"}
 
   The transformation in the other direction can be achieved with function
   @{ML_ind atomize in Object_Logic} and the following code.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "let
   val thm = @{thm foo_test1}
   val meta_eq = Object_Logic.atomize @{context} (Thm.cprop_of thm)
@@ -1837,16 +1837,16 @@
 text \<open>
   This function produces for the theorem @{thm [source] list.induct}
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "atomize_thm @{context} @{thm list.induct}"
   "\<forall>P list. P [] \<longrightarrow> (\<forall>a list. P list \<longrightarrow> P (a # list)) \<longrightarrow> P list"}
 
   Theorems can also be produced from terms by giving an explicit proof. 
   One way to achieve this is by using the function @{ML_ind prove in Goal}
-  in the structure @{ML_struct Goal}. For example below we use this function
+  in the structure @{ML_structure Goal}. For example below we use this function
   to prove the term @{term "P \<Longrightarrow> P"}.
   
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "let
   val trm = @{term \"P \<Longrightarrow> P::bool\"}
   val tac = K (assume_tac @{context} 1)
@@ -1878,7 +1878,7 @@
   With them we can now produce three theorem instances of the 
   proposition.
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "multi_test @{context} 3"
   "[\"?f ?x = ?f ?x\", \"?f ?x = ?f ?x\", \"?f ?x = ?f ?x\"]"}
 
@@ -1898,12 +1898,12 @@
 text \<open>
   While the LCF-approach of going through interfaces ensures soundness in Isabelle, there
   is the function @{ML_ind make_thm in Skip_Proof} in the structure 
-  @{ML_struct Skip_Proof} that allows us to turn any proposition into a theorem.
+  @{ML_structure Skip_Proof} that allows us to turn any proposition into a theorem,
   potentially making the system unsound.  This is sometimes useful for development 
   purposes, or when explicit proof construction should be omitted due to performance 
   reasons. An example of this function is as follows:
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "Skip_Proof.make_thm @{theory} @{prop \"True = False\"}"
   "True = False"}
 
@@ -1926,14 +1926,14 @@
   The auxiliary data of this lemma can be retrieved using the function 
   @{ML_ind get_tags in Thm}. So far the auxiliary data of this lemma is 
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "Thm.get_tags @{thm foo_data}"
   "[(\"name\", \"General.foo_data\"), (\"kind\", \"lemma\")]"}
 
   consisting of a name and a kind.  When we store lemmas in the theorem database, 
   we might want to explicitly extend this data by attaching case names to the 
   two premises of the lemma.  This can be done with the function @{ML_ind name in Rule_Cases}
-  from the structure @{ML_struct Rule_Cases}.
+  from the structure @{ML_structure Rule_Cases}.
 \<close>
 
 local_setup %gray \<open>
@@ -1944,7 +1944,7 @@
 text \<open>
   The data of the theorem @{thm [source] foo_data'} is then as follows:
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "Thm.get_tags @{thm foo_data'}"
   "[(\"name\", \"General.foo_data'\"), (\"kind\", \"lemma\"), 
  (\"case_names\", \"foo_case_one;foo_case_two\")]"}
@@ -1996,7 +1996,7 @@
   proving a theorem, especially when the proof is by \<open>auto\<close>. These theorems can be obtained by introspecting the proved theorem.
   To introspect a theorem, let us define the following three functions
   that analyse the @{ML_type_ind proof_body} data-structure from the
-  structure @{ML_struct Proofterm}.
+  structure @{ML_structure Proofterm}.
 \<close>
 
 ML %grayML\<open>fun pthms_of (PBody {thms, ...}) = map #2 thms 
@@ -2036,7 +2036,7 @@
   extracting this information.  Let us first extract the name of the
   established theorem. This can be done with
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "@{thm my_conjIa}
   |> Thm.proof_body_of
   |> get_names"
@@ -2050,7 +2050,7 @@
   and @{thm [source] conjunct2}. We can obtain them by descending into the
   first level of the proof-tree, as follows.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "@{thm my_conjIa}
   |> Thm.proof_body_of
   |> get_pbodies
@@ -2064,7 +2064,7 @@
   proof in Isabelle. Note also that applications of \<open>assumption\<close> do not
   count as separate theorems, as you can see in the following code.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "@{thm my_conjIb}
   |> Thm.proof_body_of
   |> get_pbodies
@@ -2077,7 +2077,7 @@
   and @{thm [source] my_conjIb}, as can be seen by the theorems that
   are returned for @{thm [source] my_conjIc}.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "@{thm my_conjIc}
   |> Thm.proof_body_of
   |> get_pbodies
@@ -2092,7 +2092,7 @@
   means we obtain the theorems that are used in order to prove
   @{thm [source] conjunct1}, @{thm [source] conjunct2} and @{thm [source] conjI}.
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "@{thm my_conjIa}
   |> Thm.proof_body_of
   |> get_pbodies
@@ -2408,7 +2408,7 @@
 text \<open>
   then you can see it is added to the initially empty list.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "MyThms.get @{context}" 
   "[\"True\"]"}
 
@@ -2422,7 +2422,7 @@
   not need to be explicitly given. These three declarations will cause the 
   theorem list to be updated as:
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "MyThms.get @{context}"
   "[\"True\", \"Suc (Suc 0) = 2\"]"}
 
@@ -2435,7 +2435,7 @@
 
 text \<open>After this, the theorem list is again: 
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "MyThms.get @{context}"
   "[\"True\"]"}
 
@@ -2468,7 +2468,7 @@
   The function @{ML str in Pretty} transforms a (plain) string into such a pretty 
   type. For example
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "Pretty.str \"test\"" "String (\"test\", 4)"}
 
   where the result indicates that we transformed a string with length 4. Once
@@ -2503,7 +2503,7 @@
   If we print out the string using the usual ``quick-and-dirty'' method, then
   we obtain the ugly output:
 
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "tracing test_str"
 "fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar foooooooooo
 ooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaa
@@ -2512,7 +2512,7 @@
 
   We obtain the same if we just use the function @{ML pprint}.
 
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "pprint (Pretty.str test_str)"
 "fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar foooooooooo
 ooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaa
@@ -2526,7 +2526,7 @@
   list. To print this list of pretty types as a single string, we concatenate
   them with the function @{ML_ind blk in Pretty} as follows:
 
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "let
   val ptrs = map Pretty.str (space_explode \" \" test_str)
 in
@@ -2542,7 +2542,7 @@
   indentation of the printed string. You can increase the indentation 
   and obtain
   
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "let
   val ptrs = map Pretty.str (space_explode \" \" test_str)
 in
@@ -2557,7 +2557,7 @@
   that every line starts with the same indent, you can use the
   function @{ML_ind  indent in Pretty} as follows:
 
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "let
   val ptrs = map Pretty.str (space_explode \" \" test_str)
 in
@@ -2572,7 +2572,7 @@
   have the linebreaks handled properly, you can use the function 
   @{ML_ind  commas in Pretty}. For example
 
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "let
   val ptrs = map (Pretty.str o string_of_int) (99998 upto 100020)
 in
@@ -2590,7 +2590,7 @@
 
 text \<open>
   
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "let
   val ptrs = map (Pretty.str o string_of_int) (99998 upto 100020)
 in
@@ -2629,7 +2629,7 @@
   can occur. We do the same after the @{text [quotes] "and"}. This gives you
   for example
 
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "let
   val ptrs1 = map (Pretty.str o string_of_int) (1 upto 22)
   val ptrs2 = map (Pretty.str o string_of_int) (10 upto 28)
@@ -2646,7 +2646,7 @@
   a list of items, but automatically inserts forced breaks between each item.
   Compare
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "let
   val a_and_b = [Pretty.str \"a\", Pretty.str \"b\"]
 in
@@ -2661,7 +2661,7 @@
   It is used for example in the command \isacommand{print\_theorems}. An
   example is as follows.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "let
   val pstrs = map (Pretty.str o string_of_int) (4 upto 10)
 in
@@ -2681,7 +2681,7 @@
   out the terms produced by the function @{ML de_bruijn} from 
   exercise~\ref{ex:debruijn} we obtain the following: 
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "pprint (Syntax.pretty_term  @{context} (de_bruijn 4))"
   "(P 3 = P 4 \<longrightarrow> P 4 \<and> P 3 \<and> P 2 \<and> P 1) \<and>
 (P 2 = P 3 \<longrightarrow> P 4 \<and> P 3 \<and> P 2 \<and> P 1) \<and>
@@ -2715,14 +2715,14 @@
   line break, because we do not want a line break to occur there.
   Now you can write
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "tell_type @{context} @{term \"min (Suc 0)\"}"
   "The term \"min (Suc 0)\" has type \"nat \<Rightarrow> nat\"."}
   
   To see the proper line breaking, you can try out the function on a bigger term 
   and type. For example:
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "tell_type @{context} @{term \"(=) ((=) ((=) ((=) ((=) (=)))))\"}"
   "The term \"(=) ((=) ((=) ((=) ((=) (=)))))\" has type 
 \"((((('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> bool) \<Rightarrow> bool) \<Rightarrow> bool) \<Rightarrow> bool) \<Rightarrow> bool\"."}
--- a/ProgTutorial/First_Steps.thy	Tue May 14 17:45:13 2019 +0200
+++ b/ProgTutorial/First_Steps.thy	Thu May 16 19:56:12 2019 +0200
@@ -45,10 +45,8 @@
   \end{graybox}
   \end{isabelle}
 
-  If you work with ProofGeneral then like normal Isabelle scripts
-  \isacommand{ML}-commands can be evaluated by using the advance and
-  undo buttons of your Isabelle environment.  If you work with the
-  Jedit GUI, then you just have to hover the cursor over the code 
+  If you work with
+  Isabelle/jEdit, then you just have to hover the cursor over the code 
   and you see the evaluated result in the ``Output'' window.
 
   As mentioned in the Introduction, we will drop the \isacommand{ML}~\<open>\<verbopen> \<dots> \<verbclose>\<close> scaffolding whenever we show code. The lines
@@ -95,7 +93,7 @@
   code. This can be done in a ``quick-and-dirty'' fashion using the function
   @{ML_ind writeln in Output}. For example
 
-  @{ML_response_fake [display,gray] "writeln \"any string\"" "\"any string\""}
+  @{ML_matchresult_fake [display,gray] "writeln \"any string\"" "\"any string\""}
 
   will print out @{text [quotes] "any string"}.  
   This function expects a string as argument. If you develop under
@@ -103,7 +101,7 @@
   for converting values into strings, namely the antiquotation 
   \<open>@{make_string}\<close>:
 
-  @{ML_response_fake [display,gray] \<open>writeln (@{make_string} 1)\<close> \<open>"1"\<close>} 
+  @{ML_matchresult_fake [display,gray] \<open>writeln (@{make_string} 1)\<close> \<open>"1"\<close>} 
 
   However, \<open>@{make_string}\<close> only works if the type of what
   is converted is monomorphic and not a function. 
@@ -111,7 +109,7 @@
   You can print out error messages with the function @{ML_ind error in Library}; 
   for example:
 
-  @{ML_response_fake [display,gray] 
+  @{ML_matchresult_fake [display,gray] 
   "if 0 = 1 then true else (error \"foo\")" 
 "*** foo
 ***"}
@@ -128,8 +126,8 @@
   and @{ML_type thm}. Isabelle provides elaborate pretty-printing functions,
   which we will explain in more detail in Section \ref{sec:pretty}. For now
   we just use the functions @{ML_ind writeln in Pretty}  from the structure
-  @{ML_struct Pretty} and @{ML_ind pretty_term in Syntax} from the structure
-  @{ML_struct Syntax}. For more convenience, we bind them to the toplevel.
+  @{ML_structure Pretty} and @{ML_ind pretty_term in Syntax} from the structure
+  @{ML_structure Syntax}. For more convenience, we bind them to the toplevel.
 \<close>
 
 ML %grayML\<open>val pretty_term = Syntax.pretty_term
@@ -138,7 +136,7 @@
 text \<open>
   They can be used as follows
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "pwriteln (pretty_term @{context} @{term \"1::nat\"})"
   "\"1\""}
 
@@ -161,7 +159,7 @@
 text \<open>
   Now by using this context @{ML pretty_term} prints out
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "pwriteln (pretty_term show_types_ctxt @{term \"(1::nat, x)\"})"
   "(1::nat, x::'a)"}
 
@@ -205,7 +203,7 @@
   @{thm [source] conjI} shown below can be used for any (typable) 
   instantiation of \<open>?P\<close> and \<open>?Q\<close>. 
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "pwriteln (pretty_thm @{context} @{thm conjI})"
   "\<lbrakk>?P; ?Q\<rbrakk> \<Longrightarrow> ?P \<and> ?Q"}
 
@@ -223,7 +221,7 @@
 text \<open>
   With this function, theorem @{thm [source] conjI} is now printed as follows:
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "pwriteln (pretty_thm_no_vars @{context} @{thm conjI})"
   "\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> P \<and> Q"}
   
@@ -266,7 +264,7 @@
   should try to print these parcels together in a single string. Therefore do
   \emph{not} print out information as
 
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "pwriteln (Pretty.str \"First half,\"); 
 pwriteln (Pretty.str \"and second half.\")"
 "First half,
@@ -274,7 +272,7 @@
 
   but as a single string with appropriate formatting. For example
 
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "pwriteln (Pretty.str (\"First half,\" ^ \"\\n\" ^ \"and second half.\"))"
 "First half,
 and second half."}
@@ -284,7 +282,7 @@
   @{ML_ind cat_lines in Library} concatenates a list of strings 
   and inserts newlines in between each element. 
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "pwriteln (Pretty.str (cat_lines [\"foo\", \"bar\"]))"
   "foo
 bar"}
@@ -389,7 +387,7 @@
   applied to it. For example, below three variables are applied to the term 
   @{term [show_types] "P::nat \<Rightarrow> int \<Rightarrow> unit \<Rightarrow> bool"}:
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let
   val trm = @{term \"P::nat \<Rightarrow> int \<Rightarrow> unit \<Rightarrow> bool\"}
   val ctxt = @{context}
@@ -416,7 +414,7 @@
   terms involving fresh variables. For this the infrastructure helps
   tremendously to avoid any name clashes. Consider for example:
 
-   @{ML_response_fake [display,gray]
+   @{ML_matchresult_fake [display,gray]
 "let
   val trm = @{term \"za::'a \<Rightarrow> 'b \<Rightarrow> 'c\"}
   val ctxt = @{context}
@@ -462,7 +460,7 @@
 
 text \<open>
   The function @{ML_text "my_note"} in line 3 is just a wrapper for the function 
-  @{ML_ind note in Local_Theory} in the structure @{ML_struct Local_Theory}; 
+  @{ML_ind note in Local_Theory} in the structure @{ML_structure Local_Theory}; 
   it stores a theorem under a name. 
   In lines 5 to 6 we call this function to give alternative names for the three
   theorems. The point of @{ML "#>"} is that you can sequence such function calls. 
@@ -589,19 +587,19 @@
   @{ML ||>>} operates on pairs). Each of the next three lines just increments 
   the value by one, but also nest the intermediate results to the left. For example 
 
-  @{ML_response [display,gray]
+  @{ML_matchresult [display,gray]
   "acc_incs 1"
   "((((\"\", 1), 2), 3), 4)"}
 
   You can continue this chain with:
   
-  @{ML_response [display,gray]
+  @{ML_matchresult [display,gray]
   "acc_incs 1 ||>> (fn x => (x, x + 2))"
   "(((((\"\", 1), 2), 3), 4), 6)"}
 
   An example where this combinator is useful is as follows
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "let
   val ((names1, names2), _) =
     @{context}
@@ -641,7 +639,7 @@
   both as pairs. We can use this information for example to print out the definiens and 
   the theorem corresponding to the definitions. For example for the first definition:
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "let 
   val (one_trm, (_, one_thm)) = one_def
 in
@@ -675,7 +673,7 @@
   @{ML_ind check_terms in Syntax}, whose purpose is to simultaneously type-check 
   a list of terms. Consider the code:
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "let
   val ctxt = @{context}
 in
@@ -696,7 +694,7 @@
   Syntax} in the file @{ML_file "Pure/Syntax/syntax.ML"} that is implemented 
   in terms of @{ML singleton} and @{ML check_terms in Syntax}.} For example
 
-  @{ML_response_fake [display, gray, linenos]
+  @{ML_matchresult_fake [display, gray, linenos]
   "let 
   val ctxt = @{context}
 in
@@ -745,7 +743,7 @@
   are indicated by the @{ML_text @}-sign followed by text wrapped in \<open>{\<dots>}\<close>.  For example, one can print out the name of the current theory with
   the code
   
-  @{ML_response [display,gray] "Context.theory_name @{theory}" "\"First_Steps\""}
+  @{ML_matchresult [display,gray] "Context.theory_name @{theory}" "\"First_Steps\""}
  
   where \<open>@{theory}\<close> is an antiquotation that is substituted with the
   current theory (remember that we assumed we are inside the theory 
@@ -773,7 +771,7 @@
   function @{ML print_abbrevs in Proof_Context} that lists all currently
   defined abbreviations. For example
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "Proof_Context.print_abbrevs false @{context}"
 "\<dots>
 INTER \<equiv> INFI
@@ -785,11 +783,11 @@
   You can also use antiquotations to refer to proved theorems: 
   \<open>@{thm \<dots>}\<close> for a single theorem
 
-  @{ML_response_fake [display,gray] "@{thm allI}" "(\<And>x. ?P x) \<Longrightarrow> \<forall>x. ?P x"}
+  @{ML_matchresult_fake [display,gray] "@{thm allI}" "(\<And>x. ?P x) \<Longrightarrow> \<forall>x. ?P x"}
 
   and \<open>@{thms \<dots>}\<close> for more than one
 
-@{ML_response_fake [display,gray] 
+@{ML_matchresult_fake [display,gray] 
   "@{thms conj_ac}" 
 "(?P \<and> ?Q) = (?Q \<and> ?P)
 (?P \<and> ?Q \<and> ?R) = (?Q \<and> ?P \<and> ?R)
@@ -799,7 +797,7 @@
   example, if you need the version of the theorem @{thm [source] refl} that 
   has a meta-equality instead of an equality, you can write
 
-@{ML_response_fake [display,gray] 
+@{ML_matchresult_fake [display,gray] 
   "@{thm refl[THEN eq_reflection]}"
   "?x \<equiv> ?x"}
 
@@ -818,7 +816,7 @@
 text \<open>
   The result can be printed out as follows.
 
-  @{ML_response_fake [gray,display]
+  @{ML_matchresult_fake [gray,display]
 "foo_thms |> pretty_thms_no_vars @{context}
          |> pwriteln"
   "True, False \<Longrightarrow> P"}
@@ -842,7 +840,7 @@
   simp-rules. Now you can feed the current simpset into this function. 
   The current simpset can be referred to using @{ML_ind simpset_of in Raw_Simplifier}.
 
-  @{ML_response_fake [display,gray] 
+  @{ML_matchresult_fake [display,gray] 
   "get_thm_names_from_ss @{context}" 
   "[\"Nat.of_nat_eq_id\", \"Int.of_int_eq_id\", \"Nat.One_nat_def\", \<dots>]"}
 
@@ -857,7 +855,7 @@
   of this antiquotation is that it does not allow you to use schematic
   variables in terms. If you want to have an antiquotation that does not have
   this restriction, you can implement your own using the function @{ML_ind
-  inline in ML_Antiquotation} from the structure @{ML_struct ML_Antiquotation}. The code
+  inline in ML_Antiquotation} from the structure @{ML_structure ML_Antiquotation}. The code
   for the antiquotation \<open>term_pat\<close> is as follows.
 \<close>
 
@@ -886,7 +884,7 @@
   so that the ML-system can understand it. (All these functions will be explained
   in more detail in later sections.) An example for this antiquotation is:
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "@{term_pat \"Suc (?x::nat)\"}"
   "Const (\"Suc\", \"nat \<Rightarrow> nat\") $ Var ((\"x\", 0), \"nat\")"}
 
@@ -982,7 +980,7 @@
 text \<open>
   The data can be retrieved with the projection functions defined above.
   
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
 "project_int (nth foo_list 0); 
 project_bool (nth foo_list 1)" 
 "13
@@ -992,7 +990,7 @@
   a boolean. If we attempt to access the integer as a boolean, then we get 
   a runtime error. 
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
 "project_bool (nth foo_list 0)"  
 "*** exception Match raised"}
 
@@ -1012,13 +1010,13 @@
   changes according to what is needed at the time).
 
   For theories and proof contexts there are, respectively, the functors 
-  @{ML_funct_ind Theory_Data} and @{ML_funct_ind Proof_Data} that help
+  @{ML_functor_ind Theory_Data} and @{ML_functor_ind Proof_Data} that help
   with the data storage. Below we show how to implement a table in which you
   can store theorems and look them up according to a string key. The
   intention in this example is to be able to look up introduction rules for logical
   connectives. Such a table might be useful in an automatic proof procedure
   and therefore it makes sense to store this data inside a theory.  
-  Consequently we use the functor @{ML_funct Theory_Data}.
+  Consequently we use the functor @{ML_functor Theory_Data}.
   The code for the table is:
 \<close>
 
@@ -1064,7 +1062,7 @@
   \emph{current} theory is updated (this is explained further in 
   Section~\ref{sec:theories}). The lookup can now be performed as follows.
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
 "lookup @{theory} \"conj\""
 "SOME \"\<lbrakk>?P; ?Q\<rbrakk> \<Longrightarrow> ?P \<and> ?Q\""}
 
@@ -1078,7 +1076,7 @@
 text \<open>
   and accordingly, @{ML lookup} now produces the introduction rule for @{term "True"}
   
-@{ML_response_fake [display, gray]
+@{ML_matchresult_fake [display, gray]
 "lookup @{theory} \"conj\""
 "SOME \"True\""}
 
@@ -1100,7 +1098,7 @@
   We initialise the reference with the empty list. Consequently a first 
   lookup produces @{ML "ref []" in Unsynchronized}.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "WrongRefData.get @{theory}"
   "ref []"}
 
@@ -1122,7 +1120,7 @@
   A lookup in the current theory gives then the expected list 
   @{ML "ref [1]" in Unsynchronized}.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "WrongRefData.get @{theory}"
   "ref [1]"}
 
@@ -1134,7 +1132,7 @@
   \isacommand{setup}-command again, we do not obtain @{ML "ref [1]" in
   Unsynchronized}, but
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "WrongRefData.get @{theory}"
   "ref [1, 1]"}
 
@@ -1154,7 +1152,7 @@
   \end{readmore}
 
   Storing data in a proof context is done in a similar fashion. As mentioned
-  before, the corresponding functor is @{ML_funct_ind Proof_Data}. With the
+  before, the corresponding functor is @{ML_functor_ind Proof_Data}. With the
   following code we can store a list of terms in a proof context.
 \<close>
 
@@ -1181,7 +1179,7 @@
   Next we start with the context generated by the antiquotation 
   \<open>@{context}\<close> and update it in various ways. 
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let
   val ctxt0 = @{context}
   val ctxt1 = ctxt0 |> update @{term \"False\"}
@@ -1224,7 +1222,7 @@
 
   There are two special instances of the data storage mechanism described 
   above. The first instance implements named theorem lists using the functor
-  @{ML_funct_ind Named_Thms}. This is because storing theorems in a list 
+  @{ML_functor_ind Named_Thms}. This is because storing theorems in a list 
   is such a common task.  To obtain a named theorem list, you just declare
 \<close>
 
@@ -1233,7 +1231,7 @@
    val description = "Theorems for foo")\<close>
 
 text \<open>
-  and set up the @{ML_struct FooRules} with the command
+  and set up the @{ML_structure FooRules} with the command
 \<close>
 
 setup %gray \<open>FooRules.setup\<close>
@@ -1273,7 +1271,7 @@
   The rules in the list can be retrieved using the function 
   @{ML FooRules.get}:
 
-  @{ML_response_fake [display,gray] 
+  @{ML_matchresult_fake [display,gray] 
   "FooRules.get @{context}" 
   "[\"True\", \"?C\",\"?B\"]"}
 
@@ -1310,13 +1308,13 @@
   On the ML-level these values can be retrieved using the 
   function @{ML_ind get in Config} from a proof context
 
-  @{ML_response [display,gray] 
+  @{ML_matchresult [display,gray] 
   "Config.get @{context} bval" 
   "true"}
 
   or directly from a theory using the function @{ML_ind get_global in Config}
 
-  @{ML_response [display,gray] 
+  @{ML_matchresult [display,gray] 
   "Config.get_global @{theory} bval" 
   "true"}
 
@@ -1324,7 +1322,7 @@
   from the ML-level with the functions @{ML_ind put in Config}
   and @{ML_ind put_global in Config}. For example
 
-  @{ML_response [display,gray]
+  @{ML_matchresult [display,gray]
   "let
   val ctxt = @{context}
   val ctxt' = Config.put sval \"foo\" ctxt
--- a/ProgTutorial/Intro.thy	Tue May 14 17:45:13 2019 +0200
+++ b/ProgTutorial/Intro.thy	Thu May 16 19:56:12 2019 +0200
@@ -4,7 +4,6 @@
 
 chapter \<open>Introduction\<close>
 
-
 text \<open>
    \begin{flushright}
   {\em
@@ -16,7 +15,7 @@
   \end{flushright}
 
   \medskip
-  If your next project requires you to program on the ML-level of Isabelle,
+  If your next project requires you to program Isabelle in ML,
   then this tutorial is for you. It will guide you through the first steps of
   Isabelle programming, and also explain ``tricks of the trade''. We also hope
   the tutorial will encourage students and researchers to play with Isabelle
@@ -27,7 +26,7 @@
   Isabelle interface with the jEdit GUI. Explanation of this part is beyond
   this tutorial.
 
-  The best way to get to know the ML-level of Isabelle is by experimenting
+  The best way to get to know Isabelle/ML is by experimenting
   with the many code examples included in the tutorial. The code is as far as
   possible checked against the Isabelle
   distribution.%%\footnote{\input{version.tex}}
@@ -47,11 +46,51 @@
   the functional programming language ML, the language in which most of
   Isabelle is implemented. If you are unfamiliar with either of these two
   subjects, then you should first work through the Isabelle/HOL tutorial
-  \cite{isa-tutorial} or Paulson's book on ML \cite{paulson-ml2}. Recently,
+  @{cite "isa-tutorial"} or Paulson's book on ML @{cite "paulson-ml2"}. Recently,
   Isabelle has adopted a sizable amount of Scala code for a slick GUI
   based on jEdit. This part of the code is beyond the interest of this
   tutorial, since it mostly does not concern the regular Isabelle 
   developer.
+
+  The rich Isabelle infrastructure can be categorized along several aspects @{cite "wenzel-technology"}:
+
+     @{emph \<open>@{bold \<open>Logic\<close>}\<close>} 
+      \begin{description}
+        \item[Isabelle/Pure] is the logical framework and bootstrap environment. The Pure logic
+          is used to declaratively represent rules for higher-order natural deduction. This allows
+          object logics like HOL to be defined and implemented using the Pure logic and
+          framework.
+        \item[Isabelle/HOL] is the main library of theories and tools for applications; it is used 
+          throughout this tutorial.
+      \end{description}
+
+     @{emph \<open>@{bold\<open>Programming\<close>}\<close>} 
+      \begin{description}
+        \item[Isabelle/ML] is the Isabelle tool implementation and extension language. It is based
+          on Poly/ML\footnote{@{url \<open>http://polyml.org\<close>}}. Both Isabelle/Pure and Isabelle/ML emerge 
+          from the same bootstrap process: the result is a meta-language for programming the logic
+          that is intertwined with it from a technological viewpoint, but logic and programming
+          remain formally separated.
+        \item[Isabelle/Scala] is the Isabelle system programming language. It connects the logical
+          environment with the outside world, most notably resulting in the Prover IDE Isabelle/jEdit 
+          and the command-line tools.
+      \end{description}
+
+     @{emph \<open>@{bold\<open>Proof\<close>}\<close>} 
+      \begin{description}
+        \item[Isabelle/Isar] is the structured proof language of the Isabelle framework. Isar 
+          stands for intelligible semi-automated reasoning. 
+        \item[Document language] for HTML output and \LaTeX{} typesetting of proof text. A proof
+          document combines formal and informal text to describe to a general audience what has
+          been proven.
+      \end{description}
+
+     @{emph \<open>@{bold\<open>IDE\<close>}\<close>} 
+      \begin{description}
+        \item[Isabelle/jEdit] is the IDE for proof and tool development. It provides a rich interactive
+          frontend to the Isabelle framework in which logic and proof development, document creation,
+          and ML programming are seamlessly integrated. 
+      \end{description}
 \<close>
 
 section \<open>Existing Documentation\<close>
@@ -61,18 +100,23 @@
   part of the distribution of Isabelle):
 
   \begin{description}
-  \item[The Isabelle/Isar Implementation Manual] describes Isabelle
-  from a high-level perspective, documenting some of the underlying
-  concepts and interfaces. 
+  \item[The Isabelle/Isar Reference Manual] provides a top-level view of the Isabelle system,
+  explaining general concepts and specification material (like grammars,
+  examples and so on) about Isabelle, Isar, Pure, HOL and the document language.
 
-  \item[The Isabelle Reference Manual] is an older document that used
+  \item[The Isabelle/Isar Implementation Manual] describes the Isabelle
+  implementation from a high-level perspective, documenting the major 
+  underlying concepts and interfaces. 
+
+  \item[Isabelle/jEdit] describes the IDE.
+
+  \item[The Old Introduction to Isabelle] is an older document that used
   to be the main reference of Isabelle at a time when all proof scripts 
-  were written on the ML-level. Many parts of this manual are outdated 
+  were written in ML. Many parts of this manual are outdated 
   now, but some  parts, particularly the chapters on tactics, are still 
   useful.
 
-  \item[The Isar Reference Manual] provides specification material (like grammars,
-  examples and so on) about Isar and its implementation.
+
   \end{description}
 
   Then of course there are:
@@ -86,9 +130,12 @@
   parts, it is often good to look at code that does similar things as you 
   want to do and learn from it. 
   This tutorial frequently contains pointers to the
-  Isabelle sources. Still, the UNIX command \mbox{\<open>grep -R\<close>} is
-  often your best friend while programming with Isabelle.\footnote{Or
-  hypersearch if you work with jEdit.} To understand the sources,
+  Isabelle sources. The best way is to explore the sources interactively
+  within the IDE provided by Isabelle/jEdit. When you load @{ML_file "Pure/ROOT.ML"}
+  into Isabelle/jEdit, the sources of Pure are annotated with markup and you can
+  follow their structure interactively.
+  Moreover, the UNIX command \mbox{\<open>grep -R\<close>} or hypersearch within Isabelle/jEdit is
+  often your best friend while programming with Isabelle. To understand the sources,
   it is often also necessary to track the change history of a file or
   files. The Mercurial repository\footnote{\url{http://isabelle.in.tum.de/repos/isabelle/}} 
   for Isabelle  provides convenient interfaces to query the history of
@@ -122,7 +169,7 @@
   generates when evaluated. This response is prefixed with a 
   @{text [quotes] ">"}, like:
 
-  @{ML_response [display,gray] "3 + 4" "7"}
+  @{ML_matchresult [display,gray] "3 + 4" "7"}
 
   The user-level commands of Isabelle (i.e., the non-ML code) are written
   in \isacommand{bold face} (e.g., \isacommand{lemma}, \isacommand{apply},
@@ -155,7 +202,7 @@
   understand code written by somebody else. This is aggravated in Isabelle by
   the fact that many parts of the code contain no or only a few
   comments. There is one strategy that might be helpful to navigate your way:
-  ML is an interactive programming environment, which means you can evaluate
+  ML together with Isabelle/jEdit forms an interactive programming environment, which means you can evaluate
   code on the fly (for example inside an \isacommand{ML}~\<open>\<verbopen>\<dots>\<verbclose>\<close> section). So you can copy (self-contained)
   chunks of existing code into a separate theory file and then study it
   alongside with examples. You can also install ``probes'' inside the copied
@@ -284,7 +331,7 @@
   \item {\bf Jasmin Blanchette} helped greatly with section \ref{sec:pretty}
   and exercise \ref{fun:killqnt}.
 
-  \item {\bf Sascha B�hme} contributed the recipes in \ref{rec:timeout}, 
+  \item {\bf Sascha Böhme} contributed the recipes in \ref{rec:timeout}, 
   \ref{rec:external} and \ref{rec:oracle}. He also wrote section \ref{sec:conversion} 
   and helped with recipe \ref{rec:timing}. Parts of section \ref{sec:storing}
   are by him.
@@ -305,12 +352,15 @@
   in section \ref{sec:theorems}.
 
   \item {\bf Alexander Krauss} wrote a very early version of the ``first-steps'' 
-  chapter and also contributed the material on @{ML_funct Named_Thms}.
+  chapter and also contributed the material on @{ML_functor Named_Thms}.
 
   %%\item {\bf Tobias Nipkow} contributed recipe \ref{rec:callml}.
 
   \item {\bf Michael Norrish} proofread parts of the text.
 
+  \item {\bf Norbert Schirmer} updated the document to work with Isabelle 2018 and 
+  later.
+
   \item {\bf Andreas Schropp} improved and corrected section \ref{sec:univ} and
    contributed towards section \ref{sec:sorts}.
 
--- a/ProgTutorial/Package/Ind_Code.thy	Tue May 14 17:45:13 2019 +0200
+++ b/ProgTutorial/Package/Ind_Code.thy	Thu May 16 19:56:12 2019 +0200
@@ -493,7 +493,7 @@
   theorems of the form @{thm [source] all_elims_test}. For example we can instantiate
   the quantifiers in this theorem with @{term a}, @{term b} and @{term c} as follows:
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
 "let
   val ctrms = [@{cterm \"a::nat\"}, @{cterm \"b::nat\"}, @{cterm \"c::nat\"}]
   val new_thm = all_elims ctrms @{thm all_elims_test}
@@ -510,7 +510,7 @@
   For example we can eliminate the preconditions \<open>A\<close> and \<open>B\<close> from
   @{thm [source] imp_elims_test}:
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
 "let
   val res = imp_elims @{thms imp_elims_test'} @{thm imp_elims_test}
 in
--- a/ProgTutorial/Package/Ind_Interface.thy	Tue May 14 17:45:13 2019 +0200
+++ b/ProgTutorial/Package/Ind_Interface.thy	Thu May 16 19:56:12 2019 +0200
@@ -107,7 +107,7 @@
   If we feed into the parser the string that corresponds to our definition 
   of @{term even} and @{term odd}
 
-  @{ML_response [display,gray]
+  @{ML_matchresult [display,gray]
 "let
   val input = filtered_input
      (\"even and odd \" ^  
--- a/ProgTutorial/Parsing.thy	Tue May 14 17:45:13 2019 +0200
+++ b/ProgTutorial/Parsing.thy	Thu May 16 19:56:12 2019 +0200
@@ -38,10 +38,10 @@
   \begin{readmore}
   The library for writing parser combinators is split up, roughly, into two
   parts: The first part consists of a collection of generic parser combinators
-  defined in the structure @{ML_struct Scan} in the file @{ML_file
+  defined in the structure @{ML_structure Scan} in the file @{ML_file
   "Pure/General/scan.ML"}. The second part of the library consists of
   combinators for dealing with specific token types, which are defined in the
-  structure @{ML_struct Parse} in the file @{ML_file
+  structure @{ML_structure Parse} in the file @{ML_file
   "Pure/Isar/parse.ML"}. In addition specific parsers for packages are 
   defined in @{ML_file "Pure/Isar/parse_spec.ML"}. Parsers for method arguments 
   are defined in @{ML_file "Pure/Isar/args.ML"}.
@@ -59,17 +59,17 @@
   this context means that it will return a pair consisting of this string and
   the rest of the input list. For example:
 
-  @{ML_response [display,gray] 
+  @{ML_matchresult [display,gray] 
   "($$ \"h\") (Symbol.explode \"hello\")" "(\"h\", [\"e\", \"l\", \"l\", \"o\"])"}
 
-  @{ML_response [display,gray] 
+  @{ML_matchresult [display,gray] 
   "($$ \"w\") (Symbol.explode \"world\")" "(\"w\", [\"o\", \"r\", \"l\", \"d\"])"}
 
   The function @{ML "$$"} will either succeed (as in the two examples above)
   or raise the exception \<open>FAIL\<close> if no string can be consumed. For
   example trying to parse
 
-  @{ML_response_fake [display,gray] 
+  @{ML_matchresult_fake [display,gray] 
   "($$ \"x\") (Symbol.explode \"world\")" 
   "Exception FAIL raised"}
   
@@ -91,13 +91,13 @@
   by the programmer (for example to handle them).
 
   In the examples above we use the function @{ML_ind explode in Symbol} from the
-  structure @{ML_struct Symbol}, instead of the more standard library function
+  structure @{ML_structure Symbol}, instead of the more standard library function
   @{ML_ind explode in String}, for obtaining an input list for the parser. The reason is
   that @{ML explode in Symbol} is aware of character
   sequences, for example \<open>\<foo>\<close>, that have a special meaning in
   Isabelle. To see the difference consider
 
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "let 
   val input = \"\<foo> bar\"
 in
@@ -113,7 +113,7 @@
   following parser either consumes an @{text [quotes] "h"} or a @{text
   [quotes] "w"}:
 
-@{ML_response [display,gray] 
+@{ML_matchresult [display,gray] 
 "let 
   val hw = Scan.one (fn x => x = \"h\" orelse x = \"w\")
   val input1 = Symbol.explode \"hello\"
@@ -127,7 +127,7 @@
   For example parsing \<open>h\<close>, \<open>e\<close> and \<open>l\<close> (in this 
   order) you can achieve by:
 
-  @{ML_response [display,gray] 
+  @{ML_matchresult [display,gray] 
   "($$ \"h\" -- $$ \"e\" -- $$ \"l\") (Symbol.explode \"hello\")"
   "(((\"h\", \"e\"), \"l\"), [\"l\", \"o\"])"}
 
@@ -136,7 +136,7 @@
   If, as in the previous example, you want to parse a particular string, 
   then you can use the function @{ML_ind this_string in Scan}.
 
-  @{ML_response [display,gray] 
+  @{ML_matchresult [display,gray] 
   "Scan.this_string \"hell\" (Symbol.explode \"hello\")"
   "(\"hell\", [\"o\"])"}
 
@@ -146,7 +146,7 @@
   result of \<open>q\<close>. For example:
 
 
-@{ML_response [display,gray] 
+@{ML_matchresult [display,gray] 
 "let 
   val hw = $$ \"h\" || $$ \"w\"
   val input1 = Symbol.explode \"hello\"
@@ -162,7 +162,7 @@
   one that @{ML_ind "|--" in Scan} and @{ML_ind "--|" in Scan} ``point'' away.
   For example:
   
-@{ML_response [display,gray]
+@{ML_matchresult [display,gray]
 "let 
   val just_e = $$ \"h\" |-- $$ \"e\" 
   val just_h = $$ \"h\" --| $$ \"e\" 
@@ -176,7 +176,7 @@
   \<open>p\<close>, in case it succeeds; otherwise it returns 
   the default value \<open>x\<close>. For example:
 
-@{ML_response [display,gray]
+@{ML_matchresult [display,gray]
 "let 
   val p = Scan.optional ($$ \"h\") \"x\"
   val input1 = Symbol.explode \"hello\"
@@ -189,7 +189,7 @@
   The function @{ML_ind option in Scan} works similarly, except no default value can
   be given. Instead, the result is wrapped as an \<open>option\<close>-type. For example:
 
-@{ML_response [display,gray]
+@{ML_matchresult [display,gray]
 "let 
   val p = Scan.option ($$ \"h\")
   val input1 = Symbol.explode \"hello\"
@@ -201,7 +201,7 @@
   The function @{ML_ind ahead in Scan} parses some input, but leaves the original
   input unchanged. For example:
 
-  @{ML_response [display,gray]
+  @{ML_matchresult [display,gray]
   "Scan.ahead (Scan.this_string \"foo\") (Symbol.explode \"foo\")" 
   "(\"foo\", [\"f\", \"o\", \"o\"])"} 
 
@@ -230,20 +230,20 @@
 
   on @{text [quotes] "hello"}, the parsing succeeds
 
-  @{ML_response [display,gray] 
+  @{ML_matchresult [display,gray] 
   "(!! (fn _ => fn _ => \"foo\") ($$ \"h\")) (Symbol.explode \"hello\")" 
   "(\"h\", [\"e\", \"l\", \"l\", \"o\"])"}
 
   but if you invoke it on @{text [quotes] "world"}
 
-  @{ML_response_fake [display,gray] "(!! (fn _ => fn _ => \"foo\") ($$ \"h\")) (Symbol.explode \"world\")"
+  @{ML_matchresult_fake [display,gray] "(!! (fn _ => fn _ => \"foo\") ($$ \"h\")) (Symbol.explode \"world\")"
                                "Exception ABORT raised"}
 
   then the parsing aborts and the error message \<open>foo\<close> is printed. In order to
   see the error message properly, you need to prefix the parser with the function 
   @{ML_ind error in Scan}. For example:
 
-  @{ML_response_fake [display,gray] 
+  @{ML_matchresult_fake [display,gray] 
   "Scan.error (!! (fn _ => fn _ => \"foo\") ($$ \"h\")) (Symbol.explode \"world\")"
   "Exception Error \"foo\" raised"}
 
@@ -269,12 +269,12 @@
   @{text [quotes] "h"}, @{text [quotes] "e"} and @{text [quotes] "w"}, and 
   the input @{text [quotes] "holle"} 
 
-  @{ML_response_fake [display,gray] "Scan.error (p_followed_by_q \"h\" \"e\" \"w\") (Symbol.explode \"holle\")"
+  @{ML_matchresult_fake [display,gray] "Scan.error (p_followed_by_q \"h\" \"e\" \"w\") (Symbol.explode \"holle\")"
                                "Exception ERROR \"h is not followed by e\" raised"} 
 
   produces the correct error message. Running it with
  
-  @{ML_response [display,gray] "Scan.error (p_followed_by_q \"h\" \"e\" \"w\") (Symbol.explode \"wworld\")"
+  @{ML_matchresult [display,gray] "Scan.error (p_followed_by_q \"h\" \"e\" \"w\") (Symbol.explode \"wworld\")"
                           "((\"w\", \"w\"), [\"o\", \"r\", \"l\", \"d\"])"}
   
   yields the expected parsing. 
@@ -282,7 +282,7 @@
   The function @{ML "Scan.repeat p" for p} will apply a parser \<open>p\<close> as 
   long as it succeeds. For example:
   
-  @{ML_response [display,gray] "Scan.repeat ($$ \"h\") (Symbol.explode \"hhhhello\")" 
+  @{ML_matchresult [display,gray] "Scan.repeat ($$ \"h\") (Symbol.explode \"hhhhello\")" 
                 "([\"h\", \"h\", \"h\", \"h\"], [\"e\", \"l\", \"l\", \"o\"])"}
   
   Note that @{ML_ind repeat in Scan} stores the parsed items in a list. The function
@@ -294,7 +294,7 @@
   the wrapper @{ML_ind finite in Scan} and the ``stopper-token'' 
   @{ML_ind stopper in Symbol}. With them you can write:
   
-  @{ML_response [display,gray] "Scan.finite Symbol.stopper (Scan.repeat ($$ \"h\")) (Symbol.explode \"hhhh\")" 
+  @{ML_matchresult [display,gray] "Scan.finite Symbol.stopper (Scan.repeat ($$ \"h\")) (Symbol.explode \"hhhh\")" 
                 "([\"h\", \"h\", \"h\", \"h\"], [])"}
 
   The function @{ML stopper in Symbol} is the ``end-of-input'' indicator for parsing strings;
@@ -304,7 +304,7 @@
   The function @{ML_ind repeat in Scan} can be used with @{ML_ind one in Scan} to read any 
   string as in
 
-  @{ML_response [display,gray] 
+  @{ML_matchresult [display,gray] 
 "let 
    val p = Scan.repeat (Scan.one Symbol.not_eof)
    val input = Symbol.explode \"foo bar foo\"
@@ -319,12 +319,12 @@
   The function @{ML_ind unless in Scan} takes two parsers: if the first one can 
   parse the input, then the whole parser fails; if not, then the second is tried. Therefore
   
-  @{ML_response_fake_both [display,gray] "Scan.unless ($$ \"h\") ($$ \"w\") (Symbol.explode \"hello\")"
+  @{ML_matchresult_fake_both [display,gray] "Scan.unless ($$ \"h\") ($$ \"w\") (Symbol.explode \"hello\")"
                                "Exception FAIL raised"}
 
   fails, while
 
-  @{ML_response [display,gray] "Scan.unless ($$ \"h\") ($$ \"w\") (Symbol.explode \"world\")"
+  @{ML_matchresult [display,gray] "Scan.unless ($$ \"h\") ($$ \"w\") (Symbol.explode \"world\")"
                           "(\"w\",[\"o\", \"r\", \"l\", \"d\"])"}
 
   succeeds. 
@@ -333,7 +333,7 @@
   be combined to read any input until a certain marker symbol is reached. In the 
   example below the marker symbol is a @{text [quotes] "*"}.
 
-  @{ML_response [display,gray]
+  @{ML_matchresult [display,gray]
 "let 
   val p = Scan.repeat (Scan.unless ($$ \"*\") (Scan.one Symbol.not_eof))
   val input1 = Symbol.explode \"fooooo\"
@@ -352,7 +352,7 @@
   first the parser \<open>p\<close> and upon successful completion applies the 
   function \<open>f\<close> to the result. For example
 
-@{ML_response [display,gray]
+@{ML_matchresult [display,gray]
 "let 
   fun double (x, y) = (x ^ x, y ^ y) 
   val parser = $$ \"h\" -- $$ \"e\"
@@ -363,7 +363,7 @@
 
   doubles the two parsed input strings; or
 
-  @{ML_response [display,gray] 
+  @{ML_matchresult [display,gray] 
 "let 
    val p = Scan.repeat (Scan.one Symbol.not_eof)
    val input = Symbol.explode \"foo bar foo\" 
@@ -379,7 +379,7 @@
   the given parser to the second component of the pair and leaves the  first component 
   untouched. For example
 
-@{ML_response [display,gray]
+@{ML_matchresult [display,gray]
 "Scan.lift ($$ \"h\" -- $$ \"e\") (1, Symbol.explode \"hello\")"
 "((\"h\", \"e\"), (1, [\"l\", \"l\", \"o\"]))"}
 
@@ -425,7 +425,7 @@
   because of the mutual recursion, this parser will immediately run into a
   loop, even if it is called without any input. For example
 
-  @{ML_response_fake_both [display, gray]
+  @{ML_matchresult_fake_both [display, gray]
   "parse_tree \"A\""
   "*** Exception- TOPLEVEL_ERROR raised"}
 
@@ -449,7 +449,7 @@
   some string is  applied for the argument \<open>xs\<close>. This gives us 
   exactly the parser we wanted. An example is as follows:
 
-  @{ML_response [display, gray]
+  @{ML_matchresult [display, gray]
   "let
   val input = Symbol.explode \"(A,((A))),A\"
 in
@@ -487,11 +487,11 @@
 
   \begin{readmore}
   The parser functions for the theory syntax are contained in the structure
-  @{ML_struct Parse} defined in the file @{ML_file  "Pure/Isar/parse.ML"}.
+  @{ML_structure Parse} defined in the file @{ML_file  "Pure/Isar/parse.ML"}.
   The definition for tokens is in the file @{ML_file "Pure/Isar/token.ML"}.
   \end{readmore}
 
-  The structure @{ML_struct  Token} defines several kinds of tokens (for
+  The structure @{ML_structure  Token} defines several kinds of tokens (for
   example @{ML_ind Ident in Token} for identifiers, @{ML Keyword in
   Token} for keywords and @{ML_ind Command in Token} for commands). Some
   token parsers take into account the kind of tokens. The first example shows
@@ -502,7 +502,7 @@
   messages. The following code
 
 
-@{ML_response_fake [display,gray] \<open>Token.explode 
+@{ML_matchresult_fake [display,gray] \<open>Token.explode 
   (Thy_Header.get_keywords' @{context}) Position.none "hello world"\<close>
 \<open>[Token (_,(Ident, "hello"),_), 
  Token (_,(Space, " "),_), 
@@ -524,7 +524,7 @@
 text \<open>
   then lexing @{text [quotes] "hello world"} will produce
 
-  @{ML_response_fake [display,gray] "Token.explode 
+  @{ML_matchresult_fake [display,gray] "Token.explode 
   (Thy_Header.get_keywords' @{context}) Position.none \"hello world\"" 
 "[Token (_,(Keyword, \"hello\"),_), 
  Token (_,(Space, \" \"),_), 
@@ -535,7 +535,7 @@
   functions @{ML filter} and @{ML_ind is_proper in Token} to do this. 
   For example:
 
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "let
    val input = Token.explode (Thy_Header.get_keywords' @{context}) Position.none \"hello world\"
 in
@@ -553,7 +553,7 @@
 text \<open>
   If you now parse
 
-@{ML_response_fake [display,gray] 
+@{ML_matchresult_fake [display,gray] 
 "filtered_input \"inductive | for\"" 
 "[Token (_,(Command, \"inductive\"),_), 
  Token (_,(Keyword, \"|\"),_), 
@@ -568,7 +568,7 @@
 
   The parser @{ML_ind "$$$" in Parse} parses a single keyword. For example:
 
-@{ML_response [display,gray]
+@{ML_matchresult [display,gray]
 "let 
   val input1 = filtered_input \"where for\"
   val input2 = filtered_input \"| in\"
@@ -580,7 +580,7 @@
   Any non-keyword string can be parsed with the function @{ML_ind reserved in Parse}.
   For example:
 
-  @{ML_response [display,gray]
+  @{ML_matchresult [display,gray]
 "let 
   val p = Parse.reserved \"bar\"
   val input = filtered_input \"bar\"
@@ -591,7 +591,7 @@
 
   Like before, you can sequentially connect parsers with @{ML "--"}. For example: 
 
-@{ML_response [display,gray]
+@{ML_matchresult [display,gray]
 "let 
   val input = filtered_input \"| in\"
 in 
@@ -603,7 +603,7 @@
   list of items recognised by the parser \<open>p\<close>, where the items being parsed
   are separated by the string \<open>s\<close>. For example:
 
-@{ML_response [display,gray]
+@{ML_matchresult [display,gray]
 "let 
   val input = filtered_input \"in | in | in foo\"
 in 
@@ -618,7 +618,7 @@
   wrapper @{ML Scan.finite}. This time, however, we have to use the
   ``stopper-token'' @{ML Token.stopper}. We can write:
 
-@{ML_response [display,gray]
+@{ML_matchresult [display,gray]
 "let 
   val input = filtered_input \"in | in | in\"
   val p = Parse.enum \"|\" (Parse.$$$ \"in\")
@@ -639,7 +639,7 @@
   "Parse.!!!"} is fixed to be @{text [quotes] "Outer syntax error"}
   together with a relatively precise description of the failure. For example:
 
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "let 
   val input = filtered_input \"in |\"
   val parse_bar_then_in = Parse.$$$ \"|\" -- Parse.$$$ \"in\"
@@ -671,7 +671,7 @@
 text \<open>
   where we pretend the parsed string starts on line 7. An example is
 
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "filtered_input' \"foo \\n bar\""
 "[Token ((\"foo\", ({line=7, end_line=7}, {line=7})), (Ident, \"foo\"), _),
  Token ((\"bar\", ({line=8, end_line=8}, {line=8})), (Ident, \"bar\"), _)]"}
@@ -682,7 +682,7 @@
   By using the parser @{ML position in Parse} you can access the token 
   position and return it as part of the parser result. For example
 
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "let
   val input = filtered_input' \"where\"
 in 
@@ -753,7 +753,7 @@
   for terms and  types: you can just call the predefined parsers. Terms can 
   be parsed using the function @{ML_ind term in Parse}. For example:
 
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "let 
   val input = Token.explode (Thy_Header.get_keywords' @{context}) Position.none \"foo\"
 in 
@@ -765,11 +765,11 @@
   The function @{ML_ind prop in Parse} is similar, except that it gives a different
   error message, when parsing fails. As you can see, the parser not just returns 
   the parsed string, but also some markup information. You can decode the
-  information with the function @{ML_ind parse in YXML} in @{ML_struct YXML}. 
+  information with the function @{ML_ind parse in YXML} in @{ML_structure YXML}. 
   The result of the decoding is an XML-tree. You can see better what is going on if
   you replace @{ML Position.none} by @{ML "Position.line 42"}, say:
 
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "let 
   val input = Token.explode (Thy_Header.get_keywords' @{context}) (Position.line 42) \"foo\"
 in 
@@ -832,7 +832,7 @@
   To see what the parser returns, let us parse the string corresponding to the 
   definition of @{term even} and @{term odd}:
 
-@{ML_response [display,gray]
+@{ML_matchresult [display,gray]
 "let
   val input = filtered_input
      (\"even and odd \" ^  
@@ -868,7 +868,7 @@
   \<open>\"int \<Rightarrow> bool\"\<close> in order to properly escape the double quotes
   in the compound type.}
 
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "let
   val input = filtered_input 
         \"foo::\\\"int \<Rightarrow> bool\\\" and bar::nat (\\\"BAR\\\" 100) and blonk\"
@@ -899,7 +899,7 @@
   parsed by @{ML_ind prop in Parse}. However, they can include an optional
   theorem name plus some attributes. For example
 
-@{ML_response [display,gray] "let 
+@{ML_matchresult [display,gray] "let 
   val input = filtered_input \"foo_lemma[intro,dest!]:\"
   val ((name, attrib), _) = parse (Parse_Spec.thm_name \":\") input 
 in 
@@ -1118,7 +1118,7 @@
   The first problem for \isacommand{foobar\_proof} is to parse some
   text as ML-source and then interpret it as an Isabelle term using
   the ML-runtime.  For the parsing part, we can use the function
-  @{ML_ind "ML_source" in Parse} in the structure @{ML_struct
+  @{ML_ind "ML_source" in Parse} in the structure @{ML_structure
   Parse}. For running the ML-interpreter we need the following
   scaffolding code.
 \<close>
@@ -1158,7 +1158,7 @@
   In Line 12, we implement a parser that first reads in an optional lemma name (terminated 
   by ``:'') and then some ML-code. The function in Lines 5 to 10 takes the ML-text
   and lets the ML-runtime evaluate it using the function @{ML_ind value in Code_Runtime}
-  in the structure @{ML_struct Code_Runtime}.  Once the ML-text has been turned into a term, 
+  in the structure @{ML_structure Code_Runtime}.  Once the ML-text has been turned into a term, 
   the function @{ML theorem in Proof} opens a corresponding proof-state. This function takes the
   function \<open>after_qed\<close> as argument, whose purpose is to store the theorem
   (once it is proven) under the given name \<open>thm_name\<close>.
--- a/ProgTutorial/Readme.thy	Tue May 14 17:45:13 2019 +0200
+++ b/ProgTutorial/Readme.thy	Thu May 16 19:56:12 2019 +0200
@@ -54,7 +54,7 @@
   \end{tabular}
   \end{center}
 
-  \item[$\bullet$] \<open>@{ML_response "expr" "pat"}\<close> should be used to
+  \item[$\bullet$] \<open>@{ML_matchresult "expr" "pat"}\<close> should be used to
   display ML-expressions and their response.  The first expression is checked
   like in the antiquotation \<open>@{ML "expr"}\<close>; the second is a pattern
   that specifies the result the first expression produces. This pattern can
@@ -63,8 +63,8 @@
 
   \begin{center}\small
   \begin{tabular}{l}
-  \<open>@{ML_response "1+2" "3"}\<close>\\
-  \<open>@{ML_response "(1+2,3)" "(3,\<dots>)"}\<close>
+  \<open>@{ML_matchresult "1+2" "3"}\<close>\\
+  \<open>@{ML_matchresult "(1+2,3)" "(3,\<dots>)"}\<close>
   \end{tabular}
   \end{center}
 
@@ -72,8 +72,8 @@
 
   \begin{center}\small
   \begin{tabular}{p{3cm}p{3cm}}
-  @{ML_response "1+2" "3"} &  
-  @{ML_response "(1+2,3)" "(3,\<dots>)"}
+  @{ML_matchresult "1+2" "3"} &  
+  @{ML_matchresult "(1+2,3)" "(3,\<dots>)"}
   \end{tabular}
   \end{center}
   
@@ -81,17 +81,17 @@
   constructed: it does not work when the code produces an exception or returns 
   an abstract datatype (like @{ML_type thm} or @{ML_type cterm}).
 
-  \item[$\bullet$] \<open>@{ML_response_fake "expr" "pat"}\<close> works just
-  like the antiquotation \<open>@{ML_response "expr" "pat"}\<close> above,
+  \item[$\bullet$] \<open>@{ML_matchresult_fake "expr" "pat"}\<close> works just
+  like the antiquotation \<open>@{ML_matchresult "expr" "pat"}\<close> above,
   except that the result-specification is not checked. Use this antiquotation
   when the result cannot be constructed or the code generates an
   exception. Examples are:
 
   \begin{center}\small
   \begin{tabular}{ll}
-  \<open>@{ML_response_fake\<close> & \<open>"cterm_of @{theory} @{term \"a + b = c\"}"}\<close>\\
+  \<open>@{ML_matchresult_fake\<close> & \<open>"cterm_of @{theory} @{term \"a + b = c\"}"}\<close>\\
                                & \<open>"a + b = c"}\<close>\smallskip\\ 
-  \<open>@{ML_response_fake\<close> & \<open>"($$ \"x\") (explode \"world\")"\<close>\\ 
+  \<open>@{ML_matchresult_fake\<close> & \<open>"($$ \"x\") (explode \"world\")"\<close>\\ 
                                & \<open>"Exception FAIL raised"}\<close>
   \end{tabular}
   \end{center}
@@ -100,21 +100,21 @@
 
   \begin{center}\small
   \begin{tabular}{p{7.2cm}}
-  @{ML_response_fake "cterm_of @{theory} @{term \"a + b = c\"}" "a + b = c"}\smallskip\\
-  @{ML_response_fake "($$ \"x\") (explode \"world\")" "Exception FAIL raised"}
+  @{ML_matchresult_fake "cterm_of @{theory} @{term \"a + b = c\"}" "a + b = c"}\smallskip\\
+  @{ML_matchresult_fake "($$ \"x\") (explode \"world\")" "Exception FAIL raised"}
   \end{tabular}
   \end{center}
   
   This output mimics to some extent what the user sees when running the
   code.
 
-  \item[$\bullet$] \<open>@{ML_response_fake_both "expr" "pat"}\<close> can be
+  \item[$\bullet$] \<open>@{ML_matchresult_fake_both "expr" "pat"}\<close> can be
   used to show erroneous code. Neither the code nor the response will be
   checked. An example is:
 
   \begin{center}\small
   \begin{tabular}{ll}
-  \<open>@{ML_response_fake_both\<close> & \<open>"@{cterm \"1 + True\"}"\<close>\\
+  \<open>@{ML_matchresult_fake_both\<close> & \<open>"@{cterm \"1 + True\"}"\<close>\\
                                     & \<open>"Type unification failed \<dots>"}\<close>
   \end{tabular}
   \end{center}
--- a/ProgTutorial/Recipes/ExternalSolver.thy	Tue May 14 17:45:13 2019 +0200
+++ b/ProgTutorial/Recipes/ExternalSolver.thy	Thu May 16 19:56:12 2019 +0200
@@ -19,7 +19,7 @@
 
   For example, consider running an ordinary shell command:
 
-  @{ML_response [display,gray] 
+  @{ML_matchresult [display,gray] 
     "Isabelle_System.bash_output \"echo Hello world!\"" "(\"Hello world!\\n\", 0)"}
 
   Note that it works also fine with timeouts (see Recipe~\ref{rec:timeout}
@@ -27,7 +27,7 @@
   properly. For example, the following expression takes only approximately
   one second:
 
-  @{ML_response_fake [display,gray] 
+  @{ML_matchresult_fake [display,gray] 
     "Timeout.apply (Time.fromSeconds 1) Isabelle_System.bash_output \"sleep 30\"
      handle TIMEOUT => (\"timeout\", ~1)" "(\"timeout\", ~1)"}
 
@@ -46,7 +46,7 @@
 
   In Isabelle, this application may now be executed by
 
-  @{ML_response_fake [display,gray] "Isabelle_System.bash_output \"$FOO\"" "\<dots>"}
+  @{ML_matchresult_fake [display,gray] "Isabelle_System.bash_output \"$FOO\"" "\<dots>"}
 \<close>
 
 
--- a/ProgTutorial/Recipes/Introspection.thy	Tue May 14 17:45:13 2019 +0200
+++ b/ProgTutorial/Recipes/Introspection.thy	Thu May 16 19:56:12 2019 +0200
@@ -13,7 +13,7 @@
 
   To introspect a theorem, let us define the following three functions that 
   analyse the @{ML_type_ind proof_body} data-structure from the structure 
-  @{ML_struct Proofterm}.
+  @{ML_structure Proofterm}.
 \<close>
 
 ML %grayML\<open>fun pthms_of (PBody {thms, ...}) = map #2 thms 
@@ -52,7 +52,7 @@
   extracting this information.  Let us first extract the name of the
   established theorem. This can be done with
 
-  @{ML_response [display,gray]
+  @{ML_matchresult [display,gray]
   "@{thm my_conjIa}
   |> Thm.proof_body_of
   |> get_names"
@@ -67,7 +67,7 @@
   and @{thm [source] conjunct2}. We can obtain them by descending into the
   first level of the proof-tree, as follows.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "@{thm my_conjIa}
   |> Thm.proof_body_of
   |> get_pbodies
@@ -82,7 +82,7 @@
   proof in Isabelle. Note also that applications of \<open>assumption\<close> do not
   count as a separate theorem, as you can see in the following code.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "@{thm my_conjIb}
   |> Thm.proof_body_of
   |> get_pbodies
@@ -96,7 +96,7 @@
   and @{thm [source] my_conjIb}, as can be seen by the theorems that
   are returned for @{thm [source] my_conjIc}.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "@{thm my_conjIc}
   |> Thm.proof_body_of
   |> get_pbodies
@@ -112,7 +112,7 @@
   means we obtain the theorems that are used in order to prove
   @{thm [source] conjunct1}, @{thm [source] conjunct2} and @{thm [source] conjI}.
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "@{thm my_conjIa}
   |> Thm.proof_body_of
   |> get_pbodies
@@ -124,6 +124,9 @@
   "[\"\", \"Pure.protectD\",
  \"Pure.protectI\"]"}
 \<close>
+
+
+
 text \<open>
   \begin{readmore} 
   The data-structure @{ML_type proof_body} is implemented
--- a/ProgTutorial/Recipes/Oracle.thy	Tue May 14 17:45:13 2019 +0200
+++ b/ProgTutorial/Recipes/Oracle.thy	Thu May 16 19:56:12 2019 +0200
@@ -76,19 +76,19 @@
 text \<open>
   Here is the string representation of the term @{term "p = (q = p)"}:
 
-  @{ML_response 
+  @{ML_matchresult 
     "translate @{term \"p = (q = p)\"}" 
     "\" ( p <=> ( q <=> p ) ) \""}
 
   Let us check, what the solver returns when given a tautology:
 
-  @{ML_response 
+  @{ML_matchresult 
     "IffSolver.decide (translate @{term \"p = (q = p) = q\"})"
     "true"}
 
   And here is what it returns for a formula which is not valid:
 
-  @{ML_response 
+  @{ML_matchresult 
     "IffSolver.decide (translate @{term \"p = (q = p)\"})" 
     "false"}
 \<close>
@@ -113,7 +113,7 @@
 text \<open>
   Here is what we get when applying the oracle:
 
-  @{ML_response_fake "iff_oracle @{cprop \"p = (p::bool)\"}" "p = p"}
+  @{ML_matchresult_fake "iff_oracle @{cprop \"p = (p::bool)\"}" "p = p"}
 
   (FIXME: is there a better way to present the theorem?)
 
--- a/ProgTutorial/Recipes/Sat.thy	Tue May 14 17:45:13 2019 +0200
+++ b/ProgTutorial/Recipes/Sat.thy	Thu May 16 19:56:12 2019 +0200
@@ -42,7 +42,7 @@
   (appropriately updated) in the result. In the case above the
   input table is empty (i.e.~@{ML Termtab.empty}) and the output table is
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "Termtab.dest table"
   "[(Free (\"A\", \"bool\"), 1), (Free (\"B\", \"bool\"), 2)]"}
 
@@ -58,7 +58,7 @@
   returns @{ML "BoolVar 1" in Prop_Logic} for  @{ML pform'} and the table 
   @{ML table'} is:
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "map (apfst (Syntax.string_of_term @{context})) (Termtab.dest table')"
   "(\<forall>x. P x, 1)"}
   
@@ -68,7 +68,7 @@
   Having produced a propositional formula, you can now call the SAT solvers 
   with the function @{ML "SAT_Solver.invoke_solver"}. For example
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "SAT_Solver.invoke_solver \"minisat\" pform"
   "SAT_Solver.SATISFIABLE assg"}
 \<close>
@@ -77,7 +77,7 @@
   determines that the formula @{ML pform} is satisfiable. If we inspect
   the returned function \<open>assg\<close>
 
-  @{ML_response [display,gray]
+  @{ML_matchresult [display,gray]
 "let
   val SAT_Solver.SATISFIABLE assg = SAT_Solver.invoke_solver \"auto\" pform
 in 
--- a/ProgTutorial/Recipes/TimeLimit.thy	Tue May 14 17:45:13 2019 +0200
+++ b/ProgTutorial/Recipes/TimeLimit.thy	Thu May 16 19:56:12 2019 +0200
@@ -21,12 +21,12 @@
 
   Now the call 
 
-  @{ML_response_fake_both [display,gray] "ackermann (4, 12)" "\<dots>"}
+  @{ML_matchresult_fake_both [display,gray] "ackermann (4, 12)" "\<dots>"}
 
   takes a bit of time before it finishes. To avoid this, the call can be encapsulated 
   in a time limit of five seconds. For this you have to write
 
-@{ML_response_fake_both [display,gray]
+@{ML_matchresult_fake_both [display,gray]
 "Timeout.apply (Time.fromSeconds 5) ackermann (4, 12) 
   handle TIMEOUT => ~1"
 "~1"}
@@ -40,7 +40,7 @@
 
 \begin{readmore}
    The function @{ML "apply" in Timeout} is defined in the structure
-  @{ML_struct Timeout} which can be found in the file 
+  @{ML_structure Timeout} which can be found in the file 
   @{ML_file "Pure/concurrent/timeout.ML"}.
 \end{readmore}
 
--- a/ProgTutorial/Solutions.thy	Tue May 14 17:45:13 2019 +0200
+++ b/ProgTutorial/Solutions.thy	Thu May 16 19:56:12 2019 +0200
@@ -55,7 +55,7 @@
   \<open>-1\<close> to account for the deleted quantifier. An example is 
   as follows:
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "@{prop \"\<forall>x y z. P x = P z\"}
   |> kill_trivial_quantifiers
   |> pretty_term @{context} 
@@ -110,7 +110,7 @@
   @{ML scan_all} returns a string, instead of the pair a parser would
   normally return. For example:
 
-  @{ML_response [display,gray]
+  @{ML_matchresult [display,gray]
 "let
   val input1 = (Symbol.explode \"foo bar\")
   val input2 = (Symbol.explode \"foo (*test*) bar (*test*)\")
@@ -330,7 +330,7 @@
 text \<open>
   This function generates for example:
 
-  @{ML_response_fake [display,gray] 
+  @{ML_matchresult_fake [display,gray] 
   "pwriteln (pretty_term @{context} (term_tree 2))" 
   "(1 + 2) + (3 + 4)"} 
 
--- a/ProgTutorial/Tactical.thy	Tue May 14 17:45:13 2019 +0200
+++ b/ProgTutorial/Tactical.thy	Thu May 16 19:56:12 2019 +0200
@@ -47,7 +47,7 @@
 text \<open>
   This proof translates to the following ML-code.
   
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "let
   val ctxt = @{context}
   val goal = @{prop \"P \<or> Q \<Longrightarrow> Q \<or> P\"}
@@ -396,7 +396,7 @@
 
 text \<open>
   A simple tactic for easy discharge of any proof obligations, even difficult ones, is 
-  @{ML_ind cheat_tac in Skip_Proof} in the structure @{ML_struct Skip_Proof}. 
+  @{ML_ind cheat_tac in Skip_Proof} in the structure @{ML_structure Skip_Proof}. 
   This tactic corresponds to the Isabelle command \isacommand{sorry} and is 
   sometimes useful during the development of tactics.
 \<close>
@@ -763,14 +763,14 @@
   constraints is by pre-instantiating theorems with other theorems. 
   The function @{ML_ind RS in Drule}, for example, does this.
   
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "@{thm disjI1} RS @{thm conjI}" "\<lbrakk>?P1; ?Q\<rbrakk> \<Longrightarrow> (?P1 \<or> ?Q1) \<and> ?Q"}
 
   In this example it instantiates the first premise of the \<open>conjI\<close>-theorem 
   with the theorem \<open>disjI1\<close>. If the instantiation is impossible, as in the 
   case of
 
-  @{ML_response_fake_both [display,gray]
+  @{ML_matchresult_fake_both [display,gray]
   "@{thm conjI} RS @{thm mp}" 
 "*** Exception- THM (\"RSN: no unifiers\", 1, 
 [\"\<lbrakk>?P; ?Q\<rbrakk> \<Longrightarrow> ?P \<and> ?Q\", \"\<lbrakk>?P \<longrightarrow> ?Q; ?P\<rbrakk> \<Longrightarrow> ?Q\"]) raised"}
@@ -782,7 +782,7 @@
   If you want to instantiate more than one premise of a theorem, you can use 
   the function @{ML_ind MRS in Drule}:
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "[@{thm disjI1}, @{thm disjI2}] MRS @{thm conjI}" 
   "\<lbrakk>?P2; ?Q1\<rbrakk> \<Longrightarrow> (?P2 \<or> ?Q2) \<and> (?P1 \<or> ?Q1)"}
 
@@ -791,7 +791,7 @@
   example in the code below, every theorem in the second list is 
   instantiated with every theorem in the first.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let
   val list1 = [@{thm impI}, @{thm disjI2}]
   val list2 = [@{thm conjI}, @{thm disjI1}]
@@ -1293,7 +1293,7 @@
   tactic should explore all possibilities of applying these rules to a
   propositional formula until a goal state is reached in which all subgoals
   are discharged. For this you can use the tactic combinator @{ML_ind
-  DEPTH_SOLVE in Search} in the structure @{ML_struct Search}.
+  DEPTH_SOLVE in Search} in the structure @{ML_structure Search}.
   \end{exercise}
 
   \begin{exercise}
@@ -1359,7 +1359,7 @@
   it can only deal with rewriting rules whose left-hand sides are higher order 
   pattern (see Section \ref{sec:univ} on unification). Whether a term is a pattern
   or not can be tested with the function @{ML_ind pattern in Pattern} from
-  the structure @{ML_struct Pattern}. If a rule is not a pattern and you want
+  the structure @{ML_structure Pattern}. If a rule is not a pattern and you want
   to use it for rewriting, then you have to use simprocs or conversions, both 
   of which we shall describe in the next section.
 
@@ -1458,7 +1458,7 @@
   prints out some parts of a simpset. If you use it to print out the components of the
   empty simpset, i.e., @{ML_ind empty_ss in Raw_Simplifier}
   
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "print_ss @{context} empty_ss"
 "Simplification rules:
 Congruences rules:
@@ -1474,7 +1474,7 @@
 text \<open>
   Printing out the components of the simpset then gives:
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "print_ss @{context} (Raw_Simplifier.simpset_of ss1)"
 "Simplification rules:
   ??.unknown: A - B \<inter> C \<equiv> A - B \<union> (A - C)
@@ -1491,7 +1491,7 @@
 text \<open>
   gives
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "print_ss @{context} (Raw_Simplifier.simpset_of ss2)"
 "Simplification rules:
   ??.unknown: A - B \<inter> C \<equiv> A - B \<union> (A - C)
@@ -1508,7 +1508,7 @@
   In the context of HOL, the first really useful simpset is @{ML_ind
   HOL_basic_ss in Simpdata}. While printing out the components of this simpset
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "print_ss @{context} HOL_basic_ss"
 "Simplification rules:
 Congruences rules:
@@ -1542,7 +1542,7 @@
   already many useful simplification and congruence rules for the logical 
   connectives in HOL. 
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "print_ss @{context} HOL_ss"
 "Simplification rules:
   Pure.triv_forall_equality: (\<And>x. PROP V) \<equiv> PROP V
@@ -2064,21 +2064,21 @@
   is the function @{ML_ind  all_conv in Conv}, which maps a @{ML_type cterm} to an
   instance of the (meta)reflexivity theorem. For example:
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "Conv.all_conv @{cterm \"Foo \<or> Bar\"}"
   "Foo \<or> Bar \<equiv> Foo \<or> Bar"}
 
   Another simple conversion is @{ML_ind  no_conv in Conv} which always raises the 
   exception @{ML CTERM}.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "Conv.no_conv @{cterm True}" 
   "*** Exception- CTERM (\"no conversion\", []) raised"}
 
   A more interesting conversion is the function @{ML_ind  beta_conversion in Thm}: it 
   produces a meta-equation between a term and its beta-normal form. For example
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "let
   val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
   val two = @{cterm \"2::nat\"}
@@ -2096,7 +2096,7 @@
   If we get hold of the ``raw'' representation of the produced theorem, 
   we obtain the expected result.
 
-  @{ML_response [display,gray]
+  @{ML_matchresult [display,gray]
 "let
   val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
   val two = @{cterm \"2::nat\"}
@@ -2111,7 +2111,7 @@
 
 or in the pretty-printed form
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let
    val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
    val two = @{cterm \"2::nat\"}
@@ -2146,7 +2146,7 @@
   It can be used for example to rewrite @{term "True \<and> (Foo \<longrightarrow> Bar)"} 
   to @{term "Foo \<longrightarrow> Bar"}. The code is as follows.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let 
   val ctrm = @{cterm \"True \<and> (Foo \<longrightarrow> Bar)\"}
 in
@@ -2165,7 +2165,7 @@
   combinators. The simplest conversion combinator is @{ML_ind then_conv in Conv}, 
   which applies one conversion after another. For example
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let
   val conv1 = Thm.beta_conversion false
   val conv2 = Conv.rewr_conv @{thm true_conj1}
@@ -2182,7 +2182,7 @@
   The conversion combinator @{ML_ind else_conv in Conv} tries out the 
   first one, and if it does not apply, tries the second. For example 
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let
   val conv = Conv.rewr_conv @{thm true_conj1} else_conv Conv.all_conv
   val ctrm1 = @{cterm \"True \<and> Q\"}
@@ -2199,7 +2199,7 @@
   behaviour can also be achieved with conversion combinator @{ML_ind try_conv in Conv}.
   For example
   
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let
   val conv = Conv.try_conv (Conv.rewr_conv @{thm true_conj1})
   val ctrm = @{cterm \"True \<or> P\"}
@@ -2209,7 +2209,7 @@
   "True \<or> P \<equiv> True \<or> P"}
 
   Rewriting with more than one theorem can be done using the function 
-  @{ML_ind rewrs_conv in Conv} from the structure @{ML_struct Conv}.
+  @{ML_ind rewrs_conv in Conv} from the structure @{ML_structure Conv}.
   
 
   Apart from the function @{ML beta_conversion in Thm}, which is able to fully
@@ -2220,7 +2220,7 @@
   a conversion to the first, respectively second, argument of an application.  
   For example
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let 
   val conv = Conv.arg_conv (Conv.rewr_conv @{thm true_conj1})
   val ctrm = @{cterm \"P \<or> (True \<and> Q)\"}
@@ -2238,7 +2238,7 @@
   The function @{ML_ind  abs_conv in Conv} applies a conversion under an 
   abstraction. For example:
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let 
   val conv = Conv.rewr_conv @{thm true_conj1} 
   val ctrm = @{cterm \"\<lambda>P. True \<and> (P \<and> Foo)\"}
@@ -2279,7 +2279,7 @@
   to be able to pattern-match the term. To see this 
   conversion in action, consider the following example:
 
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "let
   val conv = true_conj1_conv @{context}
   val ctrm = @{cterm \"distinct [1::nat, x] \<longrightarrow> True \<and> 1 \<noteq> x\"}
@@ -2322,7 +2322,7 @@
   according to the two meta-equations produces two results. Below these
   two results are calculated. 
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "let
   val ctxt = @{context}
   val conv = Conv.try_conv (Conv.rewrs_conv @{thms conj_assoc})
@@ -2358,7 +2358,7 @@
   soon as it found one redex. Here is an example for this conversion:
 
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let
   val ctrm = @{cterm \"if P (True \<and> 1 \<noteq> (2::nat)) 
                         then True \<and> True else True \<and> False\"}
@@ -2382,7 +2382,7 @@
   Using the conversion you can transform this theorem into a 
   new theorem as follows
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let
   val conv = Conv.fconv_rule (true_conj1_conv @{context}) 
   val thm = @{thm foo_test}
@@ -2460,12 +2460,12 @@
   Although both definitions define the same function, the theorems @{thm
   [source] id1_def} and @{thm [source] id2_def} are different meta-equations. However it is
   easy to transform one into the other. The function @{ML_ind abs_def
-  in Drule} from the structure @{ML_struct Drule} can automatically transform 
+  in Drule} from the structure @{ML_structure Drule} can automatically transform 
   theorem @{thm [source] id1_def}
   into @{thm [source] id2_def} by abstracting all variables on the 
   left-hand side in the right-hand side.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "Drule.abs_def @{thm id1_def}"
   "id1 \<equiv> \<lambda>x. x"}
 
@@ -2513,7 +2513,7 @@
   export the context \<open>ctxt'\<close> back to \<open>ctxt\<close> in order to 
   produce meta-variables for the theorem.  An example is as follows.
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
   "unabs_def @{context} @{thm id2_def}"
   "id2 ?x1 \<equiv> ?x1"}
 \<close>
--- a/ProgTutorial/antiquote_setup.ML	Tue May 14 17:45:13 2019 +0200
+++ b/ProgTutorial/antiquote_setup.ML	Thu May 16 19:56:12 2019 +0200
@@ -45,6 +45,21 @@
 fun eval_fn ctxt prep exp =
   ML_Context.eval_in (SOME ctxt) ML_Compiler.flags (Input.pos_of exp) (prep exp)
 
+(* Evaluate an expression and convert its result to a string *)
+fun eval_response ctxt exp =
+let 
+  fun compute exp = 
+     let 
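+       (* wrap exp so that running it raises ERROR carrying the
+          @{make_string} rendering of its value; the handler below
+          recovers that string as the printed result *)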
+       val input = ML_Lex.read "(let val r = " @ ML_Lex.read_source false exp @ 
+                   ML_Lex.read " in (raise ERROR (@{make_string} r); ()) end )"
+       val _ = ML_Context.eval_in (SOME ctxt) ML_Compiler.flags (Input.pos_of exp) input 
+     in ""
+     end 
+in
+  (compute exp 
+   handle ERROR s => s)
+end
+
 (* checks and prints a possibly open expressions, no index *)
 
 fun output_ml ctxt (src, (vs, stru)) =
@@ -55,37 +70,60 @@
   (Scan.optional (Args.$$$ "for" |-- Parse.!!! (Scan.repeat1 Args.name)) [] --
    Scan.option (Args.$$$ "in"  |-- Parse.!!! Args.name))) 
 
+fun output_ml_response ctxt src =
+let 
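+  (* typeset the source text followed by the evaluated result,
+     with every result line prefixed by "> " *)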
+  val res = eval_response ctxt src
+in 
+  OutputTutorial.output ctxt ([Input.source_content src] @ [Library.prefix_lines "> " res])
+end
+
 
 (* checks and prints a single ML-item and produces an index entry *)
-fun output_ml_ind ctxt (txt, stru) =
-  (eval_fn ctxt (ml_val [] stru) (Input.string txt); 
+fun output_ml_ind ctxt (src, stru) =
+let 
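+  (* src is an Input.source (via Args.text_input), so eval_fn gets proper
+     position information; the plain content is still needed for the index entry *)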
+  val txt = Input.source_content src
+in
+  (eval_fn ctxt (ml_val [] stru) src; 
    case (stru, Long_Name.base_name txt, Long_Name.qualifier txt) of
      (NONE, _, "")  => output_indexed ctxt {main = Code txt, minor = NoString} (split_lines txt)
    | (NONE, bn, qn)  => output_indexed ctxt {main = Code bn,  minor = Struct qn} (split_lines txt)
    | (SOME st, _, _) => output_indexed ctxt {main = Code txt, minor = Struct st} (split_lines txt))
+end
 
-val parser_ml_ind = Scan.lift (Args.name --
+val parser_ml_ind = Scan.lift (Args.text_input --
   Scan.option (Args.$$$ "in"  |-- Parse.!!! Args.name))
 
 (* checks and prints structures *)
-fun gen_output_struct outfn ctxt txt = 
-  (eval_fn ctxt ml_struct (Input.string txt); 
+fun gen_output_struct outfn ctxt src = 
+let 
+  val txt = Input.source_content src
+in
+  (eval_fn ctxt ml_struct src; 
    outfn {main = Code txt, minor = Plain "structure"} (split_lines txt))
+end
 
 fun output_struct ctxt = gen_output_struct (K (output ctxt)) ctxt 
 fun output_struct_ind ctxt = gen_output_struct (output_indexed ctxt) ctxt 
 
 (* prints functors; no checks *)
-fun gen_output_funct outfn txt = 
+fun gen_output_funct outfn src = 
+let
+  val txt = Input.source_content src
+in
   (outfn {main = Code txt, minor = Plain "functor"} (split_lines txt))
+end
 
 fun output_funct ctxt = gen_output_funct (K (output ctxt)) 
 fun output_funct_ind ctxt = gen_output_funct (output_indexed ctxt)
 
 (* checks and prints types *)
-fun gen_output_type outfn ctxt txt = 
-  (eval_fn ctxt ml_type (Input.string txt); 
+fun gen_output_type outfn ctxt src = 
+let 
+  val txt = Input.source_content src
+in
+  (eval_fn ctxt ml_type src; 
    outfn {main = Code txt, minor = Plain "type"} (split_lines txt))
+end
 
 fun output_type ctxt = gen_output_type (K (output ctxt)) ctxt
 fun output_type_ind ctxt = gen_output_type (output_indexed ctxt) ctxt 
@@ -109,22 +147,20 @@
     (output ctxt ((split_lines (Input.source_content lhs)) @ 
      (prefix_lines "> " (dots_pat (Input.source_content pat)))))
 
-val single_arg = Scan.lift (Args.name)
+val single_arg = Scan.lift (Args.text_input)
 val two_args   = Scan.lift (Args.text_input -- Args.text_input)
 val test = Scan.lift (Args.name -- Args.name -- Scan.option (Args.$$$ "with"  |-- Args.name))
 
 val ml_setup = 
-  Thy_Output.antiquotation_raw @{binding "ML"} parser_ml output_ml 
+  Thy_Output.antiquotation_raw @{binding "ML"} parser_ml output_ml
+  #> Thy_Output.antiquotation_raw @{binding "ML_response"} single_arg output_ml_response
   #> Thy_Output.antiquotation_raw @{binding "ML_ind"} parser_ml_ind output_ml_ind
-  #> Thy_Output.antiquotation_raw @{binding "ML_type"} single_arg output_type
   #> Thy_Output.antiquotation_raw @{binding "ML_type_ind"} single_arg output_type_ind
-  #> Thy_Output.antiquotation_raw @{binding "ML_struct"} single_arg output_struct
-  #> Thy_Output.antiquotation_raw @{binding "ML_struct_ind"} single_arg output_struct_ind
-  #> Thy_Output.antiquotation_raw @{binding "ML_funct"} single_arg output_funct
-  #> Thy_Output.antiquotation_raw @{binding "ML_funct_ind"} single_arg output_funct_ind
-  #> Thy_Output.antiquotation_raw @{binding "ML_response"} two_args output_response
-  #> Thy_Output.antiquotation_raw @{binding "ML_response_fake"} two_args output_response_fake
-  #> Thy_Output.antiquotation_raw @{binding "ML_response_fake_both"} two_args ouput_response_fake_both
+  #> Thy_Output.antiquotation_raw @{binding "ML_structure_ind"} single_arg output_struct_ind
+  #> Thy_Output.antiquotation_raw @{binding "ML_functor_ind"} single_arg output_funct_ind
+  #> Thy_Output.antiquotation_raw @{binding "ML_matchresult"} two_args output_response
+  #> Thy_Output.antiquotation_raw @{binding "ML_matchresult_fake"} two_args output_response_fake
+  #> Thy_Output.antiquotation_raw @{binding "ML_matchresult_fake_both"} two_args ouput_response_fake_both
 
 (* checks whether a file exists in the Isabelle distribution *)
 fun href_link txt =
@@ -135,10 +171,14 @@
  implode [raw "\\href{", raw path, raw txt, raw "}{", get_word txt, raw "}"]
 end 
 
-fun check_file_exists _ txt =
+fun check_file_exists _ src =
+let
+  val txt = Input.source_content src
+in
   (if File.exists (Path.append (Path.explode ("~~/src")) (Path.explode txt)) 
    then Latex.string (href_link txt)
    else error (implode ["Source file ", quote txt, " does not exist."]))
+end
 
 val ml_file_setup = Thy_Output.antiquotation_raw @{binding "ML_file"} single_arg check_file_exists
 
--- a/ProgTutorial/document/root.bib	Tue May 14 17:45:13 2019 +0200
+++ b/ProgTutorial/document/root.bib	Thu May 16 19:56:12 2019 +0200
@@ -9,6 +9,14 @@
                   delivered on 22nd January 2004 at the School of
                   Computing Science, Middlesex University}
 }
+@Misc{wenzel-technology,
+  author = {M.~Wenzel},
+  title = {Further Scaling of Isabelle Technology},
+  howpublished = {http://sketis.net},
+  month = {April},
+  year = {2018},
+  note = {},
+}
 
 @Book{isa-tutorial,
   author	= {T.~Nipkow and L.~C.~Paulson and M.~Wenzel},
--- a/ProgTutorial/document/root.tex	Tue May 14 17:45:13 2019 +0200
+++ b/ProgTutorial/document/root.tex	Thu May 16 19:56:12 2019 +0200
@@ -146,8 +146,10 @@
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 % for {*  *} in antiquotations
-\newcommand{\isasymverbopen}{\isacharverbatimopen}
-\newcommand{\isasymverbclose}{\isacharverbatimclose}
+%\newcommand{\isasymverbopen}{\isacharverbatimopen}
+%\newcommand{\isasymverbclose}{\isacharverbatimclose}
+\newcommand{\isasymverbopen}{\isacartoucheopen}
+\newcommand{\isasymverbclose}{\isacartoucheclose}
 \newcommand{\isasymfoo}{\isa{{\isacharbackslash}{\isacharless}foo{\isachargreater}}}
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
@@ -165,7 +167,7 @@
 \title{\mbox{}\\[-10ex]
        \includegraphics[scale=0.5]{tutorial-logo.jpg}\\[3ex]
        {\huge\bf The Isabelle Cookbook}\\
-       \mbox{A Gentle Tutorial for Programming on the ML-Level of Isabelle}\\ (draft)}
+       \mbox{A Gentle Tutorial for Programming Isabelle/ML}\\ (draft)}
 
 \author{by Christian Urban with contributions from:\\[2ex] 
         \begin{tabular}{r@{\hspace{1.8mm}}l}
@@ -177,6 +179,7 @@
         Rafal & Kolanski\\
         Alexander & Krauss\\
         Tobias & Nipkow\\
+        Norbert & Schirmer\\
         Andreas & Schropp\\
         Christian & Sternagel\\ 
         \end{tabular}}
--- a/ROOT	Tue May 14 17:45:13 2019 +0200
+++ b/ROOT	Thu May 16 19:56:12 2019 +0200
@@ -1,8 +1,8 @@
 session "Cookbook" in "ProgTutorial" = HOL +
   options [document = pdf, browser_info = false, document_output = ".."]
-  theories [document = false]
+ theories [document = false]
     "Base"
-    "Package/Simple_Inductive_Package"
+    "Package/Simple_Inductive_Package" 
   theories [quick_and_dirty, document = true] 
     "Intro"
     "First_Steps"