changeset:   565:cecd7a941885
parent:      564:6e2479089226
child:       566:6103b0eadbf2
author:      Norbert Schirmer <norbert.schirmer@web.de>
date:        Tue, 14 May 2019 17:10:47 +0200
description: isabelle update_cartouches -t
files:
ProgTutorial/Advanced.thy
ProgTutorial/Appendix.thy
ProgTutorial/Base.thy
ProgTutorial/Essential.thy
ProgTutorial/First_Steps.thy
ProgTutorial/Helper/Command/Command.thy
ProgTutorial/Intro.thy
ProgTutorial/Package/Ind_Code.thy
ProgTutorial/Package/Ind_Extensions.thy
ProgTutorial/Package/Ind_General_Scheme.thy
ProgTutorial/Package/Ind_Interface.thy
ProgTutorial/Package/Ind_Intro.thy
ProgTutorial/Package/Ind_Prelims.thy
ProgTutorial/Parsing.thy
ProgTutorial/Readme.thy
ProgTutorial/Recipes/Antiquotes.thy
ProgTutorial/Recipes/ExternalSolver.thy
ProgTutorial/Recipes/Introspection.thy
ProgTutorial/Recipes/Oracle.thy
ProgTutorial/Recipes/Sat.thy
ProgTutorial/Recipes/TimeLimit.thy
ProgTutorial/Recipes/Timing.thy
ProgTutorial/Recipes/USTypes.thy
ProgTutorial/Solutions.thy
ProgTutorial/Tactical.thy
--- a/ProgTutorial/Advanced.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Advanced.thy	Tue May 14 17:10:47 2019 +0200
@@ -3,9 +3,9 @@
 begin
 
 
-chapter {* Advanced Isabelle\label{chp:advanced} *}
+chapter \<open>Advanced Isabelle\label{chp:advanced}\<close>
 
-text {*
+text \<open>
    \begin{flushright}
   {\em All things are difficult before they are easy.} \\[1ex]
   proverb\\[2ex]
@@ -27,11 +27,11 @@
   other hand, store local data for a task at hand. They act like the
   ``short-term memory'' and there can be many of them that are active in
   parallel.
-*}
+\<close>
 
-section {* Theories\label{sec:theories} *}
+section \<open>Theories\label{sec:theories}\<close>
 
-text {*
+text \<open>
   Theories, as said above, are the most basic layer of abstraction in
   Isabelle. They record information about definitions, syntax declarations, axioms,
   theorems and much more.  For example, if a definition is made, it
@@ -43,21 +43,21 @@
 
   \begin{isabelle}
   \isacommand{print\_theory}\\
-  @{text "> names: Pure Code_Generator HOL \<dots>"}\\
-  @{text "> classes: Inf < type \<dots>"}\\
-  @{text "> default sort: type"}\\
-  @{text "> syntactic types: #prop \<dots>"}\\
-  @{text "> logical types: 'a \<times> 'b \<dots>"}\\
-  @{text "> type arities: * :: (random, random) random \<dots>"}\\
-  @{text "> logical constants: == :: 'a \<Rightarrow> 'a \<Rightarrow> prop \<dots>"}\\
-  @{text "> abbreviations: \<dots>"}\\
-  @{text "> axioms: \<dots>"}\\
-  @{text "> oracles: \<dots>"}\\
-  @{text "> definitions: \<dots>"}\\
-  @{text "> theorems: \<dots>"}
+  \<open>> names: Pure Code_Generator HOL \<dots>\<close>\\
+  \<open>> classes: Inf < type \<dots>\<close>\\
+  \<open>> default sort: type\<close>\\
+  \<open>> syntactic types: #prop \<dots>\<close>\\
+  \<open>> logical types: 'a \<times> 'b \<dots>\<close>\\
+  \<open>> type arities: * :: (random, random) random \<dots>\<close>\\
+  \<open>> logical constants: == :: 'a \<Rightarrow> 'a \<Rightarrow> prop \<dots>\<close>\\
+  \<open>> abbreviations: \<dots>\<close>\\
+  \<open>> axioms: \<dots>\<close>\\
+  \<open>> oracles: \<dots>\<close>\\
+  \<open>> definitions: \<dots>\<close>\\
+  \<open>> theorems: \<dots>\<close>
   \end{isabelle}
 
-  Functions acting on theories often end with the suffix @{text "_global"},
+  Functions acting on theories often end with the suffix \<open>_global\<close>,
   for example the function @{ML read_term_global in Syntax} in the structure
   @{ML_struct Syntax}. The reason is to set them syntactically apart from
   functions acting on contexts or local theories, which will be discussed in
@@ -77,25 +77,25 @@
   function @{ML_ind declare_const in Sign} from the structure @{ML_struct
   Sign}. To see how \isacommand{setup} works, consider the following code:
 
-*}  
+\<close>  
 
-ML %grayML{*let
+ML %grayML\<open>let
   val thy = @{theory}
   val bar_const = ((@{binding "BAR"}, @{typ "nat"}), NoSyn)
 in 
   Sign.declare_const @{context} bar_const thy  
-end*}
+end\<close>
 
-text {*
+text \<open>
   If you simply run this code\footnote{Recall that ML-code needs to be enclosed in
-  \isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"}.} with the
-  intention of declaring a constant @{text "BAR"} having type @{typ nat}, then 
+  \isacommand{ML}~\<open>\<verbopen> \<dots> \<verbclose>\<close>.} with the
+  intention of declaring a constant \<open>BAR\<close> having type @{typ nat}, then 
   indeed you obtain a theory as result. But if you query the
   constant on the Isabelle level using the command \isacommand{term}
 
   \begin{isabelle}
-  \isacommand{term}~@{text BAR}\\
-  @{text "> \"BAR\" :: \"'a\""}
+  \isacommand{term}~\<open>BAR\<close>\\
+  \<open>> "BAR" :: "'a"\<close>
   \end{isabelle}
 
   you can see that you do \emph{not} obtain the expected constant of type @{typ nat}, but a free 
@@ -103,113 +103,113 @@
   ML-expression above did not ``register'' the declaration with the current theory. 
   This is what the command \isacommand{setup} is for. The constant is properly 
   declared with
-*}
+\<close>
 
-setup %gray {* fn thy => 
+setup %gray \<open>fn thy => 
 let
   val bar_const = ((@{binding "BAR"}, @{typ "nat"}), NoSyn)
   val (_, thy') = Sign.declare_const @{context} bar_const thy
 in 
   thy'
-end *}
+end\<close>
 
-text {* 
+text \<open>
   where the declaration is actually applied to the current theory and
   
   \begin{isabelle}
   \isacommand{term}~@{text [quotes] "BAR"}\\
-  @{text "> \"BAR\" :: \"nat\""}
+  \<open>> "BAR" :: "nat"\<close>
   \end{isabelle}
 
   now returns a (black) constant with the type @{typ nat}, as expected.
 
   In a sense, \isacommand{setup} can be seen as a transaction that
-  takes the current theory @{text thy}, applies an operation, and
-  produces a new current theory @{text thy'}. This means that we have
+  takes the current theory \<open>thy\<close>, applies an operation, and
+  produces a new current theory \<open>thy'\<close>. This means that we have
   to be careful to apply operations always to the most current theory,
   not to a \emph{stale} one. Consider again the function inside the
   \isacommand{setup}-command:
 
   \begin{isabelle}
   \begin{graybox}
-  \isacommand{setup}~@{text "\<verbopen>"} @{ML
+  \isacommand{setup}~\<open>\<verbopen>\<close> @{ML
 "fn thy => 
 let
   val bar_const = ((@{binding \"BAR\"}, @{typ \"nat\"}), NoSyn)
   val (_, thy') = Sign.declare_const @{context} bar_const thy
 in
   thy
-end"}~@{text "\<verbclose>"}\isanewline
-  @{text "> ERROR: \"Stale theory encountered\""}
+end"}~\<open>\<verbclose>\<close>\isanewline
+  \<open>> ERROR: "Stale theory encountered"\<close>
   \end{graybox}
   \end{isabelle}
 
-  This time we erroneously return the original theory @{text thy}, instead of
-  the modified one @{text thy'}. Such buggy code will always result into 
+  This time we erroneously return the original theory \<open>thy\<close>, instead of
+  the modified one \<open>thy'\<close>. Such buggy code will always result in 
   a runtime error message about stale theories.
 
   \begin{readmore}
   Most of the functions about theories are implemented in
   @{ML_file "Pure/theory.ML"} and @{ML_file "Pure/global_theory.ML"}.
   \end{readmore}
-*}
+\<close>
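
text \<open>
  A minimal sketch of the same declaration, assuming a hypothetical constant
  name \<open>BAZ\<close>: writing the transaction with reverse application makes the
  threading of the theory value explicit and harder to get wrong.
\<close>

setup %gray \<open>fn thy =>
  (* BAZ is a hypothetical constant name, used only for illustration *)
  thy |> Sign.declare_const @{context} ((@{binding "BAZ"}, @{typ "nat"}), NoSyn)
      |> snd\<close>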
 
-section {* Contexts *}
+section \<open>Contexts\<close>
 
-text {*
+text \<open>
   Contexts are arguably more important than theories, even though they only 
   contain ``short-term memory data''. The reason is that a vast number of
   functions in Isabelle depend in one way or another on contexts. Even such
   mundane operations like printing out a term make essential use of contexts.
   For this consider the following contrived proof-snippet whose purpose is to 
   fix two variables:
-*}
+\<close>
 
 lemma "True"
 proof -
-  ML_prf {* Variable.dest_fixes @{context} *} 
+  ML_prf \<open>Variable.dest_fixes @{context}\<close> 
   fix x y  
-  ML_prf {* Variable.dest_fixes @{context} *}(*<*)oops(*>*)
+  ML_prf \<open>Variable.dest_fixes @{context}\<close>(*<*)oops(*>*)
 
-text {*
+text \<open>
   The interesting point is that we injected ML-code before and after
   the variables are fixed. For this remember that ML-code inside a proof
-  needs to be enclosed inside \isacommand{ML\_prf}~@{text "\<verbopen> \<dots> \<verbclose>"},
-  not \isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"}. The function 
+  needs to be enclosed inside \isacommand{ML\_prf}~\<open>\<verbopen> \<dots> \<verbclose>\<close>,
+  not \isacommand{ML}~\<open>\<verbopen> \<dots> \<verbclose>\<close>. The function 
   @{ML_ind dest_fixes in Variable} from the structure @{ML_struct Variable} takes 
  a context and returns all its currently fixed variables (names). That 
   means a context has a dataslot containing information about fixed variables.
-  The ML-antiquotation @{text "@{context}"} points to the context that is
+  The ML-antiquotation \<open>@{context}\<close> points to the context that is
   active at that point of the theory. Consequently, in the first call to 
   @{ML dest_fixes in Variable} this dataslot is  empty; in the second it is 
-  filled with @{text x} and @{text y}. What is interesting is that contexts
+  filled with \<open>x\<close> and \<open>y\<close>. What is interesting is that contexts
   can be stacked. For this consider the following proof fragment:
-*}
+\<close>
 
 lemma "True"
 proof -
   fix x y
   { fix z w
-    ML_prf {* Variable.dest_fixes @{context} *} 
+    ML_prf \<open>Variable.dest_fixes @{context}\<close> 
   }
-  ML_prf {* Variable.dest_fixes @{context} *}(*<*)oops(*>*)
+  ML_prf \<open>Variable.dest_fixes @{context}\<close>(*<*)oops(*>*)
 
-text {*
+text \<open>
  Here the first time we call @{ML dest_fixes in Variable} we have four fixed variables;
-  the second time we get only the fixes variables @{text x} and @{text y} as answer, since
-  @{text z} and @{text w} are not anymore in the scope of the context. 
+  the second time we get only the fixed variables \<open>x\<close> and \<open>y\<close> as answer, since
+  \<open>z\<close> and \<open>w\<close> are no longer in the scope of the context. 
   This means the curly-braces act on the Isabelle level as opening and closing statements 
   for a context. The above proof fragment corresponds roughly to the following ML-code
-*}
+\<close>
 
-ML %grayML{*val ctxt0 = @{context};
+ML %grayML\<open>val ctxt0 = @{context};
 val ([x, y], ctxt1) = Variable.add_fixes ["x", "y"] ctxt0;
-val ([z, w], ctxt2) = Variable.add_fixes ["z", "w"] ctxt1*}
+val ([z, w], ctxt2) = Variable.add_fixes ["z", "w"] ctxt1\<close>
 
-text {*
+text \<open>
   where the function @{ML_ind add_fixes in Variable} fixes a list of variables
   specified by strings. Let us come back to the point about printing terms. Consider
-  printing out the term \mbox{@{text "(x, y, z, w)"}} using our function @{ML_ind pretty_term}.
+  printing out the term \mbox{\<open>(x, y, z, w)\<close>} using our function @{ML_ind pretty_term}.
   This function takes a term and a context as argument. Notice how the printing
   of the term changes according to which context is used.
 
@@ -224,18 +224,18 @@
       pretty_term ctxt2 trm ])
 end"}\\
   \setlength{\fboxsep}{0mm}
-  @{text ">"}~@{text "("}\colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text x}}}@{text ","}~%
-  \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text y}}}@{text ","}~%
-  \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text z}}}@{text ","}~%
-  \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text w}}}@{text ")"}\\
-  @{text ">"}~@{text "("}\colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{@{text x}}}@{text ","}~%
-  \colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{@{text y}}}@{text ","}~%
-  \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text z}}}@{text ","}~%
-  \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text w}}}@{text ")"}\\
-  @{text ">"}~@{text "("}\colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{@{text x}}}@{text ","}~%
-  \colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{@{text y}}}@{text ","}~%
-  \colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{@{text z}}}@{text ","}~%
-  \colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{@{text w}}}@{text ")"}
+  \<open>>\<close>~\<open>(\<close>\colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{\<open>x\<close>}}\<open>,\<close>~%
+  \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{\<open>y\<close>}}\<open>,\<close>~%
+  \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{\<open>z\<close>}}\<open>,\<close>~%
+  \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{\<open>w\<close>}}\<open>)\<close>\\
+  \<open>>\<close>~\<open>(\<close>\colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{\<open>x\<close>}}\<open>,\<close>~%
+  \colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{\<open>y\<close>}}\<open>,\<close>~%
+  \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{\<open>z\<close>}}\<open>,\<close>~%
+  \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{\<open>w\<close>}}\<open>)\<close>\\
+  \<open>>\<close>~\<open>(\<close>\colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{\<open>x\<close>}}\<open>,\<close>~%
+  \colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{\<open>y\<close>}}\<open>,\<close>~%
+  \colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{\<open>z\<close>}}\<open>,\<close>~%
+  \colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{\<open>w\<close>}}\<open>)\<close>
   \end{graybox}
   \end{isabelle}
 
@@ -243,8 +243,8 @@
   The term we are printing is in all three cases the same---a tuple containing
   four free variables. As you can see, however, in case of @{ML "ctxt0"} all
   variables are highlighted indicating a problem, while in case of @{ML
-  "ctxt1"} @{text x} and @{text y} are printed as normal (blue) free
-  variables, but not @{text z} and @{text w}. In the last case all variables
+  "ctxt1"} \<open>x\<close> and \<open>y\<close> are printed as normal (blue) free
+  variables, but not \<open>z\<close> and \<open>w\<close>. In the last case all variables
   are printed as expected. The point of this example is that the context
   contains the information which variables are fixed, and designates all other
   free variables as being alien or faulty.  Therefore the highlighting. 
@@ -266,10 +266,10 @@
 |> Variable.variant_fixes [\"y\", \"y\", \"z\"]" 
   "([\"y\", \"ya\", \"z\"], ...)"}
 
-  Now a fresh variant for the second occurence of @{text y} is created
+  Now a fresh variant for the second occurrence of \<open>y\<close> is created
   avoiding any clash. In this way we can also create fresh free variables
   that avoid any clashes with fixed variables. In Line~3 below we fix
-  the variable @{text x} in the context @{text ctxt1}. Next we want to
+  the variable \<open>x\<close> in the context \<open>ctxt1\<close>. Next we want to
   create two fresh variables of type @{typ nat} as variants of the
   string @{text [quotes] "x"} (Lines 6 and 7).
 
@@ -285,9 +285,9 @@
   "([(\"x\", \"nat\"), (\"xa\", \"nat\")], 
  [(\"xa\", \"nat\"), (\"xb\", \"nat\")])"}
 
-  As you can see, if we create the fresh variables with the context @{text ctxt0},
-  then we obtain @{text "x"} and @{text "xa"}; but in @{text ctxt1} we obtain @{text "xa"}
-  and @{text "xb"} avoiding @{text x}, which was fixed in this context.
+  As you can see, if we create the fresh variables with the context \<open>ctxt0\<close>,
+  then we obtain \<open>x\<close> and \<open>xa\<close>; but in \<open>ctxt1\<close> we obtain \<open>xa\<close>
+  and \<open>xb\<close> avoiding \<open>x\<close>, which was fixed in this context.
 
  Often one faces the problem that, given some terms, one must create fresh
  variables avoiding any variable occurring in those terms. For this you can use the
@@ -302,8 +302,8 @@
 end"
   "[(\"xb\", \"nat\"), (\"xc\", \"nat\")]"}
 
-  The result is @{text xb} and @{text xc} for the names of the fresh
-  variables, since @{text x} and @{text xa} occur in the term we declared. 
+  The result is \<open>xb\<close> and \<open>xc\<close> for the names of the fresh
+  variables, since \<open>x\<close> and \<open>xa\<close> occur in the term we declared. 
   Note that @{ML_ind declare_term in Variable} does not fix the
   variables; it just makes them ``known'' to the context. You can see
   that if you print out a declared term.
@@ -317,10 +317,10 @@
   pwriteln (pretty_term ctxt1 trm)
 end"}\\
   \setlength{\fboxsep}{0mm}
-  @{text ">"}~\colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text P}}}~%
-  \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text x}}}~%
-  \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text y}}}~%
-  \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text z}}}
+  \<open>>\<close>~\colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{\<open>P\<close>}}~%
+  \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{\<open>x\<close>}}~%
+  \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{\<open>y\<close>}}~%
+  \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{\<open>z\<close>}}
   \end{graybox}
   \end{isabelle}
 
@@ -339,8 +339,8 @@
 end"
   "(Free (\"x\", \"'a\"), Free (\"x\", \"nat\"))"}
   
-  Parsing the string in the context @{text "ctxt0"} results in a free variable 
-  with a default polymorphic type, but in case of @{text "ctxt1"} we obtain a
+  Parsing the string in the context \<open>ctxt0\<close> results in a free variable 
+  with a default polymorphic type, but in case of \<open>ctxt1\<close> we obtain a
   free variable of type @{typ nat} as previously declared. Which
   type the parsed term receives depends upon the last declaration that
   is made, as the next example illustrates.
@@ -372,22 +372,22 @@
 end"}
   \end{linenos}
   \setlength{\fboxsep}{0mm}
-  @{text ">"}~\colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text P}}}~%
-  @{text "?x ?y ?z"}
+  \<open>>\<close>~\colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{\<open>P\<close>}}~%
+  \<open>?x ?y ?z\<close>
   \end{graybox}
   \end{isabelle}
 
   In Line 3 we fix the variables @{term x}, @{term y} and @{term z} in
-  context @{text ctxt1}.  The function @{ML_ind export_terms in
+  context \<open>ctxt1\<close>.  The function @{ML_ind export_terms in
   Variable} from the structure @{ML_struct Variable} can be used to transfer
   terms between contexts. Transferring means to turn all (free)
   variables that are fixed in one context, but not in the other, into
   schematic variables. In our example, we are transferring the term
-  @{text "P x y z"} from context @{text "ctxt1"} to @{text "ctxt0"},
+  \<open>P x y z\<close> from context \<open>ctxt1\<close> to \<open>ctxt0\<close>,
   which means @{term x}, @{term y} and @{term z} become schematic
   variables (as can be seen by the leading question marks in the result). 
-  Note that the variable @{text P} stays a free variable, since it not fixed in
-  @{text ctxt1}; it is even highlighed, because @{text "ctxt0"} does
+  Note that the variable \<open>P\<close> stays a free variable, since it is not fixed in
+  \<open>ctxt1\<close>; it is even highlighted, because \<open>ctxt0\<close> does
   not know about it. Note also that in Line 6 we had to use the
   function @{ML_ind singleton}, because the function @{ML_ind
   export_terms in Variable} normally works over lists of terms.
@@ -411,7 +411,7 @@
 end"
   "?P ?x ?y ?z ?x ?y ?z"}
 
-  Since we fixed all variables in @{text ctxt1}, in the exported
+  Since we fixed all variables in \<open>ctxt1\<close>, in the exported
   result all of them are schematic. The great point of contexts is
   that exporting from one to another is not just restricted to
   variables, but also works with assumptions. For this we can use the
@@ -429,14 +429,13 @@
   "x \<equiv> y \<Longrightarrow> y \<equiv> x"}
   
   The function @{ML_ind add_assumes in Assumption} from the structure
-  @{ML_struct Assumption} adds the assumption \mbox{@{text "x \<equiv> y"}}
-  to the context @{text ctxt1} (Line 3). This function expects a list
+  @{ML_struct Assumption} adds the assumption \mbox{\<open>x \<equiv> y\<close>}
+  to the context \<open>ctxt1\<close> (Line 3). This function expects a list
   of @{ML_type cterm}s and returns them as theorems, together with the
   new context in which they are ``assumed''. In Line 4 we use the
   function @{ML_ind symmetric in Thm} from the structure @{ML_struct
   Thm} in order to obtain the symmetric version of the assumed
-  meta-equality. Now exporting the theorem @{text "eq'"} from @{text
-  ctxt1} to @{text ctxt0} means @{term "y \<equiv> x"} will be prefixed with
+  meta-equality. Now exporting the theorem \<open>eq'\<close> from \<open>ctxt1\<close> to \<open>ctxt0\<close> means @{term "y \<equiv> x"} will be prefixed with
   the assumed theorem. The boolean flag in @{ML_ind export in
   Assumption} indicates whether the assumptions should be marked with
   the goal marker (see Section~\ref{sec:basictactics}). In normal
@@ -462,13 +461,13 @@
   Proof_Context.export ctxt1 ctxt0 [eq']
 end"
   "[?x \<equiv> ?y \<Longrightarrow> ?y \<equiv> ?x]"}
-*}
+\<close>
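
text \<open>
  The two exporting mechanisms described above can be replayed in compact,
  self-contained sketches. First, exporting a term turns variables that are
  fixed only in the inner context into schematic variables:
\<close>

ML %grayML\<open>let
  val ctxt0 = @{context}
  val ([x], ctxt1) = Variable.add_fixes ["x"] ctxt0
in
  (* exporting to ctxt0 turns the fixed x into the schematic ?x *)
  singleton (Variable.export_terms ctxt1 ctxt0) (Free (x, @{typ nat}))
end\<close>

text \<open>
  Second, exporting a theorem prefixes it with the assumptions made in the
  inner context (a sketch, assuming two free variables \<open>x\<close> and \<open>y\<close> of the
  same type):
\<close>

ML %grayML\<open>let
  val ctxt0 = @{context}
  val ([eq], ctxt1) = Assumption.add_assumes [@{cprop "x \<equiv> y"}] ctxt0
  val eq' = Thm.symmetric eq
in
  (* false: do not mark the exported assumption with the goal marker *)
  Assumption.export false ctxt1 ctxt0 eq'
end\<close>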
 
 
 
-text {*
+text \<open>
 
-*}
+\<close>
 
 
 (*
@@ -505,9 +504,9 @@
 oops
 *)
 
-section {* Local Theories and Local Setups\label{sec:local} (TBD) *}
+section \<open>Local Theories and Local Setups\label{sec:local} (TBD)\<close>
 
-text {*
+text \<open>
   In contrast to an ordinary theory, which simply consists of a type
   signature, as well as tables for constants, axioms and theorems, a local
   theory contains additional context information, such as locally fixed
@@ -524,44 +523,44 @@
   use the commands \isacommand{method\_setup} for installing methods in the
   current theory and \isacommand{simproc\_setup} for adding new simprocs to
   the current simpset.
-*}
+\<close>
 
 
-section {* Morphisms (TBD) *}
+section \<open>Morphisms (TBD)\<close>
 
-text {*
+text \<open>
   Morphisms are arbitrary transformations over terms, types, theorems and bindings.
   They can be constructed using the function @{ML_ind morphism in Morphism},
  which expects a record with lists of functions of type
 
   \begin{isabelle}
   \begin{tabular}{rl}
-  @{text "binding:"} & @{text "binding -> binding"}\\
-  @{text "typ:"}     & @{text "typ -> typ"}\\
-  @{text "term:"}    & @{text "term -> term"}\\
-  @{text "fact:"}    & @{text "thm list -> thm list"}
+  \<open>binding:\<close> & \<open>binding -> binding\<close>\\
+  \<open>typ:\<close>     & \<open>typ -> typ\<close>\\
+  \<open>term:\<close>    & \<open>term -> term\<close>\\
+  \<open>fact:\<close>    & \<open>thm list -> thm list\<close>
   \end{tabular}
   \end{isabelle}
 
   The simplest morphism is the  @{ML_ind identity in Morphism}-morphism defined as
-*}
+\<close>
 
-ML %grayML{*val identity = Morphism.morphism "" {binding = [], typ = [], term = [], fact = []}*}
+ML %grayML\<open>val identity = Morphism.morphism "" {binding = [], typ = [], term = [], fact = []}\<close>
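
text \<open>
  Applying the identity morphism leaves a term unchanged, for example:
\<close>

ML %grayML\<open>Morphism.term identity @{term "P x y"}\<close>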
   
-text {*
+text \<open>
   Morphisms can be composed with the function @{ML_ind "$>" in Morphism}
-*}
+\<close>
 
-ML %grayML{*fun trm_phi (Free (x, T)) = Var ((x, 0), T) 
+ML %grayML\<open>fun trm_phi (Free (x, T)) = Var ((x, 0), T) 
   | trm_phi (Abs (x, T, t)) = Abs (x, T, trm_phi t)
   | trm_phi (t $ s) = (trm_phi t) $ (trm_phi s)
-  | trm_phi t = t*}
+  | trm_phi t = t\<close>
 
-ML %grayML{*val phi = Morphism.term_morphism "foo" trm_phi*}
+ML %grayML\<open>val phi = Morphism.term_morphism "foo" trm_phi\<close>
 
-ML %grayML{*Morphism.term phi @{term "P x y"}*}
+ML %grayML\<open>Morphism.term phi @{term "P x y"}\<close>
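
text \<open>
  Composition via \<open>$>\<close> can then be used as in the following sketch; here
  \<open>phi\<close> is composed with a morphism built from the identity function \<open>I\<close>,
  so the composite behaves exactly like \<open>phi\<close> regardless of the order in
  which \<open>$>\<close> applies its two arguments.
\<close>

ML %grayML\<open>val phi' = phi $> Morphism.term_morphism "noop" I;
Morphism.term phi' @{term "P x y"}\<close>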
 
-text {*
+text \<open>
   @{ML_ind term_morphism in Morphism}
 
   @{ML_ind term in Morphism},
@@ -570,25 +569,25 @@
   \begin{readmore}
   Morphisms are implemented in the file @{ML_file "Pure/morphism.ML"}.
   \end{readmore}
-*}
+\<close>
 
-section {* Misc (TBD) *}
+section \<open>Misc (TBD)\<close>
 
-text {* 
+text \<open>
 FIXME: association lists:
 @{ML_file "Pure/General/alist.ML"}
 
 FIXME: calling the ML-compiler
 
-*}
+\<close>
 
-section {* What Is In an Isabelle Name? (TBD) *}
+section \<open>What Is In an Isabelle Name? (TBD)\<close>
 
-text {*
+text \<open>
   On the ML-level of Isabelle, you often have to work with qualified names.
   These are strings with some additional information, such as positional
   information and qualifiers. Such qualified names can be generated with the
-  antiquotation @{text "@{binding \<dots>}"}. For example
+  antiquotation \<open>@{binding \<dots>}\<close>. For example
 
   @{ML_response [display,gray]
   "@{binding \"name\"}"
@@ -597,18 +596,18 @@
   An example where a qualified name is needed is the function 
   @{ML_ind define in Local_Theory}.  This function is used below to define 
   the constant @{term "TrueConj"} as the conjunction @{term "True \<and> True"}.
-*}
+\<close>
 
-local_setup %gray {* 
+local_setup %gray \<open>
   Local_Theory.define ((@{binding "TrueConj"}, NoSyn), 
-      ((@{binding "TrueConj_def"}, []), @{term "True \<and> True"})) #> snd *}
+      ((@{binding "TrueConj_def"}, []), @{term "True \<and> True"})) #> snd\<close>
 
-text {* 
+text \<open>
   Now querying the definition you obtain:
 
   \begin{isabelle}
-  \isacommand{thm}~@{text "TrueConj_def"}\\
-  @{text "> "}~@{thm TrueConj_def}
+  \isacommand{thm}~\<open>TrueConj_def\<close>\\
+  \<open>> \<close>~@{thm TrueConj_def}
   \end{isabelle}
 
   \begin{readmore}
@@ -622,13 +621,13 @@
  \footnote{\bf FIXME: There should probably be a separate section on binding, long-names
   and sign.}
 
-*}
+\<close>
 
 
-ML {* Sign.intern_type @{theory} "list" *}
-ML {* Sign.intern_const @{theory} "prod_fun" *}
+ML \<open>Sign.intern_type @{theory} "list"\<close>
+ML \<open>Sign.intern_const @{theory} "prod_fun"\<close>
 
-text {*
+text \<open>
   \footnote{\bf FIXME: Explain the following better; maybe put in a separate
   section and link with the comment in the antiquotation section.}
 
@@ -641,27 +640,27 @@
   Functions about naming are implemented in @{ML_file "Pure/General/name_space.ML"};
   functions about signatures in @{ML_file "Pure/sign.ML"}.
   \end{readmore}
-*}
+\<close>
 
-text {* 
+text \<open>
   @{ML_ind "Binding.name_of"} returns the string without markup
 
   @{ML_ind "Binding.concealed"} 
-*}
+\<close>
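
text \<open>
  For example, stripping the binding used earlier down to its bare name string:
\<close>

ML %grayML\<open>Binding.name_of @{binding "TrueConj"}\<close>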
 
-section {* Concurrency (TBD) *}
+section \<open>Concurrency (TBD)\<close>
 
-text {*
+text \<open>
   @{ML_ind prove_future in Goal}
   @{ML_ind future_result in Goal}
-*}
+\<close>
 
-section {* Parse and Print Translations (TBD) *}
+section \<open>Parse and Print Translations (TBD)\<close>
 
-section {* Summary *}
+section \<open>Summary\<close>
 
-text {*
+text \<open>
   TBD
-*}
+\<close>
 
 end
--- a/ProgTutorial/Appendix.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Appendix.thy	Tue May 14 17:10:47 2019 +0200
@@ -3,12 +3,12 @@
 imports Base
 begin
 
-text {* \appendix *}
+text \<open>\appendix\<close>
 
 
-chapter {* Recipes *}
+chapter \<open>Recipes\<close>
 
-text {*
+text \<open>
   Possible topics: 
 
   \begin{itemize}
@@ -21,6 +21,6 @@
 
   \item Brief history of Isabelle
   \end{itemize}
-*}
+\<close>
 
 end
--- a/ProgTutorial/Base.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Base.thy	Tue May 14 17:10:47 2019 +0200
@@ -12,7 +12,7 @@
 notation (latex output)
   Cons ("_ # _" [66,65] 65)
 
-ML %linenosgrayML{*fun P n = @{term "P::nat \<Rightarrow> bool"} $ (HOLogic.mk_number @{typ "nat"} n) 
+ML %linenosgrayML\<open>fun P n = @{term "P::nat \<Rightarrow> bool"} $ (HOLogic.mk_number @{typ "nat"} n) 
 
 fun rhs 1 = P 1
   | rhs n = HOLogic.mk_conj (P n, rhs (n - 1))
@@ -22,14 +22,14 @@
                  (HOLogic.mk_eq (P (m - 1), P m), rhs n), lhs (m - 1) n)
 
 fun de_bruijn n =
-  HOLogic.mk_Trueprop (HOLogic.mk_imp (lhs n n, rhs n))*}
+  HOLogic.mk_Trueprop (HOLogic.mk_imp (lhs n n, rhs n))\<close>
 
 ML_file "output_tutorial.ML"
 ML_file "antiquote_setup.ML"
 
 
 (*setup {* OutputTutorial.setup *}*) (* Seems to be standard now, we don't need this anymore *)
-setup {* AntiquoteSetup.setup *}
+setup \<open>AntiquoteSetup.setup\<close>
 
 print_ML_antiquotations
 
--- a/ProgTutorial/Essential.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Essential.thy	Tue May 14 17:10:47 2019 +0200
@@ -2,9 +2,9 @@
 imports Base First_Steps
 begin
 
-chapter {* Isabelle Essentials *}
-
-text {*
+chapter \<open>Isabelle Essentials\<close>
+
+text \<open>
    \begin{flushright}
   {\em One man's obfuscation is another man's abstraction.} \\[1ex]
   Frank Ch.~Eigler on the Linux Kernel Mailing List,\\ 
@@ -17,15 +17,15 @@
   is a small trusted core and everything else is built on top of this trusted
   core. The fundamental data structures involved in this core are certified
   terms and certified types, as well as theorems.
-*}
-
-
-section {* Terms and Types *}
-
-text {*
+\<close>
+
+
+section \<open>Terms and Types\<close>
+
+text \<open>
   In Isabelle, there are certified terms and uncertified terms (respectively types). 
   Uncertified terms are often just called terms. One way to construct them is by 
-  using the antiquotation \mbox{@{text "@{term \<dots>}"}}. For example
+  using the antiquotation \mbox{\<open>@{term \<dots>}\<close>}. For example
 
   @{ML_response [display,gray] 
 "@{term \"(a::nat) + b = c\"}" 
@@ -35,17 +35,17 @@
   constructs the term @{term "(a::nat) + b = c"}. The resulting term is printed using 
   the internal representation corresponding to the datatype @{ML_type_ind "term"}, 
   which is defined as follows: 
-*}  
-
-ML_val %linenosgray{*datatype term =
+\<close>  
+
+ML_val %linenosgray\<open>datatype term =
   Const of string * typ 
 | Free of string * typ 
 | Var of indexname * typ 
 | Bound of int 
 | Abs of string * typ * term 
-| $ of term * term *}
-
-text {*
+| $ of term * term\<close>
+
+text \<open>
   This datatype implements Church-style lambda-terms, where types are
   explicitly recorded in variables, constants and abstractions.  The
   important point of having terms is that you can pattern-match against them;
@@ -127,8 +127,8 @@
 
   When constructing terms, you are usually concerned with free
   variables (as mentioned earlier, you cannot construct schematic
-  variables using the built-in antiquotation \mbox{@{text "@{term
-  \<dots>}"}}). If you deal with theorems, you have to, however, observe the
+  variables using the built-in antiquotation \mbox{\<open>@{term
+  \<dots>}\<close>}). If you deal with theorems, however, you have to observe the
   distinction. The reason is that only schematic variables can be
   instantiated with terms when a theorem is applied. A similar
   distinction between free and schematic variables holds for types
@@ -177,13 +177,13 @@
   may omit parts of it by default. If you want to see all of it, you
   need to set the printing depth to a higher value by 
   \end{exercise}
-*}
+\<close>
 
 declare [[ML_print_depth = 50]]
 
-text {*
-  The antiquotation @{text "@{prop \<dots>}"} constructs terms by inserting the 
-  usually invisible @{text "Trueprop"}-coercions whenever necessary. 
+text \<open>
+  The antiquotation \<open>@{prop \<dots>}\<close> constructs terms by inserting the 
+  usually invisible \<open>Trueprop\<close>-coercions whenever necessary. 
   Consider for example the pairs
 
 @{ML_response [display,gray] "(@{term \"P x\"}, @{prop \"P x\"})" 
@@ -197,24 +197,24 @@
  Const (\"Pure.imp\", \<dots>) $ \<dots> $ \<dots>)"}
 
   where it is not (since it is already constructed by a meta-implication). 
-  The purpose of the @{text "Trueprop"}-coercion is to embed formulae of
+  The purpose of the \<open>Trueprop\<close>-coercion is to embed formulae of
   an object logic, for example HOL, into the meta-logic of Isabelle. The coercion
   is needed whenever a term is constructed that will be proved as a theorem. 
 
   As already seen above, types can be constructed using the antiquotation 
-  @{text "@{typ \<dots>}"}. For example:
+  \<open>@{typ \<dots>}\<close>. For example:
 
   @{ML_response_fake [display,gray] "@{typ \"bool \<Rightarrow> nat\"}" "bool \<Rightarrow> nat"}
 
   The corresponding datatype is
-*}
+\<close>
   
-ML_val %grayML{*datatype typ =
+ML_val %grayML\<open>datatype typ =
   Type  of string * typ list 
 | TFree of string * sort 
-| TVar  of indexname * sort *}
-
-text {*
+| TVar  of indexname * sort\<close>
+
+text \<open>
   Like with terms, there is the distinction between free type
   variables (term-constructor @{ML "TFree"}) and schematic
   type variables (term-constructor @{ML "TVar"} and printed with
@@ -222,19 +222,19 @@
   like @{typ "int"} or @{typ bool}, are types with an empty list
   of argument types. However, it needs a bit of effort to show an
   example, because Isabelle always pretty prints types (unlike terms).
-  Using just the antiquotation @{text "@{typ \"bool\"}"} we only see
+  Using just the antiquotation \<open>@{typ "bool"}\<close> we only see
 
   @{ML_response [display, gray]
   "@{typ \"bool\"}"
   "bool"}
-  which is the pretty printed version of @{text "bool"}. However, in PolyML
-  (version @{text "\<ge>"}5.3) it is easy to install your own pretty printer. With the
+  which is the pretty printed version of \<open>bool\<close>. However, in PolyML
+  (version \<open>\<ge>\<close>5.3) it is easy to install your own pretty printer. With the
   function below we mimic the behaviour of the usual pretty printer for
   datatypes (it uses pretty-printing functions which will be explained in more
   detail in Section~\ref{sec:pretty}).
-*}
-
-ML %grayML{*local
+\<close>
+
+ML %grayML\<open>local
   fun pp_pair (x, y) = Pretty.list "(" ")" [x, y]
   fun pp_list xs = Pretty.list "[" "]" xs
   fun pp_str s   = Pretty.str s
@@ -249,18 +249,18 @@
       pp_constr "TFree" (pp_pair (pp_qstr a, pp_sort S))
   | raw_pp_typ (Type (a, tys)) = 
       pp_constr "Type" (pp_pair (pp_qstr a, pp_list (map raw_pp_typ tys)))
-end*}
-
-text {*
+end\<close>
+
+text \<open>
   We can install this pretty printer with the function 
   @{ML_ind ML_system_pp} as follows.
-*}
-
-ML %grayML{*ML_system_pp
-  (fn _ => fn _ => Pretty.to_polyml o raw_pp_typ)*}
+\<close>
+
+ML %grayML\<open>ML_system_pp
+  (fn _ => fn _ => Pretty.to_polyml o raw_pp_typ)\<close>
 
 ML \<open>@{typ "bool"}\<close>
-text {*
+text \<open>
   Now the type bool is printed out in full detail.
 
   @{ML_response [display,gray]
@@ -273,10 +273,10 @@
   "@{typ \"'a list\"}"
   "Type (\"List.list\", [TFree (\"'a\", [\"HOL.type\"])])"}
 
-  we can see the full name of the type is actually @{text "List.list"}, indicating
-  that it is defined in the theory @{text "List"}. However, one has to be 
+  we can see the full name of the type is actually \<open>List.list\<close>, indicating
+  that it is defined in the theory \<open>List\<close>. However, one has to be 
   careful with names of types, because even if
-  @{text "fun"} is defined in the theory @{text "HOL"}, it is  
+  \<open>fun\<close> is defined in the theory \<open>HOL\<close>, it is  
   still represented by its simple name.
 
    @{ML_response [display,gray]
@@ -285,12 +285,12 @@
 
   We can restore the usual behaviour of Isabelle's pretty printer
   with the code
-*}
-
-ML %grayML{*ML_system_pp
-  (fn depth => fn _ => ML_Pretty.to_polyml o Pretty.to_ML depth o Proof_Display.pp_typ Theory.get_pure)*}
-
-text {*
+\<close>
+
+ML %grayML\<open>ML_system_pp
+  (fn depth => fn _ => ML_Pretty.to_polyml o Pretty.to_ML depth o Proof_Display.pp_typ Theory.get_pure)\<close>
+
+text \<open>
   After that the types for booleans, lists and so on are printed out again 
   in the standard Isabelle way.
 
@@ -305,36 +305,36 @@
   definition and many useful operations are implemented 
   in @{ML_file "Pure/type.ML"}.
   \end{readmore}
-*}
-
-section {* Constructing Terms and Types Manually\label{sec:terms_types_manually} *} 
-
-text {*
+\<close>
+
+section \<open>Constructing Terms and Types Manually\label{sec:terms_types_manually}\<close> 
+
+text \<open>
   While antiquotations are very convenient for constructing terms, they can
   only construct fixed terms (remember they are ``linked'' at compile-time).
   However, you often need to construct terms manually. For example, a
-  function that returns the implication @{text "\<And>(x::nat). P x \<Longrightarrow> Q x"} taking
+  function that returns the implication \<open>\<And>(x::nat). P x \<Longrightarrow> Q x\<close> taking
   @{term P} and @{term Q} as arguments can only be written as:
 
-*}
-
-ML %grayML{*fun make_imp P Q =
+\<close>
+
+ML %grayML\<open>fun make_imp P Q =
 let
   val x = Free ("x", @{typ nat})
 in 
   Logic.all x (Logic.mk_implies (P $ x, Q $ x))
-end *}
-
-text {*
+end\<close>
+
+text \<open>
   The reason is that you cannot pass the arguments @{term P} and @{term Q} 
   into an antiquotation.\footnote{At least not at the moment.} For example 
   the following does \emph{not} work.
-*}
-
-ML %grayML{*fun make_wrong_imp P Q = @{prop "\<And>(x::nat). P x \<Longrightarrow> Q x"} *}
-
-text {*
-  To see this, apply @{text "@{term S}"} and @{text "@{term T}"} 
+\<close>
+
+ML %grayML\<open>fun make_wrong_imp P Q = @{prop "\<And>(x::nat). P x \<Longrightarrow> Q x"}\<close>
+
+text \<open>
+  To see this, apply \<open>@{term S}\<close> and \<open>@{term T}\<close> 
   to both functions. With @{ML make_imp} you obtain the intended term involving 
   the given arguments
 
@@ -344,7 +344,7 @@
     Const \<dots> $ (Free (\"S\",\<dots>) $ \<dots>) $ (Free (\"T\",\<dots>) $ \<dots>))"}
 
   whereas with @{ML make_wrong_imp} you obtain a term involving the @{term "P"} 
-  and @{text "Q"} from the antiquotation.
+  and \<open>Q\<close> from the antiquotation.
 
   @{ML_response [display,gray] "make_wrong_imp @{term S} @{term T}" 
 "Const \<dots> $ 
@@ -383,7 +383,7 @@
   In this example, @{ML lambda} produces a de Bruijn index (i.e.\ @{ML "Bound 0"}), 
   and an abstraction, where it also records the type of the abstracted
  variable and, for printing purposes, its name.  Note that because of the
-  typing annotation on @{text "P"}, the variable @{text "x"} in @{text "P x"}
+  typing annotation on \<open>P\<close>, the variable \<open>x\<close> in \<open>P x\<close>
  is of the same type as the abstracted variable. If it is of a different type,
   as in
 
@@ -396,7 +396,7 @@
 end"
   "Abs (\"x\", \"int\", Free (\"P\", \"nat \<Rightarrow> bool\") $ Free (\"x\", \"nat\"))"}
 
-  then the variable @{text "Free (\"x\", \"nat\")"} is \emph{not} abstracted. 
+  then the variable \<open>Free ("x", "nat")\<close> is \emph{not} abstracted. 
   This is a fundamental principle
  of Church-style typing, where variables with the same name still differ if they 
  have different types.
@@ -431,18 +431,18 @@
  Similarly, the function @{ML_ind subst_bounds in Term} replaces loose bound 
   variables with terms. To see how this function works, let us implement a
   function that strips off the outermost forall quantifiers in a term.
-*}
-
-ML %grayML{*fun strip_alls t =
+\<close>
+
+ML %grayML\<open>fun strip_alls t =
 let 
   fun aux (x, T, t) = strip_alls t |>> cons (Free (x, T))
 in
   case t of
     Const (@{const_name All}, _) $ Abs body => aux body
   | _ => ([], t)
-end*}
-
-text {*
+end\<close>
+
+text \<open>
   The function returns a pair consisting of the stripped off variables and
   the body of the universal quantification. For example
 
@@ -454,14 +454,14 @@
   Note that we produced in the body two dangling de Bruijn indices. 
   Later on we will also use the inverse function that
   builds the quantification from a body and a list of (free) variables.
-*}
+\<close>
   
-ML %grayML{*fun build_alls ([], t) = t
+ML %grayML\<open>fun build_alls ([], t) = t
   | build_alls (Free (x, T) :: vs, t) = 
       Const (@{const_name "All"}, (T --> @{typ bool}) --> @{typ bool}) 
-        $ Abs (x, T, build_alls (vs, t))*}
-
-text {*
+        $ Abs (x, T, build_alls (vs, t))\<close>
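
text \<open>
  A quick round-trip sketch: stripping the quantifiers off a term and
  rebuilding them should reproduce the original term.
\<close>

ML %grayML\<open>build_alls (strip_alls @{term "\<forall>x y. x = (y::nat)"})\<close>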
+
+text \<open>
   As said above, after calling @{ML strip_alls}, you obtain a term with loose
   bound variables. With the function @{ML subst_bounds}, you can replace these
   loose @{ML_ind Bound in Term}s with the stripped off variables.
@@ -552,35 +552,35 @@
   When constructing terms manually, there are a few subtle issues with
   constants. They usually crop up when pattern matching terms or types, or
   when constructing them. While it is perfectly ok to write the function
-  @{text is_true} as follows
-*}
-
-ML %grayML{*fun is_true @{term True} = true
-  | is_true _ = false*}
-
-text {*
-  this does not work for picking out @{text "\<forall>"}-quantified terms. Because
+  \<open>is_true\<close> as follows
+\<close>
+
+ML %grayML\<open>fun is_true @{term True} = true
+  | is_true _ = false\<close>
+
+text \<open>
+  this does not work for picking out \<open>\<forall>\<close>-quantified terms. Because
   the function 
-*}
-
-ML %grayML{*fun is_all (@{term All} $ _) = true
-  | is_all _ = false*}
-
-text {* 
+\<close>
+
+ML %grayML\<open>fun is_all (@{term All} $ _) = true
+  | is_all _ = false\<close>
+
+text \<open>
   will not correctly match the formula @{prop[source] "\<forall>x::nat. P x"}: 
 
   @{ML_response [display,gray] "is_all @{term \"\<forall>x::nat. P x\"}" "false"}
 
-  The problem is that the @{text "@term"}-antiquotation in the pattern 
+  The problem is that the \<open>@{term \<dots>}\<close>-antiquotation in the pattern 
   fixes the type of the constant @{term "All"} to be @{typ "('a \<Rightarrow> bool) \<Rightarrow> bool"} for 
   an arbitrary, but fixed type @{typ "'a"}. A properly working alternative 
   for this function is
-*}
-
-ML %grayML{*fun is_all (Const ("HOL.All", _) $ _) = true
-  | is_all _ = false*}
-
-text {* 
+\<close>
+
+ML %grayML\<open>fun is_all (Const ("HOL.All", _) $ _) = true
+  | is_all _ = false\<close>
+
+text \<open>
   because now
 
   @{ML_response [display,gray] "is_all @{term \"\<forall>x::nat. P x\"}" "true"}
@@ -589,13 +589,13 @@
   second any term).
 
   However there is still a problem: consider the similar function that
-  attempts to pick out @{text "Nil"}-terms:
-*}
-
-ML %grayML{*fun is_nil (Const ("Nil", _)) = true
-  | is_nil _ = false *}
-
-text {* 
+  attempts to pick out \<open>Nil\<close>-terms:
+\<close>
+
+ML %grayML\<open>fun is_nil (Const ("Nil", _)) = true
+  | is_nil _ = false\<close>
+
+text \<open>
  Unfortunately, this function also does \emph{not} work as expected, since
 
   @{ML_response [display,gray] "is_nil @{term \"Nil\"}" "false"}
@@ -607,9 +607,9 @@
 
   @{ML_response [display,gray] "@{term \"Nil\"}" "Const (\"List.list.Nil\", \<dots>)"}
 
-  the name of the constant @{text "Nil"} depends on the theory in which the
-  term constructor is defined (@{text "List"}) and also in which datatype
-  (@{text "list"}). Even worse, some constants have a name involving
+  the name of the constant \<open>Nil\<close> depends on the theory in which the
+  term constructor is defined (\<open>List\<close>) and also in which datatype
+  (\<open>list\<close>). Even worse, some constants have a name involving
   type-classes. Consider for example the constants for @{term "zero"} and
   \mbox{@{term "times"}}:
 
@@ -619,21 +619,21 @@
 
   While you could use the complete name, for example 
   @{ML "Const (\"List.list.Nil\", some_type)" for some_type}, for referring to or
-  matching against @{text "Nil"}, this would make the code rather brittle. 
+  matching against \<open>Nil\<close>, this would make the code rather brittle. 
   The reason is that the theory and the name of the datatype can easily change. 
   To make the code more robust, it is better to use the antiquotation 
-  @{text "@{const_name \<dots>}"}. With this antiquotation you can harness the 
+  \<open>@{const_name \<dots>}\<close>. With this antiquotation you can harness the 
   variable parts of the constant's name. Therefore a function for 
   matching against constants that have a polymorphic type should 
   be written as follows.
-*}
-
-ML %grayML{*fun is_nil_or_all (Const (@{const_name "Nil"}, _)) = true
+\<close>
+
+ML %grayML\<open>fun is_nil_or_all (Const (@{const_name "Nil"}, _)) = true
   | is_nil_or_all (Const (@{const_name "All"}, _) $ _) = true
-  | is_nil_or_all _ = false *}
-
-text {*
-  The antiquotation for properly referencing type constants is @{text "@{type_name \<dots>}"}.
+  | is_nil_or_all _ = false\<close>
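
text \<open>
  A quick sanity check of this matcher (both calls should return \<open>true\<close>):
\<close>

ML %grayML\<open>(is_nil_or_all @{term "Nil"}, is_nil_or_all @{term "\<forall>x::nat. P x"})\<close>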
+
+text \<open>
+  The antiquotation for properly referencing type constants is \<open>@{type_name \<dots>}\<close>.
   For example
 
   @{ML_response [display,gray]
@@ -644,34 +644,34 @@
   when defining constants. For example the function returning a function 
   type is as follows:
 
-*} 
-
-ML %grayML{*fun make_fun_type ty1 ty2 = Type ("fun", [ty1, ty2]) *}
-
-text {* This can be equally written with the combinator @{ML_ind "-->" in Term} as: *}
-
-ML %grayML{*fun make_fun_type ty1 ty2 = ty1 --> ty2 *}
-
-text {*
+\<close> 
+
+ML %grayML\<open>fun make_fun_type ty1 ty2 = Type ("fun", [ty1, ty2])\<close>
+
+text \<open>This can be equally written with the combinator @{ML_ind "-->" in Term} as:\<close>
+
+ML %grayML\<open>fun make_fun_type ty1 ty2 = ty1 --> ty2\<close>
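
text \<open>
  For example, building the type \<open>nat \<Rightarrow> bool\<close>:
\<close>

ML %grayML\<open>make_fun_type @{typ nat} @{typ bool}\<close>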
+
+text \<open>
   If you want to construct a function type with more than one argument
   type, then you can use @{ML_ind "--->" in Term}.
-*}
-
-ML %grayML{*fun make_fun_types tys ty = tys ---> ty *}
-
-text {*
+\<close>
+
+ML %grayML\<open>fun make_fun_types tys ty = tys ---> ty\<close>
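
text \<open>
  For example, the type \<open>nat \<Rightarrow> bool \<Rightarrow> nat\<close> can be built as:
\<close>

ML %grayML\<open>make_fun_types [@{typ nat}, @{typ bool}] @{typ nat}\<close>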
+
+text \<open>
   A handy function for manipulating terms is @{ML_ind map_types in Term}: it takes a 
   function and applies it to every type in a term. You can, for example,
   change every @{typ nat} in a term into an @{typ int} using the function:
-*}
-
-ML %grayML{*fun nat_to_int ty =
+\<close>
+
+ML %grayML\<open>fun nat_to_int ty =
   (case ty of
      @{typ nat} => @{typ int}
    | Type (s, tys) => Type (s, map nat_to_int tys)
-   | _ => ty)*}
-
-text {*
+   | _ => ty)\<close>
+
+text \<open>
   Here is an example:
 
 @{ML_response_fake [display,gray] 
@@ -697,13 +697,13 @@
 
   The reason for this definition is that @{ML add_tfrees in Term} can
   be easily folded over a list of terms. Similarly for all functions
-  named @{text "add_*"} in @{ML_file "Pure/term.ML"}.
+  named \<open>add_*\<close> in @{ML_file "Pure/term.ML"}.
 
   \begin{exercise}\label{fun:revsum}
-  Write a function @{text "rev_sum : term -> term"} that takes a
-  term of the form @{text "t\<^sub>1 + t\<^sub>2 + \<dots> + t\<^sub>n"} (whereby @{text "n"} might be one)
-  and returns the reversed sum @{text "t\<^sub>n + \<dots> + t\<^sub>2 + t\<^sub>1"}. Assume
-  the @{text "t\<^sub>i"} can be arbitrary expressions and also note that @{text "+"} 
+  Write a function \<open>rev_sum : term -> term\<close> that takes a
+  term of the form \<open>t\<^sub>1 + t\<^sub>2 + \<dots> + t\<^sub>n\<close> (where \<open>n\<close> might be one)
+  and returns the reversed sum \<open>t\<^sub>n + \<dots> + t\<^sub>2 + t\<^sub>1\<close>. Assume
+  the \<open>t\<^sub>i\<close> can be arbitrary expressions and also note that \<open>+\<close> 
   associates to the left. Try your function on some examples. 
   \end{exercise}
 
@@ -722,8 +722,8 @@
   \end{exercise}
 
   \begin{exercise}\label{fun:makelist}
-  Write a function that takes an integer @{text i} and
-  produces an Isabelle integer list from @{text 1} upto @{text i}, 
+  Write a function that takes an integer \<open>i\<close> and
+  produces an Isabelle integer list from \<open>1\<close> upto \<open>i\<close>, 
   and then builds the reverse of this list using @{const rev}. 
   The relevant helper functions are @{ML upto}, 
   @{ML HOLogic.mk_number} and @{ML HOLogic.mk_list}.
@@ -756,11 +756,11 @@
   for constructing the terms for the logical connectives.\footnote{Thanks to Roy
   Dyckhoff for suggesting this exercise and working out the details.} 
   \end{exercise}
-*}
-
-section {* Unification and Matching\label{sec:univ} *}
-
-text {* 
+\<close>
+
+section \<open>Unification and Matching\label{sec:univ}\<close>
+
+text \<open>
   As seen earlier, Isabelle's terms and types may contain schematic term variables
   (term-constructor @{ML Var}) and schematic type variables (term-constructor
   @{ML TVar}). These variables stand for unknown entities, which can be made
@@ -769,7 +769,7 @@
   relatively straightforward, in case of terms the algorithms are
   substantially more complicated, because terms need higher-order versions of
   the unification and matching algorithms. Below we shall use the
-  antiquotations @{text "@{typ_pat \<dots>}"} and @{text "@{term_pat \<dots>}"} from
+  antiquotations \<open>@{typ_pat \<dots>}\<close> and \<open>@{term_pat \<dots>}\<close> from
   Section~\ref{sec:antiquote} in order to construct examples involving
   schematic variables.
 
@@ -777,24 +777,24 @@
   types.  Both return type environments (ML-type @{ML_type "Type.tyenv"})
   which map schematic type variables to types and sorts. Below we use the
   function @{ML_ind typ_unify in Sign} from the structure @{ML_struct Sign}
-  for unifying the types @{text "?'a * ?'b"} and @{text "?'b list *
-  nat"}. This will produce the mapping, or type environment, @{text "[?'a :=
-  ?'b list, ?'b := nat]"}.
-*}
-
-ML %linenosgray{*val (tyenv_unif, _) = let
+  for unifying the types \<open>?'a * ?'b\<close> and \<open>?'b list *
+  nat\<close>. This will produce the mapping, or type environment, \<open>[?'a :=
+  ?'b list, ?'b := nat]\<close>.
+\<close>
+
+ML %linenosgray\<open>val (tyenv_unif, _) = let
   val ty1 = @{typ_pat "?'a * ?'b"}
   val ty2 = @{typ_pat "?'b list * nat"}
 in
   Sign.typ_unify @{theory} (ty1, ty2) (Vartab.empty, 0) 
-end*}
-
-text {* 
+end\<close>
+
+text \<open>
   The environment @{ML_ind "Vartab.empty"} in line 5 stands for the empty type
   environment, which is needed for starting the unification without any
-  (pre)instantiations. The @{text 0} is an integer index that will be explained
+  (pre)instantiations. The \<open>0\<close> is an integer index that will be explained
   below. In case of failure, @{ML typ_unify in Sign} will raise the exception
-  @{text TUNIFY}.  We can print out the resulting type environment bound to 
+  \<open>TUNIFY\<close>.  We can print out the resulting type environment bound to 
   @{ML tyenv_unif} with the built-in function @{ML_ind dest in Vartab} from the
   structure @{ML_struct Vartab}.
 
@@ -802,13 +802,13 @@
   "Vartab.dest tyenv_unif"
   "[((\"'a\", 0), ([\"HOL.type\"], \"?'b List.list\")), 
  ((\"'b\", 0), ([\"HOL.type\"], \"nat\"))]"} 
-*}
-
-text_raw {*
+\<close>
+
+text_raw \<open>
   \begin{figure}[t]
   \begin{minipage}{\textwidth}
-  \begin{isabelle}*}
-ML %grayML{*fun pretty_helper aux env =
+  \begin{isabelle}\<close>
+ML %grayML\<open>fun pretty_helper aux env =
   env |> Vartab.dest  
       |> map aux
       |> map (fn (s1, s2) => Pretty.block [s1, Pretty.str " := ", s2]) 
@@ -821,19 +821,19 @@
   val print = apply2 (pretty_typ ctxt)
 in 
   pretty_helper (print o get_typs) tyenv
-end*}
-text_raw {*
+end\<close>
+text_raw \<open>
   \end{isabelle}
   \end{minipage}
   \caption{A pretty printing function for type environments, which are 
   produced by unification and matching.\label{fig:prettyenv}}
   \end{figure}
-*}
-
-text {*
+\<close>
+
+text \<open>
   The first components in this list stand for the schematic type variables and
   the second are the associated sorts and types. In this example the sort is
-  the default sort @{text "HOL.type"}. Instead of @{ML "Vartab.dest"}, we will
+  the default sort \<open>HOL.type\<close>. Instead of @{ML "Vartab.dest"}, we will
   use in what follows our own pretty-printing function from
   Figure~\ref{fig:prettyenv} for @{ML_type "Type.tyenv"}s. For the type
   environment in the example this function prints out the more legible:
@@ -846,38 +846,38 @@
   The way the unification function @{ML typ_unify in Sign} is implemented 
   using an initial type environment and initial index makes it easy to
   unify more than two terms. For example 
-*}
-
-ML %linenosgray{*val (tyenvs, _) = let
+\<close>
+
+ML %linenosgray\<open>val (tyenvs, _) = let
   val tys1 = (@{typ_pat "?'a"}, @{typ_pat "?'b list"})
   val tys2 = (@{typ_pat "?'b"}, @{typ_pat "nat"})
 in
   fold (Sign.typ_unify @{theory}) [tys1, tys2] (Vartab.empty, 0) 
-end*}
-
-text {*
-  The index @{text 0} in Line 5 is the maximal index of the schematic type
-  variables occurring in @{text tys1} and @{text tys2}. This index will be
+end\<close>
+
+text \<open>
+  The index \<open>0\<close> in Line 5 is the maximal index of the schematic type
+  variables occurring in \<open>tys1\<close> and \<open>tys2\<close>. This index will be
   increased whenever a new schematic type variable is introduced during
   unification.  This is for example the case when two schematic type variables
   have different, incomparable sorts. Then a new schematic type variable is
   introduced with the combined sorts. To show this let us assume two sorts,
-  say @{text "s1"} and @{text "s2"}, which we attach to the schematic type
-  variables @{text "?'a"} and @{text "?'b"}. Since we do not make any
+  say \<open>s1\<close> and \<open>s2\<close>, which we attach to the schematic type
+  variables \<open>?'a\<close> and \<open>?'b\<close>. Since we do not make any
   assumption about the sorts, they are incomparable.
-*}
+\<close>
 
 class s1
 class s2 
 
-ML %grayML{*val (tyenv, index) = let
+ML %grayML\<open>val (tyenv, index) = let
   val ty1 = @{typ_pat "?'a::s1"}
   val ty2 = @{typ_pat "?'b::s2"}
 in
   Sign.typ_unify @{theory} (ty1, ty2) (Vartab.empty, 0) 
-end*}
-
-text {*
+end\<close>
+
+text \<open>
   To print out the result type environment we switch on the printing 
   of sort information by setting @{ML_ind show_sorts in Syntax} to 
   true. This allows us to inspect the typing environment.
@@ -886,17 +886,17 @@
   "pretty_tyenv @{context} tyenv"
   "[?'a::s1 := ?'a1::{s1, s2}, ?'b::s2 := ?'a1::{s1, s2}]"}
 
-  As can be seen, the type variables @{text "?'a"} and @{text "?'b"} are instantiated
-  with a new type variable @{text "?'a1"} with sort @{text "{s1, s2}"}. Since a new
-  type variable has been introduced the @{ML index}, originally being @{text 0}, 
-  has been increased to @{text 1}.
+  As can be seen, the type variables \<open>?'a\<close> and \<open>?'b\<close> are instantiated
+  with a new type variable \<open>?'a1\<close> with sort \<open>{s1, s2}\<close>. Since a new
+  type variable has been introduced the @{ML index}, originally being \<open>0\<close>, 
+  has been increased to \<open>1\<close>.
 
   @{ML_response [display,gray]
   "index"
   "1"}
 
-  Let us now return to the unification problem @{text "?'a * ?'b"} and 
-  @{text "?'b list * nat"} from the beginning of this section, and the 
+  Let us now return to the unification problem \<open>?'a * ?'b\<close> and 
+  \<open>?'b list * nat\<close> from the beginning of this section, and the 
   calculated type environment @{ML tyenv_unif}:
 
   @{ML_response_fake [display, gray]
@@ -904,13 +904,12 @@
   "[?'a := ?'b list, ?'b := nat]"}
 
   Observe that the type environment which the function @{ML typ_unify in
-  Sign} returns is \emph{not} an instantiation in fully solved form: while @{text
-  "?'b"} is instantiated to @{typ nat}, this is not propagated to the
-  instantiation for @{text "?'a"}.  In unification theory, this is often
+  Sign} returns is \emph{not} an instantiation in fully solved form: while \<open>?'b\<close> is instantiated to @{typ nat}, this is not propagated to the
+  instantiation for \<open>?'a\<close>.  In unification theory, this is often
   called an instantiation in \emph{triangular form}. These triangular 
   instantiations, or triangular type environments, are used because of 
-  performance reasons. To apply such a type environment to a type, say @{text "?'a *
-  ?'b"}, you should use the function @{ML_ind norm_type in Envir}:
+  performance reasons. To apply such a type environment to a type, say \<open>?'a *
+  ?'b\<close>, you should use the function @{ML_ind norm_type in Envir}:
 
   @{ML_response_fake [display, gray]
   "Envir.norm_type tyenv_unif @{typ_pat \"?'a * ?'b\"}"
@@ -918,18 +917,18 @@
 
   Matching of types can be done with the function @{ML_ind typ_match in Sign}
   also from the structure @{ML_struct Sign}. This function returns a @{ML_type
-  Type.tyenv} as well, but might raise the exception @{text TYPE_MATCH} in case
+  Type.tyenv} as well, but might raise the exception \<open>TYPE_MATCH\<close> in case
   of failure. For example
-*}
-
-ML %grayML{*val tyenv_match = let
+\<close>
+
+ML %grayML\<open>val tyenv_match = let
   val pat = @{typ_pat "?'a * ?'b"}
   and ty  = @{typ_pat "bool list * nat"}
 in
   Sign.typ_match @{theory} (pat, ty) Vartab.empty 
-end*}
-
-text {*
+end\<close>
+
+text \<open>
   Printing out the calculated matcher gives
 
   @{ML_response_fake [display,gray]
@@ -948,18 +947,18 @@
   @{ML subst_type in Envir} for matchers and @{ML norm_type in Envir} 
   for unifiers. To show the difference, let us calculate the 
   following matcher:
-*}
-
-ML %grayML{*val tyenv_match' = let
+\<close>
+
+ML %grayML\<open>val tyenv_match' = let
   val pat = @{typ_pat "?'a * ?'b"}
   and ty  = @{typ_pat "?'b list * nat"}
 in
   Sign.typ_match @{theory} (pat, ty) Vartab.empty 
-end*}
-
-text {*
+end\<close>
+
+text \<open>
   Now @{ML tyenv_unif} is equal to @{ML tyenv_match'}. If we apply 
-  @{ML norm_type in Envir} to the type @{text "?'a * ?'b"} we obtain
+  @{ML norm_type in Envir} to the type \<open>?'a * ?'b\<close> we obtain
 
   @{ML_response_fake [display, gray]
   "Envir.norm_type tyenv_match' @{typ_pat \"?'a * ?'b\"}"
@@ -983,43 +982,43 @@
   which apply a type environment to a type. Type environments are lookup 
   tables which are implemented in @{ML_file "Pure/term_ord.ML"}.
   \end{readmore}
-*}
-
-text {*
+\<close>
+
+text \<open>
   Unification and matching of terms is substantially more complicated than the
   type-case. The reason is that terms have abstractions and, in this context,  
   unification or matching modulo plain equality is often not meaningful. 
   Nevertheless, Isabelle implements the function @{ML_ind
   first_order_match in Pattern} for terms.  This matching function returns a
   type environment and a term environment.  To pretty print the latter we use
-  the function @{text "pretty_env"}:
-*}
-
-ML %grayML{*fun pretty_env ctxt env =
+  the function \<open>pretty_env\<close>:
+\<close>
+
+ML %grayML\<open>fun pretty_env ctxt env =
 let
   fun get_trms (v, (T, t)) = (Var (v, T), t) 
   val print = apply2 (pretty_term ctxt)
 in
   pretty_helper (print o get_trms) env 
-end*}
-
-text {*
-  As can be seen from the @{text "get_trms"}-function, a term environment associates 
+end\<close>
+
+text \<open>
+  As can be seen from the \<open>get_trms\<close>-function, a term environment associates 
   a schematic term variable with a type and a term.  An example of a first-order 
   matching problem is the term @{term "P (\<lambda>a b. Q b a)"} and the pattern 
-  @{text "?X ?Y"}.
-*}
-
-ML %grayML{*val (_, fo_env) = let
+  \<open>?X ?Y\<close>.
+\<close>
+
+ML %grayML\<open>val (_, fo_env) = let
   val fo_pat = @{term_pat "(?X::(nat\<Rightarrow>nat\<Rightarrow>nat)\<Rightarrow>bool) ?Y"}
   val trm_a = @{term "P::(nat\<Rightarrow>nat\<Rightarrow>nat)\<Rightarrow>bool"} 
   val trm_b = @{term "\<lambda>a b. (Q::nat\<Rightarrow>nat\<Rightarrow>nat) b a"}
   val init = (Vartab.empty, Vartab.empty) 
 in
   Pattern.first_order_match @{theory} (fo_pat, trm_a $ trm_b) init
-end *}
-
-text {*
+end\<close>
+
+text \<open>
   In this example we annotated types explicitly because then 
   the type environment is empty and can be ignored. The 
   resulting term environment is
@@ -1048,8 +1047,8 @@
   variables. It can also deal with abstractions and a limited form of
   alpha-equivalence, but this kind of matching should be used with care, since
   it is not clear whether the result is meaningful. A meaningful example is
-  matching @{text "\<lambda>x. P x"} against the pattern @{text "\<lambda>y. ?X y"}. In this
-  case, first-order matching produces @{text "[?X := P]"}.
+  matching \<open>\<lambda>x. P x\<close> against the pattern \<open>\<lambda>y. ?X y\<close>. In this
+  case, first-order matching produces \<open>[?X := P]\<close>.
 
   @{ML_response_fake [display, gray]
   "let
@@ -1062,9 +1061,9 @@
   |> pretty_env @{context} 
 end"
   "[?X := P]"}
-*}
-
-text {*
+\<close>
+
+text \<open>
   Unification of abstractions is more thoroughly studied in the context of
   higher-order pattern unification and higher-order pattern matching.  A
   \emph{\index*{pattern}} is a well-formed term in which the arguments to
@@ -1114,42 +1113,41 @@
   "[?X := \<lambda>y x. x, ?Y := \<lambda>x. x]"}
 
   The function @{ML_ind "Envir.empty"} generates a record with a specified
-  max-index for the schematic variables (in the example the index is @{text
-  0}) and empty type and term environments. The function @{ML_ind
+  max-index for the schematic variables (in the example the index is 
+  \<open>0\<close>) and empty type and term environments. The function @{ML_ind
   "Envir.term_env"} pulls out the term environment from the result record. The
   corresponding function for type environment is @{ML_ind
   "Envir.type_env"}. An assumption of this function is that the terms to be
   unified already have the same type. In case of failure, the exceptions that
-  are raised are either @{text Pattern}, @{text MATCH} or @{text Unif}.
+  are raised are either \<open>Pattern\<close>, \<open>MATCH\<close> or \<open>Unif\<close>.
 
   As mentioned before, unrestricted higher-order unification, and likewise
   unrestricted higher-order matching, is in general undecidable and might also
   not possess a single most general solution. Therefore Isabelle implements the
   unification function @{ML_ind unifiers in Unify} so that it returns a 
   potentially infinite lazy list of unifiers.  An example is as follows
-*}
-
-ML %grayML{*val uni_seq =
+\<close>
+
+ML %grayML\<open>val uni_seq =
 let 
   val trm1 = @{term_pat "?X ?Y"}
   val trm2 = @{term "f a"}
   val init = Envir.empty 0
 in
   Unify.unifiers (Context.Proof @{context}, init, [(trm1, trm2)])
-end *}
-
-text {*
+end\<close>
+
+text \<open>
   The unifiers can be extracted from the lazy sequence using the 
   function @{ML_ind "Seq.pull"}. In the example we obtain three 
-  unifiers @{text "un1\<dots>un3"}.
-*}
-
-ML %grayML{*val SOME ((un1, _), next1) = Seq.pull uni_seq;
+  unifiers \<open>un1\<dots>un3\<close>.
+\<close>
+
+ML %grayML\<open>val SOME ((un1, _), next1) = Seq.pull uni_seq;
 val SOME ((un2, _), next2) = Seq.pull next1;
 val SOME ((un3, _), next3) = Seq.pull next2;
-val NONE = Seq.pull next3 *}
-
-text {*
+val NONE = Seq.pull next3\<close>
+
+text \<open>
   \footnote{\bf FIXME: what is the list of term pairs in the unifier: flex-flex pairs?}
 
   We can print them out as follows.
@@ -1207,20 +1205,20 @@
 
   \begin{isabelle}
   \isacommand{thm}~@{thm [source] spec}\\
-  @{text "> "}~@{thm spec}
+  \<open>> \<close>~@{thm spec}
   \end{isabelle}
 
   as an introduction rule. Applying it directly can lead to unexpected
   behaviour since the unification has more than one solution. One way round
-  this problem is to instantiate the schematic variables @{text "?P"} and
-  @{text "?x"}.  Instantiation function for theorems is 
+  this problem is to instantiate the schematic variables \<open>?P\<close> and
+  \<open>?x\<close>.  The instantiation function for theorems is 
   @{ML_ind instantiate_normalize in Drule} from the structure 
   @{ML_struct Drule}. One problem, however, is
   that this function expects the instantiations as lists of @{ML_type "((indexname * sort) * ctyp)"}
   and @{ML_type "(indexname * typ) * cterm"}, respectively:
 
   \begin{isabelle}
-  @{ML instantiate_normalize in Drule}@{text ":"}
+  @{ML instantiate_normalize in Drule}\<open>:\<close>
   @{ML_type "((indexname * sort) * ctyp) list * ((indexname * typ) * cterm) list -> thm -> thm"}
   \end{isabelle}
 
@@ -1230,23 +1228,23 @@
   from an environment the corresponding variable mappings for schematic type 
   and term variables. These mappings can be turned into proper 
   @{ML_type cterm}-pairs with the function
-*}
-
-ML %grayML{*fun prep_trm ctxt (x, (T, t)) = 
-  ((x, T), Thm.cterm_of ctxt t)*} 
-
-text {*
+\<close>
+
+ML %grayML\<open>fun prep_trm ctxt (x, (T, t)) = 
+  ((x, T), Thm.cterm_of ctxt t)\<close> 
+
+text \<open>
   and into proper @{ML_type ctyp}-pairs with
-*}
-
-ML %grayML{*fun prep_ty ctxt (x, (S, ty)) = 
-  ((x, S), Thm.ctyp_of ctxt ty)*}
-
-text {*
+\<close>
+
+ML %grayML\<open>fun prep_ty ctxt (x, (S, ty)) = 
+  ((x, S), Thm.ctyp_of ctxt ty)\<close>
+
+text \<open>
   We can now calculate the instantiations from the matching function. 
-*}
-
-ML %linenosgray{*fun matcher_inst ctxt pat trm i = 
+\<close>
+
+ML %linenosgray\<open>fun matcher_inst ctxt pat trm i = 
 let
   val univ = Unify.matchers (Context.Proof ctxt) [(pat, trm)] 
   val env = nth (Seq.list_of univ) i
@@ -1254,13 +1252,13 @@
   val tyenv = Vartab.dest (Envir.type_env env)
 in
   (map (prep_ty ctxt) tyenv, map (prep_trm ctxt) tenv)
-end*}
+end\<close>
 
 ML \<open>Context.get_generic_context\<close>
-text {*
+text \<open>
   In Line 3 we obtain the higher-order matchers. We assume there is a finite
   number of them and select the one we are interested in via the parameter 
-  @{text i} in the next line. In Lines 5 and 6 we destruct the resulting 
+  \<open>i\<close> in the next line. In Lines 5 and 6 we destruct the resulting 
   environments using the function @{ML_ind dest in Vartab}. Finally, we need 
   to map the functions @{ML prep_trm} and @{ML prep_ty} over the respective 
   environments (Line 8). As a simple example we instantiate the
@@ -1278,7 +1276,7 @@
 end"
   "\<forall>x. Q x \<Longrightarrow> Q True"}
 
-  Note that we had to insert a @{text "Trueprop"}-coercion in Line 3 since the 
+  Note that we had to insert a \<open>Trueprop\<close>-coercion in Line 3 since the 
   conclusion of @{thm [source] spec} contains one.
  
   \begin{readmore}
@@ -1288,11 +1286,11 @@
   in @{ML_file "Pure/unify.ML"}. It uses lazy sequences which are implemented
   in @{ML_file "Pure/General/seq.ML"}.
   \end{readmore}
-*}
-
-section {* Sorts (TBD)\label{sec:sorts} *}
-
-text {*
+\<close>
+
+section \<open>Sorts (TBD)\label{sec:sorts}\<close>
+
+text \<open>
   Type classes are formal names in the type system which are linked to
   predicates of one type variable (via the axclass mechanism) and thereby
   express extra properties on types, to be propagated by the type system.
@@ -1320,14 +1318,14 @@
   Free and schematic type variables are always annotated with sorts, thereby restricting
   the domain of types they quantify over and corresponding to an implicit hypothesis
   about the type variable.
-*}
-
-
-ML {* Sign.classes_of @{theory} *}
-
-ML {* Sign.of_sort @{theory} *}
-
-text {*
+\<close>
+
+
+ML \<open>Sign.classes_of @{theory}\<close>
+
+ML \<open>Sign.of_sort @{theory}\<close>
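+
+text \<open>
+  For example, the following check (a sketch we add here; \<open>of_sort\<close> 
+  takes a type--sort pair) should evaluate to @{ML true}, since @{typ nat} 
+  is of sort \<open>type\<close>:
+\<close>
+
+ML \<open>(* sketch: nat satisfies the default sort; expected result: true *)
+Sign.of_sort @{theory} (@{typ nat}, @{sort type})\<close>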
+
+text \<open>
   \begin{readmore}
   Classes, sorts and arities are defined in @{ML_file "Pure/term.ML"}.
   
@@ -1344,12 +1342,12 @@
     functionality.
     It also provides the most needed functionality from individual underlying modules.
   \end{readmore}
-*}
-
-
-section {* Type-Checking\label{sec:typechecking} *}
-
-text {* 
+\<close>
+
+
+section \<open>Type-Checking\label{sec:typechecking}\<close>
+
+text \<open>
   Remember Isabelle follows the Church-style typing for terms, i.e.\ a term contains 
   enough typing information (constants, free variables and abstractions all have typing 
   information) so that it is always clear what the type of a term is. 
@@ -1400,8 +1398,8 @@
 "Const (\"HOL.plus_class.plus\", \"nat \<Rightarrow> nat \<Rightarrow> nat\") $
   Const (\"HOL.one_class.one\", \"nat\") $ Free (\"x\", \"nat\")"}
 
-  Instead of giving explicitly the type for the constant @{text "plus"} and the free 
-  variable @{text "x"}, type-inference fills in the missing information.
+  Instead of explicitly giving the type for the constant \<open>plus\<close> and the free 
+  variable \<open>x\<close>, type-inference fills in the missing information.
 
   \begin{readmore}
   See @{ML_file "Pure/Syntax/syntax.ML"} where more functions about reading,
@@ -1417,11 +1415,11 @@
   exercise given in Appendix \ref{ch:solutions} when they receive an 
   ill-typed term as input.
   \end{exercise}
-*}
-
-section {* Certified Terms and Certified Types *}
-
-text {*
+\<close>
+
+section \<open>Certified Terms and Certified Types\<close>
+
+text \<open>
   You can freely construct and manipulate @{ML_type "term"}s and @{ML_type
   typ}es, since they are just arbitrary unchecked trees. However, you
   eventually want to see if a term is well-formed, or type-checks, relative to
@@ -1502,11 +1500,11 @@
   the files @{ML_file "Pure/thm.ML"}, @{ML_file "Pure/more_thm.ML"} and
   @{ML_file "Pure/drule.ML"}.
   \end{readmore}
-*}
-
-section {* Theorems\label{sec:theorems} *}
-
-text {*
+\<close>
+
+section \<open>Theorems\label{sec:theorems}\<close>
+
+text \<open>
   Just like @{ML_type cterm}s, theorems are abstract objects of type @{ML_type thm} 
   that can only be built by going through interfaces. As a consequence, every proof 
   in Isabelle is correct by construction. This follows the tradition of the LCF-approach.
@@ -1514,18 +1512,18 @@
 
   To see theorems in ``action'', let us give a proof on the ML-level for the following 
   statement:
-*}
+\<close>
 
   lemma 
     assumes assm\<^sub>1: "\<And>(x::nat). P x \<Longrightarrow> Q x" 
     and     assm\<^sub>2: "P t"
     shows "Q t"(*<*)oops(*>*) 
 
-text {*
+text \<open>
   The corresponding ML-code is as follows:
-*}
-
-ML %linenosgray{*val my_thm = 
+\<close>
+
+ML %linenosgray\<open>val my_thm = 
 let
   val assm1 = @{cprop "\<And>(x::nat). P x \<Longrightarrow> Q x"}
   val assm2 = @{cprop "(P::nat \<Rightarrow> bool) t"}
@@ -1539,11 +1537,11 @@
   Qt 
   |> Thm.implies_intr assm2
   |> Thm.implies_intr assm1
-end*}
-
-text {*
-  Note that in Line 3 and 4 we use the antiquotation @{text "@{cprop \<dots>}"}, which
-  inserts necessary @{text "Trueprop"}s.
+end\<close>
+
+text \<open>
+  Note that in Lines 3 and 4 we use the antiquotation \<open>@{cprop \<dots>}\<close>, which
+  inserts necessary \<open>Trueprop\<close>s.
 
   If we print out the value of @{ML my_thm} then we see only the 
   final statement of the theorem.
@@ -1556,10 +1554,10 @@
   proof.
 
   \[
-  \infer[(@{text "\<Longrightarrow>"}$-$intro)]{\vdash @{prop "(\<And>x. P x \<Longrightarrow> Q x) \<Longrightarrow> P t \<Longrightarrow> Q t"}}
-    {\infer[(@{text "\<Longrightarrow>"}$-$intro)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"} \vdash @{prop "P t \<Longrightarrow> Q t"}}
-       {\infer[(@{text "\<Longrightarrow>"}$-$elim)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"}, @{prop "P t"} \vdash @{prop "Q t"}}
-          {\infer[(@{text "\<And>"}$-$elim)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"} \vdash @{prop "P t \<Longrightarrow> Q t"}}
+  \infer[(\<open>\<Longrightarrow>\<close>$-$intro)]{\vdash @{prop "(\<And>x. P x \<Longrightarrow> Q x) \<Longrightarrow> P t \<Longrightarrow> Q t"}}
+    {\infer[(\<open>\<Longrightarrow>\<close>$-$intro)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"} \vdash @{prop "P t \<Longrightarrow> Q t"}}
+       {\infer[(\<open>\<Longrightarrow>\<close>$-$elim)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"}, @{prop "P t"} \vdash @{prop "Q t"}}
+          {\infer[(\<open>\<And>\<close>$-$elim)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"} \vdash @{prop "P t \<Longrightarrow> Q t"}}
                  {\infer[(assume)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"} \vdash @{prop "\<And>x. P x \<Longrightarrow> Q x"}}{}}
                  &
            \infer[(assume)]{@{prop "P t"} \vdash @{prop "P t"}}{}
@@ -1575,22 +1573,22 @@
   @{ML_struct Local_Theory} (the Isabelle command
   \isacommand{local\_setup} will be explained in more detail in 
   Section~\ref{sec:local}).
-*}
+\<close>
 
 (*FIXME: add forward reference to Proof_Context.export *)
-ML %linenosgray{*val my_thm_vars =
+ML %linenosgray\<open>val my_thm_vars =
 let
   val ctxt0 = @{context}
   val (_, ctxt1) = Variable.add_fixes ["P", "Q", "t"] ctxt0
 in
   singleton (Proof_Context.export ctxt1 ctxt0) my_thm
-end*}
-
-local_setup %gray {*
-  Local_Theory.note ((@{binding "my_thm"}, []), [my_thm_vars]) #> snd *}
-
-
-text {*
+end\<close>
+
+local_setup %gray \<open>
+  Local_Theory.note ((@{binding "my_thm"}, []), [my_thm_vars]) #> snd\<close>
+
+
+text \<open>
   The third argument of @{ML note in Local_Theory} is the list of theorems we
   want to store under a name. We can store more than one under a single name. 
   The first argument of @{ML note in Local_Theory} is the name under
@@ -1598,13 +1596,13 @@
   list of theorem attributes, which we will explain in detail in
   Section~\ref{sec:attributes}. Below we just use one such attribute,
   @{ML_ind simp_add in Simplifier}, for adding the theorem to the simpset:
-*}
-
-local_setup %gray {*
+\<close>
+
+local_setup %gray \<open>
   Local_Theory.note ((@{binding "my_thm_simp"}, 
-       [Attrib.internal (K Simplifier.simp_add)]), [my_thm_vars]) #> snd *}
-
-text {*
+       [Attrib.internal (K Simplifier.simp_add)]), [my_thm_vars]) #> snd\<close>
+
+text \<open>
   Note that we have to use another name under which the theorem is stored,
   since Isabelle does not allow us to call  @{ML_ind note in Local_Theory} twice
   with the same name. The attribute needs to be wrapped inside the function @{ML_ind
@@ -1628,12 +1626,12 @@
 
     
   \begin{isabelle}
-  \isacommand{thm}~@{text "my_thm my_thm_simp"}\isanewline
-  @{text ">"}~@{prop "\<lbrakk>\<And>x. P x \<Longrightarrow> Q x; P t\<rbrakk> \<Longrightarrow> Q t"}\isanewline
-  @{text ">"}~@{prop "\<lbrakk>\<And>x. P x \<Longrightarrow> Q x; P t\<rbrakk> \<Longrightarrow> Q t"}
+  \isacommand{thm}~\<open>my_thm my_thm_simp\<close>\isanewline
+  \<open>>\<close>~@{prop "\<lbrakk>\<And>x. P x \<Longrightarrow> Q x; P t\<rbrakk> \<Longrightarrow> Q t"}\isanewline
+  \<open>>\<close>~@{prop "\<lbrakk>\<And>x. P x \<Longrightarrow> Q x; P t\<rbrakk> \<Longrightarrow> Q t"}
   \end{isabelle}
 
-  or with the @{text "@{thm \<dots>}"}-antiquotation on the ML-level. Otherwise the 
+  or with the \<open>@{thm \<dots>}\<close>-antiquotation on the ML-level. Otherwise the 
   user has no access to these theorems. 
 
   Recall that Isabelle does not let you call @{ML note in Local_Theory} twice
@@ -1644,78 +1642,78 @@
   dynamically expanding list of theorems (like a simpset). Therefore Isabelle 
   also implements a mechanism where a theorem name can refer to a custom theorem 
   list. For this you can use the function @{ML_ind add_thms_dynamic in Global_Theory}. 
-  To see how it works let us assume we defined our own theorem list @{text MyThmList}.
-*}
-
-ML %grayML{*structure MyThmList = Generic_Data
+  To see how it works let us assume we defined our own theorem list \<open>MyThmList\<close>.
+\<close>
+
+ML %grayML\<open>structure MyThmList = Generic_Data
   (type T = thm list
    val empty = []
    val extend = I
    val merge = merge Thm.eq_thm_prop)
 
-fun update thm = Context.theory_map (MyThmList.map (Thm.add_thm thm))*}
-
-text {*
+fun update thm = Context.theory_map (MyThmList.map (Thm.add_thm thm))\<close>
+
+text \<open>
   The function @{ML update} allows us to update the theorem list, for example
   by adding the theorem @{thm [source] TrueI}.
-*}
-
-setup %gray {* update @{thm TrueI} *}
+\<close>
+
+setup %gray \<open>update @{thm TrueI}\<close>
  
-text {*
+text \<open>
   We can now install the theorem list so that it is visible to the user and 
   can be referred to by a theorem name. For this we need to call 
   @{ML_ind add_thms_dynamic in Global_Theory}
-*}
-
-setup %gray {* 
+\<close>
+
+setup %gray \<open>
   Global_Theory.add_thms_dynamic (@{binding "mythmlist"}, MyThmList.get) 
-*}
-
-text {*
+\<close>
+
+text \<open>
   with a name and a function that accesses the theorem list. Now if the
   user issues the command
 
   \begin{isabelle}
-  \isacommand{thm}~@{text "mythmlist"}\\
-  @{text "> True"}
+  \isacommand{thm}~\<open>mythmlist\<close>\\
+  \<open>> True\<close>
   \end{isabelle}
 
   the current content of the theorem list is displayed. If more theorems are stored in 
   the list, say
-*}
-
-setup %gray {* update @{thm FalseE} *}
-
-text {*
+\<close>
+
+setup %gray \<open>update @{thm FalseE}\<close>
+
+text \<open>
   then the same command produces
   
   \begin{isabelle}
-  \isacommand{thm}~@{text "mythmlist"}\\
-  @{text "> False \<Longrightarrow> ?P"}\\
-  @{text "> True"}
+  \isacommand{thm}~\<open>mythmlist\<close>\\
+  \<open>> False \<Longrightarrow> ?P\<close>\\
+  \<open>> True\<close>
   \end{isabelle}
 
   Note that if we add the theorem @{thm [source] FalseE} again to the list
-*}
-
-setup %gray {* update @{thm FalseE} *}
-
-text {*
+\<close>
+
+setup %gray \<open>update @{thm FalseE}\<close>
+
+text \<open>
   we still obtain the same list. The reason is that we used the function @{ML_ind
   add_thm in Thm} in our update function. This is a dedicated function which
   tests whether the theorem is already in the list.  This test is done
   according to alpha-equivalence of the proposition of the theorem. The
   corresponding testing function is @{ML_ind eq_thm_prop in Thm}.
   Suppose you proved the following three theorems.
-*}
+\<close>
 
 lemma
   shows thm1: "\<forall>x. P x" 
   and   thm2: "\<forall>y. P y" 
   and   thm3: "\<forall>y. Q y" sorry
 
-text {*
+text \<open>
   Testing them for alpha equality produces:
 
   @{ML_response [display,gray]
@@ -1738,13 +1736,13 @@
 
   Other functions produce terms that can be pattern-matched. 
   Consider the following two theorems.
-*}
+\<close>
 
 lemma  
   shows foo_test1: "A \<Longrightarrow> B \<Longrightarrow> C" 
   and   foo_test2: "A \<longrightarrow> B \<longrightarrow> C" sorry
 
-text {*
+text \<open>
   We can destruct them into premises and conclusions as follows. 
 
  @{ML_response_fake [display,gray]
@@ -1765,8 +1763,8 @@
 Premises: 
 Conclusion: ?A \<longrightarrow> ?B \<longrightarrow> ?C"}
 
-  Note that in the second case, there is no premise. The reason is that @{text "\<Longrightarrow>"}
-  separates premises and conclusion, while @{text "\<longrightarrow>"} is the object implication
+  Note that in the second case, there is no premise. The reason is that \<open>\<Longrightarrow>\<close>
+  separates premises and conclusion, while \<open>\<longrightarrow>\<close> is the object implication
   from HOL, which just constructs a formula.
 
   \begin{readmore}
@@ -1788,8 +1786,8 @@
   "(\<lambda>x. x) = (\<lambda>x. x) \<equiv> (\<lambda>x. x) = (\<lambda>x. x)"}
 
   Often it is necessary to transform theorems to and from the object 
-  logic, that is replacing all @{text "\<longrightarrow>"} and @{text "\<forall>"} by @{text "\<Longrightarrow>"} 
-  and @{text "\<And>"}, or the other way around.  A reason for such a transformation 
+  logic, that is replacing all \<open>\<longrightarrow>\<close> and \<open>\<forall>\<close> by \<open>\<Longrightarrow>\<close> 
+  and \<open>\<And>\<close>, or the other way around.  A reason for such a transformation 
   might be stating a definition, since definitions can only be 
   stated using object logic connectives, while theorems using the connectives 
   from the meta logic are more convenient for reasoning. Therefore there are
@@ -1822,21 +1820,21 @@
 
   @{thm [display] list.induct}
 
-  we have to first abstract over the meta variables @{text "?P"} and 
-  @{text "?list"}. For this we can use the function 
+  we have to first abstract over the meta variables \<open>?P\<close> and 
+  \<open>?list\<close>. For this we can use the function 
   @{ML_ind forall_intr_vars in Drule}. This allows us to implement the
   following function for atomizing a theorem.
-*}
-
-ML %grayML{*fun atomize_thm ctxt thm =
+\<close>
+
+ML %grayML\<open>fun atomize_thm ctxt thm =
 let
   val thm' = forall_intr_vars thm
   val thm'' = Object_Logic.atomize ctxt (Thm.cprop_of thm')
 in
   Raw_Simplifier.rewrite_rule ctxt [thm''] thm'
-end*}
-
-text {*
+end\<close>
+
+text \<open>
   This function produces for the theorem @{thm [source] list.induct}
 
   @{ML_response_fake [display, gray]
@@ -1859,7 +1857,7 @@
 
   This function takes first a context and second a list of strings. This list
   specifies which variables should be turned into schematic variables once the term
-  is proved (in this case only @{text "\"P\""}).  The fourth argument is the term to be 
+  is proved (in this case only \<open>"P"\<close>).  The fourth argument is the term to be 
   proved. The fifth is a corresponding proof given in the form of a tactic (we explain 
   tactics in Chapter~\ref{chp:tactical}). In the code above, the tactic proves the 
   theorem by assumption. 
@@ -1867,16 +1865,16 @@
   There is also the possibility of proving multiple goals at the same time
   using the function @{ML_ind prove_common in Goal}. For this let us define the
   following three helper functions.
-*}
-
-ML %grayML{*fun rep_goals i = replicate i @{prop "f x = f x"}
+\<close>
+
+ML %grayML\<open>fun rep_goals i = replicate i @{prop "f x = f x"}
 fun rep_tacs i = replicate i (resolve_tac @{context} [@{thm refl}])
 
 fun multi_test ctxt i =
   Goal.prove_common ctxt NONE ["f", "x"] [] (rep_goals i) 
-    (K ((Goal.conjunction_tac THEN' RANGE (rep_tacs i)) 1))*}
-
-text {*
+    (K ((Goal.conjunction_tac THEN' RANGE (rep_tacs i)) 1))\<close>
+
+text \<open>
   With them we can now produce three theorem instances of the 
   proposition.
 
@@ -1885,19 +1883,19 @@
   "[\"?f ?x = ?f ?x\", \"?f ?x = ?f ?x\", \"?f ?x = ?f ?x\"]"}
 
   However you should be careful with @{ML prove_common in Goal} and very
-  large goals. If you increase the counter in the code above to @{text 3000}, 
+  large goals. If you increase the counter in the code above to \<open>3000\<close>, 
   you will notice that it takes approximately ten(!) times longer than
   using @{ML map} and @{ML prove in Goal}.
-*}
+\<close>
   
-ML %grayML{*let 
+ML %grayML\<open>let 
   fun test_prove ctxt thm =
     Goal.prove ctxt ["P", "x"] [] thm (K (resolve_tac  @{context} [@{thm refl}] 1))
 in
   map (test_prove @{context}) (rep_goals 3000)
-end*}
-
-text {*
+end\<close>
+
+text \<open>
   While the LCF-approach of going through interfaces ensures soundness in Isabelle, there
   is the function @{ML_ind make_thm in Skip_Proof} in the structure 
   @{ML_struct Skip_Proof} that allows us to turn any proposition into a theorem.
@@ -1919,12 +1917,12 @@
   kind, the names for cases and so on. This data is stored in a string-string
   list and can be retrieved with the function @{ML_ind get_tags in
   Thm}. Assume you prove the following lemma.
-*}
+\<close>
 
 lemma foo_data: 
   shows "P \<Longrightarrow> P \<Longrightarrow> P" by assumption
 
-text {*
+text \<open>
   The auxiliary data of this lemma can be retrieved using the function 
   @{ML_ind get_tags in Thm}. So far the auxiliary data of this lemma is 
 
@@ -1936,14 +1934,14 @@
   we might want to explicitly extend this data by attaching case names to the 
   two premises of the lemma.  This can be done with the function @{ML_ind name in Rule_Cases}
   from the structure @{ML_struct Rule_Cases}.
-*}
-
-local_setup %gray {*
+\<close>
+
+local_setup %gray \<open>
   Local_Theory.note ((@{binding "foo_data'"}, []), 
     [(Rule_Cases.name ["foo_case_one", "foo_case_two"] 
-       @{thm foo_data})]) #> snd *}
-
-text {*
+       @{thm foo_data})]) #> snd\<close>
+
+text \<open>
   The data of the theorem @{thm [source] foo_data'} is then as follows:
 
   @{ML_response_fake [display,gray]
@@ -1952,63 +1950,62 @@
  (\"case_names\", \"foo_case_one;foo_case_two\")]"}
 
   You can observe the case names of this lemma on the user level when using 
-  the proof methods @{text cases} and @{text induct}. In the proof below
-*}
+  the proof methods \<open>cases\<close> and \<open>induct\<close>. In the proof below
+\<close>
 
 lemma
 shows "Q \<Longrightarrow> Q \<Longrightarrow> Q"
 proof (cases rule: foo_data')
 
 (*<*)oops(*>*)
-text_raw{*
+text_raw\<open>
 \begin{tabular}{@ {}l}
 \isacommand{print\_cases}\\
-@{text "> cases:"}\\
-@{text ">   foo_case_one:"}\\
-@{text ">     let \"?case\" = \"?P\""}\\
-@{text ">   foo_case_two:"}\\
-@{text ">     let \"?case\" = \"?P\""}
-\end{tabular}*}
-
-text {*
-  we can proceed by analysing the cases @{text "foo_case_one"} and 
-  @{text "foo_case_two"}. While if the theorem has no names, then
-  the cases have standard names @{text 1}, @{text 2} and so 
+\<open>> cases:\<close>\\
+\<open>>   foo_case_one:\<close>\\
+\<open>>     let "?case" = "?P"\<close>\\
+\<open>>   foo_case_two:\<close>\\
+\<open>>     let "?case" = "?P"\<close>
+\end{tabular}\<close>
+
+text \<open>
+  we can proceed by analysing the cases \<open>foo_case_one\<close> and 
+  \<open>foo_case_two\<close>. If the theorem has no case names, then
+  the cases have standard names \<open>1\<close>, \<open>2\<close> and so 
   on. This can be seen in the proof below.
-*}
+\<close>
 
 lemma
 shows "Q \<Longrightarrow> Q \<Longrightarrow> Q"
 proof (cases rule: foo_data)
 
 (*<*)oops(*>*)
-text_raw{*
+text_raw\<open>
 \begin{tabular}{@ {}l}
 \isacommand{print\_cases}\\
-@{text "> cases:"}\\
-@{text ">   1:"}\\
-@{text ">     let \"?case\" = \"?P\""}\\
-@{text ">   2:"}\\
-@{text ">     let \"?case\" = \"?P\""}
-\end{tabular}*}
+\<open>> cases:\<close>\\
+\<open>>   1:\<close>\\
+\<open>>     let "?case" = "?P"\<close>\\
+\<open>>   2:\<close>\\
+\<open>>     let "?case" = "?P"\<close>
+\end{tabular}\<close>
 
  
-text {*
+text \<open>
   Sometimes one wants to know the theorems that are involved in
-  proving a theorem, especially when the proof is by @{text
-  auto}. These theorems can be obtained by introspecting the proved theorem.
+  proving a theorem, especially when the proof is by \<open>auto\<close>. These 
+  theorems can be obtained by introspecting the proved theorem.
   To introspect a theorem, let us define the following three functions
   that analyse the @{ML_type_ind proof_body} data-structure from the
   structure @{ML_struct Proofterm}.
-*}
-
-ML %grayML{*fun pthms_of (PBody {thms, ...}) = map #2 thms 
+\<close>
+
+ML %grayML\<open>fun pthms_of (PBody {thms, ...}) = map #2 thms 
 val get_names = (map Proofterm.thm_node_name) o pthms_of 
-val get_pbodies = map (Future.join o Proofterm.thm_node_body) o pthms_of *}
-
-text {* 
+val get_pbodies = map (Future.join o Proofterm.thm_node_body) o pthms_of\<close>
+
+text \<open>
   To see what their purpose is, consider the following three short proofs.
-*}
+\<close>
 
 lemma my_conjIa:
 shows "A \<and> B \<Longrightarrow> A \<and> B"
@@ -2030,10 +2027,10 @@
 done
 
 
-text {*
+text \<open>
   While the information about which theorems are used is obvious in
   the first two proofs, it is not obvious in the third, because of the
-  @{text auto}-step.  Fortunately, ``behind'' every proved theorem is
+  \<open>auto\<close>-step.  Fortunately, ``behind'' every proved theorem is
   a proof-tree that records all theorems that are employed for
   establishing this theorem.  We can traverse this proof-tree
   extracting this information.  Let us first extract the name of the
@@ -2045,7 +2042,7 @@
   |> get_names"
   "[\"Essential.my_conjIa\"]"}
 
-  whereby @{text "Essential"} refers to the theory name in which
+  whereby \<open>Essential\<close> refers to the theory name in which
   we established the theorem @{thm [source] my_conjIa}. The function @{ML_ind
   proof_body_of in Thm} returns a part of the data that is stored in a
   theorem.  Notice that the first proof above references
@@ -2064,7 +2061,7 @@
 
   The theorems @{thm [source] Pure.protectD} and @{thm [source]
   Pure.protectI} that are internal theorems are always part of a
-  proof in Isabelle. Note also that applications of @{text assumption} do not
+  proof in Isabelle. Note also that applications of \<open>assumption\<close> do not
   count as a separate theorem, as you can see in the following code.
 
   @{ML_response_fake [display,gray]
@@ -2126,54 +2123,54 @@
   styles}}. These are, roughly speaking, functions from terms to terms. The input 
   term stands for the theorem to be presented; the output can be constructed to
   one's wishes.  Let us, for example, assume we want to quote theorems without
-  leading @{text \<forall>}-quantifiers. For this we can implement the following function 
-  that strips off @{text "\<forall>"}s.
-*}
-
-ML %linenosgray{*fun strip_allq (Const (@{const_name "All"}, _) $ Abs body) = 
+  leading \<open>\<forall>\<close>-quantifiers. For this we can implement the following function 
+  that strips off \<open>\<forall>\<close>s.
+\<close>
+
+ML %linenosgray\<open>fun strip_allq (Const (@{const_name "All"}, _) $ Abs body) = 
       Term.dest_abs body |> snd |> strip_allq
   | strip_allq (Const (@{const_name "Trueprop"}, _) $ t) = 
       strip_allq t
-  | strip_allq t = t*}
-
-text {*
+  | strip_allq t = t\<close>
+
+text \<open>
   In Line 2 we use the function @{ML_ind dest_abs in Term} for deconstructing abstractions,
   since this function deals correctly with potential name clashes. It produces
   a pair consisting of the variable and the body of the abstraction. We are only interested
   in the body, which we feed into the recursive call. In Lines 3 and 4, we also
   have to explicitly strip off the outermost @{term Trueprop}-coercion. Now we can 
-  install this function as the theorem style named @{text "my_strip_allq"}. 
-*}
-
-setup %gray{* 
+  install this function as the theorem style named \<open>my_strip_allq\<close>. 
+\<close>
+
+setup %gray\<open>
   Term_Style.setup @{binding "my_strip_allq"} (Scan.succeed (K strip_allq)) 
-*}
-
-text {*
+\<close>
+
+text \<open>
   We can test this theorem style with the following theorem
-*}
+\<close>
 
 theorem style_test:
 shows "\<forall>x y z. (x, x) = (y, z)" sorry
 
-text {*
+text \<open>
   Now printing out the theorem @{thm [source] style_test} in a document, normally
-  using @{text "@{thm \<dots>}"} produces
+  using \<open>@{thm \<dots>}\<close> produces
 
   \begin{isabelle}
   \begin{graybox}
-  @{text "@{thm style_test}"}\\
-  @{text ">"}~@{thm style_test}
+  \<open>@{thm style_test}\<close>\\
+  \<open>>\<close>~@{thm style_test}
   \end{graybox}
   \end{isabelle}
 
-  as expected. But with the theorem style @{text "@{thm (my_strip_allq) \<dots>}"} 
+  as expected. But with the theorem style \<open>@{thm (my_strip_allq) \<dots>}\<close> 
   we obtain
 
   \begin{isabelle}
   \begin{graybox}
-  @{text "@{thm (my_strip_allq) style_test}"}\\
-  @{text ">"}~@{thm (my_strip_allq) style_test}
+  \<open>@{thm (my_strip_allq) style_test}\<close>\\
+  \<open>>\<close>~@{thm (my_strip_allq) style_test}
   \end{graybox}
   \end{isabelle}
   
@@ -2181,43 +2178,43 @@
   giving a list of strings that should be used for the replacement of the
   variables. For this we implement the function which takes a list of strings
   and uses them as names in the outermost abstractions.
-*}
-
-ML %grayML{*fun rename_allq [] t = t
+\<close>
+
+ML %grayML\<open>fun rename_allq [] t = t
   | rename_allq (x::xs) (Const (@{const_name "All"}, U) $ Abs (_, T, t)) = 
       Const (@{const_name "All"}, U) $ Abs (x, T, rename_allq xs t)
   | rename_allq xs (Const (@{const_name "Trueprop"}, U) $ t) =
       rename_allq xs t
-  | rename_allq _ t = t*}
-
-text {*
+  | rename_allq _ t = t\<close>
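+
+text \<open>
+  A quick check of this function (our sketch): renaming the two 
+  quantifiers of \<open>\<forall>x y. P x y\<close> to \<open>a\<close> and \<open>b\<close>.
+\<close>
+
+ML %grayML\<open>(* sketch: the bound names x and y become a and b *)
+rename_allq ["a", "b"] @{term "\<forall>x y. P x y"}\<close>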
+
+text \<open>
   We can now install the modified theorem style as follows
-*}
-
-setup %gray {* let
+\<close>
+
+setup %gray \<open>let
   val parser = Scan.repeat Args.name
   fun action xs = K (rename_allq xs #> strip_allq)
 in
   Term_Style.setup @{binding "my_strip_allq2"} (parser >> action)
-end *}
-
-text {*
-  The parser reads a list of names. In the function @{text action} we first
+end\<close>
+
+text \<open>
+  The parser reads a list of names. In the function \<open>action\<close> we first
   call @{ML rename_allq} with the parsed list, then we call @{ML strip_allq}
   on the resulting term. We can now suggest, for example, two variables for
-  stripping off the first two @{text \<forall>}-quantifiers.
+  stripping off the first two \<open>\<forall>\<close>-quantifiers.
 
   \begin{isabelle}
   \begin{graybox}
-  @{text "@{thm (my_strip_allq2 x' x'') style_test}"}\\
-  @{text ">"}~@{thm (my_strip_allq2 x' x'') style_test}
+  \<open>@{thm (my_strip_allq2 x' x'') style_test}\<close>\\
+  \<open>>\<close>~@{thm (my_strip_allq2 x' x'') style_test}
   \end{graybox}
   \end{isabelle}
 
   Such styles allow one to print out theorems in documents formatted to 
   one's heart's content. The styles can also be used in the document 
-  antiquotations @{text "@{prop ...}"}, @{text "@{term_type ...}"}
-  and @{text "@{typeof ...}"}.
+  antiquotations \<open>@{prop ...}\<close>, \<open>@{term_type ...}\<close>
+  and \<open>@{typeof ...}\<close>.
 
   Next we explain theorem attributes, which are another
   mechanism for dealing with theorems.
@@ -2225,13 +2222,12 @@
   \begin{readmore}
   Theorem styles are implemented in @{ML_file "Pure/Thy/term_style.ML"}.
   \end{readmore}
-*}
-
-section {* Theorem Attributes\label{sec:attributes} *}
-
-text {*
-  Theorem attributes are @{text "[symmetric]"}, @{text "[THEN \<dots>]"}, @{text
-  "[simp]"} and so on. Such attributes are \emph{neither} tags \emph{nor} flags
+\<close>
+
+section \<open>Theorem Attributes\label{sec:attributes}\<close>
+
+text \<open>
+  Theorem attributes are \<open>[symmetric]\<close>, \<open>[THEN \<dots>]\<close>, 
+  \<open>[simp]\<close> and so on. Such attributes are \emph{neither} tags \emph{nor} flags
   annotated to theorems, but functions that do further processing of 
   theorems. In particular, it is not possible to find out
   which theorems have a given attribute in common, unless of course
@@ -2243,121 +2239,121 @@
 
   \begin{isabelle}
   \isacommand{print\_attributes}\\
-  @{text "> COMP:  direct composition with rules (no lifting)"}\\
-  @{text "> HOL.dest:  declaration of Classical destruction rule"}\\
-  @{text "> HOL.elim:  declaration of Classical elimination rule"}\\
-  @{text "> \<dots>"}
+  \<open>> COMP:  direct composition with rules (no lifting)\<close>\\
+  \<open>> HOL.dest:  declaration of Classical destruction rule\<close>\\
+  \<open>> HOL.elim:  declaration of Classical elimination rule\<close>\\
+  \<open>> \<dots>\<close>
   \end{isabelle}
   
   The theorem attributes fall roughly into two categories: the first category manipulates
-  theorems (for example @{text "[symmetric]"} and @{text "[THEN \<dots>]"}), and the second
-  stores theorems somewhere as data (for example @{text "[simp]"}, which adds
+  theorems (for example \<open>[symmetric]\<close> and \<open>[THEN \<dots>]\<close>), and the second
+  stores theorems somewhere as data (for example \<open>[simp]\<close>, which adds
   theorems to the current simpset).
 
   To explain how to write your own attribute, let us start with an extremely simple 
-  version of the attribute @{text "[symmetric]"}. The purpose of this attribute is
+  version of the attribute \<open>[symmetric]\<close>. The purpose of this attribute is
   to produce the ``symmetric'' version of an equation. The main function behind 
   this attribute is
-*}
-
-ML %grayML{*val my_symmetric = Thm.rule_attribute [] (fn _ => fn thm => thm RS @{thm sym})*}
-
-text {* 
+\<close>
+
+ML %grayML\<open>val my_symmetric = Thm.rule_attribute [] (fn _ => fn thm => thm RS @{thm sym})\<close>
+
+text \<open>
   where the function @{ML_ind  rule_attribute in Thm} expects a function taking a 
-  context (which we ignore in the code above) and a theorem (@{text thm}), and 
-  returns another theorem (namely @{text thm} resolved with the theorem 
+  context (which we ignore in the code above) and a theorem (\<open>thm\<close>), and 
+  returns another theorem (namely \<open>thm\<close> resolved with the theorem 
   @{thm [source] sym}: @{thm sym[no_vars]}; the function @{ML_ind RS in Drule} 
   is explained in Section~\ref{sec:simpletacs}). The function 
   @{ML rule_attribute in Thm} then returns  an attribute.
 
   Before we can use the attribute, we need to set it up. This can be done
   using the Isabelle command \isacommand{attribute\_setup} as follows:
-*}
+\<close>
 
 attribute_setup %gray my_sym = 
-  {* Scan.succeed my_symmetric *} "applying the sym rule"
-
-text {*
-  Inside the @{text "\<verbopen> \<dots> \<verbclose>"}, we have to specify a parser
+  \<open>Scan.succeed my_symmetric\<close> "applying the sym rule"
+
+text \<open>
+  Inside the \<open>\<verbopen> \<dots> \<verbclose>\<close>, we have to specify a parser
   for the theorem attribute. Since the attribute does not expect any further 
-  arguments (unlike @{text "[THEN \<dots>]"}, for instance), we use the parser @{ML
-  Scan.succeed}. An example for the attribute @{text "[my_sym]"} is the proof
-*} 
+  arguments (unlike \<open>[THEN \<dots>]\<close>, for instance), we use the parser @{ML
+  Scan.succeed}. An example for the attribute \<open>[my_sym]\<close> is the proof
+\<close> 
 
 lemma test[my_sym]: "2 = Suc (Suc 0)" by simp
 
-text {*
+text \<open>
   which stores the theorem @{thm test} under the name @{thm [source] test}. You
   can see this if you query the lemma: 
 
   \begin{isabelle}
-  \isacommand{thm}~@{text "test"}\\
-  @{text "> "}~@{thm test}
+  \isacommand{thm}~\<open>test\<close>\\
+  \<open>> \<close>~@{thm test}
   \end{isabelle}
 
   We can also use the attribute when referring to this theorem:
 
   \begin{isabelle}
-  \isacommand{thm}~@{text "test[my_sym]"}\\
-  @{text "> "}~@{thm test[my_sym]}
+  \isacommand{thm}~\<open>test[my_sym]\<close>\\
+  \<open>> \<close>~@{thm test[my_sym]}
   \end{isabelle}
 
   An alternative for setting up an attribute is the function @{ML_ind  setup in Attrib}.
   So instead of using \isacommand{attribute\_setup}, you can also set up the
   attribute as follows:
-*}
-
-ML %grayML{*Attrib.setup @{binding "my_sym"} (Scan.succeed my_symmetric)
-  "applying the sym rule" *}
-
-text {*
+\<close>
+
+ML %grayML\<open>Attrib.setup @{binding "my_sym"} (Scan.succeed my_symmetric)
+  "applying the sym rule"\<close>
+
+text \<open>
   This gives a function from @{ML_type "theory -> theory"}, which
   can be used for example with \isacommand{setup} or with
   @{ML "Context.>> o Context.map_theory"}.\footnote{\bf FIXME: explain what happens here.}
 
   As an example of a slightly more complicated theorem attribute, we implement 
-  our own version of @{text "[THEN \<dots>]"}. This attribute will take a list of theorems
+  our own version of \<open>[THEN \<dots>]\<close>. This attribute will take a list of theorems
   as argument and resolve the theorem with this list (one theorem 
   after another). The code for this attribute is
-*}
-
-ML %grayML{*fun MY_THEN thms = 
+\<close>
+
+ML %grayML\<open>fun MY_THEN thms = 
 let
   fun RS_rev thm1 thm2 = thm2 RS thm1
 in
   Thm.rule_attribute [] (fn _ => fn thm => fold RS_rev thms thm)
-end*}
-
-text {* 
+end\<close>
+
+text \<open>
   where for convenience we define the reverse and curried version of @{ML RS}. 
   The setup of this theorem attribute uses the parser @{ML thms in Attrib}, 
   which parses a list of theorems. 
-*}
-
-attribute_setup %gray MY_THEN = {* Attrib.thms >> MY_THEN *} 
+\<close>
+
+attribute_setup %gray MY_THEN = \<open>Attrib.thms >> MY_THEN\<close> 
   "resolving the list of theorems with the theorem"
 
-text {* 
+text \<open>
   You can, for example, use this theorem attribute to turn an equation into a 
   meta-equation:
 
   \begin{isabelle}
-  \isacommand{thm}~@{text "test[MY_THEN eq_reflection]"}\\
-  @{text "> "}~@{thm test[MY_THEN eq_reflection]}
+  \isacommand{thm}~\<open>test[MY_THEN eq_reflection]\<close>\\
+  \<open>> \<close>~@{thm test[MY_THEN eq_reflection]}
   \end{isabelle}
 
   If you need the symmetric version as a meta-equation, you can write
 
   \begin{isabelle}
-  \isacommand{thm}~@{text "test[MY_THEN sym eq_reflection]"}\\
-  @{text "> "}~@{thm test[MY_THEN sym eq_reflection]}
+  \isacommand{thm}~\<open>test[MY_THEN sym eq_reflection]\<close>\\
+  \<open>> \<close>~@{thm test[MY_THEN sym eq_reflection]}
   \end{isabelle}
 
   It is also possible to combine different theorem attributes, as in:
   
   \begin{isabelle}
-  \isacommand{thm}~@{text "test[my_sym, MY_THEN eq_reflection]"}\\
-  @{text "> "}~@{thm test[my_sym, MY_THEN eq_reflection]}
+  \isacommand{thm}~\<open>test[my_sym, MY_THEN eq_reflection]\<close>\\
+  \<open>> \<close>~@{thm test[my_sym, MY_THEN eq_reflection]}
   \end{isabelle}
   
   However, here also a weakness of the concept
@@ -2365,8 +2361,8 @@
   arbitrary functions, they do not commute in general. If you try
 
   \begin{isabelle}
-  \isacommand{thm}~@{text "test[MY_THEN eq_reflection, my_sym]"}\\
-  @{text "> "}~@{text "exception THM 1 raised: RSN: no unifiers"}
+  \isacommand{thm}~\<open>test[MY_THEN eq_reflection, my_sym]\<close>\\
+  \<open>> \<close>~\<open>exception THM 1 raised: RSN: no unifiers\<close>
   \end{isabelle}
 
   you get an exception indicating that the theorem @{thm [source] sym}
@@ -2374,42 +2370,42 @@
 
   The purpose of @{ML_ind rule_attribute in Thm} is to directly manipulate
   theorems.  Another usage of theorem attributes is to add and delete theorems
-  from stored data.  For example the theorem attribute @{text "[simp]"} adds
+  from stored data.  For example the theorem attribute \<open>[simp]\<close> adds
   or deletes a theorem from the current simpset. For these applications, you
   can use @{ML_ind declaration_attribute in Thm}. To illustrate this function,
   let us introduce a theorem list.
-*}
-
-ML %grayML{*structure MyThms = Named_Thms
+\<close>
+
+ML %grayML\<open>structure MyThms = Named_Thms
   (val name = @{binding "attr_thms"} 
-   val description = "Theorems for an Attribute") *}
-
-text {* 
+   val description = "Theorems for an Attribute")\<close>
+
+text \<open>
   We are going to modify this list by adding and deleting theorems.
   For this we use the two functions @{ML MyThms.add_thm} and
   @{ML MyThms.del_thm}. You can turn them into attributes 
   with the code
-*}
-
-ML %grayML{*val my_add = Thm.declaration_attribute MyThms.add_thm
-val my_del = Thm.declaration_attribute MyThms.del_thm *}
-
-text {* 
+\<close>
+
+ML %grayML\<open>val my_add = Thm.declaration_attribute MyThms.add_thm
+val my_del = Thm.declaration_attribute MyThms.del_thm\<close>
+
+text \<open>
   and set up the attributes as follows
-*}
-
-attribute_setup %gray my_thms = {* Attrib.add_del my_add my_del *} 
+\<close>
+
+attribute_setup %gray my_thms = \<open>Attrib.add_del my_add my_del\<close> 
   "maintaining a list of my_thms" 
 
-text {*
+text \<open>
   The parser @{ML_ind  add_del in Attrib} is a predefined parser for 
   adding and deleting lemmas. Now if you prove the next lemma 
-  and attach to it the attribute @{text "[my_thms]"}
-*}
+  and attach to it the attribute \<open>[my_thms]\<close>
+\<close>
 
 lemma trueI_2[my_thms]: "True" by simp
 
-text {*
+text \<open>
   then you can see it is added to the initially empty list.
 
   @{ML_response_fake [display,gray]
@@ -2417,12 +2413,12 @@
   "[\"True\"]"}
 
   You can also add theorems using the command \isacommand{declare}.
-*}
+\<close>
 
 declare test[my_thms] trueI_2[my_thms add]
 
-text {*
-  With this attribute, the @{text "add"} operation is the default and does 
+text \<open>
+  With this attribute, the \<open>add\<close> operation is the default and does 
   not need to be explicitly given. These three declarations will cause the 
   theorem list to be updated as:
 
@@ -2433,11 +2429,11 @@
   The theorem @{thm [source] trueI_2} only appears once, since the 
   function @{ML_ind  add_thm in Thm} tests for duplicates, before extending
   the list. Deletion from the list works as follows:
-*}
+\<close>
 
 declare test[my_thms del]
 
-text {* After this, the theorem list is again: 
+text \<open>After this, the theorem list is again: 
 
   @{ML_response_fake [display,gray]
   "MyThms.get @{context}"
@@ -2447,7 +2443,7 @@
   declaration_attribute in Thm}, but there can be any number of them. We just
   have to change the parser for reading the arguments accordingly.
 
-  \footnote{\bf FIXME What are: @{text "theory_attributes"}, @{text "proof_attributes"}?}
+  \footnote{\bf FIXME What are: \<open>theory_attributes\<close>, \<open>proof_attributes\<close>?}
   \footnote{\bf FIXME readmore}
 
   \begin{readmore}
@@ -2455,11 +2451,11 @@
   @{ML_file "Pure/Isar/attrib.ML"}...also explained in the chapter about
   parsing.
   \end{readmore}
-*}
-
-section {* Pretty-Printing\label{sec:pretty} *}
-
-text {*
+\<close>
+
+section \<open>Pretty-Printing\label{sec:pretty}\<close>
+
+text \<open>
   So far we printed out only plain strings without any formatting except for
   occasional explicit line breaks using @{text [quotes] "\\n"}. This is
   sufficient for ``quick-and-dirty'' printouts. For something more
@@ -2483,26 +2479,26 @@
   string}. This can be done with the function @{ML_ind  string_of in Pretty}. In what
   follows we will use the following wrapper function for printing values 
   of type @{ML_type Pretty.T}:
-*}
-
-ML %grayML{*fun pprint prt = tracing (Pretty.string_of prt)*}
-
-text {*
+\<close>
+
+ML %grayML\<open>fun pprint prt = tracing (Pretty.string_of prt)\<close>
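+
+text \<open>
+  For instance, the following line (a sketch) prints a short string via 
+  this wrapper:
+\<close>
+
+ML %grayML\<open>(* sketch: prints "a pretty string" *)
+pprint (Pretty.str "a pretty string")\<close>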
+
+text \<open>
   The point of the pretty-printing infrastructure is to give hints about how to
   layout text and let Isabelle do the actual layout. Let us first explain
   how you can insert places where a line break can occur. For this assume the
   following function that replicates a string n times:
-*}
-
-ML %grayML{*fun rep n str = implode (replicate n str) *}
-
-text {*
+\<close>
+
+ML %grayML\<open>fun rep n str = implode (replicate n str)\<close>
+
+text \<open>
   and suppose we want to print out the string:
-*}
-
-ML %grayML{*val test_str = rep 8 "fooooooooooooooobaaaaaaaaaaaar "*}
-
-text {*
+\<close>
+
+ML %grayML\<open>val test_str = rep 8 "fooooooooooooooobaaaaaaaaaaaar "\<close>
+
+text \<open>
   We deliberately chose a large string so that it spans more than one line. 
   If we print out the string using the usual ``quick-and-dirty'' method, then
   we obtain the ugly output:
@@ -2590,9 +2586,9 @@
   list as a ``set'', that is, enclosed inside @{text [quotes] "{"} and
   @{text [quotes] "}"}, and separated by commas using the function
   @{ML_ind  enum in Pretty}. For example
-*}
-
-text {*
+\<close>
+
+text \<open>
   
 @{ML_response_fake [display,gray]
 "let
@@ -2613,9 +2609,9 @@
   the last two items are separated by @{text [quotes] "and"}. For this you can
   write the function
 
-*}
-
-ML %linenosgray{*fun and_list [] = []
+\<close>
+
+ML %linenosgray\<open>fun and_list [] = []
   | and_list [x] = [x]
   | and_list xs = 
       let 
@@ -2623,9 +2619,9 @@
       in
         (Pretty.commas front) @ 
         [Pretty.brk 1, Pretty.str "and", Pretty.brk 1, last]
-      end *}
-
-text {*
+      end\<close>
+
+text \<open>
   where Line 7 prints the beginning of the list and Line
   8 the last item. We have to use @{ML "Pretty.brk 1"} in order
   to insert a space (of length 1) before the 
@@ -2695,10 +2691,10 @@
 
   We use the function @{ML_ind pretty_term in Syntax} for pretty-printing
   terms. Next we would like to pretty-print a term and its type. For this we use the
-  function @{text "tell_type"}.
-*}
-
-ML %linenosgray{*fun tell_type ctxt trm = 
+  function \<open>tell_type\<close>.
+\<close>
+
+ML %linenosgray\<open>fun tell_type ctxt trm = 
 let
   fun pstr s = Pretty.breaks (map Pretty.str (space_explode " " s))
   val ptrm = Pretty.quote (Syntax.pretty_term ctxt trm)
@@ -2707,9 +2703,9 @@
   pprint (Pretty.blk (0, 
     (pstr "The term " @ [ptrm] @ pstr " has type " 
       @ [pty, Pretty.str "."])))
-end*}
-
-text {*
+end\<close>
+
+text \<open>
   In Line 3 we define a function that inserts possible linebreaks in places
   where a space occurs. In Lines 4 and 5 we pretty-print the term and its type
   using the functions @{ML pretty_term in Syntax} and @{ML_ind 
@@ -2738,17 +2734,17 @@
   
   @{ML_file "Pure/PIDE/markup.ML"}
   \end{readmore}
-*}
-
-section {* Summary *}
-
-text {*
+\<close>
+
+section \<open>Summary\<close>
+
+text \<open>
   \begin{conventions}
   \begin{itemize}
   \item Start with a proper context and then pass it around 
   through all your functions.
   \end{itemize}
   \end{conventions}
-*}
+\<close>
 
 end
--- a/ProgTutorial/First_Steps.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/First_Steps.thy	Tue May 14 17:10:47 2019 +0200
@@ -2,11 +2,11 @@
 imports Base
 begin
                                                   
-chapter {* First Steps\label{chp:firststeps} *}
+chapter \<open>First Steps\label{chp:firststeps}\<close>
 
 
 
-text {*
+text \<open>
   \begin{flushright}
   {\em ``The most effective debugging tool is still careful thought,\\ 
   coupled with judiciously placed print statements.''} \\[1ex]
@@ -29,20 +29,20 @@
 
   We also generally assume you are working with the logic HOL. The examples
   that will be given might need to be adapted if you work in a different logic.
-*}
+\<close>
 
-section {* Including ML-Code\label{sec:include} *}
+section \<open>Including ML-Code\label{sec:include}\<close>
 
-text {*
+text \<open>
   The easiest and quickest way to include code in a theory is by using the
   \isacommand{ML}-command. For example:
 
   \begin{isabelle}
   \begin{graybox}
-  \isacommand{ML}~@{text "\<verbopen>"}\isanewline
+  \isacommand{ML}~\<open>\<verbopen>\<close>\isanewline
   \hspace{5mm}@{ML "3 + 4"}\isanewline
-  @{text "\<verbclose>"}\isanewline
-  @{text "> 7"}\smallskip
+  \<open>\<verbclose>\<close>\isanewline
+  \<open>> 7\<close>\smallskip
   \end{graybox}
   \end{isabelle}
 
@@ -52,8 +52,7 @@
   Jedit GUI, then you just have to hover the cursor over the code 
   and you see the evaluated result in the ``Output'' window.
 
-  As mentioned in the Introduction, we will drop the \isacommand{ML}~@{text
-  "\<verbopen> \<dots> \<verbclose>"} scaffolding whenever we show code. The lines
+  As mentioned in the Introduction, we will drop the 
+  \isacommand{ML}~\<open>\<verbopen> \<dots> \<verbclose>\<close> scaffolding whenever we show code. The lines
   prefixed with @{text [quotes] ">"} are not part of the code; rather, they
   indicate what the response is when the code is evaluated.  There are also
   the commands \isacommand{ML\_val} and \isacommand{ML\_prf} for including
@@ -63,9 +62,9 @@
 
   \begin{quote}
   \begin{isabelle}
-  \isacommand{lemma}~@{text "test:"}\isanewline
+  \isacommand{lemma}~\<open>test:\<close>\isanewline
   \isacommand{shows}~@{text [quotes] "True"}\isanewline
-  \isacommand{ML\_prf}~@{text "\<verbopen>"}~@{ML "writeln \"Trivial!\""}~@{text "\<verbclose>"}\isanewline
+  \isacommand{ML\_prf}~\<open>\<verbopen>\<close>~@{ML "writeln \"Trivial!\""}~\<open>\<verbclose>\<close>\isanewline
   \isacommand{oops}
   \end{isabelle}
   \end{quote}
@@ -84,15 +83,15 @@
   \isacommand{imports} Main\\
   \isacommand{begin}\\
   \ldots\\
-  \isacommand{ML\_file}~@{text "\"file_to_be_included.ML\""}\\
+  \isacommand{ML\_file}~\<open>"file_to_be_included.ML"\<close>\\
   \ldots
   \end{tabular}
   \end{quote}
-*}
+\<close>
 
-section {* Printing and Debugging\label{sec:printing} *}
+section \<open>Printing and Debugging\label{sec:printing}\<close>
 
-text {*
+text \<open>
   During development you might find it necessary to inspect data in your
   code. This can be done in a ``quick-and-dirty'' fashion using the function
   @{ML_ind writeln in Output}. For example
@@ -103,11 +102,11 @@
   This function expects a string as argument. If you develop under
   PolyML, then there is a convenient, though again ``quick-and-dirty'', method
   for converting values into strings, namely the antiquotation 
-  @{text "@{make_string}"}:
+  \<open>@{make_string}\<close>:
 
   @{ML_response_fake [display,gray] "writeln (@{make_string} 1)" "\"1\""} 
 
-  However, @{text "@{make_string}"} only works if the type of what
+  However, \<open>@{make_string}\<close> only works if the type of what
   is converted is monomorphic and not a function. 
 
   You can print out error messages with the function @{ML_ind error in Library}; 
@@ -118,12 +117,12 @@
 "*** foo
 ***"}
 
-  This function raises the exception @{text ERROR}, which will then 
+  This function raises the exception \<open>ERROR\<close>, which will then 
   be displayed by the infrastructure, indicating the error by
   painting the output red. Note that this exception is meant 
   for ``user-level'' error messages seen by the ``end-user''. 
   For messages where you want to indicate a genuine program error
-  use the exception @{text Fail}. 
+  use the exception \<open>Fail\<close>. 
 
   Most often you want to inspect the contents of Isabelle's basic data structures,
   namely @{ML_type term}, @{ML_type typ}, @{ML_type cterm}, @{ML_type ctyp}
@@ -132,12 +131,12 @@
   we just use the functions @{ML_ind writeln in Pretty}  from the structure
   @{ML_struct Pretty} and @{ML_ind pretty_term in Syntax} from the structure
   @{ML_struct Syntax}. For more convenience, we bind them to the toplevel.
-*}
+\<close>
 
-ML %grayML{*val pretty_term = Syntax.pretty_term
-val pwriteln = Pretty.writeln*}
+ML %grayML\<open>val pretty_term = Syntax.pretty_term
+val pwriteln = Pretty.writeln\<close>
 
-text {*
+text \<open>
   They can be used as follows
 
   @{ML_response_fake [display,gray]
@@ -147,27 +146,27 @@
   If there is more than one term to be printed, you can use the 
   functions @{ML_ind commas in Pretty} and @{ML_ind block in Pretty} 
   to separate them.
-*} 
+\<close> 
 
-ML %grayML{*fun pretty_terms ctxt trms =  
-  Pretty.block (Pretty.commas (map (pretty_term ctxt) trms))*}
+ML %grayML\<open>fun pretty_terms ctxt trms =  
+  Pretty.block (Pretty.commas (map (pretty_term ctxt) trms))\<close>
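+
+text \<open>
+  For example, a list of terms can be printed as follows (a small usage
+  sketch; the exact layout depends on the available line width):
+
+  @{ML_response_fake [display,gray]
+  "pwriteln (pretty_terms @{context} [@{term \"1::nat\"}, @{term \"x\"}, @{term \"y\"}])"
+  "1, x, y"}
+\<close>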
 
-text {*
+text \<open>
   To print out terms together with their typing information,
   set the configuration value
   @{ML_ind show_types in Syntax} to @{ML true}.
-*}
+\<close>
 
-ML %grayML{*val show_types_ctxt = Config.put show_types true @{context}*}
+ML %grayML\<open>val show_types_ctxt = Config.put show_types true @{context}\<close>
 
-text {*
+text \<open>
   Now by using this context @{ML pretty_term} prints out
 
   @{ML_response_fake [display, gray]
   "pwriteln (pretty_term show_types_ctxt @{term \"(1::nat, x)\"})"
   "(1::nat, x::'a)"}
 
-  where @{text 1} and @{text x} are displayed with their inferred types.
+  where \<open>1\<close> and \<open>x\<close> are displayed with their inferred types.
   Other configuration values that influence
   printing of terms include 
 
@@ -178,34 +177,34 @@
   \end{itemize}
 
   A @{ML_type cterm} can be printed with the following function.
-*}
+\<close>
 
-ML %grayML %grayML{*fun pretty_cterm ctxt ctrm =  
-  pretty_term ctxt (Thm.term_of ctrm)*}
+ML %grayML\<open>fun pretty_cterm ctxt ctrm =  
+  pretty_term ctxt (Thm.term_of ctrm)\<close>
 
-text {*
+text \<open>
   Here the function @{ML_ind term_of in Thm} extracts the @{ML_type
   term} from a @{ML_type cterm}.  More than one @{ML_type cterm}s can be
   printed again with @{ML commas in Pretty}.
-*} 
+\<close> 
 
-ML %grayML{*fun pretty_cterms ctxt ctrms =  
-  Pretty.block (Pretty.commas (map (pretty_cterm ctxt) ctrms))*}
+ML %grayML\<open>fun pretty_cterms ctxt ctrms =  
+  Pretty.block (Pretty.commas (map (pretty_cterm ctxt) ctrms))\<close>
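+
+text \<open>
+  For example:
+
+  @{ML_response_fake [display,gray]
+  "pwriteln (pretty_cterms @{context} [@{cterm \"1::nat\"}, @{cterm \"x\"}])"
+  "1, x"}
+\<close>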
 
-text {*
+text \<open>
   The easiest way to get the string of a theorem is to transform it
   into a @{ML_type term} using the function @{ML_ind prop_of in Thm}. 
-*}
+\<close>
 
-ML %grayML{*fun pretty_thm ctxt thm =
-  pretty_term ctxt (Thm.prop_of thm)*}
+ML %grayML\<open>fun pretty_thm ctxt thm =
+  pretty_term ctxt (Thm.prop_of thm)\<close>
 
-text {*
-  Theorems include schematic variables, such as @{text "?P"}, 
-  @{text "?Q"} and so on. They are instantiated by Isabelle when a theorem is applied.
+text \<open>
+  Theorems include schematic variables, such as \<open>?P\<close>, 
+  \<open>?Q\<close> and so on. They are instantiated by Isabelle when a theorem is applied.
   For example, the theorem 
   @{thm [source] conjI} shown below can be used for any (typable) 
-  instantiation of @{text "?P"} and @{text "?Q"}. 
+  instantiation of \<open>?P\<close> and \<open>?Q\<close>. 
 
   @{ML_response_fake [display, gray]
   "pwriteln (pretty_thm @{context} @{thm conjI})"
@@ -213,16 +212,16 @@
 
   However, to improve the readability when printing theorems, we
   can switch off the question marks as follows:
-*}
+\<close>
 
-ML %grayML{*fun pretty_thm_no_vars ctxt thm =
+ML %grayML\<open>fun pretty_thm_no_vars ctxt thm =
 let
   val ctxt' = Config.put show_question_marks false ctxt
 in
   pretty_term ctxt' (Thm.prop_of thm)
-end*}
+end\<close>
 
-text {* 
+text \<open>
   With this function, theorem @{thm [source] conjI} is now printed as follows:
 
   @{ML_response_fake [display, gray]
@@ -231,31 +230,31 @@
   
   Again the functions @{ML commas} and @{ML block in Pretty} help 
   with printing more than one theorem. 
-*}
+\<close>
 
-ML %grayML{*fun pretty_thms ctxt thms =  
+ML %grayML\<open>fun pretty_thms ctxt thms =  
   Pretty.block (Pretty.commas (map (pretty_thm ctxt) thms))
 
 fun pretty_thms_no_vars ctxt thms =  
-  Pretty.block (Pretty.commas (map (pretty_thm_no_vars ctxt) thms))*}
+  Pretty.block (Pretty.commas (map (pretty_thm_no_vars ctxt) thms))\<close>
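+
+text \<open>
+  For example, the two projection rules for conjunction can be printed
+  together as
+
+  @{ML_response_fake [display,gray]
+  "pwriteln (pretty_thms_no_vars @{context} [@{thm conjunct1}, @{thm conjunct2}])"
+  "P \<and> Q \<Longrightarrow> P, P \<and> Q \<Longrightarrow> Q"}
+\<close>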
 
-text {*
+text \<open>
   Printing functions for @{ML_type typ} are
-*}
+\<close>
 
-ML %grayML{*fun pretty_typ ctxt ty = Syntax.pretty_typ ctxt ty
+ML %grayML\<open>fun pretty_typ ctxt ty = Syntax.pretty_typ ctxt ty
 fun pretty_typs ctxt tys = 
-  Pretty.block (Pretty.commas (map (pretty_typ ctxt) tys))*}
+  Pretty.block (Pretty.commas (map (pretty_typ ctxt) tys))\<close>
 
-text {*
+text \<open>
   respectively @{ML_type ctyp}
-*}
+\<close>
 
-ML %grayML{*fun pretty_ctyp ctxt cty = pretty_typ ctxt (Thm.typ_of cty)
+ML %grayML\<open>fun pretty_ctyp ctxt cty = pretty_typ ctxt (Thm.typ_of cty)
 fun pretty_ctyps ctxt ctys = 
-  Pretty.block (Pretty.commas (map (pretty_ctyp ctxt) ctys))*}
+  Pretty.block (Pretty.commas (map (pretty_ctyp ctxt) ctys))\<close>
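+
+text \<open>
+  For example:
+
+  @{ML_response_fake [display,gray]
+  "pwriteln (pretty_ctyps @{context} [@{ctyp \"nat\"}, @{ctyp \"bool \<Rightarrow> nat\"}])"
+  "nat, bool \<Rightarrow> nat"}
+\<close>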
 
-text {*
+text \<open>
   \begin{readmore}
   The simple conversion functions from Isabelle's main datatypes to 
   @{ML_type string}s are implemented in @{ML_file "Pure/Syntax/syntax.ML"}. 
@@ -298,12 +297,12 @@
   Most of the basic string functions of Isabelle are defined in 
   @{ML_file "Pure/library.ML"}.
   \end{readmore}
-*}
+\<close>
 
 
-section {* Combinators\label{sec:combinators} *}
+section \<open>Combinators\label{sec:combinators}\<close>
 
-text {*
+text \<open>
   For beginners perhaps the most puzzling parts in the existing code of
   Isabelle are the combinators. At first they seem to greatly obstruct the
   comprehension of code, but after getting familiar with them and handled with
@@ -311,37 +310,37 @@
 
   The simplest combinator is @{ML_ind I in Library}, which is just the 
   identity function defined as
-*}
+\<close>
 
-ML %grayML{*fun I x = x*}
+ML %grayML\<open>fun I x = x\<close>
 
-text {* 
+text \<open>
   Another simple combinator is @{ML_ind K in Library}, defined as 
-*}
+\<close>
 
-ML %grayML{*fun K x = fn _ => x*}
+ML %grayML\<open>fun K x = fn _ => x\<close>
 
-text {*
-  @{ML K} ``wraps'' a function around @{text "x"} that ignores its argument. As a 
-  result, @{ML K} defines a constant function always returning @{text x}.
+text \<open>
+  @{ML K} ``wraps'' a function around \<open>x\<close> that ignores its argument. As a 
+  result, @{ML K} defines a constant function always returning \<open>x\<close>.
 
   The next combinator is reverse application, @{ML_ind  "|>" in Basics}, defined as: 
-*}
+\<close>
 
-ML %grayML{*fun x |> f = f x*}
+ML %grayML\<open>fun x |> f = f x\<close>
 
-text {* While just syntactic sugar for the usual function application,
+text \<open>While it is just syntactic sugar for the usual function application,
   the purpose of this combinator is to implement functions in a  
-  ``waterfall fashion''. Consider for example the function *}
+  ``waterfall fashion''. Consider for example the function\<close>
 
-ML %linenosgray{*fun inc_by_five x =
+ML %linenosgray\<open>fun inc_by_five x =
   x |> (fn x => x + 1)
     |> (fn x => (x, x))
     |> fst
-    |> (fn x => x + 4)*}
+    |> (fn x => x + 4)\<close>
 
-text {*
-  which increments its argument @{text x} by 5. It does this by first
+text \<open>
+  which increments its argument \<open>x\<close> by 5. It does this by first
   incrementing the argument by 1 (Line 2); then storing the result in a pair
   (Line 3); taking the first component of the pair (Line 4) and finally
   incrementing the first component by 4 (Line 5). This kind of cascading
@@ -350,42 +349,42 @@
   manner. This kind of coding should be familiar, if you have been exposed to
   Haskell's {\it do}-notation. Writing the function @{ML inc_by_five} using
   the reverse application is much clearer than writing
-*}
+\<close>
 
-ML %grayML{*fun inc_by_five x = fst ((fn x => (x, x)) (x + 1)) + 4*}
+ML %grayML\<open>fun inc_by_five x = fst ((fn x => (x, x)) (x + 1)) + 4\<close>
 
-text {* or *}
+text \<open>or\<close>
 
-ML %grayML{*fun inc_by_five x = 
-  ((fn x => x + 4) o fst o (fn x => (x, x)) o (fn x => x + 1)) x*}
+ML %grayML\<open>fun inc_by_five x = 
+  ((fn x => x + 4) o fst o (fn x => (x, x)) o (fn x => x + 1)) x\<close>
 
-text {* and typographically more economical than *}
+text \<open>and typographically more economical than\<close>
 
-ML %grayML{*fun inc_by_five x =
+ML %grayML\<open>fun inc_by_five x =
 let val y1 = x + 1
     val y2 = (y1, y1)
     val y3 = fst y2
     val y4 = y3 + 4
-in y4 end*}
+in y4 end\<close>
 
-text {* 
+text \<open>
   There are other reasons to avoid the let-bindings in the code above:
   it is easy to get the intermediate values wrong and
   the resulting code is difficult to maintain.
 
   In Isabelle a ``real world'' example for a function written in 
   the waterfall fashion might be the following code:
-*}
+\<close>
 
-ML %linenosgray{*fun apply_fresh_args f ctxt =
+ML %linenosgray\<open>fun apply_fresh_args f ctxt =
     f |> fastype_of
       |> binder_types 
       |> map (pair "z")
       |> Variable.variant_frees ctxt [f]
       |> map Free  
-      |> curry list_comb f *}
+      |> curry list_comb f\<close>
 
-text {*
+text \<open>
   This function takes a term and a context as arguments. If the term is of function
   type, then @{ML "apply_fresh_args"} returns the term with distinct variables 
   applied to it. For example, below three variables are applied to the term 
@@ -405,10 +404,10 @@
   You can read off this behaviour from how @{ML apply_fresh_args} is coded: in
   Line 2, the function @{ML_ind fastype_of in Term} calculates the type of the
   term; @{ML_ind binder_types in Term} in the next line produces the list of
-  argument types (in the case above the list @{text "[nat, int, unit]"}); Line
-  4 pairs up each type with the string @{text "z"}; the function @{ML_ind
-  variant_frees in Variable} generates for each @{text "z"} a unique name
-  avoiding the given @{text f}; the list of name-type pairs is turned into a
+  argument types (in the case above the list \<open>[nat, int, unit]\<close>); Line
+  4 pairs up each type with the string \<open>z\<close>; the function @{ML_ind
+  variant_frees in Variable} generates for each \<open>z\<close> a unique name
+  avoiding the given \<open>f\<close>; the list of name-type pairs is turned into a
   list of variable terms in Line 6, which in the last line is applied by the
   function @{ML_ind list_comb in Term} to the original term. In this last step we have
   to use the function @{ML_ind curry in Library}, because @{ML list_comb}
@@ -429,18 +428,18 @@
 end"
   "za z zb"}
   
-  where the @{text "za"} is correctly avoided.
+  where the \<open>za\<close> is correctly avoided.
 
   The combinator @{ML_ind "#>" in Basics} is the reverse function composition. 
   It can be used to define the following function
-*}
+\<close>
 
-ML %grayML{*val inc_by_six = 
+ML %grayML\<open>val inc_by_six = 
   (fn x => x + 1) #> 
   (fn x => x + 2) #> 
-  (fn x => x + 3)*}
+  (fn x => x + 3)\<close>
 
-text {* 
+text \<open>
   which is the function composed of first the increment-by-one function and
   then increment-by-two, followed by increment-by-three. Again, the reverse
   function composition allows you to read the code top-down. This combinator
@@ -451,18 +450,18 @@
   can be composed with @{ML "#>"}. Consider for example the following code, 
   where we store the theorems @{thm [source] conjI}, @{thm [source] conjunct1} 
   and @{thm [source] conjunct2} under alternative names.
-*}  
+\<close>  
 
-local_setup %graylinenos {* 
+local_setup %graylinenos \<open>
 let  
   fun my_note name thm = Local_Theory.note ((name, []), [thm]) #> snd
 in
   my_note @{binding "foo_conjI"} @{thm conjI} #>
   my_note @{binding "bar_conjunct1"} @{thm conjunct1} #>
   my_note @{binding "bar_conjunct2"} @{thm conjunct2}
-end *}
+end\<close>
 
-text {*
+text \<open>
   The function @{ML_text "my_note"} in line 3 is just a wrapper for the function 
   @{ML_ind note in Local_Theory} in the structure @{ML_struct Local_Theory}; 
   it stores a theorem under a name. 
@@ -474,15 +473,15 @@
   in Basics} allows you to get hold of an intermediate result (to do some
   side-calculations or print out an intermediate result, for instance). The
   function
- *}
+\<close>
 
-ML %linenosgray{*fun inc_by_three x =
+ML %linenosgray\<open>fun inc_by_three x =
      x |> (fn x => x + 1)
        |> tap (fn x => pwriteln (Pretty.str (@{make_string} x)))
-       |> (fn x => x + 2)*}
+       |> (fn x => x + 2)\<close>
 
-text {* 
-  increments the argument first by @{text "1"} and then by @{text "2"}. In the
+text \<open>
+  increments the argument first by \<open>1\<close> and then by \<open>2\<close>. In the
   middle (Line 3), however, it uses @{ML tap} for printing the ``plus-one''
   intermediate result. The function @{ML tap} can only be used for
   side-calculations, because any value that is computed cannot be merged back
@@ -491,102 +490,102 @@
   The combinator @{ML_ind "`" in Basics} (a backtick) is similar to @{ML tap},
   but applies a function to the value and returns the result together with the
   value (as a pair). It is defined as
-*}
+\<close>
 
-ML %grayML{*fun `f = fn x => (f x, x)*}
+ML %grayML\<open>fun `f = fn x => (f x, x)\<close>
 
-text {*
+text \<open>
   An example for this combinator is the function
-*}
+\<close>
 
-ML %grayML{*fun inc_as_pair x =
+ML %grayML\<open>fun inc_as_pair x =
      x |> `(fn x => x + 1)
-       |> (fn (x, y) => (x, y + 1))*}
+       |> (fn (x, y) => (x, y + 1))\<close>
 
-text {*
-  which takes @{text x} as argument, and then increments @{text x}, but also keeps
-  @{text x}. The intermediate result is therefore the pair @{ML "(x + 1, x)"
+text \<open>
+  which takes \<open>x\<close> as argument, and then increments \<open>x\<close>, but also keeps
+  \<open>x\<close>. The intermediate result is therefore the pair @{ML "(x + 1, x)"
   for x}. After that, the function increments the right-hand component of the
   pair. So finally the result will be @{ML "(x + 1, x + 1)" for x}.
 
   The combinators @{ML_ind "|>>" in Basics} and @{ML_ind "||>" in Basics} are
   defined for functions manipulating pairs. The first applies the function to
   the first component of the pair, defined as
-*}
+\<close>
 
-ML %grayML{*fun (x, y) |>> f = (f x, y)*}
+ML %grayML\<open>fun (x, y) |>> f = (f x, y)\<close>
 
-text {*
+text \<open>
   and the second combinator to the second component, defined as
-*}
+\<close>
 
-ML %grayML{*fun (x, y) ||> f = (x, f y)*}
+ML %grayML\<open>fun (x, y) ||> f = (x, f y)\<close>
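+
+text \<open>
+  For example, @{ML "(3, 4) |>> (fn x => x + 1)"} evaluates to
+  @{ML "(4, 4)"}, while @{ML "(3, 4) ||> (fn x => x + 1)"} evaluates to
+  @{ML "(3, 5)"}.
+\<close>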
 
-text {*
-  These two functions can, for example, be used to avoid explicit @{text "lets"} for
+text \<open>
+  These two functions can, for example, be used to avoid explicit \<open>lets\<close> for
   intermediate values in functions that return pairs. As an example, suppose you
   want to separate a list of integers into two lists according to a
   threshold. If the threshold is @{ML "5"}, the list @{ML "[1,6,2,5,3,4]"}
   should be separated as @{ML "([1,2,3,4], [6,5])"}.  Such a function can be
   implemented as
-*}
+\<close>
 
-ML %grayML{*fun separate i [] = ([], [])
+ML %grayML\<open>fun separate i [] = ([], [])
   | separate i (x::xs) =
       let 
         val (los, grs) = separate i xs
       in
         if i <= x then (los, x::grs) else (x::los, grs)
-      end*}
+      end\<close>
 
-text {*
+text \<open>
   where the return value of the recursive call is bound explicitly to
   the pair @{ML "(los, grs)" for los grs}. However, this function
   can be implemented more concisely as
-*}
+\<close>
 
-ML %grayML{*fun separate _ [] = ([], [])
+ML %grayML\<open>fun separate _ [] = ([], [])
   | separate i (x::xs) =
       if i <= x 
       then separate i xs ||> cons x
-      else separate i xs |>> cons x*}
+      else separate i xs |>> cons x\<close>
 
-text {*
-  avoiding the explicit @{text "let"}. While in this example the gain in
+text \<open>
+  avoiding the explicit \<open>let\<close>. While in this example the gain in
   conciseness is only small, in more complicated situations the benefit of
-  avoiding @{text "lets"} can be substantial.
+  avoiding \<open>lets\<close> can be substantial.
 
   With the combinator @{ML_ind "|->" in Basics} you can re-combine the 
   elements from a pair. This combinator is defined as
-*}
+\<close>
 
-ML %grayML{*fun (x, y) |-> f = f x y*}
+ML %grayML\<open>fun (x, y) |-> f = f x y\<close>
 
-text {* 
+text \<open>
   and can be used to write the following roundabout version 
-  of the @{text double} function: 
-*}
+  of the \<open>double\<close> function: 
+\<close>
 
-ML %grayML{*fun double x =
+ML %grayML\<open>fun double x =
       x |>  (fn x => (x, x))
-        |-> (fn x => fn y => x + y)*}
+        |-> (fn x => fn y => x + y)\<close>
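+
+text \<open>
+  For example, @{ML "double 2"} evaluates to @{ML "4"}.
+\<close>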
 
-text {* 
+text \<open>
   The combinator @{ML_ind ||>> in Basics} plays a central role whenever your
   task is to update a theory and the update also produces a side-result (for
   example a theorem). Functions for such tasks return a pair whose second
   component is the theory and the first component is the side-result. Using
   @{ML ||>>}, you can conveniently do the update and also
   accumulate the side-results. Consider the following simple function.
-*}
+\<close>
 
-ML %linenosgray{*fun acc_incs x =
+ML %linenosgray\<open>fun acc_incs x =
     x |> (fn x => ("", x)) 
       ||>> (fn x => (x, x + 1))
       ||>> (fn x => (x, x + 1))
-      ||>> (fn x => (x, x + 1))*}
+      ||>> (fn x => (x, x + 1))\<close>
 
-text {*
+text \<open>
   The purpose of Line 2 is to just pair up the argument with a dummy value (since
   @{ML ||>>} operates on pairs). Each of the next three lines just increments 
   the value by one, but also nests the intermediate results to the left. For example 
@@ -626,15 +625,15 @@
   obtained by previous calls.
   
   A more realistic example for this combinator is the following code
-*}
+\<close>
 
-ML %grayML{*val ((([one_def], [two_def]), [three_def]), ctxt') = 
+ML %grayML\<open>val ((([one_def], [two_def]), [three_def]), ctxt') = 
   @{context}
   |> Local_Defs.define [((@{binding "One"}, NoSyn), (Binding.empty_atts, @{term "1::nat"}))]
   ||>> Local_Defs.define [((@{binding "Two"}, NoSyn), (Binding.empty_atts,@{term "2::nat"}))]
-  ||>> Local_Defs.define [((@{binding "Three"}, NoSyn), (Binding.empty_atts,@{term "3::nat"}))]*}
+  ||>> Local_Defs.define [((@{binding "Three"}, NoSyn), (Binding.empty_atts,@{term "3::nat"}))]\<close>
 
-text {*
+text \<open>
   where we make three definitions, namely @{term "One \<equiv> 1::nat"}, @{term "Two \<equiv> 2::nat"}
   and @{term "Three \<equiv> 3::nat"}. The point of this code is that we augment the initial
   context with the definitions. The result we are interested in is the
@@ -658,15 +657,15 @@
   described above have related combinators for function composition, namely
   @{ML_ind "#->" in Basics}, @{ML_ind "#>>" in Basics}, @{ML_ind "##>" in
   Basics} and @{ML_ind "##>>" in Basics}. Using @{ML "#->"}, for example, the
-  function @{text double} can also be written as:
-*}
+  function \<open>double\<close> can also be written as:
+\<close>
 
-ML %grayML{*val double =
+ML %grayML\<open>val double =
    (fn x => (x, x)) #-> 
-   (fn x => fn y => x + y)*}
+   (fn x => fn y => x + y)\<close>
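+
+text \<open>
+  The other pair-related combinators compose analogously. For instance,
+  @{ML "#>>"} composes a pair-producing function with a function acting on
+  the first component of the pair. A small sketch (the function name is
+  chosen just for this example) is
+\<close>
+
+ML %grayML\<open>val dup_then_inc_fst =
+  (fn x => (x, x)) #>> (fn x => x + 1)\<close>
+
+text \<open>
+  so that @{ML "dup_then_inc_fst 3"} evaluates to @{ML "(4, 3)"}.
+\<close>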
 
 
-text {* 
+text \<open>
   When using combinators for writing functions in waterfall fashion, it is
   sometimes necessary to do some ``plumbing'' in order to fit functions
   together. We have already seen such plumbing in the function @{ML
@@ -687,11 +686,11 @@
   |> pwriteln
 end"
   "m + n, m * n, m - n"}
-*}
+\<close>
 
-text {*
+text \<open>
   In this example we obtain three terms (using the function 
-  @{ML_ind parse_term in Syntax}) whose variables @{text m} and @{text n} 
+  @{ML_ind parse_term in Syntax}) whose variables \<open>m\<close> and \<open>n\<close> 
   are of type @{typ "nat"}. If you have only a single term, then @{ML
   check_terms in Syntax} needs plumbing. This can be done with the function
   @{ML_ind singleton in Library}.\footnote{There is already a function @{ML check_term in
@@ -722,12 +721,12 @@
   \begin{exercise}
   Find out what the combinator @{ML "K I"} does.
   \end{exercise}
-*}
+\<close>
 
 
-section {* ML-Antiquotations\label{sec:antiquote} *}
+section \<open>ML-Antiquotations\label{sec:antiquote}\<close>
 
-text {*
+text \<open>
   Recall from Section \ref{sec:include} that code in Isabelle is always
   embedded in a theory.  The main advantage of this is that the code can
   contain references to entities defined on the logical level of Isabelle. By
@@ -744,33 +743,32 @@
   user level and therefore we are not interested in them in this Tutorial,
   except in Appendix \ref{rec:docantiquotations} where we show how to
   implement your own document antiquotations.}  Syntactically antiquotations
-  are indicated by the @{ML_text @}-sign followed by text wrapped in @{text
-  "{\<dots>}"}.  For example, one can print out the name of the current theory with
+  are indicated by the @{ML_text @}-sign followed by text wrapped in \<open>{\<dots>}\<close>.  For example, one can print out the name of the current theory with
   the code
   
   @{ML_response [display,gray] "Context.theory_name @{theory}" "\"First_Steps\""}
  
-  where @{text "@{theory}"} is an antiquotation that is substituted with the
+  where \<open>@{theory}\<close> is an antiquotation that is substituted with the
   current theory (remember that we assumed we are inside the theory 
-  @{text First_Steps}). The name of this theory can be extracted using
+  \<open>First_Steps\<close>). The name of this theory can be extracted using
   the function @{ML_ind theory_name in Context}. 
 
   Note, however, that antiquotations are statically linked, that is their value is
   determined at ``compile-time'', not at ``run-time''. For example the function
-*}
+\<close>
 
-ML %grayML{*fun not_current_thyname () = Context.theory_name @{theory} *}
+ML %grayML\<open>fun not_current_thyname () = Context.theory_name @{theory}\<close>
 
-text {*
+text \<open>
   does \emph{not} return the name of the current theory, if it is run in a 
   different theory. Instead, the code above defines the constant function 
   that always returns the string @{text [quotes] "First_Steps"}, no matter where the
-  function is called. Operationally speaking,  the antiquotation @{text "@{theory}"} is 
+  function is called. Operationally speaking,  the antiquotation \<open>@{theory}\<close> is 
   \emph{not} replaced with code that will look up the current theory in 
   some data structure and return it. Instead, it is literally
   replaced with the value representing the theory.
 
-  Another important antiquotation is @{text "@{context}"}. (What the
+  Another important antiquotation is \<open>@{context}\<close>. (What the
   difference between a theory and a context is will be described in Chapter
   \ref{chp:advanced}.) A context is for example needed to use the
   function @{ML print_abbrevs in Proof_Context} that lists all currently
@@ -786,11 +784,11 @@
   The precise output of course depends on the abbreviations that are
   currently defined; this can change over time.
   You can also use antiquotations to refer to proved theorems: 
-  @{text "@{thm \<dots>}"} for a single theorem
+  \<open>@{thm \<dots>}\<close> for a single theorem
 
   @{ML_response_fake [display,gray] "@{thm allI}" "(\<And>x. ?P x) \<Longrightarrow> \<forall>x. ?P x"}
 
-  and @{text "@{thms \<dots>}"} for more than one
+  and \<open>@{thms \<dots>}\<close> for more than one
 
 @{ML_response_fake [display,gray] 
   "@{thms conj_ac}" 
@@ -811,14 +809,14 @@
   under this name (this becomes especially important when you deal with
   theorem lists; see Section \ref{sec:storing}).
 
-  It is also possible to prove lemmas with the antiquotation @{text "@{lemma \<dots> by \<dots>}"}
-  whose first argument is a statement (possibly many of them separated by @{text "and"})
+  It is also possible to prove lemmas with the antiquotation \<open>@{lemma \<dots> by \<dots>}\<close>
+  whose first argument is a statement (possibly many of them separated by \<open>and\<close>)
   and the second is a proof. For example
-*}
+\<close>
 
-ML %grayML{*val foo_thms = @{lemma "True" and "False \<Longrightarrow> P" by simp_all} *}
+ML %grayML\<open>val foo_thms = @{lemma "True" and "False \<Longrightarrow> P" by simp_all}\<close>
 
-text {*
+text \<open>
   The result can be printed out as follows.
 
   @{ML_response_fake [gray,display]
@@ -829,17 +827,17 @@
   You can also refer to the current simpset via an antiquotation. To illustrate 
   this we implement the function that extracts the theorem names stored in a 
   simpset.
-*}
+\<close>
 
-ML %grayML{*fun get_thm_names_from_ss ctxt =
+ML %grayML\<open>fun get_thm_names_from_ss ctxt =
 let
   val simpset = Raw_Simplifier.simpset_of ctxt
   val {simps,...} = Raw_Simplifier.dest_ss simpset
 in
   map #1 simps
-end*}
+end\<close>
 
-text {*
+text \<open>
   The function @{ML_ind dest_ss in Raw_Simplifier} returns a record containing all
   information stored in the simpset, but here we are only interested in the names of the
   simp-rules. Now you can feed the current simpset into this function. 
@@ -856,15 +854,15 @@
   It is also possible to define your own antiquotations. But you should
   exercise care when introducing new ones, as they can also make your code
   difficult to read. In the next chapter we describe how to construct
-  terms with the (built-in) antiquotation @{text "@{term \<dots>}"}.  A restriction
+  terms with the (built-in) antiquotation \<open>@{term \<dots>}\<close>.  A restriction
   of this antiquotation is that it does not allow you to use schematic
   variables in terms. If you want to have an antiquotation that does not have
   this restriction, you can implement your own using the function @{ML_ind
   inline in ML_Antiquotation} from the structure @{ML_struct ML_Antiquotation}. The code
-  for the antiquotation @{text "term_pat"} is as follows.
-*}
+  for the antiquotation \<open>term_pat\<close> is as follows.
+\<close>
 
-ML %linenosgray{*val term_pat_setup = 
+ML %linenosgray\<open>val term_pat_setup = 
 let
   val parser = Args.context -- Scan.lift Args.embedded_inner_syntax
 
@@ -874,15 +872,15 @@
          |> ML_Syntax.atomic
 in
   ML_Antiquotation.inline @{binding "term_pat"} (parser >> term_pat)
-end*}
+end\<close>
 
-text {*
+text \<open>
   To use it you also have to install it using \isacommand{setup} like so
-*}
+\<close>
 
-setup %gray {* term_pat_setup *}
+setup %gray \<open>term_pat_setup\<close>
 
-text {*
+text \<open>
   The parser in Line 2 provides us with a context and a string; this string is
   transformed into a term using the function @{ML_ind read_term_pattern in
   Proof_Context} (Line 5); the next two lines transform the term into a string
@@ -893,11 +891,11 @@
   "@{term_pat \"Suc (?x::nat)\"}"
   "Const (\"Suc\", \"nat \<Rightarrow> nat\") $ Var ((\"x\", 0), \"nat\")"}
 
-  which shows the internal representation of the term @{text "Suc ?x"}. Similarly
+  which shows the internal representation of the term \<open>Suc ?x\<close>. Similarly
   we can write an antiquotation for type patterns. Its code is
-*}
+\<close>
 
-ML %grayML{*val type_pat_setup = 
+ML %grayML\<open>val type_pat_setup = 
 let
   val parser = Args.context -- Scan.lift Args.embedded_inner_syntax
 
@@ -911,15 +909,15 @@
       end
 in
   ML_Antiquotation.inline @{binding "typ_pat"} (parser >> typ_pat)
-end*}
+end\<close>
 
-text {*
+text \<open>
   which can be installed with
-*}
+\<close>
 
-setup %gray {* type_pat_setup *}
+setup %gray \<open>type_pat_setup\<close>
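+
+text \<open>
+  A small test of this antiquotation (the sorts in the printed internal
+  representation depend on the default sort and may therefore differ) is
+
+  @{ML_response_fake [display,gray]
+  "@{typ_pat \"?'a \<Rightarrow> nat\"}"
+  "Type (\"fun\", [TVar ((\"'a\", 0), []), Type (\"Nat.nat\", [])])"}
+\<close>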
 
-text {* 
+text \<open>
 However, a word of warning is in order: new antiquotations should be introduced only after
 careful deliberations. They can potentially make your code harder, rather than easier, to read.
 
@@ -928,11 +926,11 @@
   contain the infrastructure and definitions for most antiquotations. Most of the basic operations 
   on ML-syntax are implemented in @{ML_file "Pure/ML/ml_syntax.ML"}.
   \end{readmore}
-*}
+\<close>
 
-section {* Storing Data in Isabelle\label{sec:storing} *}
+section \<open>Storing Data in Isabelle\label{sec:storing}\<close>
 
-text {*
+text \<open>
   Isabelle provides mechanisms for storing (and retrieving) arbitrary
   data. Before we delve into the details, let us digress a bit. Conventional
   wisdom has it that the type-system of ML ensures that  an
@@ -956,9 +954,9 @@
   We will show the usage of the universal type by storing an integer and
   a boolean into a single list. Let us first define injection and projection 
   functions for booleans and integers into and from the type @{ML_type Universal.universal}.
-*}
+\<close>
 
-ML %grayML{*local
+ML %grayML\<open>local
   val fn_int  = Universal.tag () : int  Universal.tag
   val fn_bool = Universal.tag () : bool Universal.tag
 in
@@ -966,23 +964,23 @@
   val inject_bool  = Universal.tagInject fn_bool;
   val project_int  = Universal.tagProject fn_int;
   val project_bool = Universal.tagProject fn_bool
-end*}
+end\<close>
 
-text {*
+text \<open>
   Using the injection functions, we can inject the integer @{ML_text "13"} 
   and the boolean value @{ML_text "true"} into @{ML_type Universal.universal}, and
   then store them in a @{ML_type "Universal.universal list"} as follows:
-*}
+\<close>
 
-ML %grayML{*val foo_list = 
+ML %grayML\<open>val foo_list = 
 let
   val thirteen = inject_int 13
   val truth_val = inject_bool true
 in
   [thirteen, truth_val]
-end*}
+end\<close>
 
-text {*
+text \<open>
   The data can be retrieved with the projection functions defined above.
   
   @{ML_response_fake [display, gray]
@@ -1023,15 +1021,15 @@
   and therefore it makes sense to store this data inside a theory.  
   Consequently we use the functor @{ML_funct Theory_Data}.
   The code for the table is:
-*}
+\<close>
 
-ML %linenosgray{*structure Data = Theory_Data
+ML %linenosgray\<open>structure Data = Theory_Data
   (type T = thm Symtab.table
    val empty = Symtab.empty
    val extend = I
-   val merge = Symtab.merge (K true))*}
+   val merge = Symtab.merge (K true))\<close>
 
-text {*
+text \<open>
   In order to store data in a theory, we have to specify the type of the data
   (Line 2). In this case we specify the type @{ML_type "thm Symtab.table"},
   which stands for a table in which @{ML_type string}s can be looked up
@@ -1046,23 +1044,23 @@
   it (@{ML Data.map}). There is also the function @{ML Data.put}, but it is 
   not relevant here. Below we define two
   auxiliary functions, which help us with accessing the table.
-*}
+\<close>
 
-ML %grayML{*val lookup = Symtab.lookup o Data.get
-fun update k v = Data.map (Symtab.update (k, v))*}
+ML %grayML\<open>val lookup = Symtab.lookup o Data.get
+fun update k v = Data.map (Symtab.update (k, v))\<close>
 
-text {*
+text \<open>
   Since we want to store introduction rules associated with their 
   logical connective, we can fill the table as follows.
-*}
+\<close>
 
-setup %gray {*
+setup %gray \<open>
   update "conj" @{thm conjI} #>
   update "imp"  @{thm impI}  #>
   update "all"  @{thm allI}
-*}
+\<close>
 
-text {*
+text \<open>
   The use of the command \isacommand{setup} makes sure the table in the 
   \emph{current} theory is updated (this is explained further in 
   Section~\ref{sec:theories}). The lookup can now be performed as follows.
@@ -1074,11 +1072,11 @@
   An important point to note is that these tables (and data in general)
   need to be treated in a purely functional fashion. Although
   we can update the table as follows
-*}
+\<close>
 
-setup %gray {* update "conj" @{thm TrueI} *}
+setup %gray \<open>update "conj" @{thm TrueI}\<close>
 
-text {*
+text \<open>
   and accordingly, @{ML lookup} now produces the introduction rule for @{term "True"}
   
 @{ML_response_fake [display, gray]
@@ -1091,15 +1089,15 @@
   defeat its undo-mechanism. To see the latter, consider the 
   following data container where we maintain a reference to a list of 
   integers.
-*}
+\<close>
 
-ML %grayML{*structure WrongRefData = Theory_Data
+ML %grayML\<open>structure WrongRefData = Theory_Data
   (type T = (int list) Unsynchronized.ref
    val empty = Unsynchronized.ref []
    val extend = I
-   val merge = fst)*}
+   val merge = fst)\<close>
 
-text {*
+text \<open>
   We initialise the reference with the empty list. Consequently a first 
   lookup produces @{ML "ref []" in Unsynchronized}.
 
@@ -1108,20 +1106,20 @@
   "ref []"}
 
   For updating the reference we use the following function
-*}
+\<close>
 
-ML %grayML{*fun ref_update n = WrongRefData.map 
-      (fn r => let val _ = r := n::(!r) in r end)*}
+ML %grayML\<open>fun ref_update n = WrongRefData.map 
+      (fn r => let val _ = r := n::(!r) in r end)\<close>
 
-text {*
+text \<open>
   which takes an integer and adds it to the content of the reference.
   As before, we update the reference with the command 
   \isacommand{setup}. 
-*}
+\<close>
 
-setup %gray {* ref_update 1 *}
+setup %gray \<open>ref_update 1\<close>
 
-text {*
+text \<open>
   A lookup in the current theory then gives the expected list 
   @{ML "ref [1]" in Unsynchronized}.
 
@@ -1159,30 +1157,30 @@
   Storing data in a proof context is done in a similar fashion. As mentioned
   before, the corresponding functor is @{ML_funct_ind Proof_Data}. With the
   following code we can store a list of terms in a proof context.
-*}
+\<close>
 
-ML %grayML{*structure Data = Proof_Data
+ML %grayML\<open>structure Data = Proof_Data
   (type T = term list
-   fun init _ = [])*}
+   fun init _ = [])\<close>
 
-text {*
+text \<open>
   The init-function we have to specify must produce a list whenever a context 
   is initialised (possibly taking the theory into account from which the 
   context is derived). We choose here to just return the empty list. Next 
   we define two auxiliary functions for updating the list with a given
   term and printing the list. 
-*}
+\<close>
 
-ML %grayML{*fun update trm = Data.map (fn trms => trm::trms)
+ML %grayML\<open>fun update trm = Data.map (fn trms => trm::trms)
 
 fun print ctxt =
   case (Data.get ctxt) of
     [] => pwriteln (Pretty.str "Empty!")
-  | trms => pwriteln (pretty_terms ctxt trms)*}
+  | trms => pwriteln (pretty_terms ctxt trms)\<close>
 
-text {*
+text \<open>
   Next we start with the context generated by the antiquotation 
-  @{text "@{context}"} and update it in various ways. 
+  \<open>@{context}\<close> and update it in various ways. 
 
   @{ML_response_fake [display,gray]
 "let
@@ -1216,63 +1214,63 @@
 
   For convenience there is an abstract layer, namely the type @{ML_type Context.generic}, 
   for treating theories and proof contexts more uniformly. This type is defined as follows
-*}
+\<close>
 
-ML_val %grayML{*datatype generic = 
+ML_val %grayML\<open>datatype generic = 
   Theory of theory 
-| Proof of proof*}
+| Proof of proof\<close>
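+
+text \<open>
+  Functions over generic contexts can pattern-match on these two
+  constructors. For illustration, a small function that extracts the
+  theory name from either kind of context is
+\<close>
+
+ML %grayML\<open>fun generic_theory_name (Context.Theory thy) = 
+      Context.theory_name thy
+  | generic_theory_name (Context.Proof ctxt) = 
+      Context.theory_name (Proof_Context.theory_of ctxt)\<close>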
 
-text {*
+text \<open>
   \footnote{\bf FIXME: say more about generic contexts.}
 
   There are two special instances of the data storage mechanism described 
   above. The first instance implements named theorem lists using the functor
   @{ML_funct_ind Named_Thms}. This is because storing theorems in a list 
   is such a common task.  To obtain a named theorem list, you just declare
-*}
+\<close>
 
-ML %grayML{*structure FooRules = Named_Thms
+ML %grayML\<open>structure FooRules = Named_Thms
   (val name = @{binding "foo"} 
-   val description = "Theorems for foo") *}
+   val description = "Theorems for foo")\<close>
 
-text {*
+text \<open>
   and set up the @{ML_struct FooRules} with the command
-*}
+\<close>
 
-setup %gray {* FooRules.setup *}
+setup %gray \<open>FooRules.setup\<close>
 
-text {*
+text \<open>
   This code declares a data container where the theorems are stored,
-  an attribute @{text foo} (with the @{text add} and @{text del} options
+  an attribute \<open>foo\<close> (with the \<open>add\<close> and \<open>del\<close> options
   for adding and deleting theorems) and an internal ML-interface for retrieving and 
   modifying the theorems.
   Furthermore, the theorems are made available on the user-level under the name 
-  @{text foo}. For example you can declare three lemmas to be a member of the 
-  theorem list @{text foo} by:
-*}
+  \<open>foo\<close>. For example you can declare three lemmas to be a member of the 
+  theorem list \<open>foo\<close> by:
+\<close>
 
 lemma rule1[foo]: "A" sorry
 lemma rule2[foo]: "B" sorry
 lemma rule3[foo]: "C" sorry
 
-text {* and undeclare the first one by: *}
+text \<open>and undeclare the first one by:\<close>
 
 declare rule1[foo del]
 
-text {* You can query the remaining ones with:
+text \<open>You can query the remaining ones with:
 
   \begin{isabelle}
-  \isacommand{thm}~@{text "foo"}\\
-  @{text "> ?C"}\\
-  @{text "> ?B"}
+  \isacommand{thm}~\<open>foo\<close>\\
+  \<open>> ?C\<close>\\
+  \<open>> ?B\<close>
   \end{isabelle}
 
   On the ML-level, we can add theorems to the list with @{ML FooRules.add_thm}:
-*} 
+\<close> 
 
-setup %gray {* Context.theory_map (FooRules.add_thm @{thm TrueI}) *}
+setup %gray \<open>Context.theory_map (FooRules.add_thm @{thm TrueI})\<close>
 
-text {*
+text \<open>
   The rules in the list can be retrieved using the function 
   @{ML FooRules.get}:
 
@@ -1293,24 +1291,23 @@
   The second special instance of the data storage mechanism is configuration
   values. They are used to enable users to configure tools without having to
   resort to the ML-level (and also to avoid references). Assume you want the
-  user to control three values, say @{text bval} containing a boolean, @{text
-  ival} containing an integer and @{text sval} containing a string. These
+  user to control three values, say \<open>bval\<close> containing a boolean, \<open>ival\<close> containing an integer and \<open>sval\<close> containing a string. These
   values can be declared by
-*}
+\<close>
 
-ML %grayML{*val bval = Attrib.setup_config_bool @{binding "bval"} (K false)
+ML %grayML\<open>val bval = Attrib.setup_config_bool @{binding "bval"} (K false)
 val ival = Attrib.setup_config_int @{binding "ival"} (K 0)
-val sval = Attrib.setup_config_string @{binding "sval"} (K "some string") *}
+val sval = Attrib.setup_config_string @{binding "sval"} (K "some string")\<close>
 
-text {* 
+text \<open>
   where each value needs to be given a default. 
   The user can now manipulate the values from the user-level of Isabelle 
   with the command
-*}
+\<close>
 
 declare [[bval = true, ival = 3]]
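+
+text \<open>
+  These values can also be changed within ML-code using @{ML_ind put in
+  Config}; for example @{ML "Config.put sval \"foo\" @{context}"} returns
+  a context in which \<open>sval\<close> holds the (arbitrarily chosen) string
+  @{text [quotes] "foo"}.
+\<close>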
 
-text {*
+text \<open>
   On the ML-level these values can be retrieved using the 
   function @{ML_ind get in Config} from a proof context
 
@@ -1343,7 +1340,7 @@
   A concrete example for a configuration value is 
   @{ML_ind simp_trace in Raw_Simplifier}, which switches on trace information
   in the simplifier. This can be used for example in the following proof
-*}
+\<close>
 
 
 lemma
@@ -1354,7 +1351,7 @@
   show "True" by simp
 qed
 
-text {*
+text \<open>
   in order to inspect how the simplifier solves the first subgoal.
 
   \begin{readmore}
@@ -1362,11 +1359,11 @@
   the files @{ML_file "Pure/Isar/attrib.ML"} and 
   @{ML_file "Pure/config.ML"}.
   \end{readmore}
-*}
+\<close>
 
-section {* Summary *}
+section \<open>Summary\<close>
 
-text {*
+text \<open>
   This chapter describes the combinators that are used in Isabelle, as well
   as a simple printing infrastructure for @{ML_type term}, @{ML_type cterm}
   and @{ML_type thm}. The section on ML-antiquotations shows how to refer 
@@ -1381,5 +1378,5 @@
   \end{itemize}
   \end{conventions}
 
-*}
+\<close>
 end
--- a/ProgTutorial/Helper/Command/Command.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Helper/Command/Command.thy	Tue May 14 17:10:47 2019 +0200
@@ -4,7 +4,7 @@
          "foobar_goal" "foobar_prove" :: thy_goal
 begin
 
-ML {*
+ML \<open>
 let
    val do_nothing = Scan.succeed (Local_Theory.background_theory I)
 in
@@ -12,9 +12,9 @@
      "description of foobar" 
        do_nothing
 end
-*}
+\<close>
 
-ML {*
+ML \<open>
 let
   fun trace_prop str =
      Local_Theory.background_theory (fn lthy => (tracing str; lthy))
@@ -24,9 +24,9 @@
     "traces a proposition"
       trace_prop_parser
 end
-*}
+\<close>
 
-ML {*
+ML \<open>
 let
    fun prove_prop str lthy =
      let
@@ -40,17 +40,17 @@
      "proving a proposition"
        prove_prop_parser
 end
-*}
+\<close>
 
-ML {*
+ML \<open>
 structure Result = Proof_Data(
   type T = unit -> term
   fun init thy () = error "Result")
 
 val result_cookie = (Result.get, Result.put, "Result.put")
-*}
+\<close>
 
-ML{*
+ML\<open>
 let
    fun after_qed thm_name thms lthy =
         Local_Theory.note (thm_name, (flat thms)) lthy |> snd
@@ -68,7 +68,7 @@
    Outer_Syntax.local_theory_to_proof @{command_spec "foobar_prove"} 
      "proving a proposition" 
         (parser >> setup_proof)
-end*}
+end\<close>
 
 
 
--- a/ProgTutorial/Intro.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Intro.thy	Tue May 14 17:10:47 2019 +0200
@@ -2,10 +2,10 @@
 imports Base
 begin
 
-chapter {* Introduction *}
+chapter \<open>Introduction\<close>
 
 
-text {*
+text \<open>
    \begin{flushright}
   {\em
   ``My thesis is that programming is not at the bottom of the intellectual \\
@@ -37,11 +37,11 @@
   criticism or like to add to the tutorial, please feel free---you are most
   welcome!! The tutorial is meant to be gentle and comprehensive. To achieve
   this we need your help and feedback.
-*}
+\<close>
 
-section {* Intended Audience and Prior Knowledge *}
+section \<open>Intended Audience and Prior Knowledge\<close>
 
-text {* 
+text \<open>
   This tutorial targets readers who already know how to use Isabelle for
   writing theories and proofs. We also assume that readers are familiar with
   the functional programming language ML, the language in which most of
@@ -52,11 +52,11 @@
   based on jEdit. This part of the code is beyond the interest of this
   tutorial, since it mostly does not concern the regular Isabelle 
   developer.
-*}
+\<close>
 
-section {* Existing Documentation *}
+section \<open>Existing Documentation\<close>
 
-text {*
+text \<open>
   The following documentation about Isabelle programming already exists (and is
   part of the distribution of Isabelle):
 
@@ -86,7 +86,7 @@
   parts, it is often good to look at code that does things similar to what 
   you want to do and learn from it. 
   This tutorial frequently contains pointers to the
-  Isabelle sources. Still, the UNIX command \mbox{@{text "grep -R"}} is
+  Isabelle sources. Still, the UNIX command \mbox{\<open>grep -R\<close>} is
   often your best friend while programming with Isabelle.\footnote{Or
   hypersearch if you work with jEdit.} To understand the sources,
   it is often also necessary to track the change history of a file or
@@ -94,26 +94,26 @@
   for Isabelle  provides convenient interfaces to query the history of
   files and ``change sets''.
   \end{description}
-*}
+\<close>
 
-section {* Typographic Conventions *}
+section \<open>Typographic Conventions\<close>
 
-text {*
+text \<open>
   All ML-code in this tutorial is typeset in shaded boxes, like the following 
   simple ML-expression:
 
   \begin{isabelle}
   \begin{graybox}
-  \isacommand{ML}~@{text "\<verbopen>"}\isanewline
+  \isacommand{ML}~\<open>\<verbopen>\<close>\isanewline
   \hspace{5mm}@{ML "3 + 4"}\isanewline
-  @{text "\<verbclose>"}
+  \<open>\<verbclose>\<close>
   \end{graybox}
   \end{isabelle}
   
   These boxes correspond to how code can be processed inside the interactive
   environment of Isabelle. It is therefore easy to experiment with the code
   that is shown in this tutorial. However, for better readability we will drop
-  the enclosing \isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"} and just
+  the enclosing \isacommand{ML}~\<open>\<verbopen> \<dots> \<verbclose>\<close> and just
   write:
 
   @{ML [display,gray] "3 + 4"}
@@ -126,7 +126,7 @@
 
   The user-level commands of Isabelle (i.e., the non-ML code) are written
   in \isacommand{bold face} (e.g., \isacommand{lemma}, \isacommand{apply},
-  \isacommand{foobar} and so on).  We use @{text "$ \<dots>"} to indicate that a
+  \isacommand{foobar} and so on).  We use \<open>$ \<dots>\<close> to indicate that a
   command needs to be run in a UNIX-shell, for example:
 
   @{text [display] "$ grep -R Thy_Output *"}
@@ -146,18 +146,17 @@
   A few exercises are scattered around the text. Their solutions are given 
   in Appendix~\ref{ch:solutions}. Of course, you learn most, if you first try
   to solve the exercises on your own, and then look at the solutions.
-*}
+\<close>
 
-section {* How To Understand Isabelle Code *}
+section \<open>How To Understand Isabelle Code\<close>
 
-text {*
+text \<open>
   One of the more difficult aspects of any kind of programming is to
   understand code written by somebody else. This is aggravated in Isabelle by
   the fact that many parts of the code contain none or only few
   comments. There is one strategy that might be helpful to navigate your way:
   ML is an interactive programming environment, which means you can evaluate
-  code on the fly (for example inside an \isacommand{ML}~@{text
-  "\<verbopen>\<dots>\<verbclose>"} section). So you can copy (self-contained)
+  code on the fly (for example inside an \isacommand{ML}~\<open>\<verbopen>\<dots>\<verbclose>\<close> section). So you can copy (self-contained)
   chunks of existing code into a separate theory file and then study it
   alongside with examples. You can also install ``probes'' inside the copied
   code without having to recompile the whole Isabelle distribution. Such
@@ -168,12 +167,12 @@
   quick results with this! It is painful. Depending on the size of the code
   you are looking at, you will spend the better part of a quiet afternoon with
   it. And there seems to be no better way for understanding code in Isabelle.
-*}
+\<close>
 
 
-section {* Aaaaargh! My Code Does not Work Anymore *}
+section \<open>Aaaaargh! My Code Does not Work Anymore\<close>
 
-text {*
+text \<open>
   One unpleasant aspect of any code development inside a larger system is that
   one has to aim at a ``moving target''. Isabelle is no exception to this. Every 
   update potentially lets all hell break loose, because other developers have
@@ -208,11 +207,11 @@
   checking against the distribution, but generally problems will be caught and
   the developer who caused them is expected to fix them. So also in this
   case code maintenance is done for you.
-*}
+\<close>
 
-section {* Serious Isabelle ML-Programming *}
+section \<open>Serious Isabelle ML-Programming\<close>
 
-text {*
+text \<open>
   As already pointed out in the previous section, Isabelle is a joint effort 
   of many developers. Therefore, disruptions that break the work of others
   are generally frowned upon. ``Accidents'' however do happen and everybody knows
@@ -236,57 +235,56 @@
    //home/isabelle-repository/repos/testboard"}
 
   where the dots need to be replaced by your login name.  Note that for
-  pushing changes to the testboard you need to add the option @{text "-f"},
+  pushing changes to the testboard you need to add the option \<open>-f\<close>,
   which should \emph{never} be used with the main Isabelle
   repository. While the testboard is a great system for supporting Isabelle
   developers, its disadvantage is that it needs login permissions for the
   computers in Munich. So in order to use it, you might have to ask other
   developers for help with obtaining one.
-*}
+\<close>
 
 
-section {* Some Naming Conventions in the Isabelle Sources *}
+section \<open>Some Naming Conventions in the Isabelle Sources\<close>
 
-text {*
+text \<open>
   There are a few naming conventions in the Isabelle code that might aid reading 
   and writing code. (Remember that code is written once, but read many 
   times.) The most important conventions are:
 
   \begin{itemize}
-  \item @{text t}, @{text u}, @{text trm} for (raw) terms; ML-type: @{ML_type term}
-  \item @{text ct}, @{text cu} for certified terms; ML-type: @{ML_type cterm}
-  \item @{text "ty"}, @{text T}, @{text U} for (raw) types; ML-type: @{ML_type typ}
-  \item @{text "S"} for sorts; ML-type: @{ML_type sort}
-  \item @{text th}, @{text thm} for theorems; ML-type: @{ML_type thm}
-  \item @{text "foo_tac"} for tactics; ML-type: @{ML_type tactic}
-  \item @{text thy} for theories; ML-type: @{ML_type theory}
-  \item @{text ctxt} for proof contexts; ML-type: @{ML_type Proof.context}
-  \item @{text lthy} for local theories; ML-type: @{ML_type local_theory}
-  \item @{text context} for generic contexts; ML-type @{ML_type Context.generic}
-  \item @{text mx} for mixfix syntax annotations; ML-type @{ML_type mixfix}
-  \item @{text prt} for pretty printing; ML-type @{ML_type Pretty.T}
-  \item @{text phi} for morphisms; ML-type @{ML_type morphism}
+  \item \<open>t\<close>, \<open>u\<close>, \<open>trm\<close> for (raw) terms; ML-type: @{ML_type term}
+  \item \<open>ct\<close>, \<open>cu\<close> for certified terms; ML-type: @{ML_type cterm}
+  \item \<open>ty\<close>, \<open>T\<close>, \<open>U\<close> for (raw) types; ML-type: @{ML_type typ}
+  \item \<open>S\<close> for sorts; ML-type: @{ML_type sort}
+  \item \<open>th\<close>, \<open>thm\<close> for theorems; ML-type: @{ML_type thm}
+  \item \<open>foo_tac\<close> for tactics; ML-type: @{ML_type tactic}
+  \item \<open>thy\<close> for theories; ML-type: @{ML_type theory}
+  \item \<open>ctxt\<close> for proof contexts; ML-type: @{ML_type Proof.context}
+  \item \<open>lthy\<close> for local theories; ML-type: @{ML_type local_theory}
+  \item \<open>context\<close> for generic contexts; ML-type @{ML_type Context.generic}
+  \item \<open>mx\<close> for mixfix syntax annotations; ML-type @{ML_type mixfix}
+  \item \<open>prt\<close> for pretty printing; ML-type @{ML_type Pretty.T}
+  \item \<open>phi\<close> for morphisms; ML-type @{ML_type morphism}
   \end{itemize}
-*}
+\<close>
 
-section {* Acknowledgements *}
+section \<open>Acknowledgements\<close>
 
-text {*
+text \<open>
   Financial support for this tutorial was provided by the German 
   Research Council (DFG) under grant number URB 165/5-1. The following
   people contributed to the text:
 
   \begin{itemize}
   \item {\bf Stefan Berghofer} wrote nearly all of the ML-code of the
-  \simpleinductive-package and the code for the @{text
-  "chunk"}-antiquotation. He also wrote the first version of chapter
+  \simpleinductive-package and the code for the \<open>chunk\<close>-antiquotation. He also wrote the first version of chapter
   \ref{chp:package} describing this package and has been helpful \emph{beyond
   measure} with answering questions about Isabelle.
 
   \item {\bf Jasmin Blanchette} helped greatly with section \ref{sec:pretty}
   and exercise \ref{fun:killqnt}.
 
-  \item {\bf Sascha Böhme} contributed the recipes in \ref{rec:timeout}, 
+  \item {\bf Sascha Böhme} contributed the recipes in \ref{rec:timeout}, 
   \ref{rec:external} and \ref{rec:oracle}. He also wrote section \ref{sec:conversion} 
   and helped with recipe \ref{rec:timing}. Parts of section \ref{sec:storing}
   are by him.
@@ -320,7 +318,7 @@
   many improvements to the text. 
 
   \item {\bf Dmitriy Traytel} suggested to use the ML-antiquotation 
-  @{text "command_spec"} in section~\ref{sec:newcommand}, which simplified the code. 
+  \<open>command_spec\<close> in section~\ref{sec:newcommand}, which simplified the code. 
 
   \item {\bf Piotr Trojanek} proofread the text.
   \end{itemize}
@@ -342,6 +340,6 @@
   %%This document (version \input{tip.tex}\hspace{-0.5ex}) was compiled with:\\
   %%\input{version.tex}\\
   %% \input{pversion}
-*}
+\<close>
 
 end
--- a/ProgTutorial/Package/Ind_Code.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Package/Ind_Code.thy	Tue May 14 17:10:47 2019 +0200
@@ -2,18 +2,18 @@
 imports Ind_General_Scheme "../First_Steps" 
 begin
 
-section {* The Gory Details\label{sec:code} *} 
+section \<open>The Gory Details\label{sec:code}\<close> 
 
-text {*
+text \<open>
   As mentioned before, the code falls roughly into three parts: the code that deals
   with the definitions, with the induction principles and with the introduction
   rules. In addition there are some administrative functions that string everything 
   together.
-*}
+\<close>
 
-subsection {* Definitions *}
+subsection \<open>Definitions\<close>
 
-text {*
+text \<open>
   We first have to produce, for each predicate the user specifies, an appropriate
   definition, whose general form is
 
@@ -23,17 +23,17 @@
   To do the latter, we use the following wrapper for the function
   @{ML_ind define in Local_Theory}. The wrapper takes a predicate name, a syntax
   annotation and a term representing the right-hand side of the definition.
-*}
+\<close>
 
-ML %linenosgray{*fun make_defn ((predname, mx), trm) lthy =
+ML %linenosgray\<open>fun make_defn ((predname, mx), trm) lthy =
 let 
   val arg = ((predname, mx), (Binding.empty_atts, trm))
   val ((_, (_ , thm)), lthy') = Local_Theory.define arg lthy
 in 
   (thm, lthy') 
-end*}
+end\<close>
 
-text {*
+text \<open>
   It returns the definition (as a theorem) and the local theory in which the
   definition has been made. We use @{ML_ind empty_atts in Binding} in Line 3, 
   since the definitions for our inductive predicates are not meant to be seen 
@@ -44,19 +44,19 @@
 
   @{text [display] "\<lambda>zs. \<forall>preds. orules \<longrightarrow> pred zs"}
 
-  When constructing these terms, the variables @{text "zs"} need to be chosen so 
-  that they do not occur in the @{text orules} and also be distinct from the 
-  @{text "preds"}.
+  When constructing these terms, the variables \<open>zs\<close> need to be chosen so 
+  that they do not occur in the \<open>orules\<close> and are also distinct from the 
+  \<open>preds\<close>.
 
 
-  The first function, named @{text defn_aux}, constructs the term for one
-  particular predicate (the argument @{text "pred"} in the code below). The
+  The first function, named \<open>defn_aux\<close>, constructs the term for one
+  particular predicate (the argument \<open>pred\<close> in the code below). The
   number of arguments of this predicate is determined by the number of
-  argument types given in @{text "arg_tys"}. The other arguments of the
-  function are the @{text orules} and all the @{text "preds"}.
-*}
+  argument types given in \<open>arg_tys\<close>. The other arguments of the
+  function are the \<open>orules\<close> and all the \<open>preds\<close>.
+\<close>
 
-ML %linenosgray{*fun defn_aux lthy orules preds (pred, arg_tys) =
+ML %linenosgray\<open>fun defn_aux lthy orules preds (pred, arg_tys) =
 let 
   fun mk_all x P = HOLogic.all_const (fastype_of x) $ lambda x P
 
@@ -70,34 +70,33 @@
   |> fold_rev (curry HOLogic.mk_imp) orules
   |> fold_rev mk_all preds
   |> fold_rev lambda fresh_args 
-end*}
+end\<close>
 
-text {*
-  The function @{text mk_all} in Line 3 is just a helper function for constructing 
-  universal quantifications. The code in Lines 5 to 9 produces the fresh @{text
-  "zs"}. For this it pairs every argument type with the string
+text \<open>
+  The function \<open>mk_all\<close> in Line 3 is just a helper function for constructing 
+  universal quantifications. The code in Lines 5 to 9 produces the fresh \<open>zs\<close>. For this it pairs every argument type with the string
   @{text [quotes] "z"} (Line 7); then generates variants for all these strings
-  so that they are unique w.r.t.~to the predicates and @{text "orules"} (Line 8);
+  so that they are unique w.r.t.~the predicates and \<open>orules\<close> (Line 8);
   in Line 9 it generates the corresponding variable terms for the unique
   strings.
 
   The unique variables are applied to the predicate in Line 11 using the
-  function @{ML list_comb}; then the @{text orules} are prefixed (Line 12); in
+  function @{ML list_comb}; then the \<open>orules\<close> are prefixed (Line 12); in
   Line 13 we quantify over all predicates; and in Line 14 we just abstract
-  over all the @{text "zs"}, i.e., the fresh arguments of the
+  over all the \<open>zs\<close>, i.e., the fresh arguments of the
   predicate. A testcase for this function is
-*}
+\<close>
 
-local_setup %gray {* fn lthy =>
+local_setup %gray \<open>fn lthy =>
 let
   val def = defn_aux lthy eo_orules eo_preds (e_pred, e_arg_tys)
 in
   pwriteln (pretty_term lthy def); lthy
-end *}
+end\<close>
 
-text {*
+text \<open>
   where we use the shorthands defined in Figure~\ref{fig:shorthands}.
-  The testcase calls @{ML defn_aux} for the predicate @{text "even"} and prints
+  The testcase calls @{ML defn_aux} for the predicate \<open>even\<close> and prints
   out the generated definition. So we obtain as printout 
 
   @{text [display] 
@@ -105,18 +104,18 @@
                          \<longrightarrow> (\<forall>n. even n \<longrightarrow> odd (Suc n)) \<longrightarrow> even z"}
 
   If we try out the function with the rules for freshness
-*}
+\<close>
 
-local_setup %gray {* fn lthy =>
+local_setup %gray \<open>fn lthy =>
 let
   val arg = (fresh_pred, fresh_arg_tys)
   val def = defn_aux lthy fresh_orules [fresh_pred] arg
 in
   pwriteln (pretty_term lthy def); lthy
-end *}
+end\<close>
 
 
-text {*
+text \<open>
   we obtain
 
   @{term [display] 
@@ -126,23 +125,22 @@
                 (\<forall>a b t. \<not> a = b \<longrightarrow> fresh a t \<longrightarrow> fresh a (Lam b t)) \<longrightarrow> fresh z za"}
 
 
-  The second function, named @{text defns}, has to iterate the function
-  @{ML defn_aux} over all predicates. The argument @{text "preds"} is again
-  the list of predicates as @{ML_type term}s; the argument @{text
-  "prednames"} is the list of binding names of the predicates; @{text mxs} 
+  The second function, named \<open>defns\<close>, has to iterate the function
+  @{ML defn_aux} over all predicates. The argument \<open>preds\<close> is again
+  the list of predicates as @{ML_type term}s; the argument \<open>prednames\<close> is the list of binding names of the predicates; \<open>mxs\<close> 
   is the list of syntax, or mixfix, annotations for the predicates; 
-  @{text "arg_tyss"} is the list of argument-type-lists.
-*}
+  \<open>arg_typss\<close> is the list of argument-type-lists.
+\<close>
 
-ML %linenosgray{*fun defns rules preds prednames mxs arg_typss lthy =
+ML %linenosgray\<open>fun defns rules preds prednames mxs arg_typss lthy =
 let
   val orules = map (Object_Logic.atomize_term lthy) rules
   val defs = map (defn_aux lthy orules preds) (preds ~~ arg_typss) 
 in
   fold_map make_defn (prednames ~~ mxs ~~ defs) lthy
-end*}
+end\<close>
 
-text {*
+text \<open>
   The user will state the introduction rules using meta-implications and
   meta-quanti\-fications. In Line 4, we transform these introduction rules
   into the object logic (since definitions cannot be stated with
@@ -154,19 +152,19 @@
   definitions. The actual definitions are then made in Line 7.  The result of
   the function is a list of theorems and a local theory (the theorems are
   registered with the local theory). A testcase for this function is
-*}
+\<close>
 
-local_setup %gray {* fn lthy =>
+local_setup %gray \<open>fn lthy =>
 let
   val (defs, lthy') = 
     defns eo_rules eo_preds eo_prednames eo_mxs eo_arg_tyss lthy
 in
   pwriteln (pretty_thms_no_vars lthy' defs); lthy
-end *}
+end\<close>
 
-text {*
+text \<open>
   where we feed into the function all parameters corresponding to
-  the @{text even}/@{text odd} example. The definitions we obtain
+  the \<open>even\<close>/\<open>odd\<close> example. The definitions we obtain
   are:
 
   @{text [display, break]
@@ -175,22 +173,22 @@
 odd \<equiv> \<lambda>z. \<forall>even odd. (even 0) \<longrightarrow> (\<forall>n. odd n \<longrightarrow> even (Suc n)) 
                                \<longrightarrow> (\<forall>n. even n \<longrightarrow> odd (Suc n)) \<longrightarrow> odd z"}
 
-  Note that in the testcase we return the local theory @{text lthy} 
-  (not the modified @{text lthy'}). As a result the test case has no effect
+  Note that in the testcase we return the local theory \<open>lthy\<close> 
+  (not the modified \<open>lthy'\<close>). As a result the test case has no effect
   on the ambient theory. The reason is that introducing the
   definitions a second time would pollute the name space with two versions of 
-  @{text "even"} and @{text "odd"}. We want to avoid this here.
+  \<open>even\<close> and \<open>odd\<close>. We want to avoid this here.
 
   This completes the code for introducing the definitions. Next we deal with
   the induction principles. 
-*}
+\<close>
 
-subsection {* Induction Principles *}
+subsection \<open>Induction Principles\<close>
 
-text {*
+text \<open>
   Recall that the manual proof for the induction principle 
-  of @{text "even"} was:
-*}
+  of \<open>even\<close> was:
+\<close>
 
 lemma manual_ind_prin_even: 
 assumes prem: "even z"
@@ -203,65 +201,65 @@
 apply(assumption)
 done
 
-text {* 
+text \<open>
   The code for automating such induction principles has to accomplish two tasks: 
   constructing the induction principles from the given introduction
   rules and then automatically generating proofs for them using a tactic. 
   
   The tactic will use the following helper function for instantiating universal 
   quantifiers. 
-*}
+\<close>
 
-ML %grayML{*fun inst_spec ctrm =
+ML %grayML\<open>fun inst_spec ctrm =
 let
   val cty = Thm.ctyp_of_cterm ctrm
 in 
   Thm.instantiate' [SOME cty] [NONE, SOME ctrm] @{thm spec} 
-end*}
+end\<close>
 
-text {*
+text \<open>
   This helper function uses the function @{ML_ind instantiate' in Thm}
-  and instantiates the @{text "?x"} in the theorem @{thm spec} with a given
+  and instantiates the \<open>?x\<close> in the theorem @{thm spec} with a given
   @{ML_type cterm}. We call this helper function in the following
   tactic.\label{fun:instspectac}
-*}
+\<close>
 
-ML %grayML{*fun inst_spec_tac ctxt ctrms = 
-  EVERY' (map (dresolve_tac ctxt o single o inst_spec) ctrms)*}
+ML %grayML\<open>fun inst_spec_tac ctxt ctrms = 
+  EVERY' (map (dresolve_tac ctxt o single o inst_spec) ctrms)\<close>
 
-text {*
+text \<open>
   This tactic expects a list of @{ML_type cterm}s. It allows us in the 
   proof below to instantiate the three quantifiers in the assumption. 
-*}
+\<close>
 
 lemma 
 fixes P::"nat \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> bool"
 shows "\<forall>x y z. P x y z \<Longrightarrow> True"
-apply (tactic {* 
-  inst_spec_tac @{context} [@{cterm "a::nat"},@{cterm "b::nat"},@{cterm "c::nat"}] 1 *})
-txt {* 
+apply (tactic \<open>
+  inst_spec_tac @{context} [@{cterm "a::nat"},@{cterm "b::nat"},@{cterm "c::nat"}] 1\<close>)
+txt \<open>
   We obtain the goal state
 
   \begin{minipage}{\textwidth}
   @{subgoals} 
-  \end{minipage}*}
+  \end{minipage}\<close>
 (*<*)oops(*>*)
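+
+text \<open>
+  To see the effect of @{ML inst_spec} on its own, the following minimal
+  sketch instantiates the theorem @{thm [source] spec} with a concrete
+  @{ML_type cterm} and prints the resulting theorem, namely
+  \<open>(\<forall>x. ?P x) \<Longrightarrow> ?P a\<close>:
+\<close>
+
+ML %grayML\<open>
+(* instantiate the ?x in spec with the term a *)
+val _ = pwriteln (pretty_thm @{context} (inst_spec @{cterm "a::nat"}))\<close>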
 
-text {*
+text \<open>
   The complete tactic for proving the induction principles can now
   be implemented as follows:
-*}
+\<close>
 
-ML %linenosgray{*fun ind_tac ctxt defs prem insts =
+ML %linenosgray\<open>fun ind_tac ctxt defs prem insts =
   EVERY1 [Object_Logic.full_atomize_tac ctxt,
           cut_facts_tac prem,
           rewrite_goal_tac ctxt defs,
           inst_spec_tac ctxt insts,
-          assume_tac ctxt]*}
+          assume_tac ctxt]\<close>
 
-text {*
+text \<open>
   We have to give it as arguments the definitions, the premise (a list of
-  formulae) and the instantiations. The premise is @{text "even n"} in lemma
+  formulae) and the instantiations. The premise is \<open>even z\<close> in lemma
   @{thm [source] manual_ind_prin_even} shown above; in our code it will always be a list
   consisting of a single formula. Compare this tactic with the manual proof
   for the lemma @{thm [source] manual_ind_prin_even}: as you can see there is
@@ -271,13 +269,13 @@
   the assumptions of the goal (Line 5) and then conclude with @{ML assume_tac}.
 
   Two testcases for this tactic are:
-*}
+\<close>
 
 lemma automatic_ind_prin_even:
 assumes prem: "even z"
 shows "P 0 \<Longrightarrow> (\<And>m. Q m \<Longrightarrow> P (Suc m)) \<Longrightarrow> (\<And>m. P m \<Longrightarrow> Q (Suc m)) \<Longrightarrow> P z"
-by (tactic {* ind_tac @{context} eo_defs @{thms prem} 
-                    [@{cterm "P::nat\<Rightarrow>bool"}, @{cterm "Q::nat\<Rightarrow>bool"}] *})
+by (tactic \<open>ind_tac @{context} eo_defs @{thms prem} 
+                    [@{cterm "P::nat\<Rightarrow>bool"}, @{cterm "Q::nat\<Rightarrow>bool"}]\<close>)
 
 lemma automatic_ind_prin_fresh:
 assumes prem: "fresh z za" 
@@ -285,11 +283,11 @@
         (\<And>a t s. \<lbrakk>P a t; P a s\<rbrakk> \<Longrightarrow> P a (App t s)) \<Longrightarrow>
         (\<And>a t. P a (Lam a t)) \<Longrightarrow> 
         (\<And>a b t. \<lbrakk>a \<noteq> b; P a t\<rbrakk> \<Longrightarrow> P a (Lam b t)) \<Longrightarrow> P z za"
-by (tactic {* ind_tac @{context} @{thms fresh_def} @{thms prem} 
-                          [@{cterm "P::string\<Rightarrow>trm\<Rightarrow>bool"}] *})
+by (tactic \<open>ind_tac @{context} @{thms fresh_def} @{thms prem} 
+                          [@{cterm "P::string\<Rightarrow>trm\<Rightarrow>bool"}]\<close>)
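+
+text \<open>
+  The functions below use a standard Isabelle technique for producing
+  theorems with schematic variables: the variables are first fixed in a
+  local context, the statement is proved there, and the theorem is then
+  exported to the outer context, which turns the fixed variables into
+  schematic ones. A minimal sketch of this export behaviour (independent
+  of the package code; it should print \<open>?x = ?x\<close>) is:
+\<close>
+
+ML %grayML\<open>
+let
+  (* declare a fresh variable x as free, but fixed *)
+  val ([x], ctxt') = Variable.variant_fixes ["x"] @{context}
+  val frx = Free (x, @{typ nat})
+  (* prove the trivial fact x = x in the local context *)
+  val thm = Goal.prove ctxt' [] [] 
+    (HOLogic.mk_Trueprop (HOLogic.mk_eq (frx, frx)))
+    (fn {context, ...} => resolve_tac context [@{thm refl}] 1)
+in
+  (* exporting to the original context turns x into the schematic ?x *)
+  pwriteln (pretty_thm @{context} 
+    (singleton (Proof_Context.export ctxt' @{context}) thm))
+end\<close>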
 
 
-text {*
+text \<open>
   While the tactic for proving the induction principles is relatively simple,
   it will be a bit more work to construct the goals from the introduction rules
   the user provides.  Therefore let us have a closer look at the first 
@@ -297,10 +295,10 @@
 
   \begin{isabelle}
   \isacommand{thm}~@{thm [source] automatic_ind_prin_even}\\
-  @{text "> "}~@{thm automatic_ind_prin_even}
+  \<open>> \<close>~@{thm automatic_ind_prin_even}
   \end{isabelle}
 
-  The variables @{text "z"}, @{text "P"} and @{text "Q"} are schematic
+  The variables \<open>z\<close>, \<open>P\<close> and \<open>Q\<close> are schematic
   variables (since they are not quantified in the lemma). These 
   variables must be schematic, otherwise they cannot be instantiated 
   by the user. To generate these schematic variables we use a common trick
@@ -308,27 +306,26 @@
   \emph{but fixed}, and then use the infrastructure to turn them into 
   schematic variables.
 
-  In general we have to construct for each predicate @{text "pred"} a goal 
+  In general we have to construct for each predicate \<open>pred\<close> a goal 
   of the form
 
   @{text [display] 
   "pred ?zs \<Longrightarrow> rules[preds := ?Ps] \<Longrightarrow> ?P ?zs"}
 
-  where the predicates @{text preds} are replaced in @{text rules} by new 
-  distinct variables @{text "?Ps"}. We also need to generate fresh arguments 
-  @{text "?zs"} for the predicate  @{text "pred"} and the @{text "?P"} in 
+  where the predicates \<open>preds\<close> are replaced in \<open>rules\<close> by new 
+  distinct variables \<open>?Ps\<close>. We also need to generate fresh arguments 
+  \<open>?zs\<close> for the predicate  \<open>pred\<close> and the \<open>?P\<close> in 
   the conclusion. 
 
-  We generate these goals in two steps. The first function, named @{text prove_ind}, 
+  We generate these goals in two steps. The first function, named \<open>prove_ind\<close>, 
   expects that the introduction rules are already appropriately substituted. The argument
-  @{text "srules"} stands for these substituted rules; @{text cnewpreds} are
-  the certified terms coresponding to the variables @{text "?Ps"}; @{text
-  "pred"} is the predicate for which we prove the induction principle;
-  @{text "newpred"} is its replacement and @{text "arg_tys"} are the argument
+  \<open>srules\<close> stands for these substituted rules; \<open>cnewpreds\<close> are
+  the certified terms corresponding to the variables \<open>?Ps\<close>; \<open>pred\<close> is the predicate for which we prove the induction principle;
+  \<open>newpred\<close> is its replacement and \<open>arg_tys\<close> are the argument
   types of this predicate.
-*}
+\<close>
 
-ML %linenosgray{*fun prove_ind lthy defs srules cnewpreds ((pred, newpred), arg_tys) =
+ML %linenosgray\<open>fun prove_ind lthy defs srules cnewpreds ((pred, newpred), arg_tys) =
 let
   val zs = replicate (length arg_tys) "z"
   val (newargnames, lthy') = Variable.variant_fixes zs lthy;
@@ -341,17 +338,17 @@
   Goal.prove lthy' [] [prem] goal
       (fn {prems, context, ...} => ind_tac context defs prems cnewpreds)
   |> singleton (Proof_Context.export lthy' lthy)
-end *}
+end\<close>
 
-text {* 
-  In Line 3 we produce names @{text "zs"} for each type in the 
+text \<open>
+  In Line 3 we produce names \<open>zs\<close> for each type in the 
   argument type list. Line 4 makes these names unique and declares them as 
-  free, but fixed, variables in the local theory @{text "lthy'"}. 
+  free, but fixed, variables in the local theory \<open>lthy'\<close>. 
   That means they are not schematic variables (yet).
   In Line 5 we construct the terms corresponding to these variables. 
   The variables are applied to the predicate in Line 7 (this corresponds
-  to the first premise @{text "pred zs"} of the induction principle). 
-  In Line 8 and 9, we first construct the term  @{text "P zs"} 
+  to the first premise \<open>pred zs\<close> of the induction principle). 
+  In Lines 8 and 9, we first construct the term \<open>P zs\<close> 
   and then add the (substituted) introduction rules as preconditions. In 
   case no introduction rules are given, the conclusion of this 
   implication needs to be wrapped inside a @{term Trueprop}, otherwise 
@@ -364,14 +361,14 @@
   definitions, the premise and the (certified) predicates with which the
   introduction rules have been substituted. The code in these two lines will
   return a theorem. However, it is a theorem proved inside the local theory
-  @{text "lthy'"}, where the variables @{text "zs"} are free, but fixed (see
-  Line 4). By exporting this theorem from @{text "lthy'"} (which contains the
-  @{text "zs"} as free variables) to @{text "lthy"} (which does not), we
-  obtain the desired schematic variables @{text "?zs"}.  A testcase for this
+  \<open>lthy'\<close>, where the variables \<open>zs\<close> are free, but fixed (see
+  Line 4). By exporting this theorem from \<open>lthy'\<close> (which contains the
+  \<open>zs\<close> as free variables) to \<open>lthy\<close> (which does not), we
+  obtain the desired schematic variables \<open>?zs\<close>.  A testcase for this
   function is
-*}
+\<close>
 
-local_setup %gray {* fn lthy =>
+local_setup %gray \<open>fn lthy =>
 let
   val newpreds = [@{term "P::nat \<Rightarrow> bool"}, @{term "Q::nat \<Rightarrow> bool"}]
   val cnewpreds = [@{cterm "P::nat \<Rightarrow> bool"}, @{cterm "Q::nat \<Rightarrow> bool"}]
@@ -381,25 +378,25 @@
       prove_ind lthy eo_defs srules cnewpreds ((e_pred, newpred), e_arg_tys)
 in
   pwriteln (pretty_thm lthy intro); lthy
-end *}
+end\<close>
 
-text {*
+text \<open>
   This prints out the theorem:
 
   @{text [display]
   " \<lbrakk>even ?z; P 0; \<And>n. Q n \<Longrightarrow> P (Suc n); \<And>n. P n \<Longrightarrow> Q (Suc n)\<rbrakk> \<Longrightarrow> P ?z"}
 
-  The export from @{text lthy'} to @{text lthy} in Line 13 above 
-  has correctly turned the free, but fixed, @{text "z"} into a schematic 
-  variable @{text "?z"}; the variables @{text "P"} and @{text "Q"} are not yet
+  The export from \<open>lthy'\<close> to \<open>lthy\<close> in Line 13 above 
+  has correctly turned the free, but fixed, \<open>z\<close> into a schematic 
+  variable \<open>?z\<close>; the variables \<open>P\<close> and \<open>Q\<close> are not yet
   schematic. 
 
   We still have to produce the new predicates with which the introduction
   rules are substituted and iterate @{ML prove_ind} over all
-  predicates. This is what the second function, named @{text inds} does. 
-*}
+  predicates. This is what the second function, named \<open>inds\<close>, does. 
+\<close>
 
-ML %linenosgray{*fun inds rules defs preds arg_tyss lthy  =
+ML %linenosgray\<open>fun inds rules defs preds arg_tyss lthy  =
 let
   val Ps = replicate (length preds) "P"
   val (newprednames, lthy') = Variable.variant_fixes Ps lthy
@@ -413,33 +410,33 @@
   map (prove_ind lthy' defs srules cnewpreds) 
         (preds ~~ newpreds ~~ arg_tyss)
           |> Proof_Context.export lthy' lthy
-end*}
+end\<close>
 
-text {*
+text \<open>
   In Line 3, we generate a string @{text [quotes] "P"} for each predicate. 
   In Line 4, we use the same trick as in the previous function, that is making the 
-  @{text "Ps"} fresh and declaring them as free, but fixed, in
-  the new local theory @{text "lthy'"}. In Line 6, we construct the types of these new predicates
+  \<open>Ps\<close> fresh and declaring them as free, but fixed, in
+  the new local theory \<open>lthy'\<close>. In Line 6, we construct the types of these new predicates
   using the given argument types. Next we turn them into terms and subsequently
   certify them (Lines 7 and 8). We can now produce the substituted introduction rules 
   (Line 9) using the function @{ML_ind subst_free in Term}. Lines 12 and 13 just iterate 
   the proofs for all predicates.
   From this we obtain a list of theorems. Finally we need to export the 
-  fixed variables @{text "Ps"} to obtain the schematic variables @{text "?Ps"} 
+  fixed variables \<open>Ps\<close> to obtain the schematic variables \<open>?Ps\<close> 
   (Line 14).
 
   A testcase for this function is
-*}
+\<close>
 
-local_setup %gray {* fn lthy =>
+local_setup %gray \<open>fn lthy =>
 let 
   val ind_thms = inds eo_rules eo_defs eo_preds eo_arg_tyss lthy
 in
   pwriteln (pretty_thms lthy ind_thms); lthy
-end *}
+end\<close>
 
 
-text {*
+text \<open>
   which prints out
 
 @{text [display]
@@ -448,17 +445,17 @@
 odd ?z \<Longrightarrow> ?P1 0 \<Longrightarrow>
  (\<And>m. ?Pa1 m \<Longrightarrow> ?P1 (Suc m)) \<Longrightarrow> (\<And>m. ?P1 m \<Longrightarrow> ?Pa1 (Suc m)) \<Longrightarrow> ?Pa1 ?z"}
 
-  Note that now both, the @{text "?Ps"} and the @{text "?zs"}, are schematic
+  Note that now both the \<open>?Ps\<close> and the \<open>?zs\<close> are schematic
   variables. The numbers attached to these variables have been introduced by 
   the pretty-printer and are \emph{not} important for the user. 
 
   This completes the code for the induction principles. The final piece
   of reasoning infrastructure we need is the introduction rules. 
-*}
+\<close>
 
-subsection {* Introduction Rules *}
+subsection \<open>Introduction Rules\<close>
 
-text {*
+text \<open>
   Constructing the goals for the introduction rules is easy: they
   are just the rules given by the user. However, their proofs are 
   quite a bit more involved than the ones for the induction principles. 
@@ -471,15 +468,15 @@
   
   about freshness for lambdas. In order to ease our work somewhat, 
   we use the following two helper functions.
-*}
+\<close>
 
-ML %grayML{*val all_elims = fold (fn ct => fn th => th RS inst_spec ct)
-val imp_elims = fold (fn th => fn th' => [th', th] MRS @{thm mp})*}
+ML %grayML\<open>val all_elims = fold (fn ct => fn th => th RS inst_spec ct)
+val imp_elims = fold (fn th => fn th' => [th', th] MRS @{thm mp})\<close>
 
-text {* 
+text \<open>
   To see what these functions do, let us suppose we have the following three
   theorems. 
-*}
+\<close>
 
 lemma all_elims_test:
 fixes P::"nat \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> bool"
@@ -491,7 +488,7 @@
 lemma imp_elims_test':
 shows "A" "B" sorry
 
-text {*
+text \<open>
   The function @{ML all_elims} takes a list of (certified) terms and instantiates
   theorems of the form @{thm [source] all_elims_test}. For example we can instantiate
   the quantifiers in this theorem with @{term a}, @{term b} and @{term c} as follows:
@@ -510,7 +507,7 @@
   @{ML all_elims} operates on theorems. 
 
   Similarly, the function @{ML imp_elims} eliminates preconditions from implications. 
-  For example we can eliminate the preconditions @{text "A"} and @{text "B"} from
+  For example we can eliminate the preconditions \<open>A\<close> and \<open>B\<close> from
   @{thm [source] imp_elims_test}:
 
   @{ML_response_fake [display, gray]
@@ -522,52 +519,50 @@
   "C"}
 
   Now we set up the proof for the introduction rule as follows:
-*}
+\<close>
 
 lemma fresh_Lam:
 shows "\<And>a b t. \<lbrakk>a \<noteq> b; fresh a t\<rbrakk> \<Longrightarrow> fresh a (Lam b t)"
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   The first step in the proof will be to expand the definitions of freshness
   and then introduce quantifiers and implications. For this we
   will use the tactic
-*}
+\<close>
 
-ML %linenosgray{*fun expand_tac ctxt defs =
+ML %linenosgray\<open>fun expand_tac ctxt defs =
   Object_Logic.rulify_tac ctxt 1
   THEN rewrite_goal_tac ctxt defs 1
-  THEN (REPEAT (resolve_tac ctxt [@{thm allI}, @{thm impI}] 1)) *}
+  THEN (REPEAT (resolve_tac ctxt [@{thm allI}, @{thm impI}] 1))\<close>
 
-text {*
+text \<open>
   The function in Line 2 ``rulifies'' the lemma.\footnote{FIXME: explain this better} 
   This will turn out to 
-  be important later on. Applying this tactic in our proof of @{text "fresh_Lem"}
-*}
+  be important later on. Applying this tactic in our proof of \<open>fresh_Lam\<close>
+\<close>
 
 (*<*)
 lemma fresh_Lam:
 shows "\<And>a b t. \<lbrakk>a \<noteq> b; fresh a t\<rbrakk> \<Longrightarrow> fresh a (Lam b t)"
 (*>*)
-apply(tactic {* expand_tac @{context} @{thms fresh_def} *})
+apply(tactic \<open>expand_tac @{context} @{thms fresh_def}\<close>)
 
-txt {*
+txt \<open>
   gives us the goal state
 
   \begin{isabelle}
   @{subgoals [display]}
   \end{isabelle}
 
-  As you can see, there are parameters (namely @{text "a"}, @{text "b"} and
-  @{text "t"}) which come from the introduction rule and parameters (in the
-  case above only @{text "fresh"}) which come from the universal
+  As you can see, there are parameters (namely \<open>a\<close>, \<open>b\<close> and
+  \<open>t\<close>) which come from the introduction rule and parameters (in the
+  case above only \<open>fresh\<close>) which come from the universal
   quantification in the definition @{term "fresh a (Lam b t)"}. Similarly,
   there are assumptions that come from the premises of the rule (namely the
   first two) and assumptions from the definition of the predicate (assumption
   three to six). We need to treat these parameters and assumptions
-  differently. In the code below we will therefore separate them into @{text
-  "params1"} and @{text params2}, respectively @{text "prems1"} and @{text
-  "prems2"}. To do this separation, it is best to open a subproof with the
+  differently. In the code below we will therefore separate them into \<open>params1\<close> and \<open>params2\<close>, respectively \<open>prems1\<close> and \<open>prems2\<close>. To do this separation, it is best to open a subproof with the
   tactic @{ML_ind SUBPROOF in Subgoal}, since this tactic provides us with the parameters (as
   list of @{ML_type cterm}s) and the assumptions (as list of @{ML_type thm}s). 
   The problem with @{ML SUBPROOF}, however, is that it always expects us to 
@@ -576,14 +571,14 @@
   we use first the function @{ML_ind  FOCUS in Subgoal}, which does the
   same as @{ML SUBPROOF} 
   but does not require us to completely discharge the goal. 
-*}
+\<close>
 (*<*)oops(*>*)
-text_raw {*
+text_raw \<open>
 \begin{figure}[t]
 \begin{minipage}{\textwidth}
 \begin{isabelle}
-*}
-ML %grayML{*fun chop_print params1 params2 prems1 prems2 ctxt =
+\<close>
+ML %grayML\<open>fun chop_print params1 params2 prems1 prems2 ctxt =
 let 
  val pps = [Pretty.big_list "Params1 from the rule:" (map (pretty_cterm ctxt) params1), 
             Pretty.big_list "Params2 from the predicate:" (map (pretty_cterm ctxt) params2), 
@@ -593,26 +588,26 @@
   pps |> Pretty.chunks
       |> Pretty.string_of
       |> tracing
-end*}
+end\<close>
 
-text_raw{*
+text_raw\<open>
 \end{isabelle}
 \end{minipage}
 \caption{A helper function that prints out the parameters and premises that
   need to be treated differently.\label{fig:chopprint}}
 \end{figure}
-*}
+\<close>
 
-text {*
-  First we calculate the values for @{text "params1/2"} and @{text "prems1/2"}
-  from @{text "params"} and @{text "prems"}, respectively. To better see what is
+text \<open>
+  First we calculate the values for \<open>params1/2\<close> and \<open>prems1/2\<close>
+  from \<open>params\<close> and \<open>prems\<close>, respectively. To better see what is
   going on in our example, we will print out these values using the printing
   function in Figure~\ref{fig:chopprint}. Since @{ML FOCUS in Subgoal} will
-  supply us the @{text "params"} and @{text "prems"} as lists, we can 
+  supply us with the \<open>params\<close> and \<open>prems\<close> as lists, we can 
   separate them using the function @{ML_ind chop in Library}. 
-*}
+\<close>
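+
+text \<open>
+  To recall what @{ML chop} does, here is a quick check with a made-up
+  list: it splits the list at the given position.
+\<close>
+
+ML %grayML\<open>
+(* yields ([1, 2], [3, 4, 5]) *)
+val (xs, ys) = chop 2 [1, 2, 3, 4, 5]\<close>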
 
-ML %linenosgray{*fun chop_test_tac preds rules =
+ML %linenosgray\<open>fun chop_test_tac preds rules =
   Subgoal.FOCUS (fn {params, prems, context, ...} =>
   let
     val cparams = map snd params
@@ -620,40 +615,40 @@
     val (prems1, prems2) = chop (length prems - length rules) prems
   in
     chop_print params1 params2 prems1 prems2 context; all_tac
-  end) *}
+  end)\<close>
 
-text {* 
+text \<open>
   For the separation we can rely on the fact that Isabelle deterministically 
   produces parameters and premises in a goal state. The last parameters
   that were introduced come from the quantifications in the definitions
   (see the tactic @{ML expand_tac}).
   Therefore we only have to subtract in Line 5 the number of predicates (in this
-  case only @{text "1"}) from the lenghts of all parameters. Similarly
-  with the @{text "prems"} in line 6: the last premises in the goal state come from 
+  case only \<open>1\<close>) from the length of the parameter list. Similarly
+  with the \<open>prems\<close> in Line 6: the last premises in the goal state come from 
   unfolding the definition of the predicate in the conclusion. So we can 
   just subtract the number of rules from the number of all premises. 
   To check our calculations we print them out in Line 8 using the
   function @{ML chop_print} from Figure~\ref{fig:chopprint} and then 
   just do nothing, that is @{ML all_tac}. Applying this tactic in our example 
-*}
+\<close>
 
 (*<*)
 lemma fresh_Lam:
 shows "\<And>a b t. \<lbrakk>a \<noteq> b; fresh a t\<rbrakk> \<Longrightarrow> fresh a (Lam b t)"
-apply(tactic {* expand_tac @{context} @{thms fresh_def} *})
+apply(tactic \<open>expand_tac @{context} @{thms fresh_def}\<close>)
 (*>*)
-apply(tactic {* chop_test_tac [fresh_pred] fresh_rules @{context} 1 *})
+apply(tactic \<open>chop_test_tac [fresh_pred] fresh_rules @{context} 1\<close>)
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   gives
 
   \begin{isabelle}
-  @{text "Params1 from the rule:"}\\
-  @{text "a, b, t"}\\
-  @{text "Params2 from the predicate:"}\\
-  @{text "fresh"}\\
-  @{text "Prems1 from the rule:"}\\
+  \<open>Params1 from the rule:\<close>\\
+  \<open>a, b, t\<close>\\
+  \<open>Params2 from the predicate:\<close>\\
+  \<open>fresh\<close>\\
+  \<open>Prems1 from the rule:\<close>\\
   @{term "a \<noteq> b"}\\
   @{text [break]
 "\<forall>fresh.
@@ -661,7 +656,7 @@
       (\<forall>a t s. fresh a t \<longrightarrow> fresh a s \<longrightarrow> fresh a (App t s)) \<longrightarrow>
       (\<forall>a t. fresh a (Lam a t)) \<longrightarrow> 
       (\<forall>a b t. a \<noteq> b \<longrightarrow> fresh a t \<longrightarrow> fresh a (Lam b t)) \<longrightarrow> fresh a t"}\\
-   @{text "Prems2 from the predicate:"}\\
+   \<open>Prems2 from the predicate:\<close>\\
    @{term "\<forall>a b. a \<noteq> b \<longrightarrow> fresh a (Var b)"}\\
    @{term "\<forall>a t s. fresh a t \<longrightarrow> fresh a s \<longrightarrow> fresh a (App t s)"}\\
    @{term "\<forall>a t. fresh a (Lam a t)"}\\
@@ -669,18 +664,18 @@
   \end{isabelle}
 
 
-  We now have to select from @{text prems2} the premise 
+  We now have to select from \<open>prems2\<close> the premise 
   that corresponds to the introduction rule we prove, namely:
 
   @{term [display] "\<forall>a b t. a \<noteq> b \<longrightarrow> fresh a t \<longrightarrow> fresh a (Lam a t)"}
 
   To use this premise with @{ML resolve_tac}, we need to instantiate its 
-  quantifiers (with @{text params1}) and transform it into rule 
+  quantifiers (with \<open>params1\<close>) and transform it into rule 
   format (using @{ML_ind  rulify in Object_Logic}). So we can modify the 
   code as follows:
-*}
+\<close>
 
-ML %linenosgray{*fun apply_prem_tac i preds rules =
+ML %linenosgray\<open>fun apply_prem_tac i preds rules =
   Subgoal.FOCUS (fn {params, prems, context, ...} =>
   let
     val cparams = map snd params
@@ -688,37 +683,37 @@
     val (prems1, prems2) = chop (length prems - length rules) prems
   in
     resolve_tac  context [Object_Logic.rulify context (all_elims params1 (nth prems2 i))] 1
-  end) *}
+  end)\<close>
 
-text {* 
-  The argument @{text i} corresponds to the number of the 
+text \<open>
+  The argument \<open>i\<close> corresponds to the number of the 
   introduction rule we want to prove. We will later on let it range
-  from @{text 0} to the number of @{text "rules - 1"}.
-  Below we apply this function with @{text 3}, since 
+  from \<open>0\<close> to the number of rules minus \<open>1\<close>.
+  Below we apply this function with \<open>3\<close>, since 
   we are proving the fourth introduction rule. 
-*}
+\<close>
 
 (*<*)
 lemma fresh_Lam:
 shows "\<And>a b t. \<lbrakk>a \<noteq> b; fresh a t\<rbrakk> \<Longrightarrow> fresh a (Lam b t)"
-apply(tactic {* expand_tac @{context} @{thms fresh_def} *})
+apply(tactic \<open>expand_tac @{context} @{thms fresh_def}\<close>)
 (*>*)
-apply(tactic {* apply_prem_tac 3 [fresh_pred] fresh_rules @{context} 1 *})
+apply(tactic \<open>apply_prem_tac 3 [fresh_pred] fresh_rules @{context} 1\<close>)
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   The goal state we obtain is: 
 
   \begin{isabelle}
-  @{text "1."}~@{text "\<dots> \<Longrightarrow> "}~@{prop "a \<noteq> b"}\\
-  @{text "2."}~@{text "\<dots> \<Longrightarrow> "}~@{prop "fresh a t"}
+  \<open>1.\<close>~\<open>\<dots> \<Longrightarrow> \<close>~@{prop "a \<noteq> b"}\\
+  \<open>2.\<close>~\<open>\<dots> \<Longrightarrow> \<close>~@{prop "fresh a t"}
   \end{isabelle}
 
   As expected there are two subgoals, where the first comes from the
   non-recursive premise of the introduction rule and the second comes 
   from the recursive one. The first goal can be solved immediately 
-  by @{text "prems1"}. The second needs more work. It can be solved 
-  with the other premise in @{text "prems1"}, namely
+  by \<open>prems1\<close>. The second needs more work. It can be solved 
+  with the other premise in \<open>prems1\<close>, namely
 
 
   @{term [break,display]
@@ -729,31 +724,31 @@
       (\<forall>a b t. a \<noteq> b \<longrightarrow> fresh a t \<longrightarrow> fresh a (Lam b t)) \<longrightarrow> fresh a t"}
 
   but we have to instantiate it appropriately. These instantiations
-  come from @{text "params1"} and @{text "prems2"}. We can determine
+  come from \<open>params2\<close> and \<open>prems2\<close>. We can determine
   whether we are in the simple or complicated case by checking whether
-  the topmost connective is an @{text "\<forall>"}. The premises in the simple
+  the topmost connective is an \<open>\<forall>\<close>. The premises in the simple
   case cannot have such a quantification, since the first step 
   of @{ML "expand_tac"} was to ``rulify'' the lemma. 
-  The premise of the complicated case must have at least one  @{text "\<forall>"}
-  coming from the quantification over the @{text preds}. So 
+  The premise of the complicated case must have at least one  \<open>\<forall>\<close>
+  coming from the quantification over the \<open>preds\<close>. So 
   we can implement the following function
-*}
+\<close>
 
-ML %grayML{*fun prepare_prem ctxt params2 prems2 prem =  
+ML %grayML\<open>fun prepare_prem ctxt params2 prems2 prem =  
   resolve_tac ctxt [case Thm.prop_of prem of
            _ $ (Const (@{const_name All}, _) $ _) =>
                  prem |> all_elims params2 
                       |> imp_elims prems2
-         | _ => prem] *}
+         | _ => prem]\<close>
 
-text {* 
+text \<open>
   which either applies the premise outright (the default case) or if 
   it has an outermost universal quantification, instantiates it first 
-  with  @{text "params1"} and then @{text "prems1"}. The following 
+  with \<open>params2\<close> and then \<open>prems2\<close>. The following 
   tactic will therefore prove the lemma completely.
-*}
+\<close>
 
-ML %grayML{*fun prove_intro_tac i preds rules =
+ML %grayML\<open>fun prove_intro_tac i preds rules =
   SUBPROOF (fn {params, prems, context, ...} =>
   let
     val cparams = map snd params
@@ -762,20 +757,20 @@
   in
     resolve_tac context [Object_Logic.rulify context (all_elims params1 (nth prems2 i))] 1
     THEN EVERY1 (map (prepare_prem context params2 prems2) prems1)
-  end) *}
+  end)\<close>
 
-text {*
+text \<open>
   Note that the tactic is now @{ML SUBPROOF}, not @{ML FOCUS in Subgoal} anymore. 
   The full proof of the introduction rule is as follows:
-*}
+\<close>
 
 lemma fresh_Lam:
 shows "\<And>a b t. \<lbrakk>a \<noteq> b; fresh a t\<rbrakk> \<Longrightarrow> fresh a (Lam b t)"
-apply(tactic {* expand_tac @{context} @{thms fresh_def} *})
-apply(tactic {* prove_intro_tac 3 [fresh_pred] fresh_rules @{context} 1 *})
+apply(tactic \<open>expand_tac @{context} @{thms fresh_def}\<close>)
+apply(tactic \<open>prove_intro_tac 3 [fresh_pred] fresh_rules @{context} 1\<close>)
 done
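+
+text \<open>
+  The same two tactics should also prove the other introduction rules. For
+  example, for the variable rule, which is rule number \<open>0\<close> (the lemma name
+  \<open>fresh_Var\<close> is ours):
+\<close>
+
+lemma fresh_Var:
+shows "\<And>a b. a \<noteq> b \<Longrightarrow> fresh a (Var b)"
+apply(tactic \<open>expand_tac @{context} @{thms fresh_def}\<close>)
+apply(tactic \<open>prove_intro_tac 0 [fresh_pred] fresh_rules @{context} 1\<close>)
+done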
 
-text {* 
+text \<open>
   Phew!\ldots  
 
   Unfortunately, not everything is done yet. If you look closely
@@ -783,27 +778,27 @@
   Section~\ref{sec:nutshell}, we have  not yet dealt with the case where 
   recursive premises have preconditions. The introduction rule
   of the accessible part is such a rule. 
-*}
+\<close>
 
 
 lemma accpartI:
 shows "\<And>R x. (\<And>y. R y x \<Longrightarrow> accpart R y) \<Longrightarrow> accpart R x"
-apply(tactic {* expand_tac @{context} @{thms accpart_def} *})
-apply(tactic {* chop_test_tac [acc_pred] acc_rules @{context} 1 *})
-apply(tactic {* apply_prem_tac 0 [acc_pred] acc_rules @{context} 1 *})
+apply(tactic \<open>expand_tac @{context} @{thms accpart_def}\<close>)
+apply(tactic \<open>chop_test_tac [acc_pred] acc_rules @{context} 1\<close>)
+apply(tactic \<open>apply_prem_tac 0 [acc_pred] acc_rules @{context} 1\<close>)
 
-txt {*
+txt \<open>
   Here @{ML chop_test_tac} prints out the following
-  values for @{text "params1/2"} and @{text "prems1/2"}
+  values for \<open>params1/2\<close> and \<open>prems1/2\<close>
 
   \begin{isabelle}
-  @{text "Params1 from the rule:"}\\
-  @{text "x"}\\
-  @{text "Params2 from the predicate:"}\\
-  @{text "P"}\\
-  @{text "Prems1 from the rule:"}\\
-  @{text "R ?y x \<Longrightarrow> \<forall>P. (\<forall>x. (\<forall>y. R y x \<longrightarrow> P y) \<longrightarrow> P x) \<longrightarrow> P ?y"}\\
-  @{text "Prems2 from the predicate:"}\\
+  \<open>Params1 from the rule:\<close>\\
+  \<open>x\<close>\\
+  \<open>Params2 from the predicate:\<close>\\
+  \<open>P\<close>\\
+  \<open>Prems1 from the rule:\<close>\\
+  \<open>R ?y x \<Longrightarrow> \<forall>P. (\<forall>x. (\<forall>y. R y x \<longrightarrow> P y) \<longrightarrow> P x) \<longrightarrow> P ?y\<close>\\
+  \<open>Prems2 from the predicate:\<close>\\
   @{term "\<forall>x. (\<forall>y. R y x \<longrightarrow> P y) \<longrightarrow> P x"}\\
   \end{isabelle}
 
@@ -811,21 +806,21 @@
   using @{ML apply_prem_tac}, we are in the goal state
 
   \begin{isabelle}
-  @{text "1."}~@{term "\<And>y. R y x \<Longrightarrow> P y"}
+  \<open>1.\<close>~@{term "\<And>y. R y x \<Longrightarrow> P y"}
   \end{isabelle}
   
   
-*}(*<*)oops(*>*)
+\<close>(*<*)oops(*>*)
 
-text {*
+text \<open>
   In order to make progress, we have to use the precondition
-  @{text "R y x"} (in general there can be many of them). The best way
+  \<open>R y x\<close> (in general there can be many of them). The best way
   to get a handle on these preconditions is to open up another subproof,
-  since the preconditions will then be bound to @{text prems}. Therfore we
+  since the preconditions will then be bound to \<open>prems\<close>. Therefore we
   modify the function @{ML prepare_prem} as follows
-*}
+\<close>
 
-ML %linenosgray{*fun prepare_prem params2 prems2 ctxt prem =  
+ML %linenosgray\<open>fun prepare_prem params2 prems2 ctxt prem =  
   SUBPROOF (fn {prems, ...} =>
   let
     val prem' = prems MRS prem
@@ -835,24 +830,24 @@
                  prem' |> all_elims params2 
                        |> imp_elims prems2
          | _ => prem'] 1
-  end) ctxt *}
+  end) ctxt\<close>
 
-text {*
-  In Line 4 we use the @{text prems} from the @{ML SUBPROOF} and resolve 
-  them with @{text prem}. In the simple cases, that is where the @{text prem} 
-  comes from a non-recursive premise of the rule, @{text prems} will be 
+text \<open>
+  In Line 4 we use the \<open>prems\<close> from the @{ML SUBPROOF} and resolve 
+  them with \<open>prem\<close>. In the simple cases, that is where the \<open>prem\<close> 
+  comes from a non-recursive premise of the rule, \<open>prems\<close> will be 
   just the empty list and the function @{ML_ind MRS in Drule} does nothing. The same happens in the 
   cases where the recursive premises of the rule do not have preconditions. 
   In case there are preconditions, Line 4 discharges them. After
   that we can proceed as before, i.e., check whether the outermost
-  connective is @{text "\<forall>"}.
+  connective is \<open>\<forall>\<close>.
   
   The function @{ML prove_intro_tac} only needs to be changed so that it
   gives the context to @{ML prepare_prem} (Line 8). The modified version
   is below.
-*}
+\<close>
 
-ML %linenosgray{*fun prove_intro_tac i preds rules =
+ML %linenosgray\<open>fun prove_intro_tac i preds rules =
   SUBPROOF (fn {params, prems, context, ...} =>
   let
     val cparams = map snd params
@@ -861,75 +856,75 @@
   in
     resolve_tac context [Object_Logic.rulify context (all_elims params1 (nth prems2 i))] 1
     THEN EVERY1 (map (prepare_prem params2 prems2 context) prems1)
-  end) *}
+  end)\<close>
 
-text {*
+text \<open>
   With these two functions we can now also prove the introduction
   rule for the accessible part. 
-*}
+\<close>
 
 lemma accpartI:
 shows "\<And>R x. (\<And>y. R y x \<Longrightarrow> accpart R y) \<Longrightarrow> accpart R x"
-apply(tactic {* expand_tac @{context} @{thms accpart_def} *})
-apply(tactic {* prove_intro_tac 0 [acc_pred] acc_rules @{context} 1 *})
+apply(tactic \<open>expand_tac @{context} @{thms accpart_def}\<close>)
+apply(tactic \<open>prove_intro_tac 0 [acc_pred] acc_rules @{context} 1\<close>)
 done
 
-text {*
+text \<open>
   Finally we need two functions that string everything together. The first
   function is the tactic that performs the proofs.
-*}
+\<close>
 
-ML %linenosgray{*fun intro_tac defs rules preds i ctxt =
+ML %linenosgray\<open>fun intro_tac defs rules preds i ctxt =
   EVERY1 [Object_Logic.rulify_tac ctxt,
           rewrite_goal_tac ctxt defs,
           REPEAT o (resolve_tac ctxt [@{thm allI}, @{thm impI}]),
-          prove_intro_tac i preds rules ctxt]*}
+          prove_intro_tac i preds rules ctxt]\<close>
 
-text {*
+text \<open>
   Lines 2 to 4 in this tactic correspond to the function @{ML expand_tac}. 
   Some testcases for this tactic are:
-*}
+\<close>
 
 lemma even0_intro:
 shows "even 0"
-by (tactic {* intro_tac eo_defs eo_rules eo_preds 0 @{context} *})
+by (tactic \<open>intro_tac eo_defs eo_rules eo_preds 0 @{context}\<close>)
 
 lemma evenS_intro:
 shows "\<And>m. odd m \<Longrightarrow> even (Suc m)"
-by (tactic {* intro_tac eo_defs eo_rules eo_preds 1 @{context} *})
+by (tactic \<open>intro_tac eo_defs eo_rules eo_preds 1 @{context}\<close>)
 
 lemma fresh_App:
 shows "\<And>a t s. \<lbrakk>fresh a t; fresh a s\<rbrakk> \<Longrightarrow> fresh a (App t s)"
-by (tactic {* 
-  intro_tac @{thms fresh_def} fresh_rules [fresh_pred] 1 @{context} *})
+by (tactic \<open>
+  intro_tac @{thms fresh_def} fresh_rules [fresh_pred] 1 @{context}\<close>)
 
-text {*
+text \<open>
   The second function sets up in Line 4 the goals to be proved (this is easy
   for the introduction rules since they are exactly the rules 
   given by the user) and iterates @{ML intro_tac} over all 
   introduction rules.
-*}
+\<close>
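+
+text \<open>
+  The function below iterates over the rules with the library function
+  @{ML map_index}, which applies a function to every list element together
+  with the element's position. A quick illustration:
+\<close>
+
+ML %grayML\<open>
+(* pairs every element with its index; yields [(0, "a"), (1, "b"), (2, "c")] *)
+val pairs = map_index I ["a", "b", "c"]\<close>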
 
-ML %linenosgray{*fun intros rules preds defs lthy = 
+ML %linenosgray\<open>fun intros rules preds defs lthy = 
 let
   fun intros_aux (i, goal) =
     Goal.prove lthy [] [] goal
       (fn {context, ...} => intro_tac defs rules preds i context)
 in
   map_index intros_aux rules
-end*}
+end\<close>
 
-text {*
+text \<open>
   The iteration is done with the function @{ML_ind map_index in Library} since we
   need the introduction rule together with its number (counted from
-  @{text 0}). This completes the code for the functions deriving the
+  \<open>0\<close>). This completes the code for the functions deriving the
   reasoning infrastructure. It remains to implement some administrative
   code that strings everything together.
-*}
+\<close>
 
-subsection {* Administrative Functions *}
+subsection \<open>Administrative Functions\<close>
 
-text {* 
+text \<open>
   We have produced various theorems (definitions, induction principles and
   introduction rules), but apart from the definitions, we have not yet 
   registered them with the theorem database. This is what the functions 
@@ -938,24 +933,24 @@
 
   For convenience, we use the following 
   three wrappers for this function:
-*}
+\<close>
 
-ML %grayML{*fun note_many qname ((name, attrs), thms) = 
+ML %grayML\<open>fun note_many qname ((name, attrs), thms) = 
   Local_Theory.note ((Binding.qualify false qname name, attrs), thms) 
 
 fun note_single1 qname ((name, attrs), thm) = 
   note_many qname ((name, attrs), [thm]) 
 
 fun note_single2 name attrs (qname, thm) = 
-  note_many (Binding.name_of qname) ((name, attrs), [thm]) *}
+  note_many (Binding.name_of qname) ((name, attrs), [thm])\<close>
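+
+text \<open>
+  A quick sketch of how these wrappers are used (the qualifier \<open>foo\<close> and
+  the theorems are made up for illustration; as in the other testcases we
+  return the unchanged \<open>lthy\<close>):
+\<close>
+
+local_setup %gray \<open>fn lthy =>
+let
+  (* registers refl and sym under the qualified name "foo.intros" *)
+  val ((_, thms), lthy') = 
+    note_many "foo" ((@{binding "intros"}, []), [@{thm refl}, @{thm sym}]) lthy
+in
+  pwriteln (pretty_thms_no_vars lthy' thms); lthy
+end\<close>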
 
-text {*
-  The function that ``holds everything together'' is @{text "add_inductive"}. 
-  Its arguments are the specification of the predicates @{text "pred_specs"} 
-  and the introduction rules @{text "rule_spec"}.   
-*}
+text \<open>
+  The function that ``holds everything together'' is \<open>add_inductive\<close>. 
+  Its arguments are the specification of the predicates \<open>pred_specs\<close> 
+  and the introduction rules \<open>rule_specs\<close>.   
+\<close>
 
-ML %linenosgray{*fun add_inductive pred_specs rule_specs lthy =
+ML %linenosgray\<open>fun add_inductive pred_specs rule_specs lthy =
 let
   val mxs = map snd pred_specs
   val pred_specs' = map fst pred_specs
@@ -981,35 +976,35 @@
                Attrib.internal (K (Induct.induct_pred ""))]) 
              (prednames ~~ ind_prins) 
         |> snd
-end*}
+end\<close>
 
-text {*
+text \<open>
   In Line 3 the function extracts the syntax annotations from the predicates. 
   Lines 4 to 6 extract the names of the predicates and generate
   the variable terms (with types) corresponding to the predicates. 
   Line 7 produces the argument types for each predicate. 
 
   Line 9 extracts the introduction rules from the specifications
-  and stores also in @{text namesattrs} the names and attributes the
+  and also stores in \<open>namesattrs\<close> the names and attributes the
   user may have attached to these rules.
 
   Line 11 produces the definitions and also registers the definitions
-  in the local theory @{text "lthy'"}. The next two lines produce
+  in the local theory \<open>lthy'\<close>. The next two lines produce
   the induction principles and the introduction rules (all of them
-  as theorems). Both need the local theory @{text lthy'} in which
+  as theorems). Both need the local theory \<open>lthy'\<close> in which
   the definitions have been registered.
 
   Line 15 produces the name that is used to register the introduction
   rules. It is custom to collect all introduction rules under 
-  @{text "string.intros"}, whereby @{text "string"} stands for the 
+  \<open>string.intros\<close>, whereby \<open>string\<close> stands for the 
   @{text [quotes] "_"}-separated list of predicate names (for example
-  @{text "even_odd"}. Also by custom, the case names in intuction 
+  \<open>even_odd\<close>). Also by custom, the case names in induction 
   proofs correspond to the names of the introduction rules. These
   are generated in Line 16.
 
-  Lines 18 and 19 now add to @{text "lthy'"} all the introduction rules 
-  und induction principles under the name @{text "mut_name.intros"} and
-  @{text "mut_name.inducts"}, respectively (see previous paragraph).
+  Lines 18 and 19 now add to \<open>lthy'\<close> all the introduction rules 
+  and induction principles under the names \<open>mut_name.intros\<close> and
+  \<open>mut_name.inducts\<close>, respectively (see previous paragraph).
   
   Line 20 additionally adds every introduction rule under its own name
   (given by the user).\footnote{FIXME: what happens if the user did not give
@@ -1024,5 +1019,5 @@
   in Section~\ref{sec:interface}.\footnote{FIXME: Describe @{ML Induct.induct_pred}. 
   Why the mut-name? 
   What does @{ML Binding.qualify} do?}
-*}
+\<close>
 (*<*)end(*>*)
--- a/ProgTutorial/Package/Ind_Extensions.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Package/Ind_Extensions.thy	Tue May 14 17:10:47 2019 +0200
@@ -2,7 +2,7 @@
 imports Simple_Inductive_Package Ind_Intro
 begin
 
-section {* Extensions of the Package (TBD) *}
+section \<open>Extensions of the Package (TBD)\<close>
 
 (*
 text {*
@@ -29,7 +29,7 @@
 *}
 *)
 
-text {*
+text \<open>
   Things to include at the end:
 
   \begin{itemize}
@@ -39,7 +39,7 @@
   \item say something about the two interfaces for calling packages
   \end{itemize}
   
-*}
+\<close>
 
 (*
 simple_inductive
@@ -129,32 +129,32 @@
   accpartI: "(\<And>y. R y x \<Longrightarrow> accpart'' R y) \<Longrightarrow> accpart'' R x"
 *)
 
-text {*
+text \<open>
   \begin{exercise}
   In Section~\ref{sec:nutshell} we required that introduction rules must be of the
   form
 
   \begin{isabelle}
-  @{text "rule ::= \<And>xs. As \<Longrightarrow> (\<And>ys. Bs \<Longrightarrow> pred ss)\<^sup>* \<Longrightarrow> pred ts"}
+  \<open>rule ::= \<And>xs. As \<Longrightarrow> (\<And>ys. Bs \<Longrightarrow> pred ss)\<^sup>* \<Longrightarrow> pred ts\<close>
   \end{isabelle}
 
-  where the @{text "As"} and @{text "Bs"} can be any collection of formulae
-  not containing the @{text "preds"}. This requirement is important,
+  where the \<open>As\<close> and \<open>Bs\<close> can be any collection of formulae
+  not containing the \<open>preds\<close>. This requirement is important,
   because if violated, the theory behind the inductive package does not work
   and also the proofs break. Write code that tests whether the introduction
   rules given by the user fit into the scheme described above. Hint: It 
   is not important in which order the premises are given; the
-  @{text "As"} and @{text "(\<And>ys. Bs \<Longrightarrow> pred ss)"} premises can occur
+  \<open>As\<close> and \<open>(\<And>ys. Bs \<Longrightarrow> pred ss)\<close> premises can occur
   in any order.
   \end{exercise}
-*}  
+\<close>  
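+
+text \<open>
+  As a possible starting point for this exercise, the following sketch
+  checks just one of the conditions, namely that a formula does not
+  mention any of the \<open>preds\<close> (the function name is made up):
+\<close>
+
+ML %grayML\<open>
+(* true if the term does not contain any of the given predicates *)
+fun no_preds preds trm =
+  not (exists_subterm (member (op aconv) preds) trm)\<close>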
 
-text_raw {*
+text_raw \<open>
   \begin{exercise}
-  If you define @{text even} and @{text odd} with the standard inductive
+  If you define \<open>even\<close> and \<open>odd\<close> with the standard inductive
   package
   \begin{isabelle}
-*}
+\<close>
 inductive 
   even_2 and odd_2
 where
@@ -162,11 +162,11 @@
 | evenS_2: "odd_2 m \<Longrightarrow> even_2 (Suc m)"
 | oddS_2:  "even_2 m \<Longrightarrow> odd_2 (Suc m)"
 
-text_raw{*
+text_raw\<open>
   \end{isabelle}
 
-  you will see that the generated induction principle for @{text "even'"} (namely
-  @{text "even_2_odd_2.inducts"} has the additional assumptions 
+  you will see that the generated induction principle for \<open>even_2\<close> (namely
+  \<open>even_2_odd_2.inducts\<close>) has the additional assumptions 
   @{prop "odd_2 m"} and @{prop "even_2 m"} in the recursive cases. These additional 
   assumptions can sometimes make ``life easier'' in proofs. Since 
   more assumptions can be made when proving properties, these induction principles 
@@ -186,29 +186,29 @@
   provides a richer reasoning infrastructure. The code of this package can be found in 
   @{ML_file "HOL/Tools/inductive.ML"}.
   \end{readmore}
-*}
+\<close>
 
-section {* Definitional Packages *}
+section \<open>Definitional Packages\<close>
 
-text {* Type declarations *}
+text \<open>Type declarations\<close>
 
 (*
 ML %grayML{*Typedef.add_typedef_global (@{binding test}, [], NoSyn)
   @{term "{1}::nat set"} NONE (simp_tac @{context} 1) @{theory} *}
 *)
 
-ML %grayML{*fun pat_completeness_auto ctxt =
+ML %grayML\<open>fun pat_completeness_auto ctxt =
   Pat_Completeness.pat_completeness_tac ctxt 1
-    THEN auto_tac ctxt*}
+    THEN auto_tac ctxt\<close>
 
-ML {*
+ML \<open>
   val conf = Function_Common.default_config
-*}
+\<close>
 
 datatype foo = Foo nat
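+
+text \<open>
+  The \<open>local_setup\<close> below calls the function package programmatically: it
+  defines a function \<open>baz\<close> on this datatype by the single equation
+  \<open>baz (Foo x) = x\<close>, using the tactic above to discharge the pattern
+  completeness and compatibility goals.
+\<close>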
 
-local_setup{*Function.add_function [(@{binding "baz"}, NONE, NoSyn)] 
+local_setup\<open>Function.add_function [(@{binding "baz"}, NONE, NoSyn)] 
     [((Binding.empty_atts,@{term "\<And>x. baz (Foo x) = x"}),[],[])]
-      conf pat_completeness_auto #> snd*}
+      conf pat_completeness_auto #> snd\<close>
 
 (*<*)end(*>*)
--- a/ProgTutorial/Package/Ind_General_Scheme.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Package/Ind_General_Scheme.thy	Tue May 14 17:10:47 2019 +0200
@@ -36,45 +36,42 @@
 (*>*)
 
 
-section {* The Code in a Nutshell\label{sec:nutshell} *}
+section \<open>The Code in a Nutshell\label{sec:nutshell}\<close>
 
 
-text {*
+text \<open>
   The inductive package will generate the reasoning infrastructure for
-  mutually recursive predicates, say @{text "pred\<^sub>1\<dots>pred\<^sub>n"}. In what
+  mutually recursive predicates, say \<open>pred\<^sub>1\<dots>pred\<^sub>n\<close>. In what
   follows we will have the convention that various, possibly empty collections
   of ``things'' (lists, terms, nested implications and so on) are indicated either by
   adding an @{text [quotes] "s"} or by adding a superscript @{text [quotes]
-  "\<^sup>*"}. The shorthand for the predicates will therefore be @{text
-  "preds"} or @{text "pred\<^sup>*"}. In the case of the predicates there must
+  "\<^sup>*"}. The shorthand for the predicates will therefore be \<open>preds\<close> or \<open>pred\<^sup>*\<close>. In the case of the predicates there must
   be, of course, at least a single one in order to obtain a meaningful
   definition.
 
-  The input for the inductive package will be some @{text "preds"} with possible 
+  The input for the inductive package will be some \<open>preds\<close> with possible 
   typing and syntax annotations, and also some introduction rules. Below we refer to the 
-  introduction rules short as @{text "rules"}. Borrowing some idealised Isabelle 
-  notation, one such @{text "rule"} is assumed to be of the form
+  introduction rules simply as \<open>rules\<close>. Borrowing some idealised Isabelle 
+  notation, one such \<open>rule\<close> is assumed to be of the form
 
   \begin{isabelle}
-  @{text "rule ::= 
-  \<And>xs. \<^latex>\<open>$\\underbrace{\\mbox{\<close>As\<^latex>\<open>}}_{\\text{\\makebox[0mm]{\\rm non-recursive premises}}}$\<close> \<Longrightarrow> 
-  \<^latex>\<open>$\\underbrace{\\mbox{\<close>(\<And>ys. Bs \<Longrightarrow> pred ss)\<^sup>*\<^latex>\<open>}}_{\\text{\\rm recursive premises}}$>\<close> 
-  \<Longrightarrow> pred ts"}
+  \<open>rule ::= 
+  \<And>xs. \<^latex>\<open>$\underbrace{\mbox{\<close>As\<^latex>\<open>}}_{\text{\makebox[0mm]{\rm non-recursive premises}}}$\<close> \<Longrightarrow> 
+  \<^latex>\<open>$\underbrace{\mbox{\<close>(\<And>ys. Bs \<Longrightarrow> pred ss)\<^sup>*\<^latex>\<open>}}_{\text{\rm recursive premises}}$\<close> 
+  \<Longrightarrow> pred ts\<close>
   \end{isabelle}
   
-  For the purposes here, we will assume the @{text rules} have this format and
+  For the purposes here, we will assume the \<open>rules\<close> have this format and
   omit any code that actually tests this. Therefore ``things'' can go horribly
-  wrong, if the @{text "rules"} are not of this form.  The @{text As} and
-  @{text Bs} in a @{text "rule"} stand for formulae not involving the
-  inductive predicates @{text "preds"}; the instances @{text "pred ss"} and
-  @{text "pred ts"} can stand for different predicates, like @{text
-  "pred\<^sub>1 ss"} and @{text "pred\<^sub>2 ts"}, in case mutual recursive
-  predicates are defined; the terms @{text ss} and @{text ts} are the
+  wrong if the \<open>rules\<close> are not of this form. The \<open>As\<close> and
+  \<open>Bs\<close> in a \<open>rule\<close> stand for formulae not involving the
+  inductive predicates \<open>preds\<close>; the instances \<open>pred ss\<close> and
+  \<open>pred ts\<close> can stand for different predicates, like \<open>pred\<^sub>1 ss\<close> and \<open>pred\<^sub>2 ts\<close>, in case mutually recursive
+  predicates are defined; the terms \<open>ss\<close> and \<open>ts\<close> are the
   arguments of these predicates. Every formula left of @{text [quotes] "\<Longrightarrow> pred
-  ts"} is a premise of the rule. The outermost quantified variables @{text
-  "xs"} are usually omitted in the user's input. The quantification for the
-  variables @{text "ys"} is local with respect to one recursive premise and
-  must be given. Some examples of @{text "rule"}s are
+  ts"} is a premise of the rule. The outermost quantified variables \<open>xs\<close> are usually omitted in the user's input. The quantification for the
+  variables \<open>ys\<close> is local with respect to one recursive premise and
+  must be given. Some examples of \<open>rule\<close>s are
 
 
   @{thm [display] fresh_var[no_vars]}
@@ -88,35 +85,35 @@
   @{thm [display] accpartI[no_vars]}
 
   has a single recursive premise that has a precondition. As is custom, all 
-  rules are stated without the leading meta-quantification @{text "\<And>xs"}.
+  rules are stated without the leading meta-quantification \<open>\<And>xs\<close>.
 
   The output of the inductive package will be definitions for the predicates, 
   induction principles and introduction rules.  For the definitions we need to have the
-  @{text rules} in a form where the meta-quantifiers and meta-implications are
-  replaced by their object logic equivalents. Therefore an @{text "orule"} is
+  \<open>rules\<close> in a form where the meta-quantifiers and meta-implications are
+  replaced by their object logic equivalents. Therefore an \<open>orule\<close> is
   of the form
 
   @{text [display] "orule ::= \<forall>xs. As \<longrightarrow> (\<forall>ys. Bs \<longrightarrow> pred ss)\<^sup>* \<longrightarrow> pred ts"}
 
-  A definition for the predicate @{text "pred"} has then the form
+  A definition for the predicate \<open>pred\<close> has then the form
 
   @{text [display] "def ::= pred \<equiv> \<lambda>zs. \<forall>preds. orules \<longrightarrow> pred zs"}
 
-  The induction principles for every predicate @{text "pred"} are of the
+  The induction principles for every predicate \<open>pred\<close> are of the
   form
 
   @{text [display] "ind ::= pred ?zs \<Longrightarrow> rules[preds := ?Ps] \<Longrightarrow> ?P ?zs"}
 
-  where in the @{text "rules"}-part every @{text pred} is replaced by a fresh
-  schematic variable @{text "?P"}.
+  where in the \<open>rules\<close>-part every \<open>pred\<close> is replaced by a fresh
+  schematic variable \<open>?P\<close>.
 
-  In order to derive an induction principle for the predicate @{text "pred"},
-  we first transform @{text ind} into the object logic and fix the schematic variables. 
+  In order to derive an induction principle for the predicate \<open>pred\<close>,
+  we first transform \<open>ind\<close> into the object logic and fix the schematic variables. 
   Hence we have to prove a formula of the form
 
   @{text [display] "pred zs \<longrightarrow> orules[preds := Ps] \<longrightarrow> P zs"}
 
-  If we assume @{text "pred zs"} and unfold its definition, then we have an
+  If we assume \<open>pred zs\<close> and unfold its definition, then we have an
   assumption
   
   @{text [display] "\<forall>preds. orules \<longrightarrow> pred zs"} 
@@ -125,12 +122,12 @@
 
   @{text [display] "orules[preds := Ps] \<longrightarrow> P zs"}
 
-  This can be done by instantiating the @{text "\<forall>preds"}-quantification 
-  with the @{text "Ps"}. Then we are done since we are left with a simple 
+  This can be done by instantiating the \<open>\<forall>preds\<close>-quantification 
+  with the \<open>Ps\<close>. Then we are done since we are left with a simple 
   identity.
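+
+  For example, for \<open>even\<close> the unfolded assumption is
+  \<open>\<forall>even odd. orules \<longrightarrow> even z\<close>, and instantiating \<open>even\<close> and \<open>odd\<close>
+  with the new predicates makes the assumption and the goal coincide.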
   
-  Although the user declares the introduction rules @{text rules}, they must 
-  also be derived from the @{text defs}. These derivations are a bit involved. 
+  Although the user declares the introduction rules \<open>rules\<close>, they must 
+  also be derived from the \<open>defs\<close>. These derivations are a bit involved. 
   Assuming we want to prove the introduction rule 
 
   @{text [display] "\<And>xs. As \<Longrightarrow> (\<And>ys. Bs \<Longrightarrow> pred ss)\<^sup>* \<Longrightarrow> pred ts"}
@@ -138,29 +135,29 @@
   then we have assumptions of the form
 
   \begin{isabelle}
-  (i)~~@{text "As"}\\
-  (ii)~@{text "(\<And>ys. Bs \<Longrightarrow> pred ss)\<^sup>*"}
+  (i)~~\<open>As\<close>\\
+  (ii)~\<open>(\<And>ys. Bs \<Longrightarrow> pred ss)\<^sup>*\<close>
   \end{isabelle}
 
   and must show the goal
 
   @{text [display] "pred ts"}
   
-  If we now unfold the definitions for the @{text preds}, we have assumptions
+  If we now unfold the definitions for the \<open>preds\<close>, we have assumptions
 
   \begin{isabelle}
-  (i)~~~@{text "As"}\\
-  (ii)~~@{text "(\<And>ys. Bs \<Longrightarrow> \<forall>preds. orules \<longrightarrow> pred ss)\<^sup>*"}\\
-  (iii)~@{text "orules"}
+  (i)~~~\<open>As\<close>\\
+  (ii)~~\<open>(\<And>ys. Bs \<Longrightarrow> \<forall>preds. orules \<longrightarrow> pred ss)\<^sup>*\<close>\\
+  (iii)~\<open>orules\<close>
   \end{isabelle}
 
   and need to show
 
   @{text [display] "pred ts"}
 
-  In the last step we removed some quantifiers and moved the precondition @{text "orules"}  
-  into the assumption. The @{text "orules"} stand for all introduction rules that are given 
-  by the user. We apply the @{text orule} that corresponds to introduction rule we are 
+  In the last step we removed some quantifiers and moved the precondition \<open>orules\<close>  
+  into the assumption. The \<open>orules\<close> stand for all introduction rules that are given 
+  by the user. We apply the \<open>orule\<close> that corresponds to the introduction rule we are 
   proving. After transforming the object connectives into meta-connectives, this introduction 
   rule must necessarily be of the form 
 
@@ -170,34 +167,34 @@
   goals of the form
 
   \begin{isabelle}
-  (a)~@{text "As"}\\
-  (b)~@{text "(\<And>ys. Bs \<Longrightarrow> pred ss)\<^sup>*"}
+  (a)~\<open>As\<close>\\
+  (b)~\<open>(\<And>ys. Bs \<Longrightarrow> pred ss)\<^sup>*\<close>
   \end{isabelle}
 
-  We can immediately discharge the goals @{text "As"} using the assumptions in 
-  @{text "(i)"}. The goals in @{text "(b)"} can be discharged as follows: we 
-  assume the @{text "Bs"} and prove @{text "pred ss"}. For this we resolve the 
-  @{text "Bs"}  with the assumptions in @{text "(ii)"}. This gives us the 
+  We can immediately discharge the goals \<open>As\<close> using the assumptions in 
+  \<open>(i)\<close>. The goals in \<open>(b)\<close> can be discharged as follows: we 
+  assume the \<open>Bs\<close> and prove \<open>pred ss\<close>. For this we resolve the 
+  \<open>Bs\<close>  with the assumptions in \<open>(ii)\<close>. This gives us the 
   assumptions
 
   @{text [display] "(\<forall>preds. orules \<longrightarrow> pred ss)\<^sup>*"}
 
   Instantiating the universal quantifiers and then resolving with the assumptions 
-  in @{text "(iii)"} gives us @{text "pred ss"}, which is the goal we are after.
+  in \<open>(iii)\<close> gives us \<open>pred ss\<close>, which is the goal we are after.
   This completes the proof for introduction rules.
 
   What remains is to implement in Isabelle the reasoning outlined in this section. 
   We will describe the code in the next section. For building testcases, we use the shorthands for 
-  @{text "even/odd"}, @{term "fresh"} and @{term "accpart"}
+  \<open>even/odd\<close>, @{term "fresh"} and @{term "accpart"}
   defined in Figure~\ref{fig:shorthands}.
-*}
+\<close>
 
 
-text_raw{*
+text_raw\<open>
 \begin{figure}[p]
 \begin{minipage}{\textwidth}
-\begin{isabelle}*}  
-ML %grayML{*(* even-odd example *)
+\begin{isabelle}\<close>  
+ML %grayML\<open>(* even-odd example *)
 val eo_defs = [@{thm even_def}, @{thm odd_def}]
 
 val eo_rules =  
@@ -240,17 +237,17 @@
 (* accessible-part example *)
 val acc_rules = 
      [@{prop "\<And>R x. (\<And>y. R y x \<Longrightarrow> accpart R y) \<Longrightarrow> accpart R x"}]
-val acc_pred = @{term "accpart::('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow>'a \<Rightarrow> bool"}*}
-text_raw{*
+val acc_pred = @{term "accpart::('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow>'a \<Rightarrow> bool"}\<close>
+text_raw\<open>
 \end{isabelle}
 \end{minipage}
-\caption{Shorthands for the inductive predicates @{text "even"}/@{text "odd"}, 
-  @{text "fresh"} and @{text "accpart"}. The names of these shorthands follow 
-  the convention @{text "rules"}, @{text "orules"}, @{text "preds"} and so on. 
+\caption{Shorthands for the inductive predicates \<open>even\<close>/\<open>odd\<close>, 
+  \<open>fresh\<close> and \<open>accpart\<close>. The names of these shorthands follow 
+  the convention \<open>rules\<close>, \<open>orules\<close>, \<open>preds\<close> and so on. 
   The purpose of these shorthands is to simplify the construction of testcases
   in Section~\ref{sec:code}.\label{fig:shorthands}}
 \end{figure}
-*}
+\<close>
 
 
 
--- a/ProgTutorial/Package/Ind_Interface.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Package/Ind_Interface.thy	Tue May 14 17:10:47 2019 +0200
@@ -3,13 +3,13 @@
 keywords "simple_inductive2" :: thy_decl
 begin
 
-section {* Parsing and Typing the Specification\label{sec:interface} *}
+section \<open>Parsing and Typing the Specification\label{sec:interface}\<close>
 
-text_raw {*
+text_raw \<open>
 \begin{figure}[t]
 \begin{boxedminipage}{\textwidth}
 \begin{isabelle}
-*}
+\<close>
 simple_inductive
   trcl :: "('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> 'a \<Rightarrow> 'a \<Rightarrow> bool"
 where
@@ -42,16 +42,16 @@
 | fresh_app: "\<lbrakk>fresh a t; fresh a s\<rbrakk> \<Longrightarrow> fresh a (App t s)"
 | fresh_lam1: "fresh a (Lam a t)"
 | fresh_lam2: "\<lbrakk>a\<noteq>b; fresh a t\<rbrakk> \<Longrightarrow> fresh a (Lam b t)"
-text_raw {*
+text_raw \<open>
 \end{isabelle}
 \end{boxedminipage}
 \caption{Specification given by the user for the inductive predicates
 @{term "trcl"}, @{term "even"} and @{term "odd"}, @{term "accpart"} and 
 @{term "fresh"}.\label{fig:specs}}
 \end{figure}  
-*}
+\<close>
 
-text {* 
+text \<open>
   To be able to write down the specifications of inductive predicates, we have
   to introduce a new command (see Section~\ref{sec:newcommand}).  As the
   keyword for the new command we chose \simpleinductive{}. Examples of
@@ -59,31 +59,31 @@
   Figure~\ref{fig:specs}. The syntax used in these examples more or
   less translates directly into the parser:
 
-*}
+\<close>
 
-ML %grayML{*val spec_parser = 
+ML %grayML\<open>val spec_parser = 
    Parse.vars -- 
    Scan.optional 
      (Parse.$$$ "where" |--
         Parse.!!! 
           (Parse.enum1 "|" 
-             (Parse_Spec.opt_thm_name ":" -- Parse.prop))) []*}
+             (Parse_Spec.opt_thm_name ":" -- Parse.prop))) []\<close>
 
-text {*
+text \<open>
   which we explained in Section~\ref{sec:parsingspecs}.  There is no code included 
   for parsing the keyword and what is called a \emph{target}. The latter  can be given 
   optionally after the keyword. The target is an ``advanced'' feature which we will 
   inherit for ``free'' from the infrastructure on which we shall build the package. 
   The target stands for a locale and allows us to specify
-*}
+\<close>
 
 locale rel =
   fixes R :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
 
-text {*
+text \<open>
   and then define the transitive closure and the accessible part of this 
   locale as follows:
-*}
+\<close>
 
 simple_inductive (in rel) 
   trcl' 
@@ -95,11 +95,11 @@
   accpart'
 where
   accpartI: "(\<And>y. R y x \<Longrightarrow> accpart' y) \<Longrightarrow> accpart' x"
-(*<*)ML %no{*fun filtered_input str = 
+(*<*)ML %no\<open>fun filtered_input str = 
   filter Token.is_proper (Token.explode (Thy_Header.get_keywords' @{context}) Position.none str) 
-fun parse p input = Scan.finite Token.stopper (Scan.error p) input*}(*>*)
-text {*
-  Note that in these definitions the parameter @{text R}, standing for the
+fun parse p input = Scan.finite Token.stopper (Scan.error p) input\<close>(*>*)
+text \<open>
+  Note that in these definitions the parameter \<open>R\<close>, standing for the
   relation, is left implicit. For the moment we will ignore this kind
   of implicit parameters and rely on the fact that the infrastructure will
   deal with them. Later, however, we will come back to them.
@@ -127,24 +127,24 @@
   and specifications of the introduction rules. This is all the information we
   need for calling the package and setting up the keyword. The latter is
   done in Lines 5 to 7 in the code below.
-*}
-(*<*)ML %no{*fun add_inductive_cmd pred_specs rule_specs lthy = lthy 
-   fun add_inductive pred_specs rule_specs lthy = lthy*}(*>*) (* FIXME: Is the dummy simple_inductive2 installed with ML, before there was ML_val*)
-ML %linenosgray{*val specification : (local_theory -> local_theory) parser =
+\<close>
+(*<*)ML %no\<open>fun add_inductive_cmd pred_specs rule_specs lthy = lthy 
+   fun add_inductive pred_specs rule_specs lthy = lthy\<close>(*>*) (* FIXME: Is the dummy simple_inductive2 installed with ML, before there was ML_val*)
+ML %linenosgray\<open>val specification : (local_theory -> local_theory) parser =
   spec_parser >>
     (fn (pred_specs, rule_specs) => add_inductive_cmd pred_specs rule_specs)
 
 val _ = Outer_Syntax.local_theory @{command_keyword "simple_inductive2"}
           "definition of simple inductive predicates"
-             specification*}
+             specification\<close>
 
-text {*
+text \<open>
   We call @{ML_ind  local_theory in Outer_Syntax} with the kind-indicator 
   @{ML_ind  thy_decl in Keyword} since the package does not need to open 
   up any proof (see Section~\ref{sec:newcommand}). 
-  The auxiliary function @{text specification} in Lines 1 to 3
+  The auxiliary function \<open>specification\<close> in Lines 1 to 3
   gathers the information from the parser to be processed further
-  by the function @{text "add_inductive_cmd"}, which we describe below. 
+  by the function \<open>add_inductive_cmd\<close>, which we describe below. 
 
   Note that the predicates, when they come out of the parser, are just some
   ``naked'' strings: they have no type yet (even if we annotate them with
@@ -158,23 +158,23 @@
   user and the type constraints by the ``ambient'' theory. It returns 
   the type for the predicates and also returns typed terms for the
   introduction rules. So at the heart of the function 
-  @{text "add_inductive_cmd"} is a call to @{ML read_multi_specs in Specification}.
-*}
+  \<open>add_inductive_cmd\<close> is a call to @{ML read_multi_specs in Specification}.
+\<close>
 
-ML_val%grayML{*fun add_inductive_cmd pred_specs rule_specs lthy =
+ML_val%grayML\<open>fun add_inductive_cmd pred_specs rule_specs lthy =
 let
   val ((pred_specs', rule_specs'), _) = 
          Specification.read_multi_specs pred_specs rule_specs lthy
 in
   add_inductive pred_specs' rule_specs' lthy
-end*}
+end\<close>
 
-text {*
+text \<open>
   Once we have the input data as some internal datastructure, we call
-  the function @{text add_inductive}. This function does the  heavy duty 
+  the function \<open>add_inductive\<close>. This function does the heavy 
   lifting in the package: it generates definitions for the 
   predicates and derives from them corresponding induction principles and 
   introduction rules. The description of this function will span over
   the next two sections.
-*}
+\<close>
 (*<*)end(*>*)
--- a/ProgTutorial/Package/Ind_Intro.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Package/Ind_Intro.thy	Tue May 14 17:10:47 2019 +0200
@@ -3,9 +3,9 @@
 begin
 
 
-chapter {* How to Write a Definitional Package\label{chp:package} *}
+chapter \<open>How to Write a Definitional Package\label{chp:package}\<close>
 
-text {*
+text \<open>
    \begin{flushright}
   {\em
   ``We will most likely never realize the full importance of painting the Tower,\\ 
@@ -44,6 +44,6 @@
   Moreover, it only proves a weaker form of the induction principle for inductive
   predicates. But it illustrates the implementation of a typical package in
   Isabelle.
-*}
+\<close>
 
 end
--- a/ProgTutorial/Package/Ind_Prelims.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Package/Ind_Prelims.thy	Tue May 14 17:10:47 2019 +0200
@@ -2,9 +2,9 @@
 imports Ind_Intro 
 begin
 
-section{* Preliminaries *}
+section\<open>Preliminaries\<close>
   
-text {*
+text \<open>
   The user will just give a specification of inductive predicate(s) and
   expects the package to produce a convenient reasoning
   infrastructure. This infrastructure needs to be derived from the definition
@@ -18,7 +18,7 @@
   the ML-implementation we will develop in later sections.
 
 
-  We first consider the transitive closure of a relation @{text R}. The 
+  We first consider the transitive closure of a relation \<open>R\<close>. The 
   ``pencil-and-paper'' specification for the transitive closure is:
 
   \begin{center}\small
@@ -29,18 +29,17 @@
   As mentioned before, the package has to make an appropriate definition for
   @{term "trcl"}. Since an inductively defined predicate is the least
   predicate closed under a collection of introduction rules, the predicate
-  @{text "trcl R x y"} can be defined so that it holds if and only if @{text
-  "P x y"} holds for every predicate @{text P} closed under the rules
+  \<open>trcl R x y\<close> can be defined so that it holds if and only if \<open>P x y\<close> holds for every predicate \<open>P\<close> closed under the rules
   above. This gives rise to the definition
-*}
+\<close>
 
 definition "trcl \<equiv> 
      \<lambda>R x y. \<forall>P. (\<forall>x. P x x) 
                   \<longrightarrow> (\<forall>x y z. R x y \<longrightarrow> P y z \<longrightarrow> P x z) \<longrightarrow> P x y"
 
-text {*
-  Note we have to use the object implication @{text "\<longrightarrow>"} and object
-  quantification @{text "\<forall>"} for stating this definition (there is no other
+text \<open>
+  Note we have to use the object implication \<open>\<longrightarrow>\<close> and object
+  quantification \<open>\<forall>\<close> for stating this definition (there is no other
   way for definitions in HOL). However, the introduction rules and induction
   principles associated with the transitive closure should use the
   meta-connectives, since they simplify the reasoning for the user.
@@ -49,11 +48,11 @@
   With this definition, the proof of the induction principle for @{term trcl}
   is almost immediate. It suffices to convert all the meta-level
   connectives in the lemma to object-level connectives using the
-  proof method @{text atomize} (Line 4 below), expand the definition of @{term trcl}
+  proof method \<open>atomize\<close> (Line 4 below), expand the definition of @{term trcl}
   (Lines 5 and 6), eliminate the universal quantifier contained in it (Line~7),
-  and then solve the goal by @{text assumption} (Line 8).
+  and then solve the goal by \<open>assumption\<close> (Line 8).
 
-*}
+\<close>
 
 lemma %linenos trcl_induct:
 assumes "trcl R x y"
@@ -65,10 +64,10 @@
 apply(assumption)
 done
 
-text {*
+text \<open>
   The proofs for the introduction rules are slightly more complicated. 
   For the first one, we need to prove the following lemma:
-*}
+\<close>
 
 lemma %linenos trcl_base: 
 shows "trcl R x x"
@@ -78,45 +77,45 @@
 apply(assumption)
 done
 
-text {*
+text \<open>
   We again first unfold the definition and apply the introduction rules 
-  for @{text "\<forall>"} and @{text "\<longrightarrow>"} as often as possible (Lines 3 and 4).
+  for \<open>\<forall>\<close> and \<open>\<longrightarrow>\<close> as often as possible (Lines 3 and 4).
   We then end up in the goal state:
-*}
+\<close>
 
 (*<*)lemma "trcl R x x"
 apply (unfold trcl_def)
 apply (rule allI impI)+(*>*)
-txt {* @{subgoals [display]} *}
+txt \<open>@{subgoals [display]}\<close>
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   The two assumptions come from the definition of @{term trcl} and correspond
   to the introduction rules. Thus, all we have to do is to eliminate the
   universal quantifier in front of the first assumption (Line 5), and then
-  solve the goal by @{text assumption} (Line 6).
-*}
+  solve the goal by \<open>assumption\<close> (Line 6).
+\<close>
 
-text {*
+text \<open>
   Next we have to show that the second introduction rule also follows from the
   definition.  Since this rule has premises, the proof is a bit more
   involved. After unfolding the definitions and applying the introduction
-  rules for @{text "\<forall>"} and @{text "\<longrightarrow>"}
-*}
+  rules for \<open>\<forall>\<close> and \<open>\<longrightarrow>\<close>
+\<close>
 
 lemma trcl_step: 
 shows "R x y \<Longrightarrow> trcl R y z \<Longrightarrow> trcl R x z"
 apply (unfold trcl_def)
 apply (rule allI impI)+
 
-txt {* 
+txt \<open>
   we obtain the goal state
 
   @{subgoals [display]} 
 
   To see better where we are, let us explicitly name the assumptions 
   by starting a subproof.
-*}
+\<close>
 
 proof -
   case (goal1 P)
@@ -127,42 +126,40 @@
   have r2: "\<forall>x y z. R x y \<longrightarrow> P y z \<longrightarrow> P x z" by fact
   show "P x z"
   
-txt {*
-  The assumptions @{text "p1"} and @{text "p2"} correspond to the premises of
-  the second introduction rule (unfolded); the assumptions @{text "r1"} and @{text "r2"}
-  come from the definition of @{term trcl}. We apply @{text "r2"} to the goal
+txt \<open>
+  The assumptions \<open>p1\<close> and \<open>p2\<close> correspond to the premises of
+  the second introduction rule (unfolded); the assumptions \<open>r1\<close> and \<open>r2\<close>
+  come from the definition of @{term trcl}. We apply \<open>r2\<close> to the goal
   @{term "P x z"}. In order for this assumption to be applicable as a rule, we
   have to eliminate the universal quantifier and turn the object-level
-  implications into meta-level ones. This can be accomplished using the @{text
-  rule_format} attribute. So we continue the proof with:
+  implications into meta-level ones. This can be accomplished using the \<open>rule_format\<close> attribute. So we continue the proof with:
 
-*}
+\<close>
 
     apply (rule r2[rule_format])
 
-txt {*
+txt \<open>
   This gives us two new subgoals
 
   @{subgoals [display]} 
 
-  which can be solved using assumptions @{text p1} and @{text p2}. The latter
+  which can be solved using assumptions \<open>p1\<close> and \<open>p2\<close>. The latter
   involves a quantifier and implications that have to be eliminated before it
   can be applied. To avoid potential problems with higher-order unification,
-  we explicitly instantiate the quantifier to @{text "P"} and also match
-  explicitly the implications with @{text "r1"} and @{text "r2"}. This gives
+  we explicitly instantiate the quantifier to \<open>P\<close> and also explicitly
+  match the implications with \<open>r1\<close> and \<open>r2\<close>. This gives
   the proof:
-*}
+\<close>
 
     apply(rule p1)
     apply(rule p2[THEN spec[where x=P], THEN mp, THEN mp, OF r1, OF r2])
     done
 qed
 
-text {*
+text \<open>
   Now we are done. It might be surprising that we are not using the automatic
-  tactics available in Isabelle/HOL for proving this lemmas. After all @{text
-  "blast"} would easily dispense of it.
-*}
+  tactics available in Isabelle/HOL for proving these lemmas. After all, \<open>blast\<close> would easily dispense with them.
+\<close>
 
 lemma trcl_step_blast: 
 shows "R x y \<Longrightarrow> trcl R y z \<Longrightarrow> trcl R x z"
@@ -170,9 +167,9 @@
 apply(blast)
 done
 term "even"
-text {*
+text \<open>
   Experience has shown that it is generally a bad idea to rely heavily on
-  @{text blast}, @{text auto} and the like in automated proofs. The reason is
+  \<open>blast\<close>, \<open>auto\<close> and the like in automated proofs. The reason is
   that you do not have precise control over them (the user can, for example,
   declare new intro- or simplification rules that can throw automatic tactics
   off course) and also it is very hard to debug proofs involving automatic
@@ -181,7 +178,7 @@
 
   The method of defining inductive predicates by impredicative quantification
   also generalises to mutually inductive predicates. The next example defines
-  the predicates @{text even} and @{text odd} given by
+  the predicates \<open>even\<close> and \<open>odd\<close> given by
 
   \begin{center}\small
   @{prop[mode=Axiom] "even (0::nat)"} \hspace{5mm}
@@ -191,8 +188,8 @@
 
   Since the predicates @{term even} and @{term odd} are mutually inductive, each 
   corresponding definition must quantify over both predicates (we name them 
-  below @{text "P"} and @{text "Q"}).
-*}
+  below \<open>P\<close> and \<open>Q\<close>).
+\<close>
 
 hide_const even
 hide_const odd
@@ -205,10 +202,10 @@
   \<lambda>n. \<forall>P Q. P 0 \<longrightarrow> (\<forall>m. Q m \<longrightarrow> P (Suc m)) 
                  \<longrightarrow> (\<forall>m. P m \<longrightarrow> Q (Suc m)) \<longrightarrow> Q n"
 
-text {*
+text \<open>
   For proving the induction principles, we use exactly the same technique 
   as in the transitive closure example, namely:
-*}
+\<close>
 
 lemma even_induct:
 assumes "even n"
@@ -221,14 +218,14 @@
 apply(assumption)
 done
 
-text {*
-  The only difference with the proof @{text "trcl_induct"} is that we have to
+text \<open>
+  The only difference with the proof \<open>trcl_induct\<close> is that we have to
   instantiate here two universal quantifiers.  We omit the other induction
   principle that has @{prop "even n"} as premise and @{term "Q n"} as conclusion.  
   The proofs of the introduction rules are also very similar to the ones in 
-  the @{text "trcl"}-example. We only show the proof of the second introduction 
+  the \<open>trcl\<close>-example. We only show the proof of the second introduction 
   rule.
-*}
+\<close>
 
 lemma %linenos evenS: 
 shows "odd m \<Longrightarrow> even (Suc m)"
@@ -248,18 +245,18 @@
     done
 qed
 
-text {*
+text \<open>
   The interesting lines are 7 to 15. Again, the assumptions fall into two categories:
-  @{text p1} corresponds to the premise of the introduction rule; @{text "r1"}
-  to @{text "r3"} come from the definition of @{text "even"}.
-  In Line 13, we apply the assumption @{text "r2"} (since we prove the second
-  introduction rule). In Lines 14 and 15 we apply assumption @{text "p1"} (if
+  \<open>p1\<close> corresponds to the premise of the introduction rule; \<open>r1\<close>
+  to \<open>r3\<close> come from the definition of \<open>even\<close>.
+  In Line 13, we apply the assumption \<open>r2\<close> (since we prove the second
+  introduction rule). In Lines 14 and 15 we apply assumption \<open>p1\<close> (if
   the second introduction rule had more premises we would have to do that for all
   of them). In order for this assumption to be applicable, the quantifiers
   need to be instantiated and then also the implications need to be resolved
   with the other rules.
 
-  Next we define the accessible part of a relation @{text R} given by
+  Next we define the accessible part of a relation \<open>R\<close> given by
   the single rule:
 
   \begin{center}\small
@@ -268,16 +265,16 @@
 
   The interesting point of this definition is that it contains a quantification
   that ranges only over the premise, and the premise also has a precondition.
-  The definition of @{text "accpart"} is:
-*}
+  The definition of \<open>accpart\<close> is:
+\<close>
 
 definition "accpart \<equiv> \<lambda>R x. \<forall>P. (\<forall>x. (\<forall>y. R y x \<longrightarrow> P y) \<longrightarrow> P x) \<longrightarrow> P x"
 
-text {*
+text \<open>
   The proof of the induction principle is again straightforward and omitted.
   Proving the introduction rule is a little more complicated, because of the 
   quantifier and the implication in the premise. The proof is as follows.
-*}
+\<close>
 
 lemma %linenos accpartI: 
 shows "(\<And>y. R y x \<Longrightarrow> accpart R y) \<Longrightarrow> accpart R x"
@@ -299,13 +296,11 @@
   qed
 qed
 
-text {*
+text \<open>
   As you can see, there are now two subproofs. The assumptions fall as usual into
-  two categories (Lines 7 to 9). In Line 11, applying the assumption @{text
-  "r1"} generates a goal state with the new local assumption @{term "R y x"},
-  named @{text "r1_prem"} in the second subproof (Line 14). This local assumption is
-  used to solve the goal @{term "P y"} with the help of assumption @{text
-  "p1"}.
+  two categories (Lines 7 to 9). In Line 11, applying the assumption \<open>r1\<close> generates a goal state with the new local assumption @{term "R y x"},
+  named \<open>r1_prem\<close> in the second subproof (Line 14). This local assumption is
+  used to solve the goal @{term "P y"} with the help of assumption \<open>p1\<close>.
 
 
   \begin{exercise}
@@ -328,5 +323,5 @@
   them.  This is usually the first step in writing a package. We next explain
   the parsing and typing part of the package.
 
-*}
+\<close>
 (*<*)end(*>*)
--- a/ProgTutorial/Parsing.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Parsing.thy	Tue May 14 17:10:47 2019 +0200
@@ -4,10 +4,10 @@
          "foobar_goal" "foobar_prove" :: thy_goal
 begin
 
-chapter {* Parsing\label{chp:parsing} *}
+chapter \<open>Parsing\label{chp:parsing}\<close>
 
 
-text {*
+text \<open>
   \begin{flushright}
   {\em An important principle underlying the success and popularity of Unix\\ is
   the philosophy of building on the work of others.} \\[1ex]
@@ -47,11 +47,11 @@
   are defined in @{ML_file "Pure/Isar/args.ML"}.
   \end{readmore}
 
-*}
+\<close>
 
-section {* Building Generic Parsers *}
+section \<open>Building Generic Parsers\<close>
 
-text {*
+text \<open>
 
   Let us first have a look at parsing strings using generic parsing
   combinators. The function @{ML_ind "$$" in Scan} takes a string as argument and will
@@ -66,24 +66,24 @@
   "($$ \"w\") (Symbol.explode \"world\")" "(\"w\", [\"o\", \"r\", \"l\", \"d\"])"}
 
   The function @{ML "$$"} will either succeed (as in the two examples above)
-  or raise the exception @{text "FAIL"} if no string can be consumed. For
+  or raise the exception \<open>FAIL\<close> if no string can be consumed. For
   example trying to parse
 
   @{ML_response_fake [display,gray] 
   "($$ \"x\") (Symbol.explode \"world\")" 
   "Exception FAIL raised"}
   
-  will raise the exception @{text "FAIL"}. The function @{ML_ind "$$" in Scan} will also
+  will raise the exception \<open>FAIL\<close>. The function @{ML_ind "$$" in Scan} will also
   fail if you try to consume more than a single character.
 
   There are three exceptions that are raised by the parsing combinators:
 
   \begin{itemize}
-  \item @{text "FAIL"} indicates that alternative routes of parsing 
+  \item \<open>FAIL\<close> indicates that alternative routes of parsing 
   might be explored. 
-  \item @{text "MORE"} indicates that there is not enough input for the parser. For example 
-  in @{text "($$ \"h\") []"}.
-  \item @{text "ABORT"} indicates that a dead end is reached. 
+  \item \<open>MORE\<close> indicates that there is not enough input for the parser. For example 
+  in \<open>($$ "h") []\<close>.
+  \item \<open>ABORT\<close> indicates that a dead end is reached. 
   It is used for example in the function @{ML "!!"} (see below).
   \end{itemize}
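+
+  To see the \<open>MORE\<close> case mentioned in the list concretely: running
+  \<open>$$\<close> on empty input, in the style of the examples above, gives
+
+  @{ML_response_fake [display,gray] 
+  "($$ \"h\") []" 
+  "Exception MORE raised"}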
 
@@ -94,7 +94,7 @@
   structure @{ML_struct Symbol}, instead of the more standard library function
   @{ML_ind explode in String}, for obtaining an input list for the parser. The reason is
   that @{ML explode in Symbol} is aware of character
-  sequences, for example @{text "\<foo>"}, that have a special meaning in
+  sequences, for example \<open>\<foo>\<close>, that have a special meaning in
   Isabelle. To see the difference consider
 
 @{ML_response_fake [display,gray]
@@ -124,7 +124,7 @@
     "((\"h\", [\"e\", \"l\", \"l\", \"o\"]),(\"w\", [\"o\", \"r\", \"l\", \"d\"]))"}
 
   Two parsers can be connected in sequence by using the function @{ML_ind "--" in Scan}. 
-  For example parsing @{text "h"}, @{text "e"} and @{text "l"} (in this 
+  For example parsing \<open>h\<close>, \<open>e\<close> and \<open>l\<close> (in this 
   order) you can achieve by:
 
   @{ML_response [display,gray] 
@@ -142,8 +142,8 @@
 
   Parsers that explore alternatives can be constructed using the function 
   @{ML_ind  "||" in Scan}. The parser @{ML "(p || q)" for p q} returns the
-  result of @{text "p"}, in case it succeeds; otherwise it returns the
-  result of @{text "q"}. For example:
+  result of \<open>p\<close>, in case it succeeds; otherwise it returns the
+  result of \<open>q\<close>. For example:
 
 
 @{ML_response [display,gray] 
@@ -173,8 +173,8 @@
   "((\"e\", [\"l\", \"l\", \"o\"]), (\"h\", [\"l\", \"l\", \"o\"]))"}
 
   The parser @{ML "Scan.optional p x" for p x} returns the result of the parser 
-  @{text "p"}, in case it succeeds; otherwise it returns 
-  the default value @{text "x"}. For example:
+  \<open>p\<close>, in case it succeeds; otherwise it returns 
+  the default value \<open>x\<close>. For example:
 
 @{ML_response [display,gray]
 "let 
@@ -187,7 +187,7 @@
  "((\"h\", [\"e\", \"l\", \"l\", \"o\"]), (\"x\", [\"w\", \"o\", \"r\", \"l\", \"d\"]))"}
 
   The function @{ML_ind option in Scan} works similarly, except no default value can
-  be given. Instead, the result is wrapped as an @{text "option"}-type. For example:
+  be given. Instead, the result is wrapped as an \<open>option\<close>-type. For example:
 
 @{ML_response [display,gray]
 "let 
@@ -206,21 +206,21 @@
   "(\"foo\", [\"f\", \"o\", \"o\"])"} 
 
   The function @{ML_ind "!!" in Scan} helps with producing appropriate error messages
-  during parsing. For example if you want to parse @{text p} immediately 
-  followed by @{text q}, or start a completely different parser @{text r},
+  during parsing. For example if you want to parse \<open>p\<close> immediately 
+  followed by \<open>q\<close>, or start a completely different parser \<open>r\<close>,
   you might write:
 
   @{ML [display,gray] "(p -- q) || r" for p q r}
 
   However, this parser is problematic for producing a useful error
   message, if the parsing of @{ML "(p -- q)" for p q} fails. Because with the
-  parser above you lose the information that @{text p} should be followed by @{text q}.
-  To see this assume that @{text p} is present in the input, but it is not
-  followed by @{text q}. That means @{ML "(p -- q)" for p q} will fail and
-  hence the alternative parser @{text r} will be tried. However, in many
-  circumstances this will be the wrong parser for the input ``@{text "p"}-followed-by-something''
+  parser above you lose the information that \<open>p\<close> should be followed by \<open>q\<close>.
+  To see this assume that \<open>p\<close> is present in the input, but it is not
+  followed by \<open>q\<close>. That means @{ML "(p -- q)" for p q} will fail and
+  hence the alternative parser \<open>r\<close> will be tried. However, in many
+  circumstances this will be the wrong parser for the input ``\<open>p\<close>-followed-by-something''
   and therefore will also fail. The error message is then caused by the failure
-  of @{text r}, not by the absence of @{text q} in the input. Such
+  of \<open>r\<close>, not by the absence of \<open>q\<close> in the input. Such a
   situation can be avoided by using the function @{ML "!!"}.  This function
   aborts the whole process of parsing in case of a failure and prints an error
   message. For example if you invoke the parser
@@ -239,7 +239,7 @@
   @{ML_response_fake [display,gray] "(!! (fn _ => fn _ => \"foo\") ($$ \"h\")) (Symbol.explode \"world\")"
                                "Exception ABORT raised"}
 
-  then the parsing aborts and the error message @{text "foo"} is printed. In order to
+  then the parsing aborts and the error message \<open>foo\<close> is printed. In order to
   see the error message properly, you need to prefix the parser with the function 
   @{ML_ind error in Scan}. For example:
 
@@ -253,18 +253,18 @@
   
   Let us now return to our example of parsing @{ML "(p -- q) || r" for p q
   r}. If you want to generate the correct error message for failure
-  of parsing @{text "p"}-followed-by-@{text "q"}, then you have to write:
-*}
+  of parsing \<open>p\<close>-followed-by-\<open>q\<close>, then you have to write:
+\<close>
 
-ML %grayML{*fun p_followed_by_q p q r =
+ML %grayML\<open>fun p_followed_by_q p q r =
 let 
   val err_msg = fn _ => p ^ " is not followed by " ^ q
 in
   ($$ p -- (!! (fn _ => err_msg) ($$ q))) || ($$ r -- $$ r)
-end *}
+end\<close>
 
 
-text {*
+text \<open>
   Running this parser with the arguments
   @{text [quotes] "h"}, @{text [quotes] "e"} and @{text [quotes] "w"}, and 
   the input @{text [quotes] "holle"} 
@@ -279,17 +279,17 @@
   
   yields the expected parsing. 
 
-  The function @{ML "Scan.repeat p" for p} will apply a parser @{text p} as 
+  The function @{ML "Scan.repeat p" for p} will apply a parser \<open>p\<close> as 
   long as it succeeds. For example:
   
   @{ML_response [display,gray] "Scan.repeat ($$ \"h\") (Symbol.explode \"hhhhello\")" 
                 "([\"h\", \"h\", \"h\", \"h\"], [\"e\", \"l\", \"l\", \"o\"])"}
   
   Note that @{ML_ind repeat in Scan} stores the parsed items in a list. The function
-  @{ML_ind repeat1 in Scan} is similar, but requires that the parser @{text "p"} 
+  @{ML_ind repeat1 in Scan} is similar, but requires that the parser \<open>p\<close> 
   succeeds at least once.
 
-  Also note that the parser would have aborted with the exception @{text MORE}, if
+  Also note that the parser would have aborted with the exception \<open>MORE\<close>, if
   you had it run with the string @{text [quotes] "hhhh"}. This can be avoided by using
   the wrapper @{ML_ind finite in Scan} and the ``stopper-token'' 
   @{ML_ind stopper in Symbol}. With them you can write:
@@ -349,8 +349,8 @@
   After parsing is done, you almost always want to apply a function to the parsed 
   items. One way to do this is the function @{ML_ind ">>" in Scan} where 
   @{ML "(p >> f)" for p f} runs 
-  first the parser @{text p} and upon successful completion applies the 
-  function @{text f} to the result. For example
+  first the parser \<open>p\<close> and upon successful completion applies the 
+  function \<open>f\<close> to the result. For example
 
 @{ML_response [display,gray]
 "let 
@@ -383,42 +383,42 @@
 "Scan.lift ($$ \"h\" -- $$ \"e\") (1, Symbol.explode \"hello\")"
 "((\"h\", \"e\"), (1, [\"l\", \"l\", \"o\"]))"}
 
-  \footnote{\bf FIXME: In which situations is @{text "lift"} useful? Give examples.} 
+  \footnote{\bf FIXME: In which situations is \<open>lift\<close> useful? Give examples.} 
 
   Be aware of recursive parsers. Suppose you want to read strings separated by
   commas and by parentheses into a tree datastructure; for example, generating
   the tree corresponding to the string @{text [quotes] "(A, A), (A, A)"} where
-  the @{text "A"}s will be the leaves.  We assume the trees are represented by the
+  the \<open>A\<close>s will be the leaves.  We assume the trees are represented by the
   datatype:
-*}
+\<close>
 
-ML %grayML{*datatype tree = 
+ML %grayML\<open>datatype tree = 
     Lf of string 
-  | Br of tree * tree*}
+  | Br of tree * tree\<close>
 
-text {*
+text \<open>
   Since nested parentheses should be treated in a meaningful way---for example
   the string @{text [quotes] "((A))"} should be read into a single 
   leaf---you might implement the following parser.
-*}
+\<close>
 
-ML %grayML{*fun parse_basic s = 
+ML %grayML\<open>fun parse_basic s = 
   $$ s >> Lf || $$ "(" |-- parse_tree s --| $$ ")"  
 
 and parse_tree s = 
-  parse_basic s --| $$ "," -- parse_tree s >> Br || parse_basic s*}
+  parse_basic s --| $$ "," -- parse_tree s >> Br || parse_basic s\<close>
 
-text {*
+text \<open>
   This parser corresponds to the grammar:
 
   \begin{center}
   \begin{tabular}{lcl}
-  @{text "<Basic>"}  & @{text "::="} & @{text "<String> | (<Tree>)"}\\
-  @{text "<Tree>"}   & @{text "::="} & @{text "<Basic>, <Tree> | <Basic>"}\\
+  \<open><Basic>\<close>  & \<open>::=\<close> & \<open><String> | (<Tree>)\<close>\\
+  \<open><Tree>\<close>   & \<open>::=\<close> & \<open><Basic>, <Tree> | <Basic>\<close>\\
   \end{tabular}
   \end{center}
 
-  The parameter @{text "s"} is the string over which the tree is parsed. The
+  The parameter \<open>s\<close> is the string over which the tree is parsed. The
   parser @{ML parse_basic} reads either a leaf or a tree enclosed in
   parentheses. The parser @{ML parse_tree} reads either a pair of trees
   separated by a comma, or acts like @{ML parse_basic}. Unfortunately,
@@ -435,18 +435,18 @@
   parser until an input is given. This can be done by adding the parsed
   string as an explicit argument. So the parser above should be implemented
   as follows.
-*}
+\<close>
 
-ML %grayML{*fun parse_basic s xs = 
+ML %grayML\<open>fun parse_basic s xs = 
   ($$ s >> Lf || $$ "(" |-- parse_tree s --| $$ ")") xs 
 
 and parse_tree s xs = 
-  (parse_basic s --| $$ "," -- parse_tree s >> Br || parse_basic s) xs*}
+  (parse_basic s --| $$ "," -- parse_tree s >> Br || parse_basic s) xs\<close>
 
-text {*
+text \<open>
   While the type of the parser is unchanged by the addition, its behaviour 
   changed: with this version of the parser the execution is delayed until 
-  some string is  applied for the argument @{text "xs"}. This gives us 
+  some string is supplied for the argument \<open>xs\<close>. This gives us 
   exactly the parser we wanted. An example is as follows:
 
   @{ML_response [display, gray]
@@ -460,24 +460,24 @@
 
   \begin{exercise}\label{ex:scancmts}
   Write a parser that parses an input string so that any comment enclosed
-  within @{text "(*\<dots>*)"} is replaced by the same comment but enclosed within
-  @{text "(**\<dots>**)"} in the output string. To enclose a string, you can use the
+  within \<open>(*\<dots>*)\<close> is replaced by the same comment but enclosed within
+  \<open>(**\<dots>**)\<close> in the output string. To enclose a string, you can use the
   function @{ML "enclose s1 s2 s" for s1 s2 s} which produces the string @{ML
   "s1 ^ s ^ s2" for s1 s2 s}. Hint: To simplify the task ignore the proper 
   nesting of comments.
   \end{exercise}
-*}
+\<close>
 
-section {* Parsing Theory Syntax *}
+section \<open>Parsing Theory Syntax\<close>
 
-text {*
+text \<open>
   Most of the time, however, Isabelle developers have to deal with parsing
   tokens, not strings.  These token parsers have the type:
-*}
+\<close>
   
-ML %grayML{*type 'a parser = Token.T list -> 'a * Token.T list*}
+ML %grayML\<open>type 'a parser = Token.T list -> 'a * Token.T list\<close>
 ML "Thy_Header.get_keywords'"
-text {*
+text \<open>
   {\bf REDO!!}
 
 
@@ -514,14 +514,14 @@
 
   We can easily change what is recognised as a keyword with the function
   @{ML_ind add_keywords in Thy_Header}. For example calling it with 
-*}
+\<close>
 
 
 
-setup {* Thy_Header.add_keywords [(("hello", Position.none),Keyword.no_spec)] *}
+setup \<open>Thy_Header.add_keywords [(("hello", Position.none),Keyword.no_spec)]\<close>
 
 
-text {*
+text \<open>
   then lexing @{text [quotes] "hello world"} will produce
 
   @{ML_response_fake [display,gray] "Token.explode 
@@ -544,13 +544,13 @@
 "[Token (\<dots>,(Ident, \"hello\"), \<dots>), Token (\<dots>,(Ident, \"world\"), \<dots>)]"}
 
   For convenience we define the function:
-*}
+\<close>
 
-ML %grayML{*fun filtered_input str = 
-  filter Token.is_proper (Token.explode (Thy_Header.get_keywords' @{context}) Position.none str) *}
+ML %grayML\<open>fun filtered_input str = 
+  filter Token.is_proper (Token.explode (Thy_Header.get_keywords' @{context}) Position.none str)\<close>
 
 ML \<open>filtered_input "inductive | for"\<close>
-text {* 
+text \<open>
   If you now parse
 
 @{ML_response_fake [display,gray] 
@@ -563,7 +563,7 @@
   If you want to see which keywords and commands are currently known to Isabelle, 
   use the function @{ML_ind get_keywords' in Thy_Header}:
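+
+  A sketch of such a query is below (the actual output, a large table
+  of keywords, is elided here):
+
+  @{ML_response_fake [display,gray]
+  "Thy_Header.get_keywords' @{context}"
+  "\<dots>"}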
 
-  You might have to adjust the @{text ML_print_depth} in order to
+  You might have to adjust the \<open>ML_print_depth\<close> in order to
   see the complete list.
 
   The parser @{ML_ind "$$$" in Parse} parses a single keyword. For example:
@@ -600,8 +600,8 @@
 "((\"|\", \"in\"), [])"}
 
   The parser @{ML "Parse.enum s p" for s p} parses a possibly empty 
-  list of items recognised by the parser @{text p}, where the items being parsed
-  are separated by the string @{text s}. For example:
+  list of items recognised by the parser \<open>p\<close>, where the items being parsed
+  are separated by the string \<open>s\<close>. For example:
 
 @{ML_response [display,gray]
 "let 
@@ -614,8 +614,7 @@
   The function @{ML_ind enum1 in Parse} works similarly, except that the
   parsed list must be non-empty. Note that we had to add a string @{text
   [quotes] "foo"} at the end of the parsed string, otherwise the parser would
-  have consumed all tokens and then failed with the exception @{text
-  "MORE"}. Like in the previous section, we can avoid this exception using the
+  have consumed all tokens and then failed with the exception \<open>MORE\<close>. Like in the previous section, we can avoid this exception using the
   wrapper @{ML Scan.finite}. This time, however, we have to use the
   ``stopper-token'' @{ML Token.stopper}. We can write:
 
@@ -629,11 +628,11 @@
 "([\"in\", \"in\", \"in\"], [])"}
 
   The following function will help to run examples.
-*}
+\<close>
 
-ML %grayML{*fun parse p input = Scan.finite Token.stopper (Scan.error p) input *}
+ML %grayML\<open>fun parse p input = Scan.finite Token.stopper (Scan.error p) input\<close>
 
-text {*
+text \<open>
   The function @{ML_ind "!!!" in Parse} can be used to force termination
   of the parser in case of a dead end, just like @{ML "Scan.!!"} (see previous
   section). A difference, however, is that the error message of @{ML
@@ -664,12 +663,12 @@
   occurred. For this Isabelle can attach positional information to tokens
   and then thread this information up the ``processing chain''. To see this,
   modify the function @{ML filtered_input}, described earlier, as follows 
-*}
+\<close>
 
-ML %grayML{*fun filtered_input' str = 
-       filter Token.is_proper (Token.explode (Thy_Header.get_keywords' @{context}) (Position.line 7) str) *}
+ML %grayML\<open>fun filtered_input' str = 
+       filter Token.is_proper (Token.explode (Thy_Header.get_keywords' @{context}) (Position.line 7) str)\<close>
 
-text {*
+text \<open>
   where we pretend the parsed string starts on line 7. An example is
 
 @{ML_response_fake [display,gray]
@@ -704,27 +703,27 @@
 
   \begin{center}
   \begin{tabular}{lcl}
-  @{text "<Basic>"}  & @{text "::="} & @{text "<Number> | (<Expr>)"}\\
-  @{text "<Factor>"} & @{text "::="} & @{text "<Basic> * <Factor> | <Basic>"}\\
-  @{text "<Expr>"}   & @{text "::="} & @{text "<Factor> + <Expr>  | <Factor>"}\\
+  \<open><Basic>\<close>  & \<open>::=\<close> & \<open><Number> | (<Expr>)\<close>\\
+  \<open><Factor>\<close> & \<open>::=\<close> & \<open><Basic> * <Factor> | <Basic>\<close>\\
+  \<open><Expr>\<close>   & \<open>::=\<close> & \<open><Factor> + <Expr>  | <Factor>\<close>\\
   \end{tabular}
   \end{center}
 
   Hint: Be careful with recursive parsers.
   \end{exercise}
-*}
+\<close>
 
-section {* Parsers for ML-Code (TBD) *}
+section \<open>Parsers for ML-Code (TBD)\<close>
 
-text {*
+text \<open>
   @{ML_ind ML_source in Parse}
-*}
+\<close>
 
-section {* Context Parser (TBD) *}
+section \<open>Context Parser (TBD)\<close>
 
-text {*
+text \<open>
   @{ML_ind Args.context}
-*}
+\<close>
 (*
 ML {*
 let
@@ -739,17 +738,17 @@
 *}
 *)
 
-text {*
+text \<open>
   @{ML_ind Args.context}
 
   Used for example in \isacommand{attribute\_setup} and \isacommand{method\_setup}.
-*}
+\<close>
 
-section {* Argument and Attribute Parsers (TBD) *}
+section \<open>Argument and Attribute Parsers (TBD)\<close>
 
-section {* Parsing Inner Syntax *}
+section \<open>Parsing Inner Syntax\<close>
 
-text {*
+text \<open>
   There is usually no need to write your own parser for parsing inner syntax, that is 
   for terms and  types: you can just call the predefined parsers. Terms can 
   be parsed using the function @{ML_ind term in Parse}. For example:
@@ -792,17 +791,17 @@
   @{ML_ind read_term in Syntax} @{ML_ind read_term in Syntax}
 
 
-*}
+\<close>
 
-section {* Parsing Specifications\label{sec:parsingspecs} *}
+section \<open>Parsing Specifications\label{sec:parsingspecs}\<close>
 
 
-text {*
+text \<open>
   There are a number of special purpose parsers that help with parsing
   specifications of function definitions, inductive predicates and so on. In
   Chapter~\ref{chp:package}, for example, we will need to parse specifications
   for inductive predicates of the form:
-*}
+\<close>
 
 
 simple_inductive
@@ -812,19 +811,19 @@
 | evenS: "odd n \<Longrightarrow> even (Suc n)"
 | oddS: "even n \<Longrightarrow> odd (Suc n)"
 
-text {*
+text \<open>
   For this we are going to use the parser:
-*}
+\<close>
 
-ML %linenosgray{*val spec_parser = 
+ML %linenosgray\<open>val spec_parser = 
      Parse.vars -- 
      Scan.optional 
        (Parse.$$$ "where" |--
           Parse.!!! 
             (Parse.enum1 "|" 
-               (Parse_Spec.opt_thm_name ":" -- Parse.prop))) []*}
+               (Parse_Spec.opt_thm_name ":" -- Parse.prop))) []\<close>
 
-text {*
+text \<open>
   Note that the parser must not parse the keyword \simpleinductive, even if it is
   meant to process definitions as shown above. The parser of the keyword 
   will be given by the infrastructure that will eventually call @{ML spec_parser}.
@@ -848,7 +847,7 @@
    [((even0,\<dots>),\<dots>),
     ((evenS,\<dots>),\<dots>),
     ((oddS,\<dots>),\<dots>)]), [])"}
-*}
+\<close>
 
 ML \<open>let
   val input = filtered_input 
@@ -857,7 +856,7 @@
   parse Parse.vars input
 end\<close>
 
-text {*
+text \<open>
   As you see, the result is a pair consisting of a list of
   variables with optional type-annotation and syntax-annotation, and a list of
   rules where every rule has optionally a name and an attribute.
@@ -866,7 +865,7 @@
   \isacommand{and}-separated 
   list of variables that can include optional type annotations and syntax translations. 
   For example:\footnote{Note that in the code we need to write 
-  @{text "\\\"int \<Rightarrow> bool\\\""} in order to properly escape the double quotes
+  \<open>\"int \<Rightarrow> bool\"\<close> in order to properly escape the double quotes
   in the compound type.}
 
 @{ML_response_fake [display,gray]
@@ -879,9 +878,9 @@
 "([(foo, SOME \<dots>, NoSyn), 
   (bar, SOME \<dots>, Mixfix (Source {text=\"BAR\",...}, [], 100, \<dots>)), 
   (blonk, NONE, NoSyn)],[])"}  
-*}
+\<close>
 
-text {*
+text \<open>
   Whenever types are given, they are stored in the @{ML SOME}s. The types are
   not yet used to type the variables: this must be done by type-inference later
   on. Since types are part of the inner syntax they are strings with some
@@ -916,9 +915,9 @@
   Attributes and arguments are implemented in the files @{ML_file "Pure/Isar/attrib.ML"} 
   and @{ML_file "Pure/Isar/args.ML"}.
   \end{readmore}
-*}
+\<close>
 
-text_raw {*
+text_raw \<open>
   \begin{exercise}
   Have a look at how the parser @{ML Parse_Spec.where_multi_specs} is implemented
   in file @{ML_file "Pure/Isar/parse_spec.ML"}. This parser corresponds
@@ -926,8 +925,8 @@
   we paraphrase the code of @{ML_ind where_multi_specs in Parse_Spec} adapted to our
   purposes. 
   \begin{isabelle}
-*}
-ML %linenosgray{*val spec_parser' = 
+\<close>
+ML %linenosgray\<open>val spec_parser' = 
      Parse.vars -- 
      Scan.optional
      (Parse.$$$ "where" |-- 
@@ -936,8 +935,8 @@
              ((Parse_Spec.opt_thm_name ":" -- Parse.prop) --| 
                   Scan.option (Scan.ahead (Parse.name || 
                   Parse.$$$ "[") -- 
-                  Parse.!!! (Parse.$$$ "|"))))) [] *}
-text_raw {*
+                  Parse.!!! (Parse.$$$ "|"))))) []\<close>
+text_raw \<open>
   \end{isabelle}
   Both parsers accept the same input% that's not true:
   % spec_parser accepts input that is refuted by spec_parser'
@@ -945,16 +944,16 @@
   an additional  ``tail'' (Lines 8 to 10) in @{ML spec_parser'}. What is the purpose of 
   this additional ``tail''?
   \end{exercise}
-*}
+\<close>
 
-text {*
+text \<open>
   (FIXME: @{ML Parse.type_args}, @{ML Parse.typ}, @{ML Parse.opt_mixfix})
-*}
+\<close>
 
 
-section {* New Commands\label{sec:newcommand} *}
+section \<open>New Commands\label{sec:newcommand}\<close>
 
-text {*
+text \<open>
   Often new commands, for example for providing new definitional principles,
   need to be implemented. While this is not difficult on the ML-level and for
   jEdit, in order to be backwards compatible, new commands also need to be recognised 
@@ -969,39 +968,39 @@
   we need to write something like
 
   \begin{graybox}
-  \isacommand{theory}~@{text Foo}\\
-  \isacommand{imports}~@{text Main}\\
-  \isacommand{keywords} @{text [quotes] "foobar"} @{text "::"} @{text "thy_decl"}\\
+  \isacommand{theory}~\<open>Foo\<close>\\
+  \isacommand{imports}~\<open>Main\<close>\\
+  \isacommand{keywords} @{text [quotes] "foobar"} \<open>::\<close> \<open>thy_decl\<close>\\
   ...
   \end{graybox}
 
   where @{ML_ind "thy_decl" in Keyword} indicates the kind of the
-  command.  Another possible kind is @{text "thy_goal"}, or you can
+  command.  Another possible kind is \<open>thy_goal\<close>, or you can
   also omit the kind entirely, in which case you declare a keyword
   (something that is part of a command).
 
   Now you can implement \isacommand{foobar} as follows.
-*}
+\<close>
 
-ML %grayML{*let
+ML %grayML\<open>let
   val do_nothing = Scan.succeed (Local_Theory.background_theory I)
 in
   Outer_Syntax.local_theory @{command_keyword "foobar"} 
     "description of foobar" 
       do_nothing
-end *}
+end\<close>
 
-text {*
+text \<open>
   The crucial function @{ML_ind local_theory in Outer_Syntax} expects
   the name for the command, a kind indicator, a short description and 
   a parser producing a local theory transition (explained later). For the
-  name and the kind, you can use the ML-antiquotation @{text "@{command_spec ...}"}.
+  name and the kind, you can use the ML-antiquotation \<open>@{command_keyword ...}\<close>.
   You can now write in your theory 
-*}
+\<close>
 
 foobar
 
-text {*
+text \<open>
   but of course you will not see anything since \isacommand{foobar} is
   not intended to do anything.  Remember, however, that this only
   works in jEdit. In order to also enable Proof-General to recognise this
@@ -1013,7 +1012,7 @@
   the command \isacommand{foobar\_trace} in the theory header as
 
   \begin{graybox}
-  \isacommand{keywords} @{text [quotes] "foobar_trace"} @{text "::"} @{text "thy_decl"}
+  \isacommand{keywords} @{text [quotes] "foobar_trace"} \<open>::\<close> \<open>thy_decl\<close>
   \end{graybox}
 
   The crucial part of a command is the function that determines the
@@ -1023,27 +1022,27 @@
   function @{ML "Local_Theory.background_theory I"}. We can replace
   this code by a function that first parses a proposition (using the
   parser @{ML Parse.prop}), then prints out some tracing information
-  (using the function @{text trace_prop}) and finally does
+  (using the function \<open>trace_prop\<close>) and finally does
   nothing. For this you can write:
-*}
+\<close>
 
-ML %grayML{*let
+ML %grayML\<open>let
   fun trace_prop str = 
     Local_Theory.background_theory (fn ctxt => (tracing str; ctxt))
 in
   Outer_Syntax.local_theory @{command_keyword "foobar_trace"} 
     "traces a proposition" 
       (Parse.prop >> trace_prop)
-end *}
+end\<close>
 
-text {*
+text \<open>
   This command can now be used to 
   see the proposition in the tracing buffer.
-*}
+\<close>
 
 foobar_trace "True \<and> False"
 
-text {*
+text \<open>
   Note that so far we used @{ML_ind thy_decl in Keyword} as the kind
   indicator for the new command.  This means that the command finishes as soon as
   the arguments are processed. Examples of this kind of commands are
@@ -1057,16 +1056,16 @@
   Below we show the command \isacommand{foobar\_goal} which takes a
   proposition as argument and then starts a proof in order to prove
   it. Therefore, we need to announce this command in the header 
-  as @{text "thy_goal"}.
+  as \<open>thy_goal\<close>.
 
   \begin{graybox}
-  \isacommand{keywords} @{text [quotes] "foobar_goal"} @{text "::"} @{text "thy_goal"}
+  \isacommand{keywords} @{text [quotes] "foobar_goal"} \<open>::\<close> \<open>thy_goal\<close>
   \end{graybox}
 
   Then we can write:
-*}
+\<close>
 
-ML%linenosgray{*let
+ML%linenosgray\<open>let
   fun goal_prop str ctxt =
     let
       val prop = Syntax.read_prop ctxt str
@@ -1077,10 +1076,10 @@
   Outer_Syntax.local_theory_to_proof @{command_keyword "foobar_goal"}
     "proves a proposition" 
       (Parse.prop >> goal_prop)
-end *}
+end\<close>
 
-text {*
-  The function @{text goal_prop} in Lines 2 to 7 takes a string (the proposition to be
+text \<open>
+  The function \<open>goal_prop\<close> in Lines 2 to 7 takes a string (the proposition to be
   proved) and a context as arguments.  The context is necessary in order to be able to use
   @{ML_ind read_prop in Syntax}, which converts a string into a proper proposition.
   In Line 6 the function @{ML_ind theorem in Proof} starts the proof for the
@@ -1091,21 +1090,21 @@
 
   If you now type \isacommand{foobar\_goal}~@{text [quotes] "True \<and> True"},
   you obtain the following proof state:
-*}
+\<close>
 
 foobar_goal "True \<and> True"
-txt {*
+txt \<open>
   \begin{minipage}{\textwidth}
   @{subgoals [display]}
   \end{minipage}\medskip
 
   and can prove the proposition as follows.
-*}
+\<close>
 apply(rule conjI)
 apply(rule TrueI)+
 done
 
-text {*
+text \<open>
   The last command we describe here is
   \isacommand{foobar\_proof}. Like \isacommand{foobar\_goal}, its purpose is
   to take a proposition and open a corresponding proof-state that
@@ -1122,21 +1121,21 @@
   @{ML_ind "ML_source" in Parse} in the structure @{ML_struct
   Parse}. For running the ML-interpreter we need the following
   scaffolding code.
-*}
+\<close>
 
-ML %grayML{*
+ML %grayML\<open>
 structure Result = Proof_Data 
   (type T = unit -> term
    fun init thy () = error "Result")
 
-val result_cookie = (Result.get, Result.put, "Result.put") *}
+val result_cookie = (Result.get, Result.put, "Result.put")\<close>
 
-text {*
+text \<open>
   With this in place, we can implement the code for \isacommand{foobar\_prove} 
   as follows.
-*}
+\<close>
 
-ML %linenosgray{*let
+ML %linenosgray\<open>let
    fun after_qed thm_name thms lthy =
         Local_Theory.note (thm_name, (flat thms)) lthy |> snd
 
@@ -1153,39 +1152,39 @@
    Outer_Syntax.local_theory_to_proof @{command_keyword "foobar_prove"}
      "proving a proposition" 
        (parser >> setup_proof)
-end*}
+end\<close>
 
-text {*
+text \<open>
   In Line 12, we implement a parser that first reads in an optional lemma name (terminated 
   by ``:'') and then some ML-code. The function in Lines 5 to 10 takes the ML-text
   and lets the ML-runtime evaluate it using the function @{ML_ind value in Code_Runtime}
   in the structure @{ML_struct Code_Runtime}.  Once the ML-text has been turned into a term, 
   the function @{ML theorem in Proof} opens a corresponding proof-state. This function takes the
-  function @{text "after_qed"} as argument, whose purpose is to store the theorem
-  (once it is proven) under the given name @{text "thm_name"}.
+  function \<open>after_qed\<close> as argument, whose purpose is to store the theorem
+  (once it is proven) under the given name \<open>thm_name\<close>.
 
   You can now define a term, for example
-*}
+\<close>
 
-ML %grayML{*val prop_true = @{prop "True"}*}
+ML %grayML\<open>val prop_true = @{prop "True"}\<close>
 
-text {*
+text \<open>
   and give it a proof using \isacommand{foobar\_prove}:
-*}
+\<close>
 
 foobar_prove test: prop_true
 apply(rule TrueI)
 done
 
-text {*
+text \<open>
   Finally you can test whether the lemma has been stored under the given name.
   
   \begin{isabelle}
-  \isacommand{thm}~@{text "test"}\\
-  @{text "> "}~@{thm TrueI}
+  \isacommand{thm}~\<open>test\<close>\\
+  \<open>> \<close>~@{thm TrueI}
   \end{isabelle}
 
-*}
+\<close>
 
 
 (*
@@ -1324,9 +1323,9 @@
 *}
 *)
 
-section {* Methods (TBD) *}
+section \<open>Methods (TBD)\<close>
 
-text {*
+text \<open>
   (FIXME: maybe move to after the tactic section)
 
   Methods are central to Isabelle. You use them, for example,
@@ -1335,65 +1334,65 @@
 
   \begin{isabelle}
   \isacommand{print\_methods}\\
-  @{text "> methods:"}\\
-  @{text ">   -:  do nothing (insert current facts only)"}\\
-  @{text ">   HOL.default:  apply some intro/elim rule (potentially classical)"}\\
-  @{text ">   ..."}
+  \<open>> methods:\<close>\\
+  \<open>>   -:  do nothing (insert current facts only)\<close>\\
+  \<open>>   HOL.default:  apply some intro/elim rule (potentially classical)\<close>\\
+  \<open>>   ...\<close>
   \end{isabelle}
 
   An example of a very simple method is:
-*}
+\<close>
 
 method_setup %gray foo = 
- {* Scan.succeed
-      (fn ctxt => (SIMPLE_METHOD ((eresolve_tac ctxt [@{thm conjE}] THEN' resolve_tac ctxt [@{thm conjI}]) 1))) *}
+ \<open>Scan.succeed
+      (fn ctxt => (SIMPLE_METHOD ((eresolve_tac ctxt [@{thm conjE}] THEN' resolve_tac ctxt [@{thm conjI}]) 1)))\<close>
          "foo method for conjE and conjI"
 
-text {*
-  It defines the method @{text foo}, which takes no arguments (therefore the
+text \<open>
+  It defines the method \<open>foo\<close>, which takes no arguments (therefore the
   parser @{ML Scan.succeed}) and only applies a single tactic, namely the tactic which 
   applies @{thm [source] conjE} and then @{thm [source] conjI}. The function 
   @{ML_ind SIMPLE_METHOD in Method}
-  turns such a tactic into a method. The method @{text "foo"} can be used as follows
-*}
+  turns such a tactic into a method. The method \<open>foo\<close> can be used as follows
+\<close>
 
 lemma shows "A \<and> B \<Longrightarrow> C \<and> D"
 apply(foo)
-txt {*
+txt \<open>
   where it results in the goal state
 
   \begin{minipage}{\textwidth}
   @{subgoals}
-  \end{minipage} *}
+  \end{minipage}\<close>
 (*<*)oops(*>*)
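 
 text \<open>
   Methods can also take arguments. A minimal sketch (not part of the
   original text) is a method that parses a list of theorems with
   @{ML_ind thms in Attrib} and resolves with them:
 \<close>
 
 method_setup %gray my_rule = 
  \<open>Attrib.thms >> (fn thms => fn ctxt => 
     SIMPLE_METHOD' (resolve_tac ctxt thms))\<close>
       "resolve with the given theorems"
 
 lemma 
   shows "Q \<Longrightarrow> P \<or> Q"
 apply(my_rule disjI2)
 apply(assumption)
 done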
 
-method_setup test = {* 
-  Scan.lift (Scan.succeed (K Method.succeed)) *} {* bla *}
+method_setup test = \<open>
+  Scan.lift (Scan.succeed (K Method.succeed))\<close> \<open>bla\<close>
 
 lemma "True"
 apply(test)
 oops
 
-method_setup joker = {* 
-  Scan.lift (Scan.succeed (fn ctxt => Method.cheating true)) *} {* bla *}
+method_setup joker = \<open>
+  Scan.lift (Scan.succeed (fn ctxt => Method.cheating true))\<close> \<open>bla\<close>
 
 lemma "False"
 apply(joker)
 oops
 
-text {* if true is set then always works *}
+text \<open>if \<open>true\<close> is passed to @{ML cheating in Method}, the method always succeeds\<close>
 
-ML {* assume_tac *} 
+ML \<open>assume_tac\<close> 
 
-method_setup first_atac = {* Scan.lift (Scan.succeed (fn ctxt => (SIMPLE_METHOD (assume_tac ctxt 1)))) *} {* bla *}
+method_setup first_atac = \<open>Scan.lift (Scan.succeed (fn ctxt => (SIMPLE_METHOD (assume_tac ctxt 1))))\<close> \<open>bla\<close>
 
-ML {* HEADGOAL *}
+ML \<open>HEADGOAL\<close>
 
 lemma "A \<Longrightarrow> A"
 apply(first_atac)
 oops
 
-method_setup my_atac = {* Scan.lift (Scan.succeed (fn ctxt => (SIMPLE_METHOD' (assume_tac ctxt)))) *} {* bla *}
+method_setup my_atac = \<open>Scan.lift (Scan.succeed (fn ctxt => (SIMPLE_METHOD' (assume_tac ctxt))))\<close> \<open>bla\<close>
 
 lemma "A \<Longrightarrow> A"
 apply(my_atac)
@@ -1412,11 +1411,11 @@
 ML {* Scan.succeed  *}
 *)
 
-ML {* resolve_tac *}
+ML \<open>resolve_tac\<close>
 
 method_setup myrule =
-  {* Scan.lift (Scan.succeed (fn ctxt => (METHOD (fn thms => resolve_tac ctxt thms 1)))) *}
-  {* bla *}
+  \<open>Scan.lift (Scan.succeed (fn ctxt => (METHOD (fn thms => resolve_tac ctxt thms 1))))\<close>
+  \<open>bla\<close>
 
 lemma
   assumes a: "A \<Longrightarrow> B \<Longrightarrow> C"
@@ -1427,9 +1426,9 @@
 
 
 
-text {*
+text \<open>
   (********************************************************)
   (FIXME: explain a version of rule-tac)
-*}
+\<close>
 
 end
--- a/ProgTutorial/Readme.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Readme.thy	Tue May 14 17:10:47 2019 +0200
@@ -2,9 +2,9 @@
 imports Base
 begin
 
-chapter {* Comments for Authors *}
+chapter \<open>Comments for Authors\<close>
 
-text {*
+text \<open>
 
   \begin{itemize}
   \item This tutorial can be compiled on the command-line with:
@@ -22,12 +22,12 @@
   \begin{center}
   \begin{tabular}{l|c|c}
    & Chapters & Sections\\\hline
-  Implementation Manual & @{text "\\ichcite{\<dots>}"} & @{text "\\isccite{\<dots>}"}\\
-  Isar Reference Manual & @{text "\\rchcite{\<dots>}"} & @{text "\\rsccite{\<dots>}"}\\
+  Implementation Manual & \<open>\ichcite{\<dots>}\<close> & \<open>\isccite{\<dots>}\<close>\\
+  Isar Reference Manual & \<open>\rchcite{\<dots>}\<close> & \<open>\rsccite{\<dots>}\<close>\\
   \end{tabular}
   \end{center}
 
-  So @{text "\\ichcite{ch:logic}"} yields a reference for the chapter about logic 
+  So \<open>\ichcite{ch:logic}\<close> yields a reference for the chapter about logic 
   in the implementation manual, namely \ichcite{ch:logic}.
 
   \item There are various document antiquotations defined for the 
@@ -39,33 +39,32 @@
   The following antiquotations are defined:
 
   \begin{itemize}
-  \item[$\bullet$] @{text "@{ML \"expr\" for vars in structs}"} should be used
+  \item[$\bullet$] \<open>@{ML "expr" for vars in structs}\<close> should be used
   for displaying any ML-ex\-pression, because the antiquotation checks whether
-  the expression is valid ML-code. The @{text "for"}- and @{text
-  "in"}-arguments are optional. The former is used for evaluating open
+  the expression is valid ML-code. The \<open>for\<close>- and \<open>in\<close>-arguments are optional. The former is used for evaluating open
   expressions by giving a list of free variables. The latter is used to
   indicate in which structure or structures the ML-expression should be
   evaluated. Examples are:
   
   \begin{center}\small
   \begin{tabular}{lll}
-  @{text "@{ML \"1 + 3\"}"}         &                 & @{ML "1 + 3"}\\
-  @{text "@{ML \"a + b\" for a b}"} & \;\;produce\;\; & @{ML "a + b" for a b}\\
-  @{text "@{ML Ident in OuterLex}"} &                 & @{ML Ident in OuterLex}\\
+  \<open>@{ML "1 + 3"}\<close>         &                 & @{ML "1 + 3"}\\
+  \<open>@{ML "a + b" for a b}\<close> & \;\;produce\;\; & @{ML "a + b" for a b}\\
+  \<open>@{ML Ident in OuterLex}\<close> &                 & @{ML Ident in OuterLex}\\
   \end{tabular}
   \end{center}
 
-  \item[$\bullet$] @{text "@{ML_response \"expr\" \"pat\"}"} should be used to
+  \item[$\bullet$] \<open>@{ML_response "expr" "pat"}\<close> should be used to
   display ML-expressions and their response.  The first expression is checked
-  like in the antiquotation @{text "@{ML \"expr\"}"}; the second is a pattern
+  like in the antiquotation \<open>@{ML "expr"}\<close>; the second is a pattern
   that specifies the result the first expression produces. This pattern can
   contain @{text [quotes] "\<dots>"} for parts that you would like to omit. The response of the
   first expression will be checked against this pattern. Examples are:
 
   \begin{center}\small
   \begin{tabular}{l}
-  @{text "@{ML_response \"1+2\" \"3\"}"}\\
-  @{text "@{ML_response \"(1+2,3)\" \"(3,\<dots>)\"}"}
+  \<open>@{ML_response "1+2" "3"}\<close>\\
+  \<open>@{ML_response "(1+2,3)" "(3,\<dots>)"}\<close>
   \end{tabular}
   \end{center}
 
@@ -82,18 +81,18 @@
   constructed: it does not work when the code produces an exception or returns 
   an abstract datatype (like @{ML_type thm} or @{ML_type cterm}).
 
-  \item[$\bullet$] @{text "@{ML_response_fake \"expr\" \"pat\"}"} works just
-  like the antiquotation @{text "@{ML_response \"expr\" \"pat\"}"} above,
+  \item[$\bullet$] \<open>@{ML_response_fake "expr" "pat"}\<close> works just
+  like the antiquotation \<open>@{ML_response "expr" "pat"}\<close> above,
   except that the result-specification is not checked. Use this antiquotation
   when the result cannot be constructed or the code generates an
   exception. Examples are:
 
   \begin{center}\small
   \begin{tabular}{ll}
-  @{text "@{ML_response_fake"} & @{text "\"cterm_of @{theory} @{term \\\"a + b = c\\\"}\"}"}\\
-                               & @{text "\"a + b = c\"}"}\smallskip\\ 
-  @{text "@{ML_response_fake"} & @{text "\"($$ \\\"x\\\") (explode \\\"world\\\")\""}\\ 
-                               & @{text "\"Exception FAIL raised\"}"}
+  \<open>@{ML_response_fake\<close> & \<open>"cterm_of @{theory} @{term \"a + b = c\"}"}\<close>\\
+                               & \<open>"a + b = c"}\<close>\smallskip\\ 
+  \<open>@{ML_response_fake\<close> & \<open>"($$ \"x\") (explode \"world\")"\<close>\\ 
+                               & \<open>"Exception FAIL raised"}\<close>
   \end{tabular}
   \end{center}
 
@@ -109,50 +108,50 @@
   This output mimics to some extent what the user sees when running the
   code.
 
-  \item[$\bullet$] @{text "@{ML_response_fake_both \"expr\" \"pat\"}"} can be
+  \item[$\bullet$] \<open>@{ML_response_fake_both "expr" "pat"}\<close> can be
   used to show erroneous code. Neither the code nor the response will be
   checked. An example is:
 
   \begin{center}\small
   \begin{tabular}{ll}
-  @{text "@{ML_response_fake_both"} & @{text "\"@{cterm \\\"1 + True\\\"}\""}\\
-                                    & @{text "\"Type unification failed \<dots>\"}"}
+  \<open>@{ML_response_fake_both\<close> & \<open>"@{cterm \"1 + True\"}"\<close>\\
+                                    & \<open>"Type unification failed \<dots>"}\<close>
   \end{tabular}
   \end{center}
 
-  \item[$\bullet$] @{text "@{ML_file \"name\"}"} should be used when
+  \item[$\bullet$] \<open>@{ML_file "name"}\<close> should be used when
   referring to a file. It checks whether the file exists.  An example
   is 
 
   @{text [display] "@{ML_file \"Pure/General/basics.ML\"}"}
   \end{itemize}
 
-  The listed antiquotations honour options including @{text "[display]"} and 
-  @{text "[quotes]"}. For example
+  The listed antiquotations honour options including \<open>[display]\<close> and 
+  \<open>[quotes]\<close>. For example
 
   \begin{center}\small
-  @{text "@{ML [quotes] \"\\\"foo\\\" ^ \\\"bar\\\"\"}"} \;\;produces\;\; @{text [quotes] "foobar"}
+  \<open>@{ML [quotes] "\"foo\" ^ \"bar\""}\<close> \;\;produces\;\; @{text [quotes] "foobar"}
   \end{center}
 
   whereas
   
   \begin{center}\small
-  @{text "@{ML \"\\\"foo\\\" ^ \\\"bar\\\"\"}"} \;\;produces only\;\; @{text "foobar"}
+  \<open>@{ML "\"foo\" ^ \"bar\""}\<close> \;\;produces only\;\; \<open>foobar\<close>
   \end{center}
   
   \item Functions and value bindings cannot be defined inside antiquotations; they need
-  to be included inside \isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"}
+  to be included inside \isacommand{ML}~\<open>\<verbopen> \<dots> \<verbclose>\<close>
   environments. In this way they are also checked by the compiler
   (see the example after this list). A \LaTeX-hack in 
   the tutorial, however, ensures that the environment markers are not printed.
 
   \item Line numbers can be printed using 
-  \isacommand{ML} \isa{\%linenos}~@{text "\<verbopen> \<dots> \<verbclose>"}
-  for ML-code or \isacommand{lemma} \isa{\%linenos} @{text "..."} for proofs. The
+  \isacommand{ML} \isa{\%linenos}~\<open>\<verbopen> \<dots> \<verbclose>\<close>
+  for ML-code or \isacommand{lemma} \isa{\%linenos} \<open>...\<close> for proofs. The
   tag is \isa{\%linenosgray} when the numbered text should be gray. 
 
   \end{itemize}
 
-*}
+\<close>
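 
 text \<open>
   For example, a function that later text can refer to via \<open>@{ML \<dots>}\<close>
   would be defined as follows (a sketch; \isa{\%grayML} is one of the
   display tags used in this tutorial):
 \<close>
 
 ML %grayML\<open>fun add_one n = n + 1\<close>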
 
 
 
--- a/ProgTutorial/Recipes/Antiquotes.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Recipes/Antiquotes.thy	Tue May 14 17:10:47 2019 +0200
@@ -3,9 +3,9 @@
 imports "../Appendix"
 begin
 
-section {* Useful Document Antiquotations\label{rec:docantiquotations} *}
+section \<open>Useful Document Antiquotations\label{rec:docantiquotations}\<close>
 
-text {*
+text \<open>
   {\bf Problem:} 
   How to keep your ML-code inside a document synchronised with the actual code?\smallskip
 
@@ -14,11 +14,11 @@
   Document antiquotations can be used for ensuring consistent type-setting of
   various entities in a document. They can also be used for sophisticated
   \LaTeX-hacking. If you type on the Isabelle level
-*}
+\<close>
 
 print_antiquotations
 
-text {*
+text \<open>
   you obtain a list of all currently available document antiquotations and
   their options.  
 
@@ -29,7 +29,7 @@
   and also allows you to keep documents in sync with other code, for example
   Isabelle.
 
-  We first describe the antiquotation @{text "ML_checked"} with the syntax:
+  We first describe the antiquotation \<open>ML_checked\<close> with the syntax:
  
   @{text [display] "@{ML_checked \"a_piece_of_code\"}"}
 
@@ -38,12 +38,12 @@
   "ML_Context.eval_source_in"} in Line 7 below). The complete code of the
   document antiquotation is as follows:
 
-*}
+\<close>
 ML \<open>Input.pos_of\<close>
-ML%linenosgray{*fun ml_enclose bg en source =
-  ML_Lex.read bg @ ML_Lex.read_source false source @ ML_Lex.read en;*}
+ML%linenosgray\<open>fun ml_enclose bg en source =
+  ML_Lex.read bg @ ML_Lex.read_source false source @ ML_Lex.read en;\<close>
 
-ML%linenosgray{*fun ml_val code_txt = (ml_enclose "val _ = " "" code_txt)
+ML%linenosgray\<open>fun ml_val code_txt = (ml_enclose "val _ = " "" code_txt)
 
 fun output_ml ctxt code_txt =
 let
@@ -52,12 +52,12 @@
    Pretty.str (Input.source_content code_txt)
 end
 
-val ml_checked_setup = Thy_Output.antiquotation_pretty_source @{binding "ML_checked"} (Scan.lift Args.text_input) output_ml*}
+val ml_checked_setup = Thy_Output.antiquotation_pretty_source @{binding "ML_checked"} (Scan.lift Args.text_input) output_ml\<close>
 
-setup {* ml_checked_setup *}
+setup \<open>ml_checked_setup\<close>
 
 
-text {*
+text \<open>
   The parser @{ML "(Scan.lift Args.text_input)"} in Line 7 parses a string, in this
   case the code, and then calls the function @{ML output_ml}. As mentioned
   before, the parsed code is sent to the ML-compiler in Line 4 using the
@@ -69,8 +69,8 @@
   line in the output. Therefore the use of @{ML "(space_explode \"\\n\" txt)"
   for txt} which produces such a list according to linebreaks.  There are a
   number of options for antiquotations that are observed by the function 
-  @{ML "output" in Document_Antiquotation} when printing the code (including @{text "[display]"} 
-  and @{text "[quotes]"}). The function @{ML "antiquotation_raw" in Thy_Output} in 
+  @{ML "output" in Document_Antiquotation} when printing the code (including \<open>[display]\<close> 
+  and \<open>[quotes]\<close>). The function @{ML "antiquotation_pretty_source" in Thy_Output} in 
   Line 7 sets up the new document antiquotation.
 
   \begin{readmore}
@@ -80,7 +80,7 @@
   Since we used the argument @{ML "Position.none"}, the compiler cannot give specific 
   information about the line number, in case an error is detected. We 
   can improve the code above slightly by writing 
-*}
+\<close>
 (* FIXME: remove
 ML%linenosgray{*fun output_ml ctxt (code_txt, pos) =
 let
@@ -100,12 +100,12 @@
 
 setup {* ml_checked_setup2 *}
 *)
-text {*
+text \<open>
   where in Lines 1 and 2 the positional information is properly treated. The
   parser @{ML Parse.position} encodes the positional information in the 
   result.
 
-  We can now write @{text "@{ML_checked2 \"2 + 3\"}"} in a document in order to
+  We can now write \<open>@{ML_checked "2 + 3"}\<close> in a document in order to
   obtain @{ML_checked "2 + 3"} and be sure that this code compiles until
   somebody changes the definition of addition.
 
@@ -119,18 +119,18 @@
   @{text [display] "@{ML_resp \"a_piece_of_code\" \"a_pattern\"}"}
   
   To add some convenience and also to deal with large outputs, the user can
-  give a partial specification by using ellipses. For example @{text "(\<dots>, \<dots>)"}
+  give a partial specification by using ellipses. For example \<open>(\<dots>, \<dots>)\<close>
   for specifying a pair.  In order to check consistency between the pattern
   and the output of the code, we have to change the ML-expression that is sent 
-  to the compiler: in @{text "ML_checked2"} we sent the expression @{text [quotes]
-  "val _ = a_piece_of_code"} to the compiler; now the wildcard @{text "_"}
+  to the compiler: in \<open>ML_checked\<close> we sent the expression @{text [quotes]
+  "val _ = a_piece_of_code"}; now the wildcard \<open>_\<close>
   must be replaced by the given pattern. However, we have to remove all
   ellipses from it and replace them by @{text [quotes] "_"}. The following 
   function will do this:
-*}
+\<close>
 
-ML%linenosgray{*fun ml_pat pat code =
-  ML_Lex.read "val" @ ML_Lex.read_source false pat @ ML_Lex.read " = " @ ML_Lex.read_source false code*}
+ML%linenosgray\<open>fun ml_pat pat code =
+  ML_Lex.read "val" @ ML_Lex.read_source false pat @ ML_Lex.read " = " @ ML_Lex.read_source false code\<close>
 
 (*
 ML %grayML{*fun ml_pat code_txt pat =
@@ -140,16 +140,16 @@
   ml_enclose ("val " ^ pat' ^ " = ") "" code_txt 
 end*}
 *)
-text {* 
+text \<open>
   Next we add a response indicator to the result using:
-*}
+\<close>
 
 
-ML %grayML{*fun add_resp pat = map (fn s => "> " ^ s) pat*}
+ML %grayML\<open>fun add_resp pat = map (fn s => "> " ^ s) pat\<close>
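 
 text \<open>
   For example, \<open>add_resp ["3", "4"]\<close> evaluates to \<open>["> 3", "> 4"]\<close>;
   as a quick check (not part of the antiquotation code):
 \<close>
 
 ML %grayML\<open>val resp = add_resp ["3", "4"]\<close>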
 
-text {* 
-  The rest of the code of @{text "ML_resp"} is: 
-*}
+text \<open>
+  The rest of the code of \<open>ML_resp\<close> is: 
+\<close>
 
 ML %linenosgray\<open>
 fun output_ml_resp ctxt (code_txt, pat) =
@@ -184,11 +184,11 @@
           (Scan.lift (Parse.position (Args.text_input -- Args.text_input))) 
              output_ml_resp*}
 *)
-setup {* ml_response_setup *}
+setup \<open>ml_response_setup\<close>
 
 (* FIXME *)
-text {* 
-  In comparison with @{text "ML_checked"}, we only changed the line about 
+text \<open>
+  In comparison with \<open>ML_checked\<close>, we only changed the line about 
   the compiler (Line~2), the lines about
   the output (Lines 4 to 7) and the parser in the setup (Line 11). Now 
   you can write
@@ -212,5 +212,5 @@
   pattern can only be given for values that can be constructed. This excludes
   values that are abstract datatypes, like @{ML_type thm}s and @{ML_type cterm}s.
 
-*}
+\<close>
 end
--- a/ProgTutorial/Recipes/ExternalSolver.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Recipes/ExternalSolver.thy	Tue May 14 17:10:47 2019 +0200
@@ -3,9 +3,9 @@
 begin
 
 
-section {* Executing an External Application (TBD) \label{rec:external}*}
+section \<open>Executing an External Application (TBD) \label{rec:external}\<close>
 
-text {*
+text \<open>
   {\bf Problem:}
   You want to use an external application.
   \smallskip
@@ -36,18 +36,18 @@
   one has to make sure that Isabelle can find the particular executable.
   One way to ensure this is by adding a Bash-like variable binding into
   one of Isabelle's settings file (prefer the user settings file usually to
-  be found at @{text "$HOME/.isabelle/etc/settings"}).
+  be found at \<open>$HOME/.isabelle/etc/settings\<close>).
   
-  For example, assume you want to use the application @{text foo} which
-  is here supposed to be located at @{text "/usr/local/bin/"}.
+  For example, assume you want to use the application \<open>foo\<close> which
+  is here supposed to be located at \<open>/usr/local/bin/\<close>.
   The following line has to be added to one of Isabelle's settings file:
 
-  @{text "FOO=/usr/local/bin/foo"}
+  \<open>FOO=/usr/local/bin/foo\<close>
 
   In Isabelle, this application may now be executed by
 
   @{ML_response_fake [display,gray] "Isabelle_System.bash_output \"$FOO\"" "\<dots>"}
-*}
+\<close>
 
 
 end
--- a/ProgTutorial/Recipes/Introspection.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Recipes/Introspection.thy	Tue May 14 17:10:47 2019 +0200
@@ -3,9 +3,9 @@
 imports "../Appendix"
 begin
 
-section {* Introspection of Theorems and Proofs \label{rec:introspection} *}
+section \<open>Introspection of Theorems and Proofs \label{rec:introspection}\<close>
 
-text{* 
+text\<open>
   {\bf Problem:} 
   How to obtain all theorems that are used in the proof of a theorem?\smallskip
 
@@ -14,15 +14,15 @@
   To introspect a theorem, let us define the following three functions that 
   analyse the @{ML_type_ind proof_body} data-structure from the structure 
   @{ML_struct Proofterm}.
-*}
+\<close>
 
-ML %grayML{*fun pthms_of (PBody {thms, ...}) = map #2 thms 
+ML %grayML\<open>fun pthms_of (PBody {thms, ...}) = map #2 thms 
 val get_names = (map Proofterm.thm_node_name) o pthms_of 
-val get_pbodies = map (Future.join o Proofterm.thm_node_body) o pthms_of *}
+val get_pbodies = map (Future.join o Proofterm.thm_node_body) o pthms_of\<close>
 
-text {* 
+text \<open>
   To see what their purpose is, consider the following three short proofs.
-*}
+\<close>
 
 lemma my_conjIa:
 shows "A \<and> B \<Longrightarrow> A \<and> B"
@@ -43,10 +43,10 @@
 apply(auto)
 done
 
-text {*
+text \<open>
   While the information about which theorems are used is obvious in
   the first two proofs, it is not obvious in the third, because of the
-  @{text auto}-step.  Fortunately, ``behind'' every proved theorem is
+  \<open>auto\<close>-step.  Fortunately, ``behind'' every proved theorem is
   a proof-tree that records all theorems that are employed for
   establishing this theorem.  We can traverse this proof-tree
   extracting this information.  Let us first extract the name of the
@@ -57,9 +57,9 @@
   |> Thm.proof_body_of
   |> get_names"
   "[\"Introspection.my_conjIa\"]"}
-*}
-text {*
-  whereby @{text "Introspection"} refers to the theory name in which
+\<close>
+text \<open>
+  where \<open>Introspection\<close> refers to the theory name in which
   we established the theorem @{thm [source] my_conjIa}. The function @{ML_ind
   proof_body_of in Thm} returns a part of the data that is stored in a
   theorem.  Notice that the first proof above references
@@ -75,11 +75,11 @@
   |> List.concat"
   "[\"HOL.conjunct2\", \"HOL.conjunct1\", \"HOL.conjI\", \"Pure.protectD\", 
   \"Pure.protectI\"]"}
-*}
-text {*
+\<close>
+text \<open>
   The theorems @{thm [source] protectD} and @{thm [source]
   protectI} that are internal theorems are always part of a
-  proof in Isabelle. Note also that applications of @{text assumption} do not
+  proof in Isabelle. Note also that applications of \<open>assumption\<close> do not
   count as a separate theorem, as you can see in the following code.
 
   @{ML_response_fake [display,gray]
@@ -89,8 +89,8 @@
   |> map get_names
   |> List.concat"
   "[\"Pure.protectD\", \"Pure.protectI\"]"}
-*}
-text {*
+\<close>
+text \<open>
   Interestingly, but not surprisingly, the proof of @{thm [source]
   my_conjIc} proceeds quite differently from @{thm [source] my_conjIa}
   and @{thm [source] my_conjIb}, as can be seen by the theorems that
@@ -105,8 +105,8 @@
   "[\"HOL.implies_True_equals\", \"HOL.Eq_TrueI\", \"HOL.simp_thms_25\", \"HOL.eq_reflection\", 
     \"HOL.conjunct2\", \"HOL.conjunct1\", \"HOL.TrueI\", \"Pure.protectD\",
     \"Pure.protectI\"]"}
-*}
-text {*
+\<close>
+text \<open>
   Of course we can also descend into the second level of the tree 
   ``behind'' @{thm [source] my_conjIa} say, which
   means we obtain the theorems that are used in order to prove
@@ -123,14 +123,14 @@
   |> duplicates (op =)"
   "[\"\", \"Pure.protectD\",
  \"Pure.protectI\"]"}
-*}
-text {*
+\<close>
+text \<open>
   \begin{readmore} 
   The data-structure @{ML_type proof_body} is implemented
   in the file @{ML_file "Pure/proofterm.ML"}. The implementation 
   of theorems and related functions are in @{ML_file "Pure/thm.ML"}.  
   \end{readmore}
-*}
+\<close>
 
 
 
--- a/ProgTutorial/Recipes/Oracle.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Recipes/Oracle.thy	Tue May 14 17:10:47 2019 +0200
@@ -2,9 +2,9 @@
 imports "../Appendix"
 begin
 
-section {* Writing an Oracle (TBD)\label{rec:oracle} *} 
+section \<open>Writing an Oracle (TBD)\label{rec:oracle}\<close> 
 
-text {*
+text \<open>
   {\bf Problem:}
   You want to use a fast, new decision procedure not based on Isabelle's
   tactics, and you do not care whether it is sound.
@@ -29,11 +29,11 @@
   Assume, that we have a decision procedure for such formulae, implemented
   in ML. Since we do not care how it works, we will use it here as an
   ``external solver'':
-*}
+\<close>
 
 ML_file "external_solver.ML"
 
-text {*
+text \<open>
   We do, however, know that the solver provides a function
   @{ML IffSolver.decide}.
   It takes a string representation of a formula and returns either
@@ -53,10 +53,10 @@
 
   Let us start with the translation function from Isabelle propositions into
   the solver's string representation. To increase efficiency while building
-  the string, we use functions from the @{text Buffer} module.
-  *}
+  the string, we use functions from the \<open>Buffer\<close> module.
+\<close>
 
-ML %grayML{*fun translate t =
+ML %grayML\<open>fun translate t =
   let
     fun trans t =
       (case t of
@@ -71,9 +71,9 @@
          Buffer.add n #>
          Buffer.add " "
       | _ => error "inacceptable term")
-  in Buffer.content (trans t Buffer.empty) end*}
+  in Buffer.content (trans t Buffer.empty) end\<close>
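 
 text \<open>
   The \<open>Buffer\<close> operations avoid the cost of repeatedly concatenating
   strings. A small standalone illustration (not part of the solver
   code) is:
 \<close>
 
 ML %grayML\<open>val s = Buffer.empty |> Buffer.add "And " |> Buffer.add "p " 
   |> Buffer.add "q " |> Buffer.content\<close>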
 
-text {*
+text \<open>
   Here is the string representation of the term @{term "p = (q = p)"}:
 
   @{ML_response 
@@ -91,9 +91,9 @@
   @{ML_response 
     "IffSolver.decide (translate @{term \"p = (q = p)\"})" 
     "false"}
-*}
+\<close>
 
-text {* 
+text \<open>
   Now, we combine these functions into an oracle. In general, an oracle may
   be given any input, but it has to return a certified proposition (a
   special term which is type-checked), out of which Isabelle's inference
@@ -103,14 +103,14 @@
   to first extract the term which is then passed to the translation and
   decision procedure. If the solver finds this term to be valid, we return
   the given proposition unchanged to be turned then into a theorem:
-*}
+\<close>
 
-oracle iff_oracle = {* fn ct =>
+oracle iff_oracle = \<open>fn ct =>
   if IffSolver.decide (translate (HOLogic.dest_Trueprop (Thm.term_of ct)))
   then ct
-  else error "Proof failed."*}
+  else error "Proof failed."\<close>
 
-text {*
+text \<open>
   Here is what we get when applying the oracle:
 
   @{ML_response_fake "iff_oracle @{cprop \"p = (p::bool)\"}" "p = p"}
@@ -118,25 +118,25 @@
   (FIXME: is there a better way to present the theorem?)
 
   To be able to use our oracle for Isar proofs, we wrap it into a tactic:
-*}
+\<close>
 
-ML %grayML{*fun iff_oracle_tac ctxt =
+ML %grayML\<open>fun iff_oracle_tac ctxt =
   CSUBGOAL (fn (goal, i) => 
     (case try iff_oracle goal of
       NONE => no_tac
-    | SOME thm => resolve_tac ctxt [thm] i))*}
+    | SOME thm => resolve_tac ctxt [thm] i))\<close>
 
-text {*
+text \<open>
   and create a new method solely based on this tactic:
-*}
+\<close>
 
-method_setup iff_oracle = {*
+method_setup iff_oracle = \<open>
    Scan.succeed (fn ctxt => (Method.SIMPLE_METHOD' (iff_oracle_tac ctxt)))
-*} "Oracle-based decision procedure for chains of equivalences"
+\<close> "Oracle-based decision procedure for chains of equivalences"
 
-text {*
+text \<open>
   Finally, we can test our oracle to prove some theorems:
-*}
+\<close>
 
 lemma "p = (p::bool)"
    by iff_oracle
@@ -145,9 +145,9 @@
    by iff_oracle
 
 
-text {*
+text \<open>
 (FIXME: say something about what the proof of the oracle is ... what do you mean?)
-*} 
+\<close> 
 
 
 end
--- a/ProgTutorial/Recipes/Sat.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Recipes/Sat.thy	Tue May 14 17:10:47 2019 +0200
@@ -3,9 +3,9 @@
 imports "../Appendix" "../First_Steps" 
 begin
 
-section {* SAT Solvers\label{rec:sat} *}
+section \<open>SAT Solvers\label{rec:sat}\<close>
 
-text {*
+text \<open>
   {\bf Problem:}
   You want to use a SAT solver to find out whether
   an Isabelle formula is satisfiable or not.\smallskip
@@ -25,19 +25,19 @@
   term into a propositional formula. Let
   us illustrate this function by translating @{term "A \<and> \<not>A \<or> B"}. 
   The function will return a propositional formula and a table. Suppose 
-*}
+\<close>
 
-ML %grayML{*val (pform, table) = 
-       Prop_Logic.prop_formula_of_term @{term "A \<and> \<not>A \<or> B"}  Termtab.empty*}
+ML %grayML\<open>val (pform, table) = 
+       Prop_Logic.prop_formula_of_term @{term "A \<and> \<not>A \<or> B"}  Termtab.empty\<close>
 
-text {*
+text \<open>
   then the resulting propositional formula @{ML pform} is 
   
   @{ML [display] "Or (And (BoolVar 1, Not (BoolVar 1)), BoolVar 2)" in Prop_Logic} 
   
 
   where indices are assigned for the variables 
-  @{text "A"} and @{text "B"}, respectively. This assignment is recorded 
+  \<open>A\<close> and \<open>B\<close>, respectively. This assignment is recorded 
   in the table that is given to the translation function and also returned 
   (appropriately updated) in the result. In the case above the
   input table is empty (i.e.~@{ML Termtab.empty}) and the output table is
@@ -49,12 +49,12 @@
   An index is also produced whenever the translation
   function cannot find an appropriate propositional formula for a term. 
   Attempting to translate @{term "\<forall>x::nat. P x"}
-*}
+\<close>
 
-ML %grayML{*val (pform', table') = 
-       Prop_Logic.prop_formula_of_term @{term "\<forall>x::nat. P x"}  Termtab.empty*}
+ML %grayML\<open>val (pform', table') = 
+       Prop_Logic.prop_formula_of_term @{term "\<forall>x::nat. P x"}  Termtab.empty\<close>
 
-text {*
+text \<open>
   returns @{ML "BoolVar 1" in Prop_Logic} for  @{ML pform'} and the table 
   @{ML table'} is:
 
@@ -71,11 +71,11 @@
   @{ML_response_fake [display,gray]
   "SAT_Solver.invoke_solver \"minisat\" pform"
   "SAT_Solver.SATISFIABLE assg"}
-*}
+\<close>
 
-text {*
+text \<open>
   determines that the formula @{ML pform} is satisfiable. If we inspect
-  the returned function @{text assg}
+  the returned function \<open>assg\<close>
 
   @{ML_response [display,gray]
 "let
@@ -85,7 +85,7 @@
 end"
   "(NONE, SOME true, NONE)"}
 
-  we obtain a possible assignment for the variables @{text "A"} an @{text "B"}
+  we obtain a possible assignment for the variables \<open>A\<close> and \<open>B\<close>
   that makes the formula satisfiable. 
 
   Note that we invoked the SAT solver with the string @{text [quotes] auto}. 
@@ -95,13 +95,13 @@
 
   There are also two tactics that make use of SAT solvers. One 
   is the tactic @{ML sat_tac in SAT}. For example 
-*}
+\<close>
 
 lemma "True"
-apply(tactic {* SAT.sat_tac @{context} 1 *})
+apply(tactic \<open>SAT.sat_tac @{context} 1\<close>)
 done
 
-text {*
+text \<open>
   However, for proving anything more exciting using @{ML "sat_tac" in SAT} you 
   have to use a SAT solver that can produce a proof. The internal 
   one is not usable for this. 
@@ -115,7 +115,7 @@
   "HOL/Tools/prop_logic.ML"}. The tables used in the translation function are
   implemented in @{ML_file "Pure/General/table.ML"}.  
   \end{readmore}
-*}
+\<close>
 
 
 end
--- a/ProgTutorial/Recipes/TimeLimit.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Recipes/TimeLimit.thy	Tue May 14 17:10:47 2019 +0200
@@ -2,8 +2,8 @@
 imports "../Appendix"
 begin
 
-section {* Restricting the Runtime of a Function\label{rec:timeout} *} 
-text {*
+section \<open>Restricting the Runtime of a Function\label{rec:timeout}\<close> 
+text \<open>
   {\bf Problem:}
   Your tool should run only a specified amount of time.\smallskip
 
@@ -11,13 +11,13 @@
   using the function @{ML apply in Timeout}.\smallskip
 
   Assume you defined the Ackermann function on the ML-level.
-*}
+\<close>
 
-ML %grayML{*fun ackermann (0, n) = n + 1
+ML %grayML\<open>fun ackermann (0, n) = n + 1
   | ackermann (m, 0) = ackermann (m - 1, 1)
-  | ackermann (m, n) = ackermann (m - 1, ackermann (m, n - 1)) *}
+  | ackermann (m, n) = ackermann (m - 1, ackermann (m, n - 1))\<close>
 
-text {*
+text \<open>
 
   Now the call 
 
@@ -31,7 +31,7 @@
   handle TIMEOUT => ~1"
 "~1"}
 
-  where @{text TimeOut} is the exception raised when the time limit
+  where \<open>TIMEOUT\<close> is the exception raised when the time limit
   is reached.
 
   Note that @{ML  "apply" in Timeout} is only meaningful when you use PolyML 5.2.1
@@ -45,5 +45,5 @@
 \end{readmore}
 
  
-*}
+\<close>
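 
 text \<open>
   If the computation finishes before the limit, @{ML apply in Timeout}
   simply returns the result; as a sketch, with a generous limit:
 \<close>
 
 ML %grayML\<open>val ok = Timeout.apply (Time.fromSeconds 10) ackermann (2, 10)\<close>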
 end
--- a/ProgTutorial/Recipes/Timing.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Recipes/Timing.thy	Tue May 14 17:10:47 2019 +0200
@@ -2,9 +2,9 @@
 imports "../Appendix"
 begin
 
-section {* Measuring Time\label{rec:timing} *} 
+section \<open>Measuring Time\label{rec:timing}\<close> 
 
-text {*
+text \<open>
  {\bf Problem:}
   You want to measure the running time of a tactic or function.\smallskip
 
@@ -12,7 +12,7 @@
   @{ML start in Timing} and @{ML result in Timing}.\smallskip
 
   Suppose you defined the Ackermann function on the Isabelle level. 
-*}
+\<close>
 
 fun 
  ackermann:: "(nat \<times> nat) \<Rightarrow> nat"
@@ -21,24 +21,23 @@
   | "ackermann (m, 0) = ackermann (m - 1, 1)"
   | "ackermann (m, n) = ackermann (m - 1, ackermann (m, n - 1))"
 
-text {* 
+text \<open>
   You can measure how long the simplifier takes to verify a datapoint
   of this function. The actual timing is done inside the
   wrapper function:
-*}
+\<close>
 
-ML %linenosgray{*fun timing_wrapper tac st =
+ML %linenosgray\<open>fun timing_wrapper tac st =
 let 
   val t_start = Timing.start ();
   val res = tac st;
   val t_end = Timing.result t_start;
 in
   (writeln (Timing.message t_end); res)
-end*}
+end\<close>
 
-text {*
-  Note that this function, in addition to a tactic, also takes a state @{text
-  "st"} as argument and applies this state to the tactic (Line 4). The reason is that
+text \<open>
+  Note that this function, in addition to a tactic, also takes a state \<open>st\<close> as argument and applies this state to the tactic (Line 4). The reason is that
   tactics are lazy functions and you need to force them to run, otherwise the
   timing will be meaningless. The simplifier tactic, amongst others,  can be 
   forced to run by just applying the state to it. But ``fully'' lazy tactics,
@@ -48,14 +47,14 @@
   The time between start and finish of the simplifier will be calculated 
   as the end time minus the start time.  An example of the
   wrapper is the proof
-*}
+\<close>
 
 lemma "ackermann (3, 4) = 125"
-apply(tactic {* 
-  timing_wrapper (simp_tac (@{context} addsimps @{thms "eval_nat_numeral"}) 1) *})
+apply(tactic \<open>
+  timing_wrapper (simp_tac (@{context} addsimps @{thms "eval_nat_numeral"}) 1)\<close>)
 done
 
-text {*
+text \<open>
   where it returns something on the scale of 3 seconds. We chose to return
   this information as a string, but the timing information is also accessible
   in number format.
@@ -65,7 +64,7 @@
   "Pure/General/timing.ML"} (for the PolyML compiler). Some more
   advanced functions are defined in @{ML_file "Pure/General/output.ML"}.
   \end{readmore}
-*}
+\<close>
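 
 text \<open>
   The same two functions can time any ML computation directly; a
   minimal sketch (the folded sum is just a dummy workload):
 \<close>
 
 ML %grayML\<open>val start = Timing.start ()
 val _ = fold (fn i => fn sum => sum + i) (1 upto 100000) 0
 val _ = writeln (Timing.message (Timing.result start))\<close>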
 
 
 
--- a/ProgTutorial/Recipes/USTypes.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Recipes/USTypes.thy	Tue May 14 17:10:47 2019 +0200
@@ -4,7 +4,7 @@
 begin
 
 
-section {* User Space Type-Systems (TBD) *}
+section \<open>User Space Type-Systems (TBD)\<close>
 
 
 
--- a/ProgTutorial/Solutions.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Solutions.thy	Tue May 14 17:10:47 2019 +0200
@@ -2,34 +2,34 @@
 imports First_Steps "Recipes/Timing"
 begin
 
-chapter {* Solutions to Most Exercises\label{ch:solutions} *}
+chapter \<open>Solutions to Most Exercises\label{ch:solutions}\<close>
 
-text {* \solution{fun:revsum} *}
+text \<open>\solution{fun:revsum}\<close>
 
-ML %grayML{*fun rev_sum 
+ML %grayML\<open>fun rev_sum 
   ((p as Const (@{const_name plus}, _)) $ t $ u) = p $ u $ rev_sum t
-  | rev_sum t = t *}
+  | rev_sum t = t\<close>
 
-text {* 
+text \<open>
   An alternative solution using the function @{ML_ind mk_binop in HOLogic} is:
- *}
+\<close>
 
-ML %grayML{*fun rev_sum t =
+ML %grayML\<open>fun rev_sum t =
 let
   fun dest_sum (Const (@{const_name plus}, _) $ u $ u') = u' :: dest_sum u
     | dest_sum u = [u]
 in
   foldl1 (HOLogic.mk_binop @{const_name plus}) (dest_sum t)
-end *}
+end\<close>
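 
 text \<open>
   A quick check (not part of the solution) applies \<open>rev_sum\<close> to a
   left-nested sum:
 \<close>
 
 ML %grayML\<open>val t = rev_sum @{term "(1::nat) + 2 + 3"}\<close>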
 
-text {* \solution{fun:makesum} *}
+text \<open>\solution{fun:makesum}\<close>
 
-ML %grayML{*fun make_sum t1 t2 =
-  HOLogic.mk_nat (HOLogic.dest_nat t1 + HOLogic.dest_nat t2) *}
+ML %grayML\<open>fun make_sum t1 t2 =
+  HOLogic.mk_nat (HOLogic.dest_nat t1 + HOLogic.dest_nat t2)\<close>
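 
 text \<open>
   For example (assuming, as in the exercise, that the arguments are
   \<open>Suc\<close>-terms which @{ML dest_nat in HOLogic} can decompose):
 \<close>
 
 ML %grayML\<open>val sum = make_sum @{term "Suc 0"} @{term "Suc (Suc 0)"}\<close>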
 
-text {* \solution{fun:killqnt} *}
+text \<open>\solution{fun:killqnt}\<close>
 
-ML %linenosgray{*val quantifiers = [@{const_name All}, @{const_name Ex}]
+ML %linenosgray\<open>val quantifiers = [@{const_name All}, @{const_name Ex}]
 
 fun kill_trivial_quantifiers trm =
 let
@@ -44,15 +44,15 @@
     | _ => t
 in 
   aux trm
-end*}
+end\<close>
 
-text {*
+text \<open>
   In Line 7 we traverse the term by first checking whether a term is an
   application of a constant with an abstraction. If the constant stands for
   a listed quantifier (see Line 1) and the bound variable does not occur
   as a loose bound variable in the body, then we delete the quantifier. 
   For this we have to increase all other dangling de Bruijn indices by
-  @{text "-1"} to account for the deleted quantifier. An example is 
+  \<open>-1\<close> to account for the deleted quantifier. An example is 
   as follows:
 
   @{ML_response_fake [display,gray]
@@ -61,21 +61,21 @@
   |> pretty_term @{context} 
   |> pwriteln"
   "\<forall>x z. P x = P z"}
-*}
+\<close>
 
 
 
-text {* \solution{fun:makelist} *}
+text \<open>\solution{fun:makelist}\<close>
 
-ML %grayML{*fun mk_rev_upto i = 
+ML %grayML\<open>fun mk_rev_upto i = 
   1 upto i
   |> map (HOLogic.mk_number @{typ int})
   |> HOLogic.mk_list @{typ int}
-  |> curry (op $) @{term "rev :: int list \<Rightarrow> int list"}*}
+  |> curry (op $) @{term "rev :: int list \<Rightarrow> int list"}\<close>
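 
 text \<open>
   For instance, \<open>mk_rev_upto 3\<close> builds the term \<open>rev [1, 2, 3]\<close>:
 \<close>
 
 ML %grayML\<open>val t = mk_rev_upto 3\<close>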
 
-text {* \solution{ex:debruijn} *}
+text \<open>\solution{ex:debruijn}\<close>
 
-ML %grayML{*fun P n = @{term "P::nat \<Rightarrow> bool"} $ (HOLogic.mk_number @{typ "nat"} n) 
+ML %grayML\<open>fun P n = @{term "P::nat \<Rightarrow> bool"} $ (HOLogic.mk_number @{typ "nat"} n) 
 
 fun rhs 1 = P 1
   | rhs n = HOLogic.mk_conj (P n, rhs (n - 1))
@@ -85,11 +85,11 @@
                  (HOLogic.mk_eq (P (m - 1), P m), rhs n), lhs (m - 1) n)
 
 fun de_bruijn n =
-  HOLogic.mk_Trueprop (HOLogic.mk_imp (lhs n n, rhs n))*}
+  HOLogic.mk_Trueprop (HOLogic.mk_imp (lhs n n, rhs n))\<close>
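 
 text \<open>
   For example, the de Bruijn formula for \<open>n = 2\<close> is obtained by:
 \<close>
 
 ML %grayML\<open>val fm = de_bruijn 2\<close>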
 
-text {* \solution{ex:scancmts} *}
+text \<open>\solution{ex:scancmts}\<close>
 
-ML %grayML{*val any = Scan.one (Symbol.not_eof)
+ML %grayML\<open>val any = Scan.one (Symbol.not_eof)
 
 val scan_cmt =
 let
@@ -103,10 +103,10 @@
 val parser = Scan.repeat (scan_cmt || any)
 
 val scan_all =
-      Scan.finite Symbol.stopper parser >> implode #> fst *}
+      Scan.finite Symbol.stopper parser >> implode #> fst\<close>
 
-text {*
-  By using @{text "#> fst"} in the last line, the function 
+text \<open>
+  By using \<open>#> fst\<close> in the last line, the function 
   @{ML scan_all} returns a string instead of the pair a parser would
   normally return. For example:
 
@@ -118,11 +118,11 @@
   (scan_all input1, scan_all input2)
 end"
 "(\"foo bar\", \"foo (**test**) bar (**test**)\")"}
-*}
+\<close>
 
-text {* \solution{ex:contextfree} *}
+text \<open>\solution{ex:contextfree}\<close>
 
-ML %grayML{*datatype expr = 
+ML %grayML\<open>datatype expr = 
    Number of int
  | Mult of expr * expr 
  | Add of expr * expr
@@ -135,12 +135,12 @@
    || parse_basic) xs
 and parse_expr xs =
   (parse_factor --| Parse.$$$ "+" -- parse_expr >> Add 
-   || parse_factor) xs*}
+   || parse_factor) xs\<close>
 
 
-text {* \solution{ex:dyckhoff} *}
+text \<open>\solution{ex:dyckhoff}\<close>
 
-text {* 
+text \<open>
   The axiom rule can be implemented with the function @{ML assume_tac}. The other
   rules correspond to the theorems:
 
@@ -164,7 +164,7 @@
   \end{center}
 
   For the other rules we need to prove the following lemmas.
-*}
+\<close>
 
 lemma impE1:
   shows "\<lbrakk>A \<longrightarrow> B; A; B \<Longrightarrow> R\<rbrakk> \<Longrightarrow> R"
@@ -177,12 +177,12 @@
   and   "\<lbrakk>(C = D) \<longrightarrow> B; (C \<longrightarrow> D) \<longrightarrow> ((D \<longrightarrow> C) \<longrightarrow> B) \<Longrightarrow> R\<rbrakk> \<Longrightarrow> R"
   by iprover+
 
-text {*
+text \<open>
   Now the tactic which applies a single rule can be implemented
   as follows.
-*}
+\<close>
 
-ML %linenosgray{*fun apply_tac ctxt =
+ML %linenosgray\<open>fun apply_tac ctxt =
 let
   val intros = @{thms conjI disjI1 disjI2 impI iffI}
   val elims = @{thms FalseE conjE disjE iffE impE2}
@@ -191,41 +191,41 @@
   ORELSE' resolve_tac ctxt intros
   ORELSE' eresolve_tac ctxt elims
   ORELSE' (eresolve_tac ctxt [@{thm impE1}] THEN' assume_tac ctxt)
-end*}
+end\<close>
 
-text {*
+text \<open>
   In Line 11 we apply the rule @{thm [source] impE1} in conjunction 
   with @{ML assume_tac} in order to reduce the number of possibilities that
   need to be explored. You can use the tactic as follows.
-*}
+\<close>
 
 lemma
   shows "((((P \<longrightarrow> Q) \<longrightarrow> P) \<longrightarrow> P) \<longrightarrow> Q) \<longrightarrow> Q"
-apply(tactic {* (DEPTH_SOLVE o apply_tac @{context}) 1 *})
+apply(tactic \<open>(DEPTH_SOLVE o apply_tac @{context}) 1\<close>)
 done
 
-text {*
+text \<open>
   We can use the tactic to prove or disprove automatically the
   de Bruijn formulae from Exercise \ref{ex:debruijn}.
-*}
+\<close>
 
-ML %grayML{*fun de_bruijn_prove ctxt n =
+ML %grayML\<open>fun de_bruijn_prove ctxt n =
 let 
   val goal = HOLogic.mk_Trueprop (HOLogic.mk_imp (lhs n n, rhs n))
 in
   Goal.prove ctxt ["P"] [] goal
    (fn _ => (DEPTH_SOLVE o apply_tac ctxt) 1)
-end*}
+end\<close>
 
-text {* 
+text \<open>
   You can use this function to prove de Bruijn formulae.
-*}
+\<close>
 
-ML %grayML{*de_bruijn_prove @{context} 3 *}
+ML %grayML\<open>de_bruijn_prove @{context} 3\<close>
 
-text {* \solution{ex:addsimproc} *}
+text \<open>\solution{ex:addsimproc}\<close>
 
-ML %grayML{*fun dest_sum term =
+ML %grayML\<open>fun dest_sum term =
   case term of 
     (@{term "(+):: nat \<Rightarrow> nat \<Rightarrow> nat"} $ t1 $ t2) =>
         (snd (HOLogic.dest_number t1), snd (HOLogic.dest_number t2))
@@ -245,32 +245,32 @@
 in
   SOME (get_sum_thm ctxt t' (dest_sum t'))
   handle TERM _ => NONE
-end*}
+end\<close>
 
-text {* The setup for the simproc is *}
+text \<open>The setup for the simproc is\<close>
 
-simproc_setup %gray add_sp ("t1 + t2") = {* K add_sp_aux *}
+simproc_setup %gray add_sp ("t1 + t2") = \<open>K add_sp_aux\<close>
  
-text {* and a test case is the lemma *}
+text \<open>and a test case is the lemma\<close>
 
 lemma "P (Suc (99 + 1)) ((0 + 0)::nat) (Suc (3 + 3 + 3)) ((4 + 1)::nat)"
-  apply(tactic {* simp_tac (put_simpset HOL_basic_ss @{context} addsimprocs [@{simproc add_sp}]) 1 *})
-txt {* 
+  apply(tactic \<open>simp_tac (put_simpset HOL_basic_ss @{context} addsimprocs [@{simproc add_sp}]) 1\<close>)
+txt \<open>
   where the simproc produces the goal state
   
   \begin{minipage}{\textwidth}
   @{subgoals [display]}
   \end{minipage}\bigskip
-*}(*<*)oops(*>*)
+\<close>(*<*)oops(*>*)
 
-text {* \solution{ex:addconversion} *}
+text \<open>\solution{ex:addconversion}\<close>
 
-text {*
+text \<open>
   The following code assumes the function @{ML dest_sum} from the previous
   exercise.
-*}
+\<close>
 
-ML %grayML{*fun add_simple_conv ctxt ctrm =
+ML %grayML\<open>fun add_simple_conv ctxt ctrm =
 let
   val trm = Thm.term_of ctrm
 in
@@ -282,25 +282,25 @@
 
 val add_conv = Conv.bottom_conv add_simple_conv
 
-fun add_tac ctxt = CONVERSION (add_conv ctxt)*}
+fun add_tac ctxt = CONVERSION (add_conv ctxt)\<close>
 
-text {*
+text \<open>
   A test case for this conversion is as follows
-*}
+\<close>
 
 lemma "P (Suc (99 + 1)) ((0 + 0)::nat) (Suc (3 + 3 + 3)) ((4 + 1)::nat)"
-  apply(tactic {* add_tac @{context} 1 *})?
-txt {* 
+  apply(tactic \<open>add_tac @{context} 1\<close>)?
+txt \<open>
   where it produces the goal state
   
   \begin{minipage}{\textwidth}
   @{subgoals [display]}
   \end{minipage}\bigskip
-*}(*<*)oops(*>*)
+\<close>(*<*)oops(*>*)
 
-text {* \solution{ex:compare} *}
+text \<open>\solution{ex:compare}\<close>
 
-text {* 
+text \<open>
   We use the timing function @{ML timing_wrapper} from Recipe~\ref{rec:timing}.
   To measure any difference between the simproc and conversion, we will
   mechanically create terms involving additions and then set up a goal to be 
@@ -312,9 +312,9 @@
   For constructing test cases, we first define a function that returns a 
   complete binary tree whose leaves are numbers and the nodes are 
   additions.
-*}
+\<close>
 
-ML %grayML{*fun term_tree n =
+ML %grayML\<open>fun term_tree n =
 let
   val count = Unsynchronized.ref 0; 
 
@@ -325,47 +325,47 @@
              $ (term_tree_aux (n - 1)) $ (term_tree_aux (n - 1))
 in
   term_tree_aux n
-end*}
+end\<close>
 
-text {*
+text \<open>
   This function generates for example:
 
   @{ML_response_fake [display,gray] 
   "pwriteln (pretty_term @{context} (term_tree 2))" 
   "(1 + 2) + (3 + 4)"} 
 
-  The next function constructs a goal of the form @{text "P \<dots>"} with a term 
+  The next function constructs a goal of the form \<open>P \<dots>\<close> with a term 
   produced by @{ML term_tree} filled in.
-*}
+\<close>
 
-ML %grayML{*fun goal n = HOLogic.mk_Trueprop (@{term "P::nat\<Rightarrow> bool"} $ (term_tree n))*}
+ML %grayML\<open>fun goal n = HOLogic.mk_Trueprop (@{term "P::nat\<Rightarrow> bool"} $ (term_tree n))\<close>
 
-text {*
+text \<open>
   Note that the goal needs to be wrapped in a @{term "Trueprop"}. Next we define
-  two tactics, @{text "c_tac"} and @{text "s_tac"}, for the conversion and simproc,
+  two tactics, \<open>c_tac\<close> and \<open>s_tac\<close>, for the conversion and simproc,
   respectively. The idea is to first apply the conversion (respectively simproc) and 
   then prove the remaining goal using @{ML "cheat_tac" in Skip_Proof}.
-*}
+\<close>
 
 ML Skip_Proof.cheat_tac
 
-ML %grayML{*local
+ML %grayML\<open>local
   fun mk_tac ctxt tac = 
         timing_wrapper (EVERY1 [tac, Skip_Proof.cheat_tac ctxt])
 in
   fun c_tac ctxt = mk_tac ctxt (add_tac ctxt) 
   fun s_tac ctxt = mk_tac ctxt (simp_tac 
     (put_simpset HOL_basic_ss ctxt addsimprocs [@{simproc add_sp}]))
-end*}
+end\<close>
 
-text {*
+text \<open>
   This is all we need to let the conversion run against the simproc:
-*}
+\<close>
 
-ML %grayML{*val _ = Goal.prove @{context} [] [] (goal 8) (fn {context, ...} => c_tac context)
-val _ = Goal.prove @{context} [] [] (goal 8) (fn {context, ...} => s_tac context)*}
+ML %grayML\<open>val _ = Goal.prove @{context} [] [] (goal 8) (fn {context, ...} => c_tac context)
+val _ = Goal.prove @{context} [] [] (goal 8) (fn {context, ...} => s_tac context)\<close>
 
-text {*
+text \<open>
   If you do the exercise, you can see that both ways of simplifying additions
   perform rather similarly, with perhaps some advantages for the
   simproc. That means the simplifier, even if much more complicated than
@@ -374,6 +374,6 @@
   conversions. Conversions only have clear advantages in special situations:
   for example if you need to have control over innermost or outermost
   rewriting, or when rewriting rules lead to non-termination.
-*}
+\<close>
 
 end
--- a/ProgTutorial/Tactical.thy	Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Tactical.thy	Tue May 14 17:10:47 2019 +0200
@@ -2,9 +2,9 @@
 imports Base First_Steps
 begin
 
-chapter {* Tactical Reasoning\label{chp:tactical} *}
-
-text {*
+chapter \<open>Tactical Reasoning\label{chp:tactical}\<close>
+
+text \<open>
    \begin{flushright}
   {\em
   ``The first thing I would say is that when you write a program, think of\\ 
@@ -26,14 +26,14 @@
   modified in a sequence of proof steps until all of them are discharged.
   In this chapter we explain how to implement simple tactics and how to combine them using
   tactic combinators. We also describe the simplifier, simprocs and conversions.
-*}
-
-section {* Basics of Reasoning with Tactics\label{sec:basictactics}*}
-
-text {*
+\<close>
+
+section \<open>Basics of Reasoning with Tactics\label{sec:basictactics}\<close>
+
+text \<open>
   To see how tactics work, let us first transcribe a simple \isacommand{apply}-style proof 
   into ML. Consider the following proof.
-*}
+\<close>
 
 lemma disj_swap: 
   shows "P \<or> Q \<Longrightarrow> Q \<or> P"
@@ -44,7 +44,7 @@
 apply(assumption)
 done
 
-text {*
+text \<open>
   This proof translates to the following ML-code.
   
 @{ML_response_fake [display,gray]
@@ -65,12 +65,11 @@
   function some assumptions in the third argument (there are no assumptions in
   the proof at hand). The second argument stands for a list of variables
   (given as strings). This list contains the variables that will be turned
-  into schematic variables once the goal is proved (in our case @{text P} and
-  @{text Q}). The last argument is the tactic that proves the goal. This
+  into schematic variables once the goal is proved (in our case \<open>P\<close> and
+  \<open>Q\<close>). The last argument is the tactic that proves the goal. This
   tactic can make use of the local assumptions (there are none in this
   example). The tactics @{ML_ind eresolve_tac in Tactic}, @{ML_ind resolve_tac in Tactic} and
-  @{ML_ind assume_tac in Tactic} in the code above correspond roughly to @{text
-  erule}, @{text rule} and @{text assumption}, respectively. The combinator
+  @{ML_ind assume_tac in Tactic} in the code above correspond roughly to \<open>erule\<close>, \<open>rule\<close> and \<open>assumption\<close>, respectively. The combinator
   @{ML THEN} strings the tactics together.
 
   TBD: Write something about @{ML_ind prove_common in Goal}.
@@ -89,64 +88,64 @@
   \index{raw\_tactic@ {\tt\slshape{}raw\_tactic}}
   During the development of automatic proof procedures, you will often find it
   necessary to test a tactic on examples. This can be conveniently done with
-  the command \isacommand{apply}@{text "(tactic \<verbopen> \<dots> \<verbclose>)"}. 
+  the command \isacommand{apply}\<open>(tactic \<verbopen> \<dots> \<verbclose>)\<close>. 
   Consider the following sequence of tactics
 
-*}
-
-ML %grayML{*fun foo_tac ctxt = 
+\<close>
+
+ML %grayML\<open>fun foo_tac ctxt = 
       (eresolve_tac ctxt [@{thm disjE}] 1
        THEN resolve_tac  ctxt [@{thm disjI2}] 1
        THEN assume_tac  ctxt 1
        THEN resolve_tac ctxt [@{thm disjI1}] 1
-       THEN assume_tac  ctxt 1)*}
-
-text {* and the Isabelle proof: *}
+       THEN assume_tac  ctxt 1)\<close>
+
+text \<open>and the Isabelle proof:\<close>
 
 lemma 
   shows "P \<or> Q \<Longrightarrow> Q \<or> P"
-apply(tactic {* foo_tac @{context} *})
+apply(tactic \<open>foo_tac @{context}\<close>)
 done
 
-text {*
-  By using @{text "tactic \<verbopen> \<dots> \<verbclose>"} you can call from the 
+text \<open>
+  By using \<open>tactic \<verbopen> \<dots> \<verbclose>\<close> you can call from the 
   user-level of Isabelle the tactic @{ML foo_tac} or 
   any other function that returns a tactic. There are some goal transformations
-  that are performed by @{text "tactic"}. They can be avoided by using 
-  @{text "raw_tactic"} instead.
+  that are performed by \<open>tactic\<close>. They can be avoided by using 
+  \<open>raw_tactic\<close> instead.
 
   The tactic @{ML foo_tac} is just a sequence of simple tactics stringed
   together by @{ML THEN}. As can be seen, each simple tactic in @{ML foo_tac}
   has a hard-coded number that stands for the subgoal analysed by the
-  tactic (@{text "1"} stands for the first, or top-most, subgoal). This hard-coding
+  tactic (\<open>1\<close> stands for the first, or top-most, subgoal). This hard-coding
   of goals is sometimes wanted, but usually it is not. To avoid the explicit numbering, 
   you can write
-*}
-
-ML %grayML{*fun foo_tac' ctxt = 
+\<close>
+
+ML %grayML\<open>fun foo_tac' ctxt = 
       (eresolve_tac ctxt [@{thm disjE}] 
        THEN' resolve_tac ctxt [@{thm disjI2}] 
        THEN' assume_tac ctxt 
        THEN' resolve_tac ctxt [@{thm disjI1}] 
-       THEN' assume_tac ctxt)*}text_raw{*\label{tac:footacprime}*}
-
-text {* 
+       THEN' assume_tac ctxt)\<close>text_raw\<open>\label{tac:footacprime}\<close>
+
+text \<open>
   where @{ML_ind THEN' in Tactical} is used instead of @{ML THEN in
   Tactical}. (For most combinators that combine tactics---@{ML THEN} is only
   one such combinator---a ``primed'' version exists.)  With @{ML foo_tac'} you
   can give the number for the subgoal explicitly when the tactic is called. So
   in the next proof you can first discharge the second subgoal, and
   then the first.
-*}
+\<close>
 
 lemma 
   shows "P1 \<or> Q1 \<Longrightarrow> Q1 \<or> P1"
   and   "P2 \<or> Q2 \<Longrightarrow> Q2 \<or> P2"
-apply(tactic {* foo_tac' @{context} 2 *})
-apply(tactic {* foo_tac' @{context} 1 *})
+apply(tactic \<open>foo_tac' @{context} 2\<close>)
+apply(tactic \<open>foo_tac' @{context} 1\<close>)
 done
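 
 text \<open>
   As a small sketch, the primed version can be specialised back to a
   fixed subgoal by partial application:
 \<close>
 
 ML %grayML\<open>fun foo_tac1 ctxt = foo_tac' ctxt 1\<close>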
 
-text {*
+text \<open>
   This kind of addressing is more difficult to achieve when the goal is 
   hard-coded inside the tactic. 
 
@@ -158,38 +157,38 @@
 
 
   \begin{isabelle}
-  @{text "*** empty result sequence -- proof command failed"}\\
-  @{text "*** At command \"apply\"."}
+  \<open>*** empty result sequence -- proof command failed\<close>\\
+  \<open>*** At command "apply".\<close>
   \end{isabelle}
 
   This means they failed. The reason for this error message is that tactics
   are functions mapping a goal state to a (lazy) sequence of successor
   states. Hence the type of a tactic is:
-*}
+\<close>
   
-ML %grayML{*type tactic = thm -> thm Seq.seq*}
-
-text {*
+ML %grayML\<open>type tactic = thm -> thm Seq.seq\<close>
+
+text \<open>
   By convention, if a tactic fails, then it should return the empty sequence. 
   Therefore, your own tactics should not raise exceptions 
   willy-nilly; only in very grave failure situations should a tactic raise the
-  exception @{text THM}.
+  exception \<open>THM\<close>.
 
   The simplest tactics are @{ML_ind no_tac in Tactical} and 
   @{ML_ind all_tac in Tactical}. The first returns the empty sequence and 
   is defined as
-*}
-
-ML %grayML{*fun no_tac thm = Seq.empty*}
-
-text {*
+\<close>
+
+ML %grayML\<open>fun no_tac thm = Seq.empty\<close>
+
+text \<open>
   which means @{ML no_tac} always fails. The second returns the given theorem wrapped 
   in a single member sequence; it is defined as
-*}
-
-ML %grayML{*fun all_tac thm = Seq.single thm*}
-
-text {*
+\<close>
+
+ML %grayML\<open>fun all_tac thm = Seq.single thm\<close>
+
+text \<open>
   which means @{ML all_tac} always succeeds, but also does not make any progress 
   with the proof.
 
@@ -197,16 +196,16 @@
   of Isabelle when using the command \isacommand{back}. For instance in the 
   following proof there are two possibilities for how to apply 
   @{ML foo_tac'}: either using the first assumption or the second.
-*}
+\<close>
 
 lemma 
   shows "\<lbrakk>P \<or> Q; P \<or> Q\<rbrakk> \<Longrightarrow> Q \<or> P"
-apply(tactic {* foo_tac' @{context} 1 *})
+apply(tactic \<open>foo_tac' @{context} 1\<close>)
 back
 done
 
 
-text {*
+text \<open>
   By using \isacommand{back}, we construct the proof that uses the
   second assumption. While in the proof above it does not really matter which 
   assumption is used, in more interesting cases provability might depend
@@ -225,16 +224,16 @@
   goal state is indeed a theorem. To shed more light on this,
   let us modify the code of @{ML all_tac} to obtain the following
   tactic
-*}
-
-ML %grayML{*fun my_print_tac ctxt thm =
+\<close>
+
+ML %grayML\<open>fun my_print_tac ctxt thm =
 let
   val _ = tracing (Pretty.string_of (pretty_thm_no_vars ctxt thm))
 in 
   Seq.single thm
-end*}
-
-text {*
+end\<close>
+
+text \<open>
   which prints out the given theorem (using the string-function defined in
   Section~\ref{sec:printing}) and then behaves like @{ML all_tac}. With this
   tactic we are in the position to inspect every goal state in a proof. For
@@ -245,30 +244,29 @@
 
   where @{term C} is the goal to be proved and the @{term "A\<^sub>i"} are
   the subgoals. So after setting up the lemma, the goal state is always of the
-  form @{text "C \<Longrightarrow> #C"}; when the proof is finished we are left with @{text
-  "#C"}. Since the goal @{term C} can potentially be an implication, there is a
+  form \<open>C \<Longrightarrow> #C\<close>; when the proof is finished we are left with \<open>#C\<close>. Since the goal @{term C} can potentially be an implication, there is a
   ``protector'' wrapped around it (the wrapper is the outermost constant
-  @{text "Const (\"Pure.prop\", bool \<Rightarrow> bool)"}; in the figure we make it visible
-  as a @{text #}). This wrapper prevents that premises of @{text C} are
+  \<open>Const ("Pure.prop", bool \<Rightarrow> bool)\<close>; in the figure we make it visible
+  as a \<open>#\<close>). This wrapper prevents premises of \<open>C\<close> from being
   misinterpreted as open subgoals. While tactics can operate on the subgoals
-  (the @{text "A\<^sub>i"} above), they are expected to leave the conclusion
+  (the \<open>A\<^sub>i\<close> above), they are expected to leave the conclusion
   @{term C} intact, with the exception of possibly instantiating schematic
   variables. This instantiations of schematic variables can be observed 
   on the user level. Have a look at the following definition and proof.
-*}
-
-text_raw {*
+\<close>
+
+text_raw \<open>
   \begin{figure}[p]
   \begin{boxedminipage}{\textwidth}
   \begin{isabelle}
-*}
+\<close>
 notation (output) "Pure.prop"  ("#_" [1000] 1000)
 
 lemma 
   shows "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B" 
-apply(tactic {* my_print_tac @{context} *})
-
-txt{* \begin{minipage}{\textwidth}
+apply(tactic \<open>my_print_tac @{context}\<close>)
+
+txt\<open>\begin{minipage}{\textwidth}
       @{subgoals [display]} 
       \end{minipage}\medskip      
 
@@ -279,12 +277,12 @@
       @{raw_goal_state}
       \end{tabular}}
       \end{minipage}\medskip
-*}
+\<close>
 
 apply(rule conjI)
-apply(tactic {* my_print_tac @{context} *})
-
-txt{* \begin{minipage}{\textwidth}
+apply(tactic \<open>my_print_tac @{context}\<close>)
+
+txt\<open>\begin{minipage}{\textwidth}
       @{subgoals [display]} 
       \end{minipage}\medskip
 
@@ -295,12 +293,12 @@
       @{raw_goal_state}
       \end{tabular}}
       \end{minipage}\medskip
-*}
+\<close>
 
 apply(assumption)
-apply(tactic {* my_print_tac @{context} *})
-
-txt{* \begin{minipage}{\textwidth}
+apply(tactic \<open>my_print_tac @{context}\<close>)
+
+txt\<open>\begin{minipage}{\textwidth}
       @{subgoals [display]} 
       \end{minipage}\medskip
 
@@ -311,12 +309,12 @@
       @{raw_goal_state}
       \end{tabular}}
       \end{minipage}\medskip
-*}
+\<close>
 
 apply(assumption)
-apply(tactic {* my_print_tac @{context} *})
-
-txt{* \begin{minipage}{\textwidth}
+apply(tactic \<open>my_print_tac @{context}\<close>)
+
+txt\<open>\begin{minipage}{\textwidth}
       @{subgoals [display]} 
       \end{minipage}\medskip 
   
@@ -327,21 +325,21 @@
       @{raw_goal_state}
       \end{tabular}}
       \end{minipage}\medskip   
-   *}
+\<close>
 (*<*)oops(*>*)
-text_raw {*  
+text_raw \<open>
   \end{isabelle}
   \end{boxedminipage}
   \caption{The figure shows an Isabelle snippet of a proof where each
   intermediate goal state is printed by the Isabelle system and by @{ML
   my_print_tac}. The latter shows the goal state as represented internally
   (highlighted boxes). This tactic shows that every goal state in Isabelle is
-  represented by a theorem: when you start the proof of \mbox{@{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow>
-  A \<and> B"}} the theorem is @{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B) \<Longrightarrow> #(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}; when
-  you finish the proof the theorem is @{text "#(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and>
-  B)"}.\label{fig:goalstates}}
+  represented by a theorem: when you start the proof of \mbox{\<open>\<lbrakk>A; B\<rbrakk> \<Longrightarrow>
+  A \<and> B\<close>} the theorem is \<open>(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B) \<Longrightarrow> #(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)\<close>; when
+  you finish the proof the theorem is \<open>#(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and>
+  B)\<close>.\label{fig:goalstates}}
   \end{figure}
-*}
+\<close>
 
 definition 
   EQ_TRUE 
@@ -353,17 +351,17 @@
 unfolding EQ_TRUE_def
 by (rule refl)
 
-text {*
+text \<open>
   By using \isacommand{schematic\_goal} it is possible to prove lemmas including
-  meta-variables on the user level. However, the proved theorem is not @{text "EQ_TRUE ?X"}, 
+  meta-variables on the user level. However, the proved theorem is not \<open>EQ_TRUE ?X\<close>, 
   as one might expect, but @{thm test}. We can test this with:
 
   \begin{isabelle}
   \isacommand{thm}~@{thm [source] test}\\
-  @{text ">"}~@{thm test}
+  \<open>>\<close>~@{thm test}
   \end{isabelle}
  
-  The reason for this result is that the schematic variable @{text "?X"} 
+  The reason for this result is that the schematic variable \<open>?X\<close> 
   is instantiated inside the proof to be @{term True} and then the 
   instantiation propagated to the ``outside''.
 
@@ -371,116 +369,115 @@
   For more information about the internals of goals see \isccite{sec:tactical-goals}.
   \end{readmore}
 
-*}
-
-
-section {* Simple Tactics\label{sec:simpletacs} *}
-
-text {*
+\<close>
+
+
+section \<open>Simple Tactics\label{sec:simpletacs}\<close>
+
+text \<open>
   In this section we will introduce more of the simple tactics in Isabelle. The 
   first one is @{ML_ind print_tac in Tactical}, which is quite useful 
   for low-level debugging of tactics. It just prints out a message and the current 
   goal state. Unlike @{ML my_print_tac} shown earlier, it prints the goal state 
   as the user would see it.  For example, processing the proof
-*}
+\<close>
  
 lemma 
   shows "False \<Longrightarrow> True"
-apply(tactic {* print_tac @{context} "foo message" *})
-txt{*gives:
+apply(tactic \<open>print_tac @{context} "foo message"\<close>)
+txt\<open>gives:
      \begin{isabelle}
-     @{text "foo message"}\\[3mm] 
+     \<open>foo message\<close>\\[3mm] 
      @{prop "False \<Longrightarrow> True"}\\
-     @{text " 1. False \<Longrightarrow> True"}\\
+     \<open> 1. False \<Longrightarrow> True\<close>\\
      \end{isabelle}
-   *}
+\<close>
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   A simple tactic for easy discharge of any proof obligations, even difficult ones, is 
   @{ML_ind cheat_tac in Skip_Proof} in the structure @{ML_struct Skip_Proof}. 
   This tactic corresponds to the Isabelle command \isacommand{sorry} and is 
   sometimes useful during the development of tactics.
-*}
+\<close>
 
 lemma 
   shows "False" and "Goldbach_conjecture"  
-apply(tactic {* Skip_Proof.cheat_tac @{context} 1 *})
-txt{*\begin{minipage}{\textwidth}
+apply(tactic \<open>Skip_Proof.cheat_tac @{context} 1\<close>)
+txt\<open>\begin{minipage}{\textwidth}
      @{subgoals [display]}
-     \end{minipage}*}
+     \end{minipage}\<close>
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   Another simple tactic is the function @{ML_ind assume_tac in Tactic}, which, as shown 
   earlier, corresponds to the assumption method.
-*}
+\<close>
 
 lemma 
   shows "P \<Longrightarrow> P"
-apply(tactic {* assume_tac @{context} 1 *})
-txt{*\begin{minipage}{\textwidth}
+apply(tactic \<open>assume_tac @{context} 1\<close>)
+txt\<open>\begin{minipage}{\textwidth}
      @{subgoals [display]}
-     \end{minipage}*}
+     \end{minipage}\<close>
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   Similarly, @{ML_ind resolve_tac in Tactic}, @{ML_ind dresolve_tac in Tactic}, @{ML_ind eresolve_tac
-  in Tactic} and @{ML_ind forward_tac in Tactic} correspond (roughly) to @{text
-  rule}, @{text drule}, @{text erule} and @{text frule}, respectively. Each of
+  in Tactic} and @{ML_ind forward_tac in Tactic} correspond (roughly) to \<open>rule\<close>, \<open>drule\<close>, \<open>erule\<close> and \<open>frule\<close>, respectively. Each of
   them takes a theorem as argument and attempts to apply it to a goal. Below
   are three self-explanatory examples.
-*}
+\<close>
 
 lemma 
   shows "P \<and> Q"
-apply(tactic {* resolve_tac @{context} [@{thm conjI}] 1 *})
-txt{*\begin{minipage}{\textwidth}
+apply(tactic \<open>resolve_tac @{context} [@{thm conjI}] 1\<close>)
+txt\<open>\begin{minipage}{\textwidth}
      @{subgoals [display]}
-     \end{minipage}*}
+     \end{minipage}\<close>
 (*<*)oops(*>*)
 
 lemma 
   shows "P \<and> Q \<Longrightarrow> False"
-apply(tactic {* eresolve_tac @{context} [@{thm conjE}] 1 *})
-txt{*\begin{minipage}{\textwidth}
+apply(tactic \<open>eresolve_tac @{context} [@{thm conjE}] 1\<close>)
+txt\<open>\begin{minipage}{\textwidth}
      @{subgoals [display]}
-     \end{minipage}*}
+     \end{minipage}\<close>
 (*<*)oops(*>*)
 
 lemma 
   shows "False \<and> True \<Longrightarrow> False"
-apply(tactic {* dresolve_tac @{context} [@{thm conjunct2}] 1 *})
-txt{*\begin{minipage}{\textwidth}
+apply(tactic \<open>dresolve_tac @{context} [@{thm conjunct2}] 1\<close>)
+txt\<open>\begin{minipage}{\textwidth}
      @{subgoals [display]}
-     \end{minipage}*}
+     \end{minipage}\<close>
 (*<*)oops(*>*)
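+
+text \<open>
+  There is no example above for @{ML_ind forward_tac in Tactic}, so here is
+  a small additional one (a sketch): in contrast to @{ML dresolve_tac}, it
+  keeps the assumption it applies the theorem to.
+\<close>
+
+lemma 
+  shows "False \<and> True \<Longrightarrow> False"
+apply(tactic \<open>forward_tac @{context} [@{thm conjunct1}] 1\<close>)
+txt\<open>\begin{minipage}{\textwidth}
+     @{subgoals [display]}
+     \end{minipage}\<close>
+(*<*)oops(*>*)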
 
-text {*
+text \<open>
   The function @{ML_ind resolve_tac in Tactic} 
   expects a list of theorems as argument. From this list it will apply the
   first applicable theorem (later theorems that are also applicable can be
   explored via the lazy sequences mechanism). Given the code
-*}
-
-ML %grayML{*fun resolve_xmp_tac ctxt = resolve_tac ctxt [@{thm impI}, @{thm conjI}]*}
-
-text {*
+\<close>
+
+ML %grayML\<open>fun resolve_xmp_tac ctxt = resolve_tac ctxt [@{thm impI}, @{thm conjI}]\<close>
+
+text \<open>
   an example for @{ML resolve_tac} is the following proof where first an outermost 
   implication is analysed and then an outermost conjunction.
-*}
+\<close>
 
 lemma 
   shows "C \<longrightarrow> (A \<and> B)" 
   and   "(A \<longrightarrow> B) \<and> C"
-apply(tactic {* resolve_xmp_tac @{context} 1 *})
-apply(tactic {* resolve_xmp_tac @{context} 2 *})
-txt{*\begin{minipage}{\textwidth}
+apply(tactic \<open>resolve_xmp_tac @{context} 1\<close>)
+apply(tactic \<open>resolve_xmp_tac @{context} 2\<close>)
+txt\<open>\begin{minipage}{\textwidth}
      @{subgoals [display]} 
-     \end{minipage}*}
+     \end{minipage}\<close>
 (*<*)oops(*>*)
 
-text {* 
+text \<open>
   Similar versions taking a list of theorems exist for the tactics 
    @{ML_ind dresolve_tac in Tactic},  
   @{ML_ind eresolve_tac in Tactic} and so on.
@@ -489,20 +486,20 @@
   list of theorems into the assumptions of the current goal state. Below we
   will insert the definitions for the constants @{term True} and @{term
   False}. So
-*}
+\<close>
 
 lemma 
   shows "True \<noteq> False"
-apply(tactic {* cut_facts_tac [@{thm True_def}, @{thm False_def}] 1 *})
-txt{*produces the goal state
+apply(tactic \<open>cut_facts_tac [@{thm True_def}, @{thm False_def}] 1\<close>)
+txt\<open>produces the goal state
      \begin{isabelle}
      @{subgoals [display]} 
-     \end{isabelle}*}
+     \end{isabelle}\<close>
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   Often proofs on the ML-level involve elaborate operations on assumptions and 
-  @{text "\<And>"}-quantified variables. To do such operations using the basic tactics 
+  \<open>\<And>\<close>-quantified variables. To do such operations using the basic tactics 
   shown so far is very unwieldy and brittle. Some convenience and
   safety is provided by the functions @{ML_ind FOCUS in Subgoal} and 
   @{ML_ind SUBPROOF in Subgoal}. These tactics fix the parameters 
@@ -510,16 +507,16 @@
   To see what happens, look at the function defined in Figure~\ref{fig:sptac}, which
   takes a record and just prints out the contents of this record. Then consider
   the proof:
-*}
-
-
-text_raw{*
+\<close>
+
+
+text_raw\<open>
 \begin{figure}[t]
 \begin{minipage}{\textwidth}
 \begin{isabelle}
-*}
-
-ML %grayML{*fun foc_tac {context, params, prems, asms, concl, schematics} = 
+\<close>
+
+ML %grayML\<open>fun foc_tac {context, params, prems, asms, concl, schematics} = 
 let 
   fun assgn1 f1 f2 xs =
     let
@@ -538,9 +535,9 @@
      ("schematics: ", assgn2 (pretty_cterm context) (snd schematics))]
 in
   tracing (Pretty.string_of (Pretty.chunks pps)); all_tac 
-end*}
-
-text_raw{*
+end\<close>
+
+text_raw\<open>
 \end{isabelle}
 \end{minipage}
 \caption{A function that prints out the various parameters provided by 
@@ -548,57 +545,57 @@
   in Section~\ref{sec:printing} for extracting strings from @{ML_type cterm}s 
   and @{ML_type thm}s.\label{fig:sptac}}
 \end{figure}
-*}
+\<close>
 
 schematic_goal
   shows "\<And>x y. A x y \<Longrightarrow> B y x \<longrightarrow> C (?z y) x"
-apply(tactic {* Subgoal.FOCUS foc_tac @{context} 1 *})
-
-txt {*
+apply(tactic \<open>Subgoal.FOCUS foc_tac @{context} 1\<close>)
+
+txt \<open>
   The tactic produces the following printout:
 
   \begin{quote}\small
   \begin{tabular}{ll}
-  params:      & @{text "x:= x"}, @{text "y:= y"}\\
+  params:      & \<open>x:= x\<close>, \<open>y:= y\<close>\\
   assumptions: & @{term "A x y"}\\
   conclusion:  & @{term "B y x \<longrightarrow> C (z y) x"}\\
   premises:    & @{term "A x y"}\\
-  schematics:  & @{text "?z:=z"}
+  schematics:  & \<open>?z:=z\<close>
   \end{tabular}
   \end{quote}
 
-  The @{text params} and @{text schematics} stand for list of pairs where the
-  left-hand side of @{text ":="} is replaced by the right-hand side inside the
+  The \<open>params\<close> and \<open>schematics\<close> stand for lists of pairs where the
+  left-hand side of \<open>:=\<close> is replaced by the right-hand side inside the
   tactic.  Notice that in the actual output the variables @{term x} and @{term
   y} have a brown colour. Although they are parameters in the original goal,
   they are fixed inside the tactic. By convention these fixed variables are
-  printed in brown colour.  Similarly the schematic variable @{text ?z}. The
+  printed in brown colour; the same applies to the schematic variable \<open>?z\<close>. The
   assumption, or premise, @{prop "A x y"} is bound as @{ML_type cterm} to the
-  record-variable @{text asms}, but also as @{ML_type thm} to @{text prems}.
-
-  If we continue the proof script by applying the @{text impI}-rule
-*}
+  record-variable \<open>asms\<close>, but also as @{ML_type thm} to \<open>prems\<close>.
+
+  If we continue the proof script by applying the \<open>impI\<close>-rule
+\<close>
 
 apply(rule impI)
-apply(tactic {* Subgoal.FOCUS foc_tac @{context} 1 *})
-
-txt {*
+apply(tactic \<open>Subgoal.FOCUS foc_tac @{context} 1\<close>)
+
+txt \<open>
   then the tactic prints out: 
 
   \begin{quote}\small
   \begin{tabular}{ll}
-  params:      & @{text "x:= x"}, @{text "y:= y"}\\
+  params:      & \<open>x:= x\<close>, \<open>y:= y\<close>\\
   assumptions: & @{term "A x y"}, @{term "B y x"}\\
   conclusion:  & @{term "C (z y) x"}\\
   premises:    & @{term "A x y"}, @{term "B y x"}\\
-  schematics:  & @{text "?z:=z"}
+  schematics:  & \<open>?z:=z\<close>
   \end{tabular}
   \end{quote}
-*}
+\<close>
 (*<*)oops(*>*)
 
-text {*
-  Now also @{term "B y x"} is an assumption bound to @{text asms} and @{text prems}.
+text \<open>
+  Now also @{term "B y x"} is an assumption bound to \<open>asms\<close> and \<open>prems\<close>.
 
   One difference between the tactics @{ML SUBPROOF} and @{ML FOCUS in Subgoal}
   is that the former expects that the goal is solved completely, which the
@@ -609,43 +606,43 @@
   state where there is only a conclusion. This means the tactics @{ML dresolve_tac} and 
   the like are of no use for manipulating the goal state. The assumptions inside 
   @{ML FOCUS in Subgoal} and @{ML SUBPROOF} are given as cterms and theorems in 
-  the arguments @{text "asms"} and @{text "prems"}, respectively. This 
+  the arguments \<open>asms\<close> and \<open>prems\<close>, respectively. This 
   means we can apply them using the usual assumption tactics. With this you can 
   for example easily implement a tactic that behaves almost like @{ML assume_tac}:
-*}
-
-ML %grayML{*val atac' = Subgoal.FOCUS (fn {prems, context, ...} => resolve_tac context prems 1)*}
-
-text {*
+\<close>
+
+ML %grayML\<open>val atac' = Subgoal.FOCUS (fn {prems, context, ...} => resolve_tac context prems 1)\<close>
+
+text \<open>
   If you apply @{ML atac'} to the next lemma
-*}
+\<close>
 
 lemma 
   shows "\<lbrakk>B x y; A x y; C x y\<rbrakk> \<Longrightarrow> A x y"
-apply(tactic {* atac' @{context} 1 *})
-txt{* it will produce
-      @{subgoals [display]} *}
+apply(tactic \<open>atac' @{context} 1\<close>)
+txt\<open>it will produce
+      @{subgoals [display]}\<close>
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   Notice that @{ML atac'} inside @{ML FOCUS in Subgoal} calls @{ML
-  resolve_tac} with the subgoal number @{text "1"} and also the outer call to
-  @{ML FOCUS in Subgoal} in the \isacommand{apply}-step uses @{text "1"}. This
+  resolve_tac} with the subgoal number \<open>1\<close> and also the outer call to
+  @{ML FOCUS in Subgoal} in the \isacommand{apply}-step uses \<open>1\<close>. This
   is another advantage of @{ML FOCUS in Subgoal} and @{ML SUBPROOF}: the
   addressing inside it is completely local to the tactic inside the
   subproof. It is therefore possible to also apply @{ML atac'} to the second
   goal by just writing:
 
-*}
+\<close>
 
 lemma 
   shows "True" 
   and   "\<lbrakk>B x y; A x y; C x y\<rbrakk> \<Longrightarrow> A x y"
-apply(tactic {* atac' @{context} 2 *})
+apply(tactic \<open>atac' @{context} 2\<close>)
 apply(rule TrueI)
 done
 
-text {*
+text \<open>
   To sum up, both @{ML FOCUS in Subgoal} and @{ML SUBPROOF} are rather
   convenient, but can impose a considerable run-time penalty in automatic
   proofs. If speed is of the essence, then maybe the two lower level combinators 
@@ -665,9 +662,9 @@
   cterm}). With them you can implement a tactic that applies a rule according
   to the topmost logic connective in the subgoal (to illustrate this we only
   analyse a few connectives). The code of the tactic is as follows.
-*}
-
-ML %linenosgray{*fun select_tac ctxt (t, i) =
+\<close>
+
+ML %linenosgray\<open>fun select_tac ctxt (t, i) =
   case t of
      @{term "Trueprop"} $ t' => select_tac ctxt (t', i)
    | @{term "(\<Longrightarrow>)"} $ _ $ t' => select_tac ctxt (t', i)
@@ -675,15 +672,15 @@
    | @{term "(\<longrightarrow>)"} $ _ $ _ => resolve_tac ctxt [@{thm impI}] i
    | @{term "Not"} $ _ => resolve_tac ctxt [@{thm notI}] i
    | Const (@{const_name "All"}, _) $ _ => resolve_tac ctxt [@{thm allI}] i
-   | _ => all_tac*}text_raw{*\label{tac:selecttac}*}
-
-text {*
+   | _ => all_tac\<close>text_raw\<open>\label{tac:selecttac}\<close>
+
+text \<open>
   The input of the function is a term representing the subgoal and a number
   specifying the subgoal of interest. In Line 3 you need to descend under the
   outermost @{term "Trueprop"} in order to get to the connective you would like to
   analyse. Otherwise goals like @{prop "A \<and> B"} are not properly
   analysed. Similarly with meta-implications in the next line.  While for the
-  first five patterns we can use the @{text "@term"}-antiquotation to
+  first five patterns we can use the \<open>@{term \<dots>}\<close>-antiquotation to
   construct the patterns, the pattern in Line 8 cannot be constructed in this
   way. The reason is that an antiquotation would fix the type of the
   quantified variable. So you really have to construct this pattern using the
@@ -695,36 +692,36 @@
 
 
   Let us now see how to apply this tactic. Consider the four goals:
-*}
+\<close>
 
 
 lemma 
   shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
-apply(tactic {* SUBGOAL (select_tac @{context}) 4 *})
-apply(tactic {* SUBGOAL (select_tac @{context}) 3 *})
-apply(tactic {* SUBGOAL (select_tac @{context}) 2 *})
-apply(tactic {* SUBGOAL (select_tac @{context}) 1 *})
-txt{* \begin{minipage}{\textwidth}
+apply(tactic \<open>SUBGOAL (select_tac @{context}) 4\<close>)
+apply(tactic \<open>SUBGOAL (select_tac @{context}) 3\<close>)
+apply(tactic \<open>SUBGOAL (select_tac @{context}) 2\<close>)
+apply(tactic \<open>SUBGOAL (select_tac @{context}) 1\<close>)
+txt\<open>\begin{minipage}{\textwidth}
       @{subgoals [display]}
-      \end{minipage} *}
+      \end{minipage}\<close>
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   where in all but the last the tactic applies an introduction rule. 
   Note that we applied the tactic to the goals in ``reverse'' order. 
   This is a trick in order to be independent of the subgoals that are 
   produced by the rule. If we had applied it in the other order 
-*}
+\<close>
 
 lemma 
   shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
-apply(tactic {* SUBGOAL (select_tac @{context}) 1 *})
-apply(tactic {* SUBGOAL (select_tac @{context}) 3 *})
-apply(tactic {* SUBGOAL (select_tac @{context}) 4 *})
-apply(tactic {* SUBGOAL (select_tac @{context}) 5 *})
+apply(tactic \<open>SUBGOAL (select_tac @{context}) 1\<close>)
+apply(tactic \<open>SUBGOAL (select_tac @{context}) 3\<close>)
+apply(tactic \<open>SUBGOAL (select_tac @{context}) 4\<close>)
+apply(tactic \<open>SUBGOAL (select_tac @{context}) 5\<close>)
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   then we have to be careful not to apply the tactic to the two subgoals produced by 
   the first goal. Doing this can result in quite messy code. In contrast, 
   the ``reverse application'' is easy to implement.
@@ -746,31 +743,31 @@
   applies theorems as shown above. The reason is that a number of theorems
   introduce schematic variables into the goal state. Consider for example the
   proof
-*}
+\<close>
 
 lemma 
   shows "\<forall>x \<in> A. P x \<Longrightarrow> Q x"
-apply(tactic {* dresolve_tac @{context} [@{thm bspec}] 1 *})
-txt{*\begin{minipage}{\textwidth}
+apply(tactic \<open>dresolve_tac @{context} [@{thm bspec}] 1\<close>)
+txt\<open>\begin{minipage}{\textwidth}
      @{subgoals [display]} 
-     \end{minipage}*}
+     \end{minipage}\<close>
 (*<*)oops(*>*)
 
-text {*
-  where the application of theorem @{text bspec} generates two subgoals involving the
-  new schematic variable @{text "?x"}. Now, if you are not careful, tactics 
+text \<open>
+  where the application of theorem \<open>bspec\<close> generates two subgoals involving the
+  new schematic variable \<open>?x\<close>. Now, if you are not careful, tactics 
   applied to the first subgoal might instantiate this schematic variable in such a 
-  way that the second subgoal becomes unprovable. If it is clear what the @{text "?x"}
+  way that the second subgoal becomes unprovable. If it is clear what the \<open>?x\<close>
   should be, then this situation can be avoided by introducing a more
-  constrained version of the @{text bspec}-theorem. One way to give such 
+  constrained version of the \<open>bspec\<close>-theorem. One way to give such 
   constraints is by pre-instantiating theorems with other theorems. 
   The function @{ML_ind RS in Drule}, for example, does this.
   
   @{ML_response_fake [display,gray]
   "@{thm disjI1} RS @{thm conjI}" "\<lbrakk>?P1; ?Q\<rbrakk> \<Longrightarrow> (?P1 \<or> ?Q1) \<and> ?Q"}
 
-  In this example it instantiates the first premise of the @{text conjI}-theorem 
-  with the theorem @{text disjI1}. If the instantiation is impossible, as in the 
+  In this example it instantiates the first premise of the \<open>conjI\<close>-theorem 
+  with the theorem \<open>disjI1\<close>. If the instantiation is impossible, as in the 
   case of
 
   @{ML_response_fake_both [display,gray]
@@ -817,58 +814,58 @@
 
   \begin{isabelle}
   \isacommand{thm}~@{thm [source] cong}\\
-  @{text "> "}~@{thm cong}
+  \<open>> \<close>~@{thm cong}
   \end{isabelle}
 
   is applicable in the following proof producing the subgoals
   @{term "t x = s u"} and @{term "y = w"}. 
-*}
+\<close>
 
 lemma 
   fixes x y u w::"'a"
   shows "t x y = s u w"
 apply(rule cong)
-txt{*\begin{minipage}{\textwidth}
+txt\<open>\begin{minipage}{\textwidth}
      @{subgoals [display]}
-     \end{minipage}*}
+     \end{minipage}\<close>
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   As you can see this is unfortunately \emph{not} the case if we apply @{thm [source] 
-  cong} with the method @{text rule}. The problem is
+  cong} with the method \<open>rule\<close>. The problem is
   that higher-order unification produces an instantiation that is not the
   intended one. While we can use \isacommand{back} to interactively find the
   intended instantiation, this is not an option for an automatic proof
   procedure. Fortunately, the tactic @{ML_ind cong_tac in Cong_Tac} helps 
   with applying congruence theorems and finding the intended instantiation.
   For example
-*}
+\<close>
 
 lemma 
   fixes x y u w::"'a"
   shows "t x y = s u w"
-apply(tactic {* Cong_Tac.cong_tac @{context} @{thm cong} 1 *})
-txt{*\begin{minipage}{\textwidth}
+apply(tactic \<open>Cong_Tac.cong_tac @{context} @{thm cong} 1\<close>)
+txt\<open>\begin{minipage}{\textwidth}
      @{subgoals [display]}
-     \end{minipage}*}
+     \end{minipage}\<close>
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   However, frequently it is necessary to explicitly match a theorem against a goal
   state and in doing so construct manually an appropriate instantiation. Imagine 
   you have the theorem
-*}
+\<close>
 
 lemma rule:
   shows "\<lbrakk>f = g; x = y\<rbrakk> \<Longrightarrow> R (f x) (g y)"
 sorry
 
-text {* 
+text \<open>
   and you want to apply it to the goal @{term "(t\<^sub>1 t\<^sub>2) \<le>
   (s\<^sub>1 s\<^sub>2)"}. Since in the theorem all variables are
-  schematic, we have a nasty higher-order unification problem and @{text rtac}
+  schematic, we have a nasty higher-order unification problem and \<open>rtac\<close>
   will not be able to apply this rule in the way we want. For the goal at hand 
-  we want to use it so that @{term R} is instantiated to the constant @{text \<le>} and
+  we want to use it so that @{term R} is instantiated to the constant \<open>\<le>\<close> and
   similarly ``obvious'' instantiations for the other variables.  To achieve this 
   we need to match the conclusion of 
   @{thm [source] rule} against the goal reading off an instantiation for
@@ -877,34 +874,34 @@
   that can be used to instantiate the theorem. The instantiation 
   can be done with the function @{ML_ind instantiate_normalize in Drule}. To automate 
   this we implement the following function.
-*}
-
-ML %linenosgray{*fun fo_rtac thm = Subgoal.FOCUS (fn {context, concl, ...} =>
+\<close>
+
+ML %linenosgray\<open>fun fo_rtac thm = Subgoal.FOCUS (fn {context, concl, ...} =>
   let 
     val concl_pat = Drule.strip_imp_concl (Thm.cprop_of thm)
     val insts = Thm.first_order_match (concl_pat, concl)
   in
     resolve_tac context [(Drule.instantiate_normalize insts thm)] 1
-  end)*}
-
-text {*
+  end)\<close>
+
+text \<open>
   Note that we use @{ML FOCUS in Subgoal} because it gives us direct access
   to the conclusion of the goal state, but also because this function 
   takes care of correctly handling parameters that might be present
   in the goal state. In Line 3 we use the function @{ML_ind strip_imp_concl in Drule}
   for calculating the conclusion of a theorem (produced as @{ML_type cterm}).
   An example of @{ML fo_rtac} is as follows.
-*}
+\<close>
 
 lemma
   shows "\<And>t\<^sub>1 s\<^sub>1 (t\<^sub>2::'a) (s\<^sub>2::'a). (t\<^sub>1 t\<^sub>2) \<le> (s\<^sub>1 s\<^sub>2)"
-apply(tactic {* fo_rtac @{thm rule} @{context} 1 *})
-txt{*\begin{minipage}{\textwidth}
+apply(tactic \<open>fo_rtac @{thm rule} @{context} 1\<close>)
+txt\<open>\begin{minipage}{\textwidth}
      @{subgoals [display]}
-     \end{minipage}*}
+     \end{minipage}\<close>
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   We obtain the intended subgoals, and the parameters are also correctly
   introduced in both of them. Such manual instantiations are quite frequently
   necessary in order to appropriately constrain the application of theorems. 
@@ -916,83 +913,83 @@
   Functions for instantiating schematic variables in theorems are defined
   in @{ML_file "Pure/drule.ML"}.
   \end{readmore}
-*}
-
-section {* Tactic Combinators *}
-
-text {* 
+\<close>
+
+section \<open>Tactic Combinators\<close>
+
+text \<open>
   The purpose of tactic combinators is to build compound tactics out of
   smaller tactics. In the previous section we already used @{ML_ind THEN in Tactical}, 
   which just strings together two tactics in a sequence. For example:
-*}
+\<close>
 
 lemma 
   shows "(Foo \<and> Bar) \<and> False"
-apply(tactic {* resolve_tac @{context} [@{thm conjI}] 1 THEN resolve_tac @{context} [@{thm conjI}] 1 *})
-txt {* \begin{minipage}{\textwidth}
+apply(tactic \<open>resolve_tac @{context} [@{thm conjI}] 1 THEN resolve_tac @{context} [@{thm conjI}] 1\<close>)
+txt \<open>\begin{minipage}{\textwidth}
        @{subgoals [display]} 
-       \end{minipage} *}
+       \end{minipage}\<close>
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   If you want to avoid the hard-coded subgoal addressing in each component, 
   then, as seen earlier, you can use the ``primed'' version of @{ML THEN}. 
   For example:
-*}
+\<close>
 
 lemma 
   shows "(Foo \<and> Bar) \<and> False"
-apply(tactic {* (resolve_tac @{context} [@{thm conjI}] THEN' resolve_tac @{context} [@{thm conjI}]) 1 *})
-txt {* \begin{minipage}{\textwidth}
+apply(tactic \<open>(resolve_tac @{context} [@{thm conjI}] THEN' resolve_tac @{context} [@{thm conjI}]) 1\<close>)
+txt \<open>\begin{minipage}{\textwidth}
        @{subgoals [display]} 
-       \end{minipage} *}
+       \end{minipage}\<close>
 (*<*)oops(*>*)
 
-text {* 
+text \<open>
   Here you have to specify the subgoal of interest only once and it is
   consistently applied to each component.  For most tactic combinators such a
   ``primed'' version exists and in what follows we will usually prefer it over
   the ``unprimed'' one.
 
-  The tactic combinator @{ML_ind RANGE in Tactical} takes a list of @{text n}
-  tactics, say, and applies them to each of the first @{text n} subgoals. 
+  The tactic combinator @{ML_ind RANGE in Tactical} takes a list of \<open>n\<close>
+  tactics, say, and applies them to each of the first \<open>n\<close> subgoals. 
   For example below we first apply the introduction rule for conjunctions
   and then apply a tactic to each of the two subgoals. 
-*}
+\<close>
 
 lemma 
   shows "A \<Longrightarrow> True \<and> A"
-apply(tactic {* (resolve_tac @{context} [@{thm conjI}] 
-                 THEN' RANGE [resolve_tac @{context} [@{thm TrueI}], assume_tac @{context}]) 1 *})
-txt {* \begin{minipage}{\textwidth}
+apply(tactic \<open>(resolve_tac @{context} [@{thm conjI}] 
+                 THEN' RANGE [resolve_tac @{context} [@{thm TrueI}], assume_tac @{context}]) 1\<close>)
+txt \<open>\begin{minipage}{\textwidth}
        @{subgoals [display]} 
-       \end{minipage} *}
+       \end{minipage}\<close>
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   If there is a list of tactics that should all be applied in sequence on
   one specified subgoal, you can use the combinator @{ML_ind EVERY' in
   Tactical}. For example the function @{ML foo_tac'} from
   page~\pageref{tac:footacprime} can also be written as:
-*}
-
-ML %grayML{*fun foo_tac'' ctxt = 
+\<close>
+
+ML %grayML\<open>fun foo_tac'' ctxt = 
   EVERY' [eresolve_tac ctxt [@{thm disjE}], resolve_tac ctxt [@{thm disjI2}], 
-          assume_tac ctxt, resolve_tac ctxt [@{thm disjI1}], assume_tac ctxt]*}
-
-text {*
+          assume_tac ctxt, resolve_tac ctxt [@{thm disjI1}], assume_tac ctxt]\<close>
+
+text \<open>
   There is even another way of implementing this tactic: in automatic proof
   procedures (in contrast to tactics that might be called by the user) there
   are often long lists of tactics that are applied to the first
   subgoal. Instead of writing the code above and then calling @{ML "foo_tac'' @{context} 1"}, 
   you can also just write
-*}
-
-ML %grayML{*fun foo_tac1 ctxt = 
+\<close>
+
+ML %grayML\<open>fun foo_tac1 ctxt = 
   EVERY1 [eresolve_tac ctxt [@{thm disjE}], resolve_tac ctxt [@{thm disjI2}], 
-          assume_tac ctxt, resolve_tac ctxt [@{thm disjI1}], assume_tac ctxt]*}
-
-text {*
+          assume_tac ctxt, resolve_tac ctxt [@{thm disjI1}], assume_tac ctxt]\<close>
+
+text \<open>
   and call @{ML foo_tac1}.  
 
   With the combinators @{ML THEN'}, @{ML EVERY'} and @{ML_ind  EVERY1 in Tactical} it must be
@@ -1001,99 +998,99 @@
   then you can use the combinator @{ML_ind  ORELSE' in Tactical} for two tactics, and @{ML_ind
    FIRST' in Tactical} (or @{ML_ind  FIRST1 in Tactical}) for a list of tactics. For example, the tactic
 
-*}
-
-ML %grayML{*fun orelse_xmp_tac ctxt = 
-  resolve_tac ctxt [@{thm disjI1}] ORELSE' resolve_tac ctxt [@{thm conjI}]*}
-
-text {*
-  will first try out whether theorem @{text disjI} applies and in case of failure 
-  will try @{text conjI}. To see this consider the proof
-*}
+\<close>
+
+ML %grayML\<open>fun orelse_xmp_tac ctxt = 
+  resolve_tac ctxt [@{thm disjI1}] ORELSE' resolve_tac ctxt [@{thm conjI}]\<close>
+
+text \<open>
+  will first try out whether theorem \<open>disjI1\<close> applies and in case of failure 
+  will try \<open>conjI\<close>. To see this consider the proof
+\<close>
 
 lemma 
   shows "True \<and> False" and "Foo \<or> Bar"
-apply(tactic {* orelse_xmp_tac @{context} 2 *})
-apply(tactic {* orelse_xmp_tac @{context} 1 *})
-txt {* which results in the goal state
+apply(tactic \<open>orelse_xmp_tac @{context} 2\<close>)
+apply(tactic \<open>orelse_xmp_tac @{context} 1\<close>)
+txt \<open>which results in the goal state
        \begin{isabelle}
        @{subgoals [display]} 
        \end{isabelle}
-*}
+\<close>
 (*<*)oops(*>*)
 
 
-text {*
+text \<open>
   Using @{ML FIRST'} we can simplify our @{ML select_tac} from Page~\pageref{tac:selecttac} 
   as follows:
-*}
-
-ML %grayML{*fun select_tac' ctxt = 
+\<close>
+
+ML %grayML\<open>fun select_tac' ctxt = 
   FIRST' [resolve_tac ctxt [@{thm conjI}], resolve_tac ctxt [@{thm impI}], 
-          resolve_tac ctxt [@{thm notI}], resolve_tac ctxt [@{thm allI}], K all_tac]*}text_raw{*\label{tac:selectprime}*}
-
-text {*
+          resolve_tac ctxt [@{thm notI}], resolve_tac ctxt [@{thm allI}], K all_tac]\<close>text_raw\<open>\label{tac:selectprime}\<close>
+
+text \<open>
   Since we would like to mimic the behaviour of @{ML select_tac} as closely as possible, 
   we must include @{ML all_tac} at the end of the list, otherwise the tactic will
   fail if no theorem applies (we also have to wrap @{ML all_tac} using the 
   @{ML K}-combinator, because it does not take a subgoal number as argument). You can 
   test the tactic on the same goals:
-*}
+\<close>
 
 lemma 
   shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
-apply(tactic {* select_tac' @{context} 4 *})
-apply(tactic {* select_tac' @{context} 3 *})
-apply(tactic {* select_tac' @{context} 2 *})
-apply(tactic {* select_tac' @{context} 1 *})
-txt{* \begin{minipage}{\textwidth}
+apply(tactic \<open>select_tac' @{context} 4\<close>)
+apply(tactic \<open>select_tac' @{context} 3\<close>)
+apply(tactic \<open>select_tac' @{context} 2\<close>)
+apply(tactic \<open>select_tac' @{context} 1\<close>)
+txt\<open>\begin{minipage}{\textwidth}
       @{subgoals [display]}
-      \end{minipage} *}
+      \end{minipage}\<close>
 (*<*)oops(*>*)
 
-text {* 
+text \<open>
   Since such repeated application of a tactic to \emph{all} subgoals in 
   reverse order is quite common, there is the tactic combinator 
   @{ML_ind  ALLGOALS in Tactical} that simplifies this. Using this combinator you can simply 
-  write: *}
+  write:\<close>
 
 lemma 
   shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
-apply(tactic {* ALLGOALS (select_tac' @{context}) *})
-txt{* \begin{minipage}{\textwidth}
+apply(tactic \<open>ALLGOALS (select_tac' @{context})\<close>)
+txt\<open>\begin{minipage}{\textwidth}
       @{subgoals [display]}
-      \end{minipage} *}
+      \end{minipage}\<close>
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   Remember that we chose to implement @{ML select_tac'} so that it 
   always succeeds by virtue of having @{ML all_tac} at the end of the tactic
   list. The same can be achieved with the tactic combinator @{ML_ind  TRY in Tactical}.
   For example:
-*}
-
-ML %grayML{*fun select_tac'' ctxt = 
+\<close>
+
+ML %grayML\<open>fun select_tac'' ctxt = 
  TRY o FIRST' [resolve_tac ctxt [@{thm conjI}], resolve_tac ctxt [@{thm impI}], 
-               resolve_tac ctxt [@{thm notI}], resolve_tac ctxt [@{thm allI}]]*}
-text_raw{*\label{tac:selectprimeprime}*}
-
-text {*
+               resolve_tac ctxt [@{thm notI}], resolve_tac ctxt [@{thm allI}]]\<close>
+text_raw\<open>\label{tac:selectprimeprime}\<close>
+
+text \<open>
   This tactic behaves in the same way as @{ML select_tac'}: it tries out
   one of the given tactics and if none applies leaves the goal state 
   unchanged. This, however, can be potentially very confusing when visible to 
   the user, for example, in cases where the goal is of the form
 
-*}
+\<close>
 
 lemma 
   shows "E \<Longrightarrow> F"
-apply(tactic {* select_tac' @{context} 1 *})
-txt{* \begin{minipage}{\textwidth}
+apply(tactic \<open>select_tac' @{context} 1\<close>)
+txt\<open>\begin{minipage}{\textwidth}
       @{subgoals [display]}
-      \end{minipage} *}
+      \end{minipage}\<close>
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   In this case no theorem applies. But because we wrapped the tactic in a @{ML
   TRY}, it does not fail anymore. The problem is that for the user there is
   little chance to see whether any progress has been made in the proof. By
@@ -1106,40 +1103,40 @@
   option. In such cases, you can use the combinator @{ML_ind CHANGED in
   Tactical} to make sure the subgoal has been changed by the tactic. Because
   now
-*}
+\<close>
 
 lemma 
   shows "E \<Longrightarrow> F"
-apply(tactic {* CHANGED (select_tac' @{context} 1) *})(*<*)?(*>*)
-txt{* gives the error message:
+apply(tactic \<open>CHANGED (select_tac' @{context} 1)\<close>)(*<*)?(*>*)
+txt\<open>gives the error message:
   \begin{isabelle}
-  @{text "*** empty result sequence -- proof command failed"}\\
-  @{text "*** At command \"apply\"."}
+  \<open>*** empty result sequence -- proof command failed\<close>\\
+  \<open>*** At command "apply".\<close>
   \end{isabelle} 
-*}(*<*)oops(*>*)
-
-
-text {*
+\<close>(*<*)oops(*>*)
+
+
+text \<open>
   We can further extend the @{ML select_tac}s so that they apply not just to the topmost
   connective, but also to the ones immediately ``underneath'', i.e.~analyse the goal 
   completely. For this you can use the tactic combinator @{ML_ind REPEAT in Tactical}. As an example 
   consider the following tactic
-*}
-
-ML %grayML{*fun repeat_xmp_tac ctxt = REPEAT (CHANGED (select_tac' ctxt 1)) *}
-
-text {* which applied to the proof *}
+\<close>
+
+ML %grayML\<open>fun repeat_xmp_tac ctxt = REPEAT (CHANGED (select_tac' ctxt 1))\<close>
+
+text \<open>which applied to the proof\<close>
 
 lemma 
   shows "((\<not>A) \<and> (\<forall>x. B x)) \<and> (C \<longrightarrow> D)"
-apply(tactic {* repeat_xmp_tac @{context} *})
-txt{* produces
+apply(tactic \<open>repeat_xmp_tac @{context}\<close>)
+txt\<open>produces
       \begin{isabelle}
       @{subgoals [display]}
-      \end{isabelle} *}
+      \end{isabelle}\<close>
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   Here it is crucial that @{ML select_tac'} is prefixed with @{ML CHANGED},
   because otherwise @{ML REPEAT} runs into an infinite loop (it applies the
   tactic as long as it succeeds). The function @{ML_ind REPEAT1 in Tactical}
@@ -1148,11 +1145,11 @@
 
   If you are after the ``primed'' version of @{ML repeat_xmp_tac}, then you 
   can implement it as
-*}
-
-ML %grayML{*fun repeat_xmp_tac' ctxt = REPEAT o CHANGED o select_tac' ctxt*}
-
-text {*
+\<close>
+
+ML %grayML\<open>fun repeat_xmp_tac' ctxt = REPEAT o CHANGED o select_tac' ctxt\<close>
+
+text \<open>
   since there are no ``primed'' versions of @{ML REPEAT} and @{ML CHANGED}.
 
   If you look closely at the goal state above, then you see the tactics @{ML repeat_xmp_tac}
@@ -1161,76 +1158,76 @@
   is applied repeatedly only to the first subgoal. To analyse also all
   resulting subgoals, you can use the tactic combinator @{ML_ind  REPEAT_ALL_NEW in Tactical}. 
   Supposing the tactic
-*}
-
-ML %grayML{*fun repeat_all_new_xmp_tac ctxt = REPEAT_ALL_NEW (CHANGED o select_tac' ctxt)*}
-
-text {* 
+\<close>
+
+ML %grayML\<open>fun repeat_all_new_xmp_tac ctxt = REPEAT_ALL_NEW (CHANGED o select_tac' ctxt)\<close>
+
+text \<open>
   you can see that the following goal
-*}
+\<close>
 
 lemma 
   shows "((\<not>A) \<and> (\<forall>x. B x)) \<and> (C \<longrightarrow> D)"
-apply(tactic {* repeat_all_new_xmp_tac @{context} 1 *})
-txt{* \begin{minipage}{\textwidth}
+apply(tactic \<open>repeat_all_new_xmp_tac @{context} 1\<close>)
+txt\<open>\begin{minipage}{\textwidth}
       @{subgoals [display]}
-      \end{minipage} *}
+      \end{minipage}\<close>
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   is completely analysed according to the theorems we chose to
   include in @{ML select_tac'}. 
 
   Recall that tactics produce a lazy sequence of successor goal states. These
   states can be explored using the command \isacommand{back}. For example
 
-*}
+\<close>
 
 lemma 
   shows "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
-apply(tactic {* eresolve_tac @{context} [@{thm disjE}] 1 *})
-txt{* applies the rule to the first assumption yielding the goal state:
+apply(tactic \<open>eresolve_tac @{context} [@{thm disjE}] 1\<close>)
+txt\<open>applies the rule to the first assumption yielding the goal state:
       \begin{isabelle}
       @{subgoals [display]}
       \end{isabelle}
 
       After typing
-  *}
+\<close>
 (*<*)
 oops
 lemma 
   shows "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
-apply(tactic {* eresolve_tac @{context} [@{thm disjE}] 1 *})
+apply(tactic \<open>eresolve_tac @{context} [@{thm disjE}] 1\<close>)
 (*>*)
 back
-txt{* the rule now applies to the second assumption.
+txt\<open>the rule now applies to the second assumption.
       \begin{isabelle}
       @{subgoals [display]}
-      \end{isabelle} *}
+      \end{isabelle}\<close>
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   Sometimes this leads to confusing behaviour of tactics and also has
   the potential to explode the search space for tactics.
   These problems can be avoided by prefixing the tactic with the tactic 
   combinator @{ML_ind  DETERM in Tactical}.
-*}
+\<close>
 
 lemma 
   shows "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
-apply(tactic {* DETERM (eresolve_tac @{context} [@{thm disjE}] 1) *})
-txt {*\begin{minipage}{\textwidth}
+apply(tactic \<open>DETERM (eresolve_tac @{context} [@{thm disjE}] 1)\<close>)
+txt \<open>\begin{minipage}{\textwidth}
       @{subgoals [display]}
-      \end{minipage} *}
+      \end{minipage}\<close>
 (*<*)oops(*>*)
-text {*
+text \<open>
   This combinator will prune the search space to just the first successful application.
   Attempting to apply \isacommand{back} in this goal states gives the
   error message:
 
   \begin{isabelle}
-  @{text "*** back: no alternatives"}\\
-  @{text "*** At command \"back\"."}
+  \<open>*** back: no alternatives\<close>\\
+  \<open>*** At command "back".\<close>
   \end{isabelle}
 
 
@@ -1241,9 +1238,9 @@
   Most tactic combinators described in this section are defined in @{ML_file "Pure/tactical.ML"}.
   Some combinators for the purpose of proof search are implemented in @{ML_file "Pure/search.ML"}.
   \end{readmore}
-*}
-
-text {*
+\<close>
+
+text \<open>
   \begin{exercise}\label{ex:dyckhoff}
   Dyckhoff presents in \cite{Dyckhoff92} inference rules of a sequent
   calculus, called G4ip, for intuitionistic propositional logic. The
@@ -1304,11 +1301,11 @@
   equality and run your tactic on  the de Bruijn formulae discussed 
   in Exercise~\ref{ex:debruijn}.
   \end{exercise}
-*}
-
-section {* Simplifier Tactics\label{sec:simplifier} *}
-
-text {*
+\<close>
+
+section \<open>Simplifier Tactics\label{sec:simplifier}\<close>
+
+text \<open>
   A lot of convenience in reasoning with Isabelle derives from its
   powerful simplifier. We will describe it in this section. However,
   due to its complexity, we can mostly only scratch the surface. 
@@ -1323,34 +1320,34 @@
   \begin{isabelle}
   \begin{center}
   \begin{tabular}{l@ {\hspace{2cm}}l}
-   @{ML_ind simp_tac in Simplifier}            & @{text "(simp (no_asm))"} \\
-   @{ML_ind asm_simp_tac in Simplifier}        & @{text "(simp (no_asm_simp))"} \\
-   @{ML_ind full_simp_tac in Simplifier}       & @{text "(simp (no_asm_use))"} \\
-   @{ML_ind asm_lr_simp_tac in Simplifier}     & @{text "(simp (asm_lr))"} \\
-   @{ML_ind asm_full_simp_tac in Simplifier}   & @{text "(simp)"}
+   @{ML_ind simp_tac in Simplifier}            & \<open>(simp (no_asm))\<close> \\
+   @{ML_ind asm_simp_tac in Simplifier}        & \<open>(simp (no_asm_simp))\<close> \\
+   @{ML_ind full_simp_tac in Simplifier}       & \<open>(simp (no_asm_use))\<close> \\
+   @{ML_ind asm_lr_simp_tac in Simplifier}     & \<open>(simp (asm_lr))\<close> \\
+   @{ML_ind asm_full_simp_tac in Simplifier}   & \<open>(simp)\<close>
   \end{tabular}
   \end{center}
   \end{isabelle}
 
   All these tactics take a context, which contains the simpset, and an integer 
   as argument (the latter as usual to specify the goal to be analysed). So the proof
-*}
+\<close>
 
 lemma 
   shows "Suc (1 + 2) < 3 + 2"
 apply(simp)
 done
 
-text {*
+text \<open>
   corresponds on the ML-level to the tactic
-*}
+\<close>
 
 lemma 
   shows "Suc (1 + 2) < 3 + 2"
-apply(tactic {* asm_full_simp_tac @{context} 1 *})
+apply(tactic \<open>asm_full_simp_tac @{context} 1\<close>)
 done
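+
+text \<open>
+  To see a difference between these tactics, consider the trivial goal
+  \<open>P \<Longrightarrow> P\<close> (a small check): @{ML asm_full_simp_tac in Simplifier} solves
+  it by using the assumption, whereas @{ML simp_tac in Simplifier} would
+  leave the goal unchanged, since it does not use the assumptions.
+\<close>
+
+lemma 
+  shows "P \<Longrightarrow> P"
+apply(tactic \<open>asm_full_simp_tac @{context} 1\<close>)
+done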
 
-text {*
+text \<open>
   If the simplifier cannot make any progress, then it leaves the goal unchanged,
   i.e., does not raise any error message. That means if you use it to unfold a 
   definition for a constant and this constant is not present in the goal state, 
@@ -1380,15 +1377,15 @@
 
   \begin{isabelle}
   \begin{center}
-  \mbox{\inferrule{@{text "t\<^sub>1 \<equiv> s\<^sub>1 \<dots> t\<^sub>n \<equiv> s\<^sub>n"}}
-                  {@{text "constr t\<^sub>1\<dots>t\<^sub>n \<equiv> constr s\<^sub>1\<dots>s\<^sub>n"}}}
+  \mbox{\inferrule{\<open>t\<^sub>1 \<equiv> s\<^sub>1 \<dots> t\<^sub>n \<equiv> s\<^sub>n\<close>}
+                  {\<open>constr t\<^sub>1\<dots>t\<^sub>n \<equiv> constr s\<^sub>1\<dots>s\<^sub>n\<close>}}
   \end{center}
   \end{isabelle}
 
-  with @{text "constr"} being a constant, like @{const "If"}, @{const "Let"}
+  with \<open>constr\<close> being a constant, like @{const "If"}, @{const "Let"}
   and so on. Every simpset contains only one congruence rule for each
   term-constructor, which however can be overwritten. The user can declare
-  lemmas to be congruence rules using the attribute @{text "[cong]"}. Note that 
+  lemmas to be congruence rules using the attribute \<open>[cong]\<close>. Note that 
   in HOL these congruence theorems are usually stated as equations, which are 
   then internally transformed into meta-equations.
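+
+  For example, in HOL the congruence rule for @{const "If"} is the theorem
+  @{thm [source] if_cong}:
+
+  \begin{isabelle}
+  \isacommand{thm}~@{thm [source] if_cong}\\
+  \<open>> \<close>~@{thm if_cong}
+  \end{isabelle}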
 
@@ -1423,13 +1420,13 @@
   \end{tabular}
   \end{isabelle}
 
-*}
-
-text_raw {*
+\<close>
+
+text_raw \<open>
 \begin{figure}[t]
 \begin{minipage}{\textwidth}
-\begin{isabelle}*}
-ML %grayML{*fun print_ss ctxt ss =
+\begin{isabelle}\<close>
+ML %grayML\<open>fun print_ss ctxt ss =
 let
   val {simps, congs, procs, ...} = Raw_Simplifier.dest_ss ss
 
@@ -1446,17 +1443,17 @@
 in
   pps |> Pretty.chunks
       |> pwriteln
-end*}
-text_raw {* 
+end\<close>
+text_raw \<open>
 \end{isabelle}
 \end{minipage}
 \caption{The function @{ML_ind dest_ss in Raw_Simplifier} returns a record containing
   all printable information stored in a simpset. We are here only interested in the 
   simplification rules, congruence rules and simprocs.\label{fig:printss}}
-\end{figure} *}
-
-
-text {* 
+\end{figure}\<close>
+
+
+text \<open>
   To see how they work, consider the function in Figure~\ref{fig:printss}, which
   prints out some parts of a simpset. If you use it to print out the components of the
   empty simpset, i.e., @{ML_ind empty_ss in Raw_Simplifier}
@@ -1470,11 +1467,11 @@
   you can see it contains nothing. This simpset is usually not useful, except as a 
   building block to build bigger simpsets. For example you can add to @{ML empty_ss} 
   the simplification rule @{thm [source] Diff_Int} as follows:
-*}
-
-ML %grayML{*val ss1 = put_simpset empty_ss @{context} addsimps [@{thm Diff_Int} RS @{thm eq_reflection}] *}
-
-text {*
+\<close>
+
+ML %grayML\<open>val ss1 = put_simpset empty_ss @{context} addsimps [@{thm Diff_Int} RS @{thm eq_reflection}]\<close>
+
+text \<open>
   Printing out the components of this simpset then gives:
 
   @{ML_response_fake [display,gray]
@@ -1487,11 +1484,11 @@
   \footnote{\bf FIXME: Why does it print out ??.unknown}
 
   Adding also the congruence rule @{thm [source] strong_INF_cong} 
-*}
-
-ML %grayML{*val ss2 = ss1 |> Simplifier.add_cong (@{thm strong_INF_cong} RS @{thm eq_reflection}) *}
-
-text {*
+\<close>
+
+ML %grayML\<open>val ss2 = ss1 |> Simplifier.add_cong (@{thm strong_INF_cong} RS @{thm eq_reflection})\<close>
+
+text \<open>
   gives
 
   @{ML_response_fake [display,gray]
@@ -1526,7 +1523,7 @@
   \end{isabelle}
 
   and also resolve with assumptions. For example:
-*}
+\<close>
 
 lemma 
  shows "True" 
@@ -1534,10 +1531,10 @@
   and  "t \<equiv> t" 
   and  "False \<Longrightarrow> Foo" 
   and  "\<lbrakk>A; B; C\<rbrakk> \<Longrightarrow> A"
-apply(tactic {* ALLGOALS (simp_tac (put_simpset HOL_basic_ss @{context})) *})
+apply(tactic \<open>ALLGOALS (simp_tac (put_simpset HOL_basic_ss @{context}))\<close>)
 done
 
-text {*
+text \<open>
   This behaviour is not due to simplification rules, but due to how the subgoaler, solver 
   and looper are set up in @{ML HOL_basic_ss}.
 
@@ -1568,33 +1565,33 @@
   simplifier implements the tactic @{ML_ind rewrite_goal_tac in Raw_Simplifier}.\footnote{\bf FIXME: 
   see LocalDefs infrastructure.} Suppose for example the
   definition
-*}
+\<close>
 
 definition "MyTrue \<equiv> True"
 
-text {*
+text \<open>
   then we can use this tactic to unfold the definition of this constant.
-*}
+\<close>
 
 lemma 
   shows "MyTrue \<Longrightarrow> True"
-apply(tactic {* rewrite_goal_tac @{context} @{thms MyTrue_def} 1 *})
-txt{* producing the goal state
+apply(tactic \<open>rewrite_goal_tac @{context} @{thms MyTrue_def} 1\<close>)
+txt\<open>producing the goal state
       \begin{isabelle}
       @{subgoals [display]}
-      \end{isabelle} *}
+      \end{isabelle}\<close>
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   If you want to unfold definitions in \emph{all} subgoals, not just one, 
   then use the tactic @{ML_ind rewrite_goals_tac in Raw_Simplifier}.
-*}
-
-
-text_raw {*
+\<close>
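+
+text \<open>
+  For example, the following small variation of the proof above unfolds the
+  definition in both subgoals at once (a sketch; note that this tactic does
+  not take a subgoal number):
+\<close>
+
+lemma 
+  shows "MyTrue \<Longrightarrow> True" and "MyTrue \<Longrightarrow> MyTrue"
+apply(tactic \<open>rewrite_goals_tac @{context} @{thms MyTrue_def}\<close>)
+txt\<open>\begin{minipage}{\textwidth}
+     @{subgoals [display]}
+     \end{minipage}\<close>
+(*<*)oops(*>*)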
+
+
+text_raw \<open>
 \begin{figure}[p]
 \begin{boxedminipage}{\textwidth}
-\begin{isabelle} *}
+\begin{isabelle}\<close>
 type_synonym  prm = "(nat \<times> nat) list"
 consts perm :: "prm \<Rightarrow> 'a \<Rightarrow> 'a"  ("_ \<bullet> _" [80,80] 80)
 
@@ -1643,7 +1640,7 @@
 fixes c::"nat" and pi\<^sub>1 pi\<^sub>2::"prm"
 shows "pi\<^sub>1\<bullet>(pi\<^sub>2\<bullet>c) = (pi\<^sub>1\<bullet>pi\<^sub>2)\<bullet>(pi\<^sub>1\<bullet>c)" 
 by (induct pi\<^sub>2) (auto)
-text_raw {*
+text_raw \<open>
 \end{isabelle}
 \end{boxedminipage}
 \caption{A simple theory about permutations over @{typ nat}s. The point is that the
@@ -1651,10 +1648,10 @@
   it would cause the simplifier to loop. It can still be used as a simplification 
   rule if the permutation in the right-hand side is sufficiently protected.
   \label{fig:perms}}
-\end{figure} *}
-
-
-text {*
+\end{figure}\<close>
+
+
+text \<open>
   The simplifier is often used in order to bring terms into a normal form.
   Unfortunately, the situation often arises that the corresponding
   simplification rules will cause the simplifier to run into an infinite
@@ -1667,7 +1664,7 @@
   the right-hand side of this lemma is again an instance of the left-hand side 
   and so causes an infinite loop. There seems to be no easy way to reformulate 
   this rule and so one ends up with clunky proofs like:
-*}
+\<close>
 
 lemma 
 fixes c d::"nat" and pi\<^sub>1 pi\<^sub>2::"prm"
@@ -1678,19 +1675,19 @@
 apply(simp)
 done 
 
-text {*
+text \<open>
   It is however possible to create a single simplifier tactic that solves
   such proofs. The trick is to introduce an auxiliary constant for permutations 
   and split the simplification into two phases (below actually three). Let 
   us assume the auxiliary constant is
-*}
+\<close>
 
 definition
   perm_aux :: "prm \<Rightarrow> 'a \<Rightarrow> 'a" ("_ \<bullet>aux _" [80,80] 80)
 where
   "pi \<bullet>aux c \<equiv> pi \<bullet> c"
 
-text {* Now the two lemmas *}
+text \<open>Now the two lemmas\<close>
 
 lemma perm_aux_expand:
   fixes c::"nat" and pi\<^sub>1 pi\<^sub>2::"prm"
@@ -1702,7 +1699,7 @@
   shows "pi\<^sub>1\<bullet>(pi\<^sub>2\<bullet>aux c) = (pi\<^sub>1\<bullet>pi\<^sub>2) \<bullet>aux (pi\<^sub>1\<bullet>c)" 
 unfolding perm_aux_def by (rule perm_compose)
 
-text {* 
+text \<open>
   are simple consequence of the definition and @{thm [source] perm_compose}. 
   More importantly, the lemma @{thm [source] perm_compose_aux} can be safely 
   added to the simplifier, because now the right-hand side is not anymore an instance 
@@ -1715,9 +1712,9 @@
   other permutations by rule @{thm [source] perm_compose_aux} (Line 10); and
   finally ``unfreeze'' all instances of permutation compositions by unfolding 
   the definition of the auxiliary constant. 
-*}
-
-ML %linenosgray{*fun perm_simp_tac ctxt =
+\<close>
+
+ML %linenosgray\<open>fun perm_simp_tac ctxt =
 let
   val thms1 = [@{thm perm_aux_expand}]
   val thms2 = [@{thm perm_append}, @{thm perm_bij}, @{thm perm_rev}, 
@@ -1729,22 +1726,22 @@
   simp_tac (ss addsimps thms1)
   THEN' simp_tac (ss addsimps thms2)
   THEN' simp_tac (ss addsimps thms3)
-end*}
-
-text {*
+end\<close>
+
+text \<open>
   For all three phases we have to build simpsets by adding specific lemmas. For 
   our purposes here, it is sufficient to add these lemmas to @{ML HOL_basic_ss} in order to obtain
   the desired results. Now we can solve the following lemma
-*}
+\<close>
 
 lemma 
   fixes c d::"nat" and pi\<^sub>1 pi\<^sub>2::"prm"
   shows "pi\<^sub>1\<bullet>(c, pi\<^sub>2\<bullet>((rev pi\<^sub>1)\<bullet>d)) = (pi\<^sub>1\<bullet>c, (pi\<^sub>1\<bullet>pi\<^sub>2)\<bullet>d)"
-apply(tactic {* perm_simp_tac @{context} 1 *})
+apply(tactic \<open>perm_simp_tac @{context} 1\<close>)
 done
 
 
-text {*
+text \<open>
   in one step. This tactic can deal with most instances of normalising permutations.
   In order to solve all cases we have to deal with corner-cases such as the
   lemma being an exact instance of the permutation composition lemma. This can
@@ -1760,11 +1757,11 @@
 
   (FIXME: explain @{ML simplify} and @{ML "Simplifier.rewrite_rule"} etc.)
 
-*}
-
-section {* Simprocs *}
-
-text {*
+\<close>
+
+section \<open>Simprocs\<close>
+
+text \<open>
   In Isabelle you can also implement custom simplification procedures, called
   \emph{simprocs}. Simprocs can be triggered by the simplifier on a specified
   term-pattern and rewrite a term according to a theorem. They are useful in
@@ -1774,17 +1771,17 @@
   To see how simprocs work, let us first write a simproc that just prints out
   the pattern which triggers it and otherwise does nothing. For this
   you can use the function:
-*}
-
-ML %linenosgray{*fun fail_simproc ctxt redex = 
+\<close>
+
+ML %linenosgray\<open>fun fail_simproc ctxt redex = 
 let
   val _ = pwriteln (Pretty.block 
     [Pretty.str "The redex: ", pretty_cterm ctxt redex])
 in
   NONE
-end*}
-
-text {*
+end\<close>
+
+text \<open>
   This function takes a context and a redex (a @{ML_type cterm}) as
   arguments. In Lines 3 and~4, we print out a message containing
   the redex.  The function
@@ -1792,28 +1789,28 @@
   moment we are \emph{not} interested in actually rewriting anything. We want
   the simproc to be triggered by the pattern @{term "Suc n"}. This can be
   done by adding the simproc to the current simpset as follows
-*}
-
-simproc_setup %gray fail ("Suc n") = {* K fail_simproc *}
-
-text {*
+\<close>
+
+simproc_setup %gray fail ("Suc n") = \<open>K fail_simproc\<close>
+
+text \<open>
   where the second argument specifies the pattern and the right-hand side
   contains the code of the simproc (we have to use @{ML K} since we are ignoring
   an argument about morphisms). 
   After this, the simplifier is aware of the simproc and you can test whether 
   it fires on the lemma:
-*}
+\<close>
 
 lemma 
   shows "Suc 0 = 1"
 apply(simp)
-txt{*
+txt\<open>
   \begin{minipage}{\textwidth}
-  \small@{text "> The redex: Suc 0"}\\
-  @{text "> The redex: Suc 0"}\\
-  \end{minipage}*}(*<*)oops(*>*)
-
-text {* 
+  \small\<open>> The redex: Suc 0\<close>\\
+  \<open>> The redex: Suc 0\<close>\\
+  \end{minipage}\<close>(*<*)oops(*>*)
+
+text \<open>
   This will print out the message twice: once for the left-hand side and
   once for the right-hand side. The reason is that during simplification the
   simplifier will at some point reduce the term @{term "1::nat"} to @{term "Suc
@@ -1822,23 +1819,23 @@
   We can add or delete the simproc from the current simpset by the usual 
   \isacommand{declare}-statement. For example the simproc will be deleted
   with the declaration
-*}
+\<close>
 
 declare [[simproc del: fail]]
 
-text {*
+text \<open>
   If you want to see what happens with just \emph{this} simproc, without any 
-  interference from other rewrite rules, you can call @{text fail} 
+  interference from other rewrite rules, you can call \<open>fail\<close> 
   as follows:
-*}
+\<close>
 
 lemma 
   shows "Suc 0 = 1"
-apply(tactic {* simp_tac (put_simpset 
-  HOL_basic_ss @{context} addsimprocs [@{simproc fail}]) 1*})
+apply(tactic \<open>simp_tac (put_simpset 
+  HOL_basic_ss @{context} addsimprocs [@{simproc fail}]) 1\<close>)
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   Now the message shows up only once since the term @{term "1::nat"} is 
   left unchanged. 
 
@@ -1847,28 +1844,28 @@
   want this, then you have to use a slightly different method for setting 
   up the simproc. First the function @{ML fail_simproc} needs to be modified
   to
-*}
-
-ML %grayML{*fun fail_simproc' _ ctxt redex = 
+\<close>
+
+ML %grayML\<open>fun fail_simproc' _ ctxt redex = 
 let
   val _ = pwriteln (Pretty.block 
     [Pretty.str "The redex: ", pretty_cterm ctxt redex])
 in
   NONE
-end*}
-
-text {*
+end\<close>
+
+text \<open>
   Here the function takes an additional first argument (a morphism), which we
   ignore; the redex is again given as a @{ML_type cterm} and printed as before.
   We can turn this function into a proper simproc using the function 
   @{ML Simplifier.make_simproc}:
-*}
-
-ML %grayML{*val fail' = 
+\<close>
+
+ML %grayML\<open>val fail' = 
   Simplifier.make_simproc @{context} "fail_simproc'"
-    {lhss = [@{term "Suc n"}], proc = fail_simproc'}*}
-
-text {*
+    {lhss = [@{term "Suc n"}], proc = fail_simproc'}\<close>
+
+text \<open>
   Here the pattern is given as @{ML_type term} (instead of @{ML_type cterm}).
   The function also takes a list of patterns that can trigger the simproc.
   Now the simproc is set up and can be explicitly added using
@@ -1877,14 +1874,14 @@
 
   Simprocs are applied from inside to outside and from left to right. You can
   see this in the proof
-*}
+\<close>
 
 lemma 
   shows "Suc (Suc 0) = (Suc 1)"
-apply(tactic {* simp_tac ((put_simpset HOL_basic_ss @{context}) addsimprocs [fail']) 1*})
+apply(tactic \<open>simp_tac ((put_simpset HOL_basic_ss @{context}) addsimprocs [fail']) 1\<close>)
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   The simproc @{ML fail'} prints out the sequence 
 
 @{text [display]
@@ -1894,52 +1891,52 @@
 
   To see how a simproc applies a theorem,  let us implement a simproc that
   rewrites terms according to the equation:
-*}
+\<close>
 
 lemma plus_one: 
   shows "Suc n \<equiv> n + 1" by simp
 
-text {*
+text \<open>
   Simprocs expect that the given equation is a meta-equation; however, the
   equation can contain preconditions (the simproc will then only fire if the
   preconditions can be solved). To see that one has relatively precise control over
   the rewriting with simprocs, let us further assume that the simproc should
   only rewrite terms ``greater'' than @{term "Suc 0"}. For this we can write 
-*}
-
-
-ML %grayML{*fun plus_one_simproc _ ctxt redex =
+\<close>
+
+
+ML %grayML\<open>fun plus_one_simproc _ ctxt redex =
   case Thm.term_of redex of
     @{term "Suc 0"} => NONE
-  | _ => SOME @{thm plus_one}*}
-
-text {*
+  | _ => SOME @{thm plus_one}\<close>
+
+text \<open>
   and set up the simproc as follows.
-*}
-
-ML %grayML{*val plus_one =
+\<close>
+
+ML %grayML\<open>val plus_one =
   Simplifier.make_simproc @{context} "sproc +1"
-    {lhss = [@{term "Suc n"}], proc = plus_one_simproc}*}
-
-text {*
+    {lhss = [@{term "Suc n"}], proc = plus_one_simproc}\<close>
+
+text \<open>
   Now the simproc is set up so that it is triggered by terms
   of the form @{term "Suc n"}, but inside the simproc we only produce
   a theorem if the term is not @{term "Suc 0"}. You can see the result
   in the following proof
-*}
+\<close>
 
 lemma 
   shows "P (Suc (Suc (Suc 0))) (Suc 0)"
-apply(tactic {* simp_tac (put_simpset HOL_basic_ss @{context} addsimprocs [plus_one]) 1*})
-txt{*
+apply(tactic \<open>simp_tac (put_simpset HOL_basic_ss @{context} addsimprocs [plus_one]) 1\<close>)
+txt\<open>
   where the simproc produces the goal state
   \begin{isabelle}
   @{subgoals[display]}
   \end{isabelle}
-*}  
+\<close>  
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   As usual with rewriting you have to worry about looping: you already have
   a loop with @{ML plus_one}, if you apply it with the default simpset (because
   the default simpset contains a rule which just does the opposite of @{ML plus_one},
@@ -1947,50 +1944,50 @@
   in choosing the right simpset to which you add a simproc. 
 
   Next let us implement a simproc that replaces terms of the form @{term "Suc n"}
-  with the number @{text n} increased by one. First we implement a function that
+  with the number \<open>n\<close> increased by one. First we implement a function that
   takes a term and produces the corresponding integer value.
-*}
-
-ML %grayML{*fun dest_suc_trm ((Const (@{const_name "Suc"}, _)) $ t) = 1 + dest_suc_trm t
-  | dest_suc_trm t = snd (HOLogic.dest_number t)*}
-
-text {* 
+\<close>
+
+ML %grayML\<open>fun dest_suc_trm ((Const (@{const_name "Suc"}, _)) $ t) = 1 + dest_suc_trm t
+  | dest_suc_trm t = snd (HOLogic.dest_number t)\<close>
+
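+text \<open>
+  For example, @{ML dest_suc_trm} maps the term @{term "Suc (Suc 2)"} to the
+  integer value 4 (a quick check of the expected result):
+
+  @{ML_response_fake [display, gray]
+  "dest_suc_trm @{term \"Suc (Suc 2)\"}"
+  "4"}
+\<close>
+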
+text \<open>
   It uses the library function @{ML_ind  dest_number in HOLogic} that transforms
   (Isabelle) terms, like @{term "0::nat"}, @{term "1::nat"}, @{term "2::nat"} and so
   on, into integer values. This function raises the exception @{ML TERM} if
   the term is not a number. The next function expects a pair consisting of a term
-  @{text t} (containing @{term Suc}s) and the corresponding integer value @{text n}. 
-*}
-
-ML %linenosgray{*fun get_thm ctxt (t, n) =
+  \<open>t\<close> (containing @{term Suc}s) and the corresponding integer value \<open>n\<close>. 
+\<close>
+
+ML %linenosgray\<open>fun get_thm ctxt (t, n) =
 let
   val num = HOLogic.mk_number @{typ "nat"} n
   val goal = Logic.mk_equals (t, num)
 in
   Goal.prove ctxt [] [] goal (K (Arith_Data.arith_tac ctxt 1))
-end*}
-
-text {*
+end\<close>
+
+text \<open>
   From the integer value it generates the corresponding number term, called 
-  @{text num} (Line 3), and then generates the meta-equation @{text "t \<equiv> num"} 
+  \<open>num\<close> (Line 3), and then generates the meta-equation \<open>t \<equiv> num\<close> 
   (Line 4), which it proves by the arithmetic tactic in Line 6. 
 
   For our purpose at the moment, proving the meta-equation using @{ML
   arith_tac in Arith_Data} is fine, but there is also an alternative employing
   the simplifier with a special simpset. For the kind of lemmas we
-  want to prove here, the simpset @{text "num_ss"} should suffice.
-*}
-
-ML %grayML{*fun get_thm_alt ctxt (t, n) =
+  want to prove here, the simpset \<open>num_ss\<close> should suffice.
+\<close>
+
+ML %grayML\<open>fun get_thm_alt ctxt (t, n) =
 let
   val num = HOLogic.mk_number @{typ "nat"} n
   val goal = Logic.mk_equals (t, num)
   val num_ss = put_simpset HOL_ss ctxt addsimps @{thms semiring_norm}
 in
   Goal.prove ctxt [] [] goal (K (simp_tac num_ss 1))
-end*}
-
-text {*
+end\<close>
+
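+text \<open>
+  For example, for the pair consisting of the term @{term "Suc (Suc 0)"} and
+  the integer 2, either function should produce the meta-equation below
+  (a sketch, shown as a fake response):
+
+  @{ML_response_fake [display, gray]
+  "get_thm_alt @{context} (@{term \"Suc (Suc 0)\"}, 2)"
+  "Suc (Suc 0) \<equiv> 2"}
+\<close>
+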
+text \<open>
   The advantage of @{ML get_thm_alt} is that it leaves very little room for 
   something to go wrong; in contrast it is much more difficult to predict 
   what happens with @{ML arith_tac in Arith_Data}, especially in more complicated 
@@ -2003,39 +2000,39 @@
 
   Anyway, either version can be used in the function that produces the actual 
   theorem for the simproc.
-*}
-
-ML %grayML{*fun nat_number_simproc _ ctxt ct =
+\<close>
+
+ML %grayML\<open>fun nat_number_simproc _ ctxt ct =
   SOME (get_thm_alt ctxt (Thm.term_of ct, dest_suc_trm (Thm.term_of ct)))
-  handle TERM _ => NONE *}
-
-text {*
+  handle TERM _ => NONE\<close>
+
+text \<open>
   This function uses the fact that @{ML dest_suc_trm} might raise an exception 
   @{ML TERM}. In this case there is nothing that can be rewritten and therefore no
   theorem is produced (i.e.~the function returns @{ML NONE}). To try out the simproc 
   on an example, you can set it up as follows:
-*}
-
-ML %grayML{*val nat_number =
+\<close>
+
+ML %grayML\<open>val nat_number =
   Simplifier.make_simproc @{context} "nat_number"
-    {lhss = [@{term "Suc n"}], proc = nat_number_simproc}*}
-
-
-text {* 
+    {lhss = [@{term "Suc n"}], proc = nat_number_simproc}\<close>
+
+
+text \<open>
   Now in the lemma
-  *}
+\<close>
 
 lemma 
   shows "P (Suc (Suc 2)) (Suc 99) (0::nat) (Suc 4 + Suc 0) (Suc (0 + 0))"
-apply(tactic {* simp_tac (put_simpset HOL_ss @{context} addsimprocs [nat_number]) 1*})
-txt {* 
+apply(tactic \<open>simp_tac (put_simpset HOL_ss @{context} addsimprocs [nat_number]) 1\<close>)
+txt \<open>
   you obtain the more legible goal state
   \begin{isabelle}
   @{subgoals [display]}
-  \end{isabelle}*}
+  \end{isabelle}\<close>
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   where the simproc rewrites all @{term "Suc"}s except in the last argument. There it cannot 
   rewrite anything, because it does not know how to transform the term @{term "Suc (0 + 0)"}
   into a number. To solve this problem, have a look at the next exercise. 
@@ -2048,21 +2045,21 @@
 
   (FIXME: We did not do anything with morphisms. Anything interesting
   one can say about them?)
-*}
-
-section {* Conversions\label{sec:conversion} *}
-
-text {*
+\<close>
+
+section \<open>Conversions\label{sec:conversion}\<close>
+
+text \<open>
   Conversions are a thin layer on top of Isabelle's inference kernel, and can
   be viewed as a controllable, bare-bones version of Isabelle's simplifier.
   The purpose of conversions is to manipulate @{ML_type cterm}s. However,
   we will also show in this section how conversions can be applied to theorems
   and to goal states. The type of conversions is
-*}
-
-ML %grayML{*type conv = cterm -> thm*}
-
-text {*
+\<close>
+
+ML %grayML\<open>type conv = cterm -> thm\<close>
+
+text \<open>
   whereby the produced theorem is always a meta-equality. A simple conversion
   is the function @{ML_ind  all_conv in Conv}, which maps a @{ML_type cterm} to an
   instance of the (meta)reflexivity theorem. For example:
@@ -2140,12 +2137,12 @@
   @{ML_ind  rewr_conv in Conv}, which expects a meta-equation as an 
   argument. Consider the following meta-equation.
   
-*}
+\<close>
 
 lemma true_conj1: 
   shows "True \<and> P \<equiv> P" by simp
 
-text {* 
+text \<open>
   It can be used for example to rewrite @{term "True \<and> (Foo \<longrightarrow> Bar)"} 
   to @{term "Foo \<longrightarrow> Bar"}. The code is as follows.
 
@@ -2232,11 +2229,11 @@
 end"
 "P \<or> (True \<and> Q) \<equiv> P \<or> Q"}
 
-  The reason for this behaviour is that @{text "(op \<or>)"} expects two
-  arguments.  Therefore the term must be of the form @{text "(Const \<dots> $ t1) $ t2"}. The
-  conversion is then applied to @{text "t2"}, which in the example above
+  The reason for this behaviour is that \<open>(\<or>)\<close> expects two
+  arguments.  Therefore the term must be of the form \<open>(Const \<dots> $ t1) $ t2\<close>. The
+  conversion is then applied to \<open>t2\<close>, which in the example above
   stands for @{term "True \<and> Q"}. The function @{ML fun_conv in Conv} would apply
-  the conversion to the term @{text "(Const \<dots> $ t1)"}.
+  the conversion to the term \<open>(Const \<dots> $ t1)\<close>.
 
   The function @{ML_ind  abs_conv in Conv} applies a conversion under an 
   abstraction. For example:
@@ -2251,7 +2248,7 @@
   "\<lambda>P. True \<and> (P \<and> Foo) \<equiv> \<lambda>P. P \<and> Foo"}
   
   Note that this conversion needs a context as an argument. We also give the
-  conversion as @{text "(K conv)"}, which is a function that ignores its
+  conversion as \<open>(K conv)\<close>, which is a function that ignores its
   argument (the argument being a sufficiently freshened version of the
   variable that is abstracted and a context). The conversion that goes under
   an application is @{ML_ind  combination_conv in Conv}. It expects two
@@ -2263,19 +2260,19 @@
   We can now apply all these functions in a conversion that recursively
   descends a term and applies a ``@{thm [source] true_conj1}''-conversion 
   in all possible positions.
-*}
-
-ML %linenosgray{*fun true_conj1_conv ctxt ctrm =
+\<close>
+
+ML %linenosgray\<open>fun true_conj1_conv ctxt ctrm =
   case (Thm.term_of ctrm) of
     @{term "(\<and>)"} $ @{term True} $ _ => 
       (Conv.arg_conv (true_conj1_conv ctxt) then_conv
          Conv.rewr_conv @{thm true_conj1}) ctrm
   | _ $ _ => Conv.comb_conv (true_conj1_conv ctxt) ctrm
   | Abs _ => Conv.abs_conv (fn (_, ctxt) => true_conj1_conv ctxt) ctxt ctrm
-  | _ => Conv.all_conv ctrm*}
-
-text {*
-  This function ``fires'' if the term is of the form @{text "(True \<and> \<dots>)"}. 
+  | _ => Conv.all_conv ctrm\<close>
+
+text \<open>
+  This function ``fires'' if the term is of the form \<open>(True \<and> \<dots>)\<close>. 
   It descends under applications (Line 6) and abstractions 
   (Line 7); otherwise it leaves the term unchanged (Line 8). In Line 2
   we need to transform the @{ML_type cterm} into a @{ML_type term} in order
@@ -2290,28 +2287,28 @@
   conv ctrm
 end"
   "distinct [1, x] \<longrightarrow> True \<and> 1 \<noteq> x \<equiv> distinct [1, x] \<longrightarrow> 1 \<noteq> x"}
-*}
-
-text {*
+\<close>
+
+text \<open>
   Conversions that traverse terms, like @{ML true_conj1_conv} above, can be
   implemented more succinctly with the combinators @{ML_ind bottom_conv in
   Conv} and @{ML_ind top_conv in Conv}. For example:
-*}
-
-ML %grayML{*fun true_conj1_conv ctxt =
+\<close>
+
+ML %grayML\<open>fun true_conj1_conv ctxt =
 let
   val conv = Conv.try_conv (Conv.rewr_conv @{thm true_conj1})
 in
   Conv.bottom_conv (K conv) ctxt
-end*}
-
-text {*
+end\<close>
+
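+text \<open>
+  Applied to an example from above, this more succinct version produces the
+  same result (sketched here as a fake response):
+
+  @{ML_response_fake [display, gray]
+  "true_conj1_conv @{context} @{cterm \"True \<and> Q\"}"
+  "True \<and> Q \<equiv> Q"}
+\<close>
+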
+text \<open>
   If we regard terms as trees with variables and constants on the top, then 
   @{ML bottom_conv in Conv} first traverses the term and applies 
   the conversion on the ``way down'', whereas @{ML top_conv in
   Conv} applies the conversion on the ``way up''. To see the difference, 
   assume the following two meta-equations
-*}
+\<close>
 
 lemma conj_assoc:
   fixes A B C::bool
@@ -2319,8 +2316,8 @@
   and   "(A \<and> B) \<and> C \<equiv> A \<and> (B \<and> C)"
 by simp_all
 
-text {*
-  and the @{ML_type cterm} @{text "(a \<and> (b \<and> c)) \<and> d"}. Here you should
+text \<open>
+  and the @{ML_type cterm} \<open>(a \<and> (b \<and> c)) \<and> d\<close>. Here you should
   pause for a moment to be convinced that rewriting top-down and bottom-up 
   according to the two meta-equations produces two different results. Below these
   two results are calculated. 
@@ -2340,19 +2337,19 @@
 
   To see how much control you have over rewriting with conversions, let us 
   make the task a bit more complicated by rewriting according to the rule
-  @{text true_conj1}, but only in the first arguments of @{term If}s. Then 
+  \<open>true_conj1\<close>, but only in the first arguments of @{term If}s. Then 
   the conversion should be as follows.
-*}
-
-ML %grayML{*fun if_true1_simple_conv ctxt ctrm =
+\<close>
+
+ML %grayML\<open>fun if_true1_simple_conv ctxt ctrm =
   case Thm.term_of ctrm of
     Const (@{const_name If}, _) $ _ =>
       Conv.arg_conv (true_conj1_conv ctxt) ctrm
   | _ => Conv.no_conv ctrm 
 
-val if_true1_conv = Conv.top_sweep_conv if_true1_simple_conv*}
-
-text {*
+val if_true1_conv = Conv.top_sweep_conv if_true1_simple_conv\<close>
+
+text \<open>
   In the first function we only treat the top-most redex and also only the
   success case; in the default case we return @{ML no_conv in Conv}.  To apply this
   ``simple'' conversion inside a term, in the last line we use the combinator @{ML_ind
@@ -2370,18 +2367,18 @@
 end"
 "if P (True \<and> 1 \<noteq> 2) then True \<and> True else True \<and> False 
 \<equiv> if P (1 \<noteq> 2) then True \<and> True else True \<and> False"}
-*}
-
-text {*
+\<close>
+
+text \<open>
   So far we have only applied conversions to @{ML_type cterm}s. Conversions can, however, 
   also work on theorems using the function @{ML_ind  fconv_rule in Conv}. As an example, 
   consider again the conversion @{ML true_conj1_conv} and the lemma:
-*}
+\<close>
 
 lemma foo_test: 
   shows "P \<or> (True \<and> \<not>P)" by simp
 
-text {*
+text \<open>
   Using the conversion you can transform this theorem into a 
   new theorem as follows
 
@@ -2402,7 +2399,7 @@
 
   \begin{itemize}
   \item @{ML_ind params_conv in Conv} for converting under parameters
-  (i.e.~where a goal state is of the form @{text "\<And>x. P x \<Longrightarrow> Q x"})
+  (i.e.~where a goal state is of the form \<open>\<And>x. P x \<Longrightarrow> Q x\<close>)
 
   \item @{ML_ind prems_conv in Conv} for applying a conversion to 
   premises of a goal state, and
@@ -2414,35 +2411,35 @@
   Assume we want to apply @{ML true_conj1_conv} only in the conclusion
   of the goal, and @{ML if_true1_conv} should only apply to the premises.
   Here is a tactic doing exactly that:
-*}
-
-ML %grayML{*fun true1_tac ctxt =
+\<close>
+
+ML %grayML\<open>fun true1_tac ctxt =
   CONVERSION 
     (Conv.params_conv ~1 (fn ctxt =>
        (Conv.prems_conv ~1 (if_true1_conv ctxt) then_conv
-          Conv.concl_conv ~1 (true_conj1_conv ctxt))) ctxt)*}
-
-text {* 
+          Conv.concl_conv ~1 (true_conj1_conv ctxt))) ctxt)\<close>
+
+text \<open>
   We call the conversions with the argument @{ML "~1"}. In this way we 
   analyse all parameters, all premises and the conclusion of a goal
   state. If we call @{ML concl_conv in Conv} with 
-  a non-negative number, say @{text n}, then this conversions will 
-  skip the first @{text n} premises and applies the conversion to the 
+  a non-negative number, say \<open>n\<close>, then this conversion will 
+  skip the first \<open>n\<close> premises and apply the conversion to the 
   ``rest'' (similar for parameters and conclusions). To test the 
   tactic, consider the proof
-*}
+\<close>
 
 lemma
   "if True \<and> P then P else True \<and> False \<Longrightarrow>
   (if True \<and> Q then True \<and> Q else P) \<longrightarrow> True \<and> (True \<and> Q)"
-apply(tactic {* true1_tac @{context} 1 *})
-txt {* where the tactic yields the goal state
+apply(tactic \<open>true1_tac @{context} 1\<close>)
+txt \<open>where the tactic yields the goal state
   \begin{isabelle}
   @{subgoals [display]}
-  \end{isabelle}*}
+  \end{isabelle}\<close>
 (*<*)oops(*>*)
 
-text {*
+text \<open>
   As you can see, the premises are rewritten according to @{ML if_true1_conv}, while
   the conclusion according to @{ML true_conj1_conv}. If we only have one
   conversion that should be uniformly applied to the whole goal state, we
@@ -2451,7 +2448,7 @@
   Conversions can also be helpful for constructing meta-equality theorems.
   Such theorems are often definitions. As an example consider the following
   two ways of defining the identity function in Isabelle. 
-*}
+\<close>
 
 definition id1::"'a \<Rightarrow> 'a" 
 where "id1 x \<equiv> x"
@@ -2459,7 +2456,7 @@
 definition id2::"'a \<Rightarrow> 'a"
 where "id2 \<equiv> \<lambda>x. x"
 
-text {*
+text \<open>
   Although both definitions define the same function, the theorems @{thm
   [source] id1_def} and @{thm [source] id2_def} are different meta-equations. However it is
   easy to transform one into the other. The function @{ML_ind abs_def
@@ -2478,9 +2475,9 @@
   @{ML_type cterm} to a conversion we obtain a meta-equality theorem (recall
   that the type of conversions is an abbreviation for 
   @{ML_type "cterm -> thm"}). The code of the transformation is below.
-*}
-
-ML %linenosgray{*fun unabs_def ctxt def =
+\<close>
+
+ML %linenosgray\<open>fun unabs_def ctxt def =
 let
   val (lhs, rhs) = Thm.dest_equals (Thm.cprop_of def)
   val xs = strip_abs_vars (Thm.term_of rhs)
@@ -2494,16 +2491,16 @@
 in
   get_conv xs new_lhs |>  
   singleton (Proof_Context.export ctxt' ctxt)
-end*}
-
-text {*
+end\<close>
+
+text \<open>
   In Line 3 we destruct the meta-equality into the @{ML_type cterm}s
   corresponding to its left-hand and right-hand side. The
   assumption in @{ML unabs_def} is that the right-hand side is an
   abstraction. We compute the possibly empty list of abstracted variables in
   Line 4 using the function @{ML_ind strip_abs_vars in Term}. For this we have to
   first transform the @{ML_type cterm} into a @{ML_type term}. In Line 5 we
-  import these variables into the context @{text "ctxt'"}, in order to
+  import these variables into the context \<open>ctxt'\<close>, in order to
   export them again in Line 15.  In Line 8 we certify the list of variables,
   which in turn we apply to the @{ML_text lhs} of the definition using the
   function @{ML_ind list_comb in Drule}. In Lines 11 and 12 we generate the
@@ -2513,15 +2510,15 @@
   conversion with @{ML_ind fun_conv in Conv}. Now in Line 14 we only have to
   apply the generated conversion to the new left-hand side and obtain the
   theorem we want to construct. As mentioned above, in Line 15 we only have to
-  export the context @{text "ctxt'"} back to @{text "ctxt"} in order to 
+  export the context \<open>ctxt'\<close> back to \<open>ctxt\<close> in order to 
   produce meta-variables for the theorem.  An example is as follows.
 
   @{ML_response_fake [display, gray]
   "unabs_def @{context} @{thm id2_def}"
   "id2 ?x1 \<equiv> ?x1"}
-*}
-
-text {*
+\<close>
+
+text \<open>
   To sum up this section, conversions are more general than the simplifier
   or simprocs, but you have to do more work yourself. Also conversions are
   often much less efficient than the simplifier. The advantage of conversions, 
@@ -2547,19 +2544,19 @@
   @{ML_file "Pure/drule.ML"} and @{ML_file "Pure/raw_simplifier.ML"}.
   \end{readmore}
 
-*}
-
-text {*
+\<close>
+
+text \<open>
   (FIXME: check whether @{ML Pattern.match_rew} and @{ML Pattern.rewrite_term}
   are of any use/efficient)
-*}
-
-
-section {* Declarations (TBD) *}
-
-section {* Structured Proofs\label{sec:structured} (TBD) *}
-
-text {* TBD *}
+\<close>
+
+
+section \<open>Declarations (TBD)\<close>
+
+section \<open>Structured Proofs\label{sec:structured} (TBD)\<close>
+
+text \<open>TBD\<close>
 
 lemma True
 proof
@@ -2581,7 +2578,7 @@
   }
 oops
 
-ML {* 
+ML \<open>
   val ctxt0 = @{context};
   val ctxt = ctxt0;
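   (* fix the variables A, B and C in the local context *)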
   val (_, ctxt) = Variable.add_fixes ["A", "B", "C"] ctxt;
@@ -2591,15 +2588,15 @@
   val this = r OF this;
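   (* export the result from the local context ctxt back into ctxt0 *)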
   val this = Assumption.export false ctxt ctxt0 this 
   val this = Variable.export ctxt ctxt0 [this] 
-*}
-
-section {* Summary *}
-
-text {*
+\<close>
+
+section \<open>Summary\<close>
+
+text \<open>
 
   \begin{conventions}
   \begin{itemize}
-  \item Reference theorems with the antiquotation @{text "@{thm \<dots>}"}.
+  \item Reference theorems with the antiquotation \<open>@{thm \<dots>}\<close>.
   \item If a tactic is supposed to fail, return the empty sequence.
   \item If you apply a tactic to a sequence of subgoals, apply it 
   in reverse order (i.e.~start with the last subgoal). 
@@ -2607,6 +2604,6 @@
   \end{itemize}
   \end{conventions}
 
-*}
+\<close>
 
 end