--- a/ProgTutorial/Advanced.thy Tue May 14 16:59:53 2019 +0200
+++ b/ProgTutorial/Advanced.thy Tue May 14 17:10:47 2019 +0200
@@ -3,9 +3,9 @@
begin
-chapter {* Advanced Isabelle\label{chp:advanced} *}
+chapter \<open>Advanced Isabelle\label{chp:advanced}\<close>
-text {*
+text \<open>
\begin{flushright}
{\em All things are difficult before they are easy.} \\[1ex]
proverb\\[2ex]
@@ -27,11 +27,11 @@
other hand, store local data for a task at hand. They act like the
``short-term memory'' and there can be many of them that are active in
parallel.
-*}
+\<close>
-section {* Theories\label{sec:theories} *}
+section \<open>Theories\label{sec:theories}\<close>
-text {*
+text \<open>
Theories, as said above, are the most basic layer of abstraction in
Isabelle. They record information about definitions, syntax declarations, axioms,
theorems and much more. For example, if a definition is made, it
@@ -43,21 +43,21 @@
\begin{isabelle}
\isacommand{print\_theory}\\
- @{text "> names: Pure Code_Generator HOL \<dots>"}\\
- @{text "> classes: Inf < type \<dots>"}\\
- @{text "> default sort: type"}\\
- @{text "> syntactic types: #prop \<dots>"}\\
- @{text "> logical types: 'a \<times> 'b \<dots>"}\\
- @{text "> type arities: * :: (random, random) random \<dots>"}\\
- @{text "> logical constants: == :: 'a \<Rightarrow> 'a \<Rightarrow> prop \<dots>"}\\
- @{text "> abbreviations: \<dots>"}\\
- @{text "> axioms: \<dots>"}\\
- @{text "> oracles: \<dots>"}\\
- @{text "> definitions: \<dots>"}\\
- @{text "> theorems: \<dots>"}
+ \<open>> names: Pure Code_Generator HOL \<dots>\<close>\\
+ \<open>> classes: Inf < type \<dots>\<close>\\
+ \<open>> default sort: type\<close>\\
+ \<open>> syntactic types: #prop \<dots>\<close>\\
+ \<open>> logical types: 'a \<times> 'b \<dots>\<close>\\
+ \<open>> type arities: * :: (random, random) random \<dots>\<close>\\
+ \<open>> logical constants: == :: 'a \<Rightarrow> 'a \<Rightarrow> prop \<dots>\<close>\\
+ \<open>> abbreviations: \<dots>\<close>\\
+ \<open>> axioms: \<dots>\<close>\\
+ \<open>> oracles: \<dots>\<close>\\
+ \<open>> definitions: \<dots>\<close>\\
+ \<open>> theorems: \<dots>\<close>
\end{isabelle}
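+  Some of this information can also be accessed on the ML-level. As a small
+  sketch (the output of course depends on which theories are loaded), the
+  expression @{ML "map Context.theory_name (Theory.ancestors_of @{theory})"}
+  returns the names of all ancestor theories of the current theory.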
- Functions acting on theories often end with the suffix @{text "_global"},
+ Functions acting on theories often end with the suffix \<open>_global\<close>,
for example the function @{ML read_term_global in Syntax} in the structure
@{ML_struct Syntax}. The reason is to set them syntactically apart from
functions acting on contexts or local theories, which will be discussed in
@@ -77,25 +77,25 @@
function @{ML_ind declare_const in Sign} from the structure @{ML_struct
Sign}. To see how \isacommand{setup} works, consider the following code:
-*}
+\<close>
-ML %grayML{*let
+ML %grayML\<open>let
val thy = @{theory}
val bar_const = ((@{binding "BAR"}, @{typ "nat"}), NoSyn)
in
Sign.declare_const @{context} bar_const thy
-end*}
+end\<close>
-text {*
+text \<open>
If you simply run this code\footnote{Recall that ML-code needs to be enclosed in
- \isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"}.} with the
- intention of declaring a constant @{text "BAR"} having type @{typ nat}, then
+ \isacommand{ML}~\<open>\<verbopen> \<dots> \<verbclose>\<close>.} with the
+ intention of declaring a constant \<open>BAR\<close> having type @{typ nat}, then
indeed you obtain a theory as result. But if you query the
constant on the Isabelle level using the command \isacommand{term}
\begin{isabelle}
- \isacommand{term}~@{text BAR}\\
- @{text "> \"BAR\" :: \"'a\""}
+ \isacommand{term}~\<open>BAR\<close>\\
+ \<open>> "BAR" :: "'a"\<close>
\end{isabelle}
you can see that you do \emph{not} obtain the expected constant of type @{typ nat}, but a free
@@ -103,113 +103,113 @@
ML-expression above did not ``register'' the declaration with the current theory.
This is what the command \isacommand{setup} is for. The constant is properly
declared with
-*}
+\<close>
-setup %gray {* fn thy =>
+setup %gray \<open>fn thy =>
let
val bar_const = ((@{binding "BAR"}, @{typ "nat"}), NoSyn)
val (_, thy') = Sign.declare_const @{context} bar_const thy
in
thy'
-end *}
+end\<close>
-text {*
+text \<open>
where the declaration is actually applied to the current theory and
\begin{isabelle}
\isacommand{term}~@{text [quotes] "BAR"}\\
- @{text "> \"BAR\" :: \"nat\""}
+ \<open>> "BAR" :: "nat"\<close>
\end{isabelle}
now returns a (black) constant with the type @{typ nat}, as expected.
In a sense, \isacommand{setup} can be seen as a transaction that
- takes the current theory @{text thy}, applies an operation, and
- produces a new current theory @{text thy'}. This means that we have
+ takes the current theory \<open>thy\<close>, applies an operation, and
+ produces a new current theory \<open>thy'\<close>. This means that we have
to be careful to apply operations always to the most current theory,
not to a \emph{stale} one. Consider again the function inside the
\isacommand{setup}-command:
\begin{isabelle}
\begin{graybox}
- \isacommand{setup}~@{text "\<verbopen>"} @{ML
+ \isacommand{setup}~\<open>\<verbopen>\<close> @{ML
"fn thy =>
let
val bar_const = ((@{binding \"BAR\"}, @{typ \"nat\"}), NoSyn)
val (_, thy') = Sign.declare_const @{context} bar_const thy
in
thy
-end"}~@{text "\<verbclose>"}\isanewline
- @{text "> ERROR: \"Stale theory encountered\""}
+end"}~\<open>\<verbclose>\<close>\isanewline
+ \<open>> ERROR: "Stale theory encountered"\<close>
\end{graybox}
\end{isabelle}
- This time we erroneously return the original theory @{text thy}, instead of
- the modified one @{text thy'}. Such buggy code will always result into
+ This time we erroneously return the original theory \<open>thy\<close>, instead of
+  the modified one \<open>thy'\<close>. Such buggy code will always result in
a runtime error message about stale theories.
\begin{readmore}
Most of the functions about theories are implemented in
@{ML_file "Pure/theory.ML"} and @{ML_file "Pure/global_theory.ML"}.
\end{readmore}
-*}
+\<close>
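+text \<open>
+  A small sketch (the constant names \<open>BAZ1\<close> and \<open>BAZ2\<close> are made up for this
+  illustration): with the reverse-application combinator \<open>|>\<close> the theory can be
+  threaded explicitly through several declarations, so that each step receives
+  the theory produced by the previous one and no stale theory is ever touched.
+\<close>
+setup %gray \<open>fn thy =>
+  thy
+  |> Sign.declare_const @{context} ((@{binding "BAZ1"}, @{typ "nat"}), NoSyn)
+  |> snd   (* keep only the resulting theory *)
+  |> Sign.declare_const @{context} ((@{binding "BAZ2"}, @{typ "nat"}), NoSyn)
+  |> snd\<close>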
-section {* Contexts *}
+section \<open>Contexts\<close>
-text {*
+text \<open>
Contexts are arguably more important than theories, even though they only
contain ``short-term memory data''. The reason is that a vast number of
functions in Isabelle depend in one way or another on contexts. Even such
mundane operations like printing out a term make essential use of contexts.
For this consider the following contrived proof-snippet whose purpose is to
fix two variables:
-*}
+\<close>
lemma "True"
proof -
- ML_prf {* Variable.dest_fixes @{context} *}
+ ML_prf \<open>Variable.dest_fixes @{context}\<close>
fix x y
- ML_prf {* Variable.dest_fixes @{context} *}(*<*)oops(*>*)
+ ML_prf \<open>Variable.dest_fixes @{context}\<close>(*<*)oops(*>*)
-text {*
+text \<open>
The interesting point is that we injected ML-code before and after
the variables are fixed. For this remember that ML-code inside a proof
- needs to be enclosed inside \isacommand{ML\_prf}~@{text "\<verbopen> \<dots> \<verbclose>"},
- not \isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"}. The function
+ needs to be enclosed inside \isacommand{ML\_prf}~\<open>\<verbopen> \<dots> \<verbclose>\<close>,
+ not \isacommand{ML}~\<open>\<verbopen> \<dots> \<verbclose>\<close>. The function
@{ML_ind dest_fixes in Variable} from the structure @{ML_struct Variable} takes
  a context and returns all its currently fixed variables (names). That
means a context has a dataslot containing information about fixed variables.
- The ML-antiquotation @{text "@{context}"} points to the context that is
+ The ML-antiquotation \<open>@{context}\<close> points to the context that is
active at that point of the theory. Consequently, in the first call to
@{ML dest_fixes in Variable} this dataslot is empty; in the second it is
- filled with @{text x} and @{text y}. What is interesting is that contexts
+ filled with \<open>x\<close> and \<open>y\<close>. What is interesting is that contexts
can be stacked. For this consider the following proof fragment:
-*}
+\<close>
lemma "True"
proof -
fix x y
{ fix z w
- ML_prf {* Variable.dest_fixes @{context} *}
+ ML_prf \<open>Variable.dest_fixes @{context}\<close>
}
- ML_prf {* Variable.dest_fixes @{context} *}(*<*)oops(*>*)
+ ML_prf \<open>Variable.dest_fixes @{context}\<close>(*<*)oops(*>*)
-text {*
+text \<open>
  Here the first time we call @{ML dest_fixes in Variable} we have four fixed variables;
- the second time we get only the fixes variables @{text x} and @{text y} as answer, since
- @{text z} and @{text w} are not anymore in the scope of the context.
+  the second time we get only the fixed variables \<open>x\<close> and \<open>y\<close> as answer, since
+  \<open>z\<close> and \<open>w\<close> are no longer in the scope of the context.
This means the curly-braces act on the Isabelle level as opening and closing statements
for a context. The above proof fragment corresponds roughly to the following ML-code
-*}
+\<close>
-ML %grayML{*val ctxt0 = @{context};
+ML %grayML\<open>val ctxt0 = @{context};
val ([x, y], ctxt1) = Variable.add_fixes ["x", "y"] ctxt0;
-val ([z, w], ctxt2) = Variable.add_fixes ["z", "w"] ctxt1*}
+val ([z, w], ctxt2) = Variable.add_fixes ["z", "w"] ctxt1\<close>
-text {*
+text \<open>
where the function @{ML_ind add_fixes in Variable} fixes a list of variables
specified by strings. Let us come back to the point about printing terms. Consider
- printing out the term \mbox{@{text "(x, y, z, w)"}} using our function @{ML_ind pretty_term}.
+ printing out the term \mbox{\<open>(x, y, z, w)\<close>} using our function @{ML_ind pretty_term}.
This function takes a term and a context as argument. Notice how the printing
of the term changes according to which context is used.
@@ -224,18 +224,18 @@
pretty_term ctxt2 trm ])
end"}\\
\setlength{\fboxsep}{0mm}
- @{text ">"}~@{text "("}\colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text x}}}@{text ","}~%
- \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text y}}}@{text ","}~%
- \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text z}}}@{text ","}~%
- \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text w}}}@{text ")"}\\
- @{text ">"}~@{text "("}\colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{@{text x}}}@{text ","}~%
- \colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{@{text y}}}@{text ","}~%
- \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text z}}}@{text ","}~%
- \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text w}}}@{text ")"}\\
- @{text ">"}~@{text "("}\colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{@{text x}}}@{text ","}~%
- \colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{@{text y}}}@{text ","}~%
- \colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{@{text z}}}@{text ","}~%
- \colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{@{text w}}}@{text ")"}
+ \<open>>\<close>~\<open>(\<close>\colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{\<open>x\<close>}}\<open>,\<close>~%
+ \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{\<open>y\<close>}}\<open>,\<close>~%
+ \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{\<open>z\<close>}}\<open>,\<close>~%
+ \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{\<open>w\<close>}}\<open>)\<close>\\
+ \<open>>\<close>~\<open>(\<close>\colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{\<open>x\<close>}}\<open>,\<close>~%
+ \colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{\<open>y\<close>}}\<open>,\<close>~%
+ \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{\<open>z\<close>}}\<open>,\<close>~%
+ \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{\<open>w\<close>}}\<open>)\<close>\\
+ \<open>>\<close>~\<open>(\<close>\colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{\<open>x\<close>}}\<open>,\<close>~%
+ \colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{\<open>y\<close>}}\<open>,\<close>~%
+ \colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{\<open>z\<close>}}\<open>,\<close>~%
+ \colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{\<open>w\<close>}}\<open>)\<close>
\end{graybox}
\end{isabelle}
@@ -243,8 +243,8 @@
The term we are printing is in all three cases the same---a tuple containing
four free variables. As you can see, however, in case of @{ML "ctxt0"} all
  variables are highlighted, indicating a problem, while in case of @{ML
- "ctxt1"} @{text x} and @{text y} are printed as normal (blue) free
- variables, but not @{text z} and @{text w}. In the last case all variables
+ "ctxt1"} \<open>x\<close> and \<open>y\<close> are printed as normal (blue) free
+ variables, but not \<open>z\<close> and \<open>w\<close>. In the last case all variables
are printed as expected. The point of this example is that the context
  contains the information about which variables are fixed, and designates all
  other free variables as being alien or faulty---hence the highlighting.
@@ -266,10 +266,10 @@
|> Variable.variant_fixes [\"y\", \"y\", \"z\"]"
"([\"y\", \"ya\", \"z\"], ...)"}
- Now a fresh variant for the second occurence of @{text y} is created
+  Now a fresh variant for the second occurrence of \<open>y\<close> is created
avoiding any clash. In this way we can also create fresh free variables
that avoid any clashes with fixed variables. In Line~3 below we fix
- the variable @{text x} in the context @{text ctxt1}. Next we want to
+ the variable \<open>x\<close> in the context \<open>ctxt1\<close>. Next we want to
create two fresh variables of type @{typ nat} as variants of the
string @{text [quotes] "x"} (Lines 6 and 7).
@@ -285,9 +285,9 @@
"([(\"x\", \"nat\"), (\"xa\", \"nat\")],
[(\"xa\", \"nat\"), (\"xb\", \"nat\")])"}
- As you can see, if we create the fresh variables with the context @{text ctxt0},
- then we obtain @{text "x"} and @{text "xa"}; but in @{text ctxt1} we obtain @{text "xa"}
- and @{text "xb"} avoiding @{text x}, which was fixed in this context.
+ As you can see, if we create the fresh variables with the context \<open>ctxt0\<close>,
+ then we obtain \<open>x\<close> and \<open>xa\<close>; but in \<open>ctxt1\<close> we obtain \<open>xa\<close>
+ and \<open>xb\<close> avoiding \<open>x\<close>, which was fixed in this context.
  Often one faces the problem that, given some terms, one needs to create fresh
  variables avoiding any variable occurring in those terms. For this you can use the
@@ -302,8 +302,8 @@
end"
"[(\"xb\", \"nat\"), (\"xc\", \"nat\")]"}
- The result is @{text xb} and @{text xc} for the names of the fresh
- variables, since @{text x} and @{text xa} occur in the term we declared.
+ The result is \<open>xb\<close> and \<open>xc\<close> for the names of the fresh
+ variables, since \<open>x\<close> and \<open>xa\<close> occur in the term we declared.
Note that @{ML_ind declare_term in Variable} does not fix the
variables; it just makes them ``known'' to the context. You can see
that if you print out a declared term.
@@ -317,10 +317,10 @@
pwriteln (pretty_term ctxt1 trm)
end"}\\
\setlength{\fboxsep}{0mm}
- @{text ">"}~\colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text P}}}~%
- \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text x}}}~%
- \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text y}}}~%
- \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text z}}}
+ \<open>>\<close>~\colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{\<open>P\<close>}}~%
+ \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{\<open>x\<close>}}~%
+ \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{\<open>y\<close>}}~%
+ \colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{\<open>z\<close>}}
\end{graybox}
\end{isabelle}
@@ -339,8 +339,8 @@
end"
"(Free (\"x\", \"'a\"), Free (\"x\", \"nat\"))"}
- Parsing the string in the context @{text "ctxt0"} results in a free variable
- with a default polymorphic type, but in case of @{text "ctxt1"} we obtain a
+ Parsing the string in the context \<open>ctxt0\<close> results in a free variable
+ with a default polymorphic type, but in case of \<open>ctxt1\<close> we obtain a
free variable of type @{typ nat} as previously declared. Which
type the parsed term receives depends upon the last declaration that
is made, as the next example illustrates.
@@ -372,22 +372,22 @@
end"}
\end{linenos}
\setlength{\fboxsep}{0mm}
- @{text ">"}~\colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text P}}}~%
- @{text "?x ?y ?z"}
+ \<open>>\<close>~\colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{\<open>P\<close>}}~%
+ \<open>?x ?y ?z\<close>
\end{graybox}
\end{isabelle}
In Line 3 we fix the variables @{term x}, @{term y} and @{term z} in
- context @{text ctxt1}. The function @{ML_ind export_terms in
+ context \<open>ctxt1\<close>. The function @{ML_ind export_terms in
Variable} from the structure @{ML_struct Variable} can be used to transfer
terms between contexts. Transferring means to turn all (free)
variables that are fixed in one context, but not in the other, into
schematic variables. In our example, we are transferring the term
- @{text "P x y z"} from context @{text "ctxt1"} to @{text "ctxt0"},
+ \<open>P x y z\<close> from context \<open>ctxt1\<close> to \<open>ctxt0\<close>,
which means @{term x}, @{term y} and @{term z} become schematic
variables (as can be seen by the leading question marks in the result).
- Note that the variable @{text P} stays a free variable, since it not fixed in
- @{text ctxt1}; it is even highlighed, because @{text "ctxt0"} does
+  Note that the variable \<open>P\<close> stays a free variable, since it is not fixed in
+  \<open>ctxt1\<close>; it is even highlighted, because \<open>ctxt0\<close> does
not know about it. Note also that in Line 6 we had to use the
function @{ML_ind singleton}, because the function @{ML_ind
export_terms in Variable} normally works over lists of terms.
@@ -411,7 +411,7 @@
end"
"?P ?x ?y ?z ?x ?y ?z"}
- Since we fixed all variables in @{text ctxt1}, in the exported
+ Since we fixed all variables in \<open>ctxt1\<close>, in the exported
result all of them are schematic. The great point of contexts is
that exporting from one to another is not just restricted to
variables, but also works with assumptions. For this we can use the
@@ -429,14 +429,13 @@
"x \<equiv> y \<Longrightarrow> y \<equiv> x"}
The function @{ML_ind add_assumes in Assumption} from the structure
- @{ML_struct Assumption} adds the assumption \mbox{@{text "x \<equiv> y"}}
- to the context @{text ctxt1} (Line 3). This function expects a list
+ @{ML_struct Assumption} adds the assumption \mbox{\<open>x \<equiv> y\<close>}
+ to the context \<open>ctxt1\<close> (Line 3). This function expects a list
of @{ML_type cterm}s and returns them as theorems, together with the
new context in which they are ``assumed''. In Line 4 we use the
function @{ML_ind symmetric in Thm} from the structure @{ML_struct
Thm} in order to obtain the symmetric version of the assumed
- meta-equality. Now exporting the theorem @{text "eq'"} from @{text
- ctxt1} to @{text ctxt0} means @{term "y \<equiv> x"} will be prefixed with
+  meta-equality. Now exporting the theorem \<open>eq'\<close> from \<open>ctxt1\<close>
+  to \<open>ctxt0\<close> means @{term "y \<equiv> x"} will be prefixed with
the assumed theorem. The boolean flag in @{ML_ind export in
Assumption} indicates whether the assumptions should be marked with
the goal marker (see Section~\ref{sec:basictactics}). In normal
@@ -462,13 +461,13 @@
Proof_Context.export ctxt1 ctxt0 [eq']
end"
"[?x \<equiv> ?y \<Longrightarrow> ?y \<equiv> ?x]"}
-*}
+\<close>
-text {*
+text \<open>
-*}
+\<close>
(*
@@ -505,9 +504,9 @@
oops
*)
-section {* Local Theories and Local Setups\label{sec:local} (TBD) *}
+section \<open>Local Theories and Local Setups\label{sec:local} (TBD)\<close>
-text {*
+text \<open>
In contrast to an ordinary theory, which simply consists of a type
signature, as well as tables for constants, axioms and theorems, a local
theory contains additional context information, such as locally fixed
@@ -524,44 +523,44 @@
use the commands \isacommand{method\_setup} for installing methods in the
current theory and \isacommand{simproc\_setup} for adding new simprocs to
the current simpset.
-*}
+\<close>
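+text \<open>
+  As a brief, hedged sketch (the method name \<open>true_intro\<close> is made up for this
+  example), \isacommand{method\_setup} expects a parser that yields a function
+  from proof contexts to methods; the method below merely resolves with the
+  theorem \<open>TrueI\<close>:
+\<close>
+method_setup %gray true_intro =
+  \<open>Scan.succeed (fn ctxt => SIMPLE_METHOD' (resolve_tac ctxt @{thms TrueI}))\<close>
+  "sketch: solve goals of the form True"
+lemma "True"
+  by true_intro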
-section {* Morphisms (TBD) *}
+section \<open>Morphisms (TBD)\<close>
-text {*
+text \<open>
Morphisms are arbitrary transformations over terms, types, theorems and bindings.
They can be constructed using the function @{ML_ind morphism in Morphism},
  which expects a name and a record containing lists of functions of the following types
\begin{isabelle}
\begin{tabular}{rl}
- @{text "binding:"} & @{text "binding -> binding"}\\
- @{text "typ:"} & @{text "typ -> typ"}\\
- @{text "term:"} & @{text "term -> term"}\\
- @{text "fact:"} & @{text "thm list -> thm list"}
+ \<open>binding:\<close> & \<open>binding -> binding\<close>\\
+ \<open>typ:\<close> & \<open>typ -> typ\<close>\\
+ \<open>term:\<close> & \<open>term -> term\<close>\\
+ \<open>fact:\<close> & \<open>thm list -> thm list\<close>
\end{tabular}
\end{isabelle}
The simplest morphism is the @{ML_ind identity in Morphism}-morphism defined as
-*}
+\<close>
-ML %grayML{*val identity = Morphism.morphism "" {binding = [], typ = [], term = [], fact = []}*}
+ML %grayML\<open>val identity = Morphism.morphism "" {binding = [], typ = [], term = [], fact = []}\<close>
-text {*
+text \<open>
Morphisms can be composed with the function @{ML_ind "$>" in Morphism}
-*}
+\<close>
-ML %grayML{*fun trm_phi (Free (x, T)) = Var ((x, 0), T)
+ML %grayML\<open>fun trm_phi (Free (x, T)) = Var ((x, 0), T)
| trm_phi (Abs (x, T, t)) = Abs (x, T, trm_phi t)
| trm_phi (t $ s) = (trm_phi t) $ (trm_phi s)
- | trm_phi t = t*}
+ | trm_phi t = t\<close>
-ML %grayML{*val phi = Morphism.term_morphism "foo" trm_phi*}
+ML %grayML\<open>val phi = Morphism.term_morphism "foo" trm_phi\<close>
-ML %grayML{*Morphism.term phi @{term "P x y"}*}
+ML %grayML\<open>Morphism.term phi @{term "P x y"}\<close>
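+text \<open>
+  A small sketch of composition with @{ML_ind "$>" in Morphism}: composing
+  \<open>phi\<close> with the identity morphism from above yields a morphism that, applied
+  to a term, behaves just like \<open>phi\<close>.
+\<close>
+ML %grayML\<open>val phi' = identity $> phi;
+Morphism.term phi' @{term "P x y"}\<close>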
-text {*
+text \<open>
@{ML_ind term_morphism in Morphism}
@{ML_ind term in Morphism},
@@ -570,25 +569,25 @@
\begin{readmore}
Morphisms are implemented in the file @{ML_file "Pure/morphism.ML"}.
\end{readmore}
-*}
+\<close>
-section {* Misc (TBD) *}
+section \<open>Misc (TBD)\<close>
-text {*
+text \<open>
FIXME: association lists:
@{ML_file "Pure/General/alist.ML"}
FIXME: calling the ML-compiler
-*}
+\<close>
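+text \<open>
+  A tiny sketch for association lists (the concrete list below is made up):
+  @{ML_ind lookup in AList} returns the value associated with a key as an
+  option, using the supplied equality test.
+\<close>
+ML %grayML\<open>AList.lookup (op =) [("x", 1), ("y", 2)] "y"\<close>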
-section {* What Is In an Isabelle Name? (TBD) *}
+section \<open>What Is In an Isabelle Name? (TBD)\<close>
-text {*
+text \<open>
On the ML-level of Isabelle, you often have to work with qualified names.
These are strings with some additional information, such as positional
information and qualifiers. Such qualified names can be generated with the
- antiquotation @{text "@{binding \<dots>}"}. For example
+ antiquotation \<open>@{binding \<dots>}\<close>. For example
@{ML_response [display,gray]
"@{binding \"name\"}"
@@ -597,18 +596,18 @@
An example where a qualified name is needed is the function
@{ML_ind define in Local_Theory}. This function is used below to define
the constant @{term "TrueConj"} as the conjunction @{term "True \<and> True"}.
-*}
+\<close>
-local_setup %gray {*
+local_setup %gray \<open>
Local_Theory.define ((@{binding "TrueConj"}, NoSyn),
- ((@{binding "TrueConj_def"}, []), @{term "True \<and> True"})) #> snd *}
+ ((@{binding "TrueConj_def"}, []), @{term "True \<and> True"})) #> snd\<close>
-text {*
+text \<open>
Now querying the definition you obtain:
\begin{isabelle}
- \isacommand{thm}~@{text "TrueConj_def"}\\
- @{text "> "}~@{thm TrueConj_def}
+ \isacommand{thm}~\<open>TrueConj_def\<close>\\
+ \<open>> \<close>~@{thm TrueConj_def}
\end{isabelle}
\begin{readmore}
@@ -622,13 +621,13 @@
  \footnote{\bf FIXME: There should probably be a separate section on binding, long-names
and sign.}
-*}
+\<close>
-ML {* Sign.intern_type @{theory} "list" *}
-ML {* Sign.intern_const @{theory} "prod_fun" *}
+ML \<open>Sign.intern_type @{theory} "list"\<close>
+ML \<open>Sign.intern_const @{theory} "prod_fun"\<close>
-text {*
+text \<open>
\footnote{\bf FIXME: Explain the following better; maybe put in a separate
section and link with the comment in the antiquotation section.}
@@ -641,27 +640,27 @@
Functions about naming are implemented in @{ML_file "Pure/General/name_space.ML"};
functions about signatures in @{ML_file "Pure/sign.ML"}.
\end{readmore}
-*}
+\<close>
-text {*
+text \<open>
  @{ML_ind "Binding.name_of"} returns the name of a binding as a plain string, without markup.
@{ML_ind "Binding.concealed"}
-*}
+\<close>
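+text \<open>
+  For example (a small sketch using the binding name from above), the following
+  returns the plain name of the binding as a string:
+\<close>
+ML %grayML\<open>Binding.name_of @{binding "TrueConj"}\<close>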
-section {* Concurrency (TBD) *}
+section \<open>Concurrency (TBD)\<close>
-text {*
+text \<open>
@{ML_ind prove_future in Goal}
@{ML_ind future_result in Goal}
-*}
+\<close>
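+text \<open>
+  A minimal, hedged sketch of @{ML_ind prove_future in Goal}: it takes the same
+  arguments as @{ML_ind prove in Goal}, but the proof may be carried out in a
+  separate thread. Here we just prove \<open>True\<close>:
+\<close>
+ML %grayML\<open>Goal.prove_future @{context} [] [] @{prop "True"}
+  (fn {context = ctxt, ...} => resolve_tac ctxt @{thms TrueI} 1)\<close>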
-section {* Parse and Print Translations (TBD) *}
+section \<open>Parse and Print Translations (TBD)\<close>
-section {* Summary *}
+section \<open>Summary\<close>
-text {*
+text \<open>
TBD
-*}
+\<close>
end