theory Advanced
imports Base First_Steps
begin

(*<*)
setup{*
open_file_with_prelude 
  "Advanced_Code.thy"
  ["theory Advanced", "imports Base First_Steps", "begin"]
*}
(*>*)


chapter {* Advanced Isabelle\label{chp:advanced} *}

text {*
\begin{flushright}
{\em All things are difficult before they are easy.} \\[1ex]
proverb
\end{flushright}

\medskip
While terms, types and theorems are the most basic data structures in
Isabelle, there are a number of layers built on top of them. Most of these
layers are concerned with storing and manipulating data. Handling them
properly is an essential skill for programming on the ML-level of Isabelle.
The most basic layer consists of theories. They contain global data and
can be seen as the ``long-term memory'' of Isabelle. There is usually only
one theory active at each moment. Proof contexts and local theories, on the
other hand, store local data for a task at hand. They act like the
``short-term memory'' and there can be many of them that are active in
parallel.
*}

section {* Theories\label{sec:theories} *}

text {*
Theories, as said above, are the most basic layer of abstraction in
Isabelle. They record information about definitions, syntax declarations, axioms,
theorems and much more. For example, if a definition is made, it
must be stored in a theory in order to be usable later on. Similarly
with proofs: once a proof is finished, the proved theorem needs to
be stored in the theorem database of the theory in order to be
usable. All relevant data of a theory can be queried with the
Isabelle command \isacommand{print\_theory}.

\begin{isabelle}
\isacommand{print\_theory}\\
@{text "> names: Pure Code_Generator HOL \<dots>"}\\
@{text "> classes: Inf < type \<dots>"}\\
@{text "> default sort: type"}\\
@{text "> syntactic types: #prop \<dots>"}\\
@{text "> logical types: 'a \<times> 'b \<dots>"}\\
@{text "> type arities: * :: (random, random) random \<dots>"}\\
@{text "> logical constants: == :: 'a \<Rightarrow> 'a \<Rightarrow> prop \<dots>"}\\
@{text "> abbreviations: \<dots>"}\\
@{text "> axioms: \<dots>"}\\
@{text "> oracles: \<dots>"}\\
@{text "> definitions: \<dots>"}\\
@{text "> theorems: \<dots>"}
\end{isabelle}

Functions acting on theories often end with the suffix @{text "_global"},
for example the function @{ML read_term_global in Syntax} in the structure
@{ML_struct Syntax}. The reason is to set them syntactically apart from
functions acting on contexts or local theories, which will be discussed in
the next sections. There is a tendency amongst Isabelle developers to prefer
``non-global'' operations, because they have some advantages, as we will also
discuss later. However, some basic understanding of theories is still necessary
for effective Isabelle programming.
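
To get a first feel for the difference, the following snippet (a small
illustration, assuming the usual HOL setup) parses the string @{text [quotes]
"True"} once with the global function and once with its context-based
counterpart; here both return the same constant.

@{ML_response_fake [display, gray]
"(Syntax.read_term_global @{theory} \"True\",
 Syntax.read_term @{context} \"True\")"
"(Const (\"HOL.True\", \"bool\"), Const (\"HOL.True\", \"bool\"))"}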

An important Isabelle command with theories is \isacommand{setup}. In the
previous chapters we already used it to make a theorem attribute known
to Isabelle and to register a theorem under a name. What happens behind the
scenes is that \isacommand{setup} expects a function of type @{ML_type
"theory -> theory"}: the input theory is the current theory and the output
is the theory in which the attribute has been registered or the theorem has been
stored. This is a fundamental principle in Isabelle. A similar situation
arises with declaring a constant, which can be done on the ML-level with the
function @{ML_ind declare_const in Sign} from the structure @{ML_struct
Sign}. To see how \isacommand{setup} works, consider the following code:

*}

ML{*let
  val thy = @{theory}
  val bar_const = ((@{binding "BAR"}, @{typ "nat"}), NoSyn)
in 
  Sign.declare_const @{context} bar_const thy 
end*}

text {*
If you simply run this code\footnote{Recall that ML-code needs to be enclosed in
\isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"}.} with the
intention of declaring a constant @{text "BAR"} having type @{typ nat}, then 
indeed you obtain a theory as a result. But if you query the
constant on the Isabelle level using the command \isacommand{term}

\begin{isabelle}
\isacommand{term}~@{text BAR}\\
@{text "> \"BAR\" :: \"'a\""}
\end{isabelle}

you can see that you do \emph{not} obtain the expected constant of type @{typ nat}, but a free 
variable (printed in blue) of polymorphic type. The problem is that the 
ML-expression above did not ``register'' the declaration with the current theory. 
This is what the command \isacommand{setup} is for. The constant is properly 
declared with
*}

setup %gray {* fn thy => 
  let
    val bar_const = ((@{binding "BAR"}, @{typ "nat"}), NoSyn)
    val (_, thy') = Sign.declare_const @{context} bar_const thy
  in 
    thy'
  end *}

text {* 
where the declaration is actually applied to the current theory and

\begin{isabelle}
\isacommand{term}~@{text [quotes] "BAR"}\\
@{text "> \"BAR\" :: \"nat\""}
\end{isabelle}

now returns a (black) constant with the type @{typ nat}, as expected.

In a sense, \isacommand{setup} can be seen as a transaction that
takes the current theory @{text thy}, applies an operation, and
produces a new current theory @{text thy'}. This means that we have
to be careful to always apply operations to the most current theory,
not to a \emph{stale} one. Consider again the function inside the
\isacommand{setup}-command:

\begin{isabelle}
\begin{graybox}
\isacommand{setup}~@{text "\<verbopen>"} @{ML
"fn thy => 
let
  val bar_const = ((@{binding \"BAR\"}, @{typ \"nat\"}), NoSyn)
  val (_, thy') = Sign.declare_const @{context} bar_const thy
in
  thy
end"}~@{text "\<verbclose>"}\isanewline
@{text "> ERROR: \"Stale theory encountered\""}
\end{graybox}
\end{isabelle}

This time we erroneously return the original theory @{text thy}, instead of
the modified one @{text thy'}. Such buggy code will always result in 
a runtime error message about stale theories.

However, sometimes it does make sense to work with two theories at the same
time, especially in the context of parsing and typing. In the code below we
use in Line 3 the function @{ML_ind copy in Theory} from the structure
@{ML_struct Theory} for obtaining a new theory that contains the same
data, but is unrelated to the existing theory.
*}

setup %graylinenos {* fn thy => 
let
  val tmp_thy = Theory.copy thy
  val foo_const = ((@{binding "FOO"}, @{typ "nat \<Rightarrow> nat"}), NoSyn)
  val (_, tmp_thy') = Sign.declare_const @{context} foo_const tmp_thy
  val trm1 = Syntax.read_term_global tmp_thy' "FOO baz"
  val trm2 = Syntax.read_term_global thy "FOO baz"
  val _ = pwriteln 
    (Pretty.str (@{make_string} trm1 ^ "\n" ^ @{make_string} trm2))
in
  thy
end *}

text {*
That means we can make changes to the theory @{text tmp_thy} without
affecting the current theory @{text thy}. In this case we declare in @{text
"tmp_thy"} the constant @{text FOO} (Lines 4 and 5). The point of this code
is that we next, in Lines 6 and 7, parse a string into a term (both
times the string is @{text [quotes] "FOO baz"}). But since we parse the string
once in the context of the theory @{text tmp_thy'} in which @{text FOO} is
declared to be a constant of type @{typ "nat \<Rightarrow> nat"} and once in the context 
of @{text thy} where it is not, we obtain two different terms, namely 

\begin{isabelle}
\begin{graybox}
@{text "> Const (\"Advanced.FOO\", \"nat \<Rightarrow> nat\") $ Free (\"baz\", \"nat\")"}\isanewline
@{text "> Free (\"FOO\", \"'a \<Rightarrow> 'b\") $ Free (\"baz\", \"'a\")"}
\end{graybox}
\end{isabelle}

There are two reasons for parsing a term in a temporary theory. One is to
obtain fully qualified names for constants and the other is appropriate type 
inference. This is relevant in situations where definitions are made later, 
but parsing and type inference already have to proceed as if the definitions 
were already made.

\begin{readmore}
Most of the functions about theories are implemented in
@{ML_file "Pure/theory.ML"} and @{ML_file "Pure/global_theory.ML"}.
\end{readmore}
*}

section {* Contexts *}

text {*
Contexts are arguably more important than theories, even though they only 
contain ``short-term memory data''. The reason is that a vast number of
functions in Isabelle depend in one way or another on contexts. Even such
mundane operations as printing out a term make essential use of contexts.
For this consider the following contrived proof-snippet whose purpose is to 
fix two variables:
*}

lemma "True"
proof -
  txt_raw {*\mbox{}\\[-7mm]*} 
  ML_prf {* Variable.dest_fixes @{context} *} 
  txt_raw {*\mbox{}\\[-7mm]\mbox{}*}
  fix x y 
  txt_raw {*\mbox{}\\[-7mm]*}
  ML_prf {* Variable.dest_fixes @{context} *} 
  txt_raw {*\mbox{}\\[-7mm] \ldots*}(*<*)oops(*>*)

text {*
The interesting point is that we injected ML-code before and after
the variables are fixed. For this remember that ML-code inside a proof
needs to be enclosed inside \isacommand{ML\_prf}~@{text "\<verbopen> \<dots> \<verbclose>"},
not \isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"}. The function 
@{ML_ind dest_fixes in Variable} from the structure @{ML_struct Variable} takes 
a context and returns all its currently fixed variables (names). That 
means a context has a dataslot containing information about fixed variables.
The ML-antiquotation @{text "@{context}"} points to the context that is
active at that point of the theory. Consequently, in the first call to 
@{ML dest_fixes in Variable} this dataslot is empty; in the second it is 
filled with @{text x} and @{text y}. What is interesting is that contexts
can be stacked. For this consider the following proof fragment:
*}

lemma "True"
proof -
  fix x y
  { fix z w
    txt_raw {*\mbox{}\\[-7mm]*}
    ML_prf {* Variable.dest_fixes @{context} *} 
    txt_raw {*\mbox{}\\[-7mm]\mbox{}*}
  }
  txt_raw {*\mbox{}\\[-7mm]*}
  ML_prf {* Variable.dest_fixes @{context} *} 
  txt_raw {*\mbox{}\\[-7mm] \ldots*}(*<*)oops(*>*)

text {*
Here, the first time we call @{ML dest_fixes in Variable} we have four fixed variables;
the second time we get only the fixed variables @{text x} and @{text y} as answer, since
@{text z} and @{text w} are no longer in the scope of the context. 
This means the curly-braces act on the Isabelle level as opening and closing statements 
for a context. The above proof fragment corresponds roughly to the following ML-code:
*}

ML{*val ctxt0 = @{context};
val ([x, y], ctxt1) = Variable.add_fixes ["x", "y"] ctxt0;
val ([z, w], ctxt2) = Variable.add_fixes ["z", "w"] ctxt1*}

text {*
where the function @{ML_ind add_fixes in Variable} fixes a list of variables
specified by strings. Let us come back to the point about printing terms. Consider
printing out the term \mbox{@{text "(x, y, z, w)"}} using our function @{ML_ind pretty_term}.
This function takes a context and a term as arguments. Notice how the printing
of the term changes according to which context is used.

\begin{isabelle}
\begin{graybox}
@{ML "let
  val trm = @{term \"(x, y, z, w)\"}
in
  pwriteln (Pretty.chunks 
    [ pretty_term ctxt0 trm,
      pretty_term ctxt1 trm,
      pretty_term ctxt2 trm ])
end"}\\
\setlength{\fboxsep}{0mm}
@{text ">"}~@{text "("}\colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text x}}}@{text ","}~%
\colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text y}}}@{text ","}~%
\colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text z}}}@{text ","}~%
\colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text w}}}@{text ")"}\\
@{text ">"}~@{text "("}\colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{@{text x}}}@{text ","}~%
\colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{@{text y}}}@{text ","}~%
\colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text z}}}@{text ","}~%
\colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text w}}}@{text ")"}\\
@{text ">"}~@{text "("}\colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{@{text x}}}@{text ","}~%
\colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{@{text y}}}@{text ","}~%
\colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{@{text z}}}@{text ","}~%
\colorbox{gray!20}{\raisebox{0mm}[3mm][1mm]{@{text w}}}@{text ")"}
\end{graybox}
\end{isabelle}


The term we are printing is in all three cases the same---a tuple containing
four free variables. As you can see, however, in case of @{ML "ctxt0"} all
variables are highlighted indicating a problem, while in case of @{ML
"ctxt1"} @{text x} and @{text y} are printed as normal (blue) free
variables, but not @{text z} and @{text w}. In the last case all variables
are printed as expected. The point of this example is that the context
contains the information about which variables are fixed, and designates all other
free variables as being alien or faulty; hence the highlighting. 
While this seems like a minor detail, the concept of making the context aware 
of fixed variables is actually quite useful. For example, it prevents us from 
fixing a variable twice

@{ML_response_fake [gray, display]
"@{context}
|> Variable.add_fixes [\"x\", \"x\"]" 
"ERROR: Duplicate fixed variable(s): \"x\""}

More importantly, it also allows us to easily create fresh names for
fixed variables. For this you have to use the function @{ML_ind
variant_fixes in Variable} from the structure @{ML_struct Variable}.

@{ML_response_fake [gray, display]
"@{context}
|> Variable.variant_fixes [\"y\", \"y\", \"z\"]" 
"([\"y\", \"ya\", \"z\"], ...)"}

Now a fresh variant for the second occurrence of @{text y} is created
avoiding any clash. In this way we can also create fresh free variables
that avoid any clashes with fixed variables. In Line~3 below we fix
the variable @{text x} in the context @{text ctxt1}. Next we want to
create two fresh variables of type @{typ nat} as variants of the
string @{text [quotes] "x"} (Lines 6 and 7).

@{ML_response_fake [display, gray, linenos]
"let
  val ctxt0 = @{context}
  val (_, ctxt1) = Variable.add_fixes [\"x\"] ctxt0
  val frees = replicate 2 (\"x\", @{typ nat})
in
  (Variable.variant_frees ctxt0 [] frees,
   Variable.variant_frees ctxt1 [] frees)
end"
"([(\"x\", \"nat\"), (\"xa\", \"nat\")], 
 [(\"xa\", \"nat\"), (\"xb\", \"nat\")])"}

As you can see, if we create the fresh variables with the context @{text ctxt0},
then we obtain @{text "x"} and @{text "xa"}; but in @{text ctxt1} we obtain @{text "xa"}
and @{text "xb"} avoiding @{text x}, which was fixed in this context.

Often one faces the problem of creating fresh variables that avoid any
variables occurring in some given terms. For this you can use the
function @{ML_ind declare_term in Variable} from the structure @{ML_struct Variable}.

@{ML_response_fake [gray, display]
"let
  val ctxt1 = Variable.declare_term @{term \"(x, xa)\"} @{context}
  val frees = replicate 2 (\"x\", @{typ nat})
in
  Variable.variant_frees ctxt1 [] frees
end"
"[(\"xb\", \"nat\"), (\"xc\", \"nat\")]"}

The result is @{text xb} and @{text xc} for the names of the fresh
variables, since @{text x} and @{text xa} occur in the term we declared. 
Note that @{ML_ind declare_term in Variable} does not fix the
variables; it just makes them ``known'' to the context. You can see
this if you print out a declared term.

\begin{isabelle}
\begin{graybox}
@{ML "let
  val trm = @{term \"P x y z\"}
  val ctxt1 = Variable.declare_term trm @{context}
in
  pwriteln (pretty_term ctxt1 trm)
end"}\\
\setlength{\fboxsep}{0mm}
@{text ">"}~\colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text P}}}~%
\colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text x}}}~%
\colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text y}}}~%
\colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text z}}}
\end{graybox}
\end{isabelle}

All variables are highlighted, indicating that they are not
fixed. However, declaring a term is helpful when parsing terms using
the function @{ML_ind read_term in Syntax} from the structure
@{ML_struct Syntax}. Consider the following code:

@{ML_response_fake [gray, display]
"let
  val ctxt0 = @{context}
  val ctxt1 = Variable.declare_term @{term \"x::nat\"} ctxt0
in
  (Syntax.read_term ctxt0 \"x\", 
   Syntax.read_term ctxt1 \"x\") 
end"
"(Free (\"x\", \"'a\"), Free (\"x\", \"nat\"))"}

Parsing the string in the context @{text "ctxt0"} results in a free variable 
with a default polymorphic type, but in case of @{text "ctxt1"} we obtain a
free variable of type @{typ nat} as previously declared. Which
type the parsed term receives depends upon the last declaration that
is made, as the next example illustrates.

@{ML_response_fake [gray, display]
"let
  val ctxt1 = Variable.declare_term @{term \"x::nat\"} @{context}
  val ctxt2 = Variable.declare_term @{term \"x::int\"} ctxt1
in
  (Syntax.read_term ctxt1 \"x\", 
   Syntax.read_term ctxt2 \"x\") 
end"
"(Free (\"x\", \"nat\"), Free (\"x\", \"int\"))"}

The most useful feature of contexts is that one can export, or transfer, 
terms and theorems between them. We show this first for terms. 

\begin{isabelle}
\begin{graybox}
\begin{linenos}
@{ML "let
  val ctxt0 = @{context}
  val (_, ctxt1) = Variable.add_fixes [\"x\", \"y\", \"z\"] ctxt0 
  val foo_trm = @{term \"P x y z\"}
in
  singleton (Variable.export_terms ctxt1 ctxt0) foo_trm
  |> pretty_term ctxt0
  |> pwriteln
end"}
\end{linenos}
\setlength{\fboxsep}{0mm}
@{text ">"}~\colorbox{gray!5}{\raisebox{0mm}[3mm][1mm]{@{text P}}}~%
@{text "?x ?y ?z"}
\end{graybox}
\end{isabelle}

In Line 3 we fix the variables @{term x}, @{term y} and @{term z} in
context @{text ctxt1}. The function @{ML_ind export_terms in
Variable} from the structure @{ML_struct Variable} can be used to transfer
terms between contexts. Transferring means to turn all (free)
variables that are fixed in one context, but not in the other, into
schematic variables. In our example, we are transferring the term
@{text "P x y z"} from context @{text "ctxt1"} to @{text "ctxt0"},
which means @{term x}, @{term y} and @{term z} become schematic
variables (as can be seen by the leading question marks in the result). 
Note that the variable @{text P} stays a free variable, since it is not fixed in
@{text ctxt1}; it is even highlighted, because @{text "ctxt0"} does
not know about it. Note also that in Line 6 we had to use the
function @{ML_ind singleton}, because the function @{ML_ind
export_terms in Variable} normally works over lists of terms.
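
The function @{ML singleton} is not specific to contexts: it merely lifts a
function on lists to a function on a single element. A quick check of this
library function (a small illustration):

@{ML_response_fake [display, gray]
"singleton (map (fn i => i + 1)) 2"
"3"}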

The case of transferring theorems is even more useful. The reason is
that the generalisation of fixed variables to schematic variables is
not trivial if done manually. For illustration purposes we use in the 
following code the function @{ML_ind make_thm in Skip_Proof} from the 
structure @{ML_struct Skip_Proof}. This function will turn an arbitrary 
term, in our case @{term "P x y z x y z"}, into a theorem (disregarding 
whether it is actually provable).

@{ML_response_fake [display, gray]
"let
  val thy = @{theory}
  val ctxt0 = @{context}
  val (_, ctxt1) = Variable.add_fixes [\"P\", \"x\", \"y\", \"z\"] ctxt0 
  val foo_thm = Skip_Proof.make_thm thy @{prop \"P x y z x y z\"}
in
  singleton (Proof_Context.export ctxt1 ctxt0) foo_thm
end"
"?P ?x ?y ?z ?x ?y ?z"}

Since we fixed all variables in @{text ctxt1}, in the exported
result all of them are schematic. The great point of contexts is
that exporting from one to another is not just restricted to
variables, but also works with assumptions. For this we can use the
function @{ML_ind export in Assumption} from the structure
@{ML_struct Assumption}. Consider the following code.

@{ML_response_fake [display, gray, linenos]
"let
  val ctxt0 = @{context}
  val ([eq], ctxt1) = Assumption.add_assumes [@{cprop \"x \<equiv> y\"}] ctxt0
  val eq' = Thm.symmetric eq
in
  Assumption.export false ctxt1 ctxt0 eq'
end"
"x \<equiv> y \<Longrightarrow> y \<equiv> x"}

The function @{ML_ind add_assumes in Assumption} from the structure
@{ML_struct Assumption} adds the assumption \mbox{@{text "x \<equiv> y"}}
to the context @{text ctxt1} (Line 3). This function expects a list
of @{ML_type cterm}s and returns them as theorems, together with the
new context in which they are ``assumed''. In Line 4 we use the
function @{ML_ind symmetric in Thm} from the structure @{ML_struct
Thm} in order to obtain the symmetric version of the assumed
meta-equality. Now exporting the theorem @{text "eq'"} from @{text
ctxt1} to @{text ctxt0} means @{term "y \<equiv> x"} will be prefixed with
the assumed theorem. The boolean flag in @{ML_ind export in
Assumption} indicates whether the assumptions should be marked with
the goal marker (see Section~\ref{sec:basictactics}). In normal
circumstances this is not necessary and so it should be set to @{ML
false}. The result of the export is then the theorem \mbox{@{term
"x \<equiv> y \<Longrightarrow> y \<equiv> x"}}. As can be seen, this is an easy way of obtaining
simple theorems. We will explain this in more detail in
Section~\ref{sec:structured}.

The function @{ML_ind export in Proof_Context} from the structure 
@{ML_struct Proof_Context} combines both export functions from 
@{ML_struct Variable} and @{ML_struct Assumption}. This can be seen
in the following example.

@{ML_response_fake [display, gray]
"let
  val ctxt0 = @{context}
  val ((fvs, [eq]), ctxt1) = ctxt0
    |> Variable.add_fixes [\"x\", \"y\"]
    ||>> Assumption.add_assumes [@{cprop \"x \<equiv> y\"}]
  val eq' = Thm.symmetric eq
in
  Proof_Context.export ctxt1 ctxt0 [eq']
end"
"[?x \<equiv> ?y \<Longrightarrow> ?y \<equiv> ?x]"}
*}



text {*

*}


(*
ML{*Proof_Context.debug := true*}
ML{*Proof_Context.verbose := true*}
*)

(*
lemma "True"
proof -
{ -- "\<And>x. _"
fix x
have "B x" sorry
thm this
}

thm this

{ -- "A \<Longrightarrow> _"
assume A
have B sorry
thm this
}

thm this

{ -- "\<And>x. x = _ \<Longrightarrow> _"
def x \<equiv> a
have "B x" sorry
}

thm this

oops
*)

section {* Local Theories and Local Setups\label{sec:local} (TBD) *}

text {*
In contrast to an ordinary theory, which simply consists of a type
signature, as well as tables for constants, axioms and theorems, a local
theory contains additional context information, such as locally fixed
variables and local assumptions that may be used by the package. The type
@{ML_type local_theory} is identical to the type of \emph{proof contexts}
@{ML_type "Proof.context"}, although not every proof context constitutes a
valid local theory.

@{ML "Context.>> o Context.map_theory"}
@{ML_ind "Local_Theory.declaration"}

A similar command is \isacommand{local\_setup}, which expects a function
of type @{ML_type "local_theory -> local_theory"}. Later on we will also
use the commands \isacommand{method\_setup} for installing methods in the
current theory and \isacommand{simproc\_setup} for adding new simprocs to
the current simpset.
*}
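
text {*
As a small illustration of \isacommand{local\_setup} (a sketch; the binding
@{text "true_copy"} is made up for this example), the following code uses the
function @{ML_ind note in Local_Theory} to store the theorem @{thm TrueI}
under a new name in the current local theory:
*}

local_setup %gray {* fn lthy =>
  let
    val thm = @{thm TrueI}
  in 
    snd (Local_Theory.note ((@{binding "true_copy"}, []), [thm]) lthy)
  end *}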


section {* Morphisms (TBD) *}

text {*
Morphisms are arbitrary transformations over terms, types, theorems and bindings.
They can be constructed using the function @{ML_ind morphism in Morphism},
which expects a record with functions of type

\begin{isabelle}
\begin{tabular}{rl}
@{text "binding:"} & @{text "binding -> binding"}\\
@{text "typ:"}     & @{text "typ -> typ"}\\
@{text "term:"}    & @{text "term -> term"}\\
@{text "fact:"}    & @{text "thm list -> thm list"}
\end{tabular}
\end{isabelle}

The simplest morphism is the @{ML_ind identity in Morphism}-morphism defined as
*}

ML{*val identity = Morphism.morphism {binding = [], typ = [], term = [], fact = []}*}

text {*
Morphisms can be composed with the function @{ML_ind "$>" in Morphism}.
The following function @{text trm_phi}, for example, turns all free variables
of a term into schematic variables.
*}

ML{*fun trm_phi (Free (x, T)) = Var ((x, 0), T) 
  | trm_phi (Abs (x, T, t)) = Abs (x, T, trm_phi t)
  | trm_phi (t $ s) = (trm_phi t) $ (trm_phi s)
  | trm_phi t = t*}

ML{*val phi = Morphism.term_morphism trm_phi*}

ML{*Morphism.term phi @{term "P x y"}*}
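
text {*
To actually see the effect of @{text phi}, we can apply it and print the
result with the same helper functions as before (a small illustration). Since
@{text trm_phi} replaces every @{text Free} by a @{text Var}, the free
variables of @{text "P x y"} should come back as schematic variables.
*}

ML{*pwriteln (Pretty.str (@{make_string} (Morphism.term phi @{term "P x y"})))*}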

text {*
@{ML_ind term_morphism in Morphism}

@{ML_ind term in Morphism},
@{ML_ind thm in Morphism}

\begin{readmore}
Morphisms are implemented in the file @{ML_file "Pure/morphism.ML"}.
\end{readmore}
*}

section {* Misc (TBD) *}

ML {*Datatype.get_info @{theory} "List.list"*}

text {* 
FIXME: association lists:
@{ML_file "Pure/General/alist.ML"}

FIXME: calling the ML-compiler

*}

section {* What Is In an Isabelle Name? (TBD) *}

text {*
On the ML-level of Isabelle, you often have to work with qualified names.
These are strings with some additional information, such as positional
information and qualifiers. Such qualified names can be generated with the
antiquotation @{text "@{binding \<dots>}"}. For example

@{ML_response [display,gray]
"@{binding \"name\"}"
"name"}

An example where a qualified name is needed is the function 
@{ML_ind define in Local_Theory}. This function is used below to define 
the constant @{term "TrueConj"} as the conjunction @{term "True \<and> True"}.
*}

local_setup %gray {* 
  Local_Theory.define ((@{binding "TrueConj"}, NoSyn), 
    (Attrib.empty_binding, @{term "True \<and> True"})) #> snd *}

text {* 
Now querying the definition you obtain:

\begin{isabelle}
\isacommand{thm}~@{text "TrueConj_def"}\\
@{text "> "}~@{thm TrueConj_def}
\end{isabelle}

\begin{readmore}
The basic operations on bindings are implemented in 
@{ML_file "Pure/General/binding.ML"}.
\end{readmore}

\footnote{\bf FIXME give a better example why bindings are important}
\footnote{\bf FIXME give a pointer to \isacommand{local\_setup}; if not, then explain
why @{ML snd} is needed.}
\footnote{\bf FIXME: There should probably be a separate section on binding, long-names
and sign.}

*}


ML {* Sign.intern_type @{theory} "list" *}
ML {* Sign.intern_const @{theory} "prod_fun" *}

text {*
\footnote{\bf FIXME: Explain the following better; maybe put in a separate
section and link with the comment in the antiquotation section.}

Occasionally you have to calculate what the ``base'' name of a given
constant is. For this you can use the function @{ML_ind Long_Name.base_name}. For example:

@{ML_response [display,gray] "Long_Name.base_name \"List.list.Nil\"" "\"Nil\""}

\begin{readmore}
Functions about naming are implemented in @{ML_file "Pure/General/name_space.ML"};
functions about signatures in @{ML_file "Pure/sign.ML"}.
\end{readmore}
*}

text {* 
@{ML_ind "Binding.name_of"} returns the string without markup.
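
For example (a quick check, using the same binding as above):

@{ML_response [display,gray]
"Binding.name_of @{binding \"name\"}"
"\"name\""}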

@{ML_ind "Binding.conceal"} 
*}

section {* Concurrency (TBD) *}

text {*
@{ML_ind prove_future in Goal}
@{ML_ind future_result in Goal}
@{ML_ind fork_pri in Future}
*}
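
text {*
As a rough sketch of how @{ML_ind prove_future in Goal} can be used (assuming
it takes the same arguments as @{ML_ind prove in Goal}), the following code
proves a trivial proposition; the proof may be run in parallel with the rest
of the theory and is only forced when the theorem is actually needed.
*}

ML{*val future_thm = 
  Goal.prove_future @{context} [] [] @{prop "True"} 
    (fn _ => rtac @{thm TrueI} 1)*}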

section {* Parse and Print Translations (TBD) *}

section {* Summary *}

text {*
TBD
*}

end