--- a/ProgTutorial/First_Steps.thy Tue May 14 17:45:13 2019 +0200
+++ b/ProgTutorial/First_Steps.thy Thu May 16 19:56:12 2019 +0200
@@ -45,10 +45,8 @@
\end{graybox}
\end{isabelle}
- If you work with ProofGeneral then like normal Isabelle scripts
- \isacommand{ML}-commands can be evaluated by using the advance and
- undo buttons of your Isabelle environment. If you work with the
- Jedit GUI, then you just have to hover the cursor over the code
+ If you work with
+ Isabelle/jEdit, then you just have to hover the cursor over the code
and you see the evaluated result in the ``Output'' window.
As mentioned in the Introduction, we will drop the \isacommand{ML}~\<open>\<verbopen> \<dots> \<verbclose>\<close> scaffolding whenever we show code. The lines
@@ -95,7 +93,7 @@
code. This can be done in a ``quick-and-dirty'' fashion using the function
@{ML_ind writeln in Output}. For example
- @{ML_response_fake [display,gray] "writeln \"any string\"" "\"any string\""}
+ @{ML_matchresult_fake [display,gray] "writeln \"any string\"" "\"any string\""}
will print out @{text [quotes] "any string"}.
This function expects a string as argument. If you develop under
@@ -103,7 +101,7 @@
for converting values into strings, namely the antiquotation
\<open>@{make_string}\<close>:
- @{ML_response_fake [display,gray] \<open>writeln (@{make_string} 1)\<close> \<open>"1"\<close>}
+ @{ML_matchresult_fake [display,gray] \<open>writeln (@{make_string} 1)\<close> \<open>"1"\<close>}
However, \<open>@{make_string}\<close> only works if the type of what
is converted is monomorphic and not a function.
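
  For instance, a monomorphic value such as a pair of an integer and a
  boolean list can be converted directly. The following is a small sketch
  that only combines \<open>@{make_string}\<close> with @{ML writeln}:

ML %grayML\<open>(* the integer literal defaults to int, so the pair below is monomorphic *)
val _ = writeln (@{make_string} (1, [true, false]))\<close>
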
@@ -111,7 +109,7 @@
You can print out error messages with the function @{ML_ind error in Library};
for example:
- @{ML_response_fake [display,gray]
+ @{ML_matchresult_fake [display,gray]
"if 0 = 1 then true else (error \"foo\")"
"*** foo
***"}
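
  Note that @{ML error} raises the exception @{ML_text ERROR}, so code
  further up the call chain can still recover from the failure. A small
  sketch of this behaviour, using the standard @{ML try} combinator:

ML %grayML\<open>(* try catches ERROR and turns the failure into NONE *)
fun safe_div x y = if y = 0 then error "division by zero" else x div y
val _ = writeln (@{make_string} (try (safe_div 10) 0))   (* NONE *)
val _ = writeln (@{make_string} (try (safe_div 10) 2))   (* SOME 5 *)\<close>
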
@@ -128,8 +126,8 @@
and @{ML_type thm}. Isabelle provides elaborate pretty-printing functions,
which we will explain in more detail in Section \ref{sec:pretty}. For now
we just use the functions @{ML_ind writeln in Pretty} from the structure
- @{ML_struct Pretty} and @{ML_ind pretty_term in Syntax} from the structure
- @{ML_struct Syntax}. For more convenience, we bind them to the toplevel.
+ @{ML_structure Pretty} and @{ML_ind pretty_term in Syntax} from the structure
+ @{ML_structure Syntax}. For more convenience, we bind them to the toplevel.
\<close>
ML %grayML\<open>val pretty_term = Syntax.pretty_term
@@ -138,7 +136,7 @@
text \<open>
They can be used as follows
- @{ML_response_fake [display,gray]
+ @{ML_matchresult_fake [display,gray]
"pwriteln (pretty_term @{context} @{term \"1::nat\"})"
"\"1\""}
@@ -161,7 +159,7 @@
text \<open>
Now by using this context @{ML pretty_term} prints out
- @{ML_response_fake [display, gray]
+ @{ML_matchresult_fake [display, gray]
"pwriteln (pretty_term show_types_ctxt @{term \"(1::nat, x)\"})"
"(1::nat, x::'a)"}
@@ -205,7 +203,7 @@
@{thm [source] conjI} shown below can be used for any (typable)
instantiation of \<open>?P\<close> and \<open>?Q\<close>.
- @{ML_response_fake [display, gray]
+ @{ML_matchresult_fake [display, gray]
"pwriteln (pretty_thm @{context} @{thm conjI})"
"\<lbrakk>?P; ?Q\<rbrakk> \<Longrightarrow> ?P \<and> ?Q"}
@@ -223,7 +221,7 @@
text \<open>
With this function, theorem @{thm [source] conjI} is now printed as follows:
- @{ML_response_fake [display, gray]
+ @{ML_matchresult_fake [display, gray]
"pwriteln (pretty_thm_no_vars @{context} @{thm conjI})"
"\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> P \<and> Q"}
@@ -266,7 +264,7 @@
should try to print these parcels together in a single string. Therefore do
\emph{not} print out information as
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
"pwriteln (Pretty.str \"First half,\");
pwriteln (Pretty.str \"and second half.\")"
"First half,
@@ -274,7 +272,7 @@
but as a single string with appropriate formatting. For example
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
"pwriteln (Pretty.str (\"First half,\" ^ \"\\n\" ^ \"and second half.\"))"
"First half,
and second half."}
@@ -284,7 +282,7 @@
@{ML_ind cat_lines in Library} concatenates a list of strings
and inserts newlines in between each element.
- @{ML_response_fake [display, gray]
+ @{ML_matchresult_fake [display, gray]
"pwriteln (Pretty.str (cat_lines [\"foo\", \"bar\"]))"
"foo
bar"}
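
  The same effect can be achieved without string concatenation by building a
  single pretty-printing parcel, as described further in Section
  \ref{sec:pretty}. A minimal sketch using a forced line break:

ML %grayML\<open>(* one block containing two string parcels separated by a forced break *)
val _ = pwriteln (Pretty.block [Pretty.str "foo", Pretty.fbrk, Pretty.str "bar"])\<close>
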
@@ -389,7 +387,7 @@
applied to it. For example, below three variables are applied to the term
@{term [show_types] "P::nat \<Rightarrow> int \<Rightarrow> unit \<Rightarrow> bool"}:
- @{ML_response_fake [display,gray]
+ @{ML_matchresult_fake [display,gray]
"let
val trm = @{term \"P::nat \<Rightarrow> int \<Rightarrow> unit \<Rightarrow> bool\"}
val ctxt = @{context}
@@ -416,7 +414,7 @@
terms involving fresh variables. For this the infrastructure helps
tremendously to avoid any name clashes. Consider for example:
- @{ML_response_fake [display,gray]
+ @{ML_matchresult_fake [display,gray]
"let
val trm = @{term \"za::'a \<Rightarrow> 'b \<Rightarrow> 'c\"}
val ctxt = @{context}
@@ -462,7 +460,7 @@
text \<open>
The function @{ML_text "my_note"} in line 3 is just a wrapper for the function
- @{ML_ind note in Local_Theory} in the structure @{ML_struct Local_Theory};
+ @{ML_ind note in Local_Theory} in the structure @{ML_structure Local_Theory};
it stores a theorem under a name.
In lines 5 to 6 we call this function to give alternative names for the three
theorems. The point of @{ML "#>"} is that you can sequence such function calls.
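
  Taken on its own, @{ML "#>"} is just reverse function composition, so a
  chain of updates reads in the order in which it is applied. A tiny sketch
  on plain integers:

ML %grayML\<open>(* (f #> g) x  is the same as  g (f x) *)
val add_then_double = (fn i => i + 1) #> (fn i => i * 2)
val _ = writeln (@{make_string} (add_then_double 3))   (* prints 8 *)\<close>
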
@@ -589,19 +587,19 @@
  @{ML ||>>} operates on pairs). Each of the next three lines just increments
  the value by one, but also nests the intermediate results to the left. For example
- @{ML_response [display,gray]
+ @{ML_matchresult [display,gray]
"acc_incs 1"
"((((\"\", 1), 2), 3), 4)"}
You can continue this chain with:
- @{ML_response [display,gray]
+ @{ML_matchresult [display,gray]
"acc_incs 1 ||>> (fn x => (x, x + 2))"
"(((((\"\", 1), 2), 3), 4), 6)"}
An example where this combinator is useful is as follows
- @{ML_response_fake [display, gray]
+ @{ML_matchresult_fake [display, gray]
"let
val ((names1, names2), _) =
@{context}
@@ -641,7 +639,7 @@
  both as pairs. We can use this information, for example, to print out the definiens and
  the theorem corresponding to the definitions. For the first definition:
- @{ML_response_fake [display, gray]
+ @{ML_matchresult_fake [display, gray]
"let
val (one_trm, (_, one_thm)) = one_def
in
@@ -675,7 +673,7 @@
@{ML_ind check_terms in Syntax}, whose purpose is to simultaneously type-check
a list of terms. Consider the code:
- @{ML_response_fake [display, gray]
+ @{ML_matchresult_fake [display, gray]
"let
val ctxt = @{context}
in
@@ -696,7 +694,7 @@
Syntax} in the file @{ML_file "Pure/Syntax/syntax.ML"} that is implemented
in terms of @{ML singleton} and @{ML check_terms in Syntax}.} For example
- @{ML_response_fake [display, gray, linenos]
+ @{ML_matchresult_fake [display, gray, linenos]
"let
val ctxt = @{context}
in
@@ -745,7 +743,7 @@
are indicated by the @{ML_text @}-sign followed by text wrapped in \<open>{\<dots>}\<close>. For example, one can print out the name of the current theory with
the code
- @{ML_response [display,gray] "Context.theory_name @{theory}" "\"First_Steps\""}
+ @{ML_matchresult [display,gray] "Context.theory_name @{theory}" "\"First_Steps\""}
where \<open>@{theory}\<close> is an antiquotation that is substituted with the
current theory (remember that we assumed we are inside the theory
@@ -773,7 +771,7 @@
  function @{ML print_abbrevs in Proof_Context} that lists all currently
defined abbreviations. For example
- @{ML_response_fake [display, gray]
+ @{ML_matchresult_fake [display, gray]
"Proof_Context.print_abbrevs false @{context}"
"\<dots>
INTER \<equiv> INFI
@@ -785,11 +783,11 @@
You can also use antiquotations to refer to proved theorems:
\<open>@{thm \<dots>}\<close> for a single theorem
- @{ML_response_fake [display,gray] "@{thm allI}" "(\<And>x. ?P x) \<Longrightarrow> \<forall>x. ?P x"}
+ @{ML_matchresult_fake [display,gray] "@{thm allI}" "(\<And>x. ?P x) \<Longrightarrow> \<forall>x. ?P x"}
and \<open>@{thms \<dots>}\<close> for more than one
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
"@{thms conj_ac}"
"(?P \<and> ?Q) = (?Q \<and> ?P)
(?P \<and> ?Q \<and> ?R) = (?Q \<and> ?P \<and> ?R)
@@ -799,7 +797,7 @@
example, if you need the version of the theorem @{thm [source] refl} that
has a meta-equality instead of an equality, you can write
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
"@{thm refl[THEN eq_reflection]}"
"?x \<equiv> ?x"}
@@ -818,7 +816,7 @@
text \<open>
The result can be printed out as follows.
- @{ML_response_fake [gray,display]
+ @{ML_matchresult_fake [gray,display]
"foo_thms |> pretty_thms_no_vars @{context}
|> pwriteln"
"True, False \<Longrightarrow> P"}
@@ -842,7 +840,7 @@
  simp-rules. Now you can feed the current simpset into this function.
The current simpset can be referred to using @{ML_ind simpset_of in Raw_Simplifier}.
- @{ML_response_fake [display,gray]
+ @{ML_matchresult_fake [display,gray]
"get_thm_names_from_ss @{context}"
"[\"Nat.of_nat_eq_id\", \"Int.of_int_eq_id\", \"Nat.One_nat_def\", \<dots>]"}
@@ -857,7 +855,7 @@
of this antiquotation is that it does not allow you to use schematic
variables in terms. If you want to have an antiquotation that does not have
this restriction, you can implement your own using the function @{ML_ind
- inline in ML_Antiquotation} from the structure @{ML_struct ML_Antiquotation}. The code
+ inline in ML_Antiquotation} from the structure @{ML_structure ML_Antiquotation}. The code
for the antiquotation \<open>term_pat\<close> is as follows.
\<close>
@@ -886,7 +884,7 @@
so that the ML-system can understand it. (All these functions will be explained
in more detail in later sections.) An example for this antiquotation is:
- @{ML_response_fake [display,gray]
+ @{ML_matchresult_fake [display,gray]
"@{term_pat \"Suc (?x::nat)\"}"
"Const (\"Suc\", \"nat \<Rightarrow> nat\") $ Var ((\"x\", 0), \"nat\")"}
@@ -982,7 +980,7 @@
text \<open>
The data can be retrieved with the projection functions defined above.
- @{ML_response_fake [display, gray]
+ @{ML_matchresult_fake [display, gray]
"project_int (nth foo_list 0);
project_bool (nth foo_list 1)"
"13
@@ -992,7 +990,7 @@
a boolean. If we attempt to access the integer as a boolean, then we get
a runtime error.
- @{ML_response_fake [display, gray]
+ @{ML_matchresult_fake [display, gray]
"project_bool (nth foo_list 0)"
"*** exception Match raised"}
@@ -1012,13 +1010,13 @@
changes according to what is needed at the time).
For theories and proof contexts there are, respectively, the functors
- @{ML_funct_ind Theory_Data} and @{ML_funct_ind Proof_Data} that help
+ @{ML_functor_ind Theory_Data} and @{ML_functor_ind Proof_Data} that help
with the data storage. Below we show how to implement a table in which you
can store theorems and look them up according to a string key. The
intention in this example is to be able to look up introduction rules for logical
connectives. Such a table might be useful in an automatic proof procedure
and therefore it makes sense to store this data inside a theory.
- Consequently we use the functor @{ML_funct Theory_Data}.
+ Consequently we use the functor @{ML_functor Theory_Data}.
The code for the table is:
\<close>
@@ -1064,7 +1062,7 @@
\emph{current} theory is updated (this is explained further in
Section~\ref{sec:theories}). The lookup can now be performed as follows.
- @{ML_response_fake [display, gray]
+ @{ML_matchresult_fake [display, gray]
"lookup @{theory} \"conj\""
"SOME \"\<lbrakk>?P; ?Q\<rbrakk> \<Longrightarrow> ?P \<and> ?Q\""}
@@ -1078,7 +1076,7 @@
text \<open>
and accordingly, @{ML lookup} now produces the introduction rule for @{term "True"}
-@{ML_response_fake [display, gray]
+@{ML_matchresult_fake [display, gray]
"lookup @{theory} \"conj\""
"SOME \"True\""}
@@ -1100,7 +1098,7 @@
We initialise the reference with the empty list. Consequently a first
lookup produces @{ML "ref []" in Unsynchronized}.
- @{ML_response_fake [display,gray]
+ @{ML_matchresult_fake [display,gray]
"WrongRefData.get @{theory}"
"ref []"}
@@ -1122,7 +1120,7 @@
  A lookup in the current theory then gives the expected list
@{ML "ref [1]" in Unsynchronized}.
- @{ML_response_fake [display,gray]
+ @{ML_matchresult_fake [display,gray]
"WrongRefData.get @{theory}"
"ref [1]"}
@@ -1134,7 +1132,7 @@
\isacommand{setup}-command again, we do not obtain @{ML "ref [1]" in
Unsynchronized}, but
- @{ML_response_fake [display,gray]
+ @{ML_matchresult_fake [display,gray]
"WrongRefData.get @{theory}"
"ref [1, 1]"}
@@ -1154,7 +1152,7 @@
\end{readmore}
Storing data in a proof context is done in a similar fashion. As mentioned
- before, the corresponding functor is @{ML_funct_ind Proof_Data}. With the
+ before, the corresponding functor is @{ML_functor_ind Proof_Data}. With the
following code we can store a list of terms in a proof context.
\<close>
@@ -1181,7 +1179,7 @@
Next we start with the context generated by the antiquotation
\<open>@{context}\<close> and update it in various ways.
- @{ML_response_fake [display,gray]
+ @{ML_matchresult_fake [display,gray]
"let
val ctxt0 = @{context}
val ctxt1 = ctxt0 |> update @{term \"False\"}
@@ -1224,7 +1222,7 @@
There are two special instances of the data storage mechanism described
above. The first instance implements named theorem lists using the functor
- @{ML_funct_ind Named_Thms}. This is because storing theorems in a list
+ @{ML_functor_ind Named_Thms}. This is because storing theorems in a list
is such a common task. To obtain a named theorem list, you just declare
\<close>
@@ -1233,7 +1231,7 @@
val description = "Theorems for foo")\<close>
text \<open>
- and set up the @{ML_struct FooRules} with the command
+ and set up the @{ML_structure FooRules} with the command
\<close>
setup %gray \<open>FooRules.setup\<close>
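
text \<open>
  Members can be added to this list either with the attribute that comes with
  the declaration or directly on the ML-level. A small sketch of the latter,
  assuming the list above was declared under the binding \<open>foo\<close>:
\<close>

setup %gray \<open>Context.theory_map (FooRules.add_thm @{thm TrueI})\<close>
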
@@ -1273,7 +1271,7 @@
The rules in the list can be retrieved using the function
@{ML FooRules.get}:
- @{ML_response_fake [display,gray]
+ @{ML_matchresult_fake [display,gray]
"FooRules.get @{context}"
"[\"True\", \"?C\",\"?B\"]"}
@@ -1310,13 +1308,13 @@
On the ML-level these values can be retrieved using the
function @{ML_ind get in Config} from a proof context
- @{ML_response [display,gray]
+ @{ML_matchresult [display,gray]
"Config.get @{context} bval"
"true"}
or directly from a theory using the function @{ML_ind get_global in Config}
- @{ML_response [display,gray]
+ @{ML_matchresult [display,gray]
"Config.get_global @{theory} bval"
"true"}
@@ -1324,7 +1322,7 @@
from the ML-level with the functions @{ML_ind put in Config}
and @{ML_ind put_global in Config}. For example
- @{ML_response [display,gray]
+ @{ML_matchresult [display,gray]
"let
val ctxt = @{context}
val ctxt' = Config.put sval \"foo\" ctxt