--- a/CookBook/FirstSteps.thy Mon Oct 20 06:22:11 2008 +0000
+++ b/CookBook/FirstSteps.thy Mon Oct 27 18:48:52 2008 +0100
@@ -5,7 +5,8 @@
chapter {* First Steps *}
-text {*
+text {*
+
Isabelle programming is done in Standard ML.
Just like lemmas and proofs, code in Isabelle is part of a
theory. If you want to follow the code written in this chapter, we
@@ -27,11 +28,11 @@
The easiest and quickest way to include code in a theory is
by using the \isacommand{ML} command. For example\smallskip
-\isacommand{ML}
-@{ML_text "{"}@{ML_text "*"}\isanewline
+\isa{\isacommand{ML}
+\isacharverbatimopen\isanewline
\hspace{5mm}@{ML "3 + 4"}\isanewline
-@{ML_text "*"}@{ML_text "}"}\isanewline
-@{ML_text "> 7"}\smallskip
+\isacharverbatimclose\isanewline
+@{ML_text "> 7"}\smallskip}
Expressions inside \isacommand{ML} commands are immediately evaluated,
like ``normal'' Isabelle proof scripts, by using the advance and undo buttons of
@@ -40,15 +41,15 @@
commands the undo operation behaves slightly counter-intuitively, because
if you define\smallskip
-\isacommand{ML}
-@{ML_text "{"}@{ML_text "*"}\isanewline
-@{ML_text "val foo = true"}\isanewline
-@{ML_text "*"}@{ML_text "}"}\isanewline
-@{ML_text "> true"}\smallskip
+\isa{\isacommand{ML}
+\isacharverbatimopen\isanewline
+\hspace{5mm}@{ML_text "val foo = true"}\isanewline
+\isacharverbatimclose\isanewline
+@{ML_text "> true"}\smallskip}
then Isabelle's undo operation has no effect on the definition of
@{ML_text "foo"}. Note that from now on we will drop the
- \isacommand{ML} @{ML_text "{"}@{ML_text "* \<dots> *"}@{ML_text "}"} whenever
+ \isacommand{ML} \isa{\isacharverbatimopen \ldots \isacharverbatimclose} whenever
we show code and its response.
Once a portion of code is relatively stable, one usually wants to
@@ -65,7 +66,8 @@
\end{tabular}
\end{center}
- Note that from now on we are going to drop the the
+ Note that from now on we are going to drop the
+ \isacommand{ML} \isa{\isacharverbatimopen \ldots \isacharverbatimclose}.
*}
@@ -100,25 +102,22 @@
It is also possible to redirect the channel where the @{ML_text "foo"} is
printed to a separate file, e.g. to prevent Proof General from choking on massive
amounts of trace output. This redirection can be achieved using the code
-*}
-
-ML {*
- val strip_specials =
- let
- fun strip ("\^A" :: _ :: cs) = strip cs
- | strip (c :: cs) = c :: strip cs
- | strip [] = [];
- in implode o strip o explode end;
- fun redirect_tracing stream =
- Output.tracing_fn := (fn s =>
+@{ML_decl [display]
+"val strip_specials =
+let
+ fun strip (\"\\^A\" :: _ :: cs) = strip cs
+ | strip (c :: cs) = c :: strip cs
+ | strip [] = [];
+in implode o strip o explode end;
+
+fun redirect_tracing stream =
+ Output.tracing_fn := (fn s =>
(TextIO.output (stream, (strip_specials s));
- TextIO.output (stream, "\n");
- TextIO.flushOut stream));
-*}
+ TextIO.output (stream, \"\\n\");
+ TextIO.flushOut stream));"}
-text {*
- Calling @{ML "redirect_tracing"} with @{ML "(TextIO.openOut \"foo.bar\")"}
+ Calling @{ML_text "redirect_tracing"} with @{ML "(TextIO.openOut \"foo.bar\")"}
will cause all tracing information to be printed into the file @{ML_text "foo.bar"}.
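+
+ For instance, a minimal sketch of such a call (the file name @{ML_text [quotes] "foo.bar"} is
+ only illustrative) is
+
+ @{ML_text [display] "redirect_tracing (TextIO.openOut \"foo.bar\")"}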
*}
@@ -143,13 +142,9 @@
Note, however, that antiquotations are statically scoped, that is the value is
determined at ``compile-time'', not ``run-time''. For example the function
-*}
+ @{ML_decl [display]
+ "fun not_current_thyname () = Context.theory_name @{theory}"}
-ML {*
- fun not_current_thyname () = Context.theory_name @{theory}
-*}
-
-text {*
does \emph{not} return the name of the current theory, if it is run in a
different theory. Instead, the code above defines the constant function
that always returns the string @{ML_text "FirstSteps"}, no matter where the
@@ -160,7 +155,6 @@
In a similar way you can use antiquotations to refer to theorems or simpsets:
-
@{ML [display] "@{thm allI}"}
@{ML [display] "@{simpset}"}
@@ -240,7 +234,7 @@
text {*
- @{ML_response [display] "@{typ \"bool \<Rightarrow> nat\"}" "Type \<dots>"}
+ @{ML_response_fake [display] "@{typ \"bool \<Rightarrow> nat\"}" "bool \<Rightarrow> nat"}
(FIXME: Unlike the term antiquotation, @{text "@{typ \<dots>}"} does not print the
internal representation. Is there a reason for this, that needs to be explained
@@ -263,29 +257,19 @@
dynamically. For example, a function that returns the implication
@{text "\<And>(x::\<tau>). P x \<Longrightarrow> Q x"} taking @{term P}, @{term Q} and the typ @{term "\<tau>"}
as arguments can only be written as
-*}
-
-ML {*
- fun make_imp P Q tau =
+@{ML_decl [display]
+"fun make_imp P Q tau =
let
- val x = Free ("x",tau)
+ val x = Free (\"x\",tau)
in Logic.all x (Logic.mk_implies (HOLogic.mk_Trueprop (P $ x),
HOLogic.mk_Trueprop (Q $ x)))
- end
-*}
-
-text {*
+ end"}
The reason is that one cannot pass the arguments @{term P}, @{term Q} and
@{term "tau"} into an antiquotation. For example the following does not work.
-*}
-ML {*
- fun make_wrong_imp P Q tau = @{prop "\<And>x. P x \<Longrightarrow> Q x"}
-*}
-
-text {*
+ @{ML_decl [display] "fun make_wrong_imp P Q tau = @{prop \"\<And>x. P x \<Longrightarrow> Q x\"}"}
To see this apply @{text "@{term S}"}, @{text "@{term T}"} and @{text "@{typ nat}"}
to both functions.
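+
+ For instance, such a test could be sketched as follows (here @{ML_text S}, @{ML_text T} and
+ @{ML_text nat} merely stand for concrete terms and a concrete type):
+
+ @{ML_text [display] "make_imp @{term S} @{term T} @{typ nat}"}
+ @{ML_text [display] "make_wrong_imp @{term S} @{term T} @{typ nat}"}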
@@ -397,18 +381,16 @@
application. This combinator, and several variants are defined in
@{ML_file "Pure/General/basics.ML"}.}
-*}
-
-ML {*
-let
+@{ML_response_fake [display]
+"let
val thy = @{theory}
- val assm1 = cterm_of thy @{prop "\<And>(x::nat). P x \<Longrightarrow> Q x"}
- val assm2 = cterm_of thy @{prop "((P::nat\<Rightarrow>bool) t)"}
+ val assm1 = cterm_of thy @{prop \"\<And>(x::nat). P x \<Longrightarrow> Q x\"}
+ val assm2 = cterm_of thy @{prop \"((P::nat\<Rightarrow>bool) t)\"}
val Pt_implies_Qt =
assume assm1
- |> forall_elim (cterm_of thy @{term "t::nat"});
+ |> forall_elim (cterm_of thy @{term \"t::nat\"});
val Qt = implies_elim Pt_implies_Qt (assume assm2);
in
@@ -416,10 +398,7 @@
Qt
|> implies_intr assm2
|> implies_intr assm1
-end
-*}
-
-text {*
+end" "(FIXME)"}
This code-snippet constructs the following proof:
@@ -507,51 +486,43 @@
with the variables @{text xs} that will be generalised once the
goal is proved. The @{text "tac"} is the tactic which proves the goal and which
can make use of the local assumptions.
-*}
-
-
-ML {*
-let
+@{ML_response_fake [display]
+"let
val ctxt = @{context}
- val goal = @{prop "P \<or> Q \<Longrightarrow> Q \<or> P"}
+ val goal = @{prop \"P \<or> Q \<Longrightarrow> Q \<or> P\"}
in
- Goal.prove ctxt ["P", "Q"] [] goal (fn _ =>
+ Goal.prove ctxt [\"P\", \"Q\"] [] goal (fn _ =>
eresolve_tac [disjE] 1
THEN resolve_tac [disjI2] 1
THEN assume_tac 1
THEN resolve_tac [disjI1] 1
THEN assume_tac 1)
-end
-*}
-
-text {*
+end" "(FIXME)"}
\begin{readmore}
To learn more about the function @{ML Goal.prove} see \isccite{sec:results}.
\end{readmore}
-*}
-
-text {* An alternative way to transcribe this proof is as follows *}
+ An alternative way to transcribe this proof is as follows
-ML {*
-let
+@{ML_response_fake [display]
+"let
val ctxt = @{context}
- val goal = @{prop "P \<or> Q \<Longrightarrow> Q \<or> P"}
+ val goal = @{prop \"P \<or> Q \<Longrightarrow> Q \<or> P\"}
in
- Goal.prove ctxt ["P", "Q"] [] goal (fn _ =>
+ Goal.prove ctxt [\"P\", \"Q\"] [] goal (fn _ =>
(eresolve_tac [disjE]
THEN' resolve_tac [disjI2]
THEN' assume_tac
THEN' resolve_tac [disjI1]
THEN' assume_tac) 1)
-end
+end" "(FIXME)"}
+
+ (FIXME: are there any advantages/disadvantages to this approach?)
*}
-text {* (FIXME: are there any advantages/disadvantages about this way?) *}
-
section {* Storing Theorems *}
section {* Theorem Attributes *}
--- a/CookBook/Intro.thy Mon Oct 20 06:22:11 2008 +0000
+++ b/CookBook/Intro.thy Mon Oct 27 18:48:52 2008 +0100
@@ -16,10 +16,10 @@
text {*
This cookbook targets an audience who already knows how to use Isabelle
for writing theories and proofs. We also assume that readers are
- familiar with the Standard ML, the programming language in which
+ familiar with Standard ML, the programming language in which
most of Isabelle is implemented. If you are unfamiliar with either of
these two subjects, you should first work through the Isabelle/HOL
- tutorial \cite{isa-tutorial} and Paulson's book on Standard ML
+ tutorial \cite{isa-tutorial} or Paulson's book on Standard ML
\cite{paulson-ml2}.
*}
--- a/CookBook/Package/Ind_Interface.thy Mon Oct 20 06:22:11 2008 +0000
+++ b/CookBook/Package/Ind_Interface.thy Mon Oct 27 18:48:52 2008 +0100
@@ -1,5 +1,5 @@
theory Ind_Interface
-imports Base Simple_Inductive_Package
+imports "../Base" Simple_Inductive_Package
begin
(*<*)
@@ -114,7 +114,7 @@
Const (\"all\", \<dots>) $ Abs (\"z\", TFree (\"'a\", \<dots>),
Const (\"==>\", \<dots>) $
(Const (\"Trueprop\", \<dots>) $
- (Free (\"trcl\", \<dots>) $ Free (\"r\", \<dots>) $ Bound 2 $ Bound 1)) $
+ (Free (\"trcl\", \<dots>) $ Free (\"r\", \<dots>) $ Bound 2 $ Bound 1)) $
(Const (\"==>\", \<dots>) $ \<dots> $ \<dots>))))])]),
\<dots>)
: (((Name.binding * typ) * mixfix) list *
--- a/CookBook/Parsing.thy Mon Oct 20 06:22:11 2008 +0000
+++ b/CookBook/Parsing.thy Mon Oct 27 18:48:52 2008 +0100
@@ -3,6 +3,7 @@
begin
+
chapter {* Parsing *}
text {*
@@ -15,16 +16,19 @@
is driven by priority grammars. Parsers for outer syntax are built up by functional
parsing combinators. These combinators are a well-established technique for parsing,
which has, for example, been described in Paulson's classic book \cite{paulson-ml2}.
-
- Isabelle developers are usually concerned with writing parsers for outer
- syntax, either for new definitional packages or for calling tactics. The library
- for writing such parsers in can roughly be split up into two parts.
+ Isabelle developers are usually concerned with writing these outer syntax parsers,
+ either for new definitional packages or for calling tactics with specific arguments.
+
+ \begin{readmore}
+ The library
+ for writing parser combinators can be split up, roughly, into two parts.
The first part consists of a collection of generic parser combinators defined
in the structure @{ML_struct Scan} in the file
@{ML_file "Pure/General/scan.ML"}. The second part of the library consists of
combinators for dealing with specific token types, which are defined in the
structure @{ML_struct OuterParse} in the file
@{ML_file "Pure/Isar/outer_parse.ML"}.
+ \end{readmore}
*}
@@ -50,6 +54,8 @@
will raise the exception @{ML_text "FAIL"}.
There are three exceptions used in the parsing combinators:
+ (FIXME: do the exceptions need to be explained, given that the user cannot use them from ``outside''?)
+
\begin{itemize}
\item @{ML_text "FAIL"} is used to indicate that alternative routes of parsing
might be explored.
@@ -86,7 +92,7 @@
Parsers that explore
alternatives can be constructed using the function @{ML "(op ||)"}. For example, the
- parser @{ML_open "(p || q)" for p q} returns the result of @{ML_text "p"}, if it succeeds,
+ parser @{ML_open "(p || q)" for p q} returns the result of @{ML_text "p"}, in case it succeeds,
otherwise it returns the result of @{ML_text "q"}. For example
@{ML_response [display]
@@ -135,24 +141,27 @@
@{ML_open [display] "(p -- q) || r" for p q r}
However, this parser is problematic for producing an appropriate error message, in case
- the parsing of @{ML_open "(p -- q)" for p q} fails. Because one loses the information
- that @{ML_text p} should be followed by @{ML_text q}. To see this consider the case that @{ML_text p}
- is present in the input, but not @{ML_text q}. So @{ML_open "(p -- q)" for p q} will fail and the
+ the parsing of @{ML_open "(p -- q)" for p q} fails. In that case the parser above loses the
+ information that @{ML_text p} should be followed by @{ML_text q}. To see this, consider the
+ case in which @{ML_text p} is present in the input, but @{ML_text q} is not. That means
+ @{ML_open "(p -- q)" for p q} will fail and the
alternative parser @{ML_text r} will be tried. However, in many circumstances this will be the wrong
- parser for the input and therefore will fail. The error message is then caused by the
- failure of @{ML_text r}, not by the absense of @{ML_text p} in the input. These situations
- can be avoided using the funtion @{ML "(op !!)"}. This function aborts the whole process of
+ parser for the input ``p-followed-by-q'' and therefore will fail. The error message is then caused by the
+ failure of @{ML_text r}, not by the absence of @{ML_text q} in the input. This kind of situation
+ can be avoided by using the function @{ML "(op !!)"}. This function aborts the whole process of
parsing in case of failure and invokes an error message. For example if we invoke the parser
@{ML [display] "(!! (fn _ => \"foo\") ($$ \"h\"))"}
- on @{ML_text "hello"}, the parsing succeeds
+ on @{ML_text [quotes] "hello"}, the parsing succeeds
@{ML_response [display]
"(!! (fn _ => \"foo\") ($$ \"h\")) (explode \"hello\")"
"(\"h\", [\"e\", \"l\", \"l\", \"o\"])"}
- but if we invoke it on @{ML_text "world"}
+ but if we invoke it on @{ML_text [quotes] "world"}
@{ML_response_fake [display] "(!! (fn _ => \"foo\") ($$ \"h\")) (explode \"world\")"
"Exception ABORT raised"}
@@ -165,14 +174,14 @@
"Exception Error \"foo\" raised"}
This ``prefixing'' is usually done by wrappers such as @{ML "OuterSyntax.command"}
- (FIXME: give reference to late).
+ (FIXME: give reference to a later place).
Returning to our example of parsing @{ML_open "(p -- q) || r" for p q r}. If we want
- to generate the correct error message for @{ML_text q} not following @{ML_text p}, then
+ to generate the correct error message for p-followed-by-q, then
we have to write
*}
-ML {*
+ML {*
fun p_followed_by_q p q r =
let
val err = (fn _ => p ^ " is not followed by " ^ q)
@@ -240,12 +249,43 @@
This is because the parsers for the theory syntax, as well as the parsers for the
argument syntax of proof methods and attributes, use the type
@{ML_type OuterParse.token} (which is identical to the type @{ML_type OuterLex.token}).
+
+ \begin{readmore}
The parser functions for the theory syntax are contained in the structure
- @{ML_struct OuterParse} defined in the file\linebreak @{ML_file "Pure/Isar/outer_parse.ML"}.
+ @{ML_struct OuterParse} defined in the file @{ML_file "Pure/Isar/outer_parse.ML"}.
+ The definition for tokens is in the file @{ML_file "Pure/Isar/outer_lex.ML"}.
+ \end{readmore}
+*}
+
+ML {* OuterLex.mk_text "hello" *}
+
+ML {* print_depth 50 *}
+
+ML {*
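+ (* quick test: scan the string "for" into a list of outer tokens *)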
+ let open OuterLex in
+ OuterSyntax.scan Position.none "for"
+ end;
+
*}
ML {* map OuterLex.mk_text (explode "hello") *}
+ML {*
+
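+(* scans str into outer tokens; lex is a pair of keyword lists from which the lexicons are built *)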
+fun my_scan lex pos str =
+ Source.of_string str
+ |> Symbol.source {do_recover = false}
+ |> OuterLex.source {do_recover = SOME false}
+ (fn () => pairself (Scan.make_lexicon o map Symbol.explode) lex) pos
+ |> Source.exhaust;
+
+*}
+
+ML {*
+let val toks = my_scan (["hello"], []) Position.none "hello"
+in (OuterParse.$$$ "hello") toks end
+*}
+
text {*
@{ML_response_fake [display] "map OuterLex.mk_text (explode \"hello\")"
--- a/CookBook/antiquote_setup_plus.ML Mon Oct 20 06:22:11 2008 +0000
+++ b/CookBook/antiquote_setup_plus.ML Mon Oct 27 18:48:52 2008 +0100
@@ -1,79 +1,65 @@
-(*
-Auxiliary antiquotations for the Cookbook.
-
-*)
+(* Auxiliary antiquotations for the Cookbook. *)
structure AntiquoteSetup: sig end =
struct
val str_of_source = space_implode " " o map OuterLex.unparse o #2 o #1 o Args.dest_src;
-fun ml_val ys txt = "fn " ^
- (case ys of [] => "_" | _ => enclose "(" ")" (commas ys)) ^
- " => (" ^ txt ^ ");";
+fun ml_val_open (ys, xs) txt =
+ let fun ml_val_open_aux ys txt =
+ "fn " ^ (case ys of [] => "_" | _ => enclose "(" ")" (commas ys)) ^ " => (" ^ txt ^ ")";
+ in
+ (case xs of
+ [] => ml_val_open_aux ys txt
+ | _ => ml_val_open_aux ys ("let open " ^ (space_implode " " xs) ^ " in " ^ txt ^ " end"))
+ end;
-fun ml_val_open (ys, []) txt = ml_val ys txt
- | ml_val_open (ys, xs) txt =
- ml_val ys ("let open " ^ space_implode " " xs ^ " in " ^ txt ^ " end");
+fun ml_val txt = ml_val_open ([],[]) txt;
fun ml_pat (rhs, pat) =
- let val pat' = implode (map (fn "\\<dots>" => "_" | s => s)
- (Symbol.explode pat))
+ let val pat' = implode (map (fn "\\<dots>" => "_" | s => s) (Symbol.explode pat))
in
"val " ^ pat' ^ " = " ^ rhs
end;
-(* modified from original *)
-fun ml_val txt = "fn _ => (" ^ txt ^ ");";
-fun ml_struct txt = "functor DUMMY_FUNCTOR() = struct structure DUMMY = " ^ txt ^ " end;";
-fun ml_type txt = "val _ = NONE : (" ^ txt ^ ") option;";
+fun ml_struct txt = "functor DUMMY_FUNCTOR() = struct structure DUMMY = " ^ txt ^ " end";
+fun ml_type txt = "val _ = NONE : (" ^ txt ^ ") option";
+fun ml_decl txt = txt;
-fun output_verbatim f src ctxt x =
- let val txt = f ctxt x
- in
- (if ! ThyOutput.source then str_of_source src else txt)
- |> (if ! ThyOutput.quotes then quote else I)
- |> ThyOutput.output (fn _ => fn s => Pretty.str s) src ctxt
- end;
-
-fun output_ml_old ml = output_verbatim
- (fn ctxt => fn ((txt, p), pos) =>
- (ML_Context.eval_in (SOME ctxt) false pos (ml p txt);
- txt |> Source.of_string |> ML_Lex.source |> Source.exhaust |>
- Chunks.transform_cmts |> implode));
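+(* evaluates the ML text in the given context and prints it in the document, line by line *)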
+fun output_ml_open ml src ctxt (txt,ovars) =
+ (ML_Context.eval_in (SOME ctxt) false Position.none (ml ovars txt);
+ ThyOutput.output_list (fn _ => fn s => Pretty.str s) src ctxt (space_explode "\n" txt))
fun output_ml ml src ctxt txt =
- (ML_Context.eval_in (SOME ctxt) false Position.none (ml txt);
- ThyOutput.output (fn _ => fn s => Pretty.str s) src ctxt txt)
+ output_ml_open (K ml) src ctxt (txt,())
fun add_response_indicator txt =
- space_implode "\n" (map (fn s => "> " ^ s) (space_explode "\n" txt))
+ map (fn s => "> " ^ s) (space_explode "\n" txt)
fun output_ml_response ml src ctxt (lhs,pat) =
(ML_Context.eval_in (SOME ctxt) false Position.none (ml (lhs,pat));
- let val txt = lhs ^ "\n" ^ (add_response_indicator pat)
- in ThyOutput.output (fn _ => fn s => Pretty.str s) src ctxt txt end)
+ let val txt = (space_explode "\n" lhs) @ (add_response_indicator pat)
+ in ThyOutput.output_list (fn _ => fn s => Pretty.str s) src ctxt txt end)
fun output_ml_response_fake ml src ctxt (lhs,pat) =
(ML_Context.eval_in (SOME ctxt) false Position.none (ml lhs);
- let val txt = lhs ^ "\n" ^ (add_response_indicator pat)
- in ThyOutput.output (fn _ => fn s => Pretty.str s) src ctxt txt end)
+ let val txt = (space_explode "\n" lhs) @ (add_response_indicator pat)
+ in ThyOutput.output_list (fn _ => fn s => Pretty.str s) src ctxt txt end)
fun check_file_exists ctxt name =
if File.exists (Path.append (Path.explode ("~~/src")) (Path.explode name)) then ()
else error ("Source file " ^ quote name ^ " does not exist.")
-
-
val _ = ThyOutput.add_commands
- [("ML_open", ThyOutput.args (Scan.lift (OuterParse.position (Args.name --
+ [("ML_open", ThyOutput.args (Scan.lift (Args.name --
(Scan.optional (Args.$$$ "for" |-- OuterParse.!!! (Scan.repeat1 Args.name)) [] --
- Scan.optional (Args.parens (OuterParse.list1 Args.name)) [])))) (output_ml_old ml_val_open)),
+ Scan.optional (Args.parens (OuterParse.list1 Args.name)) []))) (output_ml_open ml_val_open)),
("ML_file", ThyOutput.args (Scan.lift Args.name)
(ThyOutput.output (fn _ => fn s => (check_file_exists s; Pretty.str s)))),
("ML_text", ThyOutput.args (Scan.lift Args.name)
(ThyOutput.output (fn _ => fn s => Pretty.str s))),
("ML", ThyOutput.args (Scan.lift Args.name) (output_ml ml_val)),
+ ("ML_decl", ThyOutput.args (Scan.lift Args.name) (output_ml ml_decl)),
("ML_response", ThyOutput.args (Scan.lift (Args.name -- Args.name)) (output_ml_response ml_pat)),
("ML_response_fake",
ThyOutput.args (Scan.lift (Args.name -- Args.name)) (output_ml_response_fake ml_val)),
--- a/CookBook/document/root.tex Mon Oct 20 06:22:11 2008 +0000
+++ b/CookBook/document/root.tex Mon Oct 27 18:48:52 2008 +0100
@@ -36,7 +36,7 @@
{\hspace{-3pt}\readmoremarginpar{\fbox{\textbf{Read More}}}\it}{}
% to work around a problem with \isanewline
-\renewcommand{\isanewline}{\parindent0pt\parskip0pt\mbox{}\par\mbox{}}
+\renewcommand{\isanewline}{{\parindent0pt\parskip0pt\mbox{}\par\mbox{}}}
\renewenvironment{isabelle}
{\begin{trivlist}\begin{isabellebody}\small\item\relax}
Binary file cookbook.pdf has changed