--- a/CookBook/FirstSteps.thy Tue Oct 14 01:33:55 2008 -0400
+++ b/CookBook/FirstSteps.thy Fri Oct 17 05:02:04 2008 -0400
@@ -14,7 +14,7 @@
\begin{center}
\begin{tabular}{@ {}l}
- \isacommand{theory} CookBook\\
+ \isacommand{theory} FirstSteps\\
\isacommand{imports} Main\\
\isacommand{begin}\\
\ldots
@@ -26,29 +26,31 @@
text {*
The easiest and quickest way to include code in a theory is
- by using the \isacommand{ML} command. For example
-*}
+ by using the \isacommand{ML} command. For example\smallskip
-ML {*
- 3 + 4
-*}
+\isacommand{ML}
+@{ML_text "{"}@{ML_text "*"}\isanewline
+\hspace{5mm}@{ML "3 + 4"}\isanewline
+@{ML_text "*"}@{ML_text "}"}\isanewline
+@{ML_text "> 7"}\smallskip
-text {*
Expressions inside \isacommand{ML} commands are immediately evaluated,
like ``normal'' Isabelle proof scripts, by using the advance and undo buttons of
your Isabelle environment. The code inside the \isacommand{ML} command
can also contain value and function bindings. However on such \isacommand{ML}
-  commands the undo operation behaves slightly counter-intuitive, because
+  commands the undo operation behaves in a slightly counter-intuitive way, because
- if you define
-*}
+ if you define\smallskip
+
+\isacommand{ML}
+@{ML_text "{"}@{ML_text "*"}\isanewline
+@{ML_text "val foo = true"}\isanewline
+@{ML_text "*"}@{ML_text "}"}\isanewline
+@{ML_text "> true"}\smallskip
-ML {*
- val foo = true
-*}
-
-text {*
then Isabelle's undo operation has no effect on the definition of
- @{ML "foo"}.
+ @{ML_text "foo"}. Note that from now on we will drop the
+ \isacommand{ML} @{ML_text "{"}@{ML_text "* \<dots> *"}@{ML_text "}"} whenever
+ we show code.
Once a portion of code is relatively stable, one usually wants to
export it to a separate ML-file. Such files can then be included in a
@@ -63,6 +65,9 @@
\ldots
\end{tabular}
\end{center}
*}
section {* Debugging and Printing *}
@@ -72,40 +77,30 @@
During developments you might find it necessary to quickly inspect some data
in your code. This can be done in a ``quick-and-dirty'' fashion using
the function @{ML "warning"}. For example
-*}
-ML {* warning "any string" *}
+ @{ML [display] "warning \"any string\""}
-text {*
- will print out @{ML "\"any string\""} inside the response buffer of Isabelle.
+ will print out @{ML_text "\"any string\""} inside the response buffer of Isabelle.
If you develop under PolyML, then there is a convenient, though again
``quick-and-dirty'', method for converting values into strings,
for example:
-*}
-ML {* warning (makestring 1) *}
+ @{ML [display] "warning (makestring 1)"}
-text {*
However this only works if the type of what is converted is monomorphic and not
a function.
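+
+  To illustrate the limitation, here is a rough, unchecked sketch: while the call
+  @{ML_text "warning (makestring 1)"} above is fine, something like
+  @{ML_text "warning (makestring (fn x => x))"} will not yield an informative
+  string for the function value.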
-*}
-text {*
- The funtion warning should only be used for testing purposes, because
- any output this funtion generates will be overwritten, as soon as an error
- is raised.
- Therefore for printing anything more serious and elaborate, the function @{ML tracing}
- should be used. This function writes all output into a separate buffer.
-*}
+  The function @{ML warning} should only be used for testing purposes, because any
+  output this function generates will be overwritten as soon as an error is
+  raised. Therefore, for printing anything more serious and elaborate, the
+  function @{ML tracing} should be used. This function writes all output into
+  a separate buffer.
-ML {* tracing "foo" *}
-
-text {*
+ @{ML [display] "tracing \"foo\""}
It is also possible to redirect the channel where the @{ML_text "foo"} is
printed to a separate file, e.g. to prevent Proof General from choking on massive
-  amounts of trace output. This rediretion can be achieved using the code
+  amounts of trace output. This redirection can be achieved using the code
-
*}
ML {*
@@ -124,7 +119,7 @@
*}
text {*
- Calling @{ML_text "redirect_tracing"} with @{ML_text "(TextIO.openOut \"foo.bar\")"}
+ Calling @{ML "redirect_tracing"} with @{ML "(TextIO.openOut \"foo.bar\")"}
-  will cause that all tracing information is printed into the file @{ML_text "foo.bar"}.
+  will cause all tracing information to be printed into the file @{ML_text "foo.bar"}.
*}
@@ -138,15 +133,13 @@
on the logical level of Isabelle. This is done using antiquotations.
For example, one can print out the name of
the current theory by typing
-*}
-
-ML {* Context.theory_name @{theory} *}
-
-text {*
+
+ @{ML_response [display] "Context.theory_name @{theory}" "FirstSteps : string"}
+
where @{text "@{theory}"} is an antiquotation that is substituted with the
-  current theory (remember that we assumed we are inside the theory CookBook).
+  current theory (remember that we assumed we are inside the theory FirstSteps).
- The name of this theory can be extracted using the function @{ML "Context.theory_name"}.
- So the code above returns the string @{ML "\"CookBook\""}.
+ The name of this theory can be extracted using the function
+ @{ML "Context.theory_name"}.
Note, however, that antiquotations are statically scoped, that is the value is
determined at ``compile-time'', not ``run-time''. For example the function
@@ -160,19 +153,18 @@
text {*
does \emph{not} return the name of the current theory, if it is run in a
different theory. Instead, the code above defines the constant function
- that always returns the string @{ML "\"CookBook\""}, no matter where the
+ that always returns the string @{ML_text "FirstSteps"}, no matter where the
function is called. Operationally speaking, @{text "@{theory}"} is
\emph{not} replaced with code that will look up the current theory in
some data structure and return it. Instead, it is literally
replaced with the value representing the theory name.
In a similar way you can use antiquotations to refer to theorems or simpsets:
-*}
+
-ML {* @{thm allI} *}
-ML {* @{simpset} *}
+ @{ML [display] "@{thm allI}"}
+ @{ML [display] "@{simpset}"}
-text {*
In the course of this introduction, we will learn more about
these antiquotations: they greatly simplify Isabelle programming since one
can directly access all kinds of logical elements from ML.
@@ -184,14 +176,13 @@
text {*
One way to construct terms of Isabelle on the ML level is by using the antiquotation
\mbox{@{text "@{term \<dots>}"}}:
-*}
-ML {* @{term "(a::nat) + b = c"} *}
+ @{ML_response [display] "@{term \"(a::nat) + b = c\"}"
+ "Const (\"op =\", \<dots>) $ (Const (\"HOL.plus_class.plus\", \<dots>) $ \<dots> $ \<dots>) $ \<dots>"}
-text {*
This will show the term @{term "(a::nat) + b = c"}, but printed out using the internal
representation of this term. This internal representation corresponds to the
- datatype @{ML_text "term"}.
+ datatype @{ML_type "term"}.
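+
+  As a rough reminder (see @{ML_text "Pure/term.ML"} for the precise definition),
+  the constructors of this datatype are approximately:
+
+  @{ML_text [display]
+"    Const of string * typ
+  | Free  of string * typ
+  | Var   of indexname * typ
+  | Bound of int
+  | Abs   of string * typ * term
+  | $     of term * term"}
+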
The internal representation of terms uses the usual de Bruijn index mechanism where bound
variables are represented by the constructor @{ML Bound}. The index in @{ML Bound} refers to
@@ -225,34 +216,33 @@
-  can use the following ML funtion to set the limit to a value high
+  can use the following ML function to set the limit to a value high
enough:
\end{exercise}
-*}
-ML {* print_depth 50 *}
-text {*
-
+ @{ML [display] "print_depth 50"}
+
The antiquotation @{text "@{prop \<dots>}"} constructs terms of propositional type,
inserting the invisible @{text "Trueprop"} coercions whenever necessary.
Consider for example
-*}
-
-ML {* @{term "P x"} ; @{prop "P x"} *}
+ @{ML_response [display] "@{term \"P x\"}" "Free (\"P\", \<dots>) $ Free (\"x\", \<dots>)"}
+ @{ML_response [display] "@{prop \"P x\"}"
+ "Const (\"Trueprop\", \<dots>) $ (Free (\"P\", \<dots>) $ Free (\"x\", \<dots>))"}
-text {* which inserts the coercion in the latter case and *}
-
+ which inserts the coercion in the latter case and
-ML {* @{term "P x \<Longrightarrow> Q x"} ; @{prop "P x \<Longrightarrow> Q x"} *}
+ @{ML_response [display] "@{term \"P x \<Longrightarrow> Q x\"}" "Const (\"==>\", \<dots>) $ \<dots> $ \<dots>"}
+ @{ML_response [display] "@{prop \"P x \<Longrightarrow> Q x\"}" "Const (\"==>\", \<dots>) $ \<dots> $ \<dots>"}
-text {*
which does not.
Types can be constructed using the antiquotation @{text "@{typ \<dots>}"}. For example
*}
-ML {* @{typ "bool \<Rightarrow> nat"} *}
text {*
+
+ @{ML_response [display] "@{typ \"bool \<Rightarrow> nat\"}" "Type \<dots>"}
+
(FIXME: Unlike the term antiquotation, @{text "@{typ \<dots>}"} does not print the
internal representation. Is there a reason for this, that needs to be explained
here?)
@@ -265,8 +255,6 @@
*}
-
-
section {* Constructing Terms and Types Manually *}
text {*
@@ -369,25 +357,23 @@
Type checking is always relative to a theory context. For now we can use
the @{ML "@{theory}"} antiquotation to get hold of the current theory.
For example we can write:
-*}
+
+ (FIXME ML-response-unchecked needed)
-ML {* cterm_of @{theory} @{term "(a::nat) + b = c"} *}
+  @{ML [display] "cterm_of @{theory} @{term \"(a::nat) + b = c\"}"}
-text {* and *}
+ and
-ML {*
- let
- val natT = @{typ "nat"}
- val zero = @{term "0::nat"}
- in
- cterm_of @{theory}
- (Const (@{const_name plus}, natT --> natT --> natT)
- $ zero $ zero)
- end
-*}
+@{ML [display]
+"let
+ val natT = @{typ \"nat\"}
+ val zero = @{term \"0::nat\"}
+in
+ cterm_of @{theory}
+ (Const (@{const_name plus}, natT --> natT --> natT)
+ $ zero $ zero)
+end"}
-text {*
-
\begin{exercise}
Check that the function defined in Exercise~\ref{fun:revsum} returns a
result that type checks.
@@ -574,5 +560,61 @@
section {* Theorem Attributes *}
+ML {*
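+(* recovers the original source text of an antiquotation argument from its tokens *)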
+ val str_of_source = space_implode " " o map OuterLex.unparse o #2 o #1 o Args.dest_src;
+
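+(* wraps the text in a dummy fn-abstraction (binding the names in ys) so that it
+   is only compiled for checking, never run *)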
+fun ml_val ys txt = "fn " ^
+ (case ys of [] => "_" | _ => enclose "(" ")" (commas ys)) ^
+ " => (" ^ txt ^ ");";
+
+fun ml_val_open (ys, []) txt = ml_val ys txt
+ | ml_val_open (ys, xs) txt =
+ ml_val ys ("let open " ^ space_implode " " xs ^ " in " ^ txt ^ " end");
+
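+(* for ML_response: turns the expected result into a val-binding pattern, with
+   \<dots> in the pattern replaced by a wildcard *)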
+fun ml_pat (rhs, pat) =
+ let val pat' = implode (map (fn "\\<dots>" => "_" | s => s)
+ (Symbol.explode pat))
+ in
+ "val " ^ pat' ^ " = " ^ rhs
+ end;
+
+(* modified from original *)
+fun ml_val txt = "fn _ => (" ^ txt ^ ");";
+fun ml_struct txt = "functor DUMMY_FUNCTOR() = struct structure DUMMY = " ^ txt ^ " end;";
+fun ml_type txt = "val _ = NONE : (" ^ txt ^ ") option;";
+
+*}
+
+ML {* ml_pat *}
+ML {* K ml_pat *}
+
+ML {*
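+(* generic output function: processes the text with ml, then prints either the
+   original source or the processed text *)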
+fun output_verbatim ml src ctxt (txt, pos) =
+ let val txt = ml ctxt (txt, pos)
+ in
+ (if ! ThyOutput.source then str_of_source src else txt)
+ |> (if ! ThyOutput.quotes then quote else I)
+ |> ThyOutput.output (fn _ => fn s => Pretty.str s) src ctxt
+ end;
+*}
+
+ML {*
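+(* evaluates the text to check that it compiles, then prints it with its
+   ML comments transformed by Chunks.transform_cmts *)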
+fun output_ml ml = output_verbatim
+ (fn ctxt => fn ((txt, p), pos) =>
+ (ML_Context.eval_in (SOME ctxt) false pos (ml p txt);
+ txt |> Source.of_string |> ML_Lex.source |> Source.exhaust |>
+ Chunks.transform_cmts |> implode));
+*}
+
+ML {*
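+(* like output_ml, but prints the checked text unchanged *)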
+fun output_ml_checked ml src ctxt (txt, pos) =
+ (ML_Context.eval_in (SOME ctxt) false pos (ml txt);
+ (if ! ThyOutput.source then str_of_source src else SymbolPos.content (SymbolPos.explode (txt, pos)))
+ |> (if ! ThyOutput.quotes then quote else I)
+ |> ThyOutput.output (fn _ => fn s => Pretty.str s) src ctxt)
+
+*}
+
+
end
\ No newline at end of file
--- a/CookBook/Parsing.thy Tue Oct 14 01:33:55 2008 -0400
+++ b/CookBook/Parsing.thy Fri Oct 17 05:02:04 2008 -0400
@@ -10,7 +10,7 @@
Isabelle distinguishes between \emph{outer} and \emph{inner} syntax.
Theory commands, such as \isacommand{definition}, \isacommand{inductive} and so
on, belong to the outer syntax, whereas items inside double quotation marks, such
- as terms and types, belong to the inner syntax. For parsing inner syntax,
+ as terms, types and so on, belong to the inner syntax. For parsing inner syntax,
Isabelle uses a rather general and sophisticated algorithm due to Earley, which
is driven by priority grammars. Parsers for outer syntax are built up by functional
parsing combinators. These combinators are a well-established technique for parsing,
@@ -30,100 +30,113 @@
section {* Building Up Generic Parsers *}
-text {*
- Let us first have a look at parsing strings using generic parsing combinators.
- The function @{ML "$$"} takes a string and will ``consume'' this string from
- a given input list of strings. Consume in this context means that it will
- return a pair consisting of this string and the rest of the list.
- For example:
-*}
-
-ML {* ($$ "h") (explode "hello");
- ($$ "w") (explode "world") *}
text {*
- returns @{ML "(\"h\", [\"e\", \"l\", \"l\", \"o\"])"} and then
- @{ML "(\"w\", [\"o\", \"r\", \"l\", \"d\"])"}. This function will either succeed (as in
- the two examples above) or raise the exeption @{ML_text "FAIL"} if no string
- can be consumed: for example @{ML_text "($$ \"x\") (explode \"world\")"}.
+
+ Let us first have a look at parsing strings using generic parsing combinators.
+ The function @{ML "(op $$)"} takes a string and will ``consume'' this string from
+ a given input list of strings. ``Consume'' in this context means that it will
+ return a pair consisting of this string and the rest of the input list.
+ For example:
+
+ @{ML_response [display] "($$ \"h\") (explode \"hello\")" "(\"h\", [\"e\", \"l\", \"l\", \"o\"])"}
+ @{ML_response [display] "($$ \"w\") (explode \"world\")" "(\"w\", [\"o\", \"r\", \"l\", \"d\"])"}
+
+  This function will either succeed (as in the two examples above) or raise the
+  exception @{ML_text "FAIL"} if no string can be consumed. For example, the
+  following raises this exception:
+
+ @{ML_text [display] "($$ \"x\") (explode \"world\")"}
+
+ There are three exceptions used in the parsing combinators:
+
+ (FIXME: describe)
- Two such parser can be connected in sequence using the funtion @{ML "(op --)"}.
+ \begin{itemize}
+ \item @{ML_text "FAIL"}
+ \item @{ML_text "MORE"}
+ \item @{ML_text "ABORT"}
+ \end{itemize}
+
+  Slightly more general than @{ML "(op $$)"} is the function @{ML Scan.one}, in that it
+  takes a predicate as argument and then parses exactly one item from the input list
+  satisfying this predicate. For example, the following parser either consumes an
+  @{ML_text "h"} or a @{ML_text "w"}:
+
+@{ML_response [display]
+"let val hw = Scan.one (fn x => x = \"h\" orelse x = \"w\")
+ val input1 = (explode \"hello\")
+ val input2 = (explode \"world\")
+in
+ (hw input1, hw input2)
+end"
+ "((\"h\", [\"e\", \"l\", \"l\", \"o\"]),(\"w\", [\"o\", \"r\", \"l\", \"d\"]))"}
+
+  Two parsers can be connected in sequence by using the function @{ML "(op --)"}.
For example
-*}
-
-ML {* (($$ "h") -- ($$ "e") -- ($$ "l")) (explode "hello") *}
-text {*
- returns @{ML "(((\"h\", \"e\"), \"l\"), [\"l\", \"o\"])"}. Note how the result of
- consumed strings builds up on the left as nested pairs.
+ @{ML_response [display] "(($$ \"h\") -- ($$ \"e\") -- ($$ \"l\")) (explode \"hello\")"
+ "(((\"h\", \"e\"), \"l\"), [\"l\", \"o\"])"}
+
+ Note how the result of consumed strings builds up on the left as nested pairs.
Parsers that explore
alternatives can be constructed using the function @{ML "(op ||)"}. For example, the
parser @{ML_open "p || q" for p q} returns the result of @{ML_text "p"}, if it succeeds,
otherwise it returns the result of @{ML_text "q"}. For example
-*}
-ML {*
- let val hw = ($$ "h") || ($$ "w")
- val input1 = (explode "hello")
- val input2 = (explode "world")
- in
- (hw input1, hw input2)
- end
-*}
+@{ML_response [display]
+"let val hw = ($$ \"h\") || ($$ \"w\")
+ val input1 = (explode \"hello\")
+ val input2 = (explode \"world\")
+in
+ (hw input1, hw input2)
+end"
+ "((\"h\", [\"e\", \"l\", \"l\", \"o\"]), (\"w\", [\"o\", \"r\", \"l\", \"d\"]))"}
-text {*
will in the first case consume the @{ML_text "h"} and in the second the @{ML_text "w"}.
-  The functions @{ML "(op |--)"} and @{ML "(op --|)"} work like the sequencing funtion
+  The functions @{ML "(op |--)"} and @{ML "(op --|)"} work like the sequencing function
for parsers, except that they discard the item parsed by the first (respectively second)
parser. For example
-*}
-
-ML {*
- let val just_h = ($$ "h") --| ($$ "e")
- val just_e = ($$ "h") |-- ($$ "e")
- val input = (explode "hello")
- in
- (just_h input, just_e input)
- end
-*}
-
-text {*
- produces @{ML "(\"h\", [\"l\", \"l\", \"o\"])"} and @{ML "(\"e\", [\"l\", \"l\", \"o\"])"},
- respectively.
-
- (FIXME does it make sense to explain the other functions from @{ML_text BASIC_SCAN}?)
+
+@{ML_response [display]
+"let val just_h = ($$ \"h\") --| ($$ \"e\")
+    val just_e = ($$ \"h\") |-- ($$ \"e\")
+    val input = (explode \"hello\")
+in
+  (just_h input, just_e input)
+end"
+  "((\"h\", [\"l\", \"l\", \"o\"]), (\"e\", [\"l\", \"l\", \"o\"]))"}
The parser @{ML_open "Scan.optional p x" for p x} returns the result of the parser
@{ML_text "p"}, if it succeeds; otherwise it returns
the default value @{ML_text "x"}. For example
-*}
-ML {*
- let val p = Scan.optional ($$ "h") "x"
- val input1 = (explode "hello")
- val input2 = (explode "world")
- in
- (p input1, p input2)
- end
+@{ML_response [display]
+"let val p = Scan.optional ($$ \"h\") \"x\"
+ val input1 = (explode \"hello\")
+ val input2 = (explode \"world\")
+in
+ (p input1, p input2)
+end"
+ "((\"h\", [\"e\", \"l\", \"l\", \"o\"]), (\"x\", [\"w\", \"o\", \"r\", \"l\", \"d\"]))"}
+
+ The function @{ML "(op !!)"} helps to produce appropriate error messages
+ during parsing.
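+
+  Roughly speaking (a sketch that still needs to be checked), a parser guarded by
+  @{ML "(op !!)"} turns a recoverable failure into a non-recoverable one that
+  carries the message produced by the supplied function, so that @{ML "(op ||)"}
+  does not try any further alternatives; the message itself only becomes visible
+  once the parser is wrapped up with @{ML_text "Scan.error"}, for example as in
+
+  @{ML_text [display] "Scan.error ((!! (fn _ => \"foo\") ($$ \"h\")) || ($$ \"w\")) (explode \"world\")"}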
+
*}
-text {*
- returns @{ML "(\"h\", [\"e\", \"l\", \"l\", \"o\"])"} in the first case and
- in the second case @{ML "(\"x\", [\"w\", \"o\", \"r\", \"l\", \"d\"])"}.
-
-*}
-
-text {* (FIXME: explain @{ML "op !!"}) *}
-
ML {*
val err_fn = (fn _ => "foo");
val p = (!! err_fn ($$ "h")) || ($$ "w");
val input1 = (explode "hello");
val input2 = (explode "world");
-
- p input1;
+*}
+
+ML {*
+
+ (*Scan.error p input2;*)
*}
text {* (FIXME: why does @{ML_text "p input2"} not do anything with foo?) *}
--- a/CookBook/antiquote_setup.ML Tue Oct 14 01:33:55 2008 -0400
+++ b/CookBook/antiquote_setup.ML Fri Oct 17 05:02:04 2008 -0400
@@ -80,12 +80,12 @@
"\\indexml" ^ kind ^ enclose "{" "}" (clean_string txt1) ^
((if ! O.source then str_of_source src else txt')
|> (if ! O.quotes then quote else I)
- |> (if ! O.display then enclose "\\begin{verbatim}\n" "\n\\end{verbatim}"
+ |> (if ! O.display then enclose "\\begin{alltt}\\isastyleminor\n" "\n\\end{alltt}"
else split_lines #> map verbatim #> space_implode "\\isasep\\isanewline%\n"))
end;
fun output_ml ctxt src =
- if ! O.display then enclose "\\begin{verbatim}\n" "\n\\end{verbatim}"
+ if ! O.display then enclose "\\begin{alltt}\\isastyleminor\n" "\n\\end{alltt}"
else
split_lines
#> map (space_implode "\\verb,|," o map (enclose "\\verb|" "|") o space_explode "|")
--- a/CookBook/antiquote_setup_plus.ML Tue Oct 14 01:33:55 2008 -0400
+++ b/CookBook/antiquote_setup_plus.ML Fri Oct 17 05:02:04 2008 -0400
@@ -16,16 +16,24 @@
| ml_val_open (ys, xs) txt =
ml_val ys ("let open " ^ space_implode " " xs ^ " in " ^ txt ^ " end");
+fun ml_pat (rhs, pat) =
+ let val pat' = implode (map (fn "\\<dots>" => "_" | s => s)
+ (Symbol.explode pat))
+ in
+ "val " ^ pat' ^ " = " ^ rhs
+ end;
+
+(* modified from original *)
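+(* ml_val only type-checks the text, ml_struct wraps it as a structure inside a
+   dummy functor, and ml_type checks it as a type *)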
+fun ml_val txt = "fn _ => (" ^ txt ^ ");";
+fun ml_struct txt = "functor DUMMY_FUNCTOR() = struct structure DUMMY = " ^ txt ^ " end;";
+fun ml_type txt = "val _ = NONE : (" ^ txt ^ ") option;";
+
fun output_verbatim f src ctxt x =
let val txt = f ctxt x
in
(if ! ThyOutput.source then str_of_source src else txt)
|> (if ! ThyOutput.quotes then quote else I)
- |> (if ! ThyOutput.display then enclose "\\begin{alltt}\n" "\n\\end{alltt}"
- else
- split_lines
- #> map (space_implode "\\verb,|," o map (enclose "\\verb|" "|") o space_explode "|")
- #> space_implode "\\isasep\\\\%\n")
+ |> ThyOutput.output (fn _ => fn s => Pretty.str s) src ctxt
end;
fun output_ml ml = output_verbatim
@@ -34,32 +42,20 @@
txt |> Source.of_string |> ML_Lex.source |> Source.exhaust |>
Chunks.transform_cmts |> implode));
-fun transform_char "\\" = "{\\isacharbackslash}"
- | transform_char "$" = "{\\$}"
- | transform_char "{" = "{\\isacharbraceleft}"
- | transform_char "}" = "{\\isacharbraceright}"
- | transform_char x = x;
-
-fun transform_symbol x =
- if Symbol.is_char x then transform_char x else Latex.output_symbols [x];
-
-fun transform_str s = implode (map transform_symbol (Symbol.explode s));
-
-fun ml_pat (rhs, pat) =
- let val pat' = implode (map (fn "\\<dots>" => "_" | s => s)
- (Symbol.explode pat))
- in
- "val " ^ pat' ^ " = " ^ rhs
- end;
-
fun output_ml_response ml = output_verbatim
(fn ctxt => fn ((x as (rhs, pat), p), pos) =>
(ML_Context.eval_in (SOME ctxt) false pos (ml p x);
- transform_str rhs ^ "\n\n" ^
- space_implode "\n" (map (enclose "\\textit{" "}")
- (space_explode "\n" (transform_str pat)))));
+ rhs ^ "\n" ^
+ space_implode "\n" (map (fn s => "> " ^ s)
+ (space_explode "\n" pat))));
-fun check_exists ctxt name =
+fun output_ml_checked ml src ctxt (txt, pos) =
+ (ML_Context.eval_in (SOME ctxt) false pos (ml txt);
+ (if ! ThyOutput.source then str_of_source src else SymbolPos.content (SymbolPos.explode (txt, pos)))
+ |> (if ! ThyOutput.quotes then quote else I)
+ |> ThyOutput.output (fn _ => fn s => Pretty.str s) src ctxt)
+
+fun check_file_exists ctxt name =
if File.exists (Path.append (Path.explode ("~~/src")) (Path.explode name)) then ()
else error ("Source file " ^ quote name ^ " does not exist.")
@@ -68,9 +64,13 @@
(Scan.optional (Args.$$$ "for" |-- OuterParse.!!! (Scan.repeat1 Args.name)) [] --
Scan.optional (Args.parens (OuterParse.list1 Args.name)) [])))) (output_ml ml_val_open)),
("ML_response", ThyOutput.args (Scan.lift (OuterParse.position
- ((Args.name -- Args.name) >> rpair ())))
- (output_ml_response (K ml_pat))),
+ ((Args.name -- Args.name) >> rpair ()))) (output_ml_response (K ml_pat))),
("ML_file", ThyOutput.args (Scan.lift Args.name)
- (ThyOutput.output (fn _ => fn name => (check_exists name; Pretty.str name))))];
-
+ (ThyOutput.output (fn _ => fn name => (check_file_exists name; Pretty.str name)))),
+ ("ML_text", ThyOutput.args (Scan.lift Args.name)
+ (ThyOutput.output (fn _ => fn s => Pretty.str s))),
+ ("ML", ThyOutput.args (Scan.lift Args.name_source_position) (output_ml_checked ml_val)),
+ ("ML_struct", ThyOutput.args (Scan.lift Args.name_source_position) (output_ml_checked ml_struct)),
+ ("ML_type", ThyOutput.args (Scan.lift Args.name_source_position) (output_ml_checked ml_type))];
+
end;
--- a/CookBook/document/root.tex Tue Oct 14 01:33:55 2008 -0400
+++ b/CookBook/document/root.tex Fri Oct 17 05:02:04 2008 -0400
@@ -21,8 +21,9 @@
\urlstyle{rm}
\renewcommand{\isastyletxt}{\isastyletext}% use same formatting for txt and text
-\renewcommand{\isastyle}{\isastyleminor}% use same formatting for txt and text
-\isadroptag{theory}
+\renewcommand{\isastyleminor}{\tt\slshape}%
+\renewcommand{\isastyle}{\small\tt\slshape}%
+%%\isadroptag{theory}
% sane default for proof documents
\parindent 0pt\parskip 0.6ex
@@ -34,6 +35,13 @@
\newenvironment{readmore}%
{\hspace{-3pt}\readmoremarginpar{\fbox{\textbf{Read More}}}\it}{}
+% to work around a problem with \isanewline
+\renewcommand{\isanewline}{\parindent0pt\parskip0pt\mbox{}\par\mbox{}}
+
+\renewenvironment{isabelle}
+{\begin{trivlist}\begin{isabellebody}\small\item\relax}
+{\end{isabellebody}\end{trivlist}}
+
% for exercises and comments
\newtheorem{exercise}{Exercise}[section]
\newcommand{\solution}[1]{{\bf Solution for Exercise~\ref{#1}.}}
Binary file cookbook.pdf has changed