diff -r 6e2479089226 -r cecd7a941885 ProgTutorial/Parsing.thy --- a/ProgTutorial/Parsing.thy Tue May 14 16:59:53 2019 +0200 +++ b/ProgTutorial/Parsing.thy Tue May 14 17:10:47 2019 +0200 @@ -4,10 +4,10 @@ "foobar_goal" "foobar_prove" :: thy_goal begin -chapter {* Parsing\label{chp:parsing} *} +chapter \Parsing\label{chp:parsing}\ -text {* +text \ \begin{flushright} {\em An important principle underlying the success and popularity of Unix\\ is the philosophy of building on the work of others.} \\[1ex] @@ -47,11 +47,11 @@ are defined in @{ML_file "Pure/Isar/args.ML"}. \end{readmore} -*} +\ -section {* Building Generic Parsers *} +section \Building Generic Parsers\ -text {* +text \ Let us first have a look at parsing strings using generic parsing combinators. The function @{ML_ind "$$" in Scan} takes a string as argument and will @@ -66,24 +66,24 @@ "($$ \"w\") (Symbol.explode \"world\")" "(\"w\", [\"o\", \"r\", \"l\", \"d\"])"} The function @{ML "$$"} will either succeed (as in the two examples above) - or raise the exception @{text "FAIL"} if no string can be consumed. For + or raise the exception \FAIL\ if no string can be consumed. For example trying to parse @{ML_response_fake [display,gray] "($$ \"x\") (Symbol.explode \"world\")" "Exception FAIL raised"} - will raise the exception @{text "FAIL"}. The function @{ML_ind "$$" in Scan} will also + will raise the exception \FAIL\. The function @{ML_ind "$$" in Scan} will also fail if you try to consume more than a single character. There are three exceptions that are raised by the parsing combinators: \begin{itemize} - \item @{text "FAIL"} indicates that alternative routes of parsing + \item \FAIL\ indicates that alternative routes of parsing might be explored. - \item @{text "MORE"} indicates that there is not enough input for the parser. For example - in @{text "($$ \"h\") []"}. - \item @{text "ABORT"} indicates that a dead end is reached. + \item \MORE\ indicates that there is not enough input for the parser. For example + in \($$ "h") []\. + \item \ABORT\ indicates that a dead end is reached. It is used for example in the function @{ML "!!"} (see below). \end{itemize} @@ -94,7 +94,7 @@ structure @{ML_struct Symbol}, instead of the more standard library function @{ML_ind explode in String}, for obtaining an input list for the parser. The reason is that @{ML explode in Symbol} is aware of character - sequences, for example @{text "\"}, that have a special meaning in + sequences, for example \\\, that have a special meaning in Isabelle. To see the difference consider @{ML_response_fake [display,gray] @@ -124,7 +124,7 @@ "((\"h\", [\"e\", \"l\", \"l\", \"o\"]),(\"w\", [\"o\", \"r\", \"l\", \"d\"]))"} Two parsers can be connected in sequence by using the function @{ML_ind "--" in Scan}. - For example parsing @{text "h"}, @{text "e"} and @{text "l"} (in this + For example parsing \h\, \e\ and \l\ (in this order) you can achieve by: @{ML_response [display,gray] @@ -142,8 +142,8 @@ Parsers that explore alternatives can be constructed using the function @{ML_ind "||" in Scan}. The parser @{ML "(p || q)" for p q} returns the - result of @{text "p"}, in case it succeeds; otherwise it returns the - result of @{text "q"}. For example: + result of \p\, in case it succeeds; otherwise it returns the + result of \q\. 
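  (To see how such an alternative combinator can be implemented at all: in this
  exception-based style a failing parser raises \FAIL\ and the alternative simply
  catches the exception. A plain Standard ML sketch of the idea, with invented names
  and not Isabelle's actual @{ML_struct Scan} code, is
  \exception Failed  fun alt p q input = p input handle Failed => q input\.)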
For example: @{ML_response [display,gray] @@ -173,8 +173,8 @@ "((\"e\", [\"l\", \"l\", \"o\"]), (\"h\", [\"l\", \"l\", \"o\"]))"} The parser @{ML "Scan.optional p x" for p x} returns the result of the parser - @{text "p"}, in case it succeeds; otherwise it returns - the default value @{text "x"}. For example: + \p\, in case it succeeds; otherwise it returns + the default value \x\. For example: @{ML_response [display,gray] "let @@ -187,7 +187,7 @@ "((\"h\", [\"e\", \"l\", \"l\", \"o\"]), (\"x\", [\"w\", \"o\", \"r\", \"l\", \"d\"]))"} The function @{ML_ind option in Scan} works similarly, except no default value can - be given. Instead, the result is wrapped as an @{text "option"}-type. For example: + be given. Instead, the result is wrapped as an \option\-type. For example: @{ML_response [display,gray] "let @@ -206,21 +206,21 @@ "(\"foo\", [\"f\", \"o\", \"o\"])"} The function @{ML_ind "!!" in Scan} helps with producing appropriate error messages - during parsing. For example if you want to parse @{text p} immediately - followed by @{text q}, or start a completely different parser @{text r}, + during parsing. For example if you want to parse \p\ immediately + followed by \q\, or start a completely different parser \r\, you might write: @{ML [display,gray] "(p -- q) || r" for p q r} However, this parser is problematic for producing a useful error message, if the parsing of @{ML "(p -- q)" for p q} fails. Because with the - parser above you lose the information that @{text p} should be followed by @{text q}. - To see this assume that @{text p} is present in the input, but it is not - followed by @{text q}. That means @{ML "(p -- q)" for p q} will fail and - hence the alternative parser @{text r} will be tried. However, in many - circumstances this will be the wrong parser for the input ``@{text "p"}-followed-by-something'' + parser above you lose the information that \p\ should be followed by \q\. + To see this assume that \p\ is present in the input, but it is not + followed by \q\. That means @{ML "(p -- q)" for p q} will fail and + hence the alternative parser \r\ will be tried. However, in many + circumstances this will be the wrong parser for the input ``\p\-followed-by-something'' and therefore will also fail. The error message is then caused by the failure - of @{text r}, not by the absence of @{text q} in the input. Such + of \r\, not by the absence of \q\ in the input. Such situation can be avoided when using the function @{ML "!!"}. This function aborts the whole process of parsing in case of a failure and prints an error message. For example if you invoke the parser @@ -239,7 +239,7 @@ @{ML_response_fake [display,gray] "(!! (fn _ => fn _ => \"foo\") ($$ \"h\")) (Symbol.explode \"world\")" "Exception ABORT raised"} - then the parsing aborts and the error message @{text "foo"} is printed. In order to + then the parsing aborts and the error message \foo\ is printed. In order to see the error message properly, you need to prefix the parser with the function @{ML_ind error in Scan}. For example: @@ -253,18 +253,18 @@ Let us now return to our example of parsing @{ML "(p -- q) || r" for p q r}. If you want to generate the correct error message for failure - of parsing @{text "p"}-followed-by-@{text "q"}, then you have to write: -*} + of parsing \p\-followed-by-\q\, then you have to write: +\ -ML %grayML{*fun p_followed_by_q p q r = +ML %grayML\fun p_followed_by_q p q r = let val err_msg = fn _ => p ^ " is not followed by " ^ q in ($$ p -- (!! 
(fn _ => err_msg) ($$ q))) || ($$ r -- $$ r) -end *} +end\ -text {* +text \ Running this parser with the arguments @{text [quotes] "h"}, @{text [quotes] "e"} and @{text [quotes] "w"}, and the input @{text [quotes] "holle"} @@ -279,17 +279,17 @@ yields the expected parsing. - The function @{ML "Scan.repeat p" for p} will apply a parser @{text p} as + The function @{ML "Scan.repeat p" for p} will apply a parser \p\ as long as it succeeds. For example: @{ML_response [display,gray] "Scan.repeat ($$ \"h\") (Symbol.explode \"hhhhello\")" "([\"h\", \"h\", \"h\", \"h\"], [\"e\", \"l\", \"l\", \"o\"])"} Note that @{ML_ind repeat in Scan} stores the parsed items in a list. The function - @{ML_ind repeat1 in Scan} is similar, but requires that the parser @{text "p"} + @{ML_ind repeat1 in Scan} is similar, but requires that the parser \p\ succeeds at least once. - Also note that the parser would have aborted with the exception @{text MORE}, if + Also note that the parser would have aborted with the exception \MORE\, if you had it run with the string @{text [quotes] "hhhh"}. This can be avoided by using the wrapper @{ML_ind finite in Scan} and the ``stopper-token'' @{ML_ind stopper in Symbol}. With them you can write: @@ -349,8 +349,8 @@ After parsing is done, you almost always want to apply a function to the parsed items. One way to do this is the function @{ML_ind ">>" in Scan} where @{ML "(p >> f)" for p f} runs - first the parser @{text p} and upon successful completion applies the - function @{text f} to the result. For example + first the parser \p\ and upon successful completion applies the + function \f\ to the result. For example @{ML_response [display,gray] "let @@ -383,42 +383,42 @@ "Scan.lift ($$ \"h\" -- $$ \"e\") (1, Symbol.explode \"hello\")" "((\"h\", \"e\"), (1, [\"l\", \"l\", \"o\"]))"} - \footnote{\bf FIXME: In which situations is @{text "lift"} useful? Give examples.} + \footnote{\bf FIXME: In which situations is \lift\ useful? Give examples.} Be aware of recursive parsers. Suppose you want to read strings separated by commas and by parentheses into a tree datastructure; for example, generating the tree corresponding to the string @{text [quotes] "(A, A), (A, A)"} where - the @{text "A"}s will be the leaves. We assume the trees are represented by the + the \A\s will be the leaves. We assume the trees are represented by the datatype: -*} +\ -ML %grayML{*datatype tree = +ML %grayML\datatype tree = Lf of string - | Br of tree * tree*} + | Br of tree * tree\ -text {* +text \ Since nested parentheses should be treated in a meaningful way---for example the string @{text [quotes] "((A))"} should be read into a single leaf---you might implement the following parser. -*} +\ -ML %grayML{*fun parse_basic s = +ML %grayML\fun parse_basic s = $$ s >> Lf || $$ "(" |-- parse_tree s --| $$ ")" and parse_tree s = - parse_basic s --| $$ "," -- parse_tree s >> Br || parse_basic s*} + parse_basic s --| $$ "," -- parse_tree s >> Br || parse_basic s\ -text {* +text \ This parser corresponds to the grammar: \begin{center} \begin{tabular}{lcl} - @{text ""} & @{text "::="} & @{text " | ()"}\\ - @{text ""} & @{text "::="} & @{text ", | "}\\ + \\ & \::=\ & \ | ()\\\ + \\ & \::=\ & \, | \\\ \end{tabular} \end{center} - The parameter @{text "s"} is the string over which the tree is parsed. The + The parameter \s\ is the string over which the tree is parsed. The parser @{ML parse_basic} reads either a leaf or a tree enclosed in parentheses. 
The parser @{ML parse_tree} reads either a pair of trees separated by a comma, or acts like @{ML parse_basic}. Unfortunately, @@ -435,18 +435,18 @@ parser until an input is given. This can be done by adding the parsed string as an explicit argument. So the parser above should be implemented as follows. -*} +\ -ML %grayML{*fun parse_basic s xs = +ML %grayML\fun parse_basic s xs = ($$ s >> Lf || $$ "(" |-- parse_tree s --| $$ ")") xs and parse_tree s xs = - (parse_basic s --| $$ "," -- parse_tree s >> Br || parse_basic s) xs*} + (parse_basic s --| $$ "," -- parse_tree s >> Br || parse_basic s) xs\ -text {* +text \ While the type of the parser is unchanged by the addition, its behaviour changed: with this version of the parser the execution is delayed until - some string is applied for the argument @{text "xs"}. This gives us + some string is applied for the argument \xs\. This gives us exactly the parser what we wanted. An example is as follows: @{ML_response [display, gray] @@ -460,24 +460,24 @@ \begin{exercise}\label{ex:scancmts} Write a parser that parses an input string so that any comment enclosed - within @{text "(*\*)"} is replaced by the same comment but enclosed within - @{text "(**\**)"} in the output string. To enclose a string, you can use the + within \(*\*)\ is replaced by the same comment but enclosed within + \(**\**)\ in the output string. To enclose a string, you can use the function @{ML "enclose s1 s2 s" for s1 s2 s} which produces the string @{ML "s1 ^ s ^ s2" for s1 s2 s}. Hint: To simplify the task ignore the proper nesting of comments. \end{exercise} -*} +\ -section {* Parsing Theory Syntax *} +section \Parsing Theory Syntax\ -text {* +text \ Most of the time, however, Isabelle developers have to deal with parsing tokens, not strings. These token parsers have the type: -*} +\ -ML %grayML{*type 'a parser = Token.T list -> 'a * Token.T list*} +ML %grayML\type 'a parser = Token.T list -> 'a * Token.T list\ ML "Thy_Header.get_keywords'" -text {* +text \ {\bf REDO!!} @@ -514,14 +514,14 @@ We can easily change what is recognised as a keyword with the function @{ML_ind add_keywords in Thy_Header}. For example calling it with -*} +\ -setup {* Thy_Header.add_keywords [(("hello", Position.none),Keyword.no_spec)] *} +setup \Thy_Header.add_keywords [(("hello", Position.none),Keyword.no_spec)]\ -text {* +text \ then lexing @{text [quotes] "hello world"} will produce @{ML_response_fake [display,gray] "Token.explode @@ -544,13 +544,13 @@ "[Token (\,(Ident, \"hello\"), \), Token (\,(Ident, \"world\"), \)]"} For convenience we define the function: -*} +\ -ML %grayML{*fun filtered_input str = - filter Token.is_proper (Token.explode (Thy_Header.get_keywords' @{context}) Position.none str) *} +ML %grayML\fun filtered_input str = + filter Token.is_proper (Token.explode (Thy_Header.get_keywords' @{context}) Position.none str)\ ML \filtered_input "inductive | for"\ -text {* +text \ If you now parse @{ML_response_fake [display,gray] @@ -563,7 +563,7 @@ If you want to see which keywords and commands are currently known to Isabelle, use the function @{ML_ind get_keywords' in Thy_Header}: - You might have to adjust the @{text ML_print_depth} in order to + You might have to adjust the \ML_print_depth\ in order to see the complete list. The parser @{ML_ind "$$$" in Parse} parses a single keyword. 
For example: @@ -600,8 +600,8 @@ "((\"|\", \"in\"), [])"} The parser @{ML "Parse.enum s p" for s p} parses a possibly empty - list of items recognised by the parser @{text p}, where the items being parsed - are separated by the string @{text s}. For example: + list of items recognised by the parser \p\, where the items being parsed + are separated by the string \s\. For example: @{ML_response [display,gray] "let @@ -614,8 +614,7 @@ The function @{ML_ind enum1 in Parse} works similarly, except that the parsed list must be non-empty. Note that we had to add a string @{text [quotes] "foo"} at the end of the parsed string, otherwise the parser would - have consumed all tokens and then failed with the exception @{text - "MORE"}. Like in the previous section, we can avoid this exception using the + have consumed all tokens and then failed with the exception \MORE\. Like in the previous section, we can avoid this exception using the wrapper @{ML Scan.finite}. This time, however, we have to use the ``stopper-token'' @{ML Token.stopper}. We can write: @@ -629,11 +628,11 @@ "([\"in\", \"in\", \"in\"], [])"} The following function will help to run examples. -*} +\ -ML %grayML{*fun parse p input = Scan.finite Token.stopper (Scan.error p) input *} +ML %grayML\fun parse p input = Scan.finite Token.stopper (Scan.error p) input\ -text {* +text \ The function @{ML_ind "!!!" in Parse} can be used to force termination of the parser in case of a dead end, just like @{ML "Scan.!!"} (see previous section). A difference, however, is that the error message of @{ML @@ -664,12 +663,12 @@ occurred. For this Isabelle can attach positional information to tokens and then thread this information up the ``processing chain''. To see this, modify the function @{ML filtered_input}, described earlier, as follows -*} +\ -ML %grayML{*fun filtered_input' str = - filter Token.is_proper (Token.explode (Thy_Header.get_keywords' @{context}) (Position.line 7) str) *} +ML %grayML\fun filtered_input' str = + filter Token.is_proper (Token.explode (Thy_Header.get_keywords' @{context}) (Position.line 7) str)\ -text {* +text \ where we pretend the parsed string starts on line 7. An example is @{ML_response_fake [display,gray] @@ -704,27 +703,27 @@ \begin{center} \begin{tabular}{lcl} - @{text ""} & @{text "::="} & @{text " | ()"}\\ - @{text ""} & @{text "::="} & @{text " * | "}\\ - @{text ""} & @{text "::="} & @{text " + | "}\\ + \\ & \::=\ & \ | ()\\\ + \\ & \::=\ & \ * | \\\ + \\ & \::=\ & \ + | \\\ \end{tabular} \end{center} Hint: Be careful with recursive parsers. \end{exercise} -*} +\ -section {* Parsers for ML-Code (TBD) *} +section \Parsers for ML-Code (TBD)\ -text {* +text \ @{ML_ind ML_source in Parse} -*} +\ -section {* Context Parser (TBD) *} +section \Context Parser (TBD)\ -text {* +text \ @{ML_ind Args.context} -*} +\ (* ML {* let @@ -739,17 +738,17 @@ *} *) -text {* +text \ @{ML_ind Args.context} Used for example in \isacommand{attribute\_setup} and \isacommand{method\_setup}. -*} +\ -section {* Argument and Attribute Parsers (TBD) *} +section \Argument and Attribute Parsers (TBD)\ -section {* Parsing Inner Syntax *} +section \Parsing Inner Syntax\ -text {* +text \ There is usually no need to write your own parser for parsing inner syntax, that is for terms and types: you can just call the predefined parsers. Terms can be parsed using the function @{ML_ind term in Parse}. 
For example: @@ -792,17 +791,17 @@ @{ML_ind read_term in Syntax} @{ML_ind read_term in Syntax} -*} +\ -section {* Parsing Specifications\label{sec:parsingspecs} *} +section \Parsing Specifications\label{sec:parsingspecs}\ -text {* +text \ There are a number of special purpose parsers that help with parsing specifications of function definitions, inductive predicates and so on. In Chapter~\ref{chp:package}, for example, we will need to parse specifications for inductive predicates of the form: -*} +\ simple_inductive @@ -812,19 +811,19 @@ | evenS: "odd n \ even (Suc n)" | oddS: "even n \ odd (Suc n)" -text {* +text \ For this we are going to use the parser: -*} +\ -ML %linenosgray{*val spec_parser = +ML %linenosgray\val spec_parser = Parse.vars -- Scan.optional (Parse.$$$ "where" |-- Parse.!!! (Parse.enum1 "|" - (Parse_Spec.opt_thm_name ":" -- Parse.prop))) []*} + (Parse_Spec.opt_thm_name ":" -- Parse.prop))) []\ -text {* +text \ Note that the parser must not parse the keyword \simpleinductive, even if it is meant to process definitions as shown above. The parser of the keyword will be given by the infrastructure that will eventually call @{ML spec_parser}. @@ -848,7 +847,7 @@ [((even0,\),\), ((evenS,\),\), ((oddS,\),\)]), [])"} -*} +\ ML \let val input = filtered_input @@ -857,7 +856,7 @@ parse Parse.vars input end\ -text {* +text \ As you see, the result is a pair consisting of a list of variables with optional type-annotation and syntax-annotation, and a list of rules where every rule has optionally a name and an attribute. @@ -866,7 +865,7 @@ \isacommand{and}-separated list of variables that can include optional type annotations and syntax translations. For example:\footnote{Note that in the code we need to write - @{text "\\\"int \ bool\\\""} in order to properly escape the double quotes + \\"int \ bool\"\ in order to properly escape the double quotes in the compound type.} @{ML_response_fake [display,gray] @@ -879,9 +878,9 @@ "([(foo, SOME \, NoSyn), (bar, SOME \, Mixfix (Source {text=\"BAR\",...}, [], 100, \)), (blonk, NONE, NoSyn)],[])"} -*} +\ -text {* +text \ Whenever types are given, they are stored in the @{ML SOME}s. The types are not yet used to type the variables: this must be done by type-inference later on. Since types are part of the inner syntax they are strings with some @@ -916,9 +915,9 @@ Attributes and arguments are implemented in the files @{ML_file "Pure/Isar/attrib.ML"} and @{ML_file "Pure/Isar/args.ML"}. \end{readmore} -*} +\ -text_raw {* +text_raw \ \begin{exercise} Have a look at how the parser @{ML Parse_Spec.where_multi_specs} is implemented in file @{ML_file "Pure/Isar/parse_spec.ML"}. This parser corresponds @@ -926,8 +925,8 @@ we paraphrase the code of @{ML_ind where_multi_specs in Parse_Spec} adapted to our purposes. \begin{isabelle} -*} -ML %linenosgray{*val spec_parser' = +\ +ML %linenosgray\val spec_parser' = Parse.vars -- Scan.optional (Parse.$$$ "where" |-- @@ -936,8 +935,8 @@ ((Parse_Spec.opt_thm_name ":" -- Parse.prop) --| Scan.option (Scan.ahead (Parse.name || Parse.$$$ "[") -- - Parse.!!! (Parse.$$$ "|"))))) [] *} -text_raw {* + Parse.!!! (Parse.$$$ "|"))))) []\ +text_raw \ \end{isabelle} Both parsers accept the same input% that's not true: % spec_parser accepts input that is refuted by spec_parser' @@ -945,16 +944,16 @@ an additional ``tail'' (Lines 8 to 10) in @{ML spec_parser'}. What is the purpose of this additional ``tail''? 
\end{exercise} -*} +\ -text {* +text \ (FIXME: @{ML Parse.type_args}, @{ML Parse.typ}, @{ML Parse.opt_mixfix}) -*} +\ -section {* New Commands\label{sec:newcommand} *} +section \New Commands\label{sec:newcommand}\ -text {* +text \ Often new commands, for example for providing new definitional principles, need to be implemented. While this is not difficult on the ML-level and for jEdit, in order to be backwards compatible, new commands need also to be recognised @@ -969,39 +968,39 @@ we need to write something like \begin{graybox} - \isacommand{theory}~@{text Foo}\\ - \isacommand{imports}~@{text Main}\\ - \isacommand{keywords} @{text [quotes] "foobar"} @{text "::"} @{text "thy_decl"}\\ + \isacommand{theory}~\Foo\\\ + \isacommand{imports}~\Main\\\ + \isacommand{keywords} @{text [quotes] "foobar"} \::\ \thy_decl\\\ ... \end{graybox} where @{ML_ind "thy_decl" in Keyword} indicates the kind of the - command. Another possible kind is @{text "thy_goal"}, or you can + command. Another possible kind is \thy_goal\, or you can also omit the kind entirely, in which case you declare a keyword (something that is part of a command). Now you can implement \isacommand{foobar} as follows. -*} +\ -ML %grayML{*let +ML %grayML\let val do_nothing = Scan.succeed (Local_Theory.background_theory I) in Outer_Syntax.local_theory @{command_keyword "foobar"} "description of foobar" do_nothing -end *} +end\ -text {* +text \ The crucial function @{ML_ind local_theory in Outer_Syntax} expects the name for the command, a kind indicator, a short description and a parser producing a local theory transition (explained later). For the - name and the kind, you can use the ML-antiquotation @{text "@{command_spec ...}"}. + name and the kind, you can use the ML-antiquotation \@{command_spec ...}\. You can now write in your theory -*} +\ foobar -text {* +text \ but of course you will not see anything since \isacommand{foobar} is not intended to do anything. Remember, however, that this only works in jEdit. In order to enable also Proof-General recognise this @@ -1013,7 +1012,7 @@ the command \isacommand{foobar\_trace} in the theory header as \begin{graybox} - \isacommand{keywords} @{text [quotes] "foobar_trace"} @{text "::"} @{text "thy_decl"} + \isacommand{keywords} @{text [quotes] "foobar_trace"} \::\ \thy_decl\ \end{graybox} The crucial part of a command is the function that determines the @@ -1023,27 +1022,27 @@ function @{ML "Local_Theory.background_theory I"}. We can replace this code by a function that first parses a proposition (using the parser @{ML Parse.prop}), then prints out some tracing information - (using the function @{text trace_prop}) and finally does + (using the function \trace_prop\) and finally does nothing. For this you can write: -*} +\ -ML %grayML{*let +ML %grayML\let fun trace_prop str = Local_Theory.background_theory (fn ctxt => (tracing str; ctxt)) in Outer_Syntax.local_theory @{command_keyword "foobar_trace"} "traces a proposition" (Parse.prop >> trace_prop) -end *} +end\ -text {* +text \ This command can now be used to see the proposition in the tracing buffer. -*} +\ foobar_trace "True \ False" -text {* +text \ Note that so far we used @{ML_ind thy_decl in Keyword} as the kind indicator for the new command. This means that the command finishes as soon as the arguments are processed. Examples of this kind of commands are @@ -1057,16 +1056,16 @@ Below we show the command \isacommand{foobar\_goal} which takes a proposition as argument and then starts a proof in order to prove it. 
Therefore, we need to announce this command in the header - as @{text "thy_goal"}. + as \thy_goal\. \begin{graybox} - \isacommand{keywords} @{text [quotes] "foobar_goal"} @{text "::"} @{text "thy_goal"} + \isacommand{keywords} @{text [quotes] "foobar_goal"} \::\ \thy_goal\ \end{graybox} Then we can write: -*} +\ -ML%linenosgray{*let +ML%linenosgray\let fun goal_prop str ctxt = let val prop = Syntax.read_prop ctxt str @@ -1077,10 +1076,10 @@ Outer_Syntax.local_theory_to_proof @{command_keyword "foobar_goal"} "proves a proposition" (Parse.prop >> goal_prop) -end *} +end\ -text {* - The function @{text goal_prop} in Lines 2 to 7 takes a string (the proposition to be +text \ + The function \goal_prop\ in Lines 2 to 7 takes a string (the proposition to be proved) and a context as argument. The context is necessary in order to be able to use @{ML_ind read_prop in Syntax}, which converts a string into a proper proposition. In Line 6 the function @{ML_ind theorem in Proof} starts the proof for the @@ -1091,21 +1090,21 @@ If you now type \isacommand{foobar\_goal}~@{text [quotes] "True \ True"}, you obtain the following proof state: -*} +\ foobar_goal "True \ True" -txt {* +txt \ \begin{minipage}{\textwidth} @{subgoals [display]} \end{minipage}\medskip and can prove the proposition as follows. -*} +\ apply(rule conjI) apply(rule TrueI)+ done -text {* +text \ The last command we describe here is \isacommand{foobar\_proof}. Like \isacommand{foobar\_goal}, its purpose is to take a proposition and open a corresponding proof-state that @@ -1122,21 +1121,21 @@ @{ML_ind "ML_source" in Parse} in the structure @{ML_struct Parse}. For running the ML-interpreter we need the following scaffolding code. -*} +\ -ML %grayML{* +ML %grayML\ structure Result = Proof_Data (type T = unit -> term fun init thy () = error "Result") -val result_cookie = (Result.get, Result.put, "Result.put") *} +val result_cookie = (Result.get, Result.put, "Result.put")\ -text {* +text \ With this in place, we can implement the code for \isacommand{foobar\_prove} as follows. -*} +\ -ML %linenosgray{*let +ML %linenosgray\let fun after_qed thm_name thms lthy = Local_Theory.note (thm_name, (flat thms)) lthy |> snd @@ -1153,39 +1152,39 @@ Outer_Syntax.local_theory_to_proof @{command_keyword "foobar_prove"} "proving a proposition" (parser >> setup_proof) -end*} +end\ -text {* +text \ In Line 12, we implement a parser that first reads in an optional lemma name (terminated by ``:'') and then some ML-code. The function in Lines 5 to 10 takes the ML-text and lets the ML-runtime evaluate it using the function @{ML_ind value in Code_Runtime} in the structure @{ML_struct Code_Runtime}. Once the ML-text has been turned into a term, the function @{ML theorem in Proof} opens a corresponding proof-state. This function takes the - function @{text "after_qed"} as argument, whose purpose is to store the theorem - (once it is proven) under the given name @{text "thm_name"}. + function \after_qed\ as argument, whose purpose is to store the theorem + (once it is proven) under the given name \thm_name\. You can now define a term, for example -*} +\ -ML %grayML{*val prop_true = @{prop "True"}*} +ML %grayML\val prop_true = @{prop "True"}\ -text {* +text \ and give it a proof using \isacommand{foobar\_prove}: -*} +\ foobar_prove test: prop_true apply(rule TrueI) done -text {* +text \ Finally you can test whether the lemma has been stored under the given name. 
\begin{isabelle} - \isacommand{thm}~@{text "test"}\\ - @{text "> "}~@{thm TrueI} + \isacommand{thm}~\test\\\ + \> \~@{thm TrueI} \end{isabelle} -*} +\ (* @@ -1324,9 +1323,9 @@ *} *) -section {* Methods (TBD) *} +section \Methods (TBD)\ -text {* +text \ (FIXME: maybe move to after the tactic section) Methods are central to Isabelle. You use them, for example, @@ -1335,65 +1334,65 @@ \begin{isabelle} \isacommand{print\_methods}\\ - @{text "> methods:"}\\ - @{text "> -: do nothing (insert current facts only)"}\\ - @{text "> HOL.default: apply some intro/elim rule (potentially classical)"}\\ - @{text "> ..."} + \> methods:\\\ + \> -: do nothing (insert current facts only)\\\ + \> HOL.default: apply some intro/elim rule (potentially classical)\\\ + \> ...\ \end{isabelle} An example of a very simple method is: -*} +\ method_setup %gray foo = - {* Scan.succeed - (fn ctxt => (SIMPLE_METHOD ((eresolve_tac ctxt [@{thm conjE}] THEN' resolve_tac ctxt [@{thm conjI}]) 1))) *} + \Scan.succeed + (fn ctxt => (SIMPLE_METHOD ((eresolve_tac ctxt [@{thm conjE}] THEN' resolve_tac ctxt [@{thm conjI}]) 1)))\ "foo method for conjE and conjI" -text {* - It defines the method @{text foo}, which takes no arguments (therefore the +text \ + It defines the method \foo\, which takes no arguments (therefore the parser @{ML Scan.succeed}) and only applies a single tactic, namely the tactic which applies @{thm [source] conjE} and then @{thm [source] conjI}. The function @{ML_ind SIMPLE_METHOD in Method} - turns such a tactic into a method. The method @{text "foo"} can be used as follows -*} + turns such a tactic into a method. The method \foo\ can be used as follows +\ lemma shows "A \ B \ C \ D" apply(foo) -txt {* +txt \ where it results in the goal state \begin{minipage}{\textwidth} @{subgoals} - \end{minipage} *} + \end{minipage}\ (*<*)oops(*>*) -method_setup test = {* - Scan.lift (Scan.succeed (K Method.succeed)) *} {* bla *} +method_setup test = \ + Scan.lift (Scan.succeed (K Method.succeed))\ \bla\ lemma "True" apply(test) oops -method_setup joker = {* - Scan.lift (Scan.succeed (fn ctxt => Method.cheating true)) *} {* bla *} +method_setup joker = \ + Scan.lift (Scan.succeed (fn ctxt => Method.cheating true))\ \bla\ lemma "False" apply(joker) oops -text {* if true is set then always works *} +text \if true is set then always works\ -ML {* assume_tac *} +ML \assume_tac\ -method_setup first_atac = {* Scan.lift (Scan.succeed (fn ctxt => (SIMPLE_METHOD (assume_tac ctxt 1)))) *} {* bla *} +method_setup first_atac = \Scan.lift (Scan.succeed (fn ctxt => (SIMPLE_METHOD (assume_tac ctxt 1))))\ \bla\ -ML {* HEADGOAL *} +ML \HEADGOAL\ lemma "A \ A" apply(first_atac) oops -method_setup my_atac = {* Scan.lift (Scan.succeed (fn ctxt => (SIMPLE_METHOD' (assume_tac ctxt)))) *} {* bla *} +method_setup my_atac = \Scan.lift (Scan.succeed (fn ctxt => (SIMPLE_METHOD' (assume_tac ctxt))))\ \bla\ lemma "A \ A" apply(my_atac) @@ -1412,11 +1411,11 @@ ML {* Scan.succeed *} *) -ML {* resolve_tac *} +ML \resolve_tac\ method_setup myrule = - {* Scan.lift (Scan.succeed (fn ctxt => (METHOD (fn thms => resolve_tac ctxt thms 1)))) *} - {* bla *} + \Scan.lift (Scan.succeed (fn ctxt => (METHOD (fn thms => resolve_tac ctxt thms 1))))\ + \bla\ lemma assumes a: "A \ B \ C" @@ -1427,9 +1426,9 @@ -text {* +text \ (********************************************************) (FIXME: explain a version of rule-tac) -*} +\ end
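As a further illustration of the method parsers discussed in the last section, here is a
minimal sketch of a method that takes a numeric argument. The method name \<open>apply_conjI_n\<close>
is invented for this example; the code assumes only the standard Pure interfaces
@{ML Parse.nat}, @{ML SIMPLE_METHOD}, @{ML REPEAT_DETERM_N} and @{ML resolve_tac}, and it
would have to be placed before the final end of the theory to be usable.

method_setup apply_conjI_n =
  \<open>(* parse a natural number n from the method arguments ... *)
   Scan.lift Parse.nat >> (fn n => fn ctxt =>
     (* ... and resolve with conjI against the first subgoal at most n times *)
     SIMPLE_METHOD (REPEAT_DETERM_N n (resolve_tac ctxt @{thms conjI} 1)))\<close>
  "split conjunctions in the first subgoal, at most n times (illustration only)"

It could then be invoked as apply(apply_conjI_n 2), which splits at most two conjunctions,
always working on what is currently the first subgoal.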