--- a/ProgTutorial/FirstSteps.thy Mon Mar 30 17:40:20 2009 +0200
+++ b/ProgTutorial/FirstSteps.thy Wed Apr 01 12:28:14 2009 +0100
@@ -294,7 +294,7 @@
avoided: it is more than easy to get the intermediate values wrong, not to
mention the nightmares the maintenance of this code causes!
- In the context of Isabelle, a ``real world'' example for a function written in
+ In Isabelle, a ``real world'' example for a function written in
the waterfall fashion might be the following code:
*}
@@ -398,7 +398,8 @@
ML{*fun (x, y) |-> f = f x y*}
-text {* and can be used to write the following roundabout version
+text {*
+ and can be used to write the following roundabout version
of the @{text double} function:
*}
@@ -406,13 +407,45 @@
x |> (fn x => (x, x))
|-> (fn x => fn y => x + y)*}
+text {*
+  The combinator @{ML ||>>} plays a central rôle whenever your task is to update a
+  theory and the update also produces a side-result (for example a theorem). Functions
+  for such tasks return a pair whose second component is the theory and the first
+  component is the side-result. Using @{ML ||>>}, you can conveniently do the update
+  and also accumulate the side-results. Consider the following simple function.
+*}
+
+ML %linenosgray{*fun acc_incs x =
+ x |> (fn x => ("", x))
+ ||>> (fn x => (x, x + 1))
+ ||>> (fn x => (x, x + 1))
+ ||>> (fn x => (x, x + 1))*}
+
+text {*
+  The purpose of Line 2 is just to pair up the argument with a dummy value (since
+  @{ML "||>>"} operates on pairs). Each of the next three lines increments
+  the value by one, but also nests the intermediate results to the left. For
+  example:
+
+ @{ML_response [display,gray]
+ "acc_incs 1"
+ "((((\"\", 1), 2), 3), 4)"}
+
+ You can continue this chain with:
+
+ @{ML_response [display,gray]
+ "acc_incs 1 ||>> (fn x => (x, x + 2))"
+ "(((((\"\", 1), 2), 3), 4), 6)"}
+
+  A sketch of a more realistic use of @{ML "||>>"} is given below.
+*}
+
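+text {*
+  The following is a minimal sketch of such a use (the function name is
+  hypothetical; @{ML Variable.variant_fixes} returns a list of fresh names
+  together with an updated proof context). It threads a context through two
+  steps, while accumulating the generated names on the left:
+*}
+
+ML{*fun two_fresh_names ctxt =
+  ctxt |> (fn ctxt => ("", ctxt))
+       ||>> Variable.variant_fixes ["x"]
+       ||>> Variable.variant_fixes ["y"]*}
+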
text {*
Recall that @{ML "|>"} is the reverse function application. Recall also that
the related
reverse function composition is @{ML "#>"}. In fact all the combinators @{ML "|->"},
- @{ML "|>>"} and @{ML "||>"} described above have related combinators for function
- composition, namely @{ML "#->"}, @{ML "#>>"} and @{ML "##>"}. Using @{ML "#->"},
- for example, the function @{text double} can also be written as:
+  @{ML "|>>"}, @{ML "||>"} and @{ML "||>>"} described above have related combinators for
+ function composition, namely @{ML "#->"}, @{ML "#>>"}, @{ML "##>"} and @{ML "##>>"}.
+ Using @{ML "#->"}, for example, the function @{text double} can also be written as:
*}
ML{*val double =
--- a/ProgTutorial/Package/Ind_Code.thy Mon Mar 30 17:40:20 2009 +0200
+++ b/ProgTutorial/Package/Ind_Code.thy Wed Apr 01 12:28:14 2009 +0100
@@ -1,13 +1,14 @@
theory Ind_Code
-imports "../Base" "../FirstSteps" Ind_General_Scheme
+imports "../Base" "../FirstSteps" Ind_Interface Ind_General_Scheme
begin
section {* The Gory Details\label{sec:code} *}
text {*
- As mentioned before the code falls roughly into three parts: code that deals
- with the definitions, the induction principles and the introduction rules, respectively.
- In addition there are some administrative functions that string everything together.
+  As mentioned before, the code falls roughly into three parts: code that deals
+  with the definitions, with the induction principles and the introduction
+ rules, respectively. In addition there are some administrative functions
+ that string everything together.
*}
subsection {* Definitions *}
@@ -206,8 +207,8 @@
subsection {* Induction Principles *}
text {*
- Recall that the proof of the induction principle
- for @{text "even"} was:
+ Recall that the manual proof for the induction principle
+ of @{text "even"} was:
*}
lemma manual_ind_prin_even:
@@ -236,7 +237,7 @@
text {*
This helper function instantiates the @{text "?x"} in the theorem
@{thm spec} with a given @{ML_type cterm}. We call this helper function
- in the following tactic, called @{text inst_spec_tac}.
+ in the following tactic, called @{text inst_spec_tac}\label{fun:instspectac}.
*}
ML{*fun inst_spec_tac ctrms =
@@ -273,12 +274,13 @@
assume_tac]*}
text {*
- We have to give it as arguments the definitions, the premise (this premise
- is @{text "even n"} in lemma @{thm [source] manual_ind_prin_even}) and the
- instantiations. Compare this tactic with the manual proof for the lemma @{thm
- [source] manual_ind_prin_even}: as you can see there is almost a one-to-one
- correspondence between the \isacommand{apply}-script and the @{ML
- ind_tac}. Two testcases for this tactic are:
+ We have to give it as arguments the definitions, the premise (a list of
+ formulae) and the instantiations. The premise is @{text "even n"} in lemma
+ @{thm [source] manual_ind_prin_even}; in our code it will always be a list
+ consisting of a single formula. Compare this tactic with the manual proof
+ for the lemma @{thm [source] manual_ind_prin_even}: as you can see there is
+ almost a one-to-one correspondence between the \isacommand{apply}-script and
+ the @{ML ind_tac}. Two testcases for this tactic are:
*}
lemma automatic_ind_prin_even:
@@ -309,11 +311,15 @@
\end{isabelle}
The variables @{text "z"}, @{text "P"} and @{text "Q"} are schematic
- variables (since they are not quantified in the lemma). These schematic
- variables are needed so that they can be instantiated by the user.
- It will take a bit of work to generate these schematic variables when
- constructing the goals for the induction principles. In general we have
- to construct for each predicate @{text "pred"} a goal of the form
+ variables (since they are not quantified in the lemma). These
+ variables must be schematic, otherwise they cannot be instantiated
+ by the user. To generate these schematic variables we use a common trick
+ in Isabelle programming: we first declare them as \emph{free},
+ \emph{but fixed}, and then use the infrastructure to turn them into
+ schematic variables.
+
+ In general we have to construct for each predicate @{text "pred"} a goal
+ of the form
@{text [display]
"pred ?zs \<Longrightarrow> rules[preds := ?Ps] \<Longrightarrow> ?P ?zs"}
@@ -321,9 +327,7 @@
where the predicates @{text preds} are replaced in @{text rules} by new
distinct variables @{text "?Ps"}. We also need to generate fresh arguments
@{text "?zs"} for the predicate @{text "pred"} and the @{text "?P"} in
- the conclusion. As just mentioned, the crucial point is that the
- @{text "?Ps"} and @{text "?zs"} need to be schematic variables that can
- be instantiated by the user.
+ the conclusion.
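+
+  For the predicate @{text "even"}, for example, this scheme instantiates to
+
+  @{text [display]
+  "even ?z \<Longrightarrow> ?P 0 \<Longrightarrow> (\<And>n. ?Q n \<Longrightarrow> ?P (Suc n)) \<Longrightarrow> (\<And>n. ?P n \<Longrightarrow> ?Q (Suc n)) \<Longrightarrow> ?P ?z"}
+
+  which corresponds to the induction principle @{thm [source]
+  automatic_ind_prin_even} proved above.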
We generate these goals in two steps. The first function, named @{text prove_ind},
expects that the introduction rules are already appropriately substituted. The argument
@@ -352,7 +356,7 @@
text {*
In Line 3 we produce names @{text "zs"} for each type in the
argument type list. Line 4 makes these names unique and declares them as
- \emph{free but fixed} variables in the local theory @{text "lthy'"}.
+ free, but fixed, variables in the local theory @{text "lthy'"}.
That means they are not schematic variables (yet).
In Line 5 we construct the terms corresponding to these variables.
The variables are applied to the predicate in Line 7 (this corresponds
@@ -514,6 +518,10 @@
end"
"P a b c"}
+ Note the difference with @{ML inst_spec_tac} from Page~\pageref{fun:instspectac}:
+ @{ML inst_spec_tac} is a tactic which operates on a goal state; in contrast
+ @{ML all_elims} operates on theorems.
+
Similarly, the function @{ML imp_elims} eliminates preconditions from implications.
For example we can eliminate the preconditions @{text "A"} and @{text "B"} from
@{thm [source] imp_elims_test}:
@@ -543,7 +551,7 @@
text {*
The function in Line 2 ``rulifies'' the lemma. This will turn out to
- be important later on. Applying this tactic
+  be important later on. Applying this tactic in our proof of @{text "fresh_Lam"}
*}
(*<*)
@@ -553,30 +561,29 @@
apply(tactic {* expand_tac @{thms fresh_def} *})
txt {*
- we end up in the goal state
+ gives us the goal state
\begin{isabelle}
@{subgoals [display]}
\end{isabelle}
- As you can see, there are parameters (namely @{text "a"}, @{text "b"}
- and @{text "t"}) which come from the introduction rule and parameters
- (in the case above only @{text "fresh"}) which come from the universal
- quantification in the definition @{term "fresh a (App t s)"}.
- Similarly, there are assumptions
- that come from the premises of the rule and assumptions from the
- definition of the predicate. We need to treat these
- parameters and assumptions differently. In the code below
- we will therefore separate them into @{text "params1"} and @{text params2},
- respectively @{text "prems1"} and @{text "prems2"}. To do this
- separation, it is best to open a subproof with the tactic
- @{ML SUBPROOF}, since this tactic provides us
- with the parameters (as list of @{ML_type cterm}s) and the assumptions
- (as list of @{ML_type thm}s called @{text prems}). The problem we have
- to overcome with @{ML SUBPROOF} is, however, that this tactic always
- expects us to completely discharge the goal (see Section~\ref{sec:simpletacs}).
- This is inconvenient for our gradual explanation of the proof here. To
- circumvent this inconvenience we use the following modified tactic:
+ As you can see, there are parameters (namely @{text "a"}, @{text "b"} and
+ @{text "t"}) which come from the introduction rule and parameters (in the
+ case above only @{text "fresh"}) which come from the universal
+ quantification in the definition @{term "fresh a (App t s)"}. Similarly,
+ there are assumptions that come from the premises of the rule (namely the
+  first two) and assumptions from the definition of the predicate (assumptions
+  three to six). We need to treat these parameters and assumptions
+ differently. In the code below we will therefore separate them into @{text
+ "params1"} and @{text params2}, respectively @{text "prems1"} and @{text
+ "prems2"}. To do this separation, it is best to open a subproof with the
+ tactic @{ML SUBPROOF}, since this tactic provides us with the parameters (as
+ list of @{ML_type cterm}s) and the assumptions (as list of @{ML_type thm}s).
+ The problem we have to overcome with @{ML SUBPROOF}
+ is, however, that this tactic always expects us to completely discharge the
+ goal (see Section~\ref{sec:simpletacs}). This is inconvenient for our
+ gradual explanation of the proof here. To circumvent this inconvenience we
+ use the following modified tactic:
*}
(*<*)oops(*>*)
ML{*fun SUBPROOF_test tac ctxt = (SUBPROOF tac ctxt 1) ORELSE all_tac*}
@@ -699,7 +706,7 @@
text {*
The argument @{text i} corresponds to the number of the
- introduction we want to prove. We will later on lat it range
+ introduction we want to prove. We will later on let it range
from @{text 0} to the number of @{text "rules - 1"}.
Below we apply this function with @{text 3}, since
we are proving the fourth introduction rule.
@@ -722,9 +729,9 @@
@{text "2."}~@{prop "fresh a t"}
\end{isabelle}
- As expected there are two subgoals, where the first comes from a
+ As expected there are two subgoals, where the first comes from the
non-recursive premise of the introduction rule and the second comes
- from a recursive premise. The first goal can be solved immediately
+ from the recursive one. The first goal can be solved immediately
by @{text "prems1"}. The second needs more work. It can be solved
with the other premise in @{text "prems1"}, namely
@@ -773,7 +780,7 @@
text {*
Note that the tactic is now @{ML SUBPROOF}, not @{ML SUBPROOF_test}.
- The full proof of the introduction rule now as follows:
+ The full proof of the introduction rule is as follows:
*}
lemma fresh_Lam:
@@ -783,7 +790,7 @@
done
text {*
- Phew! Unfortunately, not everything is done yet. If you look closely
+ Phew! ...Unfortunately, not everything is done yet. If you look closely
at the general principle outlined for the introduction rules in
Section~\ref{sec:nutshell}, we have not yet dealt with the case where
recursive premises have preconditions. The introduction rule
@@ -791,7 +798,7 @@
*}
lemma accpartI:
- shows "\<And>x. (\<And>y. R y x \<Longrightarrow> accpart R y) \<Longrightarrow> accpart R x"
+ shows "\<And>R x. (\<And>y. R y x \<Longrightarrow> accpart R y) \<Longrightarrow> accpart R x"
apply(tactic {* expand_tac @{thms accpart_def} *})
apply(tactic {* chop_test_tac [acc_pred] acc_rules @{context} *})
apply(tactic {* apply_prem_tac 0 [acc_pred] acc_rules @{context} *})
@@ -872,7 +879,7 @@
*}
lemma accpartI:
- shows "\<And>x. (\<And>y. R y x \<Longrightarrow> accpart R y) \<Longrightarrow> accpart R x"
+ shows "\<And>R x. (\<And>y. R y x \<Longrightarrow> accpart R y) \<Longrightarrow> accpart R x"
apply(tactic {* expand_tac @{thms accpart_def} *})
apply(tactic {* prove_intro_tac 0 [acc_pred] acc_rules @{context} 1 *})
done
@@ -889,8 +896,8 @@
prove_intro_tac i preds rules ctxt]*}
text {*
- Lines 2 to 4 correspond to the function @{ML expand_tac}. Some testcases
- for this tactic are:
+ Lines 2 to 4 in this tactic correspond to the function @{ML expand_tac}.
+ Some testcases for this tactic are:
*}
lemma even0_intro:
@@ -907,7 +914,7 @@
intro_tac @{thms fresh_def} fresh_rules [fresh_pred] 1 @{context} *})
text {*
- The second function sets up in Line 4 the goals (this is easy
+ The second function sets up in Line 4 the goals to be proved (this is easy
for the introduction rules since they are exactly the rules
given by the user) and iterates @{ML intro_tac} over all
introduction rules.
@@ -930,12 +937,34 @@
code that strings everything together.
*}
-subsection {* Main Function *}
+subsection {* Administrative Functions *}
+
+text {*
+ We have produced various theorems (definitions, induction principles and
+ introduction rules), but apart from the definitions, we have not yet
+  registered them with the theorem database. This is what the function
+  @{ML LocalTheory.note} does.
+
+  For convenience, we use the following
+  three wrappers around this function:
+*}
-text {* main internal function *}
+ML{*fun reg_many qname ((name, attrs), thms) =
+ LocalTheory.note Thm.theoremK
+ ((Binding.qualify false qname name, attrs), thms)
+
+fun reg_single1 qname ((name, attrs), thm) =
+ reg_many qname ((name, attrs), [thm])
-ML {* LocalTheory.notes *}
+fun reg_single2 name attrs (qname, thm) =
+ reg_many (Binding.name_of qname) ((name, attrs), [thm]) *}
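+
+text {*
+  As an illustration (anticipating how @{text "add_inductive"} below will use
+  these wrappers), the hypothetical function
+*}
+
+ML{*fun reg_intros thms lthy =
+  reg_many "even_odd" ((@{binding "intros"}, []), thms) lthy*}
+
+text {*
+  stores the theorems @{text thms} under the name @{text "even_odd.intros"}.
+*}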
+text {*
+ The function that ``holds everything together'' is @{text "add_inductive"}.
+ Its arguments are the specification of the predicates @{text "pred_specs"}
+  and the introduction rules @{text "rule_specs"}.
+*}
ML %linenosgray{*fun add_inductive pred_specs rule_specs lthy =
let
@@ -943,32 +972,74 @@
val pred_specs' = map fst pred_specs
val prednames = map fst pred_specs'
val preds = map (fn (p, ty) => Free (Binding.name_of p, ty)) pred_specs'
+ val tyss = map (binder_types o fastype_of) preds
- val tyss = map (binder_types o fastype_of) preds
- val (attrs, rules) = split_list rule_specs
+ val (namesattrs, rules) = split_list rule_specs
val (defs, lthy') = defns rules preds prednames syns tyss lthy
- val ind_rules = inds rules defs preds tyss lthy'
+ val ind_prin = inds rules defs preds tyss lthy'
val intro_rules = intros rules preds defs lthy'
val mut_name = space_implode "_" (map Binding.name_of prednames)
- val case_names = map (Binding.name_of o fst) attrs
+ val case_names = map (Binding.name_of o fst) namesattrs
in
- lthy'
- |> LocalTheory.notes Thm.theoremK (map (fn (((a, atts), _), th) =>
- ((Binding.qualify false mut_name a, atts), [([th], [])])) (rule_specs ~~ intro_rules))
- |-> (fn intross => LocalTheory.note Thm.theoremK
- ((Binding.qualify false mut_name (@{binding "intros"}), []), maps snd intross))
- |>> snd
- ||>> (LocalTheory.notes Thm.theoremK (map (fn (((R, _), _), th) =>
- ((Binding.qualify false (Binding.name_of R) (@{binding "induct"}),
- [Attrib.internal (K (RuleCases.case_names case_names)),
- Attrib.internal (K (RuleCases.consumes 1)),
- Attrib.internal (K (Induct.induct_pred ""))]), [([th], [])]))
- (pred_specs ~~ ind_rules)) #>> maps snd)
- |> snd
+ lthy' |> reg_many mut_name ((@{binding "intros"}, []), intro_rules)
+ ||>> fold_map (reg_single1 mut_name) (namesattrs ~~ intro_rules)
+ ||>> fold_map (reg_single2 @{binding "induct"}
+ [Attrib.internal (K (RuleCases.case_names case_names)),
+ Attrib.internal (K (RuleCases.consumes 1)),
+ Attrib.internal (K (Induct.induct_pred ""))])
+ (prednames ~~ ind_prin)
+ |> snd
end*}
+text {*
+ In Line 3 the function extracts the syntax annotations from the predicates.
+  Lines 4 to 6 extract the names of the predicates and generate
+  the variable terms (with types) corresponding to the predicates.
+ Line 7 produces the argument types for each predicate.
+
+ Line 9 extracts the introduction rules from the specifications
+  and also stores in @{text namesattrs} the names and attributes the
+ user may have attached to these rules.
+
+  Line 11 produces the definitions and also registers them
+ in the local theory @{text "lthy'"}. The next two lines produce
+ the induction principles and the introduction rules (all of them
+ as theorems). Both need the local theory @{text lthy'} in which
+ the definitions have been registered.
+
+  Line 15 produces the name that is used to register the introduction
+  rules. It is customary to collect all introduction rules under
+  @{text "string.intros"}, whereby @{text "string"} stands for the
+  @{text [quotes] "_"}-separated list of predicate names (for example
+  @{text "even_odd"}). Also by custom, the case names in induction
+ proofs correspond to the names of the introduction rules. These
+ are generated in Line 16.
+
+ Line 18 now adds to @{text "lthy'"} all the introduction rules
+ under the name @{text "mut_name.intros"} (see previous paragraph).
+  Line 19 further adds every introduction rule under its own name
+ (given by the user).\footnote{FIXME: what happens if the user did not give
+ any name.} Line 20 registers the induction principles. For this we have
+ to use some specific attributes. The first @{ML "case_names" in RuleCases}
+ corresponds to the case names that are used by Isar to reference the proof
+ obligations in the induction. The second @{ML "consumes 1" in RuleCases}
+ indicates that the first premise of the induction principle (namely
+ the predicate over which the induction proceeds) is eliminated.
+
+ (FIXME: What does @{ML Induct.induct_pred} do?)
+
+ (FIXME: why the mut-name?)
+
+ (FIXME: What does @{ML Binding.qualify} do?)
+
+
+  This completes all the code and fits in with the ``front end'' described
+  in Section~\ref{sec:interface}.
+*}
+
+
ML{*fun add_inductive_cmd pred_specs rule_specs lthy =
let
val ((pred_specs', rule_specs'), _) =
@@ -987,12 +1058,15 @@
ML{*val specification =
spec_parser >>
- (fn ((pred_specs), rule_specs) => add_inductive_cmd pred_specs rule_specs)*}
+ (fn (pred_specs, rule_specs) => add_inductive_cmd pred_specs rule_specs)*}
ML{*val _ = OuterSyntax.local_theory "simple_inductive"
"define inductive predicates"
OuterKeyword.thy_decl specification*}
+
+section {* Extensions (TBD) *}
+
text {*
Things to include at the end:
@@ -1009,6 +1083,7 @@
*}
+(*
simple_inductive
Even and Odd
where
@@ -1016,4 +1091,85 @@
| EvenS: "Odd n \<Longrightarrow> Even (Suc n)"
| OddS: "Even n \<Longrightarrow> Odd (Suc n)"
-end
+thm Even0
+thm EvenS
+thm OddS
+
+thm Even_Odd.intros
+thm Even.induct
+thm Odd.induct
+
+thm Even_def
+thm Odd_def
+*)
+
+(*
+text {*
+ Second, we want that the user can specify fixed parameters.
+ Remember in the previous section we stated that the user can give the
+ specification for the transitive closure of a relation @{text R} as
+*}
+*)
+
+(*
+simple_inductive
+ trcl\<iota>\<iota> :: "('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> 'a \<Rightarrow> 'a \<Rightarrow> bool"
+where
+ base: "trcl\<iota>\<iota> R x x"
+| step: "trcl\<iota>\<iota> R x y \<Longrightarrow> R y z \<Longrightarrow> trcl\<iota>\<iota> R x z"
+*)
+
+(*
+text {*
+ Note that there is no locale given in this specification---the parameter
+ @{text "R"} therefore needs to be included explicitly in @{term trcl\<iota>\<iota>}, but
+ stays fixed throughout the specification. The problem with this way of
+ stating the specification for the transitive closure is that it derives the
+ following induction principle.
+
+ \begin{center}\small
+ \mprset{flushleft}
+ \mbox{\inferrule{
+ @{thm_style prem1 trcl\<iota>\<iota>.induct[where P=P, where z=R, where za=x, where zb=y]}\\\\
+ @{thm_style prem2 trcl\<iota>\<iota>.induct[where P=P, where z=R, where za=x, where zb=y]}\\\\
+ @{thm_style prem3 trcl\<iota>\<iota>.induct[where P=P, where z=R, where za=x, where zb=y]}}
+ {@{thm_style concl trcl\<iota>\<iota>.induct[where P=P, where z=R, where za=x, where zb=y]}}}
+ \end{center}
+
+ But this does not correspond to the induction principle we derived by hand, which
+ was
+
+
+ %\begin{center}\small
+ %\mprset{flushleft}
+ %\mbox{\inferrule{
+ % @ { thm_style prem1 trcl_induct[no_vars]}\\\\
+ % @ { thm_style prem2 trcl_induct[no_vars]}\\\\
+ % @ { thm_style prem3 trcl_induct[no_vars]}}
+ % {@ { thm_style concl trcl_induct[no_vars]}}}
+ %\end{center}
+
+ The difference is that in the one derived by hand the relation @{term R} is not
+ a parameter of the proposition @{term P} to be proved and it is also not universally
+  quantified in the second and third premise. The point is that the parameter @{term R}
+  stays fixed throughout the definition and we do not want to regard it as an ``ordinary''
+ argument of the transitive closure, but one that can be freely instantiated.
+ In order to recognise such parameters, we have to extend the specification
+ to include a mechanism to state fixed parameters. The user should be able
+ to write
+
+*}
+*)
+(*
+simple_inductive
+ trcl'' for R :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
+where
+ base: "trcl'' R x x"
+| step: "trcl'' R x y \<Longrightarrow> R y z \<Longrightarrow> trcl'' R x z"
+
+simple_inductive
+ accpart'' for R :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
+where
+ accpartI: "(\<And>y. R y x \<Longrightarrow> accpart'' R y) \<Longrightarrow> accpart'' R x"
+*)
+(*<*)end(*>*)
\ No newline at end of file
--- a/ProgTutorial/Package/Ind_General_Scheme.thy Mon Mar 30 17:40:20 2009 +0200
+++ b/ProgTutorial/Package/Ind_General_Scheme.thy Wed Apr 01 12:28:14 2009 +0100
@@ -1,23 +1,10 @@
theory Ind_General_Scheme
-imports Simple_Inductive_Package Ind_Prelims
+imports Ind_Interface
begin
-(*<*)
-datatype trm =
- Var "string"
-| App "trm" "trm"
-| Lam "string" "trm"
+section {* The Code in a Nutshell\label{sec:nutshell} *}
-simple_inductive
- fresh :: "string \<Rightarrow> trm \<Rightarrow> bool"
-where
- fresh_var: "a\<noteq>b \<Longrightarrow> fresh a (Var b)"
-| fresh_app: "\<lbrakk>fresh a t; fresh a s\<rbrakk> \<Longrightarrow> fresh a (App t s)"
-| fresh_lam1: "fresh a (Lam a t)"
-| fresh_lam2: "\<lbrakk>a\<noteq>b; fresh a t\<rbrakk> \<Longrightarrow> fresh a (Lam b t)"
-(*>*)
-section {* The Code in a Nutshell\label{sec:nutshell} *}
text {*
The inductive package will generate the reasoning infrastructure for
@@ -70,13 +57,11 @@
has a single recursive premise that has a precondition. As usual all
rules are stated without the leading meta-quantification @{text "\<And>xs"}.
- The code of the inductive package falls roughly in tree parts: the first
- deals with the definitions, the second with the induction principles and
- the third with the introduction rules.
-
- For the definitions we need to have the @{text rules} in a form where
- the meta-quantifiers and meta-implications are replaced by their object
- logic equivalents. Therefore an @{text "orule"} is of the form
+ The output of the inductive package will be definitions for the predicates,
+ induction principles and introduction rules. For the definitions we need to have the
+ @{text rules} in a form where the meta-quantifiers and meta-implications are
+ replaced by their object logic equivalents. Therefore an @{text "orule"} is
+ of the form
@{text [display] "orule ::= \<forall>xs. As \<longrightarrow> (\<forall>ys. Bs \<longrightarrow> pred ss)\<^isup>* \<longrightarrow> pred ts"}
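+
+  For instance, in the @{text "even/odd"}-example the introduction rule
+  @{text "odd n \<Longrightarrow> even (Suc n)"} corresponds to the orule
+  @{text "\<forall>n. odd n \<longrightarrow> even (Suc n)"}.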
@@ -111,13 +96,13 @@
with the @{text "Ps"}. Then we are done since we are left with a simple
identity.
- Although the user declares introduction rules @{text rules}, they must
+ Although the user declares the introduction rules @{text rules}, they must
also be derived from the @{text defs}. These derivations are a bit involved.
Assuming we want to prove the introduction rule
@{text [display] "\<And>xs. As \<Longrightarrow> (\<And>ys. Bs \<Longrightarrow> pred ss)\<^isup>* \<Longrightarrow> pred ts"}
- then we can have assumptions of the form
+ then we have assumptions of the form
\begin{isabelle}
(i)~~@{text "As"}\\
@@ -143,7 +128,8 @@
In the last step we removed some quantifiers and moved the precondition @{text "orules"}
into the assumtion. The @{text "orules"} stand for all introduction rules that are given
by the user. We apply the @{text orule} that corresponds to introduction rule we are
- proving. This introduction rule must necessarily be of the form
+ proving. After lifting to the meta-connectives, this introduction rule must necessarily
+ be of the form
@{text [display] "As \<Longrightarrow> (\<And>ys. Bs \<Longrightarrow> pred ss)\<^isup>* \<Longrightarrow> pred ts"}
@@ -167,10 +153,10 @@
in @{text "(iii)"} gives us @{text "pred ss"}, which is the goal we are after.
This completes the proof for introduction rules.
- What remains is to implement the reasoning outlined in this section. We do this
- next. For building testcases, we use the shorthands for
+ What remains is to implement in Isabelle the reasoning outlined in this section.
+ We will describe the code in the next section. For building testcases, we use the shorthands for
@{text "even/odd"}, @{term "fresh"} and @{term "accpart"}
- given in Figure~\ref{fig:shorthands}.
+ defined in Figure~\ref{fig:shorthands}.
*}
--- a/ProgTutorial/Package/Ind_Interface.thy Mon Mar 30 17:40:20 2009 +0200
+++ b/ProgTutorial/Package/Ind_Interface.thy Wed Apr 01 12:28:14 2009 +0100
@@ -1,25 +1,116 @@
theory Ind_Interface
-imports "../Base" "../Parsing" Ind_Prelims Simple_Inductive_Package
+imports "../Base" "../Parsing" Simple_Inductive_Package
begin
-section {* Parsing and Typing the Specification *}
+section {* Parsing and Typing the Specification\label{sec:interface} *}
+
+text_raw {*
+\begin{figure}[p]
+\begin{boxedminipage}{\textwidth}
+\begin{isabelle}
+*}
+simple_inductive
+ trcl :: "('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> 'a \<Rightarrow> 'a \<Rightarrow> bool"
+where
+ base: "trcl R x x"
+| step: "trcl R x y \<Longrightarrow> R y z \<Longrightarrow> trcl R x z"
+
+simple_inductive
+ even and odd
+where
+ even0: "even 0"
+| evenS: "odd n \<Longrightarrow> even (Suc n)"
+| oddS: "even n \<Longrightarrow> odd (Suc n)"
+
+simple_inductive
+ accpart :: "('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> 'a \<Rightarrow> bool"
+where
+ accpartI: "(\<And>y. R y x \<Longrightarrow> accpart R y) \<Longrightarrow> accpart R x"
+
+(*<*)
+datatype trm =
+ Var "string"
+| App "trm" "trm"
+| Lam "string" "trm"
+(*>*)
+
+simple_inductive
+ fresh :: "string \<Rightarrow> trm \<Rightarrow> bool"
+where
+ fresh_var: "a\<noteq>b \<Longrightarrow> fresh a (Var b)"
+| fresh_app: "\<lbrakk>fresh a t; fresh a s\<rbrakk> \<Longrightarrow> fresh a (App t s)"
+| fresh_lam1: "fresh a (Lam a t)"
+| fresh_lam2: "\<lbrakk>a\<noteq>b; fresh a t\<rbrakk> \<Longrightarrow> fresh a (Lam b t)"
+text_raw {*
+\end{isabelle}
+\end{boxedminipage}
+\caption{Specification given by the user for the inductive predicates
+@{term "trcl"}, @{term "even"} and @{term "odd"}, @{term "accpart"} and
+@{text "fresh"}.\label{fig:specs}}
+\end{figure}
+*}
+
+text {*
+ \begin{figure}[p]
+ \begin{boxedminipage}{\textwidth}
+ \begin{isabelle}
+ \railnontermfont{\rmfamily\itshape}
+ \railterm{simpleinductive,where,for}
+ \railalias{simpleinductive}{\simpleinductive{}}
+ \railalias{where}{\isacommand{where}}
+ \railalias{for}{\isacommand{for}}
+ \begin{rail}
+ simpleinductive target?\\ fixes
+ (where (thmdecl? prop + '|'))?
+ ;
+ \end{rail}
+ \end{isabelle}
+ \end{boxedminipage}
+ \caption{A railroad diagram describing the syntax of \simpleinductive{}.
+ The \emph{target} indicates an optional locale; the \emph{fixes} are an
+ \isacommand{and}-separated list of names for the inductive predicates (they
+  can also contain typing- and syntax annotations); \emph{prop} stands for an
+  introduction rule with an optional theorem declaration (\emph{thmdecl}).
+ \label{fig:railroad}}
+ \end{figure}
+*}
text {*
To be able to write down the specification in Isabelle, we have to introduce
a new command (see Section~\ref{sec:newcommand}). As the keyword for the
- new command we chose \simpleinductive{}. In the package we want to support
- some ``advanced'' features: First, we want that the package can cope with
- specifications inside locales. For example it should be possible to declare
+  new command we chose \simpleinductive{}. Examples of specifications we expect
+  the user to give for the inductive predicates from the previous section are
+  shown in Figure~\ref{fig:specs}. The general syntax of \simpleinductive{} is
+  specified by the railroad diagram shown in Figure~\ref{fig:railroad}. This
+  diagram more or less translates directly into
+ the parser:
+*}
+
+ML{*val spec_parser =
+ OuterParse.fixes --
+ Scan.optional
+ (OuterParse.$$$ "where" |--
+ OuterParse.!!!
+ (OuterParse.enum1 "|"
+ (SpecParse.opt_thm_name ":" -- OuterParse.prop))) []*}
+
+text {*
+  which we explained in Section~\ref{sec:parsingspecs}.
+ If you look closely, there is no code for parsing the optional
+ target in Figure~\ref{fig:railroad}. This is an ``advanced'' feature
+ which we will inherit for ``free'' from the infrastructure on which
+ we build the package. The target stands for a locale and allows us
+ to specify
*}
locale rel =
fixes R :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
text {*
- and then define the transitive closure and the accessible part as follows:
+ and then define the transitive closure and the accessible part of this
+ locale as follows:
*}
-
simple_inductive (in rel)
trcl'
where
@@ -31,100 +122,9 @@
where
accpartI: "(\<And>y. R y x \<Longrightarrow> accpart' y) \<Longrightarrow> accpart' x"
-text {*
- Second, we want that the user can specify fixed parameters.
- Remember in the previous section we stated that the user can give the
- specification for the transitive closure of a relation @{text R} as
-*}
-
-simple_inductive
- trcl\<iota>\<iota> :: "('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> 'a \<Rightarrow> 'a \<Rightarrow> bool"
-where
- base: "trcl\<iota>\<iota> R x x"
-| step: "trcl\<iota>\<iota> R x y \<Longrightarrow> R y z \<Longrightarrow> trcl\<iota>\<iota> R x z"
-
text {*
- Note that there is no locale given in this specification---the parameter
- @{text "R"} therefore needs to be included explicitly in @{term trcl\<iota>\<iota>}, but
- stays fixed throughout the specification. The problem with this way of
- stating the specification for the transitive closure is that it derives the
- following induction principle.
-
- \begin{center}\small
- \mprset{flushleft}
- \mbox{\inferrule{
- @{thm_style prem1 trcl\<iota>\<iota>.induct[where P=P, where z=R, where za=x, where zb=y]}\\\\
- @{thm_style prem2 trcl\<iota>\<iota>.induct[where P=P, where z=R, where za=x, where zb=y]}\\\\
- @{thm_style prem3 trcl\<iota>\<iota>.induct[where P=P, where z=R, where za=x, where zb=y]}}
- {@{thm_style concl trcl\<iota>\<iota>.induct[where P=P, where z=R, where za=x, where zb=y]}}}
- \end{center}
-
- But this does not correspond to the induction principle we derived by hand, which
- was
-
- \begin{center}\small
- \mprset{flushleft}
- \mbox{\inferrule{
- @{thm_style prem1 trcl_induct[no_vars]}\\\\
- @{thm_style prem2 trcl_induct[no_vars]}\\\\
- @{thm_style prem3 trcl_induct[no_vars]}}
- {@{thm_style concl trcl_induct[no_vars]}}}
- \end{center}
-
- The difference is that in the one derived by hand the relation @{term R} is not
- a parameter of the proposition @{term P} to be proved and it is also not universally
- qunatified in the second and third premise. The point is that the parameter @{term R}
- stays fixed thoughout the definition and we do not want to regard it as an ``ordinary''
- argument of the transitive closure, but one that can be freely instantiated.
- In order to recognise such parameters, we have to extend the specification
- to include a mechanism to state fixed parameters. The user should be able
- to write
-
-*}
-
-simple_inductive
- trcl'' for R :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
-where
- base: "trcl'' R x x"
-| step: "trcl'' R x y \<Longrightarrow> R y z \<Longrightarrow> trcl'' R x z"
-
-simple_inductive
- accpart'' for R :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
-where
- accpartI: "(\<And>y. R y x \<Longrightarrow> accpart'' R y) \<Longrightarrow> accpart'' R x"
-
-text {*
- \begin{figure}[t]
- \begin{isabelle}
- \railnontermfont{\rmfamily\itshape}
- \railterm{simpleinductive,where,for}
- \railalias{simpleinductive}{\simpleinductive{}}
- \railalias{where}{\isacommand{where}}
- \railalias{for}{\isacommand{for}}
- \begin{rail}
- simpleinductive target? fixes (for fixes)? \\
- (where (thmdecl? prop + '|'))?
- ;
- \end{rail}
- \end{isabelle}
- \caption{A railroad diagram describing the syntax of \simpleinductive{}.
- The \emph{target} indicates an optional locale; the \emph{fixes} are an
- \isacommand{and}-separated list of names for the inductive predicates (they
- can also contain typing- and syntax anotations); similarly the \emph{fixes}
- after \isacommand{for} to indicate fixed parameters; \emph{prop} stands for a
- introduction rule with an optional theorem declaration (\emph{thmdecl}).
- \label{fig:railroad}}
- \end{figure}
-*}
-
-text {*
- This leads directly to the railroad diagram shown in
- Figure~\ref{fig:railroad} for the syntax of \simpleinductive{}. This diagram
- more or less translates directly into the parser:
-
- @{ML_chunk [display,gray] parser}
-
- which we described in Section~\ref{sec:parsingspecs}. If we feed into the
+ Note that in these definitions the parameter @{text R} for the
+ relation is left implicit. If we feed into the
parser the string (which corresponds to our definition of @{term even} and
@{term odd}):
@@ -143,22 +143,37 @@
[((even0,\<dots>), \"\\^E\\^Ftoken\\^Eeven 0\\^E\\^F\\^E\"),
((evenS,\<dots>), \"\\^E\\^Ftoken\\^Eodd n \<Longrightarrow> even (Suc n)\\^E\\^F\\^E\"),
((oddS,\<dots>), \"\\^E\\^Ftoken\\^Eeven n \<Longrightarrow> odd (Suc n)\\^E\\^F\\^E\")]), [])"}
+
+ then we get back the predicates (with type
+ and syntax annotations), and the specifications of the introduction
+ rules. This is all the information we
+ need for calling the package and setting up the keyword. The latter is
+  done in the final code block below.
*}
+(*<*)
+ML{* fun add_inductive pred_specs rule_specs lthy = lthy *}
+(*>*)
+
+ML{*fun add_inductive_cmd pred_specs rule_specs lthy =
+let
+ val ((pred_specs', rule_specs'), _) =
+ Specification.read_spec pred_specs rule_specs lthy
+in
+ add_inductive pred_specs' rule_specs' lthy
+end*}
+
+
+ML{*val specification =
+ spec_parser >>
+ (fn (pred_specs, rule_specs) => add_inductive_cmd pred_specs rule_specs)*}
+
+ML{*val _ = OuterSyntax.local_theory "simple_inductive"
+ "definition of simple inductive predicates"
+ OuterKeyword.thy_decl specification*}
+
text {*
- then we get back a locale (in this case @{ML NONE}), the predicates (with type
- and syntax annotations), the parameters (similar as the predicates) and
- the specifications of the introduction rules.
-
-
-
- This is all the information we
- need for calling the package and setting up the keyword. The latter is
- done in Lines 6 and 7 in the code below.
-
- @{ML_chunk [display,gray,linenos] syntax}
-
We call @{ML OuterSyntax.command} with the kind-indicator @{ML
OuterKeyword.thy_decl} since the package does not need to open up any goal
state (see Section~\ref{sec:newcommand}). Note that the predicates and
@@ -448,9 +463,10 @@
should do in order to solve all inductive definitions we throw at them. For this
it is instructive to look at the general construction principle
of inductive definitions, which we shall do in the next section.
+
+  The code of the inductive package falls roughly into three parts: the first
+ deals with the definitions, the second with the induction principles and
+ the third with the introduction rules.
+
*}
-
-
-(*<*)
-end
-(*>*)
+(*<*)end(*>*)
\ No newline at end of file
--- a/ProgTutorial/Package/Ind_Intro.thy Mon Mar 30 17:40:20 2009 +0200
+++ b/ProgTutorial/Package/Ind_Intro.thy Wed Apr 01 12:28:14 2009 +0100
@@ -2,7 +2,7 @@
imports Main
begin
-chapter {* How to Write a Definitional Package\label{chp:package} (TBD) *}
+chapter {* How to Write a Definitional Package\label{chp:package} *}
text {*
\begin{flushright}
@@ -17,9 +17,9 @@
\medskip
HOL is based on just a few primitive constants, like equality and
implication, whose properties are described by axioms. All other concepts,
- such as inductive predicates, datatypes, or recursive functions have to be defined
+ such as inductive predicates, datatypes or recursive functions, have to be defined
in terms of those constants, and the desired properties, for example
- induction theorems, or recursion equations have to be derived from the definitions
+ induction theorems or recursion equations, have to be derived from the definitions
by a formal proof. Since it would be very tedious for a user to define
complex inductive predicates or datatypes ``by hand'' just using the
primitive operators of higher order logic, \emph{definitional packages} have
@@ -29,7 +29,7 @@
definitions and proofs behind the scenes. In this chapter we explain how
such a package can be implemented.
- As a running example, we have chosen a rather simple package for defining
+ As the running example we have chosen a rather simple package for defining
inductive predicates. To keep things really simple, we will not use the
general Knaster-Tarski fixpoint theorem on complete lattices, which forms
the basis of Isabelle's standard inductive definition package. Instead, we
--- a/ProgTutorial/Package/Ind_Prelims.thy Mon Mar 30 17:40:20 2009 +0200
+++ b/ProgTutorial/Package/Ind_Prelims.thy Wed Apr 01 12:28:14 2009 +0100
@@ -1,52 +1,33 @@
theory Ind_Prelims
-imports Main LaTeXsugar"../Base" Simple_Inductive_Package
+imports Main LaTeXsugar "../Base"
begin
section{* Preliminaries *}
text {*
- The user will just give a specification of an inductive predicate and
+ The user will just give a specification of inductive predicate(s) and
expects from the package to produce a convenient reasoning
infrastructure. This infrastructure needs to be derived from the definition
- that correspond to the specified predicate. This will roughly mean that the
- package has three main parts, namely:
+  that corresponds to the specified predicate(s). Before we start
+ explaining all parts of the package, let us first give some examples
+ showing how to define inductive predicates and then also how
+ to generate a reasoning infrastructure for them. From the examples
+ we will figure out a general method for
+ defining inductive predicates. The aim in this section is \emph{not} to
+ write proofs that are as beautiful as possible, but as close as possible to
+ the ML-code we will develop in later sections.
- \begin{itemize}
- \item parsing the specification and typing the parsed input,
- \item making the definitions and deriving the reasoning infrastructure, and
- \item storing the results in the theory.
- \end{itemize}
- Before we start with explaining all parts, let us first give three examples
- showing how to define inductive predicates by hand and then also how to
- prove by hand important properties about them. From these examples, we will
- figure out a general method for defining inductive predicates. The aim in
- this section is \emph{not} to write proofs that are as beautiful as
- possible, but as close as possible to the ML-code we will develop in later
- sections.
-
-
- We first consider the transitive closure of a relation @{text R}. It is
- an inductive predicate characterised by the two introduction rules:
+ We first consider the transitive closure of a relation @{text R}. The
+ ``pencil-and-paper'' specification for the transitive closure is:
\begin{center}\small
@{prop[mode=Axiom] "trcl R x x"} \hspace{5mm}
@{prop[mode=Rule] "R x y \<Longrightarrow> trcl R y z \<Longrightarrow> trcl R x z"}
\end{center}
- In Isabelle, the user will state for @{term trcl\<iota>} the specification:
-*}
-
-simple_inductive
- trcl\<iota> :: "('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> 'a \<Rightarrow> 'a \<Rightarrow> bool"
-where
- base: "trcl\<iota> R x x"
-| step: "trcl\<iota> R x y \<Longrightarrow> R y z \<Longrightarrow> trcl\<iota> R x z"
-
-text {*
- As said above the package has to make an appropriate definition and provide
- lemmas to reason about the predicate @{term trcl\<iota>}. Since an inductively
+ The package has to make an appropriate definition. Since an inductively
defined predicate is the least predicate closed under a collection of
introduction rules, the predicate @{text "trcl R x y"} can be defined so
that it holds if and only if @{text "P x y"} holds for every predicate
@@ -58,25 +39,25 @@
\<longrightarrow> (\<forall>x y z. R x y \<longrightarrow> P y z \<longrightarrow> P x z) \<longrightarrow> P x y"
text {*
- where we quantify over the predicate @{text P}. We have to use the
- object implication @{text "\<longrightarrow>"} and object quantification @{text "\<forall>"} for
- stating this definition (there is no other way for definitions in
- HOL). However, the introduction rules and induction principles
- should use the meta-connectives since they simplify the
- reasoning for the user.
+ We have to use the object implication @{text "\<longrightarrow>"} and object quantification
+ @{text "\<forall>"} for stating this definition (there is no other way for
+ definitions in HOL). However, the introduction rules and induction
+ principles associated with the transitive closure should use the meta-connectives,
+ since they simplify the reasoning for the user.
+
With this definition, the proof of the induction principle for @{term trcl}
is almost immediate. It suffices to convert all the meta-level
connectives in the lemma to object-level connectives using the
- proof method @{text atomize} (Line 4), expand the definition of @{term trcl}
+ proof method @{text atomize} (Line 4 below), expand the definition of @{term trcl}
(Line 5 and 6), eliminate the universal quantifier contained in it (Line~7),
- and then solve the goal by assumption (Line 8).
+ and then solve the goal by @{text assumption} (Line 8).
*}
lemma %linenos trcl_induct:
- assumes "trcl R x y"
- shows "(\<And>x. P x x) \<Longrightarrow> (\<And>x y z. R x y \<Longrightarrow> P y z \<Longrightarrow> P x z) \<Longrightarrow> P x y"
+assumes "trcl R x y"
+shows "(\<And>x. P x x) \<Longrightarrow> (\<And>x y z. R x y \<Longrightarrow> P y z \<Longrightarrow> P x z) \<Longrightarrow> P x y"
apply(atomize (full))
apply(cut_tac prems)
apply(unfold trcl_def)
@@ -90,7 +71,7 @@
*}
lemma %linenos trcl_base:
- shows "trcl R x x"
+shows "trcl R x x"
apply(unfold trcl_def)
apply(rule allI impI)+
apply(drule spec)
@@ -110,9 +91,10 @@
(*<*)oops(*>*)
text {*
- The two assumptions correspond to the introduction rules. Thus, all we have
- to do is to eliminate the universal quantifier in front of the first
- assumption (Line 5), and then solve the goal by assumption (Line 6).
+ The two assumptions come from the definition of @{term trcl} and correspond
+ to the introduction rules. Thus, all we have to do is to eliminate the
+ universal quantifier in front of the first assumption (Line 5), and then
+ solve the goal by @{text assumption} (Line 6).
*}
text {*
@@ -123,7 +105,7 @@
*}
lemma trcl_step:
- shows "R x y \<Longrightarrow> trcl R y z \<Longrightarrow> trcl R x z"
+shows "R x y \<Longrightarrow> trcl R y z \<Longrightarrow> trcl R x z"
apply (unfold trcl_def)
apply (rule allI impI)+
@@ -147,9 +129,9 @@
txt {*
The assumptions @{text "p1"} and @{text "p2"} correspond to the premises of
- the second introduction rule; the assumptions @{text "r1"} and @{text "r2"}
- correspond to the introduction rules. We apply @{text "r2"} to the goal
- @{term "P x z"}. In order for the assumption to be applicable as a rule, we
+ the second introduction rule (unfolded); the assumptions @{text "r1"} and @{text "r2"}
+ come from the definition of @{term trcl}. We apply @{text "r2"} to the goal
+ @{term "P x z"}. In order for this assumption to be applicable as a rule, we
have to eliminate the universal quantifier and turn the object-level
implications into meta-level ones. This can be accomplished using the @{text
rule_format} attribute. So we continue the proof with:
@@ -183,7 +165,7 @@
*}
lemma trcl_step_blast:
- shows "R x y \<Longrightarrow> trcl R y z \<Longrightarrow> trcl R x z"
+shows "R x y \<Longrightarrow> trcl R y z \<Longrightarrow> trcl R x z"
apply(unfold trcl_def)
apply(blast)
done
@@ -195,40 +177,28 @@
declare new intro- or simplification rules that can throw automatic tactics
off course) and also it is very hard to debug proofs involving automatic
tactics whenever something goes wrong. Therefore if possible, automatic
- tactics should be avoided or sufficiently constrained.
+ tactics should be avoided or be constrained sufficiently.
The method of defining inductive predicates by impredicative quantification
also generalises to mutually inductive predicates. The next example defines
- the predicates @{text even} and @{text odd} characterised by the following
- rules:
-
+ the predicates @{text even} and @{text odd} given by
+
\begin{center}\small
@{prop[mode=Axiom] "even (0::nat)"} \hspace{5mm}
@{prop[mode=Rule] "odd n \<Longrightarrow> even (Suc n)"} \hspace{5mm}
@{prop[mode=Rule] "even n \<Longrightarrow> odd (Suc n)"}
\end{center}
-
- The user will state for this inductive definition the specification:
-*}
-simple_inductive
- even and odd
-where
- even0: "even 0"
-| evenS: "odd n \<Longrightarrow> even (Suc n)"
-| oddS: "even n \<Longrightarrow> odd (Suc n)"
-
-text {*
Since the predicates @{term even} and @{term odd} are mutually inductive, each
corresponding definition must quantify over both predicates (we name them
below @{text "P"} and @{text "Q"}).
*}
-definition "even\<iota> \<equiv>
+definition "even \<equiv>
\<lambda>n. \<forall>P Q. P 0 \<longrightarrow> (\<forall>m. Q m \<longrightarrow> P (Suc m))
\<longrightarrow> (\<forall>m. P m \<longrightarrow> Q (Suc m)) \<longrightarrow> P n"
-definition "odd\<iota> \<equiv>
+definition "odd \<equiv>
\<lambda>n. \<forall>P Q. P 0 \<longrightarrow> (\<forall>m. Q m \<longrightarrow> P (Suc m))
\<longrightarrow> (\<forall>m. P m \<longrightarrow> Q (Suc m)) \<longrightarrow> Q n"
@@ -238,9 +208,8 @@
*}
lemma even_induct:
- assumes "even n"
- shows "P 0 \<Longrightarrow>
- (\<And>m. Q m \<Longrightarrow> P (Suc m)) \<Longrightarrow> (\<And>m. P m \<Longrightarrow> Q (Suc m)) \<Longrightarrow> P n"
+assumes "even n"
+shows "P 0 \<Longrightarrow> (\<And>m. Q m \<Longrightarrow> P (Suc m)) \<Longrightarrow> (\<And>m. P m \<Longrightarrow> Q (Suc m)) \<Longrightarrow> P n"
apply(atomize (full))
apply(cut_tac prems)
apply(unfold even_def)
@@ -252,14 +221,14 @@
text {*
The only difference with the proof @{text "trcl_induct"} is that we have to
instantiate here two universal quantifiers. We omit the other induction
- principle that has @{term "Q n"} as conclusion. The proofs of the
- introduction rules are also very similar to the ones in the @{text
- "trcl"}-example. We only show the proof of the second introduction rule.
-
+  principle that has @{prop "odd n"} as premise and @{term "Q n"} as conclusion.
+ The proofs of the introduction rules are also very similar to the ones in
+ the @{text "trcl"}-example. We only show the proof of the second introduction
+ rule.
*}
lemma %linenos evenS:
- shows "odd m \<Longrightarrow> even (Suc m)"
+shows "odd m \<Longrightarrow> even (Suc m)"
apply (unfold odd_def even_def)
apply (rule allI impI)+
proof -
@@ -277,6 +246,9 @@
qed
text {*
+ The interesting lines are 7 to 15. The assumptions fall into two categories:
+ @{text p1} corresponds to the premise of the introduction rule; @{text "r1"}
+ to @{text "r3"} come from the definition of @{text "even"}.
In Line 13, we apply the assumption @{text "r2"} (since we prove the second
introduction rule). In Lines 14 and 15 we apply assumption @{text "p1"} (if
the second introduction rule had more premises we have to do that for all
@@ -284,41 +256,26 @@
need to be instantiated and then also the implications need to be resolved
with the other rules.
+ Next we define the accessible part of a relation @{text R} given by
+ the single rule:
- As a final example, we define the accessible part of a relation @{text R} characterised
- by the introduction rule
-
\begin{center}\small
\mbox{\inferrule{@{term "\<And>y. R y x \<Longrightarrow> accpart R y"}}{@{term "accpart R x"}}}
\end{center}
- whose premise involves a universal quantifier and an implication. The
- definition of @{text accpart} is:
+ The definition of @{text "accpart"} is:
*}
definition "accpart \<equiv> \<lambda>R x. \<forall>P. (\<forall>x. (\<forall>y. R y x \<longrightarrow> P y) \<longrightarrow> P x) \<longrightarrow> P x"
text {*
- The proof of the induction principle is again straightforward.
-*}
-
-lemma accpart_induct:
- assumes "accpart R x"
- shows "(\<And>x. (\<And>y. R y x \<Longrightarrow> P y) \<Longrightarrow> P x) \<Longrightarrow> P x"
-apply(atomize (full))
-apply(cut_tac prems)
-apply(unfold accpart_def)
-apply(drule spec[where x=P])
-apply(assumption)
-done
-
-text {*
- Proving the introduction rule is a little more complicated, because the quantifier
- and the implication in the premise. The proof is as follows.
+ The proof of the induction principle is again straightforward and omitted.
+  Proving the introduction rule is a little more complicated, because of the
+  quantifier and the implication in the premise. The proof is as follows.
*}
lemma %linenos accpartI:
- shows "(\<And>y. R y x \<Longrightarrow> accpart R y) \<Longrightarrow> accpart R x"
+shows "(\<And>y. R y x \<Longrightarrow> accpart R y) \<Longrightarrow> accpart R x"
apply (unfold accpart_def)
apply (rule allI impI)+
proof -
@@ -338,14 +295,32 @@
qed
text {*
- In Line 11, applying the assumption @{text "r1"} generates a goal state with
- the new local assumption @{term "R y x"}, named @{text "r1_prem"} in the
- proof above (Line 14). This local assumption is used to solve
- the goal @{term "P y"} with the help of assumption @{text "p1"}.
+ As you can see, there are now two subproofs. The assumptions fall again into
+ two categories (Lines 7 to 9). In Line 11, applying the assumption @{text
+ "r1"} generates a goal state with the new local assumption @{term "R y x"},
+ named @{text "r1_prem"} in the second subproof (Line 14). This local assumption is
+ used to solve the goal @{term "P y"} with the help of assumption @{text
+ "p1"}.
+
- The point of these examples is to get a feeling what the automatic proofs
- should do in order to solve all inductive definitions we throw at them.
- This is usually the first step in writing a package. We next explain
+ \begin{exercise}
+ Give the definition for the freshness predicate for lambda-terms. The rules
+ for this predicate are:
+
+ \begin{center}\small
+ @{prop[mode=Rule] "a\<noteq>b \<Longrightarrow> fresh a (Var b)"}\hspace{5mm}
+ @{prop[mode=Rule] "\<lbrakk>fresh a t; fresh a s\<rbrakk> \<Longrightarrow> fresh a (App t s)"}\\[2mm]
+ @{prop[mode=Axiom] "fresh a (Lam a t)"}\hspace{5mm}
+ @{prop[mode=Rule] "\<lbrakk>a\<noteq>b; fresh a t\<rbrakk> \<Longrightarrow> fresh a (Lam b t)"}
+ \end{center}
+
+ From the definition derive the induction principle and the introduction
+ rules.
+ \end{exercise}
+
+  The point of all these examples is to get a feeling for what the automatic
+ proofs should do in order to solve all inductive definitions we throw at
+ them. This is usually the first step in writing a package. We next explain
the parsing and typing part of the package.
*}
--- a/ProgTutorial/Parsing.thy Mon Mar 30 17:40:20 2009 +0200
+++ b/ProgTutorial/Parsing.thy Wed Apr 01 12:28:14 2009 +0100
@@ -195,7 +195,7 @@
@{ML_response_fake [display,gray] "Scan.error (!! (fn _ => \"foo\") ($$ \"h\"))"
"Exception Error \"foo\" raised"}
- This ``prefixing'' is usually done by wrappers such as @{ML "OuterSyntax.command"}
+ This ``prefixing'' is usually done by wrappers such as @{ML "OuterSyntax.local_theory"}
(see Section~\ref{sec:newcommand} which explains this function in more detail).
Let us now return to our example of parsing @{ML "(p -- q) || r" for p q
@@ -350,8 +350,6 @@
section {* Parsing Theory Syntax *}
text {*
- (FIXME: context parser)
-
Most of the time, however, Isabelle developers have to deal with parsing
tokens, not strings. These token parsers have the type:
*}
@@ -620,12 +618,14 @@
for inductive predicates of the form:
*}
+(*
simple_inductive
even and odd
where
even0: "even 0"
| evenS: "odd n \<Longrightarrow> even (Suc n)"
| oddS: "even n \<Longrightarrow> odd (Suc n)"
+*)
text {*
For this we are going to use the parser:
@@ -680,7 +680,7 @@
val input = filtered_input
\"foo::\\\"int \<Rightarrow> bool\\\" and bar::nat (\\\"BAR\\\" 100) and blonk\"
in
- parse OuterParse.fixes input
+ parse OuterParse.fixes input
end"
"([(foo, SOME \"\\^E\\^Ftoken\\^Eint \<Rightarrow> bool\\^E\\^F\\^E\", NoSyn),
(bar, SOME \"\\^E\\^Ftoken\\^Enat\\^E\\^F\\^E\", Mixfix (\"BAR\", [], 100)),
@@ -700,10 +700,10 @@
\end{readmore}
Lines 3 to 7 in the function @{ML spec_parser} implement the parser for a
- list of introduction rules, that is propositions with theorem
- annotations. The introduction rules are propositions parsed by @{ML
- OuterParse.prop}. However, they can include an optional theorem name plus
- some attributes. For example
+ list of introduction rules, that is propositions with theorem annotations
+ such as rule names and attributes. The introduction rules are propositions
+ parsed by @{ML OuterParse.prop}. However, they can include an optional
+ theorem name plus some attributes. For example
@{ML_response [display,gray] "let
val input = filtered_input \"foo_lemma[intro,dest!]:\"
@@ -753,8 +753,6 @@
section {* New Commands and Keyword Files\label{sec:newcommand} *}
text {*
- (FIXME: update to the right command setup --- is this still needed?)
-
Often new commands, for example for providing new definitional principles,
need to be implemented. While this is not difficult on the ML-level,
new commands, in order to be useful, need to be recognised by
@@ -767,16 +765,16 @@
*}
ML{*let
- val do_nothing = Scan.succeed (Toplevel.theory I)
+ val do_nothing = Scan.succeed (LocalTheory.theory I)
val kind = OuterKeyword.thy_decl
in
- OuterSyntax.command "foobar" "description of foobar" kind do_nothing
+ OuterSyntax.local_theory "foobar" "description of foobar" kind do_nothing
end *}
text {*
- The crucial function @{ML OuterSyntax.command} expects a name for the command, a
- short description, a kind indicator (which we will explain later on more thoroughly) and a
- parser producing a top-level transition function (its purpose will also explained
+ The crucial function @{ML OuterSyntax.local_theory} expects a name for the command, a
+ short description, a kind indicator (which we will explain later more thoroughly) and a
+  parser producing a local theory transition (its purpose will also be explained
+  later).
While this is everything you have to do on the ML-level, you need a keyword
@@ -804,15 +802,15 @@
\isacommand{ML}~@{text "\<verbopen>"}\\
@{ML
"let
- val do_nothing = Scan.succeed (Toplevel.theory I)
+ val do_nothing = Scan.succeed (LocalTheory.theory I)
val kind = OuterKeyword.thy_decl
in
- OuterSyntax.command \"foobar\" \"description of foobar\" kind do_nothing
+ OuterSyntax.local_theory \"foobar\" \"description of foobar\" kind do_nothing
end"}\\
@{text "\<verbclose>"}\\
\isacommand{end}
\end{graybox}
- \caption{\small The file @{text "Command.thy"} is necessary for generating a log
+ \caption{The file @{text "Command.thy"} is necessary for generating a log
file. This log file enables Isabelle to generate a keyword file containing
the command \isacommand{foobar}.\label{fig:commandtheory}}
\end{figure}
@@ -910,22 +908,22 @@
The crucial part of a command is the function that determines the behaviour
of the command. In the code above we used a ``do-nothing''-function, which
because of @{ML Scan.succeed} does not parse any argument, but immediately
- returns the simple toplevel function @{ML "Toplevel.theory I"}. We can
+ returns the simple function @{ML "LocalTheory.theory I"}. We can
replace this code by a function that first parses a proposition (using the
parser @{ML OuterParse.prop}), then prints out the tracing
- information (using a new top-level function @{text trace_top_lvl}) and
+ information (using a new function @{text trace_prop}) and
finally does nothing. For this you can write:
*}
ML{*let
- fun trace_top_lvl str =
- Toplevel.theory (fn thy => (tracing str; thy))
+ fun trace_prop str =
+ LocalTheory.theory (fn lthy => (tracing str; lthy))
- val trace_prop = OuterParse.prop >> trace_top_lvl
-
+ val trace_prop_parser = OuterParse.prop >> trace_prop
val kind = OuterKeyword.thy_decl
in
- OuterSyntax.command "foobar" "traces a proposition" kind trace_prop
+ OuterSyntax.local_theory "foobar" "traces a proposition"
+ kind trace_prop_parser
end *}
text {*
@@ -938,17 +936,19 @@
and see the proposition in the tracing buffer.
- Note that so far we used @{ML thy_decl in OuterKeyword} as the kind indicator
- for the command. This means that the command finishes as soon as the
- arguments are processed. Examples of this kind of commands are
- \isacommand{definition} and \isacommand{declare}. In other cases,
- commands are expected to parse some arguments, for example a proposition,
- and then ``open up'' a proof in order to prove the proposition (for example
+ Note that so far we have used @{ML thy_decl in OuterKeyword} as the kind
+ indicator for the command. This means that the command finishes as soon as
+ the arguments are processed. Examples of this kind of command are
+ \isacommand{definition} and \isacommand{declare}. In other cases, commands
+ are expected to parse some arguments, for example a proposition, and then
+ ``open up'' a proof in order to prove the proposition (for example
\isacommand{lemma}) or prove some other properties (for example
- \isacommand{function}). To achieve this kind of behaviour, you have to use the kind
- indicator @{ML thy_goal in OuterKeyword}. Note, however, once you change the
- ``kind'' of a command from @{ML thy_decl in OuterKeyword} to @{ML thy_goal in OuterKeyword}
- then the keyword file needs to be re-created!
+ \isacommand{function}). To achieve this kind of behaviour, you have to use
+ the kind indicator @{ML thy_goal in OuterKeyword} and the function @{ML
+ "local_theory_to_proof" in OuterSyntax} to set up the command. Note,
+ however, once you change the ``kind'' of a command from @{ML thy_decl in
+ OuterKeyword} to @{ML thy_goal in OuterKeyword} then the keyword file needs
+ to be re-created!
Below we change \isacommand{foobar} so that it takes a proposition as
argument and then starts a proof in order to prove it. Therefore in Line 13,
@@ -956,34 +956,32 @@
*}
ML%linenosgray{*let
- fun set_up_thm str ctxt =
+ fun prove_prop str ctxt =
let
val prop = Syntax.read_prop ctxt str
in
Proof.theorem_i NONE (K I) [[(prop,[])]] ctxt
end;
- val prove_prop = OuterParse.prop >>
- (fn str => Toplevel.print o
- Toplevel.local_theory_to_proof NONE (set_up_thm str))
-
+ val prove_prop_parser = OuterParse.prop >> prove_prop
val kind = OuterKeyword.thy_goal
in
- OuterSyntax.command "foobar" "proving a proposition" kind prove_prop
+ OuterSyntax.local_theory_to_proof "foobar" "proving a proposition"
+ kind prove_prop_parser
end *}
text {*
- The function @{text set_up_thm} in Lines 2 to 7 takes a string (the proposition to be
+ The function @{text prove_prop} in Lines 2 to 7 takes a string (the proposition to be
proved) and a context as argument. The context is necessary in order to be able to use
@{ML Syntax.read_prop}, which converts a string into a proper proposition.
In Line 6 the function @{ML Proof.theorem_i} starts the proof for the
proposition. Its argument @{ML NONE} stands for a locale (which we chose to
omit); the argument @{ML "(K I)"} stands for a function that determines what
should be done with the theorem once it is proved (we chose to just forget
- about it). Lines 9 to 11 contain the parser for the proposition.
+ about it). Line 9 contains the parser for the proposition.
If you now type \isacommand{foobar}~@{text [quotes] "True \<and> True"}, you obtain the following
- proof state:
+ proof state
\begin{isabelle}
\isacommand{foobar}~@{text [quotes] "True \<and> True"}\\
@@ -991,7 +989,7 @@
@{text "1. True \<and> True"}
\end{isabelle}
- and you can build the proof
+ and you can build the following proof
\begin{isabelle}
\isacommand{foobar}~@{text [quotes] "True \<and> True"}\\
@@ -1000,12 +998,6 @@
\isacommand{done}
\end{isabelle}
-
-
- (FIXME What do @{ML "Toplevel.theory"}
- @{ML "Toplevel.print"}
- @{ML Toplevel.local_theory} do?)
-
 (FIXME read a name and show how to store theorems; a first sketch is given below)
*}
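+text {*
+ A first sketch for this FIXME is the following version of \isacommand{foobar},
+ which reads an optional theorem name followed by a colon in front of the
+ proposition, and once the proof is finished stores the proved theorem under
+ this name. It is only a sketch: we assume that @{text "SpecParse.opt_thm_name"}
+ is the appropriate parser for the name, that @{text "LocalTheory.note"} can be
+ used for storing the theorem and that @{text "Thm.lemmaK"} is the right
+ theorem kind; attributes attached to the name are ignored for simplicity.
+*}
+
+ML{*let
+   (* store the proved theorems under the parsed name, forgetting attributes *)
+   fun after_qed name thms lthy =
+     snd (LocalTheory.note Thm.lemmaK ((name, []), flat thms) lthy)
+
+   (* read the proposition and open a proof, as with prove_prop above *)
+   fun prove_and_store ((name, _), str) lthy =
+   let
+     val prop = Syntax.read_prop lthy str
+   in
+     Proof.theorem_i NONE (after_qed name) [[(prop, [])]] lthy
+   end
+
+   val parser = SpecParse.opt_thm_name ":" -- OuterParse.prop
+   val kind = OuterKeyword.thy_goal
+in
+   OuterSyntax.local_theory_to_proof "foobar" "proving and storing a proposition"
+     kind (parser >> prove_and_store)
+end *}
+
+text {*
+ With this setup, typing \isacommand{foobar}~@{text "bar_thm:"}~@{text [quotes]
+ "True \<and> True"} should, after the proof is finished, store the proved
+ theorem under the name @{text "bar_thm"}.
+*}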
--- a/ProgTutorial/Tactical.thy Mon Mar 30 17:40:20 2009 +0200
+++ b/ProgTutorial/Tactical.thy Wed Apr 01 12:28:14 2009 +0100
@@ -1652,11 +1652,10 @@
@{text num} (Line 3), and then generates the meta-equation @{text "t \<equiv> num"}
(Line 4), which it proves by the arithmetic tactic in Line 6.
- For our purpose at the moment, proving the meta-equation using
- @{ML Arith_Data.arith_tac} is
- fine, but there is also an alternative employing the simplifier with a very
- restricted simpset. For the kind of lemmas we want to prove, the simpset
- @{text "num_ss"} in the code will suffice.
+ For our purpose at the moment, proving the meta-equation using @{ML
+ arith_tac in Arith_Data} is fine, but there is also an alternative employing
+ the simplifier with a special simpset. For the kind of lemmas we
+ want to prove here, the simpset @{text "num_ss"} should suffice.
*}
ML{*fun get_thm_alt ctxt (t, n) =
@@ -1672,7 +1671,7 @@
text {*
The advantage of @{ML get_thm_alt} is that it leaves very little room for
something to go wrong; in contrast it is much more difficult to predict
- what happens with @{ML Arith_Data.arith_tac}, especially in more complicated
+ what happens with @{ML arith_tac in Arith_Data}, especially in more complicated
circumstances. The disadvantage of @{ML get_thm_alt} is that you have to
find a simpset that is sufficiently powerful to solve every instance of the
lemmas we want to prove. This requires careful tuning, but is often necessary in
@@ -2081,6 +2080,8 @@
*}
+section {* Declarations (TBD) *}
+
section {* Structured Proofs (TBD) *}
text {* TBD *}
@@ -2105,15 +2106,12 @@
}
oops
-ML {* fun prop ctxt s =
- Thm.cterm_of (ProofContext.theory_of ctxt) (Syntax.read_prop ctxt s) *}
-
ML {*
val ctxt0 = @{context};
val ctxt = ctxt0;
val (_, ctxt) = Variable.add_fixes ["A", "B", "C"] ctxt;
- val ([r], ctxt) = Assumption.add_assumes [prop ctxt "A & B \<Longrightarrow> C"] ctxt;
- val (this, ctxt) = Assumption.add_assumes [prop ctxt "A", prop ctxt "B"] ctxt;
+ val ([r], ctxt) = Assumption.add_assumes [@{cprop "A & B \<Longrightarrow> C"}] ctxt
+ val (this, ctxt) = Assumption.add_assumes [@{cprop "A"}, @{cprop "B"}] ctxt;
val this = [@{thm conjI} OF this];
val this = r OF this;
val this = Assumption.export false ctxt ctxt0 this
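+  (* the export with first argument false (no goal mode) should discharge the
+     three local assumptions, yielding a theorem roughly of the form
+     "\<lbrakk>A & B \<Longrightarrow> C; A; B\<rbrakk> \<Longrightarrow> C" *)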
--- a/ProgTutorial/document/root.rao Mon Mar 30 17:40:20 2009 +0200
+++ b/ProgTutorial/document/root.rao Wed Apr 01 12:28:14 2009 +0100
@@ -1,8 +1,8 @@
-% This file was generated by 'rail' from 'CookBook/generated/root.rai'
+% This file was generated by 'rail' from 'ProgTutorial/generated/root.rai'
\rail@t {simpleinductive}
\rail@t {where}
\rail@t {for}
-\rail@i {1}{ simpleinductive target? fixes (for fixes)? \\ (where (thmdecl? prop + '|'))? ; }
+\rail@i {1}{ simpleinductive target?\\ fixes (where (thmdecl? prop + '|'))? ; }
\rail@o {1}{
\rail@begin{7}{}
\rail@token{simpleinductive}[]
@@ -10,14 +10,9 @@
\rail@nextbar{1}
\rail@nont{target}[]
\rail@endbar
+\rail@cr{3}
\rail@nont{fixes}[]
\rail@bar
-\rail@nextbar{1}
-\rail@token{for}[]
-\rail@nont{fixes}[]
-\rail@endbar
-\rail@cr{3}
-\rail@bar
\rail@nextbar{4}
\rail@token{where}[]
\rail@plus
--- a/ProgTutorial/document/root.tex Mon Mar 30 17:40:20 2009 +0200
+++ b/ProgTutorial/document/root.tex Wed Apr 01 12:28:14 2009 +0100
@@ -14,6 +14,7 @@
\usepackage{framed}
\usepackage{boxedminipage}
\usepackage{mathpartir}
+\usepackage{flafter}
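+% flafter makes floats appear only after their first reference in the text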
\usepackage{pdfsetup}
\urlstyle{rm}