theory Ind_Code
imports "../Base" "../FirstSteps" Ind_General_Scheme 
begin

section {* The Gory Details\label{sec:code} *} 

text {*
  As mentioned before, the code falls roughly into three parts: the definitions,
  the induction principles and the introduction rules. In addition there is an 
  administrative function that strings everything together. 
*}

subsection {* Definitions *}

text {*
  We first have to produce for each predicate the definition, whose general form is

  @{text [display] "pred \<equiv> \<lambda>zs. \<forall>preds. orules \<longrightarrow> pred zs"}

  and then ``register'' the definition inside a local theory. 
  To do the latter, we use the following wrapper for 
  @{ML LocalTheory.define}. The wrapper takes a predicate name, a syntax
  annotation and a term representing the right-hand side of the definition.
*}

ML %linenosgray{*fun make_defn ((predname, syn), trm) lthy =
let 
  val arg = ((predname, syn), (Attrib.empty_binding, trm))
  val ((_, (_ , thm)), lthy') = LocalTheory.define Thm.internalK arg lthy
in 
  (thm, lthy') 
end*}

text {*
  It returns the definition (as a theorem) and the local theory in which this definition has 
  been made. In Line 4, @{ML internalK in Thm} is a flag attached to the 
  theorem (other possible flags are @{ML definitionK in Thm} and @{ML axiomK in Thm}). 
  These flags just classify theorems and have no significant meaning, except 
  for tools that, for example, find theorems in the theorem database. We also
  use @{ML empty_binding in Attrib} in Line 3, since for our inductive predicates 
  the definitions do not need to have any theorem attributes. A testcase for this 
  function is
*}

local_setup %gray {* fn lthy =>
let
  val arg = ((@{binding "MyTrue"}, NoSyn), @{term True})
  val (def, lthy') = make_defn arg lthy 
in
  warning (str_of_thm_no_vars lthy' def); lthy'
end *}

text {*
  which introduces the definition @{prop "MyTrue \<equiv> True"} and then prints it out. 
  Since we are testing the function inside \isacommand{local\_setup}, i.e., making
  actual changes to the ambient theory, we can query the definition with the usual
  command \isacommand{thm}:

  \begin{isabelle}
  \isacommand{thm}~@{text "MyTrue_def"}\\
  @{text "> MyTrue \<equiv> True"}
  \end{isabelle}

  The next two functions construct the right-hand sides of the definitions, 
  which are terms of the form

  @{text [display] "\<lambda>zs. \<forall>preds. orules \<longrightarrow> pred zs"}

  When constructing them, the variables @{text "zs"} need to be chosen so that
  they do not occur in the @{text orules} and are also distinct from the @{text
  "preds"}.


  The first function, named @{text defn_aux}, constructs the term for one
  particular predicate (the argument @{text "pred"} in the code below). The
  number of arguments of this predicate is determined by the number of
  argument types given in @{text "arg_tys"}. The other arguments of the
  function are the @{text orules} and all the @{text "preds"}.
*}

ML %linenosgray{*fun defn_aux lthy orules preds (pred, arg_tys) =
let 
  fun mk_all x P = HOLogic.all_const (fastype_of x) $ lambda x P

  val fresh_args = 
        arg_tys 
        |> map (pair "z")
        |> Variable.variant_frees lthy (preds @ orules) 
        |> map Free
in
  list_comb (pred, fresh_args)
  |> fold_rev (curry HOLogic.mk_imp) orules
  |> fold_rev mk_all preds
  |> fold_rev lambda fresh_args 
end*}

text {*
  The function @{text mk_all} in Line 3 is just a helper function for constructing 
  universal quantifications. The code in Lines 5 to 9 produces the fresh @{text
  "zs"}. For this it pairs every argument type with the string
  @{text [quotes] "z"} (Line 7); then generates variants for all these strings
  so that they are unique w.r.t.~the predicates and @{text "orules"} (Line 8);
  in Line 9 it generates the corresponding variable terms for the unique
  strings.

  The unique variables are applied to the predicate in Line 11 using the
  function @{ML list_comb}; then the @{text orules} are prefixed (Line 12); in
  Line 13 we quantify over all predicates; and in Line 14 we just abstract
  over all the @{text "zs"}, i.e., the fresh arguments of the
  predicate. A testcase for this function is
*}

local_setup %gray{* fn lthy =>
let
  val def = defn_aux lthy eo_orules eo_preds (e_pred, e_arg_tys)
in
  warning (Syntax.string_of_term lthy def); lthy
end *}

text {*
  where we use the shorthands defined in Figure~\ref{fig:shorthands}.
  The testcase calls @{ML defn_aux} for the predicate @{text "even"} and prints
  out the generated definition. So we obtain as printout 

  @{text [display] 
"\<lambda>z. \<forall>even odd. (even 0) \<longrightarrow> (\<forall>n. odd n \<longrightarrow> even (Suc n)) 
                         \<longrightarrow> (\<forall>n. even n \<longrightarrow> odd (Suc n)) \<longrightarrow> even z"}

  If we try out the function with the rules for freshness
*}

local_setup %gray{* fn lthy =>
 (warning (Syntax.string_of_term lthy
    (defn_aux lthy fresh_orules [fresh_pred] (fresh_pred, fresh_arg_tys)));
  lthy) *}

text {*
  we obtain

  @{term [display] 
"\<lambda>z za. \<forall>fresh. (\<forall>a b. \<not> a = b \<longrightarrow> fresh a (Var b)) \<longrightarrow>
               (\<forall>a s t. fresh a t \<longrightarrow> fresh a s \<longrightarrow> fresh a (App t s)) \<longrightarrow>
                (\<forall>a t. fresh a (Lam a t)) \<longrightarrow>
                (\<forall>a b t. \<not> a = b \<longrightarrow> fresh a t \<longrightarrow> fresh a (Lam b t)) \<longrightarrow> fresh z za"}


  The second function, named @{text defns}, just iterates the function
  @{ML defn_aux} over all predicates. The argument @{text "preds"} is again
  the list of predicates as @{ML_type term}s; the argument @{text
  "prednames"} is the list of binding names of the predicates; @{text syns} 
  is the list of syntax annotations for the predicates; @{text "arg_tyss"} is
  the list of argument-type-lists.
*}

ML %linenosgray{*fun defns rules preds prednames syns arg_tyss lthy =
let
  val thy = ProofContext.theory_of lthy
  val orules = map (ObjectLogic.atomize_term thy) rules
  val defs = map (defn_aux lthy orules preds) (preds ~~ arg_tyss) 
in
  fold_map make_defn (prednames ~~ syns ~~ defs) lthy
end*}

text {*
  The user will state the introduction rules using meta-implications and
  meta-quanti\-fications. In Line 4, we transform these introduction rules into
  the object logic (since definitions cannot be stated with
  meta-connectives). To do this transformation we have to obtain the theory
  behind the local theory (Line 3); with this theory we can use the function
  @{ML ObjectLogic.atomize_term} to make the transformation (Line 4). The call
  to @{ML defn_aux} in Line 5 produces all right-hand sides of the
  definitions. The actual definitions are then made in Line 7.  The result
  of the function is a list of theorems and a local theory. A testcase for 
  this function is 
*}

local_setup %gray {* fn lthy =>
let
  val (defs, lthy') = 
    defns eo_rules eo_preds eo_prednames eo_syns eo_arg_tyss lthy
in
  warning (str_of_thms_no_vars lthy' defs); lthy
end *}

text {*
  where we feed into the function all parameters corresponding to
  the @{text even}-@{text odd} example. The definitions we obtain
  are:

  @{text [display, break]
"even \<equiv> \<lambda>z. \<forall>even odd. (even 0) \<longrightarrow> (\<forall>n. odd n \<longrightarrow> even (Suc n))  
                                \<longrightarrow> (\<forall>n. even n \<longrightarrow> odd (Suc n)) \<longrightarrow> even z,
odd \<equiv> \<lambda>z. \<forall>even odd. (even 0) \<longrightarrow> (\<forall>n. odd n \<longrightarrow> even (Suc n)) 
                               \<longrightarrow> (\<forall>n. even n \<longrightarrow> odd (Suc n)) \<longrightarrow> odd z"}

  Note that in the testcase we return the local theory @{text lthy} 
  (not the modified @{text lthy'}). As a result the testcase has no effect
  on the ambient theory. The reason is that if we introduced the
  definitions again, we would pollute the name space with two versions of 
  @{text "even"} and @{text "odd"}.

  This completes the code for introducing the definitions. Next we deal with
  the induction principles. 
*}

subsection {* Induction Principles *}

text {*
  Recall that the proof of the induction principle 
  for @{text "even"} was:
*}

lemma manual_ind_prin_even: 
assumes prem: "even z"
shows "P 0 \<Longrightarrow> (\<And>m. Q m \<Longrightarrow> P (Suc m)) \<Longrightarrow> (\<And>m. P m \<Longrightarrow> Q (Suc m)) \<Longrightarrow> P z"
apply(atomize (full))
apply(cut_tac prem)
apply(unfold even_def)
apply(drule spec[where x=P])
apply(drule spec[where x=Q])
apply(assumption)
done

text {* 
  The code for automating such induction principles has to accomplish two tasks: 
  constructing the induction principles from the given introduction
  rules and then automatically generating proofs for them using a tactic. 
  
  The tactic will use the following helper function for instantiating universal 
  quantifiers. 
*}

ML{*fun inst_spec ctrm = 
 Drule.instantiate' [SOME (ctyp_of_term ctrm)] [NONE, SOME ctrm] @{thm spec}*}

text {*
  This helper function instantiates the @{text "?x"} in the theorem 
  @{thm spec} with a given @{ML_type cterm}. We call this helper function
  in the next tactic, called @{text inst_spec_tac}.
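
  For example, instantiating @{thm [source] spec} with the @{ML_type cterm} for
  @{text "a"} (of type @{text "nat"}) should give the following theorem (the
  output shown here is only a sketch):

  @{ML_response_fake [display, gray]
"warning (str_of_thm @{context} (inst_spec @{cterm \"a::nat\"}))"
  "\<forall>x. ?P x \<Longrightarrow> ?P a"}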
*}

ML{*fun inst_spec_tac ctrms = 
  EVERY' (map (dtac o inst_spec) ctrms)*}

text {*
  This tactic expects a list of @{ML_type cterm}s. It allows us in the following 
  proof to instantiate the three quantifiers in the assumption. 
*}

lemma 
  fixes P::"nat \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> bool"
  shows "\<forall>x y z. P x y z \<Longrightarrow> True"
apply (tactic {* 
  inst_spec_tac  [@{cterm "a::nat"},@{cterm "b::nat"},@{cterm "c::nat"}] 1 *})
txt {* 
  We obtain the goal state

  \begin{minipage}{\textwidth}
  @{subgoals} 
  \end{minipage}*}
(*<*)oops(*>*)

text {*
  The complete tactic for proving the induction principles can now
  be implemented as follows:
*}

ML %linenosgray{*fun ind_tac defs prem insts =
  EVERY1 [ObjectLogic.full_atomize_tac,
          cut_facts_tac prem,
          K (rewrite_goals_tac defs),
          inst_spec_tac insts,
          assume_tac]*}

text {*
  We have to give it as arguments the definitions, the premise (this premise
  is @{text "even z"} in lemma @{thm [source] manual_ind_prin_even}) and the
  instantiations. Compare this tactic with the manual proof for the lemma @{thm
  [source] manual_ind_prin_even}: as you can see there is almost a one-to-one
  correspondence between the \isacommand{apply}-script and the @{ML
  ind_tac}. Two testcases for this tactic are:
*}

lemma automatic_ind_prin_even:
assumes prem: "even z"
shows "P 0 \<Longrightarrow> (\<And>m. Q m \<Longrightarrow> P (Suc m)) \<Longrightarrow> (\<And>m. P m \<Longrightarrow> Q (Suc m)) \<Longrightarrow> P z"
by (tactic {* ind_tac eo_defs @{thms prem} 
                    [@{cterm "P::nat\<Rightarrow>bool"}, @{cterm "Q::nat\<Rightarrow>bool"}] *})

lemma automatic_ind_prin_fresh:
assumes prem: "fresh z za" 
shows "(\<And>a b. a \<noteq> b \<Longrightarrow> P a (Var b)) \<Longrightarrow> 
        (\<And>a t s. \<lbrakk>P a t; P a s\<rbrakk> \<Longrightarrow> P a (App t s)) \<Longrightarrow>
        (\<And>a t. P a (Lam a t)) \<Longrightarrow> 
        (\<And>a b t. \<lbrakk>a \<noteq> b; P a t\<rbrakk> \<Longrightarrow> P a (Lam b t)) \<Longrightarrow> P z za"
by (tactic {* ind_tac @{thms fresh_def} @{thms prem} 
                          [@{cterm "P::string\<Rightarrow>trm\<Rightarrow>bool"}] *})


text {*
  While the tactic for proving the induction principles is relatively simple,
  it will be a bit of work to construct the goals from the introduction rules
  the user provides.  Therefore let us have a closer look at the first 
  proved theorem:

  \begin{isabelle}
  \isacommand{thm}~@{thm [source] automatic_ind_prin_even}\\
  @{text "> "}~@{thm automatic_ind_prin_even}
  \end{isabelle}

  The variables @{text "z"}, @{text "P"} and @{text "Q"} are schematic
  variables (since they are not quantified in the lemma). These schematic
  variables are needed so that they can be instantiated by the user. 
  We have to take care to also generate these schematic variables when
  generating the goals for the induction principles.  In general we have 
  to construct for each predicate @{text "pred"} a goal of the form

  @{text [display] 
  "pred ?zs \<Longrightarrow> rules[preds := ?Ps] \<Longrightarrow> ?P ?zs"}

  where the predicates @{text preds} are replaced in @{text rules} by new 
  distinct variables @{text "?Ps"}. We also need to generate fresh arguments 
  @{text "?zs"} for the predicate  @{text "pred"} and the @{text "?P"} in 
  the conclusion. The crucial point is that the  @{text "?Ps"} and 
  @{text "?zs"} need to be schematic variables that can be instantiated 
  by the user.

  We generate these goals in two steps. The first function, named @{text prove_ind}, 
  expects that the introduction rules are already appropriately substituted. The argument
  @{text "srules"} stands for these substituted rules; @{text cnewpreds} are
  the certified terms corresponding to the variables @{text "?Ps"}; @{text
  "pred"} is the predicate for which we prove the induction principle;
  @{text "newpred"} is its replacement and @{text "arg_tys"} are the argument
  types of this predicate.
*}

ML %linenosgray{*fun prove_ind lthy defs srules cnewpreds ((pred, newpred), arg_tys) =
let
  val zs = replicate (length arg_tys) "z"
  val (newargnames, lthy') = Variable.variant_fixes zs lthy;
  val newargs = map Free (newargnames ~~ arg_tys)
  
  val prem = HOLogic.mk_Trueprop (list_comb (pred, newargs))
  val goal = Logic.list_implies 
         (srules, HOLogic.mk_Trueprop (list_comb (newpred, newargs)))
in
  Goal.prove lthy' [] [prem] goal
      (fn {prems, ...} => ind_tac defs prems cnewpreds)
  |> singleton (ProofContext.export lthy' lthy)
end *}

text {* 
  In Line 3 we produce names @{text "zs"} for each type in the 
  argument type list. Line 4 makes these names unique and declares them as 
  \emph{free} (but fixed) variables in the local theory @{text "lthy'"}. 
  That means they are not (yet) schematic variables.
  In Line 5 we construct the terms corresponding to these variables. 
  The variables are applied to the predicate in Line 7 (this corresponds
  to the first premise @{text "pred zs"} of the induction principle). 
  In Lines 8 and 9, we first construct the term @{text "P zs"} 
  and then add the (substituted) introduction rules as premises. In case 
  no introduction rules are given, the conclusion of this implication needs
  to be wrapped inside a @{term Trueprop}, otherwise Isabelle's goal
  mechanism will fail. 

  In Line 11 we set up the goal to be proved; in the next line we call the
  tactic for proving the induction principle. As mentioned before, this tactic
  expects the definitions, the premise and the (certified) predicates with
  which the introduction rules have been substituted. The code in these two
  lines will return a theorem. However, it is a theorem
  proved inside the local theory @{text "lthy'"}, where the variables @{text
  "zs"} are free, but fixed (see Line 4). By exporting this theorem from @{text
  "lthy'"} (which contains the @{text "zs"} as free variables) to @{text
  "lthy"} (which does not), we obtain the desired schematic variables @{text "?zs"}.
  A testcase for this function is
*}

local_setup %gray{* fn lthy =>
let
  val newpreds = [@{term "P::nat\<Rightarrow>bool"}, @{term "Q::nat\<Rightarrow>bool"}]
  val cnewpreds = [@{cterm "P::nat\<Rightarrow>bool"}, @{cterm "Q::nat\<Rightarrow>bool"}]
  val newpred = @{term "P::nat\<Rightarrow>bool"}
  val srules =  map (subst_free (eo_preds ~~ newpreds)) eo_rules
  val intro = 
      prove_ind lthy eo_defs srules cnewpreds ((e_pred, newpred), e_arg_tys)
in
  warning (str_of_thm lthy intro); lthy
end *}

text {*
  This prints out the theorem:

  @{text [display]
  " \<lbrakk>even ?z; P 0; \<And>n. Q n \<Longrightarrow> P (Suc n); \<And>n. P n \<Longrightarrow> Q (Suc n)\<rbrakk> \<Longrightarrow> P ?z"}

  The export from @{text lthy'} to @{text lthy} in Line 13 above 
  has correctly turned the free, but fixed, @{text "z"} into a schematic 
  variable @{text "?z"}; the variables @{text "P"} and @{text "Q"} are not yet
  schematic. 

  We still have to produce the new predicates with which the introduction
  rules are substituted and iterate @{ML prove_ind} over all
  predicates. This is what the second function, named @{text inds}, does. 
*}

ML %linenosgray{*fun inds rules defs preds arg_tyss lthy  =
let
  val Ps = replicate (length preds) "P"
  val (newprednames, lthy') = Variable.variant_fixes Ps lthy
  
  val thy = ProofContext.theory_of lthy'

  val tyss' = map (fn tys => tys ---> HOLogic.boolT) arg_tyss
  val newpreds = map Free (newprednames ~~ tyss')
  val cnewpreds = map (cterm_of thy) newpreds
  val srules = map (subst_free (preds ~~ newpreds)) rules

in
  map (prove_ind lthy' defs srules cnewpreds) 
        (preds ~~ newpreds ~~ arg_tyss)
          |> ProofContext.export lthy' lthy
end*}

text {*
  In Line 3, we generate a string @{text [quotes] "P"} for each predicate. 
  In Line 4, we use the same trick as in the previous function, that is making the 
  @{text "Ps"} fresh and declaring them as free, but fixed, in
  the new local theory @{text "lthy'"}. From the local theory we extract
  the ambient theory in Line 6. We need this theory in order to certify 
  the new predicates. In Line 8, we construct the types of these new predicates
  using the given argument types. Next we turn them into terms and subsequently
  certify them (Lines 9 and 10). We can now produce the substituted introduction rules 
  (Line 11) using the function @{ML subst_free}. Lines 14 and 15 just iterate 
  the proofs for all predicates.
  From this we obtain a list of theorems. Finally we need to export the 
  fixed variables @{text "Ps"} to obtain the schematic variables @{text "?Ps"} 
  (Line 16).

  A testcase for this function is
*}

local_setup %gray {* fn lthy =>
let 
  val ind_thms = inds eo_rules eo_defs eo_preds eo_arg_tyss lthy
in
  warning (str_of_thms lthy ind_thms); lthy
end *}


text {*
  which prints out

@{text [display]
"even ?z \<Longrightarrow> ?P1 0 \<Longrightarrow> 
 (\<And>m. ?Pa1 m \<Longrightarrow> ?P1 (Suc m)) \<Longrightarrow> (\<And>m. ?P1 m \<Longrightarrow> ?Pa1 (Suc m)) \<Longrightarrow> ?P1 ?z,
odd ?z \<Longrightarrow> ?P1 0 \<Longrightarrow>
 (\<And>m. ?Pa1 m \<Longrightarrow> ?P1 (Suc m)) \<Longrightarrow> (\<And>m. ?P1 m \<Longrightarrow> ?Pa1 (Suc m)) \<Longrightarrow> ?Pa1 ?z"}

  Note that now both the @{text "?Ps"} and the @{text "?zs"} are schematic
  variables. The numbers attached to these variables have been introduced by 
  the pretty-printer and are \emph{not} important for the user. 

  This completes the code for the induction principles. The final piece
  of reasoning infrastructure we need are the introduction rules. 
*}

subsection {* Introduction Rules *}

text {*
  The proofs of the introduction rules are quite a bit
  more involved than the ones for the induction principles. 
  To ease our work here somewhat, we use the following two helper
  functions.

*}

ML{*val all_elims = fold (fn ct => fn th => th RS inst_spec ct)
val imp_elims = fold (fn th => fn th' => [th', th] MRS @{thm mp})*}

text {* 
  To see what these functions do, let us suppose we have the following three
  theorems. 
*}

lemma all_elims_test:
  fixes P::"nat \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> bool"
  shows "\<forall>x y z. P x y z" sorry

lemma imp_elims_test:
  shows "A \<longrightarrow> B \<longrightarrow> C" sorry

lemma imp_elims_test':
  shows "A" "B" sorry

text {*
  The function @{ML all_elims} takes a list of (certified) terms and instantiates
  theorems of the form @{thm [source] all_elims_test}. For example we can instantiate
  the quantifiers in this theorem with @{term a}, @{term b} and @{term c} as follows:

  @{ML_response_fake [display, gray]
"let
  val ctrms = [@{cterm \"a::nat\"}, @{cterm \"b::nat\"}, @{cterm \"c::nat\"}]
  val new_thm = all_elims ctrms @{thm all_elims_test}
in
  warning (str_of_thm_no_vars @{context} new_thm)
end"
  "P a b c"}

  Similarly, the function @{ML imp_elims} eliminates preconditions from implications. 
  For example we can eliminate the preconditions @{text "A"} and @{text "B"} from
  @{thm [source] imp_elims_test}:

  @{ML_response_fake [display, gray]
"warning (str_of_thm_no_vars @{context} 
            (imp_elims @{thms imp_elims_test'} @{thm imp_elims_test}))"
  "C"}

  To explain the proof for the introduction rule, our running example will be
  the rule:

  \begin{isabelle}
  @{prop "\<And>a b t. \<lbrakk>a \<noteq> b; fresh a t\<rbrakk> \<Longrightarrow> fresh a (Lam b t)"}
  \end{isabelle}
  
  for freshness of lambda-abstractions. We set up the proof as follows:
*}

lemma fresh_Lam:
  shows "\<And>a b t. \<lbrakk>a \<noteq> b; fresh a t\<rbrakk> \<Longrightarrow> fresh a (Lam b t)"
(*<*)oops(*>*)

text {*
  The first step will be to expand the definition of freshness
  and then introduce quantifiers and implications. For this we
  will use the tactic
*}

ML{*fun expand_tac defs =
  ObjectLogic.rulify_tac 1
  THEN rewrite_goals_tac defs
  THEN (REPEAT (resolve_tac [@{thm allI}, @{thm impI}] 1)) *}

text {*
  The first step of ``rulifying'' the lemma will turn out to be important
  later on. Applying this tactic 
*}

(*<*)
lemma fresh_Lam:
  shows "\<And>a b t. \<lbrakk>a \<noteq> b; fresh a t\<rbrakk> \<Longrightarrow> fresh a (Lam b t)"
(*>*)
apply(tactic {* expand_tac @{thms fresh_def} *})

txt {*
  we end up in the goal state

  \begin{isabelle}
  @{subgoals [display]}
  \end{isabelle}

  As you can see, there are parameters (namely @{text "a"}, @{text "b"} 
  and @{text "t"}) which come from the introduction rule and parameters
  (in the case above only @{text "fresh"}) which come from the universal
  quantification in the definition of @{term "fresh a (Lam b t)"}.
  Similarly, there are preconditions that come from the premises of the 
  rule and preconditions that come from the definition. We need to treat these 
  parameters and preconditions differently. In the code below
  we will therefore separate them into @{text "params1"} and @{text params2},
  respectively @{text "prems1"} and @{text "prems2"}. To do this 
  separation, it is best to open a subproof with the tactic 
  @{ML SUBPROOF}, since this tactic provides us
  with the parameters (as a list of @{ML_type cterm}s) and the premises
  (as a list of @{ML_type thm}s). The problem we have to overcome 
  with @{ML SUBPROOF} is, however, that this tactic always expects us to completely 
  discharge the goal (see Section~\ref{sec:simpletacs}). This is inconvenient for
  our gradual explanation of the proof here. To circumvent this inconvenience
  we use the following modified tactic: 
*}
(*<*)oops(*>*)
ML{*fun SUBPROOF_test tac ctxt = (SUBPROOF tac ctxt 1) ORELSE all_tac*}

text {*
  If the tactic inside @{ML SUBPROOF} fails, then the overall tactic will
  still succeed. With this testing tactic, we can gradually implement
  all necessary proof steps inside a subproof.
*}

text_raw {*
\begin{figure}[t]
\begin{minipage}{\textwidth}
\begin{isabelle}
*}
ML{*fun chop_print params1 params2 prems1 prems2 ctxt =
let 
  val s = ["Params1 from the rule:", str_of_cterms ctxt params1] 
        @ ["Params2 from the predicate:", str_of_cterms ctxt params2] 
        @ ["Prems1 from the rule:"] @ (map (str_of_thm ctxt) prems1) 
        @ ["Prems2 from the predicate:"] @ (map (str_of_thm ctxt) prems2) 
in 
  s |> separate "\n"
    |> implode
    |> warning
end*}
text_raw{*
\end{isabelle}
\end{minipage}
\caption{A helper function that prints out the parameters and premises that
  need to be treated differently.\label{fig:chopprint}}
\end{figure}
*}

text {*
  First we calculate the values for @{text "params1/2"} and @{text "prems1/2"}
  from @{text "params"} and @{text "prems"}, respectively. To see what is
  going on in our example, we will print out the values using the printing
  function in Figure~\ref{fig:chopprint}. Since the tactic @{ML SUBPROOF} will
  supply us with the @{text "params"} and @{text "prems"} as lists, we can 
  separate them using the function @{ML chop}. 
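
  Recall that @{ML chop} splits a list into two parts at a given index; a small
  example (with the expected result only sketched) is:

  @{ML_response_fake [display, gray]
"chop 2 [1, 2, 3, 4]"
  "([1, 2], [3, 4])"}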
*}

ML{*fun chop_test_tac preds rules =
  SUBPROOF_test (fn {params, prems, context, ...} =>
  let
    val (params1, params2) = chop (length params - length preds) params
    val (prems1, prems2) = chop (length prems - length rules) prems
  in
    chop_print params1 params2 prems1 prems2 context; no_tac
  end) *}

text {* 
  For the separation we can rely on the fact that Isabelle deterministically 
  produces parameters and premises in a goal state. The last parameters
  that were introduced come from the quantifications in the definitions.
  Therefore we only have to subtract the number of predicates (in this
  case only @{text "1"}) from the length of all parameters. Similarly
  with the @{text "prems"}: the last premises in the goal state come from 
  unfolding the definition of the predicate in the conclusion. So we can 
  just subtract the number of rules from the number of all premises. 
  Applying this tactic in our example 
*}

(*<*)
lemma fresh_Lam:
  shows "\<And>a b t. \<lbrakk>a \<noteq> b; fresh a t\<rbrakk> \<Longrightarrow> fresh a (Lam b t)"
apply(tactic {* expand_tac @{thms fresh_def} *})
(*>*)
apply(tactic {* chop_test_tac [fresh_pred] fresh_rules @{context} *})
(*<*)oops(*>*)

text {*
  gives

  \begin{isabelle}
  @{text "Params1 from the rule:"}\\
  @{text "a, b, t"}\\
  @{text "Params2 from the predicate:"}\\
  @{text "fresh"}\\
  @{text "Prems1 from the rule:"}\\
  @{term "a \<noteq> b"}\\
  @{text [break]
"\<forall>fresh.
      (\<forall>a b. a \<noteq> b \<longrightarrow> fresh a (Var b)) \<longrightarrow>
      (\<forall>a t s. fresh a t \<longrightarrow> fresh a s \<longrightarrow> fresh a (App t s)) \<longrightarrow>
      (\<forall>a t. fresh a (Lam a t)) \<longrightarrow> 
      (\<forall>a b t. a \<noteq> b \<longrightarrow> fresh a t \<longrightarrow> fresh a (Lam b t)) \<longrightarrow> fresh a t"}\\
   @{text "Prems2 from the predicate:"}\\
   @{term "\<forall>a b. a \<noteq> b \<longrightarrow> fresh a (Var b)"}\\
   @{term "\<forall>a t s. fresh a t \<longrightarrow> fresh a s \<longrightarrow> fresh a (App t s)"}\\
   @{term "\<forall>a t. fresh a (Lam a t)"}\\
   @{term "\<forall>a b t. a \<noteq> b \<longrightarrow> fresh a t \<longrightarrow> fresh a (Lam b t)"}
  \end{isabelle}


  We now have to select from @{text prems2} the premise 
  that corresponds to the introduction rule we prove, namely:

  @{term [display] "\<forall>a b t. a \<noteq> b \<longrightarrow> fresh a t \<longrightarrow> fresh a (Lam b t)"}

  To use this premise with @{ML rtac}, we need to instantiate its 
  quantifiers (with @{text params1}) and transform it into rule 
  format (using @{ML "ObjectLogic.rulify"}). So we can modify the 
  subproof as follows:
*}

ML{*fun apply_prem_tac i preds rules =
  SUBPROOF_test (fn {params, prems, context, ...} =>
  let
    val (params1, params2) = chop (length params - length preds) params
    val (prems1, prems2) = chop (length prems - length rules) prems
  in
    rtac (ObjectLogic.rulify (all_elims params1 (nth prems2 i))) 1
    THEN print_tac ""
    THEN no_tac
  end) *}

text {* 
  The argument @{text i} corresponds to the number of the 
  introduction rule we want to analyse. We will later on let it range
  from @{text 0} to the number of introduction rules.
  Below we apply this function with @{text 3}, since 
  we are proving the fourth introduction rule. 
*}

(*<*)
lemma fresh_Lam:
  shows "\<And>a b t. \<lbrakk>a \<noteq> b; fresh a t\<rbrakk> \<Longrightarrow> fresh a (Lam b t)"
apply(tactic {* expand_tac @{thms fresh_def} *})
(*>*)
apply(tactic {* apply_prem_tac 3 [fresh_pred] fresh_rules @{context} *})
(*<*)oops(*>*)

text {*
  Since we print out the goal state just after the application of 
  @{ML rtac}, we can see that, as expected, the goal state has 
  the two subgoals

  \begin{isabelle}
  @{text "1."}~@{prop "a \<noteq> b"}\\
  @{text "2."}~@{prop "fresh a t"}
  \end{isabelle}

  where the first comes from a non-recursive premise of the rule
  and the second comes from a recursive premise. The first goal
  can be solved immediately by @{text "prems1"}. The second
  needs more work. It can be solved with the other premise 
  in @{text "prems1"}, namely

  @{term [break,display]
  "\<forall>fresh.
      (\<forall>a b. a \<noteq> b \<longrightarrow> fresh a (Var b)) \<longrightarrow>
      (\<forall>a t s. fresh a t \<longrightarrow> fresh a s \<longrightarrow> fresh a (App t s)) \<longrightarrow>
      (\<forall>a t. fresh a (Lam a t)) \<longrightarrow> 
      (\<forall>a b t. a \<noteq> b \<longrightarrow> fresh a t \<longrightarrow> fresh a (Lam b t)) \<longrightarrow> fresh a t"}

  but we have to instantiate it appropriately. These instantiations
  come from @{text "params2"} and @{text "prems2"}. We can determine
  whether we are in the simple or complicated case by checking whether
  the topmost connective is an @{text "\<forall>"}. The premises in the simple
  case cannot have such a quantification, since the first step 
  of @{ML "expand_tac"} was the ``rulification'' of the lemma. 
  The premise in the complicated case must have at least one @{text "\<forall>"}
  coming from the quantification over the @{text preds}. So 
  we can implement the following function
*}

ML{*fun prepare_prem params2 prems2 prem =  
  rtac (case prop_of prem of
           _ $ (Const (@{const_name All}, _) $ _) =>
                 prem |> all_elims params2 
                      |> imp_elims prems2
         | _ => prem) *}

text {* 
  which either applies the premise outright (the default case) or, if 
  it has an outermost universal quantification, instantiates it first 
  with @{text "params2"} and then @{text "prems2"}. The following 
  tactic will therefore prove the lemma completely.
*}

ML{*fun prove_intro_tac i preds rules =
  SUBPROOF (fn {params, prems, ...} =>
  let
    val (params1, params2) = chop (length params - length preds) params
    val (prems1, prems2) = chop (length prems - length rules) prems
  in
    rtac (ObjectLogic.rulify (all_elims params1 (nth prems2 i))) 1
    THEN EVERY1 (map (prepare_prem params2 prems2) prems1)
  end) *}

text {*
  The full proof of the introduction rule is now as follows:
*}

lemma fresh_Lam:
  shows "\<And>a b t. \<lbrakk>a \<noteq> b; fresh a t\<rbrakk> \<Longrightarrow> fresh a (Lam b t)"
apply(tactic {* expand_tac @{thms fresh_def} *})
apply(tactic {* prove_intro_tac 3 [fresh_pred] fresh_rules @{context} 1 *})
done

text {* 
  Unfortunately, not everything is done yet. If you look closely
  at the general principle outlined in Section~\ref{sec:nutshell}, 
  you will see that we have not yet dealt with the case where recursive 
  premises in a rule have preconditions @{text Bs}. The introduction rule
  of the accessible part is such a rule. 
*}

lemma accpartI:
  shows "\<And>x. (\<And>y. R y x \<Longrightarrow> accpart R y) \<Longrightarrow> accpart R x"
apply(tactic {* expand_tac @{thms accpart_def} *})
apply(tactic {* chop_test_tac [acc_pred] acc_rules @{context} *})
apply(tactic {* apply_prem_tac 0 [acc_pred] acc_rules @{context} *})

txt {*
  Here @{ML chop_test_tac} prints out the following
  values for @{text "params1/2"} and @{text "prems1/2"}

  \begin{isabelle}
  @{text "Params1 from the rule:"}\\
  @{text "x"}\\
  @{text "Params2 from the predicate:"}\\
  @{text "P"}\\
  @{text "Prems1 from the rule:"}\\
  @{text "R ?y x \<Longrightarrow> \<forall>P. (\<forall>x. (\<forall>y. R y x \<longrightarrow> P y) \<longrightarrow> P x) \<longrightarrow> P ?y"}\\
  @{text "Prems2 from the predicate:"}\\
  @{term "\<forall>x. (\<forall>y. R y x \<longrightarrow> P y) \<longrightarrow> P x"}\\
  \end{isabelle}

  and after application of the introduction rule 
  using @{ML apply_prem_tac}, we are in the goal state

  \begin{isabelle}
  @{text "1."}~@{term "\<And>y. R y x \<Longrightarrow> P y"}
  \end{isabelle}
  
  
*}(*<*)oops(*>*)

text {*
  In order to make progress as before, we have to use the precondition
  @{text "R y x"} (in general there can be many of them). The best way
  to get a handle on these preconditions is to open up another subproof,
  since the preconditions will be bound to @{text prems}. Therefore we
  modify the function @{ML prepare_prem} as follows
*}

ML %linenosgray{*fun prepare_prem params2 prems2 ctxt prem =  
  SUBPROOF (fn {prems, ...} =>
  let
    val prem' = prems MRS prem
  in 
    rtac (case prop_of prem' of
           _ $ (Const (@{const_name All}, _) $ _) =>
                 prem' |> all_elims params2 
                       |> imp_elims prems2
         | _ => prem') 1
  end) ctxt *}

text {*
  In Line 4 we use the @{text prems} from the @{ML SUBPROOF} and resolve 
  them with @{text prem}. In the simple case, that is where @{text prem} 
  comes from a non-recursive premise of the rule, @{text prems} will be 
  just the empty list and @{ML MRS} does nothing. The same happens in the 
  cases where the recursive premises of the rule do not have preconditions. 
  
  The function @{ML prove_intro_tac} only needs to give the context to
  @{ML prepare_prem} (Line 8) and is as follows.
*}

ML %linenosgray{*fun prove_intro_tac i preds rules =
  SUBPROOF (fn {params, prems, context, ...} =>
  let
    val (params1, params2) = chop (length params - length preds) params
    val (prems1, prems2) = chop (length prems - length rules) prems
  in
    rtac (ObjectLogic.rulify (all_elims params1 (nth prems2 i))) 1
    THEN EVERY1 (map (prepare_prem params2 prems2 context) prems1)
  end) *}

text {*
  With these extended functions we can also prove the introduction
  rule for the accessible part. 
*}

lemma accpartI:
  shows "\<And>x. (\<And>y. R y x \<Longrightarrow> accpart R y) \<Longrightarrow> accpart R x"
apply(tactic {* expand_tac @{thms accpart_def} *})
apply(tactic {* prove_intro_tac 0 [acc_pred] acc_rules @{context} 1 *})
done

text {*
  Finally we need two functions that string everything together. The first
  function is the tactic that performs the proofs.
*}

ML %linenosgray{*fun intro_tac defs rules preds i ctxt =
  EVERY1 [ObjectLogic.rulify_tac,
          K (rewrite_goals_tac defs),
          REPEAT o (resolve_tac [@{thm allI}, @{thm impI}]),
          prove_intro_tac i preds rules ctxt]*}

text {*
  Lines 2 to 4 correspond to the function @{ML expand_tac}. Some testcases
  for this tactic are:
*}

lemma even0_intro:
  shows "even 0"
by (tactic {* intro_tac eo_defs eo_rules eo_preds 0 @{context} *})


lemma evenS_intro:
  shows "\<And>m. odd m \<Longrightarrow> even (Suc m)"
by (tactic {* intro_tac eo_defs eo_rules eo_preds 1 @{context} *})

lemma fresh_App:
  shows "\<And>a t s. \<lbrakk>fresh a t; fresh a s\<rbrakk> \<Longrightarrow> fresh a (App t s)"
by (tactic {* 
  intro_tac @{thms fresh_def} fresh_rules [fresh_pred] 1 @{context} *})

text {*
  The second function sets up the goals in Line 4 (in this case this is easy
  since they are exactly the introduction rules given by the user)
  and iterates @{ML intro_tac} over all introduction rules.
*}

ML %linenosgray{*fun intros rules preds defs lthy = 
let
  fun intros_aux (i, goal) =
    Goal.prove lthy [] [] goal
      (fn {context, ...} => intro_tac defs rules preds i context)
in
  map_index intros_aux rules
end*}
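
text {*
  A testcase for this function, using the shorthands for the 
  @{text even}-@{text odd} example, might be:
*}

local_setup %gray {* fn lthy =>
let 
  val intro_thms = intros eo_rules eo_preds eo_defs lthy
in
  warning (str_of_thms_no_vars lthy intro_thms); lthy
end *}

text {*
  which should print out the introduction rules for @{text "even"} and 
  @{text "odd"}, now proved from their definitions.
*}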

subsection {* Main Function *}

text {* 
  The main internal function, named @{text "add_inductive"}, strings together 
  the code developed in the previous sections and stores the resulting 
  theorems using @{ML LocalTheory.notes}.
*}


ML %linenosgray{*fun add_inductive pred_specs rule_specs lthy =
let
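  (* extract names, syntax annotations, types and rules from the specifications *)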
  val syns = map snd pred_specs
  val pred_specs' = map fst pred_specs
  val prednames = map fst pred_specs'
  val preds = map (fn (p, ty) => Free (Binding.name_of p, ty)) pred_specs'

  val tyss = map (binder_types o fastype_of) preds   
  val (attrs, rules) = split_list rule_specs    

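  (* make the definitions and prove the induction principles and introduction rules *)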
  val (defs, lthy') = defns rules preds prednames syns tyss lthy      
  val ind_rules = inds rules defs preds tyss lthy' 	
  val intro_rules = intros rules preds defs lthy'

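  (* qualifier for the theorem names and case names for the induction principles *)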
  val mut_name = space_implode "_" (map Binding.name_of prednames)
  val case_names = map (Binding.name_of o fst) attrs
in
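    (* store the introduction rules and induction principles under appropriate names *)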
    lthy' 
    |> LocalTheory.notes Thm.theoremK (map (fn (((a, atts), _), th) =>
        ((Binding.qualify false mut_name a, atts), [([th], [])])) (rule_specs ~~ intro_rules)) 
    |-> (fn intross => LocalTheory.note Thm.theoremK
         ((Binding.qualify false mut_name (@{binding "intros"}), []), maps snd intross)) 
    |>> snd 
    ||>> (LocalTheory.notes Thm.theoremK (map (fn (((R, _), _), th) =>
         ((Binding.qualify false (Binding.name_of R) (@{binding "induct"}),
          [Attrib.internal (K (RuleCases.case_names case_names)),
           Attrib.internal (K (RuleCases.consumes 1)),
           Attrib.internal (K (Induct.induct_pred ""))]), [([th], [])]))
          (pred_specs ~~ ind_rules)) #>> maps snd) 
    |> snd
end*}

ML{*fun add_inductive_cmd pred_specs rule_specs lthy =
let
  val ((pred_specs', rule_specs'), _) = 
         Specification.read_spec pred_specs rule_specs lthy
in
  add_inductive pred_specs' rule_specs' lthy
end*} 

ML{*val spec_parser = 
   OuterParse.fixes -- 
   Scan.optional 
     (OuterParse.$$$ "where" |--
        OuterParse.!!! 
          (OuterParse.enum1 "|" 
             (SpecParse.opt_thm_name ":" -- OuterParse.prop))) []*}

ML{*val specification =
  spec_parser >>
    (fn ((pred_specs), rule_specs) => add_inductive_cmd pred_specs rule_specs)*}

ML{*val _ = OuterSyntax.local_theory "simple_inductive" 
              "define inductive predicates"
                 OuterKeyword.thy_decl specification*}

text {*
  Things to include at the end:

  \begin{itemize}
  \item include the code for the parameters
  \item say something about add-inductive-i to return
  the rules
  \item say that the induction principle is weaker (weaker than
  what the standard inductive package generates)
  \item say that no conformity test is done
  \item exercise about strong induction principles
  \item exercise about the test for the intro rules
  \end{itemize}
  
*}

simple_inductive
  Even and Odd
where
  Even0: "Even 0"
| EvenS: "Odd n \<Longrightarrow> Even (Suc n)"
| OddS: "Even n \<Longrightarrow> Odd (Suc n)"

end