
theory Ind_Code
imports Ind_General_Scheme "../First_Steps" 
begin

section \<open>The Gory Details\label{sec:code}\<close> 

text \<open>
  As mentioned before, the code falls roughly into three parts: the code that deals
  with the definitions, with the induction principles and with the introduction
  rules. In addition there are some administrative functions that string everything 
  together.
\<close>

subsection \<open>Definitions\<close>

text \<open>
  We first have to produce for each predicate the user specifies an appropriate
  definition, whose general form is

  @{text [display] "pred \<equiv> \<lambda>zs. \<forall>preds. orules \<longrightarrow> pred zs"}

  and then ``register'' the definition inside a local theory. 
  To do the latter, we use the following wrapper for the function
  @{ML_ind define in Local_Theory}. The wrapper takes a predicate name, a syntax
  annotation and a term representing the right-hand side of the definition.
\<close>

ML %linenosgray\<open>fun make_defn ((predname, mx), trm) lthy =
let 
  val arg = ((predname, mx), (Binding.empty_atts, trm))
  val ((_, (_ , thm)), lthy') = Local_Theory.define arg lthy
in 
  (thm, lthy') 
end\<close>
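text \<open>
  A small testcase for this wrapper is the following; the constant name
  \<open>MyTrue\<close> is made up purely for illustration and we return the unchanged
  \<open>lthy\<close>, so the testcase leaves no trace in the ambient theory.
\<close>

local_setup %gray \<open>fn lthy =>
let
  val arg = ((@{binding "MyTrue"}, NoSyn), @{term True})
  val (def, lthy') = make_defn arg lthy
in
  pwriteln (pretty_thm lthy' def); lthy
end\<close>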

text \<open>
  The function @{ML make_defn} returns the definition (as a theorem) and the local theory in which the
  definition has been made. We use @{ML_ind empty_atts in Binding} in Line 3, 
  since the definitions for our inductive predicates are not meant to be seen 
  by the user and therefore do not need to have any theorem attributes. 
 
  The next two functions construct the right-hand sides of the definitions, 
  which are terms whose general form is:

  @{text [display] "\<lambda>zs. \<forall>preds. orules \<longrightarrow> pred zs"}

  When constructing these terms, the variables \<open>zs\<close> need to be chosen so 
  that they do not occur in the \<open>orules\<close> and are also distinct from the 
  \<open>preds\<close>.


  The first function, named \<open>defn_aux\<close>, constructs the term for one
  particular predicate (the argument \<open>pred\<close> in the code below). The
  number of arguments of this predicate is determined by the number of
  argument types given in \<open>arg_tys\<close>. The other arguments of the
  function are the \<open>orules\<close> and all the \<open>preds\<close>.
\<close>

ML %linenosgray\<open>fun defn_aux lthy orules preds (pred, arg_tys) =
let 
  fun mk_all x P = HOLogic.all_const (fastype_of x) $ lambda x P

  val fresh_args = 
        arg_tys 
        |> map (pair "z")
        |> Variable.variant_frees lthy (preds @ orules) 
        |> map Free
in
  list_comb (pred, fresh_args)
  |> fold_rev (curry HOLogic.mk_imp) orules
  |> fold_rev mk_all preds
  |> fold_rev lambda fresh_args 
end\<close>

text \<open>
  The function \<open>mk_all\<close> in Line 3 is just a helper function for constructing 
  universal quantifications. The code in Lines 5 to 9 produces the fresh \<open>zs\<close>. For this it pairs every argument type with the string
  @{text [quotes] "z"} (Line 7); then generates variants for all these strings
  so that they are unique w.r.t.~to the predicates and \<open>orules\<close> (Line 8);
  in Line 9 it generates the corresponding variable terms for the unique
  strings.

  The unique variables are applied to the predicate in Line 11 using the
  function @{ML list_comb}; then the \<open>orules\<close> are prefixed (Line 12); in
  Line 13 we quantify over all predicates; and in Line 14 we just abstract
  over all the \<open>zs\<close>, i.e., the fresh arguments of the
  predicate. A testcase for this function is
\<close>

local_setup %gray \<open>fn lthy =>
let
  val def = defn_aux lthy eo_orules eo_preds (e_pred, e_arg_tys)
in
  pwriteln (pretty_term lthy def); lthy
end\<close>

text \<open>
  where we use the shorthands defined in Figure~\ref{fig:shorthands}.
  The testcase calls @{ML defn_aux} for the predicate \<open>even\<close> and prints
  out the generated definition. So we obtain as printout 

  @{text [display] 
"\<lambda>z. \<forall>even odd. (even 0) \<longrightarrow> (\<forall>n. odd n \<longrightarrow> even (Suc n)) 
                         \<longrightarrow> (\<forall>n. even n \<longrightarrow> odd (Suc n)) \<longrightarrow> even z"}

  If we try out the function with the rules for freshness
\<close>

local_setup %gray \<open>fn lthy =>
let
  val arg = (fresh_pred, fresh_arg_tys)
  val def = defn_aux lthy fresh_orules [fresh_pred] arg
in
  pwriteln (pretty_term lthy def); lthy
end\<close>


text \<open>
  we obtain

  @{term [display] 
"\<lambda>z za. \<forall>fresh. (\<forall>a b. \<not> a = b \<longrightarrow> fresh a (Var b)) \<longrightarrow>
               (\<forall>a s t. fresh a t \<longrightarrow> fresh a s \<longrightarrow> fresh a (App t s)) \<longrightarrow>
                (\<forall>a t. fresh a (Lam a t)) \<longrightarrow>
                (\<forall>a b t. \<not> a = b \<longrightarrow> fresh a t \<longrightarrow> fresh a (Lam b t)) \<longrightarrow> fresh z za"}


  The second function, named \<open>defns\<close>, has to iterate the function
  @{ML defn_aux} over all predicates. The argument \<open>preds\<close> is again
  the list of predicates as @{ML_type term}s; the argument \<open>prednames\<close> is the list of binding names of the predicates; \<open>mxs\<close> 
  is the list of syntax, or mixfix, annotations for the predicates; 
  \<open>arg_tyss\<close> is the list of argument-type-lists.
\<close>

ML %linenosgray\<open>fun defns rules preds prednames mxs arg_tyss lthy =
let
  val orules = map (Object_Logic.atomize_term lthy) rules
  val defs = map (defn_aux lthy orules preds) (preds ~~ arg_tyss) 
in
  fold_map make_defn (prednames ~~ mxs ~~ defs) lthy
end\<close>
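text \<open>
  To get a feel for what the atomization in Line 3 does, we can apply 
  @{ML Object_Logic.atomize_term} to the even/odd rules from 
  Figure~\ref{fig:shorthands} (just a quick check using the shorthands):
\<close>

ML %grayML\<open>let
  val ctxt = @{context}
in
  eo_rules
  |> map (Object_Logic.atomize_term ctxt)
  |> map (pretty_term ctxt)
  |> Pretty.chunks
  |> pwriteln
end\<close>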

text \<open>
  The user will state the introduction rules using meta-implications and
  meta-quanti\-fications. In Line 3, we transform these introduction rules
  into the object logic (since definitions cannot be stated with
  meta-connectives). This transformation is done with the function
  @{ML_ind  atomize_term in Object_Logic}, which takes the local theory as
  its context argument. The call
  to @{ML defn_aux} in Line 4 produces all right-hand sides of the
  definitions. The actual definitions are then made in Line 6.  The result of
  the function is a list of theorems and a local theory (the theorems are
  registered with the local theory). A testcase for this function is
\<close>

local_setup %gray \<open>fn lthy =>
let
  val (defs, lthy') = 
    defns eo_rules eo_preds eo_prednames eo_mxs eo_arg_tyss lthy
in
  pwriteln (pretty_thms_no_vars lthy' defs); lthy
end\<close>

text \<open>
  where we feed into the function all parameters corresponding to
  the \<open>even\<close>/\<open>odd\<close> example. The definitions we obtain
  are:

  @{text [display, break]
"even \<equiv> \<lambda>z. \<forall>even odd. (even 0) \<longrightarrow> (\<forall>n. odd n \<longrightarrow> even (Suc n))  
                                \<longrightarrow> (\<forall>n. even n \<longrightarrow> odd (Suc n)) \<longrightarrow> even z,
odd \<equiv> \<lambda>z. \<forall>even odd. (even 0) \<longrightarrow> (\<forall>n. odd n \<longrightarrow> even (Suc n)) 
                               \<longrightarrow> (\<forall>n. even n \<longrightarrow> odd (Suc n)) \<longrightarrow> odd z"}

  Note that in the testcase we return the local theory \<open>lthy\<close> 
  (not the modified \<open>lthy'\<close>). As a result the testcase has no effect
  on the ambient theory. The reason is that if we introduced the
  definitions again, we would pollute the name space with two versions of 
  \<open>even\<close> and \<open>odd\<close>. We want to avoid this here.

  This completes the code for introducing the definitions. Next we deal with
  the induction principles. 
\<close>

subsection \<open>Induction Principles\<close>

text \<open>
  Recall that the manual proof for the induction principle 
  of \<open>even\<close> was:
\<close>

lemma manual_ind_prin_even: 
assumes prem: "even z"
shows "P 0 \<Longrightarrow> (\<And>m. Q m \<Longrightarrow> P (Suc m)) \<Longrightarrow> (\<And>m. P m \<Longrightarrow> Q (Suc m)) \<Longrightarrow> P z"
apply(atomize (full))
apply(cut_tac prem)
apply(unfold even_def)
apply(drule spec[where x=P])
apply(drule spec[where x=Q])
apply(assumption)
done

text \<open>
  The code for automating such induction principles has to accomplish two tasks: 
  constructing the induction principles from the given introduction
  rules and then automatically generating proofs for them using a tactic. 
  
  The tactic will use the following helper function for instantiating universal 
  quantifiers. 
\<close>

ML %grayML\<open>fun inst_spec ctrm =
let
  val cty = Thm.ctyp_of_cterm ctrm
in 
  Thm.instantiate' [SOME cty] [NONE, SOME ctrm] @{thm spec} 
end\<close>
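text \<open>
  A quick check of @{ML inst_spec}: applying it to the certified term 
  @{text [quotes] "a::nat"} should give an instance of @{thm [source] spec} 
  that is specialised to \<open>a\<close> (a small sketch, relying on the 
  pretty-printing functions from \<open>First_Steps\<close>).
\<close>

ML %grayML\<open>pwriteln (pretty_thm @{context} (inst_spec @{cterm "a::nat"}))\<close>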

text \<open>
  This helper function uses the function @{ML_ind instantiate' in Thm}
  and instantiates the \<open>?x\<close> in the theorem @{thm spec} with a given
  @{ML_type cterm}. We call this helper function in the following
  tactic.\label{fun:instspectac}
\<close>

ML %grayML\<open>fun inst_spec_tac ctxt ctrms = 
  EVERY' (map (dresolve_tac ctxt o single o inst_spec) ctrms)\<close>

text \<open>
  This tactic expects a list of @{ML_type cterm}s. It allows us in the 
  proof below to instantiate the three quantifiers in the assumption. 
\<close>

lemma 
fixes P::"nat \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> bool"
shows "\<forall>x y z. P x y z \<Longrightarrow> True"
apply (tactic \<open>
  inst_spec_tac @{context} [@{cterm "a::nat"},@{cterm "b::nat"},@{cterm "c::nat"}] 1\<close>)
txt \<open>
  We obtain the goal state

  \begin{minipage}{\textwidth}
  @{subgoals} 
  \end{minipage}\<close>
(*<*)oops(*>*)

text \<open>
  The complete tactic for proving the induction principles can now
  be implemented as follows:
\<close>

ML %linenosgray\<open>fun ind_tac ctxt defs prem insts =
  EVERY1 [Object_Logic.full_atomize_tac ctxt,
          cut_facts_tac prem,
          rewrite_goal_tac ctxt defs,
          inst_spec_tac ctxt insts,
          assume_tac ctxt]\<close>

text \<open>
  We have to give it as arguments the definitions, the premise (a list of
  formulae) and the instantiations. The premise is \<open>even n\<close> in lemma
  @{thm [source] manual_ind_prin_even} shown above; in our code it will always be a list
  consisting of a single formula. Compare this tactic with the manual proof
  for the lemma @{thm [source] manual_ind_prin_even}: as you can see there is
  almost a one-to-one correspondence between the \isacommand{apply}-script and
  the @{ML ind_tac}. We first rewrite the goal to use only object connectives (Line 2),
  "cut in" the premise (Line 3), unfold the definitions (Line 4), instantiate
  the assumptions of the goal (Line 5) and then conclude with @{ML assume_tac}.

  Two testcases for this tactic are:
\<close>

lemma automatic_ind_prin_even:
assumes prem: "even z"
shows "P 0 \<Longrightarrow> (\<And>m. Q m \<Longrightarrow> P (Suc m)) \<Longrightarrow> (\<And>m. P m \<Longrightarrow> Q (Suc m)) \<Longrightarrow> P z"
by (tactic \<open>ind_tac @{context} eo_defs @{thms prem} 
                    [@{cterm "P::nat\<Rightarrow>bool"}, @{cterm "Q::nat\<Rightarrow>bool"}]\<close>)

lemma automatic_ind_prin_fresh:
assumes prem: "fresh z za" 
shows "(\<And>a b. a \<noteq> b \<Longrightarrow> P a (Var b)) \<Longrightarrow> 
        (\<And>a t s. \<lbrakk>P a t; P a s\<rbrakk> \<Longrightarrow> P a (App t s)) \<Longrightarrow>
        (\<And>a t. P a (Lam a t)) \<Longrightarrow> 
        (\<And>a b t. \<lbrakk>a \<noteq> b; P a t\<rbrakk> \<Longrightarrow> P a (Lam b t)) \<Longrightarrow> P z za"
by (tactic \<open>ind_tac @{context} @{thms fresh_def} @{thms prem} 
                          [@{cterm "P::string\<Rightarrow>trm\<Rightarrow>bool"}]\<close>)


text \<open>
  While the tactic for proving the induction principles is relatively simple,
  it will be a bit more work to construct the goals from the introduction rules
  the user provides.  Therefore let us have a closer look at the first 
  proved theorem:

  \begin{isabelle}
  \isacommand{thm}~@{thm [source] automatic_ind_prin_even}\\
  \<open>> \<close>~@{thm automatic_ind_prin_even}
  \end{isabelle}

  The variables \<open>z\<close>, \<open>P\<close> and \<open>Q\<close> are schematic
  variables (since they are not quantified in the lemma). These 
  variables must be schematic, otherwise they cannot be instantiated 
  by the user. To generate these schematic variables we use a common trick
  in Isabelle programming: we first declare them as \emph{free}, 
  \emph{but fixed}, and then use the infrastructure to turn them into 
  schematic variables.

  In general we have to construct for each predicate \<open>pred\<close> a goal 
  of the form

  @{text [display] 
  "pred ?zs \<Longrightarrow> rules[preds := ?Ps] \<Longrightarrow> ?P ?zs"}

  where the predicates \<open>preds\<close> are replaced in \<open>rules\<close> by new 
  distinct variables \<open>?Ps\<close>. We also need to generate fresh arguments 
  \<open>?zs\<close> for the predicate  \<open>pred\<close> and the \<open>?P\<close> in 
  the conclusion. 

  We generate these goals in two steps. The first function, named \<open>prove_ind\<close>, 
  expects that the introduction rules are already appropriately substituted. The argument
  \<open>srules\<close> stands for these substituted rules; \<open>cnewpreds\<close> are
  the certified terms corresponding to the variables \<open>?Ps\<close>; \<open>pred\<close> is the predicate for which we prove the induction principle;
  \<open>newpred\<close> is its replacement and \<open>arg_tys\<close> are the argument
  types of this predicate.
\<close>

ML %linenosgray\<open>fun prove_ind lthy defs srules cnewpreds ((pred, newpred), arg_tys) =
let
  val zs = replicate (length arg_tys) "z"
  val (newargnames, lthy') = Variable.variant_fixes zs lthy;
  val newargs = map Free (newargnames ~~ arg_tys)
  
  val prem = HOLogic.mk_Trueprop (list_comb (pred, newargs))
  val goal = Logic.list_implies 
         (srules, HOLogic.mk_Trueprop (list_comb (newpred, newargs)))
in
  Goal.prove lthy' [] [prem] goal
      (fn {prems, context, ...} => ind_tac context defs prems cnewpreds)
  |> singleton (Proof_Context.export lthy' lthy)
end\<close>

text \<open>
  In Line 3 we produce names \<open>zs\<close> for each type in the 
  argument type list. Line 4 makes these names unique and declares them as 
  free, but fixed, variables in the local theory \<open>lthy'\<close>. 
  That means they are not schematic variables (yet).
  In Line 5 we construct the terms corresponding to these variables. 
  The variables are applied to the predicate in Line 7 (this corresponds
  to the first premise \<open>pred zs\<close> of the induction principle). 
  In Lines 8 and 9, we first construct the term  \<open>P zs\<close> 
  and then add the (substituted) introduction rules as preconditions. In 
  case no introduction rules are given, the conclusion of this 
  implication needs to be wrapped inside a @{term Trueprop}, otherwise 
  Isabelle's goal mechanism will fail.\footnote{FIXME: check with 
  Stefan...is this so?} 

  In Line 11 we set up the goal to be proved using the function @{ML_ind 
  prove in Goal}; in the next line we call the tactic for proving the
  induction principle. As mentioned before, this tactic expects the
  definitions, the premise and the (certified) predicates with which the
  introduction rules have been substituted. The code in these two lines will
  return a theorem. However, it is a theorem proved inside the local theory
  \<open>lthy'\<close>, where the variables \<open>zs\<close> are free, but fixed (see
  Line 4). By exporting this theorem from \<open>lthy'\<close> (which contains the
  \<open>zs\<close> as free variables) to \<open>lthy\<close> (which does not), we
  obtain the desired schematic variables \<open>?zs\<close>.  A testcase for this
  function is
\<close>

local_setup %gray \<open>fn lthy =>
let
  val newpreds = [@{term "P::nat \<Rightarrow> bool"}, @{term "Q::nat \<Rightarrow> bool"}]
  val cnewpreds = [@{cterm "P::nat \<Rightarrow> bool"}, @{cterm "Q::nat \<Rightarrow> bool"}]
  val newpred = @{term "P::nat \<Rightarrow> bool"}
  val srules =  map (subst_free (eo_preds ~~ newpreds)) eo_rules
  val intro = 
      prove_ind lthy eo_defs srules cnewpreds ((e_pred, newpred), e_arg_tys)
in
  pwriteln (pretty_thm lthy intro); lthy
end\<close>

text \<open>
  This prints out the theorem:

  @{text [display]
  " \<lbrakk>even ?z; P 0; \<And>n. Q n \<Longrightarrow> P (Suc n); \<And>n. P n \<Longrightarrow> Q (Suc n)\<rbrakk> \<Longrightarrow> P ?z"}

  The export from \<open>lthy'\<close> to \<open>lthy\<close> in Line 13 above 
  has correctly turned the free, but fixed, \<open>z\<close> into a schematic 
  variable \<open>?z\<close>; the variables \<open>P\<close> and \<open>Q\<close> are not yet
  schematic. 

  We still have to produce the new predicates with which the introduction
  rules are substituted and iterate @{ML prove_ind} over all
  predicates. This is what the second function, named \<open>inds\<close> does. 
\<close>

ML %linenosgray\<open>fun inds rules defs preds arg_tyss lthy  =
let
  val Ps = replicate (length preds) "P"
  val (newprednames, lthy') = Variable.variant_fixes Ps lthy
  
  val tyss' = map (fn tys => tys ---> HOLogic.boolT) arg_tyss
  val newpreds = map Free (newprednames ~~ tyss')
  val cnewpreds = map (Thm.cterm_of lthy') newpreds
  val srules = map (subst_free (preds ~~ newpreds)) rules

in
  map (prove_ind lthy' defs srules cnewpreds) 
        (preds ~~ newpreds ~~ arg_tyss)
          |> Proof_Context.export lthy' lthy
end\<close>

text \<open>
  In Line 3, we generate a string @{text [quotes] "P"} for each predicate. 
  In Line 4, we use the same trick as in the previous function, that is making the 
  \<open>Ps\<close> fresh and declaring them as free, but fixed, in
  the new local theory \<open>lthy'\<close>. In Line 6, we construct the types of these new predicates
  using the given argument types. Next we turn them into terms and subsequently
  certify them (Lines 7 and 8). We can now produce the substituted introduction rules 
  (Line 9) using the function @{ML_ind subst_free in Term}. Lines 12 and 13 just iterate 
  the proofs for all predicates.
  From this we obtain a list of theorems. Finally we need to export the 
  fixed variables \<open>Ps\<close> to obtain the schematic variables \<open>?Ps\<close> 
  (Line 14).

  A testcase for this function is
\<close>

local_setup %gray \<open>fn lthy =>
let 
  val ind_thms = inds eo_rules eo_defs eo_preds eo_arg_tyss lthy
in
  pwriteln (pretty_thms lthy ind_thms); lthy
end\<close>


text \<open>
  which prints out

@{text [display]
"even ?z \<Longrightarrow> ?P1 0 \<Longrightarrow> 
 (\<And>m. ?Pa1 m \<Longrightarrow> ?P1 (Suc m)) \<Longrightarrow> (\<And>m. ?P1 m \<Longrightarrow> ?Pa1 (Suc m)) \<Longrightarrow> ?P1 ?z,
odd ?z \<Longrightarrow> ?P1 0 \<Longrightarrow>
 (\<And>m. ?Pa1 m \<Longrightarrow> ?P1 (Suc m)) \<Longrightarrow> (\<And>m. ?P1 m \<Longrightarrow> ?Pa1 (Suc m)) \<Longrightarrow> ?Pa1 ?z"}

  Note that now both the \<open>?Ps\<close> and the \<open>?zs\<close> are schematic
  variables. The numbers attached to these variables have been introduced by 
  the pretty-printer and are \emph{not} important for the user. 

  This completes the code for the induction principles. The final piece
  of reasoning infrastructure we need are the introduction rules. 
\<close>

subsection \<open>Introduction Rules\<close>

text \<open>
  Constructing the goals for the introduction rules is easy: they
  are just the rules given by the user. However, their proofs are 
  quite a bit more involved than the ones for the induction principles. 
  To explain the general method, our running example will be
  the introduction rule

  \begin{isabelle}
  @{prop "\<And>a b t. \<lbrakk>a \<noteq> b; fresh a t\<rbrakk> \<Longrightarrow> fresh a (Lam b t)"}
  \end{isabelle}
  
  about freshness for lambdas. In order to ease our work here 
  somewhat, we use the following two helper functions.
\<close>

ML %grayML\<open>val all_elims = fold (fn ct => fn th => th RS inst_spec ct)
val imp_elims = fold (fn th => fn th' => [th', th] MRS @{thm mp})\<close>

text \<open>
  To see what these functions do, let us suppose we have the following three
  theorems. 
\<close>

lemma all_elims_test:
fixes P::"nat \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> bool"
shows "\<forall>x y z. P x y z" sorry

lemma imp_elims_test:
shows "A \<longrightarrow> B \<longrightarrow> C" sorry

lemma imp_elims_test':
shows "A" "B" sorry

text \<open>
  The function @{ML all_elims} takes a list of (certified) terms and instantiates
  theorems of the form @{thm [source] all_elims_test}. For example we can instantiate
  the quantifiers in this theorem with @{term a}, @{term b} and @{term c} as follows:

  @{ML_response [display, gray]
\<open>let
  val ctrms = [@{cterm "a::nat"}, @{cterm "b::nat"}, @{cterm "c::nat"}]
  val new_thm = all_elims ctrms @{thm all_elims_test}
in
  pwriteln (pretty_thm_no_vars @{context} new_thm)
end\<close>
  \<open>P a b c\<close>}

  Note the difference with @{ML inst_spec_tac} from Page~\pageref{fun:instspectac}:
  @{ML inst_spec_tac} is a tactic which operates on a goal state; in contrast
  @{ML all_elims} operates on theorems. 

  Similarly, the function @{ML imp_elims} eliminates preconditions from implications. 
  For example we can eliminate the preconditions \<open>A\<close> and \<open>B\<close> from
  @{thm [source] imp_elims_test}:

  @{ML_response [display, gray]
\<open>let
  val res = imp_elims @{thms imp_elims_test'} @{thm imp_elims_test}
in
  pwriteln (pretty_thm_no_vars @{context} res)
end\<close>
  \<open>Q\<close>}

  Now we set up the proof for the introduction rule as follows:
\<close>

lemma fresh_Lam:
shows "\<And>a b t. \<lbrakk>a \<noteq> b; fresh a t\<rbrakk> \<Longrightarrow> fresh a (Lam b t)"
(*<*)oops(*>*)

text \<open>
  The first step in the proof will be to expand the definitions of freshness
  and then introduce quantifiers and implications. For this we
  will use the tactic
\<close>

ML %linenosgray\<open>fun expand_tac ctxt defs =
  Object_Logic.rulify_tac ctxt 1
  THEN rewrite_goal_tac ctxt defs 1
  THEN (REPEAT (resolve_tac ctxt [@{thm allI}, @{thm impI}] 1))\<close>

text \<open>
  The function in Line 2 ``rulifies'' the lemma.\footnote{FIXME: explain this better} 
  This will turn out to 
  be important later on. Applying this tactic in our proof of \<open>fresh_Lam\<close>
\<close>

(*<*)
lemma fresh_Lam:
shows "\<And>a b t. \<lbrakk>a \<noteq> b; fresh a t\<rbrakk> \<Longrightarrow> fresh a (Lam b t)"
(*>*)
apply(tactic \<open>expand_tac @{context} @{thms fresh_def}\<close>)

txt \<open>
  gives us the goal state

  \begin{isabelle}
  @{subgoals [display]}
  \end{isabelle}

  As you can see, there are parameters (namely \<open>a\<close>, \<open>b\<close> and
  \<open>t\<close>) which come from the introduction rule and parameters (in the
  case above only \<open>fresh\<close>) which come from the universal
  quantification in the definition of @{term "fresh a (Lam b t)"}.  Similarly,
  there are assumptions that come from the premises of the rule (namely the
  first two) and assumptions from the definition of the predicate (assumption
  three to six). We need to treat these parameters and assumptions
  differently. In the code below we will therefore separate them into \<open>params1\<close> and \<open>params2\<close>, and into \<open>prems1\<close> and \<open>prems2\<close>, respectively. To do this separation, it is best to open a subproof with the
  tactic @{ML_ind SUBPROOF in Subgoal}, since this tactic provides us with the parameters (as
  list of @{ML_type cterm}s) and the assumptions (as list of @{ML_type thm}s). 
  The problem with @{ML SUBPROOF}, however, is that it always expects us to 
  completely discharge the goal (see Section~\ref{sec:simpletacs}). This is 
  a bit inconvenient for our gradual explanation of the proof here. Therefore
  we first use the function @{ML_ind  FOCUS in Subgoal}, which does the same as @{ML SUBPROOF} 
  but does not require us to completely discharge the goal. 
\<close>
(*<*)oops(*>*)
text_raw \<open>
\begin{figure}[t]
\begin{minipage}{\textwidth}
\begin{isabelle}
\<close>
ML %grayML\<open>fun chop_print params1 params2 prems1 prems2 ctxt =
let 
 val pps = [Pretty.big_list "Params1 from the rule:" (map (pretty_cterm ctxt) params1), 
            Pretty.big_list "Params2 from the predicate:" (map (pretty_cterm ctxt) params2), 
            Pretty.big_list "Prems1 from the rule:" (map (pretty_thm ctxt) prems1),
            Pretty.big_list "Prems2 from the predicate:" (map (pretty_thm ctxt) prems2)] 
in 
  pps |> Pretty.chunks
      |> Pretty.string_of
      |> tracing
end\<close>

text_raw\<open>
\end{isabelle}
\end{minipage}
\caption{A helper function that prints out the parameters and premises that
  need to be treated differently.\label{fig:chopprint}}
\end{figure}
\<close>

text \<open>
  First we calculate the values for \<open>params1/2\<close> and \<open>prems1/2\<close>
  from \<open>params\<close> and \<open>prems\<close>, respectively. To better see what is
  going on in our example, we will print out these values using the printing
  function in Figure~\ref{fig:chopprint}. Since @{ML FOCUS in Subgoal} will
  supply us with the \<open>params\<close> and \<open>prems\<close> as lists, we can 
  separate them using the function @{ML_ind chop in Library}. 
\<close>
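text \<open>
  For example, @{ML chop} splits a list at a given position; applied to \<open>2\<close> 
  and the list \<open>[1, 2, 3, 4, 5]\<close> it returns the pair \<open>([1, 2], [3, 4, 5])\<close> 
  (a quick check):
\<close>

ML %grayML\<open>chop 2 [1, 2, 3, 4, 5]\<close>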

ML %linenosgray\<open>fun chop_test_tac preds rules =
  Subgoal.FOCUS (fn {params, prems, context, ...} =>
  let
    val cparams = map snd params
    val (params1, params2) = chop (length cparams - length preds) cparams
    val (prems1, prems2) = chop (length prems - length rules) prems
  in
    chop_print params1 params2 prems1 prems2 context; all_tac
  end)\<close>

text \<open>
  For the separation we can rely on the fact that Isabelle deterministically 
  produces parameters and premises in a goal state. The last parameters
  that were introduced come from the quantifications in the definitions
  (see the tactic @{ML expand_tac}).
  Therefore we only have to subtract in Line 5 the number of predicates (in this
  case only \<open>1\<close>) from the length of all parameters. Similarly
  with the \<open>prems\<close> in Line 6: the last premises in the goal state come from 
  unfolding the definition of the predicate in the conclusion. So we can 
  just subtract the number of rules from the number of all premises. 
  To check our calculations we print them out in Line 8 using the
  function @{ML chop_print} from Figure~\ref{fig:chopprint} and then 
  just do nothing, that is @{ML all_tac}. Applying this tactic in our example 
\<close>

(*<*)
lemma fresh_Lam:
shows "\<And>a b t. \<lbrakk>a \<noteq> b; fresh a t\<rbrakk> \<Longrightarrow> fresh a (Lam b t)"
apply(tactic \<open>expand_tac @{context} @{thms fresh_def}\<close>)
(*>*)
apply(tactic \<open>chop_test_tac [fresh_pred] fresh_rules @{context} 1\<close>)
(*<*)oops(*>*)

text \<open>
  gives

  \begin{isabelle}
  \<open>Params1 from the rule:\<close>\\
  \<open>a, b, t\<close>\\
  \<open>Params2 from the predicate:\<close>\\
  \<open>fresh\<close>\\
  \<open>Prems1 from the rule:\<close>\\
  @{term "a \<noteq> b"}\\
  @{text [break]
"\<forall>fresh.
      (\<forall>a b. a \<noteq> b \<longrightarrow> fresh a (Var b)) \<longrightarrow>
      (\<forall>a t s. fresh a t \<longrightarrow> fresh a s \<longrightarrow> fresh a (App t s)) \<longrightarrow>
      (\<forall>a t. fresh a (Lam a t)) \<longrightarrow> 
      (\<forall>a b t. a \<noteq> b \<longrightarrow> fresh a t \<longrightarrow> fresh a (Lam b t)) \<longrightarrow> fresh a t"}\\
   \<open>Prems2 from the predicate:\<close>\\
   @{term "\<forall>a b. a \<noteq> b \<longrightarrow> fresh a (Var b)"}\\
   @{term "\<forall>a t s. fresh a t \<longrightarrow> fresh a s \<longrightarrow> fresh a (App t s)"}\\
   @{term "\<forall>a t. fresh a (Lam a t)"}\\
   @{term "\<forall>a b t. a \<noteq> b \<longrightarrow> fresh a t \<longrightarrow> fresh a (Lam b t)"}
  \end{isabelle}


  We now have to select from \<open>prems2\<close> the premise 
  that corresponds to the introduction rule we prove, namely:

  @{term [display] "\<forall>a b t. a \<noteq> b \<longrightarrow> fresh a t \<longrightarrow> fresh a (Lam b t)"}

  To use this premise with @{ML resolve_tac}, we need to instantiate its 
  quantifiers (with \<open>params1\<close>) and transform it into rule 
  format (using @{ML_ind  rulify in Object_Logic}). So we can modify the 
  code as follows:
\<close>

ML %linenosgray\<open>fun apply_prem_tac i preds rules =
  Subgoal.FOCUS (fn {params, prems, context, ...} =>
  let
    val cparams = map snd params
    val (params1, params2) = chop (length cparams - length preds) cparams
    val (prems1, prems2) = chop (length prems - length rules) prems
  in
    resolve_tac  context [Object_Logic.rulify context (all_elims params1 (nth prems2 i))] 1
  end)\<close>

text \<open>
  The argument \<open>i\<close> corresponds to the number of the 
  introduction rule we want to prove. We will later on let it range
  from \<open>0\<close> to the number of \<open>rules - 1\<close>.
  Below we apply this function with \<open>3\<close>, since 
  we are proving the fourth introduction rule. 
\<close>

(*<*)
lemma fresh_Lam:
shows "\<And>a b t. \<lbrakk>a \<noteq> b; fresh a t\<rbrakk> \<Longrightarrow> fresh a (Lam b t)"
apply(tactic \<open>expand_tac @{context} @{thms fresh_def}\<close>)
(*>*)
apply(tactic \<open>apply_prem_tac 3 [fresh_pred] fresh_rules @{context} 1\<close>)
(*<*)oops(*>*)

text \<open>
  The goal state we obtain is: 

  \begin{isabelle}
  \<open>1.\<close>~\<open>\<dots> \<Longrightarrow> \<close>~@{prop "a \<noteq> b"}\\
  \<open>2.\<close>~\<open>\<dots> \<Longrightarrow> \<close>~@{prop "fresh a t"}
  \end{isabelle}

  As expected there are two subgoals, where the first comes from the
  non-recursive premise of the introduction rule and the second comes 
  from the recursive one. The first goal can be solved immediately 
  by \<open>prems1\<close>. The second needs more work. It can be solved 
  with the other premise in \<open>prems1\<close>, namely


  @{term [break,display]
  "\<forall>fresh.
      (\<forall>a b. a \<noteq> b \<longrightarrow> fresh a (Var b)) \<longrightarrow>
      (\<forall>a t s. fresh a t \<longrightarrow> fresh a s \<longrightarrow> fresh a (App t s)) \<longrightarrow>
      (\<forall>a t. fresh a (Lam a t)) \<longrightarrow> 
      (\<forall>a b t. a \<noteq> b \<longrightarrow> fresh a t \<longrightarrow> fresh a (Lam b t)) \<longrightarrow> fresh a t"}

  but we have to instantiate it appropriately. These instantiations
  come from \<open>params2\<close> and \<open>prems2\<close>. We can determine
  whether we are in the simple or complicated case by checking whether
  the topmost connective is an \<open>\<forall>\<close>. The premises in the simple
  case cannot have such a quantification, since the first step 
  of @{ML \<open>expand_tac\<close>} was to ``rulify'' the lemma. 
  The premise of the complicated case must have at least one  \<open>\<forall>\<close>
  coming from the quantification over the \<open>preds\<close>. So 
  we can implement the following function
\<close>

ML %grayML\<open>fun prepare_prem ctxt params2 prems2 prem =  
  resolve_tac ctxt [case Thm.prop_of prem of
           _ $ (Const (@{const_name All}, _) $ _) =>
                 prem |> all_elims params2 
                      |> imp_elims prems2
         | _ => prem]\<close>

text \<open>
  which either applies the premise outright (the default case) or, if 
  it has an outermost universal quantification, instantiates it first 
  with \<open>params2\<close> and then \<open>prems2\<close>. The following 
  tactic will therefore prove the lemma completely.
\<close>

ML %grayML\<open>fun prove_intro_tac i preds rules =
  SUBPROOF (fn {params, prems, context, ...} =>
  let
    val cparams = map snd params
    val (params1, params2) = chop (length cparams - length preds) cparams
    val (prems1, prems2) = chop (length prems - length rules) prems
  in
    resolve_tac context [Object_Logic.rulify context (all_elims params1 (nth prems2 i))] 1
    THEN EVERY1 (map (prepare_prem context params2 prems2) prems1)
  end)\<close>

text \<open>
  Note that the tactic is now @{ML SUBPROOF}, not @{ML FOCUS in Subgoal} anymore. 
  The full proof of the introduction rule is as follows:
\<close>

lemma fresh_Lam:
shows "\<And>a b t. \<lbrakk>a \<noteq> b; fresh a t\<rbrakk> \<Longrightarrow> fresh a (Lam b t)"
apply(tactic \<open>expand_tac @{context} @{thms fresh_def}\<close>)
apply(tactic \<open>prove_intro_tac 3 [fresh_pred] fresh_rules @{context} 1\<close>)
done

text \<open>
  Phew!\ldots  

  Unfortunately, not everything is done yet. If you look closely
  at the general principle outlined for the introduction rules in 
  Section~\ref{sec:nutshell}, you will see that we have not yet dealt with the case where 
  recursive premises have preconditions. The introduction rule
  of the accessible part is such a rule. 
\<close>


lemma accpartI:
shows "\<And>R x. (\<And>y. R y x \<Longrightarrow> accpart R y) \<Longrightarrow> accpart R x"
apply(tactic \<open>expand_tac @{context} @{thms accpart_def}\<close>)
apply(tactic \<open>chop_test_tac [acc_pred] acc_rules @{context} 1\<close>)
apply(tactic \<open>apply_prem_tac 0 [acc_pred] acc_rules @{context} 1\<close>)

txt \<open>
  Here @{ML chop_test_tac} prints out the following
  values for \<open>params1/2\<close> and \<open>prems1/2\<close>

  \begin{isabelle}
  \<open>Params1 from the rule:\<close>\\
  \<open>x\<close>\\
  \<open>Params2 from the predicate:\<close>\\
  \<open>P\<close>\\
  \<open>Prems1 from the rule:\<close>\\
  \<open>R ?y x \<Longrightarrow> \<forall>P. (\<forall>x. (\<forall>y. R y x \<longrightarrow> P y) \<longrightarrow> P x) \<longrightarrow> P ?y\<close>\\
  \<open>Prems2 from the predicate:\<close>\\
  @{term "\<forall>x. (\<forall>y. R y x \<longrightarrow> P y) \<longrightarrow> P x"}\\
  \end{isabelle}

  and after application of the introduction rule 
  using @{ML apply_prem_tac}, we are in the goal state

  \begin{isabelle}
  \<open>1.\<close>~@{term "\<And>y. R y x \<Longrightarrow> P y"}
  \end{isabelle}
  
  
\<close>(*<*)oops(*>*)

text \<open>
  In order to make progress, we have to use the precondition
  \<open>R y x\<close> (in general there can be many of them). The best way
  to get a handle on these preconditions is to open up another subproof,
  since the preconditions will then be bound to \<open>prems\<close>. Therefore we
  modify the function @{ML prepare_prem} as follows
\<close>

ML %linenosgray\<open>fun prepare_prem params2 prems2 ctxt prem =  
  SUBPROOF (fn {prems, ...} =>
  let
    val prem' = prems MRS prem
  in 
    resolve_tac ctxt [case Thm.prop_of prem' of
           _ $ (Const (@{const_name All}, _) $ _) =>
                 prem' |> all_elims params2 
                       |> imp_elims prems2
         | _ => prem'] 1
  end) ctxt\<close>

text \<open>
  In Line 4 we use the \<open>prems\<close> from the @{ML SUBPROOF} and resolve 
  them with \<open>prem\<close>. In the simple cases, that is where the \<open>prem\<close> 
  comes from a non-recursive premise of the rule, \<open>prems\<close> will be 
  just the empty list and the function @{ML_ind MRS in Drule} does nothing. The same happens in the 
  cases where the recursive premises of the rule do not have preconditions. 
  In case there are preconditions, Line 4 discharges them. After
  that we can proceed as before, i.e., check whether the outermost
  connective is \<open>\<forall>\<close>.
  
  The function @{ML prove_intro_tac} only needs to be changed so that it
  gives the context to @{ML prepare_prem} (Line 8). The modified version
  is below.
\<close>

ML %linenosgray\<open>fun prove_intro_tac i preds rules =
  SUBPROOF (fn {params, prems, context, ...} =>
  let
    val cparams = map snd params
    val (params1, params2) = chop (length cparams - length preds) cparams
    val (prems1, prems2) = chop (length prems - length rules) prems
  in
    resolve_tac context [Object_Logic.rulify context (all_elims params1 (nth prems2 i))] 1
    THEN EVERY1 (map (prepare_prem params2 prems2 context) prems1)
  end)\<close>

text \<open>
  With these two functions we can now also prove the introduction
  rule for the accessible part. 
\<close>

lemma accpartI:
shows "\<And>R x. (\<And>y. R y x \<Longrightarrow> accpart R y) \<Longrightarrow> accpart R x"
apply(tactic \<open>expand_tac @{context} @{thms accpart_def}\<close>)
apply(tactic \<open>prove_intro_tac 0 [acc_pred] acc_rules @{context} 1\<close>)
done

text \<open>
  Finally we need two functions that string everything together. The first
  function is the tactic that performs the proofs.
\<close>

ML %linenosgray\<open>fun intro_tac defs rules preds i ctxt =
  EVERY1 [Object_Logic.rulify_tac ctxt,
          rewrite_goal_tac ctxt defs,
          REPEAT o (resolve_tac ctxt [@{thm allI}, @{thm impI}]),
          prove_intro_tac i preds rules ctxt]\<close>

text \<open>
  Lines 2 to 4 in this tactic correspond to the function @{ML expand_tac}. 
  Some testcases for this tactic are:
\<close>

lemma even0_intro:
shows "even 0"
by (tactic \<open>intro_tac eo_defs eo_rules eo_preds 0 @{context}\<close>)

lemma evenS_intro:
shows "\<And>m. odd m \<Longrightarrow> even (Suc m)"
by (tactic \<open>intro_tac eo_defs eo_rules eo_preds 1 @{context}\<close>)

lemma fresh_App:
shows "\<And>a t s. \<lbrakk>fresh a t; fresh a s\<rbrakk> \<Longrightarrow> fresh a (App t s)"
by (tactic \<open>
  intro_tac @{thms fresh_def} fresh_rules [fresh_pred] 1 @{context}\<close>)

text \<open>
  The second function sets up in Line 4 the goals to be proved (this is easy
  for the introduction rules since they are exactly the rules 
  given by the user) and iterates @{ML intro_tac} over all 
  introduction rules.
\<close>

ML %linenosgray\<open>fun intros rules preds defs lthy = 
let
  fun intros_aux (i, goal) =
    Goal.prove lthy [] [] goal
      (fn {context, ...} => intro_tac defs rules preds i context)
in
  map_index intros_aux rules
end\<close>
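text \<open>
  A testcase for this function, again using the even/odd shorthands, is the 
  following; it should return proved copies of the three introduction rules.
\<close>

local_setup %gray \<open>fn lthy =>
let
  val intro_thms = intros eo_rules eo_preds eo_defs lthy
in
  pwriteln (pretty_thms lthy intro_thms); lthy
end\<close>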

text \<open>
  The iteration is done with the function @{ML_ind map_index in Library} since we
  need the introduction rule together with its number (counted from
  \<open>0\<close>). This completes the code for the functions deriving the
  reasoning infrastructure. It remains to implement some administrative
  code that strings everything together.
\<close>

subsection \<open>Administrative Functions\<close>

text \<open>
  We have produced various theorems (definitions, induction principles and
  introduction rules), but apart from the definitions, we have not yet 
  registered them with the theorem database. This is what the function 
  @{ML_ind  note in Local_Theory} does. 

  For convenience, we use the following 
  three wrappers for this function:
\<close>

ML %grayML\<open>fun note_many qname ((name, attrs), thms) = 
  Local_Theory.note ((Binding.qualify false qname name, attrs), thms) 

fun note_single1 qname ((name, attrs), thm) = 
  note_many qname ((name, attrs), [thm]) 

fun note_single2 name attrs (qname, thm) = 
  note_many (Binding.name_of qname) ((name, attrs), [thm])\<close>
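text \<open>
  For example, @{ML note_many} qualifies the given binding with a name such as 
  \<open>even_odd\<close> before storing the theorems. A small testcase (the binding 
  \<open>my_thms\<close> is made up for illustration; we return the unchanged \<open>lthy\<close>) 
  prints the full name under which the theorem would be stored:
\<close>

local_setup %gray \<open>fn lthy =>
let
  val ((name, _), _) = 
    note_many "even_odd" ((@{binding "my_thms"}, []), [@{thm TrueI}]) lthy
in
  writeln name; lthy
end\<close>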

text \<open>
  The function that ``holds everything together'' is \<open>add_inductive\<close>. 
  Its arguments are the specification of the predicates \<open>pred_specs\<close> 
  and the introduction rules \<open>rule_specs\<close>.   
\<close>

ML %linenosgray\<open>fun add_inductive pred_specs rule_specs lthy =
let
  val mxs = map snd pred_specs
  val pred_specs' = map fst pred_specs
  val prednames = map fst pred_specs'
  val preds = map (fn (p, ty) => Free (Binding.name_of p, ty)) pred_specs'
  val tyss = map (binder_types o fastype_of) preds   

  val (namesattrs, rules) = split_list rule_specs    

  val (defs, lthy') = defns rules preds prednames mxs tyss lthy      
  val ind_prins = inds rules defs preds tyss lthy' 	
  val intro_rules = intros rules preds defs lthy'

  val mut_name = space_implode "_" (map Binding.name_of prednames)
  val case_names = map (Binding.name_of o fst) namesattrs
in
  lthy' |> note_many mut_name ((@{binding "intros"}, []), intro_rules) 
        ||>> note_many mut_name ((@{binding "inducts"}, []), ind_prins)
        ||>> fold_map (note_single1 mut_name) (namesattrs ~~ intro_rules)  
        ||>> fold_map (note_single2 @{binding "induct"} 
              [Attrib.internal (K (Rule_Cases.case_names case_names)),
               Attrib.internal (K (Rule_Cases.consumes 1)),
               Attrib.internal (K (Induct.induct_pred ""))]) 
             (prednames ~~ ind_prins) 
        |> snd
end\<close>

text \<open>
  In Line 3 the function extracts the syntax annotations from the predicates. 
  Lines 4 to 6 extract the names of the predicates and generate
  the variable terms (with types) corresponding to the predicates. 
  Line 7 produces the argument types for each predicate. 

  Line 9 extracts the introduction rules from the specifications
  and also stores in \<open>namesattrs\<close> the names and attributes the
  user may have attached to these rules.

  Line 11 produces the definitions and also registers the definitions
  in the local theory \<open>lthy'\<close>. The next two lines produce
  the induction principles and the introduction rules (all of them
  as theorems). Both need the local theory \<open>lthy'\<close> in which
  the definitions have been registered.

  Line 15 produces the name that is used to register the introduction
  rules. It is custom to collect all introduction rules under 
  \<open>string.intros\<close>, whereby \<open>string\<close> stands for the 
  @{text [quotes] "_"}-separated list of predicate names (for example
  \<open>even_odd\<close>). Also by custom, the case names in induction 
  proofs correspond to the names of the introduction rules. These
  are generated in Line 16.

  Lines 19 and 20 now add to \<open>lthy'\<close> all the introduction rules 
  and induction principles under the names \<open>mut_name.intros\<close> and
  \<open>mut_name.inducts\<close>, respectively (see previous paragraph).
  
  Line 21 further adds every introduction rule under its own name
  (given by the user).\footnote{FIXME: what happens if the user did not give
  any name.} Lines 22 to 26 register the induction principles. For this we have
  to use some specific attributes. The first @{ML_ind  case_names in Rule_Cases} 
  corresponds to the case names that are used by Isar to reference the proof
  obligations in the induction. The second @{ML \<open>consumes 1\<close> in Rule_Cases}
  indicates that the first premise of the induction principle (namely
  the predicate over which the induction proceeds) is eliminated. 

  This completes all the code and fits in with the ``front end'' described
  in Section~\ref{sec:interface}.\footnote{FIXME: Describe @{ML Induct.induct_pred}. 
  Why the mut-name? 
  What does @{ML Binding.qualify} do?}
\<close>
(*<*)end(*>*)